Building a Static WordPress


Now that I have Nginx in Front of WordPress, I thought the next logical step was to try and hide my WordPress even more. What exactly would this mean? In my mind, I figured I would restrict access to all of the backend functions of my WordPress site to just my IP addresses. From there, I would simply serve static versions of the content.

Part of the reason I can do this is that my site is mostly static. I don’t allow comments or other dynamic plugins. The site is only used to publish my blog posts and that’s about it. I also set up WordPress to use the /%year%/%monthnum%/%post_id%/ permalink format.

First Step, Mirror the Site to a Private Repo

Just as the heading states, I needed to first get all of my content available outside of WordPress. Luckily, I realized that a few of my previous blog posts could help me accomplish the initial steps. I won’t completely bore you with the details contained in those posts. I’m going to assume that you can get a basic idea of how to set up the private repo using Creating a Private GitHub Repo. You can set up your repo however you like, but for future planning purposes, I decided to create an html directory inside of it to house the website files. My initial repo looked like the following:

 % ls -al
 total 8
 drwxr-xr-x   5 salgatt  staff   160 Dec 31 08:46 .
 drwxr-xr-x  49 salgatt  staff  1568 Jan  7 12:32 ..
 drwxr-xr-x  15 salgatt  staff   480 Jan  7 09:05 .git
 -rw-r--r--   1 salgatt  staff    18 Dec 30 18:57 README.md
 drwxr-xr-x   4 salgatt  staff   128 Jan  5 21:31 html 

With the private repo created, I needed to get all of my content into the repo for later use by Nginx. I just did a wget to pull down only the page content. I did it this way because there are a number of js and css files that are required for the admin pages, and possibly for other “things”, that I might not use right away:

 % cd html
 % wget --mirror --follow-tags=a,img --no-parent https://live-blog.shellnetsecurity.com
 --2021-01-07 16:37:24--  https://live-blog.shellnetsecurity.com/
 Resolving live-blog.shellnetsecurity.com (live-blog.shellnetsecurity.com)... 157.230.75.245
 Connecting to live-blog.shellnetsecurity.com (live-blog.shellnetsecurity.com)|157.230.75.245|:443... connected.
 HTTP request sent, awaiting response... 200 OK
 Length: 17266 (17K) [text/html]
 Saving to: ‘live-blog.shellnetsecurity.com/index.html’
 

 live-blog.shellnetsecurity.com/index.html       100%[=======================================================================================>]  16.86K  --.-KB/s    in 0.09s   
...
 --2021-01-07 16:37:41--  https://live-blog.shellnetsecurity.com/author/salgatt/page/2/
 Connecting to live-blog.shellnetsecurity.com (live-blog.shellnetsecurity.com)|157.230.75.245|:443... connected.
 HTTP request sent, awaiting response... 200 OK
 Length: 41746 (41K) [text/html]
 Saving to: ‘live-blog.shellnetsecurity.com/author/salgatt/page/2/index.html’
 

 live-blog.shellnetsecurity.com/author/salgatt/p 100%[=======================================================================================>]  40.77K  --.-KB/s    in 0.1s    
 

 2021-01-07 16:37:44 (398 KB/s) - ‘live-blog.shellnetsecurity.com/author/salgatt/page/2/index.html’ saved [41746/41746]
 

 FINISHED --2021-01-07 16:37:44--
 Total wall clock time: 19s
 Downloaded: 56 files, 2.7M in 3.4s (821 KB/s) 

My wget command uses --mirror to ummm mirror the site. I pass --follow-tags=a,img so that I only nab the HTML plus images, following only <a> and <img> tags. Finally, I stay within my site and avoid downloading any other sites’ content by issuing --no-parent. With that, I now have a live-blog.shellnetsecurity.com directory in my repo’s html directory.

 % ls -al
 total 0
 drwxr-xr-x   4 salgatt  staff  128 Jan  5 21:31 .
 drwxr-xr-x   5 salgatt  staff  160 Dec 31 08:46 ..
 drwxr-xr-x  18 salgatt  staff  576 Jan  7 08:38 live-blog.shellnetsecurity.com 

Now, I need to get all of my static content into the repo as well. In order to do that, I just did a simple copy of the static files from my container running wordpress using kubectl cp:

 % kubectl cp -n wordpress wordpress-85589d5658-48ncz:/opt/wordpress/wp-content ./live-blog.shellnetsecurity.com/wp-content
 tar: Removing leading `/' from member names
 % kubectl cp -n wordpress wordpress-85589d5658-48ncz:/opt/wordpress/wp-includes ./live-blog.shellnetsecurity.com/wp-includes
 tar: Removing leading `/' from member names 

These copy commands grab ALL files in these two directories. The idea is that I’m grabbing the js and css for any plugins running in my WordPress and any theme-related files. Since these directories contain PHP files and other files I don’t need in my static repo, I remove them with a nice little find command:

 % find live-blog.shellnetsecurity.com/wp-includes -type f -not -name '*.js' -not -name '*.css' -not -name '*.jpg' -not -name '*.png' -delete
 % find live-blog.shellnetsecurity.com/wp-content -type f -not -name '*.js' -not -name '*.css' -not -name '*.jpg' -not -name '*.png' -delete 

At this point, I have a repo with all of the content ready to go. I commit everything and push the changes to main.
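For convenience, here’s the whole manual refresh collected into one rough script. This is a sketch, not gospel: it assumes the repo layout above, kubectl access to the cluster, and the pod name from earlier (yours will differ).

 #!/usr/bin/env bash
 # Sketch: refresh the static mirror in one pass (run from the repo root).
 set -euo pipefail
 
 cd html
 wget --mirror --follow-tags=a,img --no-parent https://live-blog.shellnetsecurity.com
 
 POD=wordpress-85589d5658-48ncz   # replace with your current pod name
 kubectl cp -n wordpress "$POD":/opt/wordpress/wp-content ./live-blog.shellnetsecurity.com/wp-content
 kubectl cp -n wordpress "$POD":/opt/wordpress/wp-includes ./live-blog.shellnetsecurity.com/wp-includes
 
 # Keep only the static assets, same as the find commands above.
 for d in wp-content wp-includes; do
   find "live-blog.shellnetsecurity.com/$d" -type f \
     -not -name '*.js' -not -name '*.css' -not -name '*.jpg' -not -name '*.png' -delete
 done
 
 cd ..
 git add -A && git commit -m "Refresh static mirror" && git push origin main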

Serve the Static Repo

Like I said before, I’m not going to clutter this post with the details that can be found in Building a Kubernetes Container That Synchs with Private Git Repo. Assuming you have this all ready to go, I’m going to cut straight to the configuration portion. I’m assuming the nginx container mounts the private repo at /dir/wordpress_static. I’m also going to build upon the nginx configMap that was created in Adding Nginx in Front of WordPress. First, I change the root directory to be the static WordPress blog:

         root /dir/wordpress_static/html/live-blog.shellnetsecurity.com; 

I also need to change some of my original reverse proxy mappings to serve most content from the static mirror but still let a few requests go to my WordPress:

         location /status {
                 return 200 "healthy\n";
         }
 
         location / {
                 try_files $uri $uri/ /index.html;
         }
 
         location /sitemap {
                 proxy_pass https://wordpress;
                 proxy_ssl_verify off;
                 proxy_set_header Host live-blog.shellnetsecurity.com;
                 proxy_set_header X-Forwarded-For $remote_addr;
         }
 
         location /wp-sitemap {
                 proxy_pass https://wordpress;
                 proxy_ssl_verify off;
                 proxy_set_header Host live-blog.shellnetsecurity.com;
                 proxy_set_header X-Forwarded-For $remote_addr;
         }
 
         location /wp-json {
                 allow 1.1.1.1;
                 allow 2.2.2.2;
                 deny all;
                 proxy_pass https://wordpress;
                 proxy_ssl_verify off;
                 proxy_set_header Host live-blog.shellnetsecurity.com;
                 proxy_set_header X-Forwarded-For $remote_addr;
         }
 
         location /wp-login {
                 allow 1.1.1.1;
                 allow 2.2.2.2;
                 deny all;
                 proxy_pass https://wordpress; 
                 proxy_ssl_verify off;
                 proxy_set_header Host live-blog.shellnetsecurity.com;
                 proxy_set_header X-Forwarded-For $remote_addr;
         }
 
         location /admin {
                 allow 1.1.1.1;
                 allow 2.2.2.2;
                 deny all;
                 proxy_pass https://wordpress;
                 proxy_ssl_verify off;
                 proxy_set_header Host live-blog.shellnetsecurity.com;
                 proxy_set_header X-Forwarded-For $remote_addr;
         }
 
         location /wp-admin {
                 allow 1.1.1.1;
                 allow 2.2.2.2;
                 deny all;
                 proxy_pass https://wordpress;
                 proxy_ssl_verify off;
                 proxy_set_header Host live-blog.shellnetsecurity.com;
                 proxy_set_header X-Forwarded-For $remote_addr;
         }

Through some trial and error, I found that I needed to have all of the following paths allowed for my admin functionalities:

  • /wp-admin
  • /admin
  • /wp-login
  • /wp-json

Since these are required for admin functions, I run my IP restrictions on them and only allow my addresses to access them. For now, I am managing my sitemaps from within WordPress, so I still let requests from any client go directly to my WordPress server (something I’ll correct in a future post when I talk about automation). Aside from these exceptions, I’m using try_files to find the other content. This means that requests for any other content are served out of the root directive, aka /dir/wordpress_static/html/live-blog.shellnetsecurity.com, aka the private repo! Notice the trailing /index.html on the directive? That just means that I’ll serve /index.html whenever the page isn’t found.
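To make the resolution order concrete, here’s how a hypothetical post permalink walks through that try_files line:

 # GET /2021/01/93/  (a hypothetical post permalink)
 # 1. $uri   -> /dir/wordpress_static/html/live-blog.shellnetsecurity.com/2021/01/93  (not a file)
 # 2. $uri/  -> .../2021/01/93/, served as .../2021/01/93/index.html via the index directive
 # 3. neither exists? fall back to serving /index.html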

With that, I am now serving the mirrored content straight from the private repo. I can still manage my WordPress site like I normally do from the backend, generate content, and make changes, and life is mostly good.

I am an idiot

Yes, you don’t need to tell me this! I know there are some obvious flaws in what I’ve set up, like:

  • What happens when I post a new article?!
  • What do I do when WordPress is upgraded?
  • What happens when a plugin is upgraded?
  • Do you know that doing a wget for just pages won’t download pretty little images?
  • Did you know that serving /index.html for css/jpg/png/js files is ugly?
  • This manual process is terrible!

I know! I have already begun to tackle these, and I’ll have more details when I write my Automating Static WordPress Updates post (currently in draft). As a sneak peek: there’s a really cool WordPress plugin that will send various notifications to Slack. Oh the fun that we will have when talking about using Slack as a message bus and writing an app and and …. ok, I’ll contain my excitement for now!

[Survey] What are Important Features for a Blog?


When building a blog, it can be overwhelming to know which features are important to enable. Your blog software of choice can offer all kinds of features; some could overwhelm you, while others could overwhelm your visitors. Given all of the options available, I thought it would be interesting to gather the personal preferences of those who read blogs.

Below is a survey hosted by SurveyLegend that is aimed at gathering some of these details from you, the reader. Please participate in this short survey by providing your personal preferences. I know some of these features are important for SEO rankings, but I’m more concerned about what the reader thinks, not the computer.

Kubernetes Upgrades Break My DigitalOcean LoadBalancer


Disclosure: I have included some affiliate / referral links in this post. There’s no cost to you for accessing these links but I do indeed receive some incentive for it if you buy through them.

I’ve talked in previous posts about my overall enjoyment, thus far, of running in DigitalOcean. While I had tinkered with a number of other cloud providers, I settled on them for many things. I do still run in some other providers like OVHCloud (maybe more on my project there another day). Despite my love for DigitalOcean, I do have one complaint regarding their Kubernetes and their LoadBalancer.

The Problem: I’m Cheap

I guess thrifty sounds so much better, but I’m cheap. It’s a fact. I have in fact created my own problem with DigitalOcean due to my cheapness. They do have a number of excellent integration points between their Kubernetes and other components such as storage and load balancers. I can issue Kubernetes commands to create a new LoadBalancer or PVC and boom, life is good. My problem is that LoadBalancers cost money. To date, I have only been able to figure out a 1:1 mapping between LoadBalancers and Kubernetes Services, which means each exposed application wants its own paid LoadBalancer.
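For context, a Service manifest like the following is what conjures up each billed LoadBalancer. This is a minimal sketch with placeholder names, not my actual config:

apiVersion: v1
kind: Service
metadata:
  name: my-app            # placeholder name
spec:
  type: LoadBalancer      # each Service of this type provisions its own DigitalOcean load balancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080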

If I only ever intend to expose a single application to the world, this is great! This is not me. I run a number of different applications that I want to expose. That means I need to pay for a LoadBalancer for each application... or do I? Here come the Forwarding Rules! Each LoadBalancer can be configured with a number of forwarding rules (you can see what these look like in the API output below).

With these rules in place, I’m able to expose multiple ports/applications on the same load balancer. This is wonderful, except upgrades to the Kubernetes cluster like to blow away my custom settings such as:

  • Forwarding Rules
  • SSL Redirects
  • Proxy Protocol
  • Backend Keepalive

For the longest time, I had to come back in and reconfigure everything every time I did a Kubernetes cluster upgrade. Worse yet, I didn’t know things got blown away whenever the cluster upgraded automatically. I had set up port/application monitors to alert me when things went down so I could manually reconfigure them.

The Solution: DigitalOcean API

While the manual fix has always been a waste of time and has sometimes prevented me from upgrading Kubernetes (bad security d00d), I still did the upgrades and manually fixed things. I never really learned a “new thing” to try and get this fixed in a less manual manner. Today was the day I changed all of that. I’m sure there’s some other way that I haven’t thought of yet, but we’re going with baby steps. Instead of taking manual screenshots of the LoadBalancer configuration page and then trying to manually change the settings back to what I thought they were, I am now using the API. DigitalOcean’s API documentation is a good general reference for everything below.

Setting Up API Access

The first step in getting all of this working is getting an API token. There’s no sense reinventing the wheel when DigitalOcean already has something written up very well such as How to Create a Personal Access Token. This walks you through creating the token to be used with the API. It is very important to make sure you create a token that has write access.
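Before moving on, it doesn’t hurt to confirm the token works. The examples below keep the literal your_api_token_here placeholder, but stashing the token in an environment variable is a nice habit (the /v2/account call is just a cheap smoke test):

 % export DO_TOKEN='your_api_token_here'
 % curl -s -H "Authorization: Bearer $DO_TOKEN" https://api.digitalocean.com/v2/account | jq .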

Get Your Existing LoadBalancer Configuration

With token in hand, it is time to get a copy of your existing configuration with a curl command. First, you need to know the ID of your load balancer, so we just query for all load balancers:

% curl -X GET -H "Content-Type: application/json" -H "Authorization: Bearer your_api_token_here" https://api.digitalocean.com/v2/load_balancers|jq .   
 {
   "load_balancers": [
     {
       "id": "ffff-ffff-ffff-ffff-b75c",
       "name": "my-lb-01",
       "size": "lb-small",
       "algorithm": "round_robin",
       "status": "active",
       "created_at": "2019-10-25T19:56:00Z",
       "forwarding_rules": [
         {
           "entry_protocol": "tcp",
           "entry_port": 80,
           "target_protocol": "tcp",
           "target_port": 31640,
           "certificate_id": "",
           "tls_passthrough": false
         },
         {
           "entry_protocol": "tcp",
           "entry_port": 4514,
           "target_protocol": "tcp",
           "target_port": 31643,
           "certificate_id": "",
           "tls_passthrough": false
         }
       ],
       "region": {
         "name": "San Francisco 2",
         "slug": "sfo2"
       },
       "tag": "",
       "droplet_ids": [
         111,
         222,
         333
       ],
       "redirect_http_to_https": false,
       "enable_proxy_protocol": false,
       "enable_backend_keepalive": false,
     },
     {
       "id": "ffff-ffff-ffff-ffff-72a4",
       "name": "my-lb-02",
       "size": "lb-small",
       "algorithm": "round_robin",
       "status": "active",
       "created_at": "2020-12-02T07:54:13Z",
       "forwarding_rules": [
         {
           "entry_protocol": "https",
           "entry_port": 443,
           "target_protocol": "http",
           "target_port": 31645,
           "certificate_id": "aaaa-aaaa-aaaa-aaaa-bcf8",
           "tls_passthrough": false
         },
         {
           "entry_protocol": "tcp",
           "entry_port": 80,
           "target_protocol": "tcp",
           "target_port": 31645,
           "certificate_id": "",
           "tls_passthrough": false
         }
       ],
       "region": {
         "name": "San Francisco 2",
         "slug": "sfo2"
       },
       "tag": "",
       "droplet_ids": [
         111,
         222,
         333
       ],
       "redirect_http_to_https": true,
       "enable_proxy_protocol": true,
       "enable_backend_keepalive": true,
     }
   ],
   "links": {},
   "meta": {
     "total": 2
   }
 }

There are two load balancers in this example, my-lb-01 and my-lb-02. While my-lb-01 was my original load balancer that gave me the most trouble, I’m going to focus on my-lb-02 since it has more customizations beyond just the forwarding rules.

We first need to identify the configuration that we’d like to save. Then, we’ll save this configuration into its own JSON file; let’s call it my-lb-02.json. Notice in the above JSON that the configurations are housed within a “load_balancers” array? In order to create our my-lb-02.json file, we simply pull the single JSON element from the array like this:

     {
       "id": "ffff-ffff-ffff-ffff-72a4",
       "name": "my-lb-02",
       "size": "lb-small",
       "algorithm": "round_robin",
       "status": "active",
       "created_at": "2020-12-02T07:54:13Z",
       "forwarding_rules": [
         {
           "entry_protocol": "https",
           "entry_port": 443,
           "target_protocol": "http",
           "target_port": 31645,
           "certificate_id": "aaaa-aaaa-aaaa-aaaa-bcf8",
           "tls_passthrough": false
         },
         {
           "entry_protocol": "tcp",
           "entry_port": 80,
           "target_protocol": "tcp",
           "target_port": 31645,
           "certificate_id": "",
           "tls_passthrough": false
         }
       ],
       "region": {
         "name": "San Francisco 2",
         "slug": "sfo2"
       },
       "tag": "",
       "droplet_ids": [
         111,
         222,
         333
       ],
       "redirect_http_to_https": true,
       "enable_proxy_protocol": true,
       "enable_backend_keepalive": true,
     }
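Rather than copying by hand, jq can pluck that element out for us. A quick sketch, assuming the full API output was saved to a hypothetical lbs.json:

 % curl -s -H "Authorization: Bearer your_api_token_here" https://api.digitalocean.com/v2/load_balancers > lbs.json
 % jq '.load_balancers[] | select(.name == "my-lb-02")' lbs.json > my-lb-02.json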

We need to remove a few useless items from that JSON, so remove the following:

  • status
  • name
  • size
  • created_at
  • region (do remember the “slug” entry as we’ll need this to recreate the region)

As noted above, we also need to remove the existing region entry and instead replace it with the value of “slug”, aka “sfo2” in this example. With those changes made, here’s our new JSON:

     {
       "id": "ffff-ffff-ffff-ffff-72a4",
       "algorithm": "round_robin",
       "forwarding_rules": [
         {
           "entry_protocol": "https",
           "entry_port": 443,
           "target_protocol": "http",
           "target_port": 31645,
           "certificate_id": "aaaa-aaaa-aaaa-aaaa-bcf8",
           "tls_passthrough": false
         },
         {
           "entry_protocol": "tcp",
           "entry_port": 80,
           "target_protocol": "tcp",
           "target_port": 31645,
           "certificate_id": "",
           "tls_passthrough": false
         }
       ],
       "region": "sfo2",
       "tag": "",
       "droplet_ids": [
         111,
         222,
         333
       ],
       "redirect_http_to_https": true,
       "enable_proxy_protocol": true,
       "enable_backend_keepalive": true,
     }
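Those hand edits can also be expressed as a single jq filter if you prefer. A sketch (my-lb-02.tmp is just a hypothetical scratch file):

 % jq 'del(.status, .name, .size, .created_at) | .region = .region.slug' \
     my-lb-02.json > my-lb-02.tmp && mv my-lb-02.tmp my-lb-02.json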

How Do I Unbreak Things in the Future?

I’m glad you asked! Now that you have your my-lb-02.json file ready to go, you can simply wait for the next upgrade of your Kubernetes cluster to break everything again; when mine did, the DigitalOcean control panel showed my-lb-02 with all of its custom settings gone.

There’s one little catch to fixing everything: you’ll need to first get the IDs of the new cluster nodes in order to be able to add them to the load balancer. Whenever the cluster is upgraded, DigitalOcean deletes the old versioned nodes and adds in new versioned ones. You can grab the IDs by doing a GET on the load balancer’s configuration:

 % curl -X GET -H "Content-Type: application/json" -H "Authorization: Bearer your_api_token_here" https://api.digitalocean.com/v2/load_balancers/ffff-ffff-ffff-ffff-72a4|jq .load_balancer.droplet_ids
 [
   123,
   456,
   789
 ] 

In order to make my life easier, I piped my results through jq and told it to only bring back the JSON path I cared about, load_balancer.droplet_ids. Now we see that the droplets have changed from our original list of 111, 222, 333 to 123, 456, 789. We need to make this change to our JSON:

     {
       "id": "ffff-ffff-ffff-ffff-72a4",
       "algorithm": "round_robin",
       "forwarding_rules": [
         {
           "entry_protocol": "https",
           "entry_port": 443,
           "target_protocol": "http",
           "target_port": 31645,
           "certificate_id": "aaaa-aaaa-aaaa-aaaa-bcf8",
           "tls_passthrough": false
         },
         {
           "entry_protocol": "tcp",
           "entry_port": 80,
           "target_protocol": "tcp",
           "target_port": 31645,
           "certificate_id": "",
           "tls_passthrough": false
         }
       ],
       "region": "sfo2",
       "tag": "",
       "droplet_ids": [
         123,
         456,
         789
       ],
       "redirect_http_to_https": true,
       "enable_proxy_protocol": true,
       "enable_backend_keepalive": true,
     }
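Again, jq can do the splicing if you’d rather not edit by hand; a sketch using the same hypothetical scratch file:

 % ids=$(curl -s -H "Authorization: Bearer your_api_token_here" \
     https://api.digitalocean.com/v2/load_balancers/ffff-ffff-ffff-ffff-72a4 \
     | jq '.load_balancer.droplet_ids')
 % jq --argjson ids "$ids" '.droplet_ids = $ids' \
     my-lb-02.json > my-lb-02.tmp && mv my-lb-02.tmp my-lb-02.json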

With the JSON updated, we now issue a PUT command to the load balancer API for the specific load balancer like so:

 % curl -X PUT -H "Content-Type: application/json" -H "Authorization: Bearer your_api_token_here" https://api.digitalocean.com/v2/load_balancers/ffff-ffff-ffff-ffff-72a4 -d @my-lb-02.json

Now we can go look at the control panel again and confirm everything is back to normal!

Everything Works Great!

After an upgrade runs, I can simply come back through with a few minor steps and put everything back the way it should be. Yes, there are still some manual aspects to this, and automating all of it shouldn’t be too terrible, but I’ll save that for another time when I decide that this manual process is just too much. Although, it took me nearly a year to hate the original manual process….
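As a teaser for that future automation, here’s a rough sketch of the whole fix in one script. Assumptions: a saved my-lb-02.json, jq installed, the token in a DO_TOKEN environment variable, and the example load balancer ID from above.

 #!/usr/bin/env bash
 # Sketch: restore saved load balancer settings after an upgrade swaps the droplets.
 set -euo pipefail
 LB_ID="ffff-ffff-ffff-ffff-72a4"
 API="https://api.digitalocean.com/v2/load_balancers/$LB_ID"
 AUTH="Authorization: Bearer $DO_TOKEN"
 
 # Grab the current (post-upgrade) droplet IDs...
 ids=$(curl -s -H "$AUTH" "$API" | jq '.load_balancer.droplet_ids')
 
 # ...splice them into the saved config and PUT it all back.
 jq --argjson ids "$ids" '.droplet_ids = $ids' my-lb-02.json \
   | curl -s -X PUT -H "$AUTH" -H "Content-Type: application/json" "$API" -d @-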

Adding Nginx in Front of WordPress


The future is here! In my previous article, Testing Out the Digital Ocean Container Registry, I talked about using the Digital Ocean Container Registry to build a custom nginx. In that article, I talked about the future, aka a future, aka this post. When I moved to WordPress, I did so using Digital Ocean’s 1-Click install to drop WordPress into my Kubernetes cluster. This was the easy way to go for sure. I already run Kubernetes so deploying it to an existing cluster made life easier on me. Who doesn’t love it when life is made easier?

There are a few drawbacks to the 1-Click install. I’m planning to tinker with something really cool down the road to fix one of those problems (I know, the future again). Luckily, I’m going to address my first concern in this post. What is that concern, you ask? Protecting my WordPress admin, of course! Sure, there are a number of WordPress vulnerabilities roaming around and talk of zero days and the sort. I make life easier on any attacker if I just leave my WordPress admin open to anyone. In this post, we look at taking my custom nginx and deploying it in front of my WordPress site to enforce IP access control on the admin pages.

Setting Up the Container Registry for Kubernetes

In my Testing Out the Digital Ocean Container Registry post, I explained how to get a custom nginx into the Container Registry. In order to use that container and registry with my cluster, I had to enable DigitalOcean Kubernetes integration in the settings of the registry. You can do the same through the control panel (or see the doctl alternative after the list):

  1. Login to your DigitalOcean account
  2. Go to the Container Registry link
  3. Click on the Settings tab of the Container Registry
  4. Click the Edit button next to DigitalOcean Kubernetes Integration
  5. Place a check mark next to the Kubernetes clusters that you want to have access to this registry (Note, if you have multiple namespaces, this action will add access for all namespaces).
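If you prefer the command line, doctl can generate the same registry credentials as a Kubernetes secret; a sketch, assuming doctl is already authenticated (note that the generated secret’s name and namespace may differ from what the control panel integration creates):

 % doctl registry kubernetes-manifest | kubectl apply -f -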

Once these steps are complete, you can confirm access by looking for a new secret in your cluster:

# kubectl get secrets
NAME                   TYPE                                  DATA   AGE
default-token          kubernetes.io/service-account-token   3      423d
json-key               kubernetes.io/dockerconfigjson        1      396d
k8-registry            kubernetes.io/dockerconfigjson        1      18d
key-secret             Opaque                                2      419d

Notice the k8-registry secret that I now have in my secrets list? You can see that it exists in my wordpress namespace as well:

# kubectl get secrets -n wordpress
NAME                  TYPE                                  DATA   AGE
default-token         kubernetes.io/service-account-token   3      18d
k8-registry           kubernetes.io/dockerconfigjson        1      18d
wp                    Opaque                                1      18d
wp-db                 Opaque                                2      18d

Adding Nginx to the Cluster

This should be super easy! I start by first creating a ConfigMap that stores my Nginx configuration:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
  namespace: wordpress
  labels:
    app: nginx
    release: wordpress
data:
  siteConfig: |
    server {
        listen 8080 default_server;
        listen [::]:8080 default_server;

        root /var/www/html;

        index index.html index.htm index.nginx-debian.html;

        server_name _;

        location /status {
                return 200 "healthy\n";
        }

        location / {
                proxy_pass https://wordpress;
                proxy_set_header Host live-blog.shellnetsecurity.com;
                proxy_set_header X-Forwarded-For $remote_addr;
        }

        location /admin {
                allow 1.1.1.1;
                allow 2.2.2.2;
                deny all;
                proxy_pass https://wordpress;
                proxy_set_header Host live-blog.shellnetsecurity.com;
                proxy_set_header X-Forwarded-For $remote_addr;
        }

        location /wp-admin {
                allow 1.1.1.1;
                allow 2.2.2.2;
                deny all;
                proxy_pass https://wordpress;
                proxy_set_header Host live-blog.shellnetsecurity.com;
                proxy_set_header X-Forwarded-For $remote_addr;
        }
    }

  serverConfig: |
    user www-data;
    worker_processes auto;
    pid /run/nginx.pid;
    include /etc/nginx/modules-enabled/*.conf;

    events {
        worker_connections 768;
    }

    http {
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 65;
        types_hash_max_size 2048;
        include /usr/local/nginx/conf/mime.types;
        default_type application/octet-stream;
        access_log /dev/stdout;
        error_log /dev/stdout;
        gzip on;
        
        include /etc/nginx/sites-enabled/*;
    }

    daemon off;

I mostly added a set of standard nginx configurations. If you look at the serverConfig closely, you’ll notice that I’ve directed the access_log and error_log to /dev/stdout. This is so all of the logs are written to stdout (duh). This also allows me to run kubectl logs -f on the created pod and see the access and error logs.

Nginx is going to be acting as a reverse proxy, so I took a relatively standard default sites-available configuration and added a few new location blocks. The /status block is simply for me to perform health checks on the running nginx instance. The other blocks are proxy_pass statements to send requests to the “wordpress” pod that was installed by the 1-Click install. I’m also making sure that I send over the Host header with live-blog.shellnetsecurity.com. If I don’t do this, the 1-Click install will build funky URLs that don’t work. Luckily, it will read the Host header and build links based upon that, so I force the header to be what I want with this statement.

Finally, you’ll see my allow statements for 1.1.1.1 and 2.2.2.2 (not really my IPs but let’s play make believe). These are followed by deny all. This should make it so that only my 1.1.1.1 and 2.2.2.2 addresses are allowed to /admin and /wp-admin. All others will be denied.

Next, I create a Deployment yaml that tells Kubernetes what containers to build and how to use my configMap:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: wordpress
  labels:
    app: nginx
    release: wordpress
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
      release: wordpress
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: nginx
        release: wordpress
    spec:
      volumes:
      - name: siteconfig
        configMap:
          name: nginx-config
          items:
          - key: siteConfig
            path: default
      - name: serverconfig
        configMap:
          name: nginx-config
          items:
          - key: serverConfig
            path: nginx.conf
      imagePullSecrets:
      - name: k8-registry
      containers:
      - name: nginx
        image: registry.digitalocean.com/k8-registry/c-core-nginx:1.1
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: siteconfig
          mountPath: /etc/nginx/sites-enabled/default
          subPath: default
        - name: serverconfig
          mountPath: /usr/local/nginx/conf/nginx.conf
          subPath: nginx.conf

Take note of the imagePullSecrets and image entries above. I am using the imagePullSecrets configuration to tell Kubernetes that it will need credentials to access the container registry where my image sits, and I am pointing it at the k8-registry credentials that were added by the DigitalOcean Kubernetes Integration change we made earlier. Finally, I am providing the full path, version tag included, to the custom image I am hosting in the DigitalOcean registry with the image statement pointing to registry.digitalocean.com/k8-registry/c-core-nginx:1.1.

Next up, I need to add a NodePort Service that I can point the load balancer at to send traffic over:

apiVersion: v1
kind: Service
metadata:
  namespace: wordpress
  name: nginx
  labels:
    app: nginx
    release: wordpress
spec:
  selector:
    app: nginx
    release: wordpress
  type: NodePort
  ports:
    - port: 8080
      nodePort: 31645

So I do a little kubectl apply -f on those yaml files I just created. Everything comes up. The next step is to set up the load balancer to forward traffic over. Since I have the nodePort configured as 31645, I just need to tell the load balancer to send the traffic I want to that port. I don’t want to mess with the existing setup, so I decide to simply forward http port 8443 over to http port 31645.
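For reference, the apply-and-verify step looks something like this (the file names are hypothetical; use whatever you saved the yaml as):

 % kubectl apply -f nginx-configmap.yaml -f nginx-deployment.yaml -f nginx-service.yaml
 % kubectl rollout status deployment/nginx -n wordpress
 % kubectl get svc nginx -n wordpress    # confirm the 31645 nodePort is exposed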

Everything should be all set, so let’s open a browser and test.

I am getting blocked! That’s what I’d expect for strangers, but I’m coming from my 2.2.2.2 address. What could be the issue? Good thing I told the logs to be sent to stdout, so let’s check them for 403s:

kubectl logs -f -n wordpress nginx-9cdf87f68-tss6x|grep 403
...
blog.shellnetsecurity.com - - [20/Dec/2020:13:37:33 +0000] "GET /admin HTTP/1.1" 403 187 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 11_1_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36"
...

I see the problem! That is not my 2.2.2.2 address! That was my request though. It seems that I’m not getting the real IP of the client but instead internal IPs from the load balancer.

Enter the PROXY Protocol

For access control, I didn’t want to rely on the X-Forwarded-For header since it is something that comes from the client, meaning someone could spoof the header to get around my control. In addition, the DigitalOcean load balancer does not send this header, so it’s a moot point. DigitalOcean does provide the PROXY Protocol in its load balancers, but not by default. The short explanation is that this protocol will send in the IP like I want, but it requires some configuration. It’s also all or nothing: either the PROXY protocol is enabled or it isn’t, with no mixing and matching.

Enabling the PROXY Protocol on the load balancer was easy. You simply enable it in the Settings of the load balancer.

It is very important NOT to enable this until Nginx is configured. Otherwise, the site will go down. I explain my specific configuration below, but you are also welcome to explore the Nginx documentation on the PROXY Protocol.

Configuring Nginx

In my Testing Out the Digital Ocean Container Registry article, I built nginx with the PROXY protocol capability by enabling the ngx_http_realip_module. It was like I wrote that previous article after getting this all working….? With the module already enabled, it was pretty easy to simply update the configuration and go. I added the following line to my server block:

        set_real_ip_from blog.shellnetsecurity.com/24;
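For completeness: in a stock nginx build, the realip module only reads the PROXY protocol header if you also enable it on the listen directive and point real_ip_header at it. A minimal sketch (the address range is a placeholder, not my actual config):

        listen 8080 proxy_protocol;      # accept the PROXY protocol header on this port
        set_real_ip_from 10.0.0.0/16;    # placeholder: the range you trust to send it
        real_ip_header proxy_protocol;   # take the client IP from the PROXY header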

Just like that, I was good to go... or so I thought. I was now getting denied only sometimes. I checked the logs again to find out why:

kubectl logs -f -n wordpress nginx-9cdf87f68-tss6x
...
blog.shellnetsecurity.com - - [20/Dec/2020:13:37:33 +0000] "GET /admin HTTP/1.1" 403 187 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 11_1_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36"
2.2.2.2 - - [20/Dec/2020:18:35:49 +0000] "POST /admin HTTP/1.1" 200 98 "https://live-blog.shellnetsecurity.com/wp-admin/post.php?post=93&action=edit" "Mozilla/5.0 (Macintosh; Intel Mac OS X 11_1_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36"
3.3.3.3 - - [20/Dec/2020:13:37:33 +0000] "GET /admin HTTP/1.1" 403 187 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 11_1_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36"
...

Let’s chat about the set_real_ip_from statement. We need to add this statement for every address range that we trust to provide us with the real client IP. In my case, it turned out that blog.shellnetsecurity.com/24 was not a large enough block for the internal IP addresses, so I needed to change that to a /16. Also, notice the 3.3.3.3 address? That’s the external IP of one of the nodes in the kubernetes cluster. Armed with that knowledge, I expanded my server block to include multiple set_real_ip_from statements:

        set_real_ip_from blog.shellnetsecurity.com/16;
        set_real_ip_from 3.3.3.3;
        set_real_ip_from 4.4.4.4;
        set_real_ip_from 5.5.5.5;

I reloaded everything and tested again: success every time! I got denied when I wasn’t on my 1.1.1.1 or 2.2.2.2 address, and I saw others getting denied as well. When I’m sitting on 1.1.1.1 or 2.2.2.2, I’m able to get into my WordPress admin!