Adding Nginx in Front of WordPress

Photo by Lenharth Systems from StockSnap

The future is here! In my previous article, Testing Out the Digital Ocean Container Registry, I talked about using the Digital Ocean Container Registry to host a custom nginx build. In that article, I talked about the future, aka a future, aka this post. When I moved to WordPress, I did so using Digital Ocean’s 1-Click install to drop WordPress into my Kubernetes cluster. This was definitely the easy way to go. I already run Kubernetes, so deploying to an existing cluster made life easier on me. Who doesn’t love it when life is made easier?

There are a few drawbacks to the 1-Click install. I’m planning to tinker with something really cool down the road to fix one of those problems (I know, the future again). Luckily, I’m going to address my initial concern in this post. What is that concern, you ask? Protecting my WordPress admin, of course! Sure, there are a number of WordPress vulnerabilities roaming around, along with talk of zero days and the like, but I make life easier on any attacker if I just leave my WordPress admin open to anyone. In this post, we look at taking my custom nginx and deploying it in front of my WordPress site to enforce IP-based access control on the admin pages.

Setting Up the Container Registry for Kubernetes

In my Testing Out the Digital Ocean Container Registry article, I explained how to get a custom nginx image into the Container Registry. In order to use that image and registry with my cluster, I had to enable DigitalOcean Kubernetes integration in the registry’s settings. You can do the same with the following steps (a doctl-based alternative is sketched after the list):

  1. Log in to your DigitalOcean account
  2. Go to the Container Registry link
  3. Click on the Settings tab of the Container Registry
  4. Click the Edit button next to DigitalOcean Kubernetes Integration
  5. Place a check mark next to the Kubernetes clusters that you want to have access to this registry (Note, if you have multiple namespaces, this action will add access for all namespaces).
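
If you prefer the command line, doctl can create an equivalent pull secret. This is a rough sketch of that CLI route rather than what I actually did, and it assumes doctl is installed and already authenticated against your account; note that the secret it creates may also be named differently from the one the UI integration adds:

# Generate a dockerconfigjson pull-secret manifest for your registry and apply it.
# It lands in whatever namespace kubectl is currently pointed at, so repeat
# (or copy the secret) for other namespaces such as wordpress.
doctl registry kubernetes-manifest | kubectl apply -f -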

Once these steps are complete, you can confirm access by looking for a new secret in your cluster:

# kubectl get secrets
NAME                   TYPE                                  DATA   AGE
default-token          kubernetes.io/service-account-token   3      423d
json-key               kubernetes.io/dockerconfigjson        1      396d
k8-registry            kubernetes.io/dockerconfigjson        1      18d
key-secret             Opaque                                2      419d

Notice the k8-registry secret that now shows up in my secrets list? You can see that it exists in my wordpress namespace as well:

# kubectl get secrets -n wordpress
NAME                  TYPE                                  DATA   AGE
default-token         kubernetes.io/service-account-token   3      18d
k8-registry           kubernetes.io/dockerconfigjson        1      18d
wp                    Opaque                                1      18d
wp-db                 Opaque                                2      18d

Adding Nginx to the Cluster

This should be super easy! I start by creating a ConfigMap that stores my Nginx configuration:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
  namespace: wordpress
  labels:
    app: nginx
    release: wordpress
data:
  siteConfig: |
    server {
        listen 8080 default_server;
        listen [::]:8080 default_server;

        root /var/www/html;

        index index.html index.htm index.nginx-debian.html;

        server_name _;

        location /status {
                return 200 "healthy\n";
        }

        location / {
                proxy_pass http://wordpress;
                proxy_set_header Host blog.shellnetsecurity.com;
                proxy_set_header X-Forwarded-For $remote_addr;
        }

        location /admin {
                allow 1.1.1.1;
                allow 2.2.2.2;
                deny all;
                proxy_pass http://wordpress;
                proxy_set_header Host blog.shellnetsecurity.com;
                proxy_set_header X-Forwarded-For $remote_addr;
        }

        location /wp-admin {
                allow 1.1.1.1;
                allow 2.2.2.2;
                deny all;
                proxy_pass http://wordpress;
                proxy_set_header Host blog.shellnetsecurity.com;
                proxy_set_header X-Forwarded-For $remote_addr;
        }
    }

  serverConfig: |
    user www-data;
    worker_processes auto;
    pid /run/nginx.pid;
    include /etc/nginx/modules-enabled/*.conf;

    events {
        worker_connections 768;
    }

    http {
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 65;
        types_hash_max_size 2048;
        include /usr/local/nginx/conf/mime.types;
        default_type application/octet-stream;
        access_log /dev/stdout;
        error_log /dev/stdout;
        gzip on;
        
        include /etc/nginx/sites-enabled/*;
    }

    daemon off;

I mostly added a set of standard nginx configurations. If you look at the serverConfig closely, you’ll notice that I’ve directed the access_log and error_log to /dev/stdout. This is so all of the logs are written to stdout (duh). This also allows me to run kubectl logs -f on the created pod and see the access and error logs.

Nginx is going to act as a reverse proxy, so I took a relatively standard default sites-available configuration and added a few new location blocks. The /status block is simply there so I can perform health checks on the running nginx instance. The other blocks use proxy_pass statements to send requests to the “wordpress” pod that was installed by the 1-Click install. I’m also making sure that I send over the Host header with blog.shellnetsecurity.com. If I don’t do this, the 1-Click install will build funky URLs that don’t work. Luckily, it will read the Host header and build links based upon that, so I force the Host header to be what I want with the proxy_set_header statement.

Finally, you’ll see my allow statements for 1.1.1.1 and 2.2.2.2 (not really my IPs, but let’s play make believe). These are followed by deny all. This should make it so that only my 1.1.1.1 and 2.2.2.2 addresses are allowed to reach /admin and /wp-admin; all others will be denied.

Next, I create a Deployment yaml that tells Kubernetes what containers to run and how to use my configMap:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: wordpress
  labels:
    app: nginx
    release: wordpress
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
      release: wordpress
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: nginx
        release: wordpress
    spec:
      volumes:
      - name: siteconfig
        configMap:
          name: nginx-config
          items:
          - key: siteConfig
            path: default
      - name: serverconfig
        configMap:
          name: nginx-config
          items:
          - key: serverConfig
            path: nginx.conf
      imagePullSecrets:
      - name: k8-registry
      containers:
      - name: nginx
        image: registry.digitalocean.com/k8-registry/c-core-nginx:1.1
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: siteconfig
          mountPath: /etc/nginx/sites-enabled/default
          subPath: default
        - name: serverconfig
          mountPath: /usr/local/nginx/conf/nginx.conf
          subPath: nginx.conf

Take note of the imagePullSecrets and image entries above. I am using the imagePullSecrets configuration to tell Kubernetes that it will need credentials to access the container registry where my image sits, and I am pointing it at the k8-registry credentials that were added by the DigitalOcean Kubernetes Integration change we made earlier. Finally, I am providing the full path to the custom image I am hosting in the DigitalOcean registry, version tag included, with the image statement pointing to registry.digitalocean.com/k8-registry/c-core-nginx:1.1.

Next up, I need to add a NodePort service that the load balancer can be configured to send traffic to.

apiVersion: v1
kind: Service
metadata:
  namespace: wordpress
  name: nginx
  labels:
    app: nginx
    release: wordpress
spec:
  selector:
    app: nginx
    release: wordpress
  type: NodePort
  ports:
    - port: 8080
      nodePort: 31645

So I do a little kubectl apply -f with those yaml files I just created, and everything comes up. The next step is to set up the load balancer to forward traffic over. Since I have the nodePort configured as 31645, I just need to tell the load balancer to send the traffic I want to that port. I don’t want to mess with the existing setup, so I decide to simply forward HTTP port 8443 over to HTTP port 31645.
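
For reference, the apply and verification step looks roughly like this; the file names below are just placeholders for the yaml files above, so use whatever you saved them as:

# Apply the ConfigMap, Deployment, and Service (each manifest already targets
# the wordpress namespace in its metadata).
kubectl apply -f nginx-configmap.yaml
kubectl apply -f nginx-deployment.yaml
kubectl apply -f nginx-service.yaml

# Wait for both replicas to come up before pointing the load balancer at them.
kubectl rollout status deployment/nginx -n wordpress
kubectl get pods -n wordpress -l app=nginx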

Everything should be all set, so let’s open a browser and test.

I am getting blocked! The block itself is what the config is supposed to do, but the problem is that I’m coming from my 2.2.2.2 address, which should be allowed. What could be the issue? Good thing I told the logs to be sent to stdout, so let’s check them for 403s:

kubectl logs -f -n wordpress nginx-9cdf87f68-tss6x|grep 403
...
10.126.32.147 - - [20/Dec/2020:13:37:33 +0000] "GET /admin HTTP/1.1" 403 187 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 11_1_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36"
...

I see the problem! That is not my 2.2.2.2 address! That was my request though. It seems that I’m not getting the real IP of the client but instead internal IPs from the load balancer.

Enter the PROXY Protocol

For access control, I didn’t want to rely on the X-Forwarded-For header since it is something that comes from the client, which means someone could spoof the header to get around my control. In addition to that, the DigitalOcean load balancer does not send this header, so it’s a moot point. DigitalOcean does provide the PROXY protocol in its load balancers, but not by default. The short explanation is that this protocol will send in the client IP like I want, but it requires some configuration. It’s also all or nothing: either the PROXY protocol is enabled on the load balancer or it isn’t; there is no mixing and matching.

Enabling the PROXY Protocol on the load balancer was easy. You simply enable it in the Settings of the load balancer.

It is very important NOT to enable this until Nginx is configured. Otherwise, the site will go down. I explain my specific configuration below, but you are also welcome to explore the Nginx documentation on the PROXY protocol.

Configuring Nginx

In my Testing Out the Digital Ocean Container Registry article, I built nginx with the PROXY protocol capability by enabling the ngx_http_realip_module. It’s almost like I wrote that previous article after getting this all working…? With the module already compiled in, it was pretty easy to simply update the configuration and go. I added the following line to my server block:

        set_real_ip_from 10.126.32.0/24;
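
On its own, set_real_ip_from only tells nginx which upstream addresses to trust. Based on the nginx realip and PROXY protocol documentation, the listener also has to accept the PROXY protocol and real_ip_header has to point at it, so the relevant parts of the server block end up looking roughly like this (a sketch of my understanding, not a verbatim copy of my config):

server {
    # Accept the PROXY protocol header that the load balancer now prepends.
    listen 8080 default_server proxy_protocol;
    listen [::]:8080 default_server proxy_protocol;

    # Trust these sources to supply the original client address...
    set_real_ip_from 10.126.32.0/24;
    # ...and take that address from the PROXY protocol header.
    real_ip_header proxy_protocol;

    # ...the rest of the server block stays the same...
}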

Just like that, I was good to go, or so I thought. Now I was only getting denied some of the time. I checked the logs again to find out why:

kubectl logs -f -n wordpress nginx-9cdf87f68-tss6x
...
10.126.32.147 - - [20/Dec/2020:13:37:33 +0000] "GET /admin HTTP/1.1" 403 187 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 11_1_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36"
2.2.2.2 - - [20/Dec/2020:18:35:49 +0000] "POST /admin HTTP/1.1" 200 98 "https://blog.shellnetsecurity.com/wp-admin/post.php?post=93&action=edit" "Mozilla/5.0 (Macintosh; Intel Mac OS X 11_1_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36"
3.3.3.3 - - [20/Dec/2020:13:37:33 +0000] "GET /admin HTTP/1.1" 403 187 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 11_1_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36"
...

Let’s chat about the set_real_ip_from statement. We need to add this statement for every IP address that we trust to provide us with the real client IP. In my case, it turned out that 10.126.32.0/24 was not a large enough block for the internal IP addresses, so I needed to change it to a /16. Also, notice the 3.3.3.3 address? That’s the external IP of one of the nodes in the Kubernetes cluster. Armed with that knowledge, I expanded my server block to include multiple set_real_ip_from statements:

        set_real_ip_from 10.126.32.0/16;
        set_real_ip_from 3.3.3.3;
        set_real_ip_from 4.4.4.4;
        set_real_ip_from 5.5.5.5;
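
If you’re wondering where an address like 3.3.3.3 comes from, the node external IPs are easy to list with kubectl; something like this works (just one way to gather them):

# Show every node along with its internal and external IP addresses.
kubectl get nodes -o wide

# Or pull just the external IPs.
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'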

I reloaded everything, tested again, and had success every time! I got denied when I wasn’t on my 1.1.1.1 or 2.2.2.2 address, and I could see others getting denied as well. When I’m sitting on 1.1.1.1 or 2.2.2.2, I’m able to get into my WordPress admin!
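
A quick note on “reloaded everything”: changing the ConfigMap by itself doesn’t make nginx inside the already-running pods re-read its configuration, so the pods need to be cycled. Here is a sketch of one way to do that (the post doesn’t spell out the exact steps I used, and the ConfigMap file name is a placeholder):

# Apply the updated ConfigMap, then restart the deployment so the pods
# come back up with the new configuration mounted.
kubectl apply -f nginx-configmap.yaml
kubectl rollout restart deployment/nginx -n wordpress
kubectl rollout status deployment/nginx -n wordpress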

Testing Out the Digital Ocean Container Registry

Disclosure: I have included some affiliate / referral links in this post. There’s no cost to you for accessing these links but I do indeed receive some incentive for it if you buy through them.

Photo by Guillaume Bolduc from StockSnap

The house used to be full of random computers and networking gear, but I’ve reduced the home presence over the years. I’ve messed with a number of cloud providers, both inexpensive and expensive, but the base for the majority of my toys resides in Digital Ocean. I’ve really liked what they’ve done over the years. Recently, they announced a Container Registry. If you follow this blog, then you remember my post, Posting a Custom Image to Docker Hub. In that post, I explained how to build an image and push it up to Docker Hub. Some images might not need to be public, for whatever reason. Needless to say, Digital Ocean’s Container Registry announcement intrigued me. With the move to WordPress, I figured that I should also create a custom nginx build to run in my Kubernetes cluster on Digital Ocean.

Building the Custom Nginx

This part was pretty easy. I simply created a Dockerfile for the build.

FROM ubuntu

ENV DEBIAN_FRONTEND noninteractive

MAINTAINER Scott Algatt

RUN apt-get update \
    && apt-get install -y libjansson-dev libcurl4-openssl-dev libapr1-dev libaprutil1-dev libssl-dev build-essential devscripts libtool m4 automake pkg-config libpcre3-dev zlib1g-dev \
    && apt -y upgrade \
    && apt -y autoremove \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* \
    && curl -o /tmp/nginx.tgz http://nginx.org/download/nginx-1.18.0.tar.gz

WORKDIR /tmp

RUN tar zxf nginx.tgz \
    && cd nginx-1.18.0 \
    && ./configure --with-http_realip_module \
    && make \
    && make install

EXPOSE 80
CMD ["/usr/local/nginx/sbin/nginx"]

As you can see from the Dockerfile, this is a really super simple build. It is also not very custom aside from my compile command, where I’ve added --with-http_realip_module. This little addition is something that I will use later in a future post (I know, everything will be in the future), but you can see what it does by visiting the nginx documentation. Anyhow, there you go. Aside from the configure command, I’m just setting up Ubuntu to compile code, downloading nginx, and compiling it. Then I expose port 80 and run nginx.

Once you have created the Dockerfile, you can run a build to generate your Docker image. You’ll see that my build command tags the build with a name, c-core-nginx, and a specific version, 1.1. I would suggest doing this to help keep versions straight in your repository.

% docker build -t c-core-nginx:1.1 .
Sending build context to Docker daemon  21.72MB
Step 1/9 : FROM ubuntu
 ---> 4e2eef94cd6b
Step 2/9 : ENV DEBIAN_FRONTEND noninteractive
 ---> Using cache
 ---> decc285ce9e4
Step 3/9 : MAINTAINER Scott Algatt
 ---> Using cache
 ---> 197e4c81b654
Step 4/9 : RUN apt-get update     && apt-get install -y libjansson-dev libcurl4-openssl-dev libapr1-dev libaprutil1-dev libssl-dev build-essential devscripts libtool m4 automake pkg-config libpcre3-dev zlib1g-dev    && apt -y upgrade     && apt -y autoremove     && apt-get clean     && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*     && curl -o /tmp/nginx.tgz http://nginx.org/download/nginx-1.18.0.tar.gz
 ---> Using cache
 ---> d5c8a70c412f
Step 5/9 : COPY ./perimeterx-c-core /tmp/perimeterx-c-core
 ---> Using cache
 ---> d325026c19b6
Step 6/9 : WORKDIR /tmp
 ---> Using cache
 ---> 8fb23db246a3
Step 7/9 : RUN tar zxf nginx.tgz     && cd nginx-1.18.0     && ./configure --add-module=/tmp/perimeterx-c-core/modules/nginx --with-threads --with-http_realip_module    && make     && make install
 ---> Using cache
 ---> 25af69d04a9f
Step 8/9 : EXPOSE 80
 ---> Using cache
 ---> e74b4cc64160
Step 9/9 : CMD ["/usr/local/nginx/sbin/nginx"]
 ---> Using cache
 ---> 6f10e3bebefc
Successfully built 6f10e3bebefc
Successfully tagged c-core-nginx:1.1

After the build completes, you can confirm that your image is listed in your local Docker repo:

% docker images c-core-nginx
REPOSITORY     TAG       IMAGE ID       CREATED       SIZE
c-core-nginx   1.1       6f10e3bebefc   2 weeks ago   584MB
c-core-nginx   1.0       b3673b4bf518   2 weeks ago   584MB
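
Since the entire point of this build is the realip module, it doesn’t hurt to confirm it actually got compiled in before pushing the image anywhere. One quick way (my suggestion, not a required step) is to ask the binary for its configure arguments:

# Print the configure arguments baked into the custom image; the output
# should include --with-http_realip_module.
docker run --rm c-core-nginx:1.1 /usr/local/nginx/sbin/nginx -V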

Pushing Your Image to the Container Registry

I’m not going to spend a ton of effort in this section because the Digital Ocean Container Registry announcement I posted above explains the setup really well. At a high level, you simply complete the following steps:

  1. Install and configure doctl (assuming, like me, you had never done this before)
  2. Log in to your Digital Ocean account
  3. Go to the Container Registry link
  4. Create the Container Registry
  5. Log in to your registry using the doctl command
  6. Push your desired container(s) to the registry (a command sketch follows this list)
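
Roughly, steps 5 and 6 boil down to the commands below. The registry and image names are the ones from my build earlier, so substitute your own, and doctl auth init is only needed if doctl isn’t configured yet:

# Authenticate doctl (first time only), then log Docker in to the registry.
doctl auth init
doctl registry login

# Tag the local image with the registry path and push it up.
docker tag c-core-nginx:1.1 registry.digitalocean.com/k8-registry/c-core-nginx:1.1
docker push registry.digitalocean.com/k8-registry/c-core-nginx:1.1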

The below image shows a screenshot of my c-core-nginx images that I uploaded to my Container Registry.

Notice something really cool? The size of those images in my local registry is 584MB but they are roughly 194MB when uploaded. They are being compressed in the registry. This is a really nice feature since the initial free tier of Digital Ocean’s Container Registry is a single repo of 500MB.

In the future, you will see how I actually used this new feature for fun and zero profit.

Automatically Rebuild Image on Docker Hub

This post focuses on me being lazy. In the previous post, I talked about building a custom image and posting it to Docker Hub. I have also talked about creating a Git repo and storing everything in it thus far. What if a commit could rebuild our image for us? As luck would have it, you can do this!

This post is going to focus on making that very simple change to your Docker Hub repository so that every commit causes the image to be rebuilt and published as latest. How fun!

Connecting Docker Hub to Your Git Account

The major thing to accomplish here is configuring Docker Hub to monitor GitHub. In order to do that, you’ll need to first sign in to your Docker Hub account. This should bring you to the main page where you see the list of repos you maintain:

From there, click on the repo that you plan to configure. In my case, it’s the testnginximage repo. On the resulting screen, click on the Builds link to reveal the below page:

Click on the Link to GitHub button to open your preferences and configure linked accounts.

Click the Connect link on this screen to link your GitHub account. If you are already signed in to GitHub, Docker Hub will automatically connect to whatever account you are signed in with. If you are not already signed in to GitHub, you’ll see the below login to GitHub screen:

Log in to the GitHub account you used to store the Dockerfile we created in the previous post. Once connected, you’ll return to your Docker Hub profile with your GitHub account connected and the account name listed:

At this point, you now have your Docker Hub and GitHub accounts connected. The next step will be to enable automatic builds.

Enabling Automatic Builds in Docker Hub

With Docker Hub and GitHub connected, the next step is to tell Docker Hub which repo to use and where the Dockerfile is located. In order to do that, go back to your repo and once again, click on the Build link. Within the Build screen, again, click on the Link to GitHub button. This time, the button should say “Connected” on it as shown below:

On the resulting page, configure the username and repo you would like to use as your source. Since I have been building everything in my mysamplerepo repo, I’m choosing this from the drop down:

In my prior examples, I created the Dockerfile in the nginxdocker directory within my mysamplerepo. Assuming you have done the same, scroll down the page and set the Build Context to the nginxdocker directory in the Build Rules. The Build Context is the path from the root of your repo that contains the Dockerfile. If you’ve placed your Dockerfile in a different path within your repo, make sure the Build Context is configured for that particular path.

Once you have this all configured, click on the Save and Build button at the bottom of the page. This should take you back to the Build page where you can monitor the status of the build.

Monitor the progress to make sure everything builds correctly. Once done, you should see a success status for the build.

Use a Commit to Generate a Build

Now that we have everything connected and working, let’s see if we can do a commit to our repo and confirm that the commit triggers a build. Let’s just make a simple change and no longer expose port 443 for the image:

FROM ubuntu

MAINTAINER Scott Algatt

RUN apt-get update \
    && apt-get install -y nginx libnginx-mod-http-lua libnginx-mod-http-subs-filter software-properties-common \
    && add-apt-repository -y universe \
    && add-apt-repository -y ppa:certbot/certbot \
    && apt-get update \
    && apt-get -y install certbot python-certbot-nginx \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

COPY ./conf/nginx.conf /etc/nginx/nginx.conf
COPY ./conf/site.conf /etc/nginx/sites-available/default

EXPOSE 80
CMD ["nginx"]

With that change, let’s do a commit and push:

$ git commit -a
 [master 0e01193] Removing port 443
  Committer: Scott <scott@iMacs-iMac.local>
  
  2 files changed, 2 deletions(-)
$ git push origin master
 Counting objects: 6, done.
 Delta compression using up to 4 threads.
 Compressing objects: 100% (6/6), done.
 Writing objects: 100% (6/6), 499 bytes | 499.00 KiB/s, done.
 Total 6 (delta 3), reused 0 (delta 0)
 remote: Resolving deltas: 100% (3/3), completed with 3 local objects.
 To github.com:algattsm/mysamplerepo.git
    1d4d448..0e01193  master -> master

After performing the commit, refresh your Build page in Docker Hub and you should see a build trigger:

This means that you’ll be able to simply use your GitHub repo to generate a new image anytime you like! It also means that with every commit, you’ll be publishing the latest version of your image on Docker Hub.

Referenced File

In case you want to make sure you have the correct file, here is the only file I referenced in this post:

Posting a Custom Image to Docker Hub

Welcome to 2020! I hope the new year finds everyone in good spirits and ready to continue listening to me babble about my struggles with technology.

So far, the focus has been on using default Docker images for our builds. This is great if you plan to deploy stock instances and only need to serve custom content with some minor configuration tweaks. Note that we were able to make configuration changes using a configMap yaml. What if you needed Nginx modules that weren’t already installed in the base image? Sure, you could come up with some funky CMD statement in your yaml file that tells Kubernetes to install the modules. Of course, it would take some time for the pod to become available while it boots up and runs through the install steps, and it would defeat the purpose of what I’m attempting to show you 🙂

The focus of this article is simple. We’re going to set up a Docker Hub account and build a custom Nginx image to post there. From there, some future articles will help us use this newfound knowledge to do some cool stuff.

Let’s stop the babble and start the fun!

Creating a Docker Hub Account

This is pretty straightforward, so we’ll cover it briefly.

  1. Go to https://hub.docker.com/
  2. Click the Sign up for Docker Hub button
  3. Enter your information
  4. Sign up
  5. Wait for the verification Email from Docker
  6. Verify your email via the verification email
  7. Sign in

Done

Create a Docker Hub Repository

Now that you have a Docker Hub account, you’ll want to create a repo to be able to store your custom docker image. Assuming that you are still signed in from the steps above, you should see a Create a Repository button:

If you don’t see the Create a Repository button, worry not, you can get there by clicking on the Repositories link on the top menu and then the Create Repository button:

On the resulting Create Repository screen, let’s add in some details such as below:

You may call the repo whatever you want and feel free to give it a description. For now, we’re going to make this a public repo. Once you have this information filled out, scroll to the bottom and click Create. You should now see something similar to the below:

Make note of the docker push command in the black background box on the right. In my case, it is

docker push algattblog/testnginximage:tagname

We’ll need this later when we build our custom image.

Configuring Our Custom Nginx Docker Image

In order to keep everything in one place and keep things backed up, we’ll be building this out within our previously defined Git repo. It’s a private repo, so it’s reasonably protected, and GitHub is a really nice place to maintain our backup. The first step will be to make sure we’re in the root of our repo and make a new directory to store this image.

$ mkdir nginxdocker

From there, we’ll change into the directory so that we can start with our Dockerfile

$ cd nginxdocker/

Now let’s create a new Dockerfile that looks like the following:

FROM ubuntu

MAINTAINER Scott Algatt

RUN apt-get update \
    && apt-get install -y nginx libnginx-mod-http-lua libnginx-mod-http-subs-filter software-properties-common \
    && add-apt-repository -y universe \
    && add-apt-repository -y ppa:certbot/certbot \
    && apt-get update \
    && apt-get -y install certbot python-certbot-nginx \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

COPY ./conf/nginx.conf /etc/nginx/nginx.conf
COPY ./conf/site.conf /etc/nginx/sites-available/default

EXPOSE 80
EXPOSE 443

CMD ["nginx"]

Let’s see what this does… First, we’re going to build this new Docker image using ubuntu as our base image. From there, we’re going to install nginx, libnginx-mod-http-lua, libnginx-mod-http-subs-filter, and software-properties-common. We’re installing software-properties-common so that we can add the certbot repo and then install certbot. We’re also going to copy over some custom Nginx configuration files so we won’t need to leverage our configMap anymore. We make sure ports 80 and 443 are exposed on the running container. Finally, the container runs the “nginx” command to start the nginx server.

Next, we’ll want to create the files referenced by the COPY commands. We start by creating the conf directory and then change into it:

$ mkdir conf
$ cd conf

Create the nginx.conf file with the following (Basically, we’re defining a custom log format):

user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
        worker_connections 768;
}

http {
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 65;
        types_hash_max_size 2048;
        include /etc/nginx/mime.types;
        default_type text/html;
        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;
        log_format  graylog2_format  '$remote_addr $request_method "$request_uri" $status $bytes_sent "$http_referer" "$http_user_agent" "$http_x_forwarded_for" "$http_if_none_match"';
        gzip on;
        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
}

daemon off;

The MOST important item at the bottom of this file is the daemon off; statement. Without it, nginx would background itself as a daemon, the container’s main process would COMPLETE, and the container would stop. We want nginx to run in the foreground, not in the background as a daemon, which is why this is here. Now create the referenced site.conf file.

server {
    listen       80;
    server_name  localhost;
    access_log /var/log/nginx/access.log graylog2_format;
    error_log /var/log/nginx/error.log graylog2_format;

    location / {
        root   /usr/share/nginx/www/html;
        index  index.html index.htm;
    }

    error_page   500 502 503 504  /50x.html;

    location = /50x.html {
        root   /usr/share/nginx/www/html;
    }

  location ~ \.php$ {
      root /usr/share/nginx/www/html;
      try_files $uri =404;
      fastcgi_split_path_info ^(.+\.php)(/.+)$;
      fastcgi_pass phpfpm:9000;
      fastcgi_index index.php;
      include fastcgi_params;
      fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
      fastcgi_param PATH_INFO $fastcgi_path_info;
  }
}

This looks good so let’s first save our changes and commit them to our repo.

$ cd ../..
$ git add .
$ git commit -a
[master ade43da] Adding in our stuff
 Committer: Scott <scott@iMacs-iMac.local>
 3 files changed, 23 insertions(+), 37 deletions(-)
 rewrite nginxdocker/conf/site.conf (99%)
 delete mode 100644 nginxdocker/conf/test
$ git push origin master
Enter passphrase for key '/Users/scott/.ssh/id_rsa': 
Counting objects: 6, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (6/6), done.
Writing objects: 100% (6/6), 754 bytes | 754.00 KiB/s, done.
Total 6 (delta 2), reused 0 (delta 0)
remote: Resolving deltas: 100% (2/2), completed with 2 local objects.
To github.com:algattsm/mysamplerepo.git
   97b7a1c..ade43da  master -> master

Now that we’ve got that squared away, it’s onto the next step!

Building our Docker Image and Publishing it

This is the easy part as we just watch it run. Make sure we’re in the directory that contains our Dockerfile and then we’ll run the build command:

$ cd nginxdocker/
 imacs-imac:nginxdocker scott$ docker build -t algattblog/testnginximage:latest .
  
 Sending build context to Docker daemon  6.144kB
 Step 1/8 : FROM ubuntu
 latest: Pulling from library/ubuntu
 2746a4a261c9: Pull complete 
 4c1d20cdee96: Pull complete 
 0d3160e1d0de: Pull complete 
 c8e37668deea: Pull complete 
 Digest: sha256:250cc6f3f3ffc5cdaa9d8f4946ac79821aafb4d3afc93928f0de9336eba21aa4
 Status: Downloaded newer image for ubuntu:latest
  ---> 549b9b86cb8d
 Step 2/8 : MAINTAINER Scott Algatt
  ---> Running in ff6d8459f56b
 Removing intermediate container ff6d8459f56b
  ---> 666acba43494
 Step 3/8 : RUN apt-get update     && apt-get install -y nginx libnginx-mod-http-lua libnginx-mod-http-subs-filter software-properties-common    && add-apt-repository -y universe     && add-apt-repository -y ppa:certbot/certbot     && apt-get update     && apt-get -y install certbot python-certbot-nginx     && apt-get clean     && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
  ---> Running in acfefd676a08
 Get:1 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
 Get:2 http://archive.ubuntu.com/ubuntu bionic InRelease [242 kB]
 ...
 Setting up python-certbot-nginx (0.31.0-1+ubuntu18.04.1+certbot+1) ...
 Removing intermediate container acfefd676a08
  ---> 09301061d312
 Step 4/8 : COPY ./conf/nginx.conf /etc/nginx/nginx.conf
  ---> c82b8d22e6a0
 Step 5/8 : COPY ./conf/site.conf /etc/nginx/sites-available/default
  ---> 841e6ecfc3d9
 Step 6/8 : EXPOSE 80
  ---> Running in f2f36c350457
 Removing intermediate container f2f36c350457
  ---> 79af7e01f9c0
 Step 7/8 : EXPOSE 443
  ---> Running in 9d6a8dcdba31
 Removing intermediate container 9d6a8dcdba31
  ---> a534c821c51b
 Step 8/8 : CMD ["nginx"]
  ---> Running in 82ceccd20644
 Removing intermediate container 82ceccd20644
  ---> 6728616336a3
 Successfully built 6728616336a3
  
 Successfully tagged algattblog/testnginximage:latest

Next, we need to publish it on Docker Hub (remember that push command from earlier?):

$ docker push algattblog/testnginximage:latest
 The push refers to repository [docker.io/algattblog/testnginximage]
 010c4615edf3: Pushed 
 98c06aef3fd3: Pushed 
 229f4ffc7b88: Pushed 
 918efb8f161b: Mounted from library/ubuntu 
 27dd43ea46a8: Mounted from library/ubuntu 
 9f3bfcc4a1a8: Mounted from library/ubuntu 
 2dc9f76fb25b: Mounted from library/ubuntu 
 latest: digest: sha256:4545730a7dd5b5818f0ce9a78666f40ea9a864198665022dc29000a34cc4b402 size: 1778

Testing the New Image in our Cluster

At this point, we’ve got our newly created image uploaded to Docker Hub. The next step is to test it out and see if it works in our cluster. In order to do that, we’ll want to change the image in our webserver.yaml file to reference the newly created image:

        image: algattblog/testnginximage:latest
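
For context, that line lives under the container spec of the webserver Deployment, so the relevant chunk ends up looking something like this (a sketch; only the image line is actually changing, and the surrounding field values are inferred from the kubectl output below):

      containers:
      - name: webserver
        # Point the webserver container at the custom image on Docker Hub.
        image: algattblog/testnginximage:latest
        ports:
        - containerPort: 80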

Now let’s apply the updated deployment:

# kubectl apply -f webserver.yaml 
 configmap/webserver-config unchanged
 service/webserver unchanged
 deployment.apps/webserver configured

Wait for it to build the new webserver pod by keeping an eye on kubectl:

# kubectl get pod
 NAME                         READY   STATUS    RESTARTS   AGE
 phpfpm-7b8d87955c-rps2w      2/2     Running   0          3d1h
 webserver-6b577db595-wgwwb   2/2     Running   0          25s

Looks like it’s up and running, so let’s test and make sure everything looks good. We’ll start by connecting to the webserver, installing curl, and making sure everything works again.

# kubectl exec -it webserver-6b577db595-wgwwb /bin/bash -c webserver
 groups: cannot find name for group ID 65533
 root@webserver-6b577db595-wgwwb:/# apt update     
 Get:1 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
  
 Get:2 http://archive.ubuntu.com/ubuntu bionic InRelease [242 kB]           
 Setting up libcurl4:amd64 (7.58.0-2ubuntu3.8) ...
 Setting up curl (7.58.0-2ubuntu3.8) ...
 Processing triggers for libc-bin (2.27-3ubuntu1) ...
 root@webserver-6b577db595-wgwwb:/# curl localhost
 <html>
 <body>
 hello world! Everything must be cleaned up at this point
 </body>
 </html>
 root@webserver-6b577db595-wgwwb:/# curl localhost/index.php
  
 hello world from php

We can ignore the groups error for now; some idiot left out instructions on fixing that, but it’s not a biggie. Looks like we’re serving content properly. Let’s check our nginx configuration to confirm it’s also running the correct config:

root@webserver-6b577db595-wgwwb:/# cat /etc/nginx/nginx.conf 
 user www-data;
 ...
  log_format  graylog2_format  '$remote_addr $request_method "$request_uri" $status $bytes_sent "$http_referer" "$http_user_agent" "$http_x_forwarded_for" "$http_if_none_match"';
  gzip on;
  include /etc/nginx/conf.d/*.conf;
  include /etc/nginx/sites-enabled/*;
 }
  
 daemon off;
 root@webserver-6b577db595-wgwwb:/# 

Confirmed! This looks great! You’ve now got a custom Docker image that you can use and expand upon. I know I left you hanging here, but if I kept building, my attention span couldn’t tolerate typing anymore. More to come…