
Speeding Up WordPress

I started messing around with my WordPress site by first adding a layer of security in Adding Nginx in Front of WordPress. After putting Nginx in front of my WordPress, I decided to secure it further by also Building a Static WordPress. That's great and all, but maybe it was time to make Nginx give me some performance gains rather than just some security controls. That is exactly what we're going to do in this blog post. Now that Nginx is sitting in front of WordPress, we can use it to control some of the performance aspects.

Generating a Baseline Performance Report

First things first though: let's get a baseline of where the site is at and what needs work. Google's PageSpeed is a great tool for finding out what's slowing down your site. Below is the report for this blog.

PageSpeed mobile numbers

I guess those numbers aren’t terrible but I’m sure they could be better.

Figuring Out What to Fix

As you scroll down the report, there are a number of things to correct. An example of such things would be the Opportunities section:

Opportunities section of report

In addition, there are some diagnostic items that show up:

Example cache policy items

Fixing Some of the Items

Adding a Caching Policy

An initial first step to correct some performance issues would be to enable caching policies on the Nginx server. Given that we're serving almost all static content now, there's no reason clients shouldn't cache the content we serve up. Nginx is already serving the static data directly, so we don't need to rely on a backend. Let's modify the static path's caching policy for clients by adding the Cache-Control response header:

...
         location /status {
                 return 200 "healthy\n";
         }
 
         location / {
                 try_files $uri $uri/ /index.html;
                 add_header 'Cache-Control' "public,max-age=31536000,stale-while-revalidate=360";
                 #proxy_pass https://wordpress;
                 #proxy_ssl_verify off;
                 #proxy_set_header Host blog.shellnetsecurity.com;
                 #proxy_set_header X-Forwarded-For $remote_addr;
         }
 
         location /sitemap {
                 proxy_pass https://wordpress;
                 proxy_ssl_verify off; 
                 proxy_set_header Host blog.shellnetsecurity.com;
                 proxy_set_header X-Forwarded-For $remote_addr;
         } 
...

This example configuration snippet shows that we are adding the Cache-Control response header to requests to "/". This means we're doing what we planned and are only telling clients to cache content that isn't served by the backend WordPress server. Additional parameters that can be supplied to Cache-Control are documented here.
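If you want to confirm the header is actually being returned on "/", here's a minimal Node sketch (the hostname comes from this post; swap in your own):

 const https = require('https');
 
 // Request the homepage and print the caching header we just added
 https.get({ hostname: 'blog.shellnetsecurity.com', path: '/' }, (res) => {
   console.log('cache-control:', res.headers['cache-control']);
   res.resume(); // discard the body; we only care about the header
 });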

Enable Gzip Compression

By default, even with gzip on, Nginx will not compress all files. Let's add some additional content to our http config block (note the additional gzip directives listed below gzip on):

    http {
         proxy_set_header X-Real-IP       $proxy_protocol_addr;
         proxy_set_header X-Forwarded-For $proxy_protocol_addr;
         sendfile on;
         tcp_nopush on;
         tcp_nodelay on;
         keepalive_timeout 65;
         types_hash_max_size 2048;
         include /usr/local/nginx/conf/mime.types;
         default_type application/octet-stream;
         access_log /dev/stdout;
         error_log /dev/stdout;
         gzip on;
         gzip_vary on;
         gzip_min_length 1000;
         gzip_proxied expired no-cache no-store private auth;
         gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml;
         gzip_disable "MSIE [1-6]\.";
         resolver kube-dns.kube-system.svc.cluster.local;
 
         include /etc/nginx/sites-enabled/*;
     } 

With those changes added to our Nginx configuration, restart Nginx for the changes to take effect.
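Same idea for verifying compression: the server only gzips when the client advertises support, so send an Accept-Encoding request header and check what comes back. Again, a sketch against this post's hostname:

 const https = require('https');
 
 https.get({
   hostname: 'blog.shellnetsecurity.com',
   path: '/',
   headers: { 'Accept-Encoding': 'gzip' } // advertise gzip support
 }, (res) => {
   console.log('content-encoding:', res.headers['content-encoding']); // expect "gzip"
   res.resume(); // headers are all we need
 });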

Testing Our Page Again

Now that those changes should be live in your Nginx, let’s check how we did again on PageSpeed.

Updated PageSpeed details.

The numbers aren’t amazingly stellarly awesomer but they are better. If you look at the overall scoring, we jumped from a 66 to a 72. The final problem left is not something we can correct using Nginx. There are a number of first and third party scripts that are loading and slowing the site down. Next steps will involve researching those scripts and attempting to determine if there are any that can be removed. Until next time!

LED Lighting

Photo by Suzy Hazelwood from StockSnap

Disclosure: I have included some affiliate / referral links in this post. There’s no cost to you for accessing these links but I do indeed receive some incentive for it if you buy through them.

It’s time to get back to some lighting as I spent a little time enhancing my setup that I left off configuring in Making the Lights Dance. In my Building the RaspberryPi Christmas Light Box post, I blamed a friend for starting me down this path. Once again, I’m blaming a different friend for causing me to wander down the LED lighting road. This friend saw some of my posts regarding the simplistic lighting box I created, and they suggested that I tinker with WS2811 lights. Let the tinkering begin!

Hardware List

It’s always good to talk through the hardware that we’ll be using for this. To start, I extended my previous Building the RaspberryPi Christmas Light Box system to be able to do some LED lighting. Here are the items that I purchased from Amazon:

By the way, your hardware may vary slightly; I bought the 5V lights just because. There are LED tape strips and these 12mm bullets. There are indoor-only and IP68-rated options. There are 12V lights and many other options to choose from. I just happened to pick these because they were the cheapest at the time.

Tiny Electrical Lesson on the Hardware

I’m not an electrician by any stretch of the imagination but I know a guy that helped me get through it a little. He’s more AC than DC but we had a good chat about it all (ok enough babbling on that).

There are some very important concepts to keep in mind on the power requirements. These important items were things that I had to dig up and learn from my electrician. I figured I’d drop them here to help anyone else going down the LED lighting route as blindly as I did. You need to make sure the output voltage on the power supply matches the input requirements on the lights. Aside from voltage drops (I’ll cover that in a later article when I go bigger on the system), the voltage will remain constant. I am using 5V lights so an adapter capable of outputting 5V was required.

Wattage is cumulative! This is a very important point to remember in all of this. The example lights used have a spec of roughly 0.3W per LED bulb. Each strand has 50 bulbs. If you take 0.3W * 50 bulbs, you get 15W. This means that a single strand of lights requires 15W of power. If you wanted to use two strands, you'd do 0.3W * 100 bulbs for 30W of power required. This means that when you purchase your power supply, it will need to support 5V and 15W * <the number of strands>. Given the power supply listed above in the hardware list, I can only run a single strand of lights :facepalm: here. For those wanting to do more than one strand, I would suggest possibly getting the SHNITPWR 4V – 12V Power Supply 10A 120W AC to DC Adapter. This is what I used when I went bigger with the LED lighting.
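For anyone sizing a bigger build, here's the same arithmetic as a tiny Node snippet. The per-bulb wattage and bulbs-per-strand come from this post; the strand count is a hypothetical you'd change for your setup:

 // Power supply sizing for WS2811 strands (numbers from this post)
 const wattsPerBulb = 0.3;   // spec for these bulbs
 const bulbsPerStrand = 50;  // bulbs on one strand
 const strands = 2;          // hypothetical: set to your strand count
 
 const requiredWatts = wattsPerBulb * bulbsPerStrand * strands;
 console.log(`A ${strands}-strand run needs a 5V supply rated for at least ${requiredWatts}W`); // 30W here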

The only other thing to keep in mind would be the data channel on the lights. The lights have 3 wires (5 if you count the separate power leads at each end of the strand but who's really counting?) that supply power and data. The power has your standard +/- and the third middle wire is data. The nice thing is that data does not suffer from voltage drop like the power. Data is repeated at each bulb so it doesn't lose signal (as long as you get good power to the first bulb; a warning I didn't fully heed until I built the bigger system. Again, more details later).

Connecting Everything

The awesome folks over at Adafruit have put together some really nice articles to also help out. In order to get everything up and running, it was pretty simple to follow their NeoPixels on Raspberry Pi Wiring guide (FYI, this guide contained the warning that I ignored regarding the data power requirements. They refer to it as level shifting. In my testing of a single strand and later connecting 8 strands, I had zero issues without doing the level shifting. The moment I wired everything up outside for the bigger system, I did end up needing to level shift 🙂 ). Ok so I ignored all warnings and went straight for wiring the light strand directly to my Raspberry Pi 4 along with adding the power supply.

I started with my mess of goodies

hardware pieces to be assembled

Something very important to know about the LED lights is that the data is a one way street. When you connect your data wire to the Raspberry Pi, you need to make sure the Pi is feeding “in” to the strand. These strands come with a little arrow to explain how the data is expected to flow (sorry the arrow is a little blurry).

Picture of data directional arrow on 12mm bulb

This isn't too terribly difficult to get right, to be honest. The example I have above shows the arrow pointing up from the wire. This means that the data will come "in" from that wire to the bulb. This is the end that you connect your power supply and Raspberry Pi onto. Speaking of that!

Power supply connections and pigtail

The nice thing about these lights is that there was a pigtail included that connected to the existing connectors. Also, the ground aka “-” has a dotted line on the wire while the positive does not. The above picture shows you the 5 wires I talked about. I have the power connected directly to the “-” and “+” light strand’s separate power wires. Those are connected to the proper terminals on the power plug connector. On the pigtail, the color scheme is like this:

  • Green == Data Wire
  • Red == +
  • White == –

With the power connected to the power plug adapter and the pigtail connected to the lights, I needed to connect everything to the Raspberry Pi. This is where the breadboard comes in:

Connections into the breadboard

On the left side of the image, you can see that I have my pigtail wired onto the breadboard. On the right side, the wires are destined for the Pi. The list below explains how I have the wires connected:

  • Red wire from light strand pigtail: connected to the "+" rail on the breadboard. This serves no purpose whatsoever; I just didn't want a loose wire roaming around.
  • Green wire from light strand pigtail: connected to row "44" on the breadboard. This is the data wire to the light strand, and we'll need to connect it to GPIO18 on the Raspberry Pi via the breadboard.
  • White wire from light strand pigtail: connected to the "-" rail on the breadboard. This connects the ground wire from the light strand to the Raspberry Pi via the breadboard.
  • Red wire from Raspberry Pi Pin 6 (GND): connected to the "-" rail on the breadboard. This connects the ground from the Raspberry Pi to the ground on the light strand via the breadboard.
  • Tan wire from Raspberry Pi Pin 12 (GPIO18): connected to row "44" on the breadboard. This connects GPIO18 to the light strand via the breadboard.

GPIO18 is very important to use as our data connection to the light strand. Below is a full picture of the wiring.

full wiring diagram of breadboard and pi

Getting the Software

With all of the hardware in place, it is now time to fire everything up and get the software we need to run this light show! I'm assuming you can log in to your Pi and create a directory called LEDs. We'll use this directory to house our testing code. I'm also going to assume that you are ok with using NodeJS (sorry, I've been on a JavaScript kick lately. There's also Python code available to do this as well). Let's get into that directory and install the rpi-ws281x-native library we'll need to get the lights running:

 $ cd LEDs/
 pi@raspberrypi:~/LEDs $ npm install rpi-ws281x-native
 npm WARN npm npm does not support Node.js v10.21.0
 npm WARN npm You should probably upgrade to a newer version of node as we
 npm WARN npm can't make any promises that npm will work with this version.
 npm WARN npm Supported releases of Node.js are the latest release of 4, 6, 7, 8, 9.
 npm WARN npm You can find the latest version at https://nodejs.org/
 
 > rpi-ws281x-native@0.10.1 install /home/pi/LEDs/node_modules/rpi-ws281x-native
 > node-gyp rebuild
 
 make: Entering directory '/home/pi/LEDs/node_modules/rpi-ws281x-native/build'
   CC(target) Release/obj.target/rpi_libws2811/src/rpi_ws281x/ws2811.o
   CC(target) Release/obj.target/rpi_libws2811/src/rpi_ws281x/pwm.o
   CC(target) Release/obj.target/rpi_libws2811/src/rpi_ws281x/dma.o
   CC(target) Release/obj.target/rpi_libws2811/src/rpi_ws281x/pcm.o
   CC(target) Release/obj.target/rpi_libws2811/src/rpi_ws281x/mailbox.o
   CC(target) Release/obj.target/rpi_libws2811/src/rpi_ws281x/rpihw.o
   AR(target) Release/obj.target/rpi_libws2811.a
   COPY Release/rpi_libws2811.a
   CXX(target) Release/obj.target/rpi_ws281x/src/rpi-ws281x.o
   SOLINK_MODULE(target) Release/obj.target/rpi_ws281x.node
   COPY Release/rpi_ws281x.node
   COPY ../lib/binding/rpi_ws281x.node
   TOUCH Release/obj.target/action_after_build.stamp
 make: Leaving directory '/home/pi/LEDs/node_modules/rpi-ws281x-native/build'
 npm WARN saveError ENOENT: no such file or directory, open '/home/pi/LEDs/package.json'
 npm WARN enoent ENOENT: no such file or directory, open '/home/pi/LEDs/package.json'
 npm WARN LEDs No description
 npm WARN LEDs No repository field.
 npm WARN LEDs No README data
 npm WARN LEDs No license field.
 
 + rpi-ws281x-native@0.10.1
 updated 1 package in 13.568s
 pi@raspberrypi:~/LEDs $  

With that all set and ready to go, I suggest grabbing the example scripts hosted in the rpi-ws281x-native GitHub repo. Note that you will need to modify the require lines in those scripts from:

var ws281x = require('../lib/ws281x-native');

to something like this:

var ws281x = require('rpi-ws281x-native');

From there, you can try out one of the scripts. Remember, you must specify the total number of pixels in the strand to be tested. By default, the code will only light up 10 bulbs. The example below shows how you would run the command for all 50 bulbs in our example strand:

 $ sudo node rainbow.js 50
 Press <ctrl>+C to exit.
 pi@raspberrypi:~/LEDs $  

It's very important to run these using "sudo" because the code requires root access in order to properly signal the strand.
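If you'd rather start from something smaller than the repo's examples, here's a minimal sketch against the 0.10.x API that npm installed above (hedged; later releases of the library changed the interface). It lights the whole strand red:

 // solid-red.js - light the whole strand red using the 0.10.x API
 var ws281x = require('rpi-ws281x-native');
 
 var NUM_LEDS = parseInt(process.argv[2], 10) || 50; // bulbs on the strand
 var pixelData = new Uint32Array(NUM_LEDS);
 
 ws281x.init(NUM_LEDS);
 
 for (var i = 0; i < NUM_LEDS; i++) {
   pixelData[i] = 0xff0000; // one 24-bit color value per bulb
 }
 ws281x.render(pixelData);
 
 console.log('Press <ctrl>+C to exit.');
 
 // blank the strand on exit so the bulbs don't stay lit
 process.on('SIGINT', function () {
   ws281x.reset();
   process.nextTick(function () { process.exit(0); });
 });

Run it the same way as the repo's examples: sudo node solid-red.js 50.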

Video: The Example Scripts in Action

Here is an example of me running those scripts and the lights in action

Automating Static WordPress Updates

Photo by Alex Knight from StockSnap

In my previous post, Building a Static WordPress, I set up my Nginx sitting in front of WordPress to load static content from a private repo. This is great but could become tedious long term. Most notably, this becomes challenging as you begin to post more content. Each time content is posted, we need to fetch all of the updated pages, including category updates and any new images. Sure, we can just run a few wget commands manually and then update our repo and all is better. Just because you "can" doesn't mean you should.

From that previous post, you'll note that I had a bunch of unanswered questions. Some of those questions might remain unanswered, though by the time you get to the end of this post, you might be able to address them yourself. I'm going to focus on automating static WordPress updates whenever a new post is published. It should be possible to replicate similar logic for updating static content after WordPress and WordPress plugin upgrades.

Setting Up Slack Notifications

I guess the secret is out! I'm going to be using Slack as part of this. I'm going to assume you have your own Slack set up with a channel dedicated to WordPress notifications. In my case, that's going to be called #wordpress. Given the way my site is configured, I figured Slack notifications would be the best method of triggering my automation. I'm not going to reinvent the wheel here, so please refer to the wpbeginner post, How to Get Slack Notifications From Your WordPress Site, on how to configure the Slack Notifications plugin.

After walking through the wpbeginner post, you should have Slack Notifications installed, configured, and successfully tested. For the purposes of this post, you’ll want to create a Posts Published notification like below:

Building the Automation

I wanted to keep everything self-contained in my Kubernetes cluster, so I decided to build a nice little Node service to do everything that I wanted. We need a service to connect to the #wordpress Slack channel and watch for updates. Depending upon the update type it has received, it should commit the new article content to our private repo.

Follow the Existing Tutorial

Slack has created a JavaScript bot framework called Bolt. For this reason, I'm not going to spend too much time explaining how to build a bot when Slack already created a great tutorial, Building an app with Bolt for JavaScript. When setting up the bot, make sure you subscribe to the following message events:

  • message.channels
  • message.im
  • message.groups
  • message.mpim

Go ahead, get familiar with Bolt. I’ll wait while you make the example bot. Once you’re done, come back and continue to the next section.

Extending the Existing Tutorial

This next section takes the above tutorial and extends it to include what we actually need for our bot to handle Post Published notifications. The first step is to generate an SSH key that has read/write access to the static content repo. Remember, this was created as a private repo, so we'll need a key with read/write access for our bot to use. If you need a little refresher on the process of creating SSH keys and adding them to your private repo, check out the previous post on this topic, Creating a Private GitHub Repo. You can store the key wherever you like; just keep it handy.

With the SSH Key created, you need to add nodegit to the bot project. Make sure you are in the project root of the bolt bot project you created above.

$ npm install nodegit

Next, we’ll add some variables and constants to the app. Edit your app.js and add in the following:

 const BLOG_HOSTNAME = 'blog.shellnetsecurity.com';
 const WORDPRESS_CONTAINER_URL = 'wordpress_container.default.svc.cluster.local';
 const cloneURL = "git@github.com:my_account/wordpress_content.git";
 const clonePath = "/tmp/clone"; 
 const sshEncryptedPublicKeyPath = "/opt/slackapp/testKey.pub";
 const sshEncryptedPrivateKeyPath = "/opt/slackapp/testKey";
 const sshKeyPassword = "abcd1234";
 const { exec } = require("child_process"); 
 var NodeGit = require("nodegit");
 var Repository = NodeGit.Repository;
 var Clone = NodeGit.Clone;
 const fs = require('fs'); 

You’ll see how we leverage these variables later but the below table explains how we’ll use them.

  • BLOG_HOSTNAME: the URL of your blog
  • WORDPRESS_CONTAINER_URL: the Kubernetes DNS hostname of your WordPress container
  • cloneURL: the SSH link to your static content repo
  • clonePath: the path used for staging our replicated repo
  • sshEncryptedPublicKeyPath: path to the public SSH key you created earlier
  • sshEncryptedPrivateKeyPath: path to the private SSH key you created earlier
  • sshKeyPassword: password for the SSH key you created earlier

The other remaining items should be self explanatory so we’ll move on by adding some functions of use. We’ll start by adding a new feature to our bot with the botMessages function:

 // Listener middleware - filters out messages that have subtype 'bot_message'
 async function botMessages({ message, next }) {
   if (message.subtype && message.subtype === 'bot_message' && validBotMessageByText(message.text) === true) {
     removeRepo(clonePath, cloneURL, BLOG_HOSTNAME, WORDPRESS_CONTAINER_URL);
     await next();
   }
 } 
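The Bolt tutorial covers the wiring, but for completeness: listener middleware like this only runs once it's attached to a listener. A hedged sketch of that registration, following Bolt's documented app.message pattern (the handler body here is just illustrative):

 app.message(botMessages, async ({ message, logger }) => {
   // only reached when botMessages called next(), i.e. a valid bot_message
   logger.info(`Handling WordPress notification: ${message.text}`);
 });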

Since the notifications will be coming in from a bot, we check that the message contains a subtype of “bot_message”. The validBotMessageByText function is used to confirm that the message is supported by our flow:

 function validBotMessageByText(text) {
   let re = RegExp('The post .* was published'); // Test for a post scheduled message
   if(re.test(text)) {
     return true;
   }
   return false;
 } 

This is a simple function that contains a regex looking for the Post Published message. If the message is valid, then botMessages executes removeRepo:

 function removeRepo(clonePath, cloneURL, blogHostname, wpUrl) {
   // delete directory recursively
   try {
       fs.rmdirSync(clonePath, { recursive: true });
 
       console.log(`${clonePath} is deleted!`);
       clonePrivateSite(cloneURL, clonePath, blogHostname, wpUrl); // plain call; `this` has no meaning in module scope
   } catch (err) {
       console.log(`Error while deleting ${clonePath}.`);
   }
 } 

The removeRepo function attempts to delete the clonePath directory if it exists and then runs clonePrivateSite.

 function clonePrivateSite(cloneURL, clonePath, blogHostname, wpUrl) {
     var opts = {
       fetchOpts: {
         callbacks: {
           certificateCheck: () => 0,
           credentials: function(cloneURL, userName) {
             return NodeGit.Cred.sshKeyNew(
               userName,
               sshEncryptedPublicKeyPath,
               sshEncryptedPrivateKeyPath,
               sshKeyPassword
             );
           }
         }
       }
     };
 
      Clone(cloneURL, clonePath, opts).catch(function(err) {console.log(err);}).then(function(){ mirrorSite(blogHostname, 'https://' + wpUrl, cloneURL, clonePath);} );
 } 

clonePrivateSite creates an options object to configure nodegit with SSH credentials created earlier. The Clone command clones the latest version of the repo to the clonePath. Next, mirrorSite is used to pull down a copy of the current dynamic WordPress site running on our backend:

 function mirrorSite(blogHostname, blogURL, cloneURL, clonePath) {
   var cmd = `wget -q --mirror --no-if-modified-since --follow-tags=a,img --no-parent --span-hosts --domains=${blogHostname} --directory-prefix=${clonePath}/html/ --header="Host: ${blogHostname}" --no-check-certificate ${blogURL}`;
   console.log('Executed Command : ' + cmd);
   var child = exec(
     cmd,
     function (error, stdout, stderr) {
        fixUrls(cloneURL, clonePath);
       if (error !== null) {
         console.log('exec error: ' + error);
       }
     }
   );
 } 

This is the lazy man’s method but it does work. mirrorSite is simply calling the wget command I had in the previous article and saving the output to the html directory of our clonePath. I still haven’t taken the time to fix my site so fixUrls is doing that for me:

 
 function fixUrls(cloneURL, clonePath) {
   var cmd = `find ${clonePath} -type f -print0 | xargs -0 sed -i'' -e 's/http:\\/\\/blog/https:\\/\\/blog/g'`;
   console.log('Executed Command : ' + cmd);
   var child = exec(
     cmd,
     function (error, stdout, stderr) {
        commitPrivateRepo(cloneURL, clonePath, 'Some Commit Message Here');
       if (error !== null) {
         console.log('exec error: ' + error);
       }
     }
   );
 } 

Because of a rip in the space time continuum, all of my wget mirrors contain URLs like http://blog.shellnetsecurity.com, so fixUrls runs find to swap all of those out to https://blog.shellnetsecurity.com. Once it completes, it calls commitPrivateRepo to commit all of our changes.

 function commitPrivateRepo(cloneURL, clonePath, commitMsg) {
      var repoFolder = clonePath + '/.git';
     
     var repo, index, oid, remote;
     
     NodeGit.Repository.open(repoFolder)
       .then(function(repoResult) {
         repo = repoResult;
         return repoResult.refreshIndex();
       })
       .then(function(indexResult) {
         index  = indexResult;
         index.read(1);
         var paths = [];
         return NodeGit.Status.foreach(repo, function(path) {
           paths.push(path);
         }).then(function() {
           return Promise.resolve(paths);
         });
       })
       .then(function(paths) {
         return index.addAll(paths);
       })
       .then(function() { 
         index.write();
         return index.writeTree();
       })
       .then(function(oidResult) {
         oid = oidResult;
     
         return NodeGit.Reference.nameToId(repo, 'HEAD');
       })
       .then(function(head) {
         return repo.getCommit(head);
       })
       .then(function(parent) {
          var author = NodeGit.Signature.now('Slack App', 'author@email.com');
          var committer = NodeGit.Signature.now('Slack App', 'commiter@email.com');
     
         return repo.createCommit('HEAD', author, committer, commitMsg, oid, [parent]);
       })
       .then(function(commitId) {
         return console.log('New Commit: ', commitId);
       })
     
       /// PUSH
       .then(function() {
         return NodeGit.Remote.createAnonymous(repo, cloneURL)
         .then(function(remoteResult) {
           remote = remoteResult;
 
           // Create the push object for this remote
           return remote.push(
             ["refs/heads/main:refs/heads/main"],
             {
               callbacks: {
                 credentials: function(url, userName) {
                   return NodeGit.Cred.sshKeyNew(
                     userName,
                     sshEncryptedPublicKeyPath,
                     sshEncryptedPrivateKeyPath,
                     sshKeyPassword
                   );
                 }
               }
             }
           );
         });
       })
       .then(function() {
         console.log('remote Pushed!')
       })
       .catch(function(reason) {
         console.log(reason);
       })
 } 

This function goes through the clonePath directory and adds all new and changed files to the commit. After committing all changes to the local repo, it pushes those changes to the remote repo again using the SSH credentials created previously.

What’s Next?

After making all of the above changes, restart the bolt bot and do some testing. If you publish any posts, you should receive a notification to Slack and eventually an update to your repo. Also, you should be able to extend this bot to handle other types of WordPress notifications coming into Slack.

Samsung Phone Dropping WiFi

Image by IO-Images from Pixabay

Disclosure: I have included some affiliate / referral links in this post. There’s no cost to you for accessing these links but I do indeed receive some incentive for it if you buy through them.

I was getting so frustrated with my new phone. I got on the preorder list and was all excited to get my brand new Samsung Note20 Ultra. After it was delivered, I did the standard switch to the new phone. Thank you AT&T and Samsung for making the upgrade from my Note8 to the Note20 so easy!

5G is Great, WiFi is Useless

The new 5G was great and working just fine, but the problem was that I was using the mobile network way more than my WiFi connection. Anytime I would pick up my phone, the WiFi would be dead. I would either need to wait several minutes for the connection to return or toggle my WiFi off and on. This great new awesome phone was useless in the house! More importantly, I have a few Google Home devices, so I was unable to cast to them.

I searched all over the Internet and felt like I was the only one with this problem; nobody appeared to be having the same issue. I found countless articles on how to "repair" your WiFi. I found just about every article that equated to the "did you reboot it?" question I used to ask customers when they were having problems connecting to their MindSpring accounts (I had to add this for nostalgia purposes from my days working technical support. Also, this company rocks and its memory should never die).

The details around the problem are as follows:

  • Use phone on WiFi
  • Phone works for a period of time but then WiFi stops responding
  • The WiFi icon on the phone only makes “up arrow” requests meaning that it was sending requests but not getting responses
  • My DNS server would stop seeing DNS requests from my phone
  • Nothing WiFi related would work on my phone
  • After some random period of time passed, my phone would then go wild catching up making a ton of queued up DNS requests and everything would start working.

Everything Got Better or Did it?

For reasons I’ll discuss later (*cough* house full of kids with gaming consoles “LAG” *cough*), I decided to buy a new WiFi router. In the house, I already have the following WiFi devices:

  • Verizon FiOS WiFi – This is so my wife and I can watch TV on our iPads (this is mesh capable and I had issues here as well)
  • Xfinity WiFi – This is my connection for my work gear (Never tried using this one)
  • Google WiFi Mesh – The kids and their devices + guests are permitted to use this
  • Google Nest Mesh – This is where all of the home automation devices live

Why would I even consider adding another WiFi router to this house?!?! My original plan was to buy the baddest device you could find at the time and then reduce my networks a little. I bought an ASUS ROG Rapture GT-AX11000 AX11000 Tri-Band 10 Gigabit WiFi 6 Gaming Router to be the replacement router. I added this router to my WiFi arsenal and connected my phone to it to configure everything and test connectivity. My phone worked great! I didn't drop WiFi ALL day!

Problem solved, move on, right? This clearly fixed my WiFi problem and seemed to be an obvious solution: a new router. But why didn't my Google Nest Mesh fix the problem? That hardware was rather new, and I opt in to early access code, so I should be bleeding edge and without worry.

The Real Solution, I Think

I stayed on my ASUS router and never had an issue. I still wanted to be able to talk to my Googles, so every now and then I jumped back to my mesh, only to find the same problems. I didn't give up trying to find the real solution, and I think I have FINALLY found the problem and solution in this post on the community Samsung forums. This appears to be a problem with Google Location Accuracy; I believe it is the Wi-Fi scanning feature, to be exact. I didn't want to completely disable Google Location Accuracy, so I started by disabling just the Improve accuracy setting, Wi-Fi scanning. After disabling this feature, I haven't seen an issue with my Google Nest Mesh.

Building a Static WordPress

Photo by Vidsplay from StockSnap

Now that I have Nginx in Front of WordPress, I thought the next logical step was to try and hide my WordPress even more. What exactly would this mean? In my mind, I figured that I would restrict access to all of the backend functions of my WordPress site to just my IP addresses. From there, I would simply serve static versions of the content.

Part of the reason that I can do this is that my site is mostly static. I don't allow comments or other dynamic plugins. The site is only used to publish my blog posts and that's about it. I also set up WordPress to use the permalink format of /%year%/%monthnum%/%post_id%/

First Step, Mirror the Site to a Private Repo

Just as the heading states, I needed to first get all of my content available outside of WordPress. Luckily, I realized that I had a few previous blog posts that could help me accomplish the initial steps. I won't completely bore you with the details contained in those posts. I'm going to assume that you can get a basic idea of how to set up the private repo using Creating a Private GitHub Repo. You can set up your repo however you like, but for future planning purposes, I decided to create an html directory inside of it to house the website files. My initial repo looked like the following:

 % ls -al
 total 8
 drwxr-xr-x   5 salgatt  staff   160 Dec 31 08:46 .
 drwxr-xr-x  49 salgatt  staff  1568 Jan  7 12:32 ..
 drwxr-xr-x  15 salgatt  staff   480 Jan  7 09:05 .git
 -rw-r--r--   1 salgatt  staff    18 Dec 30 18:57 README.md
 drwxr-xr-x   4 salgatt  staff   128 Jan  5 21:31 html 

With the private repo created, I needed to get all of my content into the repo for later use by Nginx. I just did a wget to pull only the page content down. The reason I did this is because there were a number of js and css files that are required for the admin pages and possibly for other “things” that I might not use right away:

 % cd html
 % wget --mirror --follow-tags=a,img --no-parent https://blog.shellnetsecurity.com
 --2021-01-07 16:37:24--  https://blog.shellnetsecurity.com/
 Resolving blog.shellnetsecurity.com (blog.shellnetsecurity.com)... 157.230.75.245
 Connecting to blog.shellnetsecurity.com (blog.shellnetsecurity.com)|157.230.75.245|:443... connected.
 HTTP request sent, awaiting response... 200 OK
 Length: 17266 (17K) [text/html]
 Saving to: ‘blog.shellnetsecurity.com/index.html’
 

 blog.shellnetsecurity.com/index.html       100%[=======================================================================================>]  16.86K  --.-KB/s    in 0.09s   
...
 --2021-01-07 16:37:41--  https://blog.shellnetsecurity.com/author/salgatt/page/2/
 Connecting to blog.shellnetsecurity.com (blog.shellnetsecurity.com)|157.230.75.245|:443... connected.
 HTTP request sent, awaiting response... 200 OK
 Length: 41746 (41K) [text/html]
 Saving to: ‘blog.shellnetsecurity.com/author/salgatt/page/2/index.html’
 

 blog.shellnetsecurity.com/author/salgatt/p 100%[=======================================================================================>]  40.77K  --.-KB/s    in 0.1s    
 

 2021-01-07 16:37:44 (398 KB/s) - ‘blog.shellnetsecurity.com/author/salgatt/page/2/index.html’ saved [41746/41746]
 

 FINISHED --2021-01-07 16:37:44--
 Total wall clock time: 19s
 Downloaded: 56 files, 2.7M in 3.4s (821 KB/s) 

My wget command uses --mirror to, ummm, mirror the site. I pass --follow-tags=a,img so that I only nab the html plus images and follow only href tags. Finally, I want to stay within my site and not download any other sites' content by issuing --no-parent. With that, I now have a blog.shellnetsecurity.com directory in my repo's html directory.

 % ls -al
 total 0
 drwxr-xr-x   4 salgatt  staff  128 Jan  5 21:31 .
 drwxr-xr-x   5 salgatt  staff  160 Dec 31 08:46 ..
 drwxr-xr-x  18 salgatt  staff  576 Jan  7 08:38 blog.shellnetsecurity.com 

Now, I need to get all of my static content into the repo as well. In order to do that, I just did a simple copy of the static files from my container running WordPress using kubectl cp:

 % kubectl cp -n wordpress wordpress-85589d5658-48ncz:/opt/wordpress/wp-content ./blog.shellnetsecurity.com/wp-content
 tar: Removing leading `/' from member names
 % kubectl cp -n wordpress wordpress-85589d5658-48ncz:/opt/wordpress/wp-includes ./blog.shellnetsecurity.com/wp-includes
 tar: Removing leading `/' from member names 

These copy commands grab ALL files in these two directories. The idea is that I’m grabbing the js and css for any plugins running in my WordPress and any theme related files. Since these directories contain PHP files and other files I don’t need in my static repo, I remove them with a nice little find command:

 % find blog.shellnetsecurity.com/wp-includes -type f -not -name '*.js' -not -name '*.css' -not -name '*.jpg' -not -name '*.png' -delete
 % find blog.shellnetsecurity.com/wp-content -type f -not -name '*.js' -not -name '*.css' -not -name '*.jpg' -not -name '*.png' -delete 

At this point, I now have a repo that should have all of the content ready to go. I commit all of the changes and push the changes to main.

Serve the Static Repo

Like I said before, I’m not going to clutter this post with the details that can be found in Building a Kubernetes Container That Synchs with Private Git Repo. Assuming you have this all ready to go, I’m going to cut straight to the configuration portion. I’m assuming the nginx container is mounting the private repo at /dir/wordpress_static. I am also going to build upon the nginx configmap that was created in Adding Nginx in Front of WordPress. I’m first going to change the root directory to be the static WordPress blog:

         root /dir/wordpress_static/html/blog.shellnetsecurity.com; 

I also need to change some of my original reverse proxy mappings to serve most content from static files but still let a few requests go to my WordPress:

         location /status {
                 return 200 "healthy\n";
         }
 
         location / {
                 try_files $uri $uri/ /index.html;
         }
 
         location /sitemap {
                 proxy_pass https://wordpress;
                 proxy_ssl_verify off;
                 proxy_set_header Host blog.shellnetsecurity.com;
                 proxy_set_header X-Forwarded-For $remote_addr;
         }
 
         location /wp-sitemap {
                 proxy_pass https://wordpress;
                 proxy_ssl_verify off;
                 proxy_set_header Host blog.shellnetsecurity.com;
                 proxy_set_header X-Forwarded-For $remote_addr;
         }
 
         location /wp-json {
                 allow 1.1.1.1;
                 allow 2.2.2.2;
                 deny all;
                 proxy_pass https://wordpress;
                 proxy_ssl_verify off;
                 proxy_set_header Host blog.shellnetsecurity.com;
                 proxy_set_header X-Forwarded-For $remote_addr;
         }
 
         location /wp-login {
                 allow 1.1.1.1;
                 allow 2.2.2.2;
                 deny all;
                 proxy_pass https://wordpress; 
                 proxy_ssl_verify off;
                 proxy_set_header Host blog.shellnetsecurity.com;
                 proxy_set_header X-Forwarded-For $remote_addr;
         }
 
         location /admin {
                 allow 1.1.1.1;
                 allow 2.2.2.2;
                 deny all;
                 proxy_pass https://wordpress;
                 proxy_ssl_verify off;
                 proxy_set_header Host blog.shellnetsecurity.com;
                 proxy_set_header X-Forwarded-For $remote_addr;
         }
 
         location /wp-admin {
                 allow 1.1.1.1;
                 allow 2.2.2.2;
                 deny all;
                 proxy_pass https://wordpress;
                 proxy_ssl_verify off;
                 proxy_set_header Host blog.shellnetsecurity.com;
                 proxy_set_header X-Forwarded-For $remote_addr;
         }

Through some trial and error, I found that I needed to have all of the following paths allowed for my admin functionalities:

  • /wp-admin
  • /admin
  • /wp-login
  • /wp-json

Since these are required for admin functions, I have made sure to run my IP restrictions on them and only allow my addresses to access them. For now, I am managing my sitemaps from within WordPress so I also allowed requests from any clients to go directly to my WordPress server still (something I’ll correct in a future post when I talk about automation). Aside from these exceptions, I’m using try_files to find the other content. This means that requests for any other content will be sent into the root directive, aka /dir/wordpress_static/html/blog.shellnetsecurity.com, aka the private repo! Notice the trailing /index.html on the directive? That just means that I’ll serve /index.html whenever the page isn’t found.

With that, I am now serving content from my mirrored content that is running from the private repo. I can still manage my WordPress site like I normally do from the backend and generate content and make changes and life is mostly good.

I am an idiot

Yes, you don’t need to tell me this! I know there are some obvious flaws in what I’ve setup like:

  • What happens when I post a new article?!
  • What do I do when WordPress is upgraded?
  • What happens when a plugin is upgraded?
  • Do you know that doing a wget for just pages won’t download pretty little images?
  • Did you know that serving /index.html for css/jpg/png/js files is ugly?
  • This manual process is terrible!

I know! I have already begun to tackle these and I'll have more details when I write my Automating Static WordPress Updates post (currently in draft). As a sneak peek at all of this, there's a really cool WordPress plugin that will send various notifications to Slack. Oh the fun that we will have when talking about using Slack as a message bus and writing an app and and… ok, I'll contain my excitement for now!

[Survey] What are Important Features for a Blog?

overexposed question mark
Photo by Emily Morter from StockSnap

When building a blog, it can be overwhelming to know which features are important to enable. Your blog software of choice can offer all kinds of features. Some of those features could overwhelm you while others could overwhelm your visitors. Given all of the options available, I thought that it would be interesting to gather the personal preferences of those that read blogs.

Below is a survey hosted by SurveyLegend that is aimed at gathering some of these details from you, the reader. Please participate in this short survey by providing your personal preferences. I know some of these features are important for SEO rankings, but I'm more concerned about what the reader thinks, not the computer.

Kubernetes Upgrades Break My DigitalOcean LoadBalancer

Photo by Austin Neill from StockSnap

Disclosure: I have included some affiliate / referral links in this post. There’s no cost to you for accessing these links but I do indeed receive some incentive for it if you buy through them.

I've talked in previous posts about my overall enjoyment, thus far, of running in DigitalOcean. While I had tinkered with a number of other cloud providers, I settled on them for many things. I do still run in some other providers like OVHCloud (maybe more on my project there another day). Despite my love for DigitalOcean, I do have one complaint regarding their Kubernetes and their LoadBalancer.

The Problem: I’m Cheap

I guess thrifty sounds so much better, but I'm cheap. It's a fact. I have in fact created my own problem with DigitalOcean due to my cheapness. They do have a number of excellent integration points between their Kubernetes and other components such as storage and load balancers. I can issue Kubernetes commands to create a new LoadBalancer or PVC and boom, life is good. My problem is that LoadBalancers cost money. To date, I have only been able to figure out a 1:1 mapping between the LoadBalancer and Kubernetes. This 1:1 means that I can only manage a single exposed application per LoadBalancer.

If I only ever intend to expose a single application to the world, this is great! This is not me. I run a number of different applications that I want to expose. That means I need to pay for a LoadBalancer for each application or do I? Here come the Forwarding Rules! Each LoadBalancer can be configured with a number of forwarding rules like so:

With these rules in place, I’m able to expose multiple ports/applications on the same load balancer. This is wonderful except upgrades to the Kubernetes clusters like to blow away my custom settings such as:

  • Forwarding Rules
  • SSL Redirects
  • Proxy Protocol
  • Backend Keepalive

For the longest time, I had to come back in and reconfigure everything every time I did a Kubernetes cluster upgrade. Worse yet, I didn't know things got blown away whenever the cluster upgraded automatically. I had set up port/application monitors to alert me when things went down so I could manually reconfigure them.

The Solution : DigitalOcean API

While the manual fix has always been a waste of time and has sometimes prevented me from upgrading Kubernetes (bad security d00d), I still did the upgrades and manually fixed it. I never really learned a "new thing" to try and get this fixed in a less manual manner. Today was the day I changed all of that. I'm sure there's some other way that I hadn't thought of yet but we're going with baby steps. Instead of taking manual screenshots of the configuration page for the LoadBalancer and then trying to manually go back in and change the settings to what I thought they were, I am now using the API. Some good general documentation can be found in the DigitalOcean API reference.

Setting Up API Access

The first step in getting all of this working is getting an API token. There’s no sense reinventing the wheel when DigitalOcean already has something written up very well such as How to Create a Personal Access Token. This walks you through creating the token to be used with the API. It is very important to make sure you create a token that has write access.

Get Your Existing LoadBalancer Configuration

With token in hand, it is time to get a copy of your existing configuration with a curl command. First, you need to know the id of your load balancer so we just query for all load balancers:

% curl -X GET -H "Content-Type: application/json" -H "Authorization: Bearer your_api_token_here" https://api.digitalocean.com/v2/load_balancers|jq .   
 {
   "load_balancers": [
     {
       "id": "ffff-ffff-ffff-ffff-b75c",
       "name": "my-lb-01",
       "size": "lb-small",
       "algorithm": "round_robin",
       "status": "active",
       "created_at": "2019-10-25T19:56:00Z",
       "forwarding_rules": [
         {
           "entry_protocol": "tcp",
           "entry_port": 80,
           "target_protocol": "tcp",
           "target_port": 31640,
           "certificate_id": "",
           "tls_passthrough": false
         },
         {
           "entry_protocol": "tcp",
           "entry_port": 4514,
           "target_protocol": "tcp",
           "target_port": 31643,
           "certificate_id": "",
           "tls_passthrough": false
         }
       ],
       "region": {
         "name": "San Francisco 2",
         "slug": "sfo2"
       },
       "tag": "",
       "droplet_ids": [
         111,
         222,
         333
       ],
       "redirect_http_to_https": false,
       "enable_proxy_protocol": false,
       "enable_backend_keepalive": false,
     },
     {
       "id": "ffff-ffff-ffff-ffff-72a4",
       "name": "my-lb-02",
       "size": "lb-small",
       "algorithm": "round_robin",
       "status": "active",
       "created_at": "2020-12-02T07:54:13Z",
       "forwarding_rules": [
         {
           "entry_protocol": "https",
           "entry_port": 443,
           "target_protocol": "http",
           "target_port": 31645,
           "certificate_id": "aaaa-aaaa-aaaa-aaaa-bcf8",
           "tls_passthrough": false
         },
         {
           "entry_protocol": "tcp",
           "entry_port": 80,
           "target_protocol": "tcp",
           "target_port": 31645,
           "certificate_id": "",
           "tls_passthrough": false
         }
       ],
       "region": {
         "name": "San Francisco 2",
         "slug": "sfo2"
       },
       "tag": "",
       "droplet_ids": [
         111,
         222,
         333
       ],
       "redirect_http_to_https": true,
       "enable_proxy_protocol": true,
       "enable_backend_keepalive": true,
     }
   ],
   "links": {},
   "meta": {
     "total": 2
   }
 }

There are two load balancers here in this example, my-lb-01 and my-lb-02. While my-lb-01 was my original load balancer that gave me the most trouble, I’m going to focus on my-lb-02 since it has more customizations not just to the forwarding rules.

We need to first identify the configuration that we'd like to save. Then, we'll save this configuration into its own JSON file; let's call it my-lb-02.json. Notice in the above JSON that the configurations are housed within a "load_balancers" array? In order to create our my-lb-02.json file, we simply pull the single JSON element from the array like this:

     {
       "id": "ffff-ffff-ffff-ffff-72a4",
       "name": "my-lb-02",
       "size": "lb-small",
       "algorithm": "round_robin",
       "status": "active",
       "created_at": "2020-12-02T07:54:13Z",
       "forwarding_rules": [
         {
           "entry_protocol": "https",
           "entry_port": 443,
           "target_protocol": "http",
           "target_port": 31645,
           "certificate_id": "aaaa-aaaa-aaaa-aaaa-bcf8",
           "tls_passthrough": false
         },
         {
           "entry_protocol": "tcp",
           "entry_port": 80,
           "target_protocol": "tcp",
           "target_port": 31645,
           "certificate_id": "",
           "tls_passthrough": false
         }
       ],
       "region": {
         "name": "San Francisco 2",
         "slug": "sfo2"
       },
       "tag": "",
       "droplet_ids": [
         111,
         222,
         333
       ],
       "redirect_http_to_https": true,
       "enable_proxy_protocol": true,
       "enable_backend_keepalive": true,
     }

We need to remove a few useless items from that JSON so remove the following:

  • status
  • name
  • size
  • created_at
  • region (do remember the “slug” entry as we’ll need this to recreate the region)

As noted above, we also need to remove the existing region entry and instead replace it with the value of “slug” aka “sfo2” in this example. With those changes made, here’s our new JSON:

     {
       "id": "ffff-ffff-ffff-ffff-72a4",
       "algorithm": "round_robin",
       "forwarding_rules": [
         {
           "entry_protocol": "https",
           "entry_port": 443,
           "target_protocol": "http",
           "target_port": 31645,
           "certificate_id": "aaaa-aaaa-aaaa-aaaa-bcf8",
           "tls_passthrough": false
         },
         {
           "entry_protocol": "tcp",
           "entry_port": 80,
           "target_protocol": "tcp",
           "target_port": 31645,
           "certificate_id": "",
           "tls_passthrough": false
         }
       ],
       "region": "sfo2",
       "tag": "",
       "droplet_ids": [
         111,
         222,
         333
       ],
       "redirect_http_to_https": true,
       "enable_proxy_protocol": true,
       "enable_backend_keepalive": true,
     }
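Hand-editing works fine, but if you'd rather script the cleanup, here's a hedged Node sketch that produces the same my-lb-02.json. The load_balancers.json input name is my own invention for the saved output of the earlier GET:

 // make-lb-json.js - pull one load balancer out of the API response and
 // strip the fields we don't want to PUT back
 const fs = require('fs');
 
 const all = JSON.parse(fs.readFileSync('load_balancers.json', 'utf8'));
 const lb = all.load_balancers.find((l) => l.name === 'my-lb-02');
 
 lb.region = lb.region.slug; // keep just the slug, e.g. "sfo2"
 ['status', 'name', 'size', 'created_at'].forEach((k) => delete lb[k]);
 
 fs.writeFileSync('my-lb-02.json', JSON.stringify(lb, null, 2));
 console.log('Wrote my-lb-02.json');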

How Do I Unbreak Things in the Future?

I'm glad you asked! Now that you have your my-lb-02 JSON file ready to go, you can simply wait for the next upgrade of your Kubernetes cluster and then rebuild everything. Below, you can see my-lb-02 broken in the DigitalOcean control panel:

There's one little catch to fixing everything. You'll need to first get the IDs of the new cluster nodes in order to add them to the load balancer. Whenever the cluster is upgraded, DigitalOcean deletes the old versioned node and adds in a new versioned one. You can get the new IDs by doing a GET against the load balancer's configuration:

 % curl -X GET -H "Content-Type: application/json" -H "Authorization: Bearer your_api_token_here" https://api.digitalocean.com/v2/load_balancers/ffff-ffff-ffff-ffff-72a4|jq .load_balancer.droplet_ids
 [
   123,
   456,
   789
 ] 

In order to make my life easier, I piped my results through jq and told it to only bring back the JSON path I cared about, load_balancer.droplet_ids. Now we see that the droplets have changed from our original list of 111, 222, 333 to 123, 456, 789. We need to make this change to our JSON:

     {
       "id": "ffff-ffff-ffff-ffff-72a4",
       "algorithm": "round_robin",
       "forwarding_rules": [
         {
           "entry_protocol": "https",
           "entry_port": 443,
           "target_protocol": "http",
           "target_port": 31645,
           "certificate_id": "aaaa-aaaa-aaaa-aaaa-bcf8",
           "tls_passthrough": false
         },
         {
           "entry_protocol": "tcp",
           "entry_port": 80,
           "target_protocol": "tcp",
           "target_port": 31645,
           "certificate_id": "",
           "tls_passthrough": false
         }
       ],
       "region": "sfo2",
       "tag": "",
       "droplet_ids": [
         123,
         456,
         789
       ],
       "redirect_http_to_https": true,
       "enable_proxy_protocol": true,
       "enable_backend_keepalive": true,
     }

With the JSON updated, we now issue a PUT command to the load balancer API for the specific load balancer like so:

 % curl -X PUT -H "Content-Type: application/json" -H "Authorization: Bearer your_api_token_here" https://api.digitalocean.com/v2/load_balancers/ffff-ffff-ffff-ffff-72a4 -d @my-lb-02.json

Now we can go look at the control panel again and confirm everything is back to normal!
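If you'd rather not hand-edit the droplet IDs next time, here's a hedged Node sketch automating those last two steps: it reads my-lb-02.json, pulls the post-upgrade droplet IDs from the API, and PUTs the configuration back. It assumes Node 18+ for the built-in fetch, and DO_API_TOKEN is just my choice of environment variable name:

 // fix-lb.js - refresh droplet IDs in my-lb-02.json and PUT it back
 const fs = require('fs');
 
 const token = process.env.DO_API_TOKEN; // hypothetical variable name
 const lb = JSON.parse(fs.readFileSync('my-lb-02.json', 'utf8'));
 const url = `https://api.digitalocean.com/v2/load_balancers/${lb.id}`;
 const headers = {
   'Content-Type': 'application/json',
   Authorization: `Bearer ${token}`
 };
 
 async function main() {
   // grab the post-upgrade droplet IDs straight from the broken load balancer
   const current = await (await fetch(url, { headers })).json();
   lb.droplet_ids = current.load_balancer.droplet_ids;
 
   // push the saved configuration back
   const res = await fetch(url, { method: 'PUT', headers, body: JSON.stringify(lb) });
   console.log('PUT status:', res.status);
 
   // keep the local file in sync for next time
   fs.writeFileSync('my-lb-02.json', JSON.stringify(lb, null, 2));
 }
 
 main().catch(console.error);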

Everything Works Great!

After an upgrade runs, I can simply come back through with a few minor steps and put everything back the way it should be. Yes, there are still some manual aspects to this, and automating all of it shouldn't be too terrible, but I'll save that for another time when I decide that this manual process is just too much. Although, it took me nearly a year to hate the original manual process…

Adding Nginx in Front of WordPress

Photo by Lenharth Systems from StockSnap

The future is here! In my previous article, Testing Out the Digital Ocean Container Registry, I talked about using the Digital Ocean Container Registry to build a custom nginx. In that article, I talked about the future, aka a future, aka this post. When I moved to WordPress, I did so using Digital Ocean’s 1-Click install to drop WordPress into my Kubernetes cluster. This was the easy way to go for sure. I already run Kubernetes so deploying it to an existing cluster made life easier on me. Who doesn’t love it when life is made easier?

There are a few drawbacks to the 1-Click install. I'm planning to tinker with something really cool down the road to fix one of those problems (I know, the future again). Luckily, I'm going to address my first initial concern in this post. What is that concern, you ask? Protecting my WordPress admin, of course! Sure, there are a number of WordPress vulnerabilities roaming around and talks of zero days and the sort. I make life easier on any attacker if I just leave my WordPress admin open to anyone. In this post, we look at taking my custom nginx and deploying it in front of my WordPress site to enforce IP access control on the admin pages.

Setting Up the Container Registry for Kubernetes

In my Testing Out the Digital Ocean Container Registry post, I explained how to get a custom nginx into the Container Registry. In order to use that container and registry with my cluster, I had to enable DigitalOcean Kubernetes integration in the settings of the registry. You can do the same by doing the following:

  1. Login to your DigitalOcean account
  2. Go to the Container Registry link
  3. Click on the Settings tab of the Container Registry
  4. Click the Edit button next to DigitalOcean Kubernetes Integration
  5. Place a check mark next to the Kubernetes clusters that you want to have access to this registry (Note, if you have multiple namespaces, this action will add access for all namespaces).

Once these steps are complete, you can confirm access by looking for a new secret in your cluster:

# kubectl get secrets
NAME                   TYPE                                  DATA   AGE
default-token          kubernetes.io/service-account-token   3      423d
json-key               kubernetes.io/dockerconfigjson        1      396d
k8-registry            kubernetes.io/dockerconfigjson        1      18d
key-secret             Opaque                                2      419d

Notice the k8-registry secret that I now have in my secrets list? You can also see that this exists in my wordpress namespace as well:

# kubectl get secrets -n wordpress
NAME                  TYPE                                  DATA   AGE
default-token         kubernetes.io/service-account-token   3      18d
k8-registry           kubernetes.io/dockerconfigjson        1      18d
wp                    Opaque                                1      18d
wp-db                 Opaque                                2      18d

Adding Nginx to the Cluster

This should be super easy! I start by first creating a ConfigMap that stores my Nginx configuration:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
  namespace: wordpress
  labels:
    app: nginx
    release: wordpress
data:
  siteConfig: |
    server {
        listen 8080 default_server;
        listen [::]:8080 default_server;

        root /var/www/html;

        index index.html index.htm index.nginx-debian.html;

        server_name _;

        location /status {
                return 200 "healthy\n";
        }

        location / {
                proxy_pass http://wordpress;
                proxy_set_header Host blog.shellnetsecurity.com;
                proxy_set_header X-Forwarded-For $remote_addr;
        }

        location /admin {
                allow 1.1.1.1;
                allow 2.2.2.2;
                deny all;
                proxy_pass http://wordpress;
                proxy_set_header Host blog.shellnetsecurity.com;
                proxy_set_header X-Forwarded-For $remote_addr;
        }

        location /wp-admin {
                allow 1.1.1.1;
                allow 2.2.2.2;
                deny all;
                proxy_pass http://wordpress;
                proxy_set_header Host blog.shellnetsecurity.com;
                proxy_set_header X-Forwarded-For $remote_addr;
        }
    }

  serverConfig: |
    user www-data;
    worker_processes auto;
    pid /run/nginx.pid;
    include /etc/nginx/modules-enabled/*.conf;

    events {
        worker_connections 768;
    }

    http {
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 65;
        types_hash_max_size 2048;
        include /usr/local/nginx/conf/mime.types;
        default_type application/octet-stream;
        access_log /dev/stdout;
        error_log /dev/stdout;
        gzip on;
        
        include /etc/nginx/sites-enabled/*;
    }

    daemon off;

I mostly added a set of standard nginx configurations. If you look at the serverConfig closely, you’ll notice that I’ve directed the access_log and error_log to /dev/stdout. This way all of the logs are written to stdout (duh), which also lets me run kubectl logs -f on the created pod and watch the access and error logs live.
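For example, once the deployment shown later in this post is up, you can follow the logs like so:

kubectl logs -f -n wordpress deployment/nginx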

Nginx is going to be acting as a reverse proxy, so I took a relatively standard default sites-available configuration and added a few new location blocks. The /status block simply lets me perform health checks on the running nginx instance. The other blocks are proxy_pass statements that send requests to the “wordpress” pod installed by the 1-Click install. I’m also making sure to send over the Host header as blog.shellnetsecurity.com. If I don’t do this, the 1-Click install builds funky URLs that don’t work. Luckily, it reads the Host header and builds links based on that, so I force the header to be what I want.

Finally, you’ll see my allow statements for 1.1.1.1 and 2.2.2.2 (not really my IPs, but let’s play make believe), each followed by deny all. This makes it so that only my 1.1.1.1 and 2.2.2.2 addresses are allowed to reach /admin and /wp-admin. Everyone else is denied.

Next, I create a Deployment yaml that tells Kubernetes what containers to build and how to use my configMap:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: wordpress
  labels:
    app: nginx
    release: wordpress
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
      release: wordpress
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: nginx
        release: wordpress
    spec:
      volumes:
      - name: siteconfig
        configMap:
          name: nginx-config
          items:
          - key: siteConfig
            path: default
      - name: serverconfig
        configMap:
          name: nginx-config
          items:
          - key: serverConfig
            path: nginx.conf
      imagePullSecrets:
      - name: k8-registry
      containers:
      - name: nginx
        image: registry.digitalocean.com/k8-registry/c-core-nginx:1.1
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: siteconfig
          mountPath: /etc/nginx/sites-enabled/default
          subPath: default
        - name: serverconfig
          mountPath: /usr/local/nginx/conf/nginx.conf
          subPath: nginx.conf

Take note of the imagePullSecrets configuration above. It tells Kubernetes that it will need credentials to access the container registry where my image sits, and it points to the k8-registry credentials that were added by the DigitalOcean Kubernetes Integration change we made earlier. Finally, the image statement provides the full path, version tag included, to the custom image I am hosting in the DigitalOcean registry: registry.digitalocean.com/k8-registry/c-core-nginx:1.1.

Next up, I need to add a NodePort Service that I can point the load balancer at:

apiVersion: v1
kind: Service
metadata:
  namespace: wordpress
  name: nginx
  labels:
    app: nginx
    release: wordpress
spec:
  selector:
    app: nginx
    release: wordpress
  type: NodePort
  ports:
    - port: 8080
      nodePort: 31645

So I do a little kubectl apply -f on the yaml files I just created, and everything comes up. The next step is to set up the load balancer to forward traffic over. Since I have the nodePort configured as 31645, I just need to tell the load balancer to send the traffic I want to that port. I don’t want to mess with the existing setup, so I simply forward http port 8443 over to http port 31645.
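For reference, the apply and a quick sanity check look something like this (the manifest file names are hypothetical; use whatever you saved them as):

kubectl apply -f nginx-configmap.yaml
kubectl apply -f nginx-deployment.yaml
kubectl apply -f nginx-service.yaml

kubectl -n wordpress rollout status deployment/nginx
kubectl -n wordpress get pods -l app=nginx
kubectl -n wordpress get svc nginx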

Everything should be all set, so let’s open a browser and test.
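A quick curl also works for testing. A non-allowed source should get a 403 back (the 8443 port comes from my load balancer forwarding rule above):

curl -s -o /dev/null -w "%{http_code}\n" http://blog.shellnetsecurity.com:8443/wp-admin/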

I am getting blocked like I expected, except that I’m coming from my 2.2.2.2 address, which should be allowed. What could be the issue? Good thing I sent the logs to stdout, so let’s check them for 403s:

kubectl logs -f -n wordpress nginx-9cdf87f68-tss6x | grep 403
...
10.126.32.147 - - [20/Dec/2020:13:37:33 +0000] "GET /admin HTTP/1.1" 403 187 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 11_1_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36"
...

I see the problem! That is not my 2.2.2.2 address! That was my request though. It seems that I’m not getting the real IP of the client but instead internal IPs from the load balancer.

Enter the PROXY Protocol

For access control, I didn’t want to rely on the X-Forwarded-For header, since it comes from the client and could be spoofed to get around my control. On top of that, the DigitalOcean load balancer does not send this header anyway, so it’s a moot point. DigitalOcean does provide the PROXY protocol on its load balancers, just not by default. The short explanation is that this protocol passes along the original client IP like I want, but it requires some configuration. It is also all or nothing: you either enable the PROXY protocol or you don’t, with no mixing and matching.
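For context, PROXY protocol v1 simply prepends a single plain-text line to each TCP connection, ahead of any HTTP bytes, carrying the original source and destination. It looks something like this (addresses illustrative):

PROXY TCP4 2.2.2.2 10.126.32.10 56324 8080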

Enabling the PROXY Protocol on the load balancer was easy. You simply enable it in the Settings of the load balancer.

It is very important NOT to enable this until Nginx is configured for it. Otherwise, the site will go down. I explain my specific configuration below, but you are also welcome to explore the Nginx documentation on the PROXY protocol.

Configuring Nginx

In my Testing Out the Digital Ocean Container Registry article, I built nginx with the PROXY protocol capability by enabling the ngx_http_realip module. It’s almost like I wrote that previous article after getting this all working….? With the module already compiled in, it was pretty easy to simply update the configuration and go. I added the following line to my server block:

        set_real_ip_from 10.126.32.0/24;

Just like that, I was good to go, or so I thought. I was still getting denied, just only some of the time. I checked the logs again to find out why:

kubectl logs -f -n wordpress nginx-9cdf87f68-tss6x
...
10.126.32.147 - - [20/Dec/2020:13:37:33 +0000] "GET /admin HTTP/1.1" 403 187 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 11_1_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36"
2.2.2.2 - - [20/Dec/2020:18:35:49 +0000] "POST /admin HTTP/1.1" 200 98 "https://blog.shellnetsecurity.com/wp-admin/post.php?post=93&action=edit" "Mozilla/5.0 (Macintosh; Intel Mac OS X 11_1_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36"
3.3.3.3 - - [20/Dec/2020:13:37:33 +0000] "GET /admin HTTP/1.1" 403 187 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 11_1_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36"
...

Let’s chat about the set_real_ip_from statement. We need one of these statements for every IP range we trust to hand us the real client IP. In my case, it turned out that 10.126.32.0/24 was not a large enough block for the internal IP addresses, so I needed to widen it to a /16. Also, notice the 3.3.3.3 address? That’s the external IP of one of the nodes in the kubernetes cluster. Armed with that knowledge, I expanded my server block to include multiple set_real_ip_from statements:

        set_real_ip_from 10.126.0.0/16;
        set_real_ip_from 3.3.3.3;
        set_real_ip_from 4.4.4.4;
        set_real_ip_from 5.5.5.5;
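If you are not sure what your nodes’ external IPs are, kubectl will list them (look at the EXTERNAL-IP column):

kubectl get nodes -o wide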

I reloaded everything, tested again, and success every time! I got denied when I wasn’t on my 1.1.1.1 or 2.2.2.2 address, and I could see others getting denied as well. When I’m sitting on 1.1.1.1 or 2.2.2.2, I’m able to get into my WordPress admin!
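For reference, here is a minimal sketch of how the PROXY protocol pieces fit together in the server block, based on my setup above (adjust the trusted addresses for your own cluster):

    server {
        # Accept the PROXY protocol header from the load balancer
        listen 8080 default_server proxy_protocol;
        listen [::]:8080 default_server proxy_protocol;

        # Only trust connection info supplied by these addresses
        set_real_ip_from 10.126.0.0/16;
        set_real_ip_from 3.3.3.3;

        # Take the client address from the PROXY protocol header
        real_ip_header proxy_protocol;
        ...
    }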

Testing Out the Digital Ocean Container Registry

Disclosure: I have included some affiliate / referral links in this post. There’s no cost to you for accessing these links but I do indeed receive some incentive for it if you buy through them.

Photo by Guillaume Bolduc from StockSnap

The house used to be full of random computers and networking gear, but I’ve reduced the home presence over the years. I’ve messed with a number of cloud providers, both inexpensive and expensive, and the majority of my toys reside in Digital Ocean. I’ve really liked what they’ve done over the years. Recently, they announced a Container Registry. If you follow this blog, then you remember my post, Posting a Custom Image to Docker Hub, where I explained how to build an image and push it up to Docker Hub. Some images might not need to be public, for whatever reason, so Digital Ocean’s Container Registry announcement intrigued me. With the move to WordPress, I figured that I should also build a custom nginx image to run in my Kubernetes cluster on Digital Ocean.

Building the Custom Nginx

This part was pretty easy. I simply created a Dockerfile for the build.

FROM ubuntu

ENV DEBIAN_FRONTEND noninteractive

MAINTAINER Scott Algatt

RUN apt-get update \
    && apt-get install -y libjansson-dev libcurl4-openssl-dev libapr1-dev libaprutil1-dev libssl-dev build-essential devscripts libtool m4 automake pkg-config libpcre3-dev zlib1g-dev \
    && apt -y upgrade \
    && apt -y autoremove \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* \
    && curl -o /tmp/nginx.tgz http://nginx.org/download/nginx-1.18.0.tar.gz

WORKDIR /tmp

RUN tar zxf nginx.tgz \
    && cd nginx-1.18.0 \
    && ./configure --with-http_realip_module \
    && make \
    && make install

EXPOSE 80
CMD ["/usr/local/nginx/sbin/nginx"]

As you can see from the Dockerfile, this is a really super simple build. It is also not very custom, aside from my configure command where I’ve added --with-http_realip_module. This little addition is something that I will use in a future post (I know, everything will be in the future), but you can see what it does by visiting the nginx documentation. Anyhow, there you go. Aside from the configure command, I’m just setting up ubuntu to compile code, downloading nginx, and compiling it. Then I expose port 80 and run nginx.

Once you have created the Dockerfile, you can run a build to generate your docker image. You’ll see that my build command tags the build with a name, c-core-nginx, and a specific version, 1.1. I would suggest doing this to help keep versions straight in your repository.

% docker build -t c-core-nginx:1.1 .
Sending build context to Docker daemon  21.72MB
Step 1/9 : FROM ubuntu
 ---> 4e2eef94cd6b
Step 2/9 : ENV DEBIAN_FRONTEND noninteractive
 ---> Using cache
 ---> decc285ce9e4
Step 3/9 : MAINTAINER Scott Algatt
 ---> Using cache
 ---> 197e4c81b654
Step 4/9 : RUN apt-get update     && apt-get install -y libjansson-dev libcurl4-openssl-dev libapr1-dev libaprutil1-dev libssl-dev build-essential devscripts libtool m4 automake pkg-config libpcre3-dev zlib1g-dev    && apt -y upgrade     && apt -y autoremove     && apt-get clean     && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*     && curl -o /tmp/nginx.tgz http://nginx.org/download/nginx-1.18.0.tar.gz
 ---> Using cache
 ---> d5c8a70c412f
Step 5/9 : COPY ./perimeterx-c-core /tmp/perimeterx-c-core
 ---> Using cache
 ---> d325026c19b6
Step 6/9 : WORKDIR /tmp
 ---> Using cache
 ---> 8fb23db246a3
Step 7/9 : RUN tar zxf nginx.tgz     && cd nginx-1.18.0     && ./configure --add-module=/tmp/perimeterx-c-core/modules/nginx --with-threads --with-http_realip_module    && make     && make install
 ---> Using cache
 ---> 25af69d04a9f
Step 8/9 : EXPOSE 80
 ---> Using cache
 ---> e74b4cc64160
Step 9/9 : CMD ["/usr/local/nginx/sbin/nginx"]
 ---> Using cache
 ---> 6f10e3bebefc
Successfully built 6f10e3bebefc
Successfully tagged c-core-nginx:1.1

After the build completes, you can confirm that your image is listed in your local docker repo:

% docker images c-core-nginx
REPOSITORY     TAG       IMAGE ID       CREATED       SIZE
c-core-nginx   1.1       6f10e3bebefc   2 weeks ago   584MB
c-core-nginx   1.0       b3673b4bf518   2 weeks ago   584MB
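You can also double-check that the realip module actually made it into the build; nginx -V prints the configure arguments:

% docker run --rm c-core-nginx:1.1 /usr/local/nginx/sbin/nginx -V 2>&1 | grep -o with-http_realip_module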

Pushing Your Image to the Container Registry

I’m not going to spend a ton of effort in this section because the Digital Ocean Container Registry announcement I linked above explains the setup really well. At a high level, you simply complete the following steps (the key commands are shown after the list):

  1. Install and configure doctl (assuming, like me, you had never done this)
  2. Log in to your Digital Ocean account
  3. Go to the Container Registry link
  4. Create the Container Registry
  5. Login to your registry using the doctl command
  6. Push your desired container(s) to the registry
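The tag-and-push portion, assuming a registry named k8-registry like mine, looks roughly like this:

# Authenticate docker against the DigitalOcean registry
doctl registry login

# Tag the local image with the registry path, then push it
docker tag c-core-nginx:1.1 registry.digitalocean.com/k8-registry/c-core-nginx:1.1
docker push registry.digitalocean.com/k8-registry/c-core-nginx:1.1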

The below image shows a screenshot of my c-core-nginx images that I uploaded to my Container Registry.

Notice something really cool? The size of those images in my local registry is 584MB, but they are roughly 194MB once uploaded: they are compressed in the registry. This is a really nice feature, since the initial free tier of Digital Ocean’s Container Registry is a single repo with 500MB of storage.

In the future, you will see how I actually used this new feature for fun and zero profit.

Making the Lights Dance

My previous post, Making the Little Lights Twinkle, covered my coding of the NodeJS server that takes a relay number and a command (on|off) and puts them to use. Now that I have a server/service up and running, it was time to actually control the lights. I removed my Orchestra of Lights and put my new Raspberry Pi hardware in its place. The only downside is that my setup has 8 outlets while the Orchestra of Lights only had 6. As I built out some of my light sequences, I noticed a delay caused by relays 7 and 8 being triggered with nothing connected to them. Maybe that’s something I can fix for next year: I’ll plan for 8 lighting areas instead of just 6.

In addition to planning for two more strands of lights, I still hadn’t figured out my audio configuration; I’ve only had time to focus on the lights themselves. For the time being, I’ve got a weatherproof bluetooth speaker outside that I sync my iPad to for music. I just tell the iPad to loop through a playlist that my wonderful wife put together. Thank you deer!

The Initial Script

While I still struggled to find time to research making the lights dance to music, I figured it was a really good idea to get something up and running. I thought randomness was key; I was sick of the Orchestra of Lights running through the same/similar light pattern over and over and over and over…..and over. With that, I built the initial script to control my lights:

#!/bin/bash

# Randomly flip relays 1-8 on and off, forever
COMMANDS=("off" "on")
i=0

# Exit cleanly when someone presses CTRL+C
trap exitout SIGINT

exitout() {
  echo "We Are Done Here!"
  exit
}

while :
do
  CMD=$(( ${RANDOM} % 2 ))        # 0 or 1 -> "off" or "on"
  RELAY=$(( ${RANDOM} % 8 + 1 ))  # relay number 1-8
  curl 127.0.0.1:8080/light/$RELAY/${COMMANDS[$CMD]}
  echo $RELAY ${COMMANDS[$CMD]}
  sleep 0.2
done

Obviously, this is a really quick hack of a script, but it works. I created the array “COMMANDS” with two elements, “off” and “on”. I ended up not using “i”, but oh well, it’s still here. The script is set up to exit “cleanly” by calling exitout whenever someone presses CTRL+C. You need this because the script runs in a while loop that never exits on its own.

Let’s talk about what is going on inside that while loop. With each iteration, I set CMD to a random number modulo 2, which gives either 0 or 1, and RELAY to a random number modulo 8, which gives 0–7; the +1 bumps RELAY into the actual relay range of 1–8. Next, I run a curl command with my relay number and element 0 or 1 (aka “off” or “on”) from the COMMANDS array. From there, I just echo out what I sent to the server and then sleep for 0.2 seconds. This loop runs forever until something crashes or the user presses CTRL+C.

Oh My Goodness This is Ok

The above simple script did the trick: my lights randomly turned on and off, and I was quite happy with the results. Sometimes the lights even appeared to be in tune with the music. There was one little flaw, though. Because everything was random, the same relay could be given the same command twice in a row, or the randomness would fixate on a single strand turning on and off.

Ultimately, my major concern was having too many lights out at the same time. So this was a really good initial step and worked well for my immediate wants and needs, but I wanted more. I got bored with random and put a little time into something more sophisticated.
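For what it’s worth, the repeat-command flaw alone could be patched by remembering each relay’s last state and always toggling it. Here’s a minimal sketch against the same endpoint:

#!/bin/bash

COMMANDS=("off" "on")
# Track the last command index per relay (0=off, 1=on); start assuming off
STATE=(0 0 0 0 0 0 0 0)

while :
do
  RELAY=$(( RANDOM % 8 + 1 ))   # pick a relay 1-8
  IDX=$(( RELAY - 1 ))
  CMD=$(( 1 - STATE[IDX] ))     # flip whatever the relay did last time
  STATE[$IDX]=$CMD
  curl 127.0.0.1:8080/light/$RELAY/${COMMANDS[$CMD]}
  echo $RELAY ${COMMANDS[$CMD]}
  sleep 0.2
done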

Building the More Interesting Client

Ok, like I said before, I was getting bored with the original client script and wanted to be able to do just a little bit more. I built a script that runs a few sequences, as you can see below.

#!/bin/bash

COMMANDS=("blinkUp" "blinkDown" "danceUp" "danceDown" "blinkAll" "crazyRun")
i=0

trap exitout SIGINT

exitout() {
  echo "We Are Done Here!"
  exit
}

allLights() {
  a=1
  while [ $a -le 8 ]
  do
    curl 127.0.0.1:8080/light/$a/$1
    echo ""
    a=$(( a + 1 ))
  done 
}

blinkUp() {
  echo "Blinking Up"
  allLights on
  a=1
  while [ $a -le 8 ]
  do
    curl 127.0.0.1:8080/light/$a/off
    echo ""
    sleep 0.5
    curl 127.0.0.1:8080/light/$a/on
    echo ""
    sleep 1
    a=$(( a + 1 ))
  done
}

blinkDown() {
  echo "Blinking Down"
  allLights on
  a=8
  while [ $a -ge 1 ]
  do
    curl 127.0.0.1:8080/light/$a/off
    echo ""
    sleep 0.5
    curl 127.0.0.1:8080/light/$a/on
    echo ""
    sleep 1
    a=$(( a - 1 ))
  done
}

danceUp() {
  echo "Dancing Up"
  allLights off
  a=1
  while [ $a -le 8 ]
  do
    curl 127.0.0.1:8080/light/$a/on
    echo ""
    sleep 0.5
    curl 127.0.0.1:8080/light/$a/off
    echo ""
    a=$(( a + 1 ))
  done
}

danceDown() {
  echo "Dancing Down"
  allLights off
  a=8
  while [ $a -ge 1 ]
  do
    curl 127.0.0.1:8080/light/$a/on
    echo ""
    sleep 0.5
    curl 127.0.0.1:8080/light/$a/off
    echo ""
    a=$(( a - 1 ))
  done
}

blinkAll() {
  echo "Blinking All"
  allLights on
  allLights off
  allLights on
  allLights off
  allLights on
  allLights off
}

crazyRun() {
  echo "Doing Crazy Shit"
  danceUp
  danceDown
  allLights off
  sleep 0.5
  allLights on
  sleep 0.5
  danceDown
  danceUp
  blinkAll
  danceUp
  danceDown
  danceUp
  danceDown
}

while :
do
  CMD=$(( ${RANDOM} % 6 ))
  ${COMMANDS[$CMD]}
  sleep 0.2
done

I’ve added a bunch of different functions to this new script:

allLights: Takes an argument of “on” or “off”. When called, it turns all of the lights on or off by issuing curl commands for every relay.
blinkUp: Turns on all of the lights, then every 0.5 seconds turns a light off and back on, starting at relay 1 and continuing through 8.
blinkDown: Similar to blinkUp, but works backwards from 8 through 1.
danceUp: Turns all of the lights off, then works its way up from 1 through 8, turning each light on and then off.
danceDown: Similar to danceUp, but works backwards from 8 through 1.
blinkAll: Flashes all of the lights on and off three times.
crazyRun: Takes each of the above options and runs them all as a sequence that I’ve picked out.

Finally, the while statement in this script is now used to randomly select one of the predefined sequences, so I’ve got something a little more sophisticated running my light show. I still want to up the game so it sequences on its own to music, and I want to add more lights.

This got us through this Christmas season, so more upgrades are coming next year and I can’t wait!