Blog

Making the Little Lights Twinkle

My previous post, Building the RaspberryPi Christmas Light Box, explained at a high level how I built out the hardware. That step was a little scarier than I think it should have been, but it all worked out just fine in the end. Now that I had everything put together and powered on, I was stuck here:

pi@raspberrypi:~ $ 

What are the next steps? I’ve got this box all wired up and ready to go, but now I’m just sitting at a prompt waiting. As I mentioned before, I broke this down into a few parts to make my life easier and not get overwhelmed. After doing more and more reading, I figured I had two options: I could program everything in Python, or I could program everything in NodeJS.

I’m comfortable in either language, and no matter how hard I tried I kept going in circles. Something told me that I should write it in NodeJS because I felt that I should consider a client-server model. There were TONS of examples of people who had written all kinds of programs and libraries for handling sound and music and lights and GPIOs. I ended up throwing myself a curve ball: I decided that the client-server model was indeed the right fit for future expansion of my newfound hobby of Christmas tree lighting, so I settled on NodeJS.

Well Folks, Here’s the Start of the Code

It all started pretty easy. I first wanted to make sure I had all of my GPIOs hooked up correctly and that things blinked on and off like I expected. It seems that the only thing I needed to make NodeJS work was the onoff package. I popped into a directory on my Pi and installed it:

npm install onoff --save

Great! I guess the next step was to steal one of the sample JS scripts that gives you an example of how to make a relay turn on and off (my blink.js is born):

var Gpio = require('onoff').Gpio; //include onoff to interact with the GPIO
var LED = new Gpio(23, 'out'); //use GPIO pin 23, and specify that it is output
var blinkInterval = setInterval(blinkLED, 250); //run the blinkLED function every 250ms

function blinkLED() { //function to start blinking
  if (LED.readSync() === 0) { //check the pin state, if the state is 0 (or off)
    LED.writeSync(1); //set pin state to 1 (turn LED on)
  } else {
    LED.writeSync(0); //set pin state to 0 (turn LED off)
  }
}

function endBlink() { //function to stop blinking
  clearInterval(blinkInterval); // Stop blink intervals
  LED.writeSync(0); // Turn LED off
  LED.unexport(); // Unexport GPIO to free resources
}

setTimeout(endBlink, 5000); //stop blinking after 5 seconds
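
Assuming you saved that as blink.js, running it makes the relay click away for five seconds and then exit cleanly (depending on your OS image, GPIO access may require sudo):

$ node blink.js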

That seemed to work just wonderfully, so next I wanted to make sure that I could get all of the relays to go click click for me. Enter flowled.js. (This is a great way to annoy ANYONE within earshot. Remember from my previous post that I have the analog relays, so they go *click click* when they turn on and off.) While my wife was excited at the thought of our new Christmas lighting show, she was getting annoyed by the various clicking combinations that I came up with. Thank you for your patience and for appearing to be just as excited as I was, dear!:

var Gpio = require('onoff').Gpio; //include onoff to interact with the GPIO
var RELAY01 = new Gpio(24, 'out'), //declare variables for all the GPIO output pins
  RELAY02 = new Gpio(25, 'out'),
  RELAY03 = new Gpio(23, 'out'),
  RELAY04 = new Gpio(22, 'out'),
  RELAY05 = new Gpio(12, 'out'),
  RELAY06 = new Gpio(13, 'out'),
  RELAY07 = new Gpio(16, 'out'),
  RELAY08 = new Gpio(26, 'out');

//Put all the RELAY variables in an array
var leds = [RELAY01, RELAY02, RELAY03, RELAY04, RELAY05, RELAY06, RELAY07, RELAY08];
var indexCount = 0; //a counter
dir = "up"; //variable for flowing direction

var flowInterval = setInterval(flowingLeds, 100); //run the flowingLeds function every 100ms

function flowingLeds() { //function for flowing Leds
  leds.forEach(function(currentValue) { //for each item in array
    currentValue.writeSync(0); //turn off RELAY
  });
  if (indexCount == 0) dir = "up"; //set flow direction to "up" when the count reaches zero
  if (indexCount >= leds.length) dir = "down"; //set flow direction to "down" once the count passes the last relay
  if (dir == "down") indexCount--; //count downwards if direction is down
  leds[indexCount].writeSync(1); //turn on the RELAY whose array index matches the count
  if (dir == "up") indexCount++; //count upwards if direction is up
};

function unexportOnClose() { //function to run when exiting program
  clearInterval(flowInterval); //stop flow interval
  leds.forEach(function(currentValue) { //for each RELAY
    currentValue.writeSync(0); //turn off RELAY
    currentValue.unexport(); //unexport GPIO
  });
};

process.on('SIGINT', unexportOnClose); //function to run when user closes using ctrl+c
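
This one runs until you stop it, and thanks to the SIGINT handler, ctrl+c turns all the relays off and unexports the pins on the way out:

$ node flowled.js
^C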

Time to Build the REST API!

Now that I had sufficiently annoyed everyone in the house by showing them the little LEDs on the relays blink and click, I figured it was time to actually put this to work for me. All good client-server models work via APIs, right? I had dreams of multiple Pis set up around the yard and house, all controlled by a central Pi that played the music and made the magical lighting happen. This will indeed be the case in a few years, I’m sure. But we’re crawling before we can dead sprint. With that, I added http to my project with a little:

npm install http --save

So now I had onoff and http installed and saved to my package.json. (Strictly speaking, http is a Node core module, so the npm install is redundant, but it’s harmless.) With onoff and http ready to go, it was time for me to create the REST API server. I started with a few constants:

var Gpio = require('onoff').Gpio; //include onoff to interact with the GPIO
var RELAY1 = new Gpio(24, 'out'), //declare variables for all the GPIO output pins
  RELAY2 = new Gpio(25, 'out'),
  RELAY3 = new Gpio(23, 'out'),
  RELAY4 = new Gpio(22, 'out'),
  RELAY5 = new Gpio(12, 'out'),
  RELAY6 = new Gpio(13, 'out'),
  RELAY7 = new Gpio(16, 'out'),
  RELAY8 = new Gpio(26, 'out');

const NUM_RELAYS = 8;
const RELAYS = [RELAY1, RELAY2, RELAY3, RELAY4, RELAY5, RELAY6, RELAY7, RELAY8];

const COMMANDS = [ 'off', 'on' ]; //index doubles as the GPIO value we write (off = 0, on = 1)

const PORT = process.env.PORT || 8080;

Of course, the first var is to bring in the GPIO control, and then I mapped the various RELAY vars to the appropriate GPIOs that I was using on my Pi. I have a total of 8 relays to choose from, and I also created two arrays, RELAYS and COMMANDS. These will make more sense later. Finally, I’m defining a default port for my API server to run on:

var http = require('http').createServer(handler); //require http server, and create server with function handler()

console.log(`Server Running on ${PORT}`);
http.listen(PORT);
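
Since the PORT constant falls back to 8080 only when the environment variable isn’t set, the server can be moved to another port without touching the code. A quick sketch, assuming you saved the file as server.js (the filename is my own stand-in):

$ PORT=3000 node server.js
Server Running on 3000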

Those few lines fire up the http server with my “handler” middleware. The handler function is below:

function handler (req, res) { //create server
  if (req.url.startsWith('/light') && req.method == 'GET') {
    getCommands(req.url, (err, commands) => {
      if(err) {
        var message = `{"test": "Failed","message": "${err}"}`;
      } else {
        var message = doCommand(commands.relay, commands.command);
      }
      res.statusCode = 200;
      res.setHeader('Content-Type', 'application/json');
      res.end(message);
    })
  } else if ( req.url == '/status' && req.method == 'GET' ) {
    res.statusCode = 200;
    res.setHeader('Content-Type', 'application/json');
    res.end('{"status": 200, "message": "test ok"}');
  } else {
    //Set the response HTTP header with HTTP status and Content type
    res.statusCode = 200;
    res.setHeader('Content-Type', 'application/json');
    res.end('{"status": 200, "message": "ok"}');
  }
}

I set up handler to serve two routes that accept GET requests: /light and /status. The /light route is where everything happens, and /status is future-proofing so that we can check on the server when we build the monolithic light show like Clark W! Of course, the final “else” is my garbage catch-all where I just return “ok” to any request.
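
For example, the /status route and the catch-all both answer with their canned JSON, straight from the handler above:

$ curl localhost:8080/status
{"status": 200, "message": "test ok"}
$ curl localhost:8080/bogus
{"status": 200, "message": "ok"}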

For those that don’t know, I’m a security geek, so I built some input validation into the /light route with my getCommands function:

function getCommands(string, cb) {

  var data = string.split('/');

  if(data.length != 4) {
    return cb('Wrong Number of Arguments Provided');
  }

  if(!COMMANDS.includes(data[3])) {
    return cb('Unsupported Command');
  }

  if(data[2] > NUM_RELAYS || data[2] < 1) {
    return cb('Sorry We Cannot Control That One');
  }

  var result = {
    "relay" : data[2],
    "command" : data[3]
  }

  return cb(null, result);
}

The purpose of this function is to make sure we’re not fed garbage by anyone. I’m checking that we get the right number of items in the request path (aka /light/<RELAY #>/<COMMAND>), that the command is one we actually support, and that the relay number is within range. If any of these checks fail, I fail the request and do nothing. Assuming we pass validation, we get to the workhorse, doCommand:

function doCommand(relay, command) {
  var myIndex = relay - 1;
  try {
    RELAYS[myIndex].writeSync(COMMANDS.indexOf(command));
    var message = 'OMG This is Great!';
  } catch (e) {
    var message = e;
  }
  var resp = `{"status": "Ok", "message": "${message}"}`

  return resp;
}

This function just takes the received command (on or off) and runs it against the specified relay (1 – 8). In curl the command would look a little something like this to turn on relay 4:

$ curl localhost:8080/light/4/on
{"status": "Ok", "message": "OMG This is Great!"}

To turn off the same relay, we would issue:

$ curl localhost:8080/light/4/off
{"status": "Ok", "message": "OMG This is Great!"}

Now I have a NodeJS server that can handle REST API calls to turn certain relays on and off. What an accomplishment! The next post will cover how I put this all together to at least do some crappy light shows.

Building the RaspberryPi Christmas Light Box

Disclosure: I have included some affiliate / referral links in this post. There’s no cost to you for accessing these links but I do indeed receive some incentive for it if you buy through them.

Let’s Cover Some Background Here

I have always enjoyed Christmas lights. For quite some time, I was very intrigued by the notion of putting the lights to music, or at least making them dance in motion. About 3 years ago, my wife and I bought a string of lights that had a mind of their own. It was neat to watch it randomly go through the different patterns of blinking, dancing, and chasing. Last year was the year that we got the really neat ones. They were icicles that changed colors AND danced and chased and more.

All of this was great, but I wanted more this year. I wanted to be able to set the lights to music just like the fancy light shows you go to see. I figured the easiest way to do this was to just simply buy something at the store, right? So that’s what we did. We ended up buying an Orchestra of Lights (sure, a plug for them, yaay). The concept is really cool. We bought the speaker box that comes with 6 outlets that are all controlled by the wifi hub. We’ll just say that if you’d like to buy one of these, ask me about purchasing a barely used one for a deep discount.

As part of a little additional background, I’ve always wanted to get my hands on a Pi or Arduino, but I could never justify buying one. I could never figure out a legitimate project worthy of such an amazing device. Enter the friend…

The Friend Made Me Do It

This is what friends are for, right? As we were stringing up all of the lights outside, a friend of ours stopped by who is also a geek. I explained what we were doing, and he very promptly asked if I was using a Raspberry Pi to do all of it. As the gears began to turn in my head, I knew my face gave it all away. We weren’t doing it at the time, but we would be in just about a week!

The Planning Phase

I’m an over-planner and over-thinker, so I was looking everywhere for what I needed, how I needed to do it, and what I should do next. My end goal was to build a device I could put outside that would play music and control the lights automatically to whatever little tune was playing at the time. This turned out to be a little more difficult than I bargained for, but not a big deal. I looked at a bunch of sites and decided that I should probably break this project down into parts:

  1. Build out the hardware
  2. Make it do “something”
  3. Look at how I could possibly get the music to control the action

During this planning phase, I happened upon a really great article that gave amazing details on hooking up the hardware (https://system76.com/weekend-project/holiday-light-show). I mostly ignored everything but the pretty pictures. This site, along with a bunch of others, helped me put together my shopping list.

The Shopping List

Now remember that I said I had already ordered the Pi but let’s still list it here so that you know what I had coming:

This would be the very basic shopping list. I also bought some little connectors and such so that I could conduit all of the metal pieces together, plus some tiny screws and bushings so that I could install the Pi and relay into the breaker box.

Putting it all Together

I lined up the Pi and relay and marked my holes. I drilled them out. I added my bushings. I mounted everything and was quite proud of it all. The next step was to wire everything up. OK, why reinvent the wheel here? As I noted before, I basically did exactly what was done in Steps C – K of the article linked above. It was a really great write-up on how to wire everything.

After everything was done I had me a nice little system. This would be a GREAT spot to add a picture but I already have it hooked up on the porch, plugged in, and nicely hidden away so the neighbors can’t see it. I’ll remember to take some pictures next time.

Next, I’ll go through the code that I put together to get the thing clicking like crazy!

The Move to WordPress

I think I stopped blogging because I just wasn’t quite sure I liked the Blogger platform. I guess WordPress is the place to be in the blogging world. After agonizing for a very long time, it seemed that it was finally time to make the big switch. Turns out, it wasn’t that hard at all. The best thing is that my hosting provider made it totally easy to integrate into my existing infrastructure. On top of that, WordPress has a nice little utility that can migrate your Blogger blog right into WordPress. Fancy that!

In case you wanted to learn more about this migration, you can read about it on the WordPress support site here.

Now that I’m onto a platform that seems to be a little more friendly, I hope that I can find some time to write about my latest challenge, my Raspberry Pi 4!

Stay tuned for the next set of posts where I walk through my first project with a Pi in hand.

Automatically Rebuild Image on Docker Hub

This post focuses on me being lazy. In the previous post, I talked about building a custom image and posting it to Docker Hub. I have also talked about creating a Git repo and storing everything in it. What if a commit could rebuild our image for us? As luck would have it, it can!

This post is going to focus on making one very simple change to your Docker Hub repository so that every commit causes the image to be rebuilt and published as latest. How fun!

Connecting Docker Hub to Your Git Account

The major thing to accomplish here is configuring Docker Hub to monitor Git. In order to do that, you’ll need to first sign into your Docker Hub account. This should bring you to the main page where you see the list of repos you maintain:

From there, click on the repo that you plan to configure. In my case, it’s the testnginximage repo. On the resulting screen, click on the Builds link to reveal the below page:

Click on the Link to GitHub button to open your preferences for configuring linked accounts.

Click the Connect link on this screen to link your GitHub account. If you are already signed into GitHub, Docker Hub will automatically connect to whatever account you are signed in with. If you are not already signed into GitHub, you’ll see the below login screen:

Log in to the GitHub account you used to store the Dockerfile we created in the previous post. Once connected, you’ll return to your Docker Hub profile with your GitHub account connected and the account name listed:

At this point, you now have your Docker Hub and GitHub accounts connected. The next step will be to enable automatic builds.

Enabling Automatic Builds in Docker Hub

With Docker Hub and GitHub connected, the next step is to tell Docker Hub which repo to use and where the Dockerfile is located. To do that, go back to your repo and once again click on the Builds link. Within the Builds screen, again click on the Link to GitHub button. This time, the button should say “Connected” on it as shown below:

On the resulting page, configure the username and repo you would like to use as your source. Since I have been building everything in my mysamplerepo repo, I’m choosing this from the drop down:

In my prior examples, I created the Dockerfile in the nginxdocker directory within mysamplerepo. Assuming you have done the same, scroll down the page and set the Build Context to nginxdocker in the Build Rules. The Build Context is the path from the root of your repo that contains the Dockerfile, so if you’ve placed your Dockerfile in a different path within your repo, make sure the Build Context points at that particular path.
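
For reference, my build rule ended up looking roughly like this; the exact field labels may shift as Docker Hub’s UI changes, so treat it as a sketch:

Source Type:   Branch
Source:        master
Docker Tag:    latest
Build Context: /nginxdocker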

Once you have this all configured, click on the Save and Build button at the bottom of the page. This should take you back to the Build page where you can monitor the status of the build.

Monitor the progress to make sure everything builds correctly. Once done, you should see a success status for the build.

Use a Commit to Generate a Build

Now that we have everything connected and working, let’s make a commit to our repo and confirm that the commit triggers a build. We’ll make a simple change and no longer expose port 443 from the image:

FROM ubuntu

MAINTAINER Scott Algatt

RUN apt-get update \
    && apt-get install -y nginx libnginx-mod-http-lua libnginx-mod-http-subs-filter software-properties-common \
    && add-apt-repository -y universe \
    && add-apt-repository -y ppa:certbot/certbot \
    && apt-get update \
    && apt-get -y install certbot python-certbot-nginx \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

COPY ./conf/nginx.conf /etc/nginx/nginx.conf
COPY ./conf/site.conf /etc/nginx/sites-available/default

EXPOSE 80
CMD ["nginx"]

With that change, let’s do a commit and push:

$ git commit -a
 [master 0e01193] Removing port 443
  Committer: Scott <scott@iMacs-iMac.local>
  
  2 files changed, 2 deletions(-)
$ git push origin master
 Counting objects: 6, done.
 Delta compression using up to 4 threads.
 Compressing objects: 100% (6/6), done.
 Writing objects: 100% (6/6), 499 bytes | 499.00 KiB/s, done.
 Total 6 (delta 3), reused 0 (delta 0)
 remote: Resolving deltas: 100% (3/3), completed with 3 local objects.
 To github.com:algattsm/mysamplerepo.git
    1d4d448..0e01193  master -> master

After performing the commit, refresh your Build page in Docker Hub and you should see a build trigger:

This means that you can simply use GitHub to generate a new image anytime you like! It also means that with every commit, you’ll be publishing the latest version of your image to Docker Hub.
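
If you’d like to verify from the command line rather than the web UI, pulling the tag will confirm the freshly built image is live:

$ docker pull algattblog/testnginximage:latest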

Referenced File

In case you want to make sure you have the correct file, here would be the only file I referenced in this post:

Posting a Custom Image to Docker Hub

Welcome to 2020! I hope the new year finds everyone in good spirits and ready to continue listening to me babble about my struggles with technology.

So far, the focus has been on using default Docker images for our builds. This is great if you plan to deploy stock instances and only need to serve custom content with some minor configuration tweaks. Note that we were able to make configuration changes using a configMap yaml. But what if you needed Nginx modules that weren’t already installed in the base image? Sure, you could come up with some funky CMD statement in your yaml file that tells Kubernetes to install the modules. Of course, that adds time before the pod is available while it boots up and runs through the install steps. It would also defeat the purpose of what I’m attempting to show you 🙂

The focus of this article is simple: we’re going to set up a Docker Hub account and build a custom Nginx image to post there. Some future articles will then use this newfound knowledge to do some cool stuff.

Let’s stop the babble and start the fun!

Creating a Docker Hub Account

This is pretty straight forward so we’ll cover it briefly.

  1. Go to https://hub.docker.com/
  2. Click the Sign up for Docker Hub button
  3. Enter your information
  4. Sign up
  5. Wait for the verification Email from Docker
  6. Verify your email via the verification email
  7. Sign in

Done

Create a Docker Hub Repository

Now that you have a Docker Hub account, you’ll want to create a repo to be able to store your custom docker image. Assuming that you are still signed in from the steps above, you should see a Create a Repository button:

If you don’t see the Create a Repository button, worry not, you can get there by clicking on the Repositories link on the top menu and then the Create Repository button:

On the resulting Create Repository screen, let’s add in some details such as below:

You may call the repo whatever you want and feel free to give it a description. For now, we’re going to make this a public repo. Once you have this information filled out, scroll to the bottom and click Create. You should now see something similar to the below:

Make note of the docker push command in the black background box on the right. In my case, it is

docker push algattblog/testnginximage:tagname

We’ll need this later when we push our custom image.
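
One note before moving on: pushing to the repo requires that your local Docker client is signed in to this account, which looks something like the below:

$ docker login
Username: algattblog
Password:
Login Succeeded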

Configuring Our Custom Nginx Docker Image

In order to keep everything in one place and keep things backed up, we’ll be building this out within our previously defined Git repo. It’s a private repo, so it’s reasonably protected, and GitHub is a really nice place to maintain our backup. The first step will be to make sure we’re in the root of our repo, where we’ll make a new directory to store this image.

$ mkdir nginxdocker

From there, we’ll change into the directory so that we can start with our Dockerfile

$ cd nginxdocker/

Now let’s create a new Dockerfile that looks like the following:

FROM ubuntu

MAINTAINER Scott Algatt

RUN apt-get update \
    && apt-get install -y nginx libnginx-mod-http-lua libnginx-mod-http-subs-filter software-properties-common \
    && add-apt-repository -y universe \
    && add-apt-repository -y ppa:certbot/certbot \
    && apt-get update \
    && apt-get -y install certbot python-certbot-nginx \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

COPY ./conf/nginx.conf /etc/nginx/nginx.conf
COPY ./conf/site.conf /etc/nginx/sites-available/default

EXPOSE 80
EXPOSE 443

CMD ["nginx"]

Let’s see what this does… First, we’re going to build this new Docker image using ubuntu as our base image. From there, we’re going to install nginx, libnginx-mod-http-lua, libnginx-mod-http-subs-filter, and software-properties-common. We’re installing software-properties-common so that we can add the certbot repo and then install certbot. We’re also going to copy over some custom Nginx configuration files so we won’t need to leverage our configMap anymore. We make sure ports 80 and 443 are exposed to the running container. Finally, the container should run the “nginx” command to start the nginx server.

Next, we’ll want to create the files referenced by the COPY commands. We start by creating the conf directory and then changing into it:

$ mkdir conf
$ cd conf

Create the nginx.conf file with the following (Basically, we’re defining a custom log format):

user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
        worker_connections 768;
}

http {
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 65;
        types_hash_max_size 2048;
        include /etc/nginx/mime.types;
        default_type text/html;
        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;
        log_format  graylog2_format  '$remote_addr $request_method "$request_uri" $status $bytes_sent "$http_referer" "$http_user_agent" "$http_x_forwarded_for" "$http_if_none_match"';
        gzip on;
        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
}

daemon off;

The MOST important item at the bottom of this file is the daemon off; statement. Without it, the container would start, nginx would background itself as a daemon, and the container’s main process would COMPLETE and stop, taking the container down with it. We want nginx to run in the foreground, not in the background as a daemon, which is why this is here.
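
As an aside, the official nginx image solves the same problem by passing the directive on the command line instead of in the config file:

CMD ["nginx", "-g", "daemon off;"]

Either approach works; I’m keeping it in nginx.conf since we’re shipping a custom config anyway. Now create the referenced site.conf file.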

server {
    listen       80;
    server_name  localhost;
    access_log /var/log/nginx/access.log graylog2_format;
    error_log /var/log/nginx/error.log graylog2_format;

    location / {
        root   /usr/share/nginx/www/html;
        index  index.html index.htm;
    }

    error_page   500 502 503 504  /50x.html;

    location = /50x.html {
        root   /usr/share/nginx/www/html;
    }

  location ~ \.php$ {
      root /usr/share/nginx/www/html;
      try_files $uri =404;
      fastcgi_split_path_info ^(.+\.php)(/.+)$;
      fastcgi_pass phpfpm:9000;
      fastcgi_index index.php;
      include fastcgi_params;
      fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
      fastcgi_param PATH_INFO $fastcgi_path_info;
  }
}

This looks good so let’s first save our changes and commit them to our repo.

$ cd ../..
$ git add .
$ git commit -a
[master ade43da] Adding in our stuff
 Committer: Scott <scott@iMacs-iMac.local>
 3 files changed, 23 insertions(+), 37 deletions(-)
 rewrite nginxdocker/conf/site.conf (99%)
 delete mode 100644 nginxdocker/conf/test
$ git push origin master
Enter passphrase for key '/Users/scott/.ssh/id_rsa': 
Counting objects: 6, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (6/6), done.
Writing objects: 100% (6/6), 754 bytes | 754.00 KiB/s, done.
Total 6 (delta 2), reused 0 (delta 0)
remote: Resolving deltas: 100% (2/2), completed with 2 local objects.
To github.com:algattsm/mysamplerepo.git
   97b7a1c..ade43da  master -> master

Now that we’ve got that squared away, it’s onto the next step!

Building our Docker Image and Publishing it

This is the easy part as we just watch it run. Make sure we’re in the directory that contains our Dockerfile and then we’ll run the build command:

$ cd nginxdocker/
 imacs-imac:nginxdocker scott$ docker build -t algattblog/testnginximage:latest .
  
 Sending build context to Docker daemon  6.144kB
 Step 1/8 : FROM ubuntu
 latest: Pulling from library/ubuntu
 2746a4a261c9: Pull complete 
 4c1d20cdee96: Pull complete 
 0d3160e1d0de: Pull complete 
 c8e37668deea: Pull complete 
 Digest: sha256:250cc6f3f3ffc5cdaa9d8f4946ac79821aafb4d3afc93928f0de9336eba21aa4
 Status: Downloaded newer image for ubuntu:latest
  ---> 549b9b86cb8d
 Step 2/8 : MAINTAINER Scott Algatt
  ---> Running in ff6d8459f56b
 Removing intermediate container ff6d8459f56b
  ---> 666acba43494
 Step 3/8 : RUN apt-get update     && apt-get install -y nginx libnginx-mod-http-lua libnginx-mod-http-subs-filter software-properties-common    && add-apt-repository -y universe     && add-apt-repository -y ppa:certbot/certbot     && apt-get update     && apt-get -y install certbot python-certbot-nginx     && apt-get clean     && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
  ---> Running in acfefd676a08
 Get:1 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
 Get:2 http://archive.ubuntu.com/ubuntu bionic InRelease [242 kB]
 ...
 Setting up python-certbot-nginx (0.31.0-1+ubuntu18.04.1+certbot+1) ...
 Removing intermediate container acfefd676a08
  ---> 09301061d312
 Step 4/8 : COPY ./conf/nginx.conf /etc/nginx/nginx.conf
  ---> c82b8d22e6a0
 Step 5/8 : COPY ./conf/site.conf /etc/nginx/sites-available/default
  ---> 841e6ecfc3d9
 Step 6/8 : EXPOSE 80
  ---> Running in f2f36c350457
 Removing intermediate container f2f36c350457
  ---> 79af7e01f9c0
 Step 7/8 : EXPOSE 443
  ---> Running in 9d6a8dcdba31
 Removing intermediate container 9d6a8dcdba31
  ---> a534c821c51b
 Step 8/8 : CMD ["nginx"]
  ---> Running in 82ceccd20644
 Removing intermediate container 82ceccd20644
  ---> 6728616336a3
 Successfully built 6728616336a3
  
 Successfully tagged algattblog/testnginximage:latest

Next, we need to publish it on Docker Hub (remember that push command from earlier?):

$ docker push algattblog/testnginximage:latest
 The push refers to repository [docker.io/algattblog/testnginximage]
 010c4615edf3: Pushed 
 98c06aef3fd3: Pushed 
 229f4ffc7b88: Pushed 
 918efb8f161b: Mounted from library/ubuntu 
 27dd43ea46a8: Mounted from library/ubuntu 
 9f3bfcc4a1a8: Mounted from library/ubuntu 
 2dc9f76fb25b: Mounted from library/ubuntu 
 latest: digest: sha256:4545730a7dd5b5818f0ce9a78666f40ea9a864198665022dc29000a34cc4b402 size: 1778

Testing the New Image in our Cluster

At this point, we’ve got our newly created image uploaded to Docker Hub. The next step is to test it out and see if it works in our cluster. In order to do that, we’ll want to change the image in our webserver.yaml file to reference the newly created image:

        image: algattblog/testnginximage:latest

Now let’s apply the updated deployment:

# kubectl apply -f webserver.yaml 
 configmap/webserver-config unchanged
 service/webserver unchanged
 deployment.apps/webserver configured

Wait for it to build the new webserver pod by keeping an eye on kubectl:

# kubectl get pod
 NAME                         READY   STATUS    RESTARTS   AGE
 phpfpm-7b8d87955c-rps2w      2/2     Running   0          3d1h
 webserver-6b577db595-wgwwb   2/2     Running   0          25s

Looks like it’s up and running, so now let’s test to make sure everything looks good. We’ll start by connecting to the webserver, installing curl, and making sure everything works again.

# kubectl exec -it webserver-6b577db595-wgwwb -c webserver -- /bin/bash
 groups: cannot find name for group ID 65533
 root@webserver-6b577db595-wgwwb:/# apt update     
 Get:1 http://security.ubuntu.com/ubuntu bionic-security InRelease [88.7 kB]
  
 Get:2 http://archive.ubuntu.com/ubuntu bionic InRelease [242 kB]           
 Setting up libcurl4:amd64 (7.58.0-2ubuntu3.8) ...
 Setting up curl (7.58.0-2ubuntu3.8) ...
 Processing triggers for libc-bin (2.27-3ubuntu1) ...
 root@webserver-6b577db595-wgwwb:/# curl localhost
 <html>
 <body>
 hello world! Everything must be cleaned up at this point
 </body>
 </html>
 root@webserver-6b577db595-wgwwb:/# curl localhost/index.php
  
 hello world from php

We can ignore the groups error for now. Some idiot left out the instructions for fixing that, but it’s not a biggie. Looks like we’re serving content properly. Let’s check our nginx configuration to confirm it’s also running the correct config:

root@webserver-6b577db595-wgwwb:/# cat /etc/nginx/nginx.conf 
 user www-data;
 ...
  log_format  graylog2_format  '$remote_addr $request_method "$request_uri" $status $bytes_sent "$http_referer" "$http_user_agent" "$http_x_forwarded_for" "$http_if_none_match"';
  gzip on;
  include /etc/nginx/conf.d/*.conf;
  include /etc/nginx/sites-enabled/*;
 }
  
 daemon off;
 root@webserver-6b577db595-wgwwb:/# 

Confirmed! This looks great! You’ve now got a custom Docker image that you can use and expand upon. I know I left you hanging here, but if I continued to build, my attention span couldn’t tolerate typing anymore. More to come…

Deploying Nginx + PHP + git-sync on Kubernetes

In my previous post, I explained how to set up a simple nginx instance that syncs with a private Git repo. The only drawback is that the setup only serves static pages. What if you wanted to run a server with dynamic code like PHP? I’m glad you asked! In this post, we’ll update our config to include a php-fpm instance that allows us to serve PHP pages.

I have planned these articles out so that they build on each other. With that in mind, I’m assuming you have followed my articles to date, and therefore we’ll simply be extending the current deployment.

If you’re impatient like me, just scroll to the bottom and download the full files.

Setting Up The PHP-FPM Instance

First we need to get our PHP-FPM yaml set up. By default, php-fpm runs on port 9000, which means we need a service definition to expose it to the cluster. The pod will also need access to the git repo we created, so we’ll add in the git container spec. Instead of running the nginx image, we’ll run the php-fpm image. To make life easy on ourselves, I’m going to use the webserver.yaml from my previous post as a template and make the following changes to it:

  1. Replace any reference of “webserver” with “phpfpm”.
  2. Change the following in the service definition
    1. change the port name from http to phpfpm
    2. change the port number from 80 to 9000
  3. Remove the ConfigMap
    1. Remove the definition of it from the top of the file
    2. Remove the references to it in the spec volumes and the container volumeMounts
  4. Change the image of the second container from nginx:latest to php:fpm
  5. Change the containerPort from 80 to 9000

If we’ve done this all correctly, we should have a yaml that looks similar to the below:

apiVersion: v1
kind: Service
metadata:
  name: phpfpm
  labels:
    tier: backend
spec:
  selector:
    app: phpfpm
    tier: backend
  ports:
  - name: phpfpm
    port: 9000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: phpfpm
  labels:
    tier: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: phpfpm
      tier: backend
  template:
    metadata:
      labels:
        app: phpfpm
        tier: backend
    spec:
      securityContext:
        fsGroup: 65533 # to make SSH key readable
      volumes:
      - name: dir
        emptyDir: {}
      - name: git-secret
        secret:
          secretName: github-creds
          defaultMode: 288
      containers:
      - env:
        - name: GIT_SYNC_REPO
          value: git@github.com:<some user>/mysamplerepo.git
        - name: GIT_SYNC_BRANCH
          value: master
        - name: GIT_SYNC_SSH
          value: "true"
        - name: GIT_SYNC_PERMISSIONS
          value: "0777"
        - name: GIT_SYNC_DEST
          value: www
        - name: GIT_SYNC_ROOT
          value: /git
        name: git-sync
        image: k8s.gcr.io/git-sync:v3.1.1
        securityContext:
          runAsUser: 65533 # git-sync user
        volumeMounts:
        - name: git-secret
          mountPath: /etc/git-secret
        - name: dir
          mountPath: /git
      - name: phpfpm
        image: php:fpm
        ports:
        - containerPort: 9000
        volumeMounts:
        - name: dir
          mountPath: /usr/share/nginx

We can now save this yaml and apply it to our cluster:

# kubectl apply -f phpfpm.yaml 
 service/phpfpm unchanged
 deployment.apps/phpfpm configured

Assuming all went well, we should now have our webserver and phpfpm containers up and running:

# kubectl get pod
 NAME                         READY   STATUS    RESTARTS   AGE
 phpfpm-b46969c5f-zzh6d       2/2     Running   0          103s
 webserver-8fb84dc86-7xw4w    2/2     Running   0          10s

That’s just lovely but what next?

Configuring Nginx for PHP

At this point, we basically have two unassociated containers living independently in the same cluster. The only common bond is that they have the same set of files synced from the Git repo. Next, we need to tell nginx how to handle PHP requests and where to send them. This requires us to update our Nginx configMap, which we do by adding a location statement to handle php files like so:

      location ~ \.php$ {
           try_files $uri =404;
           fastcgi_split_path_info ^(.+\.php)(/.+)$;
           fastcgi_pass phpfpm:9000;
           fastcgi_index index.php;
           include fastcgi_params;
           fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
           fastcgi_param PATH_INFO $fastcgi_path_info;
       }

There’s a lot going on in this file, but here are the important items to note. Nginx acts like a reverse proxy when handling PHP files: it simply takes the request and sends it to php-fpm. The php-fpm service finds the requested file locally, executes PHP on it, and sends the resulting processed output back to Nginx. Here is the full updated configMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: webserver-config
  labels:
    tier: backend
data:
  config: |
    server {
        listen       80;
        server_name  localhost;

        location / {
            root   /usr/share/nginx/www/html;
            index  index.html index.htm;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   /usr/share/nginx/www/html;
        }

        location ~ \.php$ {
            root /usr/share/nginx/www/html;
            try_files $uri =404;
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_pass phpfpm:9000;
            fastcgi_index index.php;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param PATH_INFO $fastcgi_path_info;
        }
    }

Let’s apply this to our cluster:

# kubectl apply -f configmap.yaml 
 configmap/webserver-config configured

With the new configuration in place, we’ll need Nginx to reload it. There are a number of different ways we could do this, but I’m going to use a hack that lets us test the config first and then reload. First, I want to make sure the new config will work for us:

# kubectl exec -it webserver-8fb84dc86-7xw4w -c webserver -- /usr/sbin/nginx -t
 nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
 nginx: configuration file /etc/nginx/nginx.conf test is successful

It looks like the configuration is acceptable so let’s reload Nginx.

# kubectl exec -it webserver-8fb84dc86-7xw4w -c webserver -- /usr/sbin/nginx -s reload
 2019/12/28 14:01:32 [notice] 2804#2804: signal process started

We should now be ready to commit a PHP file to our repo and test.

Testing Our Configuration

Let’s create a simple PHP file in the html directory of our repo and push it up.
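
A minimal sketch of what that looks like from a local clone, assuming the file contents match the “hello world from php” output we’ll see in a moment (the commit message is just mine):

$ cd mysamplerepo
$ cat > html/index.php <<'EOF'
<?php echo "hello world from php"; ?>
EOF
$ git add html/index.php
$ git commit -m "Add a PHP test page"
$ git push origin master

Give git-sync a few seconds to pick up the new commit in both containers.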

We’ll jump onto the web server, install curl and test:

# kubectl exec -it webserver-8fb84dc86-7xw4w -c webserver -- /bin/bash
 root@webserver-8fb84dc86-7xw4w:/# apt update
 Hit:1 http://deb.debian.org/debian buster InRelease
 Hit:2 http://deb.debian.org/debian buster-updates InRelease
 Hit:3 http://security-cdn.debian.org/debian-security buster/updates InRelease
 Reading package lists... Done
 Building dependency tree       
 Reading state information... Done
 All packages are up to date.
 root@webserver-8fb84dc86-7xw4w:/# apt install curl
 Reading package lists... Done
 Building dependency tree       
 Reading state information... Done
 curl is already the newest version (7.64.0-4).
 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
 root@webserver-8fb84dc86-7xw4w:/# curl localhost/index.php
  
 hello world from php

A great question to ask is: how does php-fpm know which file to run and where that file lives? Like I said, great question.

This is handled by the fastcgi_param SCRIPT_FILENAME entry. Nginx tells php-fpm that it should try to load the $document_root$fastcgi_script_name file for the request. If you look at our configMap, we define the document root as /usr/share/nginx/www/html. Assuming a request for index.php comes into Nginx, Nginx will tell php-fpm to load /usr/share/nginx/www/html/index.php. In an environment where Nginx and PHP live on the same host, this isn’t a problem because that file is sure to exist. In our configuration, we’re running two separate hosts, aka containers, so we need to make sure the file exists on both servers in the same location. That’s the easy part! It does! The reason: we’re using git-sync in both containers and mounting the synced directory to the same location!

Full Working Configs

In case you want to just cheat and load the configurations, feel free to download them and play around:

Building a Kubernetes Container That Synchs with Private Git Repo

My previous post explained how to create a private git repo. On its own, that post is roughly useless unless you planned to maintain some private copy of your project so nobody can see it. In this post, we’re going to put that private repo to use in a Kubernetes environment. A basic assumption is that you already have a Kubernetes environment set up.

Adding Another SSH Key to the Repo

The first step is to add another SSH key to our repo. This key will be used to configure access from the container to the repo, and we’ll load it into Kubernetes as a secret. We can’t set a password on this key, or we might get prompted for it during the container build, which isn’t useful. Also, since the key will not have a password, we won’t give it Read / Write access to our repo.

Generate the SSH Key

As before, we’re going to run the ssh-keygen command, but this time we’ll specify the file where the key should be saved and simply hit enter at the password prompt so that it’s not password protected.

imacs-imac:~ scott$ ssh-keygen -t rsa
 Generating public/private rsa key pair.
 Enter file in which to save the key (/Users/scott/.ssh/id_rsa): /Users/scott/.ssh/GH_RO_key_rsa
 Enter passphrase (empty for no passphrase): 
 Enter same passphrase again: 
 Your identification has been saved in /Users/scott/.ssh/GH_RO_key_rsa.
 Your public key has been saved in /Users/scott/.ssh/GH_RO_key_rsa.pub.
 The key fingerprint is:
 SHA256:0v0koHVNHdJbt4j2PaNorHa25dXgNl0sQjJB8R3ClPA scott@imacs-imac.lan
 The key's randomart image is:
 +---[RSA 2048]----+
 |         .===+o. |
 |           *o+o.o|
 |        o + E ooo|
 |       + + * ..o |
 |      o S + + + o|
 |       .   + + Bo|
 |          . o.=.=|
 |         . *oo.. |
 |        ..=...   |
 +----[SHA256]-----+
 imacs-imac:~ scott$ 

Upload the Key to our Git Repo

With our new SSH key created, we’ll once again take the contents of the .pub file (aka GH_RO_key_rsa.pub if you’re following along) and paste it into our repo’s Deploy Keys like below:

Be sure that Allow write access is NOT selected, paste the contents of the pub file into the Key box, and click Add Key. You should now have two keys listed:

Configuring Kubernetes

Now that we have our new Read Only key added to the repo, it’s time to set up Kubernetes. This is going to be a simple configuration so that we can display static HTML pages on our Kubernetes cluster.

Add SSH Key to Kubernetes

In order for Kubernetes to be able to use the SSH key, we need to add it as a secret that we’ll reference in our pod deployment. The first step is to create a known hosts file to be used along with the key so we don’t have to worry about acknowledging any new-host-key messages.

~# ssh-keyscan github.com > /tmp/known_hosts
 # github.com:22 SSH-2.0-babeld-778045a0
 # github.com:22 SSH-2.0-babeld-778045a0
 # github.com:22 SSH-2.0-babeld-778045a0
 ~# 

This copies GitHub’s ssh host keys into the /tmp/known_hosts file. Next, we need the contents of our private key file. When we pasted the key into GitHub, we were working with the public key file… aka the .pub file. Since Kubernetes will need to authenticate using this key, it’ll need the private key file… aka the GH_RO_key_rsa file. We’ll use the kubectl command to add the key into Kubernetes:

~# kubectl create secret generic github-creds --from-file=ssh=.ssh/GH_RO_key_rsa --from-file=known_hosts=/tmp/known_hosts
 secret/github-creds created
 ~# 
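
A quick get is a handy sanity check that the secret landed; the DATA column should read 2, one entry for the ssh key and one for known_hosts (the AGE below is obviously just from my run):

~# kubectl get secret github-creds
 NAME           TYPE     DATA   AGE
 github-creds   Opaque   2      10s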

Creating the Web Server Deployment

Now we’re going to create a YAML file to configure and set everything up. The start of that YAML file configures Kubernetes to open a port that directs traffic to port 80 of our resulting pod. From there, we’ll need to set up a pod that runs two separate containers: one will be our git-sync application and the other will be nginx. We could get into some “complex” discussions about the added cost of running a PVC or some other Kubernetes shared storage, but we’re only dealing with a small web site synced from GitHub, so we’re simply going to leverage local storage on each node by defining two volumes:

      volumes:
       - name: dir
         emptyDir: {}
       - name: git-secret
         secret:
           secretName: github-creds
           defaultMode: 288

This creates two volumes, dir and git-secret. The dir volume is simply an empty directory that we’ll fill with the files we sync from GitHub. The git-secret volume is the SSH key we added above; it needs to be made available to our git-sync container.

In the nginx container, we’re going to mount the dir volume as /usr/share/nginx. The default nginx image looks for web content, aka the document root, in /usr/share/nginx/html, which is why we mount the repo at /usr/share/nginx. In the git-sync container, we mount the same dir volume at /git, as this is where the synced data gets written.
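
To make the plumbing concrete, here’s roughly how that one emptyDir volume appears from each container (the paths follow from the env values and mounts below):

 git-sync container: /git/www              <- the synced checkout (GIT_SYNC_ROOT + GIT_SYNC_DEST)
 nginx container:    /usr/share/nginx/www  <- the very same data, seen through the nginx mount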

You can see all of these configurations in the git-synch container configuration such as the target location for our synched files as well as the secret to use.

      containers:
       - env:
         - name: GIT_SYNC_REPO
           value: git@github.com:<some user>/mysamplerepo.git
         - name: GIT_SYNC_BRANCH
           value: master
         - name: GIT_SYNC_SSH
           value: "true"
         - name: GIT_SYNC_PERMISSIONS
           value: "0777"
         - name: GIT_SYNC_DEST
           value: www
         - name: GIT_SYNC_ROOT
           value: /git
         name: git-sync
         image: k8s.gcr.io/git-sync:v3.1.1
         securityContext:
           runAsUser: 65533 # git-sync user
         volumeMounts:
         - name: git-secret
           mountPath: /etc/git-secret
         - name: dir
           mountPath: /git

You’ll want to make sure you change GIT_SYNC_REPO to match the value of your clone/download link in GitHub. GIT_SYNC_DEST is the name of the directory (really a symlink) that git-sync creates inside GIT_SYNC_ROOT to hold the checkout; here we call it www.

Here is the full config for reference:

apiVersion: v1
kind: Service
metadata:
  name: webserver
  labels:
    tier: backend
spec:
  selector:
    app: webserver
    tier: backend
  ports:
  - name: http
    port: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver
  labels:
    tier: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webserver
      tier: backend
  template:
    metadata:
      labels:
        app: webserver
        tier: backend
    spec:
      securityContext:
        fsGroup: 65533 # to make SSH key readable
      volumes:
      - name: dir
        emptyDir: {}
      - name: git-secret
        secret:
          secretName: github-creds
          defaultMode: 288
      containers:
      - env:
        - name: GIT_SYNC_REPO
          value: git@github.com:<some user>/mysamplerepo.git
        - name: GIT_SYNC_BRANCH
          value: master
        - name: GIT_SYNC_SSH
          value: "true"
        - name: GIT_SYNC_PERMISSIONS
          value: "0777"
        - name: GIT_SYNC_DEST
          value: www
        - name: GIT_SYNC_ROOT
          value: /git
        name: git-sync
        image: k8s.gcr.io/git-sync:v3.1.1
        securityContext:
          runAsUser: 65533 # git-sync user
        volumeMounts:
        - name: git-secret
          mountPath: /etc/git-secret
        - name: dir
          mountPath: /git
      - name: webserver
        image: nginx:latest
        ports:
        - containerPort: 80
        volumeMounts:
        - name: dir
          mountPath: /usr/share/nginx

With our configuration file all ready to go, we’ll use kubectl to apply it:

~# kubectl apply -f webserver.yaml 
 service/webserver created
 deployment.apps/webserver created
 ~# 

After some time, we should be able to check the status and see that the pod is online and the service is set up:

~# kubectl get pod
 NAME                         READY   STATUS    RESTARTS   AGE
 webserver-686854f667-cwq5f   2/2     Running   5          3m46s
 ~# kubectl get svc
 NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
 kubernetes   ClusterIP   10.152.183.1     <none>        443/TCP   149m
 webserver    ClusterIP   10.152.183.195   <none>        80/TCP    5m28s
 ~# 

Testing the Deployment

With everything deployed, we should have a web server up and running that is serving our git repo from the previous post. Without getting into deploying an ingress server and such, let’s take a shortcut to test our deployment. We can do this by connecting to the web server container and running curl. First, we connect to the web server container:

# kubectl exec -it webserver-686854f667-cwq5f -c webserver /bin/bash

The above command connects you to a shell in the container. By default, the nginx image does not have curl installed, so we’ll need to install it to test further. Install curl using the below command:

root@webserver-686854f667-cwq5f:/# apt update;apt -y install curl

With curl installed, let’s connect to the local web server:

root@webserver-686854f667-cwq5f:/# curl localhost
 <html>
 <head><title>403 Forbidden</title></head>
 <body>
 <center><h1>403 Forbidden</h1></center>
 <hr><center>nginx/1.17.6</center>
 </body>
 </html>

That does not seem right… I broke something… didn’t I? Oh wait, I know, let’s try…

root@webserver-686854f667-cwq5f:/# curl localhost/html/
 <html>
 <body>
 hello world!
 </body>
 </html>

That works better. Looks like we need to fix something here, but first let’s see if making a change to the repo works. Let’s cheat and use the GitHub file editor to make a change to the index.html file like the below:

If we run our curl again, survey says….

root@webserver-686854f667-cwq5f:/# curl localhost/html/
 <html>
 <body>
 hello world! Test #2
 </body>
 </html>

Boom! Just like that it’s working. Kinda…

Fixing Our Deployment

In case the problem isn’t quite obvious, we are mounting the git repo in a location where nginx isn’t quite looking. It’s also a bad idea to mount the entire git repo as the document root, since that could allow people to look at your .git directory and possibly other files that you didn’t consider. To fix our deployment, and secure it just a little further, we’re going to first adjust the nginx configuration with a Kubernetes configMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: webserver-config
  labels:
    tier: backend
data:
  config: |
    server {
        listen       80;
        server_name  localhost;

        location / {
            root   /usr/share/nginx/www/html;
            index  index.html index.htm;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   /usr/share/nginx/www/html;
        }
    }

This configMap supplies nginx with a new configuration for the default site, telling nginx that the document root is now located at /usr/share/nginx/www/html. We also made some changes to the original webserver.yaml: we add the new configuration as a volume and mount it into the nginx container at /etc/nginx/conf.d. The full configuration is here:

apiVersion: v1
kind: ConfigMap
metadata:
  name: webserver-config
  labels:
    tier: backend
data:
  config: |
    server {
        listen       80;
        server_name  localhost;
        location / {
            root   /usr/share/nginx/www/html;
            index  index.html index.htm;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   /usr/share/nginx/www/html;
        }
    }
---
apiVersion: v1
kind: Service
metadata:
  name: webserver
  labels:
    tier: backend
spec:
  selector:
    app: webserver
    tier: backend
  ports:
  - name: http
    port: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webserver
  labels:
    tier: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webserver
      tier: backend
  template:
    metadata:
      labels:
        app: webserver
        tier: backend
    spec:
      securityContext:
        fsGroup: 65533 # to make SSH key readable
      volumes:
      - name: dir
        emptyDir: {}
      - name: git-secret
        secret:
          secretName: github-creds
          defaultMode: 288
      - name: config
        configMap:
          name: webserver-config
          items:
          - key: config
            path: default.conf
      containers:
      - env:
        - name: GIT_SYNC_REPO
          value: git@github.com:<some user>/mysamplerepo.git
        - name: GIT_SYNC_BRANCH
          value: master
        - name: GIT_SYNC_SSH
          value: "true"
        - name: GIT_SYNC_PERMISSIONS
          value: "0777"
        - name: GIT_SYNC_DEST
          value: www
        - name: GIT_SYNC_ROOT
          value: /git
        name: git-sync
        image: k8s.gcr.io/git-sync:v3.1.1
        securityContext:
          runAsUser: 65533 # git-sync user
        volumeMounts:
        - name: git-secret
          mountPath: /etc/git-secret
        - name: dir
          mountPath: /git
      - name: webserver
        image: nginx:latest
        ports:
        - containerPort: 80
        volumeMounts:
        - name: dir
          mountPath: /usr/share/nginx
        - name: config
          mountPath: /etc/nginx/conf.d

Let’s apply this updated configuration using kubectl:

root@do-nyc04:/tmp# kubectl apply -f webserver.yaml 
 configmap/webserver-config created
 service/webserver unchanged
 deployment.apps/webserver configured

Let’s now reconnect and test our configuration:

root@do-nyc04:/tmp# kubectl get pod -o wide
 NAME                         READY   STATUS    RESTARTS   AGE    IP             NODE              NOMINATED NODE   READINESS GATES
 webserver-8fb84dc86-5chm5    2/2     Running   0          17s    10.244.1.53    pool-sfo01-ssy1   <none>           <none>
 root@do-nyc04:/tmp# kubectl exec -it webserver-8fb84dc86-5chm5 -c webserver /bin/bash
 root@webserver-8fb84dc86-5chm5:/# apt update;apt -y install curl
 Get:1 http://deb.debian.org/debian buster InRelease [122 kB]
 Get:2 http://deb.debian.org/debian buster-updates InRelease [49.3 kB]             
 Get:3 http://security-cdn.debian.org/debian-security buster/updates InRelease [65.4 kB]
 Get:4 http://deb.debian.org/debian buster/main amd64 Packages [7908 kB]
 Get:5 http://deb.debian.org/debian buster-updates/main amd64 Packages [5792 B]
 Get:6 http://security-cdn.debian.org/debian-security buster/updates/main amd64 Packages [167 kB]
 Fetched 8317 kB in 2s (3534 kB/s)                         
 Reading package lists... Done
 Building dependency tree       
 Reading state information... Done
 All packages are up to date.
 Reading package lists... Done
 Building dependency tree       
 Reading state information... Done
 The following additional packages will be installed:
 ...
 128 added, 0 removed; done.
 Setting up libgssapi-krb5-2:amd64 (1.17-3) ...
 Setting up libcurl4:amd64 (7.64.0-4) ...
 Setting up curl (7.64.0-4) ...
 Processing triggers for libc-bin (2.28-10) ...
 Processing triggers for ca-certificates (20190110) ...
 Updating certificates in /etc/ssl/certs...
 0 added, 0 removed; done.
 Running hooks in /etc/ca-certificates/update.d...
 done.
  
 root@webserver-8fb84dc86-5chm5:/# curl localhost
 <html>
 <body>
 hello world! Test #2
 </body>
  
 </html>

Great news! It looks like it’s fixed. Just to make sure things are still working, let’s make another change and see if it publishes.

root@webserver-8fb84dc86-5chm5:/# curl localhost
 <html>
 <body>
 hello world! Everything must be cleaned up at this point
 </body>
 </html>

W00t! Looks like everything is working as we expect. Although, this configuration is mostly useless unless you are actually inside the Kubernetes cluster. In the next article, I’ll provide some options and a hack for exposing this web server to the world.

Creating a Private GitHub Repo

The first step in my adventure was to create a location to store my web content. The most likely location for this was GitHub. The process for signing up for a GitHub account is pretty easy, so I won’t bother going through it here. I’m going to assume that you figured that part out, and I’ll begin with that assumption.

Setting Up a Private GitHub Repo

Once you have your account, you’ll need to set up your very first repo. Log in to GitHub and click the “New” button to create a repo. On the resulting screen:

  1. Enter a name for your repo
  2. Enter a description if you like
  3. Check the Private option. We’re creating a private repo here because we don’t want anyone messing with it or having access to it (more on that in a future post)
  4. Initialize the repo with a blank README. This way you can add notes to the repo later if you have anything specific to remind yourself of
  5. Click the Create repository button and, like magic, you have your very own private repo!

Make note of the “Private” label at the top of the repo. This lets you know that the repo is not available to the public on GitHub. Only those that have been specifically granted access will be permitted to view the contents and make changes.
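
As an aside, if you’d rather stay in the terminal, the same repo can be created through the GitHub REST API. This is just a sketch and not what I did; you’ll be prompted for your password or a personal access token:

imacs-imac:~ scott$ curl -u <my_git_user> -d '{"name":"mysamplerepo","private":true,"auto_init":true}' https://api.github.com/user/repos

The auto_init flag is the API equivalent of checking the initialize-with-a-README box.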

Granting Access to the Repo Via Deploy Keys

Now that we have this private repo created, we need to be able to grant access to anyone or anything that will want to make changes to it. In order to do this, we’re going to generate an SSH key on our client machine so that we can have access via the CLI (sorry, I work as much from the CLI as possible). These instructions will help you generate the SSH key and add it to the repo’s settings.

Generating the SSH Keys

First, we’ll need to generate the SSH keys that we intend to use for the repo. To do this, pop open a terminal window and run the ssh-keygen command as shown below. We’re going to make this key capable of performing both read and write operations in our repo, so be sure to supply a passphrase when generating the key.

imacs-imac:~ scott$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/Users/scott/.ssh/id_rsa):
Created directory '/Users/scott/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /Users/scott/.ssh/id_rsa.
Your public key has been saved in /Users/scott/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:o16ykEsBSueHW4G/Qu4a/5U26nfYsdrdbAUkxN60xFk scott@imacs-imac.lan
The key's randomart image is:

+---[RSA 2048]----+
|          o.. oE |
|    .      o *   |
| ..o .    . * .  |
|..o.o .    . +   |
|.  +.+  S     .  |
|  o +o...o     . |
| . ++.o=+ o   .  |
|  +..++*o+. o.   |
| ..o+++.o. ..o   |
+----[SHA256]-----+

This generates a new SSH key pair protected by a passphrase. The key will be called id_rsa and will live in the default location of ~/.ssh.
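
One note: I used the default file name above. If you already have an id_rsa, or just want a dedicated key for this repo, ssh-keygen lets you pick the file name with -f and attach a comment with -C. The file name below is just an example:

imacs-imac:~ scott$ ssh-keygen -t rsa -b 4096 -C "mysamplerepo deploy key" -f ~/.ssh/mysamplerepo_deploy

Just remember to reference that file instead of id_rsa in the ssh-add and deploy key steps that follow.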

Adding the SSH Key to Our Repo

Now that we have our SSH key, we will need to grant it access to our repo. Go back to your repo on GitHub and click on the Settings link towards the top of the screen. From there, click on the Deploy Keys link in the left navigation menu, which brings you to the Deploy keys page.

Click the Add Deploy Key button on the Deploy keys page. Go back to your CLI (or however you wanna get the contents of the file) and dump the contents of your public key file (aka the one that ends in .pub). The private one (aka the one WITHOUT .pub) should not be shared with anyone. You can see below how I’ve gotten the contents using cat.

imacs-imac:~ scott$ cat /Users/scott/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDCHVTkeP69+YLgiyWx9+DQ9TFftis6kiJMDTr/hi4nqzHlGdDUR78fy/kfAzU
Wu5cwaiuTpxvXtFK2FA+qrAoNqOzKecaVRv017PxznbRQhZ+FIfbKRua3Gt3rGSzrMvOErmL1He23jO5OZZAqpkt97E5kGO1gFmt
fb90moXDyE0GC6s/3dVcZdEDw+uge6toBF9BGO27lFtdwIs3x3rUj88BcACfi0D/0nkFxK3UjgaEuAcICpneKfVhd/jY5DnguCD5ST5lTi
Z/9hNDKfU4L1sQ0jz9gdmGhBpxpW3lRYWxBHadxKNYZFSI0IFO5VAFecNzgo/eSerIi2A9ahmTX 
scott@imacs-imac.lan

Copy the entire contents of this file and go back to the Add deploy key screen on GitHub. Paste the contents into the Key field, give it a Title, and be sure to check the Allow write access box.
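
Quick tip for fellow Mac users: pbcopy will put the whole file on the clipboard in one shot, so you don’t risk missing a character while selecting by hand:

imacs-imac:~ scott$ pbcopy < /Users/scott/.ssh/id_rsa.pub

(On Linux, xclip -selection clipboard < ~/.ssh/id_rsa.pub does roughly the same thing.)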

Once that is done, click the Add key button to add the key. If you were successful, you should see your new key listed on the Deploy keys page.

Syncing With Our Repo

Now that we have the repo set up and access granted, we’ll want to make our first commit to it. In order to do that, we first need to add our repo’s SSH key to our SSH agent. We do this using the ssh-add -k command. This will prompt for the passphrase you used when creating the key and, upon successful authentication, your key will be added.

imacs-imac:~ scott$ ssh-add -k /Users/scott/.ssh/id_rsa
 Enter passphrase for /Users/scott/.ssh/id_rsa: 
 Identity added: /Users/scott/.ssh/id_rsa (scott@imacs-imac.lan)
 imacs-imac:~ scott$ 
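
If you’d rather not run ssh-add in every new shell, an alternative is a Host entry in ~/.ssh/config. This is just a sketch, and AddKeysToAgent needs OpenSSH 7.2 or newer:

Host github.com
  IdentityFile /Users/scott/.ssh/id_rsa
  AddKeysToAgent yes

With that in place, SSH loads the key into the agent the first time it’s used, prompting for the passphrase only once.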

With the key loaded, we should be able to clone our repo to our local machine. First we can cheat by grabbing the clone link for our repo. Go back to the main repo page by clicking on the name of the repo at the top of the page (next to the “Private” tag). This should bring you back to the repo’s main page, displaying your empty README.

Click the Clone or download button to reveal the link to your repo and then copy the link. The link should look something like:

 git@github.com:<my_git_user>/mysamplerepo.git

where my_git_user is your GitHub username. With that copied, let’s go back to our terminal and do a clone to get the repo on our local machine.

imacs-imac:~ scott$ git clone git@github.com:<my_git_user>/mysamplerepo.git
 Cloning into 'mysamplerepo'...
 The authenticity of host 'github.com (192.30.253.112)' can't be established.
 RSA key fingerprint is SHA256:nThbg6kXUpJWGl7E1IGOCspRomTxdCARLviKw6E5SY8.
 Are you sure you want to continue connecting (yes/no)? yes
 Warning: Permanently added 'github.com,192.30.253.112' (RSA) to the list of known hosts.
 remote: Enumerating objects: 3, done.
 remote: Counting objects: 100% (3/3), done.
 remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 0
 Receiving objects: 100% (3/3), done.
 imacs-imac:~ scott$ 

You should now have a directory called mysamplerepo on your machine.

imacs-imac:~ scott$ ls -al mysamplerepo/
 total 8
 drwxr-xr-x   4 scott  staff  136 Dec 20 15:41 .
 drwxr-xr-x+ 22 scott  staff  748 Dec 20 15:41 ..
 drwxr-xr-x  13 scott  staff  442 Dec 20 15:41 .git
 -rw-r--r--   1 scott  staff   14 Dec 20 15:41 README.md

Now we’ve got a local copy of our repo…yaaaaay!
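
If you ever want to double-check where a clone points, git remote -v prints the fetch and push URLs; it should show the SSH link you copied earlier:

imacs-imac:~ scott$ cd mysamplerepo
imacs-imac:mysamplerepo scott$ git remote -v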

Committing Our First Change From Local Machine

Now that we’ve got our repo, we’ll want to set it up for some future articles. The first step is to create an html directory within the repo and add a simple HTML file to it.

imacs-imac:mysamplerepo scott$ mkdir html
 imacs-imac:mysamplerepo scott$ cd html/
 imacs-imac:html scott$ vi index.html

Inside the index.html, I put our obligatory “hello world” for now.

<html>
 <body>
 hello world
 </body>
 </html>

Save the contents of the file (:wq for those following along in vi). From there, we’ll need to add our untracked files (aka git add), then commit the changes and push them to the repo.

imacs-imac:html scott$ cd ..
 imacs-imac:mysamplerepo scott$ git add .
 imacs-imac:mysamplerepo scott$ git commit -a
 [master 6d43978] Creating our first deployment
  Committer: Scott <scott@imacs-imac.lan>
 Your name and email address were configured automatically based
 on your username and hostname. Please check that they are accurate.
 You can suppress this message by setting them explicitly. Run the
 following command and follow the instructions in your editor to edit
 your configuration file:
  
     git config --global --edit
  
 After doing this, you may fix the identity used for this commit with:
  
     git commit --amend --reset-author
  
  1 file changed, 5 insertions(+)
  create mode 100644 html/index.html
 imacs-imac:mysamplerepo scott$ git push origin master
 Warning: Permanently added the RSA host key for IP address '140.82.113.4' to the list of known hosts.
 Counting objects: 4, done.
 Delta compression using up to 4 threads.
 Compressing objects: 100% (2/2), done.
 Writing objects: 100% (4/4), 368 bytes | 368.00 KiB/s, done.
 Total 4 (delta 0), reused 0 (delta 0)
 To github.com:algattsm/mysamplerepo.git
    89800ef..6d43978  master -> master
 imacs-imac:mysamplerepo scott$ 

I cheated and just moved up one directory so I’m sure to add the html directory and its contents to my commit. Running git add . makes sure we include all of the new (aka untracked) files in the commit. Then git commit -a commits all of our changes and lets us give a reason (the commit message). Finally, git push origin master pushes all of the changes up to our repo.
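
You probably noticed git nagging about my identity in the commit output above. Setting your name and email once globally quiets that warning on future commits (swap in your own values, of course):

imacs-imac:mysamplerepo scott$ git config --global user.name "Scott"
imacs-imac:mysamplerepo scott$ git config --global user.email "scott@example.com"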

Let’s go check and see if this all happened properly. If you go to GitHub in your browser again and refresh the page, you should see the html directory, and it should contain the index.html file we created.

Looks like I did something right! Hopefully at this point, so did you.

What Does it all Mean and What’s Next?

Glad you asked. It means nothing at this point, but it’s a step in the direction I took. At this point, you and I should have successfully created a private repo and published content to it. High-five bro! Until the next article!

Stay Tuned…

This is just a placeholder for what is to come. I decided to start tinkering around with Kubernetes and getting it to do “fun” stuff. This will be the first of several posts explaining how I have my personal website set up and running using a combination of:

  • Kubernetes
  • Private GitHub Repo
  • Nginx
  • PHP

After a little discussion and some examples of these, I hope to put it all together and show how I have things configured and running.

The other purpose of starting this blog was to help me capture some of the steps taken along the way. I’ve looked at random configurations, examples, setups, websites, etc. I’ve also done some random trial and error and somehow got things working. In addition to providing steps to success for others, I intend for this to also act as documentation for myself.

So as the title of this first post says, stay tuned as I recreate and document my environment via this blog.