Making application configuration files dynamic with confd and Azure Redis

Service discovery and hot reconfiguration are common problems we face in cloud development nowadays. In some cases we can rely on an orchestration engine like Kubernetes to do all the work for us. In other cases we can leverage a configuration management system and do the orchestration ourselves. However, there are still some cases where either of these solutions is impractical or just too complex for the immediate problem… and you don’t have a Consul cluster at hand either :(.

confd to the rescue

confd is a binary written in Go that allows us to make configuration files dynamic by providing a templating engine driven by backend data stores like etcd, Consul, DynamoDB, Redis, Vault, and ZooKeeper. It is commonly used to let classic load balancers like Nginx and HAProxy automatically reconfigure themselves when new healthy upstream services come online under different IP addresses.

NOTE: For the sake of simplicity I will use a very simple example to demonstrate how to use confd to remotely reconfigure an Nginx route by listening to changes performed against an Azure Redis Cache backend. However, this idea can be extrapolated to solve service discovery problems whereby application instances continuously report their health and location to a Service Registry (in our case Azure Redis) that is monitored by the Load Balancer service in order to reconfigure itself if necessary.

https://www.nginx.com/blog/service-discovery-in-a-microservices-architecture

Just as a side note, confd was created by Kelsey Hightower (now Staff Developer Advocate at Google Cloud Platform) in the early Docker and CoreOS days. If you haven’t heard of Kelsey, I totally recommend searching YouTube for his talks.

Prerequisites

Azure Redis Cache

Redis, our service discovery data store, will be listening on XXXX-XXXX-XXXX.redis.cache.windows.net:6380 (where XXXX-XXXX-XXXX is your DNS prefix). confd will monitor changes to the /myapp/suggestions/drink cache key and then update the Nginx configuration accordingly.

Container images

confd + nginx container image
confd’s support for a password-protected Redis backend is still not available in the stable or alpha releases as of August 2017. I explain how to easily compile the binary and include it in an Nginx container in a previous post.

TLDR: docker pull xynova/nginx-confd

socat container image
confd is currently unable to connect to Redis over TLS (which Azure Redis Cache requires). To overcome this limitation we will use a protocol translation tool called socat, which I also cover in a previous post.

TLDR: docker pull xynova/socat

Preparing confd templates
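confd expects a template resource under conf.d and a template under templates inside its configuration directory. A minimal sketch for this example (file names, the key layout, and the Nginx snippet are my assumptions, not the post’s exact files) might look like this:

```
# confd/conf.d/myapp.toml -- the template resource confd watches
[template]
src = "myapp.conf.tmpl"
dest = "/etc/nginx/conf.d/default.conf"
keys = ["/myapp/suggestions/drink"]
reload_cmd = "nginx -s reload"

# confd/templates/myapp.conf.tmpl -- the Nginx config template
server {
    listen 80;
    location / {
        return 200 "{{getv "/myapp/suggestions/drink"}}\n";
    }
}
```

Whenever the watched key changes, confd re-renders the template into dest and triggers the reload command.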

Driving Nginx configuration with Azure Redis

We first start a xynova/nginx-confd container and mount our prepared confd configuration as a volume under the /etc/confd path. We also publish container port 80 to port 8080 on localhost so that we can reach Nginx by browsing to http://localhost:8080.
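As a sketch (the local directory layout and the container name are assumptions), the command could look like:

```shell
# start nginx + confd, mounting our confd config and publishing port 80 to 8080
docker run -it --rm \
  --name nginx \
  -p 8080:80 \
  -v "$PWD/confd:/etc/confd" \
  xynova/nginx-confd
```

The --name nginx is what lets us join this container’s network namespace later on.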


The interactive session logs show us that confd fails to connect to Redis on 127.0.0.1:6379 because there is no Redis service inside the container.

To fix this we bring in xynova/socat to create a tunnel that confd can use to talk to Azure Redis Cache in the cloud. We open a new terminal and type the following (note: replace XXXX-XXXX-XXXX with your own Azure Redis prefix).
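A sketch of the command (assuming the xynova/socat image’s entrypoint is socat itself; verify=0 skips certificate validation, which keeps the example simple but is not something you would want in production):

```shell
# listen on 127.0.0.1:6379 inside the nginx container's network namespace
# and forward each connection over TLS to Azure Redis on port 6380
docker run -it --rm \
  --net container:nginx \
  xynova/socat \
  TCP-LISTEN:6379,fork \
  OPENSSL:XXXX-XXXX-XXXX.redis.cache.windows.net:6380,verify=0
```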

Notice that by specifying the --net container:nginx option, I am instructing the xynova/socat container to join the network namespace of the xynova/nginx-confd container. This is how we get the two containers to share the same private localhost sandbox.

Now, looking back at our interactive logs, we can see that confd is talking to Azure Redis but cannot find the /myapp/suggestions/drink cache key.

Let’s just set a value for that key:
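One way to do this (an assumption on my part: a throwaway redis-cli container joined to the nginx container’s network namespace, so the command travels through the socat tunnel; replace $REDIS_KEY with your Azure Redis access key):

```shell
# set the initial value for the key confd is watching
docker run -it --rm --net container:nginx redis:alpine \
  redis-cli -a "$REDIS_KEY" SET /myapp/suggestions/drink Covfefe
```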

confd is now happily synchronized with Azure Redis and the Nginx service is up and running.

We now browse to http://localhost:8080 to test our container composition:

Covfefe… should we fix that?
Let’s update the /myapp/suggestions/drink key to coffee.
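One way to do that (again an assumption: a throwaway redis-cli container joined to the nginx container’s network namespace so it reaches Azure Redis through the socat tunnel; $REDIS_KEY is a placeholder for your access key):

```shell
# overwrite the watched key with the corrected value
docker run -it --rm --net container:nginx redis:alpine \
  redis-cli -a "$REDIS_KEY" SET /myapp/suggestions/drink coffee
```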

Watch how confd notices the change and proceeds to update the target config files.

Now if we refresh our browser we see:

Happy hacking.

 

Build from source and package into a minimal image with the new Docker Multi-Stage Build feature

confd is a binary written in Go that can help us make configuration files dynamic. It achieves this by providing a templating engine driven by backend data stores like etcd, Consul, DynamoDB, Redis, Vault, and ZooKeeper.

https://github.com/kelseyhightower/confd

A few days ago I started putting together a BYO load-balancing PoC in which I wanted to use confd and Nginx. I realised, however, that some features I needed from confd had not yet been released. Not a problem: I was able to compile the master branch and package the resulting binary into an Nginx container all in one go, without even having Golang installed on my machine. Here is how:

confd + Nginx with Docker Multi-Stage builds

First I create my container startup script, docker-confd/nginx-confd.sh.
This script launches nginx and confd in the container, but tracks both processes so that the container exits if either of them fails.

Normally you want only one process per container. In my particular case I have inter-process signaling between confd and Nginx, so it is easier to keep both processes together.
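A minimal sketch of what such a startup script could look like (the confd flags are illustrative for this Redis setup; the patched build also takes the Redis password, which I omit here):

```shell
#!/bin/sh
# nginx-confd.sh (sketch): run nginx and confd side by side and
# exit as soon as either process dies, so the container stops too.

nginx -g 'daemon off;' &
nginx_pid=$!

confd -interval 10 -backend redis -node 127.0.0.1:6379 &
confd_pid=$!

# poll both processes; leave the loop when one of them is gone
while kill -0 "$nginx_pid" 2>/dev/null && kill -0 "$confd_pid" 2>/dev/null; do
  sleep 1
done
exit 1
```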

Now I create my Multi-Stage build Dockerfile: docker-confd/Dockerfile.
I denote a build stage by using the AS <STAGE-NAME> keyword: FROM golang:1.8.3-alpine3.6 AS confd-build-stage. I can then reference the stage by name further down, when copying the resulting binary into the Nginx container.
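A sketch of such a Dockerfile (the nginx tag, build dependencies, and output path are assumptions based on confd’s standard Makefile layout):

```dockerfile
# --- build stage: compile confd from the master branch ---
FROM golang:1.8.3-alpine3.6 AS confd-build-stage
RUN apk add --no-cache git make
RUN git clone https://github.com/kelseyhightower/confd.git \
    /go/src/github.com/kelseyhightower/confd
WORKDIR /go/src/github.com/kelseyhightower/confd
RUN make build

# --- final stage: minimal nginx image with just the confd binary ---
FROM nginx:1.13.3-alpine
COPY --from=confd-build-stage \
    /go/src/github.com/kelseyhightower/confd/bin/confd /usr/local/bin/confd
COPY nginx-confd.sh /usr/local/bin/nginx-confd.sh
CMD ["/usr/local/bin/nginx-confd.sh"]
```

The build stage with the whole Go toolchain is discarded; only the compiled binary is copied into the final image.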

Now I build my image by executing docker build -t confd-nginx-local docker-confd.

Done! Only about 15 MB added on top of the Nginx Alpine base image.

Read more about the Multi-Stage Build feature on the Docker website.

Happy hacking.

Creating a simple nodejs API on AWS (including nginx)

On a recent project I was part of a team developing an AngularJS website with a C# ASP.NET backend API hosted in Azure. It was a great project, as I got to work with a bunch of new tools, but it got me wondering how simple it could be to use a JavaScript API instead. That way the entire development stack would be written in JavaScript.

And so a personal project was born: create a simple JS API and get it running in the cloud.

Getting started

So here goes: a quick guide to setting up a nodejs API using the Express framework.

I’ll start by getting the environment running locally on my Mac in 6 simple steps:

# 1. Install Express (the -g will install it globally)
$ npm install express -g

# 2. Install the express generator as it makes life easier!
$ npm install express-generator -g

# 3. Create your project (the generator creates the directory for you)
$ express [your_project_name_here]

# 4. Move into your new project directory
$ cd [your_project_name_here]

# 5. Install any missing dependencies
$ npm install

# 6. Start your API
$ npm start

That’s it. You now have a nodejs API running locally on your development environment!

To test it, and prove to yourself it’s working fine, run the following curl command:

$ curl http://localhost:3000/users

If everything worked as planned, you should see “respond with a resource” printed in your terminal window. Now this is clearly as basic as it gets, but you can easily make it [a bit] more interesting by adding a new file called myquickapi.js to your [app name]/routes/ folder, with the following content:

var express = require('express');
var router = express.Router();

// get method route
router.get('/', function (req, res) {
    res.send('You sent a get request');
});

// post method route
// (req.param() is deprecated, so read the query string directly)
router.post('/', function (req, res) {
    res.send('You sent me ' + req.query.data);
});

module.exports = router;

A quick change to the app.js file to wire up our new route, by adding the following 2 lines:

var myquickapi = require('./routes/myquickapi');
app.use('/myquickapi', myquickapi);

Restart your node service, and run:

$ curl -X POST "http://localhost:3000/myquickapi?data=boo"

And you will see the API handle the request parameter and echo it back to the caller.

Spin up an AWS EC2 instance

Log into the AWS portal and create a new EC2 instance. For my project, as it is only a dev environment, I have gone for a General Purpose t2.micro Ubuntu Server. Plus it’s free, which happens to be okay too!

Once the instance is up and running, you will want to configure the security group to allow inbound traffic on ports 80 and 443 – after all, it is a web API and I guess you want to access it! I also enabled SSH for my local network:


Using your pem file, SSH into your new instance, and once connected, run the following commands:


# 1. update any out of date packages
sudo apt-get update

# 2. install nodejs
sudo apt-get install nodejs

# 3. install node package manager
sudo apt-get install npm

Now you can run node using the nodejs command. This is great, but not for the JS packages we’ll be using later on, which reference the node command instead. A simple fix is to create a symlink to the nodejs command:


$ sudo ln -s /usr/bin/nodejs /usr/bin/node

Set up nginx on your EC2 instance

We’ll use nginx on our server to proxy network traffic to the running nodejs instance.  Install nginx using the following commands:


# 1. install nginx

$ sudo apt-get install nginx

# 2. make a directory for your sites
$ sudo mkdir -p /var/www/express-api/public

# 3. set the permission of the folder so it is accessible publicly
$ sudo chmod 755 /var/www

# 4. remove the default nginx block
$ sudo rm /etc/nginx/sites-enabled/default

# 5. create the virtual hosts file
$ sudo nano /etc/nginx/sites-available/[your_api_name]

Now copy the following content into your virtual hosts file and save it:

upstream app_nodejs {
    server 127.0.0.1:3000;
    keepalive 8;
}

server {
    listen 80;
    listen [::]:80 default_server ipv6only=on;
    # note: serving on 443 also requires ssl_certificate and
    # ssl_certificate_key directives to be configured
    listen 443 default ssl;

    root /var/www/[your_site_folder]/public/[your_site_name];
    index index.html index.htm;

    # Make site accessible from http://localhost/
    server_name [your_server_domain_name];

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;

        proxy_pass http://localhost:3000/;
        proxy_redirect off;
    }
}

This basically tells your server to listen on ports 80 and 443 and proxy any incoming traffic to your locally running node server on port 3000. A simple approach for now, but all that is needed to get our API up and running.

Activate your newly created hosts file by running the following command:

$ sudo ln -s /etc/nginx/sites-available/[your_api_name] /etc/nginx/sites-enabled/[your_api_name]

Now restart nginx for your settings to take effect:

$ sudo service nginx restart

As a sanity test, you can run the following command to confirm everything is set up correctly. If you are in good shape, you will see a confirmation that the configuration test was successful.

$ sudo nginx -c /etc/nginx/nginx.conf -t

Final steps

The last step in the process, which you could argue is the most important, is to copy your API code onto your new web server. If you are building this for a production system then I would encourage using a deployment tool at this stage (and, to be frank, probably a different approach altogether), but for now a simple secure copy is probably all that’s required:


$ scp -r [your_api_directory] your_username@aws_ec2_api:/var/www/[your_site_folder]/public/

And that’s it. Fire up a terminal and try running the curl commands against your EC2 instance rather than your local machine. If all has gone well, you should get the same response as you did with your local environment (and it’ll be lightning fast).

… Just for fun

If you disconnect the SSH connection to your server, it will stop the application from running. That is a fairly big problem for a web API, but a simple one to fix.

A quick solution is to use the Forever tool.

Install it, and start your app (you’ll be glad you added the node symlink earlier):


$ sudo npm install -g forever

$ sudo forever start /var/www/[your_site_folder]/public/[your_site_name]/bin/www

 

Hopefully this has provided a good insight into setting up a nodejs API on AWS. At the moment it is fairly basic, but, time permitting, I would like to build on the API and add features to make it more usable – I’d particularly like to add a JavaScript OAuth 2.0 endpoint!

Watch this space for future updates, as I add new features I will be sure to blog about the learnings I find along the way.

As always: any questions, just reach out to me, or post them below.