Preparing your Docker container for Azure App Services

Like other cloud platforms, Azure is increasingly leveraging containers to provide flexible managed environments for running applications. App Service on Linux is one such case: it lets us bring our own home-baked Docker images containing all the tools we need to make our apps work.

This service is still in preview and obviously has a few limitations:

  • Only one container per service instance, in contrast to Azure Container Instances.
  • No VNet integration.
  • An SSH server is required to attach to the container.
  • Single port configuration.
  • No ability to limit the container’s memory or processor.

Having said this, we do get a good 50% discount for the time being, which is not a bad thing.

The basics

In this post I will cover how to set up an SSH server inside our Docker images so that we can inspect and debug containers hosted in Azure App Service for Linux.

It is important to note that running SSH inside containers is a widely discouraged practice and should be avoided in most cases. Azure App Service mitigates the risk by only granting SSH port access to the Kudu infrastructure, which we tunnel through. We also don’t need SSH when we are not running inside the App Service engine, so we can play it safe by only enabling SSH when a flag such as the ENABLE_SSH environment variable is present.

Running an SSH daemon alongside our app also means that we will have more than one process per container. For cases like these, Docker lets us enable an init manager per container that makes sure no orphaned child processes are left behind on container exit. Since this feature requires docker run rights that App Service does not grant for security reasons, we must package and configure the init binary ourselves when building the Docker image.

Building our Docker image

TLDR: docker pull xynova/appservice-ssh

The basic structure looks like the following:
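Roughly speaking, the docker-ssh build context holds the Dockerfile, the entrypoint.sh startup script and the SSH configuration (reconstructed from the files referenced in this post; the exact layout may differ):

  docker-ssh/
  ├── Dockerfile
  ├── entrypoint.sh
  └── ssh-config/
      └── sshd_config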

The SSH configuration

The /ssh-config/sshd_config specifies the SSH server configuration required by App Services to establish connectivity with the container:

  • The daemon needs to listen on port 2222.
  • Password authentication must be enabled.
  • The root user must be able to log in.
  • The Ciphers and MACs security settings must be the ones displayed below.
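The full file isn’t reproduced here, but a minimal sshd_config along these lines (modelled on the requirements above; treat the exact cipher and MAC lists as indicative) keeps App Services happy:

  Port                   2222
  ListenAddress          0.0.0.0
  LoginGraceTime         180
  X11Forwarding          yes
  Ciphers                aes128-cbc,3des-cbc,aes256-cbc,aes128-ctr,aes192-ctr,aes256-ctr
  MACs                   hmac-sha1,hmac-sha1-96
  StrictModes            yes
  SyslogFacility         DAEMON
  PasswordAuthentication yes
  PermitEmptyPasswords   no
  PermitRootLogin        yes
  Subsystem sftp internal-sftp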

The container startup script

The entrypoint.sh script manages the application startup:

If the ENABLE_SSH environment variable equals true then the setup_ssh() function sets up the following:

  • Change the root user password to Docker! (required by App Services).
  • Generate the SSH host keys required by SSH clients to authenticate the SSH server.
  • Start the SSH daemon in the background.

App Services requires the container to have an application listening on the configurable public service port (80 by default). Without this listener, the container will be flagged as unhealthy and restarted indefinitely. The start_app() function, as its name implies, runs a web server (http-echo) that listens on port 80 and simply prints all incoming request headers back out in the response.
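The script itself isn’t reproduced here, but a minimal sketch of entrypoint.sh along the lines described above could look like this (the http-echo path is an assumption):

  #!/bin/sh
  set -e

  setup_ssh() {
      # Root password expected by the App Service SSH tunnel
      echo "root:Docker!" | chpasswd
      # Generate host keys so SSH clients can authenticate the server
      ssh-keygen -A
      # Start the SSH daemon in the background
      /usr/sbin/sshd
  }

  start_app() {
      # Web server listening on the public service port (80 by default);
      # binary name and location are illustrative
      exec /usr/local/bin/http-echo
  }

  if [ "$ENABLE_SSH" = "true" ]; then
      setup_ssh
  fi

  start_app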

The Dockerfile

There is nothing too fancy about the Dockerfile either. I use the multi-stage build feature to compile the http-echo server and then copy it across to an Alpine image in the PACKAGING STAGE. This second stage also installs openssh and tini and sets up additional configs.

Note that the init process manager is started through ENTRYPOINT ["/sbin/tini","--"] clause, which in turn receives the monitored entrypoint.sh script as an argument.
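The Dockerfile isn’t reproduced in full here; its shape, with illustrative paths and image tags, is roughly:

  # BUILD STAGE: compile the http-echo server
  FROM golang:1.8-alpine AS http-echo-build-stage
  COPY ./src /go/src/http-echo
  RUN go build -o /http-echo http-echo

  # PACKAGING STAGE: minimal runtime image with SSH support
  FROM alpine:3.6
  RUN apk add --no-cache openssh tini
  COPY --from=http-echo-build-stage /http-echo /usr/local/bin/http-echo
  COPY ssh-config/sshd_config /etc/ssh/sshd_config
  COPY entrypoint.sh /entrypoint.sh
  RUN chmod +x /entrypoint.sh
  EXPOSE 80 2222
  ENTRYPOINT ["/sbin/tini", "--"]
  CMD ["/entrypoint.sh"]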

Let us build the container image by executing docker build -t xynova/appservice-ssh docker-ssh. You are free to tag the container and push it to your own Docker repository.

Trying it out

First we create our App Service on Linux instance and set the custom Docker container we will use (xynova/appservice-ssh if you want to use mine). Then we set the ENABLE_SSH=true environment variable to activate the SSH server on container startup.
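The original setup is done through the Azure portal; if you prefer the CLI, something along these lines should achieve the same result (resource names are placeholders):

  $ az webapp config container set --name <APP-NAME> --resource-group <RESOURCE-GROUP> \
      --docker-custom-image-name xynova/appservice-ssh
  $ az webapp config appsettings set --name <APP-NAME> --resource-group <RESOURCE-GROUP> \
      --settings ENABLE_SSH=true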

Now we can make a GET request to the App Service URL to trigger a container download and activation. If everything works, you should see something like the following:

One thing to notice here is the X-Arr-Ssl header. This header is passed down by the Azure App Service internal load balancer when the app is being browsed over SSL. You can check for this header if you want to trigger HTTP-to-HTTPS redirections.

Moving on, we jump into the Kudu dashboard as follows:

Select the SSH option from the Debug console (the Bash option will take you to the Kudu container instead).

DONE! We are now inside the container.

Happy hacking!

Google Cloud Platform: an entrée

The opening of a Google Cloud Platform region in Sydney about two months ago triggered my interest in learning more about the platform and understanding how its offering might affect the local market moving forward.

So far, I have concentrated mainly on GCP’s IaaS offering by digging information out of videos and documentation and by venturing through the portal and Cloud Shell. I would like to share my first findings and highlight a few features that, in my opinion, make it worth a closer look.

Virtual Networks are global

Virtual Private Clouds (VPCs) are global by default; this means that workloads in any GCP region can be one traceroute hop away from each other in the same private network. Firewall rules can also be applied at a global scope, simplifying preparation activities for regional failover.

Global HTTP load balancing is another feature that allows a single entry-point address to direct traffic to the most appropriate backend around the world. This is a very interesting advantage over DNS-based solutions because global load balancing can react instantaneously.

Subnets and Availability Zones are independent 

Google Cloud Platform subnets cover an entire region. Regions still have multiple Availability Zones but they are not directly bound to a particular subnet. This comes in handy when we want to move a Virtual Machine across AZs but keep the same IP address.

Subnets also enable turning Private Google API access on or off with a simple switch. Private access allows Virtual Machines without Internet access to reach Google APIs and services using their internal IPs.

Live Migration across Availability Zones

GCP supports Live Migration within a region. This feature keeps machines up and running during events like infrastructure maintenance, host and security upgrades, hardware failures, etc.

A very nice addition to this feature is the ability to migrate a Virtual Machine into a different AZ with a single command:

$ gcloud compute instances move example-instance  \
  --zone <ZONEA> --destination-zone <ZONEB>

Notice the internal IP is preserved:

The Snapshot service is also global

Moving instances across regions is not as straightforward as moving them within Availability Zones. However, since Compute Engine’s Snapshot service is global, the process is still quite simple.

  1. I create a Snapshot from the VM instance’s disk.
  2. I create a new Disk from the Snapshot, placing it in the AZ of the target region I want to move the VM to.
  3. Then I can create a new VM using that Disk (see the gcloud sketch below).
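The gcloud equivalent looks roughly like this (disk, snapshot, instance and zone names are placeholders):

  $ gcloud compute disks snapshot example-disk \
      --snapshot-names example-snapshot --zone <SOURCE-ZONE>
  $ gcloud compute disks create example-disk-copy \
      --source-snapshot example-snapshot --zone <TARGET-ZONE>
  $ gcloud compute instances create example-instance-copy \
      --disk name=example-disk-copy,boot=yes --zone <TARGET-ZONE>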

An interesting consequence of Snapshots being global is that we can use them as a data-transfer alternative between regions that incurs no ingress/egress charges.

You can attach VMs to multiple VPCs

Although still in beta, GCP allows us to attach multiple NICs to a machine and have each interface connect to a different VPC.

Aside from the usual security benefits of perimeter and DMZ isolation, this feature gives us the ability to share third-party appliances across different projects: for example having all Internet ingress and egress traffic inspected and filtered by a common custom firewall box in the account.

Cloud Shell comes with batteries included

Cloud Shell is just awesome. Apart from its outgoing connections being restricted to ports 20, 21, 22, 80, 443, 2375, 2376, 3306, 8080, 9600 and 50051, it is such a handy tool for quickly putting together PoCs.

  • You get your own Debian VM with tmux multi-tab support.
  • Docker up and running to build and test containers.
  • Full apt-get capabilities.
  • You can upload files into it directly from your desktop.
  • A brand new integrated code editor if you don’t like using vim, nano and so on.
  • Lastly, it has a web preview feature allowing you to run your own web server on ports 8080 to 8084 to test your PoC from the internet.

SSH is managed for you

GCP SSH key management is one of my favourite features so far. SSH key pairs are created and managed for you whenever you connect to an instance from the browser or with the gcloud command-line tool. User access is controlled by Identity and Access Management (IAM) roles, with GCP creating and applying short-lived SSH key pairs on the fly when necessary.

Custom instances, custom pricing

Although a custom machine type can be viewed as something that covers a very niche use case, it can in fact help us price the right amount of instance RAM and CPU for the job at hand. Having said this, we also lose the excuse of buying plenty of RAM and CPU that we will never need.
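Creating a custom machine type is simply a matter of specifying the vCPU count and memory you actually want, for example (instance name, sizes and zone are illustrative):

  $ gcloud compute instances create custom-example \
      --custom-cpu 6 --custom-memory 12GB --zone australia-southeast1-a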

Discounts, discounts and more discounts

I wouldn’t put my head in the lion’s mouth about pricing at this time, but there are a large number of cloud cost analysis reports that categorise GCP as cheaper than the competition. Having said this, I still believe it comes down to having the right implementation and setup: you might not manage the infrastructure directly in the cloud, but you should definitely manage your costs.

GCP offers sustained-use discounts for instances that run for over a given percentage of the billing month (25%, 50%, 75% and 100%), and it also recently released 1- and 3-year committed-use discounts that can reach up to 57% off the original instance price. Finally, Preemptible instances (similar to AWS Spot instances) can reach up to an 80% discount from the list price.

Another very nice feature to help manage cost is Compute Engine’s sizing recommendations. These recommendations are generated from system metrics and can help identify workloads that could be resized to make more appropriate use of resources.

Interesting times ahead

Google has been making big progress with its platform over the last two years. According to some analyses it still has some ground to cover to reach its competitors’ level, but as we just saw, GCP is coming with some very interesting cards up its sleeve.

One thing is for sure… interesting times lie ahead.

Happy window shopping!

 

Making application configuration files dynamic with confd and Azure Redis

Service discovery and hot reconfiguration is a common problem we face in cloud development nowadays. In some cases we can rely on an orchestration engine like Kubernetes to do all the work for us. In other cases we can leverage a configuration management system and do the orchestration ourselves. However, there are still cases where either of these solutions is impractical or just too complex for the immediate problem… and you don’t have a Consul cluster at hand either :(.

confd to the rescue

confd is a binary written in Go that allows us to make configuration files dynamic by providing a templating engine driven by backend data stores like etcd, Consul, DynamoDB, Redis, Vault and ZooKeeper. It is commonly used to let classic load balancers like Nginx and HAProxy automatically reconfigure themselves when new healthy upstream services come online under different IP addresses.

NOTE: For the sake of simplicity I will use a very simple example to demonstrate how to use confd to remotely reconfigure an Nginx route by listening for changes against an Azure Redis Cache backend. However, this idea can be extrapolated to solve service discovery problems whereby application instances continuously report their health and location to a Service Registry (in our case Azure Redis), which the load balancer service monitors so it can reconfigure itself when necessary.

https://www.nginx.com/blog/service-discovery-in-a-microservices-architecture

Just as a side note, confd was created by Kelsey Hightower (now Staff Developer Advocate at Google Cloud Platform) in the early Docker and CoreOS days. If you haven’t heard of Kelsey, I totally recommend searching YouTube for any of his talks.

Prerequisites

Azure Redis Cache

Redis, our Service Discovery data store, will be listening on XXXX-XXXX-XXXX.redis.cache.windows.net:6380 (where XXXX-XXXX-XXXX is your DNS prefix). confd will monitor changes on the /myapp/suggestions/drink cache key and then update the Nginx configuration accordingly.

Container images

confd + nginx container image
confd’s support for a password-protected Redis backend is not yet available in the stable or alpha releases as of August 2017. I explain how to easily compile the binary and include it in an Nginx container in a previous post.

TLDR: docker pull xynova/nginx-confd

socat container image
confd is currently unable to connect to Redis through TLS (required by Azure Redis Cache). To overcome this limitation we will use a protocol translation tool called socat which I also talk about in a previous post.

TLDR: docker pull xynova/socat

Preparing confd templates
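The template files themselves aren’t reproduced here; a minimal setup along these lines (paths and the Nginx snippet are illustrative) gives the idea. The template resource at /etc/confd/conf.d/myapp.toml tells confd which key to watch, which template to render and how to reload Nginx:

  [template]
  src = "myapp.tmpl"
  dest = "/etc/nginx/conf.d/default.conf"
  keys = [ "/myapp/suggestions/drink" ]
  reload_cmd = "nginx -s reload"

The template at /etc/confd/templates/myapp.tmpl then renders the watched key into the Nginx configuration:

  server {
      listen 80;
      location / {
          return 200 'The suggested drink is {{getv "/myapp/suggestions/drink"}}\n';
      }
  }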

Driving Nginx configuration with Azure Redis

We first start a xynova/nginx-confd container and mount our prepared confd configuration as a volume under the /etc/confd path. We also bind container port 80 to port 8080 on localhost so that we can reach Nginx by browsing to http://localhost:8080.
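The exact command isn’t shown here; it would look something like this (the ./confd host path is an assumption, and the container is named nginx so we can reference it later):

  $ docker run -it --rm --name nginx \
      -p 8080:80 \
      -v $(pwd)/confd:/etc/confd \
      xynova/nginx-confd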


The interactive session logs show us that confd fails to connect to Redis on 127.0.0.1:6379 because there is no Redis service inside the container.

To fix this we bring in xynova/socat to create a tunnel that confd can use to talk to Azure Redis Cache in the cloud. We open a new terminal and start the tunnel (note: replace XXXX-XXXX-XXXX with your own Azure Redis prefix).
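The command isn’t reproduced in the original extract; something like this would work, reusing the socat arguments covered in the socat post:

  $ docker run -it --rm --name socat \
      --net container:nginx \
      xynova/socat \
      -v TCP-LISTEN:6379,fork,reuseaddr \
      openssl-connect:XXXX-XXXX-XXXX.redis.cache.windows.net:6380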

Notice that by specifying the --net container:nginx option, I am instructing the xynova/socat container to join the network namespace of the xynova/nginx-confd container. This is how we get the two containers to share the same private localhost sandbox.

Now, looking back at our interactive logs, we can see that confd is talking to Azure Redis but cannot find the /myapp/suggestions/drink cache key.

Let’s just set a value for that key.
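One way to do this (not necessarily how the original post did it) is to run redis-cli in a container that joins the same network namespace, so it can reach Azure Redis through the socat tunnel; the value used here matches the response we are about to see in the browser:

  $ docker run -it --rm --net container:nginx redis:alpine \
      redis-cli -h 127.0.0.1 -p 6379 -a THE-XXXX-PASSWORD \
      set /myapp/suggestions/drink covfefe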

confd is now happily synchronized with Azure Redis and the Nginx service is up and running.

We now browse to http://localhost:8080 and test our container composition:

Covfefe… should we fix that?
We just set the /myapp/suggestions/drink key to coffee.

Watch how confd notices the change and proceeds to update the target config files.

Now if we refresh our browser we see:

Happy hacking.

 

Build from source and package into a minimal image with the new Docker Multi-Stage Build feature

confd is a binary written in Go that can help us make configuration files dynamic. It achieves this by providing a templating engine that is driven by backend data stores like etcd, Consul, DynamoDB, Redis, Vault and ZooKeeper.

https://github.com/kelseyhightower/confd

A few days ago I started putting together a BYO load-balancing PoC in which I wanted to use confd and Nginx. I realised, however, that some of the confd features I needed were not yet released. Not a problem: I was able to compile the master branch and package the resulting binary into an Nginx container all in one go, without even having Golang installed on my machine. Here is how:

confd + Nginx with Docker Multi-Stage builds

First I will create my container startup script, docker-confd/nginx-confd.sh.
This script launches nginx and confd in the container and tracks both processes so that the container exits if either of them fails.

Normally you want to have only one process per container. In my particular case I have inter-process signalling between confd and Nginx, and therefore it is easier for me to keep both processes together.
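The script isn’t reproduced here; a rough sketch of the idea (the confd flags are illustrative) could be:

  #!/bin/sh
  # Start nginx and confd side by side and exit if either of them dies.

  nginx -g "daemon off;" &
  NGINX_PID=$!

  confd -interval 10 -backend redis -node 127.0.0.1:6379 &
  CONFD_PID=$!

  # Poll both processes ('wait -n' is not available in plain sh).
  while kill -0 "$NGINX_PID" 2>/dev/null && kill -0 "$CONFD_PID" 2>/dev/null; do
      sleep 1
  done

  exit 1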

Now I create my multi-stage build Dockerfile: docker-confd/Dockerfile.
I denote a build stage by using the AS <STAGE-NAME> keyword: FROM golang:1.8.3-alpine3.6 AS confd-build-stage. I can then reference the stage by name further down when I am copying the resulting binary into the Nginx container.
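The Dockerfile isn’t reproduced in full; its rough shape (repository paths and image tags are illustrative) is:

  # BUILD STAGE: compile confd from the master branch
  FROM golang:1.8.3-alpine3.6 AS confd-build-stage
  RUN apk add --no-cache git make bash
  RUN git clone https://github.com/kelseyhightower/confd.git \
      /go/src/github.com/kelseyhightower/confd
  WORKDIR /go/src/github.com/kelseyhightower/confd
  RUN make build

  # PACKAGING STAGE: copy the compiled binary into an Nginx image
  FROM nginx:1.13-alpine
  COPY --from=confd-build-stage /go/src/github.com/kelseyhightower/confd/bin/confd /usr/local/bin/confd
  COPY nginx-confd.sh /usr/local/bin/nginx-confd.sh
  RUN chmod +x /usr/local/bin/nginx-confd.sh
  CMD ["/usr/local/bin/nginx-confd.sh"]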

Now I build my image by executing docker build -t confd-nginx-local docker-confd-nginx.

DONE! Just about 15 MB added on top of the Nginx Alpine base image.

Read more about the Multi-Stage Build feature on the Docker website.

Happy hacking.

SSL Tunneling with socat in Docker to safely access Azure Redis on port 6379

Redis Cache is an advanced key-value store that we should all have come across in one way or another by now. Azure, AWS and many other cloud providers have fully managed offerings for it, which is “THE” way we want to consume it. As a little bit of insight, Redis itself was designed for use within a trusted private network and does not support encrypted connections. Public offerings like Azure use TLS reverse proxies to overcome this limitation and provide security around the service.

However, some Redis client libraries out there do not talk TLS. This becomes a problem when they are part of other tools that you want to compose your applications with.

Solution? We bring in something that can help us do protocol translation.

socat – Multipurpose relay (SOcket CAT)

Socat is a command line based utility that establishes two bidirectional byte streams and transfers data between them. Because the streams can be constructed from a large set of different types of data sinks and sources (see address types), and because lots of address options may be applied to the streams, socat can be used for many different purposes.

https://linux.die.net/man/1/socat

In short: it is a tool that can establish a communication between two points and manage protocol translation between them.

An interesting fact is that socat is currently used to port forward docker exec onto nodes in Kubernetes. It does this by creating a tunnel from the API server to Nodes.

Packaging socat into a Docker container

One of the great benefits of Docker is that it allows you to work in sandbox environments. These environments are then fully transportable and can eventually become part of your overall application.

The following procedure prepares a container that includes the socat binary and common certificate authorities required for public TLS certificate chain validation.

We first create our docker-socat/Dockerfile:
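The file contents aren’t shown above; a minimal version along these lines does the job:

  FROM alpine:3.6
  # socat for the tunnel, ca-certificates for public TLS chain validation
  RUN apk add --no-cache socat ca-certificates
  ENTRYPOINT ["socat"]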

Now we build a local Docker image by executing docker build -t socat-local docker-socat. You are free to push this image to a Docker Registry at this point.

Creating a TLS tunnel into Azure Redis

To access Azure Redis we need two things:

  1. The FQDN: XXXX-XXXX-XXXX.redis.cache.windows.net:6380
    (where all the X’s represent your DNS name).
  2. The access key, found under the Access Keys menu of your Cache instance. I will call it THE-XXXX-PASSWORD.

Let’s start our socat tunnel by spinning up the container we just built the image for. Notice I am binding port 6379 to my desktop so that I can connect to the tunnel from localhost:6379 on my machine.
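The command isn’t reproduced here; it would look roughly like this:

  $ docker run -d --name socat \
      -p 6379:6379 \
      socat-local \
      -v TCP-LISTEN:6379,fork,reuseaddr \
      openssl-connect:XXXX-XXXX-XXXX.redis.cache.windows.net:6380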

Now let’s have a look at the arguments I am passing to socat (which is automatically invoked thanks to the ENTRYPOINT ["socat"] instruction we included when building the container image).

  1. -v
    For checking logs when doing docker logs socat.
  2. TCP-LISTEN:6379,fork,reuseaddr
    – Start a socket listener on port 6379.
    – fork allows subsequent connections (otherwise it is a one-off).
    – reuseaddr allows socat to restart and use the same port (in case a previous one is still held by the OS).
  3. openssl-connect:XXXX-XXXX-XXXX.redis.cache.windows.net:6380
    – Create a TLS connection tunnel to the Azure Redis Cache.

Testing connectivity to Azure Redis

Now I will just test my tunnel using redis-cli, which I can also run from a container. In this case THE-XXXX-PASSWORD is the Redis Access Key.
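Something along these lines does it (the redis image tag is illustrative):

  $ docker run -it --rm --net host redis:alpine \
      redis-cli -h 127.0.0.1 -p 6379 -a THE-XXXX-PASSWORD ping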

The thing to notice here is the --net host flag. This instructs Docker not to create a new virtual NIC and namespace to isolate the container network, but to use the host’s (my desktop’s) interface instead. This means that localhost in the container is really localhost on my desktop.

If everything is set up properly and outgoing connections on port 6380 are allowed, you should get a PONG message back from Redis.

Happy hacking!

Gracefully managing Gulp process hierarchy on Windows

Process Tree

When developing client side JavaScript, one thing that really comes in handy is the ability to create fully functional stubs that can mimic real server APIs. This decouples project development dependencies and allows different team members to work in parallel against an agreed API contract.

To allow people to have an isolated environment to work on and get immediate feedback on their changes, I leverage the Gulp.js + Node.js duo.

“Gulp.js is a task runner that runs on Node.js and is normally used to automate UI development workflows such as LESS-to-CSS conversion, HTML templating, CSS and JS minification, etc. However, it can also be used to fire up local web servers, custom processes and many other things.”

To make things really simple, I set up Gulp to allow anyone to enter gulp dev on a terminal to start the following workflow:

  1. Prepare distribution files
    – Compile LESS or SASS style sheets
    – Minify and concatenate JavaScript files
    – Compile templates (AngularJS)
    – Version resources for cache busting (using a file hash)
    – etc.
  2. Start stub servers
    – Start an API mock-up server
    – Start a server that can serve local HTML layout files
  3. Open a browser pointing to the local app (Chrome or IE depending on the platform)
  4. Fire up file watchers to trigger specific builds and browser reloads when project files change.

The following is an extract of the entry point for these gulp tasks (gulp and gulp dev):
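The extract isn’t reproduced here; its general shape would be something like the following (task names other than DEV START.servers are illustrative):

  // gulpfile.js (extract)
  var gulp = require('gulp');
  var requireDir = require('require-dir');

  // Pull in the tasks defined under ./gulp-tasks, including DEV START.servers
  requireDir('./gulp-tasks');

  // 'gulp dev' prepares the distribution files, starts the stub servers,
  // opens the browser and fires up the watchers
  gulp.task('dev', ['DEV START.servers'], function () {
      // watchers and browser launch are wired up here
  });

  gulp.task('default', ['dev']);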

The DEV START.servers task is imported by the requireDir('./gulp-tasks') instruction. This task triggers other dependent workflows that spin up plain Node.js web servers that are, in turn, kept alive by a process supervisor library called forever-monitor.

There is a catch, however. If you are working on a Unix-friendly system (like a Mac), processes are normally managed hierarchically. In a normal situation, a root process (let’s say gulp dev) will create child processes (DEV START.api.server and DEV START.layouts.server) that will only live during the lifetime of their parent. This is great because whenever we terminate the parent process, all its children are terminated too.

In a Windows environment, process management is done in a slightly different way: even if you close a parent process, its child processes will stay alive doing what they were already doing. Child processes only hold the parent ID as a reference. This means it is still possible to mimic the Unix process tree behaviour, but it is a little more tedious, and some library creators avoid dealing with the problem altogether. This is the case with Gulp.

So in our scenario we listen for the SIGINT signal and gracefully terminate all processes when it is raised through keyboard input (hitting Ctrl-C). This saves developers on Windows from having to go to the Task Manager and terminate orphaned child processes themselves.

I am also using the process.once(…) event listener instead of process.on(…) to prevent an error that surfaces on Macs when Ctrl-C is hit more than once. We don’t want the OS to complain when we try to terminate a process that has already been terminated :).
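A rough sketch of that shutdown handling (how the child processes are tracked here is illustrative):

  // forever-monitor instances started by the DEV START tasks
  var servers = [];

  function shutdown() {
      servers.forEach(function (child) {
          child.stop(); // forever-monitor stops the supervised child process
      });
      process.exit(0);
  }

  // 'once' instead of 'on' so a second Ctrl-C does not try to kill
  // processes that have already been terminated
  process.once('SIGINT', shutdown);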

That is it for now…

Happy hacking!

Creating self-signed certs using OpenSSL on Windows


Working with Linux technologies exposes you to a huge number of open source tools that can simplify and speed up your development workflow. Interestingly enough, many of these tools are now flooding into the Windows ecosystem allowing us to increase the portability of our development assets across multiple operating systems.

Today I am going to demonstrate how easy it is to install OpenSSL on Windows and how simple it is to quickly create self-signed certificates for our development TLS needs that will work on a range of operating systems.

We will start by installing the following tools:

1. Chocolatey
https://chocolatey.org/
“Chocolatey is a package manager for Windows (like apt-get or yum but for Windows). It was designed to be a decentralized framework for quickly installing applications and tools that you need. It is built on the NuGet infrastructure currently using PowerShell as its focus for delivering packages from the distros to your door, err computer.”

2. Cmder
https://chocolatey.org/packages?q=cmder
“Cmder is a software package created out of pure frustration over the absence of nice console emulators on Windows. It is based on amazing software, and spiced up with the Monokai color scheme and a custom prompt layout. Looking sexy from the start”

Benefits of this approach

  • Using OpenSSL provides portability for our scripts by allowing us to run the same commands no matter which OS you are working on: Mac OSX, Windows or Linux.
  • The certificates generated through OpenSSL can be directly imported as custom user certificates on Android and iOS (this is not the case with other tools like makecert.exe, at least not directly).
  • Chocolatey is a very effective way of installing and configuring software on your Windows machine in a scriptable way (Fiddler, Chrome, NodeJS, Docker, Sublime… you name it).
  • The “Cmder” package downloads a set of utilities which are commonly used in the Linux world. This once again allows for better portability of your code, especially if you want to start using command line tools like Vagrant, Git and many others.

Magically getting OpenSSL through Cmder

Let’s get started by installing the Chocolatey package manager onto our machine. It only takes a single line of code! See: https://chocolatey.org/install

Now that we have our new package manager up and running, getting the Cmder package installed becomes as simple as typing in the following instruction:

C:\> choco install cmder
Cmder window

reference: http://cmder.net/

The Cmder package shines “big time” because it installs a portable release of the latest Git for Windows tools for us (https://git-for-windows.github.io). The Git for Windows project (aka msysGit) gives us access to the most common commands found on Linux and Mac OSX: ls, ssh, git, cat, cp, mv, find, less, curl, ssh-keygen, tar…


… and OpenSSL.

Generating your Root CA

The following instructions will help you generate your Root Certificate Authority (CA) certificate. This is the CA that will be trusted by your devices and that will be used to sign your own TLS HTTPS development certs.
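The exact commands aren’t reproduced in this extract; a typical sequence looks like this (file names, key size and validity period are illustrative):

  $ openssl genrsa -out myRootCA.key 2048
  $ openssl req -x509 -new -nodes -sha256 -days 1024 \
      -key myRootCA.key -out myRootCA.pem
  $ openssl pkcs12 -export -out myRootCA.pfx \
      -inkey myRootCA.key -in myRootCA.pem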

root CA files

We now double-click on the myRootCA.pfx file to fire up the Windows Certificate Import Wizard and get the Root CA imported into the Trusted Root Certification Authorities store. Too easy… let’s move on to signing our first TLS certs with it!

Generating your TLS cert

The following commands will quickly get the ball rolling by generating and signing the certificate request in interactive mode (entering cert fields by hand). In later stages you might want to use a cert request configuration file and pass it in to the OpenSSL command in order to make the process scriptable and therefore repeatable.

Just for the curious, I will be creating a TLS cert for “sweet-az.azurewebsites.net” to allow me to set up a local dev environment that mimics an Azure Web App.
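Again, the commands aren’t reproduced in this extract; run interactively, the flow looks roughly like this (file names and validity period are illustrative):

  $ openssl genrsa -out myTLS.key 2048
  $ openssl req -new -key myTLS.key -out myTLS.csr
  $ openssl x509 -req -sha256 -days 500 -in myTLS.csr \
      -CA myRootCA.pem -CAkey myRootCA.key -CAcreateserial \
      -out myTLS.crt
  $ openssl pkcs12 -export -out myTLS.pfx \
      -inkey myTLS.key -in myTLS.crt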

Just as we did in the previous step, we can double-click on the packaged myTLS.pfx file to get the certificate imported into the Local Machine/Personal Windows certificate store.

Testing things out

Finally we will just do a smoke test against IIS following the traditional steps:

  • Create an entry in your hosts file for the hostname used in the cert:
    127.0.0.1            sweet-az.azurewebsites.net
  • Create a 443 binding for the default site in the IIS Management Console.

SSL site bindings

Let’s confirm that everything has worked correctly by opening a browser session and navigating to our local version of the https://sweet-az.azurewebsites.net website.

Sweet as!

That’s all folks!

In following posts I will cover how to get these certs installed and trusted on our mobile phones, and then leverage other simple techniques to help us develop mobile applications (proxying and SSH tunnelling). Until then…

Happy hacking!