Preparing your Docker container for Azure App Services

Like other cloud platforms, Azure is leveraging containers to provide flexible managed environments in which to run applications. App Service on Linux is one such case: it allows us to bring our own home-baked Docker images containing all the tools we need to make our apps work.

This service is still in preview and obviously has a few limitations:

  • Only one container per service instance, in contrast to Azure Container Instances.
  • No VNET integration.
  • SSH server required to attach to the container.
  • Single port configuration.
  • No ability to limit the container’s memory or processor.

Having said this, we do get a good 50% discount for the time being, which is not a bad thing.

The basics

In this post I will cover how to set up an SSH server in our Docker images so that we can inspect and debug our containers hosted in the Azure App Service for Linux.

It is important to note that running SSH in containers is a widely discouraged practice and should be avoided in most cases. Azure App Services mitigates the risk by only granting SSH port access to the Kudu infrastructure, which we tunnel through. However, we don’t need SSH at all when we are not running in the App Services engine, so we can protect ourselves by only enabling SSH when a flag such as the ENABLE_SSH environment variable is present.

Running an SSH daemon alongside our app also means that we will have more than one process per container. For cases like these, Docker allows us to enable an init manager per container that makes sure no orphaned child processes are left behind on container exit. Since this feature requires docker run rights that App Services, for security reasons, does not grant, we must package and configure this binary ourselves when building the Docker image.

Building our Docker image

TLDR: docker pull xynova/appservice-ssh

The basic structure looks like the following:
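Reconstructed from the paths referenced below, the build context looks something like this:

docker-ssh/
├── Dockerfile
├── entrypoint.sh
└── ssh-config/
    └── sshd_config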

The SSH configuration

The /ssh-config/sshd_config specifies the SSH server configuration required by App Services to establish connectivity with the container:

  • The daemon needs to listen on port 2222.
  • Password authentication must be enabled.
  • The root user must be able to login.
  • Ciphers and MACs security settings must be the ones displayed below.
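A sketch of such an sshd_config, based on the settings Microsoft documents for App Service SSH support (verify against the current documentation):

Port                   2222
ListenAddress          0.0.0.0
LoginGraceTime         180
X11Forwarding          yes
Ciphers                aes128-cbc,3des-cbc,aes256-cbc
MACs                   hmac-sha1,hmac-sha1-96
StrictModes            yes
SyslogFacility         DAEMON
PasswordAuthentication yes
PermitEmptyPasswords   no
PermitRootLogin        yes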

The container startup script

The entrypoint.sh script manages the application startup:

If the ENABLE_SSH environment variable equals true, the setup_ssh() function does the following:

  • Changes the root user password to Docker! (required by App Services).
  • Generates the SSH host keys required by SSH clients to authenticate the SSH server.
  • Starts the SSH daemon in the background.

App Services requires the container to have an application listening on the configurable public service port (80 by default). Without this listener, the container will be flagged as unhealthy and restarted indefinitely. The start_app() function, as its name implies, runs a web server (http-echo) that listens on port 80 and just prints all incoming request headers back out in the response.
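A minimal sketch of such an entrypoint.sh (the Docker! password and SSH behaviour follow the App Services requirements above; the http-echo path is illustrative):

#!/bin/sh
set -e

setup_ssh() {
    # App Services expects the root password to be exactly "Docker!"
    echo "root:Docker!" | chpasswd
    # Generate the host keys SSH clients use to authenticate the server
    ssh-keygen -A
    # Start the SSH daemon; it forks into the background by default
    /usr/sbin/sshd
}

start_app() {
    # A listener on port 80 keeps the App Services health probe happy
    exec /usr/local/bin/http-echo
}

# Only enable SSH when explicitly asked to (see the flag discussed above)
if [ "$ENABLE_SSH" = "true" ]; then
    setup_ssh
fi

start_app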

The Dockerfile

There is nothing too fancy about the Dockerfile either. I use the multistage build feature to compile the http-echo server and then copy it across to an alpine image in the PACKAGING STAGE. This second stage also installs openssh, tini and sets up additional configs.

Note that the init process manager is started through the ENTRYPOINT ["/sbin/tini","--"] clause, which in turn receives the monitored entrypoint.sh script as its argument.
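A sketch of how the two stages could fit together (stage name, paths and the http-echo build step are illustrative; openssh and tini come from the Alpine package repositories):

# BUILD STAGE: compile the http-echo web server
FROM golang:1.8.3-alpine3.6 AS build-stage
COPY http-echo/ /go/src/http-echo/
RUN go install http-echo

# PACKAGING STAGE: assemble the runtime image
FROM alpine:3.6
RUN apk add --no-cache openssh tini
COPY --from=build-stage /go/bin/http-echo /usr/local/bin/http-echo
COPY ssh-config/sshd_config /etc/ssh/sshd_config
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
EXPOSE 80 2222
# tini supervises entrypoint.sh and reaps orphaned child processes
ENTRYPOINT ["/sbin/tini","--"]
CMD ["/entrypoint.sh"]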

Let us build the container image by executing docker build -t xynova/appservice-ssh docker-ssh. You are free to re-tag the image and push it to your own Docker repository.

Trying it out

First we create our App Service on Linux instance and set the custom Docker container we will use (xynova/appservice-ssh if you want to use mine). Then we set the ENABLE_SSH=true environment variable to activate the SSH server on container startup.
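With the Azure CLI, that app setting can be applied in one command (resource group and app names are illustrative):

az webapp config appsettings set --resource-group myRG --name my-ssh-app \
    --settings ENABLE_SSH=true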

Now we can make a GET request to the App Service URL to trigger a container download and activation. If everything works, you should see something like the following:
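If you prefer the command line to a browser, an equivalent check is a simple curl against your app’s URL (the app name here is illustrative):

curl -i https://my-ssh-app.azurewebsites.net/

The response body should echo your request headers back, including the X-Arr-Ssl header discussed below.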

One thing to notice here is the X-Arr-Ssl header. This header is passed down by the Azure App Service internal load balancer when the app is being browsed over SSL. You can check for this header if you want to trigger HTTP-to-HTTPS redirections.

Moving on, we jump into the Kudu dashboard as follows:

Select the SSH option from the Debug console (the Bash option will take you to the Kudu container instead).

DONE! We are now inside the container.

Happy hacking!

Static Security Analysis of Container Images with CoreOS Clair

Container security is (or should be) a concern to anyone running software on Docker Containers. Gone are the days when running random Images found on the internet was commonplace. Security guides for Containers are common now: examples from Microsoft and others can be found easily online.

The two leading Container Orchestrators also offer their own security guides: Kubernetes Security Best Practices and Docker security.

Container Image Origin

One of the biggest factors in Container security is the origin of your container Images:

  1. It is recommended to run your own private Registry to distribute Images.
  2. It is recommended to scan these Images against known vulnerabilities.

Running a private Registry is easy these days (Azure Container Registry for instance).
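For instance, creating a registry with the Azure CLI is a one-liner (resource group and registry names are illustrative):

az acr create --resource-group myRG --name myregistry --sku Basic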

I will concentrate on the scanning of Images in the remainder of this post and show how to look for common vulnerabilities using CoreOS Clair. Clair is probably the most advanced non-commercial scanning solution for Containers at the moment, though it requires some elbow grease to run this way. It’s important to note that the GUI and Enterprise features are not free and are sold under the Quay brand.

As security scanning is recommended as part of the build process through your favorite CI/CD pipeline, we will see how to configure Visual Studio Team Services (VSTS) to leverage Clair.

Installing CoreOS Clair

First we are going to run Clair in minikube for the sake of experimenting. I have used Fedora 26 and Ubuntu. I will assume you have minikube, kubectl and docker installed (follow the respective links to install each piece of software) and that you have initiated a minikube cluster by issuing the “minikube start” command.

Clair is distributed as a Docker image, or you can compile it yourself by cloning the following GitHub repository: https://github.com/coreos/clair.

In any case, we will run the following commands to clone the repository, and make sure we are on the release 2.0 branch to benefit from the latest features (tested on Fedora 26):

~/Documents/github|⇒ git clone https://github.com/coreos/clair
Cloning into 'clair'...
remote: Counting objects: 8445, done.
remote: Compressing objects: 100% (5/5), done.
remote: Total 8445 (delta 0), reused 2 (delta 0), pack-reused 8440
Receiving objects: 100% (8445/8445), 20.25 MiB | 2.96 MiB/s, done.
Resolving deltas: 100% (3206/3206), done.
Checking out files: 100% (2630/2630), done.

rafb@overdrive:~/Documents/github|⇒ cd clair
rafb@overdrive:~/Documents/github/clair|master
⇒ git fetch
rafb@overdrive:~/Documents/github/clair|master
⇒ git checkout -b release-2.0 origin/release-2.0
Branch release-2.0 set up to track remote branch release-2.0 from origin.
Switched to a new branch 'release-2.0'

The Clair repository comes with a Kubernetes deployment found in the contrib/k8s subdirectory as shown below. It’s the only thing we are after in the repository as we will run the Container Image distributed by Quay:

rafb@overdrive:~/Documents/github/clair|release-2.0
⇒ ls -l contrib/k8s
total 8
-rw-r--r-- 1 rafb staff 1443 Aug 15 14:18 clair-kubernetes.yaml
-rw-r--r-- 1 rafb staff 2781 Aug 15 14:18 config.yaml

We will modify these two files slightly to run Clair version 2.0 (for some reason the GitHub master branch carries an older version of the configuration file syntax, as highlighted in this GitHub issue).

In the config.yaml, we will change the postgresql source from:

source: postgres://postgres:password@postgres:5432/postgres?sslmode=disable

to

source: host=postgres port=5432 user=postgres password=password sslmode=disable

In clair-kubernetes.yaml, we will change the version of the Clair image from latest to 2.0.1:

From:

image: quay.io/coreos/clair

to

image: quay.io/coreos/clair:v2.0.1

Once these changes have been made, we can deploy Clair to our minikube cluster by running those two commands back to back:

kubectl create secret generic clairsecret --from-file=./config.yaml 
kubectl create -f clair-kubernetes.yaml

By looking at the startup logs for Clair, we can see it fetches a vulnerability list at startup time:

[rbigeard@flanger ~]$ kubectl get pods
NAME                   READY   STATUS    RESTARTS   AGE
clair-postgres-l3vmn   1/1     Running   1          7d
clair-snmp2            1/1     Running   4          7d
[rbigeard@flanger ~]$ kubectl logs clair-snmp2
...
{"Event":"fetching vulnerability updates","Level":"info","Location":"updater.go:213","Time":"2017-08-14 06:37:33.069829"}
{"Event":"Start fetching vulnerabilities","Level":"info","Location":"ubuntu.go:88","Time":"2017-08-14 06:37:33.069960","package":"Ubuntu"} 
{"Event":"Start fetching vulnerabilities","Level":"info","Location":"oracle.go:119","Time":"2017-08-14 06:37:33.092898","package":"Oracle Linux"} 
{"Event":"Start fetching vulnerabilities","Level":"info","Location":"rhel.go:92","Time":"2017-08-14 06:37:33.094731","package":"RHEL"}
{"Event":"Start fetching vulnerabilities","Level":"info","Location":"alpine.go:52","Time":"2017-08-14 06:37:33.097375","package":"Alpine"}

Scanning Images through Clair Integrations

Clair is just a backend and we therefore need a frontend to “feed” Images to it. There are a number of frontends listed on this page. They range from full Enterprise-ready GUI frontends to simple command line utilities.

I have chosen to use “klar” for this post. It is a simple command line tool that can be easily integrated into a CI/CD pipeline (more on this in the next section). To install klar, you can compile it yourself or download a release.
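For example, grabbing a release binary looks something like this (assuming the optiopay/klar release naming; adjust the version and URL to the current release):

curl -L -o klar \
    https://github.com/optiopay/klar/releases/download/v1.4.1/klar-1.4.1-linux-amd64
chmod +x klar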

Once installed, it’s very easy to use and parameters are passed using environment variables. In the following example, CLAIR_OUTPUT is set to “High” so that we only see the most dangerous vulnerabilities. CLAIR_ADDR is the address of Clair running on my minikube cluster.

Note that since I am pulling an image from an Azure Container Registry instance, I have also specified DOCKER_USER and DOCKER_PASSWORD variables in my environment.

# CLAIR_OUTPUT=High CLAIR_ADDR=http://192.168.99.100:30060 \
klar-1.4.1-linux-amd64 romainregistry1.azurecr.io/samples/nginx

Analysing 3 layers 
Found 26 vulnerabilities 
CVE-2017-8804: [High]  
The xdr_bytes and xdr_string functions in the GNU C Library (aka glibc or libc6) 2.25 mishandle failures of buffer deserialization, which allows remote attackers to cause a denial of service (virtual memory allocation, or memory consumption if an overcommit setting is not used) via a crafted UDP packet to port 111, a related issue to CVE-2017-8779. 
https://security-tracker.debian.org/tracker/CVE-2017-8804 
----------------------------------------- 
CVE-2017-10685: [High]  
In ncurses 6.0, there is a format string vulnerability in the fmt_entry function. A crafted input will lead to a remote arbitrary code execution attack. 
https://security-tracker.debian.org/tracker/CVE-2017-10685 
----------------------------------------- 
CVE-2017-10684: [High]  
In ncurses 6.0, there is a stack-based buffer overflow in the fmt_entry function. A crafted input will lead to a remote arbitrary code execution attack. 
https://security-tracker.debian.org/tracker/CVE-2017-10684 
----------------------------------------- 
CVE-2016-2779: [High]  
runuser in util-linux allows local users to escape to the parent session via a crafted TIOCSTI ioctl call, which pushes characters to the terminal's input buffer. 
https://security-tracker.debian.org/tracker/CVE-2016-2779 
----------------------------------------- 
Unknown: 2 
Negligible: 15 
Low: 1 
Medium: 4 
High: 4

So Clair is showing us the four “High” level common vulnerabilities found in the nginx image that I pulled from Docker Hub. At the time of writing, this is consistent with the details listed on Docker Hub. It’s not necessarily a deal breaker as those vulnerabilities are only potentially exploitable by local users on the Container host, which means we need to protect the VMs that are running our Containers well!

Automating the Scanning of Images in Azure using a CI/CD pipeline

As a proof-of-concept, I created a “vulnerability scanning” Task in a build pipeline in VSTS.

Conceptually, the chain is as follows:

Container image scanning VSTS pipeline

I created an Ubuntu Linux VM and built my own VSTS agent following the published instructions, after which I installed klar.

I then built a Kubernetes cluster in Azure Container Service (ACS) (see my previous post on the subject which includes a script to automate the deployment of Kubernetes on ACS), and deployed Clair to it, as shown in the previous section.

Little gotcha here: my Linux VSTS agent and my Kubernetes cluster in ACS ran in two different VNets so I had to enable VNet peering between them.

Once those elements are in place, we can create a git repo with a shell script that calls klar and a build process in VSTS with a task that will execute the script in question:

Scanning Task in a VSTS Build

The content of scan.sh is very simple (this would have to be improved for a production environment, obviously, but you get the idea):

#!/bin/bash
CLAIR_ADDR=http://X.X.X.X:30060 klar Ubuntu

Once we run this task in VSTS, we get the list of vulnerabilities in the output, which can be used to “break” the build when certain vulnerabilities are discovered.

Build output view in VSTS

Summary

Hopefully you have picked up some ideas on how to ensure the Container Images you run in your environments are secure, or at least know what potential issues you will have to mitigate. A build task similar to the one described here could very well be part of a broader build process you use to build Container Images from scratch.

Making application configuration files dynamic with confd and Azure Redis

Service discovery and hot reconfiguration is a common problem we face in cloud development nowadays. In some cases we can rely on an orchestration engine like Kubernetes to do all the work for us. In other cases we can leverage a configuration management system and do the orchestration ourselves. However, there are still some cases where either of these solutions is impractical or just too complex for the immediate problem… and you don’t have a Consul cluster at hand either :(.

confd to the rescue

confd is a binary written in Go that allows us to make configuration files dynamic by providing a templating engine driven by backend data stores like etcd, Consul, DynamoDB, Redis, Vault and ZooKeeper. It is commonly used to allow classic load balancers like Nginx and HAProxy to automatically reconfigure themselves when new healthy upstream services come online under different IP addresses.

NOTE: For the sake of simplicity I will use a very simple example to demonstrate how to use confd to remotely reconfigure an Nginx route by listening to changes performed against an Azure Redis Cache backend. However, this idea can be extrapolated to solve service discovery problems whereby application instances continuously report their health and location to a Service Registry (in our case Azure Redis) that is monitored by the Load Balancer service in order to reconfigure itself if necessary.

https://www.nginx.com/blog/service-discovery-in-a-microservices-architecture

Just as a side note, confd was created by Kelsey Hightower (now Staff Developer Advocate, Google Cloud Platform) in the early Docker and CoreOS days. If you haven’t heard of Kelsey, I totally recommend searching YouTube for any of his talks.

Prerequisites

Azure Redis Cache

Redis, our Service Discovery data store, will be listening on XXXX-XXXX-XXXX.redis.cache.windows.net:6380 (where XXXX-XXXX-XXXX is your DNS prefix). confd will monitor changes on the /myapp/suggestions/drink cache key and then update the Nginx configuration accordingly.

Container images

confd + nginx container image
confd’s support for Redis backend using a password is still not available under the stable or alpha release as of August 2017. I explain how to easily compile the binary and include it in an Nginx container in a previous post.

TLDR: docker pull xynova/nginx-confd

socat container image
confd is currently unable to connect to Redis through TLS (required by Azure Redis Cache). To overcome this limitation we will use a protocol translation tool called socat which I also talk about in a previous post.

TLDR: docker pull xynova/socat

Preparing confd templates
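The configurations shown in the original screenshots boil down to two files, sketched here following confd’s conventions (the destination path, reload command and template body are illustrative, not the exact originals). First, a template resource at /etc/confd/conf.d/myapp.toml that names the key to watch:

[template]
src = "myapp.conf.tmpl"
dest = "/etc/nginx/conf.d/default.conf"
keys = [
    "/myapp/suggestions/drink",
]
reload_cmd = "nginx -s reload"

Second, the template itself at /etc/confd/templates/myapp.conf.tmpl:

server {
    listen 80;
    location / {
        # Render the watched Redis key into the response
        return 200 "Suggested drink: {{getv "/myapp/suggestions/drink"}}\n";
    }
}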

Driving Nginx configuration with Azure Redis

We first start a xynova/nginx-confd container and mount our prepared confd configurations as a volume under the /etc/confd path. We are also binding container port 80 to 8080 on localhost so that we can access Nginx by browsing to http://localhost:8080.
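Something along these lines (the local directory name is illustrative):

docker run --rm -it --name nginx \
    -v $(pwd)/confd:/etc/confd \
    -p 8080:80 \
    xynova/nginx-confd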


The interactive session logs show us that confd fails to connect to Redis on 127.0.0.1:6379 because there is no Redis service inside the container.

To fix this we bring in xynova/socat to create a tunnel that confd can use to talk to the Azure Redis Cache in the cloud. We open a new terminal and type the following (note: replace XXXX-XXXX-XXXX with your own Azure Redis prefix).
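A sketch of that command, assuming the image’s ENTRYPOINT invokes socat as described in the previous post:

docker run --rm -it --name socat --net container:nginx \
    xynova/socat \
    -v \
    TCP-LISTEN:6379,fork,reuseaddr \
    openssl-connect:XXXX-XXXX-XXXX.redis.cache.windows.net:6380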

Notice that by specifying --net container:nginx option, I am instructing the xynova/socat container to join the xynova/nginx-confd container network namespace. This is the way we get containers to share their own private localhost sandbox.

Now looking back at our interactive logs we can see that confd is talking to Azure Redis, but it cannot find the /myapp/suggestions/drink cache key.

Let’s just set a value for that key:
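One way is a throwaway redis-cli container joined to the same network namespace, so it reaches Azure Redis through the socat tunnel on localhost (the password placeholder follows the convention from my socat post):

docker run --rm -it --net container:nginx redis:alpine \
    redis-cli -a THE-XXXX-PASSWORD set /myapp/suggestions/drink covfefe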

confd is now happily synchronized with Azure Redis and the Nginx service is up and running.

We now browse to http://localhost:8080 and test our container composition:

Covfefe… should we fix that?
We just set the /myapp/suggestions/drink key to coffee.

Watch how confd notices the change and proceeds to update the target config files.

Now if we refresh our browser we see:

Happy hacking.

 

Build from source and package into a minimal image with the new Docker Multi-Stage Build feature

confd is a binary written in Go that can help us make configuration files dynamic. It achieves this by providing a templating engine that is driven by backend data stores like etcd, Consul, DynamoDB, Redis, Vault and ZooKeeper.

https://github.com/kelseyhightower/confd

A few days ago I started putting together a BYO load-balancing PoC where I wanted to use confd and Nginx. I realised however that some features I needed from confd were not yet released. Not a problem; I was able to compile the master branch and package the resulting binary into an Nginx container all in one go, without even having Go installed on my machine. Here is how:

confd + Nginx with Docker Multi-Stage builds

First I will create my container startup script docker-confd/nginx-confd.sh.
This script launches nginx and confd in the container but tracks both processes so that I can exit the container if either of them fails.

Normally you want to have only one process per container. In my particular case I have inter-process signaling between confd and Nginx, and therefore it is easier for me to keep both processes together.
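A sketch of such a script, using a plain polling loop that works in a POSIX shell (binary paths and confd flags are illustrative):

#!/bin/sh
# Start both processes in the background and remember their PIDs
nginx &
NGINX_PID=$!
confd -backend redis -node "$REDIS_NODE" -interval 10 &
CONFD_PID=$!

# Exit the container as soon as either process dies
while kill -0 "$NGINX_PID" 2>/dev/null && kill -0 "$CONFD_PID" 2>/dev/null; do
    sleep 5
done
exit 1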

Now I create my Multi-Stage build Dockerfile:  docker-confd/Dockerfile
I denote a build stage by using the AS <STAGE-NAME> keyword: FROM golang:1.8.3-alpine3.6 AS confd-build-stage. I can reference the stage by name further down when I am copying the resulting binary into the Nginx container.
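A sketch of how the two stages could fit together (versions and paths are illustrative; confd vendors its dependencies, so a plain go build works):

# BUILD STAGE: compile confd from the master branch
FROM golang:1.8.3-alpine3.6 AS confd-build-stage
RUN apk add --no-cache git
RUN git clone https://github.com/kelseyhightower/confd \
        /go/src/github.com/kelseyhightower/confd && \
    go build -o /confd github.com/kelseyhightower/confd

# PACKAGING STAGE: drop the binary into the Nginx image
FROM nginx:alpine
COPY --from=confd-build-stage /confd /usr/local/bin/confd
COPY nginx-confd.sh /usr/local/bin/nginx-confd.sh
RUN chmod +x /usr/local/bin/nginx-confd.sh
ENTRYPOINT ["/usr/local/bin/nginx-confd.sh"]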

Now I build my image by executing docker build -t confd-nginx-local docker-confd-nginx.

DONE! Just about 15MB added on top of the Nginx Alpine base image.

Read more about the Multi-Stage Build feature on the Docker website.

Happy hacking.

SSL Tunneling with socat in Docker to safely access Azure Redis on port 6379

Redis Cache is an advanced key-value store that we should have all come across in one way or another by now. Azure, AWS and many other cloud providers have fully managed offerings for it, which is “THE” way we want to consume it.  As a little bit of insight, Redis itself was designed for use within a trusted private network and does not support encrypted connections. Public offerings like Azure use TLS reverse proxies to overcome this limitation and provide security around the service.

However some Redis client libraries out there do not talk TLS. This becomes a  problem when they are part of other tools that you want to compose your applications with.

Solution? We bring in something that can help us do protocol translation.

socat – Multipurpose relay (SOcket CAT)

Socat is a command line based utility that establishes two bidirectional byte streams and transfers data between them. Because the streams can be constructed from a large set of different types of data sinks and sources (see address types), and because lots of address options may be applied to the streams, socat can be used for many different purposes.

https://linux.die.net/man/1/socat

In short: it is a tool that can establish a communication between two points and manage protocol translation between them.

An interesting fact is that socat is currently used to port forward docker exec onto nodes in Kubernetes. It does this by creating a tunnel from the API server to Nodes.

Packaging socat into a Docker container

One of the great benefits of Docker is that it allows you to work in sandbox environments. These environments are then fully transportable and can eventually become part of your overall application.

The following procedure prepares a container that includes the socat binary and common certificate authorities required for public TLS certificate chain validation.

We first create our  docker-socat/Dockerfile
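A minimal sketch of that Dockerfile (the Alpine socat and ca-certificates packages provide the binary and the CA bundle):

FROM alpine:3.6
# socat for the relay, ca-certificates for TLS chain validation
RUN apk add --no-cache socat ca-certificates
ENTRYPOINT ["socat"]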

Now we build a local Docker image by executing docker build -t socat-local docker-socat. You are free to push this image to a Docker Registry at this point.

Creating TLS tunnel into Azure Redis

To access Azure Redis we need two things:

  1. The FQDN: XXXX-XXXX-XXXX.redis.cache.windows.net:6380
    where all the X’s represent your DNS name.
  2. The access key, found under the Access Keys menu of your Cache instance. I will call it THE-XXXX-PASSWORD.

Let’s start our socat tunnel by spinning up the container we just built an image for. Notice I am binding port 6379 to my Desktop so that I can connect to the tunnel from localhost:6379 on my machine.

Now let’s have a look at the  arguments I am passing in to socat (which is automatically invoked thanks to the ENTRYPOINT ["socat"] instruction we included when building the container image).

  1. -v
    For verbose output when checking logs with docker logs socat.
  2. TCP-LISTEN:6379,fork,reuseaddr
    – Start a socket listener on port 6379
    – fork to allow for subsequent connections (otherwise a one off)
    – reuseaddr to allow socat to restart and use the same port (in case a previous one is still held by the OS)

  3. openssl-connect:XXXX-XXXX-XXXX.redis.cache.windows.net:6380
    – Create a TLS connect tunnel to the Azure Redis Cache.

Testing connectivity to Azure Redis

Now I will just test my tunnel using redis-cli which I can also use from a container.  In this case THE-XXXX-PASSWORD is the Redis Access Key.
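For example, using the official redis image, which bundles redis-cli (any image with redis-cli will do):

docker run --rm -it --net host redis:alpine \
    redis-cli -a THE-XXXX-PASSWORD ping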

The thing to notice here is the --net host flag. This instructs Docker not to create a new virtual NIC and namespace to isolate the container network, but instead to use the host’s (my desktop’s) interface. This means that localhost in the container is really localhost on my desktop.

If everything is set up properly and outgoing connections to Azure Redis on port 6380 are allowed, you should get a PONG message back from redis.

Happy hacking!

Running Containers on Azure

Running Containers in public cloud environments brings advantages beyond the realm of “fat” virtual machines: easy deployments through a registry of Images, better use of resources and orchestration are but a few examples.

Azure is embracing containers in a big way (Brendan Burns, one of the primary instigators of Kubernetes while at Google, joined Microsoft last year, which might have contributed to this!)

Running Containers nowadays is almost always synonymous with running an orchestrator which allows for automatic deployments of multi-Container workloads.

Here we will explore the use of Kubernetes and Docker Swarm which are arguably the two most popular orchestration frameworks for Containers at the moment. Please note that Mesos is also supported on Azure but Mesos is an entirely different beast which goes beyond the realm of just Containers.

This post is not about comparing the merits of Docker Swarm and Kubernetes, but rather a practical introduction to running both on Azure, as of August 2017.

VMs vs ACS vs Container Instances

When it comes to running containers on Azure, you have the choice of running them on Virtual Machines you create yourself or via the Azure Container Service which takes care of creating the underlying infrastructure for you.

We will explore both ways of doing things with a Kubernetes example running on ACS and a Docker Swarm example running on VMs created by Docker Machine. Note that at the time of writing, ACS does not support the relatively new Swarm mode of Docker, but things move extremely fast in the Container space… for instance, Azure Container Instances are a brand new service allowing users to run Containers directly, billed on a per-second basis.

In any case, both Docker Swarm and Kubernetes offer a powerful way of managing the lifecycle of Containerised application environments alongside storage and network services.

Kubernetes

Kubernetes is a one-stop solution for running a Container cluster and deploying applications to said cluster.

Although the WordPress example found in the official Kubernetes documentation is not specifically geared at Azure, it will run easily on it thanks to the level of standardisation Kubernetes (aka K8s) has achieved.

This example deploys a MySQL instance Container with a persistent volume to store data and a separate Container which combines WordPress and Apache, sporting its own persistent volume.

The PowerShell script below leverages the “az acs create” Azure CLI 2.0 command to spin up a two-node Kubernetes cluster. Yes, one command is all it takes to create a K8s cluster! If you need ten agent nodes, just change the “--agent-count” value to 10.

It is invoked as follows:
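Something like the following, where the script name is hypothetical and the two parameters are explained next:

.\New-K8sWordPress.ps1 -azureSshKey ~/.ssh/azure_rsa -kubeWordPressLocation ~/github/examples/mysql-wordpress-pd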

The azureSshKey parameter points to a private SSH key (the corresponding public key must exist as well) and kubeWordPressLocation is the path to the git clone of the Kubernetes WordPress example.

Note that you need to have ssh connectivity to the Kubernetes Master i.e. TCP port 22.

Following the creation of the cluster, the script leverages kubectl (“az acs kubernetes install-cli” to install it) to deploy the aforementioned WordPress example.

“The” script:
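The full script is not reproduced here, but its core boils down to a handful of CLI calls along these lines (resource names are illustrative):

az group create --name k8sRsGrp --location eastus
az acs create --orchestrator-type kubernetes \
    --resource-group k8sRsGrp --name k8sCluster \
    --agent-count 2 --ssh-key-value ~/.ssh/azure_rsa.pub
az acs kubernetes install-cli
az acs kubernetes get-credentials --resource-group k8sRsGrp --name k8sCluster
kubectl create -f $kubeWordPressLocation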

Note that if you specify an OMS workspace ID and key, the script will install the OMS agent on the underlying VMs to monitor the infrastructure and the Containers (more on this later).

Once the example has been deployed, we can check the status of the cluster using the following command (be patient as it takes nearly 20 minutes for the service to be ready):
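The check itself is the standard Kubernetes service listing:

kubectl get services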

[Screenshot: kubectl get services output showing the wordpress service EXTERNAL-IP]

Note the value for EXTERNAL-IP (here 13.82.180.10). Entering this value in the location bar of your browser once both containers are running gives you access to the familiar WordPress install screen:

[Screenshot: the WordPress install screen]

This simple K8s visualizer, written a while back by Brendan Burns, shows the various containers running on the cluster:

[Screenshot: Kubernetes cluster visualizer]

Docker Swarm

Docker Swarm is the “other” Container orchestrator, delivered by Docker themselves. Since Docker version 1.12, the so-called “Swarm mode” does not require any discovery service like Consul, as discovery is handled by the Swarm itself. As mentioned in the introduction, Azure Container Service does not support Swarm mode yet, so we will run our Swarm on VMs created by Docker Machine.

It is now possible to use Docker Compose against Docker Swarm in Swarm mode in order to deploy complex applications making the trifecta Swarm/Machine/Compose a complete solution competing with Kubernetes.


The new-ish Docker Swarm mode handles service discovery itself

As with the Kubernetes example, I have created a script which automates the steps to create a Swarm and then deploys a simple nginx container on it.

As Swarm mode is not supported by Azure ACS yet, I have leveraged Docker Machine, the Docker-sanctioned tool to create “docker-ready” VMs in all kinds of public or private clouds.

The following PowerShell script (easily translatable to a Linux flavour of shell) leverages the Azure CLI 2.0, Docker Machine and Docker to create a Swarm and deploy an nginx Container with an Azure load balancer in front. Docker Machine and Docker were installed on my Windows 10 desktop using Chocolatey. The gist of it is sketched below.
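A sketch of the core steps, shown in a Linux shell flavour (VM names are illustrative; the Azure driver needs at least a subscription id):

# Create "docker-ready" VMs in Azure
docker-machine create --driver azure --azure-subscription-id $subId \
    --azure-open-port 80 swarm-manager
docker-machine create --driver azure --azure-subscription-id $subId swarm-worker1

# Initialise the Swarm on the manager and capture the worker join token
docker-machine ssh swarm-manager \
    "docker swarm init --advertise-addr $(docker-machine ip swarm-manager)"
token=$(docker-machine ssh swarm-manager "docker swarm join-token -q worker")

# Join the worker, then deploy nginx as a replicated service
docker-machine ssh swarm-worker1 \
    "docker swarm join --token $token $(docker-machine ip swarm-manager):2377"
eval $(docker-machine env swarm-manager)
docker service create --name nginx --replicas 2 --publish 80:80 nginx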

Kubernetes vs Swarm

Compared to Kubernetes on ACS, Docker Swarm takes a bit more effort and is not as compact, but it would probably be more portable as it does not leverage specific Azure capabilities, save for the load balancer. Since Brendan Burns joined Microsoft last year, it is understandable that Kubernetes is seeing additional focus.

It is a fairly recent development, but deploying multi-Container applications on a Swarm can now be done via docker stack deploy using a version 3 docker compose file (compose files for older revisions need some work, as seen here).

Azure Registry Service

I will explore this in more detail in a separate post, but a post about Docker Containers would not be complete without some thoughts about where the source of Images for your deployments lives (namely the registry).

Azure offers a Container Registry service (ACR) to store and retrieve Images, which is an easy-to-use option that is well integrated with the rest of Azure.

It might be obvious to point out, but deploying Containers based on Images downloaded from the internet can be a security hazard, so creating and vetting your own Images is a must-have in enterprise environments, hence the criticality of running your own Registry service.

Monitoring containers with OMS

My last interesting bit for Containers on Azure is around monitoring. If you deploy the Microsoft Operations Management Suite (OMS) agent on the underlying VMs running Containers and have the Docker marketplace extension for Azure as well as an OMS account, it is very easy to get useful monitoring information. It is also possible to deploy OMS as a Container but I found that I wasn’t getting metrics for the underlying VM that way.

[Screenshot: OMS container monitoring dashboard]

This allows you not only to grab performance metrics from containers and the underlying infrastructure, but also to gather logs from applications, here the web logs for instance:

[Screenshot: container web logs in OMS]

I hope you found this post interesting and I am looking forward to the next one on this topic.

Azure API Management Step by Step – Use Cases

Use Cases

In this second post about Azure API Management, let’s discuss use cases. Why “use cases”?

Use cases help to manage complexity, since each one focuses on one specific usage aspect at a time. I am grouping and versioning the use cases to facilitate your learning process and to help keep track of future changes. You are welcome to use these diagrams to demonstrate Azure API Management features.

API on-boarding is a key aspect of API governance and the first thing to be discussed. How can I publish my existing and future API back-ends to API Management?

API description formats like the Swagger Specification (aka the Open API Initiative, https://openapis.org/) are fundamental to properly implementing automation and DevOps in your APIM initiative. APIs can be imported using Swagger, created manually, or created as part of a custom automation/integration process.

Azure API Management administrators can group APIs by product, allowing a subscription workflow. Product visibility is linked to user groups, providing restricted access to APIs. You can manage your API policies as code through a dedicated Git source control repository available to your APIM instance. Secrets and constants used by policies are managed by a key/value (string) service called properties.

[Diagram: administrator API on-boarding use cases]

The Azure API Management platform provides a rich developer portal. Developers can create an account/profile, discover APIs and subscribe to products. API documentation, source code samples in multiple languages, a console to try APIs, API subscription key management and analytics are the main features provided.

[Diagram: developer use cases]

The management and operation of the platform plays an important role in daily tasks. For enterprises, user groups and users (developers) can be fully integrated with Active Directory. Analytics dashboards and reports are available. Email notifications and templates are customizable. The APIM REST API and PowerShell commands are available for most platform features, including exporting analytics reports.

[Diagram: administrator use cases]

Security administration use cases group different configurations. Delegation allows custom development of portal sign-in, sign-up and product subscription. OAuth 2.0 and OpenID provider registrations are used by the developer portal console to generate the required tokens when trying APIs. Client certificate upload and management is done here or through automation. The developer portal identities configuration brings out-of-the-box integration with social providers. Git source control settings/management and APIM REST API tokens are available as well.

[Diagram: administrator security use cases]

Administrators can customize the developer portal using built-in content management system functionality. Custom pages and modern JavaScript development are now allowed. The blogs feature provides out-of-the-box blog/post publish/unpublish functionality. Applications submitted by developers can be published/unpublished by administrators for display on the developer portal.

[Diagram: administrator developer-portal use cases]

In summary, Azure API Management is a mature and live platform with a few new features under development, bringing strong integration with the Azure cloud. Click here for the roadmap.

In my next post, I will deep-dive into API on-boarding strategies.

Thanks for reading @jorgearteiro

Posts: 1) Introduction  2) Use Cases

Azure API Management Step by Step

Introduction

As a speaker and cloud consultant, I have learned and received a lot of feedback about Azure API management platform from customers and community members. I will share some of my learnings in this series of blog posts. Let’s get started!


APIs – application programming interfaces – are everywhere! They are already part of many companies’ strategies. But how can we consolidate internal and external APIs? How can we productize and monetize them for our company?

We often build APIs to be consumed by a unique application. However, we could also build these APIs to be shared. If you write HTTP APIs around a single and specific business requirement, you can encourage API re-usability and adoption. Bleeding edge technologies like containers and serverless architecture are pushing this approach even further.

API strategy and governance come into play to help build a Gateway on top of your APIs. Companies are developing MVPs (minimum viable products) and time to market is fundamental. For example, we do not have time to write authentication, caching and analytics over and over again. Azure API Management can help you make this happen.

[Diagram: consolidating multiple back-end services behind a single API Management endpoint]

This demo API Management instance that I created for Kloud Solutions illustrates how you could create a unified API endpoint to expose your APIs. Multiple “Services” are published there with a single authentication layer. If your Email Service back-end implementation uses an external API, like SendGrid, you can inject this authentication at the API Management gateway layer, making it transparent to end users.

Azure API Management provides a highly scalable, multi-regional Gateway that can be deployed in any Azure Region around the world. It is a fully PaaS (platform-as-a-service) API management solution, where you do not have to manage any infrastructure. This, combined with other Azure offerings like App Services (Web Apps, API Apps, Logic Apps and Functions), provides an Enterprise-grade platform to deliver any API strategy.

[Diagram: the three main API Management components]

Looking at the diagram above, we can decompose API Management into 3 main components:

  • Developer Portal – A customizable website exclusive to your company that allows internal and external developers to engage, discover and consume APIs.
  • Gateway (proxy) – The engine of APIM, where Policies can be applied to your inbound, back-end and outbound traffic. It is very scalable and allows multi-regional deployment, Azure Virtual Network VPN, Azure Active Directory integration and a native caching solution. Policies are written in XML with C# expressions to define complex rules like rate limiting, quotas, caching, JWT token validation, authentication, XML-to-JSON and JSON-to-XML transformations, URL rewriting, CORS, IP restrictions, setting headers, etc.
  • Administration Portal (aka Publisher Portal) – Administration of your APIM instance can be done via the portal. Automation and DevOps teams can use the APIM management REST API and/or PowerShell commands to fully integrate APIM into your on-boarding, build and release processes.

Please keep in mind that this strategy can apply to any environment and architecture where HTTP APIs are exposed, whether they are new microservices or older legacy applications.

Feel free to create a user at https://kloud.portal.azure-api.net; I will try to keep this Azure API Management instance usable for demo purposes only, no guarantees. You can then create your own development instance from the Azure Portal later.

In my next post, I will talk about API Management use cases and give you a broader view of how deep this platform can go. Click here.

Thanks for reading! @jorgearteiro

Posts: 1) Introduction  2) Use Cases

Enterprise Application platform with Microservices – A Service Fabric perspective

An enterprise application platform can be defined as a suite of products and services that enables the development and management of enterprise applications. This platform should be responsible for abstracting complexities related to application development such as diversity of hosting environments, network connectivity, deployment workflows, etc. In the traditional world, applications are monolithic by nature. A monolithic application is composed of many components grouped into multiple tiers bundled together into a single deployable unit. Each tier here can be developed using a specific technology and will have the ability to scale independently. A monolithic application usually persists data in one common data store.

[Figure: a monolithic application with multiple tiers and a shared data store]

Although a monolithic architecture logically simplifies the application, it introduces many challenges as the number of applications in your enterprise increases. Following are a few issues with a monolithic design:

  • Scalability – The unit of scale is scoped to a tier. It is not possible to scale components bundled within an application tier without scaling the whole tier. This introduces massive resource wastage, resulting in increased operational expense.
  • Reuse and maintenance – The components within an application tier cannot be consumed outside the tier unless exposed as contracts. This forces development teams to replicate code, which becomes very difficult to maintain.
  • Updates – As the whole application is one unit of deployment, updating a component requires updating the whole application, which may cause downtime, thereby affecting the availability of the application.
  • Low deployment density – The compute, storage and network requirements of an application, as a bundled deployable unit, may not match the infrastructure capabilities of the hosting machine (VM). This may lead to wastage or shortage of resources.
  • Decentralized management – Due to the redundancy of components across applications, supporting, monitoring and troubleshooting become expensive overheads.
  • Data store bottlenecks – If there are multiple components accessing the data store, it becomes a single point of failure. This forces the data store to be highly available.
  • Cloud unsuitable – The hardware dependency of this architecture to ensure availability doesn’t work well with cloud hosting platforms, where designing for failure is a core principle.

A solution to this problem is an application platform based on the Microservices architecture. Microservice architecture is a software architecture pattern where applications are composed of small, independent services which can be aggregated using communication channels to achieve an end-to-end business use case. The services are decoupled from one another in terms of the execution space in which they operate. Each of these services has the capability to be scaled, tested and maintained separately.

[Figure: an application composed of independent Microservices]

Microsoft Azure Service Fabric

Service Fabric is a distributed application platform that makes it easy to package, deploy, and manage scalable and reliable Microservices. Following are a few advantages of Service Fabric which make it an ideal platform to build a Microservice-based application platform:

  • Highly scalable – Every service can be scaled without affecting other services. Service Fabric supports scaling based on VM scale sets, which means that these services can be auto-scaled based on CPU consumption, memory usage, etc.
  • Updates – Services can be updated separately and different versions of a service can be co-deployed to support backward compatibility. Service Fabric also supports automatic rollback during updates to ensure consistency of an application deployment.
  • State redundancy – For stateful Microservices, the state can be stored alongside compute for a service. If there are multiple instances of a service running, the state is replicated for every instance. Service Fabric takes care of replicating the state changes through the stores.
  • Centralized management – The service can be centrally managed, monitored and diagnosed outside application boundaries.
  • High density deployment – Service Fabric supports high density deployment on a virtual machine cluster while ensuring even utilization of resources and distribution of work load.
  • Automatic fault tolerance – The cluster manager of Service Fabric ensures failover and resource balancing in case of a hardware failure. This ensures that your services are cloud ready.
  • Heterogeneous hosting platforms – Service Fabric supports hosting your Microservices across Azure, AWS, on-premises or any other datacenter. The cluster manager is capable of managing service deployments with instances spanning multiple datacenters at a time. Apart from Windows, Service Fabric also supports Linux as a host operating system for your Microservices.
  • Technology agnostic – Services can be written in any programming language and deployed as executables or hosted within containers. Service Fabric also supports a native Java SDK for Java developers.

Programming models

Service Fabric supports the following four programming models for developing your service:

  • Guest Container – Services packaged into Windows or Linux containers managed by Docker.
  • Guest executables – Services can be packaged as guest executables which are arbitrary executables, written in any language.
  • Reliable Services – An application development framework with fully supported application lifecycle management. Reliable services can be used to develop stateful as well as stateless services and supports transactions.
  • Reliable Actors – A virtual actor-based development framework with built-in state management and communication management capabilities. The actor programming model is single-threaded and is ideal for hyper scale-out scenarios (1000s of instances).

More about Service Fabric programming models can be found here

Service Type

A service type in Service Fabric consists of three components

[Figure: the three components of a Service Fabric service type]

Code package defines an entry point to the service. This can be an executable or a dynamic-link library. Config package specifies the service configuration for your services, and the data package holds static resources like images. Each package can be independently versioned. Service Fabric supports upgrading each of these packages separately. Allocation of a service in a cluster and reallocation of a service on failure are responsibilities of the Service Fabric cluster manager. For stateful services, Service Fabric also takes care of replicating the state across multiple instances of a service.

Application Type

A Service Fabric application type is composed of one or more service types.

[Figure: an application type composed of multiple service types]

An application type is a declarative template for creating an application. Service Fabric uses application types for packaging, deployment and versioning of Microservices.

State stores – Reliable collections

Reliable Collections are highly available, scalable and high-performance state stores which can be used to store state alongside compute for Microservices. The replication of state and its persistence on secondary storage are taken care of by the Service Fabric framework. A noticeable difference between Reliable Collections and other high-availability state stores (such as caches, tables, queues, etc.) is that the state is kept locally in the service hosting instance while also being made highly available by means of replication. Reliable Collections also support transactions and are asynchronous by nature while offering strong consistency.

More about reliable collection can be found here

Management

Service Fabric offers extensive health monitoring capabilities with built-in health status for clusters and services, and custom app health reporting. Services are continuously monitored for real-time alerting on problems in production. Performance monitoring overheads are diluted with rich performance metrics for actors and services. Service Fabric analytics is capable of providing repair suggestions, thereby supporting preventive healing of services. Custom ETW logs can also be captured for guest executables to ensure centralized logging for all your services. Apart from support for Microsoft tools such as Windows Azure Diagnostics, Operational Insights, Windows Event Viewer and the Visual Studio diagnostics events viewer, Service Fabric also supports easy integration with third-party tools like Kibana and Elasticsearch as monitoring dashboards.

Conclusion and considerations

Microsoft Service Fabric is a potent platform capable of hosting enterprise-grade Microservices. Following are a few considerations to be aware of while using Service Fabric as your Microservice hosting platform:

  • Choosing a programming model is critical for optimizing the management of Microservices hosted on Service Fabric. Reliable Services and Reliable Actors are more tightly integrated with the Service Fabric cluster manager than guest containers and guest executables.
  • The support for containers on Service Fabric is in an evolving stage. While Windows containers and Hyper-V containers are on the release roadmap, Service Fabric only supports Linux containers as of today.
  • The only two native SDKs supported by Service Fabric as of today are based on .NET and Java.

 

Get Started with Docker on Azure

Reblogged from siliconvalve.

The most important part of this whole post is that you need to know that the whale in the Docker logo is officially named “Moby Dock“. Once you know that you can probably bluff your way through at least an introductory session on Docker :).

It’s been hard to miss the increasing presence of Docker, particularly if you work in cloud technology. Each of the major cloud providers has raced to provide container services (Azure, AWS, GCE and IBM) and these platforms see benefits in the higher density hosting they can achieve with minimal changes to existing infrastructure.

In this post I’m going to look at first steps to getting Docker running in Azure. There are other posts about that will cover this but there are a few gotchas along the way that I will cover off here.

First You Need a Beard

Anyone…
