Preparing your Docker container for Azure App Services

Similar to other cloud platforms, Azure is starting to leverage containers to provide flexible managed environments for us to run applications. The App Service on Linux is one such case: it allows us to bring our own home-baked Docker images containing all the tools we need to make our apps work.

This service is still in preview and obviously has a few limitations:

  • Only one container per service instance, in contrast to Azure Container Instances.
  • No VNET integration.
  • An SSH server is required to attach to the container.
  • Single port configuration.
  • No ability to limit the container’s memory or processor.

Having said this, we do get a good 50% discount for the time being, which is not a bad thing.

The basics

In this post I will cover how to set up an SSH server in our Docker images so that we can inspect and debug our containers hosted in the Azure App Service for Linux.

It is important to note that running SSH in containers is a widely discouraged practice and should be avoided in most cases. Azure App Services mitigates the risk by only granting SSH port access to the Kudu infrastructure, which we tunnel through. Moreover, we don’t need SSH at all when we are not running in the App Services engine, so we can protect ourselves by only enabling SSH when a flag such as the ENABLE_SSH environment variable is present.

Running an SSH daemon with our app also means that we will have more than one process per container. For cases like these, Docker allows us to enable an init manager per container that makes sure no orphaned child processes are left behind on container exit. Since enabling this feature requires passing a flag to docker run, which App Services does not allow for security reasons, we must package and configure this init binary ourselves when building the Docker image.

Building our Docker image

TLDR: docker pull xynova/appservice-ssh

The basic structure looks like the following:

The SSH configuration

The /ssh-config/sshd_config specifies the SSH server configuration required by App Services to establish connectivity with the container:

  • The daemon needs to listen on port 2222.
  • Password authentication must be enabled.
  • The root user must be able to log in.
  • Ciphers and MACs security settings must be the ones displayed below.
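A minimal sketch of such a config, modeled on the sshd_config example Microsoft documents for App Services (verify against the current docs before relying on it):

Port 2222
ListenAddress 0.0.0.0
LoginGraceTime 180
X11Forwarding yes
Ciphers aes128-cbc,3des-cbc,aes256-cbc,aes128-ctr,aes192-ctr,aes256-ctr
MACs hmac-sha1,hmac-sha1-96
StrictModes yes
SyslogFacility DAEMON
PasswordAuthentication yes
PermitEmptyPasswords no
PermitRootLogin yes
Subsystem sftp internal-sftp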

The container startup script

The entrypoint.sh script manages the application startup:

If the ENABLE_SSH environment variable equals true, the setup_ssh() function does the following:

  • Changes the root user password to Docker! (required by App Services).
  • Generates the SSH host keys required by SSH clients to authenticate the SSH server.
  • Starts the SSH daemon in the background.

App Services requires the container to have an application listening on the configurable public service port (80 by default). Without this listener, the container will be flagged as unhealthy and restarted indefinitely. The start_app() function, as its name implies, runs a web server (http-echo) that listens on port 80 and just prints all incoming request headers back out to the response.
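The full script is shown as an image in the original post; a minimal sketch of the same idea follows (the http-echo invocation and paths are illustrative assumptions):

#!/bin/sh
set -e

setup_ssh() {
    # App Services expects the root password to be exactly "Docker!"
    echo "root:Docker!" | chpasswd
    # Generate the host keys SSH clients use to authenticate the server
    ssh-keygen -A
    # Start the SSH daemon in the background
    /usr/sbin/sshd
}

start_app() {
    # Hypothetical invocation: serve the echo app on the public port
    exec /usr/local/bin/http-echo -port 80
}

if [ "$ENABLE_SSH" = "true" ]; then
    setup_ssh
fi

start_app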

The Dockerfile

There is nothing too fancy about the Dockerfile either. I use the multistage build feature to compile the http-echo server and then copy it across to an alpine image in the PACKAGING STAGE. This second stage also installs openssh, tini and sets up additional configs.

Note that the init process manager is started through ENTRYPOINT ["/sbin/tini","--"] clause, which in turn receives the monitored entrypoint.sh script as an argument.
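The Dockerfile itself appears as an image in the post; its shape is roughly the following sketch (image tags, paths and the http-echo build step are assumptions):

# BUILD STAGE: compile the http-echo web server
FROM golang:1.8-alpine AS build-stage
COPY http-echo/ /go/src/http-echo/
WORKDIR /go/src/http-echo
RUN go build -o /go/bin/http-echo .

# PACKAGING STAGE: assemble the runtime image
FROM alpine:3.6
RUN apk add --no-cache openssh tini
COPY --from=build-stage /go/bin/http-echo /usr/local/bin/http-echo
COPY ssh-config/sshd_config /etc/ssh/sshd_config
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
EXPOSE 80 2222
# tini supervises entrypoint.sh and reaps any orphaned child processes
ENTRYPOINT ["/sbin/tini","--"]
CMD ["/entrypoint.sh"]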

Let us build the container image by executing docker build -t xynova/appservice-ssh docker-ssh. You are free to tag the container and push it to your own Docker repository.

Trying it out

First we create our App Service on Linux instance and set the custom Docker container we will use (xynova/appservice-ssh if you want to use mine). We then set the ENABLE_SSH=true environment variable to activate the SSH server on container startup.
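If you prefer the command line over the portal, something along these lines should work (resource names are hypothetical; check az webapp --help for your CLI version):

az webapp config container set --resource-group my-rg --name my-app \
    --docker-custom-image-name xynova/appservice-ssh
az webapp config appsettings set --resource-group my-rg --name my-app \
    --settings ENABLE_SSH=true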

Now we can make a GET request to the App Service URL to trigger a container download and activation. If everything works, you should see something like the following:

One thing to notice here is the X-Arr-Ssl header. This header is passed down by the Azure App Service internal load balancer when the app is being browsed over SSL. You can check for this header if you want to trigger HTTP-to-HTTPS redirections.

Moving on, we jump into the Kudu dashboard as follows:

Select the SSH option from the Debug console (the Bash option will take you to the Kudu container instead).

DONE! We are now inside the container.

Happy hacking!

Making application configuration files dynamic with confd and Azure Redis

Service discovery and hot reconfiguration is a common problem we face in cloud development nowadays. In some cases we can rely on an orchestration engine like Kubernetes to do all the work for us. In other cases we can leverage a configuration management system and do the orchestration ourselves. However, there are still some cases where either of these solutions is impractical or just too complex for the immediate problem… and you don’t have a Consul cluster at hand either :(.

confd to the rescue

Confd is a binary written in Go that allows us to make configuration files dynamic by providing a templating engine driven by backend data stores like etcd, Consul, DynamoDB, Redis, Vault and ZooKeeper. It is commonly used to let classic load balancers like Nginx and HAProxy automatically reconfigure themselves when new healthy upstream services come online under different IP addresses.

NOTE: For the sake of simplicity I will use a very simple example to demonstrate how to use confd to remotely reconfigure an Nginx route by listening to changes performed against an Azure Redis Cache backend. However, this idea can be extrapolated to solve service discovery problems whereby application instances continuously report their health and location to a Service Registry (in our case Azure Redis) that is monitored by the Load Balancer service in order to reconfigure itself if necessary.

https://www.nginx.com/blog/service-discovery-in-a-microservices-architecture

Just as a side note, confd was created by Kelsey Hightower (now Staff Developer Advocate, Google Cloud Platform) in the early Docker and CoreOS days. If you haven’t heard of Kelsey, I totally recommend searching YouTube for any of his talks.

Prerequisites

Azure Redis Cache

Redis, our service discovery data store, will be listening on XXXX-XXXX-XXXX.redis.cache.windows.net:6380 (where XXXX-XXXX-XXXX is your DNS prefix). confd will monitor changes on the /myapp/suggestions/drink cache key and then update the Nginx configuration accordingly.

Container images

confd + nginx container image
confd’s support for a password-protected Redis backend is not yet available in the stable or alpha releases as of August 2017. I explain how to easily compile the binary and include it in an Nginx container in a previous post.

TLDR: docker pull xynova/nginx-confd

socat container image
confd is currently unable to connect to Redis through TLS (required by Azure Redis Cache). To overcome this limitation we will use a protocol translation tool called socat, which I also talk about in a previous post.

TLDR: docker pull xynova/socat

Preparing confd templates
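The template files appear as images in the original post; the general shape is a template resource config under /etc/confd/conf.d plus a Go template under /etc/confd/templates, along the lines of this sketch (everything except the watched key is illustrative):

# /etc/confd/conf.d/myapp.toml
[template]
src = "myapp-nginx.tmpl"
dest = "/etc/nginx/conf.d/default.conf"
keys = [
    "/myapp/suggestions/drink",
]
reload_cmd = "nginx -s reload"

# /etc/confd/templates/myapp-nginx.tmpl
server {
    listen 80;
    location / {
        return 200 "Our suggested drink is {{getv "/myapp/suggestions/drink"}}\n";
    }
}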

Driving Nginx configuration with Azure Redis

We first start a xynova/nginx-confd container and mount our prepared confd configurations as a volume under the /etc/confd path. We also bind container port 80 to 8080 on localhost so that we can access Nginx by browsing to http://localhost:8080.
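The run command is shown as a screenshot in the original; it is roughly equivalent to this sketch (local paths assumed):

docker run -it --rm --name nginx \
    -p 8080:80 \
    -v $(pwd)/confd:/etc/confd \
    xynova/nginx-confd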


The interactive session logs show us that confd fails to connect to Redis on 127.0.0.1:6379 because there is no Redis service inside the container.

To fix this we bring in xynova/socat to create a tunnel that confd can use to talk to Azure Redis Cache in the cloud. We open a new terminal and type the following (note: replace XXXX-XXXX-XXXX with your own Azure Redis prefix).
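A sketch of that command, assuming the image’s entrypoint is socat as described in my earlier post:

docker run -it --rm --name socat \
    --net container:nginx \
    xynova/socat \
    -v TCP-LISTEN:6379,fork,reuseaddr \
    openssl-connect:XXXX-XXXX-XXXX.redis.cache.windows.net:6380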

Notice that by specifying the --net container:nginx option, I am instructing the xynova/socat container to join the xynova/nginx-confd container’s network namespace. This is the way we get containers to share their own private localhost sandbox.

Now looking back at our interactive logs, we can see that confd is talking to Azure Redis but cannot find the /myapp/suggestions/drink cache key.

Let’s just set a value for that key:
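One way to do this from another terminal, again joining the nginx container’s network namespace so that localhost:6379 resolves to the tunnel (the redis image and value are illustrative):

docker run -it --rm --net container:nginx redis \
    redis-cli -a THE-XXXX-PASSWORD set /myapp/suggestions/drink covfefe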

confd is now happily synchronized with Azure Redis and the Nginx service is up and running.

We now browse to http://localhost:8080 and test our container composition:

Covfefe… should we fix that?
We just set the /myapp/suggestions/drink key to coffee.

Watch how confd notices the change and proceeds to update the target config files.

Now if we refresh our browser we see:

Happy hacking.


Build from source and package into a minimal image with the new Docker Multi-Stage Build feature

Confd is a binary written in Go that can help us make configuration files dynamic. It achieves this by providing a templating engine that is driven by backend data stores like etcd, Consul, DynamoDB, Redis, Vault and ZooKeeper.

https://github.com/kelseyhightower/confd

A few days ago I started putting together a BYO load-balancing PoC where I wanted to use confd and Nginx. I realised, however, that some features I needed from confd had not yet been released. Not a problem: I was able to compile the master branch and package the resulting binary into an Nginx container all in one go, without even having Golang installed on my machine. Here is how:

confd + Nginx with Docker Multi-Stage builds

First I will create my container startup script, docker-confd/nginx-confd.sh.
This script launches nginx and confd in the container and tracks both processes so that the container exits if either of them fails.

Normally you want to have only one process per container. In my particular case I have inter-process signaling between confd and Nginx, and therefore it is easier for me to keep both processes together.
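The script itself is shown as an image in the post; a minimal sketch of the idea might look like this (the confd flags are assumptions, and the Redis backend polls on an interval rather than watching):

#!/bin/sh

# Launch both processes in the background
nginx -g 'daemon off;' &
NGINX_PID=$!
confd -backend redis -node 127.0.0.1:6379 -interval 10 &
CONFD_PID=$!

# Exit as soon as either process dies so the container fails fast
while kill -0 "$NGINX_PID" 2>/dev/null && kill -0 "$CONFD_PID" 2>/dev/null; do
    sleep 1
done
exit 1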

Now I create my Multi-Stage build Dockerfile: docker-confd/Dockerfile.
I denote a build stage by using the AS <STAGE-NAME> keyword: FROM golang:1.8.3-alpine3.6 AS confd-build-stage. I can then reference this stage by name further down when copying the resulting binary into the Nginx container.
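A sketch of such a Dockerfile (the build steps are assumptions, as confd’s repository has shipped different Makefile targets over time):

# BUILD STAGE: compile confd from the master branch
FROM golang:1.8.3-alpine3.6 AS confd-build-stage
RUN apk add --no-cache git make
RUN git clone https://github.com/kelseyhightower/confd.git \
    /go/src/github.com/kelseyhightower/confd
WORKDIR /go/src/github.com/kelseyhightower/confd
RUN make build

# PACKAGING STAGE: copy the compiled binary into a minimal Nginx image
FROM nginx:1.13-alpine
COPY --from=confd-build-stage \
    /go/src/github.com/kelseyhightower/confd/bin/confd /usr/local/bin/confd
COPY nginx-confd.sh /usr/local/bin/nginx-confd.sh
RUN chmod +x /usr/local/bin/nginx-confd.sh
CMD ["/usr/local/bin/nginx-confd.sh"]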

Now I build my image by executing docker build -t confd-nginx-local docker-confd-nginx.

DONE! Just about 15MB added on top of the Nginx alpine base image.

Read more about the Multi-Stage Build feature on the Docker website.

Happy hacking.

SSL Tunneling with socat in Docker to safely access Azure Redis on port 6379

Redis Cache is an advanced key-value store that we should all have come across in one way or another by now. Azure, AWS and many other cloud providers have fully managed offerings for it, which is “THE” way we want to consume it. As a little bit of insight, Redis itself was designed for use within a trusted private network and does not support encrypted connections. Public offerings like Azure use TLS reverse proxies to overcome this limitation and provide security around the service.

However, some Redis client libraries out there do not talk TLS. This becomes a problem when they are part of other tools that you want to compose your applications with.

Solution? We bring in something that can help us do protocol translation.

socat – Multipurpose relay (SOcket CAT)

Socat is a command line based utility that establishes two bidirectional byte streams and transfers data between them. Because the streams can be constructed from a large set of different types of data sinks and sources (see address types), and because lots of address options may be applied to the streams, socat can be used for many different purposes.

https://linux.die.net/man/1/socat

In short: it is a tool that can establish a communication between two points and manage protocol translation between them.

An interesting fact is that socat is currently used to port-forward docker exec sessions onto nodes in Kubernetes. It does this by creating a tunnel from the API server to the nodes.

Packaging socat into a Docker container

One of the great benefits of Docker is that it allows you to work in sandbox environments. These environments are then fully transportable and can eventually become part of your overall application.

The following procedure prepares a container that includes the socat binary and common certificate authorities required for public TLS certificate chain validation.

We first create our docker-socat/Dockerfile:
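The file itself is an image in the original post; a minimal version consistent with what follows might be (base image and package names are assumptions):

FROM alpine:3.6
# socat for the tunnel, ca-certificates for public TLS chain validation
RUN apk add --no-cache socat ca-certificates
ENTRYPOINT ["socat"]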

Now we build a local Docker image by executing docker build -t socat-local docker-socat. You are free to push this image to a Docker Registry at this point.

Creating TLS tunnel into Azure Redis

To access Azure Redis we need two things:

  1. The FQDN: XXXX-XXXX-XXXX.redis.cache.windows.net:6380
    where all the X’s represent your DNS name.
  2. The access key, found under the Access Keys menu of your Cache instance. I will call it THE-XXXX-PASSWORD

Let’s start our socat tunnel by spinning up the container we just built an image for. Notice I am binding port 6379 to my desktop so that I can connect to the tunnel from localhost:6379 on my machine.
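A sketch of that run command, matching the arguments discussed below:

docker run -d --name socat \
    -p 6379:6379 \
    socat-local \
    -v TCP-LISTEN:6379,fork,reuseaddr \
    openssl-connect:XXXX-XXXX-XXXX.redis.cache.windows.net:6380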

Now let’s have a look at the arguments I am passing to socat (which is invoked automatically thanks to the ENTRYPOINT ["socat"] instruction we included when building the container image).

  1. -v
    For checking logs when doing docker logs socat
  2. TCP-LISTEN:6379,fork,reuseaddr
    – Start a socket listener on port 6379
    – fork to allow for subsequent connections (otherwise a one-off)
    – reuseaddr to allow socat to restart and use the same port (in case a previous one is still held by the OS)

  3. openssl-connect:XXXX-XXXX-XXXX.redis.cache.windows.net:6380
    – Create a TLS connect tunnel to the Azure Redis Cache.

Testing connectivity to Azure Redis

Now I will just test my tunnel using redis-cli, which I can also run from a container. In this case THE-XXXX-PASSWORD is the Redis Access Key.
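A sketch of the test using the official redis image (flags assumed):

docker run -it --rm --net host redis \
    redis-cli -h localhost -p 6379 -a THE-XXXX-PASSWORD ping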

The thing to notice here is the --net host flag. This instructs Docker not to create a new virtual NIC and namespace to isolate the container network, but instead to use the host’s (my desktop’s) interface. This means that localhost in the container is really localhost on my desktop.

If everything is set up properly and outgoing connections on port 6379 are allowed, you should get a PONG message back from Redis.

Happy hacking!

Getting started with Ubuntu on Windows (Windows Subsystem for Linux)

This week I was building a Linux server (Ubuntu 14) in Azure. I’d deployed my new Ubuntu server and went to connect to it, but I was on a brand new laptop with no SSH tools installed. Damn. As I was about to go and get my usual Windows favorite SSH tools, I remembered a session from Build 2017 where Microsoft started talking more loudly about the Windows Subsystem for Linux. Yes, Ubuntu on Windows, with SUSE and Fedora coming soon. TechCrunch story here.

Now, it is still listed as Beta, but the changes appear to be coming pretty fast. I figured it should have more than enough for what I needed, and I could hopefully avoid having to install other 3rd-party tools and maybe even finally say goodbye to Cygwin. So I dove in, and here is my quick-start guide to get you started.

Prerequisite

Your computer must be running (at a minimum) a 64-bit version of Windows 10 Anniversary Update, OS Build 14393.

Installing Windows Subsystem for Linux

To configure your Windows 10 machine to accept WSL go to Windows => Settings and select Update & Security.

Select For developers and enable Developer Mode.

Agree to the warning.

Now open Turn Windows Features on or off and select the checkbox for Windows Subsystem for Linux 

Restart your workstation

After the restart, from an elevated command prompt type bash to attempt to start a Bash shell. As it is the first time, you will be prompted to install Ubuntu.

Following installation you will be prompted to create a Linux user. This is purely for the Linux environment, so it does not have anything to do with your Windows login and password.

Using SSH from WSL

Now that I have a Bash shell on my Windows laptop, let’s use SSH to connect to my new Ubuntu server.
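For example, with a hypothetical username and DNS name:

ssh azureuser@my-ubuntu-server.australiaeast.cloudapp.azure.com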

And I’m in. Happy days.

Ubuntu security hardening for the cloud.

Hardening Ubuntu Server Security For Use in the Cloud

The following describes a few simple means of improving Ubuntu Server security for use in the cloud. Many of the optimizations discussed below apply equally to other Linux-based distributions, although the commands and settings will vary somewhat.

Azure cloud specific recommendations

  1. Use private key and certificate based SSH authentication exclusively and never use passwords.
  2. Never employ common usernames such as root, admin or administrator.
  3. Change the default public SSH port away from 22.

AWS cloud specific recommendations

AWS makes available a small list of recommendations for securing Linux in their cloud security whitepaper.

Ubuntu / Linux specific recommendations

1. Disable the use of all insecure protocols (FTP, Telnet, RSH and HTTP) and replace them with their encrypted counterparts such as sFTP, SSH, SCP and HTTPS, removing the packages that serve them (package names vary by release):

sudo apt-get purge openbsd-inetd xinetd nis tftpd telnetd rsh-server

2. Uninstall all unnecessary packages

dpkg --get-selections | grep -v deinstall
dpkg --get-selections | grep postgres
sudo apt-get remove packageName

For more information: http://askubuntu.com/questions/17823/how-to-list-all-installed-packages

3. Run the most recent kernel version available for your distribution

For more information: https://wiki.ubuntu.com/Kernel/LTSEnablementStack

4. Disable root SSH shell access

Open the following file…

sudo vim /etc/ssh/sshd_config

… then change the following setting to no.

PermitRootLogin no

For more information: http://askubuntu.com/questions/27559/how-do-i-disable-remote-ssh-login-as-root-from-a-server

5. Grant shell access to as few users as possible and limit their permissions

Limiting shell access is an important means of securing a system. Shell access is inherently dangerous because of the risk of unlawful privilege escalation, as with any operating system; stolen credentials are a concern too.

Open the following file…

sudo vim /etc/ssh/sshd_config

… then add an entry for each user to be allowed.

AllowUsers jim tom sally

For more information: http://www.cyberciti.biz/faq/howto-limit-what-users-can-log-onto-system-via-ssh/

6. Limit or change the IP addresses SSH listens on

Open the following file…

sudo vim /etc/ssh/sshd_config

… then add the following.

ListenAddress <IP ADDRESS>

For more information:

http://askubuntu.com/questions/82280/how-do-i-get-ssh-to-listen-on-a-new-ip-without-restarting-the-machine

7. Restrict all forms of access to the host by individual IPs or address ranges

TCP wrapper based access lists can be included in the following files.

/etc/hosts.allow
/etc/hosts.deny

Note: Any changes to your hosts.allow and hosts.deny files take immediate effect; no restarts are needed.

Patterns

ALL : 123.12.

Would match all hosts in the 123.12.0.0 network.

ALL : 192.168.0.1/255.255.255.0

An IP address and subnet mask can be used in a rule.

sshd : /etc/sshd.deny

If the client list begins with a slash (/), it is treated as a filename. In the above rule, TCP wrappers looks up the file sshd.deny for all SSH connections.

sshd : ALL EXCEPT 192.168.0.15

When placed in /etc/hosts.deny, this will allow SSH connections only from the machine with IP address 192.168.0.15 and block all other connection attempts. You can use the options allow or deny to allow or restrict access on a per-client basis in either of the files.

in.telnetd : 192.168.5.5 : deny
in.telnetd : 192.168.5.6 : allow

Warning: While restricting system shell access by IP address, be very careful not to lose access to the system by locking the administrative user out!

For more information: https://debian-administration.org/article/87/Keeping_SSH_access_secure

8. Check listening network ports

Check listening ports and uninstall or disable all unessential or insecure protocols and daemons.

netstat -tulpn

9. Install Fail2ban

Fail2ban is a means of dealing with unwanted system access attempts over any protocol against a Linux host. It uses rule sets to ban source IPs for a configurable length of time when they match activity patterns such as spam, (D)DoS or brute-force attacks.

“Fail2Ban is an intrusion prevention software framework that protects computer servers from brute-force attacks. Written in the Python programming language, it is able to run on POSIX systems that have an interface to a packet-control system or firewall installed locally, for example, iptables or TCP Wrapper.” – Wikipedia

For more information: https://www.digitalocean.com/community/tutorials/how-to-protect-ssh-with-fail2ban-on-ubuntu-14-04

10. Improve the robustness of TCP/IP

Add the following to /etc/sysctl.d/10-network-security.conf to harden your networking configuration…

sudo vim /etc/sysctl.d/10-network-security.conf

# Ignore ICMP broadcast requests
net.ipv4.icmp_echo_ignore_broadcasts = 1

# Disable source packet routing
net.ipv4.conf.all.accept_source_route = 0
net.ipv6.conf.all.accept_source_route = 0 
net.ipv4.conf.default.accept_source_route = 0
net.ipv6.conf.default.accept_source_route = 0

# Ignore send redirects
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0

# Block SYN attacks
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_syn_retries = 5

# Log Martians
net.ipv4.conf.all.log_martians = 1
net.ipv4.icmp_ignore_bogus_error_responses = 1

# Ignore ICMP redirects
net.ipv4.conf.all.accept_redirects = 0
net.ipv6.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0 
net.ipv6.conf.default.accept_redirects = 0

# Ignore Directed pings
net.ipv4.icmp_echo_ignore_all = 1

And load the new rules as follows.

sudo service procps start

For more information: https://blog.mattbrock.co.uk/hardening-the-security-on-ubuntu-server-14-04/

11. If you are serving web traffic, install mod-security

Web application firewalls can be helpful in warning of and fending off a range of attack vectors including SQL injection, (D)DOS, cross-site scripting (XSS) and many others.

“ModSecurity is an open source, cross-platform web application firewall (WAF) module. Known as the “Swiss Army Knife” of WAFs, it enables web application defenders to gain visibility into HTTP(S) traffic and provides a power rules language and API to implement advanced protections.”

For more information: https://modsecurity.org/

12. Install a firewall such as IPtables

IPtables is a highly configurable and very powerful Linux firewall which has a great deal to offer in terms of bolstering host-based security.

iptables is a user-space application program that allows a system administrator to configure the tables provided by the Linux kernel firewall (implemented as different Netfilter modules) and the chains and rules it stores.” – Wikipedia.

For more information: https://help.ubuntu.com/community/IptablesHowTo

13. Keep all packages up to date at all times and install security updates as soon as possible

 sudo apt-get update        # Fetch the list of available updates
 sudo apt-get upgrade       # Upgrade the currently installed packages
 sudo apt-get dist-upgrade  # Upgrade, adding or removing packages as needed

14. Install multifactor authentication for shell access

Nowadays it’s possible to use multi-factor authentication for shell access thanks to Google Authenticator.
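A sketch of the setup on Ubuntu, following the tutorial linked below (verify the PAM and sshd details before logging out, or you may lock yourself out):

sudo apt-get install libpam-google-authenticator
google-authenticator    # run as the login user to generate the secret

# Then add to /etc/pam.d/sshd:
#   auth required pam_google_authenticator.so
# and set in /etc/ssh/sshd_config:
#   ChallengeResponseAuthentication yes
sudo service ssh restart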

For more information: https://www.digitalocean.com/community/tutorials/how-to-set-up-multi-factor-authentication-for-ssh-on-ubuntu-14-04

15. Add a second level of authentication behind every web based login page

Stolen passwords are a common problem, whether as a result of a vulnerable web application, an SQL injection, a compromised end-user computer or something else altogether. Adding a second layer of protection using .htaccess authentication, with credentials stored on the filesystem rather than in a database, is great added security.

For more information: http://stackoverflow.com/questions/6441578/how-secure-is-htaccess-password-protection

Performance Tuning Ubuntu Server For Use in Azure cloud

The following describes how to performance-tune Ubuntu Server virtual machines for use in Azure. This article focuses on Ubuntu Server because it’s better established in Azure at this time, although it’s worth mentioning that Debian offers better performance and stability overall, albeit at the cost of some of the more recent functionality support available in Ubuntu. Regardless, many of the optimizations discussed below apply equally to both, although commands and settings may vary occasionally.

Best practice recommendations from Microsoft.

  1. Don’t use the OS disk for other workloads.
  2. Use a 1TB disk minimum for all data workloads.
  3. Use storage accounts in the same datacenter as your virtual machines.
  4. In need of additional IOPs? Add more, not bigger disks.
  5. Limit the number of disks in a storage account to no more than 40.
  6. Use Premium storage for blobs backed by SSDs where necessary.
  7. Disable ‘barriers’ for all premium disks using ‘Readonly’ or ‘None’ caching.
  8. Storage accounts have a limit of 20K IOPs and 500TB capacity.
  9. Enable ‘Read’ caching for small read datasets only, disable it if not.
  10. Don’t store your Linux swapfile on the temporary drive provided by default.
  11. Use EXT4 filesystem.
  12. In Azure IOPs are throttled according to VM size so choose accordingly.

Linux specific optimisations you might also consider.

1. Decrease memory ‘swappiness’ and increase inode caching:

echo vm.swappiness=10 | sudo tee -a /etc/sysctl.conf
echo vm.vfs_cache_pressure=50 | sudo tee -a /etc/sysctl.conf

(Note: sudo echo … >> /etc/sysctl.conf would fail because the redirection is performed by the unprivileged shell; tee -a avoids this.)

For more information: http://askubuntu.com/questions/184217/why-most-people-recommend-to-reduce-swappiness-to-10-20

2. Disable CPU scaling / run at maximum frequency all the time:

sudo chmod -x /etc/init.d/ondemand

For more information: http://askubuntu.com/questions/523640/how-i-can-disable-cpu-frequency-scaling-and-set-the-system-to-performance

3. Mount all disks with ‘noatime’ and ‘nobarrier’ (see above) options:

sudo vim /etc/fstab

Add ‘noatime,nobarrier’ to the mount options of all disks.

For more information: https://wiki.archlinux.org/index.php/fstab

4. Upgrade to a more recent Ubuntu kernel image and remove the old:

sudo aptitude update
sudo aptitude search linux-image
sudo aptitude install -y linux-image-4.4.0-28-generic
sudo aptitude remove -y linux-image-3.19.0-65-generic

In the example above the latest available kernel version is ‘linux-image-4.4.0-28-generic’ and the version currently installed was ‘linux-image-3.19.0-65-generic’, but these will change of course.

5. Change IO scheduler to something more suited to SSDs (i.e. deadline):

Edit the grub defaults file.

sudo vim /etc/default/grub

Change the following line from

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"

to

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash elevator=deadline"

Then run

sudo update-grub2

For more information: http://stackoverflow.com/questions/1009577/selecting-a-linux-i-o-scheduler

6. Mount a suitably sized data disk:

First start by creating a new 1TB disk using the Azure CLI.

https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-linux-classic-attach-disk/

Partition the new disk and format it in ext4 using the following script.

#!/bin/sh
hdd="/dev/sdc"
for i in $hdd; do
echo "n
p
1


w
" | fdisk $i; mkfs.ext4 ${i}1; done

The two blank lines accept fdisk’s default first and last sectors, and the filesystem is created on the new partition (/dev/sdc1) rather than on the raw device.

Mount the disk.

mkdir /mnt/data/
mount -t ext4 /dev/sdc1 /mnt/data/

Obtain UUID of newly mounted disk.

blkid /dev/sdc1

Add the following to /etc/fstab.

UUID=<NEW DISK UUID>       /mnt/data        ext4   noatime,defaults,discard        0 0

7. Add a swap file:

sudo dd if=/dev/zero of=/mnt/data/swapfile bs=1G count=32
sudo chmod 600 /mnt/data/swapfile
sudo mkswap /mnt/data/swapfile
sudo swapon /mnt/data/swapfile
echo "/mnt/data/swapfile   none    swap    sw    0   0" | sudo tee -a /etc/fstab

8. Enable Linux Kernel TRIM support for SSD drives:

sudo sed -i 's/exec fstrim-all/exec fstrim-all --no-model-check/g' /etc/cron.weekly/fstrim

For more information: https://www.leaseweb.com/labs/2013/12/ubuntu-14-04-lts-supports-trim-ssd-drives/


Deploy Hardened HA-Proxy Azure VM from VM Depot (Microsoft Open Technologies)

In this post, we will discuss how to deploy various VM images developed by the community from VM Depot (Microsoft Open Technologies).

Microsoft Azure Cross Platform Command Line (X-Plat CLI)

I blogged about the Microsoft Azure Cross Platform Command Line previously. This post will continue to explore Microsoft’s dedication to open-source technologies.

Firstly, let’s quickly prepare our tools to run the Azure X-Plat CLI:

1. I am using my Windows machine, running the Azure Command Prompt as Administrator (alternatively, you can use Node.js on Windows).

2. Install the Azure tools if you have not already, by leveraging NPM (Node Package Manager):

  • npm install azure-cli --global


Let’s test it by typing the command: azure


Hooray! If you noticed from my previous blog, Microsoft changed the branding from Windows Azure: Microsoft’s Cloud Platform to Microsoft Azure: Microsoft’s Cloud Platform. Next, download your Azure account details if you have not already, by executing:

  • azure account download

Import the *.publishsettings file You downloaded using:

  • azure account import

Delete the *.publishsettings file after importing it. It is also a good habit to always set the default subscription before running any script, to avoid an awkward deployment to the wrong subscription:

  • azure account set


Deploy VM using VM Depot Community Image

Next we go to VM Depot to look for the image we will deploy to our Azure subscription.

The HA-Proxy 1.4.18 on Hardened Ubuntu 12.04 LTS image is available there! The best part is that this image has been tweaked and hardened. HA-Proxy provides reliable layer-7 load balancing, which is a nice option to have together with the Azure ILB (Internal Load Balancer).


Now how do we deploy this image? In this post we will try the deployment script method.

Click the deployment script icon on the page and we get the script to deploy this image:


Let’s modify the script by changing DNS_PREFIX to our service name, USER_NAME to the admin user name, and PASSWORD to the admin user’s password (a strong, complex 8-character password is required).
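For reference, VM Depot deployment scripts follow roughly this shape (the image identifier below is hypothetical):

azure vm create DNS_PREFIX -o vmdepot-40-1-16 -l "West US" USER_NAME PASSWORD --ssh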

Run the script above:


Let’s jump to our good friend Mr. PowerShell and run the Get-AzureVM command to confirm:


And that’s it! We can SSH to our hardened HA-Proxy Azure VM from VM Depot.


Microsoft Azure Cross Platform Command Line Step by Step

Microsoft Azure is not just about Windows; Microsoft Azure also supports Linux workloads. Spinning up Linux VMs in Microsoft’s fabric offers alternative options for open-source technologies with Microsoft Azure services.

Microsoft also provides the Azure Cross-Platform Command-Line Interface (X-Plat CLI), a set of open-source, cross-platform commands for managing the Microsoft Azure platform. The X-Plat CLI has a few top-level commands which correspond to different sets of Microsoft Azure features. Typing “azure” will list each of the sub-commands.

X-Plat CLI command line tool is implemented in JavaScript (powered by Node.js).

Deploying Linux VM from Microsoft Azure Management Portal
This blog provides step-by-step instructions for deploying a Linux VM. It is quite straightforward to deploy a secure Linux VM by providing a PEM certificate associated with a private key. With this certificate it is possible to create a “password-less” Linux VM when using the Azure Management Portal “From Gallery” functionality or the command-line tool.

The instructions below provide a step-by-step guide to generating a key pair with Git; its bundled OpenSSL is used on this occasion.

  • Install and launch Git Bash on your Windows computer, or launch Terminal on OS X
  • Use the command below to generate a key pair:

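    (The original screenshot is gone; the usual OpenSSL invocation for this step, with file names matching the rest of this post, is:)

    openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout kloudadmin.key -out kloudadmin.pem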

  • Fill in some information for the key pair


Once it is done, save the .pem file – this is the public key to be “attached” to the new Windows Azure VM. Store the “.key” file safely. On Linux or Mac OS X, the command chmod 0600 *.key can be leveraged to secure it from unwanted access.

Create a new Azure Linux VM by clicking the New button on the bottom left of the Azure Management Portal and selecting Compute > Virtual Machine > From Gallery > pick the Linux OS (it is recommended to create the Affinity Group, Storage Account and Virtual Network). Ensure you do not check the “Provide a Password” checkbox; instead upload the certificate, the .pem file from above. Simply follow the rest of the deployment wizard to complete the deployment.


Connecting to Linux VM
An SSH client is needed to connect to the Linux VM:

  • Linux: Use SSH
  • Mac OSX: Use SSH built-in the Terminal App
  • Windows: Use SSH built-in Git Bash shell or Download and use Putty

Below is a sample SSH command, simply passing the various parameters:

$ ssh -i ./kloudadmin.key kloudadmin@kloudlinux1.cloudapp.net -p 22

  • ssh = the command we use to connect to the Linux VM
  • -i ./kloudadmin.key = points to the private .key file associated with the .pem used for the Linux VM
  • kloudadmin@kloudlinux1.cloudapp.net = Linux VM user name @ VM DNS name
  • -p 22 = the port to connect to; 22 is the default endpoint (a different endpoint can be specified)


Installing Cross Platform Command Line (X-Plat CLI)
There are a few ways to install the X-Plat CLI: using installer packages for Windows and OS X, or a combination of Node.js and NPM for Linux.

Node.js and npm via nave
Nave is a tool for handling Node.js installations. Nave is to Node.js what RVM is to Ruby. It pulls directly from nodejs.org.

Follow below instructions:

Note: # = explanation; $ command = execute on Linux VM

$ sudo su -

# install node.js through nave
$ wget https://raw.github.com/isaacs/nave/master/nave.sh
$ chmod +x nave.sh
$ ./nave.sh install 0.10.15
$ ./nave.sh use 0.10.15
$ node -v


#install npm

$ curl -s https://npmjs.org/install.sh > npm-install-$$.sh
$ sh npm-install-*.sh


Microsoft Azure X-Plat CLI
Use the npm command to install the Azure X-Plat CLI:

#install X-Plat CLI
$ npm install azure-cli -g

Using Microsoft Azure X-Plat CLI

Type $ azure to test and show the sub-commands


Microsoft Azure Publish Settings File
The Microsoft Azure publish settings file needs to be downloaded and imported in order to create resources on the related subscription.

$azure account download
$azure account import “path to the publishing file”

Note: You need a browser to download the publish settings file, or you can download the file on a local machine and upload it to the Azure Linux VM.
Example Azure account import: $ azure account import My.publishsettings


Click here for more detailed info on how to use the Microsoft Azure X-Plat CLI.

When Will Microsoft Drop “Windows” from the Name of Windows Intune?

It has been a pleasure to observe a truly significant change in the thinking at Microsoft.  Slowly, Microsoft is realizing that not everything is about Windows anymore.  I say this as someone who is a former employee of Microsoft.  I am a regular user of Windows.  I personally think that Windows is a terrific product and brand.  I run Windows 8.1 Update 1 on my notebook.  I also run Windows VMs.


But we live in a world of BYOD now.  For many users, IT no longer chooses the device that you use at work.  Moreover, even users who have a device dictated by IT will often use a secondary device that is a personal asset.  Some companies have sanctioned BYOD programs.  Other companies forbid BYOD, but users find ways around policies which are not well enforced.  To Microsoft’s credit, they have realized that BYOD is an inevitable trend.  Rather than fight it, they have chosen to embrace it.  This is very sensible because users are unwilling to be told what device they have to use. 


Microsoft’s move to embrace management of other platforms goes back to 2008.  Microsoft announced at the Microsoft Management Summit that System Center would support cross platform management of Unix and Linux servers with System Center Operations Manager 2007 R2.  This was a big shift for Microsoft which previously only supported management of Windows Server.   Why did Microsoft make this change?  Because they realized that all enterprises run a heterogeneous mix of servers.  While Windows Server may be the dominant server platform in most organizations, there is generally a small quantity of non-Microsoft servers which need to be managed.  For System Center to be a true enterprise class management tool, it needed to manage every server in the enterprise.


In 2012, Microsoft released System Center Configuration Manager 2012 SP1.  This was the first release of Config Manager which provided native management of Linux, Unix, and Mac OS X.  Microsoft also released a version of System Center Endpoint Protection that was compatible with Mac OS X and Linux.  Previous versions of Config Manager required 3rd party management extensions to manage these platforms. 


In April 2014, Microsoft announced that they would be changing the name of Windows Azure to Microsoft Azure.  Microsoft dropped “Windows” from the Azure product name to emphasize that Azure is not just a platform for running Windows VMs and .NET applications.  Microsoft Azure supports Linux, Java, PHP, Oracle, and other non-Microsoft technologies.  Microsoft’s goal is to rebrand itself as a company focused on public and hybrid clouds, not just clouds that run Windows Server.


Reviewing Microsoft’s recent history around cross platform management leads to the inevitable question:

When will Microsoft wake up and drop the name “Windows” from Windows Intune?

When Windows Intune began as a product back in 2010, it was developed out of the Windows Product Group. Windows Intune could only manage Windows PCs and Windows Client VMs. Every customer that purchased Windows Intune was also purchasing a Windows Client SA Upgrade subscription. Windows Intune and the Windows Client OS were deeply tied together.


All of that changed in 2012 with Windows Intune Wave C. Microsoft introduced new MDM features into the product. This provided a way for Windows Intune to manage iOS, Android, and Windows Phone devices. They changed the licensing from a per-device subscription which included a Windows Client SA Upgrade to a per-user subscription which could be purchased without the upgrade. Microsoft also moved the Windows Intune Product Group into the System Center Product Group. This made sense now that Windows Intune was evolving into a management product for multiple platforms. But the name has remained the same for multiple releases since 2012. The next release of Windows Intune is due in Q2/Q3. For more information on this release, please see my previous blog post:

https://blog.kloud.com.au/2014/05/04/windows-intune-vnext-coming-q2q3-2014/

 

There have been some great features announced for the next version of Windows Intune.  One notable oversight is the lack of any announcements regarding the name of the product.  Windows Intune is a cloud-based management solution for the BYOD era.  It is no longer about managing Windows PCs.  When will Microsoft wake up and change the name of the product to reflect its current usage in the market?  A great cloud service deserves a great name.  Hopefully Microsoft will give Windows Intune a name that reflects its true greatness as a solution for BYOD.  How about System Center Device Manager Online?