Using Ansible to create an inventory of your AWS resources

First published on Nivlesh’s personal blog at https://nivleshc.wordpress.com

Background

I was recently at a customer site, to perform an environment review of their AWS real-estate. As part of this engagement, I was going to do an inventory of all their AWS resources. Superficially, this sounds like an easy task, however when you consider the various regions that resources can be provisioned into, the amount of work required for a simple inventory can easily escalate.

Not being a big fan of manual work, I started to look at ways to automate this task. I quickly settled on Ansible as the tool of choice and not long after, I had two Ansible playbooks ready (the main and the worker playbook) to perform the inventory.

In this blog, I will introduce the two Ansible playbooks that I wrote. The first playbook is the main actor. This is where the variables are defined. This playbook iterates over the specified AWS regions, calling the worker playbook each time, to check if any resources have been provisioned in these regions. The output is written to comma separated value (.CSV) files (I am using semi-colons instead of commas), which can be easily imported into Microsoft Excel (or any spreadsheet program of your choice) for analysis.

Introducing the Ansible playbooks

The playbooks have been configured to check the following AWS resources

  • Virtual Private Cloud (VPC)
  • Subnets within the VPCs
  • Internet Gateways
  • Route Tables
  • Security Groups
  • Network Access Control Lists
  • Customer Gateways
  • Virtual Private Gateways
  • Elastic IP Addresses
  • Elastic Compute Cloud Instances
  • Amazon Machine Images that were created
  • Elastic Block Store Volumes
  • Elastic Block Store Snapshots
  • Classic Load Balancers
  • Application Load Balancers
  • Relational Database Service Instances
  • Relational Database Service Snapshots
  • Simple Storage Service (S3) Buckets

The table below provides details for the two Ansible playbooks.

Filename: ansible-aws-inventory-main.yml
Purpose: This is the controller playbook. It iterates over each of the specified regions, calling the worker playbook to check for any resources that are provisioned in these regions.

Filename: ansible-aws-inventory-worker.yml
Purpose: This playbook does all the heavy lifting. It checks for any provisioned resources in the region that is provided to it by the controller playbook.

Let’s go through each of the sections in the main Ansible playbook (ansible-aws-inventory-main.yml), to get a better understanding of what it does.

First off, the variables that will be used are defined

aws_regions – this defines all the AWS regions which will be checked for provisioned resources

verbose – set this to true to display the results on screen as well as write them to file; setting it to false writes the results to file only.

owner_id – this is the account id for the AWS account that is being inventoried. It is used to retrieve all the Amazon Machine Images (AMI) that are owned by this account

Next, the column headers for each of the .CSV files are defined.

After this, the output filenames are defined. Do note that the file names use timestamps (for when the playbook is run) as prefixes. This ensures that they don’t overwrite any output files from previous runs.

When I was generating the inventory list, at times I found that I needed only a subset of resource types inventoried, instead of all (for instance when I was looking for only EC2 instances). For this reason, I found it beneficial to have boolean variables to either enable or disable inventory checks for specific resource types.

The next section lists boolean variables that control if a particular resource type should be checked or not. Set this to true if it is to be checked and false if it is to be skipped. You can set this to your own preference.
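
As a rough sketch only (the variable names and values below are illustrative; the downloadable playbook in the gist is the authoritative version), the variables section could look something like this:

vars:
  aws_regions:
    - ap-southeast-2
    - us-east-1
  verbose: true
  owner_id: "123456789012"        # replace with your AWS account id
  output_root_folder: "./"
  check_vpc: true                 # per-resource-type switches
  check_ec2: true
  check_s3: false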

After all the variables have been defined, the tasks that will be carried out are configured.

The first task initialises the output .CSV files with the column headers.

Once the initialisation has been completed, the inventory process is started by looping through each of the specified AWS regions and calling the worker Ansible playbook to check for provisioned resources.
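
One common way to express this kind of loop is with include_tasks; the sketch below is illustrative only and the exact mechanism in the downloadable playbook may differ:

- name: check resources in each AWS region
  include_tasks: ansible-aws-inventory-worker.yml
  loop: "{{ aws_regions }}"
  loop_control:
    loop_var: aws_region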

The last task displays the path for the output files.

The main Ansible playbook (ansible-aws-inventory-main.yml) can be downloaded from https://gist.github.com/nivleshc/64ea7201fb0ba8cb6f87d06adc6152de.

The worker playbook (ansible-aws-inventory-worker.yml) has the following format.

  • go through each of the defined resource types and confirm that it is to be checked (checks for a particular resource type are enabled using the boolean variable that is defined in the main playbook)
  • if checks are enabled for that particular resource type, find all provisioned resources of that type in the region provided by the main Ansible playbook
  • write the results to the respective output file
  • if verbose is enabled, write the results to screen

The worker file (ansible-aws-inventory-worker.yml) can be downloaded from https://gist.github.com/nivleshc/bedd2c440c816ebc86dbaeddef50d500.

Running the Ansible playbooks

Use the following steps to run the above mentioned Ansible playbooks to perform an inventory of your AWS account.

1. On a computer that has Ansible installed, create a folder and name it appropriately (for instance inventory)

2. Download ansible-aws-inventory-main.yml from https://gist.github.com/nivleshc/64ea7201fb0ba8cb6f87d06adc6152de and put it in the folder that was created in step 1 above

3. Download ansible-aws-inventory-worker.yml from https://gist.github.com/nivleshc/bedd2c440c816ebc86dbaeddef50d500 and put it in the folder that was created in step 1 above

4. Download the Ansible inventory file from https://gist.github.com/nivleshc/bc2e300fe1d2779ecc15c0876fc4db62, rename it to hosts and put it in the folder that was created in step 1 above

5. Customise ansible-aws-inventory-main.yml by adding your account id as the owner_id and change the output folder by updating the output_root_folder variable. If you need to disable inventory for certain resource types, you can set the respective boolean variable to false.

6. Create a user account with access keys enabled within your AWS account. For checking all the resources defined in the playbook, at a minimum, the account must have the following AWS managed policies attached.

AmazonVPCReadOnlyAccess
AmazonEC2ReadOnlyAccess
ElasticLoadBalancingReadOnly
AmazonRDSReadOnlyAccess
AmazonS3ReadOnlyAccess

7. Open a command line and then run the following to configure environment variables with credentials of the account that was created in step 6 above (the following commands are specific to MacOS).

export AWS_ACCESS_KEY_ID="xxxxx"
export AWS_SECRET_ACCESS_KEY="xxxxxxx"

8. There is a possibility that you might encounter an error with boto complaining that it is unable to access region us-west-3. To fix this, define the following environment variable as well.

export BOTO_USE_ENDPOINT_HEURISTICS=True

9. Run the Ansible playbook using the following command line.

ansible-playbook -i hosts ansible-aws-inventory-main.yml

Depending on how many resources are being inventoried, the playbook can take anywhere from five to ten minutes to complete. So, sit back and relax, while the playbook runs.

I found a bug with the Ansible “AWS S3 bucket facts” module. It ignores the region parameter and, instead of returning S3 buckets in a specific region, it returns buckets in all regions. Due to this, the S3 buckets .CSV file will have the same buckets repeated for every region.

Hope you enjoy the above Ansible playbooks and they make your life much easier when trying to find all resources that are deployed within your AWS account.

Till the next time, enjoy!

A scenario-based tutorial for Azure Kubernetes Service – Part 2

First published on Nivlesh’s personal blog at https://nivleshc.wordpress.com.

Introduction

In this blog, we will dig a little deeper into Azure Kubernetes Service (AKS). What better way to do this than by building an AKS cluster ourselves! Just a heads-up, I will be using terminology that was introduced in part 1 of this mini-blog series. If you haven’t read it, or need a refresher, you can access it at https://blog.kloud.com.au/2019/03/04/a-scenario-based-tutorial-for-azure-kubernetes-service-part-1/

Let’s start by describing the AKS cluster architecture. The diagram below provides a great overview:

(Image copied from https://docs.microsoft.com/en-au/azure/aks/media/concepts-clusters-workloads/cluster-master-and-nodes.png)

The AKS Cluster is made up of two components. These are described below

  • cluster master node – this is an Azure managed service, which takes care of the Kubernetes service and ensures all the application workloads are properly running.
  • node – this is where the application workloads run.

The cluster master node is comprised of the following components

  • kube-apiserver – this API server provides a way to interface with the underlying Kubernetes API. Management tools such as kubectl or the Kubernetes dashboard interact with this to manage the Kubernetes cluster.
  • etcd – this provides a key value store within Kubernetes, and is used for maintaining the state of the Kubernetes cluster and its configuration.
  • kube-scheduler – the role of this component is to decide which nodes the newly created or scaled up application workloads can run on, and then it starts these workloads on them.
  • kube-controller-manager – the controller manager looks after several smaller controllers that perform actions such as replicating pods and handling node operations.

The node is comprised of the following

  • kubelet – this is an agent that handles the orchestration requests from the cluster master node and also takes care of scheduling the running of the requested containers
  • kube-proxy – this component provides networking services on each node. It takes care of routing network traffic and managing IP addresses for services and pods
  • container runtime – this allows the container application workloads to run and interact with other resources within the node.

For more information about the above, please refer to https://docs.microsoft.com/en-au/azure/aks/concepts-clusters-workloads

Now that you have a good understanding of the Kubernetes architecture, let’s move on to the preparation stage, after which we will deploy our AKS cluster.

Preparation

AKS subnet size

AKS uses a subnet to host nodes, pods, and any other Kubernetes and Azure resources that are created for the AKS cluster. As such, it is extremely important that the subnet is appropriately sized, to ensure it can accommodate the resources that will be initially created, and still have enough room for any future updates.

There are two networking methods available when deploying an Azure Kubernetes Service cluster:

  • Kubenet
  • Azure Container Networking Interface (CNI)

AKS uses kubenet by default, and in doing so, it automatically creates a virtual network and subnets that are required to host the pods in. This is a great solution if you are learning about AKS; however, if you need more control, it is better to go with Azure CNI. With Azure CNI, you get the option to use an existing virtual network and subnet or you can create a custom one. This is a much better option, especially when deploying into a production environment.

In this blog, we will use Azure CNI.

The formula below provides a good estimate on how large your subnet must be, in order to accommodate your AKS resources.

Subnet size = (number of nodes + 1) + ((number of nodes + 1) * maximum number of pods per node that you configure)

When using Azure CNI, by default each node is set up to run 30 pods. If you need to change this limit, you will have to deploy your AKS cluster using the Azure CLI or Azure Resource Manager templates.
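
For example, a CLI deployment could raise the limit with the --max-pods parameter. The sketch below uses the resource names from this blog and a placeholder for the subnet resource ID; treat it as illustrative rather than a complete deployment command:

az aks create --resource-group myAKS-resourcegroup --name mydemoAKS01 \
  --network-plugin azure --vnet-subnet-id <AKSSubnet1 resource ID> \
  --max-pods 50 --node-count 1 --node-vm-size Standard_D2s_v3 \
  --generate-ssh-keys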

Just as an example, for a default AKS cluster deployment, using Azure CNI with 4 nodes, the subnet size at a minimum must be

IPs required = (4 + 1) + ((4 + 1) * (30 pods per node)) = 5 + (5 * 30) = 155

This means that the subnet must be at least a /24.

For this blog, create a new resource group called myAKS-resourcegroup. Within this new resource group, create a virtual network called AKSVNet with an address space of 10.1.0.0/16. Inside this virtual network, create a subnet called AKSSubnet1 with an address range of 10.1.3.0/24.
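
If you prefer the CLI over the portal for these prerequisites, the equivalent commands would look roughly like the following (the australiaeast location is an assumption that matches the region used for the cluster later in this blog):

az group create --name myAKS-resourcegroup --location australiaeast

az network vnet create --resource-group myAKS-resourcegroup --name AKSVNet \
  --address-prefixes 10.1.0.0/16 \
  --subnet-name AKSSubnet1 --subnet-prefixes 10.1.3.0/24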

Deploying an Azure Kubernetes Service Cluster

Let’s proceed on to deploying our AKS cluster.

  1. Login to your Azure Portal and add Kubernetes Service
  2. Once you click on Create, you will be presented with a screen to enter your cluster’s configuration information
  3. Under Basics
  • Choose the subscription into which you want to deploy the AKS cluster
  • Choose the resource group into which you want to deploy the AKS cluster. One thing to point out here is that the cluster master node will be deployed in this resource group, however a new resource group with a name matching the naming format MC_<AKS master node resource group name>_<AKS cluster name>_region will be created to host the nodes where the containers will run (if you use the values specified in this blog, your node resource group will be named MC_myAKS-resourcegroup_mydemoAKS01_australiaeast)
  • Provide the Kubernetes cluster name (for this blog, let’s call this mydemoAKS01)
  • Choose the region you want to deploy the AKS cluster in (for this blog, we are deploying in the australiaeast region)
  • Choose the Kubernetes version you want to deploy (you can choose the latest version, unless there is a reason to choose a specific version)
  • DNS name prefix – for simplicity, you can set this to the same as the cluster name
  • Choose the Node size (for this blog, let’s choose D2s v3 (2 vcpus, 8 GB memory))
  • Set the Node count to 1 (the Node count specifies the number of nodes that will be initially created for the AKS cluster)
  • Leave the virtual nodes to disabled

Under Authentication

  • Leave the default option to create a service principal (you can also provide an existing service principal, however for this blog, we will let the provisioning process create a new one for us)
  • RBAC allows you to control who can view the Kubernetes configuration (kubeconfig) information and to limit the permissions that they have. For now, leave RBAC turned off

Under Networking

  • Leave HTTP application routing set to No
  • As previously mentioned, by default AKS uses kubenet for networking. However, we will use Azure CNI. Change the Network configuration from Basic to Advanced
  • Choose the virtual network and subnet that was created as per the prerequisites (AKSVNet and AKSSubnet1)
  • Kubernetes uses a separate address range to allocate IP addresses to internal services within the cluster. This is referred to as the Kubernetes service address range. This range must NOT be within the virtual network range and must not be used anywhere else. For our purposes we will use the range 10.2.4.0/24. Technically, it is possible to use IP addresses for the Kubernetes service address range from within the cluster virtual network, however this is not recommended due to the potential for IP address overlaps, which could cause unpredictable behaviour. To read more about this, you can refer to https://docs.microsoft.com/en-au/azure/aks/configure-azure-cni.
  • Leave the Kubernetes DNS service IP address as the default 10.2.4.10 (the default is set to the tenth IP address within the Kubernetes service address range)
  • Leave the Docker bridge address as the default 172.17.0.1/16. The Docker Bridge lets AKS nodes communicate with the underlying management platform. This IP address must not be within the virtual network IP address range of your cluster, and shouldn’t overlap with other address ranges in use on your network

Under Monitoring

  • Leave enable container monitoring set to Yes
  • Provide an existing Log Analytics workspace or create a new one

Under Tags

  • Create any tags that need to be attached to this AKS cluster
  4. Click on Next: Review + create to get the settings validated. After validation has successfully passed, click on Create.
Just be aware that it can take anywhere from 10 – 15 minutes to complete the AKS cluster provisioning.

While you are waiting

During the AKS cluster provisioning process, there are a number of things that are happening under the hood. I managed to track down some of them and have listed them below:

  • Within the resource group that you specified for the AKS cluster to be deployed in, you will now see a new AKS cluster with the name mydemoAKS01
  • If you open the virtual network that the AKS cluster has been configured to use and click on Connected devices, you will notice that a lot of IP addresses have already been allocated.

    I have noticed that the number of IP addresses equals

    ((number of pods per node) + 1) * number of nodes

    FYI – for the AKS cluster that is being deployed in this blog, it is 31

  • A new resource group with the name complying with the naming format MC_<AKS master node resource group name>_<AKS cluster name>_region will be created. In our case it will be called MC_myAKS-resourcegroup_mydemoAKS01_australiaeast. This resource group will contain the virtual machine for the node (not the cluster master node), including all the resources that are needed for the virtual machines (availability set, disk, network card, network security group)

What will this cost me?

The cluster master node is a managed service and you are not charged for it. You only pay for the nodes on which the application workloads are run (these are those resources inside the new resource group that gets automatically created when you provision the AKS cluster).

In the next blog, we will delve deeper into the newly deployed AKS cluster, exposing its configuration using command line tools.

Happy sailing and till the next time, enjoy!

Using Ansible to deploy an AWS environment

First published at https://nivleshc.wordpress.com

Background

Over the past few weeks, I have been looking at various automation tools for AWS. One tool that seems to get a lot of limelight is Ansible, an open source automation tool from Red Hat. I decided to give it a go, and to my amazement, I was surprised at how easy it was to learn Ansible, and how powerful it can be.

All one must do is write up a list of tasks using YAML notation in a file (called a playbook) and get Ansible to execute it. Ansible reads the playbook and executes the tasks in the order that they are written. Here is the biggest advantage: there are no agents to be installed on the managed computers! Ansible connects to each of the managed computers using ssh or winrm.

Another nice feature of Ansible is that it supports third party modules. This allows Ansible to be extended to support many of the services that it natively does not understand.

In this blog, we will be focusing on one of the third-party modules, the AWS module. Using this, we will use Ansible to deploy an environment within AWS.

Scenario

For this blog, we will use Ansible to provision an AWS Virtual Private Cloud (VPC) in the North Virginia (us-east-1) region. Within this VPC, we will create a public and a private subnet. We will then deploy a jumphost in the public subnet and a server within the private subnet.

Below is a diagram depicting what will be done.

Figure 1: Environment that will be deployed within AWS using Ansible Playbook

Preparation

The computer that is used to run Ansible to manage all other computers is referred to as the control machine. Currently, Ansible can be run from any machine with Python 2 (version 2.7) or Python 3 (version 3.5 or higher) installed. The Ansible control machine can run the following operating systems

  • Red Hat
  • Debian
  • CentOS
  • macOS
  • any of the BSD variants

Note: currently, the Windows operating system is not supported as a control machine.

For this blog, I am using a MacBook to act as the control machine.

Before we run Ansible, we need to get a few things done. Let’s go through them now.

  1. We will use pip (Python package manager) to install Ansible. If you do not already have pip installed, run the following command to install it
    sudo easy_install pip
  2. With pip installed, use the following command to install Ansible
    sudo pip install ansible

    For those that are not using macOS for their control machine, you can get the relevant installation commands from https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html.

  3. Next, we must install the AWS Command Line Interface (CLI) tools. Use the following command for this.
    sudo pip install awscli

    More information about the AWS CLI tools is available at https://aws.amazon.com/cli/

  4. To provision items within AWS, we need to provide Ansible with a user account that has the necessary permissions. Using the AWS console, create a user account ensuring it is assigned an access key and a secret access key. At a minimum, this account must have the following policies assigned to it.
    AmazonEC2FullAccess
    AmazonVPCFullAccess

    Note: As this is a privileged user account, please ensure that the access key and secret access key are kept in a safe place.

  5. To provision AWS Elastic Compute Cloud (EC2) instances, we require key pairs created in the region that the EC2 instances will be deployed in. Ensure that you already have key pairs for the North Virginia (us-east-1) region. If not, please create them.

Instructions

Create an Ansible Playbook

Use the following steps to create an Ansible playbook to provision an AWS environment.

Open your favourite YAML editor and paste the following code
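
The original code is embedded in the downloadable playbook; a minimal sketch of the play header and variables it describes would look something like the following (gather_facts and the key pair name are assumptions on my part):

---
- hosts: localhost
  connection: local
  gather_facts: false

  vars:
    vpc_region: us-east-1
    my_useast1_key: my-keypair-name   # replace with your us-east-1 key pair name

  tasks: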

The above code instructs Ansible to connect to the local computer to run all the defined tasks. This means that the Ansible modules will use the local computer to connect to the AWS APIs in order to carry out the tasks.

Another thing to note is that we are declaring two variables. These will be used later in the playbook.

  • vpc_region – this is the AWS region where the AWS environment will be provisioned (currently set to us-east-1)
  • my_useast1_key – provide the name of your key pair for the us-east-1 region that will be used to provision EC2 instances

Next, we will define the tasks that Ansible must carry out. The format of the tasks is as follows

  • name – this gives a descriptive name for the task
  • module name – this is the module that Ansible will use to carry out the task
  • module Parameters – these are parameters passed to the module, to carry out the specific task
  • register – this is an optional keyword and is used to record the output that is returned from the module, after the task has been carried out.

Copy the following lines of code into your YAML file.
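
A sketch of these two tasks, reconstructed from the description below (the full playbook in the gist is the authoritative version), might look like:

    - name: create ansibleVPC
      ec2_vpc_net:
        name: ansibleVPC
        state: present
        cidr_block: 172.32.0.0/16
        region: "{{ vpc_region }}"
      register: ansibleVPC

    - name: display ansibleVPC results
      debug:
        var: ansibleVPC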

The above code contains two tasks.

  • the first task creates an AWS Virtual Private Cloud (VPC) using the ec2_vpc_net module. The output of this module is recorded in the variable ansibleVPC using the register keyword
  • the second task outputs the contents of the variable ansibleVPC using the debug module (this displays the output of the previous task)

Side Note

  • The name of the VPC has been set to ansibleVPC
  • The CIDR block for the VPC has been set to 172.32.0.0/16
  • The state keyword controls what must be done to the VPC. In our case, we want it created and to exist, as such, the value for state has been set to present.
  • The region is being set by referencing the variable that was defined earlier. Variables are referenced with the notation “{{ variable name }}”

Copy the following code to create an AWS internet gateway and associate it with the newly created VPC. The second task in the below code displays the result of the internet gateway creation.
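
A sketch of the internet gateway tasks (the ec2_vpc_igw module is an assumption based on what the playbook does; the tag name matches the resource name listed at the end of this post):

    - name: create internet gateway for ansibleVPC
      ec2_vpc_igw:
        vpc_id: "{{ ansibleVPC.vpc.id }}"
        region: "{{ vpc_region }}"
        state: present
        tags:
          Name: ansibleVPC_igw
      register: ansibleVPC_igw

    - name: display internet gateway results
      debug:
        var: ansibleVPC_igw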

The next step is to create the public and private subnets. However, instead of hardcoding the availability zones into which these subnets will be deployed, we will pick the first availability zone in the region for our public and the second availability zone in the region for our private subnet. Copy the following code into your YAML file to show all the availability zones that are present in the region, and which ones will be used for the public and private subnets.
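
A sketch of this step, assuming the aws_az_facts module is used to discover the availability zones:

    - name: get availability zones in the region
      aws_az_facts:
        region: "{{ vpc_region }}"
      register: az_in_region

    - name: show the availability zones to be used for the public and private subnets
      debug:
        msg:
          - "public subnet AZ: {{ az_in_region.availability_zones[0].zone_name }}"
          - "private subnet AZ: {{ az_in_region.availability_zones[1].zone_name }}"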

Copy the following code to create the public subnet in the first availability zone in us-east-1 region. Do note that we are provisioning our public subnet with CIDR range 172.32.1.0/24
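
A sketch of the public subnet task (the ec2_vpc_subnet module and its map_public flag are assumptions about how this is done):

    - name: create public subnet in the first availability zone
      ec2_vpc_subnet:
        vpc_id: "{{ ansibleVPC.vpc.id }}"
        cidr: 172.32.1.0/24
        az: "{{ az_in_region.availability_zones[0].zone_name }}"
        map_public: yes
        region: "{{ vpc_region }}"
        state: present
        tags:
          Name: public subnet
      register: public_subnet

    - name: show public subnet details
      debug:
        var: public_subnet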

Copy the following code to deploy the private subnet in the second availability zone in us-east-1 region. It will use the CIDR range 172.32.2.0/24
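
And similarly for the private subnet (again, a sketch only):

    - name: create private subnet in the second availability zone
      ec2_vpc_subnet:
        vpc_id: "{{ ansibleVPC.vpc.id }}"
        cidr: 172.32.2.0/24
        az: "{{ az_in_region.availability_zones[1].zone_name }}"
        region: "{{ vpc_region }}"
        state: present
        tags:
          Name: private subnet
      register: private_subnet

    - name: show private subnet details
      debug:
        var: private_subnet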

Hold on! To make a public subnet, it is not enough to just create a subnet. We need to create routes from that subnet to the internet gateway! The code below addresses this. The private subnet does not need any such routes; it will use the default route table.
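
A sketch of the public route table task (the ec2_vpc_route_table module is an assumption; the route table name matches the resource listed at the end of this post):

    - name: create route table for the public subnet
      ec2_vpc_route_table:
        vpc_id: "{{ ansibleVPC.vpc.id }}"
        region: "{{ vpc_region }}"
        tags:
          Name: rt_ansibleVPC_PublicSubnet
        subnets:
          - "{{ public_subnet.subnet.id }}"
        routes:
          - dest: 0.0.0.0/0
            gateway_id: "{{ ansibleVPC_igw.gateway_id }}"
      register: public_route_table

    - name: show public route table details
      debug:
        var: public_route_table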

As planned, we will be deploying a jumphost within the public subnet. By default, you won’t be able to externally connect to the EC2 instances deployed within the public subnet because the default security group does not allow this.

To remediate this, we will create a new security group that will allow RDP access and assign it to the jumphost server. For simplicity, the security group will allow RDP access from anywhere, however please ensure that for your environment, you have locked it down to a few external IP addresses.
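
A sketch of the jumphost security group task, using the ec2_group module (the group name matches the resource listed at the end of this post):

    - name: create security group for jumphosts in the public subnet
      ec2_group:
        name: sg_ansibleVPC_publicsubnet_jumphost
        description: allow RDP access to jumphosts
        vpc_id: "{{ ansibleVPC.vpc.id }}"
        region: "{{ vpc_region }}"
        rules:
          - proto: tcp
            from_port: 3389
            to_port: 3389
            cidr_ip: 0.0.0.0/0
      register: sg_jumphost

    - name: show jumphost security group details
      debug:
        var: sg_jumphost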

Phew! Finally, we are ready to deploy our jumphost! Copy the following code for this
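
A sketch of the jumphost task using the ec2 module. The AMI ID below is a placeholder and must be replaced with a current Windows Server 2016 base AMI ID, as noted in the points that follow:

    - name: deploy Windows 2016 jumphost in the public subnet
      ec2:
        key_name: "{{ my_useast1_key }}"
        instance_type: t2.micro
        image: ami-xxxxxxxxxxxxxxxxx   # placeholder - use a current Windows Server 2016 base AMI ID
        group_id: "{{ sg_jumphost.group_id }}"
        vpc_subnet_id: "{{ public_subnet.subnet.id }}"
        assign_public_ip: yes
        region: "{{ vpc_region }}"
        wait: yes
        exact_count: 1
        count_tag:
          Name: win2016jh
        instance_tags:
          Name: win2016jh
      register: jumphost

    - name: show jumphost details
      debug:
        var: jumphost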

I would like to point out a few things

  • The jumphost is running on a t2.micro instance. This instance type is usually sufficient for a jumphost in a lab environment, however if you need more performance, this can be changed (changing the instance type from t2.micro can take you over the AWS free tier limits and subsequently add to your monthly costs)
  • The image parameter refers to the AMI ID of the Windows 2016 base image that is currently available within the AWS console. AWS, from time to time, changes the images that are available. Please check within the AWS console to ensure that the AMI ID is valid before running the playbook
  • Instance tags are tags that are attached to the instance. In this case, the instance tags have been used to name the jumphost win2016jh.

Important Information

The following parameters are extremely important if you do not intend to deploy a new EC2 instance for the same server every time you re-run this Ansible playbook.

exact_count – this parameter specifies the number of EC2 instances of a server that should be running whenever the Ansible playbook is run. If the current number of instances doesn’t match this number, Ansible either creates new EC2 instances for this server or terminates the extra EC2 instances. The servers are identified using the count_tag

count_tag – this is the instance tag that is used to identify a server. Multiple instances of the same server will have the same tag applied to them. This allows Ansible to easily count how many instances of a server are currently running.

Next, we will deploy the servers within the private subnet. Wait a minute! By default, the servers within the private subnet will be assigned the default security group. The default security group allows unrestricted access to all EC2 instances that have been attached to the default security group. However, since the jumphost is not part of this security group, it will not be able to connect to the servers in the private subnet!

Let’s remediate this issue by creating a new security group that will allow RDP access from the public subnet to the servers within the private subnet (in a real environment, this should be restricted further, so that the incoming connections are from particular servers within the public subnet, and not from the whole subnet itself). This new security group will be associated with the servers within the private subnet.

Copy the following code into your YAML file.
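
A sketch of this security group task (the group name matches the resource listed at the end of this post):

    - name: create security group for servers in the private subnet
      ec2_group:
        name: sg_ansibleVPC_privatesubnet_servers
        description: allow RDP access from the public subnet
        vpc_id: "{{ ansibleVPC.vpc.id }}"
        region: "{{ vpc_region }}"
        rules:
          - proto: tcp
            from_port: 3389
            to_port: 3389
            cidr_ip: 172.32.1.0/24
      register: sg_private_servers

    - name: show private servers security group details
      debug:
        var: sg_private_servers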

We are now at the end of the YAML file. Copy the code below to provision the Windows 2016 server within the private subnet (the server will be tagged with name=win2016svr).
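
A sketch of this final task (again, the AMI ID is a placeholder to be replaced with a current Windows Server 2016 base AMI ID):

    - name: deploy Windows 2016 server in the private subnet
      ec2:
        key_name: "{{ my_useast1_key }}"
        instance_type: t2.micro
        image: ami-xxxxxxxxxxxxxxxxx   # placeholder - use a current Windows Server 2016 base AMI ID
        group_id: "{{ sg_private_servers.group_id }}"
        vpc_subnet_id: "{{ private_subnet.subnet.id }}"
        assign_public_ip: no
        region: "{{ vpc_region }}"
        wait: yes
        exact_count: 1
        count_tag:
          Name: win2016svr
        instance_tags:
          Name: win2016svr
      register: private_server

    - name: show private server details
      debug:
        var: private_server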

Save the playbook with a meaningful name. I named my playbook Ansible-create-AWS-environment.yml

The full Ansible playbook can be downloaded from https://gist.github.com/nivleshc/344dca91e3d0349c8a359b03853886be

Running the Ansible Playbook

Before we run the playbook, we need to tell Ansible about all the computers that are within the management scope. This is done using an inventory file, which contains a group name within square brackets eg [webservers] and below that, all the computers that will be in that group. Then in the playbook, we just target the group, which in turn targets all the computers in that group.

However, in our scenario, we are directly targeting the local computer (refer to the second line in the YAML file that shows hosts: localhost). In this regard, we can get away with not providing an inventory file. However, do note that doing so will mean that we can’t use anything other than localhost to reference a computer within our playbook.

Let’s create an inventory file called hosts in the same folder as where the playbook is saved. The contents of the file will be as listed below.

[local]
localhost

We are ready to run the playbook now.

Open a terminal session and change to the folder where the playbook was saved.

We need to create some environment variables to store the user details that Ansible will use to connect to AWS. This is where the access key and secret access key that we created initially will be used. Run the following command

export AWS_ACCESS_KEY_ID={access key id}
export AWS_SECRET_ACCESS_KEY={secret access key}

Now run the playbook using the following command (as previously mentioned, we could get away with not specifying the inventory file, however this means that we only can use localhost within the playbook)

ansible-playbook -i hosts ansible-create-aws-environment.yml

You should now see each of the tasks being executed, with the output being shown (remember that after each task, we have a follow-up task that shows the output using the debug keyword?)

Once the playbook execution has completed, check your AWS console to confirm that the following items have been created within the us-east-1 (North Virginia) region

  • A VPC called ansibleVPC with the CIDR 172.32.0.0/16
  • An internet gateway called ansibleVPC_igw
  • A public subnet in the first availability zone with CIDR 172.32.1.0/24
  • A private subnet in the second availability zone with CIDR 172.32.2.0/24
  • A route table called rt_ansibleVPC_PublicSubnet
  • A security group for jumphosts called sg_ansibleVPC_publicsubnet_jumphost
  • A security group for the servers in private subnet called sg_ansibleVPC_privatesubnet_servers
  • An EC2 instance in the public subnet representing a jumphost named win2016jh
  • An EC2 instance in the private subnet representing a server named win2016svr

Once the provisioning is complete, to test, connect to the jumphost and then from there connect to the server within the private subnet.

Don’t forget to turn off the EC2 instances if you don’t intend on using them

Closing Remarks

Ansible is a great automation tool and can be used to both provision and manage infrastructure within AWS.

Having said that, I couldn’t find an easy way to do post provisioning tasks (eg assigning roles, installing additional packages etc) after the server has been provisioned, without getting Ansible to connect directly to the provisioned server. This can be a challenge if the Ansible control machine is external to AWS and the provisioned server is within an AWS private subnet. With AWS CloudFormation, this is easily done. If anyone has any advice on this, I would appreciate it if you can leave it in the comments below.

I will surely be using Ansible for most of my automations from now on.

Till the next time, enjoy!

A scenario-based tutorial for Azure Kubernetes Service – Part 1

First published at https://nivleshc.wordpress.com

Introduction

Containers are gaining a lot of popularity these days. They provide an easy way to run applications, without having to worry about the underlying infrastructure.

As you might imagine, managing all these containers can become quite daunting, especially if there are numerous containers. This is where orchestration tools such as Kubernetes are very useful.

Kubernetes was developed by Google and is heavily based on their internal Borg system. It is an excellent tool to manage containers, where you provide a desired state for your containers and Kubernetes takes care of everything to ensure the containers are always in that state (for example, if a pod dies, Kubernetes will automatically start a new pod for that container, to ensure that the defined number of pods are always running). Kubernetes also provides an easy process to scale the number of pods or the number of nodes.

Soon after releasing Kubernetes, Google partnered with the Linux Foundation to form the Cloud Native Computing Foundation (CNCF). Kubernetes was then made open-source, with the Cloud Native Computing Foundation acting as its guardian. A nice writeup for Kubernetes history can be found at https://en.wikipedia.org/wiki/Kubernetes.

Kubernetes is often abbreviated as k8s. If, like me, you are wondering how the word Kubernetes can possibly be shortened to k8s: the 8 in k8s represents the number of characters between the letters k and s in the word Kubernetes.

With the popularity of Kubernetes soaring, Microsoft recently adopted it for its Azure environment, providing Azure Kubernetes Service as a managed service. The service entered general availability in June 2018. If you are interested in reading about this announcement, a good article to read is https://redmondmag.com/articles/2018/06/13/azure-kubernetes-service-ga.aspx.

This blog is the first in the mini-series that I will be publishing about Azure Kubernetes Service. I will take you through the process of creating an Azure Kubernetes Service (AKS) Cluster and then we will create an environment within the AKS cluster using some custom docker images.

In this first blog I will introduce some key Kubernetes terminologies and map out the scenario that the blog mini-series will focus on.

Terminology

Below are some of the key concepts which I believe will help immensely in understanding Kubernetes.

Pods

If you think about a pea pod, there can be one or many peas inside it. Treating each pea as a container, this translates to a pod being an encapsulation of an application container (or, in some cases, multiple containers).

As per the formal definition, a pod is an encapsulation of an application container (or, in some cases, multiple containers), storage resources, a unique network IP, and options that govern how the container(s) should run. A pod represents a unit of deployment, a single instance of an application in Kubernetes, which might consist of either a single container or a small number of containers that are tightly coupled and that share resources. A more detailed explanation is available at https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/.

One key point to remember is that pods are ephemeral: they are created and, at times, they die as well. In that regard, any application that directly accesses pods will eventually fail when the pod dies. Instead, you should always interact with Services when trying to access containers deployed within Kubernetes.

Services

Due to the ephemeral nature of pods, any application that is directly accessing a pod will eventually suffer a downtime (when the pod dies, and another is created to replace it). To get around this, Kubernetes provides Services.

Think of a Service as being like an application load balancer: it provides a front end for your container and then routes the traffic to a pod running that container. Since your applications always connect to a Service (the properties of the Service remain unchanged during its lifetime), they are shielded from any pod deaths. For more information about Services, refer to https://kubernetes.io/docs/concepts/services-networking/service/.
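
Purely as an illustrative sketch (the name, label, and ports below are made up), a minimal Service manifest that fronts pods labelled app: myapp could look like this:

apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
  ports:
    - port: 80          # port the Service exposes
      targetPort: 8080  # port the container listens on
  type: ClusterIP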

Namespaces

Namespaces provide a logical way of grouping your Kubernetes cluster. This allows you to provide access to different resources to different sets of users. Namespaces also provide a scope for names. Names must be unique within a namespace however they do not need to be unique across namespaces. A more in-depth description can be found at https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/

Kubernetes Control Plane (master)

The Kubernetes master (this is a collection of processes) ensures the Kubernetes cluster is working as expected by maintaining the cluster’s desired state.

Kubernetes Nodes

The nodes are where the containers and workloads are run. The nodes can be virtual machines, physical machines etc. The Kubernetes master controls each node.

Scenario

The diagram below shows the environment we will be deploying within our Azure Kubernetes Service (AKS) cluster.

In summary, we will deploy three pods, each running a customised nginx container. The nginx containers will be listening on non-http/https ports. As Kubernetes does not natively provide a way to route non-http/https traffic to services, we will be deploying nginx ingress controllers to enable this functionality.

Figure 1 – Infrastructure that will be deployed within the Azure Kubernetes Service cluster

In the next blog in this mini-series, we will deploy the Azure Kubernetes Service cluster.

Happy sailing and see you soon!

VicRoads digital transformation in the cloud and beyond

How VicRoads managed its cloud migration, improving data compliance and streamlining its digital operations.

Last year, VicRoads embarked on an ambitious project to transform its existing informational website into a transactional one, offering online versions of many regular interactions, such as vehicle registration and permits.

While a big step up for the experience of Victorian motorists, from a technical perspective, this move required VicRoads to completely revisit its cloud architecture and delivery model. This would ensure the new personal and financial data being gathered was stored in accordance with best practices and in compliance with government standards.

“Our old capability, which was a hosted virtual service, was not able to meet our new compliance needs,” says Babu Krishnamoorthy, Director of ICT Strategy at VicRoads. “We needed to ensure that customer data was protected and we were compliant.”

Through Amazon’s Elastic Container Service and the power of Docker, Kloud – A Telstra Company – delivered an architecture which gave VicRoads the flexibility they needed to reduce release cycle windows and avoid unnecessary downtime.

Compliance and beyond

The decision was clear: a move to a modern hosting platform was critical to the ongoing success of delivering a world-class solution for the motorists of Victoria.

“The initial trigger was compliance, however beyond that, our hosting arrangement was constraining us,” says Babu. “Our ability to push product to market was slow, it would typically take between 15 and 20 people to deliver software to production. We would also typically run between 12 and 15 hours of full-website outage to deploy new code.”

18 months later, Babu says that the new platform allows just three people to deploy with no, or almost no, website downtime – and he aims to bring that down to just one ‘clicking a button’ with zero downtime for standard web enhancements in the near future.

To ensure everything went smoothly, VicRoads engaged Kloud – A Telstra company, to help manage the transformation.

“Kloud has been an amazing partner for us,” says Babu, “from the beginning, we knew what we wanted to achieve, but we didn’t know how to get there, because the cloud market is changing on almost a daily basis.”

“Kloud was able to take our needs and identify the most appropriate system, tools, products and processes for us to adopt. They made life much easier for us, by removing some of those confusions,” he said.

An intrepid journey

VicRoads had set the bar high with an ambitious timeline for relaunching their web presence. Babu’s team made the initial move from traditional hosting into the cloud with Amazon Web Services (AWS), in just three months, by focusing on replicating “like for like functionality,” rather than launching with all the “bells and whistles”.

“We were able to move from inception through to the new platform in 3 months, but to get to that kind of velocity we had to take shortcuts. We moved to a cloud-based platform but weren’t taking on board the capabilities of the cloud,” says Babu.

“Step two was more of a six-to-nine-month goal, where we then started to transform our presence in the cloud to better utilise the capability of AWS. That unlocked the really big change that we were able to achieve.”

This second phase allowed Babu’s team to adopt cutting-edge technology, including Docker and Amazon’s Elastic Container Service (ECS), which allowed VicRoads to scale their cloud presence more effectively and efficiently, letting technology do the heavy lifting while relying on their integration partner to guide them through this unknown landscape.

Start small – grow big

Of course, working with cutting edge technology has its downsides.

“When you’re working with new tech, you don’t have too many reference clients and people that you can learn lessons from,” explains Babu.

“In hindsight, I would suggest running your upgrades as a proof of concept first then moving that proof of concept into your critical path pipeline. First: validate the nuances of your proof of concept. Your integration points. The technology itself and how it’s going to talk. How you’re managing doing DevOps.”

Having successfully realised the benefits of migrating traditional services into the cloud, he advises others in large organisations not to wait around for the perfect migration plan.

“Don’t hesitate and invest. Even if your investment is small, take that as your first step of a journey. Your investment in the cloud, your investment in agile operations, your investment in new hosting, these small steps will generate a little bit of momentum, and that momentum starts to build up a snowball that you’re hopefully able to get a solution like we’ve achieved.”

“What we’ve ended up doing is we’ve shrunk our downtime, so that almost 70 per cent of our use cases can now be deployed into production with zero outage, and the remaining 30 per cent can be done within about 20 minutes of outage. This is down from 12 hours for every deployment.” Babu Krishnamoorthy, Director of ICT at VicRoads. 

Source: https://insight.telstra.com.au/optimise-your-it/articles/vicroads-digital-transformation-in-the-cloud-and-beyond

Nested Virtual PowerShell Desktop Environments on Windows 10 & Windows Server 2019 in Azure – Part 2

27 Nov 18 – Part 3 is available here; it details customizing an image and accessing it via other SSH clients with elevated access.

In Part-1 of this series posted yesterday I showed that with Windows 10/Windows Server 2019 we can now have isolated virtual environments for PowerShell Desktop in Azure through containerization.

In this post I’ll show how I plan to leverage this capability from a mobility perspective. What we need to do first is enable elevated (privileged) access to our VM. My Client will be Azure Cloud Shell. My target/host is the Windows 10 1809 Virtual Machine I deployed in the last post.

Enabling SSH Key Based Privileged Authentication to our Windows 10 VM

To set up key based access (rather than password access), which is required for elevated access, we need to configure the SSH Server and our client.

SSH Server

On the Windows 10 Azure VM where we installed OpenSSH as per the first post here, we need to start the SSH-Agent. By default it is set to Disabled. Change the startup type, start the service, and test it by adding the local user’s key to the agent. Using an elevated PowerShell session on the Azure Windows VM run;

Set-Service ssh-agent -startupType automatic
Start-Service ssh-agent
cd ~\
ssh-add .\.ssh\id_rsa

Add SSH Key to SSH-Agent on Server.PNG

SSH Client

As I’m using Azure Cloud Shell as my client, I started a Cloud Shell Session in my browser.

  • In Azure Cloud Shell generate a SSH Key using SSH-Keygen
    • Remember your passphrase as this will be required for accessing the Windows 10 Azure VM

Client SSH Keygen.PNG

  • Copy the key to the Windows 10 Azure VM
    • Run the command below (after changing it for your username and Windows VM IP Address) and provide your password to copy up the file
cd ~/
scp ./.ssh/id_rsa.pub username@Win10ServerIPAddress:C:\Users\userprofilename\.ssh\authorized_keys\

Copy Public Key from Client to Server.PNG

  • On the server, if C:\ProgramData\ssh\administrators_authorized_keys exists, add the public key that you copied into your home folder above into it. If C:\ProgramData\ssh\administrators_authorized_keys doesn’t exist, then copy the authorized_keys file from your .ssh home directory (e.g. c:\users\darrenjrobinson\.ssh) to C:\ProgramData\ssh\administrators_authorized_keys
  • Edit the permissions on the administrators_authorized_keys file (a command-line alternative is sketched after the screenshot below).
    • Right-Click the file => Properties => Security => Advanced => Disable Inheritance => Choose “Convert inherited permissions into explicit permissions on this object”
    • Remove Authenticated Users so that only System and Administrators remain as per the screen shot below. Then select Apply and then OK.

Administrators Authorized Keys.PNG
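
If you prefer to do this from an elevated command prompt rather than the GUI, an icacls equivalent along these lines should achieve the same result (treat this as a sketch and verify the resulting ACL afterwards):

icacls C:\ProgramData\ssh\administrators_authorized_keys /inheritance:r /grant "Administrators:F" /grant "SYSTEM:F"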

Testing SSH with Key Access

From Azure Cloud Shell, SSH to your Windows 10 host;

ssh username@ipaddress

SSH Key Access.PNG

You will be prompted for the passphrase you gave when you generated the SSH key. Enter that and you will be authenticated using SSH to the Windows 10 VM.

SSH to Windows 10.PNG

Docker Access from Azure Cloud Shell in Browser

Now that we have Privileged Access to our Windows 10 VM, let’s try running a Windows 10 1809 Container and executing a PowerShell command to query the version of PowerShell available.

docker run -it mcr.microsoft.com/windows:1809 powershell $psversiontable

Run Docker.PNG

Wait a few seconds (maybe longer depending on the spec of your VM) and

PowerShell Desktop via Docker.PNG

Fantastic, we have a Container with PowerShell Desktop that we have accessed via Cloud Shell in a Browser.

Docker Access from Azure Cloud Shell in iOS Azure App

Using the Azure iOS App on my iPhone I started a Cloud Shell session and changed to my home directory cd ~\ where I had put a file named Connect-Win10.ps1 which contains

ssh username@ipaddressOfWin10Host

I executed it and it prompted me for the passphrase for my SSH key, which I entered, and I was then SSH’d into the Windows 10 VM.

I did a dir d* and saw the DockerPS.cmd file I’d previously created. It contains the following command.

docker run -it mcr.microsoft.com/windows:1809 powershell $psversiontable

Running that file starts the Docker Windows 1809 container with the PowerShell command, and I can see that from my phone I have access to a PowerShell Desktop via Azure Cloud Shell and Docker from inside a Windows 10 VM based in Azure.

Summary

This post has demonstrated that it is possible to get an elevated privileged session into a Windows 10 host using SSH, from which Docker containers can be orchestrated and executed. By doing this from Azure Cloud Shell, it means that I can essentially log in from a browser or app anywhere in the world and access my virtual PowerShell environments, which in turn will allow world domination. Muwahahahah.

Got thoughts or feedback on this? Twitter || Blog

Nested Virtual PowerShell Desktop Environments on Windows 10 & Windows Server 2019 in Azure – Part 1

22 Nov 18 – Part 2 is available here; it details accessing the Docker Image via Azure Cloud Shell / SSH.
27 Nov 18 – Part 3 is available here; it details customizing an image and accessing it via other SSH clients with elevated access.

PowerShell Desktop Virtual Environments

If you’ve been working with PowerShell for any length of time you know that through its flexibility there can come challenges when using disparate PowerShell Modules and often their version dependencies. This isn’t just a PowerShell thing; Python can also trip you up in a similar manner.

Python however has Virtual Environments (virtualenv) capabilities which provide functionality to create an environment that contains all the necessary binaries required for the packages/libraries that a Python project would need. I’ve found this very useful and I’ve wondered why I couldn’t do the same for PowerShell Desktop (not Core). PowerShell Desktop, PowerShell Core?

PowerShell Desktop vs PowerShell Core

As of August 2016 there are two PowerShell versions;

  • PowerShell Desktop
    • PowerShell 5.1 that runs on Windows and on top of the full .NET Framework stack
  • PowerShell Core
    • PowerShell Core 6.x that is cross platform (Windows, MacOSX, Linux)
      • Doesn’t run on the full .NET Framework

If you are a Windows/Directory Services admin, the likelihood of many of the PowerShell modules you use running on PowerShell Core is slim. That’s because a lot of the modules you use require the full .NET Framework. And that isn’t available in PowerShell Core.

A Virtual PowerShell Desktop Env? Why is this only possible now?

In July this year Microsoft started providing Windows Container Images for the Insider releases (over and above Nano and Core OS builds). This was great, but it meant you needed to be on the Insider Builds and were restricted to environments on physical hardware or VMs migrated to Azure, as there wasn’t an Azure Marketplace OS version (Windows 10 or Server 2019 Preview) that met the minimum host requirements for the Insider Container images.

We’ve had to wait until Build 1809 became available in the Azure Marketplace which it did at the end of last week (w/e 18 November 2018). The Windows Container Version History shows that there was no 1803 Windows Image. But that’s all bygones now, as 1809 is finally here.

PowerShell Desktop Virtual Environments through Nested Virtualization

The screenshot below on first glance just looks like any command window in a virtual machine. But look a little closer;

  • Remote Desktop Session to an Azure Windows 10 1809 Virtual Machine (host.region.cloudapp.azure.com)
  • Docker Run Windows 1809 PowerShell $psversiontable
    • PowerShell Desktop 5.1 via Docker inside a Virtual Machine in Azure
      • BOOM!!

PowerShell Desktop Virt Env Nested Virtualization.PNG

Ok, so that is a single Docker Container with a full Windows 10 1809 environment running inside a Windows 10 Virtual Machine. But that means we can also add more containers and have multiple isolated PowerShell environments. Something like ….

Nested Virtual PowerShell Desktop Env.png

Wait, what, how? – The Overview

The high-level process is;

  • Provision a Windows 10 Virtual Machine (Build 1809 or later).
    • I recommend deploying it in Azure, but you could do it in other virtualization environments that support Nested Virtualization
    • NOTE: As I write this Windows Server 2019 Build 1809 hasn’t hit the Azure Marketplace. When it does, as it has a common code-base it should work exactly the same.
  • Enable the OpenSSH Feature (I’ll be using this a little in this post but more in a future post)
  • Enable the Containers and Hyper-V Features
  • Install and configure Docker
  • Pull the Windows Build 1809 Container Image

Windows 10 Build 1809 Virtual Machine

I’m not going to give step-by-step details for deploying a Windows VM in Azure. If you’re looking to set up Virtual PowerShell Desktop Environments with Docker you should be able to deploy a Windows VM. That said, you need to choose a VM size and version that will support “Nested Virtualization”. The Azure RM Dv3 and Ev3 Series VMs do. If you get an error similar to this when running a Docker Image then change your VM Series to Dv3. I went with;

  • The Azure Marketplace has a image for Windows 10 Build 1809. Search for Windows 10 Pro, Version 1809
    • In order to run this VM as pragmatically as practical, I chose the following size and configuration for my VM initially
      • Standard D2_v3 (2 vCPUs, 8 GB memory)
      • HDD over SSD
      • Un-managed disks
    • Enable SSH and RDP in the NSG configuration
      • initially we’ll need RDP to connect to the workstation
      • moving forward we’ll be using SSH

OpenSSH Server

OpenSSH Client and Server have been available for Windows for a while. Build 1809 though has streamlined the install process considerably. The base install and setup is now just a couple of commands away. The commands below will install the latest version of OpenSSH Server via PowerShell;

# Find OpenSSH Server
$openSSH = Get-WindowsCapability -Online | Where-Object Name -like 'OpenSSH*'

# Install OpenSSH Server
$sshServer = $openSSH | Select-Object name | Where-Object {$_.name -like "OpenSSH.Server*"}
$sshServer

Add-WindowsCapability -Online -Name $sshServer.Name

which when executed via VSCode looks like;

Install OpenSSH Server on Windows 10.PNG

By default the SSH Server service is configured for Manual startup. To configure it for Automatic Startup use the Set-Service cmdlet.

# Set SSH Server for Auto Startup
Get-Service sshd
Set-Service sshd -StartupType Automatic

ssh Server Startup Automatic.PNG

Finally we need to increase the ClientAliveInterval setting in the sshd_config configuration file located in the %programdata%\ssh directory. I’ve made mine 3600 seconds (1 hour).
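
The relevant line in sshd_config simply becomes the following (using the 3600 second value mentioned above):

ClientAliveInterval 3600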

sshd ClientAliveInterval.PNG

Windows Containers / Docker Dependencies

# Install Containers / Docker Dependencies
Enable-WindowsOptionalFeature -Online -FeatureName containers -All -NoRestart
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All
# Restart the computer once both features have been enabled
Restart-Computer

Install Docker

Head on over to Docker and log in (or create an account if you don’t already have one). Get Docker CE for Windows. I’m running 18.06.1-ce-win73.

Download and Install Docker.PNG

As we want a full Windows environment for PowerShell (not PowerShell Core on Linux) select “Use Windows containers” when installing Docker.

Use Windows Containers.PNG

At the end of the Docker install there is a reboot required.

Docker Install Complete.PNG

Get the Windows 1809 Base Container Image

We’re almost there. We need to get the recently released full Windows Image that will be the basis for our containers that will allow us to run full PowerShell environments. Don’t be confused by the Nano and Core images that have been available for quite some time. This is the FULL WINDOWS Build 1809 IMAGE.

As future Windows updates increment the version, the version you want to pull needs to be no greater than the host it is running on. Unlike the Insider Images, the release versions follow the Release Number, not the Build Number. Looking at the repository we can see that the image name is 1809, whereas its Build Number is 10.0.17763.134.

Docker Windows Image Registry.PNG

With the workstation restarted we SSH into it and pull the Windows 1809 Docker Image. I’ve given my Windows 10 VM a DNS name so I don’t need to figure out the IP address each time I start it up. From a Windows command prompt, access your new VM (via its IP address) using;

ssh username@IPAddressofWin10VM

Once we have a console on our Windows 10 VM we can pull the Windows 10 Docker Image.

docker pull mcr.microsoft.com/windows:1809

The image will be retrieved.

Pull Windows 1809 Base Image.PNG

After pulling the image it will be extracted. Depending on the spec of your VM this may take 10-20 minutes.

Extracting Windows Base Docker Image.PNG

After Extraction we have our base Container Image.

Completed Docker Image.PNG

In order to create a container from the command console via SSH we need to be elevated. I’ll cover that in the next post. So to validate we are able to create a container based on the full Windows 10 1809 image, RDP into the Windows 10 VM and open an elevated command prompt. Then type the command;

docker run -it mcr.microsoft.com/windows:1809 powershell $psversiontable

which will start a container using the Windows 10 1809 Image and run PowerShell with the command $PSVersionTable that will return the version of PowerShell.

PowerShell Desktop Virt Env Nested Virtualization

Summary

As you can see from the screenshot above, we have Nested Virtualization in an Azure Resource Manager Windows 10 Virtual Machine running a Docker Windows 10 1809 Container Image that allows us to run PowerShell Desktop 5.1. BOOM!!

That’s it for the first post, where I introduced the concept of Full Windows Docker Images supporting PowerShell Desktop in Azure. Stay tuned for the next post that starts putting this new functionality to good use.


Building deployment pipelines for Azure Function proxies and Logic Apps

Azure Logic Apps offer a great set of tools to rapidly build APIs and leverage your existing assets through a variety of connectors. Whether in a more ad-hoc scenario or in a well-designed microservice architecture, it’s always a good idea to introduce some form of decoupling through the mediator pattern. If you don’t have the budget for a full-blown API Management rollout and your requirements don’t extend further than a basic proxy as a mediator, keep on reading.

One of the intricacies of working with the Logic Apps HTTP input trigger is the dynamic input URL. When recreating your Logic Apps via ARM templates you’ll notice that this input URL changes once you remove the existing Logic App. This, amongst other reasons, makes Logic Apps unsuitable for direct exposure to API consumers. Azure API Management offers a great way of building an API gateway between your consumers and Logic Apps, but comes with a serious price tag until the consumption tier finally becomes available later this year. Another way of introducing a mediator for your Logic Apps is Azure Function App Proxies. Although very lightweight, we can consider the Function App Proxy as an API layer with the following characteristics:

  • Decouple API consumer from API implementation
    By virtue of decoupling we can move our API implementation around in the future, or introduce versioning without impacting the API consumer
  • Centralised monitoring with Application Insights
    Rich out of the box monitoring capabilities through one-click deployment

In this post we’ll look at fully automating resource creation of a microservice, including the following components:

  • Application Insights instance
    Each App Insights instance has its unique subscription key. The ARM template will resolve the key during deployment for Application Insights integration with the Function App Proxy.
  • Logic App
    As mentioned before the input URL will be dynamic, so we’ll need to resolve this during deployment.
  • Function App
    The function app and its hosting plan can be easily created with an ARM template, together with some application settings including the reference to the Logic App backend URL which is determined during deployment time.
  • Function App Proxy
    Last but not least, proxies are defined as part of the application content. The proxies.json file in the wwwroot contains the actual proxy service definition, establishing the connection with the Logic App.
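
To make that last point concrete, below is a minimal, illustrative sketch of what such a proxies.json could look like, written out from PowerShell. The proxy name, route and url query string parameter are assumptions for illustration; %LogicAppBackendUri% refers to the Function App application setting that the ARM deployment section below resolves at deployment time.

# Illustrative proxies.json written from PowerShell (not the exact definition used in this post)
$proxiesJson = @'
{
  "$schema": "http://json.schemastore.org/proxies",
  "proxies": {
    "LogicAppProxy": {
      "matchCondition": {
        "methods": [ "GET" ],
        "route": "/api/demo"
      },
      "backendUri": "https://%LogicAppBackendUri%&url={request.querystring.url}"
    }
  }
}
'@
Set-Content -Path .\proxies\proxies.json -Value $proxiesJson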

ARM deployment

Below is the single ARM template that contains the resource definition for the AppInsights instance, Logic App and Function App. The Logic App contains a simple workflow with HTTP trigger and response, outputting a supplied URL parameter in a JSON message body.

The important bit to point out here is the LogicAppBackendUri setting in the Function App:

[skip(listCallbackURL(concat(resourceId('Microsoft.Logic/workflows/', variables('logicAppName')), '/triggers/manual'), '2016-06-01').value,8)]

This expression strips the ‘https://’ prefix (the first 8 characters) from the dynamically retrieved Logic App callback URL so we can refer to it from our proxy definition below. In addition, the proxy definition copies a URL query string parameter through to the backend service.

Deployment

PowerShell

The following PowerShell script (with a little help from Azure CLI) deploys the ARM template and the Function App proxy, by uploading the host.json and proxies.json files to the Function App using Azure CLI. The DeployAzureResourceGroup.ps1 is the out-of-the-box script that Visual Studio scaffolds in ARM template projects.
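
The script itself isn’t reproduced here, but the overall flow can be sketched as follows, using New-AzResourceGroupDeployment in place of the scaffolded script and a zip deployment for the proxy files; the resource group, Function App and folder names are assumptions for illustration:

# Sketch only: deploy the ARM template, then push host.json / proxies.json as a zip deployment
$resourceGroup = 'proxy-demo-rg'
$functionApp   = 'proxy-demo-func'

New-AzResourceGroupDeployment -ResourceGroupName $resourceGroup `
    -TemplateFile .\azuredeploy.json `
    -TemplateParameterFile .\azuredeploy.parameters.json

Compress-Archive -Path .\proxies\host.json, .\proxies\proxies.json -DestinationPath .\proxies.zip -Force
az functionapp deployment source config-zip --resource-group $resourceGroup --name $functionApp --src .\proxies.zip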

The above PowerShell and Azure CLI scripts are an excellent way of creating your assets from scratch. In addition we’ll show how to use an Azure DevOps pipeline to perform true CI/CD.

Azure DevOps pipeline

With Azure DevOps pipelines we can easily set up a CI/CD pipeline in just three simple steps.

The first step performs an Azure resource group deployment to deploy the Logic App, Function App and AppInsights instance.

Next we’ll package the Function App proxy definition into a zip file.

The last step will deploy the packaged proxy definition to our Function App:

After a successful deployment with either PowerShell or Azure DevOps we can finally test our function app:
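
One way to exercise it from PowerShell (using the names from the earlier sketches, so purely illustrative):

# Call the proxy endpoint; the Logic App echoes the supplied url parameter back in a JSON body
Invoke-RestMethod -Uri 'https://proxy-demo-func.azurewebsites.net/api/demo?url=https://example.org'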

Happy days. The above demonstrates how we can utilise Azure to create a very cost effective and neat solution to provide an API and proxy whilst leveraging Application Insights to monitor incoming traffic.

Step-by-step: Using Azure DevOps Services to deploy ARM templates with CI/ CD – Part 2

In this blog (Part 2), I take you through enabling Continuous Integration (CI) / Continuous Deployment (CD) for the project created in Part 1.

To recap, I have split this entire post into two parts for easier understanding, and we will focus on Part 2 here:

Part 1- Creating your first project in Azure DevOps (https://blog.kloud.com.au/2018/10/17/step-by-step-using-azure-devops-services-to-deploy-arm-templates-with-ci-cd-part-1/).
Part 2 – Enabling the first project in Azure DevOps for Continuous Integration (CI) / Continuous Deployment (CD).

Enabling the first project in Azure DevOps for Continuous Integration

    • Now, the next step is to enable continuous integration, which will keep your build updated based on changes to the project / ARM templates.
    • Select Builds on the left pane and click the pipeline you created earlier. Click on Edit.
    • Click on Triggers and select Enable continuous integration. Click on Save.
    • Provide your comment and save.
    • Now, if you make a change to your template and push it, the deployment will happen automatically.
    • From within Visual Studio, click on the Code tab and edit the Azuredeploy.json file.
    • Add a storage account to the project and provide a name.
    • Click Commit when done and push the code (please refer to Part 1 for this activity).
    • The deployment will happen automatically thanks to Continuous Integration and Deployment.
    • Verify the storage account has been created in your Azure tenant (a quick PowerShell check is sketched below).
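
If you prefer to verify from PowerShell rather than the portal, a quick check could look like this (assuming the Az module is installed and you are signed in; the resource group name is only an example):

# Quick verification sketch - list storage accounts in the target resource group
Get-AzStorageAccount -ResourceGroupName 'Firstproject-rg' |
    Select-Object StorageAccountName, Location, ProvisioningState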

Enabling the first project in Azure DevOps for Continuous Deployment

To perform continuous deployment, we need to copy the files and publish them as an artifact.

An artifact is a deployable component of your application. It is typically produced through a Continuous Integration or build pipeline. This means you can code once and share packages across different stages / environments (Dev, Test, UAT & Prod).

  • Go to the Pipelines tab, then select Builds and click on Edit.
  • Click on the + item on Agent Job.
  • On the new pane, select Copy Files and click ADD.
  • On the left pane, select Copy Files to: and fill in the required information:
    • Provide a name for the task.
    • Select the source folder (the Azure template folder).
    • Provide the target folder.
  • Next, we need to publish the artifact.
  • Click on the + item on Agent Job.
  • On the new pane, select Publish Build Artifacts and click ADD.
  • On the left pane, select Publish Build Artifacts: and fill in the required information:
    • Provide a name.
    • Select the path to publish.
    • Provide the publish location.
  • Click Save.

Create a release pipeline

A release pipeline is one of the fundamental concepts in Azure Pipelines for your DevOps CI/CD processes. It defines the end-to-end release process for an application to be deployed across various stages.

      • Select the action to create a New pipeline. Then select Create a release pipeline.
      • Select the action to start with an Empty job. Name the stage Stage1 (Test).
      • In the Artifact panel, select + Add and specify a Source (Build pipeline created earlier on this). Select Add.
      • To enable the Continuous deployment trigger, click on Lightning bolt to trigger continuous deployment. You can specify any scheduled time for this deployment. 
      • Select the Tasks tab and select your Stage1 (Test). Select the plus sign (+) for the job to add a task to the job.
      • On the Add tasks dialog box, select deploy and click on Azure Resource Group deployment and click ADD.
      • On the left pane, select the Azure Deployment: Create or Update Resource Group action. Select your Azure subscription and click on Authorize.
      • Select your resource group on your Azure subscription and location.
      • The template location will be the linked artifact.
      • Select your template file (azuredeploy.json) from selection menu.
      • Select your template parameter file (azuredeploy.parameters.json) from selection menu.
      • Deployment mode: complete.
      • On the Pipeline tab, select the stage (Stage1 (Test)) and select Clone.
      • Rename the cloned stage (Stage1 (PROD)).
        • Note: If needed you can change your Azure subscription details by editing this stage.
      • Rename the release pipeline with an appropriate name.
      • Save the release pipeline.

Deploy a release

  • To run the Azure template on each stage, you can create a release or make a scheduled trigger.
  • Select release and click on Create a release.
  • Select which stage needs to have conditions before deployment; in my case, the production deployment (Stage1 (PROD)).
  • Click pre-deployment conditions on the PROD stage and select After stage, choosing Stage1 (Test).
  • Click on Create.
  • The release we have created will start deploying, and you can check the resources in Azure or verify the logs.
  • If you need to perform multiple deployments, select the pipeline, click on Deploy and choose from the multiple deployment options.
  • After this, the production stage will have the resources mentioned in the ARM template.

Notes: Approvals and gates give you additional control over the start and completion of the deployment pipeline. Each stage in a release pipeline can be configured with pre-deployment and post-deployment conditions that can include waiting for users to manually approve or reject deployments and checking with other automated systems until specific conditions are verified.

In addition, you can configure a manual intervention to pause the deployment pipeline and prompt users to carry out manual tasks, then resume or reject the deployment.

 

This is the end of this series on using Azure DevOps Services to deploy ARM templates with CI/CD. Please feel free to post your comments.

Step-by-step: Using Azure DevOps Services to deploy ARM templates with CI/ CD – Part 1

In this blog, we will see how to get started with Azure DevOps from an infrastructure person’s point of view.

We will familiarize ourselves with deploying your Azure resources with ARM templates by using Azure DevOps with Continuous Integration (CI) and Continuous Deployment (CD).

I have made this entire post into two parts for easier understanding:

Part 1: Creating your first project in Azure DevOps

Part 2: Enabling the first project in Azure DevOps for Continuous Integration (CI) / Continuous Deployment (CD).

This article will focus on Part 1. The things needed to make this successful include:

        1. Visual Studio software (free edition) – you can get this from https://visualstudio.microsoft.com
        2. Azure subscription access. If you don’t have one, you can create a free Azure account.
        3. An account in Visual Studio. If you don’t have one, create a new account by signing into https://visualstudio.microsoft.com and enabling the Azure DevOps service.
        4. Click on Azure DevOps and select Sign in.
        5. Once you sign in with your Microsoft account, click Continue.
  1. Creating the first project in Azure DevOps: When you log into Azure DevOps (https://dev.azure.com) for the first time with your MSDN / Microsoft account:
    • Now, click on New project, provide a name (e.g. Firstproject) and add a description for the project.
    • Select the visibility option: Private (with this setting, only you can access the content; you can grant access to the people who should be able to view this project).
    • Under Firstproject, click on Repos.
    • Since the project repository is empty, we need to create a new file. We can use Visual Studio for creating it; click on Clone under the Visual Studio options:

      • Visual Studio will open its console.
      • Provide the Microsoft account credentials used for your Azure DevOps and Azure accounts.
      • The project needs to be cloned to the local disk. Click on Clone.
      • This will pop up a prompt for Azure DevOps credentials.
      • This may result in an authentication failure or fatal error. To resolve this, follow the steps below:
      • In Visual Studio, select Team Explorer, select Manage Connections and click Connect to a Project.
      • Select your Azure DevOps user ID and provide your credentials. Your project (Firstproject) will then be listed for connection.
      • Now you will get the clone options:
      • In the Team Explorer view, click on Create a new project or solution in this repository.
      • Select Installed -> Cloud and Azure Resource Group.

      • Select Blank template for deployment.

    • Select the Solution Explorer view in Visual Studio.
    • Select AzureResourceGroup and click on Azuredeploy.json.
    • Click on Resources in the JSON outline and select Virtual Network for deployment. Provide a name for the vnet, e.g. firstnetwork01.

  • At the bottom of Visual Studio, you will find an icon showing the number of changes made. Click on it to commit the changes.
    • Provide a comment for the commit and select Commit All.

    • The change has been committed locally, and we need to push it to the Azure DevOps project repository. Click on Sync for the change.

  • Click on Push to push the changes to the cloud (Azure DevOps).
  • Now, go back to the Azure DevOps portal, select your project (Firstproject) and select Repos.
  • You will find that the AzureResourceGroup you created in Visual Studio is now available.
  • Click on the Azuredeploy.json file to verify its content.
    2. Enabling deployment of the ARM template in Azure DevOps:
  • Log on to the Azure DevOps portal, open Firstproject (your project name), then click on Builds.
  • On the new page, click on New Pipeline. Select “Use the visual designer to create a pipeline without YAML”.
  • Ensure your project & repository are selected and click on Continue.
  • Select “Start with an Empty Job”.
  • Click on the + item on Agent Job.
  • On the new pane, select Deploy, click on Azure Resource Group deployment and click ADD.
  • On the left pane, select the Azure Deployment: Create or Update Resource Group action.
  • Select your Azure subscription and click on Authorize.
  • Select your resource group on your Azure subscription and location.
  • The template location will be the linked artifact.
  • Select your template file (azuredeploy.json) from the selection menu.
  • Select your template parameter file (azuredeploy.parameters.json) from the selection menu.
  • Deployment mode: complete.
  • Click Save & queue and provide your comment on the file changes.
  • After it has saved, the build operation will commence deployment to your Azure tenant.
  • You can view the deployment logs from the Azure DevOps portal. In addition, you will receive an email (at the address used for your Azure DevOps account) with the deployment status.
  • Verify that your network (the Azure resource we added in the ARM template) has been created in your Azure tenant (a quick PowerShell check is sketched below).
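
As an optional sanity check from PowerShell (assuming the Az module; firstnetwork01 is the example vnet name used earlier and the resource group name is an assumption):

# Quick verification sketch - confirm the virtual network exists
Get-AzVirtualNetwork -Name 'firstnetwork01' -ResourceGroupName 'Firstproject-rg' |
    Select-Object Name, Location, ProvisioningState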
  1. This concludes Part 1, creating and deploying ARM templates with Azure DevOps.
  2. In Part 2, I take you through enabling Continuous Integration (CI) / Continuous Deployment (CD).