Access Azure linked templates from a private repository

I was recently tasked to propose a way to use linked templates, in particular how to refer to templates stored in a private repository.  The Azure Resource Manager (ARM) engine accepts a URI to access and deploy linked templates, hence the URI must be accessible to ARM.  If you store your templates in a public repository, ARM can access them fine, but what if you use a private repository?  This post will show you how.

In this example, I use Bitbucket – a Git-based source control product by Atlassian.  The free version (hosted in the cloud) allows you to have up to 5 private repositories.  I will describe the process for obtaining a Bitbucket OAuth 2.0 token using PowerShell and how we can use the access token in Azure linked templates.

Some Basics

If you store your code in a private repository, you can access the stored code after logging into Bitbucket.  Alternatively, you can use an access token instead of user credentials to log in.  Access tokens allow apps or programs to access private repositories; they are therefore secrets, just like passwords, and should be treated as such.

A Bitbucket access token expires in one hour.  Once the token expires, you can either request a new access token or renew it using a refresh token.  Bitbucket supports all four of the RFC 6749 standard grant types for obtaining an access/bearer token – in this example, we will use the password grant method.  Note that this method will not work when you have two-factor authentication enabled.

Getting into action

First things first, you must have a private repository in Bitbucket.  To obtain access tokens, we will create a consumer in Bitbucket, which will generate a consumer key and a secret.  This key/secret pair is used to request an access token.  See the Bitbucket documentation for more detailed instructions.

Before I describe the process of obtaining an access token in PowerShell, let’s examine how we can get a Bitbucket access token using the curl command.


curl -X POST -u "client_id:secret" \
  https://bitbucket.org/site/oauth2/access_token \
  -d grant_type=password -d username={username} -d password={password}

  • -X POST = specifies a POST method over HTTP, the request method that is passed to the Bitbucket server.
  • -u "client_id:secret" = specifies the user name and password used for basic authentication.  Note that this does not refer to the Bitbucket user login details but rather the consumer key/secret pair.
  • -d = the body of the HTTP request – in this case it specifies the grant type to be used (password grant) along with the Bitbucket login details, username and password.

To replicate the same command in PowerShell, we can use the Invoke-RestMethod cmdlet.  This cmdlet sends HTTP/HTTPS requests to REST endpoints and returns structured data.


# Bitbucket OAuth 2.0 token endpoint
$accessCodeURL = 'https://bitbucket.org/site/oauth2/access_token'

# $bbCredentials is assumed to hold the consumer key/secret and the Bitbucket username/password
# Construct the basic authentication header from the consumer key/secret pair
$base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $bbCredentials.bbConsumerKey,$bbCredentials.bbConsumerSecret)))

# Request body: password grant plus the Bitbucket login details
$data = @{
    grant_type = 'password'
    username   = $bbCredentials.bbUsername
    password   = $bbCredentials.bbPassword
}

# Perform the request to the Bitbucket OAuth 2.0 endpoint
$tokens = Invoke-RestMethod -Uri $accessCodeURL -Headers @{Authorization=("Basic {0}" -f $base64AuthInfo)} -Method Post -Body $data

The base64AuthInfo variable holds a base64-encoded string used for HTTP basic authentication, and the HTTP request body is encapsulated in the data variable.  Both variables are used to construct the Bitbucket OAuth 2.0 request.

When successfully executed, an access token is returned (an example is below).  This access token is valid for one hour by default; after that you can either renew it with the refresh token or request a new access token.

access_token :g-9dXI3aa3upn0KpXIBGfq5FfUE7UXqHAiBeYD4j_mf383YD2drOEf8Y7CCfAv3yxv2GFlODC8hmhwXUhL8=
scopes : repository 
expires_in : 3600 
refresh_token : Vj3AYYcebM8TGnle8K 
token_type : bearer
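
For completeness, renewing the token uses the same endpoint with the refresh_token grant.  A minimal sketch, assuming the same consumer key/secret header and the $tokens response captured above:

# Renew the access token using the refresh token returned in the previous response
$refreshData = @{
    grant_type    = 'refresh_token'
    refresh_token = $tokens.refresh_token
}
$tokens = Invoke-RestMethod -Uri 'https://bitbucket.org/site/oauth2/access_token' -Headers @{Authorization=("Basic {0}" -f $base64AuthInfo)} -Method Post -Body $refreshData
$tokens.access_token   # the new one-hour token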

Use in Azure Linked Templates

Once we have obtained the access token, we can use it in our Azure linked templates by including it as part of the URL query string.

(For ways how we can implement linked templates, refer to my previous blog post)

{
 "apiVersion": "2015-01-01",
 "name": "dbserverLinked",
 "type": "Microsoft.Resources/deployments",
 "properties": {
    "mode": "Incremental",
     "templateLink": {
         "uri": "https://api.bitbucket.org/1.0/repositories/swappr/azurerm/raw/e1db69add5d62f64120b06a3179828a37b7f166c/azuredeploy.json?accesstoken=g-9dXI3aa3upn0KpXIBGfq5FfUE7UXqHAiBeYD4j_mf383YD2drOEf8Y7CCfAv3yxv2GFlODC8hmhwXUhL8=",
         "contentVersion": "1.0.0.0"
     },
     "parametersLink": {
        "uri": "https://api.bitbucket.org/1.0/repositories/swappr/azurerm/raw/6208359175c99bb892c2097901b0ed7bd723ae56/azuredeploy.parameters.json?access_token=g-9dXI3aa3upn0KpXIBGfq5FfUE7UXqHAiBeYD4j_mf383YD2drOEf8Y7CCfAv3yxv2GFlODC8hmhwXUhL8=",
        "contentVersion": "1.0.0.0"
     }
 }
}
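
We can then kick off the deployment of this parent template with the usual cmdlet.  A minimal sketch, assuming the parent template also lives in the private repository and that $tokens holds the response obtained earlier (the resource group name and the branch in the raw URL are placeholders):

# Sketch: deploy a parent template stored in the private repository, passing the token in the query string
$token = $tokens.access_token
$templateUri = "https://api.bitbucket.org/1.0/repositories/swappr/azurerm/raw/master/azuredeploy.json?access_token=$token"
New-AzureRmResourceGroupDeployment -ResourceGroupName 'my-resource-group' -TemplateUri $templateUri -Verbose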

Summary

We have described a way to obtain an access token to a private Bitbucket repository. With this token, your app can access resources, code, or any other artifacts stored in your private repository.  You can also use the access token on your build server so it can fetch the required code, build/compile it, or perform other tasks.

 

 

WORKAROUND / FIX: Login to Azure with certificate as Service Principal

This blog post describes my recent experience with Azure AD service principal authentication using a certificate. The process is well documented and seemed quite straightforward; however, that was not my experience.
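
For context, the documented setup boils down to creating a certificate, creating an Azure AD application with the certificate as its credential, creating the service principal, and assigning it a role.  A minimal sketch (exact parameter names vary between AzureRM module versions; the paths, names, and the Reader role here are placeholders):

# Load the certificate and base64-encode its public portion
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("C:\temp\examplecert.pfx", "<pfx password>")
$keyValue = [System.Convert]::ToBase64String($cert.GetRawCertData())

# Create the Azure AD application with the certificate as its credential
$azureAdApplication = New-AzureRmADApplication -DisplayName "example-sp" -HomePage "https://example.local" -IdentifierUris "https://example.local/example-sp" -CertValue $keyValue -StartDate $cert.NotBefore -EndDate $cert.NotAfter

# Create the service principal and grant it a role on the subscription
New-AzureRmADServicePrincipal -ApplicationId $azureAdApplication.ApplicationId
New-AzureRmRoleAssignment -RoleDefinitionName Reader -ServicePrincipalName $azureAdApplication.ApplicationId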

The issue

I was able to follow the process to set up the Azure AD service principal successfully, up to the step where I granted the service principal a role (using PowerShell cmdlets). When I tried to log in as the service principal, I encountered the issue below.

Login-AzureRmAccount -CertificateThumbprint $cert.Thumbprint -ApplicationId $appId -ServicePrincipal -TenantId $subscription.TenantId

Login-AzureRmAccount : Invalid provider type specified.
At line:1 char:1
+ Login-AzureRmAccount -CertificateThumbprint $cert.Thumbprint -Applica ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : CloseError: (:) [Add-AzureRmAccount], CryptographicException
    + FullyQualifiedErrorId : Microsoft.Azure.Commands.Profile.AddAzureRMAccountCommand

Referring to the documentation for Add-AzureRMAccount (Login-AzureRMAccount is an alias), the TenantId parameter expects an array of strings, which prompted me to change the command to:

Login-AzureRmAccount -CertificateThumbprint $cert.Thumbprint -ApplicationId $azureAdApplication.IdentifierUris -ServicePrincipal -TenantId [string[]]$subscription.TenantId

This time the error received was:

Login-AzureRmAccount : 'authority' should be in Uri format
Parameter name: authority
At line:1 char:1
+ Login-AzureRmAccount -CertificateThumbprint $cert.Thumbprint -Applica ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : CloseError: (:) [Add-AzureRmAccount], ArgumentException
    + FullyQualifiedErrorId : Microsoft.Azure.Commands.Profile.AddAzureRMAccountCommand

Note: I updated the Azure PowerShell module to 1.3.2 (19 April 2016) and still received the ‘authority‘ error.  The ‘Invalid provider type‘ error didn’t appear though; instead it produced a clearer message: Cannot convert ‘System.Object[]’ to the type ‘System.String’ required by parameter ‘TenantId’.

The workaround / fix

As a workaround, I resorted to the Azure cross-platform CLI (version 0.9.2 as tested), which performs the equivalent operation to the PowerShell cmdlet.  If you don’t mind the CLI, I think this can be considered a fix.

Before running this command you need to convert the PFX file to a PEM file as described here.
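
The conversion itself is typically a one-liner with OpenSSL (assuming OpenSSL is available on your path; the file paths are placeholders):

# Export the PFX to a PEM file that the cross-platform CLI can consume (prompts for the PFX password)
& openssl pkcs12 -in "C:\temp\examplecert.pfx" -out "C:\temp\cert.pem" -nodes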

azure login --service-principal --tenant "$tenantid" -u "$appid" --certificate-file <path to PEM file>\cert.pem --thumbprint "$cert"

This resulted in the following.

info:    Executing command login
info:    Added subscription <Subscription name>
info:    Setting subscription "<Subscription Name>" as default
info:    login command OK

I have not performed a detailed analysis on why the PowerShell cmdlet produced these errors – there might be information that can be gleaned via Fiddler on what REST API requests were generated (in the meantime I have raised this with @AzureSupport and opened an issue on GitHub).

Hope this short post saves you troubleshooting time.

 

Break down your templates with Linked Templates (Part 2)

Continued from part 1

This second part of the series describes how we construct Azure Resource Manager linked templates.

Quick Recap

In the first part, we set up the first template which deploys the storage, virtual network, and subnets. This will be our “master” template where we will link all our related templates.

 

Linked templates.png

  • 1st template: master template – we will modify this template slightly to capture parameters and the linked templates
  • 2nd template: two web servers (IIS) – this is a new template
  • 3rd template: DB server (MySQL) – a new template

We will use the Azure quickstart templates on GitHub as the basis for the second and third templates. The quickstart repository is your go-to place to shop for Azure templates. The point of using templates is re-usability, and most likely you do not need to start with a blank template as you may find existing templates that you can build upon.

The templates will provision two Windows web servers with IIS plus one Linux server with MySQL.  The second template is adapted from the Windows on IIS quickstart template and was modified so that it creates two instances of the IIS server – installed using the DSC extension.  There are some interesting aspects of this template that are worth examining, especially the way we deploy multiple resource instances.  The third template is based on the MySQL on Ubuntu quickstart template and, like the second template, was modified to reference our master template.

We will also revisit the master template in order to link all of our templates together. All templates are provided at the end of this blog post including where to find them on Github.

The templates are not particularly short but bear with me as I highlight the interesting bits so we understand their significance.


Important Bits

Second template: two web servers (IIS)

First is the use of the copy property to create multiple resource instances.  It allows you to create multiple instances of a given resource, resulting in a less cluttered template as you do not need to copy and paste resources when more than one is needed.

"copy" : {
        "name" : "publicIpCopy",
        "count" : "[parameters('numOfInstances')]"
 }

The above has a number of consequences.  We have a way to copy resources – which is all well and good – but we need to ensure that each duplicated resource has a unique name, otherwise they will conflict.  This is solved with the concat() and copyIndex() functions, which allow us to concatenate our base name with a counter index of the resource we are duplicating.

"resources" : { 
   "name" : "[concat(parameters('vmName'), copyIndex())]" 
}

Then what do we do with the dependsOn attribute when multiple resources are created?  Good question!  And the answer is the same :) – we use copyIndex() to point Azure Resource Manager to the resource it needs.  The same applies when assigning a resourceId in the properties attribute.

"dependsOn" : [
   "[concat('Microsoft.Network/networkInterfaces/', variables('nicName'), copyIndex())]"
]

"id": "[concat(resourceId('Microsoft.Network/networkInterfaces', variables('nicName')), copyIndex())]"

There is also a DSC (Desired State Configuration) extension; this extension is responsible for installing the web server role using the PowerShell DSC provider.
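
The actual DSC package used by the quickstart template contains the full configuration; conceptually it boils down to something like the sketch below, where the WindowsFeature resource installs IIS on each VM instance (the configuration and resource names here are illustrative, not the exact contents of the quickstart package):

Configuration WebServerConfig
{
    Node "localhost"
    {
        # Install the IIS web server role
        WindowsFeature WebServer
        {
            Ensure = "Present"
            Name   = "Web-Server"
        }

        # Install the IIS management console
        WindowsFeature WebManagementConsole
        {
            Ensure = "Present"
            Name   = "Web-Mgmt-Console"
        }
    }
}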

Finally, all references to the creation of the storage account and virtual network are removed, as we use the VNET and storage account from the master template; they are provided as parameters.

Third template: one DB server (MySQL)

The DB template is pretty basic: we deploy a new Linux VM with a MySQL database installed using the Linux extension.  Similar to the second template, we also replace the storage and virtual network resource definitions with parameters so that they refer to our master template.

Linking all the templates together

There are two ways we can link templates.  The first method is to hard-code the URLs of the linked templates.  The second method – a smarter way of linking templates and my preferred approach – uses variables to construct the template URLs.

OK, there is a third method :) – using parameters and, based on what is being passed, deploying specific linked templates (also known as conditional nested templates).  Note: I will explain the first two methods here and might save the third for another time, as it is beyond the scope of this post.

Hard-coded

With this method, we simply hard-code the URLs of our linked templates.

{
 "apiVersion": "2015-01-01",
 "name": "dbserverLinked",
 "type": "Microsoft.Resources/deployments",
 "properties": {
   "mode": "Incremental",
   "templateLink": {
     "uri": "https://urltotemplate.com/linkedtemplate.json",
     "contentVersion": "1.0.0.0"
   },
   "parametersLink": {
     "uri": "https://urltotemplate.com/linkedparamtemplate.json",
     "contentVersion": "1.0.0.0"
   }
 }
}

We could, for example, use variables to group all the URLs together.

"variables" {
  "linkedTemplate" : {
    "templateUrl" : "https://urltotemplate.com/linkedtemplate.json",
    "paramUrl" : "https://urltotemplate.com/linkedtemplate.param.json"
  }
}

...
"templateLink": {
     "uri": "[variables('linkedTemplate').templateUrl]",
     "contentVersion":"1.0.0.0"
},
"parametersLink": {
     "uri": "[variables('linkedTemplate').paramUrl]",
     "contentVersion":"1.0.0.0"
}
...

The downside of the hard-coded approach is that you need to update every single linked template URL every time you move your templates, e.g. when you fork or migrate the templates in your source control.


Alternatively, we can choose to pass the parameters inline in our template – as opposed to using a parameters file referenced by the parametersLink attribute.

"templateLink": {
     "uri": "[variables('linkedTemplate').templateUrl]",
     "contentVersion": "1.0.0.0"
},
"parameters": {
     "attribute1" : "someValue",
     "attribute2" : "someValue"
}

The above method can also be applied to the second method (discussed below).

Variables to construct templates

This is the method we will use to link our templates together.  The approach is simple yet more elegant: we assign a variable containing the URL of the directory that holds all linked templates.  From this variable, the linked template variables are built using the concat() function.

This effectively avoids the need to update every single URL, as we only need to update the shared base URL variable from which the linked templates’ URLs are built (by concatenating the two together).

"variables" : {   
   "sharedBaseUrl" : "https://raw.githubusercontent.com/vicperdana/azure-quickstart-templates/master/windows-iis-sql-linked-multivm/nested/"
   "secondTemplate" : {
     "templateUrl" : "[concat(variables('sharedBaseUrl'),'azuredeploy-web.json')]",
     "paramUrl" : "[concat(variables('sharedBaseUrl'),'azuredeploy-web.parameters.json')]"
   }
   ...
}

...
"resources" : {
   "templateLink": {
       "uri": "[variables('secondTemplate').templateUrl]",
       "contentVersion":"1.0.0.0"
    },
   "parametersLink": {
       "uri": "[variables('secondTemplate').paramUrl]",
       "contentVersion":"1.0.0.0"
    }

As you can see, there are a few permutations we can use; whichever method you prefer, it is quite easy to link templates together.

All templates are now linked

linkt1

The linked templates on Azure portal once deployed

Extra bits

One thing we need to keep in mind is that each linked template is an entity that can be deployed independently.

When linking templates, ensure that the dependsOn attribute only refers to resources defined in the current template – not resources in the parent template or in other linked templates – otherwise the deployment will throw an error.  This makes sense, since the scope of dependsOn is the template in which it is defined; it has no visibility of other templates.

Finally, Azure Resource Manager performs a two-part validation: before and during deployment.  The pre-deployment validation checks whether a template is syntactically correct and would be accepted by Azure Resource Manager.  The during-deployment (runtime) validation then checks for resource dependencies within your Azure subscription.  The templates may seem valid, but until they are deployed there is no fool-proof way to guarantee a working template.  However, as a good practice when deploying templates, make pre-deployment validation a habit as it will catch the most obvious errors detected by Azure Resource Manager.
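
In PowerShell, pre-deployment validation is a one-liner; a minimal sketch (the resource group and file names are placeholders):

# Validate the master template (and, by extension, its linked templates) before deploying
Test-AzureRmResourceGroupDeployment -ResourceGroupName 'LinkedTemplatesDemo' -TemplateFile '.\azuredeploy.json' -TemplateParameterFile '.\azuredeploy.parameters.json'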


The next post

Now that we have covered linked templates, we will look at how we can leverage Key Vault to secure our templates and any sensitive information passed into them, such as credentials.  Hang tight :).

Below you will find the updates made to the master template and the new templates; if you are looking for the full set, including the linked ones, I have made them available on my forked quickstart templates repository on GitHub.  Enjoy!

 

Updated master template

2 x web servers template — second template

The DB template — third template

 

Break down your templates with Linked Templates (Part 1)

Templated deployment is one of the key value propositions of moving from the Azure classic deployment model to Resource Manager (ARM).  It is probably the one key feature that made a big stride towards Infrastructure as Code (IaC).  Personally, I had been looking forward to this feature, as it has long been a prominent feature on the competing platform.

Now that this feature has been live for a while, one aspect I find interesting is the ability to link templates in Azure Resource Manager.  This post is part of a three-part series highlighting ways you could deploy linked templates.  The first part of the series describes the basics – building a template that creates the base environment.  The second part will demonstrate the linked templates.  The third part will delve into a more advanced scenario with Key Vault and how we can secure our linked templates.

Why linked templates?  We could get away with one template to build our environment, but is that a good idea?  If your environment is pretty small, sure, go ahead.  If the template becomes unmanageable – which is mostly the case if you are serious about ‘templating’ – then you have come to the right place: linking or nesting templates is a way to de-couple your templates into manageable chunks.  This allows you to ‘branch’ out template development, especially when you have more than one person working on it, resulting in smaller, more manageable templates rather than one big template.  Plus, it makes testing and debugging a little easier as you have a smaller scope to work with.


OK, so let’s start with the scenario that demonstrates linked templates.  We will build a two-tier app consisting of two web servers and a database server.  This includes placing the two web servers in an availability set, with a virtual network (VNET) and storage account to host the VMs.  To decouple the components, we will have three templates: one template that defines the base environment (VNET, subnets, and storage account), a second template that builds the virtual machines (web servers) in the front-end subnet, and a third template that builds the database server.

AzTemplate-1

First template (this blog post)

AzTemplate-2

Second template

AzTemplate-3

Third template

AzTemplate-4

The building process

First, create a resource group; this example relies on a resource group for deployment.  You can create the resource group now or later during deployment.

Once you have a resource group created, set up your environment – SDK, Visual Studio, etc.  You can find more info here.  Visual Studio is optional, but this demo will use it to build and deploy the templates.

Create a new project and select Azure Resource Group.  Select Blank template.

AzTemplate-5

This will create a project with a PowerShell deployment script and two JSON files: azuredeploy and azuredeploy.parameters.  The PowerShell script is particularly interesting as it has everything you need to deploy the Azure templates.
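
Stripped of its parameter handling and error checking, the generated script essentially boils down to the two cmdlets below (a simplified sketch, not the full script Visual Studio generates; the resource group name and location are placeholders):

# Simplified essence of the generated deployment script
$resourceGroupName = 'LinkedTemplatesDemo'
$location = 'Australia East'

New-AzureRmResourceGroup -Name $resourceGroupName -Location $location -Force
New-AzureRmResourceGroupDeployment -ResourceGroupName $resourceGroupName -TemplateFile '.\azuredeploy.json' -TemplateParameterFile '.\azuredeploy.parameters.json' -Verbose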

AzTemplate-6

We will start by defining the variables in the azuredeploy template.  Note these can be parameterised for better portability.

Then we define the resources – storage account and virtual network (including subnets).

We then deploy it – in Visual Studio (VS) the process is simply a matter of right-clicking the project and selecting New Deployment…, then Deploy.  If you didn’t create a resource group earlier, you can also create it here.

AzTemplate-7

The template will deploy and a similar message should be displayed once it’s finished.

Successfully deployed template ‘w:\github-repo\linkedtemplatesdemo\linkedtemplatesdemo\templates\azuredeploy-base.json‘ to resource group ‘LinkedTemplatesDemo‘.

On the Azure portal you should see the storage account and virtual network created.

AzTemplate-8

AzTemplate-9

The next part in the series describes how we modify this template to call the web and DB server templates (linked templates).  Stay tuned :).

 

 

 

Create AWS CloudFormation Templates with Visual Studio

Background

AWS CloudFormation is a wonderful service for automating your AWS builds – my colleagues have done a number of detailed walk-throughs in other blog posts.

AWS also provides a toolkit for Visual Studio as an extension of the IDE.  To get started, configure the extension with your AWS IAM Access Key ID and Secret Key and you will be able to use the new AWS explorer pane to explore all AWS services such as VPC, EC2, RDS, etc.

Installing the toolkit also installs the AWS SDK for .NET, which includes libraries for developing apps that use AWS services via .NET classes.  With AWS SDK support on the .NET platform, building applications and infrastructure that leverage AWS services from .NET becomes easier.

AWSVisualStudio

Create and deploy your CloudFormation template in Visual Studio

To create a new CloudFormation template in Visual Studio, you simply add a new project: select File — New — Project, navigate to Templates — AWS, and select AWS CloudFormation Project.

NewProject

Once the project is created, you will be presented with the goodness of Visual Studio including Intellisense!

IntelliSense

To deploy the template, right-click the template and select Deploy to AWS CloudFormation.

Deploy

Troubleshooting notes

I came across an error whenever I deployed a new AWS CloudFormation template created in Visual Studio (I am using Visual Studio 2012 Premium edition).  The error indicated a syntax error, yet after validating my template it was clear that it was not a formatting problem.

Deploying the same template directly in the AWS console or via the AWS PowerShell cmdlet (Test-CFNTemplate) produced the same result:

Error

Test-CFNTemplate : Template format error: JSON not well-formed. (line 1, column 2)
At line:1 char:1
+ Test-CFNTemplate -TemplateURL "https://s3-ap-southeast-1.amazonaws.com/vicperdan ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (Amazon.PowerShe...NTemplateCmdlet:TestCFNTemplateCmdlet) [Test-CFNTemplate], InvalidOperationException
    + FullyQualifiedErrorId : Amazon.CloudFormation.AmazonCloudFormationException,Amazon.PowerShell.Cmdlets.CFN.TestCFNTemplateCmdlet

Finding the solution took some searching, until I found a post indicating that this is caused by the default encoding used by Visual Studio: UTF-8 with BOM (byte order mark).  Changing the encoding to UTF-8 without BOM fixed the issue.  This can be changed by selecting File — Advanced Save Options in Visual Studio.

UTF
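
After re-saving the template as UTF-8 without BOM, you can re-validate it with the same cmdlet; for example (the file name is a placeholder):

# Re-validate the template once it has been saved without the BOM
Test-CFNTemplate -TemplateBody (Get-Content -Path '.\MyStack.template' -Raw)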

Remote desktop client randomly unable to connect to the RDS farm

Recently I ran into a problem with an existing Remote Desktop Services 2012 R2 deployment at a client site. The error occurred intermittently, and after a number of retries the client could establish a connection normally, making the issue hard to reproduce consistently.  This blog summarises the process of identifying the symptoms, possible causes, and the resolution steps.

Some Background

The RDS farm consisted of two connection broker servers and two session hosts.  The Remote Desktop Connection Broker was configured in HA mode using two DNS records pointing to the two broker nodes for round robin.  The session hosts are 2012 R2 based machines.  The broker nodes also host the RD Web Access and RD Gateway roles, with one of the nodes also holding the RD Licensing role.

RDSFarm

Troubleshooting

The end user encountered the following error when trying to connect:
Your computer can’t connect to the remote computer because the Connection Broker couldn’t validate the settings specified in your RDP file. Contact your network administrator for assistance.

After further digging, I found the error below (Event ID 802) on the second broker node:
RD Connection Broker failed to process the connection request for user <userID>.
Farm name specified in user’s RDP file (hints) could not be found.
Error: The farm specified for the connection is not present.

RDSError

 

Additional errors encountered were:
Remote Desktop Connection Broker Client failed while getting redirection packet from Connection Broker.
User : <userID>
Error: Element not found.

Remote Desktop Connection Broker Client failed to redirect the user <userID>
Error: NULL

One aspect I discovered was that the same error didn’t occur on the other broker server. This led me to investigate the RDS configuration: the RDCB was set up in HA mode with a SQL backend; however, it only had one node configured.  We were getting somewhere.  To isolate the issue, we decided to operate the RDS farm in a one-node configuration to confirm the suspicion that whenever a user was redirected to the broker that was not configured, the redirection would fail (RDCB uses round robin DNS for HA).
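
As an aside, the RemoteDesktop PowerShell module is handy for confirming how the broker HA deployment and the farm membership are actually configured; a quick sketch (the broker FQDN is a placeholder):

# Inspect the connection broker HA configuration and the servers known to the deployment
Get-RDConnectionBrokerHighAvailability -ConnectionBroker "rdcb01.domain.local"
Get-RDServer -ConnectionBroker "rdcb01.domain.local"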

In the server manager console, the following tasks were done:

  • Removed DNS RR record of the second broker node
  • Removed the second gateway
  • Removed the RD Web Access of the second node

Connecting to the remote desktop farm from the internal network worked fine after we made this change – we tested it multiple times and from different machines to confirm it was stable. However, when connecting from an external network the end user received a different error:

Remote Desktop can’t connect to the remote computer for one of these reasons:
1) Remote access to the server is not enabled
2) The remote computer is turned off
3) The remote computer is not available on the network
Make sure the remote computer is turned on and connected to the network, and that remote access is enabled.

The next port of call was to check the RD Gateway, and we found that the second gateway was still part of the RD Gateway farm.  From Windows Server 2012, RDS is administered in the Server Manager console, which includes configuration for session collections, RD Web Access, the broker deployment, and RD Licensing.  One aspect that is not fully managed via the console is the Remote Desktop Gateway.  The key takeaway: after adding or removing an RD Gateway via the Server Manager console, check whether the RD Gateway server has also been removed from the RD Gateway Manager.

RDGateway

*Further investigation showed that the configuration had “Bypass RD Gateway server for local addresses” checked, resulting in a different outcome when connecting from local networks as they bypass the RD Gateway. Unchecking this enforces all connections through the RD Gateway.

RDSConfig

Easily connect to your AWS VPC via VPN

This blog post explains the process for setting up client-to-site connectivity on AWS, allowing you to connect to your AWS resources from anywhere using a VPN client. There are several ways to do this, but this post shows one of the quickest, using a pre-built community image by OpenVPN available in AWS.

AWS Marketplace

AWS Marketplace is a great place to find pre-built solutions created by AWS ISVs or enthusiasts for the benefit of the wider community. The offerings range from commercial and community AMIs to SaaS products and Reserved Instances. While there may be software costs associated with using them – built into the hourly charges – some charge nothing extra beyond the cost of running the EC2 instances.

The setup

The following is an overview diagram of my setup. Note that this post does not cover HA setup although it is possible to extend it further by running the instances in multiple AZs.

VPCVPNSetup

In our VPC we have public and private subnets: the public subnet contains the OpenVPN instance and the private subnet contains the web server (server 1). This configuration allows you to separate public and private traffic by terminating all internet traffic at the public subnet layer. It is possible to have your internal instances in the public subnet where your VPN instance is located, but the above model provides more isolation.

The steps

To configure your VPN, perform the following:

Create a VPC

  • VPC range: 172.16.0.0
  • Public subnet:
    • Contains the VPN EC2 instance
    • Create an internet gateway
    • Attach the internet gateway to the public subnet
    • Route to the internet using Internet gateway
  • Private subnet:
    • A Windows 2012 server with IIS enabled
    • Route to the public subnet

Create a new openVPN image

  • Launch a new instance and select AWS Marketplace

AWSMarketplace

  • Search for OpenVPN

OpenVPNAMI

  • Assign the server to the public subnet and an Elastic IP
  • Security Group should have the following services opened:
    • SSH
    • HTTP
    • HTTPS
    • TCP 943
    • UDP 1194
    • ICMP

Create a new Windows Server machine in the private subnet

Win2012AMI

  • Assign the server to the private subnet and an elastic IP (the Elastic IP will later be removed)
  • Security Group should have the following services opened:
    • HTTP
    • HTTPS
    • ICMP
  • Connect to your Windows server, open a PowerShell window, and enter the following command to install IIS:
Install-WindowsFeature web-server,web-mgmt-console

Disable source/dest check on the VPN server – to allow communications via the VPN tunnel

SourceDestCheck
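
If you prefer to script this, the same change can be made with the AWS Tools for PowerShell, assuming your credentials and region are already configured (the instance ID is a placeholder):

# Disable the source/destination check on the OpenVPN instance
Edit-EC2InstanceAttribute -InstanceId "i-0123456789abcdef0" -SourceDestCheck $false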

Setup the VPN server

  • I used PuTTY to connect to the VPN machine (download). Right-click the instance in EC2, select “Connect”, and follow the instructions to connect
  • The following is a snippet of openVPN prompts and their answers when you log on for the first time

======================

Please enter 'yes' to indicate your agreement [no]: yes

Once you provide a few initial configuration settings,
OpenVPN Access Server can be configured by accessing
its Admin Web UI using your Web browser.

Will this be the primary Access Server node?
(enter 'no' to configure as a backup or standby node)
Press ENTER for default [yes]: yes

Please specify the network interface and IP address to be
used by the Admin Web UI:
(1) all interfaces: 0.0.0.0
(2) eth0: 172.16.10.121
Please enter the option number from the list above (1-2).
Press Enter for default [2]:

Please specify the port number for the Admin Web UI.
Press ENTER for default [943]:

Please specify the TCP port number for the OpenVPN Daemon
Press ENTER for default [443]:

Should client traffic be routed by default through the VPN?
Press ENTER for default [yes]:

Should client DNS traffic be routed by default through the VPN?
Press ENTER for default [yes]:

Use local authentication via internal DB?
Press ENTER for default [no]:

Private subnets detected: ['172.16.10.0/24']

Should private subnets be accessible to clients by default?
Press ENTER for default [yes]:

To initially login to the Admin Web UI, you must use a
username and password that successfully authenticates you
with the host UNIX system (you can later modify the settings
so that RADIUS or LDAP is used for authentication instead).

You can login to the Admin Web UI as "openvpn" or specify
a different user account to use for this purpose.

Do you wish to login to the Admin UI as "openvpn"?
Press ENTER for default [yes]:

Please specify your OpenVPN-AS license key (or leave blank to specify later):

Initializing OpenVPN...
Adding new user login...
useradd -s /sbin/nologin "openvpn"
Writing as configuration file...
Perform sa init...
Wiping any previous userdb...
Creating default profile...
Modifying default profile...
Adding new user to userdb...
Modifying new user as superuser in userdb...
Getting hostname...
Hostname: ip-172-16-10-121
Preparing web certificates...
Getting web user account...
Adding web group account...
Adding web group...
Adjusting license directory ownership...
Initializing confdb...
Generating init scripts...
Generating PAM config...
Generating init scripts auto command...
Starting openvpnas...

NOTE: Your system clock must be correct for OpenVPN Access Server
to perform correctly. Please ensure that your time and date
are correct on this system.

Initial Configuration Complete!

You can now continue configuring OpenVPN Access Server by
directing your Web browser to this URL:

https://172.16.10.121:943/admin
Login as "openvpn" with the same password used to authenticate
to this UNIX host.

During normal operation, OpenVPN AS can be accessed via these URLs:
Admin UI: https://172.16.10.121:943/admin
Client UI: https://172.16.10.121:943/
  • Reset the openvpn user

user@ip-172-16-10-121:~# passwd openvpn
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully

openVPNLogin

    • Go to VPN Settings and allow access to the private subnet and remove access to the public subnet

PrivSubnet

    • Click Save Settings
    • Click Update Running Server
  • Once you have completed the above tasks, remove the Elastic IP address assigned to your EC2 web server

Test your new VPN server

OpenVPNClient OpenVPNClient1

    • Open a command prompt and confirm connectivity – you should not be able to ping your VPN private IP as intended

testVPN

    • Ping the web server, confirm that ICMP is working

testVPN1

testVPN2

That’s it

This should be all that’s needed to set up a VPN connection to your AWS environment. The OpenVPN AS license allows you to have two concurrent connections at a time – additional licenses can be purchased at the OpenVPN site (link). You should consider locking down the environment if you plan to use it for production, e.g. creating a different user in the OpenVPN console, applying ACLs at the subnet level, restricting the security groups even further, or running VPN instances in multiple Availability Zones for a high availability configuration.

With the recent OpenSSL vulnerability, ensure that your version of OpenVPN Access Server is updated to 2.0.6 – details are available here.

ELBs do not cater for your environment? Set up HAProxy for your IIS servers

Recently we encountered a scenario where we needed an alternative to Amazon Web Services (AWS) Elastic Load Balancing (ELB) due to an existing IIS configuration used in an organisation.  We found that HAProxy was the best candidate in terms of simplicity and suitability for the scenario we were addressing.

This post will show you how you can leverage HAProxy to load balance IIS web servers hosted in AWS EC2, and briefly explain why HAProxy was best suited to our scenario.

The scenario

Consider two web servers that you need to load balance; each hosts several websites configured using multiple IP addresses.  In this case there is no need to handle SSL at the load balancer (LB) layer; the LB simply passes SSL requests through to the backend servers.

Web server 1 hosts the following websites:

  • KloudWeb port 80 with IIS binding to 192.168.137.31:80
  • KloudWeb port 443 with IIS binding to 192.168.137.31:443
  • KloudMetro port 80 with IIS binding to 192.168.137.15:80
  • KloudMetro port 443 with IIS binding to 192.168.137.15:443
  • Note: 192.168.137.31 is the primary interface IP address of web server 1.

Web server 2 hosts the following websites:

  • KloudWeb port 80 with IIS binding to 192.168.137.187:80
  • KloudWeb port 443 with IIS binding to 192.168.137.187:443
  • KloudMetro port 80 binding to 192.168.137.107:80
  • KloudMetro port 443 binding to 192.168.137.107:443
  • Note: 192.168.137.187 is the primary interface IP address of web server 2.

Why is Amazon Elastic Load Balancer less ideal in this case?

ELB only delivers traffic to, and load balances, the primary interface, i.e. eth0.  To make this scenario work with ELB, the IIS binding configuration would need to be amended in one of the following ways:

  • KloudWeb or KloudMetro would need to move to ports other than 80 and 443 for HTTP and HTTPS respectively; or
  • Use different host headers

Those alternatives could not be employed as we needed to migrate the environment as-is.  Given this, replacing ELB was the most viable option to support the scenario. Note: there are merits to binding different IPs for different sites; however, a similar goal can be achieved with a single IP address by assigning host headers (or custom ports) in the IIS binding settings. Further details on the pros and cons of both approaches can be found here.

Why HAProxy?

HAProxy is a very popular choice for replacing ELB in many AWS scenarios.  It provides the features of both traditional L4 and L7 load balancers, and a flexibility that is rarely found in a software-based load balancer.  We also assessed alternatives such as LVS and NGINX – both free to use – but decided to go with HAProxy since it supports SSL pass-through via its TCP port forwarding feature and for the simplicity it provides.

One thing to note: at the time of writing, the HAProxy stable release (1.4) does not support SSL termination at the load balancer (there are third-party tools that can provide it, e.g. bundling with nginx). The newest version (in development) supports SSL offloading, eliminating the need to install any components outside HAProxy to handle SSL.

The Preparation Steps

To prepare, we need the following info:

  • The Load Balancer “VIP” addresses
  • Backend addresses (since you need to bind the VIP addresses to the different backend addresses)
  • LB listener ports and backend server ports

Let’s get hands on

First of all, you may be surprised by how simple it is to configure HAProxy on AWS.  The key is to understand the goal or scenario you want to achieve and (once again) to prepare by collecting the relevant information.

Instance creation

  • We have chosen the ‘vanilla’ Amazon Linux AMI in Sydney.  Spin this instance up via the console or the command line
  • Assign two IP addresses to the HAProxy instance to host the two websites
  • Create a security group which only allows SSH (port 22) and web connections (ports 80 & 443).  You can also separate these to limit SSH connections to certain addresses for additional security
  • Connect to your newly created instance (via PuTTY or the built-in AWS Java console)

Configure your HAProxy

  • Make sure you have switched to root or an account with sudo rights
  • Install haproxy – yum install haproxy
  • Once it is installed, browse to the /etc/haproxy directory and review the haproxy.cfg file
  • Backup the haproxy.cfg file – cp haproxy.cfg haproxy.cfg.backup
  • Modify the original file with the following configuration – vi haproxy.cfg
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
# to have these messages end up in /var/log/haproxy.log you will
# need to:
#
# 1) configure syslog to accept network log events. This is done
# by adding the '-r' option to the SYSLOGD_OPTIONS in
# /etc/sysconfig/syslog
# 2) configure local2 events to go to the /var/log/haproxy.log
# file. A line like the following can be added to
# /etc/sysconfig/syslog
#
# local2.* /var/log/haproxy.log
#
log 127.0.0.1 local2
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
daemon
# turn on stats unix socket
stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000

# KloudWeb & KloudMetro Web Servers
# -----------------------

listen kloudwebhttp
bind 192.168.200.10:80
mode http
stats enable
stats auth admin:apassword
balance roundrobin
option httpchk /kloudlb/test.aspx
server webserver1 192.168.137.31:80 check
server webserver2 192.168.137.187:80 check

listen kloudwebhttps
bind 192.168.200.10:443
mode tcp
balance roundrobin
server webserver1 192.168.137.31:443
server webserver2 192.168.137.187:443

listen kloudmetrohttp
bind 192.168.200.11:80
mode http
stats enable
stats auth admin:apassword
balance roundrobin
option httpchk /kloudlb/test.aspx
server webserver1 192.168.137.15:80 check
server webserver2 192.168.137.107:80 check

listen kloudmetrohttps
bind 192.168.200.11:443
mode tcp
balance roundrobin
server webserver1 192.168.137.15:443
server webserver2 192.168.137.107:443

Testing

Once you have modified the file, run HAProxy to test its functionality

  • On an SSH console, enter – service haproxy start
  • HAProxy will verify the configuration and start the service
  • From this point you can see the built-in dashboard of your new HAProxy configuration by going to the link below
  • Hit or access your website (with IP or DNS) – see the quick round robin check after this list
  • Any new requests will update the stats shown here in real time (see kloudmetrohttp for updated stats)
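
If you want to verify the round robin behaviour from a Windows client, a quick loop such as the sketch below (replace the address with your own LB VIP or DNS name) will generate requests you can watch alternating across the backends on the stats page:

# Quick round robin check: issue ten requests against the KloudWeb VIP and print the status codes
1..10 | ForEach-Object {
    (Invoke-WebRequest -Uri "http://192.168.200.10/" -UseBasicParsing).StatusCode
}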

The HAProxy Configuration explained

Apart from the default settings, this section briefly details the configuration file we used above.  See the HAProxy documentation for more info.  There are ways to leverage the vast feature set of HAProxy, such as performing advanced health checks based on regular expressions and modifying polling times, which are beyond the scope of this blog post.

# listen <name> -- this specifies an HAProxy listener group; you can define a logical name for your servers

# bind <IP addr of LB>:<LB listener port> -- The binding IP address & port

# mode <http or tcp> -- this is set to http (L7 load balancing) or TCP (L4 load balancing)

# stats enable -- Enable the HAProxy stats page

# stats auth admin:<anypassword> -- Set the username and password for accessing the site

# balance roundrobin -- this sets the algorithm used for load balancing requests.

# option httpchk <uri> -- this performs an optional health check to put a backend server in or out of service

# server <name> <server ip addr>:<server port> check port <server port> -- this sets the backend servers which will be load balanced.

Australian IaaS players – a comparison

UPDATE (21/05/13) : Azure announced their plans to expand to the Australian shore yesterday. This blog was updated to include the key changes.

UPDATE (22/10/13) : Updated workload size specification in the IaaS specification comparison table.

There are many blogs comparing the major IaaS providers; however, this post focuses on IaaS providers in the Australian market. Organisations of all sizes have begun adopting or investigating Cloud computing, making it essential for decision makers to look into what providers offer. This comparison looks at the available options in the market for Infrastructure as a Service (IaaS) providers.  As more customers look for the best combination in the market, we will examine each cloud provider's feature set at a high level.  Note that this comparison does not cover the PaaS (Platform as a Service) space.

                      

IaaS major players in Australia

When referring to Australian players, we can count the major providers that have an (actual) presence in Australia as well as those yet to make their services available in this region. Firstly, as we all know, Amazon has treated Australia as a serious market since establishing its Sydney region in the middle of last year (June 2012). Next are the Telcos, Telstra and Optus, who recognise that there is significant revenue in the Cloud market. Finally, companies who wish to scale their services internationally should look at what Rackspace has to offer with its IaaS packages.  UPDATE: Microsoft has shared plans to enable services in Victoria and NSW to cater for the growing demand for Cloud services.

In summary we particularly look at the following Cloud providers:

  • Amazon Web Services
  • Microsoft Windows Azure
  • Telstra Utility Hosting (IaaS)
  • Optus PowerOn IaaS
  • Rackspace IaaS

Amazon Web Services
AWS has gained huge market share and popularity around the world, including Australia, and is seen as a leader in public IaaS. They release very frequently, with new products and updates coming every two weeks or so. As an Infrastructure as a Service provider, Amazon is currently seen as a leader among both enterprises and start-ups.

Microsoft Windows Azure
Azure unveiled the IaaS Virtual Machines offering in preview at the Meet Windows Azure event in June last year (2012). Last week (17 April AEST) Microsoft made the infrastructure services generally available (GA), along with new features such as larger virtual machines and a new commitment-based pricing model for potentially greater discounts. Despite no local availability in Australia, we see Azure as a major player in the Cloud, especially given its successful adoption by developers through its PaaS and SaaS offerings. UPDATE: Azure will be available in Australia – see the announcement here.

Telstra Utility Hosting (IaaS)
It has been public knowledge for a while that Telstra offers Cloud services; they have announced an $800m investment to build a cloud platform to serve the majority of Australian customers. They have also recently completed a major upgrade to their Cloud portal, providing greater ease of use for consumers. Telstra is ramping up its services to cover more geographic regions in APAC with its recent initiative – Telstra Global.

Optus PowerOn IaaS
Optus released its first Cloud product in late 2010, followed by a major upgrade last year. To support its strategy, the parent company, Singtel, has completed a re-organisation to focus more on regional opportunities. Optus PowerOn Cloud is a vCloud-certified data center.

Rackspace IaaS
Rackspace established an Australian presence in 2009 using its overseas data centers. Rackspace has now opened an Australian data center and brings its OpenStack solution for private cloud deployments. There is no date yet for when they will release their public cloud offerings in Australia. To make a consistent comparison, we compare the public cloud offerings.

The Comparison

The following table compares the offerings of the major Cloud providers in the industry at a high level. The comparison takes into account support and availability within the Australian landscape, focusing on infrastructure services (IaaS). We have selected differentiators as a way to distinguish the services and feature sets provided by the Cloud providers (refer to the table). Where applicable, we discuss several key areas in more detail.

This comparison is valid at the time this blog is published and is subject to change as Cloud providers rapidly add more features.

Table

Footnotes description:

  1. Only one region is available at present.
  2. Refers to the DB cloud offerings (PaaS) and excludes the use of a dedicated database installed on a virtual server.
  3. Microsoft customised Hyper-V for Azure.
  4. Refers to a set of virtual machines running on a dedicated hardware.
  5. Rackspace 100% SLA is for hardware and infrastructure failures – please refer to their SLA here.
  6. IaaS (Virtual Machines, Networks, Storage) has the same price worldwide.  CDN (PaaS) has differing prices based on zones.
  7. Update: Azure will soon be available in Australia in two regions – New South Wales and Victoria.  No official date has been announced yet.

Cloud Engine
Cloud Engine refers to the underlying provisioning and orchestration technology supporting the IaaS. Azure, AWS, and the Telstra IaaS use proprietary Cloud engines, while Rackspace notably uses the OpenStack platform and Optus, an early provider, embraces VMware vCloud.

Consumer API
One of the added benefits of the Cloud is the ability to programmatically manage your infrastructure via APIs and various programming languages.  Both Azure and AWS provide strong API support, making practically anything that can be done via the UI possible via the APIs; these are also accessible from different languages, e.g. .NET, Java, PHP, node.js, etc.  Rackspace supports an industry-standard RESTful API powered by the OpenStack platform.  At the time this article was written, there were no APIs published by either Telstra or Optus.

Storage Offerings
All providers have services around storage – this refers to dedicated storage offerings for unstructured and structured data in the Cloud, as opposed to disks attached to servers. Azure offers Table (NoSQL) and Blob (unstructured) storage; Amazon has DynamoDB and S3, and (quite recently) Glacier as an archiving solution; Rackspace offers Cloud Files and Cloud Databases but no NoSQL support yet. Telstra and Optus only offer an unstructured data storage option at this stage.

Compute Offerings
There is not much to say here as all the providers we compared have compute offerings – the varying workload sizes are described in the table above.

Network offerings
Azure has virtual networks, a load balancer, and network security products such as Traffic Manager (in preview as this blog is written). AWS has Virtual Private Cloud, allowing you to create private and public subnets, load balancing with Elastic Load Balancing (ELB), and security groups and ACLs allowing a granular access control mechanism. Rackspace allows the creation of isolated networks with Cloud Networks, load balancing with the Cloud Load Balancers product, and advanced traffic filtering using Open vSwitch technologies. Despite these similarities, there are certain aspects of networking that differ, e.g. load balancer capabilities between Azure, AWS, and Rackspace, which we may cover in a separate blog.

24×7 Support Availability

All cloud providers offer 24×7 support as follows:

  • Azure has phone and email options but no online chat support option.
  • AWS has phone, email, chat, screen sharing support options.
  • Rackspace has phone, ticket (email), chat support.
  • Telstra has phone and email support but no community forum option.
  • Optus has phone and email support, service management reporting but no community forum.

SLA
Each vendor provides differing SLA terms and conditions, and you should consult the appropriate parties (SIs, lawyers, and the relevant vendors).

Amazon
EC2 SLA
S3 SLA

Azure
Virtual Machines and Network SLA
Storage SLA
SQL Databases SLA

Rackspace
Cloud Servers SLA
Cloud Load Balancers SLA
Cloud Databases SLA
Cloud Files SLA 

Telstra
Telstra IaaS SLA

Optus
Optus PowerOn SLA

What does this mean for my organization?

While it is good to see what these cloud providers bring to the table, you need to understand how your organisation can benefit from them. For starters, understand what stage your organisation is at in the journey of adopting the cloud and what immediate business problems you urgently need to address, and then think about ways the Cloud can make a real impact on your organisation.

It is important to look beyond the hype and to align your cloud initiatives with your business needs. At Kloud, we believe that every organisation can benefit from the Cloud in some way, and we are enthusiastic about enabling your business to be cloud-ready. Invite us for a quick meeting to discuss how the Cloud can transform your business.

We look forward to hearing what you think – if you have any suggestions or questions please contact us.

Key links for further info:

https://cloud.telstra.com/virtual-servers
http://www.arnnet.com.au/article/425575/optus_shifts_cloud_strategy_into_high_gear/
http://www.rackspace.com/blog/rackspace-comes-to-australiaand-brings-our-openstack-solution-too/
http://www.rackspace.com/blog/cloud-networks-the-next-chapter-in-the-open-cloud/
http://aws.amazon.com/premiumsupport/
http://www.windowsazure.com/en-us/support/plans/
http://www.rackspace.com/cloud/servers/support_b/

http://www.telstraglobal.com/news/437-telstra-global-s-cloud-solution-powers-flat-planet-s-expansion-in-asia
https://cloud.telstra.com/help-and-support