Automate ADFS Farm Installation and Configuration

Originally posted on Nivlesh’s blog @ nivleshc.wordpress.com

Introduction

In this multi-part blog, I will be showing how to automatically install and configure a new ADFS Farm. We will accomplish this using Azure Resource Manager templates, Desired State Configuration scripts and Custom Script Extensions.

Overview

We will use Azure Resource Manager to create a virtual machine that will become our first ADFS Server. We will then use a desired state configuration script to join the virtual machine to our Active Directory domain and to install the ADFS role. Finally, we will use a Custom Script Extension to install our first ADFS Farm.

Install ADFS Role

We will be using the xActiveDirectory and xPendingReboot experimental DSC modules.

Download these from

https://gallery.technet.microsoft.com/scriptcenter/xActiveDirectory-f2d573f3

https://gallery.technet.microsoft.com/scriptcenter/xPendingReboot-PowerShell-b269f154

After downloading, unzip the files and place the contents in the PowerShell modules directory located at $env:ProgramFiles\WindowsPowerShell\Modules (unless you have changed your Program Files location, this will be C:\Program Files\WindowsPowerShell\Modules).

Open your Windows PowerShell ISE and let's create a DSC script that will join our virtual machine to the domain and install the ADFS role.

Copy the following into a new Windows PowerShell ISE file and save it with a filename of your choice (I saved mine as InstallADFS.ps1).

In the above, we are declaring some mandatory parameters and some variables that will be used within the script

$MachineName is the hostname of the virtual machine that will become the first ADFS server

$DomainName is the name of the domain where the virtual machine will be joined

$AdminCreds contains the username and password for an account that has permissions to join the virtual machine to the domain

$RetryCount and $RetryIntervalSec hold values that will be used to check if the domain is available
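
As a rough sketch of those declarations (the default retry values here are assumptions, not necessarily the originals), the top of the script might look like this:

param (
    [Parameter(Mandatory)]
    [String]$MachineName,        # hostname of the VM that will become the first ADFS server

    [Parameter(Mandatory)]
    [String]$DomainName,         # Active Directory domain to join

    [Parameter(Mandatory)]
    [System.Management.Automation.PSCredential]$AdminCreds,   # account allowed to join the domain

    [Int]$RetryCount = 20,       # how many times to check that the domain is available
    [Int]$RetryIntervalSec = 30  # seconds to wait between checks
)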

We need to import the experimental DSC modules that we had downloaded. To do this, add the following lines to the DSC script

Import-DscResource -Module xActiveDirectory, xPendingReboot

Next, we need to convert the supplied $AdminCreds into a domain\username format. This is accomplished by the following lines (the converted value is held in $DomainCreds).
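
A common pattern for this conversion (a sketch, not necessarily the exact lines from the original script) is:

# Build a domain\username credential from the supplied $AdminCreds
[System.Management.Automation.PSCredential]$DomainCreds = New-Object System.Management.Automation.PSCredential(
    "${DomainName}\$($AdminCreds.UserName)",
    $AdminCreds.Password)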

Next, we need to tell DSC that the command needs to be run on the local computer. This is done by the following line (localhost refers to the local computer)

Node localhost

We need to tell the LocalConfigurationManager that it should reboot the server if needed, continue with the configuration after the reboot, and apply the settings only once. (DSC can apply a setting and then continually monitor it to check that it has not been changed, re-applying it if it has. In our case we will not do this; we will apply the settings just once.)

Next, we need to check if the Active Directory domain is ready. For this, we will use the xWaitForADDomain function from the xActiveDirectory experimental DSC module.

Once we know that the Active Directory domain is available, we can go ahead and join the virtual machine to the domain.

The JoinDomain function depends on xWaitForADDomain. If xWaitForADDomain fails, JoinDomain will not run.

Once the virtual machine has been added to the domain, it needs to be restarted. We will use the xPendingReboot function from the xPendingReboot experimental DSC module to accomplish this.

Next, we will install the ADFS role on the virtual machine
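
Putting those pieces together, here is a minimal sketch of what such a configuration can look like. Note this is an illustration only: the resource names are my own, and the domain join is shown with the xComputer resource from the xComputerManagement module, which is an assumption on my part – rely on the full script pasted at the end of this section for the exact resources used.

Configuration InstallADFS
{
    param (
        # parameters as declared above
        [String]$MachineName, [String]$DomainName,
        [System.Management.Automation.PSCredential]$AdminCreds,
        [Int]$RetryCount = 20, [Int]$RetryIntervalSec = 30
    )

    Import-DscResource -ModuleName xActiveDirectory, xPendingReboot, xComputerManagement

    [System.Management.Automation.PSCredential]$DomainCreds = New-Object System.Management.Automation.PSCredential("${DomainName}\$($AdminCreds.UserName)", $AdminCreds.Password)

    Node localhost
    {
        LocalConfigurationManager
        {
            RebootNodeIfNeeded = $true                    # reboot if a resource requires it
            ActionAfterReboot  = 'ContinueConfiguration'  # resume the configuration after the reboot
            ConfigurationMode  = 'ApplyOnly'              # apply once, do not keep enforcing
        }

        xWaitForADDomain WaitForDomain
        {
            DomainName           = $DomainName
            DomainUserCredential = $DomainCreds
            RetryCount           = $RetryCount
            RetryIntervalSec     = $RetryIntervalSec
        }

        xComputer JoinDomain
        {
            Name       = $MachineName
            DomainName = $DomainName
            Credential = $DomainCreds
            DependsOn  = '[xWaitForADDomain]WaitForDomain'  # only join once the domain is reachable
        }

        xPendingReboot RebootAfterJoin
        {
            Name      = 'AfterDomainJoin'
            DependsOn = '[xComputer]JoinDomain'             # restart the VM after the join
        }

        WindowsFeature ADFSInstall
        {
            Ensure    = 'Present'
            Name      = 'ADFS-Federation'                   # installs the ADFS role
            DependsOn = '[xPendingReboot]RebootAfterJoin'
        }
    }
}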

Our script has now successfully added the virtual machine to the domain and installed the ADFS role on it. Next, create a zip file with InstallADFS.ps1 and upload it to a location that Azure Resource Manager can access (I would recommend uploading to GitHub). Include the xActiveDirectory and xPendingReboot experimental DSC module directories in the zip file as well. Also add a folder called Certificates inside the zip file and put the ADFS certificate and the encrypted password files (discussed in the next section) inside the folder.

In the next section, we will configure the ADFS Farm.

The full InstallADFS.ps1 DSC script is pasted below

Create ADFS Farm

Once the ADFS role has been installed, we will use Custom Script Extensions (CSE) to create the ADFS farm.

One of the requirements to configure ADFS is a signed certificate. I used a 90 day trial certificate from Comodo.

There is a trick that I am using to make my certificate available on the virtual machine. If you bootstrap a DSC script to your virtual machine in an Azure Resource Manager template, the script along with all the non out-of-box DSC modules has to be packaged into a zip file and uploaded to a location that ARM can access. ARM will then download the zip file, unzip it, and place all directories inside the zip file into $env:ProgramFiles\WindowsPowerShell\Modules (C:\Program Files\WindowsPowerShell\Modules). ARM assumes the directories are PowerShell modules and puts them in the appropriate directory.

I am using this feature to sneak my certificate onto the virtual machine. I create a folder called Certificates inside the zip file containing the DSC script and put the certificate inside it. Also, I am not too fond of passing plain-text passwords from my ARM template to the CSE, so I created two files, one to hold the encrypted password for the domain administrator account and the other to contain the encrypted password of the ADFS service account. These two files are named adminpass.key and adfspass.key and are placed in the same Certificates folder within the zip file.

I used the following to generate the encrypted password files

AdminPlainTextPassword and ADFSPlainTextPassword are the plain text passwords that will be encrypted.

$Key is used to convert the secure string into an encrypted standard string. Valid key lengths are 16, 24, and 32 bytes.

For this blog, we will use

$Key = (3,4,2,3,56,34,254,222,1,1,2,23,42,54,33,233,1,34,2,7,6,5,35,43)
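
As an illustration (the plain-text values are placeholders), the two .key files can be generated along these lines:

# Encrypt the plain-text passwords with the AES key above and save them to .key files
$Key = (3,4,2,3,56,34,254,222,1,1,2,23,42,54,33,233,1,34,2,7,6,5,35,43)

$AdminPlainTextPassword = 'Replace-With-Domain-Admin-Password'   # placeholder
$ADFSPlainTextPassword  = 'Replace-With-ADFS-Service-Password'   # placeholder

ConvertTo-SecureString -String $AdminPlainTextPassword -AsPlainText -Force |
    ConvertFrom-SecureString -Key $Key | Out-File .\adminpass.key

ConvertTo-SecureString -String $ADFSPlainTextPassword -AsPlainText -Force |
    ConvertFrom-SecureString -Key $Key | Out-File .\adfspass.key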

Open Windows PowerShell ISE and paste the following (save the file with a name of your choice. I saved mine as ConfigureADFS.ps1)

param (
 $DomainName,
 $DomainAdminUsername,
 $AdfsSvcUsername
)

These are the parameters that will be passed to the CSE

$DomainName is the name of the Active Directory domain
$DomainAdminUsername is the username of the domain administrator account
$AdfsSvcUsername is the username of the ADFS service account

Next, we will define the value of the Key that was used to encrypt the password and the location where the certificate and the encrypted password files will be placed

$localpath = "C:\Program Files\WindowsPowerShell\Modules\Certificates\"
$Key = (3,4,2,3,56,34,254,222,1,1,2,23,42,54,33,233,1,34,2,7,6,5,35,43)

Now, we have to read the encrypted passwords from the adminpass.key and adfspass.key files and then convert them into a domain\username format
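
A sketch of that step, using the $localpath and $Key values defined above (the credential variable names here are my own):

# Decrypt the stored passwords with the same key and build domain\username credentials
$adminPassword = Get-Content (Join-Path $localpath 'adminpass.key') | ConvertTo-SecureString -Key $Key
$adfsPassword  = Get-Content (Join-Path $localpath 'adfspass.key')  | ConvertTo-SecureString -Key $Key

$DomainAdminCreds = New-Object System.Management.Automation.PSCredential("$DomainName\$DomainAdminUsername", $adminPassword)
$AdfsSvcCreds     = New-Object System.Management.Automation.PSCredential("$DomainName\$AdfsSvcUsername", $adfsPassword)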

Next, we will import the certificate into the local computer certificate store. We will mark the certificate as exportable and set its password to the same value as the domain administrator password.

After the certificate is imported, $cert is used to hold the certificate thumbprint.
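
For reference, a minimal sketch of such an import step (the .pfx file name is an assumption) could be:

# Import the ADFS certificate from the Certificates folder into the local machine store,
# mark it exportable, protect it with the domain administrator password and keep the thumbprint
$pfxFile = Join-Path $localpath 'adfs_certificate.pfx'   # file name is an assumption
$importedCert = Import-PfxCertificate -FilePath $pfxFile -CertStoreLocation 'Cert:\LocalMachine\My' -Exportable -Password $DomainAdminCreds.Password
$cert = $importedCert.Thumbprint                          # thumbprint used when creating the farm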

Next, we will configure the ADFS Farm

The ADFS Federation Service display name is set to “Active Directory Federation Service” and the Federation Service Name is set to fs.adfsfarm.com.
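
A hedged sketch of that farm-creation step, using the standard Install-AdfsFarm cmdlet and the credential variables sketched earlier:

# Create the first node of the ADFS farm using the imported certificate thumbprint
Import-Module ADFS

Install-AdfsFarm -CertificateThumbprint $cert `
    -FederationServiceDisplayName 'Active Directory Federation Service' `
    -FederationServiceName 'fs.adfsfarm.com' `
    -ServiceAccountCredential $AdfsSvcCreds `
    -Credential $DomainAdminCreds `
    -OverwriteConfiguration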

Upload the CSE to a location that Azure Resource Manager can access (I uploaded my script to GitHub)

The full ConfigureADFS.ps1 CSE is shown below

Azure Resource Manager Template Bootstrapping

Now that the DSC and CSE scripts have been created, we need to add them to our ARM template so they run straight after the virtual machine is provisioned.

To add the DSC script, create a DSC extension and link it to the DSC Package that was created to install ADFS. Below is an example of what can be used

The extension will run after the ADFS virtual machine has been successfully created (referred to as ADFS01VMName)

The MachineName, DomainName and domain administrator credentials are passed to the DSC extension.

Below are the variables that have been used in the json file for the DSC extension (I have listed my GitHub repository location)

Next, we have to create a Custom Script Extension to link to the CSE for configuring ADFS. Below is an example that can be used

The CSE depends on the ADFS virtual machine being successfully provisioned and on the DSC extension that installs the ADFS role having completed successfully.

The DomainName, Domain Administrator Username and the ADFS Service Username are passed to the CSE script

The following contains a list of the variables being used by the CSE (the example below shows my GitHub repository location)

"repoLocation": "https://raw.githubusercontent.com/nivleshc/arm/master/",
"ConfigureADFSScriptUrl": "[concat(parameters('repoLocation'),'ConfigureADFS.ps1')]",

That’s it Folks! You now have an ARM Template that can be used to automatically install the ADFS role and then configure a new ADFS Farm.

In my next blog, we will explore how to add another node to the ADFS Farm and we will also look at how we can automatically create a Web Application Proxy server for our ADFS Farm.

Active Directory – What are Linked Attributes?

A customer request to add some additional attributes to their Azure AD tenant via the Directory Extensions feature in the Azure AD Connect tool led me to further investigation. My last blog here set out the customer request, but what I didn't detail in that blog was that one of the attributes they also wanted to extend into Azure AD was directReports, an attribute they had used in the past for their custom-built on-premises applications to display the list of staff the user was a manager for. This led me down a rabbit hole where it took a while to reach the end.

With my past experience in using Microsoft Identity Manager (formerly Forefront Identity Manager), I knew directReports wasn't a real attribute stored in Active Directory, but rather a calculated value shown by the Active Directory Users and Computers console. The directReports value is based on the values of the manager attribute that contain a reference to the user you are querying (phew, that was a mouthful). This is why directReports and other similar types of attributes such as memberOf are not selectable for Directory Extension in the Azure AD Connect tool. I had never bothered to understand it further than that until the customer also asked for a list of these types of attributes so that they could tell their application developers they would need a different technique to determine these values in Azure AD. This is where the investigation started, which I would like to summarise as I found it very difficult to find this information in one place.

In short, these attributes in the Active Directory schema are Linked Attributes as detailed in this Microsoft MSDN article here:

Linked attributes are pairs of attributes in which the system calculates the values of one attribute (the back link) based on the values set on the other attribute (the forward link) throughout the forest. A back-link value on any object instance consists of the DNs of all the objects that have the object’s DN set in the corresponding forward link. For example, “Manager” and “Reports” are a pair of linked attributes, where Manager is the forward link and Reports is the back link. Now suppose Bill is Joe’s manager. If you store the DN of Bill’s user object in the “Manager” attribute of Joe’s user object, then the DN of Joe’s user object will show up in the “Reports” attribute of Bill’s user object.

I then found this article here, which further explained these forward and back links with respect to which are writable and which are read-only, the example below referring to the linked attributes member/memberOf:

Not going too deep into the technical details, there’s another thing we need to know when looking at group membership and forward- and backlinks: forward-links are writable and backlinks are read-only. This means that only forward-links changed and the corresponding backlinks are computed automatically. That also means that only forward-links are replicated between DCs whereas backlinks are maintained by the DCs after that.

The take-away from this is that the value in the forward link can be updated (the member attribute in this case), but you cannot update the back link memberOf. Back links are always calculated automatically by the system whenever an attribute that is a forward link is modified.

My final quest was to find the list of linked attributes without querying the Active Directory schema which then led me to this article here, which listed the common linked attributes:

  • altRecipient/altRecipientBL
  • dLMemRejectPerms/dLMemRejectPermsBL
  • dLMemSubmitPerms/dLMemSubmitPermsBL
  • msExchArchiveDatabaseLink/msExchArchiveDatabaseLinkBL
  • msExchDelegateListLink/msExchDelegateListBL
  • publicDelegates/publicDelegatesBL
  • member/memberOf
  • manager/directReports
  • owner/ownerBL

There is further, deeper technical information about linked attributes, such as “distinguished name tags” (DNT) and what is replicated between DCs versus what is calculated locally on a DC, which you can read at your own leisure in the articles listed throughout this blog. But I hope the summary is enough information on how they work.

Using Microsoft Azure Table Service REST API to collect data samples

Sometimes we need a simple solution for collecting data from multiple sources. The sources of data can be IoT devices or systems working on different platforms and in different places. Traditionally, integrators start thinking about implementing a custom centralised REST API with some database repository. Such a solution can take days to implement and test, it is expensive, and it requires hosting, maintenance, and support. However, in many cases it is not needed at all. This post introduces the idea that the out-of-the-box Azure Tables REST API is good enough to start your data collection, research, and analysis in no time. Moreover, the suggested solution offers a very convenient REST API that supports JSON objects and a very flexible NoSQL format. Furthermore, you do not need to write lots of code or hire programmers. Anybody who understands how to work with a REST API, create headers and put JSON in the web request body can immediately start working on a project and sending data to very cheap Azure Tables storage. Additional benefits of using Azure Tables include native support in Microsoft Azure Machine Learning, and other statistical packages also allow you to download data from Azure Tables.

Microsoft provides Azure Tables SDKs for various languages and platforms. By all means use these SDKs; your life will be much easier. However, some systems don't have this luxury and require developers to work with the Azure Tables REST API directly. The only requirement for your system is that it should be able to execute web requests, and you should be able to work with the headers of these requests. Most systems satisfy this requirement. In this post, I explain how to form web requests and work with the latest Azure Table REST API from your code. I've also created reference code to support my findings. It is written in C#, but the technique can be replicated in other languages and platforms.

The full source code is hosted on GitHub here:

https://github.com/dimkdimk/AzureTablesRestDemo

Prerequisites

You will need an Azure subscription. Create a new storage account there and create a test table named “mytable” in it. Below is my table, which I created in Visual Studio 2015.

Picture 1

I’ve created a helper class that has two main methods: RequestResource and InsertEntity.

The full source code of this class is here:

Testing the class is easy. I've created a console application, prepared a few data samples and called our Azure Tables helper methods. The source code of this program is below.

The hardest part of calling the Azure Tables web API is creating the encrypted signature string to form the Authorization header. It can be a bit tricky and not very clear for beginners. Please take a look at the detailed documentation that describes how to sign a string for the various Azure Storage services: https://msdn.microsoft.com/en-au/library/dd179428.aspx

To help you with the authorisation code, I've written an example of how it can be done in the class AzuretableHelper. Please take a look at the code that creates the strAuthorization variable. First, you will need to form a string that contains your canonical resource name and the current time in a specific format, with newline characters in between. Then, this string has to be signed with the HMAC-SHA256 algorithm. The key for this signature is the shared key that you can obtain from your Azure Storage account as shown here:

Picture 2
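
To make the mechanics concrete, here is a rough PowerShell equivalent of that signing logic. The account name, table name and key are placeholders, and this sketch uses the SharedKeyLite scheme for the Table service – the C# helper in the sample code may build its header slightly differently.

# Build a SharedKeyLite Authorization header and insert one entity via the Table service REST API
$storageAccount = 'mystorageaccount'                  # placeholder - your storage account name
$tableName      = 'mytable'                           # the test table created earlier
$accountKey     = '<base64 storage account key>'      # placeholder - from the portal

$date = [DateTime]::UtcNow.ToString('R')              # RFC 1123 date, also sent as x-ms-date
$stringToSign = "$date`n/$storageAccount/$tableName"  # date + canonicalised resource

$hmac = New-Object System.Security.Cryptography.HMACSHA256
$hmac.Key = [Convert]::FromBase64String($accountKey)
$signature = [Convert]::ToBase64String($hmac.ComputeHash([Text.Encoding]::UTF8.GetBytes($stringToSign)))

$headers = @{
    'x-ms-date'     = $date
    'x-ms-version'  = '2015-04-05'
    'Accept'        = 'application/json;odata=nometadata'
    'Authorization' = "SharedKeyLite ${storageAccount}:${signature}"
}

# A sample entity - PartitionKey and RowKey are mandatory for every Azure Tables entity
$entity = @{ PartitionKey = 'sensor01'; RowKey = [Guid]::NewGuid().ToString(); Temperature = 23.4 } | ConvertTo-Json

Invoke-RestMethod -Method Post -Uri "https://$storageAccount.table.core.windows.net/$tableName" `
    -Headers $headers -Body $entity -ContentType 'application/json'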

The Authorization string has to be re-created for each request you execute. Otherwise, the Azure API will reject your requests with an authorisation error in the response.

The meaning of the other request headers is straightforward. You specify the version, format, and other attributes. To see the full list of operations you can perform on Azure Tables, and to read about all attributes and headers, refer to the MSDN documentation:

https://msdn.microsoft.com/library/dd179423.aspx

Once you've mastered this “geeky” method of using the Azure Tables web API, you can send your data straight to your Azure Tables without any intermediate Web API facilities. You can also read Azure Tables data to receive configuration parameters or input commands from other applications. A similar approach can be applied to the other Azure Storage APIs, for example Azure Blob storage or Azure Queue storage.

Moving SharePoint Online workflow task metadata into the data warehouse using Nintex Flows and custom Web API

This post suggests the idea of automatically copying SharePoint Online (SPO) workflow task metadata into an external data warehouse. In this scenario, workflow tasks become the subject of another workflow that automatically copies the task's data into the external database, using a custom Web API endpoint as the interface to that database. Commonly, the requirement to move workflow task data elsewhere arises from limitations of SPO. In particular, SPO throttles requests for access to workflow data, making it virtually impossible to create a meaningful workflow reporting system with large numbers of workflow tasks. The easiest approach to solve the problem is to use a Nintex workflow to “listen” for changes in the workflow tasks, then request the task data via the SPO REST API and, finally, send the data to the external data warehouse Web API endpoint.

Some SPO solutions require the creation of a reporting system that includes workflow task metadata. For example, it could be a report about documents together with the statuses of the workflows linked to these documents. Using a conventional approach (e.g. the SPO REST API) to obtain the data seems unfeasible as SPO throttles requests for workflow data. In fact, the throttling is so tight that generating reports with more than a hundred records is unrealistic. In addition to that, many companies would like to create Business Intelligence (BI) systems analysing workflow task data. Having a data warehouse with all the workflow task metadata can assist in this job very well.

To be able to implement the solution a few prerequisites must be met. You must know the basics of Nintex workflow creation and be able to create a backend solution with the database of your choice and a custom Web API endpoint that allows you to write the data model to that database. In this post we have used Visual Studio 2015 and created an ordinary REST Web API 2.0 project with an Azure SQL Database.

The solution will involve following steps:

  1. Get sample of your workflow task metadata and create your data model.
  2. Create a Web API capable of writing data model to the database.
  3. Expose one POST endpoint method of the Web REST API that accepts JSON model of the workflow task metadata.
  4. Create Nintex workflow in the SPO list storing your workflow tasks.
  5. Design Nintex workflow: call SPO REST API to get JSON metadata and pass this JSON object to your Web API hook.

Below is detailed description of each step.

We are looking to export the metadata of a workflow task. We need to find the SPO list that holds all your workflow tasks and navigate there. You will need the name of the list to be able to start calling the SPO REST API. It is better to use a REST tool to perform Web API requests. Many people use Fiddler or Postman (a Chrome extension) for this job. Request the SPO REST API to get a sample of the JSON data that you want to put into your database. The request will look similar to this example:

Picture 1
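
If you prefer to script this check rather than use Fiddler or Postman, a rough PowerShell equivalent of the same GET request looks like the following. The site URL, list title and authorisation header are placeholders – supply whatever authentication your tenant requires, exactly as you would in a REST tool.

# Request the JSON metadata of a single workflow task (item 1) from the tasks list
$siteUrl  = 'https://yourtenant.sharepoint.com/sites/yoursite'   # placeholder
$listName = 'Workflow Tasks'                                     # placeholder - your tasks list title

$headers = @{
    'Accept'        = 'application/json;odata=verbose'   # tell SPO to return JSON rather than XML/HTML
    'Authorization' = 'Bearer <access token>'             # placeholder - your authentication header
}

Invoke-RestMethod -Method Get -Headers $headers `
    -Uri "$siteUrl/_api/web/lists/getbytitle('$listName')/items(1)"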

The key element in this request is getbytitle(“list name”), where “list name” is the SPO list name of your workflow tasks. Please remember to add the header “Accept” with the value “application/json”. It tells SPO to return JSON instead of HTML. As a result, you will get one JSON object that contains the JSON metadata of Task 1. This JSON object is an example of the data that you will need to put into your database. Not all fields are required in the data warehouse. We need to create a data model containing only the fields of our choice. For example, it can look like this one in C#, where all properties are based on the model returned earlier:

The next step is to create a Web API that exposes a single method accepting our model as a parameter from the body of the request. You can choose any REST Web API design. We have created a simple Web API 2.0 in Visual Studio 2015 using the general wizard for an MVC, Web API 2.0 project. Then, we added an empty controller and filled it with code that uses Entity Framework to write the data model to the database. We also created a code-first EF database context that works with just the one entity described above.

The code of the controller:

The code of the database context for Entity Framework

Once you have created the Web API, you should be able to call the Web API method like this:
https://yoursite.azurewebsites.net/api/EmployeeFileReviewTasksWebHook

You will need to put your model data in the request body as a JSON object. Also, don't forget to include the proper headers for your authentication, add the header “Accept” with “application/json”, and set the request type to POST. Once you've tested the method, you can move on to the next steps. For example, below is how we tested it in our project.

Picture 4
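
The same test can also be scripted. A rough sketch of the POST we performed (the property names are illustrative only and must match your own data model):

# Send a sample task model to the custom Web API endpoint as JSON
$task = @{
    TaskId     = 1
    Title      = 'Employee file review'
    Status     = 'Completed'
    AssignedTo = 'jane.doe@yourtenant.onmicrosoft.com'
} | ConvertTo-Json

Invoke-RestMethod -Method Post `
    -Uri 'https://yoursite.azurewebsites.net/api/EmployeeFileReviewTasksWebHook' `
    -Headers @{ 'Accept' = 'application/json' } `
    -Body $task -ContentType 'application/json'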

Next, we will create a new Nintex Workflow in the SPO list with our workflow tasks. It is all straightforward. Click Nintex Workflows, then create a new workflow and start designing it.

Picture 5

Picture 6

Once you've created a new workflow, click on the Workflow Settings button. In the displayed form, set the parameters as shown in the screenshot below. We set “Start when items are created” and “Start when items are modified”. In this scenario, any modification of our workflow task will start this workflow automatically. It also covers cases where the workflow task has been modified by other workflows.

Picture 7.1

Create 5 steps in this workflow as shown in the following screenshots, labelled 1 to 5. Please keep in mind that blocks 3 and 5 are there to assist in debugging only and are not required in production use.

Picture 7

Step 1. Create a Dictionary variable that contains the SPO REST API request headers. You can add any required headers, including authentication headers. It is essential to include the Accept header with “application/json” in it to tell SPO that we want JSON in the responses. We set the Output variable to SPListRequestHeaders so we can use it later.

Picture 8

Step 2. Call HTTP Web Service. We call the SPO REST API here. It is important to make sure that the getbytitle parameter is correctly set to your Workflow Tasks list, as we discussed before. The list of fields that we want returned is defined in the “$select=…” parameter of the OData request. We need only the fields that are included in our data model. The other settings are straightforward: we supply the Request Headers created in Step 1 and create two more variables for the response. SPListResponseContent will hold the resulting JSON object that we are going to need in Step 4.

Picture 9

Step 3 is optional. We’ve added it to debug our workflow. It will send an email with the contents of our JSON response from the previous step. It will show us what was returned by SPO REST API.

Picture 10

Step 4. Here we call our custom API endpoint, passing the JSON object model that we got from the SPO REST API. We supply the full URL of our web hook, set the method to POST, and inject SPListResponseContent from Step 2 into the request body. We also capture the response code to display later in the workflow history.

Picture 11

Step 5 is also optional. It writes a log message with the response code that we have received from our API Endpoint.

Picture 12

Once all five steps are completed, we publish this Nintex workflow. Now we are ready for testing.

To test the system, open the list of our workflow tasks. Click on any task, modify any of the task's properties and save the task. This will initiate our workflow automatically. You can monitor workflow execution in the workflow history. Once the workflow is completed, you should be able to see messages as displayed below. Notice that our workflow has also written the Web API response code at the end.

Picture 13

To make sure that everything went well, open your database and check the records updated by your Web API. After every Workflow Task modification you will see corresponding changes in the database. For example:

Picture 14

In this post we have shown that automatic copying of workflow task metadata into your data warehouse can be done with a simple Nintex workflow setup, performing only two REST Web API requests. The solution is quite flexible as you can select the required properties from the SPO list and export them into the data warehouse. We can easily add more tables in case there is more than one workflow tasks list. This solution enables the creation of a powerful reporting system using the data warehouse and also allows you to employ the BI data analytics tool of your choice.

Break down your templates with Linked Templates (Part 1)

Templated deployment is one of the key value propositions of moving from the Azure classic to the Resource Manager (ARM) deployment model.  This is probably the one key feature that made a big stride towards Infrastructure as Code (IaC).  Personally, I have been looking forward to this feature since it's a prominent feature on the other competing platform.

Now that this feature has been live for a while, one aspect I find interesting is the ability to link templates in Azure Resource Manager.  This post is part of a three-part series highlighting ways you could deploy linked templates.  The first part of the series describes the basics – building a template that creates the base environment.  The second part will demonstrate the linked template.  The third part will delve into a more advanced scenario with Key Vault and how we can secure our linked templates.

Why linked templates?  We could get away with one template to build our environment, but is it a good idea?  If your environment is pretty small, sure, go ahead.  If the template becomes unmanageable for you – which is mostly the case if you are serious about ‘templating’ – then you have come to the right place: linking or nesting templates is a way for you to decouple your templates into manageable chunks.  This allows you to ‘branch’ out template development, especially when you have more than one person, resulting in smaller, more manageable templates than one big template.  Plus, this makes testing and debugging a little easier as you have a smaller scope to work with.

OK, so let's start with the scenario that demonstrates linked templates.  We will build a two-tier app consisting of two web servers and a database server.  This includes placing the two web servers in an availability set, with a virtual network (VNET) and storage account to host the VMs.  To decouple the components, we will have three templates: one template that defines the base environment (VNET, subnets, and storage account), a second template that builds the virtual machines (web servers) in the front-end subnet, and a third template that builds the database server.

AzTemplate-1

First template (this blog post)

AzTemplate-2

Second template

AzTemplate-3

Third template

AzTemplate-4

The building process

First, create a resource group, this example relies on a resource group for deployment.  You will need to create the resource group now or later during deployment.

Once you have a resource group created, set up your environment – SDK, Visual Studio, etc. You can find more info here.  Visual Studio is optional but this demo will use this to build & deploy the templates.

Create a new project and select Azure Resource Group.  Select Blank template.

AzTemplate-5

This will create a project with PowerShell deployment script and two JSON files: azuredeploy and azuredeploy.parameters.  The PowerShell script is particularly interesting as it has everything you need to deploy the Azure templates.

AzTemplate-6

We will start by defining the variables in the azuredeploy template.   Note this can be parameterised for better portability.

Then we define the resources – storage account and virtual network (including subnets).

We then deploy it – in Visual Studio (VS) the process is simply right-clicking the project and selecting New Deployment…, then Deploy.  If you didn't create a resource group earlier, you can also create it here.

AzTemplate-7

The template will deploy and a similar message should be displayed once it’s finished.

Successfully deployed template ‘w:\github-repo\linkedtemplatesdemo\linkedtemplatesdemo\templates\azuredeploy-base.json‘ to resource group ‘LinkedTemplatesDemo‘.
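
If you would rather deploy from PowerShell than Visual Studio (the deployment script included in the project essentially wraps the same cmdlets), a minimal sketch looks like this – the resource group name and template file names match the ones used in this demo, while the location is an assumption:

# Create the resource group (if it does not exist yet) and deploy the base template to it
New-AzureRmResourceGroup -Name 'LinkedTemplatesDemo' -Location 'Australia East' -Force

New-AzureRmResourceGroupDeployment -ResourceGroupName 'LinkedTemplatesDemo' `
    -TemplateFile '.\templates\azuredeploy-base.json' `
    -TemplateParameterFile '.\templates\azuredeploy.parameters.json' `
    -Verbose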

On the Azure portal you should see the storage account and virtual network created.

AzTemplate-8

AzTemplate-9

The next part in the series describes how we modify this template to call the web and db server templates (linked templates).  Stay tuned :).

Simultaneously Start|Stop all Azure Resource Manager Virtual Machines in a Resource Group

Problem

How many times have you wanted to start or stop all Virtual Machines in an Azure Resource Group? For me it seems to be quite often, especially for development environment resource groups. It's not that difficult though. You can just enumerate the VMs, then cycle through them and call ‘Start-AzureRmVM’ or ‘Stop-AzureRmVM’. However, the more VMs you have, the longer that approach takes, as PowerShell runs it serially. Go to the Portal and right-click on each VM and start|stop?

There has to be a way of starting/shutting down all VMs in a Resource Group in parallel via PowerShell, right?

Some searching suggests it is common to use Azure Automation and workflows to accomplish it. But I don't want to run this on a schedule or necessarily mess around with Azure Automation for development environments, or have to connect to the portal and kick off the workflow.

What I wanted was a script that was portable. That led me to messing around with ‘ScriptBlocks’ and ‘Start-Job’ functions in PowerShell. Passing variables in for locally hosted jobs running against Azure, though, was painful. So I found a quick, clean way of doing it, which I detail in this post.

Solution

I'm using the brilliant Invoke-Parallel PowerShell script from Cookie.Monster to, in essence, multi-thread and run the Virtual Machine ‘start’ and ‘stop’ requests in parallel.

In my script at the bottom of this post I haven’t included the ‘invoke-parallel.ps1’. The link for it is in the paragraph above. You’ll need to either reference it at the start of your script, or include it in your script. If you want to keep it all together in a single script include it like I have in the screenshot below.

My rudimentary PowerShell script takes two parameters:

  1. Power state. Either ‘Start’ or ‘Stop’
  2. Resource Group. The name of the Azure Resource Group containing the Virtual Machines you want to start/stop, e.g. ‘RG01’

Example: .\AzureRGVMPowerGo.ps1 -power ‘Start’ -azureResourceGroup ‘RG01’ or PowerShell .\AzureRGVMPowerGo.ps1 -power ‘Start’ -azureResourceGroup ‘RG01’

Note: If you don’t have a session to Azure in your current environment, you’ll be prompted to authenticate.

Your VMs will simultaneously start/stop.

What’s it actually doing ?

It's pretty simple. The script enumerates the VMs in the Resource Group you've specified and checks each VM's status (Running or Deallocated), looking for the state that is the inverse of the ‘Power’ state you've specified when running the script. It will start the stopped VMs in the Resource Group when you run it with ‘Start’, or stop all started VMs in the Resource Group when you run it with ‘Stop’. Simples.
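
Not the full script (that's included below), but a rough sketch of that core logic, assuming invoke-parallel.ps1 has been dot-sourced or pasted in and you are already authenticated to Azure:

# AzureRGVMPowerGo.ps1 (sketch) - start or stop all VMs in a Resource Group in parallel
param (
    [ValidateSet('Start','Stop')][String]$power,
    [String]$azureResourceGroup
)

# Work out which VMs are in the opposite state to the one requested
$vms = Get-AzureRmVM -ResourceGroupName $azureResourceGroup -Status
if ($power -eq 'Start') {
    $targets = $vms | Where-Object { $_.PowerState -like '*deallocated*' }
}
else {
    $targets = $vms | Where-Object { $_.PowerState -like '*running*' }
}

# Invoke-Parallel runs the scriptblock for each VM in its own runspace.
# Depending on your AzureRM version you may also need to pass your Azure context into the runspaces.
$targets | Invoke-Parallel -ImportVariables -ImportModules -ScriptBlock {
    if ($power -eq 'Start') {
        Start-AzureRmVM -ResourceGroupName $_.ResourceGroupName -Name $_.Name
    }
    else {
        Stop-AzureRmVM -ResourceGroupName $_.ResourceGroupName -Name $_.Name -Force
    }
}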

This script could also easily be updated to do other similar tasks, such as deleting all VMs in a Resource Group.

Here it is

Enjoy.

Follow Darren Robinson on Twitter

Azure ExpressRoute in Australia via Equinix Cloud Exchange

Microsoft Azure ExpressRoute provides dedicated, private circuits between your WAN or datacentre and private networks you build in the Microsoft Azure public cloud. There are two types of ExpressRoute connections – Network (NSP) based and Exchange (IXP) based with each allowing us to extend our infrastructure by providing connectivity that is:

  • Private: the circuit is isolated using industry-standard VLANs – the traffic never traverses the public Internet when connecting to Azure VNETs and, when using the public peer, even Azure services with public endpoints such as Storage and Azure SQL Database.
  • Reliable: Microsoft’s portion of ExpressRoute is covered by an SLA of 99.9%. Equinix Cloud Exchange (ECX) provides an SLA of 99.999% when redundancy is configured using an active – active router configuration.
  • High speed: speeds differ between NSP and IXP connections, ranging from 10Mbps up to 10Gbps. ECX provides three choices of virtual circuit speeds in Australia: 200Mbps, 500Mbps and 1Gbps.

Microsoft provides a handy comparison table of all the different types of Azure connectivity in this blog post.

ExpressRoute with Equinix Cloud Exchange

Equinix Cloud Exchange is a Layer 2 networking service providing connectivity to multiple Cloud Service Providers which includes Microsoft Azure. ECX’s main features are:

  • On Demand (once you’re signed up)
  • One physical port supports many Virtual Circuits (VCs)
  • Available Globally
  • Supports 1Gbps and 10Gbps fibre-based Ethernet ports. Azure supports virtual circuits of 200Mbps, 500Mbps and 1Gbps
  • Orchestration using API for automation of provisioning which provides almost instant provisioning of a virtual circuit.

We can share an ECX physical port so that we can connect to both Azure ExpressRoute and AWS DirectConnect. This is supported as long as we use the same tagging mechanism based on either 802.1Q (Dot1Q) or 802.1ad (QinQ). Microsoft Azure uses 802.1ad on the Sell side (Z-side) to connect to ECX.

ECX pre-requisites for Azure ExpressRoute

The pre-requisites for connecting to Azure, regardless of the tagging mechanism, are:

  • Two Physical ports on two separate ECX chassis for redundancy.
  • A primary and secondary virtual circuit per Azure peer (public or private).

Buy-side (A-side) Dot1Q and Azure ExpressRoute

The following diagram illustrates the network setup required for ExpressRoute using Dot1Q ports on ECX:

Dot1Q setup

Tags on the Primary and Secondary virtual circuits are the same when the A-side is Dot1Q. When provisioning virtual circuits using Dot1Q on the A-Side use one VLAN tag per circuit request. This VLAN tag should be the same VLAN tag used when setting up the Private or Public BGP sessions on Azure using Azure PowerShell.

There are a few things that need to be noted when using Dot1Q in this context:

  1. The same Service Key can be used to order separate VCs for private or public peerings on ECX.
  2. Order a dedicated Azure circuit using the Azure PowerShell cmdlet (shown below) and obtain the Service Key, then use this key to raise virtual circuit requests with Equinix (a sketch of these cmdlets follows this list). https://gist.github.com/andreaswasita/77329a14e403d106c8a6

    Get-AzureDedicatedCircuit returns the following output.

    Get-AzureDedicatedCircuit Output

    As we can see the status of ServiceProviderProvisioningState is NotProvisioned.

    Note: ensure the physical ports have been provisioned at Equinix before we use this Cmdlet. Microsoft will start charging as soon as we create the ExpressRoute circuit even if we don’t connect it to the service provider.

  3. Two physical ports need to be provisioned for redundancy on ECX – you will get the notification from Equinix NOC engineers once the physical ports have been provisioned.
  4. Submit one virtual circuit request for each of the private and public peers on the ECX Portal. Each request needs a separate VLAN ID along with the Service Key. Go to the ECX Portal and submit one request for private peering (2 VCs – Primary and Secondary) and one request for public peering (2 VCs – Primary and Secondary). Once the ECX VCs have been provisioned, check the Azure circuit status, which will now show Provisioned.

    expressroute03
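
For illustration, ordering the circuit and checking its state with the classic ExpressRoute cmdlets might look like this (the circuit name, bandwidth and location are assumptions; import the ExpressRoute module that ships with the classic Azure PowerShell tools first):

# Order a dedicated circuit through Equinix and note the ServiceKey it returns
New-AzureDedicatedCircuit -CircuitName 'MyAzureCircuit' -ServiceProviderName 'Equinix' `
    -Bandwidth 1000 -Location 'Sydney'

# Check the circuit - ServiceProviderProvisioningState stays NotProvisioned until the ECX VCs exist
Get-AzureDedicatedCircuit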

Next we need to configure BGP for exchanging routes between our on-premises network and Azure, but we will come back to this after we have a quick look at using QinQ with Azure ExpressRoute.

Buy-side (A-side) QinQ Azure ExpressRoute

The following diagram illustrates the network setup required for ExpressRoute using QinQ ports on ECX:

QinQ setup

C-TAGs identify private or public peering traffic on Azure and the primary and secondary virtual circuits are setup across separate ECX chassis identified by unique S-TAGs. The A-Side buyer (us) can choose to either use the same or different VLAN IDs to identify the primary and secondary VCs.  The same pair of primary and secondary VCs can be used for both private and public peering towards Azure. The inner tags identify if the session is Private or Public.

The process for provisioning a QinQ connection is the same as Dot1Q apart from the following change:

  1. Submit only one request on the ECX Portal for both private and public peers. The same pair of primary and secondary virtual circuits can be used for both private and public peering in this setup.

Configuring BGP

ExpressRoute uses BGP for routing and you require four /30 subnets for both the primary and secondary routes for both private and public peering. The IP prefixes for BGP cannot overlap with IP prefixes in either your on-prem or cloud environments. Example Routing subnets and VLAN IDs:

  • Primary Private: 192.168.1.0/30 (VLAN 100)
  • Secondary Private: 192.168.2.0/30 (VLAN 100)
  • Primary Public: 192.168.1.4/30 (VLAN 101)
  • Secondary Public: 192.168.2.4/30 (VLAN 101)

The first available IP address of each subnet will be assigned to the local router and the second will be automatically assigned to the router on the Azure side.

To configure BGP sessions for both private and public peering on Azure use the Azure PowerShell Cmdlets as shown below.

Private peer:

Public peer:
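
The original post embeds the exact commands; as a sketch, both peerings can be created with New-AzureBGPPeering using the example subnets and VLAN IDs above (the ASN is an assumption and $serviceKey is the key returned by Get-AzureDedicatedCircuit):

$serviceKey = '<service key from Get-AzureDedicatedCircuit>'   # placeholder

# Private peering - VLAN 100
New-AzureBGPPeering -ServiceKey $serviceKey -AccessType Private `
    -PrimaryPeerSubnet '192.168.1.0/30' -SecondaryPeerSubnet '192.168.2.0/30' `
    -VlanId 100 -PeerAsn 65001

# Public peering - VLAN 101
New-AzureBGPPeering -ServiceKey $serviceKey -AccessType Public `
    -PrimaryPeerSubnet '192.168.1.4/30' -SecondaryPeerSubnet '192.168.2.4/30' `
    -VlanId 101 -PeerAsn 65001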

Once we have configured the above we will need to configure the BGP sessions on our on-premises routers and ensure any firewall rules are modified so that traffic can be routed correctly.

I hope you’ve found this post useful – please leave any comments or questions below!

Read more from me on the Kloud Blog or on my own blog at www.wasita.net.

Amazon Web Services vs Microsoft Azure service comparison cheat sheet

Originally posted on Lucian’s blog at clouduccino.com.

I'm a big fan of both Microsoft Azure and Amazon Web Services. The two clouds are redefining the way the web, apps and everything on the internet are made accessible, from the enterprise to the average user. Both for my own benefit and for yours, here's a detailed side-by-side comparison of the services and features available in each cloud:

Cloud Service | Microsoft Azure | Amazon Web Services
Locations | Azure Regions | Global Infrastructure
 | NA | Availability Zones
Management | Azure Portal | Management Console
 | Azure Preview Portal | NA
 | PowerShell + Desired State Configuration | Command Line Interface
Compute Services | Cloud Services | Elastic Beanstalk
 | Virtual Machines | Elastic Compute Cloud (EC2)
 | Batch | Auto Scaling
 | RemoteApp | WorkSpaces
Web and Mobile | Web Apps | NA
 | Mobile Services | Mobile SDK
 | API Management | CloudTrail
 | NA | Cognito
 | NA | Mobile Analytics
Storage | SQL Databases | Relational Database Service (RDS)
 | DocumentDB | DynamoDB
 | Redis Cache | Redshift
 | Blob Storage | Simple Storage Service (S3)
 | Table Storage | Elastic Block Store (EBS)
 | Queues | Simple Queue Service (SQS)
 | File Storage | Elastic File System (EFS)
 | StorSimple | Storage Gateway
Analytics + Big Data | HDInsight (Hadoop) | Elastic MapReduce (EMR)
 | Stream Analytics | Kinesis
 | Machine Learning | Machine Learning
Data Orchestration | Data Factory | Data Pipeline
Media Services | Media Services | Elastic Transcoder
 | Visual Studio Online | NA
 | BizTalk Services | Simple Email Service (SES)
 | Backup (Recovery Services) | Glacier
 | CDN | CloudFront
Automation | Automation | OpsWorks
 | Scheduler | CodeDeploy + CodePipeline
 | Service Bus | Simple Workflow (SWF)
 | Search | CloudSearch
Networking | Virtual Network | Virtual Private Cloud (VPC)
 | ExpressRoute | DirectConnect
 | Traffic Manager | Elastic Load Balancing
 | NA | Route 53 (DNS)
Management Services | Resource Manager | CloudFormation
 | NA | Trusted Advisor
Identity and Access Management | Active Directory | Directory Service
 | NA | Identity and Access Management (IAM)
Marketplace | Marketplace | Marketplace
Container Support | Docker VM Extensions | EC2 Container Service
Compliance | Trust Centre | CloudHSM
Multi-factor Authentication | Multi-Factor Authentication | Multi-Factor Authentication
Monitoring Services | Operational Insights | Config
 | Application Insights | CloudWatch
 | Event Hubs | NA
 | Notification Hubs | Simple Notification Service (SNS)
 | Key Vault | Key Management Store
Government | Government | GovCloud
Other services | Web Jobs | Lambda
 | NA | Service Catalog
Office 365 | Exchange Online | WorkMail
Office 365 | SharePoint Online | WorkDocs

For me this comparison is an exercise to allow me to quickly reference the major services and features on each cloud platform. I hope you can use it for reference if you need to quickly find the equivalent service on one platform or the other.

Originally posted on Lucian’s blog at clouduccino.com.

Thank you,

by-lucian-handwritten-v1

Mule ESB DEV/TEST environments in Microsoft Azure

Agility in delivery of IT services is what cloud computing is all about. Week in, week out, projects on-board and wind-up, developers come and go. This places enormous stress on IT teams with limited resourcing and infrastructure capacity to provision developer and test environments. Leveraging public cloud for integration DEV/TEST environments is not without its challenges though. How do we develop our interfaces in the cloud yet retain connectivity to our on-premises line-of-business systems?

In this post I will demonstrate how we can use Microsoft Azure to run Mule ESB DEV/TEST environments using point-to-site VPNs for connectivity between on-premises DEV resources and our servers in the cloud.

MuleSoft P2S

Connectivity

A point-to-site VPN allows you to securely connect an on-premises server to your Azure Virtual Network (VNET). Point-to-site connections don't require a VPN device. They use the Windows VPN client and must be started manually whenever the on-premises server (point) wishes to connect to the Azure VNET (site). Point-to-site connections use Secure Socket Tunnelling Protocol (SSTP) with certificate authentication. They provide a simple, secure connectivity solution without having to involve the networking boffins to stand up expensive hardware devices.

I will not cover the setup of the Azure point-to-site VPN in this post; there are a number of good articles already covering the process in detail, including this great MSDN article.

A summary of steps to create the Point-to-site VPN are as follows:

  1. Create an Azure Virtual Network (I named mine AUEastVNet and used address range 10.0.0.0/8)
  2. Configure the Point-to-site VPN client address range  (I used 172.16.0.0/24)
  3. Create a dynamic routing gateway
  4. Configure certificates (upload root cert to portal, install private key cert on on-premise servers)
  5. Download and install client package from the portal on on-premise servers

Once we have established the point-to-site VPN, we can verify the connectivity by running ipconfig /all and checking that we have been assigned an IP address from the range we configured on our VNET.

IP address assigned from P2S client address range

Testing our Mule ESB Flow using On-premises Resources

In our demo, we want to test the interface we developed in the cloud with on-premises systems, just as we would if our DEV environment was located within our own organisation.

Mule ESB Flow

The flow above listens for HL7 messages using the TCP based MLLP transport and processes using two async pipelines. The first pipeline maps the HL7 message into an XML message for a LOB system to consume. The second writes a copy of the received message for auditing purposes.

MLLP connector showing host running in the cloud

The HL7 MLLP connector is configured to listen on port 50609 of the network interface used by the Azure VNET (10.0.1.4).

FILE connector showing on-premise network share location

The first FILE connector is configured to write the output of the XML transformation to a network share on our on-premises server (across the point-to-site VPN). Note the IP address used is the one assigned by the point-to-site VPN connection (from the client IP address range configured on our Azure VNET).

P2S client IP address range

To test our flow we launch a MLLP client application on our on-premises server and establish a connection across the point-to-site VPN to our Mule ESB flow running in the cloud. We then send a HL7 message for processing and verify we receive a HL7 ACK and that the transformed xml output message has also been written to the configured on-premises network share location.

Establishing the connection across the point-to-site VPN…

On-premises MLLP client showing connection to host running in the cloud

Sending the HL7 request and receiving an HL7 ACK response…

MLLP client showing successful response from Mule flow

Verifying the transformed xml message is written to the on-premises network share…

On-premises network share showing successful output of transformed message

Considerations

  • Connectivity – Point-to-site VPNs provide a relatively simple connectivity option that allows traffic between your Azure VNET (site) and your nominated on-premise servers (the point inside your private network). You may already be running workloads in Azure and have a site-to-site VPN or MPLS connection between the Azure VNET and your network, and as such do not require establishing the point-to-site VPN connection. You can connect up to 128 on-premise servers to your Azure VNET using point-to-site VPNs.
  • DNS – To provide name resolution of servers in Azure to on-premise servers, or name resolution of on-premise servers to servers in Azure, you will need to configure your own DNS servers within the Azure VNET. The IP address of on-premise servers will likely change every time you establish the point-to-site VPN as the IP address is assigned from the range of IP addresses configured on the Azure VNET.
  • Web Proxies – SSTP does not support the use of authenticated web proxies. If your organisation uses a web proxy that requires HTTP authentication then the VPN client will have issues establishing the connection. You may need the network boffins after all to bypass the web proxy for outbound connections to your Azure gateway IP address range.
  • Operating System Support – Point-to-site VPNs only support the use of the Windows VPN client on Windows 7/Windows 2008 R2 64 bit versions and above.

Conclusion

In this post I have demonstrated how we can use Microsoft Azure to run a Mule ESB DEV/TEST environment using point-to-site VPNs for simple connectivity between on-premises resources and servers in the cloud. Provisioning integration DEV/TEST environments on demand increases infrastructure agility, removes those long lead times whenever projects kick-off or resources change and enforces a greater level of standardisation across the team which all improve the development lifecycle, even for integration projects!

Deploy an Ultra High Availability MVC Web App on Microsoft Azure – Part 2

In the first post in this series we setup our scenario and looked at how we can build out an ultra highly available Azure SQL Database layer for our applications. In this second post we’ll go through setting up the MVC Web Application we want to deploy so that it can leverage the capabilities of the Azure platform.

MVC project changes

This is actually pretty straightforward – you can take the sample MVC project from Codeplex and apply these changes easily. The sample GitHub repository has the resulting project (you'll need Visual Studio 2013 Update 3 and the Azure SDK v2.4).

The changes are summarised as:

  • Open the MvcMusicStore Solution (it will be upgraded to the newer Visual Studio project / solution format).
  • Right-click on the MvcMusicStore project and select
    Convert – Convert to Microsoft Azure Cloud Project.
    Converting a Project
  • Add new Solution Configurations – one for each Azure Region to deploy to. This allows us to define Entity Framework and ASP.Net Membership / Role Provider database connection strings for each Region. I copied mine from the Release configuration.
    Project Configurations
  • Add web.config transformations for the two new configurations added in the previous step. Right click on the web.config and select “Add Config Transform”. Two new items will be added (as shown below in addition to Debug and Release).
    Configuration Transforms
  • Switch the Role deployment to use a minimum of two Instances by double-clicking on the MvcMusicStore.Azure – Roles – MvcMusicStore node in Solution Explorer to open up the properties page for the Role. Change the setting as shown below.
    Set Azure Role Instance Count

At this stage the basics are in place. A few additional items are required and they will be very familiar to anyone who has had to run ASP.Net web applications in a load balanced farm. These changes all relate to application configuration and are achieved through edits to the web.config.

Configure Membership and Role Provider to use SQL Providers.


    <membership defaultProvider="SqlMembershipProvider" userIsOnlineTimeWindow="15">
      <providers>
        <clear/>
        <add name="SqlMembershipProvider" type="System.Web.Security.SqlMembershipProvider" connectionStringName="AspNetDbProvider" applicationName="MVCMusicStoreWeb" enablePasswordRetrieval="false" enablePasswordReset="false" requiresQuestionAndAnswer="false" requiresUniqueEmail="true" passwordFormat="Hashed"/>
      </providers>
    </membership>
    <roleManager enabled="true" defaultProvider="SqlRoleProvider">
      <providers>
        <add name="SqlRoleProvider" type="System.Web.Security.SqlRoleProvider" connectionStringName="AspNetDbProvider" applicationName="MVCMusicStoreWeb"/>
      </providers>
    </roleManager>

Add a fixed Machine Key (you can use a range of online tools to generate one for you if you want). This allows all Instances to handle Forms authentication and other shared state that will require encryption / decryption between requests.


<machineKey
 validationKey="E9D17A5F58DE897D9161BB8D9AA995C59102AEF75F0224183F1E6F67737DE5EBB649BA4F1622CD52ABF2EAE35F9C26D331A325FC9EAE7F59A19F380E216C20F7"
 decryptionKey="D6F541F7A75BB7684FD96E9D3E694AB01E194AF6C9049F65"
 validation="SHA1"/>

Define a new connection string for our SQL-based Membership and Role Providers


<add name="AspNetDbProvider" 
connectionString="{your_connection_string}" 
providerName="System.Data.SqlClient"/>

Phew! Almost there!

The last piece of the puzzle is to add configuration transformations for our two Azure Regions so we talk to the Azure SQL Database in each Region. Repeat this in each Region's transform and replace the Azure SQL Database Server name with the one appropriate to that Region (note that your secondary will be read-only at this point).


<connectionStrings>
  <add name="MusicStoreEntities"
       xdt:Transform="SetAttributes" 
       xdt:Locator="Match(name)"
       connectionString="Server=tcp:{primaryeast_db_server}.database.windows.net,1433;Database=mvcmusicstore;User ID={user}@{primaryeast_db_server};Password={your_password_here};Trusted_Connection=False;Encrypt=True;Connection Timeout=30;"
    />
  <add name="AspNetDbProvider"
       xdt:Transform="SetAttributes" 
       xdt:Locator="Match(name)"
       connectionString="Server=tcp:{primaryeast_db_server}.database.windows.net,1433;Database=aspnetdb;User ID={user}@{primaryeast_db_server};Password={your_password_here};Trusted_Connection=False;Encrypt=True;Connection Timeout=30;"
    />
  </connectionStrings>

Build Deployment Packages

Now that we have a project ready to deploy we will need to publish the deployment packages locally ready for the remainder of this post.

In Visual Studio, right-click on the MvcMusicStore.Azure project and select Package as shown below.

Package Azure Solution

Choose the appropriate configuration and click ‘Package’.

Package with config choice.

After packaging is finished a Windows Explorer window will open at the location of the published files.

Setup Cloud Services

Now we have all the project packaging work out of the way, let's go ahead and provision a Cloud Service which will be used to host our Web Role Instances. The sample script below shows how we can use the Azure PowerShell Cmdlets to provision a Cloud Service. The use of an Affinity Group allows us to ensure that our Blob Storage and Cloud Service are co-located closely enough to be at a low latency inside the Azure Region.

Note: Just about everything you deploy in Azure requires a globally unique name. If you run all these scripts using the default naming scheme I’ve included the chances are that they will fail because someone else may have already run them (hint: you should change them).
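
As a rough sketch (all names below are placeholders you should change, per the note above), the provisioning step can look like this:

# Create an affinity group, a storage account and the Cloud Service that will host the Web Role
$region        = 'East US'                  # placeholder - your primary Region
$affinityGroup = 'mvcmusicstore-east-ag'
$storageName   = 'mvcmusicstoreeaststore'   # must be globally unique and lower-case
$serviceName   = 'mvcmusicstore-east'       # must be globally unique

New-AzureAffinityGroup -Name $affinityGroup -Location $region
New-AzureStorageAccount -StorageAccountName $storageName -AffinityGroup $affinityGroup
New-AzureService -ServiceName $serviceName -AffinityGroup $affinityGroup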

Deploy Application to Cloud Services

Once the above Cloud Service script has been run successfully we have the necessary pieces in place to actually deploy our MVC application. The sample PowerShell script below does just this – it utilises the output of our packaging exercise from above, uploads the package to Azure Blob Storage and then deploys using the appropriate configuration. The reason we have two packages is because the web.config is deployed with the package and making changes to it is not supported post deployment.
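
For reference, a rough sketch of that upload-and-deploy step with the classic cmdlets (subscription name, container and file names are assumptions; repeat per Region with that Region's package):

# Upload the packaged .cspkg to Blob Storage and deploy it to the Production slot
$serviceName = 'mvcmusicstore-east'
$storageName = 'mvcmusicstoreeaststore'
$packagePath = '.\app.publish\MvcMusicStore.Azure.cspkg'
$configPath  = '.\app.publish\ServiceConfiguration.PrimaryEast.cscfg'

Set-AzureSubscription -SubscriptionName 'MySubscription' -CurrentStorageAccountName $storageName
New-AzureStorageContainer -Name 'deployments' -ErrorAction SilentlyContinue
$blob = Set-AzureStorageBlobContent -File $packagePath -Container 'deployments' -Force

New-AzureDeployment -ServiceName $serviceName -Slot 'Production' `
    -Package $blob.ICloudBlob.Uri.AbsoluteUri -Configuration $configPath `
    -Label 'MvcMusicStore deployment'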

Once the above script has finished successfully you should be able to open a web browser and connect to the cloudapp.net endpoints in each Region. The Primary Region should give you full read/write access, whereas the Secondary Region will work but will likely throw exceptions for any action that requires a database write (this is expected behaviour). Note that these cloudapp.net endpoints resolve to a load balanced endpoint that frontends the individual Instances in the Cloud Service.

Configure Traffic Manager

The final piece of this infrastructure puzzle is to deploy Traffic Manager which is Azure’s service offering for controlling where inbound requests are routed. Traffic Manager is not a load balancer but can provide services such as least-latency and failover routing and as of October 2014 can now support nested profiles (i.e. Traffic Manager managing traffic for another Traffic Manager – all very Inception-like!)

For our purposes we are going to use a Failover configuration that will use periodic health checks to determine if traffic should continue to be routed to the Primary Endpoint or failover to the Secondary.

Notes:

When defining a Failover configuration the ordering of the Endpoints matters. The first Endpoint added is considered as the Primary (you can change this later if you wish though).

You might want to consider a different “MonitorRelativePath” and utilise a custom page (or view in MVC's case) that performs some simple diagnostics and returns a 200 OK response code if everything is working as expected.
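
A sketch of such a Failover profile with the classic Traffic Manager cmdlets (profile name, Traffic Manager domain and the two cloudapp.net endpoints are placeholders; remember the first endpoint added becomes the Primary):

# Create a Failover profile that health-checks over HTTP and add the two Cloud Service endpoints
$profile = New-AzureTrafficManagerProfile -Name 'mvcmusicstore-tm' `
    -DomainName 'mvcmusicstore-tm.trafficmanager.net' `
    -LoadBalancingMethod 'Failover' -Ttl 30 `
    -MonitorProtocol 'Http' -MonitorPort 80 -MonitorRelativePath '/'

$profile = Add-AzureTrafficManagerEndpoint -TrafficManagerProfile $profile `
    -DomainName 'mvcmusicstore-east.cloudapp.net' -Type 'CloudService' -Status 'Enabled'   # Primary

$profile = Add-AzureTrafficManagerEndpoint -TrafficManagerProfile $profile `
    -DomainName 'mvcmusicstore-west.cloudapp.net' -Type 'CloudService' -Status 'Enabled'   # Secondary

Set-AzureTrafficManagerProfile -TrafficManagerProfile $profile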

But wait, there’s more!

A free set of steak knives!

No, not really, but if you’ve read this far you may as well qualify for some!

There are a few important things to note with this setup.

  • It won’t avoid data loss: the nature of Geo-replication means there is a high likelihood you will not see all Primary Database transactions play through to the Secondary in the event of a failure in the Primary. The window should be fairly small (depending on how geographically dispersed your databases are), but there will be a gap.
  • Failover requires manual intervention: you need to use the Stop-AzureSqlDatabaseCopy Cmdlet to force termination of the relationship between the Primary and Secondary databases. Until you do this the Secondary is read-only. Use external monitoring to find out when the Primary service goes down and then either leverage the Azure REST API to invoke the termination or script the Cmdlet. Note that once you break the copy process it isn’t automatically recreated.
  • Users will notice the cutover: there will be a combination of things they’ll notice depending on how long you wait to terminate the database copy. Active sessions will lose some data as they are backed by the database. User actions that require write access to the database will fail until you terminate the copy.
  • What about caching? We didn’t look at caching in this post. If we leverage the new Redis Cache we could theoretically setup a Slave in our Secondary Region. I haven’t tested this so your mileage may vary! (Would love to hear if you have)

The big benefit of this setup is that you can quickly recover from a failure in one Region. You may choose to use a holding page as your Secondary failover and then manually manage the complete cutover to the secondary active application once you are satisfied that the outage in the Primary Region will be of a long duration.

You should be running monitoring of the setup and check that both Primary and Secondary cloudapp.net endpoints are healthy and that your Traffic Manager service is healthy. Any issue with the Primary cloudapp.net endpoint will be the trigger for you to intervene and potentially switch off the geo-replication.

Anyway, this has been a big couple of posts – thanks for sticking around and I hope you’ve found the content useful.