The quickest way to create new VMs in Azure from existing VM snapshots, mostly with PowerShell

Originally posted on Lucian’s blog @clouduccino.com. Follow Lucian on Twitter @LucianFrango.


There are probably multiple ways to do this, both right and wrong, but here's a process that I've been using for a while and have recently tweaked to take advantage of the new Azure Managed Disks.

Sidebar – standard managed disk warning

Before I go on though, I wanted to issue a quick warning about the differences between standard unmanaged and managed disks. Microsoft will be pushing you towards Managed Disks more and more. Yes, it's a great feature that makes the management of VM disks simpler. The key bit of information, though, is as follows:

  • If you provision an unmanaged disk that is 1TB in size but only use 100GB, you pay for 100GB of storage. So you're only paying for what you use. [1. Unmanaged disk cost – Azure Documentation ]
  • If you provision a managed disk that is 1TB in size but only use 10MB, you will be paying for the privilege of the whole 1TB disk. [2. Managed disk cost – Azure Documentation ]
  • Additionally, with Premium disks you pay for what you provision, whether the disk is managed or unmanaged.

That aside, Managed Disks are a pretty good feature that makes disk and storage account management considerably simpler. If you’re frugal with your VM allocation and have the process to manage people and technology correctly, Managed Disks are great.

The Process

tl;dr

  • Create a snapshot in Azure
  • Copy the snapshot from snapshot storage location to Blob storage
  • Create a new VM instance based on the blob.vhd file
    • This blog post outlines the use of managed disks
    • However, mounting direct from Blob can also be done

The actual process

I've gone through this recently and updated it so that it's as streamlined, for me, as possible. Again, this is skewed towards managed disk usage, but it can easily be extended to unmanaged disks as well. Let's begin:

Step 0 (if required)

If you're wanting to do this to create copies of your VM instances, to scale out your workload, remember to generalise or sysprep your VM instance prior to Step 1. In the example I go into below, my use case was to create a copy of a server from a production environment (VNET and subscription) and move it to a different and separate non-production environment (separate VNET and subscription).

Step 1 – Create a snapshot of your VM disk(s)

The first thing we need to do is actually power off the virtual machine instance. I've seen that snapshots can happen while the VM instance is running, but I guess you can call me a little bit more old school, a little bit more on the cautious side when it comes to these sorts of things. I've been bitten by this particular bug in the past, and unpleasant it was, so I'm inclined to err on the side of caution.

Once the VM instance is offline, go to the Azure Portal, search for "Snapshots" and create a new snapshot (a PowerShell alternative is sketched after the list below).

  • NOTE: snapshots in Azure are done per DISK and not per VM INSTANCE
  • Name the snapshot
  • Select the subscription where the VM instance is located
  • Select the resource group you want to save the snapshot to
    • Or create a new one
  • Select the snapshot location
  • Select the source disk
    • If you earlier selected the same resource group that contains your VM instance, that resource group's VM disks will be listed first
  • Select the storage type – standard or premium – for your snapshot
    • I usually just use standard as I've not had the need for the faster premium speed as yet (that will change one day for sure)
  • Create the snapshot
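If you'd prefer to stay in PowerShell, the snapshot can also be taken with the AzureRM cmdlets. A minimal sketch for a VM with a managed OS disk (resource group, VM and snapshot names are placeholders):

#Snapshot the OS managed disk of an existing VM
$vm = Get-AzureRmVM -ResourceGroupName "<resource-group-name>" -Name "VM01"
$snapshotConfig = New-AzureRmSnapshotConfig -SourceUri $vm.StorageProfile.OsDisk.ManagedDisk.Id -Location "australiaeast" -CreateOption Copy
New-AzureRmSnapshot -Snapshot $snapshotConfig -SnapshotName "VM01-OSDisk-Snapshot" -ResourceGroupName "<resource-group-name>"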

Once the snapshot is created, complete this quick next step to generate an export access URL (we'll need this in step 2):

  • Select the snapshot
  • From the top menu, select Export
  • You'll be presented with a dialog asking for a time interval (in seconds)
  • The default is 3600, or 1 hour
  • That is fine, but I like to make it 36000 (add another 0) so that I have a whole day to do this again and again if need be (the same URL can also be generated with PowerShell – see the sketch after this list)
  • Save the generated URL to notepad for later!
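For the PowerShell route, the export access URL can be generated with Grant-AzureRmSnapshotAccess. A minimal sketch, using the same placeholder names as above:

#Generate a read-only SAS URL for the snapshot, valid for 36000 seconds (10 hours)
$sas = Grant-AzureRmSnapshotAccess -ResourceGroupName "<resource-group-name>" -SnapshotName "VM01-OSDisk-Snapshot" -Access Read -DurationInSecond 36000
$sas.AccessSAS   #this is the URL to save for step 2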

Step 2 – Copy the snapshot to Blob

The next part relies on PowerShell. Update the following PowerShell script with your parameters to copy the snapshot to Blob:

#Destination storage account details and the snapshot export URL from step 1
$storageAccountName = "<storage account name>"
$storageAccountKey = "<storage account key>"
$absoluteUri = "https://blahblahblah.blob.core.windows.net/blahblahblah/........"
$destContainer = "<container>"
$blobName = "server.vhd"

#Copy the snapshot into the destination container as a VHD blob
$destContext = New-AzureStorageContext -StorageAccountName $storageAccountName -StorageAccountKey $storageAccountKey
Start-AzureStorageBlobCopy -AbsoluteUri $absoluteUri -DestContainer $destContainer -DestContext $destContext -DestBlob $blobName

Just for your info, here's a quick explanation of the above:

  • Storage account name = the storage account where you want to store the VHD
  • The storage account key = either the primary or secondary key, used for authentication when accessing the storage account
  • The absolute URI = this is the snapshot URI we generated at the end of step 1
  • The destination container = where you want to store the VHD. Usually this is either "vhds", or maybe create one called "snapshots"
  • The blob name = the file name of the VHD itself (remember to only use lowercase)

Step 2.5 – Moving around the blob if need be

Before we actually create a new VM instance based on this snapshot blob, there is an additional option we could take: it might make sense to move the blob to a different subscription. This is particularly handy when you have a development environment that you want to move to production. Other use cases might be the inverse – making a replica of a production system for development purposes.

The absolute fastest way to do this – and I don't like being inefficient here – is with the Azure Storage Explorer (ASE) tool. It's an application that provides a quick GUI for completing storage actions. If you add both storage accounts to ASE, it's as easy as this:

  • Select the blob from storage account A (in subscription A)
  • Select copy from the top menu
  • Go to your second storage account (storage account B in perhaps subscription B)
  • Go to the relevant container
  • Select paste from the top menu
  • Wait for the blob to copy
  • DONE

It can't get any simpler or faster than that. I'm sure if you're command-line inclined you have a go-to PowerShell cmdlet for that, but for me I've found this to be pretty damn quick. It isn't broken, so why fix it?
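That said, for the command-line inclined, the same cross-account copy can be done with the storage cmdlets used in step 2. A minimal sketch (account names, keys and containers are placeholders):

#Contexts for the source (subscription A) and destination (subscription B) storage accounts
$srcContext = New-AzureStorageContext -StorageAccountName "<storage account A>" -StorageAccountKey "<key A>"
$destContext = New-AzureStorageContext -StorageAccountName "<storage account B>" -StorageAccountKey "<key B>"

#Kick off the server-side blob copy and wait for it to complete
Start-AzureStorageBlobCopy -SrcContainer "vhds" -SrcBlob "server.vhd" -Context $srcContext -DestContainer "vhds" -DestBlob "server.vhd" -DestContext $destContext
Get-AzureStorageBlobCopyState -Container "vhds" -Blob "server.vhd" -Context $destContext -WaitForComplete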

Step 3 – Create a new VM with a managed disk based on the snapshot we’ve put into Azure Blob

The final piece of the puzzle, as the cliché goes, is to create a new virtual machine instance. Again, as the wonderfully elusive and vague title of this blog post states, we'll use PowerShell to do this. Sure, ARM templates would work, and the Azure Portal can likely get you pretty far as well. However, again I like to be efficient and I've found that the following PowerShell script does this best.

Additionally, you can change this up to mount the VHD directly from Blob instead of creating a new managed disk (a sketch of that change is included after the parameter list below). So, for the purpose of creating a new machine, PowerShell is as flexible as it is fast and convenient.

Here’s the script you’ll need to create the new VM instance:

#Prepare the VM parameters 
$rgName = "<resource-group-name>"
$location = "australiaEast"
$vnet = "<virtual-network>"
$subnet = "/subscriptions/xxxxxxxxx/resourceGroups/<resource-group-name>/providers/Microsoft.Network/virtualNetworks/<virtual-network>/subnets/<subnet>"
$nicName = "VM01-Nic-01"
$vmName = "VM01"
$osDiskName = "VM01-OSDisk"
$osDiskUri = "https://<storage-account>.blob.core.windows.net/<container>/server.vhd"
$VMSize = "Standard_A1"
$storageAccountType = "StandardLRS"
$IPaddress = "10.10.10.10"

#Create the VM resources
$IPconfig = New-AzureRmNetworkInterfaceIpConfig -Name "IPConfig1" -PrivateIpAddressVersion IPv4 -PrivateIpAddress $IPaddress -SubnetId $subnet
$nic = New-AzureRmNetworkInterface -Name $nicName -ResourceGroupName $rgName -Location $location -IpConfiguration $IPconfig
$vmConfig = New-AzureRmVMConfig -VMName $vmName -VMSize $VMSize
$vm = Add-AzureRmVMNetworkInterface -VM $vmConfig -Id $nic.Id

$osDisk = New-AzureRmDisk -DiskName $osDiskName -Disk (New-AzureRmDiskConfig -AccountType $storageAccountType -Location $location -CreateOption Import -SourceUri $osDiskUri) -ResourceGroupName $rgName
$vm = Set-AzureRmVMOSDisk -VM $vm -ManagedDiskId $osDisk.Id -StorageAccountType $storageAccountType -DiskSizeInGB 128 -CreateOption Attach -Windows
$vm = Set-AzureRmVMBootDiagnostics -VM $vm -disable

#Create the new VM
New-AzureRmVM -ResourceGroupName $rgName -Location $location -VM $vm

Again, let me explain the parameters we've set at the start of the script:

  • $rgName = the resource group where you want to deploy the VM instance
  • $location = the Azure region
  • $vnet = the virtual network where you want to deploy the VM instance
  • $subnet = the subnet where you want to deploy the VM instance
  • $nicName = the name of the NIC of the server
  • $vmName = the name of the VM instance, the server name
  • $osDiskName = the OS disk name
  • $osDiskUri = the direct URI/URL to the VHD in your storage account
  • $VMSize = the VM size or the service plan for the VM
  • $storageAccountType = what type of storage you would like to have
  • $IPaddress = the static IP address of the server as I like to do this in Azure, rather than use dynamic IP’s
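As mentioned above, if you'd rather attach the VHD straight from Blob storage as an unmanaged disk instead of creating a managed disk, the OS disk lines change to something like the following sketch (untested here, so treat it as a starting point):

#Attach the OS disk directly from the VHD in Blob storage (unmanaged), replacing the New-AzureRmDisk / -ManagedDiskId lines above
$vm = Set-AzureRmVMOSDisk -VM $vm -Name $osDiskName -VhdUri $osDiskUri -CreateOption Attach -Windows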

And that is pretty much that!

Conclusion

It's Friday in Sydney. It's the pre-kend and it's a gloomy, cold 9th day of Winter 2017. I hope that the above content helps you out of a jam or gives you the insight you need to run through this process quickly and efficiently. That feeling of giving back, of helping – that's the feeling that should warm me up and get me to lunch time! Counting down!

Cheers!

How to build and deploy an Azure NodeJS WebApp using Visual Studio Code

Introduction

This week I had the need to build a small web application with a reasonably simple front end that will later be integrated inside a Portal. The web application isn’t going to be high use and didn’t necessitate deployment of infrastructure (VM’s). I’d messed with NodeJS a while back in this post where I configured a UI for Microsoft Identity Manager and Azure based functions.

In the back of my mind I knew I didn't want to go for a full Visual Studio Project Solution for this, and with the recent updates to Visual Studio Code I figured it must be possible to do it there. There wasn't much around on how to do it, so I dived in and worked it out for myself. Here I share the end-to-end process to make it easy for you to get started.

Overview

What you will need on your development workstation before you start are the following components. Download and install them on your development machine.

You will also need an Azure Subscription to where you will publish your NodeJS site.

This post details setting up Visual Studio Code to build a shell NodeJS site and deploy it to Azure using a local GIT Repository. Let’s get started.

Visual Studio Code Extensions

A really smart and handy extension for VS Code is Azure Tools for VS Code. Released a few months ago (January 2017), this extension allows you to quickly create a Web App (Resource Group, App Service, App Service Plan etc.) from within VS Code. With VS Code on your development machine from the prerequisites above, click on the Extensions icon (bottom left) in the VS Code menu and type Azure Tools. Click the green Install button.

Azure Tools for VS Code

Creating the NodeJS Site in VS Code

I had a couple of attempts at doing this before I found a quick, neat and repeatable method of getting started. In order to get the Web App deployed and accessible correctly in Azure I found it easiest to use the Sample Azure NodeJS Hello World example from here. Download that sample and extract the contents to a new folder on your workstation. I created a new path on mine named …\NodeJS\nodejssite and dropped the sample in there so it looked like below.

After creating the folder structure and putting the sample in it, whilst in the sub-directory type:

code .

That will startup Visual Studio Code in the newly created folder with the starter sample.

Install Express for NodeJS

To that base sample site we’ll install Express. From the Terminal tab in the lower pane type:

npm install -g express-generator

Express App

With Express now on our machine, let's add the Express App to our new NodeJS site. Type express in the Terminal window.

express

Accept that the directory is not empty

This will create the folder structure for Express.

Now to get all the files and modules for our site configured for our app run npm install

Now type npm start in the terminal window to start our new site.

The NodeJS site will start. Open a Web Browser and go to http://localhost:3000 and you should see the Express empty site.

Navigate to views => index.jade. Update the text like I have below.

Refresh your browser window and you should see the text updated.

In the terminal window press Ctrl + C to stop NodeJS.

Test Deploy to Azure

Now let’s do a test deploy of our shell site as an Azure WebApp.

Press Ctrl + Shift + P, or from the View menu select Command Palette.

Type Azure: Login 

This will generate a code and give you a link to open in your browser and login

Paste in the code from the clipboard and select continue

Then login with your account for the Tenant where you want to deploy the WebApp to. You'll then be authorized.

From the Command Palette type azure sub and choose Azure: List Azure Subscriptions and choose the subscription where you will create and deploy the WebApp

Now from the Command Palette type Azure Create a Web App (Simple).

Give the WebApp a name. This will become the WebApp Name, and the basis for all the associated WebApp components. Use Create a Web App (Advanced) if you want to be more specific about the names of the App Resources etc.

If you watch the bottom VS Code Status bar you will see the Azure Tools extension create the new Resource Group, Web App and Web App Plan.

Login to the Azure Portal, select the new Web App.

Select Deployment Options and then Local Git Repository. Select OK.

Select Deployment credentials and provide a username and password. You’ll need this shortly to publish your site.

Click Overview. Copy the Git clone url.

Back in VS Code, select the GIT icon (under the magnifying glass) and from the top choose Initialize Repository.

Then in the terminal window type git remote add azure <git clone url> obtained from the step above.

Type Initial Commit as the message and click the tick icon in the Source Control menu bar.

Select the … (more actions) menu and select Publish.

Select azure as the remote target we setup earlier.

You’ll be prompted to authenticate. Use the account you created above in Deployment Credentials.
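For reference, the equivalent flow from the integrated terminal looks something like this (the remote URL is the Git clone URL copied from the portal):

git init
git add .
git commit -m "Initial Commit"
git remote add azure <git clone url>
git push azure master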

Back in the Azure Portal under the Web App under Deployment Options you will see the initial commit.

Click on Overview and you should see that it is running. Click on URL and the site will open in a new tab in your browser.

Updating our WebApp

Now, let’s make a change to our WebApp.

Back in VS Code, click on the files and folder icon in the top left corner, navigate to views => index.jade and update the title. Hit Ctrl + S (or select Save from the File menu). In the Terminal below, type npm start to start our NodeJS site locally.

Check out the update locally. In a browser navigate to the local NodeJS site on localhost:3000. You’ll see the changed page.

Select the Git icon on the left menu, give the update some text e.g. ‘updated page text’ and select the tick from the top menu.

Select the … (more actions) menu and choose Push to publish the changes to our Azure WebApp.

Go back to your browser which was on the Azure WebApp URL and reload. Our change has been pushed and is reflected in the WebApp.

Summary

Very quickly and easily using Visual Studio Code (with NodeJS and Git Desktop installed locally) we have:

  • Created an Azure WebApp
  • Created a base NodeJS site
  • Got a local NodeJS site we can develop
  • Published it to Azure

Now go create something awesome.

Notes for Logic Apps around Webhook Actions

Azure Logic Apps (Logic Apps) is one of the serverless services that Azure offers; the other, of course, is Azure Functions (Functions). Logic Apps consists of many connectors and triggers to interconnect services outside Azure. The webhook connector is one of them, and it has characteristics unique among them. In this post, we briefly look at some tips we should know when we use webhook actions in a Logic Apps workflow.

What is Webhook?

A webhook is basically an API. We register an API endpoint onto a certain service; when an event occurs on that service, it invokes the API endpoint and sends a payload there through the request body. That's how a webhook API works. Here are some key points:

  • In order to use a webhook, we need to register it onto somewhere. The registration process is called Subscription. We may or may not explicitly perform the subscription process.
  • During the subscription process, we store an endpoint URL, which is called the Callback. After the subscription, when an event occurs the callback is invoked.
  • When we invoke the callback, we also send data, called the Payload, through its request body. That means we always use the POST method rather than any other HTTP method.
  • If we don’t need the webhook any longer, we need to deregister it, which is called Unsubscription. This is performed either explicitly or implicitly.

Those four behaviours are the most distinctive ones on a webhook. Let’s take an example using Slack and GitHub. We have a simple scenario around those two services:

As a developer, I want to post a notification on a Slack channel when I push a commit to a GitHub repository so that other team members get notified of my code update.

  • Get an endpoint URL from Slack to send a notification, which will be registered onto GitHub. This is the Subscription.
  • The Slack endpoint URL is considered as Callback URL.
  • Through the callback URL, a POST request is sent when a new push is made. The request contains the push commit details, which is the Payload.
  • When we don’t need the notification, we simply delete the callback URL from the GitHub repository. This is the Unsubscription.

Does that make more sense now? Let's apply these concepts to Logic Apps.

Webhook Action on Logic Apps

A webhook action can be found like:

If we select it, we are prompted to enter its details. Here are the bits and pieces we are talking about in this post.

  1. At the time of this writing, the picture only indicates two required fields – Subscribe - Method and Subscribe - URI. However, this is NOT true. We actually have to fill in at least these FIVE fields:
    • Subscribe - Method
    • Subscribe - URI
    • Subscribe - Body
    • Unsubscribe - Method
    • Unsubscribe - URI
  2. Both Subscribe - Method and Unsubscribe - Method only accept POST, nothing else. As the picture above shows, the dropdown lets us select any method – GET, for example – but choosing anything other than POST makes the Logic App throw the error below. DO NOT get confused by the dropdown box.

  3. Subscribe - Body is a de-facto required field, as we need to pass a callback URL through its payload in JSON format. Therefore, the callback URL MUST be sent via the request body using the WDL (Workflow Definition Language) function @{listCallbackUrl()} (a sketch of the resulting workflow definition is included after this list).

  4. Both the Logic App endpoint URL and the callback URL from the webhook action require a SAS token, which is automatically generated. This SAS token is used for authentication, so there is no need for a separate Authorization header. If we add the Authorization header by accident, the DirectApiAuthorizationRequired error will occur.

  5. When a Logic App reaches the webhook action within its workflow, it calls the subscription endpoint and waits until the callback is invoked with a payload. The maximum waiting period is 90 days. The callback only expects a POST request.

  6. When the callback sends a payload, the payload is considered the response of the webhook action. Let's look at the example below: the Logic App subscribes to a Function endpoint, and the Function sends a request back to the Logic App through the callback URL, with a payload. The initial request body coming from the Logic App has a productId property, but the callback request body from the Function changes it to objectId. As a result, the webhook action receives the new payload containing the objectId property.

    // Read the subscription request coming from the Logic App (contains productId and callbackUrl)
    dynamic data = await req.Content.ReadAsAsync<object>();
    var serialised = JsonConvert.SerializeObject((object)data);
    
    using (var client = new HttpClient())
    {
      // Build a new payload, swapping productId for objectId, and POST it back to the callback URL
      var payload = JsonConvert.SerializeObject(new { objectId = (int) data.productId });
      var content = new StringContent(payload);
      await client.PostAsync((string) data.callbackUrl, content);
    }
    
    return req.CreateResponse(HttpStatusCode.OK, serialised);
    
  7. Regardless of whether the callback request is successful or not, whenever the callback is invoked and captured by the Logic App, its payload always becomes the response message of the webhook action.

  8. Only after the callback response is captured by the Logic App are the subsequent actions triggered. If those following actions use the webhook's response message, it will be the payload from the callback, as described earlier.

  9. Unsubscription is only executed when the Logic App run is cancelled or the 90-day timeout occurs. It works like a garbage collector.

  10. Unsubscription doesn’t have to fire a callback.
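To tie items 1 to 3 together, a minimal HTTP Webhook action in the underlying workflow definition might look like the sketch below. The subscribe/unsubscribe URIs are placeholders for your own subscription API; the callback URL is passed in the subscribe body via @{listCallbackUrl()}.

"HTTP_Webhook": {
  "type": "HttpWebhook",
  "inputs": {
    "subscribe": {
      "method": "POST",
      "uri": "https://example.com/api/subscribe",
      "body": {
        "callbackUrl": "@{listCallbackUrl()}"
      }
    },
    "unsubscribe": {
      "method": "POST",
      "uri": "https://example.com/api/unsubscribe"
    }
  }
}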

So far, we have briefly taken a look at the webhook action in Logic Apps. Documentation around this still needs improvement, so it is worth noting these characteristics when using Logic Apps. By doing so, we can reduce the number of trial-and-error efforts.

How to create and auto update route tables in Azure for your local Azure datacentre with Azure Automation, bypassing firewall appliances

Originally posted on Lucian's blog, at clouduccino.com. Follow Lucian on Twitter @LucianFrango.


When deploying an “edge” or “perimeter” network in Azure, by way of a peered edge VNET or an edge subnet, you’ll likely want to deploy virtual firewall appliances of some kind to manage and control that ingress and egress traffic. This comes at a cost though. That cost being that Azure services are generally accessed via public IP addresses or hosts, even within Azure. The most common of those and one that has come up recently is Azure Blob storage.

If you have ExpressRoute, you can get around this by implementing Public Peering. This essentially sends all traffic destined for Azure services to your ER gateway. A bottleneck? Perhaps.

The problem in detail

Recently I ran into a road block at a customer's site around the use of Blob storage. I designed an edge network that met certain traffic monitoring requirements. Azure NSGs were not able to meet all the requirements, so something considerably more complex and time consuming was implemented. It's IT, isn't that what always happens, you may ask?

Here are some reference blog posts:

Getting Azure 99.95% SLA for Cisco FTD virtual appliances in Azure via availability sets and ARM templates

Lessons learned from deploying Cisco Firepower Threat Defence firewall virtual appliances in Azure, a brain dump

We deployed Cisco Firepower Threat Defence virtual appliance firewalls in an edge VNET. Our subnets had route tables with a default route of 0.0.0.0/0 directed to the "tag" "VirtualAppliance", so all traffic to a host or network not known by Azure is directed to the firewall(s). How that can be achieved is another blog post.

When implementing this solution, Azure services that are accessed via an external or public IP address or host – most commonly Blob storage, which is accessed via blah.blob.core.windows.net – additionally get directed to the Cisco FTDs. Not a big problem: create some rules to allow traffic flow etc. and we're cooking with gas.

Not exactly the case, as the FTDv's have a NIC with a throughput of 2GiB per second. That's plenty fast, but when you have a lot of workloads, a lot of user traffic and a lot of writes to Blob storage, bottlenecks can occur.

The solution

As I mentioned earlier, this can be tackled quickly through a number of methods. The methods discarded in this situation are as follows:

  • Implement ExpressRoute
    • Through ExpressRoute enable public peering
    • All traffic to Azure infrastructure is directed to the gateway
    • This is a “single device” that I have heard whispers is a virtual Cisco appliance similar to a common enterprise router
    • Quick to implement, and in most cases the throughput bottleneck isn't reached and you're fine
  • Implement firewall rules
    • Allow traffic to Microsoft IP ranges
    • Manually enter those IP ranges into the firewall
      • These are subject to change, so a maintenance or managed-services runbook should be implemented to update them on a regular basis
    • Or enable URL filtering and basically allow traffic to certain URIs or URLs

Like I said, both above will work.

The solution in this blog post is a little bit more technical, but it does away with the above. Rather than any manual work, let's automate this process through Azure Automation. Thankfully, this isn't something new, but it isn't something that is used often either. Through pre-configured Azure Automation modules and PowerShell scripts, a scheduled weekly or monthly (or whatever you like) runbook downloads the publicly available Microsoft .xml file that lists all subnets and IP addresses used in Azure, then uses that file to create and update route tables with routes to the Azure subnets and IPs in a specified region.

This process does away with any manual intervention and works to the ethos “work smarter, not harder”. I like that very much, and no, that is not being lazy. It’s being smart.

The high five

I'm certainly not trying to take the credit for this, except for the minor tweak to the runbook code, so cheers to James Bannan (@JamesBannan) who wrote this great blog post (available here) on the solution. He's put together the PowerShell script that expands on a PowerShell module written by Kieran Jacobson (@kjacobsen). Check out their Twitters, their blogs and their all-round awesome content. Thank you and high five to both!

The process

I'm going to speed through this as it's really straightforward and nothing too complicated. The only tricky part is the order of the steps. Stick to the order and you're guaranteed to succeed:

  • Create a new automation user account
  • Create a new runbook
    • Quick create a new Powershell runbook
  • Go back to the automation account
  • Update some config:
    • Update the modules in the automation account – do this FIRST as there are dependencies on up-to-date modules (specifically, AzureRM.Network depends on an up-to-date AzureRM.Profile module)
    • By default you have these runbook modules:

    • Go to Browse Gallery
    • Select the following two modules, one at a time, and add to the automation user account
      • AzureRM.Network
      • AzurePublicIPAddress
        • This is the module created by Kieran Jacobson 
    • Once all are in, for the purposes of being borderline OCD, select Update Azure Modules
      • This should update all modules to the latest version, in case some are lagging a little behind
    • Let's create some variables
      • Select Variables from the menu blade in the automation user account
      • The script will need the following variables for YOUR ENVIRONMENT
        • azureDatacenterRegions
        • VirtualNetworkName
        • VirtualNetworkRGLocation
        • VirtualNetworkRGName
      • For my sample script, I have resources in the Australia East region
      • Enter in the variables that apply to you here (your RGs, VNET etc)
  • Lets add in the Powershell to the runbook
    • Select the runbook
    • Select EDIT from the properties of the runbook (top horizontal menu)
    • Enter in the following Powershell:
      • this is my slightly modified version
$VerbosePreference = 'Continue'

### Authenticate with Azure Automation account

$cred = "AzureRunAsConnection"
try
{
 # Get the connection "AzureRunAsConnection "
 $servicePrincipalConnection=Get-AutomationConnection -Name $cred

"Logging in to Azure..."
 Add-AzureRmAccount `
 -ServicePrincipal `
 -TenantId $servicePrincipalConnection.TenantId `
 -ApplicationId $servicePrincipalConnection.ApplicationId `
 -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint 
}
catch {
 if (!$servicePrincipalConnection)
 {
 $ErrorMessage = "Connection $cred not found."
 throw $ErrorMessage
 } else{
 Write-Error -Message $_.Exception
 throw $_.Exception
 }
}

### Populate script variables from Azure Automation assets

$resourceGroupName = Get-AutomationVariable -Name 'virtualNetworkRGName'
$resourceLocation = Get-AutomationVariable -Name 'virtualNetworkRGLocation'
$vNetName = Get-AutomationVariable -Name 'virtualNetworkName'
$azureRegion = Get-AutomationVariable -Name 'azureDatacenterRegions'
$azureRegionSearch = '*' + $azureRegion + '*'

[array]$locations = Get-AzureRmLocation | Where-Object {$_.Location -like $azureRegionSearch}

### Retrieve the nominated virtual network and subnets (excluding the gateway subnet)

$vNet = Get-AzureRmVirtualNetwork `
 -ResourceGroupName $resourceGroupName `
 -Name $vNetName

[array]$subnets = $vnet.Subnets | Where-Object {$_.Name -ne 'GatewaySubnet'} | Select-Object Name

### Create and populate a new array with the IP ranges of each datacenter in the specified location

$ipRanges = @()

foreach($location in $locations){
 $ipRanges += Get-MicrosoftAzureDatacenterIPRange -AzureRegion $location.DisplayName
}

$ipRanges = $ipRanges | Sort-Object

### Iterate through each subnet in the virtual network
foreach($subnet in $subnets){

$RouteTableName = $subnet.Name + '-RouteTable'

$vNet = Get-AzureRmVirtualNetwork `
 -ResourceGroupName $resourceGroupName `
 -Name $vNetName

### Create a new route table if one does not already exist
 if ((Get-AzureRmRouteTable -Name $RouteTableName -ResourceGroupName $resourceGroupName) -eq $null){
 $RouteTable = New-AzureRmRouteTable `
 -Name $RouteTableName `
 -ResourceGroupName $resourceGroupName `
 -Location $resourceLocation
 }

### If the route table exists, save as a variable and remove all routing configurations
 else {
 $RouteTable = Get-AzureRmRouteTable `
 -Name $RouteTableName `
 -ResourceGroupName $resourceGroupName
 $routeConfigs = Get-AzureRmRouteConfig -RouteTable $RouteTable
 foreach($config in $routeConfigs){
 Remove-AzureRmRouteConfig -RouteTable $RouteTable -Name $config.Name | Out-Null
 }
 }

### Create a routing configuration for each IP range and give each a descriptive name
 foreach($ipRange in $ipRanges){
 $routeName = ($ipRange.Region.Replace(' ','').ToLower()) + '-' + $ipRange.Subnet.Replace('/','-')
 Add-AzureRmRouteConfig `
 -Name $routeName `
 -AddressPrefix $ipRange.Subnet `
 -NextHopType Internet `
 -RouteTable $RouteTable | Out-Null
 }

### Add default route for Edge Firewalls
 Add-AzureRmRouteConfig `
 -Name 'DefaultRoute' `
 -AddressPrefix 0.0.0.0/0 `
 -NextHopType VirtualAppliance `
 -NextHopIpAddress 10.10.10.10 `
 -RouteTable $RouteTable
 
### Include a routing configuration to give direct access to Microsoft's KMS servers for Windows activation
 Add-AzureRmRouteConfig `
 -Name 'AzureKMS' `
 -AddressPrefix 23.102.135.246/32 `
 -NextHopType Internet `
 -RouteTable $RouteTable

### Commit the route configuration changes to the route table (the subnet association follows below)
 Set-AzureRmRouteTable -RouteTable $RouteTable

$forcedTunnelVNet = $vNet.Subnets | Where-Object Name -eq $subnet.Name
 $forcedTunnelVNet.RouteTable = $RouteTable

### Update the virtual network with the new subnet configuration
 Set-AzureRmVirtualNetwork -VirtualNetwork $vnet -Verbose

}

How is this different from James’s?

I’ve made two changes to the original script. These changes are as follows:

I changed the authentication to use an Azure Automation account. This streamlined the deployment process so I could reuse the script across a number of subscriptions. This change was the following:

$cred = "AzureRunAsConnection"
try
{
 # Get the connection "AzureRunAsConnection "
 $servicePrincipalConnection=Get-AutomationConnection -Name $cred

"Logging in to Azure..."
 Add-AzureRmAccount `
 -ServicePrincipal `
 -TenantId $servicePrincipalConnection.TenantId `
 -ApplicationId $servicePrincipalConnection.ApplicationId `
 -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint 
}
catch {
 if (!$servicePrincipalConnection)
 {
 $ErrorMessage = "Connection $cred not found."
 throw $ErrorMessage
 } else{
 Write-Error -Message $_.Exception
 throw $_.Exception
 }
}

Secondly, I added an additional static route. This was for the default route (0.0.0.0/0) which is used to forward to our edge firewalls. This change was the following:

### Add default route for Edge Firewalls
 Add-AzureRmRouteConfig `
 -Name 'DefaultRoute' `
 -AddressPrefix 0.0.0.0/0 `
 -NextHopType VirtualAppliance `
 -NextHopIpAddress 10.10.10.10 `
 -RouteTable $RouteTable

You can re-use this section to add further custom static routes

Tying it all together

  • Hit the SAVE button and job done
    • Well, almost…
  • Next you should test the script
    • Select the TEST PANE from the top horizontal menu
    • A word of warning- this will go off and create the route tables and associate them with the subnets in your selected VNET!!!
    • Without testing though, you can’t confirm it works correctly
  • Should the test work out nicely, hit the publish button in the top hand menu
    • This gets the runbook ready to be used
  • Now go off and create a schedule
    • Azure public IP addresses can update often
    • It’s best to be ahead of the game and keep your route tables up to date
    • A regular schedule is recommended – I do once a week as the script only takes about 10-15min to run 
    • From the runbook top horizontal menu, select schedule
    • Create the schedule as desired (or script it with PowerShell – see the sketch after this list)
    • Save
  • JOB DONE! No really, that's it!
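If you prefer to script the schedule as well, something like the following sketch should work (automation account, resource group and runbook names are placeholders):

# Create a weekly schedule, starting tomorrow and recurring every Sunday
$schedule = New-AzureRmAutomationSchedule -AutomationAccountName "<automation-account>" -ResourceGroupName "<resource-group-name>" -Name "Weekly-RouteTable-Update" -StartTime (Get-Date).AddDays(1) -WeekInterval 1 -DaysOfWeek Sunday

# Link the schedule to the runbook created earlier
Register-AzureRmAutomationScheduledRunbook -AutomationAccountName "<automation-account>" -ResourceGroupName "<resource-group-name>" -RunbookName "<runbook-name>" -ScheduleName "Weekly-RouteTable-Update"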

Enjoy!

Best,

Lucian

Message retry patterns in Azure Functions

Azure Functions provide ServiceBus based trigger bindings that allow us to process messages dropped onto a SB queue or delivered to a SB subscription. In this post we’ll walk through creating an Azure Function using a ServiceBus trigger that implements a configurable message retry pattern.

Note: This post is not an introduction to Azure Functions nor an introduction to ServiceBus. For those not familiar with these Azure services, take a look at the Azure Documentation Centre.

Let’s start by creating a simple C# function that listens for messages delivered to a SB subscription.

create azure function

Azure Functions provide a number of ways we can receive the message into our functions, but for the purpose of this post we’ll use the BrokeredMessage type as we will want access to the message properties. See the above link for further options for receiving messages into Azure Functions via the ServiceBus trigger binding.

To use BrokeredMessage we’ll need to import the Microsoft.ServiceBus assembly and change the input type to BrokeredMessage.

If we did nothing else, our function would receive the message from SB, log its message ID and remove it from the queue. Actually, the SB trigger peeks the message off the queue, acquiring a peek lock in the process. If the function executes successfully, the message is removed from the queue. So what happens when things go pear shaped? Let’s add a pear and observe what happens.

Note: To send messages to the SB topic, I use Paolo Salvatori’s ServiceBus Explorer. This tool allows us to view queue and message properties mentioned in this post.

retry by default

Notice the function being triggered multiple times. This will continue until the SB queue’s MaxDeliveryCount is exceeded. By default, SB queues and topics have a MaxDeliveryCount of 10. Let’s output the delivery count in our function using a message property on the BrokeredMessage class so we can observe this in action.

outputing delivery count

From the logs we see the message was retried 10 times until the maximum number of deliveries was reached and ServiceBus expired the message, sending it to the dead letter queue. “Ah hah!”, I hear you say. Implement message retries by configuring the MaxDeliveryCount property on the SB queue or subscription. Well, that will work as a simple, static retry policy but quite often we need a more configurable, dynamic approach. One based on the message context or type of exception caught by the processing logic.

Typical use cases include handling business errors (e.g. message validation errors, downstream processing errors etc.) vs transport errors (e.g. downstream service unavailable, request timeouts etc.). When handling business errors we may elect not to retry the failed message and instead move it to the dead letter queue. When handling transport errors we may wish to treat transient failures (e.g. database connections) and protocol errors (e.g. 503 service unavailable) differently. Transient failures we may wish to retry a few times over a short period of time, whereas for protocol errors we might want to keep trying the service over an extended period.

To implement this capability we'll create a shared function that determines the appropriate retry policy based on the context of the exception thrown. It will then check the number of retry attempts against the maximum defined by the policy. If the retry attempts have been exceeded, the message is moved to the dead letter queue; otherwise the function waits for the duration defined by the policy before throwing the original exception.
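A minimal sketch of such a handler is below. The exception types and policy values here are assumptions for illustration (the actual policies should be configuration driven), but the shape of the logic is the same:

using System;
using System.Threading;
using Microsoft.ServiceBus.Messaging;

// Hypothetical exception types used to classify failures
public class BusinessRuleException : Exception { }
public class TransientException : Exception { }
public class ProtocolException : Exception { }

public class RetryPolicy
{
    public int MaxRetries { get; set; }
    public TimeSpan RetryInterval { get; set; }
}

public static class RetryHandler
{
    public static void Apply(BrokeredMessage message, Exception ex)
    {
        // Pick a policy based on the type of exception thrown by the processing logic
        RetryPolicy policy;
        if (ex is BusinessRuleException)
            policy = new RetryPolicy { MaxRetries = 0 };                                          // business errors: no retries
        else if (ex is TransientException)
            policy = new RetryPolicy { MaxRetries = 3, RetryInterval = TimeSpan.FromSeconds(1) }; // transient errors: a few quick retries
        else
            policy = new RetryPolicy { MaxRetries = 5, RetryInterval = TimeSpan.FromSeconds(3) }; // protocol errors: keep trying longer

        if (message.DeliveryCount > policy.MaxRetries)
        {
            // Retries exhausted (or not allowed): move the message to the dead letter queue
            message.DeadLetter("RetriesExceeded", ex.Message);
            return;
        }

        // Wait for the policy-defined interval, then rethrow so ServiceBus redelivers the message
        Thread.Sleep(policy.RetryInterval);
        throw ex;
    }
}

In the function's catch block you would then call something like RetryHandler.Apply(message, ex).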

Let’s change our function to throw mock exceptions and call our retry handler function to implement message retry policy.

Now our function implements some basic exception handling and checks for the appropriate retry policy to use. Let's test that our retry policies work by throwing the different exceptions our policy supports.

Testing throwing our mock business rules exception…

test mock business exception

…we observe that the message is moved straight to the dead letter queue as per our defined policy.

Testing throwing our mock protocol exception…

test mock protocol exception

…we observe that we retry the message a total of 5 times, waiting 3 seconds between retries as per the defined policy for protocol errors.

Considerations  

  • Ensure your SB queues and subscriptions are defined with a MaxDeliveryCount greater than your maximum number of retries.
  • Ensure your SB queues and subscriptions are defined with a TTL period greater than your maximum retry interval.
  • Be aware that Consumption-based service plans have a maximum execution duration of 5 minutes. App Service Plans don't have this constraint; however, ensure the functionTimeout setting in the host.json file is greater than your maximum retry interval.
  • Also be aware that if you are using Consumption based plans you will still be charged for time spent waiting for the retry interval (thread sleep).

Conclusion

In this post we have explored the behaviour of the ServiceBus trigger binding within Azure Functions and how we can implement a dynamic message retry policy. As long as you are willing to manage the deserialization of message content yourself (rather than have Azure Functions do it for you) you can gain access to the BrokeredMessage class and implement feature rich messaging solutions on the Azure platform.

API Mocking for Developers

APIs are the most common way to exchange messages in a microservices architecture. There are actually two different approaches to API development. One is called Model First and the other is called Design First. Usually the latter, AKA Spec-Driven Development (SDD), is preferred over the former.

When is the Model First approach useful? If you are running legacy API applications, that would be a good case for this approach. If those systems are well documented, API documents can be easily extracted by tools like Swagger, which has now been renamed to Open API. There are many implementations of Swagger – Swashbuckle, for example – so it's very easy to extract an API spec document using those tools.

What if we are developing a new API application? Should we develop the application and then extract its API spec document as above? Of course we can do that; there's no problem at all. But if the API spec is updated, what should we do? The application should be updated first and then the updated API spec extracted, which can be expensive. In this case the Design First approach is useful, because we can settle the whole API spec before the actual implementation, which reduces time and cost and achieves better productivity. Here's a rough outline of the SDD workflow:

  1. Design API spec based on the requirements.
  2. Simulate the API spec.
  3. Gather feedback from API consumers.
  4. Validate the API spec against the requirements.
  5. Publish the API spec.

There is an interesting point here. How can we run/simulate the API without the actual implementation? This is where API mocking comes in. By mocking APIs, front-end developers, mobile developers or other back-end developers who actually consume the APIs can simulate the expected results, regardless of whether the APIs are implemented yet. Mocking the APIs also makes it easier for those developers to give feedback.

In this post, we will have a look at how API mocking works in different API management tools – MuleSoft API Manager, Azure API Management and Amazon API Gateway – and discuss their pros and cons.

MuleSoft API Manager + RAML

MuleSoft API Manager is a strong supporter of the RAML (RESTful API Modelling Language) spec. RAML spec version 0.8 is the most popular, but spec version 1.0 has recently been released. RAML basically follows the YAML format, which brings more readability to API spec developers. Here's a simple RAML-based API spec document.
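The original post embeds the full spec; a trimmed RAML 1.0 sketch along the same lines (resource names and example values are illustrative) looks something like this:

#%RAML 1.0
title: Product API
version: v1
mediaType: application/json

/products:
  get:
    responses:
      200:
        body:
          example: |
            [ { "productId": 1, "name": "Surface Book" } ]
  post:
    responses:
      201:
        body:
          example: |
            { "productId": 2, "name": "Surface Pro" }
/products/{productId}:
  get:
    responses:
      200:
        body:
          example: |
            { "productId": 1, "name": "Surface Book" }
  patch:
    responses:
      200:
        body:
          example: |
            { "productId": 1, "name": "Surface Book 2" }
  delete:
    responses:
      204:
        description: Product deleted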

According to the spec document, the API has two different endpoints – /products and /products/{productId} – and they define GET, POST, PATCH and DELETE methods respectively. You might have spotted which part of the spec document will be used for mocking: each response body has an example node. These nodes are what MuleSoft API Manager uses for mocking. Let's see how we can mock both API endpoints.

First of all, login to Anypoint website. You can create a free trial account, if you want.

Navigate to API Manager by clicking either the button or icon.

Click the Add new API button to add a new API and fill the form fields.

After creating an API, click the Edit in API designer link to define APIs.

We have already got the API spec in RAML, so just import it.

The designer screen consists of three sections – left, centre, and right. The centre section shows the RAML document itself while the right section displays how RAML is visualised as API document. Now, click the Mocking Service button at the top-right corner to activate the mocking status. If mocking is enabled, the original URL is commented out and a new mocking URL is added like below:

Of course, when mocking is de-activated, the mocking URL disappears and the original URL is uncommented. With this mocking URL, we can actually send requests through Postman for simulation. In addition, developers consuming this API don't have to wait until it's implemented – they can just use it, and the API developer can develop in parallel with the front-end/mobile developers. This is a screenshot of using Postman to send API requests.

As defined in the example nodes of the RAML spec document, the pre-populated result pops out. In the same way, the other endpoints and/or methods return their mocked results.

So far, we have looked at how MuleSoft API Manager handles RAML and API mocking. It's very easy to use: we just imported a RAML spec document and switched on the mocking feature, and that's it. Mocking can't get easier than this. It also supports the Swagger 2.0 spec. When we import a Swagger document, it is automatically converted to a RAML 1.0 document. However, in the API designer, we still have to use RAML. It would be great if MuleSoft API Manager supported editing Swagger spec documents out of the box sooner rather than later, as Swagger is a de-facto standard nowadays.

There are a couple of downsides to using API Manager. We can't have precise control over individual endpoints and methods.

  • “I want to mock only the GET method on this specific endpoint!”
  • “Nah, it’s not possible here. Try another service.”

Also, the API designer changes the base URL of the API if we activate the mocking feature. That might be critical for other developers consuming the API in their development practice: they have to change the API URL once the API is implemented.

Now, let’s move onto the example supporting Swagger natively.

Azure API Management + Swagger

Swagger has now become Open API, and spec version 2.0 is the most popular. Recently a new spec version, 3.0, has been released for preview. Swagger is versatile – i.e. it supports both YAML and JSON formats. Therefore, we can design the spec in YAML and save it into a JSON file so that Azure API Management can import the spec document natively. Here's the same API document written in YAML based on the Swagger spec.

It looks very similar to RAML, and includes an examples node in each endpoint and/or method. Therefore, we can easily activate the mocking feature. Here's how.

First of all, we need to create an API Management instance on Azure Portal. It takes about half an hour for provisioning. Once the instance is fulfilled, click the APIs - PREVIEW blade at the left-hand side to see the list of APIs.

We can also see tiles for API registration. Click the Open API specification tile to register API.

Upload a Swagger definition file in JSON format, say swagger.json, and enter appropriate name and suffix. We just used shop as a prefix for now.

So, that's it! We just uploaded the swagger.json file and that completes the API definition in API Management. Now we need to mock an endpoint. Unlike MuleSoft API Manager, Azure API Management can handle mocking on endpoints and methods individually. Mocking is set in the Inbound processing tile, as it intercepts the request before it hits the back-end. Mocking can also be set at a global level rather than for individual operations. For now, we simply set up mocking on just the /products endpoint and the GET method.

Select /products - GET at the left-hand side and click the pencil icon at the top-right corner of the Inbound Processing tile. Then we’re able to see the screen below:

Click the Mocking tab, select the Static responses option on the Mocking behavior item, and choose the 200 OK option of the Sample or schema responses item, followed by Save. We can expect the content defined under the examples node. Once saved, the screen will display something like this:

In order to use the API Management, we have to send a special header key in every request. Go to the Products - PREVIEW blade and add the API that we’ve defined above.

Go to the Users - PREVIEW blade to get the Subscription Key.

We’re ready now. In Postman, we can see the result like:

The subscription key has been sent through the header with a key of Ocp-Apim-Subscription-Key and the result was what we expected above, which has already been defined in the Swagger definition.

So far, we have used Azure API Management with Swagger definition for mocking. The main differences between MuleSoft API Manager and Azure API Management are:

  • Azure API Management doesn't change the mocking endpoint URL, while MuleSoft API Manager does. That's actually very important for front-end/mobile developers because they don't get bothered with changing the base URL – it's the same URL whether it returns a mocked or an actual result, depending on the setting.
  • Azure API Management can mock each endpoint and method individually, so we only mock the necessary ones.

However, the downside of using Azure API Management is the cost. It costs more than $60/month on the developer pricing tier, which is the cheapest. MuleSoft API Manager is literally free to use from the mocking perspective (of course we have to pay for other MuleSoft services).

Well, what other service can we use together with Swagger? Of course there is Amazon AWS. Let's have a look.

Amazon API Gateway + Swagger

Amazon API Gateway also uses Swagger for its service. Login to the API Gateway Console and import the Swagger definition stated above.

Once imported, we can see all the API endpoints. We choose the /products endpoint with the GET method. Select the Mock option for the Integration type item, and click Save.

Now we're at the Method Execution screen. If we look at the rightmost side of the screen, it says Mock Endpoint, which is where this API will hit. Click the Integration Request tile.

We confirm this is mocked endpoint. Right below, select the When there are no templates defined (recommended) option of the Request body passthrough item.

Go back to the Method Execution screen and click the Integration Response tile.

There’s already a definition of the HTTP Status Code of 200 in Swagger, which is automatically showed upon the screen. Click the triangle icon at the left-hand side.

Open the Body Mapping Templates section and click the application/json item. By doing so, we’re able to see the sample data input field. We need to put a JSON object as a response example by hand.

I couldn't find any other way to automatically populate the sample data. In particular, an array-type response object can't be auto-generated. This is questionable: why doesn't Amazon API Gateway allow a valid object? If we want to avoid this situation, we have to update our Swagger definition, which makes it vendor dependent.

Once the sample data is updated, save it and go back to the Method Execution screen again. We are ready to use the mocking feature. Wait. One more step. We need to publish for public access.

Eh oh… We found another issue here. In order to publish (or deploy) the API, we have to set up the Integration Type of all endpoints and methods INDIVIDUALLY!!! We can't skip a single endpoint/method, and we can't set up a global mocking feature either. This is a simple example that only contains four endpoints/methods, but in a real life scenario there are hundreds of API endpoints/methods in one application. How can we set up individual endpoints/methods then? There's no easy way here.

Anyway, once deployment completes, API Gateway gives us a URL to access the API Gateway endpoints. Use Postman to see the result:

So far, we have looked at Amazon API Gateway for mocking. Let's wrap up this post.

  • Global API Mocking: MuleSoft API Manager provides a one-click button, while Amazon API Gateway doesn't provide such a feature.
  • Individual API Mocking: Both Azure API Management and Amazon API Gateway are good for handling individual endpoints/methods, while MuleSoft API Manager can't do this. Amazon API Gateway also doesn't support returning mocked data well, especially for array-type responses; Azure API Management supports this perfectly.
  • Uploading Automation of API Definitions: With Amazon API Gateway we have to manually update several places after uploading Swagger definition files, such as the examples used as mocked data. On the other hand, both Azure API Management and MuleSoft API Manager support this perfectly – there's no need for manual handling after uploading definitions.
  • Cost of API Mocking: Azure API Management is horrible from a cost perspective. MuleSoft provides a virtually free account for mocking, and Amazon API Gateway offers a free trial for the first 12 months.

We have briefly looked at how we can mock our APIs using just the spec documents. As we saw above, we don't need to write code for this: mocking relies purely on the spec document. We also looked at how mocking can be done on each platform – MuleSoft API Manager, Azure API Management and Amazon API Gateway – and discussed the merits and demerits of each service from the mocking perspective.

Know Your Cloud Resource Costs on Azure

Organisations used to invest in IT infrastructure mostly for computers, networks or data centres. Over time, they spent their budget on hosting space. Nowadays, in cloud environments, they mostly spend their funds purchasing computing power. Here's a simple diagram of the cloud computing evolution: from left to right, expenditure shifts from infrastructure to computing power.

In the cloud environment, when we need resources we just create and use them, and when we don't need them any longer we just delete them. But let's think about this. If your organisation runs dev, test and production environments in the cloud, the cost of resources running in dev or test is likely to be overlooked unless carefully monitored. In that case, your organisation might receive an invoice with a massive amount on it! That has to be avoided. In this post, we are going to have a look at the Azure Billing API that was released in preview and build a simple application to monitor costs in an effective way.

The sample codes used for this post can be found here.

Azure Billing API Structure

There are two distinctive APIs for Azure Billing – one is the Usage API and the other is the Rate Card API. By combining the two, we can calculate how much we spent during a particular period.

Usage API

This API is based on a subscription. Within a subscription, we can send a request to calculate how much resources we used in a specified period. Here are the parameters we can use for these requests.

  • ReportedStartTime: Starting date/time reported in the billing system.
  • ReportedEndTime: Ending date/time reported in the billing system.
  • Granularity: Either Daily or Hourly. Hourly returns a more detailed result but takes far longer to come back.
  • Details: Either true or false. This determines whether usage is split to the instance level or not. If false is selected, all instances of the same type are aggregated.

Here's an interesting point about the term Reported. When we USE cloud resources, that can be interpreted from two different perspectives. The term USE might mean that the resources were actually used at the specified date/time, or that the resource usage events were reported to the billing system at the specified date/time. This happens because Azure is basically a distributed system scattered all around the world, and depending on the data centre where the resources are situated, the actual usage date/time can be reported to the billing system in a delayed manner. Therefore, even though we send requests based on the reported date/time, the responses containing usage data show the actual usage date/time.

Rate Card API

When you opened a new Azure subscription, you might have noticed a code looking like MS-AZR-****P. Have you seen that code before? This is called the Offer Durable ID and, based on it, different rates apply to resources. Please refer to this page to see more details about the various types of offers. In order to send requests, we can use the following query parameters.

  • OfferDurableId: This is the offer Id. eg) MS-AZR-0017P (EA Subscription)
  • Currency: Currency that you want to look for. eg) AUD
  • Locale: Locale of your search region. eg) en-AU
  • Region: Two-letter ISO country code that you purchased this offer. eg) AU

Therefore, in order to calculate the actual spend, we need to combine these two API responses. Fortunately, there's a good NuGet library called CodeHollow.AzureBillingApi, so we just use it to figure out Azure resource consumption costs.

Scenario

Kloud, as a cloud consulting firm, offers all consultants access to the company's subscription without restriction so that they can create resources to develop/test scenarios for their clients. However, once resources are created, there's a high chance that they are not destroyed in a timely manner, which brings about unnecessary spending. Therefore, the management team has decided to control cost per resource group by 1) assigning resource group owners, 2) setting a total spend limit, and 3) setting a daily spend limit, using tags. By virtue of these tags, resource group owners are notified via email when cost approaches 90% of the total spend limit, and again when it reaches the total spend limit. They also get notified if the cost exceeds the daily spend limit, so they can take appropriate action for their resource groups.

Sounds simple, right? Let’s code it!

Once written, the application should run daily to aggregate all costs, store them in a database, and send notifications to the owners of resource groups that meet the conditions above.

Writing Common Libraries

The common libraries consist of three parts: calling the Azure Billing API and aggregating the data by date and resource group; storing that aggregated data in a database; and sending notifications to owners of resource groups that exceed either the total spend limit or the daily spend limit.

Azure Billing API Call & Data Aggregation

CodeHollow.AzureBillingApi removes a huge amount of the API-calling work. A simple implementation might look like the sketch below.
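A minimal sketch, assuming the library's surface looks roughly like its README example (the class, method and member names may differ slightly between versions, so check the library's documentation; the AAD application details and subscription id are placeholders):

using System;
using CodeHollow.AzureBillingApi;

public static class BillingClientSample
{
    public static void Run()
    {
        // Tenant, client id/secret, subscription id and redirect URL are placeholders.
        var client = new Client(
            "yourtenant.onmicrosoft.com",
            "[CLIENT ID]",
            "[CLIENT SECRET]",
            "[SUBSCRIPTION ID]",
            "http://localhost/billingapi");

        // Usage joined with rate card data for June 2017, aggregated daily, with instance details.
        var costs = client.GetResourceCosts(
            "MS-AZR-0017P", "AUD", "en-AU", "AU",
            new DateTime(2017, 6, 1), new DateTime(2017, 6, 30),
            AggregationGranularity.Daily, true);

        // Member names below are assumptions - adjust to the library version you use.
        Console.WriteLine(costs.Costs.Count);
    }
}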

First of all, as in the code above, we need to fetch all resource usage/cost data; then, as below, that data needs to be grouped by date and resource group.
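A sketch of that grouping step, using a hypothetical flattened DailyCost record rather than the library's own types (the mapping from the API response into this shape is omitted):

using System;
using System.Collections.Generic;
using System.Linq;

public class DailyCost
{
    public DateTime Date { get; set; }
    public string ResourceGroup { get; set; }
    public decimal Cost { get; set; }
}

public static class CostAggregator
{
    // Groups individual cost records by usage date and resource group and sums the cost.
    public static List<DailyCost> AggregateByDateAndResourceGroup(IEnumerable<DailyCost> items)
    {
        return items
            .GroupBy(c => new { c.Date.Date, ResourceGroup = c.ResourceGroup.ToLowerInvariant() })
            .Select(g => new DailyCost
            {
                Date = g.Key.Date,
                ResourceGroup = g.Key.ResourceGroup,
                Cost = g.Sum(x => x.Cost)
            })
            .ToList();
    }
}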

We now have all the cost-related data per resource group. We then need to fetch the tag values from the resource groups using another API call and merge them with the data previously populated.
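One way to read the tags is the fluent Azure management library (Microsoft.Azure.Management.Fluent); the choice of library is an assumption here, and the tag names used later – owner, totalSpendLimit and dailySpendLimit – are simply the ones assumed for this scenario:

using System.Collections.Generic;
using System.Linq;
using Microsoft.Azure.Management.Fluent;
using Microsoft.Azure.Management.ResourceManager.Fluent.Authentication;

public static class ResourceGroupTagReader
{
    // Lists all resource groups in the subscription together with their tags.
    public static Dictionary<string, IReadOnlyDictionary<string, string>> GetResourceGroupTags(
        AzureCredentials credentials, string subscriptionId)
    {
        var azure = Azure.Configure()
            .Authenticate(credentials)
            .WithSubscription(subscriptionId);

        return azure.ResourceGroups.List()
            .ToDictionary(rg => rg.Name.ToLowerInvariant(), rg => rg.Tags);
    }
}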

We can look up all resource groups in a given subscription as above, and then merge this result with the cost data we found earlier, as below.
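And a sketch of the merge, reusing the DailyCost record and the tag dictionary from the previous two sketches (the CostReportItem shape and the tag names are assumptions for this scenario):

using System;
using System.Collections.Generic;
using System.Linq;

public class CostReportItem
{
    public DateTime Date { get; set; }
    public string ResourceGroup { get; set; }
    public decimal Cost { get; set; }
    public string Owner { get; set; }
    public decimal TotalSpendLimit { get; set; }
    public decimal DailySpendLimit { get; set; }
}

public static class CostMerger
{
    // Joins aggregated daily costs with the tag values read from each resource group.
    public static List<CostReportItem> Merge(
        IEnumerable<DailyCost> costs,
        IReadOnlyDictionary<string, IReadOnlyDictionary<string, string>> tagsByResourceGroup)
    {
        return costs.Select(c =>
        {
            tagsByResourceGroup.TryGetValue(c.ResourceGroup, out var tags);
            return new CostReportItem
            {
                Date = c.Date,
                ResourceGroup = c.ResourceGroup,
                Cost = c.Cost,
                Owner = tags != null && tags.ContainsKey("owner") ? tags["owner"] : null,
                TotalSpendLimit = tags != null && tags.ContainsKey("totalSpendLimit")
                    ? decimal.Parse(tags["totalSpendLimit"]) : 0m,
                DailySpendLimit = tags != null && tags.ContainsKey("dailySpendLimit")
                    ? decimal.Parse(tags["dailySpendLimit"]) : 0m
            };
        }).ToList();
    }
}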

Data Storage

This is the simplest part. Just use Entity Framework and store the data in the database.
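For example, with Entity Framework 6 (the context name, connection string name and entity shape are assumptions):

using System.Collections.Generic;
using System.Data.Entity;

public class BillingDbContext : DbContext
{
    public BillingDbContext() : base("name=BillingDb") { }   // connection string name in the config file

    public DbSet<CostRecord> CostRecords { get; set; }
}

public class CostRecord
{
    public int Id { get; set; }
    public System.DateTime Date { get; set; }
    public string ResourceGroup { get; set; }
    public decimal Cost { get; set; }
}

public static class CostStore
{
    // Persists the aggregated records.
    public static void Save(IEnumerable<CostRecord> records)
    {
        using (var db = new BillingDbContext())
        {
            db.CostRecords.AddRange(records);
            db.SaveChanges();
        }
    }
}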

That covers the data aggregation and storage part.

Notification

First of all, we need to fetch the resource groups that meet the conditions, which is not hard to write.
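A sketch of that filter, over a hypothetical per-resource-group summary assumed to have been produced during aggregation (the 90% threshold matches the scenario above):

using System.Collections.Generic;
using System.Linq;

// Hypothetical per-resource-group summary produced during aggregation.
public class ResourceGroupSpend
{
    public string ResourceGroup { get; set; }
    public string Owner { get; set; }
    public decimal TotalSpendToDate { get; set; }
    public decimal TotalSpendLimit { get; set; }
    public decimal DailySpend { get; set; }
    public decimal DailySpendLimit { get; set; }
}

public static class SpendLimitChecker
{
    // Returns resource groups that 1) approach (>= 90% of) the total spend limit,
    // 2) exceed the total spend limit, or 3) exceed the daily spend limit.
    public static List<ResourceGroupSpend> FindBreaches(IEnumerable<ResourceGroupSpend> items)
    {
        return items.Where(i =>
                i.TotalSpendToDate >= i.TotalSpendLimit * 0.9m ||
                i.TotalSpendToDate >= i.TotalSpendLimit ||
                i.DailySpend >= i.DailySpendLimit)
            .ToList();
    }
}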

The code above is self-explanatory: it only returns resource groups that 1) approach the total spend limit, 2) exceed the total spend limit, or 3) exceed the daily spend limit. It works well, even though it looks smelly.

The following code bits show how to send notifications to the resource group owners.
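A minimal sketch of the notification loop over the ResourceGroupSpend records from the previous sketch – for now it just writes to the console, as noted below:

using System;
using System.Collections.Generic;

public static class SpendNotifier
{
    // Writes an alert per offending resource group; SendGrid or Twilio calls would slot in here.
    public static void Notify(IEnumerable<ResourceGroupSpend> breaches)
    {
        foreach (var rg in breaches)
        {
            Console.WriteLine(
                $"[ALERT] {rg.ResourceGroup} (owner: {rg.Owner}) - " +
                $"total {rg.TotalSpendToDate:C} of {rg.TotalSpendLimit:C}, " +
                $"today {rg.DailySpend:C} of {rg.DailySpendLimit:C}");
        }
    }
}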

It only writes alarms to the screen, but we could plug in SendGrid for email notifications or Twilio for SMS alerts here.

Now we’ve got the basic application structure. How can we execute it, by the way? We might have two approaches – Azure WebJobs and Azure Functions. Let’s move on.

Monitoring Application on Azure WebJob

A console application might be the simplest way for this purpose. Once the console app is built, it can be deployed to an Azure WebJob straight away. Here’s the simple console application code.
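A sketch of that console application, assuming the common libraries expose an AggregatorService and a ReminderService (the class and method names are placeholders):

using System;

public static class Program
{
    public static void Main(string[] args)
    {
        // Collect, aggregate and store today's cost data.
        var aggregator = new AggregatorService();
        aggregator.ProcessAsync().GetAwaiter().GetResult();

        // Notify resource group owners who have breached their limits.
        var reminder = new ReminderService();
        reminder.ProcessAsync().GetAwaiter().GetResult();

        Console.WriteLine($"Cost monitoring run completed at {DateTime.UtcNow:u}");
    }
}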

The Aggregator service collects and stores the data, and the Reminder service sends alerts to resource group owners. In order to deploy this as an Azure WebJob, we need to create two extra files, run.cmd and settings.job.

  • settings.job: This contains the CRON expression for the scheduled job. For example, if this WebJob runs every night at 00:20, the JSON object might look like the example after this list.

  • run.cmd: When this WebJob runs, it always looks for run.cmd first, which is a simple batch command file. Therefore, if necessary, we can put the actual executable command with appropriate arguments into this file, as in the example after this list.
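For the 00:20 nightly schedule mentioned above, the two files could look like this (the executable name in run.cmd is a placeholder for whatever your console application is called):

settings.job

{
  "schedule": "0 20 0 * * *"
}

run.cmd

CostMonitor.ConsoleApp.exe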

That’s how we can use Azure WebJob for monitoring.

Monitoring Application on Azure Function

We can use Azure Functions instead. But in this case we HAVE TO make sure:

The Azure Functions instance MUST be on an App Service Plan, NOT a Consumption Plan

Basically, this app runs for 1-2 minutes at the shortest and 30-40 minutes at the longest. That execution profile is not a good fit for the Consumption Plan, which charges based on execution time and also caps how long a single execution can run. On the other hand, as we have already paid for an App Service Plan, we don't pay anything extra for the Function instance if we create it under that plan.

A Timer Trigger function suits our purpose, and using the precompiled Azure Functions approach is even more helpful. The function code might look like the sketch below.
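A sketch of a precompiled Timer Trigger function, reusing the same placeholder Aggregator and Reminder services as the console app (the namespace and class names are assumptions):

using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Timers;
using Microsoft.Azure.WebJobs.Host;

namespace CostMonitor.Functions
{
    public static class CostMonitorFunction
    {
        // The TimerInfo parameter name must match the binding name in function.json below.
        public static void Run(TimerInfo myTimer, TraceWriter log)
        {
            log.Info($"Cost monitor triggered at {DateTime.UtcNow:u}");

            var aggregator = new AggregatorService();
            aggregator.ProcessAsync().GetAwaiter().GetResult();

            var reminder = new ReminderService();
            reminder.ProcessAsync().GetAwaiter().GetResult();
        }
    }
}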

Here’s the function.json for this Timer Trigger one:
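A minimal function.json to match – the scriptFile path and entryPoint reflect the placeholder names used in the sketch above, and the schedule is the same 00:20 nightly CRON expression:

{
  "bindings": [
    {
      "name": "myTimer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 20 0 * * *"
    }
  ],
  "scriptFile": "..\\bin\\CostMonitor.Functions.dll",
  "entryPoint": "CostMonitor.Functions.CostMonitorFunction.Run",
  "disabled": false
}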

Here we have shown how to quickly write a simple application for cost monitoring using the Azure Billing API. Cloud resources can certainly be used effectively and efficiently, but the flip side is, of course, that we have to be very careful not to be wasteful. Implementing a monitoring application like this helps prevent unwanted cost leakage.

Automating Source IP Address updates on an Azure Network Security Group RDP Access Rule

Recently I've migrated a bunch of VirtualBox virtual machines to Azure, as detailed here. These VMs sit in Resource Groups with an associated Network Security Group that restricts RDP access to them based on a source IP address. All good practice. However, from a usability perspective, when I want to use these VMs I'm not always in the same location, and rarely on a connection with a static IP address.

This post details a simple little script that:

  • Has a few variables for the Resource Group, Network Security Group, Virtual Machine name and the RDP configuration file associated with the VM
  • Gets the public IP Address of the machine I’m running the script from
  • Prompts for Authentication to Azure, and retrieves the NSG associated with the Resource Group
  • Compares the Source IP Address in the ‘RDP’ Inbound Rule to my current IP Address. If they aren’t a match it updates the Source IP Address to be my current public IP Address
  • Starts the Virtual Machine configured at the start of the script
  • Launches Remote Desktop using the RDP Configuration file

The Script

Here's the raw script. Update the variables at the top (lines 2-8) for your environment and away you go. Simple but useful, as is often the way.
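A sketch of what such a script looks like, assuming the AzureRM PowerShell module, an inbound rule named 'RDP' on the NSG, and ipify.org as the public IP lookup service (all assumptions – adjust to suit your environment):

# Variables - update these for your environment (hypothetical values shown)
$resourceGroup = 'MyResourceGroup'
$nsgName       = 'MyNSG'
$ruleName      = 'RDP'
$vmName        = 'MyVM'
$rdpFile       = 'C:\RDP\MyVM.rdp'

# Get the public IP address of the machine running this script
$myIp = (Invoke-RestMethod -Uri 'https://api.ipify.org?format=json').ip

# Authenticate to Azure and retrieve the NSG and its RDP rule
Login-AzureRmAccount
$nsg  = Get-AzureRmNetworkSecurityGroup -Name $nsgName -ResourceGroupName $resourceGroup
$rule = Get-AzureRmNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg -Name $ruleName

# Update the source address on the rule only if it has changed
if ($rule.SourceAddressPrefix -ne $myIp) {
    Set-AzureRmNetworkSecurityRuleConfig -NetworkSecurityGroup $nsg -Name $ruleName `
        -Access Allow -Protocol Tcp -Direction Inbound -Priority $rule.Priority `
        -SourceAddressPrefix $myIp -SourcePortRange '*' `
        -DestinationAddressPrefix '*' -DestinationPortRange 3389 | Out-Null
    Set-AzureRmNetworkSecurityGroup -NetworkSecurityGroup $nsg | Out-Null
}

# Start the VM and launch the RDP session
Start-AzureRmVM -ResourceGroupName $resourceGroup -Name $vmName
Start-Process mstsc -ArgumentList $rdpFile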

Getting Azure 99.95% SLA for Cisco FTD virtual appliances in Azure via availability sets and ARM templates

First published on Lucian’s blog at clouduccino.com. Follow Lucian on Twitter: @LucianFrango or connect via LinkedIn: Lucian Franghiu.


In the real world there are numerous lessons learned, experiences, opinions and vendor recommendations that dictate what constitutes "best practice" when it comes to internet edge security. It's a can of worms that I don't want to open, as I'm not claiming to be an expert in that regard. I can say that I have enough experience to know that not having any security is a really bad idea, and that having bank-level security for regular enterprise customers can be excessive.

I've been working with an enterprise customer that falls pretty much in the middle of that dichotomy. They are a regular large enterprise organisation that is concerned about internet security and has little experience with Azure. That said, the built-in tools and software-defined networking principles of Azure don't meet the requirements they've set. So, to accommodate those requirements, moving from Azure NSGs and WAFs and all the goodness that Azure provides to dedicated virtual appliances was not difficult, but it did require a lot of thinking and working with various team members and 3rd parties to get the result.

Cisco Firepower Threat Defence Virtual for Microsoft Azure

From what I understand, Cisco's next generation firewall has been in the Azure Marketplace for about 4 months now, maybe a little longer. The timeline isn't that much of a concern in itself; rather, it's a consideration in that it relates to the maturity of the product. Unlike competitors, it does lag behind in some features.

The firewalls themselves, Cisco Firepower Threat Defence Virtual for Microsoft Azure, are Azure-specific, Azure Marketplace-available images of the virtual appliances Cisco has made for some time. Again, the background is not that important; it's just the foundational knowledge for the following:

Cisco FTDv supports 4 x network interfaces in Azure. These interfaces include:

  • A management interface (Nic0) – cannot route traffic over this
  • A diagnostics interface (Nic1) – again, cannot route traffic over this. I found this out the hard way…
  • An external / untrusted interface (Nic2)
  • An internal / trusted interface (Nic3)

So we have a firewall that essentially is an upgraded Cisco ASA (Cisco Adaptive Security Appliance) with expanded feature sets unlocked through licensing. An already robust product with new features.

The design

Availability is key in the cloud. Scale-out dominates scale-up methodologies and, as the old maxim goes, two is better than one. For a customer, I put together the following design to leverage Azure availability sets (to guarantee uptime of at least one instance in the set, and to guarantee that those instances sit on separate underlying Azure physical infrastructure) and so achieve a level of availability higher than a single instance. NOTE: Cisco FTDv does not support high availability out of the box and is not a stateful appliance in Azure.

Implementation

To deploy a Cisco FTDv in Azure, the quick and easy way is to use the Azure Marketplace and deploy through the portal. It’s a quick and pretty much painless process. To note though, here are some important pieces of information when deploying these virtual appliances from the Azure marketplace:

  • There is essentially only one deployment option for the size of instances – Standard_D3 or Standard_D3v2 – the difference being SSD vs HDD (with other differences between v1 and v2 series coming by way of more available RAM in certain service plans)
  • YOU MUST deploy the firewall in a resource group WITH NO OTHER RESOURCES in that group (from the portal > Marketplace)
  • Each interface MUST be on a SUBNET – so when deploying your VNET, you need to have 4 subnets available
    • ALSO, each interface MUST be on a UNIQUE subnet – again, 4 subnets, can’t double up – even with the management and diagnostic interfaces
  • Deploying the instance into an Azure availability set is NOT AVAILABLE (from the portal > Marketplace)

Going through the wizard is relatively painless and straightforward, and within 15-20 minutes you can have a firewall provisioned and ready to connect to your on-premises management server. Yes, another thing to note is that the appliance is managed from Firepower Management Centre (FMC). The FMC, from what I have read, cannot be deployed in Azure at this time. However, I've not looked into that tidbit too much, so I may be wrong there.

The problem

In my design I have a requirement for two appliances. These appliances would be in a farm, which is supported in the FMC, and the two appliances can have common configuration applied to both devices – stuff like allow/deny rules. In Azure, without an availability set, there is a small chance, however a chance nonetheless, that both devices could somehow be automagically provisioned in the same rack, on the same physical server infrastructure, in the Australia East region (my local region).

Availability is a rather large requirement, and ensuring it was maintained across upwards of 500+ instances for the customer I was working with was a tricky proposition. Here's how I worked around the problem at hand, given that Cisco do not officially state that they "do not support availability sets".

The solution

Pretty much all resources when working with the Azure Portal have a very handy tab under their properties. I use this tab a lot. It’s the Automation Script section of the properties blade of a resource.

Automation script

After I provisioned a single firewall, I reviewed the Automation Script blade of the instance. There is plenty of good information there. What is particularly handy to know is the following:

 },
 "storageProfile": {
   "imageReference": {
     "publisher": "cisco",
     "offer": "cisco-ftdv",
     "sku": "ftdv-azure-byol",
     "version": "620362.0.0"
   },
   "osDisk": {
     "osType": "Linux",
     "name": "[concat(parameters('virtualMachines_FW1_name'),'-disk')]",
     "createOption": "FromImage",

So with that, we have all the key information we need to leverage ARM templates to deploy the firewalls. In practice, I copied the entire 850-line Automation Script JSON file and put it into Atom. Then I did the following:

  • Reviewed the JSON file and cleaned it up to match the naming standard for the customer
    • This applied to various resources that were provisioned: NIC, NSGs, route tables etc
  • I copied and added to the file an addition for adding the VM to availability set
    • The code for that is below
  • I then removed the firewall instance and all of the instance-specific resources it created from Azure
    • I manually removed the NICs
    • I manually removed the disks from Blob storage
  • I then used Visual Studio to create a new project
  • I copied the JSON file into the azuredeploy.json file
  • Updated my parameters file (azuredeploy.parameters.json)
  • Finally, proceeded to deploy from template

Lo and behold, the firewall instance provisioned just fine and there was indeed an availability set associated with it. Additionally, when I provisioned the second appliance I followed the same process, and both are now in the same availability set. This makes using the Azure Load Balancer nice and easy! Happy days!

For your reference, here's the availability set JSON I added to my file:

 "parameters": {
   "availabilitySetName": {
     "defaultValue": "FW-AS",
     "type": "string"
   }
 },

Then you need to add the following under "resources":

 "resources": [
   {
     "type": "Microsoft.Compute/availabilitySets",
     "name": "[parameters('availabilitySetName')]",
     "apiVersion": "2015-06-15",
     "location": "[resourceGroup().location]",
     "properties": {
       "platformFaultDomainCount": 2,
       "platformUpdateDomainCount": 2
     }
   },

Then, in the resource of "type": "Microsoft.Compute/virtualMachines", add an availabilitySet reference under "properties":

 "properties": {
   "availabilitySet": {
     "id": "[resourceId('Microsoft.Compute/availabilitySets', parameters('availabilitySetName'))]"
   },

and, at the resource level (alongside "properties", not inside it), add the dependency:

 "dependsOn": [
   "[resourceId('Microsoft.Compute/availabilitySets', parameters('availabilitySetName'))]",

Those are really the only things that need to be added to the ARM template. It’s quick and easy!

BUT WAIT, THERE'S MORE!

No, I'm not talking about throwing in a set of steak knives, but there is a little more to this that you, dear reader, need to be aware of.

Once you deploy the firewall and the creation process finalises and its state is Running, there is an additional challenge. When deploying via the Marketplace, the firewall enters Advanced User mode and can be connected to the FMC. I'm sure you can guess where this is going… When deploying the firewall via an ARM template, the same mode is not entered. You get the following error message:

User [admin] is not allowed to execute /bin/su/ as root on deviceIDhere

After much time digging through Cisco documentation, which I am sorry to say is not up to standard, Cisco TAC were able to help. The following command needs to be run in order to get into the correct mode:

~$ su admin
Password: [enter Admin123 here – the default admin password, not the password you set]

Once you have entered the correct mode, you can add the device to the FMC with the following:

~$ configure manager add [IP address of FMC] [key - one time use to add the FW, just a single word]

The summary

I appreciate that speciality network vendors provide really good quality products to manage network security. Due to limitations in the Azure fabric, though, not all of them work 100% as expected. From a purist's point of view, NSGs and Azure's software-defined networking solutions, with the wealth of features provided, work amazingly well out of the box.

The cloud is still new to a lot of people. That trust that network admins place in tried and true vendors and products is just not there yet with BSOD Microsoft. In time I feel it will be. For now though, deploying virtual appliances can be a little tricky to work with.

Happy networking!

Lucian

Calling WCF client proxies in Azure Functions

Azure Functions allow developers to write discrete units of work and run these without having to deal with hosting or application infrastructure concerns. Azure Functions are Microsoft’s answer to server-less computing on the Azure Platform and together with Azure ServiceBus, Azure Logic Apps, Azure API Management (to name just a few) has become an essential part of the Azure iPaaS offering.

The problem

Integration solutions often require connecting legacy systems using deprecated protocols such as SOAP and WS-*. It's not all REST, hypermedia and OData out there in the enterprise integration world. Development frameworks like WCF help us deliver solutions rapidly by abstracting much of the boilerplate code away from us. Often these frameworks rely on custom configuration sections that are not available when developing solutions in Azure Functions. In Azure Functions (as of today at least) we only have access to the generic appSettings and connectionStrings sections of the configuration.

How do we bridge the gap and use the old boiler plate code we are familiar with in the new world of server-less integration?

So let’s set the scene. Your organisation consumes a number of legacy B2B services exposed as SOAP web services. You want to be able to consume these services from an Azure Function but definitely do not want to be writing any low level SOAP protocol code. We want to be able to use the generated WCF client proxy so we implement the correct message contracts, transport and security protocols.

In this post we will show you how to use a generated WCF client proxy from an Azure Function.

Start by generating the WCF client proxy in a class library project using Add Service Reference, provide details of the WSDL and build the project.

Examine the generated bindings to determine the binding we need and what policies to configure in code within our Azure Function.

In our sample service above we need to create a basic http binding and configure basic authentication.

Create an Azure Function App using an appropriate template for your requirements and follow these steps to call your WCF client proxy:

Add the System.ServiceModel NuGet package to the function via the project.json file so we can create and configure the WCF bindings in our function
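As a rough example only – the exact package id and version that bring in the ServiceModel types for your bindings may differ, so treat these as placeholders:

{
  "frameworks": {
    "net46": {
      "dependencies": {
        "System.ServiceModel.Http": "4.4.0"
      }
    }
  }
}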

Add the WCF client proxy assembly to the ./bin folder of our function. Use Kudu to create the folder and then upload your assembly using the View Files panel.

In your function, add references to both the System.ServiceModel assembly and your WCF client proxy assembly using the #r directive
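In a C# script (run.csx) function, those directives sit at the very top of the file; the proxy assembly name here is a placeholder:

#r "System.ServiceModel"
#r "BookingService.Client.dll"   // the WCF client proxy assembly uploaded to ./bin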

When creating an instance of the WCF client proxy, instead of specifying the endpoint and binding in a config file, create these in code and pass to the constructor of the client proxy.

Your function will look something like this:
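Something along these lines – a sketch only, with a hypothetical BookingServiceClient proxy, a hypothetical service operation and placeholder app setting names; match the binding and security mode to whatever your generated bindings showed earlier:

#r "System.ServiceModel"
#r "BookingService.Client.dll"   // hypothetical generated proxy assembly uploaded to ./bin

using System;
using System.Net;
using System.Net.Http;
using System.ServiceModel;
using System.Threading.Tasks;
using BookingService.Client;     // hypothetical namespace of the generated proxy

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    // Build the binding in code instead of relying on a <system.serviceModel> config section
    var binding = new BasicHttpBinding(BasicHttpSecurityMode.Transport);
    binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Basic;

    // Endpoint address and credentials come from the Function App's appSettings
    var endpoint = new EndpointAddress(Environment.GetEnvironmentVariable("LegacyServiceEndpoint"));

    var client = new BookingServiceClient(binding, endpoint);   // hypothetical proxy class
    client.ClientCredentials.UserName.UserName = Environment.GetEnvironmentVariable("LegacyServiceUsername");
    client.ClientCredentials.UserName.Password = Environment.GetEnvironmentVariable("LegacyServicePassword");

    var bookings = await client.GetBookingsAsync();              // hypothetical service operation
    return req.CreateResponse(HttpStatusCode.OK, bookings);
}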

Lastly, add the endpoint address and client credentials to the appSettings of your Azure Function App.

Test the function using the built-in test harness to check the function executes ok

Conclusion

The suite of integration services available on the Azure platform is developing rapidly, and composing your future integration platform on Azure is a compelling option in a maturing iPaaS marketplace.

In this post we have seen how we can continue to deliver legacy integration solutions using emerging integration-platform-as-a-service offerings.