Putting SQL to REST with Azure Data Factory

Microsoft’s integration stack has slowly matured over the past few years, and we’re on the verge of finally breaking away from BizTalk Server. Or are we? In this article I’m going to explore Azure Data Factory (ADF). Rather than showing the usual out-of-the-box demo, I’m going to demonstrate a real-world scenario that I recently encountered at one of Kloud’s customers.

ADF is an easy-to-use and cost-effective solution for simple integration scenarios that are best described as ETL in the ‘old world’. ADF can run at large scale and has a series of connectors to load data from a source, apply a simple mapping and load the transformed data into a target destination.

ADF is limited in terms of standard connectors and (currently) has no functionality to send data to HTTP/RESTful endpoints. Data can be sourced from HTTP endpoints, but in this case we’re going to read data from a SQL Server database and write it to an HTTP endpoint.

Unfortunately ADF tooling isn’t available in VS2017 yet, but you can download the Microsoft Azure DataFactory Tools for Visual Studio 2015 here. Next we’ll use the extremely useful third-party library ‘Azure.DataFactory.LocalEnvironment’, which can be found on GitHub. This library allows you to debug ADF projects locally and eases deployment by generating ARM templates. The easiest way to get started is to open the sample solution and modify it accordingly.

You’ll also need to set up an Azure Batch account and a storage account according to the Microsoft documentation. Azure Batch runs your execution host engine, which effectively runs your custom activities on one or more VMs in a pool of nodes. The storage account will be used to deploy your custom activity, and is also used for ADF logging purposes. We’ll also create an Azure SQL AdventureWorksLT database to read some data from.

Using the VS templates we’ll create the following artefacts:

  • AzureSqlLinkedService (AzureSqlLinkedService1.json)
    This is the linked service that connects the source with the pipeline, and contains the connection string to connect to our AdventureWorksLT database.
  • WebLinkedService (WebLinkedService1.json)
    This is the linked service that connects the pipeline to the target. ADF doesn’t support this type as an output service, so we only use it as a reference from our HTTP table so that it passes schema validation.
  • AzureSqlTableLocation (AzureSqlTableLocation1.json)
    This contains the table definition of the Azure SQL source table.
  • HttpTableLocation (HttpTableLocation1.json)
    The tooling doesn’t contain a specific template for HTTP tables, but we can manually tweak any table template to represent our target (JSON) structure.

AzureSqlLinkedService

AzureSqlTable

Furthermore, we’ll adjust the DataDownloaderSamplePipeline.json to use the input and output tables that are defined above. We’ll also set our schedule and add a custom property to define a column mapping that allows us to map between input columns and output fields.

The grunt work of the solution is performed in the DataDownloaderActivity class, where custom .NET code ‘wires together’ the input and output data sources and performs the actual copying of data. The class uses a SqlDataReader to read records and copies them in chunks as JSON to our target HTTP service. For demonstration purposes I am using the RequestBin service to verify that the output data made its way to the target destination.
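
A trimmed-down sketch of what that core copy loop could look like is shown below. This is illustrative only – the method name, query and batch size are assumptions, and the ADF IDotNetActivity plumbing, connection-string retrieval and column mapping from the full solution are omitted:

using System.Collections.Generic;
using System.Data.SqlClient;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json;

public static async Task CopyRowsAsync(string sqlConnectionString, string targetUrl, int batchSize = 100)
{
    using (var connection = new SqlConnection(sqlConnectionString))
    using (var command = new SqlCommand("SELECT CustomerID, FirstName, LastName FROM SalesLT.Customer", connection))
    using (var client = new HttpClient())
    {
        connection.Open();
        using (var reader = command.ExecuteReader())
        {
            var batch = new List<Dictionary<string, object>>();
            while (reader.Read())
            {
                // Copy every column of the current row into a dictionary keyed by column name.
                var row = new Dictionary<string, object>();
                for (var i = 0; i < reader.FieldCount; i++)
                {
                    row[reader.GetName(i)] = reader.GetValue(i);
                }
                batch.Add(row);

                // Flush the chunk to the HTTP endpoint as a JSON array once it reaches the batch size.
                if (batch.Count == batchSize)
                {
                    var content = new StringContent(JsonConvert.SerializeObject(batch), Encoding.UTF8, "application/json");
                    await client.PostAsync(targetUrl, content);
                    batch.Clear();
                }
            }

            // Send any remaining rows.
            if (batch.Count > 0)
            {
                var content = new StringContent(JsonConvert.SerializeObject(batch), Encoding.UTF8, "application/json");
                await client.PostAsync(targetUrl, content);
            }
        }
    }
}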

We can deploy our solution via PowerShell, or the Visual Studio 2015 tooling if preferred:

NewADF
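
For reference, a rough sketch of the PowerShell route is below. It assumes the classic (v1) AzureRM.DataFactories cmdlets and uses placeholder names; the ARM template generated by the LocalEnvironment library achieves the same result:

# Placeholder names; run from the folder containing the generated JSON artefacts
$rgName = "adf-demo-rg"
$dfName = "adf-demo-datafactory"

# Create the data factory itself
New-AzureRmDataFactory -ResourceGroupName $rgName -Name $dfName -Location "<region>"

# Deploy the linked services, datasets (tables) and pipeline from their JSON definitions
New-AzureRmDataFactoryLinkedService -ResourceGroupName $rgName -DataFactoryName $dfName -File .\AzureSqlLinkedService1.json
New-AzureRmDataFactoryLinkedService -ResourceGroupName $rgName -DataFactoryName $dfName -File .\WebLinkedService1.json
New-AzureRmDataFactoryDataset -ResourceGroupName $rgName -DataFactoryName $dfName -File .\AzureSqlTableLocation1.json
New-AzureRmDataFactoryDataset -ResourceGroupName $rgName -DataFactoryName $dfName -File .\HttpTableLocation1.json
New-AzureRmDataFactoryPipeline -ResourceGroupName $rgName -DataFactoryName $dfName -File .\DataDownloaderSamplePipeline.json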

After deployment we can see the data factory appearing in the portal, and use the monitoring feature to see our copy tasks spinning up according to the defined schedule:

ADF Output

In the Request Bin that I created I can see the output batches appearing one at a time:

RequestBinOutput

As you might notice, it’s not all that straightforward to compose and deploy a custom activity, and having to rely on Azure Batch can incur significant cost unless you adopt the right auto-scaling strategy. Although the solution requires us to write code and implement our connectivity logic ourselves, we are able to leverage some nice platform features such as a reliable execution host, retry logic, scaling, logging and monitoring, all accessible through the Azure portal.

The complete source code can be found here. The below gists show the various ADF artefacts and the custom .NET activity.

The custom activity C# code:

Azure Functions Logging to Application Insights

We’re going to have a look at several ways to integrate Application Insights (AppInsights) with Azure Functions (Functions).

Functions supports built-in logging through a TraceWriter instance. A basic sample function might look like this:
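
The original gist isn’t reproduced here, but the default HTTP-triggered C# function template gives the idea – an HttpRequestMessage in, a TraceWriter for logging:

using System.Linq;
using System.Net;
using System.Net.Http;
using Microsoft.Azure.WebJobs.Host;

public static HttpResponseMessage Run(HttpRequestMessage req, TraceWriter log)
{
    // Write an informational entry to the Functions log console
    log.Info("C# HTTP trigger function processed a request.");

    // Read the 'name' parameter from the query string
    string name = req.GetQueryNameValuePairs()
                     .FirstOrDefault(q => string.Compare(q.Key, "name", true) == 0)
                     .Value;

    return name == null
        ? req.CreateResponse(HttpStatusCode.BadRequest, "Please pass a name on the query string")
        : req.CreateResponse(HttpStatusCode.OK, $"Hello {name}");
}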

With TraceWriter, we can log information to the log console like:

However, it has a maximum limit of 1,000 records. This is good for simple debugging purposes, but not for proper logging. Therefore, we should store logs somewhere more durable, like a database or a storage account. Fortunately, AppInsights integration has recently been added to Functions as a preview to overcome these limitations. Let’s have a look.

Application Insights Integration

According to the documentation, it’s really easy.

  1. Create an AppInsights instance. Its type MUST be General.
  2. Add a new key of APPINSIGHTS_INSTRUMENTATIONKEY to the AppSettings section of the Function instance.
  3. Set that key’s value to the Instrumentation Key of the AppInsights instance.

This is it. Once it’s done, simply execute some functions and wait up to 5 minutes for the aggregated results. Then go to the AppInsights blade and find a graph looking like this:

Can’t be easier, huh?

ARM Template Setup for DevOps Engineers

We can add the Instrumentation Key manually as above. However, this is not ideal from a CI/CD point of view. Instead, setting the key within an ARM template is more effective and efficient. Here’s a cut-down version of a sample ARM template:

As we can see above, we can inject the Instrumentation Key directly within the ARM template without ever having to know its value. If we want to know more about ARM templates, this official document is a good starting point.

ILogger Integration

ILogger is a logging abstraction from ASP.NET Core. As it supports .NET Standard 1.1, Functions has recently introduced ILogger version 1.1.1. With this, we can plug in virtually any logging library. Functions provides AppInsights logging through this interface. In other words, we can simply replace the TraceWriter instance with an ILogger one in order to send all logs to AppInsights.

This is also really easy. Simply replace TraceWriter with ILogger in the Function parameter and change the method name from log.Info() to log.LogInformation():
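
For illustration (the original gist isn’t reproduced here; the namespaces used are assumptions for a run.csx script):

using System.Net;
using System.Net.Http;
using Microsoft.Extensions.Logging;

public static HttpResponseMessage Run(HttpRequestMessage req, ILogger log)
{
    // Same entry as before, but routed through ILogger and on to AppInsights
    log.LogInformation("C# HTTP trigger function processed a request.");

    return req.CreateResponse(HttpStatusCode.OK, "Logged via ILogger");
}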

If we still want to keep the log.Info() method name, that’s fine. Simply create an extension method like:
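
A possible shape for that extension method (a sketch, not the original gist; depending on the tooling, the static class may need to live in a separate .csx or referenced assembly rather than inline in run.csx):

using Microsoft.Extensions.Logging;

public static class LoggerExtensions
{
    // Keeps the familiar log.Info() call by delegating to ILogger.LogInformation()
    public static void Info(this ILogger logger, string message)
    {
        logger.LogInformation(message);
    }
}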

And use the extension method like below:
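
Something along these lines (again a sketch rather than the original gist):

public static HttpResponseMessage Run(HttpRequestMessage req, ILogger log)
{
    // Same call shape as the TraceWriter version, now backed by ILogger
    log.Info("C# HTTP trigger function processed a request.");

    return req.CreateResponse(System.Net.HttpStatusCode.OK);
}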

Once everything is done, deploy the Function again and run Functions several times. Then check out the Function log console:

We have exactly the same experience as before. Of course, we can see additional logging details on the AppInsights blade:

As mentioned above, Functions has implemented the ILogger interface. That means we might be able to add third-party logging libraries such as Serilog. Unfortunately, at the time of writing, we can’t use those third-party ones. But the Azure Functions team has started looking at those implementations, according to this issue. Hopefully this feature is released soon.

Another known issue around the ILogger implementation on Functions is that the Azure Functions Core Tools, which help with local debugging, don’t display logs in the console. So don’t panic if no log is displayed on your local console; logging displays as expected when you deploy your Functions to Azure. The Azure Functions team is working hard to fix this issue sooner rather than later.

So far, we have walked through a few ways to integrate AppInsights with Azure Functions. As it’s still in preview, features around this may change over time until GA. But I’m pretty sure this logging integration with AppInsights will be quite useful and powerful.

Integration of Microsoft Identity Manager with Azure Platform-as-a-Service Services

Overview

This isn’t an out-of-the-box solution. This is a bespoke solution that takes a number of elements and puts them together in a unique way. I’m not expecting anyone to implement this specific solution (though you’re more than welcome to), but rather to take inspiration from it to implement solutions relevant to your environment(s). This post supports a presentation I did to The MIM Team User Group on 14 June 2017.

This post describes a solution that;

  • Leverages an Azure WebApp (NodeJS) to present a simple website. That site can be integrated easily into the FIM/MIM Portal
  • The NodeJS website leverages an Azure Function App to get a list of users from the FIM/MIM Synchronization Server and allows the user to use typeahead functionality to find the user they want to generate a FIM/MIM object report on
  • On selection of a user, a request will be sent to another Azure Function App to generate and return the report to the user in a new browser window

This is shown graphically below.

 

Report Request UI

The NodeJS WebApp is integrated into the FIM/MIM portal. Bootstrap Typeahead is used to find the user to generate a report on. The Typeahead user list is fulfilled by an Azure Function that queries the MIM Sync Metaverse. The Generate Report button fires off a call to FIM/MIM via another Azure Function, into the MIM Sync and MIM Service, to generate the report.

The returned report opens in a new tab in the user’s browser. The report contains details of the FIM/MIM connectors the user is represented on.

The values of all attributes for the user’s hologram from the Metaverse are displayed, along with the MA the value came from and the last modified date.

Finally, the report shows the metadata from the MIM Service MA Connector Space and the MIM Service.

Prerequisites

These are numerous, but I’ve previously posted about them. You will need;

I encourage you to digest those posts to understand how to configure the prerequisites for this solution.

Additional Solution Requirements

To bring all the individual components together, there are a few additional tasks to enable this solution.

  • Enable CORS on your Azure Function App Configuration (see details further below)
  • If you want to display User Object Photos as part of the report, you will likely need to synchronize them into FIM/MIM from an authoritative source (e.g. Office365/Exchange Online). Check out this post and the additional details further below
  • In order to embed the NodeJS WebApp into the FIM/MIM Portal, this post provides the details. Change the target URL from the PowerBI URL to your NodeJS site
  • Object Report Request WebApp (see below for sample site)

Azure Functions Cross Origin Resource Sharing (CORS)

You will need to configure CORS to allow the NodeJS WebApp to access the Azure Functions (from both local and Azure). Reflect your port number if it is different from 3000, and use the DNS name for your Azure WebApp.

Sample UI NodeJS HTML

Here is a sample HTML file for your NodeJS WebApp with the UI that provides input for the LoginID, fulfilled by the NodeJS JavaScript file further below.

Sample UI NodeJS JavaScript

The following NodeJS JavaScript supports the HTML UI above. It populates the LoginID typeahead box and handles the Submit Report button to fulfil the report for the desired object(s). Yes, if you use the UI to select (individually) multiple different objects, they will all be returned in their own output windows.

As the HTML file above indicates you will need to obtain and make available as part of your NodeJS project the typeahead.bundle.js library.

Azure PowerShell Trigger Function App for AccountNames Lookup

The following Azure Function takes the call from the load of the NodeJS WebApp to populate the typeahead userlist.
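
The original function isn’t reproduced here, but a heavily simplified sketch of its shape is below. The binding names ($req/$res), the app setting names and the Metaverse query are assumptions; the real lookup is performed with the Lithnet FIM/MIM PowerShell modules against the MIM Sync server:

# run.ps1 - HTTP trigger (experimental PowerShell), sketch only
# $req and $res are the file paths wired up by the function.json bindings

# Hypothetical app settings holding the MIM Sync server and credentials
$mimSyncServer = $env:MIMSyncServer
$securePwd = ConvertTo-SecureString $env:MIMSyncPassword -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential ($env:MIMSyncUser, $securePwd)

# Query the MIM Sync server remotely for the list of AccountNames.
# The script block is a placeholder - the real implementation enumerates
# Metaverse objects (e.g. via the Lithnet MIIS Automation module) and
# returns their AccountName values as a string array.
$accountNames = Invoke-Command -ComputerName $mimSyncServer -Credential $cred -ScriptBlock {
    # ...enumerate Metaverse person objects and output their AccountNames...
}

# Return the list as JSON so the Typeahead control can consume it
$accountNames | ConvertTo-Json | Out-File -Encoding Ascii -FilePath $res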

Azure PowerShell Trigger Function App for User Object Report

Similar in structure to the Username List Lookup Azure Function above, but in the ScriptBlock you embed the Report Generation Script that is detailed here. Modify for what you want to report on.

Photos in the Report

If you want to display images in your report, you will need to determine if the user has an image during the MV metadata report generation part of the script. Add the following lines (updating for the name of your Image attribute; mine is named EXOPhoto) after the Try {} Catch {} in this section $obj = @() ; foreach ($attr in $attributes.Keys)

 # Display the Objects Photo rather than Base64 string 
if ($attr.equals("EXOPhoto")){ 
   $objectphoto = "<img src=$([char]0x22)data:image/jpeg;base64,$($attributes.$attr.Values.Valuestring)$([char]0x22)>" 
   $val = "System.Byte[]" 
}

Then in the output of the HTML report at the end of the report generation insert the $objectphoto variable into the HTML stream.

# Output MIM Service Object Data 
$MIMServiceObjOut = $MIMServiceObjectMetaData | Sort-Object -Property Attribute | ConvertTo-Html -Fragment 
$htmlreport = ConvertTo-HTML -Body "$htmlcss<h1>Microsoft Identity Manager User Object Report</h1><h2>Query</h2>$sourcequery</br><b><center>$objectphoto</br>NOTE: Only attributes with values are displayed.</center></b><h2>Connector(s) Summary</h2>$connectorsummary<h2>MetaVerse Data</h2>$objectmetadata <h2>MIM Service CS Object Data</h2>$MIMServiceCSobjectmetadata <h2>MIM Service Object Data</h2>$MIMServiceObjOut" -Title "MIM Object Report" 

 

As you can see above I’ve also injected the CSS ($htmlcss) into the output stream at the beginning of the Body section.  Somewhere in your script block you will need to define your CSS values. e.g.

 # StyleSheet for nice pretty output 
$htmlcss = "<style> 
   h1, h2, th { text-align: center; } 
   table { margin: auto; font-family: Segoe UI; box-shadow: 10px 10px 5px #888; border: thin ridge grey; } 
   th { background: #0046c3; color: #fff; max-width: 400px; padding: 5px 10px; } 
   td { font-size: 11px; padding: 5px 20px; color: #000; } 
   tr { background: #b8d1f3; } 
   tr:nth-child(even) { background: #dae5f4; } 
   tr:nth-child(odd) { background: #b8d1f3; } 
</style>"

Summary

An interesting solution integrating Azure PaaS Services with Microsoft Identity Manager via PowerShell and the extremely versatile Lithnet FIM/MIM PowerShell Modules.

Please share your implementations enhancing your FIM/MIM Solution.

Azure Functions with Swagger

The Azure Functions team has recently announced Swagger support as a preview. If we use Azure Functions as APIs, this will be very useful. In this post, we will have a look at how to enable Swagger support on Azure Functions.

Sample codes used for this post can be found here.

Sample Azure Functions Instance

First of all, with the sample code provided, we’re creating two HTTP triggers, CreateProduct and GetProduct. Once we deploy them, we can find them in the Azure Portal:

Here are simple requests and responses through Postman:

Let’s create a Swagger definition document for those Functions.

Auto-Generate Swagger Definition

If we have at least one Function endpoint in our Function instance, we can automatically generate a Swagger definition in YAML format. Click the API definition (preview) tab.

By default, the External URL button is selected. Click the Functions button right next to it.

Because we have never generated the Swagger definition, it shows an error screen.

Now, click the Generate API definition template button so that the document is automatically generated.

Now we’ve got the Swagger definition doco. The actual document generated looks like:

However, there are at least three gaps that we have to fill in:

  • definitions: There’s no request/response model definition. We have to fill these in.
  • produces/consumes: There’s no document type defined. In general, as JSON format is the most popular for REST API, we can simply add application/json here.
  • securityDefinitions: Azure Functions uses either code in the querystring or x-functions-key in the request header to authorise requests. The auto-generated template only defines the code option, not the other, so we have to add the x-functions-key definition ourselves.

Here’s the updated Swagger definition including the missing ones:

Once we update the Swagger definition, we can test the API right away by providing the function key code and a payload. Easy, huh? Also, the address shown in the middle of the picture, https://xxxx.azurewebsites.net/admin/host/swagger?code=xxxx, allows us to access the Swagger definition document in JSON format. The Azure Functions instance automatically converts the YAML document to JSON.

It seems very easy. However, there is a critical point we have to bear in mind: the Function instance must have at least one function endpoint before the Swagger definition can be auto-generated. In other words, an API design-first approach is not applicable.

Now we’ve got a question. If we only have a Swagger definition document, not the actual implementation, what can we do with Azure Functions? Why not render the Swagger document directly from Azure Function code?

Render Swagger Definition via Azure Functions

Here’s the deal. We basically create an Azure Function that reads the Swagger definition and renders it as a response. The following function code will give us a brief idea:
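
The gist itself isn’t embedded here, but a sketch along those lines looks like the following. It assumes the YamlDotNet package is referenced through the function’s project.json and that the WEBROOT_PATH app setting described below has been added:

using System;
using System.IO;
using System.Net;
using System.Net.Http;
using System.Text;
using YamlDotNet.Serialization;

public static HttpResponseMessage Run(HttpRequestMessage req)
{
    // WEBROOT_PATH is an app setting we add ourselves; it points to D:\home\site\wwwroot
    var wwwroot = Environment.GetEnvironmentVariable("WEBROOT_PATH");
    var yaml = File.ReadAllText(Path.Combine(wwwroot, "swagger-v1.yaml"));

    // Deserialise the YAML document, then re-serialise it in a JSON-compatible form
    var deserialiser = new DeserializerBuilder().Build();
    var graph = deserialiser.Deserialize<object>(new StringReader(yaml));
    var serialiser = new SerializerBuilder().JsonCompatible().Build();
    var json = serialiser.Serialize(graph);

    // Render the converted definition as the HTTP response
    var response = req.CreateResponse(HttpStatusCode.OK);
    response.Content = new StringContent(json, Encoding.UTF8, "application/json");
    return response;
}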

The Function instance contains a swagger-v1.yaml file at its root level. Looking at the code above, it first reads that file. In order to read the file, we have to set a value representing the root path, called WEBROOT_PATH (or whatever you prefer), in the AppSettings section. Its value will be D:\home\site\wwwroot, which never changes unless Azure App Service changes it. There are two ways to read the settings value:

  1. var wwwroot = Environment.GetEnvironmentVariable("WEBROOT_PATH");
  2. var wwwroot = ConfigurationManager.AppSettings["WEBROOT_PATH"];

Either is fine for reading the settings value. If we omit this setting, Azure Functions basically assumes that the file is located at C:\Windows\System32, which will cause an unexpected result.

According to this document, at the time of this writing, we can pass a Microsoft.Azure.WebJobs.ExecutionContext instance as a function parameter so that we can handle the file path a little more easily. However, as it’s not fully rolled out yet and the dev tools don’t have that feature either, we should wait until it is.

The code above then reads the YAML file, converts it to JSON and renders it. We can now see the Swagger definition through a web browser:

So far, we have briefly looked at how to enable Swagger support in Azure Functions in two different ways. Both of course have pros and cons. The first one might be the easiest option but needs more manual work. Also, if we want to access the Swagger definition with the first option, we have to use a different access code. This is a bit critical because we have to manage at least two different keys – one for Functions and the other for Swagger – which is not ideal. On the other hand, the second option needs another Function, but it can be handled by the same host key that is used by the other Functions. Therefore, from a management point of view, the second option might be better. It’s still in preview, so we hope the GA version of Swagger support will improve on this.

The quickest way to create new VMs in Azure from existing VM snapshots, mostly with PowerShell

Originally posted on Lucian’s blog @clouduccino.com. Follow Lucian on Twitter @LucianFrango.


There are probably multiple ways to do this, both right and wrong, but here’s a process that I’ve been using for a while, which I’ve recently tweaked to take advantage of the new Azure Managed Disks.

Sidebar – standard managed disk warning

Before I go on though, I wanted to issue a quick warning about the differences between standard unmanaged and managed disks. Microsoft will be pushing you to Managed Disks more and more. Yes, it’s a great feature that makes the management of VM disks simpler. The key bits of information, though, are as follows:

  • If you provision an unmanaged disk that is 1TB in size but only use 100GB, you are paying for 100GB of storage costs. So you’re only paying for what you use. [1. Unmanaged disk cost – Azure Documentation ]
  • If you provision a managed disk that is 1TB in size but only use 10MB, you will be paying for the privilege of the whole 1TB disk [2. Managed disk cost – Azure Documentation ]
  • Additionally, with Premium disks you’re paying for what you provision, no matter whether they’re managed or unmanaged

That aside, Managed Disks are a pretty good feature that makes disk and storage account management considerably simpler. If you’re frugal with your VM allocation and have the process to manage people and technology correctly, Managed Disks are great.

The Process

tl;dr

  • Create a snapshot in Azure
  • Copy the snapshot from snapshot storage location to Blob storage
  • Create a new VM instance based on the blob.vhd file
    • This blog post outlines the use of managed disks
    • However, mounting direct from Blob can also be done

The actual process

I’ve gone through this recently and updated it so that it’s as streamlined, for me, as possible. Again, this is skewed towards managed disk usage, but it can easily be extended to unmanaged disks as well. Let’s begin:

Step 0

If you’re wanting to do this to create copies of your VM instances, to scale out your workload, remember to generalise or sysprep your VM instance prior to Step 1. In the example I go into below, my use case was to create a copy of a server from a production environment (VNET and subscription) and move it to a different and separate non-production environment (separate VNET and subscription).

Step 1 – Create a snapshot of your VM disk(s)

The first thing we need to do is actually power off your virtual machine instance. I’ve seen that snapshots can happen while the VM instance is running, but I guess you can call me a little bit more old school, a little bit more on the cautious side when it comes to these sorts of things. I’ve been bitten by this particular bug in the past, and unpleasant it was, so I’m inclined to err on the side of caution.

Once the VM instance is offline, go to the Azure Portal and search for “Snapshots”. Create a new snapshot.

  • NOTE: snapshots in Azure are done per DISK and not per VM INSTANCE
  • Name the snapshot
  • Select the subscription where the VM instance is located
  • Select the resource group you want to save the snapshot to
    • Or create a new one
  • Select the snapshot location
  • Select the source disk
    • If you earlier selected the same resource group where your VM instance is contained, the disk selection will display the resource group member VM instance disks first in the list
  • Select the storage type – standard or premium – for your snapshot
    • I usually just use standard as I’ve not had the need for faster speed premium as yet (that will change one day for sure)
  • Create the snapshot

Once the snapshot is created, complete this quick next step to generate an export access URL (we’ll need this in step 2; a PowerShell alternative for both of these steps is sketched after the list below):

  • Select the snapshot
  • From the top menu, select Export
  • You’ll be presented with a menu item with a time interval (based in seconds)
  • The default is 3600 or 1 hour
  • That is fine, but, I like to make that 36000 (add another 0) so that I have a whole day to do this again and again if need be
  • Save the generated URL to notepad for later!
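
If you’d rather stay in PowerShell for this step as well, the managed disk cmdlets can do both the snapshot and the export URL. The names below are placeholders and the cmdlets assume a recent AzureRM.Compute module:

$rgName = "<resource-group-name>"
$diskName = "<os-disk-name>"
$snapshotName = "server-osdisk-snapshot"

# Create the snapshot from the VM's managed OS disk
$disk = Get-AzureRmDisk -ResourceGroupName $rgName -DiskName $diskName
$snapshotConfig = New-AzureRmSnapshotConfig -SourceUri $disk.Id -Location $disk.Location -CreateOption Copy
New-AzureRmSnapshot -Snapshot $snapshotConfig -SnapshotName $snapshotName -ResourceGroupName $rgName

# Generate the export (SAS) URL, valid for 36000 seconds (10 hours) - this is the URI used in step 2
$sas = Grant-AzureRmSnapshotAccess -ResourceGroupName $rgName -SnapshotName $snapshotName -DurationInSecond 36000 -Access Read
$sas.AccessSAS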

Step 2 – Copy the snapshot to Blob

The next part relies on PowerShell. Update the following PowerShell script with your parameters to copy the snapshot to Blob:

$storageAccountName = "<storage account name>"
$storageAccountKey = "<storage account key>"
$absoluteUri = "https://blahblahblah.blob.core.windows.net/blahblahblah/........"
$destContainer = "<container>"
$blobName = "server.vhd"

$destContext = New-AzureStorageContext -StorageAccountName $storageAccountName -StorageAccountKey $storageAccountKey
Start-AzureStorageBlobCopy -AbsoluteUri $absoluteUri -DestContainer $destContainer -DestContext $destContext -DestBlob $blobName

Just for your info, here’s a quick explanation of the above:

  • Storage account name = the storage account where you want to store the VHD
  • The storage account key = either the primary or secondary key, which is used for authentication when accessing the storage account
  • The absolute URI = this is the snapshot URI we generated at the end of step 1
  • The destination container = where you want to store the VHD. Usually this is either “vhds”, or maybe create one called “snapshots”
  • The blob name = the file name of the VHD itself (remember to only use lowercase)

Step 2.5 – Moving around the blob if need be

Before we actually create a new VM instance based on this snapshot blob, there is an additional option we could take. That is, perhaps it would make sense to move the blob to a different subscription. This is particularly handy when you have a development environment that you want to move to production. Other use cases might be the inverse – making a replica of a production system for development purposes.

The absolute fastest way to do this, as I don’t like being inefficient here, is with the Azure Storage Explorer (ASE) tool. It’s an application that provides a quick GUI for completing storage actions. If you add both storage accounts to ASE, it’s as easy as this:

  • Select the blob from storage account A (in subscription A)
  • Select copy from the top menu
  • Go to your second storage account (storage account B in perhaps subscription B)
  • Go to the relevant container
  • Select paste from the top menu
  • Wait for the blob to copy
  • DONE

It can’t get any simpler or faster than that. I’m sure if you’re command-line inclined, you have a quick go-to PowerShell cmdlet for that, but for me I’ve found that to be pretty damn quick. It isn’t broken, so why fix it. That said, if you do want to script it, a sketch follows below.
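
For completeness, a server-side copy with the storage cmdlets looks roughly like this (account names, keys and container names are placeholders):

$srcContext = New-AzureStorageContext -StorageAccountName "<storage account A>" -StorageAccountKey "<key A>"
$destContext = New-AzureStorageContext -StorageAccountName "<storage account B>" -StorageAccountKey "<key B>"

# Kick off the asynchronous server-side copy between the two accounts
Start-AzureStorageBlobCopy -SrcContainer "vhds" -SrcBlob "server.vhd" -Context $srcContext -DestContainer "vhds" -DestBlob "server.vhd" -DestContext $destContext

# Optionally wait for the copy to complete
Get-AzureStorageBlobCopyState -Container "vhds" -Blob "server.vhd" -Context $destContext -WaitForComplete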

Step 3 – Create a new VM with a managed disk based on the snapshot we’ve put into Azure Blob

The final piece of the puzzle, as the cliche would go, is to create a new virtual machine instance. Again, as the wonderfully elusive and vague title of this blog post states, we’ll use PowerShell to do this. Sure, ARM templates would work and likely the Azure Portal can get you pretty far as well. However, again I like to be efficient and I’ve found that the following PowerShell script does this the best.

Additionally, you can change this up to mount the VHD from blob instead of creating a new managed disk. So, for the purpose of creating a new machine, PowerShell is as flexible as it is fast and convenient.

Here’s the script you’ll need to create the new VM instance:

#Prepare the VM parameters 
$rgName = "<resource-group-name>"
$location = "australiaEast"
$vnet = "<virtual-network>"
$subnet = "/subscriptions/xxxxxxxxx/resourceGroups/<resource-group-name>/providers/Microsoft.Network/virtualNetworks/<virtual-network>/subnets/<subnet>"
$nicName = "VM01-Nic-01"
$vmName = "VM01"
$osDiskName = "VM01-OSDisk"
$osDiskUri = "https://<storage-account>.blob.core.windows.net/<container>/server.vhd"
$VMSize = "Standard_A1"
$storageAccountType = "StandardLRS"
$IPaddress = "10.10.10.10"

#Create the VM resources
$IPconfig = New-AzureRmNetworkInterfaceIpConfig -Name "IPConfig1" -PrivateIpAddressVersion IPv4 -PrivateIpAddress $IPaddress -SubnetId $subnet
$nic = New-AzureRmNetworkInterface -Name $nicName -ResourceGroupName $rgName -Location $location -IpConfiguration $IPconfig
$vmConfig = New-AzureRmVMConfig -VMName $vmName -VMSize $VMSize
$vm = Add-AzureRmVMNetworkInterface -VM $vmConfig -Id $nic.Id

$osDisk = New-AzureRmDisk -DiskName $osDiskName -Disk (New-AzureRmDiskConfig -AccountType $storageAccountType -Location $location -CreateOption Import -SourceUri $osDiskUri) -ResourceGroupName $rgName
$vm = Set-AzureRmVMOSDisk -VM $vm -ManagedDiskId $osDisk.Id -StorageAccountType $storageAccountType -DiskSizeInGB 128 -CreateOption Attach -Windows
$vm = Set-AzureRmVMBootDiagnostics -VM $vm -disable

#Create the new VM
New-AzureRmVM -ResourceGroupName $rgName -Location $location -VM $vm

Again, let me explain a little about the parameters we’ve set at the start of the script:

  • $rgName = the resource group where you want to deploy the VM instance
  • $location = the Azure region
  • $vnet = the virtual network where you want to deploy the VM instance
  • $subnet = the subnet where you want to deploy the VM instance
  • $nicName = the name of the NIC of the server
  • $vmName = the name of the VM instance, the server name
  • $osDiskName = the OS disk name
  • $osDiskUri = the direct URI/URL to the VHD in your storage account
  • $VMSize = the VM size or the service plan for the VM
  • $storageAccountType = what type of storage you would like to have
  • $IPaddress = the static IP address of the server, as I like to set this in Azure rather than use dynamic IPs

And that is pretty much that!

Conclusion

It’s Friday in Sydney. It’s the pre-kend and it’s a gloomy, cold 9th day of Winter 2017. I hope that the above content helps you out of a jam or gives you the insight you need to run through this process quickly and efficiently. That feeling of giving back, helping. That’s the feeling that should warm me up and get me to lunch time! Counting down!

Cheers!

Tools for Testing Webhooks

In a microservices environment, APIs are the main communication method between services. Most of the time each API merely sends a request and waits for its response, but there are definitely cases that need a longer period to complete a request. Some cases even stop processing at a certain stage until they get a signal to continue. Those are not uncommon in the API world. However, implementing those features might be a little bit tricky because most APIs use the HTTP protocol and are basically web-based applications that have timeout constraints. The timeout limitation is pretty critical when trying to complete those long-running processes within a given period of time. Fortunately, there are already solutions (or patterns) to overcome those sorts of challenges – asynchronous patterns and/or webhook patterns. As both are quite similar to each other, they are used interchangeably or together. While those approaches sort out the long-running process issues, they are hard to test or debug without some external tools. In this post, we are going to have a look at a couple of useful tools for debugging and testing when we develop REST API applications, especially webhook APIs.

Disclaimer: the services introduced in this post do not have any sort of relationship with us, Kloud.

RequestBin

RequestBin is an online tool for sneaking a peek at webhook requests. It has a very simple user interface so that developers can hop into the service straight away. If we want to check webhook request data, follow the steps below:

  1. Click the Create a RequestBin button at the first page.

  2. A temporary bin URL is generated. Grab it.

  3. Use this bin URL as a webhook callback URL. Here’s the screenshot using Postman sending a request to the bin URL.

  4. The request data sent from Postman is captured like below. If the bin URL is https://requestb.in/1fbhrlf1, just append the querystring parameter ?inspect so that we can inspect the request data. Can we find the same request data that was sent?

This service brings a few merits. Firstly, we can simply use the bin URL when we register a webhook; the webhook request body is captured at the bin URL. Secondly, we don’t have to develop a test application to analyse the webhook payload because the data has already been captured by the bin URL. Thirdly, we save the time and resources that would have gone into developing those test applications. And finally, it’s free!

Of course, there are demerits. Each bin URL is only available for a limited period of time. According to the service website, the bin URL is only valid for the first 48 hours after it’s generated. In fact, the lifetime varies from 5 minutes to 48 hours. If we close the web browser and open it again, the bin URL is no longer valid. Therefore, this is only for testing webhooks that have a short lifecycle. In addition to this, RequestBin isn’t a good fit when we try local debugging. There are cases where the webhooks receive data and process it; RequestBin only shows how the webhook request data is captured, nothing further.

If RequestBin is not our solution, what else can we use? How can we debug or test the webhook functions?

ngrok

According to this post, tunneling services take traffic from the Internet to a local development environment. ngrok is one of those tunneling services. It has both a free version and a paid one, but the free one is enough for our purpose.

As it supports multiple platforms, download the suitable binary for your OS. For Windows, there is only one binary, ngrok.exe. Copy this to the C:\ngrok folder (or wherever preferred) and enter the command below:

ngrok http 7071 -host-header=localhost

  • http: This specifies the protocol of the incoming traffic to watch.
  • 7071: This is the port number. The default is 80. If we debug Azure Functions, set this port number to 7071.
  • -host-header=localhost: Without this option, ngrok still captures the traffic but it can’t reach the local debugging environment.

After typing the command above, we can see the screen like below:

As we can see, external traffic hitting the endpoint of http://b46c7c81.ngrok.io can reach our locally running Azure Functions app. Let’s run Azure Functions in a local debugging mode.

Azure Functions is now up and running. Run Postman and send a request to the endpoint that ngrok has generated.

We can now confirm that the code stops at the break point within Visual Studio.

ngrok also provides a replay feature. If we browse to http://localhost:4040, we can see how ngrok has captured all requests since it was started.

The free version of ngrok generates a new endpoint every time it runs, and the run history is also blown away. But this doesn’t matter for our purpose.

So far, we have briefly looked at a couple of tools to sneak webhook traffic for debugging purposes. If we utilise those tools well, we can more easily check how API request calls are made, which is definitely worth noting.

How to build and deploy an Azure NodeJS WebApp using Visual Studio Code

Introduction

This week I had the need to build a small web application with a reasonably simple front end that will later be integrated inside a Portal. The web application isn’t going to be high use and didn’t necessitate the deployment of infrastructure (VMs). I’d messed with NodeJS a while back in this post, where I configured a UI for Microsoft Identity Manager and Azure-based functions.

In the back of my mind I knew I didn’t want to have to go for a full Visual Studio Project Solution for this, and with the recent updates to Visual Studio Code I figured it must be possible to do it using that. There wasn’t much around on doing it, so I dived in and worked it out for myself. Here I share the end-to-end process to make it easy for you to get started.

Overview

What you will need on your development workstation before you start are the following components. Download and install them on your development machine.

You will also need an Azure Subscription to where you will publish your NodeJS site.

This post details setting up Visual Studio Code to build a shell NodeJS site and deploy it to Azure using a local GIT Repository. Let’s get started.

Visual Studio Code Extensions

A really smart and handy extension for VS Code is Azure Tools for VS Code. Released a few months ago (January 2017), this extension allows you to quickly create a Web App (Resource Group, App Service, App Service Plan etc.) from within VS Code. With VS Code on your development machine from the prerequisites above, click on the Extensions icon (bottom left) in the VS Code menu and type Azure Tools. Click the green Install button.

Azure Tools for VS Code

Creating the NodeJS Site in VS Code

I had a couple of attempts at doing this before I found a quick, neat and repeatable method of getting started. In order to get the Web App deployed and accessible correctly in Azure I found it easiest to use the Sample Azure NodeJS Hello World example from here. Download that sample and extract the contents to a new folder on your workstation. I created a new path on mine named …\NodeJS\nodejssite and dropped the sample in there so it looked like below.

After creating the folder structure and putting the sample in it, whilst in the sub-directory type:

code .

That will start up Visual Studio Code in the newly created folder with the starter sample.

Install Express for NodeJS

To that base sample site we’ll install Express. From the Terminal tab in the lower pane type:

npm install -g express-generator

Express App

With Express now on our machine, let’s add the Express App to our new NodeJS site. Type express in the Terminal window.

express

Accept that the directory is not empty

This will create the folder structure for Express.

Now, to get all the files and modules for our site configured for our app, run npm install

Now type npm start in the terminal window to start our new site.

The NodeJS site will start. Open a Web Browser and go to http://localhost:3000 and you should see the Express empty site.

Navigate to views => index.jade and update the text like I have below.

Refresh your browser window and you should see the text updated.

In the terminal window press Ctrl + C to stop NodeJS.

Test Deploy to Azure

Now let’s do a test deploy of our shell site as an Azure WebApp.

Press Ctrl + Shift + P or from the View menu select Command Palette.

Type Azure: Login 

This will generate a code and give you a link to open in your browser and login

Paste in the code from the clipboard and select continue

Then login with your account for the Tenant where you want to deploy the WebApp to. You’ll then be authorized.

From the Command Palette type azure sub and choose Azure: List Azure Subscriptions and choose the subscription where you will create and deploy the WebApp

Now from the Command Palette type Azure Create a Web App (Simple).

Give the WebApp a name. This will become the WebApp Name, and the basis for all the associated WebApp components. Use Create a Web App (Advanced) if you want to be more specific about the names of the App Resources etc.

If you watch the bottom VS Code Status bar you will see the Azure Tools extension create the new Resource Group, Web App and Web App Plan.

Login to the Azure Portal, select the new Web App.

Select Deployment Options and then Local Git Repository. Select OK.

Select Deployment credentials and provide a username and password. You’ll need this shortly to publish your site.

Click Overview. Copy the Git clone url.

Back in VS Code, select the GIT icon (under the magnifying glass) and from the top choose Initialize Repository.

Then in the terminal window type git remote add azure <git clone url> obtained from the step above.

Type Initial Commit as the message and click the tick icon in the Source Control menu bar.

Select the More Actions (…) menu in the Source Control pane and select Publish

Select azure as the remote target we setup earlier.

You’ll be prompted to authenticate. Use the account you created above in Deployment Credentials.

Back in the Azure Portal under the Web App under Deployment Options you will see the initial commit.

Click on Overview and you should see that it is running. Click on URL and the site will open in a new tab in your browser.

Updating our WebApp

Now, let’s make a change to our WebApp.

Back in VS Code, click on the files and folder icon in the top left corner, navigate to views => index.jade and update the title. Hit Ctrl + S (or select Save from the File menu). In the Terminal below type npm start to start our NodeJS site locally.

Check out the update locally. In a browser navigate to the local NodeJS site on localhost:3000. You’ll see the changed page.

Select the Git icon on the left menu, give the update some text e.g. ‘updated page text’ and select the tick from the top menu.

Select the More Actions (…) menu and choose Push to publish the changes to our Azure WebApp.

Go back to your browser, which was on the Azure WebApp URL, and reload. Our change has been pushed and is reflected in the WebApp.

Summary

Very quickly and easily using Visual Studio Code (with NodeJS and Git Desktop installed locally) we have;

  • Created an Azure WebApp
  • Created a base NodeJS site
  • Got a local NodeJS site we can develop on
  • Published it to Azure

Now go create something awesome.

Notes for Logic Apps around Webhook Actions

Azure Logic Apps (Logic Apps) is one of the serverless services that Azure offers. Of course, the other one in Azure is Azure Functions (Functions). Logic Apps consists of many connectors and triggers to interconnect services outside Azure. A webhook connector is one of them, and it has characteristics unique among the others. In this post, we briefly look at some tips we should know when we use webhook actions in a Logic Apps workflow.

What is Webhook?

A webhook is basically an API. We register an API endpoint onto a certain service. When an event occurs on the service, it invokes the API endpoint and sends a payload to it through the request body. That’s how a webhook API works. Here are the key points:

  • In order to use a webhook, we need to register it onto somewhere. The registration process is called Subscription. We may or may not explicitly perform the subscription process.
  • During the subscription process, we store an endpoint URL, which is called the Callback. After the subscription, when an event occurs the callback is invoked.
  • When we invoke the callback, we also send data, called the Payload, through its request body. That means we always use the POST method rather than any other HTTP method.
  • If we don’t need the webhook any longer, we need to deregister it, which is called Unsubscription. This is performed either explicitly or implicitly.

Those four behaviours are the most distinctive ones of a webhook. Let’s take an example using Slack and GitHub. We have a simple scenario around those two services:

As a developer, I want to post a notification on a Slack channel when I push a commit to a GitHub repository so that other team members get notified of my code update.

  • Get an endpoint URL from Slack to send a notification, which will be registered onto GitHub. This is the Subscription.
  • The Slack endpoint URL is considered as Callback URL.
  • Through the callback URL, a POST request is sent when a new push is made. The request contains the push commit details, which is the Payload.
  • When we don’t need the notification, we simply delete the callback URL from the GitHub repository. This is the Unsubscription.

Does that make more sense now? Let’s apply these concepts to Logic Apps.

Webhook Action on Logic Apps

A webhook action can be found like:

If we select it, we are asked to enter its details. Here are the bits and pieces we are talking about in this post.

  1. At the time of this writing, the picture only indicates two required fields – Subscribe - Method and Subscribe - URI. However, this is NOT true. We actually have to fill in at least these FIVE fields:
    • Subscribe - Method
    • Subscribe - URI
    • Subscribe - Body
    • Unsubscribe - Method
    • Unsubscribe - URI
  2. Both Subscribe - Method and Unsubscribe - Method only accept POST, nothing else. As the picture above shows, if we select any other method, GET for example, instead of POST, the Logic App will throw the error below. The dropdown box suggests we can select any method, but this is NOT true. DO NOT get confused by the dropdown box.

  3. Subscribe - Body is actually a de-facto required field as we need to pass a callback URL through its payload in JSON format. Therefore, the callback URL MUST be sent using the WDL (Workflow Definition Language) function, @{listCallbackUrl()} via the request body.

  4. Both the Logic App endpoint URL and the callback URL from the webhook action require a SAS token, which is automatically generated. This SAS token is used for authentication, so there is no need for another Authorization header. If we add an Authorization header by accident, a DirectApiAuthorizationRequired error will occur.

  5. When a Logic App reaches the webhook action within its workflow, it calls the subscription endpoint and then waits until the callback is invoked with a payload. The maximum waiting period is 90 days. The callback only accepts POST requests.

  6. When the callback sends a payload, that payload is treated as the response of the webhook action. Let’s look at the example below. The Logic App subscribes to a Function endpoint. The Function sends a request back to the Logic App through the callback URL, with a payload. The initial request body coming from the Logic App has a productId property, but the callback request body from the Function changes it to objectId. As a result, the webhook action receives the new payload containing the objectId property.

    // Read the request body sent by the Logic App (contains productId and callbackUrl)
    dynamic data = await req.Content.ReadAsAsync<object>();
    var serialised = JsonConvert.SerializeObject((object)data);

    using (var client = new HttpClient())
    {
      // Build a new payload, renaming productId to objectId, and POST it to the callback URL
      var payload = JsonConvert.SerializeObject(new { objectId = (int) data.productId });
      var content = new StringContent(payload);
      await client.PostAsync((string) data.callbackUrl, content);
    }

    return req.CreateResponse(HttpStatusCode.OK, serialised);
    
  7. Regardless of whether the callback request is successful or not, whenever the callback is invoked and captured by the Logic App, its payload always becomes the response message of the webhook action.

  8. Only after the callback response is captured by the Logic App are the subsequent actions triggered. If those following actions use the webhook’s response message, it will be the payload from the callback, as described earlier.

  9. Unsubscription is only executed when the Logic App run is cancelled or the 90-day timeout occurs. It works like a garbage collector.

  10. Unsubscription doesn’t have to fire a callback.

So far, we have briefly taken a look at the webhook action of Logic Apps. Documentation around this still needs improvement, so it is worth noting these characteristics when using Logic Apps. By doing so, we will be able to reduce the number of trial-and-error efforts.

Try/Catch works in PowerShell ISE and not in PowerShell console

I recently encountered an issue with one of my PowerShell scripts. It was a script to enable litigation hold on all mailboxes in Exchange Online.

I connected to Exchange Online via the usual means below.

$creds = Get-Credential
$session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://outlook.office365.com/powershell-liveid/ -Credential $Creds -Authentication Basic -AllowRedirection
Import-PSSession $session -AllowClobber

I then attempted to execute the following with no success.

try
{
    Set-Mailbox -Identity $user.UserPrincipalName -LitigationHoldEnabled $true -ErrorAction Stop
}
catch
{
    Write-Host "ERROR!" -ForegroundColor Red
}

As a test I removed the “-ErrorAction Stop” switch and then added the following line to the top of my script.

$ErrorActionPreference = 'Stop'

That’s when it would work in the Windows PowerShell ISE but not in the Windows PowerShell console.

After many hours of troubleshooting I then discovered it was related to the implicit remote session established when connecting to Exchange Online.

To get my try/catch to work correctly I added the following line to the top of my script and all was well again.

$Global:ErrorActionPreference = 'Stop'

How to create and auto update route tables in Azure for your local Azure datacentre with Azure Automation, bypassing firewall appliances

Originally posted on Lucians blog, at clouduccino.com. Follow Lucian on Twitter @LucianFrango.


When deploying an “edge” or “perimeter” network in Azure, by way of a peered edge VNET or an edge subnet, you’ll likely want to deploy virtual firewall appliances of some kind to manage and control that ingress and egress traffic. This comes at a cost though. That cost being that Azure services are generally accessed via public IP addresses or hosts, even within Azure. The most common of those and one that has come up recently is Azure Blob storage.

If you have ExpressRoute, you can get around this by implementing Public Peering. This essentially sends all traffic destined for Azure services to your ER gateway. A bottleneck? Perhaps.

The problem in detail

Recently I ran into a roadblock at a customer’s site around the use of Blob storage. I designed an edge network that met certain traffic monitoring requirements. Azure NSGs were not able to meet all requirements, so something considerably more complex and time-consuming was implemented. It’s IT; isn’t that what always happens, you may ask?

Here are some reference blog posts:

Getting Azure 99.95% SLA for Cisco FTD virtual appliances in Azure via availability sets and ARM templates

Lessons learned from deploying Cisco Firepower Threat Defence firewall virtual appliances in Azure, a brain dump

We deployed Cisco Firepower Threat Defence virtual appliance firewalls in an edge VNET. Our subnets had route tables with a default route of 0.0.0.0/0 directed to the “tag” “VirtualAppliance”. So all traffic to a host or network not known by Azure is directed to the firewall(s). How that can be achieved is another blog post.

When implementing this solution, Azure services that are accessed via an external or public-range IP address or host – most commonly Blob Storage, which is accessed via blah.blob.core.windows.net – additionally get directed to the Cisco FTDs. Not a big problem; create some rules to allow traffic flow etc. and we’re cooking with gas.

Not exactly the case, as the FTDv’s have a NIC throughput of 2GiB per second. That’s plenty fast, but when you have a lot of workloads, a lot of user traffic and a lot of writes to Blob storage, bottlenecks can occur.

The solution

As I mentioned earlier, this can be tackled quickly through a number of methods. The methods discarded in this situation are as follows:

  • Implement ExpressRoute
    • Through ExpressRoute enable public peering
    • All traffic to Azure infrastructure is directed to the gateway
    • This is a “single device” that I have heard whispers is a virtual Cisco appliance similar to a common enterprise router
    • Quick to implement, and in most cases the throughput bottleneck isn’t reached and you’re fine
  • Implement firewall rules
    • Allow traffic to Microsoft IP ranges
    • Manually enter those IP ranges into the firewall
      • These are subject to change, so a maintenance or managed services runbook should be implemented to update them on a regular basis
    • Or enable URL filtering and basically allow traffic to certain URIs or URLs

Like I said, both above will work.

The solution in this blog post is a little bit more technical, but does away with the above. Rather than any manual work, let’s automate this process through Azure Automation. Thankfully, this isn’t something new, but it isn’t something that is used often either. Through the use of pre-configured Azure Automation modules and PowerShell scripts, a scheduled weekly or monthly (or whatever you like) runbook downloads the publicly available Microsoft .xml file that lists all subnets and IP addresses used in Azure, then uses that file to update a route table the script creates with routes to the Azure subnets and IPs in a specified region.

This process does away with any manual intervention and works to the ethos “work smarter, not harder”. I like that very much, and no, that is not being lazy. It’s being smart.

The high five

I’m certainly not trying to take the credit for this, except for the minor tweak to the runbook code, so cheers to James Bannan (@JamesBannan) who wrote this great blog post (available here) on the solution. He’s put together the PowerShell script that expands on a PowerShell module written by Kieran Jacobson (@kjacobsen). Check out their Twitter accounts, their blogs and all-round awesome content. Thank you and high five to both!

The process

I’m going to speed through this as it’s really straightforward and there’s nothing too complicated here. The only tricky part is the order of the steps. Stick to the order and you’re guaranteed to succeed:

  • Create a new automation user account
  • Create a new runbook
    • Quick create a new Powershell runbook
  • Go back to the automation account
  • Update some config:
    • Update the modules in the automation account – do this FIRST as there are dependencies on up-to-date modules (specifically the AzureRM.Profile module, required by AzureRM.Network)
    • By default you have these runbook modules:

    • Go to Browse Gallery
    • Select the following two modules, one at a time, and add to the automation user account
      • AzureRM.Network
      • AzurePublicIPAddress
        • This is the module created by Kieran Jacobson 
    • Once all are in, for the purposes of being borderline OCD, select Update Azure Modules
      • This should update all modules to the latest version, in case some are lagging a little behind
    • Let’s create some variables
      • Select Variables from the menu blade in the automation user account
      • The script will need the following variables for YOUR ENVIRONMENT
        • azureDatacenterRegions
        • VirtualNetworkName
        • VirtualNetworkRGLocation
        • VirtualNetworkRGName
      • For my sample script, I have resources in the Australia East region
      • Enter in the variables that apply to you here (your RGs, VNET etc)
  • Lets add in the Powershell to the runbook
    • Select the runbook
    • Select EDIT from the properties of the runbook (top horizontal menu)
    • Enter in the following Powershell:
      • this is my slightly modified version
$VerbosePreference = 'Continue'

### Authenticate with Azure Automation account

$cred = "AzureRunAsConnection"
try
{
 # Get the connection "AzureRunAsConnection "
 $servicePrincipalConnection=Get-AutomationConnection -Name $cred

"Logging in to Azure..."
 Add-AzureRmAccount `
 -ServicePrincipal `
 -TenantId $servicePrincipalConnection.TenantId `
 -ApplicationId $servicePrincipalConnection.ApplicationId `
 -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint 
}
catch {
 if (!$servicePrincipalConnection)
 {
 $ErrorMessage = "Connection $cred not found."
 throw $ErrorMessage
 } else{
 Write-Error -Message $_.Exception
 throw $_.Exception
 }
}

### Populate script variables from Azure Automation assets

$resourceGroupName = Get-AutomationVariable -Name 'virtualNetworkRGName'
$resourceLocation = Get-AutomationVariable -Name 'virtualNetworkRGLocation'
$vNetName = Get-AutomationVariable -Name 'virtualNetworkName'
$azureRegion = Get-AutomationVariable -Name 'azureDatacenterRegions'
$azureRegionSearch = '*' + $azureRegion + '*'

[array]$locations = Get-AzureRmLocation | Where-Object {$_.Location -like $azureRegionSearch}

### Retrieve the nominated virtual network and subnets (excluding the gateway subnet)

$vNet = Get-AzureRmVirtualNetwork `
 -ResourceGroupName $resourceGroupName `
 -Name $vNetName

[array]$subnets = $vnet.Subnets | Where-Object {$_.Name -ne 'GatewaySubnet'} | Select-Object Name

### Create and populate a new array with the IP ranges of each datacenter in the specified location

$ipRanges = @()

foreach($location in $locations){
 $ipRanges += Get-MicrosoftAzureDatacenterIPRange -AzureRegion $location.DisplayName
}

$ipRanges = $ipRanges | Sort-Object

### Iterate through each subnet in the virtual network
foreach($subnet in $subnets){

$RouteTableName = $subnet.Name + '-RouteTable'

$vNet = Get-AzureRmVirtualNetwork `
 -ResourceGroupName $resourceGroupName `
 -Name $vNetName

### Create a new route table if one does not already exist
 if ((Get-AzureRmRouteTable -Name $RouteTableName -ResourceGroupName $resourceGroupName) -eq $null){
 $RouteTable = New-AzureRmRouteTable `
 -Name $RouteTableName `
 -ResourceGroupName $resourceGroupName `
 -Location $resourceLocation
 }

### If the route table exists, save as a variable and remove all routing configurations
 else {
 $RouteTable = Get-AzureRmRouteTable `
 -Name $RouteTableName `
 -ResourceGroupName $resourceGroupName
 $routeConfigs = Get-AzureRmRouteConfig -RouteTable $RouteTable
 foreach($config in $routeConfigs){
 Remove-AzureRmRouteConfig -RouteTable $RouteTable -Name $config.Name | Out-Null
 }
 }

### Create a routing configuration for each IP range and give each a descriptive name
 foreach($ipRange in $ipRanges){
 $routeName = ($ipRange.Region.Replace(' ','').ToLower()) + '-' + $ipRange.Subnet.Replace('/','-')
 Add-AzureRmRouteConfig `
 -Name $routeName `
 -AddressPrefix $ipRange.Subnet `
 -NextHopType Internet `
 -RouteTable $RouteTable | Out-Null
 }

### Add default route for Edge Firewalls
 Add-AzureRmRouteConfig `
 -Name 'DefaultRoute' `
 -AddressPrefix 0.0.0.0/0 `
 -NextHopType VirtualAppliance `
 -NextHopIpAddress 10.10.10.10 `
 -RouteTable $RouteTable
 
### Include a routing configuration to give direct access to Microsoft's KMS servers for Windows activation
 Add-AzureRmRouteConfig `
 -Name 'AzureKMS' `
 -AddressPrefix 23.102.135.246/32 `
 -NextHopType Internet `
 -RouteTable $RouteTable

### Apply the route table to the subnet
 Set-AzureRmRouteTable -RouteTable $RouteTable

$forcedTunnelVNet = $vNet.Subnets | Where-Object Name -eq $subnet.Name
 $forcedTunnelVNet.RouteTable = $RouteTable

### Update the virtual network with the new subnet configuration
 Set-AzureRmVirtualNetwork -VirtualNetwork $vnet -Verbose

}

How is this different from James’s?

I’ve made two changes to the original script. These changes are as follows:

I changed the authentication to use an Azure Automation account. This streamlined the deployment process so I could reuse the script across a number of subscriptions. This change was the following:

$cred = "AzureRunAsConnection"
try
{
 # Get the connection "AzureRunAsConnection "
 $servicePrincipalConnection=Get-AutomationConnection -Name $cred

"Logging in to Azure..."
 Add-AzureRmAccount `
 -ServicePrincipal `
 -TenantId $servicePrincipalConnection.TenantId `
 -ApplicationId $servicePrincipalConnection.ApplicationId `
 -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint 
}
catch {
 if (!$servicePrincipalConnection)
 {
 $ErrorMessage = "Connection $cred not found."
 throw $ErrorMessage
 } else{
 Write-Error -Message $_.Exception
 throw $_.Exception
 }
}

Secondly, I added an additional static route. This was for the default route (0.0.0.0/0) which is used to forward to our edge firewalls. This change was the following:

### Add default route for Edge Firewalls
 Add-AzureRmRouteConfig `
 -Name 'DefaultRoute' `
 -AddressPrefix 0.0.0.0/0 `
 -NextHopType VirtualAppliance `
 -NextHopIpAddress 10.10.10.10 `
 -RouteTable $RouteTable

You can re-use this section to add further custom static routes, for example:
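
The snippet below shows one such additional route; the route name, address prefix and next hop IP are purely illustrative and should reflect your own environment:

### Example: an additional custom static route via the edge firewalls
 Add-AzureRmRouteConfig `
 -Name 'OnPremisesRange' `
 -AddressPrefix 192.168.0.0/16 `
 -NextHopType VirtualAppliance `
 -NextHopIpAddress 10.10.10.10 `
 -RouteTable $RouteTable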

Tying it all together

  • Hit the SAVE button and job done
    • Well, almost…
  • Next you should test the script
    • Select the TEST PANE from the top horizontal menu
    • A word of warning- this will go off and create the route tables and associate them with the subnets in your selected VNET!!!
    • Without testing though, you can’t confirm it works correctly
  • Should the test work out nicely, hit the publish button in the top hand menu
    • This gets the runbook ready to be used
  • Now go off and create a schedule
    • Azure public IP addresses can update often
    • It’s best to be ahead of the game and keep your route tables up to date
    • A regular schedule is recommended – I do once a week as the script only takes about 10-15min to run 
    • From the runbook top horizontal menu, select schedule
    • Create the schedule as desired
    • Save
  • JOB DONE! No really, that’s it!

Enjoy!

Best,

Lucian