Message retry patterns in Azure Functions

Azure Functions provide Service Bus (SB) based trigger bindings that allow us to process messages dropped onto a SB queue or delivered to a SB subscription. In this post we'll walk through creating an Azure Function using a ServiceBus trigger that implements a configurable message retry pattern.
Note: This post is not an introduction to Azure Functions nor an introduction to ServiceBus. For those not familiar with these Azure services, take a look at the Azure Documentation Centre.
Let’s start by creating a simple C# function that listens for messages delivered to a SB subscription.
create azure function
Azure Functions provide a number of ways we can receive the message into our functions, but for the purpose of this post we’ll use the BrokeredMessage type as we will want access to the message properties. See the above link for further options for receiving messages into Azure Functions via the ServiceBus trigger binding.
To use BrokeredMessage we’ll need to import the Microsoft.ServiceBus assembly and change the input type to BrokeredMessage.
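A minimal run.csx sketch of this (the subscription binding lives in function.json; names here are assumptions, not from the original post):

```csharp
#r "Microsoft.ServiceBus"

using Microsoft.ServiceBus.Messaging;

// Bound to a Service Bus topic subscription via a serviceBusTrigger binding.
public static void Run(BrokeredMessage message, TraceWriter log)
{
    log.Info($"C# ServiceBus trigger function processed message: {message.MessageId}");
}
```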

If we did nothing else, our function would receive the message from SB, log its message ID and remove it from the queue. Actually, the SB trigger peeks the message off the queue, acquiring a peek lock in the process. If the function executes successfully, the message is removed from the queue. So what happens when things go pear shaped? Let’s add a pear and observe what happens.

Note: To send messages to the SB topic, I use Paolo Salvatori’s ServiceBus Explorer. This tool allows us to view queue and message properties mentioned in this post.
retry by default
Notice the function being triggered multiple times. This will continue until the SB queue’s MaxDeliveryCount is exceeded. By default, SB queues and topics have a MaxDeliveryCount of 10. Let’s output the delivery count in our function using a message property on the BrokeredMessage class so we can observe this in action.
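A sketch of that change, with a forced failure left in so we can watch the retries (names illustrative):

```csharp
public static void Run(BrokeredMessage message, TraceWriter log)
{
    // DeliveryCount increments each time Service Bus re-delivers the same message.
    log.Info($"MessageId: {message.MessageId}, DeliveryCount: {message.DeliveryCount}");

    // Go pear-shaped so the peek lock is abandoned and Service Bus retries.
    throw new Exception("Something went pear-shaped!");
}
```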

outputting delivery count
From the logs we see the message was retried 10 times until the maximum number of deliveries was reached and ServiceBus expired the message, sending it to the dead letter queue. “Ah hah!”, I hear you say. Implement message retries by configuring the MaxDeliveryCount property on the SB queue or subscription. Well, that will work as a simple, static retry policy but quite often we need a more configurable, dynamic approach. One based on the message context or type of exception caught by the processing logic.
Typical use cases include handling business errors (e.g. message validation errors, downstream processing errors etc.) versus transport errors (e.g. downstream service unavailable, request timeouts etc.). When handling business errors we may elect not to retry the failed message and instead move it to the dead letter queue. When handling transport errors we may wish to treat transient failures (e.g. dropped database connections) and protocol errors (e.g. 503 service unavailable) differently. We may wish to retry transient failures a few times over a short period, whereas for protocol errors we might want to keep trying the service over an extended period.
To implement this capability we'll create a shared function that determines the appropriate retry policy based on the context of the exception thrown. It then checks the number of retry attempts against the maximum defined by the policy. If the retry attempts have been exceeded, the message is moved to the dead letter queue; otherwise the function waits for the duration defined by the policy before rethrowing the original exception.
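A sketch of such a shared handler (run.csx style; the policy values, exception types and helper names are illustrative, not from the original post):

```csharp
#r "Microsoft.ServiceBus"

using System;
using System.Threading.Tasks;
using Microsoft.ServiceBus.Messaging;

public class BusinessRuleException : Exception { }

public class RetryPolicy
{
    public bool Retry { get; set; }
    public int MaxRetries { get; set; }
    public TimeSpan Interval { get; set; }
}

public static RetryPolicy GetRetryPolicy(Exception ex)
{
    // Business errors: don't retry, dead letter immediately.
    if (ex is BusinessRuleException)
        return new RetryPolicy { Retry = false };

    // Transient errors: a few quick retries over a short period.
    if (ex is TimeoutException)
        return new RetryPolicy { Retry = true, MaxRetries = 3, Interval = TimeSpan.FromSeconds(1) };

    // Protocol errors (e.g. 503): keep trying over an extended period.
    return new RetryPolicy { Retry = true, MaxRetries = 5, Interval = TimeSpan.FromSeconds(3) };
}

public static async Task HandleRetry(BrokeredMessage message, Exception ex)
{
    var policy = GetRetryPolicy(ex);

    if (!policy.Retry || message.DeliveryCount >= policy.MaxRetries)
    {
        // Retries exhausted (or not retryable): move to the dead letter queue.
        await message.DeadLetterAsync(ex.GetType().Name, ex.Message);
        return;
    }

    // Wait out the retry interval, then rethrow so the peek lock is abandoned
    // and Service Bus re-delivers the message.
    await Task.Delay(policy.Interval);
    throw ex;
}
```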

Let’s change our function to throw mock exceptions and call our retry handler function to implement message retry policy.
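A sketch of the reworked function; `HandleRetry` and `BusinessRuleException` are the hypothetical helpers from the shared retry handler described above, and the `ErrorType` message property is just one way to trigger the mock failures:

```csharp
public static async Task Run(BrokeredMessage message, TraceWriter log)
{
    try
    {
        ProcessMessage(message, log);   // throws our mock exceptions
    }
    catch (Exception ex)
    {
        // Hand off to the shared retry handler to apply the appropriate policy.
        await HandleRetry(message, ex);
    }
}

// Throws a mock exception based on a message property.
private static void ProcessMessage(BrokeredMessage message, TraceWriter log)
{
    var errorType = (string)message.Properties["ErrorType"];
    if (errorType == "Business") throw new BusinessRuleException();
    if (errorType == "Transient") throw new TimeoutException();
    throw new System.Net.WebException("503 Service Unavailable");
}
```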

Now our function implements some basic exception handling and checks for the appropriate retry policy to use. Let's test our retry policies work by throwing the different exceptions our policy supports.
Testing throwing our mock business rules exception…
test mock business exception
…we observe that the message is moved straight to the dead letter queue as per our defined policy.
Testing throwing our mock protocol exception…test mock protocol exception
…we observe that we retry the message a total of 5 times, waiting 3 seconds between retries as per the defined policy for protocol errors.

  • Ensure your SB queues and subscriptions are defined with a MaxDeliveryCount greater than your maximum number of retries.
  • Ensure your SB queues and subscriptions are defined with a TTL period greater than your maximum retry interval.
  • Be aware that Consumption-based service plans have a maximum execution duration of 5 minutes; App Service plans don't have this constraint. Either way, ensure the functionTimeout setting in the host.json file is greater than your maximum retry interval.
  • Also be aware that on Consumption-based plans you will still be charged for time spent waiting out the retry interval (the thread sleep).
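The functionTimeout setting mentioned above lives in the host.json file at the root of the Function App; a minimal example (the value is illustrative):

```json
{
  "functionTimeout": "00:10:00"
}
```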

In this post we have explored the behaviour of the ServiceBus trigger binding within Azure Functions and how we can implement a dynamic message retry policy. As long as you are willing to manage the deserialization of message content yourself (rather than have Azure Functions do it for you) you can gain access to the BrokeredMessage class and implement feature rich messaging solutions on the Azure platform.

Calling WCF client proxies in Azure Functions

Azure Functions allow developers to write discrete units of work and run these without having to deal with hosting or application infrastructure concerns. Azure Functions are Microsoft’s answer to server-less computing on the Azure Platform and together with Azure ServiceBus, Azure Logic Apps, Azure API Management (to name just a few) has become an essential part of the Azure iPaaS offering.

The problem

Integration solutions often require connecting legacy systems using aging protocols such as SOAP and WS-*. It's not all REST, hypermedia and OData out there in the enterprise integration world. Development frameworks like WCF help us deliver solutions rapidly by abstracting much of the boilerplate code away from us. Often these frameworks rely on custom configuration sections that are not available when developing solutions in Azure Functions. In Azure Functions (as of today at least) we only have access to the generic appSettings and connectionString sections of the configuration.
How do we bridge the gap and use the old boilerplate code we are familiar with in the new world of server-less integration?
So let's set the scene: our organisation consumes a number of legacy B2B services exposed as SOAP web services. We want to consume these services from an Azure Function but definitely do not want to write any low-level SOAP protocol code. We want to use the generated WCF client proxy so we implement the correct message contracts, transport and security protocols.
In this post we will show you how to use a generated WCF client proxy from an Azure Function.
Start by generating the WCF client proxy in a class library project using Add Service Reference, provide details of the WSDL and build the project.
Examine the generated bindings to determine the binding we need and what policies to configure in code within our Azure Function.
In our sample service above we need to create a basic http binding and configure basic authentication.
Create an Azure Function App using an appropriate template for your requirements and follow these steps to call your WCF client proxy:
Add the System.ServiceModel NuGet package to the function via the project.json file so we can create and configure the WCF bindings in our function
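The project.json might look something like this (the package name and version are illustrative; the exact package depends on the bindings you use):

```json
{
  "frameworks": {
    "net46": {
      "dependencies": {
        "System.ServiceModel.Http": "4.1.0"
      }
    }
  }
}
```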
Add the WCF client proxy assembly to the ./bin folder of our function. Use Kudu to create the folder and then upload your assembly using the View Files panel.
In your function, add references to both the System.ServiceModel assembly and your WCF client proxy assembly using the #r directive
When creating an instance of the WCF client proxy, instead of specifying the endpoint and binding in a config file, create these in code and pass to the constructor of the client proxy.
Your function will look something like this
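A sketch of such a function (run.csx); the proxy assembly name, client type, service operation and appSettings keys are all assumptions for illustration:

```csharp
#r "System.ServiceModel"
#r "LegacyServiceProxy.dll"   // hypothetical generated WCF client proxy in ./bin

using System.Configuration;
using System.Net;
using System.Net.Http;
using System.ServiceModel;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    // Recreate in code what the config file would normally supply:
    // a basic HTTP binding configured for basic authentication.
    var binding = new BasicHttpBinding(BasicHttpSecurityMode.Transport);
    binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Basic;

    var endpoint = new EndpointAddress(ConfigurationManager.AppSettings["LegacyService.Url"]);

    // LegacyServiceClient is the generated proxy type (name assumed).
    var client = new LegacyServiceClient(binding, endpoint);
    client.ClientCredentials.UserName.UserName = ConfigurationManager.AppSettings["LegacyService.User"];
    client.ClientCredentials.UserName.Password = ConfigurationManager.AppSettings["LegacyService.Password"];

    var result = await client.GetDataAsync();   // assumed service operation
    return req.CreateResponse(HttpStatusCode.OK, result);
}
```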

Lastly, add endpoint address and client credentials to appSettings of your Azure Function App.
Test the function using the built-in test harness to check the function executes OK.


The suite of integration services available on the Azure platform is developing rapidly, and composing your future integration platform on Azure is a compelling option in a maturing iPaaS marketplace.
In this post we have seen how we can continue to deliver legacy integration solutions using emerging integration-platform-as-a-service offerings.

Automate the archiving of your CloudHub application logs

CloudHub is MuleSoft's integration platform as a service (iPaaS) that enables the deployment and management of integration solutions in the cloud. Runtime Manager, CloudHub's management tool, provides an integrated set of logging tools that allow support and operations staff to monitor and troubleshoot the logs of deployed applications.
Currently, application log entries are kept for 30 days or until they reach a max size of 100 MB. Often we are required to keep these logs for greater periods of time for auditing or archiving purposes. Overly chatty applications (applications that write log entries frequently) may find their logs only covering a few days restricting the troubleshooting window even further. Runtime Manager allows portal users to manually download log files via the browser, however no automated solution is provided out-of-the-box.
The good news is, the platform provides both a command line tool and a management API that we can leverage. Leaving the CLI to one side for now, the platform's management API looks promising. Indeed, a search in Anypoint Exchange also yields a ready-built CloudHub Connector we could leverage. However, upon further investigation, the connector doesn't meet all our requirements: it does not appear to support different business groups and environments, so using it to download logs for applications deployed to non-default environments will not work (at least in the current version). The best approach is to consume the management APIs provided by the Anypoint Platform directly. RAML definitions have been made available, making consuming them within a Mule flow very easy.
Solution overview
In this post we’ll develop a CloudHub application that is triggered periodically to loop through a collection of target applications, connect to the Anypoint Management APIs and fetch the current application log for each deployed instance. The downloaded logs will be compressed and sent to an Amazon S3 bucket for archiving.
Putting the solution together:
We start by grabbing the RAML for both the Anypoint Access Management API and the Anypoint Runtime Manager API and bring them into the project. The Access Management API provides the authentication and authorisation operations to login and obtain an access token needed in subsequent calls to the Runtime Manager API. The Runtime Manager API provides the operations to enumerate the deployed instances of an application and download the application log.
Download and add the RAML definitions to the project by extracting them into the ~/src/main/api folder.
To consume these APIs we’ll use the HTTP connector so we need to define some global configuration elements that make use of the RAML definitions we just imported.
Note: Referencing these directly from Exchange currently throws some RAML parsing errors, so we download the definitions manually and reference our local copies. Obviously we'll need to update these as the API definitions change in the future.
To provide simple multi-value configuration support I have used a simple JSON structure to describe a collection of applications we need to iterate over.
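The JSON structure might look something like this (the field names are illustrative; the post's actual config isn't shown):

```json
[
  { "envId": "c1a2b3d4-0000-0000-0000-000000000001", "envName": "Production", "appName": "order-api-v1" },
  { "envId": "d4e5f6a7-0000-0000-0000-000000000002", "envName": "QA", "appName": "customer-api-v1" }
]
```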

Our flow then reads in this config and transforms this into a HashMap that we can then iterate over.
Note: Environment IDs can be gathered using the Runtime Manager API or the Anypoint CLI

Next, create our top level flow that is triggered periodically to read and parse our configuration setting into a collection that we can iterate over to download the application logs.

Now, we create a sub-flow that describes the process of downloading application logs for each deployed instance. We first obtain an access token using the Access Management API and present that token to the Runtime Manager API to gather details of all deployed instances of the application. We then iterate over that collection and call the Runtime Manager API to download the current application log for each deployed instance.

Next we add the sub-flows for consuming the Anypoint Platform APIs for each of the in-scope operations



In this last sub-flow, we perform an additional processing step of compressing (zip) the log file before sending to our configured Amazon S3 bucket.
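A sketch of that step using Mule 3's built-in compression transformer and the Amazon S3 connector's create-object operation (names, properties and the exact compression format are assumptions; the post uses zip, gzip is shown here as the built-in transformer):

```xml
<sub-flow name="compressAndArchiveLog">
    <!-- Compress the downloaded log before shipping it off -->
    <gzip-compress-transformer doc:name="Compress log"/>
    <s3:create-object config-ref="Amazon_S3" bucketName="${s3.bucketName}"
                      key="#[flowVars.appName + '/' + server.dateTime.format('yyyy-MM-dd') + '.log.gz']"
                      doc:name="Archive to S3"/>
</sub-flow>
```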
The full configuration for the workflow can be found here.
Once packaged and deployed to CloudHub we configure the solution to archive application logs for any deployed CloudHub app, even if they have been deployed into environments other than the one hosting the log archiver solution.
After running the solution for a day or so and checking the configured storage location we can confirm logs are being archived each day.
Known limitations:

  • The Anypoint Management API does not allow downloading application logs for a given date range. That is, each time the solution runs a full copy of the application log will be downloaded. The API does support an operation to query the logs for a given date range and return matching entries as a result set but that comes with additional constraints on result set size (number of rows) and entry size (message truncation).
  • The RAML definitions in Anypoint Exchange currently do not parse correctly in Anypoint Studio. As mentioned above, to work around this we download the RAML manually and bring it into the project ourselves.
  • Credentials supplied in configuration are in plain text. Suggest creating a dedicated Anypoint account and granting permissions to only the target environments.

In this post I have outlined a solution that automates the archiving of your CloudHub application log files to external cloud storage. The solution allows periodic scheduling and multiple target applications to be configured even if they exist in different CloudHub environments. Deploy this solution once to archive all of your deployed application logs.

DataWeave: Tips and tricks from the field

DataWeave (DW) has been part of the MuleSoft Anypoint Platform since v3.7.0 and has been a welcome enhancement providing an order of magnitude improvement in performance as well as increased mapping capability that enables more efficient flow design.
However, like most new features of this scope and size (i.e. a brand new transformation engine written from the ground up), early documentation was minimal and often we were left to fend for ourselves. At times even the simplest mapping scenarios could take an hour or so to solve, versus the five minutes they would have taken in DataMapper. But it pays to stick with it and push on through the adoption hurdle, as the payoffs are worth it in the end.
For those starting out with DataWeave here are some links to get you going:

In this post I will share some tips and tricks I have picked up from the field with the aim that I can give someone out there a few hours of their life back.

Tip #1 – Use the identity transform to check how DW performs its intermediate parsing

When starting any new DW transform, it pays to capture and understand how DW will parse the inputs and present it to the transform engine. This helps navigate some of the implicit type conversions going on as well as better understand the data structure being traversed in your map. To do this, start off by using the following identity transform with an output type of application/dw.
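The identity transform (DataWeave 1.0) is simply:

```
%dw 1.0
%output application/dw
---
payload
```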

Previewing a sample invoice xml yields the following output, which gives us insight into the internal data structure and type conversions performed by DW when parsing our sample payload.

and the output of the identity transform

Tip #2 – Handling conditional xml node lists

Mule developers who have been using DW even for a short time will be used to seeing these types of errors displayed in the editor:

Cannot coerce a :string to a :object

These often occur when we expect the input payload to be an array or complex data type, but a simple type (string in this case) is actually presented to the transform engine. In our invoice sample, this might occur when an optional xml nodelist contains no child nodes.

To troubleshoot this we would use the identity transform described above to gain insight into the intermediate form of our input payload. Notice the invoices element is no longer treated as a nodelist but rather a string.

We resolve this by checking whether ns0#invoices is of type object and providing alternative output should the collection be empty.
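A sketch of the fix in DataWeave 1.0 (the namespace URI and element names are placeholders based on the invoice sample):

```
%dw 1.0
%namespace ns0 http://example.org/invoice
%output application/xml
---
invoices: payload.ns0#invoices when payload.ns0#invoices is :object otherwise null
```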

Tip #3 – Explicitly setting the type of input reader DataWeave uses

Occasionally you will hit scenarios where the incoming message (input payload) doesn't have a mimeType assigned, or DW cannot infer a reader class from the mimeType that is present. In these cases we'll either get an exception thrown or we may get unpredictable behaviour from our transform. To avoid this, we should be in the habit of setting the mimeType of the input payload explicitly. At present we can't do this in the graphical editor; we need to edit the configuration xml directly and add the following attribute to the <dw:input-payload> element of our transform shape:
[code language="xml" gutter="false"]
<dw:input-payload doc:sample="xml_1.xml" mimeType="application/xml" />
[/code]

Tip #4 – Register custom formats as types (e.g. datetime formats)

Hopefully we are blessed to always be working against strongly typed message schemas where discovery and debate over the data formats of the output fields never happen… yeah right. Too often we need to tweak the output format of data fields a couple of times during the development cycle. In large transforms, this may mean applying the same change to several fields throughout the map, only to come back and change them again the next day. To save time and better organise our DW maps, we should declare common format types in the transform header and reference those throughout the map. If we need to tweak a format, we apply the change in one central location.
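A sketch of a custom type declared in the header (the type name, format and field are illustrative):

```
%dw 1.0
%output application/json
%type usDate = :date { format: "MM/dd/yyyy" }
---
{
  invoiceDate: payload.invoice.date as :usDate
}
```

Changing the date format later then means editing only the %type directive, not every field that uses it.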

Tip #5 – Move complex processing logic to MEL functions

Sometimes even the most straightforward of transformations can lead to messy chains of functions that are hard to read, difficult to troubleshoot and often error prone. When I find myself falling into these scenarios I look to pull out this logic and move it into a more manageable MEL function. This not only cleans up the DW map but also provides the opportunity to place debug points in our MEL function to assist with troubleshooting a misbehaving transform.
Place your MEL function in your flow’s configuration file at the start along with any other config elements.

Call your function as you would if you declared it inline in your map.
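Putting the two pieces together, a sketch (the function name and logic are illustrative):

```xml
<!-- Declared once in the flow's configuration file -->
<configuration doc:name="Configuration">
  <expression-language>
    <global-functions>
      def normalizeAccountName(name) {
        return name == null ? '' : name.trim().toUpperCase()
      }
    </global-functions>
  </expression-language>
</configuration>
```

The function can then be called by name from the map, e.g. `accountName: normalizeAccountName(payload.account.name)`.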

Tip #6 – Avoid the graphical drag and drop tool in the editor

One final tip, and something I find myself regularly doing, is avoiding the graphical drag and drop tool in the editor. I'm sure this will be fixed in later versions of DataWeave, but for now I find it creates untidy code that I often end up fighting with the editor to clean up. I would typically only use the graphical editor to map multiple fields en masse, then cut and paste the code into my text editor, clean it up and paste it back into DW. From then on, I am working entirely in the code window.
There we go, in this post I have outlined six tips that I hope will save at least one time-poor developer a couple of hours which could be much better spent getting on with the business of delivering integration solutions for their customers. Please feel free to contribute more tips in the comments section below.

Connecting Salesforce and SharePoint Online with Azure App Services

Back in November I wrote a post that demonstrated how we can integrate Salesforce and SharePoint Online using the MuleSoft platform and the MuleSoft .NET Connector. In this post I hope to achieve the same thing using the Azure App Services offering, recently released into preview.

Azure App Services

Azure App Services rebrands a number of familiar service types (Azure Websites, Mobile Services, and BizTalk Services) as well as adding a few new ones to the platform.


  • Web Apps – Essentially a rebranding of Azure Websites.
  • Mobile Apps – Built on the existing Azure Mobile Services with some additional features such as better deployment and scalability options.
  • Logic Apps – A new service on the platform that allows you to visually compose process flows using a suite of API Apps, both from the Marketplace and custom built.
  • API Apps – A special type of Web App that allows you to host and manage APIs to connect SaaS applications and on-premises applications, or implement custom business logic. The Azure Marketplace provides a number of ready-built API Apps that you can deploy as APIs in your solution.

Microsoft has also published a number of Apps to the Azure Marketplace to provide some ready-to-use functionality within each of these service types. A new Azure SDK has also been released that we can use to build and deploy our own custom App Services. Further details on Azure App Service can be found on the Azure Documentation site here.

Scenario Walkthrough

In this post we will see how we can create a Logic App that composes a collection of API Apps to implement the same SaaS integration solution as we did in the earlier post. To recap, we had the following integration scenario:

  • Customers (Accounts) are entered into Salesforce by the Sales team.
  • The team use O365 and SharePoint Online to manage customer and partner related documents.
  • When new customers are entered into Salesforce, corresponding document library folders need to be created in SharePoint.
  • Our interface needs to poll Salesforce for changes and create a new document library folder in SharePoint for this customer according to some business rules.
  • The business logic required to determine the target document library is based on the Salesforce Account type (Customer or Partner).

Azure Marketplace

As a first step, we should search the Azure Marketplace for available connectors that suit our requirements. A quick search yields some promising candidates…

Salesforce Connector – Published by Microsoft and supports Account entities and executing custom queries. Supported as an action within Logic Apps. Looking good.


SharePoint Online Connector – Published by Microsoft and supports being used as an action or trigger in Logic Apps. Promising, but upon further inspection we find that it doesn’t support creating folders within a document library. Looks like we’ll need to create our own custom API App to perform this.


Business Rules API – Again published by Microsoft and based on the BizTalk Business Rules Engine. Supports being used as an action in Logic Apps however only supports XML based facts which as we’ll see doesn’t play well with the default messaging format used in Logic Apps (JSON). Looks like we’ll either need to introduce additional Apps to perform the conversion (json > xml and xml > json) or create a custom API App to perform our business rules as well.


So it appears we can only utilise one of the out-of-the-box connectors. We will need to roll up our sleeves and create at least two custom API Apps to implement our integration flow. As the offering matures and community contributions to the Marketplace are supported, hopefully we will spend less time developing services and more time composing them. But for now let's move on and set up the Azure App Services we will need.

Azure API Apps

As we are creating our first Azure App Service, we need to first create a Resource Group and an Azure App Service Plan. Service Plans allow us to apply and manage resource tiers for each of our apps. We can then modify this Service Plan to scale resources up/down consistently across all the apps that share it.

We start by adding a new Logic App and creating a new Resource Group and Service Plan as follows:


Navigate to the newly created Resource Group. You should see two new resources in your group, your Logic App and an API Gateway that was automatically created for the resource group.


Tip: Pin the Resource Group to your Home screen (start board) for easy access as we switch back and forth between blades.

Next, add the Salesforce Connector API App from the Marketplace …


… and add it to our Resource Group using the same Service Plan as our Logic App. Ensure that in the package settings we have the Account entity configured. This is the entity in Salesforce we want to query.


Now we need to provision two API App Services to host our custom APIs. Let's add an API App Service for our custom BusinessRulesService API first, ensuring we select our existing Resource Group and Service Plan.


Then repeat for our custom SharePointOnlineConnector API App Service, again selecting our Resource Group and Service Plan. We should now see three API Apps added to our resource group.


Developing Custom API Apps

Currently, only the Salesforce Connector API has been deployed (as we created this from the Marketplace). We now need to develop our custom APIs and deploy them to our API App services we provisioned above.

You will need Visual Studio 2013 and the latest Azure SDK for .NET (2.5.1 or above) installed.

Business Rules Service

In Visual Studio, create a new ASP.NET Web Application for the custom BusinessRulesService and choose Azure API App (Preview)


Add a model to represent the SharePoint document library details we need our business rules to spit out

[code language="csharp"]
public class DocumentLibraryFolder
{
    public string DocumentLibrary { get; set; }
    public string FolderName { get; set; }
}
[/code]

Add an API controller that implements our business rules and returns an instance of our DocumentLibraryFolder class.

[code language="csharp"]
public class BusinessRulesController : ApiController
{
    public DocumentLibraryFolder Get(string accountType, string accountName)
    {
        System.Diagnostics.Trace.TraceInformation("Enter: Get");

        DocumentLibraryFolder docLib = new DocumentLibraryFolder();

        try
        {
            // Check for customer accounts
            if (accountType.Contains("Customer"))
                docLib.DocumentLibrary = "Customers";

            // Check for partner accounts
            if (accountType.Contains("Partner"))
                docLib.DocumentLibrary = "Partners";

            // Set folder name
            docLib.FolderName = accountName;
        }
        catch (Exception ex)
        {
            System.Diagnostics.Trace.TraceError(ex.Message);
        }

        return docLib;
    }
}
[/code]

With the implementation done, we should test it works locally (how else can we claim "it works on my machine", right?). The easiest way to test an API App is to enable the Swagger UI and use its built-in test harness. Navigate to App_Start\SwaggerConfig.cs and uncomment the lines shown below.


Run your API App and navigate to /swagger


Once we have confirmed it works, we need to deploy the API to the Azure API App service we provisioned above. Right click the BusinessRulesService project in Solution Explorer and select Publish. Sign-in using your Azure Service Administration credentials and select the target API App service from the drop down list.


Click Publish to deploy the BusinessRulesService to Azure

Tip: Once deployed it is good practice to test your API works in Azure. You could enable public access and test using the swagger UI test harness as we did locally, or you could generate a test client app in Visual Studio. Using swagger UI is quicker as long as you remember to revoke access once testing has completed as we don’t want to grant access to this API outside our resource group.

Grant public (anonymous) access in the Application Settings section of our API App and test the deployed version using the URL found on the Summary blade of the API App.



Custom SharePoint Online Connector

Since the out-of-the-box connector in the Marketplace didn’t support creating folders in document libraries, we need to create our own custom API App to implement this functionality. Using the same steps as above, create a new ASP.NET Web Application named SharePointOnlineConnector and choose the Azure API App (Preview) project template.

Add the same DocumentLibraryFolder model we used in our BusinessRulesService and an Api Controller to implement the connection to SharePoint and creation of the folder in the specified document library

[code language="csharp"]
public class DocumentLibraryController : ApiController
{
    #region Connection Details
    string url = "url to your sharepoint site";
    string username = "username";
    string password = "password";
    #endregion

    public void Post([FromBody] DocumentLibraryFolder folder)
    {
        try
        {
            using (var context = new Microsoft.SharePoint.Client.ClientContext(url))
            {
                // Provide client credentials
                System.Security.SecureString securePassword = new System.Security.SecureString();
                foreach (char c in password.ToCharArray()) securePassword.AppendChar(c);
                context.Credentials = new Microsoft.SharePoint.Client.SharePointOnlineCredentials(username, securePassword);

                // Get library
                var web = context.Web;
                var list = web.Lists.GetByTitle(folder.DocumentLibrary);
                var root = list.RootFolder;

                // Create folder
                root.Folders.Add(folder.FolderName);
                context.ExecuteQuery();
            }
        }
        catch (Exception ex)
        {
            System.Diagnostics.Trace.TraceError(ex.Message);
        }
    }
}
[/code]

Deploy to our Resource Group selecting our SharePointOnlineConnector API App Service.


Grant public access and test the API is working in Azure using swagger UI once again.


Note: I did have some issues with the Microsoft.SharePoint.Client libraries. Be sure to use v16.0.0.0 of these libraries to avoid the System.IO.FileNotFoundException: msoidcliL.dll issue (thanks Alexey Shcherbak for the fix).

Azure Logic App

With all our App Services deployed, let’s now focus on composing them into our logic flow within an Azure Logic App. Open our Logic App and navigate to the Triggers and Actions blade. From the toolbox on the right, drag the following Apps onto the designer:

  • Recurrence Trigger
  • Salesforce Connector
  • BusinessRulesService
  • SharePointOnlineConnector


Note: Only API Apps and Connectors in your Resource Group will show up in the toolbox on the right hand side as well as the Recurrence Trigger and HTTP Connector.

Configure Recurrence trigger

  • Frequency: Minutes
  • Interval: 1


Configure Salesforce Connector API

First we must authorise our Logic App to access our SFDC service domain. Click on Authorize and sign in using your SFDC developer credentials. Configure the Execute Query action to perform a select using the following SOQL statement:

[code language="sql"]
SELECT Id, Name, Type, LastModifiedDate FROM Account WHERE LastModifiedDate > YESTERDAY LIMIT 10
[/code]


The output of the Salesforce Connector API will be JSON, the default messaging format used in Logic Apps. The structure of the JSON data will look something like this:

[code language="javascript"]
{
  "totalSize": 10,
  "done": true,
  "records": [{
    "attributes": {
      "type": "Account",
      "url": "/services/data/v32.0/sobjects/Account/00128000002l9m6AAA"
    },
    "Id": "00128000002l9m6AAA",
    "Name": "GenePoint",
    "Type": "Customer - Channel",
    "LastModifiedDate": "2015-03-20T22:45:13+00:00"
  }, {
    "attributes": {
      "type": "Account",
      "url": "/services/data/v32.0/sobjects/Account/00128000002l9m7AAA"
    },
    "Id": "00128000002l9m7AAA",
    "Name": "United Oil & Gas, UK",
    "Type": "Customer - Direct",
    "LastModifiedDate": "2015-03-20T22:45:13+00:00"
  },
  … repeats …
  ]
}
[/code]

Notice the repeating “records” section. We’ll need to let downstream APIs be aware of these repeating items so they can get invoked once for every repeating item.

Configure Business Rules API

Setup a repeating item so that our Business Rules API gets called once for every account the Salesforce Connector outputs in the response body.

  • Click on the Settings icon and select Repeat over a list
  • Set Repeat to @body('salesforceconnector').result.records

Note: Here @body('salesforceconnector') references the body of the response (or output) of the API call. "result.records" references the elements within the JSON response structure, where "records" is the repeating collection we want to pass to the next API in the flow.

Configure call to the BusinessRules_Get action passing the Type and Name fields of the repeated item

  • Set accountType to @repeatItem().Type
  • Set accountName to @repeatItem().Name


The output of the BusinessRulesService will be a repeating collection of both inputs and outputs (discovered after much trial and error; exception details are pretty thin, as with most preview releases):

[code language=”javascript”]
{
  "repeatItems": [{
    "inputs": {
      "host": {
        "gateway": "",
        "id": "/subscriptions/72608e17-c89f-4822-8726-d15540e3b89c/resourcegroups/blogdemoresgroup/providers/Microsoft.AppService/apiapps/businessrulesservice"
      },
      "operation": "BusinessRules_Get",
      "parameters": {
        "accountType": "Customer - Channel",
        "accountName": "GenePoint"
      },
      "apiVersion": "2015-01-14",
      "authentication": {
        "scheme": "Zumo",
        "type": "Raw"
      }
    },
    "outputs": {
      "headers": {
        "pragma": "no-cache,no-cache",
        "x-ms-proxy-outgoing-newurl": ";accountName=GenePoint",
        "cache-Control": "no-cache",
        "set-Cookie": "ARRAffinity=451155c6c25a46b4af4ca2b73a70e702860aefb1d0efa48497d93db09e8a6ca1;Path=/;,ARRAffinity=451155c6c25a46b4af4ca2b73a70e702860aefb1d0efa48497d93db09e8a6ca1;Path=/;",
        "server": "Microsoft-IIS/8.0",
        "x-AspNet-Version": "4.0.30319",
        "x-Powered-By": "ASP.NET,ASP.NET",
        "date": "Sun, 19 Apr 2015 12:42:18 GMT"
      },
      "body": {
        "DocumentLibrary": "Customers",
        "FolderName": "GenePoint"
      }
    },
    "startTime": "2015-04-19T12:42:18.9797299Z",
    "endTime": "2015-04-19T12:42:20.0306243Z",
    "trackingId": "9c767bc2-150d-463a-9bae-26990c48835a",
    "code": "OK",
    "status": "Succeeded"
  }]
}
[/code]

We again need to define the appropriate repeating collection to present to the next API. In this case it will be the “outputs.body” element of the repeatItems collection.

Configure SharePointOnline Connector API

Setup a repeating item so that our SharePointOnline API gets called once for every item in the repeatItems collection.

  • Click on the Settings icon and select Repeat over a list
  • Set Repeat to @actions('businessrulesservice').outputs.repeatItems

Configure call to the DocumentLibrary_POST action setting the following parameters

  • Set DocumentLibrary to @repeatItem().outputs.body.DocumentLibrary
  • Set FolderName to @repeatItem().outputs.body.FolderName


Save the Logic App and verify no errors are displayed. Close the Triggers and Actions blade so we return to our Logic App Summary blade.

Testing Our Solution

Ensure our Logic App is enabled and verify it is being invoked every 1 minute by the Recurrence trigger.


Open a browser and navigate to your Salesforce Developer Account. Modify a number of Accounts ensuring we have a mix of Customer and Partner Account types.


Open a browser and navigate to your SharePoint Online Developer Account. Verify that folders for those modified accounts appear in the correct document libraries.



In this post we have seen how we can compose logic flows from a suite of API Apps, a mix of Azure Marketplace connectors and custom APIs, into a single integrated solution that connects disparate SaaS applications.

However, it is early days for Azure App Services and I struggled with its v1.0 limitations and poor IDE experience within the Azure Preview Portal. I would like to see a Logic App designer in Visual Studio, addition of flow control and expansion of the expression language to include support for more complex data types (perhaps even custom .NET classes).  I’m sure as the offering matures and community contributions to the Marketplace are enabled, we will be spending less time developing services and more time composing them hopefully with a much better user experience.

Hands Free VM Management with Azure Automation and Resource Manager – Part 2

In this two part series, I am looking at how we can leverage Azure Automation and Azure Resource Manager to schedule the shutting down of tagged Virtual Machines in Microsoft Azure.

  • In Part 1 we walked through tagging resources using the Azure Resource Manager PowerShell module
  • In Part 2 we will setup Azure Automation to schedule a runbook to execute nightly and shutdown tagged resources.

Azure Automation Runbook

At the time of writing, the tooling support around Azure Automation can politely be described as hybrid. For starters, there is no support for Azure Automation in the preview portal, and the Azure command line tools only support basic automation account and runbook management, leaving the current management portal as the most complete tool for the job.

As I mentioned in Part 1, Azure Automation does not yet support the new Azure Resource Manager PowerShell module out-of-the-box, so we need to import that module ourselves. We will then setup service management credentials that our runbook will use (recall the ARM module doesn’t use certificates anymore, we need to supply user account credentials).

We then create our PowerShell workflow to query for tagged virtual machine resources and ensure they are shutdown. Lastly, we setup our schedule and enable the runbook… let’s get cracking!

When we first create an Azure Automation account, the Azure PowerShell module is already imported as an Asset for us (v0.8.11 at the time of writing) as shown below.

Clean Azure Automation Screen.
To import the Azure Resource Manager module we need to zip it up and upload it to the portal using the following process. In Windows Explorer on your PC:

  1. Navigate to the Azure PowerShell modules folder (typically C:\Program Files (x86)\Microsoft SDKs\Azure\PowerShell\ResourceManager\AzureResourceManager)
  2. Zip the AzureResourceManager sub-folder.

Local folder to zip.
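If you prefer to script the zip step, something like this should do it (a sketch assuming the Compress-Archive cmdlet from PowerShell 5 is available; any zip tool works equally well):

[code language=”PowerShell”]
# Zip the AzureResourceManager module folder ready for upload
$src = "C:\Program Files (x86)\Microsoft SDKs\Azure\PowerShell\ResourceManager\AzureResourceManager"
Compress-Archive -Path $src -DestinationPath "$env:TEMP\AzureResourceManager.zip"
[/code]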

In the Automation pane of the current Azure Portal:

  1. Select an existing Automation account (or create a new one)
  2. Navigate to the Asset tab and click the Import Module button
  3. Browse to the file you created above.

ARM Module Import

After the import completes (this usually takes a few minutes) you should see the Azure Resource Manager module imported as an Asset in the portal.

ARM Module Imported

We now need to setup the credentials the runbook will use and for this we will create a new user in Azure Active Directory (AAD) and add that user as a co-administrator of our subscription (we need to query resource groups and shutdown our virtual machines).

In the Azure Active Directory pane:

  1. Add a new user of type New user in your organisation
  2. Enter a meaningful user name to distinguish it as an automation account
  3. Select User as the role
  4. Generate a temporary password, which we’ll need to change later.

Tell us about this user.

Now go to the Settings pane and add the new user as a co-administrator of your subscription:

Add user as co-admin.

Note: Azure generated a temporary password for the new user. Log out and sign in as the new user to get prompted to change the password and confirm the user has service administration permissions on your subscription.

We now need to add our user’s credentials to our Azure Automation account assets.

In the Automation pane:

  1. Select the Automation account we used above
  2. Navigate to the Asset tab and click on the Add Setting button on the bottom toolbar
  3. Select Add Credential
  4. Choose Windows PowerShell Credential from the type dropdown
  5. Enter a meaningful name for the asset (e.g. runbook-account)
  6. Enter username and password of the AAD user we created above.

Runbook credentials

With the ARM module imported and credentials setup we can now turn to authoring our runbook. The completed runbook script can be found on GitHub. Download the script and save it locally.

Open the script in PowerShell ISE and change the Automation settings to match the name you gave to your Credential asset created above and enter your Azure subscription name.

[code language=”PowerShell”]
workflow AutoShutdownWorkflow
{
    #$VerbosePreference = "continue"

    # Automation Settings
    $pscreds = Get-AutomationPSCredential -Name "runbook-account"
    $subscriptionName = "[subscription name here]"
    $tagName = "autoShutdown"

    # Authenticate using WAAD credentials
    Add-AzureAccount -Credential $pscreds | Write-Verbose

    # Set subscription context
    Select-AzureSubscription -SubscriptionName $subscriptionName | Write-Verbose

    Write-Output "Checking for resources with $tagName flag set…"

    # Get virtual machines within tagged resource groups
    $vms = Get-AzureResourceGroup -Tag @{ Name=$tagName; Value=$true } | `
        Get-AzureResource -ResourceType "Microsoft.ClassicCompute/virtualMachines"

    # Shutdown all VMs tagged
    $vms | ForEach-Object {
        Write-Output "Shutting down $($_.Name)…"
        # Gather resource details
        $resource = $_
        # Stop VM
        Get-AzureVM | ? { $_.Name -eq $resource.Name } | Stop-AzureVM -Force
    }

    Write-Output "Completed $tagName check"
}
[/code]

Walking through the script, the first thing we do is gather the credentials we will use to manage our subscription. We then authenticate using those credentials and select the Azure subscription we want to manage. Next we gather all virtual machine resources in resource groups that have been tagged with autoShutdown.

We then loop through each VM resource and force a shutdown. One thing you may notice about our runbook is that we don’t explicitly “switch” between the Azure module and Azure Resource Management module as we must when running in PowerShell.

This behaviour may change over time as the Automation service is enhanced to support ARM out-of-the-box, but for now the approach appears to work fine… at least on my “cloud” [developer joke].

We should now have our modified runbook script saved locally and ready to be imported into the Azure Automation account we used above. We will use the Azure Service Management cmdlets to create and publish the runbook, create the schedule asset and link it to our runbook.

Copy the following script into a PowerShell ISE session and configure it to match your subscription and location of the workflow you saved above. You may need to refresh your account credentials using Add-AzureAccount if you get an authentication error.

[code language=”PowerShell”]
$automationAccountName = "[your account name]"
$runbookName = "autoShutdownWorkflow"
$scriptPath = "c:\temp\AutoShutdownWorkflow.ps1"
$scheduleName = "ShutdownSchedule"

# Create a new runbook
New-AzureAutomationRunbook -AutomationAccountName $automationAccountName -Name $runbookName

# Import the autoShutdown runbook from a file
Set-AzureAutomationRunbookDefinition -AutomationAccountName $automationAccountName -Name $runbookName -Path $scriptPath -Overwrite

# Publish the runbook
Publish-AzureAutomationRunbook -AutomationAccountName $automationAccountName -Name $runbookName

# Create the schedule asset
New-AzureAutomationSchedule -AutomationAccountName $automationAccountName -Name $scheduleName -StartTime $([DateTime]::Today.Date.AddDays(1).AddHours(1)) -DayInterval 1

# Link the schedule to our runbook
Register-AzureAutomationScheduledRunbook -AutomationAccountName $automationAccountName -Name $runbookName -ScheduleName $scheduleName
[/code]

Switch over to the portal and verify your runbook has been created and published successfully…

Runbook published.

…drilling down into details of the runbook, verify the schedule was linked successfully as well…

Linked Schedule.

To start your runbook (outside of the schedule) navigate to the Author tab and click the Start button on the bottom toolbar. Wait for the runbook to complete and click on the View Job icon to examine the output of the runbook.

Manual Start

Run Output

Note: Create a draft version of your runbook to troubleshoot failing runbooks using the built in testing features. Refer to this link for details on testing your Azure Automation runbooks.

Our schedule will now execute the runbook each night to ensure virtual machine resources tagged with autoShutdown are always shutdown. Navigating to the Dashboard tab of the runbook will display the runbook history.

Runbook Dashboard
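We can also check the job history from PowerShell rather than the portal; the Automation cmdlets in the service management module should do the trick (using the same variables as the deployment script above):

[code language=”PowerShell”]
# List recent jobs for our runbook and their outcomes
Get-AzureAutomationJob -AutomationAccountName $automationAccountName `
    -RunbookName $runbookName | `
    Select-Object Status, StartTime, EndTime
[/code]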


1. The AzureResourceManager module is not officially supported yet out-of-the-box so a breaking change may come down the release pipeline that will require our workflow to be modified. The switch behaviour will be the most likely candidate. Watch that space!

2. Azure Automation is not available in all Azure Regions. At the time of writing it is available in East US, West EU, Japan East and Southeast Asia. However, region affinity isn’t a primary concern as we are merely invoking the service management API where our resources are located. Where we host our automation service is not as important from a performance point of view, but may factor into organisational security policy constraints.

3. Azure Automation comes in two tiers (Free and Basic). Free provides 500 minutes of job execution per month. The Basic tier charges $0.002 USD a minute for unlimited minutes per month (e.g. 1,000 job execution mins will cost $2). Usage details will be displayed on the Dashboard of your Azure Automation account.

Account Usage

In this two part post we have seen how we can tag resource groups to provide more granular control when managing resource lifecycles and how we can leverage Azure Automation to schedule the shutting down of these tagged resources to operate our infrastructure in Microsoft Azure more efficiently.

Hands Free VM Management with Azure Automation and Resource Manager – Part 1

Over the past six months, Microsoft have launched a number of features in Azure to enable you to better manage your resources hosted there.

In this two part series, I will show how we can leverage two of these new features – Azure Automation and Azure Resource Manager – to schedule the shutting down of tagged Virtual Machines in Microsoft Azure.

  • In Part 1 we will walk through tagging resources using the Azure Resource Manager features and
  • In Part 2 we will setup Azure Automation to schedule a runbook to execute nightly and shutdown tagged VM resources.

About Azure Resource Manager and Azure Automation

Azure Resource Manager (ARM) provides the capability to create reusable deployment templates and provide a common way to manage the resources that make up your deployment. ARM is the foundation of the ALM and DevOps tooling being developed by Microsoft and investments in it are ongoing.

Another key service in this space is the Azure Automation service which provides the ability to create, monitor, manage, and deploy resources using runbooks based on Windows PowerShell workflows. Automation assets can be defined to share credentials, modules, schedules and runtime configuration between runbooks and a runbook gallery provides contributions from both Microsoft and the community that can be imported and used within your Automation account.

Operating infrastructure effectively in the cloud is the new holy grail of today’s IT teams: scaling out to meet demand, back in to reduce unnecessary operating costs, running only when needed and de-allocating when not. Automating elastic scale is one area cloud providers and 3rd party tooling ISVs have invested in, and the capability is pretty solid.

The broader resource lifecycle management story is not so compelling. Shutting down resources when they are not needed is still largely a manual process unless heavy investments are made into infrastructure monitoring and automation tools or via 3rd party SaaS offerings such as Azure Watch.


We will start by identifying and tagging the resource groups we want to ensure are automatically shutdown overnight. Following this we will author an Azure Automation runbook that looks for tagged resources and shuts them down, configuring it to run every night.

Setting this up is not as straightforward as you would think (and we will need to bend the rules a little) so I will spend some time going through these steps in detail.

Tagging Resource Groups

Azure Resource Management features are currently only available via the preview portal and command line tools (PowerShell and cross-platform CLI). Using the preview portal we can manually create tags and assign them to our resource groups. A decent article on the Azure site walks you through how to perform these steps. Simon also introduced the ARM command line tools in his post a few months ago.

Performing these tasks in PowerShell is a little different to previous service management tasks you may be used to. For starters, the Azure Resource Manager cmdlets cannot be used in the same PS session as the “standard” Azure cmdlets; we must now switch between the two modes. The other main difference is the way we authenticate to the ARM API. We no longer use service management certificates with expiry periods of two to three years, but user accounts backed by Azure Active Directory (AAD). These user tokens have expiry periods of two to three days, so we must constantly refresh them.

Note: Switch “modes” just removes one set of Azure modules and imports others. So switching to Azure Resource Manager mode removes the Azure module and imports the Azure Resource Manager module and Azure Profile module. We need this understanding when we setup our Azure Automation later on.
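Since switching modes simply swaps which modules are loaded in the current session, we can flip back whenever we need the service management cmdlets again:

[code language=”PowerShell”]
# Return to the "standard" service management cmdlets
Switch-AzureMode AzureServiceManagement
[/code]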

Let’s walk through the steps to switch into ARM mode, add our user account and tag the resource groups we want to automate.

Before you start, ensure you have the latest drop of the Azure PowerShell cmdlets from GitHub. With the latest version installed, switch to Azure Resource Manager mode and add your AAD account to your local PS profile

[code language=”PowerShell”]
# Switch to Azure Resource Manager mode
Switch-AzureMode AzureResourceManager

# Sign in to add our account to our PS profile
Add-AzureAccount

# Check our profile has the newly gathered credentials
Get-AzureAccount
[/code]

Add-AzureAccount will prompt us to sign-in using either our Microsoft Account or Organisational account with service management permissions. Get-AzureAccount will list the profiles we have configured locally.

Now that we are in ARM mode and authenticated, we can examine and manage our tag taxonomy using the Get-AzureTag Cmdlet

[code language=”PowerShell”]
# Display current taxonomy
Get-AzureTag
[/code]
This displays all tags in our taxonomy and how many resource groups they are assigned to. To add tags to our taxonomy we use the New-AzureTag cmdlet. To remove, we use the Remove-AzureTag cmdlet. We can only remove tags that are not assigned to resources (that is, have a zero count).

In this post we want to create and use a tag named “autoShutdown” that we can then assign a value of “True” against resource groups that we want to automatically shutdown.

[code language=”PowerShell”]
# Add Tag to Taxonomy
New-AzureTag -Name "autoShutdown"
[/code]
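Should we later retire the tag (once no resource groups are assigned to it), removal is just as simple:

[code language=”PowerShell”]
# Remove tag from taxonomy (only succeeds when its assignment count is zero)
Remove-AzureTag -Name "autoShutdown"
[/code]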


Now, let’s tag our resource groups. This can be performed using the preview portal as mentioned above, but if you have many resource groups to manage PowerShell is still the best approach. To manage resource groups using the ARM module we use the following cmdlets


[code language=”PowerShell”]
# Assign tag to DEV/TEST resource groups
Get-AzureResourceGroup | ? { $_.ResourceGroupName -match "dev-cloud" } | `
Set-AzureResourceGroup -Tag @( @{ Name="autoShutdown"; Value=$true } )
[/code]

The above statement finds all resource groups that contain the text “dev-cloud” in the name (a naming convention I have adopted, yours will be different) and sets the autoShutdown tag with a value of True on each. If we list the resource groups using the Get-AzureResourceGroup cmdlet we can see the results of the tagging.
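To double-check which resource groups now carry the tag we can filter on it directly, just as our runbook will in Part 2:

[code language=”PowerShell”]
# List resource groups tagged with autoShutdown = True
Get-AzureResourceGroup -Tag @{ Name="autoShutdown"; Value=$true } | `
    Select-Object ResourceGroupName, Location
[/code]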


Note: Our resource group contains many resources. However, for our purposes we are only interested in managing our virtual machines. These resource types shown above will come in handy when we look to filter the tagged resources to only return our VMs.

We can also un-tag resource groups using the following statement

[code language=”PowerShell”]
# Reset tags on DEV/TEST resource groups
Get-AzureResourceGroup | ? { $_.ResourceGroupName -match "dev-cloud" } | `
Set-AzureResourceGroup -Tag @{}
[/code]

We can see these tagged resource groups in the preview portal as well…


…and if we drill into one of the resource groups we can see our tag has the value True assigned


In this post we have tagged our resource groups so that they can be managed separately. In Part 2 of the post, we will move on to creating an Azure Automation runbook to automate the shutting down of our tagged resources on a daily schedule.


At the time of writing there is no support in the preview portal for tagging individual resources directly (just Resource Groups). The PowerShell Cmdlets suggest it is possible however I always get an error indicating setting the tags property is not permitted yet. This may change in the near future and provide more granular control.

Mule ESB DEV/TEST environments in Microsoft Azure

Agility in delivery of IT services is what cloud computing is all about. Week in, week out, projects on-board and wind-up, developers come and go. This places enormous stress on IT teams with limited resourcing and infrastructure capacity to provision developer and test environments. Leveraging public cloud for integration DEV/TEST environments is not without its challenges though. How do we develop our interfaces in the cloud yet retain connectivity to our on-premises line-of-business systems?

In this post I will demonstrate how we can use Microsoft Azure to run Mule ESB DEV/TEST environments using point-to-site VPNs for connectivity between on-premises DEV resources and our servers in the cloud.

MuleSoft P2S


A point-to-site VPN allows you to securely connect an on-premises server to your Azure Virtual Network (VNET). Point-to-site connections don’t require a VPN device. They use the Windows VPN client and must be started manually whenever the on-premises server (point) wishes to connect to the Azure VNET (site). Point-to-site connections use secure socket tunnelling protocol (SSTP) with certificate authentication. They provide a simple, secure connectivity solution without having to involve the networking boffins to stand up expensive hardware devices.

I will not cover the setup of the Azure Point-to-site VPN in this post, there are a number of good articles already covering the process in detail including this great MSDN article.

A summary of steps to create the Point-to-site VPN are as follows:

  1. Create an Azure Virtual Network (I named mine AUEastVNet and used address range
  2. Configure the Point-to-site VPN client address range  (I used
  3. Create a dynamic routing gateway
  4. Configure certificates (upload root cert to portal, install private key cert on on-premise servers)
  5. Download and install client package from the portal on on-premise servers

Once we have established the point-to-site VPN we can verify connectivity by running ipconfig /all and checking we have been assigned an IP address from the range we configured on our VNET.

IP address assigned from P2S client address range

Testing our Mule ESB Flow using On-premises Resources

In our demo, we want to test the interface we developed in the cloud with on-premises systems just as we would if our DEV environment was located within our own organisation.

Mule ESB Flow

The flow above listens for HL7 messages using the TCP based MLLP transport and processes using two async pipelines. The first pipeline maps the HL7 message into an XML message for a LOB system to consume. The second writes a copy of the received message for auditing purposes.

MLLP connector showing host running in the cloud

The HL7 MLLP connector is configured to listen on port 50609 of the network interface used by the Azure VNET (

FILE connector showing on-premise network share location

The first FILE connector is configured to write the output of the xml transformation to a network share on our on-premises server (across the point-to-site VPN). Note the IP address used is the one assigned by the point-to-site VPN connection (from the client IP address range configured on our Azure VNET)

P2S client IP address range

To test our flow we launch a MLLP client application on our on-premises server and establish a connection across the point-to-site VPN to our Mule ESB flow running in the cloud. We then send a HL7 message for processing and verify we receive a HL7 ACK and that the transformed xml output message has also been written to the configured on-premises network share location.
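For reference, a minimal HL7 v2 ADT message of the kind being sent over MLLP (segment values are made up for illustration):

[code language=”text”]
MSH|^~\&|SENDINGAPP|SENDINGFAC|RECEIVINGAPP|RECEIVINGFAC|20150419124500||ADT^A01|MSG00001|P|2.4
PID|1||123456^^^MRN||DOE^JOHN||19700101|M
[/code]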

Establishing the connection across the point-to-site VPN…

On-premises MLLP client showing connection to host running in the cloud

Sending the HL7 request and receiving an HL7 ACK response…

MLLP client showing successful response from Mule flow

Verifying the transformed xml message is written to the on-premises network share…

On-premises network share showing successful output of transformed message


  • Connectivity – Point-to-site VPNs provide a relatively simple connectivity option that allows traffic between your Azure VNET (site) and your nominated on-premise servers (the point inside your private network). You may already be running workloads in Azure and have a site-to-site VPN or MPLS connection between the Azure VNET and your network, and as such do not require establishing the point-to-site VPN connection. You can connect up to 128 on-premise servers to your Azure VNET using point-to-site VPNs.
  • DNS – To provide name resolution of servers in Azure to on-premise servers OR name resolution of on-premise servers to servers in Azure, you will need to configure your own DNS servers with the Azure VNET. The IP address of on-premise servers will likely change every time you establish the point-to-site VPN, as the IP address is assigned from a range of IP addresses configured on the Azure VNET.
  • Web Proxies – SSTP does not support the use of authenticated web proxies. If your organisation uses a web proxy that requires HTTP authentication then the VPN client will have issues establishing the connection. You may need the network boffins after all to bypass the web proxy for outbound connections to your Azure gateway IP address range.
  • Operating System Support – Point-to-site VPNs only support the use of the Windows VPN client on Windows 7/Windows 2008 R2 64 bit versions and above.


In this post I have demonstrated how we can use Microsoft Azure to run a Mule ESB DEV/TEST environment using point-to-site VPNs for simple connectivity between on-premises resources and servers in the cloud. Provisioning integration DEV/TEST environments on demand increases infrastructure agility, removes those long lead times whenever projects kick-off or resources change and enforces a greater level of standardisation across the team which all improve the development lifecycle, even for integration projects!

Migrating Azure Virtual Machines to another Region

I have a number of DEV/TEST Virtual Machines (VMs) deployed to Azure Regions in Southeast Asia (Singapore) and West US, as these were the closest to those of us living in Australia. Now that the new Azure Regions in Australia have been launched, it’s time to start migrating those VMs closer to home. Manually moving VMs between Regions is pretty straightforward and a number of articles already exist outlining the manual steps.

To migrate an Azure VM to another Region

  1. Shutdown the VM in the source Region
  2. Copy the underlying VHDs to storage accounts in the new Region
  3. Create OS and Data disks in the new Region
  4. Re-create the VM in the new Region.

Simple enough, but it involves tedious manual configuration, switching between tools and long waits while tens or hundreds of GBs are transferred between Regions.

What’s missing is the automation…

Automating the Migration

In this post I will share a Windows PowerShell script that automates the migration of Azure Virtual Machines between Regions. I have made the full script available via GitHub.

Here is what we are looking to automate:


  1. Shutdown and Export the VM configuration
  2. Setup async copy jobs for all attached disks and wait for them to complete
  3. Restore the VM using the saved configuration.

The Migrate-AzureVM.ps1 script assumes the following:

  • Azure Service Management certificates are installed on the machine running the script for both source and destination Subscriptions (same Subscription for both is allowed)
  • Azure Subscription profiles have been created on the machine running the script. Use Get-AzureSubscription to check.
  • Destination Storage accounts, Cloud Services, VNets etc. already have been created.
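A quick sanity check of those prerequisites before kicking off:

[code language=”PowerShell”]
# Confirm the subscription profiles this machine knows about
Get-AzureSubscription | Select-Object SubscriptionName, IsCurrent
[/code]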

The script accepts the following input parameters:

[code language=”powershell” gutter=”false”]
.\Migrate-AzureVM.ps1 -SourceSubscription "MySub" `
-SourceServiceName "MyCloudService" `
-VMName "MyVM" `
-DestSubscription "AnotherSub" `
-DestStorageAccountName "mydeststorage" `
-DestServiceName "MyDestCloudService" `
-DestVNETName "MyRegionalVNet" `
-IsReadOnlySecondary $false `
-Overwrite $false `
-RemoveDestAzureDisk $false
[/code]

  • SourceSubscription – Name of the source Azure Subscription
  • SourceServiceName – Name of the source Cloud Service
  • VMName – Name of the VM to migrate
  • DestSubscription – Name of the destination Azure Subscription
  • DestStorageAccountName – Name of the destination Storage Account
  • DestServiceName – Name of the destination Cloud Service
  • DestVNETName – Name of the destination VNet (blank if none used)
  • IsReadOnlySecondary – Indicates if we are copying from the source storage account’s read-only secondary location
  • Overwrite – Indicates if we overwrite if the VHD already exists in the destination storage account
  • RemoveDestAzureDisk – Indicates if we remove an Azure Disk if it already exists in the destination disk repository

To ensure that the Virtual Machine configuration is not lost (and to avoid having to re-create it by hand) we must first shutdown the VM and export the configuration, as shown in the PowerShell snippet below.

[code language=”powershell” gutter=”false”]
# Set source subscription context
Select-AzureSubscription -SubscriptionName $SourceSubscription -Current

# Stop VM
Stop-AzureVMAndWait -ServiceName $SourceServiceName -VMName $VMName

# Export VM config to temporary file
$exportPath = "{0}\{1}-{2}-State.xml" -f $ScriptPath, $SourceServiceName, $VMName
Export-AzureVM -ServiceName $SourceServiceName -Name $VMName -Path $exportPath
[/code]

Once the VM configuration is safely exported and the machine shutdown we can commence copying the underlying VHDs for the OS and any data disks attached to the VM. We’ll want to queue these up as jobs and kick them off asynchronously as they will take some time to copy across.

[code language=”powershell” gutter=”false”]
# Get list of azure disks that are currently attached to the VM
$disks = Get-AzureDisk | ? { $_.AttachedTo.RoleName -eq $VMName }

# Loop through each disk
foreach ($disk in $disks)
{
    try
    {
        # Start the async copy of the underlying VHD to
        # the corresponding destination storage account
        $copyTasks += Copy-AzureDiskAsync -SourceDisk $disk
    }
    catch {} # Support for existing VHD in destination storage account
}

# Monitor async copy tasks and wait for all to complete
[/code]
Tip: You’ll probably want to run this overnight. If you are copying between Storage Accounts within the same Region, copy times can vary between 15 mins and a few hours; it all depends on which storage cluster the accounts reside on. Michael Washam provides a good explanation of this and shows how you can check if your accounts live on the same cluster. Between Regions will always take longer (and incur data egress charges, don’t forget!)… see below for a nice work-around that could save you heaps of time if you happen to be migrating within the same Geo.

You’ll notice the script also supports being re-run as you’ll have times when you can’t leave the script running during the async copy operation. A number of switches are also provided to assist when things might go wrong after the copy has completed.

Now that we have our VHDs in our destination Storage Account we can begin putting our VM back together again.

We start by re-creating the logical OS and Azure Data disks that take a lease on our underlying VHDs. So we don’t get clashes, I use a convention based on Cloud Service name (which must be globally unique), VM name and disk number.

[code language=”powershell” gutter=”false”]
# Set destination subscription context
Select-AzureSubscription -SubscriptionName $DestSubscription -Current

# Load VM config
$vmConfig = Import-AzureVM -Path $exportPath

# Loop through each disk again
$diskNum = 0
foreach ($disk in $disks)
{
    # Construct new Azure disk name as [DestServiceName]-[VMName]-[Index]
    $destDiskName = "{0}-{1}-{2}" -f $DestServiceName, $VMName, $diskNum

    Write-Log "Checking if $destDiskName exists…"

    # Check if an Azure Disk already exists in the destination subscription
    $azureDisk = Get-AzureDisk -DiskName $destDiskName `
                               -ErrorAction SilentlyContinue `
                               -ErrorVariable LastError
    if ($azureDisk -ne $null)
    {
        Write-Log "$destDiskName already exists"

        if ($RemoveDisk -eq $true)
        {
            # Remove the disk from the repository
            Remove-AzureDisk -DiskName $destDiskName

            Write-Log "Removed AzureDisk $destDiskName"
            $azureDisk = $null
        }
        # else keep the disk and continue
    }

    # Determine media location
    $container = ($disk.MediaLink.Segments[1]).Replace("/","")
    $blobName = $disk.MediaLink.Segments | Where-Object { $_ -like "*.vhd" }
    $destMediaLocation = "http://{0}.blob.core.windows.net/{1}/{2}" -f $DestStorageAccountName, $container, $blobName

    # Attempt to add the Azure OS or data disk
    if ($disk.OS -ne $null -and $disk.OS.Length -ne 0)
    {
        # OS disk
        if ($azureDisk -eq $null)
        {
            $azureDisk = Add-AzureDisk -DiskName $destDiskName `
                                       -MediaLocation $destMediaLocation `
                                       -Label $destDiskName `
                                       -OS $disk.OS `
                                       -ErrorAction SilentlyContinue `
                                       -ErrorVariable LastError
        }

        # Update VM config
        $vmConfig.OSVirtualHardDisk.DiskName = $azureDisk.DiskName
    }
    else
    {
        # Data disk
        if ($azureDisk -eq $null)
        {
            $azureDisk = Add-AzureDisk -DiskName $destDiskName `
                                       -MediaLocation $destMediaLocation `
                                       -Label $destDiskName `
                                       -ErrorAction SilentlyContinue `
                                       -ErrorVariable LastError
        }

        # Update VM config
        # Match on source disk name and update with dest disk name
        $vmConfig.DataVirtualHardDisks.DataVirtualHardDisk | ? { $_.DiskName -eq $disk.DiskName } | ForEach-Object {
            $_.DiskName = $azureDisk.DiskName
        }
    }

    # Next disk number
    $diskNum = $diskNum + 1
}
[/code]

[code language=”powershell” gutter=”false”]
# Restore VM
$existingVMs = Get-AzureService -ServiceName $DestServiceName | Get-AzureVM
if ($existingVMs -eq $null -and $DestVNETName.Length -gt 0)
{
    # Restore first VM to the cloud service specifying VNet
    $vmConfig | New-AzureVM -ServiceName $DestServiceName -VNetName $DestVNETName -WaitForBoot
}
else
{
    # Restore VM to the cloud service
    $vmConfig | New-AzureVM -ServiceName $DestServiceName -WaitForBoot
}

# Startup VM
Start-AzureVMAndWait -ServiceName $DestServiceName -VMName $VMName
[/code]

For those of you looking at migrating VMs between Regions within the same Geo and have GRS enabled, I have also provided an option to use the secondary storage location of the source storage account.

To support this you will need to enable RA-GRS (read access) and wait a few minutes for access to be made available by the storage service. Copying your VHDs will be very quick (in comparison to egress traffic) as the copy operation will use the secondary copy in the same region as the destination. Nice!

Enabling RA-GRS can be done at any time but you will be charged for a minimum of 30 days at the RA-GRS rate even if you turn it off after the migration.

[code language=”powershell” gutter=”false”]
# Check if we are copying from a RA-GRS secondary storage account
if ($IsReadOnlySecondary -eq $true)
{
    # Append "-secondary" to the media location URI to reference the RA-GRS copy
    $sourceUri = $sourceUri.Replace($srcStorageAccount, "$srcStorageAccount-secondary")
}
[/code]
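To make the substitution concrete, here is what that rewrite does to a typical VHD URI (the account and blob names below are made up for illustration):

[code language=”powershell” gutter=”false”]
# Illustrative values only
$srcStorageAccount = "mystorageacct"
$sourceUri = "http://mystorageacct.blob.core.windows.net/vhds/myvm-disk0.vhd"

$sourceUri.Replace($srcStorageAccount, "$srcStorageAccount-secondary")
# → http://mystorageacct-secondary.blob.core.windows.net/vhds/myvm-disk0.vhd
[/code]

The secondary endpoint simply inserts "-secondary" into the account host name, which is why a plain string replace on the account name is enough.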

Don’t forget to clean up your source Cloud Services and VHDs once you have tested the migrated VMs are running fine so you don’t incur ongoing charges.


In this post I have walked through the main sections of a Windows PowerShell script I have developed that automates the migration of an Azure Virtual Machine to another Azure data centre. The full script has been made available in GitHub. The script also supports a number of other migration scenarios (e.g. cross Subscription, cross Storage Account, etc.) and will be a handy addition to your Microsoft Azure DevOps Toolkit.

Connecting Salesforce and SharePoint Online with MuleSoft – Nothing but NET

Often enterprises will choose their integration platform based on the development platform required to build integration solutions. That is, Java shops typically choose Oracle ESB, JBoss, IBM WebSphere or MuleSoft, to name but a few. Microsoft shops have less choice and typically choose to build custom .NET solutions or use Microsoft BizTalk Server. Choosing an integration platform based on the development platform should not be a driving factor and may limit your options.

Your integration platform should be focused on interoperability. It should support common messaging standards, transport protocols, integration patterns and adapters that allow you to connect to a wide range of line of business systems and SaaS applications. Your integration platform should provide frameworks and pattern based templates to reduce implementation costs and improve the quality and robustness of your interfaces.

Your integration platform should allow your developers to use their development platform of choice…no…wait…what!?!

In this post I will walk through integrating Salesforce and SharePoint Online using the Java based Mule ESB platform while writing nothing but .NET code.

MuleSoft .NET Connector

The .NET connector allows developers to use .NET code in their flows enabling you to call existing .NET Framework assemblies, 3rd party .NET assemblies or custom .NET code. Java Native Interface (JNI) is used to communicate between the Java Virtual Machine (JVM), in which your MuleSoft flow is executing, and the Microsoft Common Language Runtime (CLR) where your .NET code is executed.


To demonstrate how we can leverage the MuleSoft .NET Connector in our flows, I have put together a typical SaaS cloud integration scenario.

Our Scenario

  • Customers (Accounts) are entered into Salesforce by the Sales team
  • The team use O365 and SharePoint Online to manage customer and partner related documents.
  • When new customers are entered into Salesforce, corresponding document library folders need to be created in SharePoint.
  • Our interface polls Salesforce for changes and needs to create a new document library folder in SharePoint for this customer according to some business rules
  • Our developers prefer to use .NET and Visual Studio to implement the business logic required to determine the target document library based on Account type (Customer or Partner)

Our MuleSoft flow looks something like this:


  • Poll Salesforce for changes based on a watermark timestamp
  • For each change detected:
    • Log and update the last time we sync’d
    • Call our .NET business rules to determine target document library
    • Call our .NET helper class to create the folder in SharePoint

Salesforce Connector

We configure the MuleSoft Salesforce Cloud Connector to point to our Salesforce environment and query for changes to the Accounts entity. Using DataSense, we configure which data items to pull back into our flow to form the payload message we wish to process.


Business Rules

Our business rules are implemented using a bog-standard .NET class library that checks the account type and assigns either “Customers” or “Partners” as the target document library. We then enrich the message payload with this value and return it back to our flow.

public object GetDocumentLibrary(SF_Account account)
{
    var docLib = "Unknown";     // default

    // Check for customer accounts
    if (account.Type.Contains("Customer"))
        docLib = "Customers";

    // Check for partner accounts
    if (account.Type.Contains("Partner"))
        docLib = "Partners";

    return new
    {
        Name = account.Name,
        Id = account.Id,
        LastModifiedDate = account.LastModifiedDate,
        Type = account.Type,
        DocLib = docLib
    };
}

Note: JSON is used to pass non-primitive types between our flow and our .NET class.
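For example, the serialised Account payload handed to our .NET class might resemble the following (all field values are made up for illustration):

{
  "Name": "Contoso Pty Ltd",
  "Id": "001900000123ABC",
  "LastModifiedDate": "2014-07-01T10:30:00.000Z",
  "Type": "Customer - Direct"
}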

So our message payload looks like


and is de-serialised by the .NET connector into our .NET SF_Account object that looks like

public class SF_Account
{
    public DateTime LastModifiedDate;
    public string Type;
    public string Id;
    public string type;
    public string Name;
    public string DocLib;
}

Calling our .NET business rules assembly is a simple matter of configuration.


SharePoint Online Helper

Now that we have enriched our message payload with the target document library


we can pass this to our .NET SharePoint client library to connect to SharePoint using our O365 credentials and create the folder in the target document library

public object CreateDocLibFolder(SF_Account account)
{
    using (var context = new Microsoft.SharePoint.Client.ClientContext(url))
    {
        try
        {
            // Provide client credentials
            System.Security.SecureString securePassword = new System.Security.SecureString();
            foreach (char c in password.ToCharArray()) securePassword.AppendChar(c);
            context.Credentials = new Microsoft.SharePoint.Client.SharePointOnlineCredentials(username, securePassword);

            // Get library
            var web = context.Web;
            var list = web.Lists.GetByTitle(account.DocLib);
            var folder = list.RootFolder;

            // Create folder
            folder = folder.Folders.Add(account.Name);

            // Send the pending operations to SharePoint
            context.ExecuteQuery();
        }
        catch (Exception ex)
        {
            // Swallow and continue; folder may already exist
        }
    }

    // Return payload to the flow
    return new { Name = account.Name, Id = account.Id, LastModifiedDate = account.LastModifiedDate, Type = account.Type, DocLib = account.DocLib, Site = string.Format("{0}/{1}", url, account.DocLib) };
}

Calling our helper is the same as for our business rules



Configuring MuleSoft to know where to load our .NET assemblies from is best done using global configuration references.


We have three options to reference our assembly:

  1. Local file path – suitable for development scenarios.
  2. Global Assembly Cache (GAC) – suitable for shared or .NET framework assemblies that are known to exist on the deployment server.
  3. Packaged – suitable for custom and 3rd party .NET assemblies that get packaged with your MuleSoft project and deployed together.



With our flow completed, coding in nothing but .NET, we are good to test and deploy our package to our Mule ESB Integration Server. At the time of writing, CloudHub does not support the .NET Connector but this should be available in the not too distant future. To test my flow I simply spin up an instance on the development server and watch the magic happen.

We enter an Account in Salesforce with Type of “Customer – Direct”…


and we see a new folder in our “Customers” document library for that Account name in a matter of seconds


SLAM DUNK!… Nothing but NET 🙂


Integration is all about interoperability and not just at runtime. It should be a core capability of our Integration framework. In this post we saw how we can increase our Integration capability without the need to sacrifice our development platform of choice by using MuleSoft and the MuleSoft .NET Connector.
