Automate the archiving of your CloudHub application logs

CloudHub is MuleSoft’s integration platform as a service (iPaaS) that enables the deployment and management of integration solutions in the cloud. Runtime Manager, CloudHub’s management tool, provides an integrated set of logging tools that allow support and operations staff to monitor and troubleshoot the logs of deployed applications.

Currently, application log entries are kept for 30 days or until they reach a maximum size of 100 MB. We are often required to keep these logs for longer periods for auditing or archiving purposes. Overly chatty applications (applications that write log entries frequently) may find their logs cover only a few days, restricting the troubleshooting window even further. Runtime Manager allows portal users to manually download log files via the browser; however, no automated solution is provided out of the box.

The good news is that the platform provides both a command line tool and a management API that we can leverage. Leaving the CLI to one side for now, the platform’s management API looks promising. Indeed, a search in Anypoint Exchange also yields a ready-built CloudHub Connector. However, upon further investigation, the connector doesn’t meet all our requirements: it does not appear to support different business groups and environments, so using it to download logs for applications deployed to non-default environments will not work (at least in the current version). The best approach is to consume the management APIs provided by the Anypoint Platform directly. RAML definitions are available, which makes consuming them within a Mule flow very easy.

Solution overview

In this post we’ll develop a CloudHub application that is triggered periodically to loop through a collection of target applications, connect to the Anypoint Management APIs and fetch the current application log for each deployed instance. The downloaded logs will be compressed and sent to an Amazon S3 bucket for archiving.

solution_view

Putting the solution together:

We start by grabbing the RAML for both the Anypoint Access Management API and the Anypoint Runtime Manager API and bringing them into the project. The Access Management API provides the authentication and authorisation operations to log in and obtain the access token needed in subsequent calls to the Runtime Manager API. The Runtime Manager API provides the operations to enumerate the deployed instances of an application and download the application log.

Download and add the RAML definitions to the project by extracting them into the ~/src/main/api folder.

raml_defs

To consume these APIs we’ll use the HTTP connector, so we need to define some global configuration elements that make use of the RAML definitions we just imported.

http_config

Note: Referencing these directly from Exchange currently throws some RAML parsing errors.

exchange_error

To avoid this, we download the RAML manually and reference our local copy of the definition. Obviously we’ll need to update this copy as the API definition changes in the future.

To provide multi-value configuration support, I have used a simple JSON structure to describe the collection of applications we need to iterate over.
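As a rough illustration only (the property names below are my own, not something mandated by the platform), the configuration might look like this:

[
  {
    "appName": "my-app-prod",
    "orgId": "<business group id>",
    "envId": "<environment id>"
  },
  {
    "appName": "my-app-dev",
    "orgId": "<business group id>",
    "envId": "<environment id>"
  }
]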

Our flow then reads in this config and transforms it into a HashMap that we can iterate over.
Note: Environment IDs can be gathered using the Runtime Manager API or the Anypoint CLI.

cli_environments

Next, we create our top-level flow that is triggered periodically to read and parse our configuration settings into a collection that we can iterate over to download the application logs.

entry_point_flow

Now, we create a sub-flow that describes the process of downloading application logs for each deployed instance. We first obtain an access token using the Access Management API and present that token to the Runtime Manager API to gather details of all deployed instances of the application. We then iterate over that collection and call the Runtime Manager API to download the current application log for each deployed instance.

archive_log_file
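To give a feel for what sits behind these steps, below is a trimmed sketch of the two HTTP requests involved using the Mule HTTP connector. The connector config name, flow variable names and the Runtime Manager resource path are illustrative assumptions only; take the exact paths and parameters from the RAML definitions imported earlier.

<!-- Obtain an access token from the Access Management API
     (the request body carrying the username/password is omitted for brevity) -->
<http:request config-ref="Anypoint_Platform_HTTP" method="POST" path="/accounts/login" doc:name="Login"/>

<!-- Present the token to the Runtime Manager API to list the deployed instances of an application -->
<http:request config-ref="Anypoint_Platform_HTTP" method="GET" path="/cloudhub/api/v2/applications/#[flowVars.appName]/deployments" doc:name="Get deployments">
    <http:request-builder>
        <http:header headerName="Authorization" value="Bearer #[flowVars.accessToken]"/>
        <http:header headerName="X-ANYPNT-ENV-ID" value="#[flowVars.envId]"/>
    </http:request-builder>
</http:request>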

Next, we add sub-flows for consuming the Anypoint Platform APIs, one for each of the in-scope operations:

cloudhubLogin

cloudhubDeployments

cloudhublogfiles

In this last sub-flow, we perform an additional processing step of compressing (zipping) the log file before sending it to our configured Amazon S3 bucket.

The full configuration for the workflow can be found here.

Once packaged and deployed to CloudHub, we configure the solution to archive the application logs of any deployed CloudHub app, even apps deployed into environments other than the one hosting the log archiver solution.

deployed_config

After running the solution for a day or so and checking the configured storage location, we can confirm logs are being archived each day.

amazon_S3_bucket

Known limitations:

  • The Anypoint Management API does not allow downloading application logs for a given date range; each time the solution runs, a full copy of the application log is downloaded. The API does support an operation to query the logs for a given date range and return matching entries as a result set, but that comes with additional constraints on result set size (number of rows) and entry size (message truncation).
  • The RAML definitions in Anypoint Exchange currently do not parse correctly in Anypoint Studio. As mentioned above, to work around this we download the RAML manually and bring it into the project ourselves.
  • Credentials supplied in configuration are in plain text. I suggest creating a dedicated Anypoint account and granting it permissions to only the target environments.

In this post I have outlined a solution that automates the archiving of your CloudHub application log files to external cloud storage. The solution allows periodic scheduling and multiple target applications to be configured even if they exist in different CloudHub environments. Deploy this solution once to archive all of your deployed application logs.

DataWeave: Tips and tricks from the field

DataWeave (DW) has been part of the MuleSoft Anypoint Platform since v3.7.0 and has been a welcome enhancement, providing an order-of-magnitude improvement in performance as well as increased mapping capability that enables more efficient flow design.

However, like most new features of this scope and size (i.e. a brand new transformation engine written from the ground up), early documentation was minimal and often we were left to fend for ourselves. At times, even the simplest mapping scenarios could take an hour or so to solve when they would have taken five minutes in DataMapper. But it pays to stick with it and push on through the adoption hurdle, as the pay-off is worth it in the end.

For those starting out with DataWeave here are some links to get you going:

In this post I will share some tips and tricks I have picked up from the field with the aim that I can give someone out there a few hours of their life back.

Tip #1 – Use the identity transform to check how DW performs its intermediate parsing

When starting any new DW transform, it pays to capture and understand how DW will parse the inputs and present them to the transform engine. This helps navigate some of the implicit type conversions going on, as well as better understand the data structure being traversed in your map. To do this, start off by using the following identity transform with an output type of application/dw.
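For reference, a minimal DataWeave 1.0 identity transform looks like the following; it simply echoes the parsed payload back out in the application/dw format:

%dw 1.0
%output application/dw
---
payload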

Previewing a sample invoice XML yields the following output, which gives us insight into the internal data structure and type conversions performed by DW when parsing our sample payload.

and the output of the identity transform

Tip #2 – Handling conditional xml node lists

Mule developers who have been using DW even for a short time will be used to seeing these types of errors displayed in the editor:

Cannot coerce a :string to a :object

These often occur when we expect the input payload to be an array or complex data type, but a simple type (a string in this case) is actually presented to the transform engine. In our invoice sample, this might occur when an optional XML node list contains no child nodes.

To troubleshoot this we would use the identity transform described above to gain insight into the intermediate form of our input payload. Notice the invoices element is no longer treated as a node list but rather as a string.

We resolve this by checking whether ns0#invoices is of type object and providing alternative output should the collection be empty.
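A minimal sketch of the pattern is shown below; the namespace URI and element names are assumptions based on the invoice sample:

%dw 1.0
%output application/java
%namespace ns0 http://example.org/invoices
---
// map the child invoice nodes only when ns0#invoices was parsed as an object;
// when the optional node list is empty (and coerced to a string) return an empty list instead
invoices: payload.ns0#invoices.*invoice when payload.ns0#invoices is :object otherwise []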

Tip #3 – Explicitly setting the type of input reader DataWeave uses

Occasionally you will hit scenarios where the incoming message (input payload) doesn’t have a mimeType assigned, or DW cannot infer a reader class from the mimeType that is present. In these cases we’ll either get an exception thrown or unpredictable behaviour from our transform. To avoid this, we should be in the habit of setting the mimeType of the input payload explicitly. At present we can’t do this in the graphical editor; we need to edit the configuration XML directly and add the following attribute to the <dw:input-payload> element of our transform shape:

<dw:input-payload doc:sample="xml_1.xml" mimeType="application/xml" />

Tip #4 – Register custom formats as types (e.g. datetime formats)

Hopefully we are blessed to always be working against strongly typed message schemas where discovery and debate over the data formats of the output fields never happen… yeah right. Too often we need to tweak the output format of data fields a couple of times during the development cycle. In large transforms, this may mean applying the same change to several fields throughout the map, only to come back and change it again the next day. To save time and better organise our DW maps, we should declare common format types in the transform header and reference them throughout the map. If we need to tweak a format, we apply the change in one central location.
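As a sketch of the idea (the field names are illustrative and assume the values arrive as date types rather than strings), the format is declared once as a custom type in the header and reused wherever it is needed:

%dw 1.0
%output application/json
// declare the shared output format once...
%type dateString = :string { format: "yyyy-MM-dd" }
---
{
  // ...and reference it for every field that needs that format
  createdOn:  payload.createdOn  as :dateString,
  modifiedOn: payload.modifiedOn as :dateString
}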

Tip #5 – Move complex processing logic to MEL functions

Sometimes even the most straightforward of transformations can lead to messy chains of functions that are hard to read, difficult to troubleshoot and often error prone. When I find myself falling into these scenarios, I look to pull this logic out and move it into a more manageable MEL function. This not only cleans up the DW map but also provides the opportunity to place debug points in our MEL function to assist with troubleshooting a misbehaving transform.

Place your MEL function in your flow’s configuration file at the start along with any other config elements.
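In Mule 3 a global MEL function sits inside the <configuration> element of the flow’s XML. The function below is only an illustration of the shape; the actual body will be whatever logic you pulled out of the map:

<configuration doc:name="Configuration">
    <expression-language>
        <global-functions>
            def formatAccountName(name) {
                // illustrative clean-up logic extracted from the DW map
                return name == null ? '' : name.trim().toUpperCase()
            }
        </global-functions>
    </expression-language>
</configuration>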

Call your function as you would if you declared it inline in your map.

Tip #6 – Avoid the graphical drag and drop tool in the editor

One final tip, and something I find myself regularly doing, is to avoid the graphical drag and drop tool in the editor. I’m sure this will be fixed in later versions of DataWeave, but for now I find it creates untidy code that I often end up fighting with the editor to clean up. I would typically only use the graphical editor to map multiple fields en masse, then cut and paste the code into my text editor, clean it up and paste it back into DW. From then on, I work entirely in the code window.

There we go; in this post I have outlined six tips that I hope will save at least one time-poor developer a couple of hours, which could be much better spent getting on with the business of delivering integration solutions for their customers. Please feel free to contribute more tips in the comments section below.

Connecting Salesforce and SharePoint Online with Azure App Services

Back in November I wrote a post that demonstrated how we can integrate Salesforce and SharePoint Online using the MuleSoft platform and the MuleSoft .NET Connector. In this post I hope to achieve the same thing using the Azure App Services offering, recently released into preview.

Azure App Services

Azure App Services rebrands a number of familiar service types (Azure Websites, Mobile Services, and BizTalk Services) and adds a few new ones to the platform.

azure_app_services

  • Web Apps – Essentially a rebranding of Azure Websites.
  • Mobile Apps – Built on the existing Azure Mobile Services with some additional features such as better deployment and scalability options.
  • Logic Apps – A new service to the platform that allows you to visually compose process flows using a suite of API Apps, both from the Marketplace and custom built.
  • API Apps – A special type of Web App that allows you to host and manage APIs to connect SaaS applications and on-premises applications, or implement custom business logic. The Azure Marketplace provides a number of ready-built API Apps that you can deploy as APIs in your solution.

Microsoft have also published a number of Apps to the Azure Marketplace to provide some ready-to-use functionality within each of these service types. A new Azure SDK has also been released that we can use to build and deploy our own custom App Services. Further details on Azure App Services can be found on the Azure documentation site here.

Scenario Walkthrough

In this post we will see how we can create a Logic App that composes a collection of API Apps to implement the same SaaS integration solution as we did in the earlier post. To recap, we had the following integration scenario:

  • Customers (Accounts) are entered into Salesforce.com by the Sales team.
  • The team use O365 and SharePoint Online to manage customer and partner related documents.
  • When new customers are entered into Salesforce, corresponding document library folders need to be created in SharePoint.
  • Our interface needs to poll Salesforce for changes and create a new document library folder in SharePoint for this customer according to some business rules.
  • The business logic required to determine the target document library is based on the Salesforce Account type (Customer or Partner)

Azure Marketplace

As a first step, we should search the Azure Marketplace for available connectors that suit our requirements. A quick search yields some promising candidates…

Salesforce Connector – Published by Microsoft and supports Account entities and executing custom queries. Supported as an action within Logic Apps. Looking good.

salesforce_connector

SharePoint Online Connector – Published by Microsoft and supports being used as an action or trigger in Logic Apps. Promising, but upon further inspection we find that it doesn’t support creating folders within a document library. Looks like we’ll need to create our own custom API App to perform this.

sharepoint_online_connector

Business Rules API – Again published by Microsoft and based on the BizTalk Business Rules Engine. Supports being used as an action in Logic Apps; however, it only supports XML-based facts which, as we’ll see, don’t play well with the default messaging format used in Logic Apps (JSON). It looks like we’ll either need to introduce additional Apps to perform the conversion (json > xml and xml > json) or create a custom API App to perform our business rules as well.

biztalk_rules_api_app

So it appears we can only utilise one of the out-of-the-box connectors. We will need to roll up our sleeves and create at least two custom API Apps to implement our integration flow. As the offering matures and community contributions to the Marketplace are supported, hopefully we will be spending less time developing services and more time composing them. But for now let’s move on and set up the Azure App Services we will need.

Azure API Apps

As we are creating our first Azure App Service, we first need to create a Resource Group and an Azure App Service Plan. Service plans allow us to apply and manage resource tiers for each of our apps. We can then modify this service plan to consistently scale the resources shared across all the apps up or down.

We start by adding a new Logic App and creating a new Resource Group and Service Plan as follows:

create_logic_app

Navigate to the newly created Resource Group. You should see two new resources in your group, your Logic App and an API Gateway that was automatically created for the resource group.

new_resource_group

Tip: Pin the Resource Group to your Home screen (start board) for easy access as we switch back and forth between blades.

Next, add the Salesforce Connector API App from the Marketplace …

create_salesforce_connector

… and add it to our Resource Group using the same Service Plan as our Logic App. Ensure that in the package settings we have the Account entity configured. This is the entity in Salesforce we want to query.

sf_connector_package_config

Now we need to provision two API App Services to host our custom APIs. Let’s add an API App Service for our custom BusinessRulesService API first, ensuring we select our existing Resource Group and Service Plan.

create_rules_api

Then repeat for our custom SharePointOnlineConnector API App Service, again selecting our Resource Group and Service Plan. We should now see three API Apps added to our resource group.

resource_group_summary

Developing Custom API Apps

Currently, only the Salesforce Connector API has been deployed (as we created this from the Marketplace). We now need to develop our custom APIs and deploy them to the API App services we provisioned above.

You will need Visual Studio 2013 and the latest Azure SDK for .NET (2.5.1 or above) installed.

Business Rules Service

In Visual Studio, create a new ASP.NET Web Application for the custom BusinessRulesService and choose Azure API App (Preview)

vs_-_create_azure_api_app

Add a model to represent the SharePoint document library details we need our business rules to spit out

    public class DocumentLibraryFolder
    {
        public string DocumentLibrary { get; set; }
        public string FolderName { get; set; }
    }

Add an API controller that implements our business rules and returns an instance of our DocumentLibraryFolder class.

    public class BusinessRulesController : ApiController
    {

        [HttpGet]
        public DocumentLibraryFolder Get(string accountType, string accountName)
        {
            System.Diagnostics.Trace.TraceInformation("Enter: Get");

            DocumentLibraryFolder docLib = new DocumentLibraryFolder();

            try
            {
                // Check for customer accounts
                if (accountType.Contains("Customer"))
                    docLib.DocumentLibrary = "Customers";

                // Check for partner accounts
                if (accountType.Contains("Partner"))
                    docLib.DocumentLibrary = "Partners";

                // Set folder name
                docLib.FolderName = accountName;
            }
            catch (Exception ex)
            {
                System.Diagnostics.Trace.TraceError(ex.ToString());
            }

            return docLib;
        }
    }

With the implementation done, we should test it works locally (how else can we claim "it works on my machine", right?). The easiest way to test an API App is to enable the swagger UI and use its built-in test harness. Navigate to App_Start\SwaggerConfig.cs and uncomment the lines shown below.

enable_swagger_ui
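Once uncommented, the relevant registration in SwaggerConfig.cs looks roughly like the following (the exact option lambdas depend on the Swashbuckle version the project template ships with):

    using System.Web.Http;
    using Swashbuckle.Application;

    public class SwaggerConfig
    {
        public static void Register()
        {
            GlobalConfiguration.Configuration
                .EnableSwagger(c => c.SingleApiVersion("v1", "BusinessRulesService"))
                .EnableSwaggerUi();   // serves the interactive test harness at /swagger
        }
    }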

Run your API App and navigate to /swagger

business_rules_-_test_with_swagger_locally

Once we have confirmed it works, we need to deploy the API to the Azure API App service we provisioned above. Right click the BusinessRulesService project in Solution Explorer and select Publish. Sign in using your Azure Service Administration credentials and select the target API App service from the drop down list.

vs_api_publish

Click Publish to deploy the BusinessRulesService to Azure.

Tip: Once deployed, it is good practice to test your API works in Azure. You could enable public access and test using the swagger UI test harness as we did locally, or you could generate a test client app in Visual Studio. Using the swagger UI is quicker, as long as you remember to revoke access once testing has completed; we don’t want to grant access to this API outside our resource group.

Grant public (anonymous) access in the Application Settings section of our API App and test the deployed version using the URL found on the Summary blade of the API App.

api_access_level

business_rules_-_test_with_swagger_in_the_cloud

Custom SharePoint Online Connector

Since the out-of-the-box connector in the Marketplace didn’t support creating folders in document libraries, we need to create our own custom API App to implement this functionality. Using the same steps as above, create a new ASP.NET Web Application named SharePointOnlineConnector and choose the Azure API App (Preview) project template.

Add the same DocumentLibraryFolder model we used in our BusinessRulesService, and an API controller to implement the connection to SharePoint and the creation of the folder in the specified document library:

    public class DocumentLibraryController : ApiController
    {
        #region Connection Details
        string url = "url to your sharepoint site";
        string username = "username";
        string password = "password";
        #endregion

        [HttpPost]
        public void Post([FromBody] DocumentLibraryFolder folder)
        {
            using (var context = new Microsoft.SharePoint.Client.ClientContext(url))
            {
                try
                {
                    // Provide client credentials
                    System.Security.SecureString securePassword = new System.Security.SecureString();
                    foreach (char c in password.ToCharArray()) securePassword.AppendChar(c);
                    context.Credentials = new Microsoft.SharePoint.Client.SharePointOnlineCredentials(username, securePassword);

                    // Get library
                    var web = context.Web;
                    var list = web.Lists.GetByTitle(folder.DocumentLibrary);
                    var root = list.RootFolder;
                    context.Load(root);
                    context.ExecuteQuery();

                    // Create folder
                    root.Folders.Add(folder.FolderName);
                    //root.Folders.Add(HttpUtility.HtmlEncode(folder.FolderName));
                    context.ExecuteQuery();

                }
                catch (Exception ex)
                {
                    System.Diagnostics.Debug.WriteLine(ex.ToString());
                }
            }
        }

    }

Deploy to our Resource Group, selecting our SharePointOnlineConnector API App Service.

vs_api_publish

Grant public access and test the API is working in Azure using swagger UI once again.

spo_connector_-_test_with_swagger_in_the_cloud

Note: I did have some issues with the Microsoft.SharePoint.Client libraries. Be sure to use v16.0.0.0 of these libraries to avoid the System.IO.FileNotFoundException: msoidcliL.dll issue (thanks Alexey Shcherbak for the fix).

Azure Logic App

With all our App Services deployed, let’s now focus on composing them into our logic flow within an Azure Logic App. Open our Logic App and navigate to the Triggers and Actions blade. From the toolbox on the right, drag the following Apps onto the designer:

  • Recurrence Trigger
  • Salesforce Connector
  • BusinessRulesService
  • SharePointOnlineConnector

logic_app_config

Note: Only API Apps and Connectors in your Resource Group will show up in the toolbox on the right hand side, along with the Recurrence Trigger and HTTP Connector.

Configure Recurrence trigger

  • Frequency: Minutes
  • Interval: 1

recurrence

Configure Salesforce Connector API

First we must authorise our Logic App to access our SFDC service domain. Click on Authorize and sign in using your SFDC developer credentials. Configure the Execute Query action to perform a select using the following SOQL statement:

SELECT Id, Name, Type, LastModifiedDate FROM Account WHERE LastModifiedDate > YESTERDAY LIMIT 10

salesforce

The output of the Salesforce Connector API will be JSON, the default messaging format used in Logic Apps. The structure of the JSON data will look something like this:

{
	"totalSize": 10,
	"done": true,
	"records": [{
		"attributes": {
			"type": "Account",
			"url": "/services/data/v32.0/sobjects/Account/00128000002l9m6AAA"
		},
		"Id": "00128000002l9m6AAA",
		"Name": "GenePoint",
		"Type": "Customer - Channel",
		"LastModifiedDate": "2015-03-20T22:45:13+00:00"
	},
	{
		"attributes": {
			"type": "Account",
			"url": "/services/data/v32.0/sobjects/Account/00128000002l9m7AAA"
		},
		"Id": "00128000002l9m7AAA",
		"Name": "United Oil &amp; Gas, UK",
		"Type": "Customer - Direct",
		"LastModifiedDate": "2015-03-20T22:45:13+00:00"
	},
	... repeats ...
    ]
}

Notice the repeating "records" section. We’ll need to make downstream APIs aware of these repeating items so they can be invoked once for every item.

Configure Business Rules API

Setup a repeating item so that our Business Rules API gets called once for every account the Salesforce Connector outputs in the response body.

  • Click on the Settings icon and select Repeat over a list
  • Set Repeat to @body('salesforceconnector').result.records

Note: Here @body('salesforceconnector') references the body of the response (or output) of the API call. "result.records" references the elements within the JSON response structure, where "records" is the repeating collection we want to pass to the next API in the flow.

Configure the call to the BusinessRules_Get action, passing the Type and Name fields of the repeated item:

  • Set accountType to @repeatItem().Type
  • Set accountName to @repeatItem().Name

business_rules

The output of the BusinessRulesService will be a repeating collection of both inputs and outputs (discovered after much trial and error; exception details are pretty thin, as with most preview releases):

{
	"repeatItems": [{
		"inputs": {
			"host": {
				"gateway": "https://blogdemoresgroupxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.azurewebsites.net/",
				"id": "/subscriptions/72608e17-c89f-4822-8726-d15540e3b89c/resourcegroups/blogdemoresgroup/providers/Microsoft.AppService/apiapps/businessrulesservice"
			},
			"operation": "BusinessRules_Get",
			"parameters": {
				"accountType": "Customer - Channel",
				"accountName": "GenePoint"
			},
			"apiVersion": "2015-01-14",
			"authentication": {
				"scheme": "Zumo",
				"type": "Raw"
			}
		},
		"outputs": {
			"headers": {
				"pragma": "no-cache,no-cache",
				"x-ms-proxy-outgoing-newurl": "https://microsoft-apiapp7816bc6c4ee7452687f2fa9f58cce316.azurewebsites.net/api/BusinessRules?accountType=Customer+-+Channel&amp;accountName=GenePoint",
				"cache-Control": "no-cache",
				"set-Cookie": "ARRAffinity=451155c6c25a46b4af4ca2b73a70e702860aefb1d0efa48497d93db09e8a6ca1;Path=/;Domain=blogdemoresgroupxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.azurewebsites.net,ARRAffinity=451155c6c25a46b4af4ca2b73a70e702860aefb1d0efa48497d93db09e8a6ca1;Path=/;Domain=blogdemoresgroupxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx.azurewebsites.net",
				"server": "Microsoft-IIS/8.0",
				"x-AspNet-Version": "4.0.30319",
				"x-Powered-By": "ASP.NET,ASP.NET",
				"date": "Sun, 19 Apr 2015 12:42:18 GMT"
			},
			"body": {
				"DocumentLibrary": "Customers",
				"FolderName": "GenePoint"
			}
		},
		"startTime": "2015-04-19T12:42:18.9797299Z",
		"endTime": "2015-04-19T12:42:20.0306243Z",
		"trackingId": "9c767bc2-150d-463a-9bae-26990c48835a",
		"code": "OK",
		"status": "Succeeded"
	}]
}

We will again need to define the appropriate repeating collection to present to the next API. In this case it will be the "outputs.body" element of the repeatItems collection.

Configure SharePointOnline Connector API

Setup a repeating item so that our SharePointOnline API gets called once for every item in the repeatItems collection.

  • Click on the Settings icon and select Repeat over a list
  • Set Repeat to @actions('businessrulesservice').outputs.repeatItems

Configure the call to the DocumentLibrary_POST action, setting the following parameters:

  • Set DocumentLibrary to @repeatItem().outputs.body.DocumentLibrary
  • Set FolderName to @repeatItem().outputs.body.FolderName

spo_connector

Save the Logic App and verify no errors are displayed. Close the Triggers and Actions blade so we return to our Logic App Summary blade.

Testing Our Solution

Ensure our Logic App is enabled and verify it is being invoked every minute by the Recurrence trigger.

logic_app_operations

Open a browser and navigate to your Salesforce Developer Account. Modify a number of Accounts ensuring we have a mix of Customer and Partner Account types.

sfdc_accounts

Open a browser and navigate to your SharePoint Online Developer Account. Verify that folders for those modified accounts appear in the correct document libraries.

spo_doclibs_updated

Conclusion

In this post we have seen how we can compose logic flows from a suite of API Apps, pulled together from a mix of Azure Marketplace offerings and custom APIs, into a single integrated solution that connects disparate SaaS applications.

However, it is early days for Azure App Services and I struggled with its v1.0 limitations and the poor IDE experience within the Azure Preview Portal. I would like to see a Logic App designer in Visual Studio, the addition of flow control, and expansion of the expression language to include support for more complex data types (perhaps even custom .NET classes). I’m sure as the offering matures and community contributions to the Marketplace are enabled, we will be spending less time developing services and more time composing them, hopefully with a much better user experience.

Hands Free VM Management with Azure Automation and Resource Manager – Part 2

In this two part series, I am looking at how we can leverage Azure Automation and Azure Resource Manager to schedule the shutting down of tagged Virtual Machines in Microsoft Azure.

  • In Part 1 we walked through tagging resources using the Azure Resource Manager PowerShell module
  • In Part 2 we will setup Azure Automation to schedule a runbook to execute nightly and shutdown tagged resources.

Azure Automation Runbook

At the time of writing, the tooling support around Azure Automation can politely be described as hybrid. For starters, there is no support for Azure Automation in the preview portal, and the Azure command line tools only support basic automation account and runbook management, leaving the current management portal as the most complete tool for the job.

As I mentioned in Part 1, Azure Automation does not yet support the new Azure Resource Manager PowerShell module out-of-the-box, so we need to import that module ourselves. We will then setup service management credentials that our runbook will use (recall the ARM module doesn’t use certificates anymore, we need to supply user account credentials).

We then create our PowerShell workflow to query for tagged virtual machine resources and ensure they are shut down. Lastly, we set up our schedule and enable the runbook… let’s get cracking!

When we first create an Azure Automation account, the Azure PowerShell module is already imported as an Asset for us (v0.8.11 at the time of writing) as shown below.

Clean Azure Automation Screen.

To import the Azure Resource Manager module we need to zip it up and upload it to the portal using the following process. In Windows Explorer on your PC:

  1. Navigate to the Azure PowerShell modules folder (typically C:\Program Files (x86)\Microsoft SDKs\Azure\PowerShell\ResourceManager\AzureResourceManager)
  2. Zip the AzureResourceManager sub-folder.

Local folder to zip.
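If you would rather script the zip step, PowerShell 5.0’s Compress-Archive cmdlet (where available) can do it in one line; the paths below assume the default module location shown above:

# Zip the AzureResourceManager module folder ready for upload to the portal
Compress-Archive -Path "C:\Program Files (x86)\Microsoft SDKs\Azure\PowerShell\ResourceManager\AzureResourceManager" `
                 -DestinationPath "$env:TEMP\AzureResourceManager.zip"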

In the Automation pane of the current Azure Portal:

  1. Select an existing Automation account (or create a new one)
  2. Navigate to the Asset tab and click the Import Module button
  3. Browse to the AzureResourceManager.zip file you created above.

ARM Module Import

After the import completes (this usually takes a few minutes) you should see the Azure Resource Manager module imported as an Asset in the portal.

ARM Module Imported

We now need to set up the credentials the runbook will use. For this we will create a new user in Azure Active Directory (AAD) and add that user as a co-administrator of our subscription (we need to query resource groups and shut down our virtual machines).

In the Azure Active Directory pane:

  1. Add a new user of type New user in your organisation
  2. Enter a meaningful user name to distinguish it as an automation account
  3. Select User as the role
  4. Generate a temporary password, which we’ll need to change later.

Tell us about this user.

Now go to the Settings pane and add the new user as a co-administrator of your subscription:

Add user as co-admin.

Note: Azure generates a temporary password for the new user. Log out and sign in as the new user to be prompted to change the password, and confirm the user has service administration permissions on your subscription.

We now need to add our user’s credentials to our Azure Automation account assets.

In the Automation pane:

  1. Select the Automation account we used above
  2. Navigate to the Asset tab and click on the Add Setting button on the bottom toolbar
  3. Select Add Credential
  4. Choose Windows PowerShell Credential from the type dropdown
  5. Enter a meaningful name for the asset (e.g. runbook-account)
  6. Enter username and password of the AAD user we created above.

Runbook credentials
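This step can also be scripted using the Azure Service Management Automation cmdlets; a sketch (using the asset name adopted in this post and a placeholder automation account name) is shown below:

# Prompt for the password of the AAD automation user created earlier
$cred = Get-Credential -UserName "automation-user@yourtenant.onmicrosoft.com" -Message "Automation user credentials"

# Store the credential as an Automation asset named runbook-account
New-AzureAutomationCredential -AutomationAccountName "[your account name]" `
                              -Name "runbook-account" `
                              -Value $cred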

With the ARM module imported and the credentials set up, we can now turn to authoring our runbook. The completed runbook script can be found on GitHub. Download the script and save it locally.

Open the script in PowerShell ISE and change the Automation settings to match the name you gave the Credential asset created above, and enter your Azure subscription name.

workflow AutoShutdownWorkflow
{
    #$VerbosePreference = "continue"

    # Automation Settings
    $pscreds = Get-AutomationPSCredential -Name "runbook-account"
    $subscriptionName = "[subscription name here]"
    $tagName = "autoShutdown"

    # Authenticate using WAAD credentials
    Add-AzureAccount -Credential $pscreds | Write-Verbose 

    # Set subscription context
    Select-AzureSubscription -SubscriptionName $subscriptionName | Write-Verbose

    Write-Output "Checking for resources with $tagName flag set..."

    # Get virtual machines within tagged resource groups
    $vms = Get-AzureResourceGroup -Tag @{ Name=$tagName; Value=$true } | `
    Get-AzureResource -ResourceType "Microsoft.ClassicCompute/virtualMachines"

    # Shutdown all VMs tagged
    $vms | ForEach-Object {
        Write-Output "Shutting down $($_.Name)..."
        # Gather resource details
        $resource = $_
        # Stop VM
        Get-AzureVM | ? { $_.Name -eq $resource.Name } | Stop-AzureVM -Force
    }

    Write-Output "Completed $tagName check"
}

Walking through the script, the first thing we do is gather the credentials we will use to manage our subscription. We then authenticate using those credentials and select the Azure subscription we want to manage. Next we gather all virtual machine resources in resource groups that have been tagged with autoShutdown.

We then loop through each VM resource and force a shutdown. One thing you may notice about our runbook is that we don’t explicitly "switch" between the Azure module and the Azure Resource Manager module as we must when running locally in PowerShell.

This behaviour may change over time as the Automation service is enhanced to support ARM out-of-the-box, but for now the approach appears to work fine… at least on my “cloud” [developer joke].

We should now have our modified runbook script saved locally and ready to be imported into the Azure Automation account we used above. We will use the Azure Service Management cmdlets to create and publish the runbook, create the schedule asset and link it to our runbook.

Copy the following script into a PowerShell ISE session and configure it to match your subscription and location of the workflow you saved above. You may need to refresh your account credentials using Add-AzureAccount if you get an authentication error.

$automationAccountName = "[your account name]"
$runbookName = "autoShutdownWorkflow"
$scriptPath = "c:\temp\AutoShutdownWorkflow.ps1"
$scheduleName = "ShutdownSchedule"

# Create a new runbook
New-AzureAutomationRunbook -AutomationAccountName $automationAccountName -Name $runbookName

# Import the autoShutdown runbook from a file
Set-AzureAutomationRunbookDefinition -AutomationAccountName $automationAccountName -Name $runbookName -Path $scriptPath -Overwrite

# Publish the runbook
Publish-AzureAutomationRunbook -AutomationAccountName $automationAccountName -Name $runbookName

# Create the schedule asset
New-AzureAutomationSchedule -AutomationAccountName $automationAccountName -Name $scheduleName -StartTime $([DateTime]::Today.Date.AddDays(1).AddHours(1)) -DayInterval 1

# Link the schedule to our runbook
Register-AzureAutomationScheduledRunbook -AutomationAccountName $automationAccountName -Name $runbookName -ScheduleName $scheduleName

Switch over to the portal and verify your runbook has been created and published successfully…

Runbook published.

…drilling down into details of the runbook, verify the schedule was linked successfully as well…

Linked Schedule.

To start your runbook (outside of the schedule) navigate to the Author tab and click the Start button on the bottom toolbar. Wait for the runbook to complete and click on the View Job icon to examine the output of the runbook.

Manual Start

Run Output

Note: Create a draft version of your runbook to troubleshoot failing runbooks using the built-in testing features. Refer to this link for details on testing your Azure Automation runbooks.

Our schedule will now execute the runbook each night to ensure virtual machine resources tagged with autoShutdown are always shutdown. Navigating to the Dashboard tab of the runbook will display the runbook history.

Runbook Dashboard

Considerations

1. The AzureResourceManager module is not officially supported out-of-the-box yet, so a breaking change may come down the release pipeline that requires our workflow to be modified. The switch behaviour is the most likely candidate. Watch that space!

2. Azure Automation is not available in all Azure Regions. At the time of writing it is available in East US, West EU, Japan East and Southeast Asia. However, region affinity isn’t a primary concern as we are merely invoking the service management API where our resources are located. Where we host our automation service is not as important from a performance point of view, but it may factor into organisational security policy constraints.

3. Azure Automation comes in two tiers (Free and Basic). Free provides 500 minutes of job execution per month. The Basic tier charges $0.002 USD a minute for unlimited minutes per month (e.g. 1,000 job execution mins will cost $2). Usage details will be displayed on the Dashboard of your Azure Automation account.

Account Usage

In this two part post we have seen how we can tag resource groups to provide more granular control when managing resource lifecycles and how we can leverage Azure Automation to schedule the shutting down of these tagged resources to operate our infrastructure in Microsoft Azure more efficiently.

Hands Free VM Management with Azure Automation and Resource Manager – Part 1

Over the past six months, Microsoft have launched a number of features in Azure to enable you to better manage your resources hosted there.

In this two part series, I will show how we can leverage two of these new features – Azure Automation and Azure Resource Manager – to schedule the shutting down of tagged Virtual Machines in Microsoft Azure.

  • In Part 1 we will walk through tagging resources using the Azure Resource Manager features and
  • In Part 2 we will setup Azure Automation to schedule a runbook to execute nightly and shutdown tagged VM resources.

About Azure Resource Manager and Azure Automation

Azure Resource Manager (ARM) provides the capability to create reusable deployment templates and provide a common way to manage the resources that make up your deployment. ARM is the foundation of the ALM and DevOps tooling being developed by Microsoft and investments in it are ongoing.

Another key service in this space is the Azure Automation service which provides the ability to create, monitor, manage, and deploy resources using runbooks based on Windows PowerShell workflows. Automation assets can be defined to share credentials, modules, schedules and runtime configuration between runbooks and a runbook gallery provides contributions from both Microsoft and the community that can be imported and used within your Automation account.

Operating infrastructure effectively in the cloud is the new holy grail of today’s IT teams – scaling out to meet demand, scaling back to reduce unnecessary operating costs, running only when needed and de-allocating when not. Automating elastic scale has been one area cloud providers and third-party tooling ISVs have invested in, and the capability is pretty solid.

The broader resource lifecycle management story is not so compelling. Shutting down resources when they are not needed is still largely a manual process, unless heavy investments are made in infrastructure monitoring and automation tools or in third-party SaaS offerings such as Azure Watch.

Objectives

We will start by identifying and tagging the resource groups we want to ensure are automatically shutdown overnight. Following this we will author an Azure Automation runbook that looks for tagged resources and shuts them down, configuring it to run every night.

Setting this up is not as straightforward as you would think (and we will need to bend the rules a little), so I will spend some time going through these steps in detail.

Tagging Resource Groups

Azure Resource Management features are currently only available via the preview portal and command line tools (PowerShell and cross-platform CLI). Using the preview portal we can manually create tags and assign them to our resource groups. A decent article on the Azure site walks you through how to perform these steps. Simon also introduced the ARM command line tools in his post a few months ago.

Performing these tasks in PowerShell is a little different to the previous service management tasks you may be used to. For starters, the Azure Resource Manager cmdlets cannot be used in the same PS session as the "standard" Azure cmdlets; we must now switch between the two modes. The other main difference is the way we authenticate to the ARM API. We no longer use service management certificates with expiry periods of two to three years, but user accounts backed by Azure Active Directory (AAD). These user tokens have expiry periods of two to three days, so we must constantly refresh them.

Note: Switching "modes" just removes one set of Azure modules and imports others. So switching to Azure Resource Manager mode removes the Azure module and imports the Azure Resource Manager module and Azure Profile module. We need this understanding when we set up our Azure Automation later on.

Let’s walk through the steps to switch into ARM mode, add our user account and tag the resource groups we want to automate.

Before you start, ensure you have the latest drop of the Azure PowerShell cmdlets from GitHub. With the latest version installed, switch to Azure Resource Manager mode and add your AAD account to your local PS profile:

# Switch to Azure Resource Manager mode
Switch-AzureMode AzureResourceManager

# Sign in to add our account to our PS profile
Add-AzureAccount

# Check our profile has the newly gathered credentials
Get-AzureAccount

Add-AzureAccount will prompt us to sign-in using either our Microsoft Account or Organisational account with service management permissions. Get-AzureAccount will list the profiles we have configured locally.

Now that we are in ARM mode and authenticated, we can examine and manage our tag taxonomy using the Get-AzureTag cmdlet:

#Display current taxonomy
Get-AzureTag

This displays all tags in our taxonomy and how many resource groups each is assigned to. To add tags to our taxonomy we use the New-AzureTag cmdlet; to remove them, we use the Remove-AzureTag cmdlet. We can only remove tags that are not assigned to resources (that is, tags with a zero count).

In this post we want to create a tag named "autoShutdown" and assign it a value of "True" against resource groups that we want to automatically shut down.

# Add Tag to Taxonomy
New-AzureTag -Name "autoShutdown"

azure_tag_-_get-azuretag

Now, let’s tag our resource groups. This can be performed using the preview portal as mentioned above, but if you have many resource groups to manage, PowerShell is still the best approach. To manage resource groups using the ARM module we use the following cmdlets:

azure_tag_-_azure_resource_groups

# Assign tag to DEV/TEST resource groups
Get-AzureResourceGroup | ? { $_.ResourceGroupName -match "dev-cloud" } | `
    Set-AzureResourceGroup -Tag @( @{ Name="autoShutdown"; Value=$true } )

The above statement finds all resource groups that contain the text "dev-cloud" in the name (a naming convention I have adopted; yours will be different) and sets the autoShutdown tag with a value of True on each of those resource groups. If we list the resource groups using the Get-AzureResourceGroup cmdlet we can see the results of the tagging.

azure_tag_-_tagged_resource_groups

Note: Our resource group contains many resources; however, for our purposes we are only interested in managing our virtual machines. The resource types shown above will come in handy when we look to filter the tagged resources to return only our VMs.
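For example, the following snippet (a preview of what the Part 2 runbook will do) returns only the classic virtual machine resources that sit within the tagged resource groups:

# Get virtual machines within tagged resource groups
Get-AzureResourceGroup -Tag @{ Name="autoShutdown"; Value=$true } | `
    Get-AzureResource -ResourceType "Microsoft.ClassicCompute/virtualMachines"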

We can also un-tag resource groups using the following statement:

# Reset tags on DEV/TEST resource groups
Get-AzureResourceGroup | ? { $_.ResourceGroupName -match "dev-cloud" } | `
    Set-AzureResourceGroup -Tag @{}

We can see these tagged resource groups in the preview portal as well…

azure_tag_-_portal_tags

…and if we drill into one of the resource groups we can see our tag has the value True assigned

azure_tag_-_resource_group_tags

In this post we have tagged our resource groups so that they can be managed separately. In Part 2 of the post, we will move on to creating an Azure Automation runbook to automate the shutting down of our tagged resources on a daily schedule.

Considerations

At the time of writing there is no support in the preview portal for tagging individual resources directly (just Resource Groups). The PowerShell cmdlets suggest it is possible; however, I always get an error indicating that setting the tags property is not yet permitted. This may change in the near future and provide more granular control.

Mule ESB DEV/TEST environments in Microsoft Azure

Agility in the delivery of IT services is what cloud computing is all about. Week in, week out, projects on-board and wind up, and developers come and go. This places enormous stress on IT teams with limited resourcing and infrastructure capacity to provision developer and test environments. Leveraging public cloud for integration DEV/TEST environments is not without its challenges though. How do we develop our interfaces in the cloud yet retain connectivity to our on-premises line-of-business systems?

In this post I will demonstrate how we can use Microsoft Azure to run Mule ESB DEV/TEST environments using point-to-site VPNs for connectivity between on-premises DEV resources and our servers in the cloud.

MuleSoft P2S

Connectivity

A point-to-site VPN allows you to securely connect an on-premises server to your Azure Virtual Network (VNET). Point-to-site connections don’t require a VPN device. They use the Windows VPN client and must be started manually whenever the on-premises server (point) wishes to connect to the Azure VNET (site). Point-to-site connections use the secure socket tunnelling protocol (SSTP) with certificate authentication. They provide a simple, secure connectivity solution without having to involve the networking boffins to stand up expensive hardware devices.

I will not cover the setup of the Azure point-to-site VPN in this post; there are a number of good articles already covering the process in detail, including this great MSDN article.

A summary of steps to create the Point-to-site VPN are as follows:

  1. Create an Azure Virtual Network (I named mine AUEastVNet and used address range 10.0.0.0/8)
  2. Configure the Point-to-site VPN client address range (I used 172.16.0.0/24)
  3. Create a dynamic routing gateway
  4. Configure certificates (upload root cert to portal, install private key cert on on-premise servers)
  5. Download and install client package from the portal on on-premise servers

Once we have established the point-to-site VPN we can verify the connectivity by running ipconfig /all and checking we have been assigned an IP address from the range we configured on our VNET.

IP address assigned from P2S client address range

Testing our Mule ESB Flow using On-premises Resources

In our demo, we want to test the interface we developed in the cloud with on-premises systems, just as we would if our DEV environment were located within our own organisation.

Mule ESB Flow

The flow above listens for HL7 messages using the TCP-based MLLP transport and processes them using two async pipelines. The first pipeline maps the HL7 message into an XML message for a LOB system to consume. The second writes a copy of the received message for auditing purposes.

MLLP connector showing host running in the cloud

The HL7 MLLP connector is configured to listen on port 50609 of the network interface used by the Azure VNET (10.0.1.4).

FILE connector showing on-premise network share location

The first FILE connector is configured to write the output of the XML transformation to a network share on our on-premises server (across the point-to-site VPN). Note the IP address used is the one assigned by the point-to-site VPN connection (from the client IP address range configured on our Azure VNET).

P2S client IP address range

To test our flow we launch an MLLP client application on our on-premises server and establish a connection across the point-to-site VPN to our Mule ESB flow running in the cloud. We then send an HL7 message for processing and verify that we receive an HL7 ACK and that the transformed XML output message has also been written to the configured on-premises network share location.

Establishing the connection across the point-to-site VPN…

On-premises MLLP client showing connection to host running in the cloud

Sending the HL7 request and receiving an HL7 ACK response…

MLLP client showing successful response from Mule flow

Verifying the transformed xml message is written to the on-premises network share…

On-premises network share showing successful output of transformed message

Considerations

  • Connectivity – Point-to-site VPNs provide a relatively simple connectivity option that allows traffic between your Azure VNET (site) and your nominated on-premises servers (the point inside your private network). You may already be running workloads in Azure and have a site-to-site VPN or MPLS connection between the Azure VNET and your network, in which case you will not need to establish the point-to-site VPN connection. You can connect up to 128 on-premises servers to your Azure VNET using point-to-site VPNs.
  • DNS – To provide name resolution of servers in Azure to on-premises servers, or of on-premises servers to servers in Azure, you will need to configure your own DNS servers with the Azure VNET. The IP address of on-premises servers will likely change every time you establish the point-to-site VPN, as the IP address is assigned from a range of IP addresses configured on the Azure VNET.
  • Web Proxies – SSTP does not support the use of authenticated web proxies. If your organisation uses a web proxy that requires HTTP authentication, the VPN client will have issues establishing the connection. You may need the network boffins after all, to bypass the web proxy for outbound connections to your Azure gateway IP address range.
  • Operating System Support – Point-to-site VPNs only support the use of the Windows VPN client on 64-bit versions of Windows 7/Windows Server 2008 R2 and above.

Conclusion

In this post I have demonstrated how we can use Microsoft Azure to run a Mule ESB DEV/TEST environment using point-to-site VPNs for simple connectivity between on-premises resources and servers in the cloud. Provisioning integration DEV/TEST environments on demand increases infrastructure agility, removes those long lead times whenever projects kick off or resources change, and enforces a greater level of standardisation across the team, all of which improve the development lifecycle, even for integration projects!

Migrating Azure Virtual Machines to another Region

I have a number of DEV/TEST Virtual Machines (VMs) deployed to Azure Regions in Southeast Asia (Singapore) and West US, as these were the closest to those of us living in Australia. Now that the new Azure Regions in Australia have launched, it’s time to start migrating those VMs closer to home. Manually moving VMs between Regions is pretty straightforward, and a number of articles already exist outlining the manual steps.

To migrate an Azure VM to another Region:

  1. Shutdown the VM in the source Region
  2. Copy the underlying VHDs to storage accounts in the new Region
  3. Create OS and Data disks in the new Region
  4. Re-create the VM in the new Region.

Simple enough, but it involves tedious manual configuration, switching between tools and long waits while tens or hundreds of GBs are transferred between Regions.

What’s missing is the automation…

Automating the Migration

In this post I will share a Windows PowerShell script that automates the migration of Azure Virtual Machines between Regions. I have made the full script available via GitHub.

Here is what we are looking to automate:

Migrate-AzureVM

  1. Shutdown and Export the VM configuration
  2. Set up async copy jobs for all attached disks and wait for them to complete
  3. Restore the VM using the saved configuration.

The Migrate-AzureVM.ps1 script assumes the following:

  • Azure Service Management certificates are installed on the machine running the script for both source and destination Subscriptions (same Subscription for both is allowed)
  • Azure Subscription profiles have been created on the machine running the script. Use Get-AzureSubscription to check.
  • Destination Storage accounts, Cloud Services, VNets etc. have already been created.

The script accepts the following input parameters:

.\Migrate-AzureVM.ps1 -SourceSubscription "MySub" `
                      -SourceServiceName "MyCloudService" `
                      -VMName "MyVM" `
                      -DestSubscription "AnotherSub" `
                      -DestStorageAccountName "mydeststorage" `
                      -DestServiceName "MyDestCloudService" `
                      -DestVNETName "MyRegionalVNet" `
                      -IsReadOnlySecondary $false `
                      -Overwrite $false `
                      -RemoveDestAzureDisk $false

  • SourceSubscription – Name of the source Azure Subscription
  • SourceServiceName – Name of the source Cloud Service
  • VMName – Name of the VM to migrate
  • DestSubscription – Name of the destination Azure Subscription
  • DestStorageAccountName – Name of the destination Storage Account
  • DestServiceName – Name of the destination Cloud Service
  • DestVNETName – Name of the destination VNet – blank if none used
  • IsReadOnlySecondary – Indicates if we are copying from the source storage account’s read-only secondary location
  • Overwrite – Indicates if we are overwriting if the VHD already exists in the destination storage account
  • RemoveDestAzureDisk – Indicates if we remove an Azure Disk if it already exists in the destination disk repository

To ensure that the Virtual Machine configuration is not lost (and avoid having to re-create it by hand) we must first shut down the VM and export the configuration, as shown in the PowerShell snippet below.

# Set source subscription context
Select-AzureSubscription -SubscriptionName $SourceSubscription -Current

# Stop VM
Stop-AzureVMAndWait -ServiceName $SourceServiceName -VMName $VMName

# Export VM config to temporary file
$exportPath = "{0}\{1}-{2}-State.xml" -f $ScriptPath, $SourceServiceName, $VMName
Export-AzureVM -ServiceName $SourceServiceName -Name $VMName -Path $exportPath

Once the VM configuration is safely exported and the machine shut down, we can commence copying the underlying VHDs for the OS and any data disks attached to the VM. We’ll want to queue these up as jobs and kick them off asynchronously, as they will take some time to copy across.

# Get the list of Azure disks that are currently attached to the VM
$disks = Get-AzureDisk | ? { $_.AttachedTo.RoleName -eq $VMName }

# Loop through each disk
foreach($disk in $disks)
{
    try
    {
        # Start the async copy of the underlying VHD to
        # the corresponding destination storage account
        $copyTasks += Copy-AzureDiskAsync -SourceDisk $disk
    }
    catch {}   # Support for existing VHD in destination storage account
}

# Monitor async copy tasks and wait for all to complete
WaitAll-AsyncCopyJobs
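
Copy-AzureDiskAsync and WaitAll-AsyncCopyJobs are helper functions defined in the full script. Purely as an illustration of the underlying approach (not the script’s actual implementation), a server-side VHD copy between storage accounts can be queued and monitored with the storage cmdlets along these lines; the account names, keys, container and blob name below are placeholders:

# Illustrative sketch only: queue a server-side blob copy of a VHD and poll until it completes
$srcCtx  = New-AzureStorageContext -StorageAccountName "mysourcestorage" -StorageAccountKey "<source storage account key>"
$destCtx = New-AzureStorageContext -StorageAccountName "mydeststorage" -StorageAccountKey "<destination storage account key>"

# Start the async copy (returns immediately; the copy runs inside the storage service)
Start-AzureStorageBlobCopy -SrcContainer "vhds" -SrcBlob "myvm-disk0.vhd" -Context $srcCtx `
                           -DestContainer "vhds" -DestBlob "myvm-disk0.vhd" -DestContext $destCtx

# Poll the destination blob's copy state until it is no longer pending
do
{
    Start-Sleep -Seconds 60
    $state = Get-AzureStorageBlobCopyState -Container "vhds" -Blob "myvm-disk0.vhd" -Context $destCtx
    Write-Host ("{0:N0} of {1:N0} bytes copied" -f $state.BytesCopied, $state.TotalBytes)
}
while ($state.Status -eq "Pending")

Get-AzureStorageBlobCopyState also has a -WaitForComplete switch if you would rather block than poll in a loop.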

Tip: You’ll probably want to run this overnight. If you are copying between Storage Accounts within the same Region, copy times can vary between 15 minutes and a few hours depending on which storage cluster the accounts reside on. Michael Washam provides a good explanation of this and shows how you can check whether your accounts live on the same cluster. Copying between Regions will always take longer (and don’t forget it incurs data egress charges!)… see below for a nice work-around that could save you heaps of time if you happen to be migrating within the same Geo.

You’ll notice the script also supports being re-run, as there will be times when you can’t leave it running for the duration of the async copy operation. A number of switches are also provided to assist when things go wrong after the copy has completed.

Now that we have our VHDs in our destination Storage Account we can begin putting our VM back together again.

We start by re-creating the logical OS and Azure Data disks that take a lease on our underlying VHDs. To avoid naming clashes, I use a convention based on the Cloud Service name (which must be globally unique), the VM name and a disk number.

# Set destination subscription context
Select-AzureSubscription -SubscriptionName $DestSubscription -Current

# Load VM config
$vmConfig = Import-AzureVM -Path $exportPath

# Loop through each disk again
$diskNum = 0
foreach($disk in $disks)
{
    # Construct new Azure disk name as [DestServiceName]-[VMName]-[Index]
    $destDiskName = "{0}-{1}-{2}" -f $DestServiceName,$VMName,$diskNum   

    Write-Log "Checking if $destDiskName exists..."

    # Check if an Azure Disk already exists in the destination subscription
    $azureDisk = Get-AzureDisk -DiskName $destDiskName `
                              -ErrorAction SilentlyContinue `
                              -ErrorVariable LastError
    if ($azureDisk -ne $null)
    {
        Write-Log "$destDiskName already exists"

        if ($RemoveDestAzureDisk -eq $true)
        {
            # Remove the disk from the repository
            Remove-AzureDisk -DiskName $destDiskName

            Write-Log "Removed AzureDisk $destDiskName"
            $azureDisk = $null
        }
        # else keep the disk and continue
    }

    # Determine media location
    $container = ($disk.MediaLink.Segments[1]).Replace("/","")
    $blobName = $disk.MediaLink.Segments | Where-Object { $_ -like "*.vhd" }
    $destMediaLocation = "http://{0}.blob.core.windows.net/{1}/{2}" -f $DestStorageAccountName,$container,$blobName

    # Attempt to add the azure OS or data disk
    if ($disk.OS -ne $null -and $disk.OS.Length -ne 0)
    {
        # OS disk
        if ($azureDisk -eq $null)
        {
            $azureDisk = Add-AzureDisk -DiskName $destDiskName `
                                      -MediaLocation $destMediaLocation `
                                      -Label $destDiskName `
                                      -OS $disk.OS `
                                      -ErrorAction SilentlyContinue `
                                      -ErrorVariable LastError
        }

        # Update VM config
        $vmConfig.OSVirtualHardDisk.DiskName = $azureDisk.DiskName
    }
    else
    {
        # Data disk
        if ($azureDisk -eq $null)
        {
            $azureDisk = Add-AzureDisk -DiskName $destDiskName `
                                      -MediaLocation $destMediaLocation `
                                      -Label $destDiskName `
                                      -ErrorAction SilentlyContinue `
                                      -ErrorVariable LastError
        }

        # Update VM config
        #   Match on source disk name and update with dest disk name
        $vmConfig.DataVirtualHardDisks.DataVirtualHardDisk | ? { $_.DiskName -eq $disk.DiskName } | ForEach-Object {
            $_.DiskName = $azureDisk.DiskName
        }
    }              

    # Next disk number
    $diskNum = $diskNum + 1
}
# Restore VM
$existingVMs = Get-AzureService -ServiceName $DestServiceName | Get-AzureVM
if ($existingVMs -eq $null -and $DestVNETName.Length -gt 0)
{
    # Restore first VM to the cloud service specifying VNet
    $vmConfig | New-AzureVM -ServiceName $DestServiceName -VNetName $DestVNETName -WaitForBoot
}
else
{
    # Restore VM to the cloud service
    $vmConfig | New-AzureVM -ServiceName $DestServiceName -WaitForBoot
}

# Startup VM
Start-AzureVMAndWait -ServiceName $DestServiceName -VMName $VMName

For those of you migrating VMs between Regions within the same Geo who have GRS enabled, I have also provided an option to use the secondary storage location of the source storage account.

To support this you will need to enable RA-GRS (read access) and wait a few minutes for access to be made available by the storage service. Copying your VHDs will be very quick (in comparison to egress traffic) as the copy operation will use the secondary copy in the same region as the destination. Nice!

Enabling RA-GRS can be done at any time but you will be charged for a minimum of 30 days at the RA-GRS rate even if you turn it off after the migration.
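
Switching the source account’s replication setting can also be scripted. A minimal sketch, assuming a version of the Azure PowerShell module in which Set-AzureStorageAccount accepts the -Type parameter (the account name is a placeholder):

# Switch the source storage account to read-access geo-redundant storage (RA-GRS)
# so its secondary endpoint (<account>-secondary.blob.core.windows.net) becomes readable
Set-AzureStorageAccount -StorageAccountName "mysourcestorage" -Type "Standard_RAGRS"

# Confirm the new replication setting
Get-AzureStorageAccount -StorageAccountName "mysourcestorage"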

# Check if we are copying from a RA-GRS secondary storage account
if ($IsReadOnlySecondary -eq $true)
{
    # Append "-secondary" to the media location URI to reference the RA-GRS copy
    $sourceUri = $sourceUri.Replace($srcStorageAccount, "$srcStorageAccount-secondary")
}

Don’t forget to clean up your source Cloud Services and VHDs once you have confirmed the migrated VMs are running fine, so you don’t incur ongoing charges.
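
A rough sketch of that clean-up (not part of the script; the subscription, service, VM and disk names are placeholders):

# Switch back to the source subscription
Select-AzureSubscription -SubscriptionName "MySub" -Current

# Capture the source VM's disks while they are still attached
$srcDisks = Get-AzureDisk | Where-Object { $_.AttachedTo.RoleName -eq "MyVM" }

# Remove the source VM
Remove-AzureVM -ServiceName "MyCloudService" -Name "MyVM"

# Remove the now-orphaned disks and delete the underlying VHD blobs
# (you may need to wait a few minutes after removing the VM for the disks to detach)
foreach ($disk in $srcDisks)
{
    Remove-AzureDisk -DiskName $disk.DiskName -DeleteVHD
}

# Finally remove the empty source Cloud Service
Remove-AzureService -ServiceName "MyCloudService" -Force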

Conclusion

In this post I have walked through the main sections of a Windows PowerShell script I have developed that automates the migration of an Azure Virtual Machine to another Azure data centre. The full script has been made available on GitHub. The script also supports a number of other migration scenarios (e.g. cross Subscription, cross Storage Account, etc.) and will be a handy addition to your Microsoft Azure DevOps Toolkit.

Connecting Salesforce and SharePoint Online with MuleSoft – Nothing but NET

Often enterprises will choose their integration platform based on the development platform required to build integration solutions. That is, Java shops typically choose Oracle ESB, JBoss, IBM WebSphere or MuleSoft, to name but a few. Microsoft shops have fewer options and typically choose to build custom .NET solutions or use Microsoft BizTalk Server. The development platform should not be the driving factor in choosing an integration platform, and making it one may limit your options.

Your integration platform should be focused on interoperability. It should support common messaging standards, transport protocols, integration patterns and adapters that allow you to connect to a wide range of line of business systems and SaaS applications. Your integration platform should provide frameworks and pattern based templates to reduce implementation costs and improve the quality and robustness of your interfaces.

Your integration platform should allow your developers to use their development platform of choice…no…wait…what!?!

In this post I will walk through integrating Salesforce.com and SharePoint Online using the Java-based Mule ESB platform while writing nothing but .NET code.

MuleSoft .NET Connector

The .NET connector allows developers to use .NET code in their flows enabling you to call existing .NET Framework assemblies, 3rd party .NET assemblies or custom .NET code. Java Native Interface (JNI) is used to communicate between the Java Virtual Machine (JVM), in which your MuleSoft flow is executing, and the Microsoft Common Language Runtime (CLR) where your .NET code is executed.

mule-net-connector---jni_thumb

To demonstrate how we can leverage the MuleSoft .NET Connector in our flows, I have put together a typical SaaS cloud integration scenario.

Our Scenario

  • Customers (Accounts) are entered into Salesforce.com by the Sales team
  • The team use O365 and SharePoint Online to manage customer and partner related documents.
  • When new customers are entered into Salesforce, corresponding document library folders need to be created in SharePoint.
  • Our interface polls Salesforce for changes and needs to create a new document library folder in SharePoint for this customer according to some business rules
  • Our developers prefer to use .NET and Visual Studio to implement the business logic required to determine the target document library based on Account type (Customer or Partner)

Our MuleSoft flow looks something like this:

mule_net_connector_-_flow_thumb5

  • Poll Salesforce for changes based on a watermark timestamp
  • For each change detected:
    • Log and update the last time we sync’d
    • Call our .NET business rules to determine target document library
    • Call our .NET helper class to create the folder in SharePoint

Salesforce Connector

We configure the MuleSoft Salesforce Cloud Connector to point to our Salesforce environment and query for changes to the Accounts entity. Using DataSense, we configure which data items to pull back into our flow to form the payload message we wish to process.

mule_net_connector_-_sf_connector_th

Business Rules

Our business rules are implemented using a bog-standard .NET class library that checks the account type and assigns either “Customers” or “Partners” as the target document library. We then enrich the message payload with this value and return it back to our flow.

public object GetDocumentLibrary(SF_Account account)
{
    var docLib = "Unknown";     // default

    // Check for customer accounts
    if (account.Type.Contains("Customer"))
        docLib = "Customers";

    // Check for partner accounts
    if (account.Type.Contains("Partner"))
        docLib = "Partners";

    return new 
        { 
            Name = account.Name, 
            Id = account.Id, 
            LastModifiedDate = account.LastModifiedDate, 
            Type = account.Type,
            DocLib = docLib 
        };
}

Note: JSON is used to pass non-primitive types between our flow and our .NET class.

So our message payload looks like

{
	"LastModifiedDate":"2014-11-02T11:11:07.000Z",
	"Type":"TechnologyPartner",
	"Id":"00190000016hEhxAAE",
	"type":"Account",
	"Name":"Kloud"
}

and is de-serialised by the .NET connector into our .NET SF_Account object that looks like

public class SF_Account
{
    public DateTime LastModifiedDate;
    public string Type;
    public string Id;
    public string type;
    public string Name;
    public string DocLib;
}

Calling our .NET business rules assembly is a simple matter of configuration.

mule_net_connector_-_business_rules_

SharePoint Online Helper

Now that we have enriched our message payload with the target document library

{
	"LastModifiedDate":"2014-11-02T11:11:07.000Z",
	"Type":"TechnologyPartner",
	"Id":"00190000016hEhxAAE",
	"type":"Account",
	"Name":"Kloud",
	"DocLib":"Partners"
}

we can pass this to our .NET SharePoint client library to connect to SharePoint using our O365 credentials and create the folder in the target document library

public object CreateDocLibFolder(SF_Account account)
{
    using (var context = new Microsoft.SharePoint.Client.ClientContext(url))
    {
        try
        {
            // Provide client credentials
            System.Security.SecureString securePassword = new System.Security.SecureString();
            foreach (char c in password.ToCharArray()) securePassword.AppendChar(c);
            context.Credentials = new Microsoft.SharePoint.Client.SharePointOnlineCredentials(username, securePassword);

            // Get library
            var web = context.Web;
            var list = web.Lists.GetByTitle(account.DocLib);
            var folder = list.RootFolder;
            context.Load(folder);
            context.ExecuteQuery();

            // Create folder
            folder = folder.Folders.Add(account.Name);
            context.ExecuteQuery();
        }
        catch (Exception ex)
        {
            System.Diagnostics.Debug.WriteLine(ex.ToString());
        }
    }

    // Return payload to the flow 
    return new { Name = account.Name, Id = account.Id, LastModifiedDate = account.LastModifiedDate, Type = account.Type, DocLib = account.DocLib, Site = string.Format("{0}/{1}", url, account.DocLib) };
}

Calling our helper is the same as for our business rules

mule_net_connector_-_sharepoint_help

Configuration

Configuring MuleSoft to know where to load our .NET assemblies from is best done using global configuration references.

mule_net_connector_-_global_elements[2]

We have three options to reference our assembly:

  1. Local file path – suitable for development scenarios.
  2. Global Assembly Cache (GAC) – suitable for shared or .NET Framework assemblies that are known to exist on the deployment server (a deployment sketch is shown after this list).
  3. Packaged – suitable for custom and 3rd party .NET assemblies that get packaged with your MuleSoft project and deployed together.
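
If you go with the GAC option, registering a strong-named assembly on the deployment server can be scripted. A minimal sketch using the System.EnterpriseServices publisher class; the assembly path is a placeholder and the snippet must run elevated:

# Register a strong-named assembly in the Global Assembly Cache (run as Administrator)
Add-Type -AssemblyName System.EnterpriseServices
$publisher = New-Object System.EnterpriseServices.Internal.Publish
$publisher.GacInstall("C:\Deploy\MyCompany.BusinessRules.dll")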

mule_net_connector_-_global_elements[1]

Deployment

With our flow completed, coding in nothing but .NET, we are good to test and deploy our package to our Mule ESB integration server. At the time of writing, CloudHub does not support the .NET Connector, but this should be available in the not too distant future. To test my flow I simply spin up an instance on the development server and watch the magic happen.

We enter an Account in Salesforce with Type of “Customer – Direct”…

mule_net_connector_-_sf_customerng_t

and we see a new folder in our “Customers” document library for that Account name in a matter of seconds

mule_net_connector_-_sharepoint_fold[1]

SLAM DUNK!… Nothing but NET 🙂

Conclusion

Integration is all about interoperability and not just at runtime. It should be a core capability of our Integration framework. In this post we saw how we can increase our Integration capability without the need to sacrifice our development platform of choice by using MuleSoft and the MuleSoft .NET Connector.

Publishing to Azure Event Hubs using a .NET Micro Framework Device

In previous posts, Kloudies Matt Davies and Olaf Loogman have shown how we connect Arduino-based devices to the Azure platform. Preferring the .NET Micro Framework (NETMF) platform myself, I thought it was time to show how we can publish sensor data to Azure Event Hubs using a NETMF connected device.

.NET Micro Framework

Like Arduino, the .NET Micro Framework is an open source platform that runs on small, microcontroller-based devices or “things” as we call them now in the world of the Internet-of-Things (IoT). However, unlike the Arduino platform, developers in the NETMF world use Visual Studio and C# to develop embedded applications, leveraging the rich developer experience that comes with working within the Visual Studio IDE. Using the .NET Gadgeteer toolkit, we take this experience to the next level with a model-driven development approach and graphical designers that abstract much of the low-level “engineering” aspects of embedded device development.

net gadgeteer i VS

Whether we are working with earlier NETMF versions or with the Gadgeteer toolkit, we still get the rich debugging experience and deployment features from Visual Studio which is the big differentiator of the NETMF platform.

FEZ Panda II

The device I have had for a number of years is the FEZ Panda II from GHI Electronics (FEZ stands for Freak’N’Easy).

fez_panda_II

Although not a member of the newer .NET Gadgeteer family, the FEZ Panda II still provides those in the maker community a solid device platform for DIY and commercial grade applications. The FEZ Panda II sports:

  • 72MHz 32-bit processor with 512KB of FLASH and 96KB of RAM (compared to the Arduino Yun’s 16MHz, 32KB of FLASH and 2KB of RAM)
  • Micro SD socket for up to 16GB of memory
  • Real Time Clock (RTC)
  • Over 60 digital inputs and outputs
  • TCP/IP HTTP support
  • Arduino shield compatibility

Note: The FEZ Panda II does not have built in support for TLS/SSL which is required to publish data to Azure Event Hubs. This is not a problem for the newer Gadgeteer boards such as the FEZ Raptor.

Our Scenario

iot cloud gateway

The scenario I will walk through in this post features our NETMF embedded device with a number of analogue sensors taking readings a couple of times per second and publishing the sensor data to an Azure Event Hub via a field gateway. A monitoring application will act as an event hub consumer to display sensor readings in near real-time.

  • Sensors – Thermometer (degs Celsius) and Light (intensity of light as a % with zero being complete darkness)
  • Device – FEZ Panda II connected to the internet using the Wiznet W5100 ethernet controller.
  • Field Gateway – Simple IIS Application Request Routing rule in an Azure hosted Virtual Machine that routes the request as-is to an HTTPS Azure Event Hub endpoint.
  • Cloud Gateway – Azure Event Hub configured as follows:
    • 8 partitions
    • 1 day message retention
    • Monitoring Consumer Group
    • Publisher access policy for our connected device
    • Consumer access policy for our monitoring application
  • Monitoring Application – Silverlight (long live Ag!) application consuming events off the Monitoring Consumer Group.

Creating the Visual Studio Solution

To develop NETMF applications we must first install the .NET Micro Framework SDK and any vendor specific SDK’s. Using Visual Studio we create a NETMF project using the .NET Micro Framework Console Application template (or Gadgeteer template if you are using the newer family of devices).

netmf_project

For the FEZ Panda II, I need to target NETMF version 4.1.

target_netmf_4.1

Additionally, I also need to add assembly references to the device manufacturer’s SDK libraries, GHI Electronics in my case.

referenced_assemblies

Sensor Code

Working with sensors and other modules is fairly straightforward using NETMF and the GHI libraries. To initialise an instance of my sensor class I need to know:

  • The analogue pin on the device my sensor is connected to
  • The interval between sensor readings
  • The min/max values of the readings
public Thermometer(FEZ_Pin.AnalogIn pin, int interval, int min, int max)
{
    // Set reading parameters
    _interval = interval;
    _minScale = min;
    _maxScale = max;
            
    // Initialise thermometer sensor 
    _sensor = new AnalogIn((AnalogIn.Pin)pin);
    _sensor.SetLinearScale(_minScale, _maxScale);

    // Set sensor id
    _sensorId = "An" + pin.ToString() + _sensorType;
}

I then use a background thread to periodically take sensor readings and raise an event passing the sensor data as an event argument

void SensorThreadStart()
{
    while (true)
    {
        // Take a sensor reading
        var temp = _sensor.Read();

        // Create sensor event data 
        var eventData = new SensorEventData()
        {
            DeviceId = _deviceId,
            SensorData = new SensorData[]
                { 
                    new SensorData() { SensorId = _sensorId, SensorType = _sensorType, SensorValue = temp }
                }
        };

        // Raise sensor event
        OnSensorRead(eventData);

        // Pause
        Thread.Sleep(_interval);
    }
}

Networking

A critical attribute of any “thing” in the world of IoT is being connected. When working with resource-constrained devices we quickly come to terms with having to perform many lower-level functions that we may not be accustomed to in our day-to-day development. Initialising your network stack may be one of them…

public static void InitNetworkStack()
{
    Debug.Print("Network settings...");

    try
    {
        // Enable ethernet
        WIZnet_W5100.Enable
            (
                SPI.SPI_module.SPI1,
                (Cpu.Pin)FEZ_Pin.Digital.Di10,
                (Cpu.Pin)FEZ_Pin.Digital.Di7,
                true
            );

        // Enable DHCP
        Dhcp.EnableDhcp(new byte[] { 0x00, 0x5B, 0x1C, 0x51, 0xC6, 0xC7 }, "FEZA");
                
        Debug.Print("IP Address: " + new IPAddress(NetworkInterface.IPAddress).ToString());
        Debug.Print("Subnet Mask: " + new IPAddress(NetworkInterface.SubnetMask).ToString());
        Debug.Print("Default Gateway: " + new IPAddress(NetworkInterface.GatewayAddress).ToString());
        Debug.Print("DNS Server: " + new IPAddress(NetworkInterface.DnsServer).ToString());
    }
    catch (Exception ex)
    {
        Debug.Print("Network settings...Error: " + ex.ToString());
    }
    return;
}

Note the use of the Debug.Print statements. While in debug mode these are written to the Output window in Visual Studio for easy troubleshooting and debugging.

Event Hub Client

As I write this, we don’t yet have an Azure SDK for NETMF (but we have been told it is on its way). Like most services in Azure, Event Hubs provides a REST-based API that I can consume using plain old HTTP requests. To handle access control, I assigned a pre-generated SAS token to the device during deployment. This avoids the resource-constrained device having to generate a SAS token itself and use up precious memory doing so (a sketch of pre-generating the token is shown below).
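
The pre-generated token itself can be produced on the workstation or build server that provisions the device. A minimal sketch of generating a Service Bus shared access signature in PowerShell; the namespace, hub, device, key name and key values are placeholders:

# Generate a long-lived SAS token for a single Event Hub publisher endpoint
Add-Type -AssemblyName System.Web

$serviceNamespace = "mynamespace"
$hubName          = "myeventhub"
$deviceName       = "device01"
$keyName          = "DevicePublish"
$key              = "<shared access key>"

# The resource the token grants access to (this device's publisher endpoint)
$uri        = "https://$serviceNamespace.servicebus.windows.net/$hubName/publishers/$deviceName"
$encodedUri = [System.Web.HttpUtility]::UrlEncode($uri)

# Expiry expressed as seconds since the Unix epoch (one year from now)
$epoch  = New-Object DateTime 1970, 1, 1, 0, 0, 0, ([DateTimeKind]::Utc)
$expiry = [int][Math]::Floor(([DateTime]::UtcNow.AddYears(1) - $epoch).TotalSeconds)

# Sign "<encoded uri>`n<expiry>" with HMAC-SHA256 using the shared access key
$hmac     = New-Object System.Security.Cryptography.HMACSHA256
$hmac.Key = [Text.Encoding]::UTF8.GetBytes($key)
$sig      = [Convert]::ToBase64String($hmac.ComputeHash([Text.Encoding]::UTF8.GetBytes("$encodedUri`n$expiry")))

# The finished token is what the device sends in its Authorization header
"SharedAccessSignature sr=$encodedUri&sig=$([System.Web.HttpUtility]::UrlEncode($sig))&se=$expiry&skn=$keyName"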

To construct our request to Event Hubs we need the following details:

  • Service Bus Namespace
  • Event Hub name
  • PartitionKey (I am using a device ID)
  • Authorisation token
public EventHubClient(string serviceNamespace, string eventhub, string deviceName, string accessSignature)
{
    // Assign event hub details
    _serviceNamespace = serviceNamespace;
    _hubName = eventhub;
    _deviceName = deviceName;
    _sas = accessSignature;

    // Generate the url to the event hub
    //_url = "https://" + _serviceNamespace + ".servicebus.windows.net/" + _hubName + "/Publishers/" + _deviceName;

    //  Note: As the FEZ Panda (.NET MF 4.1) does not support SSL I need to send this to the field gateway over HTTP
    _url = "http://dev-vs2014ctp-ty074cpf.cloudapp.net/" + _serviceNamespace + "/" + _hubName + "/" + _deviceName;
}

Note here I have switched my Event Hub Client to use an intermediary field gateway URL as the device does not support SSL and cannot post requests directly to Azure Event Hub endpoint.

Finally, the actual payload is the sensor data, which I serialise into JSON format. Event Hubs is payload agnostic, so any stream of data may be sent through the hub: sensor readings, application logging or perhaps observational data from medical devices can all be published to Azure Event Hubs.

public bool SendEvent(SensorEventData sensorData)
{
    var success = false;
    try
    {
        // Format the sensor data as json
        var eventData = sensorData.ToJson();

        Debug.Print("Sending event data: " + eventData);

        // Create an HTTP Web request.
        HttpWebRequest webReq = HttpWebRequest.Create(_url) as HttpWebRequest;

        // Add required headers
        webReq.Method = "POST";
        webReq.Headers.Add("Authorization", _sas);
        webReq.ContentType = "application/atom+xml;type=entry;charset=utf-8";
        webReq.ContentLength = eventData.Length;
        webReq.KeepAlive = true;                

        using (var writer = new StreamWriter(webReq.GetRequestStream()))
        {
            writer.Write(eventData);                    
        }

        webReq.Timeout = 3000; // 3 secs
        using (var response = webReq.GetResponse() as HttpWebResponse)
        {
            Debug.Print("HttpWebResponse: " + response.StatusCode.ToString());
            // Check status code
            success = (response.StatusCode == HttpStatusCode.Created);
        }                
    }
    catch (Exception ex)
    {
        Debug.Print(ex.ToString());
    }

    return success;
}

Main

Wiring it all together is the job of our entry point Main(). Here we initialise our network stack, sensors, LEDs and of course our Azure Event Hub client. We then wire up the sensor event handlers and off we go.

public static void Main()
{            
    // Initialise device and sensors
    Init();

    // Setup Event Hub client
    client = new EventHubClient(serviceNamespace, hubName, deviceName, sas);

    Debug.Print("Device ready");

    // Start sensor monitoring
    thermo.Start();
    light.Start();
}

static void Init()
{
    // Enable ethernet
    Helpers.Network.InitNetworkStack();

    // Init LED 
    led = new LED((Cpu.Pin)FEZ_Pin.Digital.Di5, false);

    // Init thermometer sensor
    thermo = new Thermometer(FEZ_Pin.AnalogIn.An2, 500, -22, 56);
    thermo.SensorReadEvent += SensorReadEvent;

    // Init light sensor
    light = new Light(FEZ_Pin.AnalogIn.An3, 500, 0, 100);
    light.SensorReadEvent += SensorReadEvent;

    // Flash once if all is good
    led.Flash(1);
}

static void SensorReadEvent(SensorEventData data)
{
    // Send event to Event Hubs
    if (!client.SendEvent(data))
        // Flash three times if failed to send
        led.Flash(3);
    else
        // Flash once if all is good
        led.Flash(1);
}

Note the use of the LED, connected to digital pin 5, to provide runtime feedback. We flash the LED once for every successful publish of an event and three times if we have a failure. It is this kind of low-level controller interaction that makes NETMF development such a satisfying, albeit geeky, pastime.

Field Gateway

As mentioned above, the FEZ Panda II does not support TLS/SSL. To overcome this, I posted sensor data to a “field gateway” consisting of a simple IIS Application Request Routing rule to perform the protocol translation from HTTP to HTTPS. The ARR rule only performed a rewrite of the URL and did not need to enrich or modify the request in any other way.

arr_rule

Our Consumer

Azure Event Hubs provides Consumer Groups that subscribe to events published to the hub. Only one consumer can receive events from each partition at a time, so I have found it good practice to create at least two consumer groups: one that can be used for monitoring as required, while your downstream processing applications and services consume the primary consumer group. To this end, I developed a quick Silverlight application (yes, I know… long live Ag!) to act as a monitoring consumer for the event hub.

event hubs consumer

Conclusion

The .NET Micro Framework provides a great way for .NET developers to participate in the growing Internet-of-Things movement for a relatively small ( < $100 ) outlay while retaining the rich developer experience using familiar tools such as Visual Studio. Azure Event Hubs provides the platform for a cloud-based device gateway allowing the ingestion of millions of events that downstream applications and services can consume for real-time analytics and ordered processing.

Bad Request: Internal Load Balancer usage not allowed for this deployment

Microsoft released a number of new networking features for the Azure platform this week:

  • Multiple Site-to-Site VPNs
  • VNET-to-VNET Secure Connectivity
  • Reserved IPs
  • Instance level Public IPs
  • Internal Load Balancing

Announcement details can be found on Scott Gu’s blog post

Internal load balancing (ILB) was a much-needed networking feature that will enable the design of highly available environments in hybrid infrastructure scenarios. Until now, 3rd party solutions were required to load balance workloads on IaaS virtual machines when accessed by on-premises (internal) clients across the site-to-site VPN. Using the existing public load balancer and exposing public endpoints was not a palatable option for many organisations, even when hardened using network ACLs.

internal load balancer

ILB now provides a highly available load balancing solution using an internally addressable endpoint within your cloud service or VNET. However, the ILB feature will not work out-of-the-box for you. Like most new features in preview, documentation is sparse and spread out across a number of locations. I have pulled together a list of considerations you’ll need to address before using this feature:

  1. ILB is not available from the Portal; you must use PowerShell or the REST API (Scott Gu’s post)
  2. You cannot create an ILB instance for a cloud service or virtual network if there are no external endpoints configured on any of the resident virtual machines. (MSDN)
  3. ILB is available only with new deployments and new virtual networks. (Scott Gu’s post)

After installing the latest Azure PowerShell cmdlets you will have a number of cmdlets for managing ILBs in Azure:

  • Get-AzureInternalLoadBalancer
  • Add-AzureInternalLoadBalancer
  • Remove-AzureInternalLoadBalancer
  • Add-AzureEndpoint (now includes the -InternalLoadBalancerName parameter)

However, you may receive the following error when trying to provision a new instance of the ILB despite redeploying virtual machines into a new VNET and cloud service:

Add-AzureInternalLoadBalancer : BadRequest: Internal Load Balancer usage not allowed for this deployment.
At line:1 char:1
+ Add-AzureInternalLoadBalancer -ServiceName $svc -InternalLoadBalancerName $ilb – …
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo          : CloseError: (:) [Add-AzureInternalLoadBalancer], CloudException
+ FullyQualifiedErrorId : Microsoft.WindowsAzure.Commands.ServiceManagement.IaaS.AddAzureInternalLoadBalancer

The solution lies in the definition of “…new virtual network” (from point 3 above). Re-creating the VNET using the portal provisions the “old” type of VNET. As part of this release, a new type of VNET was introduced, Regional Virtual Network. Regional VNETs span the entire regional data centre instead of being pinned to an Affinity Group. This is a good thing for a number of reasons well documented in this post by Narayan Annamalai. Also mentioned in the post was the key to unlocking this:

Following are the key scenarios enabled by Regional Virtual Networks:

  • New VM Sizes like A8 and A9 can be deployed into a virtual network which can also contain other sizes.
  • New features like Reserved IPs, Internal load balancing, Instance level Public IPs
  • The Virtual Network can scale seamlessly to use the capacity in the entire region, but we still restrict to maximum of 2048 virtual machines in a VNet.
  • You are not required to create an affinity group before creating a virtual network.
  • Deployments going into a VNet does not have to be in the same affinity group.

To provision an ILB instance we need to create a Regional Virtual Network. Again this is not currently supported in the Portal (either version). Fortunately, creating this new type of VNET is just a matter of changing our exported VNET configuration file as follows:

Original VNET config:

<VirtualNetworkSite name="lab-vnet" AffinityGroup="lab-ag">

New VNET config

<VirtualNetworkSite name="lab-vnet" Location="Southeast Asia">

Important: You will need to un-deploy all resources in the current VNET before you can upgrade to the new regional VNET, as the old VNET is deleted and the new VNET created. Not an insignificant task for those looking to take advantage of this new feature in existing DEV/TEST environments. I can’t stress enough the importance of automating your cloud provisioning and deployment operations; it will save you many days of Click > Click > Next’ing.
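
To round-trip the configuration change, the network configuration can be exported, edited and re-imported with the service management cmdlets. A minimal sketch (the file path is illustrative):

# Export the current network configuration to a file
Get-AzureVNetConfig -ExportToFile "C:\Temp\NetworkConfig.xml"

# Edit the VirtualNetworkSite element as shown above:
#   replace AffinityGroup="lab-ag" with Location="Southeast Asia"

# Import the updated configuration (remember: the affected VNET must be empty)
Set-AzureVNetConfig -ConfigurationPath "C:\Temp\NetworkConfig.xml"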

After the VNET is upgraded and the environment re-provisioned, the ILB feature can be provisioned as follows:

  1. Create the ILB instance using the Add-AzureInternalLoadBalancer cmdlet
    Add-AzureInternalLoadBalancer -ServiceName $svc -InternalLoadBalancerName $ilb -SubnetName $subnet -StaticVNetIPAddress $IP
  2. Add internal endpoints to the ILB instance on each VM to be load balanced using the Add-AzureEndpoint cmdlet
    Get-AzureVM -ServiceName $svc -Name $vmname | `
    	Add-AzureEndpoint -Name $epname -Protocol $prot -LocalPort $locport -PublicPort $pubport -DefaultProbe -InternalLoadBalancerName $ilb | `
    	Update-AzureVM
    

    Note: the -InternalLoadBalancerName parameter.

  3. Create a DNS A record that points to the ILB virtual IP (VIP) that clients will use to communicate with the ILB instance. Use the Get-AzureInternalLoadBalancer cmdlet to get the ILB instance VIP:

    Get-AzureService -ServiceName $svc | `
    	Get-AzureInternalLoadBalancer
    

With the new Internal Load Balancer feature we can now achieve a highly available load balancing solution that on-premises clients can access across the site-to-site VPN.