Message retry patterns in Azure Functions

Azure Functions provide ServiceBus based trigger bindings that allow us to process messages dropped onto a SB queue or delivered to a SB subscription. In this post we’ll walk through creating an Azure Function using a ServiceBus trigger that implements a configurable message retry pattern.

Note: This post is not an introduction to Azure Functions nor an introduction to ServiceBus. For those not familiar with these Azure services, take a look at the Azure Documentation Centre.

Let’s start by creating a simple C# function that listens for messages delivered to a SB subscription.

[Screenshot: creating the Azure Function]

Azure Functions provide a number of ways we can receive the message into our functions, but for the purpose of this post we’ll use the BrokeredMessage type as we will want access to the message properties. See the above link for further options for receiving messages into Azure Functions via the ServiceBus trigger binding.

To use BrokeredMessage we’ll need to import the Microsoft.ServiceBus assembly and change the input type to BrokeredMessage.
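A minimal C# script (run.csx) sketch of what this might look like (not the post's exact code):

#r "Microsoft.ServiceBus"

using Microsoft.ServiceBus.Messaging;

public static void Run(BrokeredMessage message, TraceWriter log)
{
    // Log the ID of the message received from the subscription.
    log.Info($"ServiceBus trigger function processed message: {message.MessageId}");
}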

If we did nothing else, our function would receive the message from SB, log its message ID and remove it from the queue. Actually, the SB trigger peeks the message off the queue, acquiring a peek lock in the process. If the function executes successfully, the message is removed from the queue. So what happens when things go pear shaped? Let’s add a pear and observe what happens.

Note: To send messages to the SB topic, I use Paolo Salvatori’s ServiceBus Explorer. This tool allows us to view queue and message properties mentioned in this post.

[Screenshot: the function retrying by default]

Notice the function being triggered multiple times. This will continue until the SB queue’s MaxDeliveryCount is exceeded. By default, SB queues and topics have a MaxDeliveryCount of 10. Let’s output the delivery count in our function using a message property on the BrokeredMessage class so we can observe this in action.
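Something along these lines will do (a sketch, not the post's exact code):

log.Info($"MessageId: {message.MessageId}, DeliveryCount: {message.DeliveryCount}");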

[Screenshot: outputting the delivery count]

From the logs we see the message was retried 10 times until the maximum number of deliveries was reached and ServiceBus expired the message, sending it to the dead letter queue. “Ah hah!”, I hear you say. Implement message retries by configuring the MaxDeliveryCount property on the SB queue or subscription. Well, that will work as a simple, static retry policy but quite often we need a more configurable, dynamic approach. One based on the message context or type of exception caught by the processing logic.

Typical use cases include handling business errors (e.g. message validation errors, downstream processing errors etc.) vs transport errors (e.g. downstream service unavailable, request timeouts etc.). When handling business errors we may elect not to retry the failed message and instead move it to the dead letter queue. When handling transport errors we may wish to treat transient failures (e.g. database connections) and protocol errors (e.g. 503 service unavailable) differently. We may wish to retry transient failures a few times over a short period, whereas for protocol errors we might want to keep trying the service over an extended period.

To implement this capability we’ll create a shared function that determines the appropriate retry policy based on the context of the exception thrown. It will then check the number of retry attempts against the maximum defined by the policy. If the retry attempts have been exceeded, the message is moved to the dead letter queue; otherwise the function waits for the duration defined by the policy before throwing the original exception.
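The post's actual handler isn't reproduced here, but the idea can be sketched as below. BusinessRuleException and TransientException stand in for the mock exception types used later in the post, and the policy values simply mirror the behaviour described (no retries for business errors, 5 retries 3 seconds apart for protocol errors):

using System;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs.Host;
using Microsoft.ServiceBus.Messaging;

public class BusinessRuleException : Exception { }
public class TransientException : Exception { }

public class RetryPolicy
{
    public int MaxRetries { get; set; }
    public TimeSpan RetryInterval { get; set; }
}

public static class RetryHandler
{
    // Pick a policy based on the type of exception caught by the processing logic.
    private static RetryPolicy GetPolicy(Exception ex)
    {
        if (ex is BusinessRuleException) return new RetryPolicy { MaxRetries = 0 };                                       // business errors: don't retry
        if (ex is TransientException) return new RetryPolicy { MaxRetries = 3, RetryInterval = TimeSpan.FromSeconds(1) }; // transient errors: quick retries
        return new RetryPolicy { MaxRetries = 5, RetryInterval = TimeSpan.FromSeconds(3) };                               // protocol errors: longer retries
    }

    public static async Task ApplyRetryPolicy(BrokeredMessage message, Exception ex, TraceWriter log)
    {
        var policy = GetPolicy(ex);

        if (message.DeliveryCount > policy.MaxRetries)
        {
            log.Warning($"Retries exhausted for message {message.MessageId} - moving to the dead letter queue");
            await message.DeadLetterAsync("RetriesExhausted", ex.Message);
            return;
        }

        log.Info($"Attempt {message.DeliveryCount} of {policy.MaxRetries} - waiting {policy.RetryInterval} before retrying");
        await Task.Delay(policy.RetryInterval);
        throw ex; // rethrow so the peek lock is abandoned and Service Bus redelivers the message
    }
}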

Let’s change our function to throw mock exceptions and call our retry handler function to implement message retry policy.
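And a sketch of the function calling it (again illustrative only; ProcessMessage stands in for whatever processing logic throws the mock exceptions):

public static async Task Run(BrokeredMessage message, TraceWriter log)
{
    try
    {
        log.Info($"Processing message {message.MessageId}, delivery count {message.DeliveryCount}");
        ProcessMessage(message); // mock processing logic that throws our test exceptions
    }
    catch (Exception ex)
    {
        // Delegate to the shared retry handler sketched above.
        await RetryHandler.ApplyRetryPolicy(message, ex, log);
    }
}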

Now our function implements some basic exception handling and checks for the appropriate retry policy to use. Let’s test that our retry policies work by throwing the different exceptions our policy supports.

Testing throwing our mock business rules exception…

[Screenshot: testing the mock business exception]

…we observe that the message is moved straight to the dead letter queue as per our defined policy.

Testing throwing our mock protocol exception…

[Screenshot: testing the mock protocol exception]

…we observe that we retry the message a total of 5 times, waiting 3 seconds between retries as per the defined policy for protocol errors.

Considerations  

  • Ensure your SB queues and subscriptions are defined with a MaxDeliveryCount greater than your maximum number of retries.
  • Ensure your SB queues and subscriptions are defined with a TTL period greater than your maximum retry interval.
  • Be aware that Consumption-based service plans have a maximum execution duration of 5 minutes. App Service Plans don’t have this constraint; however, ensure the functionTimeout setting in the host.json file is greater than your maximum retry interval.
  • Also be aware that if you are using Consumption-based plans you will still be charged for time spent waiting for the retry interval (thread sleep).

Conclusion

In this post we have explored the behaviour of the ServiceBus trigger binding within Azure Functions and how we can implement a dynamic message retry policy. As long as you are willing to manage the deserialization of message content yourself (rather than have Azure Functions do it for you) you can gain access to the BrokeredMessage class and implement feature rich messaging solutions on the Azure platform.

Know Your Cloud Resource Costs on Azure

Organisations used to invest in IT infrastructure mostly for computers, networks or data centres. Over time, they spent their budgets on hosting space. Nowadays, in cloud environments, they mostly spend their funds on purchasing computing power. Here’s a simple diagram of the cloud computing evolution: from left to right, expenditure shifts from infrastructure to computing power.

In the cloud environment, when we need resources, we just create and use them, and when we don’t need them any longer, we just delete them. But let’s think about this. If your organisation runs dev, test and production environments in the cloud, the cost of resources running in the dev or test environment is likely to be overlooked unless carefully monitored. In that case, your organisation might receive an invoice with a massive cost! That has to be avoided. In this post, we are going to have a look at the Azure Billing API that was released in preview and build a simple application to monitor costs in an effective way.

The sample codes used for this post can be found here.

Azure Billing API Structure

There are two distinct APIs for Azure Billing – one is the Usage API and the other is the Rate Card API. By combining the two, we can calculate how much we spent during a particular period.

Usage API

This API is based on a subscription. Within a subscription, we can send a request to calculate how much resources we used in a specified period. Here are the parameters we can use for these requests.

  • ReportedStartTime: Starting date/time reported in the billing system.
  • ReportedEndTime: Ending date/time reported in the billing system.
  • Granularity: Either Daily or Hourly. Hourly returns a more detailed result but takes far longer to come back.
  • Details: Either true or false. This determines whether usage is split down to the instance level. If false is selected, usage for instances of the same type is aggregated.

Here’s an interesting point on the term Reported. When we USE cloud resources, that can be interpreted from two different perspectives. The term USE might mean that the resources were actually used at the specified date/time, or that the resource-usage events were reported to the billing system at the specified date/time. This happens because Azure is basically a distributed system scattered all around the world, and depending on the data centre where the resources are situated, the actual usage date/time can be reported to the billing system in a delayed manner. Therefore, even though we send requests based on the reported date/time, the responses containing usage data show the actual usage date/time.

Rate Card API

When you open a new Azure subscription, you might have noticed a code looking like MS-AZR-****P. Have you seen that code before? This is called Offer Durable ID and, based on this, different rates on resources apply. Please refer to this page to see more details about various types of offers. In order to send requests for this, we can use the following query parameters.

  • OfferDurableId: This is the offer Id. eg) MS-AZR-0017P (EA Subscription)
  • Currency: Currency that you want to look for. eg) AUD
  • Locale: Locale of your search region. eg) en-AU
  • Region: Two-letter ISO country code that you purchased this offer. eg) AU

Therefore, in order to calculate the actual spending, we need to combine these two API responses. Fortunately, there’s a good NuGet library called CodeHollow.AzureBillingApi, so we can just use it to figure out Azure resource consumption costs.

Scenario

Kloud, as a cloud consulting firm, offers all consultants access to the company’s subscription without restriction so that they can create resources to develop/test scenarios for their clients. However, once resources are created, there’s a high chance that those resources are not destroyed in a timely manner, which brings about unnecessary spending. Therefore, the management team has decided to perform cost control by resource group – 1) assigning resource group owners, 2) setting a total spend limit, and 3) setting a daily spend limit – using tags. By virtue of these tags, resource group owners are notified via email when the cost approaches 90% of the total spend limit, and again when it reaches the total spend limit. They also get notified if the cost exceeds the daily spend limit, so they can take appropriate action for their resource groups.

Sounds simple, right? Let’s code it!

Once the application is written, it should run daily to aggregate all costs, store them in a database, and send notifications to the resource group owners who meet the conditions above.

Writing Common Libraries

The common libraries consist of three parts. Firstly, they call the Azure Billing API and aggregate data by date and resource group. Secondly, they store the aggregated data in a database. Finally, they send notifications to resource group owners whose resource groups exceed either the total spend limit or the daily spend limit.

Azure Billing API Call & Data Aggregation

CodeHollow.AzureBillingApi can reduce a huge amount of API-calling work. A simple implementation might look like:

First of all, like the code above, we need to fetch all resource usage/cost data; then, like below, that data needs to be grouped by date and resource group.
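As the gist isn't reproduced here, a rough sketch of that grouping step is shown below; the UsageRecord shape is an assumption, not the library's actual types:

using System;
using System.Collections.Generic;
using System.Linq;

public class UsageRecord
{
    public DateTime UsageDate { get; set; }
    public string ResourceGroupName { get; set; }
    public decimal Cost { get; set; }
}

public static class CostAggregator
{
    public static List<UsageRecord> AggregateByDateAndResourceGroup(IEnumerable<UsageRecord> records)
    {
        // Group the raw usage/cost records by calendar date and resource group,
        // then sum the cost within each group.
        return records
            .GroupBy(r => new { Date = r.UsageDate.Date, r.ResourceGroupName })
            .Select(g => new UsageRecord
            {
                UsageDate = g.Key.Date,
                ResourceGroupName = g.Key.ResourceGroupName,
                Cost = g.Sum(r => r.Cost)
            })
            .ToList();
    }
}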

We now have all cost related data per resource group. We then need to fetch tag values from resource groups using another API call and merge it with the data previously populated.

We can look up all resource groups in a given subscription like above, and merge this result with the cost data that we previously found, like below.

Data Storage

This is the simplest part. Just use Entity Framework and store data into the database.

We’ve so far implemented the data aggregation part.

Notification

First of all, we need to fetch the resource groups that meet the conditions, which is not that hard to write.

The code above is self-explanatory: it only returns resource groups that 1) approach the total spend limit, 2) exceed the total spend limit, or 3) exceed the daily spend limit. It works well, even though it looks a bit smelly.
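The author's filter code isn't shown here, but the idea can be sketched like this (the ResourceGroupCost type and its property names are assumptions):

using System.Collections.Generic;
using System.Linq;

public class ResourceGroupCost
{
    public string ResourceGroupName { get; set; }
    public decimal TotalCost { get; set; }
    public decimal DailyCost { get; set; }
    public decimal TotalSpendLimit { get; set; }
    public decimal DailySpendLimit { get; set; }
}

public static class NotificationFilter
{
    public static IEnumerable<ResourceGroupCost> FindOwnersToNotify(IEnumerable<ResourceGroupCost> groups)
    {
        return groups.Where(g =>
            g.TotalCost >= g.TotalSpendLimit * 0.9m    // 1) approaching the total spend limit
            || g.TotalCost >= g.TotalSpendLimit        // 2) exceeded the total spend limit
            || g.DailyCost >= g.DailySpendLimit);      // 3) exceeded the daily spend limit
    }
}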

The following code bits show how to send notifications to the resource group owners.

It only writes alarms to the screen here, but we could plug in SendGrid for email notifications or Twilio for SMS alerts.

Now we’ve got the basic application structure. How can we execute it, by the way? We might have two approaches – Azure WebJobs and Azure Functions. Let’s move on.

Monitoring Application on Azure WebJob

A console application might be the simplest way for this purpose. Once the console app is built, it can be deployed to an Azure WebJob straight away. Here’s the simple console application code.

The Aggregator service collects and stores the data, and the Reminder service sends alerts to resource group owners. In order to deploy this to an Azure WebJob, we need to create two extra files, run.cmd and settings.job.

  • settings.job: It contains the CRON expression for the scheduled job. For example, if this WebJob runs every night at 00:20, the JSON object might look like the sketch shown just after this list.

  • run.cmd: When this WebJob is run, it always looks up run.cmd first, which is a simple batch command file. Therefore, if necessary, we can enter the actual executable command with appropriate arguments into this file.
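A settings.job sketch for the 00:20 nightly schedule mentioned above (the CRON fields are seconds, minutes, hours, day, month, day of week):

{
  "schedule": "0 20 0 * * *"
}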

That’s how we can use Azure WebJob for monitoring.

Monitoring Application on Azure Function

We can use Azure Functions instead. But in this case we HAVE TO make sure:

Azure Functions instance MUST be with App Service Plan, NOT Consumption Plan

Basically, this app runs for 1-2 minutes at the shortest and 30-40 minutes at the longest. That execution time is not a good fit for the Consumption Plan, which both limits execution duration and charges based on execution time. On the other hand, as we have already paid for an App Service Plan, we don’t need to pay anything extra for the Function instance if we create it under that App Service Plan.

A Timer Trigger function suits our purpose. Using the precompiled Azure Functions approach would also be helpful, and the function code might look like:
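The actual sample isn't reproduced here; a minimal precompiled timer-trigger sketch is below. CostMonitorTimerTrigger is an illustrative name, and AggregatorService/ReminderService are the services described in the WebJob section above (their exact API is an assumption):

using System;
using Microsoft.Azure.WebJobs.Extensions.Timers;
using Microsoft.Azure.WebJobs.Host;

public static class CostMonitorTimerTrigger
{
    public static void Run(TimerInfo myTimer, TraceWriter log)
    {
        log.Info($"Cost monitoring run started at {DateTime.UtcNow:O}");

        // Collect cost data from the Billing API and store it in the database.
        new AggregatorService().Run();

        // Notify resource group owners who have hit their spend limits.
        new ReminderService().Run();
    }
}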

Here’s the function.json for this Timer Trigger one:
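As the original file isn't shown here either, a sketch of what it might contain follows; the schedule, assembly name and entry point are assumptions:

{
  "bindings": [
    {
      "type": "timerTrigger",
      "direction": "in",
      "name": "myTimer",
      "schedule": "0 0 0 * * *"
    }
  ],
  "scriptFile": "..\\bin\\CostMonitor.dll",
  "entryPoint": "CostMonitor.CostMonitorTimerTrigger.Run"
}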

Here we have shown how to quickly write a simple application for cost monitoring, using the Azure Billing API. Cloud resources can certainly be used effectively and efficiently, but the flip side is, of course, that we have to be very careful not to be wasteful. Implementing a monitoring application like this helps prevent unwanted cost leakage.

Precompiled Azure Functions Revisited

Since Microsoft released a preview version of Visual Studio Tools for Azure Functions in December 2016, it has been reported to be very buggy. The current roadmap says the tooling won’t be unveiled until .NET Standard 2.0 is released. Therefore, the Azure App Service Team published an article on utilising ASP.NET Web Application projects in the meantime. Even though the approach requires a fair amount of manual setup, it is literally the only way to work with Azure Functions in Visual Studio without the tooling.

In this post, as a complement to that article, we are going to build a simple Web API using HTTP Triggers and implement a simple CQRS pattern using Queue Triggers, with the precompiled approach. I hope this post offers some more detail.

The sample code used for this post can be found here.

SIDE NOTES:

  • We use VS2015 in this post. If you want to use VS2017, that would be fine.
  • We only use .NET Framework version 4.6 as Azure Functions uses this version.

Benefits of Precompiled Azure Functions

There are several benefits when we use Azure Functions that come as precompiled .dll files:

  1. We can use full features on Visual Studio, including IntelliSense.
  2. We can easily write unit test codes.
  3. We can easily attach function codes to existing CI/CD pipelines.
  4. We can easily migrate an existing codebase with barely any modification.
  5. We don’t need project.json for NuGet package management.
  6. We can reduce the total cold start time by removing on-the-fly compilation when requests hit the Functions.

Let’s find out a scenario we’re using in this post.

Web APIs to Create or View Product Details

We’re creating two Web API endpoints – one to add/update product details, and the other to view product details. We’ll also add another function using an Azure Storage Queue to implement a CQRS pattern.

Entity Framework Code-First Approach for Product Details

First of all, we’re creating a class library to take care of database transaction using Entity Framework. Here are a simple product class and database context.
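The actual classes live in the sample repository; a minimal sketch of what they might look like is below (the property names are assumptions):

using System.Data.Entity;

public class Product
{
    public int ProductId { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public class ProductDbContext : DbContext
{
    public ProductDbContext() : base("Name=ProductDbContext") { }

    public DbSet<Product> Products { get; set; }
}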

We can find this at the PrecompiledSample.EntityModels project.

API Request/Response Object

Next, we create a simple DTO, ProductModel.cs, so as not to directly expose the database entity.

We can find this at the PrecompiledSample.Models project.

Service Layer

Lastly, when we develop Web API applications, we usually implement service layers that are called by controllers. As there’s no controller in Azure Functions, those service layers are instantiated within the Run method of the function code.

We can find this at the PrecompiledSample.Services project.

So far, have you found out any difference? Probably not. Now, it’s time to write Azure Functions code.

Creating Web Application Project for Azure Functions

As Azure Functions basically run on top of an Azure Web App instance, we can start with an empty ASP.NET Web Application project, targeting .NET Framework 4.6, within Visual Studio.

Of course there’s nothing in the project, except packages.config and Web.config.

Install NuGet packages below to run Azure Functions within our Web Application project:

For our simple Web API, for database transactions, we need more NuGet packages:

We’ve completed our basic setup for Azure Functions code. Let’s move on.

Writing Azure Functions

Let’s think about the Azure Functions app structure. Each folder of the Web Application project works as an individual function. Therefore, we can simply create folders and put function.json in each of them. Let’s have a look.

  • AddProductHttpTrigger: This is an HTTP Trigger that works as an API endpoint. It takes POST requests, sends the message body to an Azure Storage Queue and returns HTTP Status Code 202 (Accepted) right away.
  • AddProductQueueTrigger: This is a Queue Trigger that handles the database transaction. It watches the Azure Storage Queue, takes messages from it and processes them into the database.
  • GetProductHttpTrigger: This is an HTTP Trigger that works as an API endpoint. It takes GET requests and returns the product details with HTTP Status Code 200 (OK).

Here’s a high-level diagram how POST request is processed through Functions.

We mentioned function.json just above. What do we put inside it, then? The easiest way to check its content is to create a function on an Azure Functions instance in the portal. Let’s have a look.

AddProductHttpTrigger

If we create an HTTP Trigger function from Azure Portal, we can find the function.json. It looks like:

There are two bindings defined – one is an input binding and the other an output binding. The input binding has a type of httpTrigger, while the output binding has a type of http. Copy it all and paste it into the function.json in our Visual Studio project, then add another output binding with the type of queue. We also need to provide the precompiled .dll file’s information, like below:
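The resulting file isn't reproduced here, but it might contain something like the following; the binding names, queue name, assembly path and entry point are assumptions:

{
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "authLevel": "function",
      "methods": [ "post" ]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "$return"
    },
    {
      "type": "queue",
      "direction": "out",
      "name": "queue",
      "queueName": "products",
      "connection": "AzureWebJobsStorage"
    }
  ],
  "scriptFile": "..\\bin\\PrecompiledSample.Functions.dll",
  "entryPoint": "PrecompiledSample.Functions.AddProductHttpTrigger.Run"
}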

As defined above, we need AddProductHttpTrigger.cs.

Copy the code from Run.csx of Azure Portal and paste it into the file, then modify it like below:
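A sketch of what the modified AddProductHttpTrigger.cs might look like (the binding names match the function.json sketch above; the actual sample code may differ):

using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;

namespace PrecompiledSample.Functions
{
    public static class AddProductHttpTrigger
    {
        public static async Task<HttpResponseMessage> Run(
            HttpRequestMessage req, IAsyncCollector<string> queue, TraceWriter log)
        {
            // Send the raw request body straight to the storage queue...
            var payload = await req.Content.ReadAsStringAsync();
            await queue.AddAsync(payload);

            // ...and return 202 (Accepted) without waiting for it to be processed.
            return req.CreateResponse(HttpStatusCode.Accepted);
        }
    }
}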

What differences can we find between our code and Run.csx?

  1. There’s no #r directive. This is because the function code is not a script anymore.
  2. There are namespace and class name definitions. The script-style function code previously didn’t allow those definitions around the Run method.
  3. Message body from the request is directly sent to the queue.
  4. At the same time, it returns HTTP Status Code 202 (Accepted). This is more semantically correct than returning HTTP Status Code 200 (OK), as it’s a non-blocking, asynchronous operation.

We’ve now created the API endpoint to create a resource, which is corresponding to the C of the CQRS pattern.

AddProductQueueTrigger

When we create a sample Queue Trigger function code, we can see the function.json. Copy and paste it into our function.json in Visual Studio and define the entry point like:

Create the AddProductQueueTrigger.cs file like below:

Copy the code from Run.csx and paste it to the file and modify it like below:
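A sketch of what the modified code might look like; ProductService and AddProductAsync are assumed names for the service layer, not necessarily the sample's actual API:

using System.Threading.Tasks;
using Microsoft.Azure.WebJobs.Host;
using Newtonsoft.Json;
using PrecompiledSample.Models;
using PrecompiledSample.Services;

namespace PrecompiledSample.Functions
{
    public static class AddProductQueueTrigger
    {
        public static async Task Run(string queueItem, TraceWriter log)
        {
            // Deserialise the queued payload into the DTO and hand it to the service layer.
            var model = JsonConvert.DeserializeObject<ProductModel>(queueItem);

            var service = new ProductService();
            await service.AddProductAsync(model);

            log.Info($"Product {model.Name} has been processed.");
        }
    }
}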

This function actually does database transactions. Let’s have a look:

  1. While normal web applications refer to Web.config for their database connection string, Azure Functions is not designed to read from it. Instead, the connection string is defined in the App Settings blade of the Azure Functions portal. In spite of this, we can still write the code the same way: ConfigurationManager.ConnectionStrings["NAME"].ConnectionString.
  2. If we want additional configuration values, use ConfigurationManager.AppSettings["KEY"] with keys defined in the App Settings blade of the Azure Functions portal. We can’t use a custom configuration section for this purpose. Alternatively, if you really want custom configuration settings, create a JSON file, mysettings.json for example, and deserialise it using Json.NET.
  3. It uses the service layer instance that is written in another project, PrecompiledSample.Services. If you are concerned about dependencies, consider the service locator pattern. Testing Precompiled Azure Functions explains how to apply the service locator pattern in Azure Functions.

So far, we have written function codes for resource creation.

GetProductHttpTrigger

Copy the function.json from the HTTP Trigger function created earlier in the portal, paste it into the function.json in Visual Studio, and modify it like:

Then, create the GetProductHttpTrigger.cs file.

Here’s the code for it:
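A sketch of what it might look like (again, ProductService and GetProductAsync are assumed names):

using System;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs.Host;
using PrecompiledSample.Services;

namespace PrecompiledSample.Functions
{
    public static class GetProductHttpTrigger
    {
        public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
        {
            // Use the "id" query parameter as the ProductId.
            var id = req.GetQueryNameValuePairs()
                        .FirstOrDefault(q => string.Equals(q.Key, "id", StringComparison.OrdinalIgnoreCase))
                        .Value;

            var service = new ProductService();
            var product = await service.GetProductAsync(Convert.ToInt32(id));

            return req.CreateResponse(HttpStatusCode.OK, product);
        }
    }
}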

Let’s have a look at the code.

  1. This function code takes a GET request, so it looks up the id query parameter from the querystring and uses it as the ProductId value.
  2. It uses the service layer instance that is written in another project, PrecompiledSample.Services.
  3. It returns the resource details with HTTP Status Code of 200 (OK).

So far, we have created the resource lookup API, corresponding to the Q of the CQRS pattern, and we’ve completed all the necessary function code. If we integrate this Web Application project with the Azure Functions CLI, we can easily perform testing and debugging in our local environment with the same development experience that Visual Studio offers.

Setting Up Debugging Environment for Azure Functions within Visual Studio

We need two more tools for our local debugging experience within Visual Studio.

In order to install the Azure Functions CLI, we can simply run the command npm install --global azure-functions-cli. Note that these tools only work on Windows at the time of writing. Once both are installed, open the project property window like below:

Move to the Web tab and enter the necessary information.

When we integrate Azure Functions CLI with the Web Application project, there are a few points that we need to make sure.

  • The installed location of node.js might be different:
    • If it is downloaded from https://nodejs.org, the CLI location would be C:\Users\[USERNAME]\AppData\Roaming\npm\node_modules\azure-functions-cli\bin\func.exe.
    • If it is installed through NVM, the CLI would be located at C:\Program Files\nodejs\node_modules\azure-functions-cli\bin\func.exe.
  • For Command line arguments, the value should be host start, which runs the WebJobs host on our local machine.
  • Working directory needs the absolute path of the Web Application project where the Azure Functions code resides.

Unfortunately, we can’t use % environment variables here.

Finally, we need to add two .json files – appsettings.json and host.json. Unless we need something specific, host.json stays empty. On the other hand, we need to put some details into appsettings.json, like below:

As we’re using the Azure Storage Emulator, the value UseDevelopmentStorage=true is used. The database connection string is also defined in this file.

Now, it’s time for debugging! Set the Web Application project as the startup project and punch the F5 key, and we’ll see a command prompt window running the Azure Functions CLI.

Let’s send a POST request through a REST API testing tool like Postman. As we can see in the screenshot above, the endpoint URL for resource creation is http://localhost:7071/api/AddProductHttpTrigger, so send a POST request like below:

Then the code stops at the breakpoint we set up in Visual Studio.

How does that feel? We get the same development experience for Azure Functions development. Now, it’s time for deployment to an actual Azure Functions instance.

Deploying Azure Functions

We have developed Azure Function codes within a Web Application project. That means we will have the same deployment experience.

When we choose the publish menu like above, we can select either the Azure App Service option

or import a publish profile settings file downloaded from the Azure Functions instance.

Once we complete the deployment, we can confirm in the Azure Functions portal that all the function code has been successfully deployed. Please note that we don’t need to build a separate web application project for each function; putting everything in one web application project is sufficient.

Once deployed, let’s send a POST request to the endpoint for the AddProductHttpTrigger function. The request body will flow through the pipeline shown in the diagram mentioned before.

Once data has been processed, let’s check the result on the Azure Function side:

And here’s the database query result.

So far, we have built sample Azure Functions code using a Web Application project in Visual Studio. We have used .dll files instead of .csx files for the function code, and with these precompiled .dll library files we have performed debugging and deployment as well. How did you find it? Is it easier to use? Does it give you the same development experience? It may not be easy at first glance. However, because this is the same approach we use to develop a web application, we can easily get used to it.

Hope this post helps you write Azure Functions code with full support from Visual Studio.

Calling WCF client proxies in Azure Functions

Azure Functions allow developers to write discrete units of work and run these without having to deal with hosting or application infrastructure concerns. Azure Functions are Microsoft’s answer to serverless computing on the Azure Platform and, together with Azure ServiceBus, Azure Logic Apps and Azure API Management (to name just a few), have become an essential part of the Azure iPaaS offering.

The problem

Integration solutions often require connecting legacy systems using older protocols such as SOAP and WS-*. It’s not all REST, hypermedia and OData out there in the enterprise integration world. Development frameworks like WCF help us deliver solutions rapidly by abstracting much of the boilerplate code away from us. Often these frameworks rely on custom configuration sections that are not available when developing solutions in Azure Functions. In Azure Functions (as of today at least) we only have access to the generic appSettings and connectionString sections of the configuration.

How do we bridge the gap and use the old boilerplate code we are familiar with in the new world of serverless integration?

So let’s set the scene. Your organisation consumes a number of legacy B2B services exposed as SOAP web services. You want to be able to consume these services from an Azure Function but definitely do not want to be writing any low level SOAP protocol code. We want to be able to use the generated WCF client proxy so we implement the correct message contracts, transport and security protocols.

In this post we will show you how to use a generated WCF client proxy from an Azure Function.

Start by generating the WCF client proxy in a class library project using Add Service Reference, provide details of the WSDL and build the project.

[Screenshot: Add Service Reference]

Examine the generated bindings to determine the binding we need and what policies to configure in code within our Azure Function.

[Screenshot: the generated bindings]

In our sample service above we need to create a basic http binding and configure basic authentication.

Create an Azure Function App using an appropriate template for your requirements and follow these steps to call your WCF client proxy:

Add the System.ServiceModel NuGet package to the function via the project.json file so we can create and configure the WCF bindings in our function.

[Screenshot: project.json]

Add the WCF client proxy assembly to the ./bin folder of our function. Use Kudu to create the folder and then upload your assembly using the View Files panel.

[Screenshot: uploading the WCF client assembly]

In your function, add references to both the System.ServiceModel assembly and your WCF client proxy assembly using the #r directive

When creating an instance of the WCF client proxy, instead of specifying the endpoint and binding in a config file, create these in code and pass them to the constructor of the client proxy.

Your function will look something like this
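A sketch along these lines, assuming a generated proxy class called LegacyServiceClient, an operation called GetOrderStatusAsync, and app settings named LegacyService.*; none of these names come from the original post:

#r "System.ServiceModel"
#r "MyLegacyService.Proxy.dll"

using System.Configuration;
using System.Net;
using System.Net.Http;
using System.ServiceModel;
using System.Threading.Tasks;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    // Build the binding and endpoint in code instead of relying on a config file.
    var binding = new BasicHttpBinding(BasicHttpSecurityMode.Transport);
    binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Basic;

    var endpoint = new EndpointAddress(ConfigurationManager.AppSettings["LegacyService.Endpoint"]);

    // LegacyServiceClient is the generated WCF client proxy (name is illustrative).
    var client = new LegacyServiceClient(binding, endpoint);
    client.ClientCredentials.UserName.UserName = ConfigurationManager.AppSettings["LegacyService.Username"];
    client.ClientCredentials.UserName.Password = ConfigurationManager.AppSettings["LegacyService.Password"];

    var result = await client.GetOrderStatusAsync("12345");

    return req.CreateResponse(HttpStatusCode.OK, result);
}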

Lastly, add endpoint address and client credentials to appSettings of your Azure Function App.

Test the function using the built-in test harness to check the function executes ok

[Screenshot: testing the function]

 

Conclusion

The suite of integration services available on the Azure Platform is developing rapidly, and composing your future integration platform on Azure is a compelling option in a maturing iPaaS marketplace.

In this post we have seen how we can continue to deliver legacy integration solutions using emerging integration-platform-as-a-service offerings.

Automate the nightly backup of your Development FIM/MIM Sync and Portal Servers Configuration

Last week in a customer development environment I had one of those oh shit moments where I thought I’d lost a couple of weeks of work: a couple of weeks of development around multiple Management Agents, MV schema changes etc. Luckily for me I was just connecting to an older VM image, but it got me thinking. It would be nice to have an automated process that each night would;

  • Export each Management Agent on a FIM/MIM Sync Server
  • Export the FIM/MIM Synchronisation Server Configuration
  • Take a copy of the Extensions Folder (where I keep my PowerShell Management Agents scripts)
  • Export the FIM/MIM Service Server Configuration

And that is what this post covers.

Overview

My automated process performs the following;

  1. An Azure PowerShell Timer Function WebApp is triggered at 2330 each night
  2. The Azure Function App initiates a Remote PowerShell session to my Dev MIM Sync Server (which is also a MIM Service Server)
  3. In the Remote PowerShell session the script;
    1. Creates a new subfolder under c:\backup with the current date and time (dd-MM-yyyy-hh-mm)
    2. Creates further subfolders for each of the backup elements:
      • MAExports
      • ServerExport
      • MAExtensions
      • PortalExport
    3. Utilizes the Lithnet MIIS Automation PowerShell Module to;
      1. Enumerate each of the Management Agents on the FIM/MIM Sync Server and export each Management Agent to the MAExports Folder
      2. Export the FIM/MIM Sync Server Configuration to the ServerExport Folder
    4. Copies the Extensions folder and subfolder contents to the MAExtensions Folder
    5. Utilizes the FIM/MIM Export-FIMConfig cmdlet to export the FIM/MIM Service Server Configuration to the PortalExport Folder

Implementing the FIM/MIM Backup Process

The majority of the setup to get this to work I’ve covered in other posts, particularly around Azure PowerShell Function Apps and Remote PowerShell into a FIM/MIM Sync Server.

Pre-requisites

  • I created a C:\Backup Folder on my FIM/MIM Server. This is where the backups will be placed (you can change the path in the script).
  • I installed the Lithnet MIIS Automation PowerShell Module on my MIM Sync Server
  • I configured my MIM Sync Server to accept Remote PowerShell sessions. That involved enabling WinRM, creating a certificate, creating the listener, opening the firewall port and enabling the incoming port on the NSG. You can easily do all that by following my instructions here. From the same post I set up the encrypted password file and uploaded it to my Function App, and set the Function App Application Settings for MIMSyncCredUser and MIMSyncCredPassword.
  • I created an Azure PowerShell Timer Function App. Pretty much the same as I show in this post, except choose Timer.
    • I configured my Schedule for 2330 every night using the following CRON configuration

0 30 23 * * *

  • I set the Azure Function App Timezone to my timezone so that the nightly backup happens at the correct time relative to my timezone. I got my timezone index from here, and set the following variable in my Azure Function Application Settings to my timezone name, AUS Eastern Standard Time.

    WEBSITE_TIME_ZONE

The Function App Script

With all the pre-requisites met, the only thing left is the Function App script itself. Here it is. Update lines 2, 3 & 6 if your variables and password key file are different. The path to your password keyfile will be different on line 6 anyway.

Update line 25 if you want the backups to go somewhere else (maybe a DFS Share).
If your MIM Service Server is not on the same host as your MIM Sync Server change line 59 for the hostname. You’ll need to get the FIM/MIM Automation PS Modules onto your MIM Sync Server too. Details on how to achieve that are here.

Running the Function App I have limited output, but enough to see it run. The first part of the script runs very quickly; the Export-FIMConfig is what takes the majority of the time. That said, it’s less than a minute to get a nice point-in-time backup that is auto-magically executed nightly. Sorted.

 

Summary

The script itself can be run standalone and you could implement it as a Scheduled Task on your FIM/MIM Server. However I’m using Azure Functions for a number of things and having something that is easily portable and repeatable and centralised with other functions (pun not intended) keeps things organised.

I now have a daily backup of the configurations associated with my development environment. I’m sure this will save me some time in the near future.

Follow Darren on Twitter @darrenjrobinson

 

 

 

Integrating Microsoft Flow with Azure Functions for Non-IT People

Microsoft Flow (Flow) creates automated workflows between various apps and services so that users can get notifications, collect data and more. It is similar to Azure Logic Apps (Logic Apps), but targets different audiences such as marketing, sales and other non-IT people. This document provides high-level comparisons between Flow, Logic Apps and Azure Functions.

Flow contains a comprehensive number of pre-defined workflows called templates, so we can simply choose one of them, provide the necessary information and use it. If there is no template suitable for our purpose, we can create a new one from scratch using pre-defined triggers and actions. And if there is no suitable trigger or action pre-defined, we can use a simple HTTP trigger built with Azure Functions. In this post, we are going to have a look at how to use Azure Functions, HTTP Triggers in particular, to integrate with Flow.

As a Marketing Staff, I Want to …

Let’s say there is someone from a marketing department. They want to search all Twitter posts with a hashtag, #ausopen for example, and have those posts fetched into their marketing Slack channel. This can be easily accomplished by using a pre-defined template.

We can easily set the hashtag they want to follow and the Slack channel to fetch the posts into, like:

This is all set! Too easy! Now, as we are on the Free plan, this Flow runs every five minutes. If we want to run the flow more frequently, we should upgrade to a paid plan like Flow Plan 1 (runs every 3 minutes) or Flow Plan 2 (runs every minute). Once the flow runs, the marketing channel in Slack will receive all the tweets, like:

We’ve so far created a Flow item as an example.

As a …, I Want to Handle those Tweets in a Different Way

Perhaps the marketing staff need more sophisticated analysis, storing those tweets in a database, or want to do something else that the pre-defined actions/triggers don’t support out-of-the-box. In this case we can introduce an HTTP Trigger function to do so. Let’s create one.

Of course, we would implement more complex logic in a real function. However, this is just an example, so for now we only log how Flow passes the data to the Azure Function. When the function is ready like above, we know its endpoint URL, e.g. https://my-function-app.azurewebsites.net/api/TwitterWebhoook?code=XXXXXX.
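A minimal sketch of such a logging-only HTTP trigger (run.csx) might be:

using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    // Log whatever payload Flow sends us so we can inspect its shape.
    var payload = await req.Content.ReadAsStringAsync();
    log.Info($"Payload from Flow: {payload}");

    return req.CreateResponse(HttpStatusCode.OK);
}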

Copy this endpoint URL for Flow. Now we need to modify the existing Flow item like:

When a new tweet with the hashtag #ausopen is found, the entire tweet object is passed to Azure Functions via the POST method, then the tweet is posted to the Slack channel. Wait for up to five minutes (we’re on the Free Plan!).

Slack channel has finally been updated.

This is the log from Flow:

And this is the log from Azure Functions:

So far, we have integrated Azure Functions (HTTP Trigger) with Microsoft Flow so that we can do more complex jobs through it. The code used in this post was very simple, but depending on the complexity of your requirements, the function can handle jobs in a more sophisticated way.

Testing Precompiled Azure Functions

Azure Functions has recently added a new feature that allows precompiled assemblies to run as functions. This gives us great confidence with regard to unit testing. In this post, we walk through how to unit test functions with ease, just like the testing we do every day.

The sample code used in this post can be found here.

Function without Dependency

We’re not digging into precompiled functions too much as they’re covered in the documentation. Let’s have a quick look at the HTTP trigger function code:

Nothing special. Now, let’s write test code for this function using xUnit and FluentAssertions.
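The actual sample isn't shown here, so here's a sketch of what such a test might look like, assuming a precompiled HTTP trigger called GreetingHttpTrigger with an async Run(HttpRequestMessage, TraceWriter) method that returns an HttpResponseMessage:

using System.Diagnostics;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Http;
using FluentAssertions;
using Microsoft.Azure.WebJobs.Host;
using Xunit;

public class GreetingHttpTriggerTests
{
    // A no-op TraceWriter so the function's log parameter can be satisfied in tests.
    private class TestTraceWriter : TraceWriter
    {
        public TestTraceWriter() : base(TraceLevel.Verbose) { }
        public override void Trace(TraceEvent traceEvent) { }
    }

    [Fact]
    public async Task Given_Request_Run_Should_Return_Ok()
    {
        var req = new HttpRequestMessage(HttpMethod.Get, "http://localhost/api/greeting?name=Azure");
        req.SetConfiguration(new HttpConfiguration()); // required for req.CreateResponse(...) to work

        var res = await GreetingHttpTrigger.Run(req, new TestTraceWriter());

        res.StatusCode.Should().Be(HttpStatusCode.OK);
    }
}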

How does it look? It’s the same unit test code we write every day. Let’s move on.

Function with Dependency

As I wrote in Managing Dependencies in Azure Functions the other day, dependency management is a bit tricky for Azure Functions due to their static nature. Therefore, we should introduce the Service Locator Pattern for dependency management (or injection). Here’s the sample function code:

As we can see in the code above, the value is retrieved from the service locator instance. Of course, this is just a simple implementation of the service locator pattern (if we need a more sophisticated one, we should consider an IoC container library like Autofac). And here’s the poor man’s service locator:
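The gist isn't included here; a poor man's service locator can be sketched as simply as this (ServiceLocator is an illustrative name, not necessarily the sample's):

using System;
using System.Collections.Generic;

public static class ServiceLocator
{
    private static readonly Dictionary<Type, object> Services = new Dictionary<Type, object>();

    // Startup code (or tests) register instances - including mocks - against an interface type.
    public static void Register<T>(T instance) => Services[typeof(T)] = instance;

    // Function code resolves its dependencies through the locator instead of new-ing them up.
    public static T GetService<T>() => (T)Services[typeof(T)];
}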

Let’s see the test code for the function with dependencies. With the service locator, we can inject a mocked object for unit testing, which is convenient for developers. For the mocking, we use Moq in the following test code.
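A sketch of that test, added to the test class above and reusing its TestTraceWriter stub; IMyService, GetValue and GreetingWithDependencyHttpTrigger are assumed names for a function that resolves its dependency through the service locator:

[Fact]
public async Task Given_MockedDependency_Run_Should_Return_Ok()
{
    // Inject a mocked dependency through the service locator (requires "using Moq;").
    var mocked = new Mock<IMyService>();
    mocked.Setup(s => s.GetValue()).Returns("hello");
    ServiceLocator.Register<IMyService>(mocked.Object);

    var req = new HttpRequestMessage(HttpMethod.Get, "http://localhost/api/greeting");
    req.SetConfiguration(new HttpConfiguration());

    var res = await GreetingWithDependencyHttpTrigger.Run(req, new TestTraceWriter());

    res.StatusCode.Should().Be(HttpStatusCode.OK);
}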

We create a mocked instance and inject it into the service locator. Then the injected value (or instance) is consumed within the function. How different is this from everyday testing? There’s no difference at all. In other words, implementing a service locator gives us the same development experience on Azure Functions, from the testing point of view.

I wrote another article about testing a few months ago, using ScriptCs. That used to be one approach, when Azure Functions didn’t support precompiled assemblies. Now that precompiled functions are supported, I hope this post proves useful for designing functions with better testability.

How to create an Azure Function App to Simultaneously Start|Stop all Virtual Machines in a Resource Group

Just on a year ago I wrote this blog post that detailed a method to “Simultaneously Start|Stop all Azure Resource Manager Virtual Machines in a Resource Group”. It’s a simple script that I use quite a lot and I’ve received a lot of positive feedback on it.

One year on though and there are a few enhancements I’ve been wanting to make to it. Namely;

  • host the script in an environment that is in a known state. Often I’m authenticated to different Azure Subscriptions: my personal, my employer’s and my customers’.
  • prioritize the order the virtual machines startup|shutdown
  • allow for a delay between starting each VM (to account for environments where the VM’s have roles that have cross dependencies; e.g A Domain Controller, an SQL Server, Application Servers). You want the DC to be up and running before the SQL Server, and so forth
  • and if I do all those the most important;
    • secure it so not just anyone can start|stop my environments at their whim

Overview

This blog post is the first of the series and executes the first part: implementing the script in an environment that is in a known state, aka implementing it as an Azure Function App. This won’t be a perfect implementation as you will see, but it will set the foundation for the other enhancements. Subsequent posts (as I make time to develop the enhancements) will add the new functionality. This post covers;

  • Creating the Azure Function App
  • Creating the foundation for automating management of Virtual Machines in Azure using Azure Function Apps
  • Starting | Stopping all Virtual Machines in an Azure Resource Group

Create a New Azure Function App

First up we are going to need a Function App. Through your Azure Resource Manager Portal create a new Function App.

For mine I’ve created a new Resource Group and a new Storage Account as this solution will flesh out over time and I’d like to keep everything organised.

Now that we have the Azure App Service Plan set up, create a new PowerShell HTTP Trigger Function App.

Give it a name and hit Create.

 

Create Deployment Credentials

In order to get some of the dependencies into the Azure Function we need to create deployment credentials so we can upload them. Head to the Function App Settings and choose Go to App Service Settings.

Create a login and give it a password. Record the FTP/Deployment username and the FTP hostname along with your password as you’ll need this in the next step.

Upload our PowerShell Modules and Dependencies

Just as my original PowerShell script did I’m using the brilliant Invoke Parallel Powershell Script from Rambling Cookie Monster. Download it from that link and save it to your local machine.

Connect to your Azure Function App using your favourite FTP Client using the credentials you created earlier. I’m using WinSCP. Create a new sub-directory under /site/wwwroot/ named “bin” as shown below.

Upload the Invoke-Parallel.ps1 file from wherever you extracted it to on your local machine to the bin folder you just created in the Function App.

We are also going to need the AzureRM Powershell Modules. Download those via Powershell to your local machine (eg. Save-Module -Name AzureRM -Path c:\temp\azurerm). There are a lot of modules obviously and you’re not going to need them all. At a minimum for this solution you’ll need;

  • AzureRM
  • AzureRM.profile
  • AzureRM.Compute

Upload them under the bin directory also as shown below.

Test that our script dependencies are accessible

Now that we have our dependent modules uploaded, let’s test that we can load and utilise them. Below are the commands to load the Invoke-Parallel PowerShell script and test that it has loaded by getting its help.

# Load the Invoke-Parallel Powershell Script
. "D:\home\site\wwwroot\RG-Start-Stop-VirtualMachines\bin\Invoke-Parallel.ps1"

# See if it is loaded by getting some output
Get-Help Invoke-Parallel -Full

Put those lines into the code section, hit Save and Run and select Logs to see the output. If successful you’ll see the help. If you don’t you probably have a problem with the path to where you put the Invoke-Parallel script. You can use the Kudu Console from the Function App Settings to get a command line and verify your path.

Mine worked successfully. Now to test our AzureRM Module Loads. Update the Function to load the AzureRM Profile PSM as per below and test you have your path correct.

# Import the AzureRM Powershell Module
import-module 'D:\home\site\wwwroot\RG-Start-Stop-VirtualMachines\bin\AzureRM.profile\2.4.0\AzureRM.Profile.psm1'
Get-Help AzureRM

Success. Fantastic.

Create an Azure Service Principal

In order to automate the access and control of the Azure Virtual Machines we are going to need to connect to Azure using a Service Principal with the necessary permissions to manage the Virtual Machines.

The following script does just that. You only need to run this as part of the setup for the Azure Function so we have an account we can use for our automation tasks. Update line 6 for your naming and the password you want to use. I’m assigning the Service Principal the “DevTest Labs User” Azure Role (Line 17) as that allows the ability to manage the Virtual Machines. You can find a list of the available roles here.

Take note of the key outputs from this script. You will need to note the;

  • ApplicationID
  • TenantID

I’m also securing the credential that has the permissions to Start|Stop the Virtual Machines using the example detailed here in Tao’s post.

For reference, here is an example to generate the keyfile. Update your path in line 5 if required and make sure the password you supply in line 18 matches the password you supplied for the line in the script (line 6) when creating the Service Principal.

Take note of the password encryption string from the end of the script to pair with the ApplicationID and TenantID from the previous steps. You’ll need these shortly in Application Settings.

Additional Dependencies

I created another sub-directory under the function app site named ‘keys’ again using WinSCP. Upload the passkey file created above into that directory.

Whilst we’re there I also created a “logs” directory for any erroneous output (aka logfiles created when you don’t specify them) from the invoke-parallel script.

Application Variables

Using the identity information you have created and generated we will populate variables on the Function App, Application Settings that we can then leverage in our Function App. Go to your Azure Function App, Application Settings and add an application setting (with the respective values you have gathered in the previous steps) for;

  • AzureAutomationPWD
  • AzureAutomationAppID
  • AzureAutomationTennatID (bad speed typing there)

Don’t forget to click Save up the top of the Application Settings screen.

 

The Function App Script

Below is the sample script for your testing purposes. If you plan to use something similar in a production environment you’ll want to add more logging and error handling.

Testing the Function

Select the Test option from the right-hand side pane and update the request body with what the Function takes (mode and resourcegroup) as below. Select Run and watch the logs. You will need to select Expand to get more screen real estate for them.

You will see the VM’s enumerate then the script starting them all up. My script has a 30 second timeout for the Invoke-Parallel Runspace as the VM’s will take longer than 30 seconds to startup. And you pay for use, so we want to keep this lean. Increase the timeout if you have more VM’s or latency that doesn’t see all your VM’s state transitioning.

Checking in the Azure Portal I can see my VM’s all starting up (too fast on the screenshot for the spfarm-mim host).

 

Sample Remote PowerShell Invoke Script

Below is a sample PowerShell script that is remotely calling the Azure Function and providing the info the Function takes (mode and resourcegroup) the same as we did in the Test Request Body script in the Azure Function Portal.  This time to stop the VMs.

Looking in the Azure Portal and we can see all the VMs shutting down.

 

Summary

A foundational implementation of an Azure Function App to perform orchestration of Azure Virtual Machines.

The Function App is rudimentary in that the script exits (as described in the Runspace timeout) after 30 seconds which is prior to the VMs fully returning after starting|stopping. This is because the Function App will timeout after 5mins anyway.

Now to workout the enhancements to it.

Finally, yes I have renewed/changed the Function Key so no-one else can initiate my Function 🙂

Follow Darren Robinson on Twitter

Is Azure Functions over Web API Beneficial?

Whenever I meet clients and give a talk about Azure Functions, they are immediately interested in replacing their existing Web API features with Azure Functions. In this post, I’d like to discuss:

  • Can Azure Functions replace Web API?
  • Is it worth doing?

It would be a good idea to have a read through this article, Serverless Architectures, before starting.

HTTP Trigger Function == Web API Action

One of the characteristics of Serverless Architecture is being “event-driven”. In other words, all functions written in Azure Functions are triggered by events, and those events of course include HTTP requests. From this HTTP request point of view, both an HTTP trigger function and a Web API action work in exactly the same way. Let’s compare the two:
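The original side-by-side sample isn't included here, but an illustrative comparison might look like this:

using System.Linq;
using System.Net;
using System.Net.Http;
using System.Web.Http;
using Microsoft.Azure.WebJobs.Host;

// A Web API action ...
public class GreetingController : ApiController
{
    [HttpGet]
    [Route("api/greeting")]
    public IHttpActionResult Get(string name)
    {
        return Ok($"Hello, {name}");
    }
}

// ... and the equivalent Azure Functions HTTP trigger.
public static class GreetingHttpTrigger
{
    public static HttpResponseMessage Run(HttpRequestMessage req, TraceWriter log)
    {
        var name = req.GetQueryNameValuePairs()
                      .FirstOrDefault(q => q.Key == "name")
                      .Value;

        return req.CreateResponse(HttpStatusCode.OK, $"Hello, {name}");
    }
}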

What do they look like? Pretty similar to each other. Both take an HTTP request, process it and return a response. Therefore, with minor modifications, it seems that a Web API can be easily migrated to Azure Functions.

HTTP Trigger Function != Web API Action

However, life is not easy. There are some major differences we should know before migration:

Functions are always static methods

Even though Azure Functions is an extension of Azure WebJobs, each function has a static modifier by design, unlike Azure WebJobs, which can be written without the static modifier.

Web API actions, by contrast, don’t have the static modifier. This results in a significant architectural change during the migration, especially around dependency injection (DI). We will touch on this later.

Functions always receive HttpRequestMessage instance as a parameter

Within the HTTP request/response pipeline, a Web API controller internally creates an HttpContext instance to handle data like headers, cookies, sessions, querystrings and request body (of course querystrings and request body can be handled in a different way). The HttpContext instance works as an internal property so any action can directly access to it. As a result, each action only passes necessary details as its parameters.

On the other hand, each function takes a different HTTP request/response pipeline from Web API, which passes an HttpRequestMessage instance to the function as a parameter. The HttpRequestMessage instance only handles headers, querystrings and the request body; it doesn’t look after cookies or sessions. This is the huge difference between Web API and Azure Functions in terms of statelessness.

Functions define HTTP verbs and routes in function.json

In Web API, we put attributes like HttpGet, HttpPost, HttpPut, HttpPatch and HttpDelete on each action to declare which HTTP verbs map to which action, combining them with the Route attribute.

On the other hand, each function has a definition of HTTP verbs and routes on function.json. With this definition, different functions having the same route URI can handle requests based on HTTP verbs.

Functions define base endpoint URI in host.json

Other than the host part of the URI, eg) https://api.myservice.com, the base URI is usually defined at the controller level of Web API by adding the Route attribute. This is dead simple.

However, as there’s no controller in Azure Functions, it is defined in host.json. The default value is api, but we can remove it or redefine it by modifying host.json, for example as sketched below.
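For example, the prefix could be redefined (or removed by setting it to an empty string) with something like this in host.json:

{
  "http": {
    "routePrefix": "api"
  }
}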

While function.json can be managed at the function level through GUI or editor, unfortunately it’s not possible to edit host.json directly in the function app. There’s a workaround using Azure App Service Editor to modify host.json, by the way.

Functions should consider service locator pattern for dependency management

There are many good IoC container libraries for Web API to manage dependencies. However, as already discussed in my previous post, Managing Dependencies in Azure Functions, the Service Locator Pattern should be considered for DI in Azure Functions, and actually this is the only way to deal with dependencies for now. This is because every Azure Function has the static modifier, which prevents us from using the same approach as in Web API.

We know different opinions against service locator patterns for Azure Functions exist out there, but this is beyond our topic, so we will discuss it later in another post.

Is Azure Functions over Web API Beneficial?

So far, we have discussed what is the same and what is different between Web API and Azure Functions HTTP Triggers. Back to the initial question: is it really worth migrating a Web API to Azure Functions? Which of the situations below does yours fall under?

  • My Web API is designed for a microservices architecture: then it’s good to go for migration to Azure Functions.
  • My Web API takes a long time to respond: then consider Azure Functions using spare capacity in an existing App Service Plan, because that costs nothing more. The Consumption Plan (or Dynamic Service Plan) would cost too much in this case.
  • My Web API is refactored to use queues: then calculate the price carefully, not only the price for Azure Functions but also the price for Azure Service Bus Queues/Topics and Azure Storage Queues. In addition, check the number of executions, as each Web API call is refactored into one HTTP Trigger function plus at least one Queue Trigger function (two executions in total, at least). Based on the calculations, we can make a decision to stay or move.
  • My Web API needs a significant amount of effort to refactor: then it’s better to stay until it’s restructured and suitable for a microservices architecture.
  • My Web API is written in ASP.NET Core: then stay there; don’t even think of migration until Azure Functions supports ASP.NET Core.

To sum up, unless your Web API requires a significant amount of refactoring or is written in ASP.NET Core, it surely is worth considering a migration to Azure Functions. It is a much easier to use and more cost-effective solution for your Web API.

Debugging Azure Functions in Our Local Box

Because of the nature of Azure Functions – Serverless Architecture – it’s a bit tricky to run them on our local machine for debugging purposes.

There is an approach to this issue in the post Testing Azure Functions in Emulated Environment with ScriptCs. According to that article, we can use ScriptCs for local unit testing. However, the question of debugging still remains, because testing and debugging are different stories. Fortunately, Microsoft has recently released Visual Studio Tools for Azure Functions. It’s still at a preview stage, but worth having a look. In this post, I’m going to walk through how to debug Azure Functions within Visual Studio.

Azure Functions Project & Templates

After we install the toolings, we can create an Azure Functions project.

That gives us the same development experience. Pretty straightforward. Once we create the new project, we find nothing but a couple of .json files – appsettings.json and host.json. The appsettings.json is only used for our local development, not for production, to hook up the actual Azure Functions in the cloud. We are going to touch on this later in this post.

Now let’s create a function in C# codes. Right mouse click on the project and add a new function.

Then we can see a list of templates to start with. We just select from HttpTrigger function in C#.

Now we have a fresh new function.

As we can see, we have a couple of another .json files for settings. function.json defines input and output, and project.json defines list of NuGet packages to import, same as what .NET Core projects do.

That’s all we’ve got now. How do we debug the function then? Let’s move on.

Debugging Functions – HTTP Trigger

Open the run.csx file. Set a break point wherever we want.

Now, it’s time for debugging! Just punch F5 key. If we haven’t installed Azure Functions CLI, we will be asked to install it.

We can manually install the CLI through npm by typing npm install --global azure-functions-cli.

Once the CLI is installed, a command prompt console opens. In the console, it shows various pieces of useful information.

  • Functions in debugging mode always take port 7071. If an application running on our local machine has already taken that port, we’ll be in trouble.
  • The endpoint of the Function is always http://localhost:7071/api/Function_Name, which is the same format as the actual Azure Functions in the cloud. We can’t change it.

Now it’s waiting for requests to come in. As it’s basically an HTTP request, we can send it through our web browser, Postman or even curl.

When any of the requests above is executed, it will hit the breakpoint within Visual Studio.

And the CLI console prints out the log like:

Super cool! Isn’t it? Now, let’s do queue trigger function.

Debugging Functions – Queue Trigger

Let’s create another function called QueueTriggerCSharp. Unlike HTTP triggered functions, this doesn’t have endpoint URL (at least not publicly exposed) as it relies on Azure Storage Queue (or Azure Service Bus Queue). We have Azure Storage Emulator that runs on our dev machine. With this emulator, we can debug our queue-triggered functions.

In order to run this function, we need to setup both appsettings.json and function.json

Here’s the appsettings.json file. We simply assign the value UseDevelopmentStorage=true to AzureWebJobsStorage for now. Then, open function.json and set AzureWebJobsStorage as the connection key.

We’re all set. Hit the F5 key and see how it goes.

It seems nothing has happened. But when we look at the console, the queue-triggered function is certainly up and running. How can we pass a queue value then? We have to use the CLI command here. If the PATH environment variable doesn’t include func.exe, we have to use the full path to run it.

Now we can see the break point at Visual Studio.

So far, we’ve briefly walked through how to debug Azure Functions in Visual Studio. There are, however, some known issues. One of them is that, due to the nature of the .csx format, IntelliSense doesn’t work as expected. Other than that, it works great! So, if your organisation has been hesitating to use Azure Functions due to the lack of a debugging story, now is the time to play around with it!