Azure Function Proxies for API Mocking

In my previous posts, Is Your Serverless Application Testable? – Azure Logic Apps and API Mocking for Developers, we looked at how to mock APIs with various approaches, including Azure API Management, AWS API Gateway, MuleSoft and Azure Functions. Quite recently, the Azure Functions Team released a new mocking feature in Azure Function Proxies. With this feature, API mocking couldn't be easier. In this post, I'm going to show how to use this mocking feature in several ways.

Enable API Mocking via Azure Portal

This is pretty straightforward. Within the Azure Portal, simply enable the Azure Function Proxies feature:

If you have already enabled this feature, make sure that its runtime version is 0.2. Then click the plus sign next to Proxies (preview) and put details like:

At this stage, we might have noticed something interesting: this proxy doesn't have to link to an existing HTTP endpoint. In other words, we don't need a real API for mocking. Just create a mocked HTTP endpoint and use it. Once we save this, we'll have an endpoint like:

Use Postman to hit this mocked URL:

We receive the mocked HTTP status code and response body. How easy!

But if we have an existing Azure Function app and want to integrate this mocking feature via CI/CD pipeline, what can we do? Let’s move on.

Enable Azure Function Proxy Option via ARM Template

An Azure Functions app instance is basically the same as a web app instance, so we can use the same ARM template definition for app settings configuration. In order to enable Azure Function Proxies, we simply add a ROUTING_EXTENSION_VERSION key with a value of ~0.2. Here's a sample ARM template definition:
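A minimal, hedged sketch of that definition might look like the following; the parameter name, resource name and API version are illustrative. Note that deploying the appsettings config resource replaces all app settings, so a real template would also carry across the existing ones (such as AzureWebJobsStorage).

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "functionAppName": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.Web/sites/config",
      "name": "[concat(parameters('functionAppName'), '/appsettings')]",
      "apiVersion": "2016-08-01",
      "properties": {
        "FUNCTIONS_EXTENSION_VERSION": "~1",
        "ROUTING_EXTENSION_VERSION": "~0.2"
      }
    }
  ]
}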

Once we update our ARM template, we can simply deploy this template to Azure.

Enable Azure Function Proxy Option via PowerShell

If an ARM template is not feasible for enabling this proxy option, we can use a PowerShell script (or the Azure CLI equivalent) instead. Here's a sample script:
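A minimal sketch using the AzureRM module might look like this; it reads the existing app settings first so they aren't lost, then adds the proxy runtime flag. The parameter names are placeholders.

param (
    [string]$ResourceGroupName,
    [string]$FunctionAppName
)

# Read the existing app settings first so they are preserved.
$webApp = Get-AzureRmWebApp -ResourceGroupName $ResourceGroupName -Name $FunctionAppName

$appSettings = @{}
foreach ($setting in $webApp.SiteConfig.AppSettings) {
    $appSettings[$setting.Name] = $setting.Value
}

# Enable Azure Function Proxies.
$appSettings["ROUTING_EXTENSION_VERSION"] = "~0.2"

Set-AzureRmWebApp -ResourceGroupName $ResourceGroupName -Name $FunctionAppName -AppSettings $appSettings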

This PowerShell script can then be invoked with parameters like:
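Assuming the sketch above is saved as Enable-FunctionProxies.ps1 (a hypothetical name), the call would be something like:

.\Enable-FunctionProxies.ps1 -ResourceGroupName "my-resource-group" -FunctionAppName "my-function-app"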

Once either approach, ARM template or PowerShell, has been executed, the function app instance will have a setting value like:

Now we have the proxies feature enabled through code! Let's move on.

Deploy Azure Function Proxy with Azure Function App

Defining Azure Function proxies is just a matter of adding another JSON file, proxies.json, to the app instance. The JSON schema definition can be found at http://json.schemastore.org/proxies. Basically, the proxies.json looks like:
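A minimal sketch matching the mocked endpoint described below might look like this:

{
  "$schema": "http://json.schemastore.org/proxies",
  "proxies": {
    "mock-hello-world": {
      "matchCondition": {
        "methods": [ "GET" ],
        "route": "/mock/hello-world"
      },
      "responseOverrides": {
        "response.statusCode": "200",
        "response.headers.Content-Type": "application/json",
        "response.body": { "message": "Hello World" }
      }
    }
  }
}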

As defined above, the mocked API has a URL of /mock/hello-world with a response code of 200 and a body of { "message": "Hello World" }. As long as we follow this structure, we can add as many mocked APIs as we like.

Once we add this proxies.json to our Azure Functions project and deploy it to Azure, we'll be able to see the list of mocked APIs.

So far, we have looked at how to add mocked APIs to an Azure Functions instance in various ways. Depending on your preferences, any approach would be fine, and this is one of the easiest and cheapest ways to mock API endpoints.

Monitoring Azure Storage Queues with Application Insights and Azure Monitor

Azure Queues provides an easy queuing system for cloud-based applications. Queues allow for loose coupling between application components, and applications that use queues can take advantage of features like peek-locking and multiple retry attempts to enable application resiliency and high availability. Additionally, when Azure Queues are used with Azure Functions or Azure WebJobs, the built-in poison queue support allows for messages that repeatedly fail processing attempts to be moved to a dedicated queue for later inspection.

An important part of operating a queue-based application is monitoring the length of queues. This can tell you whether the back-end parts of the application are responding, whether they are keeping up with the amount of work they are being given, and whether there are messages that are causing problems. Most applications will have messages being added to and removed from queues as part of their regular operation. Over time, an operations team will begin to understand the normal range for each queue’s length. When a queue goes out of this range, it’s important to be alerted so that corrective action can be taken.

Azure Queues don’t have a built-in queue length monitoring system. Azure Application Insights allows for the collection of large volumes of data from an application, but it does not support monitoring queue lengths with its built-in functionality. In this post, we will create a serverless integration between Azure Queues and Application Insights using an Azure Function. This will allow us to use Application Insights to monitor queue lengths and set up Azure Monitor alert emails if the queue length exceeds a given threshold.

Solution Architecture

There are several ways that Application Insights could be integrated with Azure Queues. In this post we will use Azure Functions. Azure Functions is a serverless platform, allowing for blocks of code to be executed on demand or at regular intervals. We will write an Azure Function to poll the length of a set of queues, and publish these values to Application Insights. Then we will use Application Insights’ built-in analytics and alerting tools to monitor the queue lengths.

Base Application Setup

For this sample, we will use the Azure Portal to create the resources we need. You don’t even need Visual Studio to follow along. I will assume some basic familiarity with Azure.

First, we’ll need an Azure Storage account for our queues. In our sample application, we already have a storage account with two queues to monitor:

  • processorders: this is a queue that an API publishes to, and a back-end WebJob reads from the queue and processes its items. The queue contains orders that need to be processed.
  • processorders-poison: this is a queue that WebJobs has created automatically. Any messages that cannot be processed by the WebJob (by default after five attempts) will be moved into this queue for manual handling.

Next, we will create an Azure Functions app. When we create this through the Azure Portal, the portal helpfully asks if we want to create an Azure Storage account to store diagnostic logs and other metadata. We will choose to use our existing storage account, but if you prefer, you can have a different storage account than the one your queues are in. Additionally, the portal offers to create an Application Insights account. We will accept this, but you can create it separately later if you want.

1-FunctionsApp

Once all of these components have been deployed, we are ready to write our function.

Azure Function

Now we can write an Azure Function to connect to the queues and check their length.

Open the Azure Functions app and click the + button next to the Functions menu. Select a Timer trigger. We will use C# for this example. Click the Create this function button.

2-Function

By default, the function will run every five minutes. That might be sufficient for many applications. If you need to run the function on a different frequency, you can edit the schedule element in the function.json file and specify a cron expression.
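For example, a function.json with the default five-minute schedule looks roughly like this; the six-field CRON expression is seconds, minutes, hours, day, month, day-of-week:

{
  "bindings": [
    {
      "name": "myTimer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 */5 * * * *"
    }
  ],
  "disabled": false
}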

Next, paste the following code over the top of the existing function:
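A sketch of that function might look like the following; the queue names match the sample application above, and you would adjust them for your own environment:

#r "Microsoft.WindowsAzure.Storage"

using System;
using System.Configuration;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

// Queues to poll - adjust these to match your own storage account.
private static readonly string[] queueNames = { "processorders", "processorders-poison" };

public static void Run(TimerInfo myTimer, TraceWriter log)
{
    log.Info($"C# Timer trigger function executed at: {DateTime.Now}");

    var connectionString = System.Configuration.ConfigurationManager.AppSettings["AzureWebJobsStorage"];
    var storageAccount = CloudStorageAccount.Parse(connectionString);
    var queueClient = storageAccount.CreateCloudQueueClient();

    foreach (var queueName in queueNames)
    {
        var queue = queueClient.GetQueueReference(queueName);
        queue.FetchAttributes();
        var length = queue.ApproximateMessageCount;

        log.Info($"{queueName}: {length}");
    }
}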

This code connects to an Azure Storage account and retrieves the length of each queue specified. The key parts here are:

var connectionString = System.Configuration.ConfigurationManager.AppSettings["AzureWebJobsStorage"];

Azure Functions has an application setting called AzureWebJobsStorage. By default this refers to the storage account created when we provisioned the functions app. If you wanted to monitor a queue in another account, you could reference the storage account connection string here.

var queue = queueClient.GetQueueReference(queueName);
queue.FetchAttributes();
var length = queue.ApproximateMessageCount;

When you obtain a reference to a queue, you must explicitly fetch the queue attributes in order to read the ApproximateMessageCount. As the name suggests, this count may not be completely accurate, especially in situations where messages are being added and removed at a high rate. For our purposes, an approximate message count is enough for us to monitor.

log.Info($"{queueName}: {length}");

For now, this line will let us view the length of the queues within the Azure Functions log window. Later, we will switch this out to log to Application Insights instead.

Click Save and run. You should see something like the following appear in the log output window below the code editor:

2017-09-07T00:35:00.028 Function started (Id=57547b15-4c3e-42e7-a1de-1240fdf57b36)
2017-09-07T00:35:00.028 C# Timer trigger function executed at: 9/7/2017 12:35:00 AM
2017-09-07T00:35:00.028 processorders: 1
2017-09-07T00:35:00.028 processorders-poison: 0
2017-09-07T00:35:00.028 Function completed (Success, Id=57547b15-4c3e-42e7-a1de-1240fdf57b36, Duration=9ms)

Now we have our function polling the queue lengths. The next step is to publish these into Application Insights.

Integrating with Application Insights

Azure Functions has built-in integration with Application Insights for logging each function execution. In this case, we want to save our own custom metrics, which is not currently supported by the built-in integration. Thankfully, integrating the full Application Insights SDK into our function is very easy.

First, we need to add a project.json file to our function app. To do this, click the View files tab on the right pane of the function app. Then click the + Add button, and name your new file project.json. Paste in the following:
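A minimal project.json for the C# script (v1) runtime would look something like this; the package version shown is simply a recent one at the time and may differ for you:

{
  "frameworks": {
    "net46": {
      "dependencies": {
        "Microsoft.ApplicationInsights": "2.4.0"
      }
    }
  }
}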

This adds a NuGet reference to the Microsoft.ApplicationInsights package, which allows us to use the full SDK from our function.

Next, we need to update our function so that it writes the queue length to Application Insights. Click on the run.csx file on the right-hand pane, and replace the current function code with the following:
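A sketch of the updated function, building on the earlier one, might look like this:

#r "Microsoft.WindowsAzure.Storage"

using System;
using System.Configuration;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

private static readonly string[] queueNames = { "processorders", "processorders-poison" };

// Wire the telemetry client up to the instrumentation key configured for the function app.
private static string key = TelemetryConfiguration.Active.InstrumentationKey = System.Environment.GetEnvironmentVariable("APPINSIGHTS_INSTRUMENTATIONKEY", EnvironmentVariableTarget.Process);
private static TelemetryClient telemetry = new TelemetryClient() { InstrumentationKey = key };

public static void Run(TimerInfo myTimer, TraceWriter log)
{
    var connectionString = System.Configuration.ConfigurationManager.AppSettings["AzureWebJobsStorage"];
    var storageAccount = CloudStorageAccount.Parse(connectionString);
    var queueClient = storageAccount.CreateCloudQueueClient();

    foreach (var queueName in queueNames)
    {
        var queue = queueClient.GetQueueReference(queueName);
        queue.FetchAttributes();
        var length = queue.ApproximateMessageCount;

        // Publish the queue length to Application Insights as a custom metric.
        telemetry.TrackMetric($"Queue length - {queueName}", (double)length);

        log.Info($"{queueName}: {length}");
    }
}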

The key new parts that we have just added are:

private static string key = TelemetryConfiguration.Active.InstrumentationKey = System.Environment.GetEnvironmentVariable("APPINSIGHTS_INSTRUMENTATIONKEY", EnvironmentVariableTarget.Process);
private static TelemetryClient telemetry = new TelemetryClient() { InstrumentationKey = key };

This sets up an Application Insights TelemetryClient instance that we can use to publish our metrics. Note that in order for Application Insights to route the metrics to the right place, it needs an instrumentation key. Azure Functions’ built-in integration with Application Insights means that we can simply reference the instrumentation key it has set up for us. If you did not set up Application Insights at the same time as the function app, you can configure this separately and set your instrumentation key here.

Now we can publish our custom metrics into Application Insights. Note that Application Insights has several different types of custom diagnostics that can be tracked. In this case, we use metrics since they provide the ability to track numerical values over time, and set up alerts as appropriate. We have added the following line to our foreach loop, which publishes the queue length into Application Insights.

telemetry.TrackMetric($"Queue length - {queueName}", (double)length);

Click Save and run. Once the function executes successfully, wait a few minutes before continuing with the next step – Application Insights takes a little time (usually less than five minutes) to ingest new data.

Exploring Metrics in Application Insights

In the Azure Portal, open your Application Insights account and click Metrics Explorer in the menu. Then click the + Add chart button, and expand the Custom metric collection. You should see the new queue length metrics listed.

4-AppInsights

Select the metrics, and as you do, notice that a new chart is added to the main panel showing the queue length count. In our case, we can see the processorders queue count fluctuate between one and five messages, while the processorders-poison queue stays empty. Set the Aggregation property to Max to better see how the queue fluctuates.

5-AppInsightsChart

You may also want to click the Time range button in the main panel, and set it to Last 30 minutes, to fully see how the queue length changes during your testing.

Setting up Alerts

Now that we can see our metrics appearing in Application Insights, we can set up alerts to notify us whenever a queue exceeds some length we specify. For our processorders queue, this might be 10 messages – we know from our operations history that if we have more than ten messages waiting to be processed, our WebJob isn’t processing them fast enough. For our processorders-poison queue, though, we never want to have any messages appear in this queue, so we can have an alert if more than zero messages are on the queue. Your thresholds may differ for your application, depending on your requirements.

Click the Alert rules button in the main panel. Azure Monitor opens. Click the + Add metric alert button. (You may need to select the Application Insights account in the Resource drop-down list if this is disabled.)

On the Add rule pane, set the values as follows:

  • Name: Use a descriptive name here, such as `QueueProcessOrdersLength`.
  • Metric: Select the appropriate `Queue length – queuename` metric.
  • Condition: Set this to the value and time period you require. In our case, I have set the rule to `Greater than 10 over the last 5 minutes`.
  • Notify via: Specify how you want to be notified of the alert. Azure Monitor can send emails, call a webhook URL, or even start a Logic App. In our case, I have opted to receive an email.

Click OK to save the rule.

6-Alert.PNG

If the queue count exceeds your specified limit, you will receive an email shortly afterwards with details:

7-Alert

Summary

Monitoring the length of your queues is a critical part of operating a modern application, and getting alerted when a queue is becoming excessively long can help to identify application failures early, and thereby avoid downtime and SLA violations. Even though Azure Storage doesn’t provide a built-in monitoring mechanism, one can easily be created using Azure Functions, Application Insights, and Azure Monitor.

Receive Push Notifications from Microsoft Identity Manager on your Mobile/Tablet/Computer

Background

Recently, in a FIM/MIM environment, a daily automated process was executing, but the task it was performing depended on an upstream process that generates a feed, and the schedule for that feed had changed (without notice to me). Needless to say, FIM/MIM wasn't getting the information it needed to process. This got me thinking about notifications.

If you’re anything like me you probably have numerous email accounts and your subconscious has all but programmed itself to ignore “new email” notifications. However Push Notifications I typically do notice. Whilst in the example above I did have some error handling in place if the process completely failed (it is a development environment), I didn’t have anything for partial failures. Anyway it did get me thinking that I’d like to receive a notification if something that should happen didn’t.

Overview

This post details using push notifications to advise when expected events don't transpire. In this particular example, I have an Azure Function App that connects once a day to an FTP server, retrieves a series of exports and puts them on my FIM/MIM Synchronisation Server. The push notification service I am using is Push Bullet. Free Push Bullet accounts (without a Pro subscription) are limited to 500 pushes per month. That should be more than enough; if I've got errors in excess of 500 per month I've got much bigger problems.

Getting Started

First up you will need to sign up for Push Bullet. It is very straightforward if you have a Facebook or Google account. As you probably want multiple people to receive the notifications, it would pay to set up a shared Google account that your team can use to connect their devices with. Now that you have an account, head to your new Account Settings page and create an Access Token. Record it for use in the scripts below.

Connecting to the API

Test that you can access the Push Bullet API using your Access Token and PowerShell. Update the following script with your Access Token (line 3) and execute it. You should see information returned that is associated with your new Push Bullet account.
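A minimal sketch of such a script is shown below; in this version the Access Token happens to sit on line 3, and the token value is obviously a placeholder:

# Push Bullet API base URI and Access Token.
$apiURI = "https://api.pushbullet.com/"
$accessToken = "o.xxxxxxxxxxxxxxxxxxxxxxxx"

$header = @{ "Access-Token" = $accessToken }

# Returns the details of the account associated with the Access Token.
Invoke-RestMethod -Method Get -Headers $header -Uri ($apiURI + "v2/users/me")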

Next you will want to install the Push Bullet App on the device(s) you want to get the notification(s) on. I installed it on my Apple iPhone and also installed the Chrome Browser extension.

Using PowerShell we can then query to get the devices connected to the account. In the same PowerShell session you tested the API with above, run this API call:

$devices = Invoke-RestMethod -Method Get -Headers $header -Uri ($apiURI +"v2/devices")
$devices

This will return your registered devices.

Devices

If we want a notification to target a particular device we need to provide the Iden value associated with that device. If we don't specify a target, the push notification will hit all devices. In my example above, with two devices registered, my iPhone was device two. So I could get the target Iden with:

$iphoneIden = $devices.devices[1].iden

Push Bullet allows for different notification types (Note, Link and File). Note is the one that I'll be using. More info on the other types is available here.

Sending Test Notification

To perform a notification test, update the following script for your Access Token (line 3). I’ve omitted the Device Identifier to send the message to all devices. I also had to logout of the iOS Push Bullet App and log in again to get the notifications to show.
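A sketch of a test push (a Note type) might look like this; again the token is a placeholder, and adding device_iden to the body would target a single device:

# Push Bullet API base URI and Access Token.
$apiURI = "https://api.pushbullet.com/"
$accessToken = "o.xxxxxxxxxxxxxxxxxxxxxxxx"

$header = @{ "Access-Token" = $accessToken }

$body = @{
    type  = "note"
    title = "FIM/MIM Notification Test"
    body  = "This is a test push notification from PowerShell."
} | ConvertTo-Json

Invoke-RestMethod -Method Post -Headers $header -Uri ($apiURI + "v2/pushes") -Body $body -ContentType "application/json"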

Success. I received the notification on my iPhone and also in my Chrome browser.

IMG-5194

Implementation

Getting back to my requirement of being notified when a job didn't find what it expected, I updated my PowerShell Function App that is based off this blog post here to evaluate what it processed and, if it didn't find what was expected, send me a notification. I already had some error handling in my implementation based off that blog post, but it was based on full failure, not partial failure (which is what I was experiencing, whereby only one part of the process wasn't returning data).

NOTE: I also had to add the ServerCertificateValidationCallback line into my Function App script before calling the API POST to send the notification, as I was getting the following dreaded PowerShell Invoke-RestMethod / Invoke-WebRequest error when sending the notification via the Function App. I didn't get that error on my dev workstation, which is a bit weird.

Invoke-WebRequest : The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure 
channel.

If you also receive the error above (or you will be sending Push Notifications via Azure Function Apps), insert this line before your Invoke-RestMethod call.

 [System.Net.ServicePointManager]::ServerCertificateValidationCallback = {$true}

Summary

Essentially this is my first foray into enabling anything for Push Notifications and this post is food for thought on what can be easily enabled within FIM/MIM to give timely visibility to automated scheduled functions when they don’t perform as expected. It was incredibly simple to set up and get working. I see myself enabling more FIM/MIM functions with Push Notifications in the future.

Is Your Serverless Application Testable? – Azure Logic Apps

I’ve talked about testable Azure Functions in my previous post. In this post, I’m going to introduce building testable Azure Logic Apps.

Sample codes used in this post can be found here.

User Story

  • As a DevOps engineer,
  • I want to search ARM templates using Azure Logic Apps
  • So that I can browse the search result.

Workflow in Logic Apps

Based on the user story above, the basic workflow in Logic Apps might look like:

  1. An HTTP request triggers the workflow by hitting the Request trigger.
  2. The first action, HTTP in this picture, calls the Azure Function written in the previous post.
  3. If this action returns a response with an HTTP status code greater or equal to 400, the workflow takes the path at the left-hand side, executes the ErrorResponse action and terminates.
  4. If this action returns a response with an HTTP status code less than 400, the workflow takes the path at the right-hand side and executes the Parse JSON action followed by the Condition action.
  5. If the condition returns true, the workflow runs the OkResponse action; otherwise it runs the NotFoundResponse action, and terminates.

Therefore, we can identify that the HTTP action returns three possible scenarios – Error, Success with no result and Success with result. Let’s keep this in mind.

In fact, the workflow is a visual representation of this JSON object.
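A heavily condensed sketch of such a definition is shown below; the Parse JSON and Condition actions and the NotFoundResponse branch are omitted for brevity, and the Function URL is a placeholder:

{
  "definition": {
    "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
    "contentVersion": "1.0.0.0",
    "triggers": {
      "manual": { "type": "Request", "kind": "Http" }
    },
    "actions": {
      "HTTP": {
        "type": "Http",
        "inputs": {
          "method": "GET",
          "uri": "https://my-function-app.azurewebsites.net/api/arm-templates"
        }
      },
      "ErrorResponse": {
        "type": "Response",
        "runAfter": { "HTTP": [ "Failed" ] },
        "inputs": { "statusCode": 500, "body": "@body('HTTP')" }
      },
      "OkResponse": {
        "type": "Response",
        "runAfter": { "HTTP": [ "Succeeded" ] },
        "inputs": { "statusCode": 200, "body": "@body('HTTP')" }
      }
    },
    "outputs": {}
  }
}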

As a Logic App is basically a JSON object, we can easily plug it into an ARM template and deploy it to Azure so that we use the Logic App workflow straight away.

THERE IS NO CODE AT ALL.

From the unit-testing point of view, no code means we’re not able to test code. Logic Apps only work in the Azure cloud environment, i.e. we can’t run tests in our isolated local development environment. However, being unable to unit-test Logic Apps doesn’t necessarily mean we can’t test them at all.

Let’s see the picture again.

There’s a point where we can run test. Yes, it’s the HTTP action – the API call. As we’ve identified above, there are three scenarios we need to test. Let’s move on.

Manual Testing the Logic App

First of all, send a normal request that returns the result through Postman.

This time, we send a different request that returns no data so that it returns 404 response code.

Finally, we send another request that returns an error response code.

Tests complete! Did you find any issue in this test process? We actually hit the working API defined in the HTTP action. What if the actual API call causes side effects? Can we test the workflow WITHOUT hitting the actual API? We should find a way to decouple the API call from the workflow to satisfy testability. API mocking can achieve this goal.

API Mocking

I wrote a post, API Mocking for Developers. In that post, I introduced how to mock APIs using Azure API Management. This is one way of mocking APIs. Even though it’s very easy to use with a Swagger definition, it has a cost issue: Azure API Management is fairly expensive for mocking. What if we use Azure Functions for mocking instead? It’s virtually free unless functions are called more than one million times per month.

Therefore, based on the three scenarios identified above, we can easily write Function code that returns dummy responses, like below:
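A sketch of one such mock, for the success-with-result scenario, is shown below; the 404 and error mocks follow the same shape with different status codes and payloads:

using System.Net;

public static HttpResponseMessage Run(HttpRequestMessage req, TraceWriter log)
{
    // Dummy payload representing a successful search result.
    var result = new
    {
        items = new[] { new { name = "sample-arm-template-directory" } }
    };

    return req.CreateResponse(HttpStatusCode.OK, result);
}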

It might seem cumbersome because we need to write code to mock the APIs, so it’s totally up to you whether to use Azure API Management or Azure Functions for mocking.

Now, we’ve got mocked API endpoints for testing. What’s next?

Logic Apps for Testing – Manual Test

As we identified three scenarios, we need to clone the working Logic App three times and replace the API endpoint with the mocked ones. Here are the three Logic Apps cloned from the working one.

Each Logic App needs to be called through Postman to check the expected result. In order to pass the test, the first one should return error response, the second one should return 404 and the last one should return 200. Here’s the high level diagram of testing Logic Apps based on the expected scenarios.

We’re now able to test Logic Apps with mocked APIs. However, it’s not enough. We need to automate this test to integrate a CI pipeline. What can we do further?

Logic Apps for Testing – Automated Test

If we write a PowerShell script (or any other script) to call those three testing Logic Apps, we can call it within the CI pipeline.

Here’s the high level diagram for test automation.

During the build process, Jenkins, VSTS or another build automation server calls the PowerShell script. The script runs three API requests and checks their responses. If all requests return the expected responses, we consider the test to have succeeded. If any request returns a response different from what is expected, the test is considered a failure.
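A minimal PowerShell sketch of such a script might look like this; the callback URLs are placeholders for the three test Logic Apps:

$testCases = @(
    @{ Name = "Error";    Uri = "<error-logic-app-callback-url>";    Expected = 500 },
    @{ Name = "NotFound"; Uri = "<notfound-logic-app-callback-url>"; Expected = 404 },
    @{ Name = "Ok";       Uri = "<ok-logic-app-callback-url>";       Expected = 200 }
)

$failed = $false
foreach ($testCase in $testCases) {
    try {
        $response = Invoke-WebRequest -Method Post -Uri $testCase.Uri -ContentType "application/json" -Body "{}"
        $statusCode = $response.StatusCode
    } catch {
        # Invoke-WebRequest throws on non-success status codes.
        $statusCode = $_.Exception.Response.StatusCode.value__
    }

    if ($statusCode -ne $testCase.Expected) {
        Write-Host "$($testCase.Name): expected $($testCase.Expected) but got $statusCode"
        $failed = $true
    } else {
        Write-Host "$($testCase.Name): passed"
    }
}

# A non-zero exit code fails the CI build step.
if ($failed) { exit 1 }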


So far, we’ve briefly walked through how to run tests against Logic Apps by API mocking. By decoupling API request/response from the Logic App, we can focus on the workflow itself. I hope this approach helps writing your Logic Apps workflow.

Is Your Serverless Application Testable? – Azure Functions

In this post, I’m going to introduce several design patterns for writing Azure Functions code in C# with full testability in mind.

Sample codes used in this post can be found here.

User Story

  • As a DevOps engineer,
  • I want to search ARM templates using Azure Functions
  • So that I can browse the search result.

Basic Function Code

We assume that we have a basic structure that Azure Functions can use. Here’s what a basic function code might look like:
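A hedged sketch of such a function is shown below; GitHubService and IGitHubService are hypothetical stand-ins for the service that calls the GitHub API in the sample code:

using System.Collections.Generic;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs.Host;

public interface IGitHubService
{
    Task<List<string>> GetArmTemplateDirectoriesAsync(string query = null);
}

// Hypothetical implementation; the real one would call the GitHub API.
public class GitHubService : IGitHubService
{
    public Task<List<string>> GetArmTemplateDirectoriesAsync(string query = null)
    {
        return Task.FromResult(new List<string> { "sample-arm-template-directory" });
    }
}

public static class GetArmTemplateDirectoriesHttpTrigger
{
    public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
    {
        log.Info("C# HTTP trigger function processed a request.");

        var query = req.RequestUri.ParseQueryString()["q"];

        // The dependency is created inside the method - this is the testability smell.
        var service = new GitHubService();
        var directories = await service.GetArmTemplateDirectoriesAsync(query);

        return req.CreateResponse(HttpStatusCode.OK, directories);
    }
}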

As you can see, all dependencies are instantiated and used within the function method. This works perfectly fine so there’s no problem at all. However, from a test point of view, this smells A LOT. How can we refactor this code for testing?

I’m going to introduce several design patterns. They will look familiar if you’ve been working with OO programming. I’m not going to dive into them too deeply in this post, by the way.

Service Locator Pattern

As you can see in the code above, each and every Azure Functions method has the static modifier. Because of this restriction, we can’t use the normal way of dependency injection; we have to use the Service Locator Pattern instead. With this, the code above can be refactored like this:
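A sketch of the refactored trigger might look like this, assuming the CommonServiceLocator IServiceLocator interface (which the AutofacServiceLocator used later implements) and the IGitHubService defined earlier:

using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs.Host;
using Microsoft.Practices.ServiceLocation;

public static class GetArmTemplateDirectoriesHttpTrigger
{
    // Static property so that production code (or a unit test) can inject the locator.
    public static IServiceLocator ServiceLocator { get; set; }

    public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
    {
        var service = ServiceLocator.GetInstance<IGitHubService>();

        var directories = await service.GetArmTemplateDirectoriesAsync(req.RequestUri.ParseQueryString()["q"]);

        return req.CreateResponse(HttpStatusCode.OK, directories);
    }
}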

First of all, the function code declares the IServiceLocator property with the static modifier. Then, the Run() method calls the service locator to resolve an instance (or dependency). Now, the method is fully testable.

Here’s a simple unit test code for it:
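A sketch using xUnit and Moq might look like this:

using System.Collections.Generic;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Http;
using Microsoft.Practices.ServiceLocation;
using Moq;
using Xunit;

public class GetArmTemplateDirectoriesHttpTriggerTests
{
    [Fact]
    public async Task Given_Query_Run_Should_Return_OK()
    {
        // Arrange: mock the dependency and inject it through the service locator.
        var service = new Mock<IGitHubService>();
        service.Setup(p => p.GetArmTemplateDirectoriesAsync(It.IsAny<string>()))
               .ReturnsAsync(new List<string> { "sample-arm-template-directory" });

        var locator = new Mock<IServiceLocator>();
        locator.Setup(p => p.GetInstance<IGitHubService>()).Returns(service.Object);

        GetArmTemplateDirectoriesHttpTrigger.ServiceLocator = locator.Object;

        var req = new HttpRequestMessage(HttpMethod.Get, "http://localhost/api/arm-templates?q=sample");
        req.SetConfiguration(new HttpConfiguration());

        // Act
        var response = await GetArmTemplateDirectoriesHttpTrigger.Run(req, null);

        // Assert
        Assert.Equal(HttpStatusCode.OK, response.StatusCode);
    }
}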

The Mock instance is created and injected to the static property. This is the basic way of dependency injection for Azure Functions. However, there’s still an issue on the service locator pattern.

Mark Seemann argues on his blog post that the service locator pattern is an anti-pattern, because we can’t guarantee whether the dependencies that we want to use have already been registered or not. Let’s have a look at the refactored code again:

Within the Run() method, the IGitHubService instance is resolved by this line:

var service = ServiceLocator.GetInstance<IGitHubService>();

When the amount of code is relatively small, that wouldn’t be an issue. However, if the application grows quickly and becomes fairly large, can we ensure that the service variable is never null? I don’t think so. We have to use the service locator pattern, but at the same time try not to rely on it directly. That doesn’t sound quite right, though. How can we achieve this goal?

Strategy Pattern

The Strategy Pattern is one of the most popular design patterns and can be applied to all Functions code. Let’s start with the IFunction interface.

Every function now implements the IFunction interface, and all logic written in a trigger function MUST move into the InvokeAsync() method. The FunctionBase abstract class then implements the interface.
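A sketch of the interface and the base class might look like this; the exact shape in the sample repository may differ, and FunctionParameterOptions is described in the Options Pattern section below:

using System;
using System.Net.Http;
using System.Threading.Tasks;

public interface IFunction<TOptions> where TOptions : FunctionParameterOptions
{
    Task<HttpResponseMessage> InvokeAsync(HttpRequestMessage req, TOptions options);
}

public abstract class FunctionBase<TOptions> : IFunction<TOptions> where TOptions : FunctionParameterOptions
{
    // virtual so that each concrete function class overrides it with its own logic
    public virtual Task<HttpResponseMessage> InvokeAsync(HttpRequestMessage req, TOptions options)
    {
        throw new NotImplementedException();
    }
}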

While implementing the IFunction interface, the FunctionBase class adds the virtual modifier to the InvokeAsync() method so that function classes inheriting this base class can override the method. Let’s put in the real logic.
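A sketch of the concrete class might look like this, with the IGitHubService dependency injected through the constructor (as discussed in the Builder Pattern section below) and the options class from the Options Pattern section:

using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

public class GetArmTemplateDirectoriesFunction : FunctionBase<GetArmTemplateDirectoriesFunctionParameterOptions>
{
    private readonly IGitHubService _service;

    public GetArmTemplateDirectoriesFunction(IGitHubService service)
    {
        _service = service;
    }

    public override async Task<HttpResponseMessage> InvokeAsync(HttpRequestMessage req, GetArmTemplateDirectoriesFunctionParameterOptions options)
    {
        // All the logic that used to live in the trigger method now lives here.
        var directories = await _service.GetArmTemplateDirectoriesAsync(options.Query);

        return req.CreateResponse(HttpStatusCode.OK, directories);
    }
}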

The GetArmTemplateDirectoriesFunction class overrides the InvokeAsync() method and processes everything within the method. Dead simple. By the way, what is the generic type of TOptions? This is the Options Pattern that we deal with right after this section.

Options Pattern

Many API endpoints have route variables. For example, if an HTTP trigger’s endpoint URL is /api/products/{productId}, the productId value keeps changing and can be handled by the function method like Run(HttpRequestMessage req, int productId, ILogger log). In other words, route variables are passed through parameters of the Run() method. Of course, each function has a different number of route variables. If this is the case, we can pass those route variables, productId for example, through an options instance, and the code will be much simpler.

There is an abstract class, FunctionParameterOptions. The GetArmTemplateDirectoriesFunctionParameterOptions class inherits it and contains a Query property. This property can be used to store a querystring value for now. The InvokeAsync() method now only cares about the options instance, instead of individual route variables.
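A sketch of the two options classes:

public abstract class FunctionParameterOptions
{
}

public class GetArmTemplateDirectoriesFunctionParameterOptions : FunctionParameterOptions
{
    // Holds the querystring value passed to the function trigger.
    public string Query { get; set; }
}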

Builder Pattern

Let’s have a look at the function code again. The IGitHubService instance is injected through a constructor.

Those dependencies should be managed somewhere else, like in an IoC container. Autofac and other IoC container libraries are good examples, and they need to be combined with a service locator. If we introduce a Builder Pattern, this can be easily accomplished. Here’s some sample code:
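A sketch of such a builder, together with an illustrative Autofac module, might look like this:

using Autofac;
using Autofac.Extras.CommonServiceLocator;
using Microsoft.Practices.ServiceLocation;

public class ServiceLocatorBuilder
{
    private readonly ContainerBuilder _builder = new ContainerBuilder();

    public ServiceLocatorBuilder RegisterModule<TModule>() where TModule : Module, new()
    {
        _builder.RegisterModule<TModule>();
        return this;
    }

    public IServiceLocator Build()
    {
        var container = _builder.Build();
        return new AutofacServiceLocator(container);
    }
}

// Illustrative module registering the dependencies used in this post.
public class AppModule : Module
{
    protected override void Load(ContainerBuilder builder)
    {
        builder.RegisterType<GitHubService>().As<IGitHubService>();
        builder.RegisterType<GetArmTemplateDirectoriesFunction>().AsSelf();
    }
}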

With Autofac, all dependencies are registered within the RegisterModule() method and the container is wrapped with AutofacServiceLocator and exported to IServiceLocator within the Build() method.

Factory Method Pattern

All good now. Let’s combine everything, using the Factory Method Pattern.
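A sketch of the factory might look like this:

using Microsoft.Practices.ServiceLocation;

public class FunctionFactory
{
    private readonly IServiceLocator _locator;

    public FunctionFactory()
    {
        // All dependencies, including the function classes themselves, are registered here.
        _locator = new ServiceLocatorBuilder()
                       .RegisterModule<AppModule>()
                       .Build();
    }

    public TFunction Create<TFunction>() where TFunction : class
    {
        // The service locator is encapsulated; callers only ever see the factory.
        return _locator.GetInstance<TFunction>();
    }
}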

Within the constructor, all dependencies are registered by the ServiceLocatorBuilder instance. This includes all function classes that implement the IFunction interface. FunctionFactory has the Create() method, which encapsulates the service locator.
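With the factory in place, the trigger can be refactored one final time; a sketch might look like this:

using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs.Host;

public static class GetArmTemplateDirectoriesHttpTrigger
{
    public static FunctionFactory Factory { get; set; } = new FunctionFactory();

    public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
    {
        var options = new GetArmTemplateDirectoriesFunctionParameterOptions
        {
            Query = req.RequestUri.ParseQueryString()["q"]
        };

        // Fluent style: resolve the function class, then invoke it.
        return await Factory.Create<GetArmTemplateDirectoriesFunction>()
                            .InvokeAsync(req, options);
    }
}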

The function has now been refactored again. The IServiceLocator property is replaced with the FunctionFactory property. Then, the Run() method calls the Create() method and the InvokeAsync() method consecutively in a fluent style. At the same time, GetArmTemplateDirectoriesFunctionParameterOptions is instantiated with the querystring value and passed into the InvokeAsync() method.

Now all function triggers and their dependencies are fully decoupled and unit-testable. We have also encapsulated the service locator so that we no longer need to care about it. From now on, code coverage can go up to 100%, if TDD is properly applied.


So far, I’ve briefly introduced several useful design patterns for testing Azure Functions code. Those patterns are useful in everyday development. This might not be the perfect solution, but it is one of the best practices I’ve found so far. If you have a better approach, please send a PR to the GitHub repository.

Azure Logic Apps will be touched in the next post.

Azure Functions Logging to Application Insights

We’re going to have a look at several ways to integrate Application Insights (AppInsights) with Azure Functions (Functions).

Functions supports built-in logging using a TraceWriter instance. A basic sample function might look like:
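A basic sample, close to the default HTTP trigger template, looks like this:

using System.Net;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    log.Info("C# HTTP trigger function processed a request.");

    // parse query parameter
    string name = req.GetQueryNameValuePairs()
        .FirstOrDefault(q => string.Compare(q.Key, "name", true) == 0)
        .Value;

    return name == null
        ? req.CreateResponse(HttpStatusCode.BadRequest, "Please pass a name on the query string")
        : req.CreateResponse(HttpStatusCode.OK, "Hello " + name);
}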

With TraceWriter, we can log information to the log console like:

However, it has a maximum limit of 1,000 records. This is good for simple debugging purposes, but not for logging. Therefore, we should store logs somewhere else, like a database or storage. Fortunately, AppInsights has recently been integrated with Functions as a preview to overcome these limitations. Let’s have a look.

Application Insights Integration

According to the document, it’s really easy.

  1. Create an AppInsights instance. Its type MUST be General.
  2. Add a new key of APPINSIGHTS_INSTRUMENTATIONKEY to the AppSettings section of the Function instance.
  3. Give the Instrumentation Key value of the AppInsights instance to the key, APPINSIGHTS_INSTRUMENTATIONKEY.

This is it. Once it’s done, simply execute some functions and wait for the aggregated result up to 5 minutes. Then, go to the AppInsights blade and find the graph looking like:

Can’t be easier, huh?

ARM Template Setup for DevOps Engineers

We can add the Instrumentation Key manually like above. However, this is not ideal from the CI/CD point of view. Instead, setting the key within an ARM template is preferable, more effective and more efficient. Here’s a cut-down version of a sample ARM template:
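A hedged, cut-down sketch might look like the following; resource names, variables and API versions are illustrative:

{
  "type": "Microsoft.Web/sites",
  "name": "[variables('functionAppName')]",
  "apiVersion": "2016-08-01",
  "location": "[resourceGroup().location]",
  "kind": "functionapp",
  "dependsOn": [
    "[resourceId('Microsoft.Insights/components', variables('appInsightsName'))]"
  ],
  "properties": {
    "siteConfig": {
      "appSettings": [
        {
          "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
          "value": "[reference(resourceId('Microsoft.Insights/components', variables('appInsightsName')), '2014-04-01').InstrumentationKey]"
        }
      ]
    }
  }
}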

As we can see above, we can inject the Instrumentation Key directly within the ARM template without ever knowing its value. If we want to know more about ARM templates, this official document is a good starting point.

ILogger Integration

ILogger is a logging abstraction from ASP.NET Core. As it supports .NET Standard 1.1, Functions has recently introduced ILogger version 1.1.1. With this, we can plug in virtually any number of logging libraries. Functions provides AppInsights logging through this interface. In other words, we can simply replace the TraceWriter instance with an ILogger one in order to send all logging to AppInsights.

This is also really easy. Simply replace TraceWriter with ILogger in the Function parameter and change the method name from log.Info() to log.LogInformation():
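A sketch of the same function using ILogger:

using System.Net;
using Microsoft.Extensions.Logging;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, ILogger log)
{
    log.LogInformation("C# HTTP trigger function processed a request.");

    string name = req.GetQueryNameValuePairs()
        .FirstOrDefault(q => string.Compare(q.Key, "name", true) == 0)
        .Value;

    return name == null
        ? req.CreateResponse(HttpStatusCode.BadRequest, "Please pass a name on the query string")
        : req.CreateResponse(HttpStatusCode.OK, "Hello " + name);
}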

If we still want to keep the log.Info() method name, that’s fine. Simply create an extension method like:
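A minimal sketch of such an extension method:

using Microsoft.Extensions.Logging;

public static class LoggerExtensions
{
    // Keeps the familiar TraceWriter-style method name while logging through ILogger.
    public static void Info(this ILogger logger, string message)
    {
        logger.LogInformation(message);
    }
}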

And use the extension method like below:
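With the extension method above in place, the call site stays the same as before:

log.Info("C# HTTP trigger function processed a request.");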

Once everything is done, deploy the Function again and run Functions several times. Then check out the Function log console:

We have the exactly same experience as before. Of course, we can see additional logging details on the AppInsights blade:

As mentioned above, Functions has implemented the ILogger interface. That means we might be able to add third-party logging libraries such as Serilog. Unfortunately, at the time of writing, we can’t use those third-party ones. But the Azure Functions Team has started looking at those implementations, according to this issue. Hopefully this feature is released soon.

Another known issue around the ILogger implementation on Functions is that the Azure Functions Core Tools that helps local debugging doesn’t display log to the console. So, don’t panic, even if no log is displayed on your local console. It displays as expected when you deploy your Functions to Azure. The Azure Functions Team works really hard to fix the issue sooner rather than later.

So far, we have walked through a few ways to integrate AppInsights with Azure Functions. As it’s still in preview, features around this may change over time until GA. But I’m pretty sure that this logging integration to AppInsights will be quite useful and powerful.

Integration of Microsoft Identity Manager with Azure Platform-as-a-Service Services

Overview

This isn’t an out of the box solution. This is a bespoke solution that takes a number of elements and puts them together in a unique way. I’m not expecting anyone to implement this specific solution (but you’re more than welcome to) but to take inspiration from it to implement solutions relevant to your environment(s). This post supports a presentation I did to The MIM Team User Group on 14 June 2017.

This post describes a solution that;

  • Leverages an Azure WebApp (NodeJS) to present a simple website. That site can be integrated easily in the FIM/MIM Portal
  • The NodeJS website leverages an Azure Function App to get a list of users from the FIM/MIM Synchronization Server and allows the user to use typeahead functionality to find the user they want to generate a FIM/MIM object report on
  • On selection of a user, a request will be sent to another Azure Function App to generate and return the report to the user in a new browser window

This is shown graphically below.

 

Report Request UI

The NodeJS WebApp is integrated into the FIM/MIM portal. Bootstrap Typeahead is used to find the user to generate a report on. The Typeahead user list is fulfilled by an Azure Function that queries the MIM Sync Metaverse. The Generate Report button fires off a call to FIM/MIM, via another Azure Function into the MIM Sync and MIM Service, to generate the report.

The returned report opens in a new tab in the user’s browser. The report contains details of the FIM/MIM connectors the user is represented on.

The values of all attributes for the user’s hologram from the Metaverse are displayed, along with the MA the value came from and the last modified date.

Finally, the metadata report shows data from the MIM Service MA Connector Space and the MIM Service.

Prerequisites

These are numerous, but I’ve previously posted about them. You will need:

I encourage you to digest those posts to understand how to configure the prerequisites for this solution.

Additional Solution Requirements

To bring all the individual components together, there are a few additional tasks to enable this solution.

  • Enable CORS on your Azure Function App Configuration (see details further below)
  • If you want to display User Object Photos as part of the report, you will likely need to synchronize them into FIM/MIM from an authoritative source (e.g. Office 365/Exchange Online). Check out this post and the additional details further below
  • In order to embed the NodeJS WebApp into the FIM/MIM Portal, this post provides the details. Change the target URL from PowerBI URL to your NodeJS site
  • Object Report Request WebApp (see below for sample site)

Azure Functions Cross Origin Resource Sharing (CORS)

You will need to configure CORS to allow the NodeJS WebApp to access the Azure Functions (from both local and Azure). Reflect your port number if it is different from 3000, and use the DNS name for your Azure WebApp.

Sample UI NodeJS HTML

Here is a sample HTML file for your NodeJS WebApp with the UI to provide Input for LoginID fulfilled by the NodeJS Javascript file further below.

Sample UI NodeJS JavaScript

The following NodeJS JavaScript supports the HTML UI above. It populates the LoginID typeahead box and wires up the Submit Report button to generate the report for the desired object(s). And yes, if you use the UI to select (individually) multiple different objects, all of them will be returned in their own separate output windows.
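A condensed sketch of that JavaScript is shown below; the Function URLs and element IDs are placeholders, and jQuery plus the typeahead.bundle.js library (mentioned below) are assumed to be loaded by the HTML page:

// URLs of the two Azure Functions (hypothetical values).
var userListFunctionUrl = 'https://myfunctionapp.azurewebsites.net/api/Get-UserList?code=xxxx';
var reportFunctionUrl = 'https://myfunctionapp.azurewebsites.net/api/Get-UserObjectReport?code=xxxx';

// Bloodhound suggestion engine fed from the AccountNames lookup Function.
var accountNames = new Bloodhound({
  datumTokenizer: Bloodhound.tokenizers.whitespace,
  queryTokenizer: Bloodhound.tokenizers.whitespace,
  prefetch: { url: userListFunctionUrl, cache: false }
});

$('#loginid').typeahead({ hint: true, highlight: true, minLength: 2 },
  { name: 'accountNames', source: accountNames });

// Open the generated report in a new browser tab/window.
$('#generateReport').on('click', function () {
  var login = $('#loginid').typeahead('val');
  window.open(reportFunctionUrl + '&login=' + encodeURIComponent(login), '_blank');
});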

As the HTML file above indicates you will need to obtain and make available as part of your NodeJS project the typeahead.bundle.js library.

Azure PowerShell Trigger Function App for AccountNames Lookup

The following Azure Function takes the call from the load of the NodeJS WebApp to populate the typeahead userlist.
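A heavily hedged sketch of the shape of such a function (PowerShell HTTP trigger, v1 model) is below; the Metaverse query itself is a placeholder for a Lithnet MIIS Automation call run remotely on the MIM Sync server, and the environment variable names are hypothetical:

# $res is provided by the Functions runtime; credentials come from app settings (hypothetical names).
$securePassword = ConvertTo-SecureString $env:MIMSyncPassword -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential ($env:MIMSyncUser, $securePassword)

# Query the MIM Sync server remotely for the list of account names.
$accountNames = Invoke-Command -ComputerName $env:MIMSyncServer -Credential $credential -ScriptBlock {
    Import-Module LithnetMiisAutomation
    # ... search the Metaverse here and return the AccountName values ...
}

# Return the list as JSON for the typeahead control.
$accountNames | ConvertTo-Json | Out-File -Encoding Ascii -FilePath $res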

Azure PowerShell Trigger Function App for User Object Report

Similar in structure to the Username List Lookup Azure Function above, but in the ScriptBlock you embed the Report Generation Script that is detailed here. Modify for what you want to report on.

Photos in the Report

If you want to display images in your report, you will need to determine if the user has an image during the MV metadata report generation part of the script. Add the following lines (updating for the name of your Image attribute; mine is named EXOPhoto) after the Try {} Catch {} in this section $obj = @() ; foreach ($attr in $attributes.Keys)

 # Display the Objects Photo rather than Base64 string 
if ($attr.equals("EXOPhoto")){ 
   $objectphoto = "<img src=$([char]0x22)data:image/jpeg;base64,$($attributes.$attr.Values.Valuestring)$([char]0x22)>" 
   $val = "System.Byte[]" 
}

Then in the output of the HTML report at the end of the report generation insert the $objectphoto variable into the HTML stream.

# Output MIM Service Object Data 
$MIMServiceObjOut = $MIMServiceObjectMetaData | Sort-Object -Property Attribute | ConvertTo-Html -Fragment 
$htmlreport = ConvertTo-HTML -Body "$htmlcss<h1>Microsoft Identity Manager User Object Report</h1><h2>Query</h2>$sourcequery</br><b><center>$objectphoto</br>NOTE: Only attributes with values are displayed.</center></b><h2>Connector(s) Summary</h2>$connectorsummary<h2>MetaVerse Data</h2>$objectmetadata <h2>MIM Service CS Object Data</h2>$MIMServiceCSobjectmetadata <h2>MIM Service Object Data</h2>$MIMServiceObjOut" -Title "MIM Object Report" 

 

As you can see above I’ve also injected the CSS ($htmlcss) into the output stream at the beginning of the Body section.  Somewhere in your script block you will need to define your CSS values. e.g.

 # StyleSheet for nice pretty output 
$htmlcss = "<style> 
   h1, h2, th { text-align: center; } 
   table { margin: auto; font-family: Segoe UI; box-shadow: 10px 10px 5px #888; border: thin ridge grey; } 
   th { background: #0046c3; color: #fff; max-width: 400px; padding: 5px 10px; } 
   td { font-size: 11px; padding: 5px 20px; color: #000; } 
   tr { background: #b8d1f3; } 
   tr:nth-child(even) { background: #dae5f4; } 
   tr:nth-child(odd) { background: #b8d1f3; } 
</style>"

Summary

An interesting solution integrating Azure PaaS Services with Microsoft Identity Manager via PowerShell and the extremely versatile Lithnet FIM/MIM PowerShell Modules.

Please share your implementations enhancing your FIM/MIM Solution.

Azure Functions with Swagger

The Azure Functions Team has recently announced Swagger support as a preview. If we use Azure Functions as APIs, this will be very useful. In this post, we will have a look at how to enable Swagger support on Azure Functions.

Sample codes used for this post can be found here.

Sample Azure Functions Instance

First of all, with the sample code provided, we’re creating two HTTP triggers, CreateProduct and GetProduct. Once we deploy them, we can find them in the Azure Portal:

Here are simple requests and responses through Postman:

Let’s create a Swagger definition document for those Functions.

Auto-Generate Swagger Definition

If we have at least a Function endpoint in our Function instance, we can automatically generate a Swagger definition in YAML format. Click the API definition (preview) tab.

As a default, the External URL button is chosen. Click the Functions button right next to it.

Because we have never generated the Swagger definition, it shows an error screen.

Now, click the Generate API definition template button so that the document is automatically generated.

Now we’ve got the Swagger definition doco. The actual document generated looks like:

However, there are at least three missing gaps that we have to fill in:

  • definitions: There’s no request/response model definition. We have to fill in.
  • produces/consumes: There’s no document type defined. In general, as JSON format is the most popular for REST API, we can simply add application/json here.
  • securityDefinitions: Azure Functions use either code in the querystring or x-functions-key in the request header for processing. The auto-generated template only defines the code one, not the other one. So we have to define the other x-functions-key.

Here’s the updated Swagger definition including the missing ones:
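A condensed, illustrative sketch of what the completed definition might contain is below; the host, path and model are placeholders, but it shows the definitions, produces/consumes and the two securityDefinitions entries:

swagger: '2.0'
info:
  title: myfunctionapp.azurewebsites.net
  version: 1.0.0
host: myfunctionapp.azurewebsites.net
basePath: /
schemes:
  - https
consumes:
  - application/json
produces:
  - application/json
paths:
  /api/GetProduct:
    post:
      operationId: GetProduct
      parameters:
        - name: product
          in: body
          required: true
          schema:
            $ref: '#/definitions/Product'
      responses:
        '200':
          description: A product is returned
          schema:
            $ref: '#/definitions/Product'
      security:
        - apikeyQuery: []
        - apikeyHeader: []
definitions:
  Product:
    type: object
    properties:
      id:
        type: integer
      name:
        type: string
securityDefinitions:
  apikeyQuery:
    type: apiKey
    name: code
    in: query
  apikeyHeader:
    type: apiKey
    name: x-functions-key
    in: header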

Once we update the Swagger definition, we can test the API right away by providing a function key code and a payload. Easy, huh? Also, the address specified in the middle of the picture, https://xxxx.azurewebsites.net/admin/host/swagger?code=xxxx, allows us to access the Swagger definition document in JSON format. The Azure Functions instance automatically converts the YAML document to JSON.

It seems to be very easy. However, there is a critical point we have to bear in mind. The Function instance must have at least one function endpoint so that the Swagger definition can be auto-generated. In other words, an API design-first approach is not applicable.

Now, we’ve got a question. If we only have the Swagger definition document, not the actual implementation, what can we do with Azure Functions? Why not render the Swagger document directly from Azure Function code?

Render Swagger Definition via Azure Functions

Here’s the deal. We basically create an Azure Function that reads the Swagger definition and renders it as a response. The following function code will give us a brief idea:
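A sketch of such a function is shown below; it assumes YamlDotNet is referenced through project.json, and the exact YamlDotNet API may vary slightly between versions:

using System;
using System.IO;
using System.Net;
using System.Net.Http;
using System.Text;
using YamlDotNet.Serialization;

public static HttpResponseMessage Run(HttpRequestMessage req, TraceWriter log)
{
    // WEBROOT_PATH is an app setting pointing to D:\home\site\wwwroot.
    var wwwroot = Environment.GetEnvironmentVariable("WEBROOT_PATH");
    var yaml = File.ReadAllText(Path.Combine(wwwroot, "swagger-v1.yaml"));

    // Convert the YAML definition to JSON.
    var deserializer = new DeserializerBuilder().Build();
    var yamlObject = deserializer.Deserialize(new StringReader(yaml));

    var serializer = new SerializerBuilder().JsonCompatible().Build();
    string json;
    using (var writer = new StringWriter())
    {
        serializer.Serialize(writer, yamlObject);
        json = writer.ToString();
    }

    var response = req.CreateResponse(HttpStatusCode.OK);
    response.Content = new StringContent(json, Encoding.UTF8, "application/json");
    return response;
}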

The Function instance contains a swagger-v1.yaml file in its root level. When we see the code above, firstly it reads the file. In order to read the file, we have to set a value to represent the root path, called WEBROOT_PATH (or whatever) in the AppSettings section. Its value will be D:\home\site\wwwroot, which never changes unless Azure App Service changes it. There are two ways to read the settings value:

  1. var wwwroot = Environment.GetEnvironmentVariable("WEBROOT_PATH");
  2. var wwwroot = ConfigurationManager.AppSettings["WEBROOT_PATH"];

Either is fine to read the settings value. If we omit this settings value, Azure Functions basically assumes that the file is located at C:\Windows\System32, which will cause an unexpected result.

According to this document, at the time of this writing, we can pass the Microsoft.Azure.WebJobs.ExecutionContext instance as a function parameter so that we can handle the file path a little more easily. However, as it’s not fully rolled out yet, nor do the dev tools have that feature yet, we should wait until it’s fully rolled out.

The code above then reads the YAML file, converts it to JSON and renders it. We can now see the Swagger definition through a web browser:

So far, we have briefly looked at how to enable Swagger support in Azure Functions in two different ways. Both of course have pros and cons. The first one might be the easiest option but needs more manual work. Also, if we want to access the Swagger definition with the first option, we have to use a different access code. This is a bit critical because we have to manage at least two different keys – one for Functions and the other for Swagger, which is not ideal. On the other hand, the second option needs another Function code but can be handled by the same host key that is accessible to the other Function codes. Therefore, from the management point of view, the second option might be better. It’s still in preview, so we hope that the GA version of the Swagger support will be better than now.

Message retry patterns in Azure Functions

Azure Functions provide ServiceBus based trigger bindings that allow us to process messages dropped onto a SB queue or delivered to a SB subscription. In this post we’ll walk through creating an Azure Function using a ServiceBus trigger that implements a configurable message retry pattern.

Note: This post is not an introduction to Azure Functions nor an introduction to ServiceBus. For those not familiar with these Azure services, take a look at the Azure Documentation Centre.

Let’s start by creating a simple C# function that listens for messages delivered to a SB subscription.

create azure function

Azure Functions provide a number of ways we can receive the message into our functions, but for the purpose of this post we’ll use the BrokeredMessage type as we will want access to the message properties. See the above link for further options for receiving messages into Azure Functions via the ServiceBus trigger binding.

To use BrokeredMessage we’ll need to import the Microsoft.ServiceBus assembly and change the input type to BrokeredMessage.
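A minimal sketch of the updated function:

#r "Microsoft.ServiceBus"

using System;
using Microsoft.ServiceBus.Messaging;

public static void Run(BrokeredMessage message, TraceWriter log)
{
    log.Info($"C# ServiceBus trigger function processed message: {message.MessageId}");
}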

If we did nothing else, our function would receive the message from SB, log its message ID and remove it from the queue. Actually, the SB trigger peeks the message off the queue, acquiring a peek lock in the process. If the function executes successfully, the message is removed from the queue. So what happens when things go pear shaped? Let’s add a pear and observe what happens.

Note: To send messages to the SB topic, I use Paolo Salvatori’s ServiceBus Explorer. This tool allows us to view queue and message properties mentioned in this post.

retry by default

Notice the function being triggered multiple times. This will continue until the SB queue’s MaxDeliveryCount is exceeded. By default, SB queues and topics have a MaxDeliveryCount of 10. Let’s output the delivery count in our function using a message property on the BrokeredMessage class so we can observe this in action.
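For example, something like the following line inside the function would do it:

log.Info($"Delivery attempt: {message.DeliveryCount} for message: {message.MessageId}");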

outputing delivery count

From the logs we see the message was retried 10 times until the maximum number of deliveries was reached and ServiceBus expired the message, sending it to the dead letter queue. “Ah hah!”, I hear you say. Implement message retries by configuring the MaxDeliveryCount property on the SB queue or subscription. Well, that will work as a simple, static retry policy but quite often we need a more configurable, dynamic approach. One based on the message context or type of exception caught by the processing logic.

Typical use cases include handling business errors (e.g. message validation errors, downstream processing errors etc.) vs transport errors (e.g. downstream service unavailable, request timeouts etc.). When handling business errors we may elect not to retry the failed message and instead move it to the dead letter queue. When handling transport errors we may wish to treat transient failures (e.g. database connections) and protocol errors (e.g. 503 service unavailable) differently. Transient failures we may wish to retry a few times over a short period of time, whereas for protocol errors we might want to keep trying the service over an extended period.

To implement this capability we’ll create a shared function that determines the appropriate retry policy based on the context of the exception thrown. It will then check the number of retry attempts against the maximum defined by the policy. If the retry attempts have been exceeded, the message is moved to the dead letter queue; otherwise the function waits for the duration defined by the policy before re-throwing the original exception.
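A hedged sketch of that shared handler is shown below; the policy and exception types are hypothetical stand-ins for the ones used in the sample, and the thresholds match the policies described later (dead-letter immediately for business errors, five retries three seconds apart for protocol errors):

using System;
using System.Threading;
using Microsoft.ServiceBus.Messaging;

public enum RetryAction { DeadLetter, RetryThenDeadLetter }

public class RetryPolicy
{
    public RetryAction Action { get; set; }
    public int MaxRetries { get; set; }
    public TimeSpan RetryInterval { get; set; }
}

public class BusinessRuleException : Exception
{
    public BusinessRuleException(string message) : base(message) { }
}

// Picks a policy based on the exception, then either dead-letters the message or
// waits for the retry interval before rethrowing so ServiceBus redelivers it.
public static void HandleRetry(BrokeredMessage message, Exception ex, TraceWriter log)
{
    var policy = ex is BusinessRuleException
        ? new RetryPolicy { Action = RetryAction.DeadLetter }
        : new RetryPolicy { Action = RetryAction.RetryThenDeadLetter, MaxRetries = 5, RetryInterval = TimeSpan.FromSeconds(3) };

    if (policy.Action == RetryAction.DeadLetter || message.DeliveryCount > policy.MaxRetries)
    {
        log.Info($"Moving message {message.MessageId} to the dead letter queue.");
        message.DeadLetter("RetryPolicyExceeded", ex.Message);
        return;
    }

    log.Info($"Retry {message.DeliveryCount} of {policy.MaxRetries}; waiting {policy.RetryInterval.TotalSeconds}s before rethrowing.");
    Thread.Sleep(policy.RetryInterval);
    throw ex;
}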

Let’s change our function to throw mock exceptions and call our retry handler function to implement message retry policy.

Now our function implements some basic exception handling and checks for the appropriate retry policy to use. Let’s test that our retry policies work by throwing the different exceptions our policy supports.

Testing throwing our mock business rules exception…

test mock business exception

…we observe that the message is moved straight to the dead letter queue as per our defined policy.

Testing throwing our mock protocol exception…

test mock protocol exception

…we observe that we retry the message a total of 5 times, waiting 3 seconds between retries as per the defined policy for protocol errors.

Considerations  

  • Ensure your SB queues and subscriptions are defined with a MaxDeliveryCount greater than your maximum number of retries.
  • Ensure your SB queues and subscriptions are defined with a TTL period greater than your maximum retry interval.
  • Be aware that Consumption-based service plans have a maximum execution duration of 5 minutes. App Service Plans don’t have this constraint. However, ensure the functionTimeout setting in the host.json file is greater than your maximum retry interval.
  • Also be aware that if you are using Consumption-based plans you will still be charged for the time spent waiting for the retry interval (thread sleep).

Conclusion

In this post we have explored the behaviour of the ServiceBus trigger binding within Azure Functions and how we can implement a dynamic message retry policy. As long as you are willing to manage the deserialization of message content yourself (rather than have Azure Functions do it for you) you can gain access to the BrokeredMessage class and implement feature rich messaging solutions on the Azure platform.

Know Your Cloud Resource Costs on Azure

Organisations used to invest in IT infrastructure mostly for computers, networks or data centres. Over time, they spent their budgets on hosting space. Nowadays, in cloud environments, they mostly spend their funds on purchasing computing power. Here’s a simple diagram of the cloud computing evolution. From left to right, expenditure shifts from infrastructure to computing power.

In the cloud environment, when we need resources, we just create and use them, and when we don’t need them any longer, we just delete them. But let’s think about this. If your organisation runs dev, test and production environments in the cloud, the cost of resources running in the dev or test environments is likely to be overlooked unless carefully monitored. In this case, your organisation might receive an invoice with a massive amount on it! That has to be avoided. In this post, we are going to have a look at the Azure Billing API that was released in preview and build a simple application to monitor costs in an effective way.

The sample codes used for this post can be found here.

Azure Billing API Structure

There are two distinct APIs for Azure Billing: one is the Usage API and the other is the Rate Card API. By combining the two, we can calculate how much we spent during a particular period.

Usage API

This API is based on a subscription. Within a subscription, we can send a request to calculate how many resources we used in a specified period. Here are the parameters we can use in these requests.

  • ReportedStartTime: Starting date/time reported in the billing system.
  • ReportedEndTime: Ending date/time reported in the billing system.
  • Granularity: Either Daily or Hourly. Hourly returns a more detailed result but takes far longer to return.
  • Details: Either true or false. This determines whether usage is broken down to the instance level or not. If false is selected, all instances of the same type are aggregated.

Here’s an interesting point on the term Reported. When we USE cloud resources, that can be interpreted from two different perspectives. The term USE might mean that the resources were actually used at the specified date/time, or that the resource usage events were reported to the billing system at the specified date/time. This happens because Azure is basically a distributed system scattered all around the world, and depending on the data centre in which the resources are situated, the actual usage date/time can be reported to the billing system in a delayed manner. Therefore, even though we send requests based on the reported date/time, the responses containing usage data show the actual usage date/time.

Rate Card API

When you open a new Azure subscription, you might have noticed a code looking like MS-AZR-****P. Have you seen that code before? This is called Offer Durable ID and, based on this, different rates on resources apply. Please refer to this page to see more details about various types of offers. In order to send requests for this, we can use the following query parameters.

  • OfferDurableId: This is the offer Id. eg) MS-AZR-0017P (EA Subscription)
  • Currency: Currency that you want to look for. eg) AUD
  • Locale: Locale of your search region. eg) en-AU
  • Region: Two-letter ISO country code that you purchased this offer. eg) AU

Therefore, in order to calculate the actual spending, we need to combine these two API responses. Fortunately, there’s a good NuGet library called CodeHollow.AzureBillingApi. So we just use it to figure out Azure resource consumption costs.

Scenario

Kloud, as a cloud consulting firm, offers all consultants access to the company’s subscription without restriction so that they can create resources to develop/test scenarios for their clients. However, once resources are created, there’s a high chance that those resources are not destroyed in a timely manner, which brings about unnecessary spending. Therefore, the management team has made a decision to perform cost control on resource groups by 1) assigning resource group owners, 2) setting a total spend limit, and 3) setting a daily spend limit, using tags. By virtue of these tags, resource group owners are notified via email when cost approaches 90% of the total spend limit, and when it reaches the total spend limit. They also get notified if the cost exceeds the daily spend limit so they can take appropriate actions for their resource groups.

Sounds simple, right? Let’s code it!

Once the application is written, it should run daily to aggregate all costs, store them in a database, and send notifications to the resource group owners who meet the conditions above.

Writing Common Libraries

The common libraries consist of three parts. Firstly, they call the Azure Billing API and aggregate data by date and resource group. Secondly, they store the aggregated data in a database. Finally, they send notifications to owners whose resource groups exceed either the total spend limit or the daily spend limit.

Azure Billing API Call & Data Aggregation

CodeHollow.AzureBillingApi can reduce a huge amount of API-calling work. A simple implementation might look like:

First of all, as in the code above, we need to fetch all resource usage/cost data. Then, as shown below, that data needs to be grouped by date and resource group.
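A sketch of that grouping, using a hypothetical flattened CostItem record, might look like this:

using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical flattened record built from the Usage API and Rate Card API responses.
public class CostItem
{
    public DateTime Date { get; set; }
    public string ResourceGroupName { get; set; }
    public double Cost { get; set; }
}

public static class CostAggregator
{
    public static List<CostItem> AggregateByDateAndResourceGroup(IEnumerable<CostItem> costItems)
    {
        return costItems
            .GroupBy(c => new { c.Date.Date, c.ResourceGroupName })
            .Select(g => new CostItem
            {
                Date = g.Key.Date,
                ResourceGroupName = g.Key.ResourceGroupName,
                Cost = g.Sum(c => c.Cost)
            })
            .ToList();
    }
}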

We now have all cost related data per resource group. We then need to fetch tag values from resource groups using another API call and merge it with the data previously populated.

We can look up all resource groups in a given subscription like above, and merge this result with the cost data that we previously found, like below.

Data Storage

This is the simplest part. Just use Entity Framework and store data into the database.

So far, we’ve implemented the data aggregation part.

Notification

First of all, we need to fetch the resource groups that meet the conditions, which is not that hard to write.
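A sketch of that filter, using hypothetical aggregate and tag properties, might look like this:

using System.Collections.Generic;
using System.Linq;

// Hypothetical aggregate per resource group, with the tag values already resolved.
public class ResourceGroupCost
{
    public string ResourceGroupName { get; set; }
    public string OwnerEmail { get; set; }
    public double TotalSpendLimit { get; set; }
    public double DailySpendLimit { get; set; }
    public double TotalSpend { get; set; }
    public double DailySpend { get; set; }
}

public static class CostFilter
{
    public static List<ResourceGroupCost> GetResourceGroupsToNotify(IEnumerable<ResourceGroupCost> costs)
    {
        // 1) approaching 90% of the total spend limit, 2) exceeding the total spend limit,
        // or 3) exceeding the daily spend limit.
        return costs.Where(c => c.TotalSpend >= c.TotalSpendLimit * 0.9
                             || c.TotalSpend >= c.TotalSpendLimit
                             || c.DailySpend >= c.DailySpendLimit)
                    .ToList();
    }
}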

The code above is self-explanatory: it only returns resource groups that 1) are approaching the total spend limit, 2) have exceeded the total spend limit, or 3) have exceeded the daily spend limit. It works well, even though it looks a bit smelly.

The following code bits show how to send notifications to the resource group owners.

It only writes alerts to the screen, but we could plug in SendGrid for email notifications or Twilio for SMS alerts here.

Now we’ve got the basic application structure. How can we execute it, by the way? We might have two approaches – Azure WebJobs and Azure Functions. Let’s move on.

Monitoring Application on Azure WebJob

A console application might be the simplest way for this purpose. Once the console app is built, it can be deployed to an Azure WebJob straight away. Here’s the simple console application code.
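A sketch of that console entry point, with hypothetical service names, might look like this:

public static class Program
{
    public static void Main(string[] args)
    {
        // Aggregates cost data from the Billing API and stores it in the database.
        var aggregator = new AggregatorService();
        aggregator.ProcessAsync().GetAwaiter().GetResult();

        // Sends alerts to resource group owners whose limits have been reached.
        var reminder = new ReminderService();
        reminder.ProcessAsync().GetAwaiter().GetResult();
    }
}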

The Aggregator service collects and stores data, and the Reminder service sends alerts to resource group owners. In order to deploy this to an Azure WebJob, we need to create two extra files, run.cmd and settings.job.

  • settings.job: This contains the CRON expression for the scheduled job. For example, if this WebJob runs every night at 00:20, the JSON object might look like the sketch after this list.

  • run.cmd: When this WebJob is run, it always looks up run.cmd first, which is a simple batch command file. Therefore, if necessary, we can enter the actual executable command with appropriate arguments into this file.
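For the 00:20 nightly schedule mentioned above, a minimal settings.job might look like this (the CRON expression is in the six-field, seconds-first format), and run.cmd would simply contain the executable invocation, e.g. CostMonitor.exe (a hypothetical name):

{
  "schedule": "0 20 0 * * *"
}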

That’s how we can use Azure WebJob for monitoring.

Monitoring Application on Azure Function

We can use Azure Functions instead. But in this case we HAVE TO make sure:

Azure Functions instance MUST be with App Service Plan, NOT Consumption Plan

Basically, this app runs for 1-2 minutes at the shortest or 30-40 minutes at the longest. This execution time is not affordable on a Consumption Plan, which charges based on execution time. On the other hand, as we have already paid for an App Service Plan, we don’t need to pay extra for the Function instance if we create it under that plan.

A Timer Trigger Function suits our purpose. Using the precompiled Azure Functions approach would also be helpful, and the function code might look like:
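A sketch of such a precompiled timer function, reusing the hypothetical services above, might look like this:

using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;

public static class CostMonitorFunction
{
    // Entry point referenced from function.json below.
    public static void Run(TimerInfo myTimer, TraceWriter log)
    {
        log.Info($"Cost monitoring started at: {DateTime.Now}");

        var aggregator = new AggregatorService();
        aggregator.ProcessAsync().GetAwaiter().GetResult();

        var reminder = new ReminderService();
        reminder.ProcessAsync().GetAwaiter().GetResult();
    }
}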

Here’s the function.json for this Timer Trigger one:
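A sketch, assuming the precompiled assembly and entry point names used above (both hypothetical):

{
  "scriptFile": "..\\bin\\CostMonitor.dll",
  "entryPoint": "CostMonitorFunction.Run",
  "bindings": [
    {
      "name": "myTimer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 20 0 * * *"
    }
  ],
  "disabled": false
}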

Here we have shown how to quickly write a simple application for cost monitoring, using the Azure Billing API. Cloud resources can certainly be used effectively and efficiently, but the flipside of it is, of course, that we have to be very careful not to be wasteful. Therefore, implementing a monitoring application would help in preventing unwanted cost leak.