Is Your Serverless Application Testable? – Azure Logic Apps

I’ve talked about testable Azure Functions in my previous post. In this post, I’m going to introduce building testable Azure Logic Apps.

The sample code used in this post can be found here.

User Story

  • As a DevOps engineer,
  • I want to search ARM templates using Azure Logic Apps
  • So that I can browse the search result.

Workflow in Logic Apps

Based on the user story above, the basic workflow in Logic Apps might look like:

  1. An HTTP request triggers the workflow by hitting the Request trigger.
  2. The first action, HTTP in this picture, calls the Azure Function written in the previous post.
  3. If this action returns a response with an HTTP status code greater than or equal to 400, the workflow takes the left-hand path, executes the ErrorResponse action and terminates.
  4. If this action returns a response with an HTTP status code less than 400, the workflow takes the right-hand path and executes the Parse JSON action followed by the Condition action.
  5. If the condition returns true, the workflow runs the OkResponse action; otherwise it runs the NotFoundResponse action, and terminates.

Therefore, we can identify that the HTTP action has three possible outcomes – error, success with no result, and success with a result. Let’s keep this in mind.

In fact, the workflow is a visual representation of this JSON object.

As a Logic App is basically a JSON object, we can easily plug it into an ARM template and deploy it to Azure so that we can use the Logic App workflow straight away.

THERE IS NO CODE AT ALL.

From the unit-testing point of view, no code means there is nothing to unit test. Logic Apps only run in the Azure cloud environment, i.e. we can’t run tests in an isolated local development environment. However, being unable to unit-test Logic Apps doesn’t necessarily mean we can’t test them at all.

Let’s see the picture again.

There’s a point where we can run tests. Yes, it’s the HTTP action – the API call. As we’ve identified above, there are three scenarios we need to test. Let’s move on.

Manual Testing the Logic App

First of all, send a normal request that returns the result through Postman.

This time, we send a different request that returns no data, so the workflow returns a 404 response code.

Finally, we send another request that returns an error response code.

Tests complete! Did you spot the issue with this test process? We actually hit the working API defined in the HTTP action. What if the actual API call causes side effects? Can we test the workflow WITHOUT hitting the actual API? We should find a way to decouple the API call from the workflow to improve testability. API mocking can achieve this goal.

API Mocking

I wrote a post, API Mocking for Developers, where I introduced how to mock APIs using Azure API Management. This is one way of mocking APIs. Even though it’s very easy to use with a Swagger definition, it has a cost issue – Azure API Management is fairly expensive for mocking. What if we use Azure Functions for mocking instead? It’s virtually free, unless the functions are called more than one million times per month.

Therefore, based on the three scenarios identified above, we can easily write function code that returns dummy responses, like below:
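Here’s a minimal sketch of what those mock endpoints could look like – three separate HTTP-triggered functions, one per scenario (the function names and payload shapes are assumptions, not the actual sample code):

using System.Net;
using System.Net.Http;

// run.csx of the "success with result" mock – returns a dummy search result (HTTP 200)
public static HttpResponseMessage Run(HttpRequestMessage req)
{
    return req.CreateResponse(HttpStatusCode.OK, new { value = new[] { "sample-arm-template" } });
}

// run.csx of the "success with no result" mock – returns an empty result set (HTTP 200)
public static HttpResponseMessage Run(HttpRequestMessage req)
{
    return req.CreateResponse(HttpStatusCode.OK, new { value = new string[0] });
}

// run.csx of the "error" mock – always fails (HTTP 500)
public static HttpResponseMessage Run(HttpRequestMessage req)
{
    return req.CreateResponse(HttpStatusCode.InternalServerError, new { message = "Something went wrong" });
}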

It may seem cumbersome because we need to write code for mocking APIs, so it’s totally up to you whether to use Azure API Management or Azure Functions for mocking.

Now, we’ve got mocked API endpoints for testing. What’s next?

Logic Apps for Testing – Manual Test

As we identified three scenarios, we need to clone the working Logic App three times and replace the API endpoint with the mocked ones. Here are the three Logic Apps cloned from the working one.

Each Logic App needs to be called through Postman to check the expected result. In order to pass the test, the first one should return an error response, the second one should return 404 and the last one should return 200. Here’s a high-level diagram of testing Logic Apps based on the expected scenarios.

We’re now able to test Logic Apps with mocked APIs. However, that’s not enough. We need to automate this test so it can be integrated into a CI pipeline. What can we do further?

Logic Apps for Testing – Automated Test

If we write a PowerShell script (or any other script) to call those three testing Logic Apps, we can call the script within the CI pipeline.

Here’s the high level diagram for test automation.

During the build process, Jenkins, VSTS or another build automation server calls the PowerShell script. The script runs three API requests and checks their responses. If all requests return the expected responses, the test is considered to have passed. If any request returns an unexpected response, the test is considered a failure.


So far, we’ve briefly walked through how to run tests against Logic Apps using API mocking. By decoupling API requests/responses from the Logic App, we can focus on the workflow itself. I hope this approach helps with writing your Logic Apps workflows.

Is Your Serverless Application Testable? – Azure Functions

In this post, I’m going to introduce several design patterns we can use when writing Azure Functions code in C#, with full testability in mind.

The sample code used in this post can be found here.

User Story

  • As a DevOps engineer,
  • I want to search ARM templates using Azure Functions
  • So that I can browse the search result.

Basic Function Code

We assume that we have a basic structure that Azure Functions can use. Here’s what a basic function might look like:
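Something along these lines – a rough sketch only, as the service and method names (GitHubService, GetArmTemplateDirectoriesAsync) are assumptions based on the sample repository’s naming:

using System;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

using Microsoft.Azure.WebJobs.Host;

public static class GetArmTemplateDirectoriesHttpTrigger
{
    public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
    {
        // The dependency is created inside the method – impossible to replace with a test double.
        var service = new GitHubService(new HttpClient());

        var query = req.GetQueryNameValuePairs()
                       .FirstOrDefault(q => q.Key.Equals("q", StringComparison.OrdinalIgnoreCase))
                       .Value;

        var directories = await service.GetArmTemplateDirectoriesAsync(query).ConfigureAwait(false);

        return req.CreateResponse(HttpStatusCode.OK, directories);
    }
}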

As you can see, all dependencies are instantiated and used within the function method. This works perfectly fine, so there’s no problem at all. However, from a testing point of view, this smells A LOT. How can we refactor this code for testing?

I’m going to introduce several design patterns. They will be familiar to anyone who has been working with OO programming. I’m not going to dive into them too deeply in this post, by the way.

Service Locator Pattern

As you can see in the code above, each and every Azure Functions method has the static modifier. Because of this restriction, we can’t use a normal way of injecting dependencies, but we can use the Service Locator Pattern. With this, the code above can be refactored like this:
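A sketch of that refactoring, using the IServiceLocator interface from the CommonServiceLocator package (the service names are still assumptions):

using System;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

using Microsoft.Azure.WebJobs.Host;
using Microsoft.Practices.ServiceLocation;

public static class GetArmTemplateDirectoriesHttpTrigger
{
    // Populated at startup, or swapped out with a mock in unit tests.
    public static IServiceLocator ServiceLocator { get; set; }

    public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
    {
        // Resolve the dependency instead of newing it up.
        var service = ServiceLocator.GetInstance<IGitHubService>();

        var query = req.GetQueryNameValuePairs()
                       .FirstOrDefault(q => q.Key.Equals("q", StringComparison.OrdinalIgnoreCase))
                       .Value;

        var directories = await service.GetArmTemplateDirectoriesAsync(query).ConfigureAwait(false);

        return req.CreateResponse(HttpStatusCode.OK, directories);
    }
}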

First of all, the function code declares the IServiceLocator property with the static modifier. Then, the Run() method calls the service locator to resolve an instance (or dependency). Now, the method is fully testable.

Here’s a simple unit test for it:
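A sketch using xUnit and Moq (the mocked return value is made up):

using System.Collections.Generic;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Http;

using Microsoft.Practices.ServiceLocation;
using Moq;
using Xunit;

public class GetArmTemplateDirectoriesHttpTriggerTests
{
    [Fact]
    public async Task Given_Query_Run_Should_Return_OkResponse()
    {
        // Arrange: mock the service and the service locator, then inject the locator.
        var service = new Mock<IGitHubService>();
        service.Setup(p => p.GetArmTemplateDirectoriesAsync(It.IsAny<string>()))
               .ReturnsAsync(new List<string> { "sample-arm-template" });

        var locator = new Mock<IServiceLocator>();
        locator.Setup(p => p.GetInstance<IGitHubService>()).Returns(service.Object);

        GetArmTemplateDirectoriesHttpTrigger.ServiceLocator = locator.Object;

        var req = new HttpRequestMessage(HttpMethod.Get, "http://localhost/api/arm-templates?q=sample");
        req.SetConfiguration(new HttpConfiguration());   // required for CreateResponse() outside the Functions runtime

        // Act
        var response = await GetArmTemplateDirectoriesHttpTrigger.Run(req, null);

        // Assert
        Assert.Equal(HttpStatusCode.OK, response.StatusCode);
    }
}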

The Mock instance is created and injected into the static property. This is the basic way of doing dependency injection for Azure Functions. However, there’s still an issue with the service locator pattern.

Mark Seemann argues on his blog that the service locator pattern is an anti-pattern, because we can’t guarantee that the dependencies we want to use have already been registered. Let’s have a look at the refactored code again:

Within the Run() method, the IGitHubService instance is resolved by this line:

var service = ServiceLocator.GetInstance<IGitHubService>();

When the amount of code is relatively small, that wouldn’t be an issue. However, if the application grows quickly and becomes fairly large, can we guarantee that the service variable is never null? I don’t think so. We have to use the service locator pattern, but at the same time try not to rely on it directly. That doesn’t sound quite right, though. How can we achieve this goal?

Strategy Pattern

The Strategy Pattern is one of the most popular design patterns and can be applied to all Functions code. Let’s start with the IFunction interface.
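A minimal version of the interface might look like this (the shape is an assumption; the TOptions constraint is explained in the Options Pattern section below):

using System.Net.Http;
using System.Threading.Tasks;

public interface IFunction<in TOptions> where TOptions : FunctionParameterOptions
{
    Task<HttpResponseMessage> InvokeAsync(HttpRequestMessage req, TOptions options);
}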

Every function now implements the IFunction interface, and all logic written in a trigger function MUST move into the InvokeAsync() method. Now, the FunctionBase abstract class implements the interface.
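A sketch of the base class (again, an assumption of the shape rather than the exact sample code):

using System;
using System.Net.Http;
using System.Threading.Tasks;

public abstract class FunctionBase<TOptions> : IFunction<TOptions> where TOptions : FunctionParameterOptions
{
    public virtual Task<HttpResponseMessage> InvokeAsync(HttpRequestMessage req, TOptions options)
    {
        // Derived function classes override this with their real logic.
        throw new NotImplementedException();
    }
}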

While implementing the IFunction interface, the FunctionBase class adds the virtual modifier to the InvokeAsync() method so that function classes inheriting this base class can override the method. Let’s put in the real logic.
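For example (a sketch – IGitHubService and the options class, which is defined in the next section, are assumed names):

using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

public class GetArmTemplateDirectoriesFunction : FunctionBase<GetArmTemplateDirectoriesFunctionParameterOptions>
{
    private readonly IGitHubService _service;

    public GetArmTemplateDirectoriesFunction(IGitHubService service)
    {
        _service = service;
    }

    public override async Task<HttpResponseMessage> InvokeAsync(HttpRequestMessage req, GetArmTemplateDirectoriesFunctionParameterOptions options)
    {
        var directories = await _service.GetArmTemplateDirectoriesAsync(options.Query).ConfigureAwait(false);

        return req.CreateResponse(HttpStatusCode.OK, directories);
    }
}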

The GetArmTemplateDirectoriesFunction class overrides the InvokeAsync() method and processes everything within it. Dead simple. By the way, what is the generic type TOptions? This is the Options Pattern that we deal with right after this section.

Options Pattern

Many API endpoints have route variables. For example, if an HTTP trigger’s endpoint URL is /api/products/{productId}, the productId value keeps changing, and the value can be handled by the function method, like Run(HttpRequestMessage req, int productId, ILogger log). In other words, route variables are passed through parameters of the Run() method. Of course, each function has a different number of route variables. If this is the case, we can pass those route variables, productId for example, through an options instance. The code will be much simpler.
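A sketch of those options classes, matching the description below (the property set is an assumption):

public abstract class FunctionParameterOptions
{
}

public class GetArmTemplateDirectoriesFunctionParameterOptions : FunctionParameterOptions
{
    public string Query { get; set; }
}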

There is an abstract class, FunctionParameterOptions. The GetArmTemplateDirectoriesFunctionParameterOptions class inherits it and contains a Query property. This property can be used to store a querystring value for now. The InvokeAsync() method now only cares about the options instance, instead of individual route variables.

Builder Pattern

Let’s have a look at the function code again. The IGitHubService instance is injected through a constructor.

Those dependencies should be managed somewhere else, such as in an IoC container. Autofac and other IoC container libraries are good examples, and they need to be combined with a service locator. If we introduce a Builder Pattern, this can be easily accomplished. Here’s a sample:
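A sketch of such a builder, assuming the Autofac and Autofac.Extras.CommonServiceLocator packages are referenced:

using Autofac;
using Autofac.Extras.CommonServiceLocator;
using Microsoft.Practices.ServiceLocation;

public class ServiceLocatorBuilder
{
    private readonly ContainerBuilder _builder = new ContainerBuilder();

    public ServiceLocatorBuilder RegisterModule<TModule>() where TModule : Module, new()
    {
        // Each Autofac module registers a group of related dependencies.
        _builder.RegisterModule<TModule>();
        return this;
    }

    public IServiceLocator Build()
    {
        var container = _builder.Build();
        return new AutofacServiceLocator(container);
    }
}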

With Autofac, all dependencies are registered within the RegisterModule() method, and the container is wrapped with AutofacServiceLocator and exposed as an IServiceLocator within the Build() method.

Factory Method Pattern

All good now. Let’s combine everything together, using the Factory Method Pattern.
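A sketch of the factory (AppModule is a hypothetical Autofac module that registers IGitHubService, the function classes and their dependencies):

using Microsoft.Practices.ServiceLocation;

public interface IFunctionFactory
{
    TFunction Create<TFunction>() where TFunction : class;
}

public class FunctionFactory : IFunctionFactory
{
    private readonly IServiceLocator _locator;

    public FunctionFactory()
    {
        // Register everything once and keep the locator private to the factory.
        _locator = new ServiceLocatorBuilder()
            .RegisterModule<AppModule>()
            .Build();
    }

    public TFunction Create<TFunction>() where TFunction : class
    {
        return _locator.GetInstance<TFunction>();
    }
}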

Within the constructor, all dependencies are registered by the ServiceLocatorBuilder instance. This includes all function classes that implement the IFunction interface. FunctionFactory has the Create() method, which encapsulates the service locator.
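And the trigger itself, refactored once more (a sketch; the querystring key is an assumption):

using System;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

using Microsoft.Azure.WebJobs.Host;

public static class GetArmTemplateDirectoriesHttpTrigger
{
    public static IFunctionFactory Factory { get; set; } = new FunctionFactory();

    public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
    {
        var query = req.GetQueryNameValuePairs()
                       .FirstOrDefault(q => q.Key.Equals("q", StringComparison.OrdinalIgnoreCase))
                       .Value;

        var options = new GetArmTemplateDirectoriesFunctionParameterOptions { Query = query };

        return await Factory.Create<GetArmTemplateDirectoriesFunction>()
                            .InvokeAsync(req, options)
                            .ConfigureAwait(false);
    }
}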

The function has now been refactored again. The IServiceLocator property has been replaced with the FunctionFactory property. The Run() method then calls the Create() method and the InvokeAsync() method consecutively, in a fluent style. At the same time, GetArmTemplateDirectoriesFunctionParameterOptions is instantiated with the querystring value and passed into the InvokeAsync() method.

Now we’ve got all function triggers and their dependencies fully decoupled and unit-testable. We have also encapsulated the service locator, so we no longer need to care about it. From now on, code coverage can go up to 100%, if TDD is properly applied.


So far, I’ve briefly introduced several useful design patterns for testing Azure Functions code. These patterns can be used in everyday development. This might not be the perfect solution, but I’m sure it’s one of the better practices I’ve found so far. If you have a better approach, please send a PR to the GitHub repository.

Azure Logic Apps will be touched in the next post.

Azure Functions Logging to Application Insights

We’re going to have a look at several ways to integrate Application Insights (AppInsights) with Azure Functions (Functions).

Functions supports built-in logging through the TraceWriter instance. A basic sample function might look like:
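For example, the stock HTTP trigger template (a sketch):

using System.Net;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    log.Info($"C# HTTP trigger function processed a request. RequestUri={req.RequestUri}");

    return req.CreateResponse(HttpStatusCode.OK, "Hello from Azure Functions");
}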

With TraceWriter, we can log information to the log console like:

However, it has a maximum limit of 1,000 records. This is good for simple debugging purposes, but not for real logging. Therefore, we should store logs somewhere else, such as a database or storage account. Fortunately, AppInsights integration has recently been added to Functions as a preview to overcome these limitations. Let’s have a look.

Application Insights Integration

According to the document, it’s really easy.

  1. Create an AppInsights instance. Its type MUST be General.
  2. Add a new key, APPINSIGHTS_INSTRUMENTATIONKEY, to the App Settings section of the Function instance.
  3. Set the value of that key to the Instrumentation Key of the AppInsights instance.

This is it. Once it’s done, simply execute some functions and wait up to 5 minutes for the aggregated result. Then, go to the AppInsights blade and find a graph looking like:

Can’t be easier, huh?

ARM Template Setup for DevOps Engineers

We can add the Instrumentation Key as above. However, this is not ideal from a CI/CD point of view. Instead, setting the key within an ARM template is preferable, and more effective and efficient. Here’s a cut-down version of a sample ARM template:

As we can see above, we can directly inject the Instrumentation Key within the ARM template without ever knowing its value. If you want to know more about ARM templates, this official document is a good starting point.

ILogger Integration

ILogger is an interface from ASP.NET Core. As it supports .NET Standard 1.1, Functions has recently introduced ILogger version 1.1.1. With this, we can add virtually any number of logging libraries. Functions provides AppInsights logging through this interface. In other words, we can simply replace the TraceWriter instance with an ILogger one, in order to send all logs to AppInsights.

This is also really easy. Simply replace TraceWriter with ILogger in the function parameters and change the method call from log.Info() to log.LogInformation():
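The same sample function with the swap applied (a sketch):

using System.Net;
using Microsoft.Extensions.Logging;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, ILogger log)
{
    log.LogInformation($"C# HTTP trigger function processed a request. RequestUri={req.RequestUri}");

    return req.CreateResponse(HttpStatusCode.OK, "Hello from Azure Functions");
}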

If we still want to keep the log.Info() method name, that’s fine. Simply create an extension method like:
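For example, a small extension method (a sketch; put it wherever your shared code lives):

using Microsoft.Extensions.Logging;

public static class LoggerInfoExtensions
{
    public static void Info(this ILogger logger, string message)
    {
        logger.LogInformation(message);
    }
}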

And use the extension method like below:
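The call site then stays exactly the same as the TraceWriter version:

log.Info($"C# HTTP trigger function processed a request. RequestUri={req.RequestUri}");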

Once everything is done, deploy the Function again and run Functions several times. Then check out the Function log console:

We have exactly the same experience as before. Of course, we can also see additional logging details on the AppInsights blade:

As mentioned above, Functions supports the ILogger interface. That means we might be able to add third-party logging libraries such as Serilog. Unfortunately, at the time of writing, we can’t use those third-party ones yet, but the Azure Functions team has started looking at the implementation, according to this issue. Hopefully this feature is released soon.

Another known issue around the ILogger implementation in Functions is that the Azure Functions Core Tools, which help with local debugging, don’t display logs in the console. So don’t panic if no logs are displayed on your local console – they display as expected when you deploy your Functions to Azure. The Azure Functions team is working hard to fix this issue sooner rather than later.

So far, we have walked through a few ways to integrate AppInsights with Azure Functions. As the integration is still in preview, features may change over time until GA. But I’m pretty sure this logging integration with AppInsights will be quite useful and powerful.

Integration of Microsoft Identity Manager with Azure Platform-as-a-Service Services

Overview

This isn’t an out of the box solution. This is a bespoke solution that takes a number of elements and puts them together in a unique way. I’m not expecting anyone to implement this specific solution (but you’re more than welcome to), but rather to take inspiration from it to implement solutions relevant to your environment(s). This post supports a presentation I did to The MIM Team User Group on 14 June 2017.

This post describes a solution that;

  • Leverages an Azure WebApp (NodeJS) to present a simple website. That site can be integrated easily in the FIM/MIM Portal
  • The NodeJS website leverages an Azure Function App to get a list of users from the FIM/MIM Synchronization Server and allows the user to use typeahead functionality to find the user they want to generate a FIM/MIM object report on
  • On selection of a user, a request will be sent to another Azure Function App to generate and return the report to the user in a new browser window

This is shown graphically below.

 

Report Request UI

The NodeJS WebApp is integrated into the FIM/MIM portal. Bootstrap Typeahead is used to find the user to generate a report on. The Typeahead userlist is fulfilled by an Azure Function that queries the MIM Sync Metaverse. The Generate Report button fires off a call to FIM/MIM via another Azure Function into the MIM Sync and MIM Service to generate the report.

The returned report opens in a new tab in the user’s browser. The report contains details of the FIM/MIM connectors the user is represented on.

The values of all attributes for the user’s hologram from the Metaverse are displayed, along with the MA the value came from and the last modified date.

Finally, the report shows the metadata from the MIM Service MA Connector Space and the MIM Service.

Prerequisites

These are numerous, but I’ve previously posted about them. You will need;

I encourage you to digest those posts to understand how to configure the prerequisites for this solution.

Additional Solution Requirements

To bring all the individual components together, there are a few additional tasks to enable this solution.

  • Enable CORS on your Azure Function App Configuration (see details further below)
  • If you want to display User Object Photos as part of the report, you will likely need to synchronize them into FIM/MIM from an authoritative source (e.g. Office365/Exchange Online). Check out this post and additional details further below
  • In order to embed the NodeJS WebApp into the FIM/MIM Portal, this post provides the details. Change the target URL from PowerBI URL to your NodeJS site
  • Object Report Request WebApp (see below for sample site)

Azure Functions Cross Origin Resource Sharing (CORS)

You will need to configure CORS to allow the NodeJS WebApp to access the Azure Functions (from both local and Azure). Reflect your port number if it is different from 3000, and use the DNS name for your Azure WebApp.

Sample UI NodeJS HTML

Here is a sample HTML file for your NodeJS WebApp with the UI that provides the LoginID input, fulfilled by the NodeJS JavaScript file further below.

Sample UI NodeJS JavaScript

The following NodeJS JavaScript supports the HTML UI above. It populates the LoginID typeahead box and handles the Submit Report button to generate the report for the desired object(s). Yes, if you use the UI to (individually) select multiple different objects, all will be returned in their separate output windows.

As the HTML file above indicates, you will need to obtain and make available as part of your NodeJS project the typeahead.bundle.js library.

Azure PowerShell Trigger Function App for AccountNames Lookup

The following Azure Function takes the call from the load of the NodeJS WebApp to populate the typeahead userlist.

Azure PowerShell Trigger Function App for User Object Report

Similar in structure to the Username List Lookup Azure Function above, but in the ScriptBlock you embed the Report Generation Script that is detailed here. Modify for what you want to report on.

Photos in the Report

If you want to display images in your report, you will need to determine if the user has an image during the MV metadata report generation part of the script. Add the following lines (updating for the name of your Image attribute; mine is named EXOPhoto) after the Try {} Catch {} in this section $obj = @() ; foreach ($attr in $attributes.Keys)

 # Display the Objects Photo rather than Base64 string 
if ($attr.equals("EXOPhoto")){ 
   $objectphoto = "<img src=$([char]0x22)data:image/jpeg;base64,$($attributes.$attr.Values.Valuestring)$([char]0x22)>" 
   $val = "System.Byte[]" 
}

Then in the output of the HTML report at the end of the report generation insert the $objectphoto variable into the HTML stream.

# Output MIM Service Object Data 
$MIMServiceObjOut = $MIMServiceObjectMetaData | Sort-Object -Property Attribute | ConvertTo-Html -Fragment 
$htmlreport = ConvertTo-HTML -Body "$htmlcss<h1>Microsoft Identity Manager User Object Report</h1><h2>Query</h2>$sourcequery</br><b><center>$objectphoto</br>NOTE: Only attributes with values are displayed.</center></b><h2>Connector(s) Summary</h2>$connectorsummary<h2>MetaVerse Data</h2>$objectmetadata <h2>MIM Service CS Object Data</h2>$MIMServiceCSobjectmetadata <h2>MIM Service Object Data</h2>$MIMServiceObjOut" -Title "MIM Object Report" 

 

As you can see above I’ve also injected the CSS ($htmlcss) into the output stream at the beginning of the Body section.  Somewhere in your script block you will need to define your CSS values. e.g.

 # StyleSheet for nice pretty output 
$htmlcss = "<style> 
   h1, h2, th { text-align: center; } 
   table { margin: auto; font-family: Segoe UI; box-shadow: 10px 10px 5px #888; border: thin ridge grey; } 
   th { background: #0046c3; color: #fff; max-width: 400px; padding: 5px 10px; } 
   td { font-size: 11px; padding: 5px 20px; color: #000; } 
   tr { background: #b8d1f3; } 
   tr:nth-child(even) { background: #dae5f4; } 
   tr:nth-child(odd) { background: #b8d1f3; } 
</style>"

Summary

An interesting solution integrating Azure PaaS Services with Microsoft Identity Manager via PowerShell and the extremely versatile Lithnet FIM/MIM PowerShell Modules.

Please share your implementations enhancing your FIM/MIM Solution.

Azure Functions with Swagger

The Azure Functions team has recently announced Swagger support as a preview. If we use Azure Functions as APIs, this will be very useful. In this post, we will have a look at how to enable Swagger support on Azure Functions.

The sample code used for this post can be found here.

Sample Azure Functions Instance

First of all, with the sample code provided, we’re creating two HTTP triggers, CreateProduct and GetProduct. Once we deploy them, we can find them in the Azure Portal like:

Here are simple requests and responses through Postman:

Let’s create a Swagger definition document for those Functions.

Auto-Generate Swagger Definition

If we have at least one function endpoint in our Function instance, we can automatically generate a Swagger definition in YAML format. Click the API definition (preview) tab.

By default, the External URL button is selected. Click the Functions button right next to it.

Because we have never generated a Swagger definition, an error screen is shown.

Now, click the Generate API definition template button so that the document is automatically generated.

Now we’ve got the Swagger definition document. The actual document generated looks like:

However, there are at least three gaps that we have to fill in:

  • definitions: There’s no request/response model definition. We have to fill these in.
  • produces/consumes: There’s no document type defined. In general, as JSON is the most popular format for REST APIs, we can simply add application/json here.
  • securityDefinitions: Azure Functions accepts either a code value in the querystring or an x-functions-key value in the request header. The auto-generated template only defines the code one, so we have to define x-functions-key ourselves.

Here’s the updated Swagger definition including the missing ones:

Once we update the Swagger definition, we can test the API right away by providing the function key code and a payload. Easy, huh? Also, the address specified in the middle of the picture, https://xxxx.azurewebsites.net/admin/host/swagger?code=xxxx, allows us to access the Swagger definition document in JSON format. The Azure Functions instance automatically converts the YAML document to JSON.

It seems very easy. However, there is a critical point to bear in mind: the Function instance must have at least one function endpoint before the Swagger definition can be auto-generated. In other words, an API design-first approach is not applicable.

Now, we’ve got a question. If we only have the Swagger definition document, not the actual implementation, what can we do with Azure Functions? Why not render the Swagger document directly from Azure Function code?

Render Swagger Definition via Azure Functions

Here’s the deal. We basically create an Azure Function that reads the Swagger definition and renders it as a response. The following function code gives us a brief idea:
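Here’s one possible sketch of that idea, assuming the YamlDotNet and Json.NET packages are referenced via project.json (the file name and the settings key match the description below):

using System;
using System.IO;
using System.Net;
using System.Net.Http;
using System.Text;

using Newtonsoft.Json;
using YamlDotNet.Serialization;

public static HttpResponseMessage Run(HttpRequestMessage req, TraceWriter log)
{
    // WEBROOT_PATH is defined in App Settings and points to D:\home\site\wwwroot.
    var wwwroot = Environment.GetEnvironmentVariable("WEBROOT_PATH");
    var yaml = File.ReadAllText(Path.Combine(wwwroot, "swagger-v1.yaml"));

    // Convert the YAML document to a JSON string.
    object graph;
    using (var reader = new StringReader(yaml))
    {
        graph = new DeserializerBuilder().Build().Deserialize<object>(reader);
    }

    var json = JsonConvert.SerializeObject(graph, Formatting.Indented);

    // Render the JSON document as the response.
    return new HttpResponseMessage(HttpStatusCode.OK)
    {
        Content = new StringContent(json, Encoding.UTF8, "application/json")
    };
}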

The Function instance contains a swagger-v1.yaml file at its root level. Looking at the code above, it first reads the file. In order to read the file, we have to set a value representing the root path, called WEBROOT_PATH (or whatever name you prefer), in the AppSettings section. Its value will be D:\home\site\wwwroot, which never changes unless Azure App Service changes it. There are two ways to read the settings value:

  1. var wwwroot = Environment.GetEnvironmentVariable("WEBROOT_PATH");
  2. var wwwroot = ConfigurationManager.AppSettings["WEBROOT_PATH"];

Either is fine for reading the settings value. If we omit this setting, Azure Functions basically assumes that the file is located at C:\Windows\System32, which will cause an unexpected result.

According to this document, at the time of writing, we can pass the Microsoft.Azure.WebJobs.ExecutionContext instance as a function parameter so that we can handle the file path a little more easily. However, as it’s not fully rolled out yet and dev tools don’t have that feature yet, we should wait until it’s fully rolled out.

The code above then reads the YAML file, converts it to JSON and renders it. We can now see the Swagger definition through a web browser:

So far, we have briefly looked at how to enable Swagger support in Azure Functions in two different ways. Both, of course, have pros and cons. The first one might be the easier option, but it needs more manual work. Also, if we want to access the Swagger definition with the first option, we have to use a different access code. This is a bit critical because we have to manage at least two different keys – one for Functions and the other for Swagger – which is not ideal. On the other hand, the second option needs another function, but it can be handled by the same host key that the other function code uses. Therefore, from the management point of view, the second option might be better. It’s still in preview, so we hope that the GA version of the Swagger support will be better than it is now.

Message retry patterns in Azure Functions

Azure Functions provides ServiceBus-based trigger bindings that allow us to process messages dropped onto a SB queue or delivered to a SB subscription. In this post we’ll walk through creating an Azure Function using a ServiceBus trigger that implements a configurable message retry pattern.

Note: This post is not an introduction to Azure Functions nor an introduction to ServiceBus. For those not familiar with these Azure services, take a look at the Azure Documentation Centre.

Let’s start by creating a simple C# function that listens for messages delivered to a SB subscription.

create azure function

Azure Functions provide a number of ways we can receive the message into our functions, but for the purpose of this post we’ll use the BrokeredMessage type as we will want access to the message properties. See the above link for further options for receiving messages into Azure Functions via the ServiceBus trigger binding.

To use BrokeredMessage we’ll need to import the Microsoft.ServiceBus assembly and change the input type to BrokeredMessage.
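A sketch of the function at this point:

#r "Microsoft.ServiceBus"

using Microsoft.ServiceBus.Messaging;

public static void Run(BrokeredMessage message, TraceWriter log)
{
    log.Info($"C# ServiceBus trigger function processed message: {message.MessageId}");
}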

If we did nothing else, our function would receive the message from SB, log its message ID and remove it from the queue. Actually, the SB trigger peeks the message off the queue, acquiring a peek lock in the process. If the function executes successfully, the message is removed from the queue. So what happens when things go pear shaped? Let’s add a pear and observe what happens.

Note: To send messages to the SB topic, I use Paolo Salvatori’s ServiceBus Explorer. This tool allows us to view queue and message properties mentioned in this post.

retry by default

Notice the function being triggered multiple times. This will continue until the SB queue’s MaxDeliveryCount is exceeded. By default, SB queues and topics have a MaxDeliveryCount of 10. Let’s output the delivery count in our function using a property on the BrokeredMessage class so we can observe this in action.
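That’s a one-line change inside the function (a sketch):

log.Info($"C# ServiceBus trigger function processed message: {message.MessageId}, DeliveryCount: {message.DeliveryCount}");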

outputing delivery count

From the logs we see the message was retried 10 times until the maximum number of deliveries was reached and ServiceBus expired the message, sending it to the dead letter queue. “Ah hah!”, I hear you say. Implement message retries by configuring the MaxDeliveryCount property on the SB queue or subscription. Well, that will work as a simple, static retry policy, but quite often we need a more configurable, dynamic approach – one based on the message context or the type of exception caught by the processing logic.

Typical use cases include handling business errors (e.g. message validation errors, downstream processing errors etc.) versus transport errors (e.g. downstream service unavailable, request timeouts etc.). When handling business errors, we may elect not to retry the failed message and instead move it to the dead letter queue. When handling transport errors, we may wish to treat transient failures (e.g. database connections) and protocol errors (e.g. 503 service unavailable) differently. Transient failures we may wish to retry a few times over a short period, whereas for protocol errors we might want to keep trying the service over an extended period.

To implement this capability we’ll create a shared function that determines the appropriate retry policy based on the context of the exception thrown. It will then check the number of retry attempts against the maximum defined by the policy. If retry attempts have been exceeded, the message is moved to the dead letter queue; otherwise the function waits for the duration defined by the policy before throwing the original exception.

Let’s change our function to throw mock exceptions and call our retry handler function to implement message retry policy.
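A sketch of the combined result is below. The policy values and the mock exception types are assumptions for illustration, and the shared handler follows the logic described above (dead-letter when retries are exhausted, otherwise wait and re-throw so ServiceBus redelivers the message):

#r "Microsoft.ServiceBus"

using System;
using System.Threading;
using Microsoft.ServiceBus.Messaging;

// Mock exception types used to simulate business and protocol errors.
public class BusinessRuleException : Exception { }
public class ProtocolErrorException : Exception { }

public class RetryPolicy
{
    public int MaxRetries { get; set; }
    public TimeSpan RetryInterval { get; set; }
}

public static void Run(BrokeredMessage message, TraceWriter log)
{
    try
    {
        log.Info($"Processing message {message.MessageId}, DeliveryCount = {message.DeliveryCount}");
        ProcessMessage(message);   // processing logic – throws a mock exception for testing
    }
    catch (Exception ex)
    {
        HandleRetry(message, ex, log);
    }
}

private static void HandleRetry(BrokeredMessage message, Exception ex, TraceWriter log)
{
    // Determine the appropriate retry policy from the exception context.
    RetryPolicy policy;
    if (ex is BusinessRuleException)
        policy = new RetryPolicy { MaxRetries = 0 };                                           // business errors: don't retry
    else if (ex is ProtocolErrorException)
        policy = new RetryPolicy { MaxRetries = 5, RetryInterval = TimeSpan.FromSeconds(3) };  // protocol errors
    else
        policy = new RetryPolicy { MaxRetries = 3, RetryInterval = TimeSpan.FromSeconds(1) };  // transient errors

    if (message.DeliveryCount > policy.MaxRetries)
    {
        // Retries exhausted – move the message to the dead letter queue.
        log.Info($"Max retries exceeded for message {message.MessageId}. Dead-lettering.");
        message.DeadLetter("MaxRetriesExceeded", ex.Message);
        return;
    }

    // Wait for the policy's interval, then throw so ServiceBus redelivers the message.
    Thread.Sleep(policy.RetryInterval);
    throw new Exception($"Delivery {message.DeliveryCount} of {policy.MaxRetries} failed.", ex);
}

private static void ProcessMessage(BrokeredMessage message)
{
    // Simulate a failure – swap the exception type to test each policy.
    throw new ProtocolErrorException();
}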

Now our function implements some basic exception handling and checks for the appropriate retry policy to use. Let’s test that our retry policies work by throwing the different exceptions our policy supports.

Testing throwing our mock business rules exception…

test mock business exception

…we observe that the message is moved straight to the dead letter queue as per our defined policy.

Testing throwing our mock protocol exception…

test mock protocol exception

…we observe that the message is retried a total of 5 times, waiting 3 seconds between retries, as per the defined policy for protocol errors.

Considerations  

  • Ensure your SB queues and subscriptions are defined with a MaxDeliveryCount greater than your maximum number of retries.
  • Ensure your SB queues and subscriptions are defined with a TTL period greater than your maximum retry interval.
  • Be aware that Consumption-based service plans have a maximum execution duration of 5 minutes. App Service Plans don’t have this constraint; however, ensure the functionTimeout setting in the host.json file is greater than your maximum retry interval.
  • Also be aware that if you are using Consumption-based plans, you will still be charged for the time spent waiting for the retry interval (thread sleep).

Conclusion

In this post we have explored the behaviour of the ServiceBus trigger binding within Azure Functions and how we can implement a dynamic message retry policy. As long as you are willing to manage the deserialization of message content yourself (rather than have Azure Functions do it for you), you can gain access to the BrokeredMessage class and implement feature-rich messaging solutions on the Azure platform.

Know Your Cloud Resource Costs on Azure

Organisations used to invest in IT infrastructure mostly for computers, networks or data centres. Over time, they spent their budgets on hosting space. Nowadays, in cloud environments, they mostly spend their funds on purchasing computing power. Here’s a simple diagram of the cloud computing evolution. From left to right, expenditure shifts from infrastructure to computing power.

In the cloud environment, when we need resources, we just create and use them, and when we don’t need them any longer, we just delete them. But let’s think about this. If your organisation runs dev, test and production environments in the cloud, the cost of resources running in the dev or test environment is likely to be overlooked unless carefully monitored. In this case, your organisation might receive an invoice with a massive amount on it! That has to be avoided. In this post, we are going to have a look at the Azure Billing API that was released in preview and build a simple application to monitor costs in an effective way.

The sample code used for this post can be found here.

Azure Billing API Structure

There are two distinct APIs for Azure Billing – one is the Usage API and the other is the Rate Card API. Combining the two, we can calculate how much we spent during a particular period.

Usage API

This API is based on a subscription. Within a subscription, we can send a request to calculate how many resources we used in a specified period. Here are the parameters we can use for these requests.

  • ReportedStartTime: Starting date/time reported in the billing system.
  • ReportedEndTime: Ending date/time reported in the billing system.
  • Granularity: Either Daily or Hourly. Hourly returns a more detailed result but takes far longer.
  • Details: Either true or false. This determines whether usage is split down to the instance level. If false is selected, all instances of the same type are aggregated.

Here’s an interesting point about the term Reported. When we USE cloud resources, that can be interpreted from two different perspectives. The term USE might mean that the resources were actually used at the specified date/time, or that the resource-usage events were reported to the billing system at the specified date/time. This happens because Azure is basically a distributed system scattered all around the world, and depending on the data centre where the resources are situated, the actual usage date/time can be reported to the billing system in a delayed manner. Therefore, even though we send requests based on the reported date/time, the responses containing usage data show the actual usage date/time.

Rate Card API

When you open a new Azure subscription, you might have noticed a code looking like MS-AZR-****P. Have you seen that code before? This is called the Offer Durable ID and, based on this, different rates apply to resources. Please refer to this page to see more details about the various types of offers. In order to send requests, we can use the following query parameters.

  • OfferDurableId: This is the offer Id. eg) MS-AZR-0017P (EA Subscription)
  • Currency: Currency that you want to look for. eg) AUD
  • Locale: Locale of your search region. eg) en-AU
  • Region: Two-letter ISO country code that you purchased this offer. eg) AU

Therefore, in order to calculate the actual spend, we need to combine the two API responses. Fortunately, there’s a good NuGet library called CodeHollow.AzureBillingApi, so we’ll just use it to figure out Azure resource consumption costs.

Scenario

Kloud, as a cloud consulting firm, offers all consultants access to the company’s subscription without restriction so that they can create resources to develop/test scenarios for their clients. However, once resources are created, there’s a high chance that those resources are not destroyed in a timely manner, which brings about unnecessary spending. Therefore, the management team has decided to perform cost control by resource group: 1) assigning resource group owners, 2) setting a total spend limit, and 3) setting a daily spend limit, using tags. By virtue of these tags, resource group owners are notified via email when cost approaches 90% of the total spend limit, and when it reaches the total spend limit. They also get notified if the cost exceeds the daily spend limit so they can take appropriate action for their resource groups.

Sounds simple, right? Let’s code it!

Once the application is written, it should be run daily to aggregate all costs, store them in the database, and send notifications to the resource group owners that meet the conditions above.

Writing Common Libraries

The common libraries consist of three parts. Firstly, they call the Azure Billing API and aggregate data by date and resource group. Secondly, they store that aggregated data in the database. Finally, they send notifications to resource group owners who have resource groups that exceed either the total spend limit or the daily spend limit.

Azure Billing API Call & Data Aggregation

CodeHollow.AzureBillingApi removes a huge amount of API-calling work. A simple implementation might look like:

First of all, like the code above, we need to fetch all resource usage/cost data. Then, like below, that data needs to be grouped by date and resource group.
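A sketch of that grouping, assuming the usage and rate card responses have already been flattened into a simple record type (the type and property names are assumptions, not the library’s types):

public class ResourceCostRecord
{
    public DateTime UsageDate { get; set; }
    public string ResourceGroupName { get; set; }
    public decimal Cost { get; set; }
}

// ... inside the aggregator, where 'costs' is an IEnumerable<ResourceCostRecord>:
var dailyCostsByResourceGroup = costs
    .GroupBy(c => new { Date = c.UsageDate.Date, c.ResourceGroupName })
    .Select(g => new
    {
        g.Key.Date,
        g.Key.ResourceGroupName,
        DailyCost = g.Sum(c => c.Cost)
    })
    .ToList();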

We now have all cost-related data per resource group. We then need to fetch tag values from the resource groups using another API call and merge them with the data previously populated.

We can look up all resource groups in a given subscription like above, and merge this result with the cost data that we previously found, like below.

Data Storage

This is the simplest part. Just use Entity Framework and store the data in the database.

We’ve now implemented the data aggregation part.

Notification

First of all, we need to fetch the resource groups that meet the conditions, which is not that hard to write.
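A sketch of that filter, assuming each record carries the tag-based limits alongside the aggregated costs (property names are assumptions):

var resourceGroupsToNotify = resourceGroupCosts
    .Where(rg => rg.TotalCost >= rg.TotalSpendLimit * 0.9m     // approaching the total spend limit
              || rg.TotalCost >= rg.TotalSpendLimit            // exceeded the total spend limit
              || rg.DailyCost >= rg.DailySpendLimit)           // exceeded the daily spend limit
    .ToList();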

The code above is self-explanatory: it only returns resource groups that 1) approach the total spend limit, 2) exceed the total spend limit, or 3) exceed the daily spend limit. It works well, even though it looks a bit smelly.

The following code shows how to send notifications to the resource group owners.
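Something along these lines (a sketch; the property names are assumptions):

foreach (var rg in resourceGroupsToNotify)
{
    // Only writes to the console for now – swap in SendGrid or Twilio here.
    Console.WriteLine(
        $"[ALERT] {rg.ResourceGroupName} (owner: {rg.OwnerEmail}): " +
        $"daily cost {rg.DailyCost:C}, total cost {rg.TotalCost:C} (limit {rg.TotalSpendLimit:C})");
}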

It only writes alerts to the screen, but we can plug in SendGrid for email notifications or Twilio for SMS alerts here.

Now we’ve got the basic application structure. How can we execute it, by the way? We might have two approaches – Azure WebJobs and Azure Functions. Let’s move on.

Monitoring Application on Azure WebJob

A console application might be the simplest approach for this purpose. Once the console app is built, it can be deployed to an Azure WebJob straight away. Here’s the simple console application code.
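A sketch of that console app (AggregatorService and ReminderService are assumed names for the common-library services described below):

public static class Program
{
    public static void Main(string[] args)
    {
        // Aggregator: calls the Billing API, aggregates the costs and stores them in the database.
        var aggregator = new AggregatorService();
        aggregator.ProcessAsync().GetAwaiter().GetResult();

        // Reminder: checks the stored costs against the tag limits and notifies the owners.
        var reminder = new ReminderService();
        reminder.ProcessAsync().GetAwaiter().GetResult();
    }
}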

The Aggregator service collects and stores data, and the Reminder service sends alerts to resource group owners. In order to deploy this to an Azure WebJob, we need to create two extra files, run.cmd and settings.job.

  • settings.job: It contains the CRON expression for the scheduled job. For example, if this WebJob runs every night at 00:20, the JSON object might look like { "schedule": "0 20 0 * * *" }.

  • run.cmd: When this WebJob is run, it always looks for run.cmd first, which is a simple batch command file. Therefore, if necessary, we can enter the actual executable command with appropriate arguments into this file.

That’s how we can use Azure WebJob for monitoring.

Monitoring Application on Azure Function

We can use Azure Functions instead. But in this case we HAVE TO make sure:

Azure Functions instance MUST be with App Service Plan, NOT Consumption Plan

Basically, this app runs for 1-2 minutes at the shortest or 30-40 minutes at the longest. This execution time is not suitable for the Consumption Plan, which charges based on execution time. On the other hand, as we have already paid for an App Service Plan, we don’t pay anything extra for the Function instance if we create it under that App Service Plan.

A Timer Trigger function suits our purpose. Using the precompiled Azure Functions approach would also be helpful, and the function code might look like:
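A sketch of the precompiled Timer Trigger (usings omitted; AggregatorService and ReminderService are the same assumed service names as above):

public static class CostMonitoringTimerTrigger
{
    public static async Task Run(TimerInfo myTimer, TraceWriter log)
    {
        log.Info($"Cost monitoring function started at {DateTime.UtcNow:O}");

        // Same two steps as the console app: aggregate, then notify.
        var aggregator = new AggregatorService();
        await aggregator.ProcessAsync().ConfigureAwait(false);

        var reminder = new ReminderService();
        await reminder.ProcessAsync().ConfigureAwait(false);
    }
}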

Here’s the function.json for this Timer Trigger one:

Here we have shown how to quickly write a simple application for cost monitoring using the Azure Billing API. Cloud resources can certainly be used effectively and efficiently, but the flip side is, of course, that we have to be very careful not to be wasteful. Therefore, implementing a monitoring application helps prevent unwanted cost leaks.

Precompiled Azure Functions Revisited

Since Microsoft released a preview version of Visual Studio Tools for Azure Functions in December 2016, it has been reported as very buggy. The current roadmap says the tooling won’t be unveiled until .NET Standard 2.0 is released. Therefore, the Azure App Service Team published an article on utilising ASP.NET Web Application projects in the meantime. Even though the approach requires a fair amount of manual initial setup, this is literally the only way to develop Azure Functions in Visual Studio without the tooling.

In this post, as a complement to that article, we are going to build a simple Web API using HTTP triggers and implement a simple CQRS pattern using Queue triggers, with the precompiled approach. I hope this post offers more details.

The sample code used for this post can be found here.

SIDE NOTES:

  • We use VS2015 in this post. If you want to use VS2017, that would be fine.
  • We only use .NET Framework version 4.6 as Azure Functions uses this version.

Benefits of Precompiled Azure Functions

There are several benefits when we use Azure Functions that come as precompiled .dll files:

  1. We can use full features on Visual Studio, including IntelliSense.
  2. We can easily write unit test codes.
  3. We can easily attach function codes to existing CI/CD pipelines.
  4. We can easily migrate the existing codebase with barely modifying them.
  5. We don’t need project.json for NuGet package management.
  6. We can reduce the total cold start time by removing on-the-fly compilation when requests hit the Functions.

Let’s look at the scenario we’re using in this post.

Web APIs to Create or View Product Details

We’re creating two Web API endpoints – one to add/update product details, and the other to view product details. We also add another function using an Azure Storage Queue to implement the CQRS pattern.

Entity Framework Code-First Approach for Product Details

First of all, we’re creating a class library to take care of database transactions using Entity Framework. Here are a simple product class and database context.
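Something along these lines (a sketch; the actual classes in the sample repository may differ):

using System.Data.Entity;

public class Product
{
    public int ProductId { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public class ProductDbContext : DbContext
{
    public ProductDbContext() : base("Name=ProductDbContext")
    {
    }

    public DbSet<Product> Products { get; set; }
}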

We can find this at the PrecompiledSample.EntityModels project.

API Request/Response Object

Next, we create a simple DTO, ProductModel.cs, so that we don’t directly expose the database entity.

We can find this at the PrecompiledSample.Models project.

Service Layer

Lastly, when we develop Web API applications, we usually implement service layers that are called by controllers. Since there are no controllers in Azure Functions, those service layers are instantiated within the Run method of the Azure Functions code.
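A sketch of such a service layer (the interface shape and the ProductModel properties are assumptions):

using System.Data.Entity;
using System.Threading.Tasks;

public interface IProductService
{
    Task<Product> GetProductAsync(int productId);
    Task<int> AddProductAsync(ProductModel model);
}

public class ProductService : IProductService
{
    public async Task<Product> GetProductAsync(int productId)
    {
        using (var db = new ProductDbContext())
        {
            return await db.Products
                           .SingleOrDefaultAsync(p => p.ProductId == productId)
                           .ConfigureAwait(false);
        }
    }

    public async Task<int> AddProductAsync(ProductModel model)
    {
        using (var db = new ProductDbContext())
        {
            // Map the DTO to the entity and persist it.
            db.Products.Add(new Product { ProductId = model.ProductId, Name = model.Name, Price = model.Price });
            return await db.SaveChangesAsync().ConfigureAwait(false);
        }
    }
}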

We can find this at the PrecompiledSample.Services project.

So far, have you found out any difference? Probably not. Now, it’s time to write Azure Functions code.

Creating Web Application Project for Azure Functions

As Azure Functions basically run on top of an Azure Web App instance, we can start with an empty ASP.NET Web Application project, targeting .NET Framework 4.6, within Visual Studio.

Of course there’s nothing in the project, except packages.config and Web.config.

Install NuGet packages below to run Azure Functions within our Web Application project:

For our simple Web API’s database transactions, we need a few more NuGet packages:

We’ve completed our basic setup for Azure Functions code. Let’s move on.

Writing Azure Functions

Let’s think about the Azure Functions app structure. Each folder of the Web Application project works as an individual function. Therefore, we can simply create folders and put function.json in each of them. Let’s have a look.

  • AddProductHttpTrigger: This is an HTTP Trigger that works as an API endpoint. It takes POST requests, sends the message body to an Azure Storage Queue and returns an HTTP status code of 202 (Accepted) right away.
  • AddProductQueueTrigger: This is a Queue Trigger that handles the database transaction. It watches the Azure Storage Queue, takes messages from it and processes them into the database.
  • GetProductHttpTrigger: This is an HTTP Trigger that works as an API endpoint. It takes GET requests and returns messages with an HTTP status code of 200 (OK).

Here’s a high-level diagram how POST request is processed through Functions.

We mentioned function.json just above. What do we put inside it, then? The easiest way to check its content is to just create a function on an Azure Functions instance. Let’s have a look.

AddProductHttpTrigger

If we create an HTTP Trigger function from Azure Portal, we can find the function.json. It looks like:

There are two bindings defined – one for the input binding and the other for the output binding. The input binding has a type of httpTrigger, while the output binding has a type of http. Copy it all and paste it into the function.json in our Visual Studio project, and add another output binding with the type of queue. Also, we need to provide the precompiled .dll file’s information – the scriptFile and entryPoint properties – like below:

As defined above, we need AddProductHttpTrigger.cs.

Copy the code from Run.csx of Azure Portal and paste it into the file, then modify it like below:
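A sketch of the result (the namespace and the queue binding name are assumptions; the binding parameter name must match the name in function.json):

namespace PrecompiledSample.Functions
{
    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;

    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Host;

    public static class AddProductHttpTrigger
    {
        public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, IAsyncCollector<string> queue, TraceWriter log)
        {
            log.Info("AddProductHttpTrigger has been invoked.");

            // Send the raw message body straight to the Storage Queue – no processing here.
            var body = await req.Content.ReadAsStringAsync().ConfigureAwait(false);
            await queue.AddAsync(body).ConfigureAwait(false);

            // Non-blocking, async behaviour – return 202 (Accepted) immediately.
            return req.CreateResponse(HttpStatusCode.Accepted);
        }
    }
}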

So, what are the differences between our code and Run.csx?

  1. There’s no #r directive. This is because the function code is not a script anymore.
  2. There are namespace and class name definitions. The previous script-style function code didn’t have those definitions around the Run method.
  3. Message body from the request is directly sent to the queue.
  4. At the same time, it returns an HTTP status code of 202 (Accepted). This is more semantically correct than returning 200 (OK), as this is a non-blocking, async function.

We’ve now created the API endpoint to create a resource, which is corresponding to the C of the CQRS pattern.

AddProductQueueTrigger

When we create a sample Queue Trigger function, we can see its function.json. Copy and paste it into our function.json in Visual Studio and define the entry point like:

Create the AddProductQueueTrigger.cs file like below:

Copy the code from Run.csx, paste it into the file and modify it like below:
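A sketch of the result, reusing the service layer from the PrecompiledSample.Services project (the namespace is an assumption):

namespace PrecompiledSample.Functions
{
    using System.Threading.Tasks;

    using Microsoft.Azure.WebJobs.Host;

    using Newtonsoft.Json;

    using PrecompiledSample.Models;
    using PrecompiledSample.Services;

    public static class AddProductQueueTrigger
    {
        public static async Task Run(string queueItem, TraceWriter log)
        {
            log.Info("AddProductQueueTrigger has been invoked.");

            // The queue message is the JSON payload originally posted to AddProductHttpTrigger.
            var model = JsonConvert.DeserializeObject<ProductModel>(queueItem);

            // The service layer takes care of the database transaction.
            var service = new ProductService();
            await service.AddProductAsync(model).ConfigureAwait(false);
        }
    }
}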

This function actually does database transactions. Let’s have a look:

  1. While normal web applications refer to Web.config for the database connection string, Azure Functions is not designed to read from it. Instead, it uses the App Settings blade of the Azure Functions Portal, where the database connection string is defined. Despite this, we can still write code the same way: ConfigurationManager.ConnectionStrings["NAME"].ConnectionString.
  2. If we want to use additional configuration values, use ConfigurationManager.AppSettings["KEY"], which is defined in the App Settings blade of the Azure Functions Portal. We can’t use custom configuration sections for this purpose. Alternatively, if you really want custom configuration settings, create a JSON file, mysettings.json for example, and deserialise it using Json.NET.
  3. It uses the service layer instance written in another project, PrecompiledSample.Services. If you are concerned about dependencies, consider the service locator pattern. Testing Precompiled Azure Functions explains how to apply the service locator pattern in Azure Functions.

So far, we have written function codes for resource creation.

GetProductHttpTrigger

Copy the function.json from the HTTP Trigger function created earlier in the portal, paste it into the function.json in Visual Studio, and modify it like:

Then, create the GetProductHttpTrigger.cs file.

Here’s the code for it:
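A sketch of the function (again, the namespace is an assumption):

namespace PrecompiledSample.Functions
{
    using System;
    using System.Linq;
    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;

    using Microsoft.Azure.WebJobs.Host;

    using PrecompiledSample.Services;

    public static class GetProductHttpTrigger
    {
        public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
        {
            log.Info("GetProductHttpTrigger has been invoked.");

            // Look up the "id" query parameter and use it as the ProductId.
            var id = req.GetQueryNameValuePairs()
                        .FirstOrDefault(q => q.Key.Equals("id", StringComparison.OrdinalIgnoreCase))
                        .Value;

            var service = new ProductService();
            var product = await service.GetProductAsync(Convert.ToInt32(id)).ConfigureAwait(false);

            return req.CreateResponse(HttpStatusCode.OK, product);
        }
    }
}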

Let’s have a look at the code.

  1. This function takes a GET request, so it looks up the id parameter from the querystring and uses it as the ProductId value.
  2. It uses the service layer instance that is written in another project, PrecompiledSample.Services.
  3. It returns the resource details with HTTP Status Code of 200 (OK).

So far, we have created the resource lookup API, corresponding to the Q of the CQRS pattern. We’ve completed all the necessary function code. If we integrate this Web Application project with the Azure Functions CLI, we can easily perform testing and debugging in our local environment with the same development experience that Visual Studio offers.

Setting Up Debugging Environment for Azure Functions within Visual Studio

We need two more tools for our local debugging experience within Visual Studio.

In order to install the Azure Functions CLI, we can simply run the command npm install --global azure-functions-cli. Note that these tools only work on Windows at the time of writing. Once both are installed, open the project property window like below:

Move to the Web tab and enter the necessary information.

When we integrate the Azure Functions CLI with the Web Application project, there are a few points we need to check:

  • The installed location of node.js might be different:
    • If it is downloaded from https://nodejs.org, the CLI location would be C:\Users\[USERNAME]\AppData\Roaming\npm\node_modules\azure-functions-cli\bin\func.exe.
    • If it is installed through NVM, the CLI would be located at C:\Program Files\nodejs\node_modules\azure-functions-cli\bin\func.exe.
  • For Command line arguments, the value should be host start, which runs the WebJobs host on our local machine.
  • Working directory needs the absolute path of the Web Application project where the Azure Functions code resides.

Unfortunately, we can’t use % environment variables here.

Finally, we need to add two .json files – appsettings.json and host.json. Unless necessary, host.json is empty. On the other hand, we need to put some details into appsettings.json like below:

As we’re using the Azure Storage Emulator, the value UseDevelopmentStorage=true is used. The database connection string is also defined in this file.

Now, it’s time for debugging! Set the Web Application project as a startup project and punch the F5 key, then we’ll be able to see the command prompt window that is running Azure Functions CLI.

Let’s send a POST request through a REST API testing tool like Postman. As we can see in the screenshot above, the endpoint URL for resource creation is http://localhost:7071/api/AddProductHttpTrigger, so send a POST request like below:

Then the code stops at the breakpoint we set up in Visual Studio.

How does that feel? We get the same development experience for Azure Functions development. Now, it’s time to deploy to the actual Azure Functions instance.

Deploying Azure Functions

We have developed Azure Function codes within a Web Application project. That means we will have the same deployment experience.

When we choose the Publish menu like above, we can select either the Azure App Service option

or import a publish profile settings file downloaded from the Azure Functions instance.

Once we complete the deployment, we can confirm in the Azure Functions portal that all the function code has been successfully deployed. Please note that we don’t need to build a separate web application project for each function; putting everything in one web application project is sufficient.

Once deployed, let’s send a POST request to the endpoint for the AddProductHttpTrigger function. The request body will flow through the diagram mentioned earlier.

Once data has been processed, let’s check the result on the Azure Function side:

And here’s the database query result.

So far, we have built sample Azure Functions code using a Web Application project in Visual Studio. We have used .dll files instead of .csx files for the function code. With these precompiled .dll libraries, we have performed debugging and deployment as well. What do you think? Is it easier to use? Does it give you the same development experience? It may not be easy at first glance. However, because this is the same approach we use to develop a web application, we can easily get used to it.

I hope this post helps you write Azure Functions code with full support from Visual Studio.

Calling WCF client proxies in Azure Functions

Azure Functions allows developers to write discrete units of work and run these without having to deal with hosting or application infrastructure concerns. Azure Functions is Microsoft’s answer to server-less computing on the Azure platform and, together with Azure ServiceBus, Azure Logic Apps and Azure API Management (to name just a few), has become an essential part of the Azure iPaaS offering.

The problem

Integration solutions often require connecting legacy systems using deprecated protocols such as SOAP and WS-*. It’s not all REST, hypermedia and OData out there in the enterprise integration world. Development frameworks like WCF help us deliver solutions rapidly by abstracting much of the boilerplate code away from us. Often these frameworks rely on custom configuration sections that are not available when developing solutions in Azure Functions. In Azure Functions (as of today at least) we only have access to the generic appSettings and connectionStrings sections of the configuration.

How do we bridge the gap and use the old boilerplate code we are familiar with in the new world of server-less integration?

So let’s set the scene. Your organisation consumes a number of legacy B2B services exposed as SOAP web services. You want to be able to consume these services from an Azure Function but definitely do not want to be writing any low-level SOAP protocol code. We want to use the generated WCF client proxy so that we implement the correct message contracts, transport and security protocols.

In this post we will show you how to use a generated WCF client proxy from an Azure Function.

Start by generating the WCF client proxy in a class library project using Add Service Reference, provide details of the WSDL and build the project.

add_service_reference

Examine the generated bindings to determine the binding we need and what policies to configure in code within our Azure Function.

bindings

In our sample service above we need to create a basic http binding and configure basic authentication.

Create an Azure Function App using an appropriate template for your requirements and follow these steps to call your WCF client proxy:

Add the System.ServiceModel NuGet package to the function via the project.json file so we can create and configure the WCF bindings in our function
project_json

Add the WCF client proxy assembly to the ./bin folder of our function. Use Kudu to create the folder and then upload your assembly using the View Files panel.

upload_wcf_client_assembly

In your function, add references to both the System.ServiceModel assembly and your WCF client proxy assembly using the #r directive
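For example (the proxy assembly file name is an assumption – use whatever you uploaded to ./bin):

#r "System.ServiceModel"
#r "LegacyOrderService.Client.dll"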

When creating an instance of the WCF client proxy, instead of specifying the endpoint and binding in a config file, create these in code and pass them to the constructor of the client proxy.

Your function will look something like this:
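Here’s a sketch of the general shape. The proxy class (OrderServiceClient), its operation and the appSettings keys are all assumptions – substitute your own generated types and settings:

using System.Configuration;
using System.Net;
using System.Net.Http;
using System.ServiceModel;
using System.Threading.Tasks;

using LegacyOrderService.Client;   // namespace of the generated proxy (assumption)

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    // Create and configure the binding in code – basic http transport with basic authentication.
    var binding = new BasicHttpBinding(BasicHttpSecurityMode.Transport);
    binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Basic;

    // Endpoint address and credentials come from the Function App's appSettings.
    var endpoint = new EndpointAddress(ConfigurationManager.AppSettings["LegacyService.Endpoint"]);

    var client = new OrderServiceClient(binding, endpoint);
    client.ClientCredentials.UserName.UserName = ConfigurationManager.AppSettings["LegacyService.Username"];
    client.ClientCredentials.UserName.Password = ConfigurationManager.AppSettings["LegacyService.Password"];

    // Call the service using the strongly typed client proxy.
    var order = await client.GetOrderAsync("12345").ConfigureAwait(false);

    return req.CreateResponse(HttpStatusCode.OK, order);
}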

Lastly, add endpoint address and client credentials to appSettings of your Azure Function App.

Test the function using the built-in test harness to check that the function executes OK.

test_func

 

Conclusion

The suite of integration services available on the Azure Platform are developing rapidly and composing your future integration platform on Azure is a compelling option in a maturing iPaaS marketplace.

In this post we have seen how we can continue to deliver legacy integration solutions using emerging integration-platform-as-a-service offerings.

Automate the nightly backup of your Development FIM/MIM Sync and Portal Servers Configuration

Last week in a customer development environment I had one of those oh shit moments where I thought I’d lost a couple of weeks of work – a couple of weeks of development around multiple Management Agents, MV Schema changes etc. Luckily for me I was just connecting to an older VM image, but it got me thinking. It would be nice to have an automated process that each night would;

  • Export each Management Agent on a FIM/MIM Sync Server
  • Export the FIM/MIM Synchronisation Server Configuration
  • Take a copy of the Extensions Folder (where I keep my PowerShell Management Agents scripts)
  • Export the FIM/MIM Service Server Configuration

And that is what this post covers.

Overview

My automated process performs the following;

  1. An Azure PowerShell Timer Function WebApp is triggered at 2330 each night
  2. The Azure Function App initiates a Remote PowerShell session to my Dev MIM Sync Server (which is also a MIM Service Server)
  3. In the Remote PowerShell session the script;
    1. Creates a new subfolder under c:\backup with the current date and time (dd-MM-yyyy-hh-mm)
    2. Creates further subfolders for each of the backup elements
      1. MAExports
      2. ServerExport
      3. MAExtensions
      4. PortalExport
    3. Utilizes the Lithnet MIIS Automation PowerShell Module to;
      1. Enumerate each of the Management Agents on the FIM/MIM Sync Server and export each Management Agent to the MAExports Folder
      2. Export the FIM/MIM Sync Server Configuration to the ServerExport Folder
    4. Copies the Extensions folder and subfolder contents to the MAExtensions Folder
    5. Utilizes the FIM/MIM Export-FIMConfig cmdlet to export the FIM Service Configuration to the PortalExport Folder

Implementing the FIM/MIM Backup Process

The majority of the setup to get this to work I’ve covered in other posts, particularly around Azure PowerShell Function Apps and Remote PowerShell into a FIM/MIM Sync Server.

Pre-requisites

  • I created a C:\Backup Folder on my FIM/MIM Server. This is where the backups will be placed (you can change the path in the script).
  • I installed the Lithnet MIIS Automation PowerShell Module on my MIM Sync Server
  • I configured my MIM Sync Server to accept Remote PowerShell sessions. That involved enabling WinRM, creating a certificate, creating the listener, opening the firewall port and enabling the incoming port on the NSG. You can easily do all that by following my instructions here. From the same post I set up the encrypted password file and uploaded it to my Function App and set the Function App Application Settings for MIMSyncCredUser and MIMSyncCredPassword.
  • I created an Azure PowerShell Timer Function App. Pretty much the same as I show in this post, except choose Timer.
    • I configured my Schedule for 2330 every night using the following CRON configuration

0 30 23 * * *

  • I set the Azure Function App timezone to my timezone so that the nightly backup happened at the correct time relative to my timezone. I got my timezone index from here. I set the following variable in my Azure Function Application Settings to my timezone name, AUS Eastern Standard Time.

    WEBSITE_TIME_ZONE

The Function App Script

With all the pre-requisites met, the only thing left is the Function App script itself. Here it is. Update lines 2, 3 & 6 if your variables and password key file are different. The path to your password keyfile will be different on line 6 anyway.

Update line 25 if you want the backups to go somewhere else (maybe a DFS share).
If your MIM Service Server is not on the same host as your MIM Sync Server, change line 59 to the correct hostname. You’ll need to get the FIM/MIM Automation PS Modules onto your MIM Sync Server too. Details on how to achieve that are here.

Running the Function App, I get limited output but enough to see it run. The first part of the script runs very quickly; the Export-FIMConfig is what takes the majority of the time. That said, it takes less than a minute to get a nice point-in-time backup that is auto-magically executed nightly. Sorted.

 

Summary

The script itself can be run standalone and you could implement it as a Scheduled Task on your FIM/MIM Server. However I’m using Azure Functions for a number of things and having something that is easily portable and repeatable and centralised with other functions (pun not intended) keeps things organised.

I now have a daily backup of the configurations associated with my development environment. I’m sure this will save me some time in the near future.

Follow Darren on Twitter @darrenjrobinson