Azure Function Proxies for API Mocking

In my previous posts, Is Your Serverless Application Testable? – Azure Logic Apps and API Mocking for Developers, we looked at how to mock APIs with various approaches, including Azure API Management, AWS API Gateway, MuleSoft and Azure Functions. Quite recently, the Azure Functions team released a new mocking feature in Azure Function Proxies. With this feature, API mocking couldn't be easier. In this post, I'm going to show how to use this mocking feature in several ways.

Enable API Mocking via Azure Portal

This is pretty straightforward. Within the Azure Portal, simply enable the Azure Function Proxies feature:

If you have already enabled this feature, make sure that its runtime version is 0.2. Then click the plus sign next to Proxies (preview) and enter the details like:

At this stage, we may notice something interesting: this proxy doesn't have to link to an existing HTTP endpoint. In other words, we don't need a real API behind the mock; we just create a mocked HTTP endpoint and use it. Once we save this, we'll have an endpoint like:

Use Postman to hit this mocked URL:

We receive the mocked HTTP status code and response body. How easy!

But what if we have an existing Azure Functions app and want to integrate this mocking feature via a CI/CD pipeline? Let's move on.

Enable Azure Function Proxy Option via ARM Template

An Azure Functions app instance is essentially the same as a web app instance, so we can use the same ARM template definition for app settings configuration. In order to enable Azure Function Proxies, we simply add a ROUTING_EXTENSION_VERSION key with a value of ~0.2. Here's a sample ARM template definition:
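A cut-down sketch of such a definition – the resource names, plan reference and API version are placeholders:

```json
{
  "type": "Microsoft.Web/sites",
  "kind": "functionapp",
  "apiVersion": "2016-08-01",
  "name": "[parameters('functionAppName')]",
  "location": "[resourceGroup().location]",
  "properties": {
    "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('appServicePlanName'))]",
    "siteConfig": {
      "appSettings": [
        {
          "name": "ROUTING_EXTENSION_VERSION",
          "value": "~0.2"
        }
      ]
    }
  }
}
```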

Once we update our ARM template, we can simply deploy this template to Azure.

Enable Azure Function Proxy Option via PowerShell

If an ARM template is not feasible for enabling this proxy option, we can use a PowerShell script (or its Azure CLI equivalent) instead. Here's a sample script:
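A minimal sketch using the AzureRM module; note that Set-AzureRmWebApp replaces all app settings, so we merge the existing ones first:

```powershell
param(
    [string] $ResourceGroupName,
    [string] $FunctionAppName
)

# Fetch the existing app settings so we don't lose them.
$webApp = Get-AzureRmWebApp -ResourceGroupName $ResourceGroupName -Name $FunctionAppName

$settings = @{}
foreach ($setting in $webApp.SiteConfig.AppSettings) {
    $settings[$setting.Name] = $setting.Value
}

# Enable the proxies feature.
$settings["ROUTING_EXTENSION_VERSION"] = "~0.2"

Set-AzureRmWebApp -ResourceGroupName $ResourceGroupName -Name $FunctionAppName -AppSettings $settings
```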

This PowerShell script can then be invoked with parameters like:
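For example, assuming the script above is saved as Enable-FunctionProxies.ps1 (a name made up for illustration):

```powershell
.\Enable-FunctionProxies.ps1 -ResourceGroupName "my-resource-group" -FunctionAppName "my-function-app"
```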

Once either approach – ARM template or PowerShell – has been executed, the function app instance will have a value like:

Now, we have proxies feature enabled through code! Let’s move on.

Deploy Azure Function Proxy with Azure Function App

Defining Azure Function proxies is just a matter of adding another JSON file, proxies.json, to the app instance. The JSON schema definition can be found at http://json.schemastore.org/proxies. Basically, proxies.json looks like:
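A minimal sketch of such a file, matching the mocked endpoint described below:

```json
{
  "$schema": "http://json.schemastore.org/proxies",
  "proxies": {
    "mock-hello-world": {
      "matchCondition": {
        "methods": [ "GET" ],
        "route": "/mock/hello-world"
      },
      "responseOverrides": {
        "response.statusCode": "200",
        "response.headers.Content-Type": "application/json",
        "response.body": { "message": "Hello World" }
      }
    }
  }
}
```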

As defined above, the mocked API has a URL of /mock/hello-world, a response code of 200 and a body of { "message": "Hello World" }. As long as we follow this structure, we can add as many mocked APIs as we like.

Once we add this proxies.json to our Azure Functions project and deploy it to Azure, we'll be able to see the list of mocked APIs.

So far, we have looked at several ways to add mocked APIs to an Azure Functions instance. Depending on your preferences, any approach is fine – and this is arguably the easiest and cheapest way to mock API endpoints.

Mixing Parameters in Logic Apps with ARM Template

One of the most irritating moments of using Azure Logic Apps is writing a HUMONGOUS JSON object wrapped by an ARM template. The nature of a Logic App instance inevitably requires us to mix both ARM template expressions and Workflow Definition Language (WDL) expressions. At the time of this writing, there is no local validation tool covering the two, so we never know whether a Logic App instance will deploy correctly until it is deployed and run. In this post, we are going to discuss how to deal with parameters between Logic Apps and ARM templates, focusing especially on string concatenation.

Scenario

Here is a simple user story:

  • As a site moderator of SharePointOnline,
  • I want to receive a notification email when a file is uploaded or updated,
  • So that I can delete the file or leave it.

Defining Logic App

Based on the user story, the graphical representation of a Logic App instance might look like:

  • Every 3 minutes, the trigger checks the designated folder for newly uploaded or modified files.
  • If there is a new or modified file, an email notification is sent to the designated moderator through a webhook action.
  • In the email, the moderator can choose an option to delete the file or not.
  • Based on the option that the moderator has chosen, either the file deletion action or no action is taken.
  • If there is an error while deleting the file, the error will be captured.

The complete definition and its ARM template wrapper used for this scenario can be found here.

Let’s have a look at more details.

Trigger

This workflow is triggered by a scheduled event that watches a folder in SharePointOnline. The trigger part might look like:
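A simplified sketch of such a trigger – the site URL and folder ID are placeholders:

```json
"triggers": {
  "When_a_file_is_created_or_modified": {
    "type": "ApiConnection",
    "recurrence": {
      "frequency": "Minute",
      "interval": 3
    },
    "inputs": {
      "host": {
        "connection": {
          "name": "@parameters('$connections')['sharepointonline']['connectionId']"
        }
      },
      "method": "get",
      "path": "/datasets/@{encodeURIComponent('https://mycompany.sharepoint.com/sites/my-site')}/triggers/onupdatedfile",
      "queries": {
        "folderId": "%2fShared%20Documents"
      }
    }
  }
}
```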

Looks good, right? Nah, it's actually not: both the path and folderId values are hard-coded into the Logic App instance, which is not desirable from an automation point of view. Instead, those values should be parametrised for future changes. Hence, we define parameters within the parameters section of the ARM template like:
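For instance, a spoSiteName parameter might be defined like this (the description is a placeholder):

```json
"parameters": {
  "spoSiteName": {
    "type": "string",
    "metadata": {
      "description": "The SharePointOnline site URL, e.g. https://mycompany.sharepoint.com/sites/my-site"
    }
  }
}
```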

And this parameter value is referenced within the Logic App definition like:
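Something like this, where the parameters() call now sits inside the workflow definition:

```json
"path": "/datasets/@{encodeURIComponent(parameters('spoSiteName'))}/triggers/onupdatedfile"
```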

This is deployed without an error. Looks all good, right? Well, let’s see the picture:

The site address field is not showing the actual value. That’s fine, as long as we can run the workflow. When we run the workflow, now we see what has happened:

The parameters('spoSiteName') expression that we replaced the hard-coded value with was not recognised within the workflow definition. What the…?

Parameters in ARM Template Expressions and WDL Expressions

The issue is simple. Both ARM templates and WDL have their own parameters object. We defined the parameter in the ARM template, but the workflow looks for it within its own workflow definition. Now that we know what the problem is, we also know how to sort it out: 1) use parameters in the workflow definition, or 2) pass the parameter value from the ARM template to the workflow definition. Let's try the first one.

Defining Value in Workflow Definition Parameters

In an ARM template, there is usually only one place where we define parameters. However, when the ARM template wraps a Logic App, it brings two more parameters fields into the template. Let's see the code snippet:

Under the Logic App resource definition, there is a parameters field for connectors. There is another one under the Logic App workflow definition. We can use both places like:
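A skeleton showing both places; the logicAppName parameter is assumed:

```json
{
  "type": "Microsoft.Logic/workflows",
  "apiVersion": "2016-06-01",
  "name": "[parameters('logicAppName')]",
  "location": "[resourceGroup().location]",
  "properties": {
    "parameters": {
      "spoSiteName": {
        "value": "[parameters('spoSiteName')]"
      }
    },
    "definition": {
      "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
      "contentVersion": "1.0.0.0",
      "parameters": {
        "spoSiteName": {
          "type": "string",
          "defaultValue": ""
        }
      },
      "triggers": {},
      "actions": {},
      "outputs": {}
    }
  }
}
```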

Wait, it still confuses the reference to parameters. That's not the right way, because the parameter value is a mixture of ARM template expressions and WDL expressions. How about using only template expressions and replacing the WDL expressions with template functions? Let's try again:
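A sketch of that attempt, with spoSitePath as an assumed parameter name:

```json
"properties": {
  "parameters": {
    "spoSitePath": {
      "value": "[concat('/datasets/', uriComponent(parameters('spoSiteName')), '/triggers/onupdatedfile')]"
    }
  }
}
```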

This time, the parameter value under the Logic App resource definition uses only template expressions, with square brackets and functions. As we can see, the WDL function encodeURIComponent() has been replaced with the template function uriComponent().

Deployment is successful, but something strange has happened to the trigger: it doesn't show the endpoint.

And running the workflow is…

Something weird. It’s not fired! But the log says it points the correct endpoint. WHAAAAT?

Passing Value from ARM Template to Workflow Definition – Direct Way

The parameters section within the Logic App resource or workflow definition is probably not the correct place. Let's take that out and apply the parameter value directly from the ARM template to the workflow definition like:
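A sketch of the direct approach, applied to the trigger above:

```json
"triggers": {
  "When_a_file_is_created_or_modified": {
    "type": "ApiConnection",
    "inputs": {
      "method": "get",
      "path": "[concat('/datasets/', uriComponent(parameters('spoSiteName')), '/triggers/onupdatedfile')]"
    }
  }
}
```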

Then its deployed result might look like:

Now the endpoint has come back! Looks a little bit ugly, though. But that’s fine as long as it works. Let’s run the workflow again.

Yay! It’s running now! So, it seems that mixing two parameters from different sections is not a good idea. Instead, just use the ARM template expressions inside the Logic App definition when the same situation comes. But, there is still a potential issue. As we can see the picture above, the value is URL encoded, which we possibly make mistake if we change the value manually. How can we keep the same experience as what we write the Logic App definition?

Passing Value from ARM Template to Workflow Definition – Indirect Way

There is another approach. Basically, what we are facing is handling string values in different places. What about composing the WDL expression as a string value within the ARM template expressions? Let's change the ARM template again. This time, we are not going to use the parameters object in the Logic App resource definition nor in the workflow definition; instead, we are going to use the parameters object and variables object of the ARM template itself.
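A sketch of this approach; note the doubled single quotes, which is how ARM templates escape a quote inside a string:

```json
"variables": {
  "spoSitePath": "[concat('/datasets/@{encodeURIComponent(''', parameters('spoSiteName'), ''')}/triggers/onupdatedfile')]"
}
```

The trigger then references it like:

```json
"path": "[variables('spoSitePath')]"
```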

Notice that, in the variables object, we use the string concatenation function that the ARM template provides, together with all the WDL expressions and functions – the @ sign, the encodeURIComponent() function and single-quote escaping. This is the final deployment result:

Instead of a URL-encoded value, we got a human-readable one! In the deployed code, it actually looks like:

The workflow definition keeps the original value even though we parametrised it, which is much nicer. Finally, the running result looks like this:

So far, we have sorted out how to mix ARM template functions and Logic Apps' WDL functions, especially around those conflicting parameters objects. I'm pretty sure there must be a better way, but until Azure provides one, we should probably stick with this approach to avoid string concatenation issues like these.

Make Your Azure WebJobs Unit Testable Again

In my previous posts, we had a look at testability for serverless applications. In this post, we’re going to apply a similar approach to Azure WebJobs.

The sample code used in this post can be found here.

A scheduled or on-demand WebJob instance is just a console application, so there's nothing special to deal with. A continuously running WebJob instance, however, is a different story. Let's have a look at the basic WebJob code.

This is the main WebJob host entry. Within the Main() method, it creates a JobHostConfiguration instance and a JobHost instance for a continuously running WebJob application. Below that comes the actual WebJob function.
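As a minimal sketch, assuming a simple queue-triggered function (the queue name and function body are placeholders):

```csharp
using Microsoft.Azure.WebJobs;

public class Program
{
    public static void Main()
    {
        var config = new JobHostConfiguration();
        var host = new JobHost(config);

        // Keeps the WebJob running continuously until the process stops.
        host.RunAndBlock();
    }
}
```

And the function itself, in its typical static form:

```csharp
using System.IO;
using Microsoft.Azure.WebJobs;

public class Functions
{
    // Triggered whenever a message lands on the "my-queue" storage queue.
    public static void ProcessQueueMessage([QueueTrigger("my-queue")] string message, TextWriter log)
    {
        log.WriteLine(message);
    }
}
```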

As we can see, it has the static modifier, which makes it hard to test. We could use the service locator pattern on this static method to cope with testability, like we do for Azure Functions triggers. However, that should only be done when the service locator is the only option. Luckily, Azure WebJobs provides an interface called IJobActivator, which helps us easily integrate a WebJob with an IoC container. WebJobActivator.Autofac is one implementation of IJobActivator using Autofac. Let's have a look at how the NuGet package implements the interface.

DISCLAIMER: I am the author of the NuGet package, WebJobActivator.Autofac.

Extending IJobActivator

When we decompile the IJobActivator interface, we see that it contains only one method, CreateInstance<T>(), which is insufficient for any IoC container library because we need to register dependencies beforehand. Therefore, we extend the interface into IWebJobActivator. The new interface declares another method, RegisterDependencies<THandler>(THandler handler = default(THandler)).
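Sketched out, this might look like the following (the SDK interface is shown as a comment for reference):

```csharp
using Microsoft.Azure.WebJobs.Host;

// IJobActivator ships with the WebJobs SDK and, decompiled, looks like:
//
//     public interface IJobActivator
//     {
//         T CreateInstance<T>();
//     }

// A sketch of the extended interface described above.
public interface IWebJobActivator : IJobActivator
{
    void RegisterDependencies<THandler>(THandler handler = default(THandler))
        where THandler : RegistrationHandler;
}
```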

As the RegistrationHandler class is optional, we can omit it in an actual implementation. The handler is rather useful for unit testing, though; we'll touch on this later in the post. Here's the abstract class that implements IWebJobActivator:
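A sketch of that abstract class:

```csharp
public abstract class WebJobActivator : IWebJobActivator
{
    public abstract T CreateInstance<T>();

    // The base implementation barely does anything: actual registration
    // requires a concrete IoC container in a derived class.
    public virtual void RegisterDependencies<THandler>(THandler handler = default(THandler))
        where THandler : RegistrationHandler
    {
    }
}
```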

As we can see, the implemented method barely does any registration on its own. We need an IoC container library here.

Integrating with Autofac

Now, let’s implement the extended interface with Autofac. Here’s the actual implementation:

With Autofac, AutofacJobActivator does the actual dependency registration. One of the most powerful features Autofac offers is the Module: we can register dependencies grouped as a module. For our purpose, the RegistrationHandler is a perfect place to surface such a module. Here's the handler implementation:
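Sketched, assuming the Action property registers against Autofac's ContainerBuilder:

```csharp
using System;
using Autofac;

public abstract class RegistrationHandler
{
}

public class AutofacRegistrationHandler : RegistrationHandler
{
    // Registers an Autofac module (or any other registrations) against the builder.
    public Action<ContainerBuilder> RegisterModule { get; set; }
}
```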

As we can see, AutofacRegistrationHandler exposes an Action property used to register a module. Here's a sample module:
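A sample module might look like this; IMessageProcessor and MessageProcessor are hypothetical dependencies used by the function:

```csharp
using Autofac;

public class WebJobModule : Module
{
    protected override void Load(ContainerBuilder builder)
    {
        // Register whatever dependencies the WebJob functions need.
        builder.RegisterType<MessageProcessor>().As<IMessageProcessor>();

        // Register the function class itself so the activator can resolve it.
        builder.RegisterType<Functions>();
    }
}
```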

We’re all set. Let’s change the Program.cs to apply IJobActivator.

Enabling IJobActivator

The Main() method can be modified like this:
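A sketch, following the shapes assumed above:

```csharp
using Autofac;
using Microsoft.Azure.WebJobs;

public class Program
{
    public static void Main()
    {
        var handler = new AutofacRegistrationHandler
        {
            RegisterModule = builder => builder.RegisterModule<WebJobModule>()
        };

        var activator = new AutofacJobActivator();
        activator.RegisterDependencies(handler);

        var config = new JobHostConfiguration { JobActivator = activator };
        var host = new JobHost(config);
        host.RunAndBlock();
    }
}
```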

Inside the method, we instantiate AutofacRegistrationHandler and register WebJobModule through it, instantiate AutofacJobActivator and register dependencies using the handler, and assign the activator to JobHostConfiguration. Now we can finally remove all the static modifiers from the function class:
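With the activator in place, the function class might now look like this (IMessageProcessor remains a hypothetical dependency):

```csharp
using System.IO;
using Microsoft.Azure.WebJobs;

public class Functions
{
    private readonly IMessageProcessor _processor;

    // Dependencies now arrive through the constructor, resolved by the activator.
    public Functions(IMessageProcessor processor)
    {
        _processor = processor;
    }

    public void ProcessQueueMessage([QueueTrigger("my-queue")] string message, TextWriter log)
    {
        _processor.Process(message);
        log.WriteLine(message);
    }
}
```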

So far, we’ve extended IJobActivator to IWebJobActivator and implemented AutofacJobActivator to configure JobHost. However, I’m still not happy because there are two instances remain tightly coupled within the Main() method – JobHostConfiguration and JobHost. We also need to decouple those two instances.

Implementing JobHostConfigurationBuilder

JobHostConfigurationBuilder is basically a wrapper class around JobHostConfiguration, with a fluent method, AddConfiguration(Action<JobHostConfiguration>). Let's have a look.
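A sketch of the builder:

```csharp
using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;

public class JobHostConfigurationBuilder
{
    private readonly JobHostConfiguration _config;

    public JobHostConfigurationBuilder(IJobActivator activator)
    {
        _config = new JobHostConfiguration { JobActivator = activator };
    }

    // Fluent method applying additional settings to the underlying configuration.
    public JobHostConfigurationBuilder AddConfiguration(Action<JobHostConfiguration> configure)
    {
        configure?.Invoke(_config);
        return this;
    }

    public JobHostConfiguration Build()
    {
        return _config;
    }
}
```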

This builder class gets the activator instance as a parameter and internally creates an instance of the JobHostConfiguration. Now, we’ve got JobHostConfiguration decoupled. Let’s move on.

Implementing JobHostBuilder

This is the last step. JobHostBuilder is another wrapper class, this time around JobHost, and here's the implementation.
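A sketch, including an IJobHostBuilder interface that the final Program.cs will depend on:

```csharp
using System;
using Microsoft.Azure.WebJobs;

public interface IJobHostBuilder
{
    IJobHostBuilder AddConfiguration(Action<JobHostConfiguration> configure);
    JobHost Build();
    void Run(JobHost host);
}

public class JobHostBuilder : IJobHostBuilder
{
    private readonly JobHostConfigurationBuilder _configBuilder;

    public JobHostBuilder(JobHostConfigurationBuilder configBuilder)
    {
        _configBuilder = configBuilder;
    }

    public IJobHostBuilder AddConfiguration(Action<JobHostConfiguration> configure)
    {
        _configBuilder.AddConfiguration(configure);
        return this;
    }

    public JobHost Build()
    {
        return new JobHost(_configBuilder.Build());
    }

    public void Run(JobHost host)
    {
        host.RunAndBlock();
    }
}
```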

Nothing special. It takes the JobHostConfigurationBuilder, internally creates a JobHostConfiguration instance, creates a JobHost instance, and runs the host. This now needs to be integrated with Autofac like:
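A sketch of the Autofac integration:

```csharp
using Autofac;

public class AutofacJobHostBuilder : JobHostBuilder
{
    public AutofacJobHostBuilder(Module module)
        : base(new JobHostConfigurationBuilder(CreateActivator(module)))
    {
    }

    private static AutofacJobActivator CreateActivator(Module module)
    {
        var handler = new AutofacRegistrationHandler
        {
            RegisterModule = builder => builder.RegisterModule(module)
        };

        var activator = new AutofacJobActivator();
        activator.RegisterDependencies(handler);
        return activator;
    }
}
```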

AutofacJobHostBuilder accepts an Autofac module instance, which helps register dependencies. We're all set now. Let's make another change to Program.cs:

Putting Them All Together

This is the final version of Program.cs with full testability.
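A sketch of how that final version might look, under the assumptions above:

```csharp
public class Program
{
    // Exposed as a static property so that tests can swap in a mocked builder.
    public static IJobHostBuilder WebJobHost { get; set; } =
        new AutofacJobHostBuilder(new WebJobModule())
            .AddConfiguration(config => config.Queues.BatchSize = 16);

    public static void Main()
    {
        var host = WebJobHost.Build();
        WebJobHost.Run(host);
    }
}
```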

As a static property, WebJobHost holds the AutofacJobHostBuilder instance with its dependencies and additional configurations. Within the Main() method, it then builds the JobHost and runs the instance. During unit tests, this Program.cs is also testable, by mocking and injecting a dependency of IJobHostBuilder like:
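A sketch of such a test, using Moq and xUnit:

```csharp
using Microsoft.Azure.WebJobs;
using Moq;
using Xunit;

public class ProgramTests
{
    [Fact]
    public void Given_MockedBuilder_Main_Should_Use_Builder()
    {
        var builder = new Mock<IJobHostBuilder>();
        builder.Setup(p => p.Build()).Returns((JobHost)null);

        // Inject the mock; no Storage Emulator or real JobHost is involved.
        Program.WebJobHost = builder.Object;
        Program.Main();

        builder.Verify(p => p.Build(), Times.Once);
        builder.Verify(p => p.Run(It.IsAny<JobHost>()), Times.Once);
    }
}
```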

So far, we’ve walked through how an Azure WebJob application can be written by being decoupled and considering testability. With this approach, we don’t even need to run Storage Emulator in our local environment for testing. Happy coding!

Swashbuckle Pro Tips for ASP.NET Web API – XML

In the previous post, we implemented Swashbuckle's IOperationFilter to handle example objects in combination with AutoFixture. According to the Swagger spec, Swagger doesn't only handle JSON payloads; it also copes with XML payloads. In this post, we're going to use the Swashbuckle library again to handle XML payloads properly.

The sample code used in this post can be found here.

Acknowledgement

The sample application uses the following spec:

Built-In XML Support of Web API

ASP.NET Web API provides the HttpConfiguration instance out-of-the-box, and it handles serialisation/deserialisation of both JSON and XML payloads. Therefore, without any extra effort, we can see the Swagger UI pages showing both JSON and XML payloads.


As Swashbuckle is a JSON-biased library, camelCasing in JSON payloads is well supported, while XML payloads are not: all XML nodes come out PascalCased, which is inconsistent. How can we support camel-casing in both JSON and XML? Here's the tip:
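The tip boils down to the JSON contract resolver; a sketch:

```csharp
using System.Web.Http;
using Newtonsoft.Json.Serialization;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Oddly enough, Swashbuckle renders its XML examples from the JSON
        // contract resolver, so camel-casing JSON also camel-cases the XML nodes.
        config.Formatters.JsonFormatter.SerializerSettings.ContractResolver =
            new CamelCasePropertyNamesContractResolver();
    }
}
```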

It’s a little bit strange that, in order for XML to support camel-casing, we should change JSON serialisation settings, by the way. Go back to the Swagger UI screen and we’ll be able to find out all XML nodes except the root one are now camel-cased. The root node will be dealt later on.

Let’s send a REST API request to the endpoint using an XML payload.

However, the model instance is null! The Web API action can’t recognise the XML payload.

This is because the HttpConfiguration instance uses the DataContractSerializer for payload serialisation/deserialisation by default, and that doesn't work as expected here. What if we change the serialiser from DataContractSerializer to XmlSerializer? That would work. Add the following line of code:
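The line in question:

```csharp
config.Formatters.XmlFormatter.UseXmlSerializer = true;
```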

Go back to the screen again and we’ll confirm the same screen as before.

What was the change? Let’s send an XML request again with some value modifications.

Now the Web API action recognises the XML payload, but something weird has happened: we sent lorem ipsum and 123, but they weren't passed through. What's wrong this time? There might still be a casing issue. So, instead of a camel-cased payload, let's use a Pascal-cased XML payload and see how it goes.

Here we go! Now the API action method can get the XML payload properly!

In other words, the Swagger UI screen showed a proper camel-cased payload, but that wasn't the payload actually being accepted. What could be the cause? Previously, we set the UseXmlSerializer property to true, but it did nothing by itself: with XmlSerializer in charge, all payload models should carry XmlRoot and/or XmlElement decorators. Let's update the payload models like:
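A sketch of the updated models, assuming a string and an integer property as in the requests above:

```csharp
using System.Xml.Serialization;

[XmlRoot("request")]
public class ValueRequestModel
{
    [XmlElement("value")]
    public string Value { get; set; }

    [XmlElement("count")]
    public int Count { get; set; }
}

[XmlRoot("response")]
public class ValueResponseModel
{
    [XmlElement("value")]
    public string Value { get; set; }

    [XmlElement("count")]
    public int Count { get; set; }
}
```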

Send the XML request again and we’ll get the payload!


However, it’s still not quite right. Let’s have a look at the actual payload and example payload.

The example payload defines ValueRequestModel as its root node, while the actual payload uses request, as declared in the XmlRoot decorator. Where should we change this? The Swagger spec defines the xml object for this case, and it can easily be set by implementing Swashbuckle's ISchemaFilter interface.

Defining xml in Schema Object

Here’s the sample code defining the request root node by implementing the ISchemaFilter interface.

It applies the Visitor Pattern to filter out only the designated types.

Once implemented, the Swagger configuration needs to include the filter, registering both ValueRequestModel and ValueResponseModel so that the request model gets request as its root node and the response model gets response. Try the Swagger UI screen again.
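The registration might look like this:

```csharp
GlobalConfiguration.Configuration
    .EnableSwagger(c =>
    {
        c.SingleApiVersion("v1", "Sample Web API");
        c.SchemaFilter<XmlRootNameFilter>();
    })
    .EnableSwaggerUi();
```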

Now the XML payload looks great and works great.

So far, we’ve briefly walked through how Swagger document deals with XML payloads, using Swashbuckle.

Swashbuckle Pro Tips for ASP.NET Web API – Example(s) Using AutoFixture

In the previous post, we implemented Swashbuckle's IOperationFilter to emit the consumes and produces properties in a Swagger document. This post implements another IOperationFilter to emit example(s) properties containing values auto-generated by AutoFixture.

The sample code used in this post can be found here.

Acknowledgement

The sample application uses the following spec:

Defining example(s) in Operation Object

There are a few places, in a Swagger definition, where example objects are declared:

  • example property in parameter
  • examples property in responses
  • example field in definitions

The Swashbuckle library automatically generates that example data with default values based on the data types. In other words, if the data type is string, the example value becomes string; if the data type is number, the example value becomes 0; and so on:

Is there any way to get more meaningful data into those example objects? Of course there is. The Swashbuckle.Examples NuGet package helps us create more meaningful example objects. Download the package and write some code like:
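A sketch of an examples provider, assuming a ValueResponseModel with Value and Count properties:

```csharp
using Swashbuckle.Examples;

public class ValueResponseModelExample : IExamplesProvider
{
    // Returns a hard-coded, but meaningful, example object.
    public object GetExamples()
    {
        return new ValueResponseModel
        {
            Value = "lorem ipsum",
            Count = 123
        };
    }
}
```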

Then add a decorator to an action of Web API:
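A sketch, assuming a simple GET action:

```csharp
[SwaggerResponse(HttpStatusCode.OK, Type = typeof(ValueResponseModel))]
[SwaggerResponseExample(HttpStatusCode.OK, typeof(ValueResponseModelExample))]
public IHttpActionResult Get()
{
    return Ok(new ValueResponseModel { Value = "lorem ipsum", Count = 123 });
}
```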

The first one, SwaggerResponse, is a built-in Swashbuckle decorator, and the second one, SwaggerResponseExample, is what we're going to use for example data generation. Once the decorators are added, we need to register another operation filter in the Swagger config like:
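The registration might look like this:

```csharp
GlobalConfiguration.Configuration
    .EnableSwagger(c =>
    {
        c.SingleApiVersion("v1", "Sample Web API");
        c.OperationFilter<ExamplesOperationFilter>();
    })
    .EnableSwaggerUi();
```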

Reload the Swagger UI page and we can see the example object with more meaningful values:

This is what the Swagger definition looks like:

This is certainly a good way to show example data. But when we refresh the page, the example objects still show the same values, because we hard-coded them.

Automatic Example Data Generation with AutoFixture

We were able to generate an example object by implementing IExamplesProvider from the Swashbuckle.Examples library. What if we want an example object with automatically generated values? AutoFixture is a unit-testing library that generates fixture instances with random values. Why not use it to generate our examples randomly? Let's have a look. This time, we create an abstract class to cover both requests and responses.
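A sketch of that abstract class:

```csharp
using Ploeh.AutoFixture; // later versions use the AutoFixture namespace
using Swashbuckle.Examples;

public abstract class ModelExample<T> : IExamplesProvider
{
    private readonly IFixture _fixture;

    protected ModelExample()
    {
        _fixture = new Fixture();
    }

    // Returns an instance of T populated with random values.
    public object GetExamples()
    {
        return _fixture.Create<T>();
    }
}
```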

The abstract class, ModelExample, internally uses IFixture from AutoFixture and creates a random instance of the type T. Here are the request and response classes inheriting the abstract class:
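For example:

```csharp
public class ValueRequestModelExample : ModelExample<ValueRequestModel>
{
}

public class ValueResponseModelExample : ModelExample<ValueResponseModel>
{
}
```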

And this is the action method of the Web API. SwaggerRequestExample defines the request example, and SwaggerResponseExample defines the response example.
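A sketch of the decorated action:

```csharp
[SwaggerRequestExample(typeof(ValueRequestModel), typeof(ValueRequestModelExample))]
[SwaggerResponse(HttpStatusCode.OK, Type = typeof(ValueResponseModel))]
[SwaggerResponseExample(HttpStatusCode.OK, typeof(ValueResponseModelExample))]
public IHttpActionResult Post(ValueRequestModel request)
{
    return Ok(new ValueResponseModel { Value = request.Value, Count = request.Count });
}
```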

Reload the Swagger UI page several times.

Each time the page is refreshed, the example objects have different values. If we need restrictions like string length or value ranges, we can simply put other decorators such as StringLength or Range on those request/response models.


So far, we have walked through how the Swashbuckle library can generate example(s) objects with meaningful values rather than defaults. There are many other ways to customise Swashbuckle to enrich a Swagger document. I'll introduce some of them later on.

Swashbuckle Pro Tips for ASP.NET Web API – Content Types

Open API 2.0 (AKA Swagger) is a de-facto standard for documenting Web APIs. For ASP.NET Web API applications, Swashbuckle makes building the Swagger definition a lot easier. As Swashbuckle hasn't fully implemented the Swagger specification, though, we need to develop some extensions using the few interfaces Swashbuckle provides. In this post, we're going to talk about a couple of extensions that make a Swagger definition more complete.

The sample code used in this post can be found here.

Acknowledgement

The sample application uses the following spec:

Defining consumes and produces in Operation Object

In Swagger, an HTTP verb like GET, POST, PUT, PATCH or DELETE is referred to as an Operation Object. Each Operation Object can define which content types are accepted in requests (consumes) and which content types are returned in responses (produces). Therefore, with Swashbuckle, the Swagger document page comes out like:

In other words, Swashbuckle assumes five default content types for requests – application/json, text/json, application/xml, text/xml and application/x-www-form-urlencoded – and four default content types for responses – application/json, text/json, application/xml and text/xml. Here's part of the automatically generated Swagger definition:

If we want to apply particular content types globally, that can be done within the global configuration. Here's a sample OWIN configuration:
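A sketch of such a configuration:

```csharp
using System.Net.Http.Formatting;
using System.Web.Http;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        var config = new HttpConfiguration();

        // Clear all media type formatters and accept/return JSON only.
        config.Formatters.Clear();
        config.Formatters.Add(new JsonMediaTypeFormatter());

        app.UseWebApi(config);
    }
}
```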

It clears everything and adds only one content type – application/json. By doing so, we globally set one content type for both requests and responses. If we want more control over each endpoint, Swashbuckle's IOperationFilter interface gives us that flexibility.

Implementing SwaggerConsumesAttribute Decorator

First of all, we need to write a simple decorator called SwaggerConsumesAttribute, which handles the consumes field of the Operation Object.
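A sketch:

```csharp
using System;

[AttributeUsage(AttributeTargets.Method)]
public class SwaggerConsumesAttribute : Attribute
{
    public SwaggerConsumesAttribute(params string[] contentTypes)
    {
        this.ContentTypes = contentTypes;
    }

    // The request content types to emit into the consumes field.
    public string[] ContentTypes { get; }
}
```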

Through this decorator, we simply define any number of content types to pass. How is it applied?
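For example, applied to an action:

```csharp
[SwaggerConsumes("application/json")]
public IHttpActionResult Post(ValueRequestModel request)
{
    return Ok();
}
```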

Let’s move on.

Implementing Consumes Filter

We now write the Consumes filter class by implementing the IOperationFilter interface like:
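A sketch of the filter:

```csharp
using System.Linq;
using System.Web.Http.Description;
using Swashbuckle.Swagger;

public class Consumes : IOperationFilter
{
    public void Apply(Operation operation, SchemaRegistry schemaRegistry, ApiDescription apiDescription)
    {
        var attribute = apiDescription.ActionDescriptor
                                      .GetCustomAttributes<SwaggerConsumesAttribute>()
                                      .SingleOrDefault();
        if (attribute == null)
        {
            return;
        }

        // Replace the default content types with the ones from the decorator.
        operation.consumes = attribute.ContentTypes.Distinct().ToList();
    }
}
```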

What the filter does is:

  • check the content type values passed from the SwaggerConsumesAttribute decorator, and
  • add all the content types passed from the decorator to operation.consumes.

This needs to be added to the Swashbuckle configuration like:
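For example:

```csharp
GlobalConfiguration.Configuration
    .EnableSwagger(c =>
    {
        c.SingleApiVersion("v1", "Sample Web API");
        c.OperationFilter<Consumes>();
    })
    .EnableSwaggerUi();
```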

Now run the Web API again and we'll see a result like:

Implementing SwaggerProducesAttribute and Produces

Both the SwaggerProducesAttribute decorator and the Produces filter can be written the same way as their consumes counterparts above.
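A condensed sketch of the pair:

```csharp
using System;
using System.Linq;
using System.Web.Http.Description;
using Swashbuckle.Swagger;

[AttributeUsage(AttributeTargets.Method)]
public class SwaggerProducesAttribute : Attribute
{
    public SwaggerProducesAttribute(params string[] contentTypes)
    {
        this.ContentTypes = contentTypes;
    }

    public string[] ContentTypes { get; }
}

public class Produces : IOperationFilter
{
    public void Apply(Operation operation, SchemaRegistry schemaRegistry, ApiDescription apiDescription)
    {
        var attribute = apiDescription.ActionDescriptor
                                      .GetCustomAttributes<SwaggerProducesAttribute>()
                                      .SingleOrDefault();
        if (attribute == null)
        {
            return;
        }

        operation.produces = attribute.ContentTypes.Distinct().ToList();
    }
}
```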

The SwaggerProducesAttribute decorator can be applied to:
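For example, combined with the consumes decorator:

```csharp
[SwaggerConsumes("application/json")]
[SwaggerProduces("application/json")]
public IHttpActionResult Post(ValueRequestModel request)
{
    return Ok();
}
```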

The picture below is the Swagger definition based on the extensions above:

We’ve so far had a look how to extend Swashbuckle to fill the missing parts in Swagger definition. In the next post, we’ll walk through another extension for Swagger definition.

Is Your Serverless Application Testable? – Azure Logic Apps

I’ve talked about testable Azure Functions in my previous post. In this post, I’m going to introduce building a testable Azure Logic Apps.

Sample code used in this post can be found here.

User Story

  • As a DevOps engineer,
  • I want to search ARM templates using Azure Logic Apps
  • So that I can browse the search result.

Workflow in Logic Apps

Based on the user story above, the basic workflow in Logic Apps might look like:

  1. An HTTP request triggers the workflow by hitting the Request trigger.
  2. The first action, HTTP in this picture, calls the Azure Function written in the previous post.
  3. If this action returns a response with an HTTP status code greater or equal to 400, the workflow takes the path at the left-hand side, executes the ErrorResponse action and terminates.
  4. If this action returns a response with an HTTP status code less than 400, the workflow takes the path at the right-hand side and executes the Parse JSON action followed by the Condition action.
  5. If the condition returns true, the workflow runs the OkResponse action; otherwise it runs the NotFoundResponse action, and terminates.

Therefore, we can identify that the HTTP action produces three possible scenarios – error, success with no result, and success with results. Let's keep this in mind.

In fact, the workflow is a visual representation of this JSON object.
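A heavily condensed skeleton of that JSON (the function endpoint is a placeholder, and the branching actions are omitted) might look like:

```json
{
  "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
  "contentVersion": "1.0.0.0",
  "triggers": {
    "manual": {
      "type": "Request",
      "kind": "Http",
      "inputs": {
        "schema": {}
      }
    }
  },
  "actions": {
    "HTTP": {
      "type": "Http",
      "inputs": {
        "method": "GET",
        "uri": "https://my-function-app.azurewebsites.net/api/arm-templates"
      },
      "runAfter": {}
    }
  },
  "outputs": {}
}
```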

As a Logic App is basically a JSON object, we can easily plug it into an ARM template and deploy it to Azure, so that we can use the Logic App workflow straight away.

THERE IS NO CODE AT ALL.

From the unit-testing point of view, no code means there is no code to test. Logic Apps only run in the Azure cloud environment, i.e. we can't run tests in our isolated local development environment. However, being unable to unit-test Logic Apps doesn't necessarily mean we can't test them at all.

Let’s see the picture again.

There’s a point where we can run test. Yes, it’s the HTTP action – the API call. As we’ve identified above, there are three scenarios we need to test. Let’s move on.

Manual Testing the Logic App

First of all, send a normal request that returns the result through Postman.

This time, we send a different request that returns no data, so we get a 404 response code.

Finally, we send another request that returns an error response code.

Tests complete! Did you spot the issue in this test process? We actually hit the working API defined in the HTTP action. What if the actual API call causes side effects? Can we test the workflow WITHOUT hitting the actual API? We need a way to decouple the API call from the workflow to satisfy testability, and API mocking achieves exactly that.

API Mocking

I wrote a post, API Mocking for Developers, where I introduced how to mock APIs using Azure API Management. That is one way of mocking APIs: even though it's very easy to use with a Swagger definition, it has a cost issue, as Azure API Management is fairly expensive for mocking. What if we use Azure Functions for mocking instead? It's virtually free, unless we exceed one million function calls per month.

Therefore, based on the three scenarios identified above, we can easily write function code that returns dummy responses like below:
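A sketch of such mock functions; the routes and payload shapes are made up:

```csharp
using System.Net;
using System.Net.Http;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class MockApi
{
    // Scenario 1: error
    [FunctionName("MockError")]
    public static HttpResponseMessage Error(
        [HttpTrigger(AuthorizationLevel.Function, "get", Route = "mock/error")] HttpRequestMessage req)
    {
        return req.CreateResponse(HttpStatusCode.InternalServerError, new { message = "Something went wrong" });
    }

    // Scenario 2: success with no result
    [FunctionName("MockEmpty")]
    public static HttpResponseMessage Empty(
        [HttpTrigger(AuthorizationLevel.Function, "get", Route = "mock/empty")] HttpRequestMessage req)
    {
        return req.CreateResponse(HttpStatusCode.OK, new { items = new string[0] });
    }

    // Scenario 3: success with results
    [FunctionName("MockSuccess")]
    public static HttpResponseMessage Success(
        [HttpTrigger(AuthorizationLevel.Function, "get", Route = "mock/success")] HttpRequestMessage req)
    {
        return req.CreateResponse(HttpStatusCode.OK, new { items = new[] { "azuredeploy.json" } });
    }
}
```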

It may seem cumbersome, because we need to write code to mock the APIs. So it's totally up to you whether to use Azure API Management or Azure Functions for mocking.

Now, we’ve got mocked API endpoints for testing. What’s next?

Logic Apps for Testing – Manual Test

As we identified three scenarios, we clone the working Logic App three times and replace the API endpoint with the mocked ones. Here are the three Logic Apps cloned from the working one.

Each Logic App is then called through Postman to check the expected result. In order to pass the test, the first one should return an error response, the second one 404 and the last one 200. Here's the high-level diagram of testing Logic Apps against the expected scenarios.

We’re now able to test Logic Apps with mocked APIs. However, it’s not enough. We need to automate this test to integrate a CI pipeline. What can we do further?

Logic Apps for Testing – Automated Test

If we write a PowerShell script (or a script in any other language) to call those three testing Logic Apps, we can invoke it within the CI pipeline.
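A sketch of such a script; the endpoint environment variables and expected status codes are assumptions:

```powershell
# The three testing Logic App endpoints and their expected status codes.
$testCases = @(
    @{ Name = "Error";    Uri = $env:LOGIC_APP_ERROR_URL;    ExpectedStatusCode = 500 },
    @{ Name = "NotFound"; Uri = $env:LOGIC_APP_NOTFOUND_URL; ExpectedStatusCode = 404 },
    @{ Name = "Ok";       Uri = $env:LOGIC_APP_OK_URL;       ExpectedStatusCode = 200 }
)

$failed = $false
foreach ($testCase in $testCases) {
    try {
        $response = Invoke-WebRequest -Method Post -Uri $testCase.Uri -UseBasicParsing
        $statusCode = $response.StatusCode
    }
    catch {
        # Invoke-WebRequest throws on 4xx/5xx; read the status code from the exception.
        $statusCode = [int] $_.Exception.Response.StatusCode
    }

    if ($statusCode -eq $testCase.ExpectedStatusCode) {
        Write-Host "$($testCase.Name): PASSED"
    }
    else {
        Write-Host "$($testCase.Name): FAILED (expected $($testCase.ExpectedStatusCode), got $statusCode)"
        $failed = $true
    }
}

# Fail the CI build if any scenario didn't return the expected response.
if ($failed) { exit 1 }
```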

Here’s the high level diagram for test automation.

During the build process, Jenkins, VSTS or another build automation server calls the PowerShell script. The script sends three API requests and checks their responses. If all requests return the expected responses, we consider the test to have succeeded; if any request returns a response different from what is expected, the test is considered a failure.


So far, we’ve briefly walked through how to run tests against Logic Apps by API mocking. By decoupling API request/response from the Logic App, we can focus on the workflow itself. I hope this approach helps writing your Logic Apps workflow.

Is Your Serverless Application Testable? – Azure Functions

In this post, I’m going to introduce several design patterns when we write Azure Functions codes in C#, by considering full testabilities.

Sample code used in this post can be found here.

User Story

  • As a DevOps engineer,
  • I want to search ARM templates using Azure Functions
  • So that I can browse the search result.

Basic Function Code

Let's assume we have a basic structure that Azure Functions can use. Here's what a basic function might look like:
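A sketch, assuming a hypothetical GitHubService that queries GitHub for ARM template directories:

```csharp
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Host;

public static class GetArmTemplateDirectoriesHttpTrigger
{
    [FunctionName("GetArmTemplateDirectoriesHttpTrigger")]
    public static async Task<HttpResponseMessage> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", Route = "arm-templates")] HttpRequestMessage req,
        TraceWriter log)
    {
        // Dependencies are created inside the method - impossible to mock.
        var service = new GitHubService(new HttpClient());

        var query = req.GetQueryNameValuePairs()
                       .SingleOrDefault(q => q.Key == "query")
                       .Value;

        var result = await service.GetArmTemplateDirectoriesAsync(query).ConfigureAwait(false);

        return req.CreateResponse(HttpStatusCode.OK, result);
    }
}
```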

As you can see, all dependencies are instantiated and used within the function method. This works perfectly fine, so there's no functional problem at all. From a testing point of view, however, this smells A LOT. How can we refactor this code for testing?

I’m going to introduce several design patterns. They can’t be overlooked, if you’ve been working with OO programming. I’m not going to dive into them too much in this post, by the way.

Service Locator Pattern

As you can see in the code above, each and every Azure Functions method has the static modifier. Because of this restriction, we can't use the normal way of dependency injection, but we can use the Service Locator Pattern. With it, the code above can be refactored like this:
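A sketch of the refactored function, under the same assumptions:

```csharp
public static class GetArmTemplateDirectoriesHttpTrigger
{
    // Injected from outside; tests replace this with a mocked locator.
    public static IServiceLocator ServiceLocator { get; set; }

    [FunctionName("GetArmTemplateDirectoriesHttpTrigger")]
    public static async Task<HttpResponseMessage> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", Route = "arm-templates")] HttpRequestMessage req,
        TraceWriter log)
    {
        var service = ServiceLocator.GetInstance<IGitHubService>();

        var query = req.GetQueryNameValuePairs()
                       .SingleOrDefault(q => q.Key == "query")
                       .Value;

        var result = await service.GetArmTemplateDirectoriesAsync(query).ConfigureAwait(false);

        return req.CreateResponse(HttpStatusCode.OK, result);
    }
}
```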

First of all, the function code declares an IServiceLocator property with the static modifier. Then the Run() method calls the service locator to resolve an instance (or dependency). Now the method is fully testable.

Here’s a simple unit test code for it:

A Mock instance is created and injected into the static property. This is the basic way of injecting dependencies into Azure Functions. However, there's still an issue with the service locator pattern.

Mark Seemann argues on his blog that the service locator pattern is an anti-pattern, because we can't guarantee that the dependencies we want to use have already been registered. Let's have a look at the refactored code again:

Within the Run() method, the IGitHubService instance is resolved by this line:

var service = ServiceLocator.GetInstance<IGitHubService>();

When the amount of code is relatively small, that wouldn't be an issue. However, if the application grows quickly and becomes fairly large, can we ensure the service variable is always NOT null? I don't think so. We have to use the service locator pattern, yet at the same time we should try not to rely on it directly. That doesn't sound quite right, though. How can we achieve this goal?

Strategy Pattern

The Strategy Pattern is one of the most popular design patterns and can be applied to all function code. Let's start with the IFunction interface.
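A sketch (FunctionParameterOptions is introduced in the Options Pattern section below):

```csharp
public interface IFunction
{
    Task<HttpResponseMessage> InvokeAsync<TOptions>(HttpRequestMessage req, TOptions options = default(TOptions))
        where TOptions : FunctionParameterOptions;
}
```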

Every function now implements the IFunction interface, and all logic written in a trigger function MUST move into the InvokeAsync() method. Next, the FunctionBase abstract class implements the interface.
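A sketch:

```csharp
public abstract class FunctionBase : IFunction
{
    // virtual, so that concrete functions override it with their real logic.
    public virtual Task<HttpResponseMessage> InvokeAsync<TOptions>(HttpRequestMessage req, TOptions options = default(TOptions))
        where TOptions : FunctionParameterOptions
    {
        throw new NotImplementedException();
    }
}
```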

While implementing the IFunction interface, the FunctionBase class adds the virtual modifier to the InvokeAsync() method so that function classes inheriting this base class can override it. Let's put in the real logic.
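A sketch, under the same assumptions as before:

```csharp
public class GetArmTemplateDirectoriesFunction : FunctionBase
{
    private readonly IGitHubService _service;

    public GetArmTemplateDirectoriesFunction(IGitHubService service)
    {
        _service = service;
    }

    public override async Task<HttpResponseMessage> InvokeAsync<TOptions>(HttpRequestMessage req, TOptions options = default(TOptions))
    {
        var opts = options as GetArmTemplateDirectoriesFunctionParameterOptions;

        var result = await _service.GetArmTemplateDirectoriesAsync(opts?.Query).ConfigureAwait(false);

        return req.CreateResponse(HttpStatusCode.OK, result);
    }
}
```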

The GetArmTemplateDirectoriesFunction class overrides the InvokeAsync() method and processes everything within it. Dead simple. By the way, what is the generic type TOptions? That's the Options Pattern, which we deal with right after this section.

Options Pattern

Many API endpoints have route variables. For example, if an HTTP trigger's endpoint URL is /api/products/{productId}, the productId value keeps changing, and it is handled by the function method like Run(HttpRequestMessage req, int productId, ILogger log). In other words, route variables are passed through parameters of the Run() method, and each function has a different number of them. We can instead pass those route variables, productId for example, through an options instance, which makes the code much simpler.
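A sketch of the options classes:

```csharp
public abstract class FunctionParameterOptions
{
}

public class GetArmTemplateDirectoriesFunctionParameterOptions : FunctionParameterOptions
{
    // Holds the querystring value for now; route variables would be added the same way.
    public string Query { get; set; }
}
```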

There is an abstract class, FunctionParameterOptions. The GetArmTemplateDirectoriesFunctionParameterOptions class inherits it and contains a Query property, which stores a querystring value for now. The InvokeAsync() method now only cares about the options instance instead of individual route variables.

Builder Pattern

Let’s have a look at the function code again. The IGitHubService instance is injected through a constructor.

Those dependencies should be managed somewhere else, such as in an IoC container. Autofac and other IoC container libraries are good examples, and they need to be combined with a service locator. If we introduce a Builder Pattern, this is easily accomplished. Here's a sample:
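A sketch, assuming the AutofacServiceLocator from Autofac.Extras.CommonServiceLocator (or a hand-rolled equivalent):

```csharp
using Autofac;
using Autofac.Extras.CommonServiceLocator;

public class ServiceLocatorBuilder
{
    private readonly ContainerBuilder _builder = new ContainerBuilder();

    public ServiceLocatorBuilder RegisterModule<TModule>() where TModule : IModule, new()
    {
        _builder.RegisterModule<TModule>();
        return this;
    }

    public IServiceLocator Build()
    {
        var container = _builder.Build();
        return new AutofacServiceLocator(container);
    }
}
```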

With Autofac, all dependencies are registered within the RegisterModule() method, and within the Build() method the container is wrapped with AutofacServiceLocator and exposed as IServiceLocator.

Factory Method Pattern

All good now. Let’s combine altogether, using the Factory Method Pattern.

Within the constructor, all dependencies are registered by the ServiceLocatorBuilder instance. This includes all function classes implementing the IFunction interface. FunctionFactory has a Create() method, which encapsulates the service locator.
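A sketch of the refactored trigger:

```csharp
public static class GetArmTemplateDirectoriesHttpTrigger
{
    public static FunctionFactory Factory { get; set; } = new FunctionFactory();

    [FunctionName("GetArmTemplateDirectoriesHttpTrigger")]
    public static async Task<HttpResponseMessage> Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", Route = "arm-templates")] HttpRequestMessage req,
        TraceWriter log)
    {
        var options = new GetArmTemplateDirectoriesFunctionParameterOptions
        {
            Query = req.GetQueryNameValuePairs().SingleOrDefault(q => q.Key == "query").Value
        };

        // Fluent: create the function, then invoke it.
        return await Factory.Create<GetArmTemplateDirectoriesFunction>()
                            .InvokeAsync(req, options)
                            .ConfigureAwait(false);
    }
}
```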

The function has now been refactored once more. The IServiceLocator property is replaced with a FunctionFactory property. The Run() method then calls the Create() and InvokeAsync() methods consecutively in a fluent style. At the same time, GetArmTemplateDirectoriesFunctionParameterOptions is instantiated with the querystring value and passed into the InvokeAsync() method.

Now, we’ve got all function triggers and its dependencies are fully decoupled and unit-testable. It has also achieved to encapsulates the service locator so that we don’t even care about that any longer. From now on, code coverages have gone up to 100%, if TDD is properly applied.


So far, I’ve briefly introduced several useful design patterns to test Azure Functions codes. Those patterns are used for everyday development. This might not be the perfect solution, but I’m sure this could be one of the best practices I’ve found so far. If you have a better approach, please send a PR to the GitHub repository.

Azure Logic Apps will be touched on in the next post.

Azure Functions Logging to Application Insights

We’re going to have a look at several ways to integrate Application Insights (AppInsights) with Azure Functions (Functions).

Functions supports a built-in logging feature through the TraceWriter instance. A basic sample function might look like:
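A sketch of such a function:

```csharp
using System.Net;
using System.Net.Http;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Host;

public static class SampleHttpTrigger
{
    [FunctionName("SampleHttpTrigger")]
    public static HttpResponseMessage Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequestMessage req,
        TraceWriter log)
    {
        // Writes to the built-in log console.
        log.Info("C# HTTP trigger function processed a request.");

        return req.CreateResponse(HttpStatusCode.OK);
    }
}
```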

With TraceWriter, we can log information to the log console like:

However, it has a maximum limit of 1,000 records. That's fine for simple debugging purposes, but not for real logging, so we should store logs somewhere else, such as a database or storage. Fortunately, AppInsights has recently been integrated with Functions, as a preview, to overcome these limitations. Let's have a look.

Application Insights Integration

According to the document, it’s really easy.

  1. Create an AppInsights instance. Its type MUST be General.
  2. Add a new key of APPINSIGHTS_INSTRUMENTATIONKEY to the AppSettings section of the Function instance.
  3. Give the Instrumentation Key value of the AppInsights instance to the key, APPINSIGHTS_INSTRUMENTATIONKEY.

This is it. Once it’s done, simply execute some functions and wait for the aggregated result up to 5 minutes. Then, go to the AppInsights blade and find the graph looking like:

Can’t be easier, huh?

ARM Template Setup for DevOps Engineers

We can add the Instrumentation Key as above. However, this is not ideal from the CI/CD point of view. Instead, setting the key within an ARM template is preferable, and more effective and efficient. Here's a cut-down sample ARM template:
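A sketch; the reference() call pulls the key straight from the AppInsights resource, and the resource names are placeholders:

```json
{
  "type": "Microsoft.Web/sites",
  "kind": "functionapp",
  "apiVersion": "2016-08-01",
  "name": "[parameters('functionAppName')]",
  "location": "[resourceGroup().location]",
  "properties": {
    "siteConfig": {
      "appSettings": [
        {
          "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
          "value": "[reference(resourceId('Microsoft.Insights/components', parameters('appInsightsName')), '2015-05-01').InstrumentationKey]"
        }
      ]
    }
  },
  "dependsOn": [
    "[resourceId('Microsoft.Insights/components', parameters('appInsightsName'))]"
  ]
}
```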

As we can see above, we can pour the Instrumentation Key directly into the app settings within the ARM template, without ever having to know its value. If you want to know more about ARM templates, this official document is a good starting point.

ILogger Integration

ILogger is a logging abstraction from ASP.NET Core. As it supports .NET Standard 1.1, Functions has recently introduced ILogger version 1.1.1. With this, we can plug in virtually as many logging libraries as we like. Functions provides AppInsights logging through this interface; in other words, we can simply replace the TraceWriter instance with an ILogger one in order to send all logging to AppInsights.

This is also really easy. Simply replace TraceWriter with ILogger in the function parameters and change the method calls from log.Info() to log.LogInformation():
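Sketched against the sample function above:

```csharp
using System.Net;
using System.Net.Http;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class SampleHttpTrigger
{
    [FunctionName("SampleHttpTrigger")]
    public static HttpResponseMessage Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequestMessage req,
        ILogger log)
    {
        // Routed to AppInsights through the ILogger implementation.
        log.LogInformation("C# HTTP trigger function processed a request.");

        return req.CreateResponse(HttpStatusCode.OK);
    }
}
```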

If we still want to keep the log.Info() method name, that’s fine. Simply create an extension method like:
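A sketch of such an extension method:

```csharp
using Microsoft.Extensions.Logging;

public static class LoggerExtensions
{
    // Keeps the familiar log.Info() call on top of ILogger.
    public static void Info(this ILogger logger, string message)
    {
        logger.LogInformation(message);
    }
}
```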

And use the extension method like below:
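For example:

```csharp
log.Info("C# HTTP trigger function processed a request.");
```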

Once everything is done, deploy the Function app again and run the functions several times. Then check out the Function log console:

We have exactly the same experience as before. Of course, we can also see additional logging details on the AppInsights blade:

As mentioned above, Functions has implemented the ILogger interface. That means we might be able to add third-party logging libraries such as Serilog. Unfortunately, at the time of writing, we can't use those third-party ones yet, but the Azure Functions team has started looking at implementing this, according to this issue. Hopefully this feature is released soon.

Another known issue around the ILogger implementation on Functions is that the Azure Functions Core Tools, which help with local debugging, don't display the logs in the console. So don't panic if no log is displayed on your local console – everything displays as expected once you deploy your Functions to Azure. The Azure Functions team is working hard to fix this issue sooner rather than later.

So far, we have walked through a few ways to integrate AppInsights with Azure Functions. As it's still in preview, features may change over time until GA, but I'm pretty sure this logging integration with AppInsights will be quite useful and powerful.