Is Your Serverless Application Testable? – Azure Logic Apps

I talked about testable Azure Functions in my previous post. In this post, I’m going to look at how to build testable Azure Logic Apps.

The sample code used in this post can be found here.

User Story

  • As a DevOps engineer,
  • I want to search ARM templates using Azure Logic Apps
  • So that I can browse the search result.

Workflow in Logic Apps

Based on the user story above, the basic workflow in Logic Apps might look like:

  1. An HTTP request triggers the workflow by hitting the Request trigger.
  2. The first action, HTTP in this picture, calls the Azure Function written in the previous post.
  3. If this action returns a response with an HTTP status code greater than or equal to 400, the workflow takes the path on the left-hand side, executes the ErrorResponse action and terminates.
  4. If this action returns a response with an HTTP status code less than 400, the workflow takes the path on the right-hand side and executes the Parse JSON action followed by the Condition action.
  5. If the condition returns true, the workflow runs the OkResponse action; otherwise it runs the NotFoundResponse action, and terminates.

Therefore, we can identify that the HTTP action has three possible outcomes – error, success with no result, and success with a result. Let’s keep this in mind.

In fact, the workflow is a visual representation of this JSON object.

As a Logic App is basically a JSON object, we can easily plug it into an ARM template and deploy it to Azure so that we can use the Logic App workflow straight away.

THERE IS NO CODE AT ALL.

From the unit-testing point of view, no code means there is nothing to unit test. Logic Apps only run in the Azure cloud environment, i.e. we can’t run tests in an isolated local development environment. However, being unable to unit test Logic Apps doesn’t necessarily mean we can’t test them at all.

Let’s see the picture again.

There’s one point where we can run tests. Yes, it’s the HTTP action – the API call. As we’ve identified above, there are three scenarios we need to test. Let’s move on.

Manual Testing the Logic App

First of all, send a normal request that returns the result through Postman.

This time, we send a different request that returns no data, so we get a 404 response code.

Finally, we send another request that returns an error response code.

Tests complete! Did you find any issue in this test process? We actually hit the working API defined in the HTTP action. What if the actual API call causes side effects? Can we test the workflow WITHOUT hitting the actual API? We should find a way to decouple the API call from the workflow to satisfy testability. API mocking can achieve this goal.

API Mocking

I wrote a post, API Mocking for Developers, in which I introduced how to mock APIs using Azure API Management. That is one way of mocking APIs. Even though it’s very easy to use with a Swagger definition, it has a cost issue: Azure API Management is fairly expensive if it’s only used for mocking. What if we use Azure Functions instead? It’s virtually free unless the functions are called more than one million times per month.

Therefore, based on the three scenarios identified above, we can easily write function code that returns dummy responses, like below:
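
The original sample isn’t shown here, but a minimal sketch of such mock functions might look like the following. The class name, response shapes and the idea of one endpoint per scenario are assumptions for illustration only.

    using System.Net;
    using System.Net.Http;

    // Hypothetical mock endpoints – one per scenario – that the cloned Logic Apps can
    // call instead of the real API.
    public static class MockApiFunctions
    {
        // Scenario 1: the downstream API fails.
        public static HttpResponseMessage RunError(HttpRequestMessage req)
            => req.CreateResponse(HttpStatusCode.InternalServerError, new { message = "Something went wrong" });

        // Scenario 2: the call succeeds but finds nothing.
        public static HttpResponseMessage RunEmpty(HttpRequestMessage req)
            => req.CreateResponse(HttpStatusCode.OK, new { items = new object[0] });

        // Scenario 3: the call succeeds and returns a dummy result.
        public static HttpResponseMessage RunWithResult(HttpRequestMessage req)
            => req.CreateResponse(HttpStatusCode.OK, new { items = new[] { new { name = "azure-quickstart-templates" } } });
    }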

This may seem cumbersome because we need to write code to mock the APIs, so it’s entirely up to you whether to use Azure API Management or Azure Functions for mocking.

Now, we’ve got mocked API endpoints for testing. What’s next?

Logic Apps for Testing – Manual Test

As we identified three scenarios, we clone the working Logic App three times and replace the API endpoint in each clone with the corresponding mocked one. Here are the three Logic Apps cloned from the working one.

Each Logic App needs to be called through Postman to check the expected result. In order to pass the test, the first one should return an error response, the second one should return a 404 and the last one should return a 200. Here’s the high-level diagram of testing Logic Apps based on the expected scenarios.

We’re now able to test Logic Apps with mocked APIs. However, it’s not enough. We need to automate this test so it can be integrated into a CI pipeline. What can we do further?

Logic Apps for Testing – Automated Test

If we write a PowerShell script (or any other script) to call those three testing Logic Apps, we can run the script within the CI pipeline.

Here’s the high level diagram for test automation.

During the build process, Jenkins, VSTS or another build automation server calls the PowerShell script. The script runs three API requests and checks their responses. If all requests return their expected responses, the test succeeds. If any request returns a response different from the expectation, the test fails.
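
The post uses PowerShell for this; purely as an illustration, an equivalent check could also be sketched in C# like below. The three URLs and the expected status codes are placeholders.

    using System;
    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;

    public static class LogicAppSmokeTest
    {
        public static async Task<bool> RunAsync()
        {
            var cases = new[]
            {
                new { Url = "https://<error-logic-app-url>",     Expected = HttpStatusCode.InternalServerError },
                new { Url = "https://<not-found-logic-app-url>", Expected = HttpStatusCode.NotFound },
                new { Url = "https://<ok-logic-app-url>",        Expected = HttpStatusCode.OK }
            };

            using (var client = new HttpClient())
            {
                foreach (var testCase in cases)
                {
                    var response = await client.PostAsync(testCase.Url, new StringContent("{}"));
                    if (response.StatusCode != testCase.Expected)
                    {
                        Console.WriteLine($"FAILED: {testCase.Url} returned {response.StatusCode}");
                        return false; // any mismatch fails the build
                    }
                }
            }

            Console.WriteLine("All Logic App tests passed.");
            return true;
        }
    }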


So far, we’ve briefly walked through how to run tests against Logic Apps using API mocking. By decoupling the API request/response from the Logic App, we can focus on the workflow itself. I hope this approach helps you write your Logic Apps workflows.

Is Your Serverless Application Testable? – Azure Functions

In this post, I’m going to introduce several design patterns we can apply when writing Azure Functions code in C#, with full testability in mind.

The sample code used in this post can be found here.

User Story

  • As a DevOps engineer,
  • I want to search ARM templates using Azure Functions
  • So that I can browse the search result.

Basic Function Code

We assume that we have a basic structure that Azure Functions can use. Here’s what a basic function might look like:
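
The screenshot isn’t reproduced here; a minimal sketch – assuming a hypothetical GitHubService with a GetArmTemplateDirectoriesAsync() method – could be:

    using System.Linq;
    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;
    using Microsoft.Azure.WebJobs.Host;

    public static class GetArmTemplateDirectoriesHttpTrigger
    {
        public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
        {
            // The dependency is created inside the method – handy, but impossible to mock.
            var service = new GitHubService();

            var query = req.GetQueryNameValuePairs().FirstOrDefault(q => q.Key == "query").Value;
            var result = await service.GetArmTemplateDirectoriesAsync(query);

            return req.CreateResponse(HttpStatusCode.OK, result);
        }
    }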

As you can see, all dependencies are instantiated and used within the function method. This works perfectly fine, so there’s no problem at all. However, from a testing point of view, this smells A LOT. How can we refactor this code for testing?

I’m going to introduce several design patterns. They’ll look familiar if you’ve been working with OO programming. I’m not going to dive into them too deeply in this post, by the way.

Service Locator Pattern

As you can see in the code above, each and every Azure Functions method has the static modifier. Because of this restriction, we can’t use the normal way of dependency injection, so we turn to the Service Locator Pattern. With this, the code above can be refactored like this:
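
A sketch of the refactor, assuming IServiceLocator exposes a generic GetInstance<T>() method (usings as in the previous sketch):

    public static class GetArmTemplateDirectoriesHttpTrigger
    {
        // The locator is exposed as a static property so that tests can swap it out.
        // It is populated at startup (or by a test) before the function runs.
        public static IServiceLocator ServiceLocator { get; set; }

        public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
        {
            // Resolve the dependency instead of new-ing it up inside the method.
            var service = ServiceLocator.GetInstance<IGitHubService>();

            var query = req.GetQueryNameValuePairs().FirstOrDefault(q => q.Key == "query").Value;
            var result = await service.GetArmTemplateDirectoriesAsync(query);

            return req.CreateResponse(HttpStatusCode.OK, result);
        }
    }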

First of all, the function code declares the IServiceLocator property with the static modifier. Then, the Run() method calls the service locator to resolve an instance (or dependency). Now, the method is fully testable.

Here’s a simple unit test code for it:
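
A sketch only, using Moq and xUnit, and assuming the types from the earlier sketches:

    using System.Collections.Generic;
    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;
    using System.Web.Http;
    using Moq;
    using Xunit;

    public class GetArmTemplateDirectoriesHttpTriggerTests
    {
        [Fact]
        public async Task Given_Query_Run_Should_Return_OkResponse()
        {
            // Arrange: mock the service and the locator, then inject the locator
            // into the function's static property.
            var service = new Mock<IGitHubService>();
            service.Setup(p => p.GetArmTemplateDirectoriesAsync(It.IsAny<string>()))
                   .ReturnsAsync(new List<string> { "101-vm-simple-linux" });

            var locator = new Mock<IServiceLocator>();
            locator.Setup(p => p.GetInstance<IGitHubService>()).Returns(service.Object);

            GetArmTemplateDirectoriesHttpTrigger.ServiceLocator = locator.Object;

            var req = new HttpRequestMessage(HttpMethod.Get, "http://localhost/api/arm?query=vm");
            req.SetConfiguration(new HttpConfiguration()); // required for CreateResponse()

            // Act
            var res = await GetArmTemplateDirectoriesHttpTrigger.Run(req, null);

            // Assert
            Assert.Equal(HttpStatusCode.OK, res.StatusCode);
        }
    }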

The Mock instance is created and injected into the static property. This is the basic way of dependency injection for Azure Functions. However, there’s still an issue with the service locator pattern.

Mark Seemann argues in his blog post that the service locator pattern is an anti-pattern, because we can’t guarantee whether the dependencies we want to use have already been registered or not. Let’s have a look at the refactored code again:

Within the Run() method, the IGitHubService instance is resolved by this line:

var service = ServiceLocator.GetInstance<IGitHubService>();

When the amount of code is relatively small, that wouldn’t be an issue. However, if the application grows quickly and becomes fairly large, can we ensure the service variable is always NOT null? I don’t think so. We have to use the service locator pattern, but at the same time try not to depend on it directly. That doesn’t sound quite right, though. How can we achieve this goal?

Strategy Pattern

The Strategy Pattern is one of the most popular design patterns and can be applied to all Functions code. Let’s start with the IFunction interface.
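
The interface isn’t shown in this excerpt; a minimal sketch consistent with the description below might be (usings omitted for brevity, and the exact signature in the sample repository may differ):

    public interface IFunction
    {
        Task<object> InvokeAsync<TOptions>(HttpRequestMessage req, TOptions options)
            where TOptions : FunctionParameterOptions;
    }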

Every function now implements the IFunction interface, and all logic written in a trigger function MUST move into the InvokeAsync() method. Next, the FunctionBase abstract class implements the interface.
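
A sketch of the base class:

    // Implements IFunction and marks InvokeAsync() as virtual so that concrete
    // function classes can override it.
    public abstract class FunctionBase : IFunction
    {
        public virtual Task<object> InvokeAsync<TOptions>(HttpRequestMessage req, TOptions options)
            where TOptions : FunctionParameterOptions
        {
            throw new NotImplementedException("Override InvokeAsync() in the derived function class.");
        }
    }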

While implementing the IFunction interface, the FunctionBase class adds the virtual modifier to the InvokeAsync() method so that function classes inheriting this base class can override the method. Let’s put in the real logic.
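
A sketch of the concrete class described below – the service dependency and its method name are assumptions carried over from the earlier sketches:

    public class GetArmTemplateDirectoriesFunction : FunctionBase
    {
        private readonly IGitHubService _service;

        // The dependency is injected through the constructor.
        public GetArmTemplateDirectoriesFunction(IGitHubService service)
        {
            _service = service;
        }

        public override async Task<object> InvokeAsync<TOptions>(HttpRequestMessage req, TOptions options)
        {
            var opts = options as GetArmTemplateDirectoriesFunctionParameterOptions;

            var result = await _service.GetArmTemplateDirectoriesAsync(opts.Query);

            return req.CreateResponse(HttpStatusCode.OK, result);
        }
    }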

The GetArmTemplateDirectoriesFunction class overrides the InvokeAsync() method and processes everything within it. Dead simple. By the way, what is the generic type TOptions? This is the Options Pattern, which we deal with right after this section.

Options Pattern

Many API endpoints have route variables. For example, if an HTTP trigger’s endpoint URL is /api/products/{productId}, the productId value keeps changing, and the value can be handled by the function method like Run(HttpRequestMessage req, int productId, ILogger log). In other words, route variables are passed through parameters of the Run() method. Of course, each function has a different number of route variables. In that case, we can pass those route variables, productId for example, through an options instance. The code will be much simpler.
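
A sketch of the options classes described below:

    public abstract class FunctionParameterOptions
    {
    }

    public class GetArmTemplateDirectoriesFunctionParameterOptions : FunctionParameterOptions
    {
        // Holds the querystring value for now; route variables such as productId
        // would become further properties.
        public string Query { get; set; }
    }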

There is an abstract class, FunctionParameterOptions. The GetArmTemplateDirectoriesFunctionParameterOptions class inherits it and contains a Query property, which can be used to store a querystring value for now. The InvokeAsync() method now only cares about the options instance, instead of individual route variables.

Builder Pattern

Let’s have a look at the function code again. The IGitHubService instance is injected through a constructor.

Those dependencies should be managed somewhere else, like an IoC container. Autofac and other IoC container libraries are good examples, and they need to be combined with a service locator. If we introduce a Builder Pattern, this can easily be accomplished. Here’s some sample code:
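
A sketch only, assuming IServiceLocator is the Common Service Locator abstraction that Autofac’s AutofacServiceLocator implements (it exposes GetInstance<T>()):

    using Autofac;
    using Autofac.Extras.CommonServiceLocator;
    using Microsoft.Practices.ServiceLocation;

    public class ServiceLocatorBuilder
    {
        private readonly ContainerBuilder _builder = new ContainerBuilder();

        public ServiceLocatorBuilder RegisterModule<TModule>(TModule module) where TModule : Module
        {
            // All dependency registrations live in Autofac modules.
            _builder.RegisterModule(module);
            return this;
        }

        public IServiceLocator Build()
        {
            // Wrap the built container so the functions only ever see IServiceLocator.
            return new AutofacServiceLocator(_builder.Build());
        }
    }

    // An example module registering the GitHub service and the function classes.
    public class AppModule : Module
    {
        protected override void Load(ContainerBuilder builder)
        {
            builder.RegisterType<GitHubService>().As<IGitHubService>();
            builder.RegisterType<GetArmTemplateDirectoriesFunction>().AsSelf();
        }
    }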

With Autofac, all dependencies are registered within the RegisterModule() method and the container is wrapped with AutofacServiceLocator and exported to IServiceLocator within the Build() method.

Factory Method Pattern

All good now. Let’s combine it all together, using the Factory Method Pattern.
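
A sketch of the factory, reusing the builder and module sketched above:

    public class FunctionFactory
    {
        private readonly IServiceLocator _locator;

        public FunctionFactory()
        {
            // All dependencies – including the IFunction implementations – are
            // registered here, behind the service locator.
            _locator = new ServiceLocatorBuilder()
                           .RegisterModule(new AppModule())
                           .Build();
        }

        public TFunction Create<TFunction>() where TFunction : IFunction
        {
            // The locator never leaks outside the factory.
            return _locator.GetInstance<TFunction>();
        }
    }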

Within the constructor, all dependencies are registered by the ServiceLocatorBuilder instance. This includes all function classes that implement the IFunction interface. FunctionFactory has the Create() method, which encapsulates the service locator.
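
And a sketch of the refactored trigger (usings as in the earlier trigger sketches):

    public static class GetArmTemplateDirectoriesHttpTrigger
    {
        // The factory replaces the IServiceLocator property; tests can still swap it out.
        public static FunctionFactory Factory { get; set; } = new FunctionFactory();

        public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
        {
            var options = new GetArmTemplateDirectoriesFunctionParameterOptions
            {
                Query = req.GetQueryNameValuePairs().FirstOrDefault(q => q.Key == "query").Value
            };

            // Create the function and invoke it in one fluent call.
            var result = await Factory.Create<GetArmTemplateDirectoriesFunction>()
                                      .InvokeAsync(req, options);

            return (HttpResponseMessage)result;
        }
    }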

This function has now been refactored again. The IServiceLocator property is replaced with the FunctionFactory property. Then, the Run method calls the Create() method and the InvokeAsync() method consecutively in a fluent way. At the same time, GetArmTemplateDirectoriesFunctionParameterOptions is instantiated with the querystring value and passed to the InvokeAsync() method.

Now all function triggers and their dependencies are fully decoupled and unit-testable. We have also encapsulated the service locator so that we no longer need to care about it. From now on, code coverage can go up to 100%, if TDD is properly applied.


So far, I’ve briefly introduced several useful design patterns to test Azure Functions codes. Those patterns are used for everyday development. This might not be the perfect solution, but I’m sure this could be one of the best practices I’ve found so far. If you have a better approach, please send a PR to the GitHub repository.

Azure Logic Apps will be covered in the next post.

Azure Functions Logging to Application Insights

We’re going to have a look at several ways to integrate Application Insights (AppInsights) with Azure Functions (Functions).

Functions supports built-in logging through the TraceWriter instance. A basic sample function might look like:
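
The screenshot isn’t reproduced here; a minimal sketch of such a function could be:

    using System.Net;
    using System.Net.Http;
    using Microsoft.Azure.WebJobs.Host;

    public static class SampleHttpTrigger
    {
        public static HttpResponseMessage Run(HttpRequestMessage req, TraceWriter log)
        {
            // This line shows up in the Functions log console.
            log.Info("C# HTTP trigger function processed a request.");

            return req.CreateResponse(HttpStatusCode.OK, "Hello, Application Insights!");
        }
    }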

With TraceWriter, we can log information to the log console like:

However, it has a maximum limit of 1,000 records. This is good for simple debugging purposes, but not for proper logging. Therefore, we should store logs somewhere else, like a database or storage. Fortunately, AppInsights has recently been integrated with Functions as a preview to overcome these limitations. Let’s have a look.

Application Insights Integration

According to the document, it’s really easy.

  1. Create an AppInsights instance. Its type MUST be General.
  2. Add a new key of APPINSIGHTS_INSTRUMENTATIONKEY to the AppSettings section of the Function instance.
  3. Give the Instrumentation Key value of the AppInsights instance to the key, APPINSIGHTS_INSTRUMENTATIONKEY.

This is it. Once it’s done, simply execute some functions and wait up to 5 minutes for the aggregated result. Then, go to the AppInsights blade and find the graph looking like:

Can’t be easier, huh?

ARM Template Setup for DevOps Engineers

We can add the Instrumentation Key like above. However, this is not ideal from the CI/CD point of view. Instead, setting the key within an ARM template is preferable, and more effective and efficient. Here’s a cut-down version of a sample ARM template:
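
The full template isn’t reproduced here. The relevant part is the app settings of the Function app, where the key is wired to the AppInsights resource via the reference() function; the variable name below is a placeholder:

    "siteConfig": {
      "appSettings": [
        {
          "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
          "value": "[reference(resourceId('Microsoft.Insights/components', variables('appInsightsName')), '2015-05-01').InstrumentationKey]"
        }
      ]
    }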

As we can see above, we can wire the Instrumentation Key directly into the app settings within the ARM template without ever needing to know its value. If we want to know more about ARM templates, this official document is a good starting point.

ILogger Integration

ILogger is an abstraction from ASP.NET Core. As it supports .NET Standard 1.1, Functions has recently introduced ILogger version 1.1.1. With this, we can plug in virtually any logging library. Functions provides AppInsights logging through this interface. In other words, we can simply replace the TraceWriter instance with an ILogger one in order to send all logs to AppInsights.

This is also really easy. Simply replace TraceWriter with ILogger in the Function parameter and change the method name from log.Info() to log.LogInformation():

If we still want to keep the log.Info() method name, that’s fine. Simply create an extension method like:
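
A minimal sketch of such an extension method:

    using Microsoft.Extensions.Logging;

    public static class FunctionLoggerExtensions
    {
        // Keeps the familiar Info() name while delegating to ILogger.LogInformation().
        public static void Info(this ILogger logger, string message)
        {
            logger.LogInformation(message);
        }
    }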

And use the extension method like below:

Once everything is done, deploy the Function again and run Functions several times. Then check out the Function log console:

We have exactly the same experience as before. Of course, we can see additional logging details on the AppInsights blade:

As mentioned above, Functions has implemented the ILogger interface. That means we might be able to add third-party logging libraries such as Serilog. Unfortunately, at the time of writing, we can’t use those third-party ones. But the Azure Functions Team has started looking at those implementations, according to this issue. Hopefully this feature is released soon.

Another known issue around the ILogger implementation on Functions is that the Azure Functions Core Tools, which help with local debugging, don’t display logs in the console. So don’t panic if no logs are displayed in your local console; they appear as expected when you deploy your Functions to Azure. The Azure Functions Team is working hard to fix the issue sooner rather than later.

So far, we have walked through a few ways to integrate AppInsights with Azure Functions. As it’s still in preview, features around this may change over time until GA. But I’m pretty sure that this logging integration to AppInsights will be quite useful and powerful.

Azure Functions with Swagger

The Azure Functions Team has recently announced Swagger support as a preview. If we use Azure Functions as APIs, this will be very useful. In this post, we will have a look at how to enable Swagger support on Azure Functions.

The sample code used for this post can be found here.

Sample Azure Functions Instance

First of all, with the sample code provided, we create two HTTP triggers, CreateProduct and GetProduct. Once we deploy them, we can find them in the Azure Portal like:

Here are simple requests and responses through Postman:

Let’s create a Swagger definition document for those Functions.

Auto-Generate Swagger Definition

If we have at least one Function endpoint in our Function instance, we can automatically generate a Swagger definition in YAML format. Click the API definition (preview) tab.

By default, the External URL button is selected. Click the Functions button right next to it.

Because we have never generated a Swagger definition, it shows an error screen.

Now, click the Generate API definition template button so that the document is automatically generated.

Now we’ve got the Swagger definition doco. The actual document generated looks like:

However, there are at least three missing gaps that we have to fill in:

  • definitions: There’s no request/response model definition. We have to fill in.
  • produces/consumes: There’s no document type defined. In general, as JSON format is the most popular for REST API, we can simply add application/json here.
  • securityDefinitions: Azure Functions uses either a code value in the querystring or an x-functions-key value in the request header for authorisation. The auto-generated template only defines the code one, not the other. So we have to define x-functions-key ourselves.

Here’s the updated Swagger definition including the missing ones:

Once we update the Swagger definition, we can test the API right away by providing the function key code and a payload. Easy, huh? Also, the address shown in the middle of the picture, https://xxxx.azurewebsites.net/admin/host/swagger?code=xxxx, allows us to access the Swagger definition document in JSON format. The Azure Functions instance automatically converts the YAML document to JSON.

It seems very easy. However, there is a critical point we have to bear in mind: the Function instance must have at least one function endpoint before the Swagger definition can be auto-generated. In other words, an API design-first approach is not applicable.

Now we’ve got a question. If we only have the Swagger definition document, not the actual implementation, what can we do with Azure Functions? Why not render the Swagger document directly from Azure Functions code?

Render Swagger Definition via Azure Functions

Here’s the deal. We basically create an Azure Function that reads the Swagger definition and renders it as a response. The following function code will give us a brief idea:
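
A sketch only: it assumes the swagger-v1.yaml file sits in the web root, that the WEBROOT_PATH app setting points to D:\home\site\wwwroot, and that YamlDotNet and Json.NET are used for the YAML-to-JSON conversion.

    using System;
    using System.IO;
    using System.Net;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using Newtonsoft.Json;
    using YamlDotNet.Serialization;

    public static class RenderSwaggerFunction
    {
        public static HttpResponseMessage Run(HttpRequestMessage req)
        {
            // Locate the YAML definition in the web root.
            var wwwroot = Environment.GetEnvironmentVariable("WEBROOT_PATH");
            var yaml = File.ReadAllText(Path.Combine(wwwroot, "swagger-v1.yaml"));

            // Deserialise the YAML into a plain object graph, then serialise it as JSON.
            var deserializer = new Deserializer();
            var graph = deserializer.Deserialize<object>(new StringReader(yaml));
            var json = JsonConvert.SerializeObject(graph);

            var response = req.CreateResponse(HttpStatusCode.OK);
            response.Content = new StringContent(json);
            response.Content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
            return response;
        }
    }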

The Function instance contains a swagger-v1.yaml file at its root level. The code above firstly reads that file. In order to read the file, we have to set a value representing the root path, called WEBROOT_PATH (or whatever you prefer), in the AppSettings section. Its value will be D:\home\site\wwwroot, which never changes unless Azure App Service changes it. There are two ways to read the settings value:

  1. var wwwroot = Environment.GetEnvironmentVariable("WEBROOT_PATH");
  2. var wwwroot = ConfigurationManager.AppSettings["WEBROOT_PATH"];

Either is fine for reading the settings value. If we omit this setting, Azure Functions assumes that the file is located at C:\Windows\System32, which will cause an unexpected result.

According to this document, at the time of this writing, we can pass the Microsoft.Azure.WebJobs.ExecutionContext instance as a function parameter so that we can handle the file path a little more easily. However, as it’s not fully rolled out yet and the dev tools don’t have that feature yet, we should wait until it is.

The code above then reads the YAML file, converts it to JSON and renders it. We can now see the Swagger definition through a web browser:

So far, we have briefly looked at how to enable Swagger support in Azure Functions in two different ways. Both, of course, have pros and cons. The first one might be the easiest option, but it needs more work. Also, if we want to access the Swagger definition with the first option, we have to use a different access code. This is a bit critical because we have to manage at least two different keys – one for Functions and the other for Swagger – which is not ideal. On the other hand, the second option needs another function, but it can be handled by the same host key that gives access to the other functions. Therefore, from the management point of view, the second option might be better. It’s still in preview, so we hope that the GA version of the Swagger support will be better than it is now.

Tools for Testing Webhooks

In a microservices environment, APIs are the main communication method between services. Most of the time, each API call merely sends a request and waits for its response. But there are definitely cases that need a longer period to complete the request, and some even stop processing at some stage until they get a signal to continue. These are not uncommon in the API world. However, implementing such features can be a little tricky because most APIs use the HTTP protocol and are basically web-based applications with timeout constraints. That timeout limitation is pretty critical when trying to achieve those long-running processes within a given period of time. Fortunately, there are already solutions (or patterns) to overcome these sorts of challenges – asynchronous patterns and/or webhook patterns. As both are quite similar to each other, they are used interchangeably or together. While those approaches sort out the long-running process issues, they are hard to test or debug without some external tools. In this post, we are going to have a look at a couple of useful tools for debugging and testing when we develop REST API applications, especially webhook APIs.

Disclaimer: those services introduced in this post do not have any sort of relationship to us, Kloud.

RequestBin

RequestBin is an online tool for inspecting webhook requests. It has a very simple user interface, so developers can hop into the service straight away. If we want to check webhook request data, follow the steps below:

  1. Click the Create a RequestBin button at the first page.

  2. A temporary bin URL is generated. Grab it.

  3. Use this bin URL as a webhook callback URL. Here’s the screenshot using Postman sending a request to the bin URL.

  4. The request data sent from Postman is captured like below. If the bin URL is https://requestb.in/1fbhrlf1, just append the querystring parameter ?inspect so that we can inspect the request data. Can you see that the same request data was sent?

This service brings a few merits. Firstly, we can simply use the bin URL when we register a webhook; the webhook request body is captured at the bin URL. Secondly, we don’t have to develop a test application to analyse the webhook payload, because the data has already been captured by the bin URL. Thirdly, we save the time and resources of developing such a testing application. And finally, it’s free!

Of course, there are demerits. Each bin URL is only available for a limited period of time. According to the service website, the bin URL is only valid for the first 48 hours after it’s generated; in fact, the lifetime varies from 5 minutes to 48 hours. If we close the web browser and open it again, the bin URL is no longer valid. Therefore, this is only for testing webhooks that have a short lifecycle. In addition, RequestBin isn’t a good fit for local debugging. There are cases where the webhooks receive data and process it; RequestBin only shows how the webhook request data is captured, nothing further.

If RequestBin is not our solution, what else can we use? How can we debug or test the webhook functions?

ngrok

According to this post, tunnelling services take traffic from the Internet to a local development environment. ngrok is one of those tunnelling services. It has both a free version and a paid one, but the free one is enough for our purpose.

As it’s cross-platform, download the binary suitable for your OS. For Windows, there is only one binary, ngrok.exe. Copy it to the C:\ngrok folder (or wherever preferred) and enter the command below:

ngrok http 7071 -host-header=localhost
  • http: This specifies the protocol to watch incoming traffic.
  • 7071: This is the port number. Default is 80. If we debug Azure Functions, set this port number to 7071.
  • -host-header=localhost: Without this option, ngrok captures the traffic but can’t reach the local debugging environment.

After typing the command above, we can see the screen like below:

As we can see, external traffic hitting the endpoint of http://b46c7c81.ngrok.io can reach our locally running Azure Functions app. Let’s run Azure Functions in a local debugging mode.

Azure Functions is now up and running. Run Postman and send a request to the endpoint that ngrok has generated.

We can now confirm that the code stops at the break point within Visual Studio.

ngrok also provides a replay feature. If we access http://localhost:4040, we can see how ngrok has captured all requests since it was started.

The free version of ngrok changes the endpoint every time it runs, and the run history is also blown away. But this doesn’t matter for our purpose.

So far, we have briefly looked at a couple of tools for sneaking a peek at webhook traffic for debugging purposes. If we utilise these tools well, we can more easily check how API request calls are made. They are definitely worth noting.

Notes for Logic Apps around Webhook Actions

Azure Logic Apps (Logic Apps) is one of the serverless services that Azure offers; the other one is, of course, Azure Functions (Functions). Logic Apps consists of many connectors and triggers to interconnect services outside Azure. The webhook connector is one of them, and it has characteristics unique among the others. In this post, we briefly look at some tips we should know when using webhook actions in a Logic Apps workflow.

What is Webhook?

A webhook is basically an API. We register an API endpoint with a certain service. When an event occurs on the service, it invokes the API endpoint and sends a payload to it through the request body. That’s how a webhook API works. Here are the key points:

  • In order to use a webhook, we need to register it onto somewhere. The registration process is called Subscription. We may or may not explicitly perform the subscription process.
  • During the subscription process, we store an endpoint URL, which is called the Callback. After the subscription, when an event occurs, the callback is invoked.
  • When the callback is invoked, data called the Payload is also sent through its request body. That means we always use the POST method rather than any other HTTP method.
  • If we don’t need the webhook any longer, we need to deregister it, which is called Unsubscription. This is performed either explicitly or implicitly.

Those four behaviours are the most distinctive characteristics of a webhook. Let’s take an example using Slack and GitHub. We have a simple scenario around those two services:

As a developer, I want to post a notification on a Slack channel when I push a commit to a GitHub repository so that other team members get notified of my code update.

  • Get an endpoint URL from Slack to send a notification, which will be registered onto GitHub. This is the Subscription.
  • The Slack endpoint URL is considered as Callback URL.
  • Through the callback URL, a POST request is sent when a new push is made. The request contains the push commit details, which is the Payload.
  • When we don’t need the notification, we simply delete the callback URL from the GitHub repository. This is the Unsubscription.

Does that make more sense? Let’s apply these concepts to Logic Apps.

Webhook Action on Logic Apps

A webhook action can be found like:

If we select it, we are asked to enter its details. Here are the bits and pieces we are talking about in this post.

  1. At the time of this writing, the picture only indicates two required fields – Subscribe - Method and Subscribe - URI. However, this is NOT true. We actually have to fill in at least these FIVE fields:
    • Subscribe - Method
    • Subscribe - URI
    • Subscribe - Body
    • Unsubscribe - Method
    • Unsubscribe - URI
  2. Both Subscribe - Method and Unsubscribe - Method only accept POST, nothing else. As in the picture above, if we select any other method, GET for example, instead of POST, the Logic App will throw the error below. The dropdown box suggests we can select any method, but this is NOT true. DO NOT get confused by it.

  3. Subscribe - Body is actually a de-facto required field as we need to pass a callback URL through its payload in JSON format. Therefore, the callback URL MUST be sent using the WDL (Workflow Definition Language) function, @{listCallbackUrl()} via the request body.

  4. Both the Logic App endpoint URL and the callback URL from the webhook action require a SAS token, which is automatically generated. This SAS token is used for authentication, so there is no need for an additional Authorization header. If we add the Authorization header by accident, the DirectApiAuthorizationRequired error will occur.

  5. When a Logic App reaches the webhook action within its workflow, it calls the subscription endpoint and waits for the callback to send a payload. The maximum waiting period is 90 days. The callback only expects a POST request.

  6. When the callback sends a payload, the payload is considered the response of the webhook action. Let’s have an example like below. The Logic App subscribes to a Function endpoint. The Function sends a request back to the Logic App through the callback URL, with a payload. The initial request body coming from the Logic App has a productId property, but the callback request body from the Function changes it to objectId. As a result, the webhook action receives the new payload containing the objectId property.

    // Read the incoming payload from the Logic App (it contains productId and callbackUrl).
    dynamic data = await req.Content.ReadAsAsync<object>();
    var serialised = JsonConvert.SerializeObject((object)data);
    
    using (var client = new HttpClient())
    {
      // Build a new payload that renames productId to objectId, then post it back
      // to the Logic App through the callback URL.
      var payload = JsonConvert.SerializeObject(new { objectId = (int) data.productId });
      var content = new StringContent(payload);
      await client.PostAsync((string) data.callbackUrl, content);
    }
    
    return req.CreateResponse(HttpStatusCode.OK, serialised);
    
  7. Regardless of whether the callback request is successful or not, whenever the callback is invoked and captured by the Logic App, its payload always becomes the response message of the webhook action.

  8. Only after the callback response is captured by the Logic App are the subsequent actions triggered. If those following actions use the webhook’s response message, it will be the payload from the callback, as described earlier.

  9. Unsubscription is only executed when the Logic App run is cancelled or the 90-day timeout occurs. It works like a garbage collector.

  10. Unsubscription doesn’t have to fire a callback.

So far, we have briefly taken a look at the webhook action of Logic Apps. Documentation around this still needs improvement, so it is worth noting these characteristics when using Logic Apps. By doing so, we will be able to reduce the number of trial-and-error efforts.

API Mocking for Developers

APIs are the most common way to exchange messages in a microservices architecture. There are actually two different approaches to API development: one is called Model First and the other is called Design First. Usually the latter, AKA Spec-Driven Development (SDD), is preferred over the former.

When is the Model First approach useful? If you are running legacy API applications, that would be a good fit for this approach. If those systems are well documented, API documents can be easily extracted by tools like Swagger, which has now been renamed Open API. There are many implementations of Swagger, Swashbuckle for example, so it’s very easy to extract an API spec document using those tools.

What if we are developing a new API application? Should we develop the application and extract its API spec document as above? Of course we can do it like this; there’s no problem at all. But if the API spec is updated, what should we do then? The application should be updated first and then the updated API spec extracted, which might be expensive. In this case, the Design First approach is useful because we can nail down the whole API spec before the actual implementation, which reduces time and cost and achieves better productivity. Here’s a rough diagram of how the SDD workflow looks:

  1. Design API spec based on the requirements.
  2. Simulate the API spec.
  3. Gather feedback from API consumers.
  4. Validate that the API spec meets the requirements.
  5. Publish the API spec.

There is an interesting point here. How can we run/simulate the API without the actual implementation? This is where API mocking comes in. By mocking APIs, front-end developers, mobile developers or other back-end developers who actually consume the APIs can simulate the expected results, regardless of whether the APIs have been implemented or not. Mocking those APIs also makes it easier for developers to give feedback.

In this post, we will have a look at how API mocking features work in different API management tools such as MuleSoft API Manager, Azure API Management and Amazon API Gateway, and discuss their pros and cons.

MuleSoft API Manager + RAML

MuleSoft API Manager is a strong supporter of the RAML (RESTful API Modelling Language) spec. RAML spec version 0.8 is the most popular, but spec version 1.0 has recently been released. RAML basically follows the YAML format, which brings more readability to API spec developers. Here’s a simple RAML-based API spec document.

According to the spec document, the API has two endpoints, /products and /products/{productId}, and they define GET, POST, PATCH and DELETE methods respectively. You might have noticed which part of the spec document will be used for mocking: each response body has an example node. These nodes are used for mocking in MuleSoft API Manager. Let’s see how we can mock both API endpoints.

First of all, login to Anypoint website. You can create a free trial account, if you want.

Navigate to API Manager by clicking either the button or icon.

Click the Add new API button to add a new API and fill the form fields.

After creating an API, click the Edit in API designer link to define APIs.

We have already got the API spec in RAML, so just import it.

The designer screen consists of three sections – left, centre, and right. The centre section shows the RAML document itself while the right section displays how RAML is visualised as API document. Now, click the Mocking Service button at the top-right corner to activate the mocking status. If mocking is enabled, the original URL is commented out and a new mocking URL is added like below:

Of course, when mocking is deactivated, the mocking URL disappears and the original URL is uncommented. With this mocking URL, we can actually send requests through Postman for simulation. In addition, developers consuming this API don’t have to wait until it’s implemented – they can just use it, and the API developer can work in parallel with front-end/mobile developers. This is a screenshot of using Postman to send API requests.

As defined in the example nodes of the RAML spec document, the pre-populated result comes back. In the same way, other endpoints and/or methods return their mocked results.

So far, we have looked at how MuleSoft API Manager handles RAML and API mocking. It’s very easy to use: we just imported a RAML spec document and switched on the mocking feature, and that’s it. Mocking can’t be easier than this. It also supports the Swagger 2.0 spec; when we import a Swagger document, it is automatically converted to a RAML 1.0 document. However, in the API designer, we still have to use RAML. It would be great if MuleSoft API Manager supported editing Swagger spec documents, a de-facto standard nowadays, out of the box sooner rather than later.

There are a couple of downsides to using API Manager. First, we can’t have precise control over individual endpoints and methods.

  • “I want to mock only the GET method on this specific endpoint!”
  • “Nah, it’s not possible here. Try another service.”

Also, the API designer changes the base URL of the API if we activate the mocking feature. That might be critical for other developers consuming the API during their development, as they have to change the API URL once the API is implemented.

Now, let’s move on to an example that supports Swagger natively.

Azure API Management + Swagger

Swagger has now become Open API, and spec version 2.0 is the most popular. Recently a new spec version 3.0 has been released for preview. Swagger is versatile – i.e. it supports both YAML and JSON formats. Therefore, we can design the spec in YAML and save it as a JSON file so that Azure API Management can import the spec document natively. Here’s the same API document written in YAML, based on the Swagger spec.

It looks very similar to RAML, and includes an examples node in each endpoint and/or method. Therefore, we can easily activate the mocking feature. Here’s how.

First of all, we need to create an API Management instance in the Azure Portal. Provisioning takes about half an hour. Once the instance is ready, click the APIs - PREVIEW blade on the left-hand side to see the list of APIs.

We can also see tiles for API registration. Click the Open API specification tile to register API.

Upload a Swagger definition file in JSON format, say swagger.json, and enter appropriate name and suffix. We just used shop as a prefix for now.

So, that’s it! We just uploaded the swagger.json file, and that completes the API definition on API Management. Now we need to mock an endpoint. Unlike MuleSoft API Manager, Azure API Management can handle mocking on endpoints and methods individually. Mocking can be set in the Inbound processing tile, as it intercepts the request before it hits the back end. Mocking can also be set at a global level rather than the individual level. For now, we simply set up the mocking feature only on the /products endpoint with the GET method.

Select /products - GET at the left-hand side and click the pencil icon at the top-right corner of the Inbound Processing tile. Then we’re able to see the screen below:

Click the Mocking tab, select the Static responses option on the Mocking behavior item, and choose the 200 OK option of the Sample or schema responses item, followed by Save. We can expect the content defined under the examples node. Once saved, the screen will display something like this:

In order to use the API Management, we have to send a special header key in every request. Go to the Products - PREVIEW blade and add the API that we’ve defined above.

Go to the Users - PREVIEW blade to get the Subscription Key.

We’re ready now. In Postman, we can see the result like:

The subscription key has been sent through the header with a key of Ocp-Apim-Subscription-Key and the result was what we expected above, which has already been defined in the Swagger definition.

So far, we have used Azure API Management with Swagger definition for mocking. The main differences between MuleSoft API Manager and Azure API Management are:

  • Azure API Management doesn’t change the mocking endpoint URL, while MuleSoft API Manager does. That’s actually very important for front-end/mobile developers because they aren’t bothered with changing the base URL – it’s the same URL returning either a mocked or an actual result, based on the setting.
  • Azure API Management can mock each endpoint and method individually, so we only mock the necessary ones.

However, the downside of using Azure API Management is the cost. It costs more than $60/month on the Developer pricing tier, which is the cheapest. MuleSoft API Manager is literally free to use from the mocking perspective (of course, we have to pay for other MuleSoft services).

Well, what other service can we use together with Swagger? Of course, there is Amazon AWS. Let’s have a look.

Amazon API Gateway + Swagger

Amazon API Gateway also uses Swagger for its service. Login to the API Gateway Console and import the Swagger definition stated above.

Once imported, we can see all API endpoints. We choose the /products endpoint with the GET method. Select the Mock option for the Integration type item, and click Save.

Now we’re at the Method Execution screen. On the rightmost side of the screen, it says Mock Endpoint, which is where this API will hit. Click the Integration Request tile.

We can confirm this is a mocked endpoint. Right below, select the When there are no templates defined (recommended) option for the Request body passthrough item.

Go back to the Method Execution screen and click the Integration Response tile.

There’s already a definition for the HTTP status code 200 in Swagger, which is automatically shown on the screen. Click the triangle icon on the left-hand side.

Open the Body Mapping Templates section and click the application/json item. By doing so, we’re able to see the sample data input field. We need to put a JSON object as a response example by hand.

I couldn’t find any other way to automatically populate the sample data. In particular, an array-type response object can’t be auto-generated. This is questionable; why doesn’t Amazon API Gateway accept a valid object? If we want to avoid this situation, we have to update our Swagger definition, which makes it vendor dependent.

Once the sample data is updated, save it and go back to the Method Execution screen again. We are ready to use the mocking feature. Wait – one more step. We need to publish the API for public access.

Uh oh… we found another issue here. In order to publish (or deploy) the API, we have to set up the Integration Type of all endpoints and methods INDIVIDUALLY! We can’t skip a single endpoint/method, and we can’t set up a global mocking feature either. This is a simple example that only contains four endpoints/methods, but in a real-life scenario there could be hundreds of API endpoints/methods in one application. How would we set up each of them individually then? There’s no good way here.

Anyway, once deployment completes, API Gateway gives us a URL to access the API Gateway endpoints. Use Postman to see the result:

So far, we have looked at Amazon API Gateway for mocking. Let’s wrap up this post.

  • Global API Mocking: MuleSoft API Manager provides a one-click button, while Amazon API Gateway doesn’t provide such a feature.
  • Individual API Mocking: Both Azure API Management and Amazon API Gateway are good for handling individual endpoints/methods, while MuleSoft API Manager can’t do it. However, Amazon API Gateway doesn’t support returning mocked data well, especially for array-type responses; Azure API Management supports this perfectly.
  • Upload Automation of API Definitions: With Amazon API Gateway, we have to manually update several places after uploading Swagger definition files, such as the examples used as mocked data. On the other hand, both Azure API Management and MuleSoft API Manager support this perfectly; there’s no need for manual handling after uploading definitions.
  • Cost of API Mocking: Azure API Management is expensive from this perspective. MuleSoft provides a virtually free account for mocking, and Amazon API Gateway offers a free trial for the first 12 months.

We have so far briefly looked at how we can mock our APIs using spec documents. As we saw above, we don’t need to write code for this feature; mocking relies purely on the spec document. We also looked at how mocking can be done on each platform – MuleSoft API Manager, Azure API Management and Amazon API Gateway – and discussed the merits and demerits of each service from the mocking perspective.

Know Your Cloud Resource Costs on Azure

Organisations used to invest in IT infrastructure mostly for computers, networks or data centres. Over time, they spent their budget on hosting space. Nowadays, in cloud environments, they mostly spend their funds on purchasing computing power. Here’s a simple diagram of the cloud computing evolution. From left to right, expenditure shifts from infrastructure to computing power.

In the cloud environment, when we need resources, we just create and use them, and when we don’t need them any longer, we just delete them. But let’s think about this. If your organisation runs dev, test and production environments in the cloud, the cost of resources running in the dev or test environments is likely to be overlooked unless carefully monitored. In this case, your organisation might receive an invoice with a massive cost! That has to be avoided. In this post, we are going to have a look at the Azure Billing API that was released in preview and build a simple application to monitor costs in an effective way.

The sample code used for this post can be found here.

Azure Billing API Structure

There are two distinct APIs for Azure Billing – one is the Usage API and the other is the Rate Card API. By combining the two, we can calculate how much we spent during a particular period.

Usage API

This API is based on a subscription. Within a subscription, we can send a request to find out how many resources we used in a specified period. Here are the parameters we can use in these requests.

  • ReportedStartTime: Starting date/time reported in the billing system.
  • ReportedEndTime: Ending date/time reported in the billing system.
  • Granularity: Either Daily or Hourly. Hourly returns a more detailed result but takes far longer.
  • Details: Either true or false. This determines whether usage is broken down to the instance level. If false is selected, all usage of the same instance type is aggregated.

Here’s an interesting point about the term Reported. When we USE cloud resources, that can be interpreted from two different perspectives. The term USE might mean that the resources were actually used at the specified date/time, or that the resource usage events were reported to the billing system at the specified date/time. This distinction exists because Azure is basically a distributed system scattered all around the world, and depending on the data centre where the resources are situated, the actual usage date/time can be reported to the billing system in a delayed manner. Therefore, even though we send requests based on the reported date/time, the responses containing usage data show the actual usage date/time.

Rate Card API

When you open a new Azure subscription, you might have noticed a code looking like MS-AZR-****P. Have you seen that code before? This is called the Offer Durable ID and, based on it, different rates apply to resources. Please refer to this page to see more details about the various types of offers. In order to send requests to this API, we can use the following query parameters.

  • OfferDurableId: This is the offer Id. eg) MS-AZR-0017P (EA Subscription)
  • Currency: Currency that you want to look for. eg) AUD
  • Locale: Locale of your search region. eg) en-AU
  • Region: Two-letter ISO country code of the region where you purchased this offer. eg) AU

Therefore, in order to calculate the actual spend, we need to combine these two API responses. Fortunately, there’s a good NuGet library called CodeHollow.AzureBillingApi, so we just use it to figure out Azure resource consumption costs.

Scenario

Kloud, as a cloud consulting firm, offers all consultants unrestricted access to the company’s subscription so that they can create resources to develop/test scenarios for their clients. However, once resources are created, there’s a high chance that they aren’t destroyed in a timely manner, which brings about unnecessary spending. Therefore, the management team has decided to perform cost control by resource group, using tags to 1) assign resource group owners, 2) set a total spend limit, and 3) set a daily spend limit. By virtue of these tags, resource group owners are notified via email when the cost approaches 90% of the total spend limit and when it reaches the total spend limit. They also get notified if the cost exceeds the daily spend limit, so they can take appropriate action for their resource groups.

Sounds simple, right? Let’s code it!

Once the application is written, it should run daily to aggregate all costs, store them in a database, and send notifications to resource group owners whose resource groups meet the conditions above.

Writing Common Libraries

The common libraries consist of three parts. Firstly, they call the Azure Billing API and aggregate the data by date and resource group. Secondly, they store the aggregated data in a database. Finally, they send notifications to the owners of resource groups that exceed either the total spend limit or the daily spend limit.

Azure Billing API Call & Data Aggregation

CodeHollow.AzureBillingApi can reduce a huge amount of API calling work. A simple implementation might look like:

First of all, as in the code above, we need to fetch all resource usage/cost data; then, as below, that data needs to be grouped by date and resource group.
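
The exact types returned by the library aren’t shown in this excerpt, so the sketch below groups a hypothetical flat list of cost records by date and resource group:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // A hypothetical flattened cost record built from the billing API responses.
    public class CostRecord
    {
        public DateTime UsageDate { get; set; }
        public string ResourceGroupName { get; set; }
        public decimal Cost { get; set; }
    }

    public static class CostAggregator
    {
        // Groups the records by date and resource group and sums the cost of each group.
        public static List<CostRecord> Aggregate(IEnumerable<CostRecord> records)
        {
            return records
                .GroupBy(r => new { r.UsageDate.Date, r.ResourceGroupName })
                .Select(g => new CostRecord
                {
                    UsageDate = g.Key.Date,
                    ResourceGroupName = g.Key.ResourceGroupName,
                    Cost = g.Sum(r => r.Cost)
                })
                .ToList();
        }
    }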

We now have all the cost-related data per resource group. We then need to fetch tag values from the resource groups using another API call and merge them with the data previously populated.

We can look up all resource groups in a given subscription like above, and merge this result with the cost data that we previously found, like below.

Data Storage

This is the simplest part. Just use Entity Framework and store data into the database.

We’ve now implemented the data aggregation part.

Notification

First of all, we need to fetch the resource groups that meet the conditions, which is not that hard to write.

The code above is self-explanatory: it only returns resource groups that 1) approach the total spend limit or 2) exceed the total spend limit, or 3) exceed the daily spend limit. It works well, even though it looks smelly.
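
A sketch of that filtering logic, assuming a hypothetical per-resource-group aggregate that carries the tag-derived limits:

    using System.Collections.Generic;
    using System.Linq;

    public class ResourceGroupCost
    {
        public string Name { get; set; }
        public string OwnerEmail { get; set; }
        public decimal TotalCost { get; set; }
        public decimal DailyCost { get; set; }
        public decimal TotalSpendLimit { get; set; }
        public decimal DailySpendLimit { get; set; }
    }

    public static class CostFilter
    {
        public static IEnumerable<ResourceGroupCost> GetResourceGroupsToNotify(IEnumerable<ResourceGroupCost> groups)
        {
            return groups.Where(g =>
                g.TotalCost >= g.TotalSpendLimit * 0.9m ||   // approaching the total spend limit
                g.TotalCost >= g.TotalSpendLimit ||          // total spend limit exceeded
                g.DailyCost >= g.DailySpendLimit);           // daily spend limit exceeded
        }
    }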

The following code bits show how to send notifications to the resource group owners.

It only writes alerts to the screen, but we could plug in SendGrid for email notifications or Twilio for SMS alerts here.

Now we’ve got the basic application structure. How can we execute it, by the way? We might have two approaches – Azure WebJobs and Azure Functions. Let’s move on.

Monitoring Application on Azure WebJob

A console application might be the simplest way for this purpose. Once the console app is built, it can be deployed to an Azure WebJob straight away. Here’s the simple console application code.
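
A sketch of the entry point – AggregatorService, ReminderService and ProcessAsync() are placeholder names following the description below:

    using System;

    public static class Program
    {
        public static void Main(string[] args)
        {
            // Fetch, aggregate and store today's costs.
            var aggregator = new AggregatorService();
            aggregator.ProcessAsync().Wait();

            // Notify owners of resource groups that are over their limits.
            var reminder = new ReminderService();
            reminder.ProcessAsync().Wait();

            Console.WriteLine("Cost monitoring run completed.");
        }
    }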

The Aggregator service collects and stores data, and the Reminder service sends alerts to resource group owners. In order to deploy this as an Azure WebJob, we need to create two extra files, run.cmd and settings.job.

  • settings.job: This contains the CRON expression for the scheduled job. For example, if this WebJob runs every night at 00:20, the JSON object might look like the sketch after this list.

  • run.cmd: When this WebJob is run, it always looks up run.cmd first, which is a simple batch command file. Therefore, if necessary, we can enter the actual executable command with appropriate arguments into this file.
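
For reference, a settings.job scheduling the WebJob at 00:20 every night could look like this (WebJobs CRON expressions use six fields – seconds, minutes, hours, day, month, day of week):

    {
      "schedule": "0 20 0 * * *"
    }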

That’s how we can use Azure WebJob for monitoring.

Monitoring Application on Azure Function

We can use Azure Functions instead. But in this case we HAVE TO make sure:

Azure Functions instance MUST be with App Service Plan, NOT Consumption Plan

Basically, this app runs for 1-2 minutes at the shortest, or 30-40 minutes at the longest. This execution time is not affordable on the Consumption Plan, which charges based on execution time. On the other hand, as we have already paid for the App Service Plan, we don’t need to pay extra for the Function instance if we create it under that App Service Plan.

A Timer Trigger function suits our purpose. Using the precompiled Azure Functions approach is also helpful, and the function code might look like:
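
A sketch of the precompiled Timer Trigger, reusing the placeholder services from the WebJob sketch above:

    using System;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Host;

    public static class CostMonitoringTimerTrigger
    {
        public static void Run(TimerInfo myTimer, TraceWriter log)
        {
            log.Info($"Cost monitoring started at {DateTime.UtcNow:O}");

            var aggregator = new AggregatorService();
            aggregator.ProcessAsync().Wait();

            var reminder = new ReminderService();
            reminder.ProcessAsync().Wait();

            log.Info("Cost monitoring completed.");
        }
    }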

Here’s the function.json for this Timer Trigger one:

Here we have shown how to quickly write a simple application for cost monitoring, using the Azure Billing API. Cloud resources can certainly be used effectively and efficiently, but the flip side is, of course, that we have to be very careful not to be wasteful. Therefore, implementing a monitoring application helps prevent unwanted cost leaks.

Precompiled Azure Functions Revisited

Since Microsoft released a preview version of Visual Studio Tools for Azure Functions in December 2016, it’s been reported to be very buggy. The current roadmap says the tooling won’t be finalised until .NET Standard 2.0 is released. Therefore, the Azure App Service Team published an article on utilising ASP.NET Web Application projects in the meantime. Even though the approach requires a lot of manual initial setup, it is literally the only way to work with Azure Functions in Visual Studio without the tooling.

In this post, as a complement to that article, we are going to build a simple Web API using HTTP Triggers and implement a simple CQRS pattern using Queue Triggers, with the precompiled approach. I hope this post offers more details.

The sample code used for this post can be found here.

SIDE NOTES:

  • We use VS2015 in this post. If you want to use VS2017, that would be fine.
  • We only use .NET Framework version 4.6 as Azure Functions uses this version.

Benefits of Precompiled Azure Functions

There are several benefits to using Azure Functions that are deployed as precompiled .dll files:

  1. We can use full features on Visual Studio, including IntelliSense.
  2. We can easily write unit test codes.
  3. We can easily attach function codes to existing CI/CD pipelines.
  4. We can easily migrate the existing codebase with barely modifying them.
  5. We don’t need project.json for NuGet package management.
  6. We can reduce the total cold start time by removing on-the-fly compilation when requests hit the Functions.

Let’s look at the scenario we’re using in this post.

Web APIs to Create or View Product Details

We’re creating two Web API endpoints – one to add/update product details, and the other to view product details. We also add another Function code using Azure Storage Queue to implement CQRS patterns.

Entity Framework Code-First Approach for Product Details

First of all, we create a class library to take care of database transactions using Entity Framework. Here are a simple product class and database context.
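
A sketch of the entity and context – the actual classes live in the PrecompiledSample.EntityModels project, and the property names here are assumptions:

    using System.Data.Entity;

    public class Product
    {
        public int ProductId { get; set; }
        public string Name { get; set; }
        public decimal Price { get; set; }
    }

    public class ProductDbContext : DbContext
    {
        public ProductDbContext() : base("name=ProductDbContext")
        {
        }

        public DbSet<Product> Products { get; set; }
    }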

We can find this at the PrecompiledSample.EntityModels project.

API Request/Response Object

Next, we create a simple DTO, ProductModel.cs, so as not to directly expose the database entity.
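
A sketch of the DTO – the actual class lives in the PrecompiledSample.Models project:

    public class ProductModel
    {
        public int ProductId { get; set; }
        public string Name { get; set; }
        public decimal Price { get; set; }
    }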

We can find this at the PrecompiledSample.Models project.

Service Layer

Lastly, when we develop Web API applications, we usually implement service layers that are called by controllers. Of course, there are no controllers in Azure Functions, so those service layers are instantiated within the Run method of the Azure Functions code.

We can find this at the PrecompiledSample.Services project.

So far, have you found any difference? Probably not. Now it’s time to write the Azure Functions code.

Creating Web Application Project for Azure Functions

As Azure Functions basically run on top of an Azure Web App instance, we can start with an empty ASP.NET Web Application project, targeting .NET Framework 4.6, within Visual Studio.

Of course there’s nothing in the project, except packages.config and Web.config.

Install NuGet packages below to run Azure Functions within our Web Application project:

For our simple Web API, for database transactions, we need more NuGet packages:

We’ve completed our basic setup for Azure Functions code. Let’s move on.

Writing Azure Functions

Let’s think about the Azure Functions app structure. Each folder of the Web Application project works as an individual function. Therefore, we can simply create folders and put function.json in each of them. Let’s have a look.

  • AddProductHttpTrigger: This is an HTTP Trigger that works as an API endpoint. It takes POST requests, sends the message body to an Azure Storage Queue and returns an HTTP status code of 202 (Accepted) right away.
  • AddProductQueueTrigger: This is a Queue Trigger that handles database transactions. It watches the Azure Storage Queue, takes messages from it and processes them into the database.
  • GetProductHttpTrigger: This is an HTTP Trigger that works as an API endpoint. It takes GET requests and returns product details with an HTTP status code of 200 (OK).

Here’s a high-level diagram of how a POST request is processed through the Functions.

We mentioned function.json just above. What do we put inside it, then? The easiest way to check its content is to just create a function on an Azure Functions instance. Let’s have a look.

AddProductHttpTrigger

If we create an HTTP Trigger function from Azure Portal, we can find the function.json. It looks like:

There are two bindings defined – one input binding and one output binding. The input binding has a type of httpTrigger, while the output binding has a type of http. Copy it all and paste it into the function.json in our Visual Studio project, and add another output binding with the type of queue. We also need to provide the precompiled .dll file’s information, like below:
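
A sketch of what the finished function.json could look like – the queue name, storage connection setting, assembly path and entry point below are assumptions following the project names used in this post:

    {
      "bindings": [
        {
          "type": "httpTrigger",
          "direction": "in",
          "name": "req",
          "authLevel": "function",
          "methods": [ "post" ]
        },
        {
          "type": "http",
          "direction": "out",
          "name": "res"
        },
        {
          "type": "queue",
          "direction": "out",
          "name": "productQueueItem",
          "queueName": "product-queue",
          "connection": "AzureWebJobsStorage"
        }
      ],
      "disabled": false,
      "scriptFile": "..\\bin\\PrecompiledSample.Functions.dll",
      "entryPoint": "PrecompiledSample.Functions.AddProductHttpTrigger.Run"
    }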

As defined above, we need AddProductHttpTrigger.cs.

Copy the code from Run.csx in the Azure Portal and paste it into the file, then modify it like below:
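A sketch of what the modified code might look like, assuming the bindings above (the namespace, class and queue parameter names are illustrative):

    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Host;

    namespace PrecompiledSample.Functions
    {
        public static class AddProductHttpTrigger
        {
            public static async Task<HttpResponseMessage> Run(
                HttpRequestMessage req,
                IAsyncCollector<string> productQueueItem,
                TraceWriter log)
            {
                log.Info("AddProductHttpTrigger processed a request.");

                // Send the raw request body straight to the queue.
                var body = await req.Content.ReadAsStringAsync();
                await productQueueItem.AddAsync(body);

                // Return 202 (Accepted) immediately; the queue trigger does the real work.
                return req.CreateResponse(HttpStatusCode.Accepted);
            }
        }
    }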

What are the differences between our code and Run.csx?

  1. There’s no #r directive. This is because the function code is not a script anymore.
  2. There are namespace and class definitions. The script-style function code didn't allow those definitions around the Run method.
  3. Message body from the request is directly sent to the queue.
  4. At the same time, it returns an HTTP Status Code of 202 (Accepted). This is more semantically correct than returning 200 (OK), as the function is non-blocking and asynchronous.

We've now created the API endpoint to create a resource, which corresponds to the C (Command) of the CQRS pattern.

AddProductQueueTrigger

When we create a sample Queue Trigger function in the portal, we can see its function.json. Copy and paste it into our function.json in Visual Studio and define the entry point like:
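Something like this, again with placeholder queue, assembly and entry point names:

    {
      "bindings": [
        {
          "name": "productQueueItem",
          "type": "queueTrigger",
          "queueName": "product-queue",
          "connection": "AzureWebJobsStorage",
          "direction": "in"
        }
      ],
      "disabled": false,
      "scriptFile": "..\\bin\\PrecompiledSample.Functions.dll",
      "entryPoint": "PrecompiledSample.Functions.AddProductQueueTrigger.Run"
    }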

Then, create the AddProductQueueTrigger.cs file.

Copy the code from Run.csx, paste it into the file, and modify it like below:
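A sketch of what it might end up as, using the hypothetical service layer and context from earlier (the connection string name is also a placeholder):

    using System.Configuration;
    using System.Threading.Tasks;
    using Microsoft.Azure.WebJobs.Host;
    using Newtonsoft.Json;
    using PrecompiledSample.EntityModels;
    using PrecompiledSample.Models;
    using PrecompiledSample.Services;

    namespace PrecompiledSample.Functions
    {
        public static class AddProductQueueTrigger
        {
            public static async Task Run(string productQueueItem, TraceWriter log)
            {
                log.Info($"AddProductQueueTrigger processed: {productQueueItem}");

                // The queue message is the JSON body posted to AddProductHttpTrigger.
                var model = JsonConvert.DeserializeObject<ProductModel>(productQueueItem);

                // The connection string comes from App Settings in Azure, or appsettings.json locally.
                var connectionString = ConfigurationManager.ConnectionStrings["ProductDbContext"].ConnectionString;

                using (var context = new ProductDbContext(connectionString))
                {
                    var service = new ProductService(context);
                    await service.AddProductAsync(model);
                }
            }
        }
    }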

This function actually does database transactions. Let’s have a look:

  1. While normal web applications refer to Web.config for the database connection string, Azure Functions is not designed to read from it. Instead, the connection string is defined in the App Settings blade of the Azure Functions Portal. In spite of this, we can still write code the same way: ConfigurationManager.ConnectionStrings["NAME"].ConnectionString.
  2. If we want additional configuration values, use ConfigurationManager.AppSettings["KEY"], defined in the App Settings blade of the Azure Functions Portal. We can't use a custom configuration section for this purpose. Alternatively, if you really want custom configuration settings, create a JSON file, mysettings.json for example, and deserialise it using Json.NET.
  3. It uses the service layer instance written in another project, PrecompiledSample.Services. If you are concerned about dependencies, consider the service locator pattern. Testing Precompiled Azure Functions explains how to apply the service locator pattern in Azure Functions.

So far, we have written function codes for resource creation.

GetProductHttpTrigger

Copy the function.json from the HTTP Trigger function created earlier in the portal, paste it into the function.json in Visual Studio, and modify it like:
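For example (the assembly name and entry point are placeholders, and restricting the methods to GET is optional):

    {
      "bindings": [
        {
          "authLevel": "function",
          "name": "req",
          "type": "httpTrigger",
          "direction": "in",
          "methods": [ "get" ]
        },
        {
          "name": "res",
          "type": "http",
          "direction": "out"
        }
      ],
      "disabled": false,
      "scriptFile": "..\\bin\\PrecompiledSample.Functions.dll",
      "entryPoint": "PrecompiledSample.Functions.GetProductHttpTrigger.Run"
    }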

Then, create the GetProductHttpTrigger.cs file.

Here’s the code for it:
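Again, a sketch using the hypothetical service layer (all names are illustrative):

    using System;
    using System.Configuration;
    using System.Linq;
    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;
    using Microsoft.Azure.WebJobs.Host;
    using PrecompiledSample.EntityModels;
    using PrecompiledSample.Services;

    namespace PrecompiledSample.Functions
    {
        public static class GetProductHttpTrigger
        {
            public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
            {
                log.Info("GetProductHttpTrigger processed a request.");

                // Look up the "id" query parameter and use it as the ProductId.
                var id = req.GetQueryNameValuePairs()
                             .FirstOrDefault(q => string.Equals(q.Key, "id", StringComparison.OrdinalIgnoreCase))
                             .Value;

                int productId;
                if (!int.TryParse(id, out productId))
                {
                    return req.CreateResponse(HttpStatusCode.BadRequest);
                }

                var connectionString = ConfigurationManager.ConnectionStrings["ProductDbContext"].ConnectionString;
                using (var context = new ProductDbContext(connectionString))
                {
                    var service = new ProductService(context);
                    var product = await service.GetProductAsync(productId);

                    return req.CreateResponse(HttpStatusCode.OK, product);
                }
            }
        }
    }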

Let’s have a look at the code.

  1. This function code takes the GET request, so it looks up the id query parameter from the querystring and uses it as the ProductId value.
  2. It uses the service layer instance that is written in another project, PrecompiledSample.Services.
  3. It returns the resource details with HTTP Status Code of 200 (OK).

So far, we have created the resource lookup API, corresponding to the Q (Query) of the CQRS pattern, and we've completed all the necessary function code. If we integrate this Web Application project with the Azure Functions CLI, we can easily perform testing and debugging in our local environment with the same development experience that Visual Studio offers.

Setting Up Debugging Environment for Azure Functions within Visual Studio

We need two more tools for our local debugging experience within Visual Studio.

In order to install the Azure Functions CLI, we can simply run the command npm install --global azure-functions-cli. Note that both tools only work on Windows at the time of writing. Once both are installed, open the project property window like below:

Move to the Web tab and enter the necessary information.

When we integrate the Azure Functions CLI with the Web Application project, there are a few points we need to be aware of:

  • The installed location of node.js might be different:
    • If it is downloaded from https://nodejs.org, the CLI location would be C:\Users\[USERNAME]\AppData\Roaming\npm\node_modules\azure-functions-cli\bin\func.exe.
    • If it is installed through NVM, the CLI would be located at C:\Program Files\nodejs\node_modules\azure-functions-cli\bin\func.exe.
  • For Command line arguments, the value should be host start, which runs the WebJobs host on our local machine.
  • Working directory needs to be the absolute path of the Web Application project where the Azure Functions code resides.

Unfortunately, we can't use %-style environment variables here.

Finally, we need to add two .json files – appsettings.json and host.json. Unless something specific is needed, host.json stays empty. On the other hand, we need to put some details into appsettings.json like below:
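As a sketch: the storage values are the standard local-emulator ones, while the connection string name and value are placeholders for your own database:

    {
      "IsEncrypted": false,
      "Values": {
        "AzureWebJobsStorage": "UseDevelopmentStorage=true",
        "AzureWebJobsDashboard": "UseDevelopmentStorage=true"
      },
      "ConnectionStrings": {
        "ProductDbContext": "Server=(localdb)\\MSSQLLocalDB;Database=PrecompiledSample;Integrated Security=True;"
      }
    }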

As we're using the Azure Storage Emulator, the value UseDevelopmentStorage=true is used. The database connection string is also defined in this file.

Now, it's time for debugging! Set the Web Application project as a startup project and punch the F5 key; then we'll see a command prompt window running the Azure Functions CLI.

Let's send a POST request through a REST API testing tool like Postman. As we can see in the screenshot above, the endpoint URL for resource creation is http://localhost:7071/api/AddProductHttpTrigger, so send a POST request like below:
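For example, with a request body shaped like the hypothetical ProductModel above:

    POST http://localhost:7071/api/AddProductHttpTrigger
    Content-Type: application/json

    {
      "productId": 1,
      "name": "Surface Pro 4",
      "description": "A sample product for testing"
    }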

Then, the code stops at the breakpoint we set up in Visual Studio.

How does that feel? We get the same development experience for Azure Functions development. Now, it's time to deploy to an actual Azure Functions instance.

Deploying Azure Functions

We have developed Azure Function codes within a Web Application project. That means we will have the same deployment experience.

When we choose the publish menu like above, we can select either the Azure App Service option

or import a publish profile settings file downloaded from the Azure Functions instance.

Once we complete the deployment, we can confirm on the Azure Functions portal that all functions have been successfully deployed. Please note that we don't need to build a separate Web Application project for each function; putting everything in one project is sufficient.

Once deployed, let's send a POST request to the endpoint for the AddProductHttpTrigger function. The request body will flow through the process shown in the diagram earlier.

Once the data has been processed, let's check the result on the Azure Functions side:

And here’s the database query result.

So far, we have built a sample Azure Functions application using a Web Application project in Visual Studio. We have used .dll files instead of .csx files for the function code. With these precompiled .dll libraries, we have performed debugging and deployment as well. How did you find it? Is it easier to use? Does it give you the same development experience? It may not be easy at first glance. However, because this is the same approach we use to develop a web application, we can easily get used to it.

Hope this post helps you write Azure Functions code with full support from Visual Studio.

MuleSoft Anypoint Studio in High DPI Mode

MuleSoft's development platform, Anypoint Studio, is a great tool for service integration. However, if we're using an OS that supports High DPI mode, like Windows 10, its user experience is not great.

Icons are barely recognisable! By the way, this is NOT an issue on the Anypoint Studio side. Rather, it's a well-known issue that Eclipse has had at least since 2013. Anypoint Studio is built on top of Eclipse, so the same issue occurs here.

The version of Anypoint Studio is 6.2.3 at the time of writing.

So, how do we fix this? Here's the magic.

First of all, create a manifest file in the same location as AnypointStudio.exe and name it AnypointStudio.exe.manifest. Its content will look like:
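The key part of such a manifest, keeping just the DPI-awareness setting, is along these lines:

    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
      <asmv3:application xmlns:asmv3="urn:schemas-microsoft-com:asm.v3">
        <asmv3:windowsSettings xmlns="http://schemas.microsoft.com/SMI/2005/WindowsSettings">
          <dpiAware>false</dpiAware>
        </asmv3:windowsSettings>
      </asmv3:application>
    </assembly>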

The main point of the manifest file is the dpiAware element, whose value is set to false. With this manifest file in place, Anypoint Studio runs with the High DPI settings overridden.

Thanks to Roberto, one of my colleagues at Kloud Solutions, for extracting this manifest content by decompiling AnypointStudio.exe.

Next, we need to let the application know there is an external manifest file to read at runtime, by modifying the Windows Registry. Open the Registry Editor by running regedit, then follow the steps below (a command-line equivalent is shown after the list):

  1. Navigate to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\SideBySide\
  2. Add new DWORD (32-bit) Value
  3. Set the name to PreferExternalManifest
  4. Set the value to 1
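If you prefer the command line, running something like this from an elevated command prompt should be equivalent:

    reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\SideBySide" /v PreferExternalManifest /t REG_DWORD /d 1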

All good to go! Now run Anypoint Studio again and see how it looks compared to the previous screen.

Can you see the difference? Of course, this black magic comes with a couple of downsides:

  • The icons are a bit blurry, and
  • The workspace becomes smaller

But that's fine; it's not too bad as long as we can recognise all those icons. Hopefully, the next version of Eclipse will fix this issue.