Azure Functions or WebJobs? Where to run my background processes on Azure?


Introduction

Azure WebJobs have been quite a popular way of running background processes on Azure. They have been around since early 2014. When they were released, they were a true PaaS alternative to Cloud Services Worker Roles, bringing many benefits like the WebJobs SDK, easy configuration of scalability and availability, a dashboard, and more recently all the advantages of Azure Resource Manager and a very flexible continuous delivery model. My colleague Namit previously compared WebJobs to Worker Roles.

Meanwhile, Azure Functions were announced earlier this year (March 2016). Azure Functions, or “Functions Apps” as they appear on the Azure Portal, are Microsoft’s Function as a Service (FaaS) offering. With them, you can create microservices or small pieces of code which can run synchronously or asynchronously as part of composite and distributed cloud solutions. Even though they are still in the making (at the time of this writing they are in Public Preview version 0.5), Azure Functions are now an appealing alternative for running background processes. Azure Functions are being built on top of the WebJobs SDK but with the option of being deployed with a Serverless model.

So, the question is: which option better suits my requirements for running background processes? In this post, I will try to contrast each of them and shed some light so you can better decide between the two.

Comparing Available Triggers

Let’s see what trigger options we have for each:

WebJobs Triggers

WebJobs can be initiated by:

  • messages in Azure Service Bus queues or topics (when created using the SDK and configured to run continuously),
  • messages in an Azure storage queue (when created using the SDK and configured to run continuously),
  • blobs added to a container in an Azure Storage account (when created using the SDK and configured to run continuously),
  • a schedule configured with a CRON expression (if configured to run on-demand),
  • an HTTP call to the Kudu WebJobs API (when configured to run on-demand).

Additionally, with the SDK extensions, the triggers below were added:

  • file additions or changes in a particular directory (of the Web App File System),
  • queue messages containing a record id of Azure Mobile App table endpoints,
  • queue messages containing a document id of documents on DocumentDB collections, and
  • third-party WebHooks (requires the Kudu credentials).

Furthermore, the SDK 2.0 (currently in beta) is adding support to:

Azure Functions Triggers

Since Function Apps are built on the WebJobs SDK, most of the triggers listed above for WebJobs are also supported by Azure Functions. The options available at the time of writing this post are:

And the following are currently provided as experimental options:

  • files added in Cloud File Storage SaaS platforms, such as Box, DropBox, OneDrive, FTP and SFTP (SaaSFileTrigger Template).

I believe the main difference between the two in terms of triggers is the HTTP trigger option, as detailed below:

Authentication for HTTP Triggers

Because WebJobs are hosted on the Kudu SCM site, triggering them via an HTTP call requires the Kudu credentials, which is not ideal. Azure Function Apps, on the other hand, provide more authentication options, including Azure Active Directory and third-party identity providers like Facebook, Google, Twitter, and Microsoft accounts.

HTTP Triggers Metadata

Functions support exposing their API metadata based on the OpenAPI specification, which eases the integration with consumers. This option is not available for WebJobs.

Comparing Outbound Bindings

After comparing the trigger bindings for both options, let’s have a look at the output bindings for each.

WebJobs Outputs

The WebJobs SDK provides the following out-of-the-box output bindings:

Azure Functions Outputs

Function Apps can send their output to a number of different destinations. The options available at the time of writing are detailed below:

In regard to supported outputs, the only difference between the two is that Azure Functions can return a response to the caller when triggered via HTTP. Otherwise, they provide pretty much the same capabilities.

Supported Languages

Both WebJobs and Function Apps support a wide variety of languages, including bash (.sh), batch (.bat / .cmd), C#, F#, Node.js, PHP, PowerShell, and Python.

So there is no real difference here, other than that WebJobs require some of these languages to be compiled as an executable (.exe) program.

Tooling

WebJobs can be easily created using Visual Studio and the WebJobs SDK. For those WebJobs which are compiled console applications, you can run and test them locally, which always comes in very handy.

At the time of this writing, there is no way to program, compile and test your Functions with Visual Studio, so you might need to code all your functions using the online functions editor, which provides different templates. However, Functions being a very promising offering, I believe Microsoft will provide better tooling by the time they reach General Availability. In the meantime, here are an alpha version tool and a ScriptCs Functions emulator by my colleague Justin Yoo.

Managing “VM” Instances, Scaling, and Pricing

This is probably the most significant difference between WebJobs and Azure Functions.

WebJobs require you to create and manage an Azure App Service (Web App) and the underlying App Service Plan (a.k.a. server farm). If you want your WebJob to run continuously, you need at least one instance on a Basic App Service Plan to support “Always On”. For WebJobs you always pay for at least one VM instance (as PaaS), regardless of whether it is being used or sitting idle. For WebJobs, the App Service Plan Pricing applies. However, you can always deploy more than one App Service on one App Service Plan. If you have larger loads or load peaks and you need auto-scaling, then you would require at least a Standard App Service Plan.

Conversely, with Azure Functions and the Dynamic Service Plan, the creation and management of VM instances and the configuration of scaling are all abstracted away. We can write functions without caring about server instances and get the benefits of a Serverless architecture. Functions scale out automatically and dynamically as load increases, and scale back down as it decreases. Scaling up or down is performed based on the traffic, which depends on the configured triggers.

With Functions, you get billed only for the resources you actually use. The cost is calculated from the number of executions, the memory size, and the execution time, measured as gigabyte-seconds (GB-s). If you have background processes which don’t require a dedicated instance and you only want to pay for the compute resources in use, then a dynamic plan makes a lot of sense.
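To make the pricing model more tangible, below is a minimal PowerShell sketch of how such a consumption-style charge could be estimated. The rates used are hypothetical placeholders for illustration only, not Microsoft’s published prices; plug in the actual rates once they are confirmed at GA.

# Rough estimate of a consumption-based Functions bill (hypothetical rates for illustration)
$executionsPerMonth = 1000000          # total function executions in the month
$memoryGb           = 0.5              # memory allocated to the function, in GB
$avgDurationSec     = 2                # average execution time, in seconds

$ratePerMillionExecutions = 0.20       # hypothetical rate per million executions
$ratePerGbSecond          = 0.000016   # hypothetical rate per GB-second

# Total gigabyte-seconds consumed = executions x memory (GB) x duration (s)
$gbSeconds = $executionsPerMonth * $memoryGb * $avgDurationSec

$estimatedCost = (($executionsPerMonth / 1000000) * $ratePerMillionExecutions) +
                 ($gbSeconds * $ratePerGbSecond)

"Estimated monthly cost: {0:N2}" -f $estimatedCost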

It’s worth noting that if you already have an App Service Plan, which you are already managing and paying for, and has resources available, you can deploy your Functions on it and avoid extra costs.

One point to consider with the Dynamic Service Plan (Serverless model) is that as you don’t control which instances are hosting your Azure Functions, there might be a cold-startup overhead. This wouldn’t be the case for Functions running on your own App Service Plan (server farm) or WebJobs running as continuous on an “Always On” Web App where you have “dedicated” instances and can benefit from having your components loaded in memory.

Summary

As we have seen, because Azure Functions are built on top of the WebJobs SDK, they provide a lot of the previously available and already mature functionality, but with additional advantages.

In terms of triggers, Functions now provide HTTP triggers without requiring the use of publish profile credentials, and they bring the ability to integrate authentication with Azure AD or third-party identity providers. Additionally, Functions give the option to expose an OpenAPI specification.

In terms of binding outputs and supported languages, both provide pretty much the same.

In regard to tooling, at the time of writing, WebJobs allow you to develop and test offline with Visual Studio. It is expected that by the time Azure Functions reach General Availability, Microsoft will provide much better tools for them.

I would argue that the most significant difference between Azure Functions and WebJobs is the ability to deploy Functions on the new Dynamic Service Plan. With this service plan, you get the advantage of not worrying about the underlying instances or scaling; it’s all managed for you. This also means that you only pay for the compute resources you actually use. However, when needed, or when you are already paying for an App Service Plan, you have the option of squeezing your Functions into the same instances and avoiding additional costs.

Coming back to the original question, which technology better suits your requirements? I would say that if you prefer a “serverless” approach in which you don’t need or want to worry about the underlying instances and scaling, then Functions are the way to go (considering you are OK with the temporary lack of mature tools). But if you still favour managing your instances, WebJobs might be a better fit for you.

I will update this post once Functions reach GA and tools are there. Probably (just probably), Azure Functions will provide the best of both worlds and the question will only be whether to choose a Dynamic Service Plan or not. We will see. 🙂

Feel free to share your experiences or add comments or queries below.

Interacting with Azure Web Apps Virtual File System using PowerShell and the Kudu API

Introduction

Azure Web Apps or App Services are quite flexible regarding deployment. You can deploy via FTP, OneDrive or Dropbox, different cloud-based source control services like VSTS, GitHub, or Bitbucket, your on-premises Git, multiple IDEs including Visual Studio, Eclipse and Xcode, and MSBuild via Web Deploy or FTP/FTPS. And this list is very likely to keep expanding.

However, there might be some scenarios where you just need to update some reference files and don’t need to build or update the whole solution. Additionally, it’s quite common that corporate firewall restrictions leave you with only the HTTP or HTTPS ports open to interact with your Azure App Service. I had such a scenario where we had to automate the deployment of new public keys to an Azure App Service to support client certificate-based authentication. However, we were restricted by policies and firewalls.

The Kudu REST API provides a lot of handy features which support Azure App Service source code management and deployment operations, among others. One of these is the Virtual File System (VFS) API. This API is based on the VFS HTTP Adapter which wraps a VFS instance as an HTTP RESTful interface. The Kudu VFS API allows us to upload and download files, get a list of files in a directory, create directories, and delete files from the virtual file system of an Azure App Service; and we can use PowerShell to call it.

In this post I will show how to interact with the Azure App Service Virtual File System (VFS) API via PowerShell.

Authenticating to the Kudu API.

To call any of the Kudu APIs, we need to authenticate by adding the corresponding Authorization header. To create the header value, we need to use the Kudu API credentials, as detailed here. Because we will be interacting with an API related to an App Service, we will be using site-level credentials (a.k.a. publishing profile credentials).

Getting the Publishing Profile Credentials from the Azure Portal

You can get the publishing profile credentials by downloading the publishing profile from the portal, as shown in the figure below. Once downloaded, the XML document will contain the site-level credentials.

Getting the Publishing Profile Credentials via PowerShell

We can also get the site-level credentials via PowerShell. I’ve created a PowerShell function which returns the publishing credentials of an Azure App Service or a Deployment Slot, as shown below.
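For reference, here is a sketch of such a function. It assumes the AzureRM PowerShell module and retrieves the publishingcredentials resource with Invoke-AzureRmResourceAction; the function name, parameters and API version are illustrative, so adjust them to your environment.

function Get-AzureRmWebAppPublishingCredentials
{
    param(
        [Parameter(Mandatory = $true)]  [string] $resourceGroupName,
        [Parameter(Mandatory = $true)]  [string] $webAppName,
        [Parameter(Mandatory = $false)] [string] $slotName
    )

    if ([string]::IsNullOrWhiteSpace($slotName)) {
        # Site-level credentials of the App Service itself
        $resourceType = "Microsoft.Web/sites/config"
        $resourceName = "$webAppName/publishingcredentials"
    }
    else {
        # Site-level credentials of a Deployment Slot
        $resourceType = "Microsoft.Web/sites/slots/config"
        $resourceName = "$webAppName/$slotName/publishingcredentials"
    }

    # The 'list' action returns the publishing profile (site-level) credentials
    return Invoke-AzureRmResourceAction -ResourceGroupName $resourceGroupName `
            -ResourceType $resourceType -ResourceName $resourceName `
            -Action list -ApiVersion 2015-08-01 -Force
}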

Bear in mind that you need to be logged in to Azure in your PowerShell session before calling these cmdlets.

Getting the Kudu REST API Authorisation header via PowerShell

Once we have the credentials, we are able to get the Authorization header value. The instructions to construct the header are described here. I’ve created another PowerShell function, which relies on the previous one, to get the header value, as follows.
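A sketch of such a function, relying on the previous one, could look like the following; the PublishingUserName and PublishingPassword property names come from the publishingcredentials resource returned above.

function Get-KuduApiAuthorisationHeaderValue
{
    param(
        [Parameter(Mandatory = $true)]  [string] $resourceGroupName,
        [Parameter(Mandatory = $true)]  [string] $webAppName,
        [Parameter(Mandatory = $false)] [string] $slotName
    )

    $publishingCredentials = Get-AzureRmWebAppPublishingCredentials $resourceGroupName $webAppName $slotName
    $userName = $publishingCredentials.Properties.PublishingUserName
    $password = $publishingCredentials.Properties.PublishingPassword

    # Kudu expects HTTP Basic authentication: "Basic " + Base64("username:password")
    return ("Basic {0}" -f [Convert]::ToBase64String(
        [Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $userName, $password))))
}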

Calling the App Service VFS API

Once we have the Authorization header, we are ready to call the VFS API. As shown in the documentation, the VFS API has the following operations:

  • GET /api/vfs/{path}    (Gets a file at path)
  • GET /api/vfs/{path}/    (Lists files at directory specified by path)
  • PUT /api/vfs/{path}    (Puts a file at path)
  • PUT /api/vfs/{path}/    (Creates a directory at path)
  • DELETE /api/vfs/{path}    (Deletes the file at path)

So the URI to call the API would be something like:

  • GET https://{webAppName}.scm.azurewebsites.net/api/vfs/

To invoke the REST API operation via PowerShell we will use the Invoke-RestMethod cmdlet.
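As a minimal example, the sketch below lists the files under the wwwroot folder, reusing the header function shown earlier; the App Service and resource group names are placeholders.

# List the files under the wwwroot folder of an App Service via the Kudu VFS API
$webAppName = "your-webapp-name"        # placeholder
$authHeader = Get-KuduApiAuthorisationHeaderValue "your-resource-group" $webAppName
$apiUrl     = "https://$webAppName.scm.azurewebsites.net/api/vfs/site/wwwroot/"

Invoke-RestMethod -Uri $apiUrl -Headers @{ Authorization = $authHeader } -Method Get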

We have to bear in mind that when trying to overwrite or delete a file, the web server implements ETag behaviour to identify specific versions of files.

Uploading a File to an App Service

I have created the PowerShell function shown below which uploads a local file to a path in the virtual file system. To call this function you need to provide the App Service name, the Kudu credentials (username and password), the local path of your file and the Kudu path. The function assumes that you want to upload the file under the wwwroot folder, but you can change it if needed.
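A sketch of such an upload function is shown below. Treat it as a starting point rather than a definitive implementation: the function name is illustrative, and it builds the Basic authentication header directly from the supplied Kudu credentials.

function Upload-FileToWebApp
{
    param(
        [Parameter(Mandatory = $true)] [string] $webAppName,
        [Parameter(Mandatory = $true)] [string] $userName,
        [Parameter(Mandatory = $true)] [string] $password,
        [Parameter(Mandatory = $true)] [string] $localPath,
        [Parameter(Mandatory = $true)] [string] $kuduPath
    )

    # Basic authentication header built from the site-level (publishing profile) credentials
    $authHeader = "Basic {0}" -f [Convert]::ToBase64String(
        [Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $userName, $password)))

    # The file is uploaded under the wwwroot folder; change the base path if needed
    $apiUrl = "https://$webAppName.scm.azurewebsites.net/api/vfs/site/wwwroot/$kuduPath"

    # "If-Match" = "*" disables the ETag version check so an existing file is overwritten
    Invoke-RestMethod -Uri $apiUrl `
        -Headers @{ Authorization = $authHeader; "If-Match" = "*" } `
        -Method Put -InFile $localPath -ContentType "multipart/form-data"
}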

As you can see in the script, we are adding the "If-Match"="*" header to disable ETag version check on the server side.

Downloading a File from an App Service

Similarly, I have created a function to download a file on an App Service to the local file system via PowerShell.
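A sketch of the download counterpart, under the same assumptions, is shown below.

function Download-FileFromWebApp
{
    param(
        [Parameter(Mandatory = $true)] [string] $webAppName,
        [Parameter(Mandatory = $true)] [string] $userName,
        [Parameter(Mandatory = $true)] [string] $password,
        [Parameter(Mandatory = $true)] [string] $kuduPath,
        [Parameter(Mandatory = $true)] [string] $localPath
    )

    $authHeader = "Basic {0}" -f [Convert]::ToBase64String(
        [Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $userName, $password)))

    # Again, the wwwroot folder is assumed as the base path on the virtual file system
    $apiUrl = "https://$webAppName.scm.azurewebsites.net/api/vfs/site/wwwroot/$kuduPath"

    # Download the file and write it to the local file system
    Invoke-RestMethod -Uri $apiUrl -Headers @{ Authorization = $authHeader } `
        -Method Get -OutFile $localPath
}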

Using the ZIP API

In addition to using the VFS API, we can also use the Kudu ZIP API, which allows us to upload zip files and have them expanded into folders, and to download server folders compressed as zip files.

  • GET /api/zip/{path}    (Zip up and download the specified folder)
  • PUT /api/zip/{path}    (Upload a zip file which gets expanded into the specified folder)

You could create your own PowerShell functions to interact with the ZIP API based on what we have previously shown.
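For instance, a minimal sketch to download a server folder as a zip file could look like the one below; the function name is illustrative and the same authentication approach applies.

function Download-FolderFromWebAppAsZip
{
    param(
        [Parameter(Mandatory = $true)] [string] $webAppName,
        [Parameter(Mandatory = $true)] [string] $userName,
        [Parameter(Mandatory = $true)] [string] $password,
        [Parameter(Mandatory = $true)] [string] $kuduFolderPath,   # e.g. "site/wwwroot/"
        [Parameter(Mandatory = $true)] [string] $localZipPath
    )

    $authHeader = "Basic {0}" -f [Convert]::ToBase64String(
        [Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $userName, $password)))

    # GET /api/zip/{path} zips up the specified server folder and returns it
    $apiUrl = "https://$webAppName.scm.azurewebsites.net/api/zip/$kuduFolderPath"

    Invoke-RestMethod -Uri $apiUrl -Headers @{ Authorization = $authHeader } `
        -Method Get -OutFile $localZipPath
}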

Conclusion

As we have seen, in addition to the multiple deployment options we have for Azure App Services, we can also use the Kudu VFS API to interact with the App Service Virtual File System via HTTP. I have shared some functions for some of the provided operations. You could customise these functions or create your own based on your needs.

I hope this has been of help and feel free to add your comments or queries below. 🙂

Monitoring Azure WebJobs Health with Application Insights

Introduction

Azure WebJobs have been available for quite some time and have become very popular for running background tasks with programs or scripts. WebJobs are deployed as part of Azure App Services (Web Apps), which include their companion site Kudu. Kudu provides a lot of features, including a REST API, which provides operations for source code management (SCM), virtual file system, deployments, accessing logs, and WebJob management as well. The Kudu WebJobs API provides different operations, including listing WebJobs, uploading a WebJob, and triggering one. One of the operations of this API allows you to get the status of a specific WebJob by name.

Another quite popular Azure service is Application Insights. This provides functionality to monitor and diagnose application issues and to analyse usage and performance as well. One of these features is web tests, which provide a way to monitor the availability and health of a web site.

In this blog post I will go through the required configuration on Application Insights to monitor the health of WebJobs using Application Insights web tests calling the Kudu WebJobs API.

Calling the Kudu WebJobs API.

For this exercise, it is worth getting familiar with the WebJobs API, particularly with the endpoint to get a WebJob status. Through this post, I will be working with a triggered WebJob scheduled with a CRON expression, but you can apply the same principles to a continuous WebJob. I will be using Postman to call this API.

To get a WebJob status, we need to call the corresponding Kudu WebJob API endpoint. In the case of triggered WebJobs, the endpoint looks something like:

https://{webapp-name}.scm.azurewebsites.net/api/triggeredwebjobs/{webjob-name}/

Before calling the endpoint, we need to add the Authorization header to the GET request. To create the header value, we need to use the corresponding Kudu API credentials, as explained here. Considering we want to monitor the status of a WebJob under a particular web site, I prefer to use site-level credentials (or publishing profile credentials) instead of the user-level ones.

Getting the Publishing Profile Credentials from the Azure Portal

You can get the publishing profile credentials by downloading the publishing profile from the portal, as shown in the figure below. Once downloaded, the XML document will contain the site-level credentials.

Getting the Publishing Profile Credentials via PowerShell

We can also get the site-level credentials via PowerShell. I’ve created a PowerShell function which returns the publishing credentials of an Azure Web App or a Deployment Slot, as shown below.

Bear in mind that you need to be logged in to Azure in your PowerShell session before calling these cmdlets.

Getting the Kudu REST API Authorisation header via PowerShell

Once we have the credentials, we are able to get the Authorization header value. The instructions to construct the header are described here. I’ve created another PowerShell function, which relies on the previous one, to get the header value, as follows.

Once we have the header value, we can call the API. Let’s call it using Postman.
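If you would rather stay in PowerShell, a quick sketch reusing the header function described above (with placeholder names) would be:

# Get the status of a triggered WebJob via the Kudu WebJobs API
$webAppName = "your-webapp-name"     # placeholder
$webJobName = "your-webjob-name"     # placeholder
$authHeader = Get-KuduApiAuthorisationHeaderValue "your-resource-group" $webAppName

$apiUrl = "https://$webAppName.scm.azurewebsites.net/api/triggeredwebjobs/$webJobName/"
Invoke-RestMethod -Uri $apiUrl -Headers @{ Authorization = $authHeader } -Method Get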

You should be getting a response similar to the one shown below:

Note that for this triggered WebJob, there are status and duration fields.

Now that we are familiar with the response, we can start designing an App Insights web test to monitor the health of our WebJob.

Configuring an App Insights Web Test to Monitor the Health of an Azure WebJob

You can find detailed documentation here on how to create web tests to monitor the availability and responsiveness of web endpoints. In the following sections of this post, I will cover how to create an App Insights web test to monitor the health of a WebJob.

As we saw above, to call the WebJobs API we need to add an Authorization Header to the GET request. And once we get the API response, to check the status of the WebJob, we would need to interpret the response in JSON format.

To create the web test on App Insights to monitor a WebJob, I will first create a simple web test via the Azure Portal, and enrich it later.

Creating a Web Test on Application Insights.

I will create a basic web test with the following configuration. You should change it to the values which suit your scenario:

  • Test type: URL ping test
  • URL: My WebJob Rest API, e.g. https://{webapp-name}.scm.azurewebsites.net/api/triggeredwebjobs/{webjob-name}/
  • Test frequency: 5 minutes
  • Test locations: SG Singapore and AU Sydney
  • Success criteria:
    • Test timeout: 120 seconds
    • HTTP Response: (checked)
    • Status code must equal: 200
    • Content match: (checked)
    • Content must contain: "status":"success"
  • Alerts
    • Status: Enabled
    • Alert threshold location: 1
    • Alert failure time window: 5 minutes
    • Send alert emails to these email addresses: <my email address>

You could also keep email alerts disabled or configure them later.

If you enable the web test as is, you will see that it will start failing. The reason being that we are not adding the required Authorization header to the GET request.

To add headers to the test, you could record web tests on Visual Studio Enterprise or Ultimate. This is explained in detail in the Azure documentation. Additionally, in these multi-step web tests you can add more than one validation rule.

Knowing that not everybody has access to a VS Enterprise or Ultimate license, I will explain here how to create a web test using the corresponding XML format. The first step is to extract the web test XML definition from the test manually created on the portal.

Extracting the Web Test XML Definition from a Test Manually Created on the Portal.

Once we have created the web test manually on the portal, to get its XML definition, we have to open the resource explorer on https://resources.azure.com/ and navigate to subscriptions/<subscription-guid>/resourceGroups/<resourcegroup>/providers/microsoft.insights/webtests/<webtest>-<app-insight> until you are on the definition of the web test you have just created.

Once there, you need to find the member: “WebTest”, which should be something similar to:

Now, we need to extract the XML document by removing the escape characters of the double quotes, and get something like:

which is the XML definition of the web test we created manually on the portal.

Adding a Header to the Application Insights Web Test Request by updating the Web Test XML definition.

Now we should be ready to edit our web test XML definition to add the Authorization header.

To do this, we just need to add a Headers child element to the Request record, similar to the one shown below. You would need to get the Base64-encoded Authorization header value, similar to how we did it previously when calling the API via Postman.

Extending the Functionality of the Web Test.

When we created the web test on the portal, we said that we wanted the status to be "success"; however, we might want to add "running" as another valid value. Additionally, in my case, I wanted to check that the duration is less than 10 minutes. For this, I have updated the Validation Rules to use regular expressions and to have a second rule. The final web test XML definition resulted as follows:

You could play around with the web test XML definition and update or extend it according to your needs. In case you are interested in the capabilities of web tests, here is the documentation.

Once our web test XML definition is ready, we save it with a ".webtest" extension.

Uploading the (Multi-Step) Web Test to Application Insights

Having the web test XML definition ready, we can update our Application Insights web test with it. For this, on the portal, we open the Edit Test blade and:

  • Change the Test Type to: Multi-step test, and
  • Upload the web test xml definition file we just saved with the ".webtest" extension.

This will update the web test, and now with the proper Authorization header and the added validation rules, we can monitor the health of our triggered WebJob.

With Application Insights web tests, we can monitor the WebJob via the dashboard as shown above, or by configuring alerts to be sent via email.

Summary

Through this post I have shown how to monitor the health of an Azure WebJob using Application Insights web tests. But along the way, I also showed some tricks which I hope can be useful in other scenarios as well, including:

  1. How to call the Azure WebJobs API via Postman, including how to get the Kudu API Authorization header via PowerShell.
  2. How to manually configure App Insights web tests,
  3. How to get the XML definition of a manually created web test using the Azure Resource Explorer,
  4. How to update the web test XML definition to add a request header and expand the validation rules, all without requiring Visual Studio Enterprise or Ultimate, and
  5. How to update the Application Insights web test by uploading the updated multi-step web test file.

Thanks for reading, and feel free to add your comments or queries below. 🙂 

When to use an Azure App Service Environment?

Introduction

An Azure App Service Environment (ASE) is a premium Azure App Service hosting environment which is dedicated, fully isolated, and highly scalable. It clearly brings advanced features for hosting Azure App Services which might be required in different enterprise scenarios. But as this is a premium service, it comes with a premium price tag. Due to its cost, a proper business case and justification should be prepared before architecting a solution based on this interesting PaaS offering on Azure.

When planning to deploy Azure App Services, an organisation has the option of creating an Azure App Service Plan and hosting them there. This might be good enough for most cases. However, when higher demands for scalability and security are present, a dedicated and fully isolated App Service Environment might be necessary.

Below, I will summarise the information required to decide whether an App Service Environment is needed for hosting your App Services. Please consider, when reading this post, that the facts and figures provided are based on Microsoft documentation at the time of writing, which will eventually change.

App Service Environment Pricing.

To calculate the cost of an App Service Environment, we have to consider its architecture. An Azure App Service Environment is composed of two layers of dedicated compute resources and a reserved static IP. Additionally, it requires a Virtual Network. The Virtual Network is free of charge and reserved IP Addresses carry a nominal charge. So the cost is mostly related to the compute resources. The ASE is composed of one front-end compute resource pool, as well as one to three worker compute resource pools.

The minimum implementation of an App Service Environment requires 2 x Premium P2 instances for the Front-End Pool and 2 x Premium P1 instances for Worker Pool 1, with a total cost per annum of more than AUD 20,000. This cost can easily escalate when scaling up or scaling out the ASE.

Having said that, the value and benefits must be clear enough so that the business can justify this investment.

The benefits of an Azure App Service Environment.

To understand the benefits and advanced features of an App Service Environment, we can compare what we get when deploying our Azure App Services with or without an App Service Environment, as shown in the comparison below.

  • Isolation Level
    • Without an ASE: Compute resources are hosted on a multitenant environment.
    • On an ASE: All compute resources are fully isolated and dedicated exclusively to a single subscription.
  • Compute resources specialisation
    • Without an ASE: There is no out-of-the-box compute resource layer specialisation.
    • On an ASE: Compute resources are grouped in 2 different layers: a Front-End Pool and Worker Pools (up to 3). The Front-End Pool is in charge of SSL termination and load balancing of app requests for the corresponding Worker Pools. Once the SSL has been off-loaded and the load balanced, the Worker Pool is in charge of processing all the logic of the App Services. The Front-End Pool is shared by all Worker Pools.
  • Virtual Network (VNET) Integration
    • Without an ASE: A Virtual Network can be created and App Services can be integrated with it. The Virtual Network provides full control over IP address blocks, DNS settings, security policies, and route tables within the network. Classic "v1" and Resource Manager "v2" Virtual Networks can be used.
    • On an ASE: An ASE is always deployed in a regional Virtual Network. This provides the ability to access resources in a VNET without any additional configuration required. [UPDATE] Starting from mid-July 2016, ASEs now support "v2" ARM-based virtual networks.
  • [UPDATE July 2016] Accessible only via Site-to-Site or ExpressRoute VPN
    • Without an ASE: App Services are accessible via public Internet.
    • On an ASE: [UPDATE July 2016] ASEs support an Internal Load Balancer (ILB) which allows you to host your intranet or LOB applications on Azure and access them only via a Site-to-Site or ExpressRoute VPN.
  • Control over inbound and outbound traffic
    • Without an ASE: Inbound and outbound traffic control is not currently supported.
    • On an ASE: An ASE is always deployed in a regional Virtual Network, thus inbound and outbound network traffic can be controlled using a network security group. [UPDATE] With the updates of mid-July 2016, ASEs can now be deployed into VNETs which use private address ranges.
  • Connection to On-Prem Resources
    • Without an ASE: Azure App Service Virtual Network integration provides the capability to access on-prem resources via a VPN over public Internet.
    • On an ASE: In addition to Azure App Service Virtual Network integration, the ASE provides the ability to connect to on-prem resources via ExpressRoute, which provides faster, more reliable and more secure connectivity without going over public Internet. Note: ExpressRoute has its own pricing model.
  • Inspecting inbound web traffic and blocking potential attacks
    • Without an ASE: [UPDATE Sept 2016] A Web Application Firewall (WAF) service is available to App Services through Application Gateway. Application Gateway WAF has its own pricing model.
    • On an ASE: ASEs provide the ability to configure a Web Application Firewall for inspecting inbound web traffic, which can block SQL injections, cross-site scripting, malware uploads, application DDoS, and other attacks. Note: Web Application Firewall has its own pricing model.
  • Static IP Address
    • Without an ASE: By default, Azure App Services get assigned virtual IP addresses. However, these are shared with other App Services in that region. There is a way to give an Azure Web App a dedicated inbound static IP address. Nevertheless, there is no way to get a dedicated static outbound IP, so an Azure App Service outbound IP cannot be securely whitelisted on on-prem or third-party firewalls.
    • On an ASE: ASEs provide a static inbound and outbound IP address for all resources contained within them. App Services (Web Apps, Azure WebJobs, API Apps, Mobile Apps and Logic Apps) can connect to third-party applications using a dedicated static outbound IP which can be whitelisted on on-prem or third-party firewalls.
  • SLA
    • Without an ASE: App Services provide an SLA of 99.95%.
    • On an ASE: App Services deployed on an ASE provide an SLA of 99.95%.
  • Scalability / Scale-Up
    • Without an ASE: App Services can be deployed on almost the full range of pricing tiers from Free to Premium. However, Premium P4 is not available for App Services without an ASE.
    • On an ASE: App Services deployed on an ASE can only be deployed on Premium instances, including Premium P4 (8 cores, 14 GB RAM, 500 GB storage).
  • Scalability / Scale-Out
    • Without an ASE: App Services provisioned on a Standard App Service Plan can scale out with up to 10 instances. App Services provisioned on a Premium App Service Plan can scale out with up to 20 instances.
    • On an ASE: App Services deployed on an ASE can scale out with up to 50 instances. An ASE can be configured to use up to 55 total compute resources; of those 55, only 50 can be used to host workloads.
  • Scalability / Auto Scale-Out
    • Without an ASE: App Services can be scaled out automatically.
    • On an ASE: App Services deployed on an ASE can be scaled out automatically. However, an auto Scale-Out buffer is required; see the section below.

Points to consider

As seen above, Azure App Service Environments provide advanced features which might be necessary in enterprise applications. However, there are some additional considerations to bear in mind when architecting solutions to be deployed on these environments.

  • Use of the Front-End Pool
    • Without an ASE: Azure App Service provides load balancing out-of-the-box, so there is no need to have a Front-End Pool for load balancing.
    • On an ASE: The Front-End Pool contains compute resources responsible for SSL termination and load balancing of app requests within an App Service Environment. However, these compute resources cannot host workloads. So, depending on your workload, the Front-End Pool of at least 2 x Premium P2 instances could be seen as an overhead.
  • Fault-tolerance overhead
    • Without an ASE: The SLA is provided without requiring additional compute resources.
    • On an ASE: To provide fault tolerance, one or more additional compute resources have to be allocated per Worker Pool. These compute resources are not available to be assigned a workload.
  • Auto Scale-Out buffer
    • Without an ASE: Auto Scale-Out does not require a buffer.
    • On an ASE: Because Scale-Out operations in an App Service Environment take some time to apply, a buffer of compute resources is required to be able to respond to the demands of the App Service. The size of the buffer is calculated using the Inflation Rate formula explained in detail here. This means that the compute resources of the buffer are idle until a Scale-Out operation happens; in many cases this could be considered an overhead. E.g. if auto Scale-Out is configured for an App Service (1 to 2 instances), when only 1 instance is being used, there is an overhead of 2 compute resources: 1 for fault tolerance (explained above) and 1 for the Scale-Out buffer.
  • Deployment
    • Without an ASE: App Services can be deployed using Azure Resource Manager templates.
    • On an ASE: App Service Environments can be deployed using Azure Resource Manager templates. [UPDATE July 2016] And after the update, ASEs now support ARM VNETs (v2). In addition, deploying an App Service Environment usually takes more than 3 hours.

Conclusion

So coming back to the original question: when should you use an App Service Environment? When is it right to deploy App Services on an App Service Environment and pay the premium price? In summary:

  • When higher scalability is required. E.g. more than 20 instances per App Service Plan or larger instances like Premium P4 OR
  • When inbound and outbound traffic control is required to secure the App Service OR
  • When connecting the App Service to on-prem resources via a secure channel (ExpressRoute) without going over the public Internet is necessary OR
  • [Update July 2016] When access to the App Services has to be restricted to be only via a Site-to-Site or ExpressRoute VPN OR
  • [Update Sept 2016] When inspecting inbound web traffic and blocking potential attacks is needed without using Web Roles OR
  • When a static outbound IP Address for the App Service is required.

AND

  • Very important, when there is enough business justification to pay for it (including potential overheads like Front-End Pool, fault-tolerance overhead, and auto Scale-Out buffer)

What else would you consider when deciding whether to use an App Service Environment for your workload or not? Feel free to post your comments or feedback!

Thanks for reading! 🙂

Implementing a WCF Client with Certificate-Based Mutual Authentication without using Windows Certificate Store

Windows Communication Foundation (WCF) provides a relatively simple way to implement Certificate-Based Mutual Authentication on distributed clients and services. Additionally, it supports interoperability as it is based on WS-Security and X.509 certificate standards. This blog post briefly summarises mutual authentication and covers the steps to implement it with an IIS hosted WCF service.

Even though WCF’s out-of-the-box functionality removes much of the complexity of Certificate-Based Mutual Authentication in many scenarios, there are cases in which this is not what we need. For example, by default, WCF relies on the Windows Certificate Store for accessing its own private key and the counterpart’s public key when implementing Certificate-Based Mutual Authentication.

Having said that, there are scenarios in which using the Windows Certificate Store is not an option. It can be a deployment restriction or a platform limitation. For example, what if you want to create an Azure WebJob which calls a SOAP Web Service using Certificate-Based Mutual Authentication? At the time of writing this post, there is no way to store a certificate containing the counterpart’s public key in the underlying certificate store for an Azure WebJob. And just because of that, we cannot enjoy all the built-in benefits of WCF for building our client.

Here, they explain how to create a WCF service that implements custom certificate validation by defining a class derived from X509CertificateValidator and implementing an abstract "Validate" override method. Once the derived class is defined, the CertificateValidationMode has to be set to "Custom" and the CustomCertificateValidatorType to the derived class’s type. This can easily be extended to implement mutual authentication on the service side without using the Windows Certificate Store.

My purpose in this post is to describe how to implement a WCF client with Certificate-Based Mutual Authentication without using the Windows Certificate Store, by compiling the required sources and filling the gaps in the available documentation.

What to consider

Before we start thinking about coding, we need to consider the following:

  • The WCF client must have access to the client’s private key to be able to authenticate with the service.
  • The WCF client must have access to the service’s public key to authenticate the service.
  • Optionally, the WCF client should have access to the service’s certificate issuer’s certificate (Certificate Authority public key) to validate the service’s certificate chain.
  • The WCF client must implement a custom service certificate validation, as it cannot rely on the built-in validation.
  • We want to do this without using the Windows Certificate Store.

Accessing public and private keys without using Windows Certificate Store

First we need to access the client’s private key. This can be achieved without any problem. We could get it from a local or a shared folder, or from a binary resource. For the purpose of this blog, I will be reading it from a local Personal Information Exchange (pfx) file. For reading a pfx file we need to specify a password; thus you might want to consider encrypting or implementing additional security. There are various X509Certificate2 constructor overloads which allow you to load a certificate in different ways. Furthermore, reading a public key is easier, as it does not require a password.

Implementing a custom validator method

On the other hand, implementing the custom validator requires a bit more thought, and the documentation is not very detailed. The ServicePointManager class has a property called "ServerCertificateValidationCallback" of type RemoteCertificateValidationCallback which allows you to specify a custom service certificate validation method. The contract for the delegate method is defined here.

In order to authenticate the service, once we get its public key, we could do the following:

  • Compare the service certificate against a preconfigured authorised service certificate. They must be the same.
  • Validate that the certificate is not expired.
  • Optionally, validate that the certificate has not been revoked by the issuer (Certificate Authority). This does not apply for self-signed certificates.
  • Validate the certificate chain, using a preconfigured trusted Certificate Authority.

For comparing the received certificate and the preconfigured one we will use the X509Certificate.Equals Method. For validating that the certificate has not expired and not been revoked we will use the X509Chain.Build Method. And finally, to validate that the certificate has been issued by the preconfigured trusted CA, we will make use of the X509Chain.ChainElements Property.

Let’s jump into the code.

To illustrate how to implement the WCF client, what can be better than code itself? 🙂 I have implemented the WCF client as a Console Application. Please pay attention to all the comments when reading my code. With the provided background, I hope it is clear and self-explanatory.

using System;
using System.Configuration;
using System.IdentityModel.Tokens;
using System.Linq;
using System.Net;
using System.Net.Security;
using System.ServiceModel;
using System.ServiceModel.Security;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

namespace MutualAuthClient
{
    class Program
    {
        static void Main(string[] args)
        {
            try
            {
                Console.WriteLine("Starting...");

                // Set the ServerCertificateValidationCallback property to a
                // custom method.
                ServicePointManager.ServerCertificateValidationCallback +=
                                                CustomServiceCertificateValidation;

                // We will call a service which expects a string and echoes it
                // as a response.
                var client = new EchoService.EchoServiceClient
                                            ("BasicHttpBinding_IEchoService");

                // Load private key from PFX file.
                // Reading from a PFX file requires specifying the password.
                // You might want to consider adding encryption here.
                Console.WriteLine("Loading Client Certificate (Private Key) from File: "
                                    + ConfigurationManager.AppSettings["ClientPFX"]);
                client.ClientCredentials.ClientCertificate.Certificate =
                                    new X509Certificate2(
                                    ConfigurationManager.AppSettings["ClientPFX"],
                                    ConfigurationManager.AppSettings["ClientPFXPassword"],
                                    X509KeyStorageFlags.MachineKeySet);

                // We are using a custom method for the Server Certificate Validation
                client.ClientCredentials.ServiceCertificate.Authentication.
                                CertificateValidationMode =
                                        X509CertificateValidationMode.None;

                Console.WriteLine();
                Console.WriteLine(String.Format("About to call client.Echo"));
                string response = client.Echo("Test");
                Console.WriteLine();
                Console.WriteLine(String.Format("client.Echo Response: '{0}'", response));
                Console.ReadLine();
            }
            catch (Exception ex)
            {
                Console.WriteLine(
                    String.Format("Exception occurred{0}Message:{1}{2}Inner Exception: {3}"
                                   , Environment.NewLine, ex.Message, Environment.NewLine,
                                   ex.InnerException));
            }

        }

        private static bool CustomServiceCertificateValidation(
                object sender, X509Certificate cert, X509Chain chain,
                SslPolicyErrors error)
        {
            Console.WriteLine();
            Console.WriteLine("CustomServiceCertificateValidation has started");

            // Load the authorised and expected service certificate (public key)
            // from file.
            Console.WriteLine("Loading Service Certificate (Public Key) from File: "
                                + ConfigurationManager.AppSettings["ServicePublicKey"]);
            X509Certificate2 authorisedServiceCertificate = new X509Certificate2
                    (ConfigurationManager.AppSettings["ServicePublicKey"]);

            // Load the trusted CA (public key) from file.
            Console.WriteLine("Loading the Trusted CA (Public Key) from File: "
                                + ConfigurationManager.AppSettings["TrustedCAPublicKey"]);
            X509Certificate2 trustedCertificateAuthority = new X509Certificate2
                    (ConfigurationManager.AppSettings["TrustedCAPublicKey"]);

            // Load the received certificate from the service (input parameter) as
            // an X509Certificate2
            X509Certificate2 serviceCert = new X509Certificate2(cert);

            // Compare the received service certificate against the configured
            // authorised service certificate.
            if (!authorisedServiceCertificate.Equals(serviceCert))
            {
                // If they are not the same, throw an exception.
                throw new SecurityTokenValidationException(String.Format(
                    "Service certificate '{0}' does not match that authorised '{1}'"
                    , serviceCert.Thumbprint, authorisedServiceCertificate.Thumbprint));
            }
            else
            {
                Console.WriteLine(String.Format(
                    "Service certificate '{0}' matches the authorised certificate '{1}'."
                    , serviceCert.Thumbprint, authorisedServiceCertificate.Thumbprint));
            }

            // Create a new X509Chain to validate the received service certificate using
            // the trusted CA
            X509Chain chainToValidate = new X509Chain();

            // When working with Self-Signed certificates,
            // there is no need to check revocation.
            // You might want to change this when working with
            // a properly signed certificate.
            chainToValidate.ChainPolicy.RevocationMode = X509RevocationMode.NoCheck;
            chainToValidate.ChainPolicy.RevocationFlag = X509RevocationFlag.ExcludeRoot;
            chainToValidate.ChainPolicy.VerificationFlags =
                                    X509VerificationFlags.AllowUnknownCertificateAuthority;

            chainToValidate.ChainPolicy.VerificationTime = DateTime.Now;
            chainToValidate.ChainPolicy.UrlRetrievalTimeout = new TimeSpan(0, 0, 0);

            // Add the configured authorised Certificate Authority to the chain.
            chainToValidate.ChainPolicy.ExtraStore.Add(trustedCertificateAuthority);

            // Validate the received service certificate using the trusted CA
            bool isChainValid = chainToValidate.Build(serviceCert);

            if (!isChainValid)
            {
                // If the certificate chain is not valid, get all returned errors.
                string[] errors = chainToValidate.ChainStatus
                    .Select(x => String.Format("{0} ({1})", x.StatusInformation.Trim(),
                            x.Status))
                    .ToArray();
                string serviceCertChainErrors = "No detailed errors are available.";

                if (errors != null && errors.Length > 0)
                    serviceCertChainErrors = String.Join(", ", errors);

                throw new SecurityTokenValidationException(String.Format(
                        "The chain of service certificate '{0}' is not valid. Errors: {1}",
                        serviceCert.Thumbprint, serviceCertChainErrors));
            }

            // Validate that the Service Certificate Chain Root matches the Trusted CA.
            if (!chainToValidate.ChainElements
                .Cast<X509ChainElement>()
                .Any(x => x.Certificate.Thumbprint ==
                                    trustedCertificateAuthority.Thumbprint))
            {
                throw new SecurityTokenValidationException(String.Format(
                        "The chain of Service Certificate '{0}' is not valid. " +
                        " Service Certificate Authority Thumbprint does not match " +
                        "Trusted CA's Thumbprint '{1}'",
                        serviceCert.Thumbprint, trustedCertificateAuthority.Thumbprint));
            }
            else
            {
                Console.WriteLine(String.Format(
                    "Service Certificate Authority '{0}' matches the Trusted CA's '{1}'",
                    serviceCert.IssuerName.Name,
                    trustedCertificateAuthority.SubjectName.Name));
            }
            return true;
        }
    }
}
 


And here is the App.config

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
<appSettings>
<add key="ClientPFX" value="certificates\ClientPFX.pfx" />
<add key="ClientPFXPassword" value="********" />
<add key="TrustedCAPublicKey" value="certificates\ServiceCAPublicKey.cer" />
<add key="ServicePublicKey" value="certificates\ServicePublicKey.cer" />
</appSettings>
<startup>
<supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5.2" />
</startup>
<system.serviceModel>
<bindings>
<basicHttpBinding>
<binding name="BasicHttpBinding_IEchoService">
<security mode="Transport">
<transport clientCredentialType="Certificate" />
</security>
</binding>
</basicHttpBinding>
</bindings>
<client>
<endpoint address="https://server/EchoService.svc"
binding="basicHttpBinding" bindingConfiguration="BasicHttpBinding_IEchoService"
contract="EchoService.IEchoService" name="BasicHttpBinding_IEchoService" />
</client>
</system.serviceModel>
</configuration>


In case you find it difficult to read my code on WordPress, you can read it on GitHub via the links below:

I hope you have found this post useful, allowing you to implement a WCF client with Mutual Authentication without relying on the Certificate Store, and making your coding easier and happier! : )

Connecting to MS SQL Server with MuleSoft

MuleSoft provides an extensive list of connectors which come in handy when implementing integration solutions. In many integration scenarios, we need to connect directly to a database to get or update data. MuleSoft provides a database connector which allows JDBC connectivity with relational databases to execute SQL operations such as Select, Insert, Update, Delete, and Stored Procedures.

However, the database connector only provides out-of-the-box connectivity with Oracle, MySQL, and Derby databases. To connect to Microsoft SQL Server, additional steps are required. In this post I will go through the required setup and show how to connect to an MS SQL Server database. I will implement some basic integration scenarios, like querying data, updating data, and polling updated records from a SQL database.

For this post I will be using MuleSoft 3.7.2 EE and SQL Server 2014, however, you could follow the same steps to connect to other versions of SQL Server.

Pre-requisites to connect to MS SQL Server with MuleSoft.

Because MS SQL Server is not one of the supported out-of-the-box databases, the following is to be done:

  • Download and install the Microsoft JDBC Drivers for SQL Server. As detailed in the Install instructions of the link, the installer is to be downloaded and run. Once run, it will prompt for installation directory, for which specifying %ProgramFiles% is suggested. For my exercise I utilised the latest version (4.2).
  • Make sure TCP/IP is enabled in your SQL Server configuration as described here.
  • Add the required references to your MuleSoft project.
    • Right click on your project -> Build Path -> Add External Archives…

    • Select the corresponding jar file from the installation directory specified before (e.g. “C:\Program Files (x86)\Microsoft JDBC Driver 4.2 for SQL Server\sqljdbc_4.2\enu”).

    • Validate that the library is being referenced, as shown in the figure.

Now we are ready to create connections to SQL Server from AnyPoint Studio.

Create a Global Configuration Element for your SQL Server database.

Once MS SQL Server JDBC libraries are referenced by your MuleSoft project, you can create a Configuration Element to connect to a SQL Server database. For this exercise I will be using the AdventureWorks2014 Sample Database. The steps to create the global configuration element are described as follows:

  • Create a new Generic Database Configuration.

  • Set the Generic Database Configuration
    • Specify a Name.
    • Specify your connection URL using the format: jdbc:sqlserver://${mssql.server}:${mssql.port};databaseName=${mssql.database};user=${mssql.user};password=${mssql.password}
    • Specify the Driver Class Name as: com.microsoft.sqlserver.jdbc.SQLServerDriver
    • Test connection.

The resulting XML of the Generic Database Global Configuration element should be like the one as follows:

<db:generic-config 
name="MSSQL_AdventureWorks_Configuration"
url="jdbc:sqlserver://${mssql.server}:${mssql.port};databaseName=${mssql.database};user=${mssql.user};password=${mssql.password}"
driverClassName="com.microsoft.sqlserver.jdbc.SQLServerDriver"
doc:name="Generic Database Configuration"/>

Note that I am using a Property Placeholder to store my connection properties. Once the Global Configuration Element has been created, we are ready to start interacting with the database. In the following sections, I will be showing how to implement three common integration scenarios with a SQL Server database:

  1. Querying data from a database.
  2. Updating data in a database.
  3. Polling updated records from a database.

Querying Data from a SQL Server Database

The first scenario we will go through is querying data from a SQL Server database based on an HTTP GET request. The requirement of this scenario is to get all employees from the database with a first name containing a specified string. To implement this scenario, I have created the following flow containing the steps listed below:

  • HTTP Listener: which expects a GET call with a query param specifying the name to search for.
  • Extract Query Param: A Set Flow Variable Transformer which extracts the query param specifying the name to search for. Because I will be using a LIKE operator on my SQL query, I would default the variable to ‘%%’ to bring all available records when no filter is specified. I set the variable value as follows:
    #[message.inboundProperties.'http.query.params'.name == empty ? '%%' : '%' + message.inboundProperties.'http.query.params'.name + '%']
  • Get Employees from DB: Database connector to get employees matching the criterion configured as follows:
    • Connector Configuration: the Generic Database Configuration Element configured previously.
    • Operation: Select
    • Query Type: Parametrized

    • Parametrized query: To query employees based on the filter criterion, I will use a simple SELECT statement with the flow variable #[flowVars.employeeName] as the filter, as shown below:

      SELECT  Person.BusinessEntityID
              ,Person.Title
              ,Person.FirstName
              ,Person.MiddleName
              ,Person.LastName
              ,EmailAddress.EmailAddress
              ,Employee.LoginID
              ,Employee.JobTitle
              ,Employee.BirthDate
              ,Employee.HireDate
              ,Employee.Gender
      FROM    Person.Person
      JOIN    Person.EmailAddress
           ON Person.BusinessEntityID = EmailAddress.BusinessEntityID
      JOIN    HumanResources.Employee
           ON Person.BusinessEntityID = Employee.BusinessEntityID
      WHERE    Person.FirstName LIKE #[flowVars.employeeName]

  • Convert Object to JSON: to convert the default java.util.LinkedList object to JSON.

At the end of the flow, the JSON payload will be returned to the caller. This is the resulting configuration XML:

<http:listener-config
   name="HTTP_Listener_Configuration"
   host="0.0.0.0"
   port="8081"
   doc:name="HTTP Listener Configuration"/>
<flow name="getEmployees_Flow">
   <http:listener config-ref="HTTP_Listener_Configuration"
      path="/employee"
      allowedMethods="GET"
      doc:name="Listener GET Employees"/>
   <set-variable
      variableName="employeeName"
      value="#[message.inboundProperties.'http.query.params'.name == empty ? '%%' : '%' + message.inboundProperties.'http.query.params'.name + '%' ]"
      doc:name="Extract Query Param"/>
   <db:select
      config-ref="MSSQL_AdventureWorks_Configuration"
      doc:name="Get Employees from DB">
      <db:parameterized-query><![CDATA[
         SELECT  Person.BusinessEntityID
                ,Person.Title
                ,Person.FirstName
                ,Person.MiddleName
                ,Person.LastName
                ,EmailAddress.EmailAddress
                ,Employee.LoginID
                ,Employee.JobTitle
                ,Employee.BirthDate
                ,Employee.HireDate
                ,Employee.Gender
         FROM    Person.Person
         JOIN    Person.EmailAddress
              ON Person.BusinessEntityID = EmailAddress.BusinessEntityID
         JOIN    HumanResources.Employee
              ON Person.BusinessEntityID = Employee.BusinessEntityID
         WHERE   Person.FirstName LIKE #[flowVars.employeeName]
      ]]></db:parameterized-query>
   </db:select>
   <json:object-to-json-transformer
      doc:name="Convert Object to JSON"/>
</flow>

Now, I should be ready to test this. I will use Postman to call my endpoint.

When I call the endpoint without specifying a name using this URL http://localhost:8081/employee, I get a JSON payload with all employees.

When I call the endpoint specifying a name as a query param using this URL: http://localhost:8081/employee?name=sha, I get a JSON payload containing 3 employee records with a first name containing the string “sha”.

All good with my first scenario.

Updating Data into a SQL Server Database

In this scenario, I will update records in a database based on an HTTP PATCH request. The requirement is to update an employee: the Employee ID is sent as a URI param, and the values to update are sent in the body.

To achieve this, I have created the following flow containing the steps listed below:

  • HTTP Listener: which expects a PATCH call with a URI param specifying the EmployeeID to update and the values to be updated specified in the body as a JSON payload. The expected payload looks like this:

    {
       "FirstName": "Paco",
       "MiddleName": "",
       "LastName": "de la Cruz",
       "LoginID": "adventure-works\\paco",
       "EmailAddress": paco@adventure-works.com
    }

  • Extract URI Param: A Set Flow Variable Transformer which extracts the URI param specifying the EmployeeID to update.
  • Update Employee: Database connector to update the employee configured as follows:
    • Connector Configuration: the Generic Database Configuration Element configured previously.
    • Operation: Stored Procedure
    • Query Type: Parametrized
    • Parametrized query (I previously created this stored procedure on the AdventureWorks2014 database; a sketch of what it might look like is shown after this list):

      This is the syntax we have to use to call the stored procedure from MuleSoft using a parametrised query.

      {CALL uspUpdateEmployee (:BusinessEntityID, :FirstName, :MiddleName, :LastName, :EmailAddress, :LoginID)}

      Note the configured parameters in the picture above.

      Here, it is worth mentioning that if you are using MuleSoft versions 3.7.0 or 3.7.1, you might want to update to 3.7.2 or higher to avoid this bug (Database does not supports streaming on stored procedures (java.lang.IllegalArgumentException)) when calling parametrised stored procedures.

  • Choice: Return the corresponding message to the caller notifying whether the employee was updated or not.
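
For reference, below is a hedged sketch of what a uspUpdateEmployee procedure could look like; the post does not show the actual procedure, so the column names, types and NOCOUNT behaviour are assumptions based on the AdventureWorks2014 schema and the parameters used above. Bumping ModifiedDate here is also what later allows the polling scenario to detect the change.

    -- A sketch only: the real procedure created for this post is not shown.
    CREATE PROCEDURE dbo.uspUpdateEmployee
        @BusinessEntityID INT,
        @FirstName        NVARCHAR(50),
        @MiddleName       NVARCHAR(50),
        @LastName         NVARCHAR(50),
        @EmailAddress     NVARCHAR(50),
        @LoginID          NVARCHAR(256)
    AS
    BEGIN
        -- NOCOUNT is left off (assumption) so the update counts flow back to the
        -- database connector; the flow's Choice router checks the first of those counts.
        UPDATE Person.Person
        SET    FirstName    = @FirstName,
               MiddleName   = @MiddleName,
               LastName     = @LastName,
               ModifiedDate = GETDATE()
        WHERE  BusinessEntityID = @BusinessEntityID;

        UPDATE Person.EmailAddress
        SET    EmailAddress = @EmailAddress,
               ModifiedDate = GETDATE()
        WHERE  BusinessEntityID = @BusinessEntityID;

        UPDATE HumanResources.Employee
        SET    LoginID      = @LoginID,
               ModifiedDate = GETDATE()
        WHERE  BusinessEntityID = @BusinessEntityID;
    END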

This is the resulting configuration XML.

<flow name="updateEmployees_Flow">
   <http:listener
      config-ref="HTTP_Listener_Configuration"
   
   path="/employee/{id}"
   
   allowedMethods="PATCH"
   
   doc:name="Listener PATCH Employee"/>
   <set-variable
 
   
  variableName="businessEntityID"
 
   
  value="#[message.inboundProperties.'http.uri.params'.id == empty ? 0 : message.inboundProperties.'http.uri.params'.id]"
  
   
 doc:name="Extract URI Param"/>
   <db:stored-procedure
 
   
  config-ref="MSSQL_AdventureWorks_Configuration"
 
   
  doc:name="Update Employee">
   <db:parameterized-query>
 
   
  <![CDATA[
  
      
 {CALL uspUpdateEmployee (:BusinessEntityID, :FirstName, :MiddleName , :LastName, :EmailAddress, :LoginID)}
  
   
 ]]></db:parameterized-query>
  
   
 <db:in-param
  
      
 name="BusinessEntityID"
  
      
 type="INTEGER"
  
      
 value="#[flowVars.businessEntityID]"/>
  
   
 <db:in-param
  
      
 name="FirstName"
  
      
 type="NVARCHAR"
  
      
 value="#[json:FirstName]"/>
  
   
 <db:in-param
  
      
 name="MiddleName"
  
      
 type="NVARCHAR"
 
      
  value="#[json:MiddleName]"/>
  
   
 <db:in-param
  
      
 name="LastName"
  
      
 type="NVARCHAR"
  
      
 value="#[json:LastName]"/>
  
   
 <db:in-param
  
      
 name="EmailAddress"
  
      
 type="NVARCHAR"
  
      
 value="#[json:EmailAddress]"/>
  
   
 <db:in-param
   
      name="LoginID"
  
      
 type="NVARCHAR"
  
      
 value="#[json:LoginID]"/>
   </db:stored-procedure>
   <choice
  
   
 doc:name="EmployeeUpdated?">
  
   
 <when
  
      
 expression="#[payload.values().toArray()[0] == 1]">
  
      
 <set-payload
  
         
 value="#['Employee: ' + flowVars.businessEntityID + ' has been updated.']"
  
         
 doc:name="Employee has been updated"/>
  
   
 </when>
  
   
 <otherwise>
 
      
  <set-payload
  
         
 value="#['Employee: ' + flowVars.businessEntityID + ' was not found.']"
  
         
 doc:name="Employee was not found"/>
  
   
 </otherwise>
   </choice>
</flow>

Using Postman, I will test my new endpoint by making a PATCH call and adding the “Content-Type” header with the value “application/json; charset=UTF-8”. I will send the payload below to update the record with EmployeeID = 1:

{
   "FirstName": "Paco",
   "MiddleName": "",
   "LastName": "de la Cruz",
   "LoginID": "adventure-works\\paco",
   "EmailAddress": "paco@adventure-works.com"
}

When I call the endpoint using this URL http://localhost:8081/employee/1, I get the message that the record has been updated. When I check the database, I am now the new CEO of Adventure Works.

When I call the endpoint using this URL http://localhost:8081/employee/0, I get the message that the Employee was not found.

All done with this scenario.

Polling updated records from a SQL Server Database

The last integration scenario is very common, particularly when implementing the Pub-Sub pattern, in which changes in a source system have to be published to one or more subscribers. The good news is that MuleSoft supports polling for updates using watermarks, which is very handy and easy to implement. Below I explain how to do this with a SQL database.

I created the following flow with the steps listed below to implement the polling scenario.


  • Poll Scope: The Poll scope requires changing the Processing Strategy of the flow to “synchronous”.


    The polling scope won’t work if you leave the default processing strategy, which is asynchronous. If you get the error below, you know what to do.

    Message : Watermarking requires synchronous polling
    Code : MULE_ERROR-344

    For this scenario I configured the polling to occur once a day. To get only updated records, I am implementing watermarking utilising the ModifiedDate in the payload as shown below.

  • Select Updated Records: Inside the Poll scope, I implemented a database connector to select the updated records as shown in the figure. To get only the updated records, I filter those whose ModifiedDate is greater (later) than the flow variable DateTimeWatermark, the watermark created in the Poll scope (a conceptual SQL sketch of how this watermark advances follows this list).

  • Filter empty payload: To stop the flow when no updated records are obtained, using the following expression:

    #[payload.size() > 0]

  • Convert Object to JSON: to get a JSON payload out of the result set.
  • Logger: just as a way to test the flow.
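
To make the watermark mechanics a little more concrete, the sketch below expresses in plain T-SQL what the Poll scope effectively does on each run. This is conceptual only: Mule evaluates the MAX selector over the returned payload rather than running any extra SQL, and the initial watermark value shown is hypothetical.

    -- Conceptual view of the watermark behaviour (not executed by Mule).
    DECLARE @DateTimeWatermark DATETIME = '2016-01-01';  -- hypothetical value stored by the previous poll

    -- 1. The select only returns rows modified after the stored watermark...
    SELECT Person.BusinessEntityID, Person.ModifiedDate
    FROM   Person.Person
    WHERE  Person.ModifiedDate > @DateTimeWatermark;

    -- 2. ...and the MAX selector then keeps the highest ModifiedDate of that result set
    --    as the watermark for the next poll.
    SELECT MAX(Person.ModifiedDate) AS NextWatermark
    FROM   Person.Person
    WHERE  Person.ModifiedDate > @DateTimeWatermark;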

This is the resulting configuration XML.

<flow name="publishUpdatedEmployees_Flow"
   initialState="started"
   processingStrategy="synchronous">
   <poll
      doc:name="Poll">
      <fixed-frequency-scheduler
         frequency="1"
         timeUnit="DAYS"/>
      <watermark
         variable="DateTimeWatermark"
         default-expression="#[server.dateTime.format("yyyy-MM-dd'T'HH:mm:ss.SSS'Z'")]"
         selector="MAX"
         selector-expression="#[payload.ModifiedDate]"/>
      <db:select
         config-ref="MSSQL_AdventureWorks_Configuration"
         doc:name="Select Updated Employees">
         <db:parameterized-query><![CDATA[
SELECT Person.BusinessEntityID
   ,Person.Title
   ,Person.FirstName
   ,Person.MiddleName
   ,Person.LastName
   ,EmailAddress.EmailAddress
   ,Employee.LoginID
   ,Employee.JobTitle
   ,Employee.BirthDate
   ,Employee.HireDate
   ,Employee.Gender
   ,Person.ModifiedDate
FROM Person.Person
JOIN Person.EmailAddress
  ON Person.BusinessEntityID = EmailAddress.BusinessEntityID
JOIN HumanResources.Employee
  ON Person.BusinessEntityID = Employee.BusinessEntityID
WHERE Person.ModifiedDate > CAST(#[flowVars.DateTimeWatermark] AS DATETIME)
ORDER BY BusinessEntityID
         ]]></db:parameterized-query>
      </db:select>
   </poll>
   <expression-filter
      expression="#[payload.size() > 0]"
      doc:name="Filter empty payload"/>
   <json:object-to-json-transformer
      doc:name="Object to JSON"/>
   <logger
      level="INFO"
      doc:name="Logger"
      message="#[payload]"/>
</flow>

To test this scenario, I called the API implemented earlier to update data in my SQL database (PATCH request to http://localhost:8081/employee/{id}) with different IDs, so different employees were updated. Then I ran my solution, and the polling picked up only the updated records. As simple and beautiful as that!
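
If you prefer to trigger the poll without going through the PATCH endpoint, touching ModifiedDate directly in SQL Server has the same effect, since that is the only column the watermark comparison looks at (a small sketch; the BusinessEntityID values are arbitrary):

    -- Bump ModifiedDate so the next poll picks these employees up.
    UPDATE Person.Person
    SET    ModifiedDate = GETDATE()
    WHERE  BusinessEntityID IN (3, 5, 7);  -- arbitrary employee IDs to "touch"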

In this post I have shown you how to prepare your environment and your MuleSoft Anypoint Studio to work with Microsoft SQL Server, how to create a Global Configuration Element for a SQL database, how to search and poll changes from a SQL database, and how to update records as well. I hope you now have a better idea of how to connect to MS SQL Server from your MuleSoft solutions. Thanks for reading!

Better Documenting your BizTalk Solutions

The BizTalk Documenter has been available for many years on Codeplex for different BizTalk versions, starting with 2004 all the way to 2013 R2. The Documenter for 2004, 2006, 2006 R2 and 2009 can be found here. Some years later, a version for BizTalk 2010 was created. Last year, MBrimble and Dijkgraaf created a newer version which also supports BizTalk 2013 and 2013 R2. They did a great job; all improvements and fixes are listed here.

As with many things in life, there is always room for further improvement. While using the BizTalk 2013 Documenter, we realised that some changes could be made to better document BizTalk solutions. I downloaded the source code and made some changes of my own, but after sharing what I had done with the team, they invited me to collaborate on the project. I created the BizTalk 2013 Documenter 5.1.7.1 for BizTalk 2013 with some fixes and improvements.

I will share here not only the changes I made, but also some tips that I believe can help you better document your BizTalk solutions. If you would like to implement them, please make sure you have the latest version of the Documenter.

1. Leverage the BizTalk Documenter

The first and obvious tip is to leverage the BizTalk Documenter. This tool allows you to create a CHM file describing your BizTalk environment and BizTalk solutions. The first main section of the generated documentation covers your BizTalk applications, listing all their artefacts and providing navigable, highly visual documentation for each of them. The second main section describes the platform settings, such as hosts and adapters. The third main section documents BRE policies and vocabularies. You can expect an output similar to the one shown below.

BizTalk Documenter Sample Output

2. Use embedded documentation as much as possible

The practice of embedding documentation can be applied to your BizTalk solutions. Using the BizTalk artefact’s description field within the Admin Console allows you to describe the purpose and function of each artefact and keep this information always visible to administrators and developers. If you use the BizTalk Deployment Framework, you can easily replicate your artefacts’ descriptions across all your environments by exporting the applications’ bindings.

BizTalk Artefact Description Field

In our projects, we wanted to fully use embedded documentation for our BizTalk solutions, but the previous Documenter had some minor bugs, and Receive Ports, Schemas and Pipelines did not include the description field in the generated documentation. I fixed this by updating some “.xslt” files and a class in the BizTalkOM library, and the output now includes the description for all artefact types.

Pipeline Field Documented!

3. Include your ESB Itineraries as part of your documentation

The BizTalk ESB Toolkit provides a lot of functionality that enables and simplifies the implementation of an Enterprise Service Bus, and ESB itineraries are usually a key component of these solutions. When they are part of a solution, itineraries should be included in the documentation so the solution can be fully understood as a whole.

However, itineraries are not documented by the BizTalk Documenter out-of-the-box. One simple way to include them is to create a web page which briefly describes each itinerary and attach it to the documentation. The first step is to create a Word document including a picture of the itinerary designer, a description of its purpose and functionality, and the configuration of the resolvers. Then save that document as a Single File Web Page “.mht”.

Exporting a Word Document to an MHT file

I’ve introduced a change to the BizTalk Documenter to accept not only “.htm” and “.html” files as additional resources, but also “.mht” files. The big advantage of this is that documentation which includes images can be created in Word, saved as a “.mht” file and easily added to the BizTalk documentation.

Once the documentation for each itinerary has been created, save the files in a subfolder, which can be called “Itineraries”. I suggest this folder name to keep a clear structure in the generated documentation, but it can be set according to your specific needs. This folder should sit under a “Resources” folder, which will be selected during the creation of the documentation.

Folder Structure

The last step is performed when generating the documentation: on the “Output Options” page, in the “Advanced Documentation Options” section, select the Resources folder which contains the Itineraries folder.

Selecting the Resources Folder

Doing so, the generated documentation will have an “Itineraries” branch under “Additional Notes”, and under it, the list of itineraries. This way, these important components of your BizTalk solutions are now part of your documentation.

Resulting output that includes MHT files exported from Word

4. Document your Maps

We have incorporated the functionality of another Codeplex project, the BizTalk Map Documenter, into the BizTalk Documenter. If you want to document your maps in more detail, the BizTalk mapper “.btm” source files must be available, and the following steps must be executed when generating the BizTalk documentation.

First, copy the BizTalk mapper files of the BizTalk applications to be documented into a folder named “BtmSourceFiles”. Then rename the maps so that they have the full name shown in the BizTalk Admin Console, but including the “.btm” extension. Finally, copy the “BtmSourceFiles” folder under the Resources folder to be selected in the Documenter Output Options. The “BtmSourceFiles” folder name and the full names of the maps are required for the Documenter to document the maps in detail.

BTM Source Files under the Resources folder

BTM Source Files under the Resources folder

The following screenshots show the detailed BizTalk map documentation you can expect. It shows direct links between source and target nodes, functoids, and constant values utilised in the map.

Resulting detailed Map documentation

Resulting detailed Map documentation

5. Enrich your documentation with other relevant information

In tip #3, I mentioned how you can include your itineraries as part of your documentation. In addition to that, you can enrich your documentation with any Word or Excel document saved as a “.mht” file, or any other HTML file relevant to your solution. As an example, you could include the SettingsFileGenerator file of the BizTalk Deployment Framework: just open it in Excel and save it as a “.mht” file. This file must be saved in the corresponding folder under the Resources folder selected when you create the BizTalk documentation. This way, your deployment settings can be included in your documentation.

Settings File for Deployments included in documentation

Settings File for Deployments included in documentation

6. Document only artefacts relevant to your solution

Previous versions of the BizTalk Documenter allowed you to select the BizTalk applications to be included in the documentation. However, the Platform Settings and Business Rule Engine sections of the generated documentation always included all hosts, adapters, policies and vocabularies. In some projects, we needed to document only those hosts, adapters, and BRE artefacts relevant to the solutions in scope. To satisfy this need, I added the “Additional Filters” page to the Documenter. On this page, you can filter hosts, adapters and BRE artefacts. Filters are applied using a “StartWith” function, which means that all artefacts whose names start with the filter text will be included. Multiple filters can be defined using a “|” (pipe) delimiter. The following screenshots show the configuration and the output of this new functionality.

Additional Filters page

Resulting output when using Filters

7. Put a nice cover on your documentation

The icing on the cake for good documentation is a nice cover aligned to your needs. To do this, add a custom “titlePage.htm” file to the root of the Resources folder selected on the Output Options tab. If you are using your own custom images, add them to the same root folder.

Including a cover page

The default cover page and a customised one can be seen in the following two images.

Cover customisations

Cover customisations

The option of customising the cover page has been available since previous versions of the Documenter, but to get its template you had to download the source code. In this link you can see and download just the HTML template, which you can customise according to your needs.

Note that the template makes use of a stylesheet and images which are part of the Documenter. You can use your own by adding them to the same Resources root folder, and you can freely customise this HTML according to your preferences and needs. Just make sure you name your “.htm” file “titlePage.htm”.

I hope you find these tips useful and that the BizTalk Documenter helps you provide comprehensive, quality documentation for your BizTalk solutions. Please feel free to suggest your ideas or improvements for the BizTalk Documenter to the team on the Codeplex page.