API Mocking for Developers

APIs are the most common way to exchange messages in a microservices architecture. There are two different approaches to API development: one is called Model First and the other Design First. Usually the latter, also known as Spec-Driven Development (SDD), is preferred over the former.

When is the Model First approach useful? Running legacy API applications is a good example of where it fits. API spec documents can easily be extracted from such systems by tools built around Swagger, which has since been renamed to OpenAPI. There are many Swagger implementations, Swashbuckle for example, so extracting an API spec document with those tools is very easy.

What if we are developing a new API application? Should we develop the application first and then extract its API spec document as above? We certainly could, and there's no problem with that. But what should we do when the API spec is updated? The application would have to be updated first and the updated API spec extracted again, which can be expensive. In this case the Design First approach is useful because we can nail down the API spec before the actual implementation, which reduces time and cost and achieves better productivity. Here's a rough outline of how the SDD workflow looks:

  1. Design API spec based on the requirements.
  2. Simulate the API spec.
  3. Gather feedback from API consumers.
  4. Validate the API spec against the requirements.
  5. Publish the API spec.

This raises an interesting question: how can we run or simulate the API without an actual implementation? This is where API mocking comes in. By mocking APIs, the front-end, mobile or other back-end developers who consume them can simulate the expected results regardless of whether the APIs have been implemented yet. Mocking also makes it much easier for those developers to give feedback.

In this post, we will look at how API mocking works in different API management tools – MuleSoft API Manager, Azure API Management and Amazon API Gateway – and discuss the pros and cons of each.

MuleSoft API Manager + RAML

MuleSoft API Manager is a strong supporter of the RAML (RESTful API Modeling Language) spec. RAML version 0.8 is still the most popular, but version 1.0 has recently been released. RAML follows the YAML format, which makes specs more readable for API developers. Here's a simple RAML-based API spec document.
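A minimal sketch of such a RAML document, using illustrative productId, name and price fields (not necessarily the exact fields from the original spec), might look like this:

#%RAML 1.0
title: Product API
version: v1
mediaType: application/json

/products:
  get:
    responses:
      200:
        body:
          example: |
            [
              { "productId": "1", "name": "Surface Pro 4", "price": 1499 },
              { "productId": "2", "name": "Xbox One S", "price": 399 }
            ]
  post:
    body:
      example: |
        { "name": "Surface Book", "price": 2299 }
    responses:
      201:
        body:
          example: |
            { "productId": "3", "name": "Surface Book", "price": 2299 }
  /{productId}:
    patch:
      body:
        example: |
          { "price": 1299 }
      responses:
        200:
          body:
            example: |
              { "productId": "1", "name": "Surface Pro 4", "price": 1299 }
    delete:
      responses:
        204:
          description: The product has been deleted.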

According to the spec document, the API has two endpoints, /products and /products/{productId}, which define GET, POST, PATCH and DELETE methods respectively. You may have spotted which part of the spec document is used for mocking: each response body has an example node, and these nodes are what MuleSoft API Manager uses for mocking. Let's see how we can mock both API endpoints.

First of all, log in to the Anypoint website. You can create a free trial account if you don't already have one.

Navigate to API Manager by clicking either the button or icon.

Click the Add new API button to add a new API and fill the form fields.

After creating an API, click the Edit in API designer link to define APIs.

We have already got the API spec in RAML, so just import it.

The designer screen consists of three sections – left, centre, and right. The centre section shows the RAML document itself while the right section displays how the RAML is visualised as an API document. Now, click the Mocking Service button at the top-right corner to activate mocking. If mocking is enabled, the original URL is commented out and a new mocking URL is added like below:

Of course, when mocking is de-activated, the mocking URL disappears and the original URL is uncommented. With this mocking URL, we can send requests through Postman for simulation. In addition, developers consuming this API don't have to wait until it's implemented – they can just start using it, and the API developer can work in parallel with front-end/mobile developers. This is a screenshot of using Postman to send API requests.

As defined in the example nodes of the RAML spec document, the pre-populated result is returned. In the same way, other endpoints and/or methods return their mocked results.

So far, we have looked at how MuleSoft API Manager handles RAML and API mocking. It's very easy to use – we just imported a RAML spec document and switched on the mocking feature, and that was it. Mocking can't get easier than this. It also supports the Swagger 2.0 spec; when we import a Swagger document, it is automatically converted to a RAML 1.0 document. However, in the API designer we still have to work in RAML. It would be great if MuleSoft API Manager supported editing Swagger spec documents – the de facto standard nowadays – out of the box sooner rather than later.

There are a couple of downsides to using API Manager. We can't have precise control over individual endpoints and methods.

  • “I want to mock only the GET method on this specific endpoint!”
  • “Nah, it’s not possible here. Try another service.”

Also, the API designer changes the base URL of the API when the mocking feature is activated. That can be critical for the developers consuming the API, because they have to change the API URL once the API is implemented.

Now, let's move on to a service that supports Swagger natively.

Azure API Management + Swagger

Swagger has now become OpenAPI, and spec version 2.0 is the most popular; a new spec version 3.0 has recently been released for preview. Swagger is versatile – i.e. it supports both YAML and JSON formats. Therefore, we can design the spec in YAML and save it as a JSON file so that Azure API Management can import the spec document natively. Here's the same API document written in YAML based on the Swagger spec.
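A Swagger 2.0 sketch of the same API, reusing the illustrative product fields from the RAML example above, could look like this:

swagger: "2.0"
info:
  title: Product API
  version: "1.0.0"
consumes:
  - application/json
produces:
  - application/json
paths:
  /products:
    get:
      responses:
        "200":
          description: A list of products
          examples:
            application/json:
              - productId: "1"
                name: Surface Pro 4
                price: 1499
              - productId: "2"
                name: Xbox One S
                price: 399
    post:
      parameters:
        - name: product
          in: body
          required: true
          schema:
            type: object
      responses:
        "201":
          description: The created product
          examples:
            application/json:
              productId: "3"
              name: Surface Book
              price: 2299
  /products/{productId}:
    parameters:
      - name: productId
        in: path
        required: true
        type: string
    patch:
      parameters:
        - name: product
          in: body
          required: true
          schema:
            type: object
      responses:
        "200":
          description: The updated product
          examples:
            application/json:
              productId: "1"
              name: Surface Pro 4
              price: 1299
    delete:
      responses:
        "204":
          description: The product has been deleted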

It looks very similar to the RAML version and includes an examples node for each endpoint and/or method, so we can easily activate the mocking feature. Here's how.

First of all, we need to create an API Management instance on the Azure Portal. Provisioning takes about half an hour. Once the instance is ready, click the APIs - PREVIEW blade on the left-hand side to see the list of APIs.

We can also see tiles for API registration. Click the Open API specification tile to register an API.

Upload a Swagger definition file in JSON format, say swagger.json, and enter an appropriate name and API URL suffix. We just used shop as the suffix for now.

So, that's it! We just uploaded the swagger.json file and that completes the API definition in API Management. Now we need to mock an endpoint. Unlike MuleSoft API Manager, Azure API Management can handle mocking on endpoints and methods individually. Mocking is set in the Inbound processing tile, as it intercepts the request before it hits the back-end. Mocking can also be set at a global level rather than per operation. For now, we simply set up mocking only on the /products endpoint and the GET method.

Select /products - GET at the left-hand side and click the pencil icon at the top-right corner of the Inbound Processing tile. Then we’re able to see the screen below:

Click the Mocking tab, select the Static responses option on the Mocking behavior item, and choose the 200 OK option of the Sample or schema responses item, followed by Save. We can expect the content defined under the examples node. Once saved, the screen will display something like this:
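Behind the scenes, this configures a mock-response policy in the operation's inbound policy section, roughly along these lines (a sketch, not the exact policy generated for the original API):

<inbound>
    <base />
    <mock-response status-code="200" content-type="application/json" />
</inbound>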

In order to use API Management, we have to send a subscription key header in every request. Go to the Products - PREVIEW blade and add the API that we defined above.

Go to the Users - PREVIEW blade to get the Subscription Key.

We’re ready now. In Postman, we can see the result like:

The subscription key is sent through the header with the key Ocp-Apim-Subscription-Key, and the result is what we expected – the example already defined in the Swagger definition.

So far, we have used Azure API Management with Swagger definition for mocking. The main differences between MuleSoft API Manager and Azure API Management are:

  • Azure API Management doesn't change the mocking endpoint URL, while MuleSoft API Manager does. That's very important for front-end/mobile developers because they aren't bothered with changing the base URL – it's the same URL returning either the mocked or the actual result, depending on the setting.
  • Azure API Management can mock endpoints and methods individually, so we only mock the ones we need.

However, the downside of using Azure API Management is the cost. It costs more than $60/month on the developer pricing tier, which is the cheapest. MuleSoft API Manager is literally free to use from the mocking perspective (of course, we have to pay for other MuleSoft services).

So what other service can we use with Swagger? There is, of course, Amazon Web Services. Let's have a look.

Amazon API Gateway + Swagger

Amazon API Gateway also uses Swagger for its service. Log in to the API Gateway Console and import the Swagger definition shown above.

Once imported, we can see all the API endpoints. We choose the /products endpoint with the GET method, select the Mock option for the Integration type item, and click Save.

Now we're at the Method Execution screen. On the rightmost side of the screen it says Mock Endpoint, which is where this API will hit. Click the Integration Request tile.

This confirms it is a mocked endpoint. Right below, select the When there are no templates defined (recommended) option for the Request body passthrough item.

Go back to the Method Execution screen and click the Integration Response tile.

There's already a definition of the HTTP Status Code 200 in Swagger, which is automatically shown on the screen. Click the triangle icon on the left-hand side.

Open the Body Mapping Templates section and click the application/json item. By doing so, we’re able to see the sample data input field. We need to put a JSON object as a response example by hand.
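For example, for the GET /products response we can paste in a sample consistent with the (illustrative) examples defined in the Swagger document:

[
  {
    "productId": "1",
    "name": "Surface Pro 4",
    "price": 1499
  },
  {
    "productId": "2",
    "name": "Xbox One S",
    "price": 399
  }
]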

I couldn't find any way to populate the sample data automatically; in particular, an array-type response object can't be auto-generated. This is questionable – why doesn't Amazon API Gateway accept a valid example object? If we want to avoid this situation, we have to update our Swagger definition in a vendor-dependent way.

Once the sample data is updated, save it and go back to the Method Execution screen again. We're ready to use the mocking feature. Wait – one more step: we need to publish the API for public access.

Uh oh… we found another issue here. In order to publish (or deploy) the API, we have to set up the Integration Type of every endpoint and method INDIVIDUALLY!!! We can't skip a single endpoint/method, and we can't set up a global mocking feature either. This is a simple example that only contains four endpoints/methods, but in a real-life scenario there could be hundreds of API endpoints/methods in one application. How would we set them all up individually then? There's no practical way here.

Anyway, once deployment completes, API Gateway gives us a URL to access the API Gateway endpoints. Use Postman to see the result:

So far, we have looked at Amazon API Gateway for mocking. Let's wrap up this post.

  • Global API Mocking: MuleSoft API Manager provides a one-click button, while Amazon API Gateway doesn't provide such a feature.
  • Individual API Mocking: Both Azure API Management and Amazon API Gateway are good at handling individual endpoints/methods, while MuleSoft API Manager can't do it. Amazon API Gateway, however, doesn't handle returning mocked data well, especially for array-type responses, while Azure API Management supports this perfectly.
  • Upload Automation of API Definitions: With Amazon API Gateway we have to manually update several places after uploading a Swagger definition file, such as the examples used as mocked data. On the other hand, both Azure API Management and MuleSoft API Manager handle this perfectly – there's no need for manual work after uploading definitions.
  • Cost of API Mocking: Azure API Management is horrible from the cost perspective. MuleSoft provides a virtually free account for mocking, and Amazon API Gateway offers a free tier for the first 12 months.

So far we have briefly looked at how we can mock our APIs using just the spec documents. As we saw above, no coding is required – mocking relies purely on the spec document. We also looked at how mocking is done on each platform – MuleSoft API Manager, Azure API Management and Amazon API Gateway – and discussed the merits and demerits of each service from the mocking perspective.

MuleSoft Anypoint Studio in High DPI Mode

MuleSoft's development platform, Anypoint Studio, is a great tool for service integration. However, if we're using an OS that supports High DPI mode, like Windows 10, the user experience is not great.

Icons are barely recognisable! By the way, this is NOT an issue on the Anypoint Studio side. Rather, it's a well-known issue that Eclipse has had since at least 2013. Anypoint Studio is built on top of Eclipse, so the same issue occurs here.

The version of Anypoint Studio is 6.2.3 at the time of writing.

So, how to fix this? Here’s the magic.

First of all, create a manifest file in the same location as AnypointStudio.exe and name it AnypointStudio.exe.manifest.
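Its content – a minimal version, following the standard Windows application manifest schema, with dpiAware as the key element – will look like this:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1"
          xmlns:asmv3="urn:schemas-microsoft-com:asm.v3"
          manifestVersion="1.0">
  <asmv3:application>
    <asmv3:windowsSettings xmlns="http://schemas.microsoft.com/SMI/2005/WindowsSettings">
      <!-- Tell Windows this application is NOT DPI aware, so the OS scales it instead -->
      <dpiAware>false</dpiAware>
    </asmv3:windowsSettings>
  </asmv3:application>
</assembly>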

The key part of the manifest file is the dpiAware element with its value set to false. With this manifest file in place, Anypoint Studio runs with the High DPI settings overridden.

Thanks to Roberto, one of my colleagues at Kloud Solutions, for extracting this manifest content by decompiling AnypointStudio.exe.

Next, we need to let the application know there is an external manifest file to read at runtime, by modifying the Windows registry. Open Registry Editor by running regedit, then follow the steps below:

  1. Navigate to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\SideBySide\
  2. Add new DWORD (32-bit) Value
  3. Set the name to PreferExternalManifest
  4. Set the value to 1
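Equivalently, the same registry change can be captured in a .reg file and imported:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\SideBySide]
"PreferExternalManifest"=dword:00000001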

All good to go! Now run our Anypoint Studio again and see how it looks different from the previous screen.

Can we see the differences? Of course this black magic contains a couple of downsides:

  • The icons are a bit blurry, and
  • Workspace becomes smaller

But that's fine, it's not too bad as long as we can recognise all those icons. Hopefully, the next version of Eclipse will fix this issue.

Automate the archiving of your CloudHub application logs

CloudHub is MuleSoft’s integration platform as a service (iPaaS) that enables the deployment and management of integration solutions in the cloud. Runtime Manager, CloudHub’s management tool,  provides an integrated set of logging tools that allow support and operations staff to monitor and troubleshoot application logs of deployed applications.

Currently, application log entries are kept for 30 days or until they reach a max size of 100 MB. Often we are required to keep these logs for greater periods of time for auditing or archiving purposes. Overly chatty applications (applications that write log entries frequently) may find their logs covering only a few days, restricting the troubleshooting window even further. Runtime Manager allows portal users to manually download log files via the browser; however, no automated solution is provided out of the box.

The good news is, the platform does provide both a command line tool and a management API that we can leverage. Leaving the CLI to one side for now, the platform's management API looks promising. Indeed, a search in Anypoint Exchange also yields a ready-built CloudHub Connector we could leverage. However, upon further investigation, the connector doesn't meet all our requirements: it does not appear to support different business groups and environments, so using it to download logs for applications deployed to non-default environments will not work (at least in the current version). The best approach is to consume the management APIs provided by the Anypoint Platform directly. RAML definitions have been made available, making it very easy to consume them within a Mule flow.

Solution overview

In this post we’ll develop a CloudHub application that is triggered periodically to loop through a collection of target applications, connect to the Anypoint Management APIs and fetch the current application log for each deployed instance. The downloaded logs will be compressed and sent to an Amazon S3 bucket for archiving.

solution_view

Putting the solution together:

We start by grabbing the RAML definitions for both the Anypoint Access Management API and the Anypoint Runtime Manager API and bringing them into the project. The Access Management API provides the authentication and authorisation operations to log in and obtain an access token needed in subsequent calls to the Runtime Manager API. The Runtime Manager API provides the operations to enumerate the deployed instances of an application and download the application log.

Download and add the RAML definitions to the project by extracting them into the ~/src/main/api folder.

raml_defs

To consume these APIs we’ll use the HTTP connector so we need to define some global configuration elements that make use of the RAML definitions we just imported.

http_config

Note: Referencing these directly from Exchange currently throws some RAML parsing errors.

exchange_error

So to avoid this, we download the RAML definitions manually and reference our local copies. Obviously we'll need to update these as the API definitions change in the future.

To provide multi-value configuration support, I have used a simple JSON structure to describe the collection of applications we need to iterate over.
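The exact shape isn't important; something along these lines (the property names here are hypothetical, not the ones from the original project) is enough:

{
  "applications": [
    { "name": "my-api-v1", "environmentId": "11111111-2222-3333-4444-555555555555" },
    { "name": "my-batch-worker", "environmentId": "66666666-7777-8888-9999-000000000000" }
  ]
}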

Our flow then reads in this config and transforms it into a HashMap that we can then iterate over.

Note: Environment IDs can be gathered using the Runtime Manager API or the Anypoint CLI.

cli_environments

Next, create our top level flow that is triggered periodically to read and parse our configuration setting into a collection that we can iterate over to download the application logs.

entry_point_flow

Now, we create a sub-flow that describes the process of downloading application logs for each deployed instance. We first obtain an access token using the Access Management API and present that token to the Runtime Manager API to gather details of all deployed instances of the application. We then iterate over that collection and call the Runtime Manager API to download the current application log for each deployed instance.

archive_log_file

Next we add the sub-flows for consuming the Anypoint Platform APIs for each of the in-scope operations:

cloudhubLogin

cloudhubDeployments

cloudhublogfiles

In this last sub-flow, we perform an additional processing step of compressing (zip) the log file before sending to our configured Amazon S3 bucket.

The full configuration for the workflow can be found here.

Once packaged and deployed to CloudHub we configure the solution to archive application logs for any deployed CloudHub app, even if they have been deployed into environments other than the one hosting the log archiver solution.

deployed_config

After running the solution for a day or so and checking the configured storage location we can confirm logs are being archived each day.

amazon_S3_bucket

Known limitations:

  • The Anypoint Management API does not allow downloading application logs for a given date range. That is, each time the solution runs a full copy of the application log will be downloaded. The API does support an operation to query the logs for a given date range and return matching entries as a result set but that comes with additional constraints on result set size (number of rows) and entry size (message truncation).
  • The RAML definitions in Anypoint Exchange currently do not parse correctly in Anypoint Studio. As mentioned above, to work around this we download the RAML manually and bring it into the project ourselves.
  • Credentials supplied in configuration are in plain text. We suggest creating a dedicated Anypoint account and granting permissions to only the target environments.

In this post I have outlined a solution that automates the archiving of your CloudHub application log files to external cloud storage. The solution allows periodic scheduling and multiple target applications to be configured even if they exist in different CloudHub environments. Deploy this solution once to archive all of your deployed application logs.

DataWeave: Tips and tricks from the field

DataWeave (DW) has been part of the MuleSoft Anypoint Platform since v3.7.0 and has been a welcome enhancement providing an order of magnitude improvement in performance as well as increased mapping capability that enables more efficient flow design.

However, like most new features of this scope and size (i.e. a brand new transformation engine written from the ground up), early documentation was minimal and often we were left to fend for ourselves. At times even the simplest mapping scenarios could take an hour or so to solve when they would have taken five minutes in DataMapper. But it pays to stick with it and push through the adoption hurdle, as the payoff is worth it in the end.

For those starting out with DataWeave here are some links to get you going:

In this post I will share some tips and tricks I have picked up from the field with the aim that I can give someone out there a few hours of their life back.

Tip #1 – Use the identity transform to check how DW performs its intermediate parsing

When starting any new DW transform, it pays to capture and understand how DW will parse the inputs and present it to the transform engine. This helps navigate some of the implicit type conversions going on as well as better understand the data structure being traversed in your map. To do this, start off by using the following identity transform with an output type of application/dw.
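In DataWeave 1.x that identity transform is simply:

%dw 1.0
%output application/dw
---
payload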

Previewing a sample invoice XML yields the following output, which gives us insight into the internal data structure and type conversions performed by DW when parsing our sample payload.

and the output of the identity transform

Tip #2 – Handling conditional XML node lists

Mule developers who have been using DW even for a short time will be used to seeing these types of errors displayed in the editor:

Cannot coerce a :string to a :object

These often occur when we expect the input payload to be an array or complex data type, but a simple type (a string in this case) is actually presented to the transform engine. In our invoice sample, this might occur when an optional XML node list contains no child nodes.

To troubleshoot this we use the identity transform described above to gain insight into the intermediate form of our input payload. Notice the invoices element is no longer treated as a node list but rather as a string.

We resolve this by checking whether ns0#invoices is of type object and providing alternative output should the collection be empty.
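A sketch of that check (the namespace URI and element names are illustrative):

%dw 1.0
%output application/java
%namespace ns0 http://example.org/invoicing
---
{
    invoices: payload.ns0#invoices when payload.ns0#invoices is :object otherwise []
}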

Tip #3 – Explicitly setting the type of input reader DataWeave uses

Occasionally you will hit scenarios where the incoming message (input payload) doesn't have a mimeType assigned, or DW cannot infer a reader class from the mimeType that is present. In these cases we'll either get an exception thrown or see unpredictable behaviour from the transform. To avoid this, we should be in the habit of setting the mimeType of the input payload explicitly. At present we can't do this in the graphical editor; we need to edit the configuration XML directly and add the following attribute to the <dw:input-payload> element of our transform shape:

<dw:input-payload doc:sample="xml_1.xml" mimeType="application/xml" />

Tip #4 – Register custom formats as types (e.g. datetime formats)

Hopefully we are blessed to be always working against strongly typed message schema where discovery and debate over the data formats of the output fields never happen…yeah right. Too often we need to tweak the output format of data fields a couple of times during the development cycle. In a large transform, this may mean applying the same change to several fields throughout the map, only to come back and change it again the next day. To save time and better organise our DW maps, we should declare common format types in the transform header and reference those throughout the map. If we need to tweak a format, we apply the change in one central location.
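For example, a date format declared once in the header and reused across the map (the field names are illustrative, and issuedOn is assumed to already be a date):

%dw 1.0
%output application/json
%type ausDate = :string { format: "dd/MM/yyyy" }
---
{
    // illustrative fields; both reuse the single format declared above
    invoiceDate: payload.invoice.issuedOn as :ausDate,
    generatedOn: now as :ausDate
}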

Tip #5 – Move complex processing logic to MEL functions

Sometimes even the most straightforward of transformations can lead to messy chains of functions that are hard to read, difficult to troubleshoot and often error prone. When I find myself falling into these scenarios I look to pull that logic out and move it into a more manageable MEL function. This not only cleans up the DW map but also provides the opportunity to place debug points in our MEL function to assist with troubleshooting a misbehaving transform.

Place your MEL function in your flow’s configuration file at the start along with any other config elements.
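A sketch of what that looks like – the function itself is only an illustration:

<configuration doc:name="Configuration">
    <expression-language>
        <global-functions>
            // illustrative helper: normalise +61 numbers to local format
            def toLocalPhone(number) {
                return number.startsWith('+61') ? '0' + number.substring(3) : number;
            }
        </global-functions>
    </expression-language>
</configuration>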

Call your function as you would if you declared it inline in your map.

Tip #6 – Avoid the graphical drag and drop tool in the editor

One final tip: I regularly avoid using the graphical drag-and-drop tool in the editor. I'm sure this will be fixed in later versions of DataWeave, but for now I find it creates untidy code that I often end up fighting with the editor to clean up. I would typically only use the graphical editor to map multiple fields en masse, then cut and paste the code into my text editor, clean it up and paste it back into DW. From then on, I work entirely in the code window.

There we go – in this post I have outlined six tips that I hope will save at least one time-poor developer a couple of hours, which could be much better spent getting on with the business of delivering integration solutions for their customers. Please feel free to contribute more tips in the comments section below.

Connecting to MS SQL Server with MuleSoft

MuleSoft provides an extensive list of connectors which come in handy when implementing integration solutions. In many integration scenarios, we need to connect directly to a database to get or update data. MuleSoft provides a database connector which allows JDBC connectivity with relational databases to execute SQL operations such as Select, Insert, Update, Delete, and Stored Procedures.

However, the database connector only provides out-of-the-box connectivity for Oracle, MySQL, and Derby databases. To connect to Microsoft SQL Server, additional steps are required. In this post I will go through the required setup and show how to connect to a MS SQL Server database. I will implement some basic integration scenarios such as querying data, updating data, and polling updated records from a SQL database.

For this post I will be using MuleSoft 3.7.2 EE and SQL Server 2014, however, you could follow the same steps to connect to other versions of SQL Server.

Pre-requisites to connect to MS SQL Server with MuleSoft

Because MS SQL Server is not one of the supported out-of-the-box databases, the following steps are required:

  • Download and install the Microsoft JDBC Driver for SQL Server. As detailed in the install instructions at that link, download and run the installer; it will prompt for an installation directory, for which %ProgramFiles% is suggested. For this exercise I used the latest version (4.2).
  • Make sure TCP/IP is enabled in your SQL Server configuration as described here.
  • Add the required references to your MuleSoft project.
    • Right click on your project -> Build Path -> Add External Archives…

    • Select the corresponding jar file from the installation directory specified before (e.g. “C:\Program Files (x86)\Microsoft JDBC Driver 4.2 for SQL Server\sqljdbc_4.2\enu”).

    • Validate that the library is referenced, as shown in the figure.

Now we are ready to create connections to SQL Server from Anypoint Studio.

Create a Global Configuration Element for your SQL Server database

Once MS SQL Server JDBC libraries are referenced by your MuleSoft project, you can create a Configuration Element to connect to a SQL Server database. For this exercise I will be using the AdventureWorks2014 Sample Database. The steps to create the global configuration element are described as follows:

  • Create a new Generic Database Configuration.

  • Set the Generic Database Configuration
    • Specify a Name.
    • Specify your connection URL using the format: jdbc:sqlserver://${mssql.server}:${mssql.port};databaseName=${mssql.database};user=${mssql.user};password=${mssql.password}
    • Specify the Driver Class Name as: com.microsoft.sqlserver.jdbc.SQLServerDriver
    • Test connection.

The resulting XML of the Generic Database Global Configuration element should be like the one as follows:

<db:generic-config 
name="MSSQL_AdventureWorks_Configuration"
url="jdbc:sqlserver://${mssql.server}:${mssql.port};databaseName=${mssql.database};user=${mssql.user};password=${mssql.password}"
driverClassName="com.microsoft.sqlserver.jdbc.SQLServerDriver"
doc:name="Generic Database Configuration"/>

Note that I am using a Property Placeholder to store my connection properties. Once the Global Configuration Element has been created, we are ready to start interacting with the database. In the following sections, I will show how to implement three common integration scenarios with a SQL Server database:

  1. Querying data from a database.
  2. Updating data in a database.
  3. Polling updated records from a database.

Querying Data from a SQL Server Database

The first scenario we will go through is querying data from a SQL Server database based on an HTTP GET request. The requirement for this scenario is to get all employees from the database whose first name contains a specified string. To implement this scenario, I have created the following flow containing the steps listed below:

  • HTTP Listener: which expects a GET call with a query param specifying the name to search for.
  • Extract Query Param: a Set Flow Variable transformer which extracts the query param specifying the name to search for. Because I will be using a LIKE operator in my SQL query, I default the variable to '%%' to bring back all available records when no filter is specified. I set the variable value as follows:
    #[message.inboundProperties.'http.query.params'.name == empty ? '%%' : '%' + message.inboundProperties.'http.query.params'.name + '%']
  • Get Employees from DB: Database connector to get employees matching the criterion configured as follows:
    • Connector Configuration: the Generic Database Configuration Element configured previously.
    • Operation: Select
    • Query Type: Parametrized

    • Parametrized query: To query employees based on the filter criterion, I will use a simple SELECT statement with the flow variable #[flowVars.employeeName] as the filter, as shown below:

      SELECT  Person.BusinessEntityID
              ,Person.Title
              ,Person.FirstName
              ,Person.MiddleName
              ,Person.LastName
              ,EmailAddress.EmailAddress
              ,Employee.LoginID
              ,Employee.JobTitle
              ,Employee.BirthDate
              ,Employee.HireDate
              ,Employee.Gender
      FROM    Person.Person
      JOIN    Person.EmailAddress
           ON Person.BusinessEntityID = EmailAddress.BusinessEntityID
      JOIN    HumanResources.Employee
           ON Person.BusinessEntityID = Employee.BusinessEntityID
      WHERE    Person.FirstName LIKE #[flowVars.employeeName]

  • Convert Object to JSON: to convert the default java.util.LinkedList object to JSON.

At the end of the flow, the JSON payload is returned to the caller. This is the resulting configuration XML.

<http:listener-config
   name="HTTP_Listener_Configuration"
   host="0.0.0.0"
   port="8081"
   doc:name="HTTP Listener Configuration"/>
<flow name="getEmployees_Flow">
   <http:listener config-ref="HTTP_Listener_Configuration"
      path="/employee"
      allowedMethods="GET"
      doc:name="Listener GET Employees"/>
   <set-variable
      variableName="employeeName"
      value="#[message.inboundProperties.'http.query.params'.name == empty ? '%%' : '%' + message.inboundProperties.'http.query.params'.name + '%' ]"
      doc:name="Extract Query Param"/>
   <db:select
      config-ref="MSSQL_AdventureWorks_Configuration"
      doc:name="Get Employees from DB">
      <db:parameterized-query><![CDATA[
         SELECT  Person.BusinessEntityID
                ,Person.Title
                ,Person.FirstName
                ,Person.MiddleName
                ,Person.LastName
                ,EmailAddress.EmailAddress
                ,Employee.LoginID
                ,Employee.JobTitle
                ,Employee.BirthDate
                ,Employee.HireDate
                ,Employee.Gender
         FROM    Person.Person
         JOIN    Person.EmailAddress
              ON Person.BusinessEntityID = EmailAddress.BusinessEntityID
         JOIN    HumanResources.Employee
              ON Person.BusinessEntityID = Employee.BusinessEntityID
         WHERE   Person.FirstName LIKE #[flowVars.employeeName]
      ]]></db:parameterized-query>
   </db:select>
   <json:object-to-json-transformer
      doc:name="Convert Object to JSON"/>
</flow>

Now I'm ready to test this. I will use Postman to call my endpoint.

When I call the endpoint without specifying a name using this URL http://localhost:8081/employee, I get a JSON payload with all employees.

When I call the endpoint specifying a name as a query param using this URL: http://localhost:8081/employee?name=sha, I get a JSON payload containing 3 employee records with a first name containing the string “sha”.

All good with my first scenario.

Updating Data in a SQL Server Database

In this scenario, I will update records in the database based on an HTTP PATCH request. The requirement is to update an Employee: the Employee ID is sent as a URI param, and the values to update are sent in the body.

To achieve this, I have created the following flow containing the steps listed below:

  • HTTP Listener: which expects a PATCH call with a URI param specifying the EmployeeID to update and the values to be updated specified in the body as a JSON payload. The expected payload looks like this:

    {
       "FirstName": "Paco",
       "MiddleName": "",
       "LastName": "de la Cruz",
       "LoginID": "adventure-works\\paco",
       "EmailAddress": "paco@adventure-works.com"
    }

  • Extract URI Param: A Set Flow Variable Transformer which extracts the URI param specifying the EmployeeID to update.
  • Update Employee: Database connector to update the employee configured as follows:
    • Connector Configuration: the Generic Database Configuration Element configured previously.
    • Operation: Stored Procedure
    • Query Type: Parametrized
    • Parametrized query (I previously created this stored procedure on the AdventureWorks2014 database; a sketch of what it might look like follows this list):

      This is the syntax we have to use to call the stored procedure from MuleSoft using a parametrised query.

      {CALL uspUpdateEmployee (:BusinessEntityID, :FirstName, :MiddleName , :LastName, :EmailAddress, :LoginID)}

      Note the configured parameters in the picture above.

      Here, it is worth mentioning that if you are using MuleSoft versions 3.7.0 or 3.7.1, you might want to update to 3.7.2 or higher to avoid this bug (Database does not supports streaming on stored procedures (java.lang.IllegalArgumentException)) when calling parametrised stored procedures.

  • Choice: Return the corresponding message to the caller notifying if the employee was updated or not.
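For reference, here is a sketch of what such an uspUpdateEmployee procedure could look like – an assumption for illustration, not the exact procedure from the original post. The database connector returns the update counts to the flow, and the choice router below appears to check the first of them.

CREATE PROCEDURE uspUpdateEmployee
    @BusinessEntityID INT,
    @FirstName        NVARCHAR(50),
    @MiddleName       NVARCHAR(50),
    @LastName         NVARCHAR(50),
    @EmailAddress     NVARCHAR(50),
    @LoginID          NVARCHAR(256)
AS
BEGIN
    -- Update the person record (and its ModifiedDate, so the polling scenario below picks it up)
    UPDATE Person.Person
    SET    FirstName = @FirstName,
           MiddleName = @MiddleName,
           LastName = @LastName,
           ModifiedDate = GETDATE()
    WHERE  BusinessEntityID = @BusinessEntityID;

    UPDATE Person.EmailAddress
    SET    EmailAddress = @EmailAddress
    WHERE  BusinessEntityID = @BusinessEntityID;

    UPDATE HumanResources.Employee
    SET    LoginID = @LoginID
    WHERE  BusinessEntityID = @BusinessEntityID;
END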

This is the resulting configuration XML.

<flow name="updateEmployees_Flow">
   <http:listener
      config-ref="HTTP_Listener_Configuration"
      path="/employee/{id}"
      allowedMethods="PATCH"
      doc:name="Listener PATCH Employee"/>
   <set-variable
      variableName="businessEntityID"
      value="#[message.inboundProperties.'http.uri.params'.id == empty ? 0 : message.inboundProperties.'http.uri.params'.id]"
      doc:name="Extract URI Param"/>
   <db:stored-procedure
      config-ref="MSSQL_AdventureWorks_Configuration"
      doc:name="Update Employee">
      <db:parameterized-query><![CDATA[
         {CALL uspUpdateEmployee (:BusinessEntityID, :FirstName, :MiddleName, :LastName, :EmailAddress, :LoginID)}
      ]]></db:parameterized-query>
      <db:in-param name="BusinessEntityID" type="INTEGER"  value="#[flowVars.businessEntityID]"/>
      <db:in-param name="FirstName"        type="NVARCHAR" value="#[json:FirstName]"/>
      <db:in-param name="MiddleName"       type="NVARCHAR" value="#[json:MiddleName]"/>
      <db:in-param name="LastName"         type="NVARCHAR" value="#[json:LastName]"/>
      <db:in-param name="EmailAddress"     type="NVARCHAR" value="#[json:EmailAddress]"/>
      <db:in-param name="LoginID"          type="NVARCHAR" value="#[json:LoginID]"/>
   </db:stored-procedure>
   <choice doc:name="EmployeeUpdated?">
      <when expression="#[payload.values().toArray()[0] == 1]">
         <set-payload
            value="#['Employee: ' + flowVars.businessEntityID + ' has been updated.']"
            doc:name="Employee has been updated"/>
      </when>
      <otherwise>
         <set-payload
            value="#['Employee: ' + flowVars.businessEntityID + ' was not found.']"
            doc:name="Employee was not found"/>
      </otherwise>
   </choice>
</flow>

Using Postman, I will test my new endpoint by making a PATCH call and adding the “Content-Type” header with the value “application/json; charset=UTF-8“. I will send the payload below to update the record with EmployeeID = 1:

{
   "FirstName": "Paco",
   "MiddleName": "",
   "LastName": "de la Cruz",
   "LoginID": "adventure-works\\paco",
   "EmailAddress": "paco@adventure-works.com"
}

When I call the endpoint using this URL http://localhost:8081/employee/1, I get the message that the record has been updated. When I check the database, I am now the new CEO of Adventure Works.

When I call the endpoint using this URL http://localhost:8081/employee/0, I get the message that the Employee was not found.

All done with this scenario.

Polling updated records from a SQL Server Database

The last integration scenario is very common, particularly when implementing the Pub-Sub pattern, in which changes in a source system have to be published to one or more subscribers. The good news is that MuleSoft provides polling with watermarks, which comes in very handy and is easy to implement. Below I explain how to do this with a SQL database.

I created the following flow with the steps listed below to implement the polling scenario.


  • Poll Scope: the Poll scope requires changing the Processing Strategy of the flow to “synchronous”.


    The polling scope won't work if you leave the default processing strategy, which is asynchronous. If you get the error below, you know what to do.

    Message : Watermarking requires synchronous polling
    Code : MULE_ERROR-344

    For this scenario I configured the polling to occur once a day. To get only updated records, I am implementing watermarking utilising the ModifiedDate in the payload as shown below.

  • Select Updated Records: Inside the Poll scope, I implemented a database connector to select the updated records as shown in the figure. To get only the updated records, I am filtering for those whose ModifiedDate is greater (later) than the flow variable DateTimeWatermark, the watermark created in the Poll scope.

  • Filter empty payload: To stop the flow when no updated records are obtained, using the following expression:

    #[payload.size() > 0]

  • Convert Object to JSON: to get a JSON payload out of the result set.
  • Logger: just as a way to test the flow.

This is the resulting configuration XML.

<flow name="publishUpdatedEmployees_Flow"
   initialState="started"
   processingStrategy="synchronous">
   <poll
      doc:name="Poll">
      <fixed-frequency-scheduler
         frequency="1"
         timeUnit="DAYS"/>
      <watermark
         variable="DateTimeWatermark"
         default-expression="#[server.dateTime.format(&quot;yyyy-MM-dd'T'HH:mm:ss.SSS'Z'&quot;)]"
         selector="MAX"
         selector-expression="#[payload.ModifiedDate]"/>
      <db:select
         config-ref="MSSQL_AdventureWorks_Configuration"
         doc:name="Select Updated Employees">
         <db:parameterized-query><![CDATA[
SELECT Person.BusinessEntityID
   ,Person.Title
   ,Person.FirstName
   ,Person.MiddleName
   ,Person.LastName
   ,EmailAddress.EmailAddress
   ,Employee.LoginID
   ,Employee.JobTitle
   ,Employee.BirthDate
   ,Employee.HireDate
   ,Employee.Gender
   ,Person.ModifiedDate
FROM Person.Person
JOIN Person.EmailAddress
  ON Person.BusinessEntityID = EmailAddress.BusinessEntityID
JOIN HumanResources.Employee
  ON Person.BusinessEntityID = Employee.BusinessEntityID
WHERE Person.ModifiedDate > CAST(#[flowVars.DateTimeWatermark] AS DATETIME)
ORDER BY BusinessEntityID
         ]]></db:parameterized-query>
      </db:select>
   </poll>
   <expression-filter
      expression="#[payload.size() > 0]"
      doc:name="Filter empty payload"/>
   <json:object-to-json-transformer
      doc:name="Object to JSON"/>
   <logger
      level="INFO"
      doc:name="Logger"
      message="#[payload]"/>
</flow>

To test this scenario, I called the API implemented previously to update data in my SQL database (a PATCH request to http://localhost:8081/employee/{id}) with different IDs, so different employees were updated. Then I ran my solution and the polling picked up only the updated records. As simple and beautiful as that!

In this post I have shown how to prepare your environment and Anypoint Studio to work with Microsoft SQL Server, how to create a Global Configuration Element for a SQL database, how to query and poll changes from a SQL database, and how to update records as well. I hope you now have a better idea of how to connect to MS SQL Server from your MuleSoft solutions. Thanks for reading!

Mule ESB DEV/TEST environments in Microsoft Azure

Agility in delivery of IT services is what cloud computing is all about. Week in, week out, projects on-board and wind-up, developers come and go. This places enormous stress on IT teams with limited resourcing and infrastructure capacity to provision developer and test environments. Leveraging public cloud for integration DEV/TEST environments is not without its challenges though. How do we develop our interfaces in the cloud yet retain connectivity to our on-premises line-of-business systems?

In this post I will demonstrate how we can use Microsoft Azure to run Mule ESB DEV/TEST environments using point-to-site VPNs for connectivity between on-premises DEV resources and our servers in the cloud.

MuleSoft P2S

Connectivity

A point-to-site VPN allows you to securely connect an on-premises server to your Azure Virtual Network (VNET). Point-to-site connections don't require a VPN device. They use the Windows VPN client and must be started manually whenever the on-premises server (point) wishes to connect to the Azure VNET (site). Point-to-site connections use secure socket tunnelling protocol (SSTP) with certificate authentication. They provide a simple, secure connectivity solution without having to involve the networking boffins to stand up expensive hardware devices.

I will not cover the setup of the Azure point-to-site VPN in this post; there are a number of good articles already covering the process in detail, including this great MSDN article.

A summary of steps to create the Point-to-site VPN are as follows:

  1. Create an Azure Virtual Network (I named mine AUEastVNet and used address range 10.0.0.0/8)
  2. Configure the Point-to-site VPN client address range  (I used 172.16.0.0/24)
  3. Create a dynamic routing gateway
  4. Configure certificates (upload root cert to portal, install private key cert on on-premise servers)
  5. Download and install client package from the portal on on-premise servers

Once the point-to-site VPN is established, we can verify connectivity by running ipconfig /all and checking that we have been assigned an IP address from the range configured on our VNET.

IP address assigned from P2S client address range

Testing our Mule ESB Flow using On-premises Resources

In our demo, we want to test the interface we developed in the cloud against on-premises systems, just as we would if our DEV environment were located within our own organisation.

Mule ESB Flow

The flow above listens for HL7 messages using the TCP-based MLLP transport and processes them using two async pipelines. The first pipeline maps the HL7 message into an XML message for a LOB system to consume. The second writes a copy of the received message for auditing purposes.

MLLP connector showing host running in the cloud

The HL7 MLLP connector is configured to listen on port 50609 of the network interface used by the Azure VNET (10.0.1.4).

FILE connector showing on-premise network share location

The first FILE connector is configured to write the output of the XML transformation to a network share on our on-premises server (across the point-to-site VPN). Note the IP address used is the one assigned by the point-to-site VPN connection (from the client IP address range configured on our Azure VNET).

P2S client IP address range

To test our flow we launch an MLLP client application on our on-premises server and establish a connection across the point-to-site VPN to our Mule ESB flow running in the cloud. We then send an HL7 message for processing and verify that we receive an HL7 ACK and that the transformed XML output message has been written to the configured on-premises network share location.

Establishing the connection across the point-to-site VPN…

On-premises MLLP client showing connection to host running in the cloud

Sending the HL7 request and receiving an HL7 ACK response…

MLLP client showing successful response from Mule flow

Verifying the transformed xml message is written to the on-premises network share…

On-premises network share showing successful output of transformed message

Considerations

  • Connectivity – Point-to-site VPNs provide a relatively simple connectivity option that allows traffic between your Azure VNET (site) and your nominated on-premises servers (the point inside your private network). You may already be running workloads in Azure and have a site-to-site VPN or MPLS connection between the Azure VNET and your network, in which case you do not need to establish a point-to-site VPN connection. You can connect up to 128 on-premises servers to your Azure VNET using point-to-site VPNs.
  • DNS – To provide name resolution of servers in Azure to on-premises servers, or of on-premises servers to servers in Azure, you will need to configure your own DNS servers within the Azure VNET. The IP address of on-premises servers will likely change every time you establish the point-to-site VPN, as the IP address is assigned from a range of IP addresses configured on the Azure VNET.
  • Web Proxies – SSTP does not support the use of authenticated web proxies. If your organisation uses a web proxy that requires HTTP authentication then the VPN client will have issues establishing the connection. You may need the network boffins after all to bypass the web proxy for outbound connections to your Azure gateway IP address range.
  • Operating System Support – Point-to-site VPNs only support the use of the Windows VPN client on Windows 7/Windows Server 2008 R2 64-bit versions and above.

Conclusion

In this post I have demonstrated how we can use Microsoft Azure to run a Mule ESB DEV/TEST environment using point-to-site VPNs for simple connectivity between on-premises resources and servers in the cloud. Provisioning integration DEV/TEST environments on demand increases infrastructure agility, removes those long lead times whenever projects kick-off or resources change and enforces a greater level of standardisation across the team which all improve the development lifecycle, even for integration projects!

Connecting Salesforce and SharePoint Online with MuleSoft – Nothing but NET

Often enterprises will choose their integration platform based on the development platform required to build integration solutions. That is, Java shops typically choose Oracle ESB, JBoss, IBM WebSphere or MuleSoft, to name but a few. Microsoft shops have less choice and typically choose to build custom .NET solutions or use Microsoft BizTalk Server. Choosing an integration platform based on the development platform should not be a driving factor and may limit your options.

Your integration platform should be focused on interoperability. It should support common messaging standards, transport protocols, integration patterns and adapters that allow you to connect to a wide range of line of business systems and SaaS applications. Your integration platform should provide frameworks and pattern based templates to reduce implementation costs and improve the quality and robustness of your interfaces.

Your integration platform should allow your developers to use their development platform of choice…no…wait…what!?!

In this post I will walk through integrating Salesforce.com and SharePoint Online using the Java-based Mule ESB platform while writing nothing but .NET code.

MuleSoft .NET Connector

The .NET connector allows developers to use .NET code in their flows enabling you to call existing .NET Framework assemblies, 3rd party .NET assemblies or custom .NET code. Java Native Interface (JNI) is used to communicate between the Java Virtual Machine (JVM), in which your MuleSoft flow is executing, and the Microsoft Common Language Runtime (CLR) where your .NET code is executed.

mule-net-connector---jni_thumb

To demonstrate how we can leverage the MuleSoft .NET Connector in our flows, I have put together a typical SaaS cloud integration scenario.

Our Scenario

  • Customers (Accounts) are entered into Salesforce.com by the Sales team
  • The team use O365 and SharePoint Online to manage customer and partner related documents.
  • When new customers are entered into Salesforce, corresponding document library folders need to be created in SharePoint.
  • Our interface polls Salesforce for changes and needs to create a new document library folder in SharePoint for this customer according to some business rules
  • Our developers prefer to use .NET and Visual Studio to implement the business logic required to determine the target document library based on Account type (Customer or Partner)

Our MuleSoft flow looks something like this:

mule_net_connector_-_flow_thumb5

  • Poll Salesforce for changes based on a watermark timestamp
  • For each change detected:
    • Log and update the last time we sync’d
    • Call our .NET business rules to determine target document library
    • Call our .NET helper class to create the folder in SharePoint

Salesforce Connector

Using the MuleSoft Salesforce Cloud Connector we configure it to point to our Salesforce environment and query for changes to the Accounts entity. Using DataSense, we configure which data items to pull back into our flow to form the payload message we wish to process.

mule_net_connector_-_sf_connector_th

Business Rules

Our business rules are implemented using a bog-standard .NET class library that checks the account type and assigns either “Customers” or “Partners” as the target document library. We then enrich the message payload with this value and return it back to our flow.

public object GetDocumentLibrary(SF_Account account)
{
    var docLib = "Unknown";     // default

    // Check for customer accounts
    if (account.Type.Contains("Customer"))
        docLib = "Customers";

    // Check for partner accounts
    if (account.Type.Contains("Partner"))
        docLib = "Partners";

    return new 
        { 
            Name = account.Name, 
            Id = account.Id, 
            LastModifiedDate = account.LastModifiedDate, 
            Type = account.Type,
            DocLib = docLib 
        };
}

Note: JSON is used to pass non-primitive types between our flow and our .NET class.

So our message payload looks like

{
	"LastModifiedDate":"2014-11-02T11:11:07.000Z",
	"Type":"TechnologyPartner",
	"Id":"00190000016hEhxAAE",
	"type":"Account",
	"Name":"Kloud"
}

and is de-serialised by the .NET connector into our .NET SF_Account object that looks like

public class SF_Account
{
    public DateTime LastModifiedDate;
    public string Type;
    public string Id;
    public string type;
    public string Name;
    public string DocLib;
}

Calling our .NET business rules assembly is a simple matter of configuration.

mule_net_connector_-_business_rules_

SharePoint Online Helper

Now that we have enriched our message payload with the target document library

{
	"LastModifiedDate":"2014-11-02T11:11:07.000Z",
	"Type":"TechnologyPartner",
	"Id":"00190000016hEhxAAE",
	"type":"Account",
	"Name":"Kloud",
	"DocLib":"Partners"
}

we can pass this to our .NET SharePoint client library to connect to SharePoint using our O365 credentials and create the folder in the target document library

public object CreateDocLibFolder(SF_Account account)
{
    using (var context = new Microsoft.SharePoint.Client.ClientContext(url))
    {
        try
        {
            // Provide client credentials
            System.Security.SecureString securePassword = new System.Security.SecureString();
            foreach (char c in password.ToCharArray()) securePassword.AppendChar(c);
            context.Credentials = new Microsoft.SharePoint.Client.SharePointOnlineCredentials(username, securePassword);

            // Get library
            var web = context.Web;
            var list = web.Lists.GetByTitle(account.DocLib);
            var folder = list.RootFolder;
            context.Load(folder);
            context.ExecuteQuery();

            // Create folder
            folder = folder.Folders.Add(account.Name);
            context.ExecuteQuery();
        }
        catch (Exception ex)
        {
            System.Diagnostics.Debug.WriteLine(ex.ToString());
        }
    }

    // Return payload to the flow 
    return new { Name = account.Name, Id = account.Id, LastModifiedDate = account.LastModifiedDate, Type = account.Type, DocLib = account.DocLib, Site = string.Format("{0}/{1}", url, account.DocLib) };
}

Calling our helper is the same as for our business rules

mule_net_connector_-_sharepoint_help

Configuration

Configuring MuleSoft to know where to load our .NET assemblies from is best done using global configuration references.

mule_net_connector_-_global_elements[2]

We have three options to reference our assembly:

  1. Local file path – suitable for development scenarios.
  2. Global Assembly Cache (GAC) – suitable for shared or .NET framework assemblies that are known to exist on the deployment server.
  3. Packaged – suitable for custom and 3rd party .NET assemblies that get packaged with your MuleSoft project and deployed together.

mule_net_connector_-_global_elements[1]

Deployment

With our flow completed, coding in nothing but .NET, we are good to test and deploy our package to our Mule ESB integration server. At the time of writing, CloudHub does not support the .NET Connector, but this should be available in the not too distant future. To test my flow I simply spin up an instance on the development server and watch the magic happen.

We enter an Account in Salesforce with Type of “Customer – Direct”…

mule_net_connector_-_sf_customerng_t

and we see a new folder in our “Customers” document library for that Account name in a matter of seconds.

mule_net_connector_-_sharepoint_fold[1]

SLAM DUNK!… Nothing but NET.

Conclusion

Integration is all about interoperability and not just at runtime. It should be a core capability of our Integration framework. In this post we saw how we can increase our Integration capability without the need to sacrifice our development platform of choice by using MuleSoft and the MuleSoft .NET Connector.