Is Azure Functions over Web API Beneficial?

Whenever I meet clients or give a talk about Azure Functions, people are immediately interested in replacing their existing Web API features with Azure Functions. In this post, I’d like to discuss:

  • Can Azure Functions replace Web API?
  • Is it worth doing?

It would be a good idea to have a read through this article, Serverless Architectures, before starting.

HTTP Trigger Function == Web API Action

One of the characteristics of Serverless Architecture is that it is “event-driven”. In other words, all functions written in Azure Functions are triggered by events. Those events of course include HTTP requests. From this HTTP request point of view, an HTTP trigger function and a Web API action work in exactly the same way. Let’s compare the two side by side:
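The original code comparison isn’t reproduced here, so below is a minimal sketch of the pair; the controller name, function name and messages are illustrative only, assuming the classic C# (run.csx style) HTTP trigger signature:

// Web API action
public class SampleController : ApiController
{
    public HttpResponseMessage Get()
    {
        // Take the HTTP request, process it and return a response.
        return Request.CreateResponse(HttpStatusCode.OK, "Hello from Web API");
    }
}

// Azure Functions HTTP trigger (run.csx style)
public static HttpResponseMessage Run(HttpRequestMessage req, TraceWriter log)
{
    // Take the HTTP request, process it and return a response.
    return req.CreateResponse(HttpStatusCode.OK, "Hello from Azure Functions");
}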

What do they look like? They look pretty similar to each other. Both take an HTTP request, process it and return a response. Therefore, it seems that, with minor modifications, Web API code could easily be migrated to Azure Functions.

HTTP Trigger Function != Web API Action

However, life is not easy. There are some major differences we should know before migration:

Functions are always static methods

Even though Azure Functions is an extension of Azure WebJobs, each function has the static modifier by design, whereas an Azure WebJob can be written without it.

Web API actions, on the other hand, don’t have the static modifier. This results in a significant architectural change during migration, especially around dependency injection (DI). We will touch on this later.

Functions always receive an HttpRequestMessage instance as a parameter

Within the HTTP request/response pipeline, a Web API controller internally creates an HttpContext instance to handle data like headers, cookies, sessions, query strings and the request body (of course, query strings and the request body can be handled in other ways). The HttpContext instance works as an internal property, so any action can access it directly. As a result, each action only takes the details it needs as parameters.

On the other hand, each function goes through a different HTTP request/response pipeline from Web API, which passes an HttpRequestMessage instance to the function as a parameter. The HttpRequestMessage instance only handles headers, query strings and the request body; it doesn’t look after cookies or sessions. This is the huge difference between Web API and Azure Functions in terms of statelessness.

Functions define HTTP verbs and routes in function.json

In Web API, we put attributes like HttpGet, HttpPost, HttpPut, HttpPatch and HttpDelete on each action, combined with the Route attribute, to declare which HTTP verbs map to which action.

On the other hand, each function declares its HTTP verbs and route in function.json. With this definition, different functions sharing the same route URI can handle requests based on the HTTP verb.
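For illustration, a minimal function.json sketch for an HTTP trigger might look like the following; the route and verb values are just examples:

{
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": [ "get" ],
      "route": "products/{id}",
      "authLevel": "function"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}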

Functions define base endpoint URI in host.json

Apart from the host part of the URI, e.g. https://api.myservice.com, the base URI is usually defined at the controller level of a Web API by adding the Route attribute. This is dead simple.

However, as there’s no controller in Azure Functions, the base endpoint is defined in host.json. The default value is api, but we can remove it or redefine it by modifying host.json.
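For example, a host.json sketch that changes this might look like the snippet below (assuming the v1 schema); setting routePrefix to an empty string removes the api prefix entirely, or you can set it to any other value:

{
  "http": {
    "routePrefix": ""
  }
}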

While function.json can be managed at the function level through the GUI or an editor, it’s unfortunately not possible to edit host.json directly in the function app. There is, however, a workaround: host.json can be modified using the Azure App Service Editor.

Functions should consider service locator pattern for dependency management

There are many good IoC container libraries for managing dependencies in Web API. However, as discussed in my previous post, Managing Dependencies in Azure Functions, the Service Locator pattern should be considered for DI in Azure Functions; in fact, this is currently the only way to deal with dependencies. This is because every Azure Function has the static modifier, which prevents us from injecting dependencies the way we would in Web API.
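This is not the exact code from that post, but a rough sketch of what the Service Locator approach can look like inside a static function; ServiceLocator and IMyService are placeholder names, not part of the Functions runtime:

public static class MyHttpTriggerFunction
{
    // The locator is created once and shared, since the function itself must be static.
    private static readonly ServiceLocator Locator = new ServiceLocator();

    public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
    {
        // Resolve dependencies through the locator instead of constructor injection.
        var service = Locator.GetService<IMyService>();

        var result = await service.ProcessAsync(req);
        return req.CreateResponse(HttpStatusCode.OK, result);
    }
}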

We know there are differing opinions about the Service Locator pattern in Azure Functions, but that is beyond the scope of this post, so we will discuss it in another post.

Is Azure Functions over Web API Beneficial?

So far, we have discussed what is the same and what is different between Web API and Azure Functions HTTP triggers. Back to the initial question: is it really worth migrating a Web API to Azure Functions? Which of the scenarios below does your situation fall under?

  • My Web API is designed for a microservices architecture: Then it’s good to go for migration to Azure Functions.
  • My Web API takes a long time to respond: Then consider Azure Functions on a spare instance in an App Service Plan, because it costs nothing more. A Consumption Plan (or Dynamic Service Plan) would cost too much in this case.
  • My Web API is refactored to use queues: Then calculate the price carefully – not only the price of Azure Functions but also the price of Azure Service Bus Queue/Topic and Azure Storage Queue. In addition, check the number of executions, as each Web API call is refactored into one HTTP trigger function plus at least one queue trigger function (at least two executions in total). Based on those calculations, we can decide whether to stay or move.
  • My Web API needs a significant amount of effort to refactor: Then it’s better to stay until it has been restructured and is suitable for a microservices architecture.
  • My Web API is written in ASP.NET Core: Then stay there; don’t even think about migration until Azure Functions supports ASP.NET Core.

To sum up, unless your Web API requires a significant amount of refactoring or is written in ASP.NET Core, it is certainly worth considering a migration to Azure Functions. It is an easier-to-use and more cost-effective option for your Web API.


Leveraging the PowerBI Beta API for creating PowerBI Tables with Relationships via PowerShell

If anyone actually reads my posts you will have noticed that I’ve been on a bit of a deep dive into PowerBI and how I can use it to provide visualisation of data from Microsoft Identity Manager (here via CSV, and here via API). One point I noticed when going direct to PowerBI via the API (v1.0), though, was that it is not possible to define relationships (joins) between tables within datasets (you can via PowerBI Desktop). After a lot of digging and searching I’ve worked out how to define the relationships between tables in a dataset via the API and PowerShell. This post is about that.

Overview

In order to define relationships between tables in a dataset (via the API), the key point to note is:

  • You’ll need to modify the PowerBIPS PowerShell Module to leverage the Beta API
    • see the screenshot in the “How to” section

Prerequisites

To use PowerBI via PowerShell and the PowerBI API you’ll need to get:

How to leverage the PowerBI Beta API

In addition to the prerequisites above, in order to use the PowerShell PowerBI module against the PowerBI Beta API, I simply changed the beta flag to $True in the PowerBI PowerShell (PowerBIPS.psm1) module so that all calls go to the Beta API. See the screenshot below. The module will probably be located somewhere like ‘C:\Program Files\WindowsPowerShell\Modules\PowerBIPS’.

Example Dataset Creation

The sample script below will create a new dataset in PowerBI with two tables. Each table holds “Internet of Things” environmental data: one for data from a Seeed WioLink IoT device located outside, and one for a device inside. Both tables contain a DateTime column; the timestamp is captured before the sensors are read, and each sensor reading is then added to its respective table referencing that DateTime.

That is where the ‘relationships‘ part of the schema comes in. The options for the direction of the relationship filter are OneDirection (default), BothDirections and Automatic. For this simple example I’m using OneDirection on DateTime.
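For reference, here is a cut-down sketch of the dataset schema JSON with a relationships section, roughly in the shape the beta datasets endpoint accepts; the table, column and relationship names (and exact dataType strings) are illustrative and may need adjusting for your environment:

{
  "name": "IoTEnvironment",
  "tables": [
    {
      "name": "OutsideReadings",
      "columns": [
        { "name": "DateTime", "dataType": "DateTime" },
        { "name": "Temperature", "dataType": "Double" }
      ]
    },
    {
      "name": "InsideReadings",
      "columns": [
        { "name": "DateTime", "dataType": "DateTime" },
        { "name": "Temperature", "dataType": "Double" }
      ]
    }
  ],
  "relationships": [
    {
      "name": "OutsideToInside",
      "fromTable": "OutsideReadings",
      "fromColumn": "DateTime",
      "toTable": "InsideReadings",
      "toColumn": "DateTime",
      "crossFilteringBehavior": "OneDirection"
    }
  ]
}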

To put some data into the tables, here is my simple script. I only have a single IoT unit hooked up, so I’m fudging the more constant Indoor Readings. The Outside readings are coming from a Seeed WioLink unit.

Visualisation

So there we have it. Something I thought would have been part of the minimum viable product (MVP) release of the API is available in the Beta and using PowerShell we can define relationships between tables in a dataset and use that in our visualisation.

 

Follow Darren on Twitter @darrenjrobinson

Connecting to and Using the Azure MFA Web Service SDK Server SOAP API with Powershell

Background

A colleague and I are validating a number of scenarios for a customer who is looking to deploy Azure MFA Server. One of the requirements from an Identity Management perspective is the ability to interact with the MFA Server for user information. That led us on the exploration of what was possible and how best to approach it.

The title of this post has pretty much given away the how. But why? As Azure MFA Server is a product Microsoft acquired through the acquisition of PhoneFactor, the usual methods of interacting with applications and services in the Microsoft stack don’t apply. There is practically no information on how to use PowerShell to interact with Azure MFA Server, so this blog post details what we’ve learned and how we have been able to get information out of Azure MFA Server using PowerShell.

This post covers;

  • Connecting to the Azure MFA Web Service SDK
  • Searching for users in the MFA Database
  • Returning information about users in the MFA Database
  • Making a test call to a user’s phone via the MFA Server

Prerequisites

There are a number of prerequisites that I’m not covering here as you can quickly locate many guides to installing/configuring Azure MFA Server.

  • You’ll need to have an Azure MFA environment (obviously)
  • Download the Azure MFA Web Service SDK
  • Install and Configure the Azure MFA Web Service SDK
  • If you aren’t using a public SSL certificate on the Azure MFA Web Service SDK Server, you will need to export the certificate from that server and import it into the Trusted Root Certificate Store on the workstation you’ll be using PowerShell on to connect to the MFA environment.

 

Connecting to the Azure MFA Web Service SDK

Now that you’ve met the prerequisites listed above, you can use PowerShell to connect to the API. The URL is the DNS name of the Azure MFA Web Service SDK Server followed by the SDK SOAP endpoint, e.g. https://mfa.yourdomain.com.au/MultiFactorAuthWebServiceSdk/PfWsSdk.asmx?WSDL

Try the URL in your browser and authenticate to the Azure MFA Web Service SDK Server using an account that exists in the MFA Server. If it is set up correctly (including your SSL certificate), you will see the following.

 

The simple script below will do the same thing, but via PowerShell. Update it with your domain, username, password and the URL of your MFA Web Service SDK Server.
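As the original script isn’t reproduced here, the sketch below shows the general approach using New-WebServiceProxy; the credentials and URL are placeholders:

# Credentials of an account that exists in the MFA Server
$username = "yourdomain\yourusername"
$password = ConvertTo-SecureString "yourpassword" -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential ($username, $password)

# URL of the MFA Web Service SDK SOAP endpoint
$url = "https://mfa.yourdomain.com.au/MultiFactorAuthWebServiceSdk/PfWsSdk.asmx?WSDL"

# Create a proxy object for the SOAP API
$mfaSdk = New-WebServiceProxy -Uri $url -Credential $credential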

Searching for users in the MFA Database

Now that we’ve set up a web service proxy connection to the MFA Web Service SOAP API endpoint, we can start getting some info out of it. Searching for users uses the ‘FindUsers_4’ call. It has many parameters that can be set to alter the results. I’ve kept it simple here and used ‘*’ as the criteria to return all users in the MFA database. Alter it for your purposes.

Returning information about users in the MFA Database

Using slightly different criteria to the search criteria above, I returned one entry and set the $mfauser variable to it. I then use that in the GetPhone, GetUserSettings & GetUserDevices calls, as shown below, to retrieve all the info about that user.

Making a test call to a user’s phone via the MFA Server

Finally, rather than just consuming information from the MFA environment, let’s actually trigger something. Selecting an identity from our test environment that had the mobile phone number of a colleague associated with it, I triggered MFA Server to call them to authorise their session (which they obviously hadn’t requested). I may have done this a few times from the other side of the room, watching with amusement as their phone rang requesting authentication approval 🙂

Full script snippets below.

Hope that helps someone else.

Follow Darren on Twitter @darrenjrobinson

 

 


Moving SharePoint Online workflow task metadata into the data warehouse using Nintex Flows and custom Web API

This post suggests an approach for automatically copying SharePoint Online (SPO) workflow task metadata into an external data warehouse. In this scenario, the workflow tasks become the subject of another workflow that automatically copies each task’s data into the external database, using a custom Web API endpoint as the interface to that database. Commonly, the requirement to move workflow task data elsewhere arises from the limitations of SPO. In particular, SPO throttles requests for access to workflow data, making it virtually impossible to create a meaningful workflow reporting system with large numbers of workflow tasks. The easiest approach to solving the problem is to use a Nintex workflow to “listen” to changes in the workflow tasks, then request the task data via the SPO REST API and, finally, send the data to the external data warehouse’s Web API endpoint.

Some SPO solutions require the creation of a reporting system that includes workflow task metadata. For example, it could be a report about documents with the statuses of the workflows linked to those documents. Using a conventional approach (e.g. the SPO REST API) to obtain the data is unfeasible, as SPO throttles requests for workflow data. In fact, the throttling is so tight that generating reports with more than a hundred records is unrealistic. In addition, many companies would like to create Business Intelligence (BI) systems analysing workflow task data. Having a data warehouse with all the workflow task metadata can assist greatly with this.

To implement the solution, a few prerequisites must be met. You must know the basics of Nintex workflow creation and be able to create a backend solution with the database of your choice and a custom Web API endpoint that allows you to write the data model to that database. In this post we have used Visual Studio 2015 and created an ordinary REST Web API 2.0 project with an Azure SQL Database.

The solution will involve following steps:

  1. Get sample of your workflow task metadata and create your data model.
  2. Create a Web API capable of writing data model to the database.
  3. Expose one POST endpoint method of the Web REST API that accepts JSON model of the workflow task metadata.
  4. Create Nintex workflow in the SPO list storing your workflow tasks.
  5. Design Nintex workflow: call SPO REST API to get JSON metadata and pass this JSON object to your Web API hook.

Below is a detailed description of each step.

We are looking to export the metadata of a workflow task. We need to find the SPO list that holds all your workflow tasks and navigate there. You will need the name of the list to be able to start calling the SPO REST API. It is better to use a REST tool to perform Web API requests; many people use Fiddler or Postman (a Chrome extension) for this job. Request the SPO REST API to get a sample of the JSON data that you want to put into your database. The request will look similar to this example:
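In text form, the request might be something like the following (a hypothetical tenant and a list called ‘Workflow Tasks’ are assumed):

GET https://yourtenant.sharepoint.com/sites/yoursite/_api/web/lists/getbytitle('Workflow Tasks')/items
Accept: application/json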

Picture 1

The key element in this request is getbytitle(“list name”), where “list name” is the SPO list name of your workflow tasks. Please remember to add the header “Accept” with the value “application/json”. It tells SPO to return JSON instead of HTML. As a result, you will get one JSON object that contains the JSON metadata of Task 1. This JSON object is an example of the data that you will need to put into your database. Not all fields are required in the data warehouse, so we need to create a data model containing only the fields of our choice. For example, it can look like the one below in C#, with all properties based on the model returned earlier:
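For illustration, a minimal sketch of such a data model might look like this; the class name and the chosen fields are assumptions, so pick the properties that actually exist in your task list:

public class WorkflowTaskModel
{
    public int Id { get; set; }
    public string Title { get; set; }
    public string WorkflowStatus { get; set; }
    public string AssignedTo { get; set; }
    public DateTime Modified { get; set; }
}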

The next step is to create a Web API that exposes a single method that accepts our model as a parameter from the body of the request. You can choose any REST Web API design. We have created a simple Web API 2.0 in Visual Studio 2015 using the general wizard for an MVC, Web API 2.0 project. Then we added an empty controller and filled it with code that works with Entity Framework to write the data model to the database. We have also created a code-first EF database context that works with just the one entity described above.

The code of the controller:
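The original controller code isn’t shown here; a minimal sketch, assuming the WorkflowTaskModel above, a WorkflowTasksContext EF context and a controller name derived from the endpoint URL shown later, could look like this:

public class EmployeeFileReviewTasksWebHookController : ApiController
{
    // POST api/EmployeeFileReviewTasksWebHook
    public async Task<IHttpActionResult> Post([FromBody] WorkflowTaskModel model)
    {
        if (model == null)
        {
            return BadRequest("No workflow task metadata supplied.");
        }

        using (var db = new WorkflowTasksContext())
        {
            // Write the incoming task metadata to the data warehouse.
            db.WorkflowTasks.Add(model);
            await db.SaveChangesAsync();
        }

        return Ok();
    }
}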

The code of the database context for Entity Framework:
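Again as a sketch, a code-first context with the single entity might be as simple as the following; the connection string name is an assumption:

public class WorkflowTasksContext : DbContext
{
    public WorkflowTasksContext() : base("name=WorkflowTasksDb")
    {
    }

    public DbSet<WorkflowTaskModel> WorkflowTasks { get; set; }
}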

Once you have created the Web API, you should be able to call the Web API method like this:
https://yoursite.azurewebsites.net/api/EmployeeFileReviewTasksWebHook

You will need to put your model data in the request body as a JSON object. Also, don’t forget to include the proper headers for your authentication, include the header “Accept” with “application/json”, and set the type of the request to POST. Once you’ve tested the method, you can move on to the next steps. For example, below is how we tested it in our project.

Picture 4

Next, we will create a new Nintex workflow in the SPO list with our workflow tasks. It is all straightforward: click Nintex Workflows, then create a new workflow and start designing it.

Picture 5

Picture 6

Once you’ve created a new workflow, click on the Workflow Settings button. In the displayed form, set the parameters as shown on the screenshot below. We set “Start when items are created” and “Start when items are modified”. In this scenario, any modification of our workflow task will start this workflow automatically. This also includes cases where the workflow task has been modified by other workflows.

Picture 7.1

Create 5 steps in this workflow as shown on the following screenshots, labelled 1 to 5. Please keep in mind that blocks 3 and 5 are there only to assist with debugging and are not required in production use.

Picture 7

Step 1. Create a Dictionary variable that contains the SPO REST API request headers. You can add any required headers, including authentication headers. It is essential to include the Accept header with “application/json” in it, to tell SPO that we want JSON in the responses. We set the output variable to SPListRequestHeaders so we can use it later.

Picture 8

Step 2. Call HTTP Web Service. We call the SPO REST API here. It is important to make sure that the getbytitle parameter is correctly set to your workflow tasks list, as discussed before. The list of fields that we want returned is defined in the “$select=…” parameter of the OData request; we only need the fields that are included in our data model. The other settings are straightforward: we supply the request headers created in Step 1 and create two more variables for the response. SPListResponseContent will hold the resulting JSON object that we are going to need in Step 4.

Picture 9

Step 3 is optional. We’ve added it to debug our workflow. It will send an email with the contents of our JSON response from the previous step, showing us what was returned by the SPO REST API.

Picture 10

Step 4. Here we call our custom API endpoint, passing the JSON object model that we got from the SPO REST API. We supply the full URL of our web hook, set the method to POST and, in the request body, inject SPListResponseContent from Step 2. We’re also capturing the response code to display later in the workflow history.

Picture 11

Step 5 is also optional. It writes a log message with the response code that we have received from our API Endpoint.

Picture 12

Once all five steps are completed, we publish this Nintex workflow. Now we are ready for testing.

To test the system, open the list of our workflow tasks. Click on any task, modify any of the task’s properties and save the task. This will initiate our workflow automatically. You can monitor workflow execution in the workflow history. Once the workflow is completed, you should see messages as displayed below. Notice that our workflow has also written the Web API response code at the end.

Picture 13

To make sure that everything went well, open your database and check the records updated by your Web API. After every Workflow Task modification you will see corresponding changes in the database. For example:

Picture 14

 

In this post we have shown that automatic copying of workflow task metadata into your data warehouse can be done with a simple Nintex workflow setup and only two REST Web API requests. The solution is quite flexible, as you can select the required properties from the SPO list and export them into the data warehouse. We can easily add more tables if there is more than one workflow tasks list. This solution enables the creation of a powerful reporting system using the data warehouse and also allows you to employ the BI data analytics tool of your choice.

Implementing Application with Office 365 Graph API in App-only Mode

Microsoft has recently released Microsoft Graph to easily integrate Office 365 resources with applications. The Graph API basically provides a single endpoint through which to call a bunch of Web APIs to access Office 365 resources.

In order to use the Graph API from another application, the application must first be registered in Azure Active Directory (AAD). When the application is registered, we can choose how it is permitted to use resources – application permissions or delegated permissions. The latter typically requires users to provide credentials, such as a username and password, to get a proper access token. However, there are scenarios where the application shouldn’t require user credentials but should access Office 365 resources directly; this is called app-only mode. For example, a back-end Web API application doesn’t require user credentials to communicate with the Graph API. There are many articles on the Internet dealing with user credentials, but not many covering this app-only mode. In this post, I will walk through how to register an application in app-only mode and consume the Graph API through the registered application.

The sample code used here can be found here. This sample application is built with ASP.NET 5 and ASP.NET MVC 6. Therefore, in order to get the best development experience, I would recommend using Visual Studio 2015.

Registering Application

OK, first things first. As the sample application uses AAD and the Graph API, it’s mandatory to register this app in your AAD first. Please follow the steps below.

Create User Account on AAD

You can skip this step if you already have a user account on AAD.
NOTE: Microsoft Account won’t work with this sample app.

Create a user account in the AAD tenant. Once created, the user account should be made a co-administrator of the Azure subscription that is currently being used.

Register Application to AAD

  • Create a new application.

  • Choose the "Add an application my organization is developing" option.

  • Enter the application name like "Graph API App-only Sample" and select the "Web Application and/or Web API" option.

  • Enter "https://(tenant-name)/GraphApiAppOnlySample" for both fields. (tenant-name) should look like contoso.onmicrosoft.com. Please note that neither value will actually be used.

Now the app has been registered.

Configure Application

  • Once the app is registered, click the configure tab.

  • Get the Client ID.

  • Get the secret key. Note that the key is only displayed once, after clicking the Save button at the bottom.

  • Add another application called Microsoft Graph

  • Give "Read directory data" permission to Microsoft Graph, as only this permission is necessary to run the sample app.

ATTENTION: In a production environment, only the appropriate application permissions MUST be granted, to avoid any security breach.

The app has been configured.

Update Settings in Sample Application

As the app has been registered and configured, the sample Web API app should now be set up with the appropriate settings. Firstly, open appsettings.json.

Then change the following values (a sample sketch follows the list):

  • Tenant: contoso.onmicrosoft.com to your tenant name.
  • ClientId: Client ID from the app.
  • ClientSecret: Secret key from the app.
  • AppId: contoso.onmicrosoft.com to your tenant name.
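For reference, here is a sketch of what appsettings.json might contain; the key names follow the list above, while the surrounding structure and values are placeholders:

{
  "GraphApp": {
    "Tenant": "contoso.onmicrosoft.com",
    "ClientId": "00000000-0000-0000-0000-000000000000",
    "ClientSecret": "your-client-secret-key",
    "AppId": "contoso.onmicrosoft.com"
  }
}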

Trust IIS or IIS Express with a Self-signed Certificate

  • You can skip this step, if you intend to publish this app to Azure.
  • You can skip this step, if you already have a self-signed certificate on your root certificate storage.

As all communications with AAD and the Graph API are performed through a secure channel (SSL/TLS), this sample app MUST be served with a certificate that chains to a trusted root. However, since this is a developer’s local environment, a self-signed certificate should be issued and stored as a root certificate. If you don’t store the self-signed certificate, you’ll see the following message pop up when you run VS2015.

The following steps show how to register self-signed certificate to the root certificate store using PowerShell.

Check Self-signed Certificate

First, check if you have a self-signed certificate in your personal certificate store.

If there’s no certificate with the name CN=localhost, you should create one using makecert.exe. The easiest way to run makecert.exe is from the Developer Command Prompt for VS2015. The options used are listed below, followed by a sample command.

  • -r: Create a self-signed certificate.
  • -pe: Mark the generated private key as exportable.
  • -n: Certificate subject X509 name. eg) -n "CN=localhost"
  • -b: Start of the validity period in mm/dd/yyyy format; defaults to now.
  • -e: End of the validity period in mm/dd/yyyy format; defaults to 2039.
  • -ss: Subject’s certificate store name that stores the output certificate. eg) -ss My
  • -len: Generated key length (bits). Defaults to 2048 for ‘RSA’ and 512 for ‘DSS’.
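Putting those options together, a command along the following lines (run from the Developer Command Prompt) should create the certificate; the dates are only examples:

makecert.exe -r -pe -n "CN=localhost" -b 01/01/2016 -e 12/31/2039 -ss My -len 2048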

Store Self-signed Certificate to Root Store

Second, store the self-signed certificate to the root store.

Finally, you can verify the self-signed certificate has been stored into the root store.

Build and Run Sample Web API Application

All the setup has been completed! Now, build the solution in Visual Studio 2015 and hit the F5 key to get into Debug mode. You’ll then get a JSON response with your tenant organisation details. How does it work? Let’s look into the code.

Get Access Token from AAD Using ADAL

In order to get an access token from AAD, we need to set up an AuthenticationContext instance and call a method to get one. The code snippet below is an extract from the sample application.

The graphApp instance is created from appsettings.json and injected into the controller. With this instance, both the AuthenticationContext instance and the ClientCredential instance are created. Note that the ClientCredential instance only uses clientId and clientSecret, not user credentials. In other words, this sample application won’t require any user to log in and provide their login details. Let’s see the actual Get() method.

This is the part that gets the access token. The AuthenticationContext instance sends a request for the Graph API with the ClientCredential instance containing only clientId and clientSecret. If authentication fails, the Get() action will return the exception details in JSON format. The authResult instance now contains the access token.
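The shape of that code, sketched with ADAL’s client credential flow (the authority, resource URL and the _graphApp property names are assumptions based on the settings above):

var authority = $"https://login.microsoftonline.com/{_graphApp.Tenant}";
var authContext = new AuthenticationContext(authority);
var credential = new ClientCredential(_graphApp.ClientId, _graphApp.ClientSecret);

// No user credentials involved - only the app's client ID and secret.
var authResult = await authContext.AcquireTokenAsync("https://graph.microsoft.com", credential);
var accessToken = authResult.AccessToken;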

Call Graph API with Access Token

Now we have the access token. In the same action method, let’s see how it is consumed to call the Graph API.

The access token is set in the request header. In this sample code, we request the organisation details through the Graph API. If you use Fiddler, send a request to the application and you will see the full response details.
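A sketch of that call, assuming the accessToken from the previous step and the organization endpoint of Microsoft Graph:

using (var httpClient = new HttpClient())
{
    // Attach the app-only access token as a Bearer token.
    httpClient.DefaultRequestHeaders.Authorization =
        new AuthenticationHeaderValue("Bearer", accessToken);

    // Request the tenant organisation details from Microsoft Graph.
    var json = await httpClient.GetStringAsync("https://graph.microsoft.com/v1.0/organization");
    return Content(json, "application/json");
}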

So far, we have had a brief look at how to implement a simple Web API application in app-only mode to consume the Graph API. As stated above, this sample application is not for production use unless proper permissions are given. Once all the appropriate permissions are set up, you can easily customise this app for your integration purposes. I hope this helps your development.

Creating a simple nodejs API on AWS (including nginx)

On a recent project I was part of a team developing an AngularJS website with a C# ASP.NET backend API hosted in Azure. It was a great project as I got to work with a bunch of new tools, but it got me wondering how simple it would be to use a JavaScript API instead. That way the entire development stack would be written in JavaScript.

And so a personal project was born: to create a simple JS API and get it running in the cloud.

Getting started

So here goes, a quick guide on setting up a nodejs API using the express framework.

I’ll start by getting the environment running locally on my mac in 6 simple steps:

# 1. Create a directory for your application
$ mkdir [your_api_name]

# 2. Install Express (the -g will install it globally)
$ npm install express -g

# 3. Use the express generator as it makes life easier!
$ npm install express-generator -g

# 4. Create your project
$ express [your_project_name_here]

# 5. Install any missing dependencies
$ npm install

# 6. Start your API
$ npm start

That’s it. You now have a nodejs API running locally on your development environment!

To test it, and prove to yourself it’s working fine, run the following curl command:

$ curl http://localhost:3000/users

If everything worked as planned, you should see “respond with a resource” printed in your terminal window. Now this is clearly as basic as it gets, but you can easily make it [a bit] more interesting by adding a new file called myquickapi.js to your [app name]/routes/ folder, with the following content:

var express = require('express');
var router = express.Router();

// get method route
router.get('/', function (req, res) {
  res.send('You sent a get request');
});

// post method route
router.post('/', function (req, res) {
  res.send('You sent me ' + req.param('data'));
});

module.exports = router;

A quick change to the app.js file to update our routes, by adding the following 2 lines:

var myquickapi = require('./routes/myquickapi');
app.use('/myquickapi', myquickapi);

Re-start your node service, and run:


$ curl -X POST http://localhost:3000/myquickapi?data=boo

And you will see the API handle the request parameter and echo it back to the caller.

Spin up an AWS EC2 instance

Log into the AWS portal and create a new EC2 instance. For my project, as it is only a dev environment, I have gone for a General Purpose t2.micro Ubuntu Server. Plus it’s free, which happens to be okay too!

Once the instance is up and running you will want to configure the security group to allow all inbound traffic on ports 443 and 80 – after all, it is a web API and I guess you want to access it! I also enabled SSH for my local network:

Security_group

Using your .pem file, SSH into your new instance and, once connected, run the following commands:


# 1. update any out of date packages
sudo apt-get update

# 2. install nodejs
sudo apt-get install nodejs

# 3. install node package manager
sudo apt-get install npm

Now you can run node using the nodejs command. This is great, but not for the JS packages we’ll be using later on – they reference the node command instead. A simple fix is to create a symlink to the nodejs command:


$ sudo ln -s /usr/bin/nodejs /usr/bin/node

Set up nginx on your EC2 instance

We’ll use nginx on our server to proxy network traffic to the running nodejs instance.  Install nginx using the following commands:


# 1. install nginx

$ sudo apt-get install nginx

# 2. make a directory for your sites
$ sudo mkdir -p /var/www/express-api/public

# 3. set the permission of the folder so it is accessible publicly
$ sudo chmod 755 /var/www

# 4. remove the default nginx block
$ sudo rm /etc/nginx/sites-enabled/default

# 5. create the virtual hosts file
$ sudo nano /etc/nginx/sites-available/[your_api_name]

Now copy the following content into your virtual hosts file and save it:

upstream app_nodejs {
  server 127.0.0.1:3000;
  keepalive 8;
}

server {
  listen 80;
  listen [::]:80 default_server ipv6only=on;
  listen 443 default ssl;

  root /var/www/[your_site_folder]/public/[your_site_name];
  index index.html index.htm;

  # Make site accessible from http://localhost/
  server_name [your_server_domain_name];

  location / {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-NginX-Proxy true;

    proxy_pass http://localhost:3000/;
    proxy_redirect off;
  }
}

This basically tells your server to listen on ports 80 and 443 and redirect any incoming traffic to your locally running node server on port 3000. A simple approach for now, but all that is needed to get our API up and running.

Activate your newly created hosts file by running the following command:

$ sudo ln -s /etc/nginx/sites-available/[your_api_name] /etc/nginx/sites-enabled/[your_api_name]

Now restart nginx to make your settings take effect:

$ sudo service nginx restart

As a sanity test you can run the following command to confirm everything is set up correctly. If you are in good shape you will see a confirmation that the test has run successfully.

$ sudo nginx -c /etc/nginx/nginx.conf -t

Final steps

The last step in the process, which you could argue is the most important, is to copy your API code onto your new web server. If you are creating this for a production system then I would encourage using a deployment tool at this stage (and, to be frank, probably a different approach altogether), but for now a simple secure copy is probably all that’s required:


$ scp -r [your_api_directory] your_username@aws_ec2_api:/var/www/[your_site_folder]/public/

And that’s it. Fire up a terminal, or your browser, and try running the same requests against your EC2 instance rather than your local machine. If all has gone well you should get the same response as you did with your local environment (and it’ll be lightning fast).

… Just for fun

If you disconnect the SSH connection to your server, it will stop the application from running. That’s a fairly big problem for a web API, but a simple one to fix.

A quick solution is to use the Forever tool.

Install it, and run your app (you’ll be glad you added the symlink to nodejs earlier):


$ sudo npm install -g forever

$ sudo forever start /var/www/[your_site_folder]/public/[your_site_name]/bin/www

 

Hopefully this has provided a good insight into setting up a nodejs API on AWS. At the moment it is fairly basic but, time permitting, I would like to build on the API and add additional features to make it more usable – I’d particularly like to add a JavaScript OAuth 2.0 endpoint!

Watch this space for future updates, as I add new features I will be sure to blog about the learnings I find along the way.

As always, if you have any questions just reach out to me, or post them below.

Building Applications with Event Sourcing and CQRS Pattern

When we start building an application in the cloud, like Azure, we should consider many factors, including flexibility, scalability, performance and so on. In order to satisfy those factors, the components making up the application should be loosely coupled and ready for extension and change at any time. With those considerations in mind, Microsoft has introduced 24 cloud design patterns. Even though they are called “Cloud Design Patterns”, they can be used in any application development. In this post, I’m going to introduce the Event Sourcing pattern and the CQRS pattern, and how they can be used in a single page application (SPA) such as an AngularJS application.

The complete code sample can be found here.

Patterns Overview

I’m not going into too much detail here explaining what the Event Sourcing (ES) pattern and the CQRS pattern are. According to the articles linked above, ES and CQRS go along with each other easily. As the name itself says, CQRS separates commands from queries – commands and queries use different datasets, while ES provides an event stream as the data store (commands), plus materialisation and replaying (queries). Let’s take a look at the diagram below.


[Image from: https://msdn.microsoft.com/en-us/library/dn589792.aspx]

This explains how ES and CQRS work together. Any individual input (or behaviour) from a user on the presentation layer (the Angular app in this post) is captured as an event and stored in the event stream with a timestamp. This storing action is append-only, i.e. events can only be added. Therefore, the event stream becomes the source of truth, and all events captured and stored in the event stream can be replayed for queries or materialised for transactions.

OK, that’s enough theory. Let’s build an Angular app with a Web API.

Client-side Implementation for Event Triggering

There are three user input fields – Title, Name and Email – and the Submit button. Each field and the button acts as an event. In this code sample, they are named SalutationChangedEvent, UsernameChangedEvent, EmailChangedEvent and UserCreatedEvent. Those events are handled by event handlers on the Web API side. What the Angular app does is capture the input values as they are changed and clicked. Here is a sample TypeScript snippet for the name field directive.

This HTML is the template used for the directive below. ng-model will capture the field value, and the value will be sent to the server to be stored in the event stream.

Please bear in mind that, as this is written in TypeScript, the coding style is slightly different from the original Angular 1.x way.

  1. The interface IUserNameScope defines the model property and the change function. It inherits $scope.
  2. The interface is injected into both the link function and the controller of the UserName directive, which implements ng.IDirective.
  3. The link function of the directive takes care of everything DOM-related.
  4. The link function calls the function declared in $scope to send an AJAX request to the Web API.
  5. A POST AJAX request is sent through userNameFactory to the server.
  6. The response comes back from the server as a promise and is passed to replayViewFactory for replay.

Both the Title and Email fields work the same way as the Name field. Now, let’s have a look at the replay view section.

This HTML template is used for the directive below, which only replays responses.

As you can see, this directive only calls the replayViewFactory.getReplayedView() function to display what the changes are. How do those events get consumed on the server side, then? Let’s move on and have a look.

Server-side Implementation for Event Processing

The POST request has been sent through a designated endpoint like:

This request is captured in this Web API action:

The action in the controller merely calls the this._service.ChangeUsernameAsync(request) method. Not too exciting. Let’s dig into the service layer, then.

  1. Based on the type of the request passed in, an appropriate request handler is selected.
  2. The request handler converts the request into a corresponding event. In this code sample, the UsernameChangeRequest is converted to a UsernameChangedEvent by the handler.
  3. An event processor takes the event and processes it.

A question may arise here. How does request handler selection work? Each request handler implements IRequestHandler and it defines two methods:
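The interface definition isn’t reproduced here, but based on the description (a handler decides whether it can handle a request and turns it into an event), a sketch of the two methods could look like this; IRequest and IEvent are placeholder abstractions:

public interface IRequestHandler
{
    // Determines whether this handler can process the given request.
    bool CanHandle(IRequest request);

    // Converts the request into its corresponding event,
    // e.g. UsernameChangeRequest -> UsernameChangedEvent.
    IEvent CreateEvent(IRequest request);
}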

Therefore, you can create as many request handlers as you like, and register them into your IoC container (using Autofac for example) like:
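A registration sketch with Autofac might look like this; the concrete handler class names are assumptions based on the events mentioned above:

var builder = new ContainerBuilder();

// Register each request handler against the common IRequestHandler interface.
builder.RegisterType<SalutationChangeRequestHandler>().As<IRequestHandler>();
builder.RegisterType<UsernameChangeRequestHandler>().As<IRequestHandler>();
builder.RegisterType<EmailChangeRequestHandler>().As<IRequestHandler>();
builder.RegisterType<UserCreateRequestHandler>().As<IRequestHandler>();

var container = builder.Build();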

The sample code used here registers five request handlers. If your business logic is far more complex and requires many request handlers, you might want to consider modularising the handlers so they are registered automatically. I’ll discuss this in another post soon. Another question may arise: how does the event processor work? Let’s have a look. Here’s the event processor:

This is quite similar to EventStreamService.ChangeUsernameAsync(). First of all, it finds all event handlers that can handle the event. Then the selected event handlers process the event, as all event handlers implement the IEventHandler interface:
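As a sketch, the IEventHandler interface would follow the same shape as IRequestHandler above; the member names here are assumptions:

public interface IEventHandler
{
    // Determines whether this handler can process the given event.
    bool CanHandle(IEvent @event);

    // Processes the event, e.g. appends it to the event stream or materialises it.
    Task ProcessAsync(IEvent @event);
}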

To wrap up,

  1. A user action is captured on the client side and passed to the server side as a request.
  2. The user action request is converted to an event by a request handler.
  3. The event is then processed and stored in the event stream by event handlers.

Of course, I’m not arguing this is the perfect example of event processing. However, at least it works and is open for extension, which is good.

Replaying Events

Now, all events are raised and stored in the event stream with timestamps, and the event stream becomes the source of truth. Therefore, if we want to populate a user’s data as it was at a particular point in time, as long as we provide the timestamp we’re able to load that data without touching the actual data store. If you run the code sample locally and make some changes to the user input, you’ll actually be able to see the replayed view.

Now, let’s store the user data into the real data store by event materialisation.

Materialising Events

When you hit the Submit button, the server side replays all events from the event stream with the current timestamp for materialisation. The materialised view is then stored in the User table. As this is considered another event, a UserCreatedEvent is created and processed by UserCreatedEventHandler. Unlike the other event handlers, it uses not only the event stream repository but also the user repository.

In other words, the event itself is stored in the event stream and the user data from the event is stored in the user repository. Once stored, you will be able to see it on the screen.

Please note that if you change Title, Name or Email but have not yet clicked the Submit button, you’ll see some differences, like the following screen:

So far, we’ve briefly discussed both the ES pattern and the CQRS pattern with a simple Angular – Web API app. How did you find it? Wouldn’t it be nice for your next application? Keep one thing in mind: applying these patterns might bring an overly complex architecture into your application, as there are many abstraction layers involved. If your application is relatively simple or small, you don’t have to consider these patterns. However, if your application is growing and becoming heavier and more complex, then it’s time to consider implementing them. Need help? We are Kloudie, the expert group.

ASP.NET Web API Integration Testing with One Line of Code

A very popular post about integration testing ASP.NET Web API was published quite some time ago. However, OWIN has been released since then, and it makes integration testing ASP.NET Web API much simpler. This post describes what is required to set up an OWIN-based integration testing framework.

This, believe it or not, only requires a single line of code with OWIN self-hosting! It assumes that your web API project is powered by ASP.NET Web API 2.2 with OWIN.

The basic idea behind this particular approach is that the integration test project self-hosts the web API project. Consequently, the integration tests are executed against the self-hosted web app. In order to achieve this, the web API project must be modified a little bit. Fortunately, this is easy with OWIN.

A web app powered by OWIN has two entry points: Application_Start() and Startup.Configuration(IAppBuilder). There isn’t a big difference between the entry points but it is important to know that Startup.Configuration is invoked a bit later than Application_Start().

So that the integration test project can self-host the web API project, all web API initialization logic must be moved to Startup.Configuration(IAppBuilder) as follows:

public partial class Startup
{
    public void Configuration(IAppBuilder app)
    {
        var configuration = new HttpConfiguration();
        WebApiConfig.Register(configuration);
        FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
        // Execute any other ASP.NET Web API-related initialization, i.e. IoC, authentication, logging, mapping, DB, etc.
        ConfigureAuthPipeline(app);
        app.UseWebApi(configuration);
    }
}

Not much is left in the old Application_Start() method:

public class WebApiApplication : HttpApplication
{
    protected void Application_Start()
    {
        AreaRegistration.RegisterAllAreas();
        RouteConfig.RegisterRoutes(RouteTable.Routes);
    }
}

Now, in order to test your web API project, add the following OWIN NuGet packages to your tests project:

PM> Install-Package Microsoft.Owin.Hosting
PM> Install-Package Microsoft.Owin.Host.HttpListener

Following is the single line of code that is required to self-host your web API project:

using (var webApp = WebApp.Start<Startup>("http://*:9443/"))
{
    // Execute tests against the web API.
    // webApp is disposed automatically when the using block ends,
    // so there is no need to call Dispose() explicitly.
}

That single line starts the web API up.

It is better to set this up once for all integration tests in your test project, using MSTest’s AssemblyInitialize attribute as follows:

[TestClass]
public class WebApiTests
{
    private static IDisposable _webApp;

    [AssemblyInitialize]
    public static void SetUp(TestContext context)
    {
        _webApp = WebApp.Start<Startup>("http://*:9443/");
    }

    [AssemblyCleanup]
    public static void TearDown()
    {
        _webApp.Dispose();
    }
}

Now, the test code for testing a web API, which is self-hosted at localhost:9443 and protected using token-based authentication, can be written as follows:

[TestMethod]
public async Task TestMethod()
{
    using (var httpClient = new HttpClient())
    {
        var accessToken = GetAccessToken();
        httpClient.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);
        var requestUri = new Uri("http://localhost:9443/api/values");
        await httpClient.GetStringAsync(requestUri);
    }
}

The above approach has a wide range of advantages:

  • You don’t have to deploy the web API project
  • The test and web API code run in the same process
  • Breakpoints can be added to the web API code.
  • The entire ASP.NET pipeline can be tested with routing, filters, configuration, etc.
  • Any web API dependencies that will otherwise be injected from a DI container, can be faked from the test code
  • Tests are quick to run!

On the other hand, there are a few limitations, since there are behavioural differences between IIS hosting and self-hosting. For example, HttpContext.Current is null when the web API is self-hosted. Asynchronous code might also result in deadlocks when it runs in IIS even though it runs fine in the self-hosted web app. There are also complexities in setting up the SSL certificate for the self-hosted web app. Please note that Visual Studio should run with elevated privileges.

Nevertheless, with careful consideration, integration tests can be authored to run against a self-hosted or cloud-hosted web API.

This provides a tremendous productivity boost, given how easy it is to run integration tests in-process, with a setup cost that is negligible, as described in this post.

Update:

It was suggested in the comments to utilise in-memory hosting instead of the self-hosting solution (see details here).

Another NuGet package should be referenced: Microsoft.Owin.Testing. The resulting in-memory API call is executed as follows:

using (var server = TestServer.Create<Startup>())
{
    // Execute test against the web API.
    var result = await server.HttpClient.GetAsync("/api/values/");
}

Unfortunately, the in-memory solution doesn’t work for me out of the box; it seems it can’t handle authentication. I use UseJwtBearerAuthentication (JWT bearer token middleware) and my API calls result in 401.