Developing and configuring multi-tenant applications using AngularJS, WebAPI and Azure Active Directory

In this post, I am going to share my experience of publishing multi-tenant applications in Azure Active Directory, where Azure Active Directory acts as the OAuth 2.0 server.

You can read more about OAuth 2.0 at . I am going to use the implicit flow, where the client is an un-trusted application, for instance an AngularJS application or a phone application. These clients are called un-trusted because they cannot keep secret any credentials shared by the OAuth server.

Let’s have a look at the OAuth 2.0 actors in the implicit flow. Below is a diagram of the implicit flow in OAuth 2.0:


Implicit flow:

  1. The user tries to access a resource (e.g. the WebAPI) via a client application (e.g. the AngularJS app)
  2. The OAuth 2.0 server presents a consent screen to the user (the resource owner)
  3. The user accepts or rejects the consent
  4. If the user accepts, the OAuth 2.0 server grants the client application access to the resource by issuing an auth token that the client presents to the resource
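
The flow above starts with a single browser redirect. As an illustration (all identifiers below are placeholders, not values from the sample project), this is roughly the authorize request an implicit-flow client sends to Azure AD; note response_type=token, which asks for the access token to be returned directly in the redirect fragment:

```javascript
// Hypothetical sketch of the implicit-flow authorize request.
// All identifiers below are placeholders.
function buildImplicitFlowUrl(clientId, resource, redirectUri) {
  return 'https://login.microsoftonline.com/common/oauth2/authorize' +
    '?response_type=token' +                        // implicit flow: token comes back in the URL fragment
    '&client_id=' + encodeURIComponent(clientId) +
    '&resource=' + encodeURIComponent(resource) +   // the WebAPI being accessed
    '&redirect_uri=' + encodeURIComponent(redirectUri);
}

var url = buildImplicitFlowUrl(
  '<client-id-guid>',
  'https://<YOUR-TENANT-NAME>/todo-webapi',
  'https://angularjs-client-app-url/');
```

ADAL.js builds and issues this request for you; the sketch just makes the moving parts visible.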


Now that we have covered the terminology of the OAuth 2.0 implicit flow, we can map it to an actual implementation. Please see the high-level diagram below:


We can see from the diagram that we have all four actors, but the resource owners (users) are spread across various active directories.

I am extending the AngularJS todo list application published at and have moved all the AngularJS code to its own project called “TodoAngularJsClient”, as shown below:


The following changes configure the WebAPI to use bearer tokens for authentication:

NOTE: We are not validating the issuer because we do not know the issuers in advance, but if desired we can restrict access to certain tenants via a custom Authorize filter.

You can clone the project from and follow the steps below to configure the applications in your Active Directory.

  1. Configure AngularJs Client application
    1. Navigate to Azure Active Directory > Applications
    2. Click on Add link at the bottom
    3. Click on “Add an application my organization is developing” link
    4. In the Name field, provide a descriptive name; this name is shown on the consent screen. For instance, “Todo Application”
    5. Select “Web Application And/Or Web API” as the type
    6. Click the Next arrow
    7. In Sign-On URL, enter the URL where you plan to deploy the application. For instance https://angularjs-client-app-url/
    8. In App ID URI, enter the following: https://<YOUR-TENANT-NAME>/todo-angularjs-client, replacing <YOUR-TENANT-NAME> with your actual tenant name.
    9. Click the tick button at the bottom. It will take you to the application’s configuration page.
    10. Now download the Manifest and make the following changes:
      1. Set oauth2AllowImplicitFlow to true.
      2. Set availableToOtherTenants to true.
      3. Save the .json file and upload it again.
  2. Configure WebAPI application
    1. Navigate to Azure Active Directory > Applications
    2. Click on Add link at the bottom
    3. Click on “Add an application my organization is developing” link
    4. In the Name field, provide a descriptive name. For instance, Todo WebAPI.
    5. Select “Web Application And/Or Web API” as the type
    6. Click the Next arrow
    7. In Sign-On URL, enter the URL where you plan to deploy the application. For instance https://todo-webapiurl/
    8. In App ID URI, enter the following: https://<YOUR-TENANT-NAME>/todo-webapi, replacing <YOUR-TENANT-NAME> with your actual tenant name.
    9. Click the tick button at the bottom. It will take you to the application’s configuration page.
    10. Now download the Manifest and make the following changes:
      1. Take the client ID GUID from the AngularJS client configuration and add it to the knownClientApplications array, like: “knownClientApplications”: [“client-id-guid”]
      2. Set oauth2AllowImplicitFlow to true.
      3. Set availableToOtherTenants to true.
      4. Save the .json file and upload it again.
  3. Now update your AngularJS client application code
    1. Update the ADAL configuration in app.js
    2. Update todoListSvc.js
    3. Re-publish the AngularJS client application to the App Service

ADAL Configuration changes:
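
As a sketch of what the ADAL configuration in app.js typically looks like for a multi-tenant client (the clientId and endpoint values below are placeholders, not taken from the sample project), the key setting is tenant: 'common', which lets users from any directory sign in:

```javascript
// Hypothetical ADAL configuration sketch; GUIDs and URLs are placeholders.
var adalConfig = {
  instance: 'https://login.microsoftonline.com/',
  tenant: 'common',  // 'common' endpoint => multi-tenant sign-in
  clientId: '<angularjs-client-app-id-guid>',
  // Maps WebAPI endpoints to their App ID URIs so the ADAL http
  // interceptor knows which token to attach to outgoing requests.
  endpoints: {
    'https://todo-webapi-url/': 'https://<YOUR-TENANT-NAME>/todo-webapi'
  }
};
// In app.js this object is passed to
// adalAuthenticationServiceProvider.init(adalConfig, $httpProvider);
```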

todoListSvc.js changes:

Before the application can be accessed by the users of a directory, we need to initiate a process that copies the application’s metadata to the tenant’s directory. For this purpose, a signup process is introduced, as shown below, that initiates the consent process:


The following code snippet initiates and presents consent screen to administrator:
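
As an illustration of the idea (placeholder identifiers, not the project’s actual code), the signup step boils down to sending the administrator to the authorize endpoint with prompt=admin_consent:

```javascript
// Hypothetical signup sketch; identifiers are placeholders.
function buildAdminConsentUrl(clientId, redirectUri) {
  return 'https://login.microsoftonline.com/common/oauth2/authorize' +
    '?response_type=id_token' +
    '&client_id=' + encodeURIComponent(clientId) +
    '&redirect_uri=' + encodeURIComponent(redirectUri) +
    '&prompt=admin_consent';   // consent on behalf of the whole directory
}

var consentUrl = buildAdminConsentUrl(
  '<client-id-guid>',
  'https://angularjs-client-app-url/');
```

One successful admin consent covers the whole directory, which is what makes the per-tenant signup step a one-off.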

When an administrator clicks the Signup button, ADAL presents a consent screen on behalf of every user in his/her directory, as shown below:


The numbers shown in the above consent screen correspond to the following:

  1. Todo Application (this is the name you configured in Azure Active Directory, so choose a proper name)
  2. The list of all the scopes you assigned when configuring the application.


Once the signup process has completed successfully, we need to let the admin know, as shown below.


This is also the point where we can capture the Active Directory information (e.g. tenant ID, admin name). This information can be used to control which tenants can access the application.

If you want your application to be accessible only to specific tenants, you can introduce a workflow into the signup process that notifies you (a configured admin) that a tenant has just subscribed to your application. If you approve the tenant, its users can use the application. You can enforce this with a custom Authorize filter that queries a database and validates the incoming tenant.
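
The filter itself would be C# in the WebAPI, but the core check is small. As a language-neutral sketch (shown here in JavaScript; the allow-list and claim handling are illustrative assumptions), it extracts the Azure AD tenant-id claim, tid, from the validated token and compares it against the approved tenants:

```javascript
// Hypothetical tenant-validation sketch. In the real WebAPI this logic
// would live in a custom Authorize filter; 'tid' is the Azure AD
// tenant-id claim. The allow-list would come from a database in practice.
var approvedTenants = ['<approved-tenant-id-guid>'];

function isTenantAllowed(claims) {
  var tid = claims['tid'];
  return approvedTenants.indexOf(tid) !== -1;
}
```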

For simplicity, we allow every Active Directory tenant to access this application. After the signup process, Active Directory copies the application details into the tenant’s directory. Once the above process is complete, you can access the application and start using it, as shown below:



That’s it for now. Please provide your feedback.






Implementing the Gradient Descent algorithm in Hadoop for large-scale data

In this post I will explore how we can use MapReduce to implement the gradient descent algorithm in Hadoop for large-scale data. As we know, Hadoop is capable of handling petabyte-scale data.

Before starting, we first need to understand what gradient descent is and where we can use it. Below is an excerpt from Wikipedia:

Gradient descent is a first-order iterative optimization algorithm. To find a local minimum of a function using gradient descent, one takes steps proportional to the negative of the gradient (or of the approximate gradient) of the function at the current point. If instead one takes steps proportional to the positive of the gradient, one approaches a local maximum of that function; the procedure is then known as gradient ascent.
[Source: ]

If you look at the algorithm, it is an iterative optimisation algorithm. So if we are talking about millions of observations, we need to iterate over those millions of observations and adjust our parameters (theta).
Mathematical notations:






Where p is the number of features.
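
The notation itself, assuming the usual linear-regression setup (m observations, p features, hypothesis h_theta, learning rate alpha; this is a reconstruction of the standard formulation, not copied from the original), is:

```latex
h_\theta(x) = \sum_{j=0}^{p} \theta_j x_j, \qquad x_0 = 1

J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2

\theta_j := \theta_j - \alpha \, \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)}
```

Because the update is a sum over observations, it decomposes naturally across splits: each mapper can compute the inner sum for its own subset, and the per-split results only need to be added together.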

Now, the question is: how can we leverage Hadoop to distribute the workload so as to minimise the cost function and find the theta parameters?

The MapReduce programming model comprises two phases, 1. Map and 2. Reduce, shown in the picture below. Hadoop lets the programmer focus only on the map and reduce phases; the rest of the workload is taken care of by Hadoop. Programmers do not need to think about how the data is going to be split, etc. Please visit to learn more about the MapReduce framework.

[Multiple mappers with single reducer]

When a user uploads data to HDFS, the data is split and saved across various data nodes. We know Hadoop will provide a subset of the data to each mapper, so we can program our mapper to emit a serialisable PartialGradientDescent object. For instance, if one split has 50 observations, that mapper will return 50 partial gradient descent objects. Andrew Ng has explained this well at

One more thing: there is only ONE reducer in this example, so the reducer will receive a whole lot of partial gradients. It would be better to introduce a combiner so that the reducer receives fewer PartialGradientDescent objects, or you can apply the in-mapper combining design pattern for MapReduce, which I will cover in the next post.

Now let’s get into the Java MapReduce program. I would recommend doing some reading about Writable in Hadoop.

Here is the mapper code that emits PartialGradientDescent object:

The mapper does the following:

  1. Parses the received data into a data point and validates it.
  2. If the data point is not valid, increments a counter (this counter is used to debug how many invalid records were received by the mapper).
  3. Calculates the partial gradients and emits them.
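
The per-observation computation those steps describe can be sketched language-neutrally as follows (a JavaScript illustration with made-up names, not the post’s Java mapper). For each valid data point, the partial gradient is the prediction error multiplied by each feature value:

```javascript
// Hypothetical sketch of the mapper's per-observation work.
// theta includes the intercept; x[0] is assumed to be 1.
function partialGradient(theta, x, y) {
  var prediction = 0;
  for (var j = 0; j < theta.length; j++) {
    prediction += theta[j] * x[j];        // h_theta(x)
  }
  var error = prediction - y;             // h_theta(x) - y
  // One PartialGradientDescent-like record: error * x_j per feature.
  return x.map(function (xj) { return error * xj; });
}
```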


Let’s have a look at the reducer code:

The reducer does the following:

  1. Sums all the partial gradients emitted by the mappers
  2. Updates the theta parameters
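
Since the gradient is a sum over observations, the reducer’s work amounts to vector addition followed by a single theta update. A sketch (in JavaScript for brevity; the actual reducer is Java, and the names here are illustrative):

```javascript
// Hypothetical sketch of the reducer's work: sum the partial gradients,
// then apply one gradient descent update to theta.
function reduceGradients(theta, partials, alpha, m) {
  var total = new Array(theta.length).fill(0);
  partials.forEach(function (p) {
    for (var j = 0; j < total.length; j++) total[j] += p[j]; // sum step
  });
  return theta.map(function (t, j) {
    return t - (alpha / m) * total[j];   // theta_j := theta_j - (alpha/m) * sum_j
  });
}
```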

The reducer receives all the partial gradients; if we are talking about millions of observations, it will have to iterate over all of them to compute the sum. To overcome this, we can introduce a combiner that computes partial sums of the partial gradients and emits those to the reducer. In that case the reducer receives only a few partial gradients. The other approach is to implement the in-mapper combining pattern.

And the last piece of the puzzle is the driver program, which triggers the Hadoop job for as many iterations as you need. The driver program is also responsible for supplying the initial theta and alpha.
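
The driver’s role can be illustrated end to end on a toy dataset (a JavaScript sketch, illustrative only; in the real setup each iteration is a full Hadoop job submission, with the driver supplying theta and alpha):

```javascript
// Hypothetical sketch of the driver loop: run N iterations, each one
// standing in for a full map/reduce pass over the data.
function gradientDescent(data, theta, alpha, iterations) {
  var m = data.length;
  for (var it = 0; it < iterations; it++) {
    var total = new Array(theta.length).fill(0);
    data.forEach(function (d) {          // "map": per-observation partial gradients
      var error = theta.reduce(function (s, t, j) { return s + t * d.x[j]; }, 0) - d.y;
      for (var j = 0; j < theta.length; j++) total[j] += error * d.x[j];
    });
    theta = theta.map(function (t, j) {  // "reduce": sum + theta update
      return t - (alpha / m) * total[j];
    });
  }
  return theta;
}

// Toy data following y = 2x, with x[0] = 1 as the intercept term;
// theta should approach [0, 2].
var data = [{ x: [1, 1], y: 2 }, { x: [1, 2], y: 4 }, { x: [1, 3], y: 6 }];
var theta = gradientDescent(data, [0, 0], 0.1, 500);
```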

You can find more about PartialGradientWritable at

That’s it for now. Stay tuned.

Implement a SharePoint Timer job using Azure WebJob

The SharePoint Timer service runs in the background to perform long-running tasks. It does some important SharePoint clean-up work in the background but can also be used for useful functional tasks. For instance, you may want to send newsletters to your users on a regular basis, or keep your customers up to date with some regularly timed information.

For this demo, I will be using the SharePoint Timer service to send an email to newly registered customers/users. The newly registered customers/users are stored in a SharePoint list, with a status field capturing whether an email has been sent.

There are some implementation choices when developing a SharePoint Timer service:

  1. Azure Web Job
  2. Azure Worker Role
  3. Windows Service (can be hosted on premises or on a VM in the cloud)
  4. Task Scheduler (hosted on premises)

I am choosing a WebJob as it is free of cost and I can reuse my console application as a WebJob. Please check why to choose a WebJob.

An Azure WebJob does not live on its own; it sits under an Azure Web App. For this purpose I am going to create a dummy web app to host my WebJob. I will host all my CSOM code in this WebJob.

There are two types of WebJob:

  • Continuous: best suited to queuing applications that continuously receive messages from a queue.
  • On demand: can be scheduled to run hourly, weekly, monthly, etc.

The WebJob hosts and executes the CSOM code that retrieves user/customer information from SharePoint and sends the email. The following code snippets show what the WebJob is doing:

Querying SharePoint using CSOM and CAML Query:
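
The query at the heart of that CSOM call can be sketched in CAML. The list schema below is an assumption for illustration (a Boolean EmailSent status field), not taken from the actual demo list:

```xml
<!-- Hypothetical CAML query: fetch users whose welcome email has not
     been sent yet. Field names are illustrative. -->
<View>
  <Query>
    <Where>
      <Eq>
        <FieldRef Name='EmailSent' />
        <Value Type='Boolean'>0</Value>
      </Eq>
    </Where>
  </Query>
</View>
```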

Sending email using Office 365 Exchange Web Services (EWS):

Composing the email using the RazorEngine templating engine:

And finally, updating the SharePoint list item using CSOM:

You can download the full source code from CodePlex:

When writing a WebJob, the following points should be considered to make it diagnosable and reusable:

  1. Do not swallow exceptions. Handle them first, then rethrow so the WebJob knows something went wrong.
  2. Try to use interfaces so that dependencies can be mocked for unit testing.
  3. Always log major steps and errors using Console.WriteLine etc.
  4. Structure your code so it can run as a console application; that way it can also be used with the Task Scheduler.
  5. Try to avoid hardcoding. Maximise the use of configuration; it can also be supplied from the Azure portal.

It is time to publish the WebJob. There are lots of articles out there about how to create a schedule for a WebJob. I will simply use Visual Studio to create the schedule before publishing. In Visual Studio, right-click the project and click “Publish as Azure WebJob…”; this launches a UI to specify your schedule, as shown below:

Schedule settings

That’s it. Happy SharePointing 🙂