No More Plaintext Passwords: Using Azure Key Vault with Azure Resource Manager

Originally posted on siliconvalve:

A big part of where Microsoft Azure is going is being driven by template-defined environments that leverage the Azure Resource Manager (ARM) for deployment orchestration.

If you’ve spent any time working with ARM deployments you will have gotten used to seeing this pattern in your templates when deploying Virtual Machines (VMs):

The adminPassword property accepts a Secure String object which contains an encrypted string that is passed to the VM provisioning engine in Azure and is used to set the login password. You provide the clear text version of the password either as a command-line parameter, or via a parameters file.
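For illustration, a hedged sketch of how the cleartext password typically flows into an ARM deployment from PowerShell (the template path and parameter name here are placeholders, not from the original post):

# The cleartext password has to exist somewhere before it becomes a SecureString
$adminPassword = ConvertTo-SecureString "P@ssw0rd!" -AsPlainText -Force
New-AzureRmResourceGroupDeployment -ResourceGroupName "[your_resource_group]" `
    -TemplateFile ".\azuredeploy.json" `
    -adminPassword $adminPassword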

The obvious problems with this way of doing things are:

  1. Someone needs to type the cleartext password which means:
    1. it needs to be known to anyone who provisions the environment and
    2. how do I feed it into an automated environment deployment?
  2. If I store the password in a parameter…

View the original post for the remaining 781 words.

Creating a simple nodejs API on AWS (including nginx)

On a recent project I was part of a team developing an AngularJS website with a C# ASP.NET backend API hosted in Azure. It was a great project as I got to work with a bunch of new tools, but it got me wondering how simple it would be to use a JavaScript API instead. That way the entire development stack would be written in JavaScript.

And so a personal project was born.  To create a simple JS API and get it running in the cloud.

Getting started

So here goes, a quick guide on setting up a nodejs API using the express framework.

I’ll start by getting the environment running locally on my mac in 6 simple steps:

# 1. Create a directory for your application
$ mkdir [your_api_name]

# 2. Install Express (the -g will install it globally)
$ npm install express -g

# 3. Use the express generator as it makes life easier!
$ npm install express-generator -g

# 4. Create your project
$ express [your_project_name_here]

# 5. Move into the project folder and install any missing dependencies
$ cd [your_project_name_here]
$ npm install

# 6. Start your API
$ npm start

That’s it. You now have a nodejs API running locally on your development environment!

To test it, and prove to yourself it’s working fine, run the following curl command:

$ curl http://localhost:3000/users

If everything worked as planned, you should see “respond with a resource” printed in your terminal window. Now this is clearly as basic as it gets, but you can easily make it [a bit] more interesting by adding a new file to your project called myquickapi.js in your [app name]/routes/ folder, and add the following content:

var express = require('express');
var router = express.Router();

// get method route
router.get('/', function (req, res) {
  res.send('You sent a get request');
});

// post method route
router.post('/', function (req, res) {
  res.send('You sent me ' + req.param('data'));
});

module.exports = router;

A quick change to the app.js file to update our routes, by adding the following 2 lines:

var myquickapi = require('./routes/myquickapi');
app.use('/myquickapi', myquickapi);

Re-start your node service, and run:

$ curl -X POST http://localhost:3000/myquickapi?data=boo

And you will see the API handle the request parameter and echo it back to the caller.

Spin up an AWS EC2 instance

Log into the AWS portal and create a new EC2 instance.  For my project, as it is only a dev environment, I have gone for a General Purpose t2.micro Ubuntu Server.  Plus it's free, which happens to be okay too!

Once the instance is up and running you will want to configure the security group to allow inbound traffic on ports 80 and 443 – after all, it is a web API and I guess you want to access it!  I also enabled SSH for my local network:


Using your PEM file, SSH into your new instance and, once connected, run the following commands:

# 1. update any out of date packages
sudo apt-get update

# 2. install nodejs
sudo apt-get install nodejs

# 3. install node package manager
sudo apt-get install npm

Now you can run node using the nodejs command. This is great, but not for the JS packages we’ll be using later on.  They reference the node command instead.  A simple fix is to create a symlink to the nodejs command:

$ sudo ln -s /usr/bin/nodejs /usr/bin/node

Set up nginx on your EC2 instance

We’ll use nginx on our server to proxy network traffic to the running nodejs instance.  Install nginx using the following commands:

# 1. install nginx

$ sudo apt-get install nginx

# 2. make a directory for your sites
$ sudo mkdir -p /var/www/express-api/public

# 3. set the permission of the folder so it is accessible publicly
$ sudo chmod 755 /var/www

# 4. remove the default nginx block
$ sudo rm /etc/nginx/sites-enabled/default

# 5. create the virtual hosts file
$ sudo nano /etc/nginx/sites-available/[your_api_name]

Now copy the following content into your virtual hosts file and save it:

upstream app_nodejs {
    # the local node process started by npm start (or forever)
    server 127.0.0.1:3000;
    keepalive 8;
}

server {
    listen 80;
    listen [::]:80 default_server ipv6only=on;
    listen 443 default ssl;

    root /var/www/[your_site_folder]/public/[your_site_name];
    index index.html index.htm;

    # Make site accessible from http://localhost/
    server_name [your_server_domain_name];

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;

        proxy_pass http://localhost:3000/;
        proxy_redirect off;
    }
}

This basically tells your server to listen on ports 80 and 443 and proxy any incoming traffic to your locally running node server on port 3000. A simple approach for now, but all that is needed to get our API up and running.

Activate your newly created hosts file by running the following command:

$ sudo ln -s /etc/nginx/sites-available/[your_api_name] /etc/nginx/sites-enabled/[your_api_name]

Now restart nginx for your settings to take effect:

$ sudo service nginx restart

As a sanity test you can run the following command to confirm everything is set up correctly. If you are in good shape you will see a confirmation that the test has run successfully.

$ sudo nginx -c /etc/nginx/nginx.conf -t

Final steps

The last step in the process, which you could argue is the most important, is to copy your API code onto your new web server.  If you are creating this for a production system then I would encourage using a deployment tool at this stage (and, to be frank, probably a different approach altogether), but for now a simple secure copy is probably all that's required:

$ scp -r [your_api_directory] your_username@aws_ec2_api:/var/www/[your_site_folder]/public/

And that's it.  Fire up a terminal and run the curl commands against your EC2 instance rather than your local machine.  If all has gone well then you should get the same response as you did with your local environment (and it'll be lightning fast).

… Just for fun

If you disconnect the SSH connection to your server it will stop the application from running.  A fairly big problem for a web API, but one with a simple fix.

A quick solution is to use the Forever tool.

Install it, and run your app (you’ll be glad you added the symlink to nodejs earlier):

$ sudo npm install -g forever

$ sudo forever start /var/www/[your_site_folder]/public/[your_site_name]/bin/www


Hopefully this has provided a good insight into setting up a nodejs API on AWS. At the moment it is fairly basic but, time permitting, I would like to build on the API and add additional features to make it more usable – I'd particularly like to add a JavaScript OAuth 2.0 endpoint!

Watch this space for future updates; as I add new features I will be sure to blog about the learnings I pick up along the way.

As always, if you have any questions just reach out to me or post them below.

Resource Manager Cmdlets in Azure Powershell 1.0

Azure recently launched version 1.0 of its PowerShell cmdlets. The changes are huge, including the new Azure Resource Manager (ARM) cmdlets, which resulted in the deprecation of Switch-AzureMode for switching between ASM and ARM. In this post, we take a brief look at the new PowerShell cmdlets for ARM, especially for managing resource groups and templates.


In order to get the newest Azure PowerShell, using MS Web Platform Installer is the quickest and easiest way.

Note: at the time of writing, the latest Azure PowerShell release is dated 9 November 2015.

Of course, there are other ways to install the latest version of Azure PowerShell, but this is beyond the scope of this post.

New Set of Cmdlets

Now, the new version of Azure PowerShell has been installed. Run PowerShell ISE with Administrator privileges. As always, run Update-Help to get all help files up to date. Then try the following command:
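(A command along these lines lists the new cmdlets; the exact command may differ.)

# List the new ARM cmdlets – note the AzureRm prefix in the noun
Get-Command *AzureRm*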

If you can't see anything, don't worry. You can restart ISE or even restart your PC to get it working. Alternatively, you can check those cmdlets through ISE.

Can you spot any differences compared to the previous version of the cmdlets? All cmdlets now follow the pattern [Verb]-AzureRm[Noun]. For example, in order to get the list of resource groups, you run Get-AzureRmResourceGroup. The result will look like this:

Now, let's try to set up a simple web application infrastructure. The web application needs at least one website and one database. In addition, Application Insights might be necessary for telemetry purposes.

Create a Resource Group

For that infrastructure, we need a resource group. Try the following cmdlets in that order:
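(The subscription name below is a placeholder.)

# Sign in, list your subscriptions and select the one to work in
Login-AzureRmAccount
Get-AzureRmSubscription
Select-AzureRmSubscription -SubscriptionName "[your_subscription_name]"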

Can you find out the differences from the old cmdlets?

Old Cmdlets → New Cmdlets

  • Get-AzureAccount → Login-AzureRmAccount
  • Get-AzureSubscription → Get-AzureRmSubscription
  • Select-AzureSubscription → Select-AzureRmSubscription

As stated above, all cmdlets now use AzureRm in their names instead of Azure. Once you have chosen your subscription (if you have more than one), it's time to create a resource group for those infrastructure items. It might be worth having a look at the naming guidelines for Azure resources. Let's try it.

Old Cmdlets → New Cmdlets

  • New-AzureResourceGroup → New-AzureRmResourceGroup

Therefore, enter the following to create a resource group:
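(Using the resource group name and region described below.)

# Create the resource group in the Australia Southeast (Melbourne) region
New-AzureRmResourceGroup -Name "ase-dev-rg-sample" -Location "Australia Southeast"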

The resource group is now named ase-dev-rg-sample and created in the Australia Southeast (Melbourne) region. Let's move on to the next step: setting up all resources using a template.

Setup Resources with Azure Resource Template

Fortunately, there is a template for our purpose in a GitHub repository:

Old Cmdlets → New Cmdlets

  • New-AzureResourceGroupDeployment → New-AzureRmResourceGroupDeployment

Use the new cmdlet and add all resources into the group:
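(The template URI below is a placeholder for the raw GitHub link to the template.)

# Deploy the template straight from GitHub into the new resource group
New-AzureRmResourceGroupDeployment -ResourceGroupName "ase-dev-rg-sample" `
    -TemplateUri "[raw_github_template_url]" `
    -Verbose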

As you can see, we point to the template file directly on GitHub and leave out the parameters. Therefore, the cmdlet will prompt for the necessary parameters:

Once everything is entered, because the -Verbose switch was used, all deployment details will be displayed along with the result:

Check the Azure Portal to confirm whether all defined resources have been deployed.

Everything has been smoothly deployed.

We have so far had a quick look at ARM and resource group management using the new version of the PowerShell cmdlets. There are more cmdlets in Azure PowerShell to control individual resources more precisely. I'm not going to dive too deep here, but it's well worth trying the other cmdlets for your infrastructure setup. They are even more powerful than before.

Keep enjoying on cloud with Kloud!

Implementing a WCF Client with Certificate-Based Mutual Authentication without using Windows Certificate Store

Windows Communication Foundation (WCF) provides a relatively simple way to implement Certificate-Based Mutual Authentication on distributed clients and services. Additionally, it supports interoperability as it is based on WS-Security and X.509 certificate standards. This blog post briefly summarises mutual authentication and covers the steps to implement it with an IIS hosted WCF service.

Even though WCF's out-of-the-box functionality removes much of the complexity of Certificate-Based Mutual Authentication in many scenarios, there are cases in which this is not what we need. For example, by default, WCF relies on the Windows Certificate Store for accessing its own private key and the counterpart's public key when implementing Certificate-Based Mutual Authentication.

Having said so, there are scenarios in which using the Windows Certificate Store is not an option. It could be a deployment restriction or a platform limitation. For example, what if you want to create an Azure WebJob which calls a SOAP web service using Certificate-Based Mutual Authentication? At the time of writing this post, there is no way to store a certificate containing the counterpart's public key in the underlying certificate store for an Azure WebJob. And just because of that, we cannot enjoy all the built-in benefits of WCF for building our client.

Here, they explain how to create a WCF service that implements custom certificate validation by defining a class derived from X509CertificateValidator and implementing an abstract "Validate" override method. Once the derived class is defined, the CertificateValidationMode has to be set to "Custom" and the CustomCertificateValidatorType to the derived class's type. This can easily be extended to implement mutual authentication on the service side without using the Windows Certificate Store.

My purpose in this post is to describe how to implement a WCF client with Certificate-Based Mutual Authentication without using the Windows Certificate Store, by compiling the required sources and filling the gaps in the available documentation.

What to consider

Before we start thinking about coding, we need to consider the following:

  • The WCF client must have access to the client’s private key to be able to authenticate with the service.
  • The WCF client must have access to the service’s public key to authenticate the service.
  • Optionally, the WCF client should have access to the service’s certificate issuer’s certificate (Certificate Authority public key) to validate the service’s certificate chain.
  • The WCF client must implement a custom service’s certificate validation, as it cannot rely on the built-in validation.
  • We want to do this, without using the Windows Certificate Store.

Accessing public and private keys without using Windows Certificate Store

First we need to access the client's private key. This can be achieved without any problem. We could get it from a local or a shared folder, or from a binary resource. For the purposes of this blog, I will be reading it from a local Personal Information Exchange (PFX) file. Reading a PFX file requires a password; thus you might want to consider encrypting it or implementing additional security. There are various X509Certificate2 constructor overloads which allow you to load a certificate in different ways. Furthermore, reading a public key is easier, as it does not require a password.

Implementing a custom validator method

On the other hand, implementing the custom validator requires a bit more thought, and the documentation is not very detailed. The ServicePointManager class has a property called "ServerCertificateValidationCallback" of type RemoteCertificateValidationCallback which allows you to specify a custom service certificate validation method; the contract for the delegate method is defined there.

In order to authenticate the service, once we get its public key, we could do the following:

  • Compare the service certificate against a preconfigured authorised service certificate. They must be the same.
  • Validate that the certificate is not expired.
  • Optionally, validate that the certificate has not been revoked by the issuer (Certificate Authority). This does not apply for self-signed certificates.
  • Validate the certificate chain, using a preconfigured trusted Certificate Authority.

For comparing the received certificate and the preconfigured one we will use the X509Certificate.Equals method. For validating that the certificate has not expired and has not been revoked we will use the X509Chain.Build method. And finally, to validate that the certificate has been issued by the preconfigured trusted CA, we will make use of the X509Chain.ChainElements property.

Let’s jump into the code.

To illustrate how to implement the WCF client, what can be better than the code itself? I have implemented the WCF client as a Console Application. Please pay attention to all the comments when reading the code. With the provided background, I hope it is clear and self-explanatory.

using System;
using System.Configuration;
using System.IdentityModel.Tokens;
using System.Linq;
using System.Net;
using System.Net.Security;
using System.ServiceModel;
using System.ServiceModel.Security;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

namespace MutualAuthClient
{
    class Program
    {
        static void Main(string[] args)
        {
            try
            {
                // Set the ServerCertificateValidationCallback property to a
                // custom method.
                ServicePointManager.ServerCertificateValidationCallback +=
                                    CustomServiceCertificateValidation;

                // We will call a service which expects a string and echoes it
                // as a response.
                var client = new EchoService.EchoServiceClient();

                // Load private key from PFX file.
                // Reading from a PFX file requires specifying the password.
                // You might want to consider adding encryption here.
                Console.WriteLine("Loading Client Certificate (Private Key) from File: "
                                    + ConfigurationManager.AppSettings["ClientPFX"]);
                client.ClientCredentials.ClientCertificate.Certificate =
                                    new X509Certificate2(
                                        ConfigurationManager.AppSettings["ClientPFX"],
                                        ConfigurationManager.AppSettings["ClientPFXPassword"]);

                // We are using a custom method for the Server Certificate Validation,
                // so the built-in WCF validation is switched off.
                client.ClientCredentials.ServiceCertificate.Authentication
                                .CertificateValidationMode =
                                    X509CertificateValidationMode.None;

                Console.WriteLine(String.Format("About to call client.Echo"));
                string response = client.Echo("Test");
                Console.WriteLine(String.Format("client.Echo Response: '{0}'", response));
            }
            catch (Exception ex)
            {
                Console.WriteLine(
                    String.Format("Exception occurred{0}Message:{1}{2}Inner Exception: {3}"
                                   , Environment.NewLine, ex.Message, Environment.NewLine,
                                   ex.InnerException));
            }
        }

        private static bool CustomServiceCertificateValidation(
                object sender, X509Certificate cert, X509Chain chain,
                SslPolicyErrors error)
        {
            Console.WriteLine("CustomServiceCertificateValidation has started");

            // Load the authorised and expected service certificate (public key)
            // from file.
            Console.WriteLine("Loading Service Certificate (Public Key) from File: "
                                + ConfigurationManager.AppSettings["ServicePublicKey"]);
            X509Certificate2 authorisedServiceCertificate = new X509Certificate2(
                                ConfigurationManager.AppSettings["ServicePublicKey"]);

            // Load the trusted CA (public key) from file.
            Console.WriteLine("Loading the Trusted CA (Public Key) from File: "
                                + ConfigurationManager.AppSettings["TrustedCAPublicKey"]);
            X509Certificate2 trustedCertificateAuthority = new X509Certificate2(
                                ConfigurationManager.AppSettings["TrustedCAPublicKey"]);

            // Load the received certificate from the service (input parameter) as
            // an X509Certificate2
            X509Certificate2 serviceCert = new X509Certificate2(cert);

            // Compare the received service certificate against the configured
            // authorised service certificate.
            if (!authorisedServiceCertificate.Equals(serviceCert))
            {
                // If they are not the same, throw an exception.
                throw new SecurityTokenValidationException(String.Format(
                    "Service certificate '{0}' does not match that authorised '{1}'"
                    , serviceCert.Thumbprint, authorisedServiceCertificate.Thumbprint));
            }
            else
            {
                Console.WriteLine(String.Format(
                    "Service certificate '{0}' matches the authorised certificate '{1}'."
                    , serviceCert.Thumbprint, authorisedServiceCertificate.Thumbprint));
            }

            // Create a new X509Chain to validate the received service certificate using
            // the trusted CA
            X509Chain chainToValidate = new X509Chain();

            // When working with Self-Signed certificates,
            // there is no need to check revocation.
            // You might want to change this when working with
            // a properly signed certificate.
            chainToValidate.ChainPolicy.RevocationMode = X509RevocationMode.NoCheck;
            chainToValidate.ChainPolicy.RevocationFlag = X509RevocationFlag.ExcludeRoot;
            chainToValidate.ChainPolicy.VerificationFlags =
                                X509VerificationFlags.AllowUnknownCertificateAuthority;

            chainToValidate.ChainPolicy.VerificationTime = DateTime.Now;
            chainToValidate.ChainPolicy.UrlRetrievalTimeout = new TimeSpan(0, 0, 0);

            // Add the configured authorised Certificate Authority to the chain.
            chainToValidate.ChainPolicy.ExtraStore.Add(trustedCertificateAuthority);

            // Validate the received service certificate using the trusted CA
            bool isChainValid = chainToValidate.Build(serviceCert);

            if (!isChainValid)
            {
                // If the certificate chain is not valid, get all returned errors.
                string[] errors = chainToValidate.ChainStatus
                    .Select(x => String.Format("{0} ({1})", x.StatusInformation.Trim(),
                                               x.Status))
                    .ToArray();
                string serviceCertChainErrors = "No detailed errors are available.";

                if (errors != null && errors.Length > 0)
                    serviceCertChainErrors = String.Join(", ", errors);

                throw new SecurityTokenValidationException(String.Format(
                        "The chain of service certificate '{0}' is not valid. Errors: {1}",
                        serviceCert.Thumbprint, serviceCertChainErrors));
            }

            // Validate that the Service Certificate Chain Root matches the Trusted CA.
            if (!chainToValidate.ChainElements
                .Cast<X509ChainElement>()
                .Any(x => x.Certificate.Thumbprint ==
                          trustedCertificateAuthority.Thumbprint))
            {
                throw new SecurityTokenValidationException(String.Format(
                        "The chain of Service Certificate '{0}' is not valid. " +
                        " Service Certificate Authority Thumbprint does not match " +
                        "Trusted CA's Thumbprint '{1}'",
                        serviceCert.Thumbprint, trustedCertificateAuthority.Thumbprint));
            }
            else
            {
                Console.WriteLine(String.Format(
                    "Service Certificate Authority '{0}' matches the Trusted CA's '{1}'",
                    chainToValidate.ChainElements[chainToValidate.ChainElements.Count - 1]
                        .Certificate.Thumbprint,
                    trustedCertificateAuthority.Thumbprint));
            }

            return true;
        }
    }
}

And here is the App.config

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <appSettings>
    <add key="ClientPFX" value="certificates\ClientPFX.pfx" />
    <add key="ClientPFXPassword" value="********" />
    <add key="TrustedCAPublicKey" value="certificates\ServiceCAPublicKey.cer" />
    <add key="ServicePublicKey" value="certificates\ServicePublicKey.cer" />
  </appSettings>
  <startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5.2" />
  </startup>
  <system.serviceModel>
    <bindings>
      <basicHttpBinding>
        <binding name="BasicHttpBinding_IEchoService">
          <security mode="Transport">
            <transport clientCredentialType="Certificate" />
          </security>
        </binding>
      </basicHttpBinding>
    </bindings>
    <client>
      <endpoint address="https://server/EchoService.svc"
                binding="basicHttpBinding" bindingConfiguration="BasicHttpBinding_IEchoService"
                contract="EchoService.IEchoService" name="BasicHttpBinding_IEchoService" />
    </client>
  </system.serviceModel>
</configuration>

In case you find it difficult to read the code on WordPress, you can read it on GitHub via the links below:

I hope you have found this post useful, allowing you to implement a WCF client with Mutual Authentication without relying on the Certificate Store, and making your coding easier and happier!

Using PowerShell to remove users from an Exchange Online in-place hold policy

Originally posted on Lucian's blog. Follow Lucian on Twitter @lucianfrango.

In-place hold, legal hold, compliance hold, journaling and/or "select D": all of the above, boiled down to their simplest form, amount to storing emails for X amount of time in case there's a problem and they need to be reviewed. What's great about Office 365 Exchange Online is the ability to store those emails in the cloud for 2,555 days (roughly 7 years).

Let's fast forward to having in-place hold enabled for an Exchange Online tenant. In my reference case I have roughly 10,500 users in the tenant and numerous in-place hold policies, with the largest containing 7,500 or so users. I've run into a small problem with this hybrid environment whereby I need to move a mailbox that is covered by an in-place hold policy (let's call it "Lucians Mailbox Search Policy") back to on-premises for a couple of reasons.

The following blog post outlines how to remove users from an in-place hold via PowerShell as the Office 365 / Exchange Online Control Panel may not let you do that when you have thousands of users in a single hold policy.
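As a rough sketch only (cmdlet parameters are trimmed and the mailbox filter is a placeholder), the general shape of the approach with the Exchange Online PowerShell cmdlets looks like this:

# Grab the hold policy, drop the mailbox being moved from its source list, and write it back
$hold = Get-MailboxSearch "Lucians Mailbox Search Policy"
$sources = $hold.SourceMailboxes | Where-Object { $_ -notlike "*username.to.remove*" }
Set-MailboxSearch -Identity "Lucians Mailbox Search Policy" -SourceMailboxes $sources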
Read More

Windows Server 2012 R2 (ADFS 3.0): Migrating ADFS Configuration Database from WID to SQL

You already have a working ADFS setup which has been configured to use the Windows Internal Database (WID) to store its configuration database. However, things may have changed since you implemented it and you may now have one (or more) of the below requirements which will need an upgrade to SQL server.

  • Need more than five federation servers in the ADFS Farm (supporting more than 10 relying parties)
  • Leverage high availability features of SQL or
  • Enable support for SAML artefact resolution or WS Federation token replay detection.

The below step-by-step procedure should help you with the migration of the ADFS configuration database from WID to SQL with minimal or no downtime (however, plan accordingly such that it has the least impact in case something goes wrong).

The steps also cover configuration of each of the ADFS servers (Primary and Secondary) in the farm to use the SQL Server for its configuration database.

For simplicity, I have used the below scenario, comprising:

Proposed Design


  • Two Web Application Proxies (WAP) – wap1 and wap2
  • External load balancer (ELB) in front of the WAPs.

Private / Corporate network

  • Two ADFS Servers – adfs1 and adfs2
  • Internal Load Balancer (ILB) in front of the ADFS Servers
  • SQL Server (Standalone). Additional steps need to be performed (not covered in this blog) when using SQL Server with high availability options such as SQL Always-On or Merge Replication


Ensure you have a complete backup of your ADFS servers. You can use Windows Server Backup or your third-party backup solution to back up the ADFS servers.

Load Balancer Configuration

During the course of this exercise the internal load balancer will be configured multiple times to ensure a smooth migration with minimal impact to end users.

Remove the primary ADFS Server (adfs1) from the internal load balancer configuration such that all traffic is directed to the secondary server (adfs2).

Primary ADFS Server steps

  • Stop the ADFS windows service by issuing “net stop adfssrv” in an elevated command prompt or via the Windows Services Manager.

net stop adfssrv

  • Download and install SQL Server Management Studio (SSMS) (if not already present)
  • Launch SSMS in Administrator mode
  • Connect to your WID using \\.\pipe\MICROSOFT##WID\tsql\query as the server name in SSMS.

SSMS connect dialog

You should be able to see the two ADFS databases (AdfsArtifactStore and AdfsConfiguration) as shown below:

SSMS showing the two ADFS databases

  • To find the physical location of the ADFSConfiguration and ADFSArtifactStore in WID, run the below query  by starting up a ‘New Query’. The default path is C:\Windows\WID\Data\.
SELECT name, physical_name AS current_file_location FROM sys.master_files

Results showing physical location of DB files

  • Restart WID from SSMS. This is just to ensure that there is no lock on the databases. Right Click on the WID db and select ‘Restart‘.

Restarting the database


  • Now we need to detach both the databases. Run the below query on the WID using SSMS
USE [master]
EXEC master.dbo.sp_detach_db @dbname = N'AdfsArtifactStore'
EXEC master.dbo.sp_detach_db @dbname = N'AdfsConfiguration'

Running the commands on the WID

  • Now copy the databases identified earlier from the Primary ADFS Server to your SQL Server’s Data directory (for example C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA).


SQL Server – Steps

  • On the SQL Server, bring up the SQL Server Management Studio (SSMS) and connect to the SQL instance (or default instance) where the ADFS databases will be hosted.
  • Create a login with the ADFS windows service account (which was used for the initial ADFS setup and configuration). I used Contoso\svcadfs.

Adding SQL Server user

  • Now attach the databases copied earlier on the SQL server. Run the below using the SQL Server Management Studio. Modify the path as appropriate if the db files were copied to a location other than ‘C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA’

USE [master]
CREATE DATABASE [AdfsConfiguration] ON
( FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\AdfsConfiguration.mdf' ),
( FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\AdfsConfiguration_log.ldf' )
FOR ATTACH

CREATE DATABASE [AdfsArtifactStore] ON
( FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\AdfsArtifactStore.mdf' ),
( FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\AdfsArtifactStore_log.ldf' )
FOR ATTACH

ALTER DATABASE AdfsConfiguration set enable_broker with rollback immediate

  • On successful execution of the above, you should be able to see the two ADFS databases in SSMS (you may need to do a refresh if not displayed automatically)

Two databases shown in SSMS

  • Ensure that the ADFS Service Account has the “db_genevaservice” role membership on both the databases

Grant service account right database role

Firewall Configuration

Ensure that the SQL Server is reachable from the ADFS servers on port 1433. You may need to update network firewalls and / or host firewall configuration on the SQL Server (depending on the type of network setup you may have).
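On Windows Server, for example, a host firewall rule for this can be added with something along these lines:

# Allow inbound SQL Server traffic on TCP 1433 (scope it to your ADFS subnet as appropriate)
New-NetFirewallRule -DisplayName "SQL Server (ADFS configuration DB)" -Direction Inbound -Protocol TCP -LocalPort 1433 -Action Allow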

Primary ADFS Server Steps

  • Start the ADFS windows service by issuing “net start adfssrv” from an elevated command prompt or from the Windows Services Manager
  • Launch a PowerShell console in Administrator Mode and execute the below lines in order

 $temp = Get-WmiObject -namespace root/ADFS -class SecurityTokenService
 $temp.ConfigurationdatabaseConnectionstring = "data source=[sqlserver\instance];initial catalog=adfsconfiguration;integrated security=true"
 # persist the change back to the ADFS configuration
 $temp.put()

Note: replace [sqlserver\instance] with actual server\instance. If not running as an instance, just server. I am using ‘SQLServer’ as it is the hostname of the SQL server being used in this example.

PowerShell Configuration

  • Change the connection string property in “AdfsProperties” by issuing the below command from the PowerShell console

Set-AdfsProperties -ArtifactDbConnection "Data Source=[sqlserver\instance];Initial Catalog=AdfsArtifactStore;Integrated Security=True"

Note: Change [sqlserver\instance]  with the name of your SQL server and instance (as applicable)

PowerShell Configuration

  • Restart the ADFS service by executing "net stop adfssrv" and "net start adfssrv" from an elevated command prompt or from the Windows Services Manager.

Restarting service

  • To check if the configuration has been successful, run “Get-AdfsProperties” from a PowerShell console. You should see the ADFS properties listed (as below) with the key being Data Source=SQLServer; Initial Catalog=AdfsArtifactStore; Integrated Security=True
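For example, to pull out just the artifact store connection string:

Get-AdfsProperties | Select-Object ArtifactDbConnection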

Output from Get-AdfsProperties

This completes the migration of the ADFS configuration database from WID to SQL and also the configuration of the Primary ADFS server to use the SQL Database. Now we need to configure the secondary ADFS server(s) to use the SQL Database.

Load Balancer Configuration

Update the internal load balancer to:

  • Add the Primary ADFS (adfs1) to the load balance configuration and
  • Remove the secondary ADFS (adfs2) server which needs to be reconfigured to point to the SQL Server.

Secondary ADFS Server steps

  • Stop the ADFS Windows service by issuing “net stop adfssrv” in an elevated command prompt
  • To change the configuration database connection string to point to the new SQL ADFS configuration database run the below command lines (in order) from a PowerShell Console
$temp = Get-WmiObject -namespace root/ADFS -class SecurityTokenService
$temp.ConfigurationdatabaseConnectionstring = "data source=[sqlserver\instance];initial catalog=adfsconfiguration;integrated security=true"
# persist the change back to the ADFS configuration
$temp.put()

Note: Change [sqlserver\instance] with the name of your SQL server / instance as used for the primary server configuration.

PowerShell Configuration

  • Start the ADFS service by executing "net start adfssrv" from an elevated command prompt and verify that the service starts up successfully. (I have had an issue where my ADFS server was (strangely) not able to resolve the NetBIOS name of the SQL Server, hence the service wouldn't start properly. Also check that the federation service is running as the service account that was granted the login to the SQL database.)
  • To check if the configuration has been successful, run “Get-AdfsProperties” from a PowerShell console. You should see the ADFS properties listed (as below) with the key being  Data Source=SQLServer; Initial Catalog=AdfsArtifactStore; Integrated Security=True

Output from Get-AdfsProperties

Repeat the above steps for each of the secondary servers (if you have more than one) and ensure that all ADFS servers are added back to the internal load balancer configuration.

I hope this post has been useful.

Building Applications with Event Sourcing and CQRS Pattern

When we start building an application in the cloud, on Azure for example, we should consider many factors. Those factors include flexibility, scalability, performance and so on. In order to satisfy them, the components making up the application should be loosely coupled and ready for extension and change at any time. For those considerations, Microsoft has introduced 24 cloud design patterns. Even though they are called "Cloud Design Patterns", they can be used for general application development anyway. In this post, I'm going to introduce the Event Sourcing pattern and the CQRS pattern and how they can be used in a single page application (SPA) such as an AngularJS application.

The complete code sample can be found here.

Patterns Overview

I'm not going into too much detail here explaining what the Event Sourcing (ES) pattern and the CQRS pattern are. According to the articles linked above, ES and CQRS get along with each other easily. As the name itself says, CQRS separates commands from queries – commands and queries use different datasets – and ES provides an event stream as the data store (commands) plus materialisation and replaying (queries). Let's take a look at the diagram below.


This explains how ES and CQRS work together. Any individual input (or behaviour) from a user on the presentation layer (the Angular app in this post) is captured as an event and stored in the event stream with a timestamp. This storing action is append-only, i.e. events are only ever added. Therefore, the event stream becomes a source of truth, and all events captured and stored in the event stream can be replayed for queries or materialised for transactions.

OK. Theory is enough. Let’s build an Angular app with Web API.

Client-side Implementation for Event Triggering

There are three user input fields – Title, Name and Email – and the Submit button. Each field and the button acts as an event source. In this code sample, the events are named SalutationChangedEvent, UsernameChangedEvent, EmailChangedEvent and UserCreatedEvent. Those events are handled by event handlers on the Web API side. What the Angular app does is capture the input values as they are changed and when the button is clicked. Here is a sample TypeScript snippet for the name field directive.

This HTML is a template used for the directive below. ng-model captures the field value, and the value is sent to the server to be stored in the event stream.

Please bear in mind that, as this is written in TypeScript, the coding style is slightly different from the original Angular 1.x way.

  1. The interface IUserNameScope defines the model property and the change function. It inherits from $scope.
  2. The interface is injected into both the link function and the controller of the UserName directive, which implements ng.IDirective.
  3. The link function of the directive takes care of everything DOM related.
  4. The link function calls the function declared in $scope to send an AJAX request to the Web API.
  5. A POST AJAX request is sent through userNameFactory to the server.
  6. A response comes back from the server as a promise, and the response is passed to replayViewFactory for replay.

Both the Title and Email fields work the same way as the Name field. Now, let's have a look at what the replay view section looks like.

This HTML template is used for the directive below. The following directive only replays responses.

As you can see, this directive only calls the replayViewFactory.getReplayedView() function to display what the changes are. How do those events get consumed on the server side then? Let's move on to the next part.

Server-side Implementation for Event Processing

The POST request is sent to a designated endpoint like this:

This request is captured in this Web API action:

The action in the controller merely calls the this._service.ChangeUsernameAsync(request) method. Not too exciting. Let's dig into the service layer then.

  1. Based on the type of the request passed in, an appropriate request handler is selected.
  2. The request handler converts the request into a corresponding event. In this code sample, the UsernameChangeRequest is converted to a UsernameChangedEvent by the handler.
  3. An event processor takes the event and processes it.

A question may arise here: how does request handler selection work? Each request handler implements IRequestHandler, which defines two methods:

Therefore, you can create as many request handlers as you like and register them in your IoC container (using Autofac, for example) like this:

The sample code used here registers five request handlers. If your business logic is far more complex and requires many request handlers, you might want to consider automating their registration as a module. I'll discuss this in another post soon. Another question may arise: how does the event processor work? Let's have a look. Here's the event processor:

This is quite similar to EventStreamService.ChangeUsernameAsync(). First of all, find all event handlers that can handle the event. Then the selected event handlers process the event, as all event handlers implement the IEventHandler interface:

To wrap up,

  1. A user action is captured on the client side and passed to the server side as a request.
  2. The user action request is converted to an event by request handlers.
  3. The event is then processed and stored in the event stream by event handlers.

Of course, I'm not arguing this is the perfect example of event processing. However, at least it works and is open for extension, which is good.

Replaying Events

Now, all events are raised and stored in the event stream with timestamps. The event stream becomes a source of truth. Therefore, if we want to populate a user's data for a particular point in time, as long as we provide a timestamp we are able to load the data without impacting the actual data store. If you run the code sample locally and make some user input changes, you'll actually be able to see the replayed view.

Now, let’s store the user data into the real data store by event materialisation.

Materialising Events

When you hit the Submit button, the server side replays all events from the event stream with the current timestamp for materialisation. Then the materialised view is stored in the User table. As this is considered another event, a UserCreatedEvent is created and processed by UserCreatedEventHandler. Unlike the other event handlers, it uses not only the event stream repository but also the user repository.

In other words, the event itself is stored in the event stream and the user data from the event is stored in the user repository. Once stored, you will be able to find it on the screen.

Please note that if you change Title, Name or Email but have not yet clicked the Submit button, you'll see a difference like the following screen:

So far, we've briefly discussed both the ES pattern and the CQRS pattern with a simple Angular – Web API app. How did you find it? Wouldn't it be nice for your next application? Keep one thing in mind: applying these patterns might bring an overly complex architecture into your application, as there are many abstraction layers involved. If your application is relatively simple or small, you don't have to consider these patterns. However, if your application is growing and becoming heavier and more complex, then it's time to consider implementing them. Need help? We are Kloudies, the expert group.

Google Cloud Messaging using Azure Notification Hub

The Xamarin team have provided a very helpful tutorial to get started with setting up Android notifications – I suggest using this to get the basic implementation working. Ensure you're using the GcmRegistrationIntentService version and avoid using the deprecated GCM Client.

To get a complete end to end solution up and running there is a fair bit of additional information required.  Here’s a guide for using Microsoft Azure Notification Hub.

Set up Google Cloud Messaging service

Log into the Google Developer Console and, if you haven't done so already, create a project for your application.

Once created make a note of the Project ID as this will be used in your application code.  When the application starts it will send the Project ID to Google to exchange for a device token.  This unique token is what will be used to target each device with a notification.  Be sure to check for a new token each time the application starts as it is subject to change.

Next step is to enable the Google “Cloud Messaging for Android” and create an API key:


Click Credentials, create a new "Server Key" and make a note of it for later.  We'll need it when we set up the Azure Notification Hub, and also to test sending a notification.  Whilst here you may want to think about how you'll set up your dev/test/prod implementations, as it's a good idea to keep these separate.


* the key used here is for dev purposes only!

Set up the Azure notification hub

Next step is to set up the Microsoft Azure Notification Hub.  Log into the portal and create a new App Service > Service Bus > Notification Hub.  As soon as the service is running you can configure the Google Cloud Messaging settings by entering the API key created in the previous step into the GCM API Key field.


Set up the Android code

Before you begin, make sure you add the following to your packages.config file:

<package id="Xamarin.GooglePlayServices.Base" version="" targetFramework="MonoAndroid50" />
<package id="Xamarin.GooglePlayServices.Gcm" version="" targetFramework="MonoAndroid50" />

For my implementation the client wanted to pop up a message when the customer is using the application as well as adding a message to the notification status bar. At the point of receiving the message I create a pending intent to display a message when the customer opens the app (by clicking the status bar message), and display a message if the application is open when the message is received.

public override void OnMessageReceived (string from, Bundle data)
{
    Log.Info (TAG, "GCM message received");

    if (from == Constants.SenderId) // make sure we sent the message
    {
        try
        {
            var messageId = int.Parse (data.GetString ("account")); // message ID (account number)
            var message = data.GetString ("msg");                   // message body

            if (CurrentApplicationState.appIsInForeground)
            {
                // the app is open - display the message to the customer straight away
                // (in-app display code not shown here)
            }

            // create a pending intent to display to the customer at a later date
            // via the notification status bar
            CreateSystemNotification (message, messageId);
        }
        catch (System.Exception e)
        {
            Log.Debug (TAG, "Error displaying message: " + e);
        }
    }
}



When a message has been received the following code is used to create the pending intent, which will fire when the user selects the notification from the Android notification status bar. For this notification I have chosen to display a message that can be expanded when the user selects it. The other important point here is the messageId is used when displaying the notifications. This ensures a unique notification will be displayed for each account in the status bar. If a notification is delivered with the same ID, then it will replace the existing one.

private void CreateSystemNotification (string message, int messageId) {
  Notification.BigTextStyle textStyle = new Notification.BigTextStyle ();
  textStyle.BigText (message);

  var intent = new Intent (this, typeof(MainActivity));
  intent.AddFlags (ActivityFlags.SingleTop);
  intent.AddFlags (ActivityFlags.ClearTop);

  var pendingIntent = PendingIntent.GetActivity (this, 0, intent, PendingIntentFlags.UpdateCurrent);

  var notificationBuilder = new Notification.Builder (this)
   .SetSmallIcon (Resource.Drawable.app_notifications_status_bar)
   .SetContentTitle (ApplicationInfo.LoadLabel(PackageManager)) // app name
   .SetContentText (message)
   .SetAutoCancel (true)
   .SetContentIntent (pendingIntent)
   .SetLargeIcon (BitmapFactory.DecodeResource (Resources, Resource.Drawable.agl_launch_icon))
   .SetSound (RingtoneManager.GetDefaultUri (RingtoneType.Notification))
   .SetStyle (textStyle);

  _notificationManager = (NotificationManager)GetSystemService (Context.NotificationService);
  _notificationManager.Notify (messageId, notificationBuilder.Build ());
}

To keep track of whether the application is in the foreground I use the following:

public static class CurrentApplicationState
{
    private static bool _appIsInForeground;

    public static bool appIsInForeground
    {
        get { return _appIsInForeground; }
        set { _appIsInForeground = value; }
    }
}

And I call this from my base application:

protected override void OnPause ()
{
    base.OnPause ();
    CurrentApplicationState.appIsInForeground = false;
}

protected override void OnResume ()
{
    base.OnResume ();
    CurrentApplicationState.appIsInForeground = true;
}


In order for a user to receive notifications on his/her Android device they’ll need to have Google Play installed. There is no way of avoiding this as we are using the Google Cloud Messaging service! This code can be used to detect if Google Play is installed and on the latest version:

public static bool IsGoogleServicesAvailable ()
{
    GoogleApiAvailability googleAPI = GoogleApiAvailability.Instance;
    int resultCode = googleAPI.IsGooglePlayServicesAvailable (Context);
    if (resultCode != ConnectionResult.Success)
    {
        return false; // There is an issue with the Google Play version installed on the device
    }
    return true;
}

With this implementation you're relying on Google to provide the messaging. I choose not to inform the customer at this point as the message from Google is fairly blunt. As you can see below, it mentions the application won't run without updating Google Play, which is not strictly true, as it will run fine! Instead I pop up a message at the point at which they configure the notification service, which happens much deeper in the app than the main activity!

Using the simulator

If, like me, you find it easier to use the Xamarin Android emulator for debugging, you'll need to install Google Play before you can test notifications on the simulator.  Download the Gapps package from the following website: Simply drag the zip file into the emulator window (this works on a Mac, I'm not sure about Windows?) and follow the on-screen prompts to install Google Play.

Log in with a Google account and you’re ready to test your notifications!

Testing the service

In order to make sure things are running as expected you can send a test notification.  There are a number of ways to do this but my preference is to use Postman (the Chrome extension) with the following settings:

POST /gcm/send HTTP/1.1
Authorization: key=[YOUR API KEY]
Content-Type: application/json
Cache-Control: no-cache

And the body of the POST request:

   "type": "android", 
   "data": { 
     "This is the title of the message", 
     "message": "You have been served, from the Google Cloud Messaging service.", 
     "account": 123456789 
    "to" : "eD4ceBas2SN3:APA93THISchdsISsasdNOTqwe9asczAasd98EHsdREALasd0c+KEYv0kx50GZPsyc3ah6_eyvur-wvwVQe6Lfbv5ICijBfYOCkujQK271sK-RmxTe-Y_Aofx1RCe7yfnYgK7MEL7xgqY"

Image below for further clarity!
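If you'd rather send the test from PowerShell, an equivalent request looks roughly like this (same API key and device token placeholders; the URI is the GCM HTTP endpoint current at the time of writing):

# Hedged example - push a test notification through the GCM HTTP API
$headers = @{ Authorization = "key=[YOUR API KEY]" }
$body = @{
    data = @{
        title   = "This is the title of the message"
        message = "You have been served, from the Google Cloud Messaging service."
        account = 123456789
    }
    to = "[DEVICE TOKEN]"
} | ConvertTo-Json
Invoke-RestMethod -Uri "https://gcm-http.googleapis.com/gcm/send" -Method Post -Headers $headers -ContentType "application/json" -Body $body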


Or if you prefer you can also log into Azure and send a notification from there. In order to send a message to a specific device you will need to set up a server side implementation and then target a device using the “Send to Tag” option.  As I only have 1 device activated I can choose Random Broadcast and it will send an alert to my test device.


Hopefully this has provided a good taster of how to set up Google Cloud notifications on Android using the Microsoft Azure Notification Hub. In order to target specific devices you'll need to create a server-side API to maintain a dataset of devices and associated tokens. Hopefully, if time permits, this will be the subject of a future post!

If you’re looking for more information, I found the following sites helpful when setting this up:
The Xamarin Android notification guide:
Google Cloud Messaging docs:

Modern Authentication updates for Office 2013 (MSI-based)

Earlier this year, Office 2013 Modern Authentication using the Active Directory Authentication Library (ADAL) moved to public preview. The steps to take part in the preview and to prepare the Office 2013 software are well documented, particularly by one of my fellow Kloudies (see Lucian’s blog here).

However, you may find that despite creating the registry keys and installing the required updates, Modern Authentication is still not working – something I recently encountered with MSI-based installations of Office 2013 SP1 in a Windows 7 SOE.

With the assistance of the friendly Microsoft Premier Engineering team, we identified that for Modern Authentication to function the following components should be at version 15.0.4625.1000 or greater:

  • C:\Program Files\Common Files\Microsoft Shared\OFFICE15\MSO.DLL
  • C:\Program Files\Common Files\Microsoft Shared\OFFICE15\Csi.dll
  • C:\Program Files\Microsoft Office\Office15\GROOVE.EXE
  • C:\Program Files\Microsoft Office\Office15\OUTLOOK.EXE

And this DLL should be at version 1.0.1933.710 or greater:

  • C:\Program Files\Common Files\Microsoft Shared\OFFICE15\ADAL.DLL

To achieve the necessary version levels, the following updates were installed:

In the land of SCCM managed desktops, not all are managed equally. We found varying patch levels across the fleet and needed a way to quickly identify which Modern Authentication prerequisites were missing, so I wrote this script which can be saved as a ps1 and executed on any Win 7 32-bit PCs to test their readiness:

Write-host "Scanning for Office 2013 Modern Authentication prerequisites..."
If (Test-Path "HKLM:\SOFTWARE\Microsoft\Office\15.0") {
#Create registry keys
$Path = "HKCU:\Software\Microsoft\Office\15.0\Common\Identity"
If (!(Get-Item $Path -ErrorAction SilentlyContinue)) {New-Item $Path -Force | Out-Null}
New-ItemProperty $Path -Name Version -PropertyType DWORD -Value 1 -Force | Out-Null
New-ItemProperty $Path -Name EnableADAL -PropertyType DWORD -Value 1 -Force | Out-Null
#Check for updates
$UpdatesRequired = $False
If (![bool]((Get-Item "C:\Program Files\Common Files\Microsoft Shared\Office15\MSO.DLL").VersionInfo.FileVersion -ge "15.0.4625.1000")) {
Write-host "MSO.DLL requires update -" -Foregroundcolor Red
$UpdatesRequired = $True
If (![bool]((Get-Item "C:\Program Files\Common Files\Microsoft Shared\Office15\Csi.dll").VersionInfo.FileVersion -ge "15.0.4625.1000")) {
Write-host "Csi.dll requires update -" -Foregroundcolor Red
$UpdatesRequired = $True
If (![bool]((Get-Item "C:\Program Files\Microsoft Office\Office15\Groove.exe").VersionInfo.FileVersion -ge "15.0.4625.1000")) {
Write-host "Groove.exe requires update -" -Foregroundcolor Red
$UpdatesRequired = $True
If (![bool]((Get-Item "C:\Program Files\Microsoft Office\Office15\Outlook.exe").VersionInfo.FileVersion -ge "15.0.4625.1000")) {
Write-host "Outlook.exe requires update -" -Foregroundcolor Red
$UpdatesRequired = $True
If (![bool]((Get-Item "C:\Program Files\Common Files\Microsoft Shared\OFFICE15\ADAL.dll").VersionInfo.FileVersion -ge "1.0.1933.710")) {
Write-host "ADAL.dll requires update -" -Foregroundcolor Red
$UpdatesRequired = $True
If ($UpdatesRequired) {Write-host "Scan complete: install the updates and re-run the script" -Foregroundcolor Red}
Else {Write-host "Scan complete: Office 2013 Modern Authentication prerequisites have been met" -Foregroundcolor Green}
Else {Write-Host "Scan complete: Office 2013 is not installed" -Foregroundcolor Red}

Hope this is helpful!

Polycom VVX Phone – Call Transfer Options

During the pilot phase of a Skype for Business Enterprise Voice rollout it is standard practice to get some IP handsets in and test functionality. It's important to try to match feature sets between the old and new handsets to make sure adoption of the new handset is simple.

How did you do it before? Here is how you do it now.

The longest conversation I have in this phase of testing is about which transfer type the business will adopt as the default for all phones. There are three types of transfer that I like to discuss in Skype for Business:

  1. Blind Transfer – Transfer happens immediately
  2. Consultative Transfer – Transfer happens after a consult with the transferring recipient
  3. Safe Transfer – Returns the call on a blind transfer if the recipient doesn’t answer

The Polycom VVX Business Media phone range is what I prefer to deploy in organisations adopting Skype for Business. Polycom offer flexibility in how you configure the phone. This is done via the in-band provisioning process built in by Polycom for configuration file download. The configuration file allows a default transfer type to be associated with the Transfer soft key and hard key in a standard call scenario. Two options are available: Blind or Consultative.


Having a single default transfer type isn't always optimal, especially for workers who handle a high call volume on a public-facing number, i.e. receptionist/admin workers. When asked which transfer type they used in their job pre Skype for Business, these users quite often give me this response:

“Sometimes I like blind and other times I like consultative, it all depends on the calling party and how comfortable I feel with the requester in each call”.

This seems like a fair statement, but it's not easily achieved with a VVX phone. I would tend towards the default being blind, and if the user wanted to do an ad-hoc consultative transfer we would say to put the calling party on hold while you liaised with the destination party and stitch the call together afterwards. Not perfect. Another method is to create a different configuration file for special cases where the user needs consultative as their default. Once again, not perfect.

My idea was to put soft keys for both blind and consultative transfer on the 'in call' window of the phone via my customised configuration file. This is achievable, but I never sat down for long enough to pilot the process in its entirety. Older versions of the firmware, I believe, had consultative as the default; during a call you had the option of transferring using the blind method via a soft key, but it wasn't always on the first page of soft keys. Users wouldn't always have the time or patience to go and find the Blind key on the second page under the 'More' button.

But alas, Polycom have come to the rescue with the UC Software 5.3+ release now available for the VVX range, which includes the simplest of changes in the user guide that got me excited about an addition in functionality. See the User Guide (page 46, or the steps below).

To transfer a call:

  1. During a call, do one of the following
    1. Press Transfer to use the default transfer type
    2. Press and hold Transfer and select a transfer type.
  2. Dial a number or choose a contact from the call list or directory.
    1. If the transfer type is set to Blind, the call is transferred immediately.
  3. If the transfer type is set to Consultative, press Transfer after speaking with your contact.

Well there you have it, a very simple method of selecting a transfer method during a call. In some ways almost comical that I never once tried to hold down the transfer key. Now I do it by design. This is definetly something I’ve updated in my ‘how-to’ guides for end users adoption as it was the most common question I get in regards to VVX IP phones.