Azure Preview Features website

I had stumbled upon this site before; however, on my long journey through the interwebs I must have forgotten or lost it. The site I’m referring to is the Azure Preview Features site, which isn’t directly accessible through the main Azure site’s top or bottom menus. So, as this is a lucky find, I thought I’d share.

(Note: if you Google “Azure preview”, the site is the first result that comes up. Face palm?)


The Azure Preview Features site lists the currently available public preview features and functionality. Microsoft describes the Azure previews as follows:

Azure currently offers the following preview features, which are made available to you for evaluation purposes and subject to reduced or different service terms, as set forth in your service agreement and the preview supplemental terms. Azure may include preview, beta, or other pre-release features, services, software, or regions to obtain customer feedback (“Previews”). Previews are made available to you on the condition that you agree to these terms of use, which supplement your agreement governing use of Microsoft Azure.

Read More

Azure VNET gateway: basic, standard and high performance

Originally posted on Lucian’s blog. Follow Lucian on Twitter: @Lucianfrango.


I’ve been working a lot with Azure virtual network (VNET) virtual private network (VPN) gateways of late. The project I’m working on at the moment requires two sites to connect to a multi-site dynamic routing VPN gateway in Azure. This provides redundancy when connecting to the Azure cloud, as there is a dedicated link between the two branch sites.

Setting up a multi-site VPN is a relatively streamlined process and Matt Davies has written a great article on how to run through that process via the Azure portal on the Kloud blog.

Read More

ADFS sign-in error: “An error occurred. Contact your administrator for more information.”

Originally posted on Lucian’s blog. Follow Lucian on Twitter: @Lucianfrango.

I’ve not had that much luck deploying Azure AD Connect and ADFS 3.0 in Azure for a client in the last few weeks. After some networking woes I’ve moved on to the server provisioning and again got stuck. Now, I know IT is not meant to be easy, otherwise there wouldn’t be some of the salaries paid out to the best and brightest, but this install was simple and nothing out of the ordinary: a standard deployment that I and many others have done before.

Let me paint the picture: ADFS is now running, although not working, in Azure compute across a load balanced set of two servers, with a further load balanced set of web application proxy (WAP) servers in front. There are two domain controllers and an AAD Connect server, all across a couple of subnets in a VNET.

Read More

AWS Direct Connect in Australia via Equinix Cloud Exchange

I discussed Azure ExpressRoute via Equinix Cloud Exchange (ECX) in my previous blog. In this post I am going to focus on AWS Direct Connect, which ECX also provides. This means you can share the same physical link (1Gbps or 10Gbps) between Azure and AWS!

ECX also provides connectivity to AWS for connection speeds below 1Gbps. AWS Direct Connect provides dedicated, private connectivity between your WAN or datacenter and AWS services such as AWS Virtual Private Cloud (VPC) and AWS Elastic Compute Cloud (EC2).

AWS Direct Connect via Equinix Cloud Exchange is an Exchange (IXP) provider based connection, allowing us to extend our infrastructure with connectivity that is:

  • Private: The connection is dedicated and bypasses the public Internet, which means better performance, increased security, consistent throughput and the ability to run hybrid cloud use cases (even hybrid with Azure, when both connect via Equinix Cloud Exchange)
  • Redundant: If we configure a second AWS Direct Connect connection, traffic will failover to the second link automatically. Enabling Bidirectional Forwarding Detection (BFD) is recommended when configuring your connections to ensure fast detection and failover. AWS does not offer any SLA at the time of writing
  • High Speed and Flexible: ECX provides a flexible range of speeds: 50, 100, 200, 300, 400 and 500Mbps.

The only tagging mechanism supported by AWS Direct Connect is 802.1Q (Dot1Q). AWS always uses 802.1Q (Dot1Q) on the Z-side of ECX.
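To make the tagging concrete, here is a small illustrative sketch (my own, not from the original setup) of how an 802.1Q tag is built: a 16-bit TPID of 0x8100 followed by a 16-bit TCI carrying the priority, DEI bit and 12-bit VLAN ID.

```python
import struct

def dot1q_tag(vlan_id: int, priority: int = 0, dei: int = 0) -> bytes:
    """Build the 4-byte 802.1Q tag: TPID 0x8100 plus the TCI
    (3-bit priority, 1-bit DEI, 12-bit VLAN ID)."""
    if not 0 <= vlan_id <= 4094:
        raise ValueError("VLAN ID must be 0-4094")
    tci = (priority << 13) | (dei << 12) | vlan_id
    return struct.pack("!HH", 0x8100, tci)

# A hypothetical buyer-assigned VLAN tag of 100 on the ECX A-side:
print(dot1q_tag(100).hex())  # 81000064
```

The buyer picks the VID on the A-side; AWS assigns its own on the Z-side, but the wire format is the same 4-byte tag.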

ECX pre-requisites for AWS Direct Connect

The pre-requisites for connecting to AWS via ECX:

  • Physical ports on ECX. Two physical ports on two separate ECX chassis are required if redundancy is needed.
  • Virtual circuits on ECX. Two virtual circuits are also required for redundancy.

Buy-side (A-side) Dot1Q and AWS Direct Connect

The following diagram illustrates the network setup required for AWS Direct Connect using Dot1Q ports on ECX:


The Dot1Q VLAN tag on the A-side is assigned by the buyer (A-side). The Dot1Q VLAN tag on the seller side (Z-side) is assigned by AWS.

There are a few steps needing to be noted when configuring AWS Direct Connect via ECX:

  • We need our AWS Account ID to request ECX Virtual Circuits (VCs)
  • Create separate Virtual Interfaces (VIs) for Public and Private Peering on the AWS Management Console. Two ECX VCs and two AWS VIs are needed for redundant Private or Public Peering.
  • We can accept the Virtual Connection either from the ECX Portal after requesting the VCs, or on the AWS Management Console.
  • Configure our on-premises edge routers for BGP sessions. We can download the router configuration for our BGP sessions from the AWS Management Console.
  • Attach the AWS Virtual Gateway (VGW) to the Route Table associated with our VPC.
  • Verify the connectivity.

Please refer to the AWS Direct Connect User Guide on how to configure edge routers for BGP sessions. Once we have configured the above we will need to make sure any firewall rules are modified so that traffic can be routed correctly.

I hope you’ve found this post useful – please leave any comments or questions below!

Read more from me on the Kloud Blog or on my own blog.

Azure AD Connect: Connect Service error “stopped-extension-dll-exception”

Originally posted on Lucian’s blog. Follow Lucian on Twitter: @Lucianfrango.

I was rather stuck the other day. Azure AD Connect provisioning has not been the smoothest of installs, even following the wizard and successfully completing the mostly automated process. Azure AD Connect builds upon the previous generation of sync services and, from what I’ve read, isn’t so much a new app as a version upgrade and re-brand of the AADSync service, which is still (as of July 2015) the default for Office 365 directory replication from on-premises to Azure AD.

Past versions and previous generation aside, a now generally available app should feature a working and thoroughly tested feature set. Should…

Read More

How to provision Azure Active Directory Connect

Originally posted by Lucian Franghiu on his blog, #clouduccino.

Time flies when you’re connecting to Azure AD. Late last month Microsoft announced that Azure AD Connect is now generally available. At the time of writing this, the synchronisation app itself still isn’t the default sync standard for Azure and obtaining the installer requires a quick Google. Since I’m deploying it for a client, I thought I’d run through the install process for future reference.

AADConnect provides a lot of new functionality, like this new fandangled ADDS password sync. In this scenario I’m keeping federation services, so ADFS will be deployed, which is more aligned with the previous and most common enterprise identity design.

This is going to be a long blog post with a lot of screenshots (you’re welcome) on how to deploy Azure AD Connect. I’ll be going through the wizard process, which follows the automated process to deploy AADConnect, ADFS and ADFS WAP servers – pretty cool indeed.

At the moment AADConnect still isn’t the standard synchronisation service for Office 365 or Azure AD and requires download from the Microsoft Download Centre. To begin with, I’ve downloaded the AADConnect installer from this location.

Read More

Sharing HTTP sessions between WebView requests and HttpClient on Windows Phone


I have been working on a hybrid mobile application that requires displaying/containing a few mobile apps in a WebView control. In the background, some HTTP requests need to go through to collect data and do further processing. We need to maintain the same session in all of the web requests going through the mobile app. This means all web (HTTP) requests originated by the Webview as well as our background (HttpClient) requests need to share cookies, cache, etc. So how can we achieve this?


HttpClient has become the go-to class for all things HTTP, especially with the support of HttpClient in PCLs (Portable Class Library), who can resist it? So my first thought when I considered this requirement was to use HttpClient with a HttpClientHandler, preserve the session cookies, and share them with the WebView. I started my initial research, and I found that somebody has done exactly that, you can find it here. This gave me some more confidence for a successful solution.

This first approach would mean using HttpClient (along with an HttpClientHandler) to hold cookies and share them with the WebView. However, this would be error-prone because I would need to continuously monitor both sets of cookies and keep the other group of requests updated. Plus, sharing the cached data between the WebView and HttpClient would still be an issue that I was not sure how to address.


Before going further, I thought I would look for an alternative, and I found Windows.Web.HttpClient. This one seemed very similar to System.Net.Http.HttpClient, but the implementation is quite different, despite the matching name :). I found this video (below) from Microsoft’s //Build conference that refers to this different implementation of HttpClient for Windows specific development.

Five Great Reasons to Use the New HttpClient API to Connect to Web Services


Apparently the Windows implementation of HttpClient gives you the ability to customise all aspects of your HTTP requests. The video above lists the following five reasons why you should use Windows.Web.HttpClient:

  1. Shared Cookies, Cache, and Credentials
  2. Strongly Typed headers => fewer bugs
  3. Access to Cookies and Shared Cookies
  4. Control over Cache and Shared Cache
  5. Inject your code modules into the processing pipe-line => cleaner integration.

When I read the first statement above, I really thought that this was too good to be true :) – exactly what I was looking for. So I decided to give it a go. As you can see, some of the features listed for this HttpClient (Windows implementation) are similar to what we have in the System.Net world, but it gives us extra capabilities.

HttpClientHandlers vs HttpBaseProtocolFilter

It is worth mentioning that the Windows.Web library does not have the HttpClientHandlers that we are familiar with from System.Net; instead it gives you the ability to do more with HttpBaseProtocolFilter, and this is a key point. HttpBaseProtocolFilter enables developers to customise/manipulate HTTP requests (headers, cookies, cache, etc.), and the changes will be applied across the board in your application. This applies whether you are making an HTTP request programmatically using HttpClient or via the user interface (using a WebView, for instance).

Code Time

  // create the filter that customises caching and redirect behaviour
  var myFilter = new HttpBaseProtocolFilter();
  myFilter.AllowAutoRedirect = true;
  myFilter.CacheControl.ReadBehavior = HttpCacheReadBehavior.Default;
  myFilter.CacheControl.WriteBehavior = HttpCacheWriteBehavior.Default;

  // get a reference to the cookieManager (this applies to all requests in the app)
  var cookieManager = myFilter.CookieManager;

  // make the HTTP request, passing the filter to the client
  using (var client = new HttpClient(myFilter))
  {
      var request = new HttpRequestMessage(HttpMethod.Get, new Uri("your-url-address"));
      // add any request-specific headers here
      // more code omitted

      var result = await client.SendRequestAsync(request);
      var content = await result.Content.ReadAsStringAsync();

      // now we can do whatever we need with the html content we got here :)
      // Debug.WriteLine(content);
  }

  // assuming the previous request created a session (set cookies, cached some data, etc.)
  // subsequent requests in the webview will share this data
  myWebView.Navigate(new Uri("your-url-address"));

Hopefully this short code snippet gives you a good idea of what you can do with the Windows implementation of HttpClient. Notice that myWebView implicitly shares the filter’s CookieManager.

Other Apps?

One might ask how this will impact other apps. It doesn’t. The Windows.Web library was designed to work across all requests in one app, so you do not need to be concerned about impacting other apps or leaking your data to other external requests.


Someone wise once said “with great power, comes great responsibility”. This should be remembered when using HttpBaseProtocolFilter with your HTTP requests as it can (as mentioned above) impact all your subsequent requests. Hope you find this useful and I would love to hear your comments and feedback.


Using Azure Machine Learning to predict Titanic survivors

So in the last blog I looked at one of the Business Intelligence tools available in the Microsoft stack by using the Power Query M language to query data from an Internet source and present in Excel. Microsoft are making a big push into the BI space at the moment, and for good reason. BI is a great cloud workload. So now let’s take a look at one of the heavy hitters at the other end of the BI scale spectrum, Azure Machine Learning.

The list of services available in Azure is growing fast. In fact, it used to be that you could see them all on one page in the Azure Portal, but now I have 28 different icons to choose from. Many of these are traditional cloud infrastructure services like Virtual Machines, Networks and SQL databases, but there are now many higher level services where the infrastructure under the hood has long since been forgotten. While bigger virtual machines and faster disks get plenty of publicity because they are easy to understand and compare between cloud vendors, it is the higher level services that are much more interesting; after all, this is what Cloud is all about – not cheaper virtual machines, but services that solve business problems.

Azure Stream Analytics, Azure Machine Learning and Azure Search have all been added recently to the Azure platform and fall into the category of “don’t bother me with annoying infrastructure, just answer this problem”. Recently the Microsoft Research team caused an Internet sensation when they released a site that uses a trained machine learning model to guess how old the faces in an uploaded photo are. The simplicity of the problem and the user interface belie the huge amount of human knowledge and compute power brought together to solve it, all of which is available to you and me today.

So what’s that science fiction sounding “Machine Learning” all about?

Machine Learning

Machine Learning is a data modelling environment where the tools and algorithms for data modelling are presented in a way that can be used to test and retest a hypothesis, and then use that model to make predictions. Which all sounds a bit like a 1st year Uni stats lecture, and it is. Browsing around the many samples for Azure Machine Learning, most demonstrate how easy it is to use the tools, but with very little depth or understanding, and without the end-to-end approach from a problem to an answer. So let’s fix that.

There’s plenty of good background reading on machine learning but a good take away is to follow a well-defined method:

  • Business Understanding
  • Data Understanding
  • Data Preparation
  • Modelling
  • Evaluation
  • Refinement
  • Deployment

Business Understanding

So we need to find a problem to solve. There’s a lot of problems out there to solve, and if you want to pick one Kaggle is a great place to start. Kaggle has a number of tutorials, competitions and even commercial prize based problems that you can solve. We are going to pick the Kaggle tutorial Titanic survivors.

The business of dying on the Titanic was not a random, indiscriminate one. There was a selection process at play, as there were not enough lifeboats for everyone and survival in the water of the North Atlantic was very difficult. The process of getting into a lifeboat during the panic would have included a mixture of position (where you were on the boat), women and children first, families together, and maybe some other class, seniority (or bribery) type effect.

Data Understanding

The data has been split into two parts: a train.csv data set, which includes the data on who survived and who didn’t, and a test.csv, which you use to guess who died and who didn’t. So let’s first take a look at the data in Excel to get a better understanding. The data set of passengers has the following attributes:

  • survival: Survival (0 = No; 1 = Yes)
  • pclass: Passenger Class (1 = 1st; 2 = 2nd; 3 = 3rd)
  • name: Name
  • sex: Sex
  • age: Age
  • sibsp: Number of Siblings/Spouses Aboard
  • parch: Number of Parents/Children Aboard
  • ticket: Ticket Number
  • fare: Passenger Fare
  • cabin: Cabin
  • embarked: Port of Embarkation (C = Cherbourg; Q = Queenstown; S = Southampton)


A quick scan shows there are 891 passengers represented in this training set (there will be more in the test set), and that the data is not fully populated: the Cabin data is limited and Age has some holes in it. Also, the columns PassengerId, Name and Ticket have “high cardinality”; that is, there are so many categories that they don’t form a useful way to categorise or group the data.

Now create a Pivot table and use Average of Survived as a Value in the Pivot table. With no other dimensions we can see that 38% of the passengers survived. Now we can test some of the attributes and see if they have an effect on survival rate. This process just helps us understand whether there is an interesting problem here to be solved, and tests some of the assumptions about how the lifeboats may have been filled.

There certainly seem to be some non-random biases at play which can lift your survival rate as a passenger above the average 0.38 probability. So let’s see what Azure Machine Learning can come up with.
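The pivot calculation above can be sketched in a few lines of Python; the five-row sample below is made up purely to illustrate the mechanics (the real training set has 891 rows):

```python
# A made-up five-passenger sample standing in for the 891-row train.csv
passengers = [
    {"Sex": "female", "Pclass": 1, "Survived": 1},
    {"Sex": "female", "Pclass": 3, "Survived": 1},
    {"Sex": "male",   "Pclass": 1, "Survived": 0},
    {"Sex": "male",   "Pclass": 3, "Survived": 0},
    {"Sex": "male",   "Pclass": 3, "Survived": 1},
]

def survival_rate(rows, **filters):
    """Average of Survived for the rows matching the filters --
    an Excel pivot table value in miniature."""
    subset = [r for r in rows if all(r[k] == v for k, v in filters.items())]
    return sum(r["Survived"] for r in subset) / len(subset)

print(survival_rate(passengers))                # overall rate
print(survival_rate(passengers, Sex="female"))  # women fare much better
```

Slicing by Sex or Pclass here is exactly what dragging those attributes into the pivot table does.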

Data Preparation

Create a new Azure Machine Learning environment if you don’t already have one. This will create a matching Azure Storage account; mine is called backtesterml.

Now upload the training data into the Azure Blob storage in a container called “titanic”. I use the AzCopy tool from the Azure Storage Tools download.

C:\Program Files (x86)\Microsoft SDKs\Azure>AzCopy /Source:C:\Users\PeterReid\Downloads /Dest: /DestKey:yourstoragekey /Pattern:train.csv

Create a new Blank Experiment and rename it “Titanic Survivors”, then drop a Reader on the design surface and point it at the Blob file we uploaded, titanic/train.csv.

Add a Metadata Editor and rename the Survived column to Target. This indicates that the model we build will try to predict the target value Survived.

Now add a Project Columns and remove those columns that we think are not going to help with prediction or have too few values to be useful.

Then add a “Clean Missing Data” and set to “Remove entire row”. The Minimum and Maximum settings are set to remove all rows with any missing values rather than some other ratio.
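What “Remove entire row” does can be sketched with a couple of made-up rows (illustrative only):

```python
# Two complete rows and one with a missing Age, standing in for train.csv rows
rows = [
    {"Age": 22.0, "Fare": 7.25},
    {"Age": None, "Fare": 8.05},   # missing Age -> whole row dropped
    {"Age": 35.0, "Fare": 53.10},
]

# keep only rows where every value is present
clean = [row for row in rows if all(v is not None for v in row.values())]
print(len(clean))  # 2 rows survive the cleaning
```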

Now Split the data in two. This is important because we want to create a model that guesses survival rate and then test it against some data that was not used to generate the model. With such a small data set, how we split the data is important. I have used a “Stratified” split on the “Target” survivor column so that we get an even representation of survivors in the prediction and testing data sets. Also, while in here, change the Random seed to something other than 0 so that repeated runs give the same breakup of passengers.
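A rough sketch of what a stratified split with a fixed seed does (the column name and fractions here are illustrative, not the module’s internals):

```python
import random
from collections import defaultdict

def stratified_split(rows, key, test_fraction=0.25, seed=42):
    """Split rows so that each stratum (here Target = survived or not)
    is represented in the same proportion in both halves; a fixed,
    non-zero seed makes the split repeatable across runs."""
    by_stratum = defaultdict(list)
    for row in rows:
        by_stratum[row[key]].append(row)
    rng = random.Random(seed)
    train, test = [], []
    for stratum in by_stratum.values():
        rng.shuffle(stratum)
        cut = int(len(stratum) * test_fraction)
        test.extend(stratum[:cut])
        train.extend(stratum[cut:])
    return train, test

# 100 made-up passengers, roughly a third of them survivors
data = [{"Target": i % 3 == 0} for i in range(100)]
train, test = stratified_split(data, "Target")
print(len(train), len(test))  # 76 24
```

Because each stratum is sampled separately, survivors appear in the same proportion in both halves, which is what stops a lopsided split from skewing the evaluation.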


Now it’s time to revisit Statistics 101 and work out what to do with this data. There are a huge number of pre-built models that can be dropped onto the design surface; choosing the right one is a little bit art and a little bit science. Here’s the science:

  • We have a classification problem, who survives and who doesn’t
  • It is a “Two-Class” problem since you can only survive or not

So start by dragging Machine Learning->Initialize Model->Classification->“Two-Class Logistic Regression” onto the design surface. This is a binary logistic model used to predict a binary response (survival) based on one or more predictor variables (age, sex, etc). Connect this to a “Train Model” with the prediction column “Target” selected, and connect that to one side of a “Score Model” block. Connect the other side of the Score Model to the data from the “Split” block, and send the results into an “Evaluate Model”. Then hit Run!

After processing (note you’re using a shared service, so it’s not as responsive as running something locally, but it is built for scale) it should look something like this:

Now that the run has completed, you can click on the little ports on the bottom of the blocks to see the data and results. We are interested in the output of the “Train Model” block.


Click on the output of the “Train Model” block which will bring up this window

This is what the output of the regression looks like. And it means something like this:

  • Having a large class value (e.g. 3rd class) has a big negative impact on survival (those on upper decks got out easier)
  • Being female has a big positive impact on survival and being male has an equal negative impact (women first as we thought)
  • Having a larger age has a reasonable negative impact on survival (children first as we thought)
  • And having a larger ticket price has a small positive impact (rich people had some preference)
    (Note Bias is an internally generated feature representing the regression fit)
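To see how such signed weights turn into a survival probability, here is a sketch using hypothetical coefficients chosen only to mirror the signs described above (they are not the actual values learned by the Train Model block):

```python
import math

# Hypothetical coefficients -- signs mirror the bullets above, values invented
WEIGHTS = {"bias": 2.5, "pclass": -1.1, "male": -2.4, "age": -0.04, "fare": 0.003}

def survival_probability(pclass, male, age, fare):
    """Linear combination of the features pushed through the logistic
    (sigmoid) link to give a probability between 0 and 1."""
    z = (WEIGHTS["bias"]
         + WEIGHTS["pclass"] * pclass
         + WEIGHTS["male"] * male
         + WEIGHTS["age"] * age
         + WEIGHTS["fare"] * fare)
    return 1 / (1 + math.exp(-z))

# A 3rd-class 20-year-old man on a cheap fare vs a 1st-class 35-year-old woman
print(round(survival_probability(3, 1, 20, 7.25), 3))   # very low probability
print(round(survival_probability(1, 0, 35, 80.0), 3))   # better than even
```

Negative weights (class number, being male, age) drag the score down; the positive fare weight nudges it up, exactly as the regression output describes.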

OK so now we have the model built let’s see how good it is at predicting the survival rate of the other 25% of passengers we split off at the beginning.

Add and connect up a “Score Model” block: connect it to the right hand side of the Split block and then to the “Evaluate Model”. Then Run again.

Click on the Output of the “Score Model” block to get this.

Here we have all of the remaining 25% of passengers who were not used to generate the model. Taking the first row: this passenger is male, 3rd class, 20 years old and on a cheap fare. Given what we know about the lifeboat process, he probably didn’t survive. Indeed, the model says the survival probability is only 0.119 and has assigned him a “Scored Label” of 0, a non-survivor. But the “Target” column says that in reality this guy survived, so the model didn’t predict correctly. The next passenger, a female, was predicted properly, etc.

Now go to the “Evaluate Model” output. This step has looked at all the predictions vs actuals to evaluate the model’s effectiveness at prediction.

The way to read these charts is: for every probability of survival, what percentage actually survived. The best possible model is represented by the green line, and a totally random pick by the orange line. (You could of course have a really bad model which consistently chooses false positives, which would be a line along the bottom and up the right hand side.) The way to compare these models is by looking at the “Area Under the Curve”, which will be somewhere between 0 (the worst) and 1 (the best). Our model is reasonably good, with an AUC score of 0.853.
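AUC has a handy equivalent interpretation: it is the probability that a randomly chosen survivor is scored higher than a randomly chosen non-survivor. A small rank-based sketch (the scores below are made up):

```python
def auc(scores, labels):
    """Area under the ROC curve computed as a rank statistic: the
    fraction of (survivor, non-survivor) pairs in which the survivor
    received the higher score (ties count as half a win)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Made-up model scores: mostly right, but one survivor scored low
scores = [0.90, 0.80, 0.70, 0.30, 0.119, 0.05]
labels = [1,    1,    0,    1,    0,     0]
print(auc(scores, labels))  # good but not perfect
```

A perfect ranking gives 1.0, a coin flip gives 0.5, which is why the 0.853 above counts as a reasonably good model.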


There are many ways to refine the model. When we first looked at the data there was something in that Age chart: while there certainly is a bias toward younger survivors, it’s not a linear relationship; there is a dramatic drop-off at 16 and a slight increase in survival rate after 45. That fits with the expected lifeboat-filling process, where you would be considered as either Young, Old or Other.

So let’s build that into the model. To do that we will need an “Execute R Script” module. (R is a very powerful open source language with libraries built for exactly this purpose.) Drag a script editor onto the design surface, connect up the first input and enter the binning script.
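The binning logic, using the 16 and 45 cut-offs discussed above, amounts to something like this (sketched in Python for illustration; the Execute R Script module itself expects R):

```python
def age_group(age):
    """Bin Age into the three categories the survival chart suggests:
    children under 16, passengers over 45, and everyone else."""
    if age < 16:
        return "Young"
    if age > 45:
        return "Old"
    return "Other"

print([age_group(a) for a in (8, 30, 60)])  # ['Young', 'Other', 'Old']
```

The new AgeGroup column replaces the raw Age value, turning a non-linear relationship into three categories the models can use directly.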

Then connect the output to the Metadata Editor. Open up the Project Columns block and remove Age and add the new AgeGroup category. Then hit Run.

So let’s use the power of infinite cloud computing and try the brute force method first. Add all of the Two-Class models onto the design surface, connect them up to the Score and Evaluate blocks, then Run.

Amazingly, the whole thing runs in less than two minutes. I have done two runs, one using Age and one using AgeGroup, to see which models are sensitive to the categorisation. What is interesting here is that despite some very different approaches to modelling the data, the results are remarkably similar.

The models compared, with AUC scores calculated first using Age and then using AgeGroup:

  • Two-Classed Locally Deep Support Vector Machine
  • Two-Classed Neural Network
  • Two-Classed Support Vector Machine
  • Two-Classed Average Perceptron
  • Two-Classed Decision Jungle
  • Two-Classed Logistic Regression
  • Two-Classed Bayes Point Machine
  • Two-Classed Boosted Decision Tree
  • Two-Classed Decision Forest

Some of the models benefitted from the AgeGroup category and some didn’t, but the standout success is the Neural network with AgeGroup. Now it’s time to play with the Neural network parameters, iterations, nodes, parameter sweeps etc to get an optimal outcome.


Now back to the task at hand: we started with a train.csv set of data and a test.csv data set, and the aim is to determine survivors in that test.csv data set. So now “Create Scoring Experiment” and “Publish Web Service”. This will make a few modifications to your design surface, including saving the “Trained Model” and plumbing it in with a web service input and output. I’ve added an extra “Project Columns” so only the single column “Scored Label” is returned, to indicate whether the passenger is a survivor or not. It will look like this:

Then “Publish the Web Service”, which will create a web service endpoint that can be called with the input parameters of the original data set.
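As a sketch of what calling that endpoint involves: the classic Azure ML request body is a fixed JSON shape of column names plus row values. The endpoint URL and API key below are placeholders, not real values:

```python
import json

# Placeholder endpoint and key -- the real values come from the
# Publish Web Service page in ML Studio
API_URL = "https://<region>.services.azureml.net/workspaces/<id>/services/<id>/execute"
API_KEY = "<your-api-key>"

def build_request(passenger):
    """Classic Azure ML request schema: column names plus one row of
    values, everything serialised as strings."""
    body = {
        "Inputs": {
            "input1": {
                "ColumnNames": list(passenger.keys()),
                "Values": [[str(v) for v in passenger.values()]],
            }
        },
        "GlobalParameters": {},
    }
    return json.dumps(body).encode("utf-8")

payload = build_request({"Pclass": 3, "Sex": "male", "Age": 30, "Fare": 7.75})
# POST this payload to API_URL with the header
# "Authorization: Bearer <API_KEY>" to get the Scored Label back
```

This is the same request the Excel spreadsheet builds for each pasted row, one call per passenger.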

Now, by downloading that Excel spreadsheet and pasting in the data from test.csv, each row generates a request to our newly created web service and populates the survival prediction value. Some rows in the test data set have empty Age values, which we will just set to the median age of 30 (there is significant room for improvement in the model by handling this properly). The beta version of the spreadsheet doesn’t support spaces or delimiters, so there is a little bit of extra data cleansing to do in Excel before submitting to the web service (remove quotes).


Here’s the first few rows of the scored test data. Each one of these passengers was not present in the train data, which stopped at passenger 891. The “Predicted Values” column is the survival flag assigned by our trained model. As a sanity check over the results, the survival rate is about 27% – similar to, but slightly less than, the training data. Passenger 892 looks like a reasonable candidate for non-survival and passenger 893 a candidate for survival.

So then prepare the data and submit it to the Kaggle competition for evaluation. This model is currently sitting at position 2661 out of 17390 entries! (There are clearly a few cheaters in there, given the source data is publicly available.)

The process of developing this model end-to-end shows how the power of Cloud can be put to use to solve real world problems. And there are plenty of such problems. The Kaggle web site not only has fun competitions like the Titanic survivor problem, but is also a crowd sourcing site for resolving some much more difficult problems for commercial gain. Currently there is a $100K bounty if you can “Identify signs of diabetic retinopathy in eye images”.

While these sorts of problems used to be an area open to only the most well-funded institutions, Azure Machine Learning opens the opportunity to solve the world’s problems to anyone with a bit of time on a rainy Sunday afternoon!


Azure ExpressRoute in Australia via Equinix Cloud Exchange

Microsoft Azure ExpressRoute provides dedicated, private circuits between your WAN or datacentre and private networks you build in the Microsoft Azure public cloud. There are two types of ExpressRoute connections – Network (NSP) based and Exchange (IXP) based with each allowing us to extend our infrastructure by providing connectivity that is:

  • Private: the circuit is isolated using industry-standard VLANs – the traffic never traverses the public Internet when connecting to Azure VNETs and, when using the public peer, even Azure services with public endpoints such as Storage and Azure SQL Database.
  • Reliable: Microsoft’s portion of ExpressRoute is covered by an SLA of 99.9%. Equinix Cloud Exchange (ECX) provides an SLA of 99.999% when redundancy is configured using an active – active router configuration.
  • High Speed: speeds differ between NSP and IXP connections, but range from 10Mbps up to 10Gbps. ECX provides three choices of virtual circuit speeds in Australia: 200Mbps, 500Mbps and 1Gbps.

Microsoft provides a handy comparison table between the different types of Azure connectivity in this blog post.

ExpressRoute with Equinix Cloud Exchange

Equinix Cloud Exchange is a Layer 2 networking service providing connectivity to multiple Cloud Service Providers which includes Microsoft Azure. ECX’s main features are:

  • On Demand (once you’re signed up)
  • One physical port supports many Virtual Circuits (VCs)
  • Available Globally
  • Supports 1Gbps and 10Gbps fibre-based Ethernet ports. Azure supports virtual circuits of 200Mbps, 500Mbps and 1Gbps
  • Orchestration using API for automation of provisioning which provides almost instant provisioning of a virtual circuit.

We can share an ECX physical port so that we can connect to both Azure ExpressRoute and AWS DirectConnect. This is supported as long as we use the same tagging mechanism based on either 802.1Q (Dot1Q) or 802.1ad (QinQ). Microsoft Azure uses 802.1ad on the Sell side (Z-side) to connect to ECX.

ECX pre-requisites for Azure ExpressRoute

The pre-requisites for connecting to Azure, regardless of the tagging mechanism, are:

  • Two Physical ports on two separate ECX chassis for redundancy.
  • A primary and secondary virtual circuit per Azure peer (public or private).

Buy-side (A-side) Dot1Q and Azure ExpressRoute

The following diagram illustrates the network setup required for ExpressRoute using Dot1Q ports on ECX:

Dot1Q setup

Tags on the Primary and Secondary virtual circuits are the same when the A-side is Dot1Q. When provisioning virtual circuits using Dot1Q on the A-side, use one VLAN tag per circuit request. This VLAN tag should be the same one used when setting up the Private or Public BGP sessions on Azure using Azure PowerShell.

There are a few things that need to be noted when using Dot1Q in this context:

  1. The same Service Key can be used to order separate VCs for private or public peerings on ECX.
  2. Order a dedicated Azure circuit using the Azure PowerShell cmdlet (shown below), obtain the Service Key, and use this to raise virtual circuit requests with Equinix.

    Get-AzureDedicatedCircuit returns the following output:

    As we can see the status of ServiceProviderProvisioningState is NotProvisioned.

    Note: ensure the physical ports have been provisioned at Equinix before we use this Cmdlet. Microsoft will start charging as soon as we create the ExpressRoute circuit even if we don’t connect it to the service provider.

  3. Two physical ports need to be provisioned for redundancy on ECX – you will get the notification from Equinix NOC engineers once the physical ports have been provisioned.
  4. Submit one virtual circuit request for each of the private and public peers on the ECX Portal. Each request needs a separate VLAN ID along with the Service Key. Go to the ECX Portal and submit one request for private peering (two VCs – primary and secondary) and one request for public peering (two VCs – primary and secondary). Once the ECX VCs have been provisioned, check the Azure circuit status, which will now show Provisioned.
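As a rough sketch, ordering the circuit and checking its state with the classic (ASM) Azure PowerShell module looks like the following; the circuit name, location and bandwidth are placeholder values for illustration:

```powershell
# Order a dedicated ExpressRoute circuit through Equinix (values are placeholders)
New-AzureDedicatedCircuit -CircuitName "ProdExpressRoute" `
    -ServiceProviderName "Equinix" `
    -Bandwidth 1000 `
    -Location "Silicon Valley"

# Note the ServiceKey in the output, then check the provisioning state
Get-AzureDedicatedCircuit
```

Until Equinix completes the virtual circuits on their side, ServiceProviderProvisioningState remains NotProvisioned.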

Next we need to configure BGP to exchange routes between our on-premises network and Azure, but we will come back to this after a quick look at using QinQ with Azure ExpressRoute.

Buy-side (A-side) QinQ Azure ExpressRoute

The following diagram illustrates the network setup required for ExpressRoute using QinQ ports on ECX:

QinQ setup

C-TAGs identify private or public peering traffic on Azure and the primary and secondary virtual circuits are setup across separate ECX chassis identified by unique S-TAGs. The A-Side buyer (us) can choose to either use the same or different VLAN IDs to identify the primary and secondary VCs.  The same pair of primary and secondary VCs can be used for both private and public peering towards Azure. The inner tags identify if the session is Private or Public.

The process for provisioning a QinQ connection is the same as Dot1Q apart from the following change:

  1. Submit only one request on the ECX Portal for both private and public peers. The same pair of primary and secondary virtual circuits can be used for both private and public peering in this setup.

Configuring BGP

ExpressRoute uses BGP for routing, and you require four /30 subnets: one each for the primary and secondary BGP sessions, for both private and public peering. These IP prefixes cannot overlap with prefixes used in either your on-premises or cloud environments. Example routing subnets and VLAN IDs:

  • Primary Private: (VLAN 100)
  • Secondary Private: (VLAN 100)
  • Primary Public: (VLAN 101)
  • Secondary Public: (VLAN 101)

The first available IP address of each subnet will be assigned to the local router and the second will be automatically assigned to the router on the Azure side.

To configure BGP sessions for both private and public peering on Azure use the Azure PowerShell Cmdlets as shown below.

Private peer:
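A minimal sketch of the private peering session using the classic (ASM) New-AzureBGPPeering cmdlet; the service key, peer subnets, ASN and VLAN ID below are all placeholder values:

```powershell
# Create the private peering BGP session (all values are placeholders)
New-AzureBGPPeering -ServiceKey "<service-key-guid>" `
    -PrimaryPeerSubnet "10.0.0.0/30" `
    -SecondaryPeerSubnet "10.0.0.4/30" `
    -PeerAsn 65001 `
    -VlanId 100 `
    -AccessType Private
```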

Public peer:
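The public peering session follows the same pattern with AccessType Public; note that public peering typically requires public IP space you own, so the documentation-range addresses below are placeholders only:

```powershell
# Create the public peering BGP session (all values are placeholders)
New-AzureBGPPeering -ServiceKey "<service-key-guid>" `
    -PrimaryPeerSubnet "203.0.113.0/30" `
    -SecondaryPeerSubnet "203.0.113.4/30" `
    -PeerAsn 65001 `
    -VlanId 101 `
    -AccessType Public
```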

Once we have configured the above we will need to configure the BGP sessions on our on-premises routers and ensure any firewall rules are modified so that traffic can be routed correctly.

I hope you’ve found this post useful – please leave any comments or questions below!

Read more from me on the Kloud Blog or on my own blog at

Azure MFA Server – International Deployment

Hi all – this blog will cover off some information to assist with multilingual/international deployment of Azure MFA server. There are some nuances of the product that make ongoing management of language preferences a little challenging. Also some MFA Methods are preferable to others in international scenarios due to carrier variances.

Language Preferences

Ideally when a user is on-boarded, their language preferences for the various MFA Methods should be configured to their native language. This can easily be achieved using MFA Server, however there are some things to know:

  1. Language settings are defined in Synchronisation Items.
  2. Synchronisation Items can be set in a hierarchy so that settings in the most precedent Synchronisation Item apply.
  3. Language settings set explicitly in Synchronisation Items apply at FIRST SYNC ONLY. Adding, removing or changing Sync Items with explicitly set language configurations will have no effect on existing MFA Server users.

I’ll cover off how to get around item 3 a bit later with non-explicitly configured language settings. To demonstrate hierarchical application of Sync Items, first create two Sync Items with different explicitly set language settings as below. I’ve used Active Directory OUs to differentiate user locations; however, AD security groups can be used too, as can an LDAP filter:

Add Synchronization Item Dialog - Australia

Add Synchronization Item Dialog - Italy

Then set them in the required hierarchy, in this case our default language is English for the Aussies, then we have an overriding Sync Item for the Italians – easy:

Multi-Factor Authentication Server - Synchronization tab

The above is all well and good if you are 100% guaranteed that users will be in the correct OU or group, or have the user filter apply, when MFA Server synchronises. If users are subsequently moved and a different Sync Item now applies, their language settings will not be updated. I’ll cover how to get around this next.

Use of Language Attributes to control Language Settings

MFA Server has the option to specify a language attribute in user accounts to contain the short code (e.g. en = English) for language preferences. When the account is imported, the user’s default language can be set based on this attribute. Also unlike explicitly configured language settings, when a Sync Item is configured to use language attributes it will update existing users if the language short code in their account changes.

To use this feature, first define an AD attribute that stores the language short code within the Attributes tab in the Directory Integration. Shown below I’m using an AD Extension attribute to store this code:

Multi-Factor Authentication Server - setting Attributes

Then create a Sync Item that covers all your users, and select “(use language attribute)” instead of explicitly setting a language – it’s the last item in the list of languages, so unless you’re Zulu it’s easy to miss. Incidentally, the codes shown against each language in this drop-down list are the short codes to use in the language attribute. Once user accounts have been configured with the language short code in the specified attribute, their language will be updated on the next sync.

Edit Synchronization Item Dialog
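Populating the short code on user accounts can be scripted. As a sketch, assuming extensionAttribute10 holds the language code as configured above, and using the ActiveDirectory PowerShell module (the username here is hypothetical):

```powershell
# Set the MFA language short code on a user account
# (extensionAttribute10 and the identity are example values)
Import-Module ActiveDirectory
Set-ADUser -Identity "mrossi" -Replace @{ extensionAttribute10 = 'it' }
```

On the next MFA Server synchronisation, this user’s language preference will switch to Italian.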

Other Stuff to Keep in Mind

When dealing with international phone carriers, not all are created equal. Unfortunately, Microsoft has little control over what happens to messages once they’re sent from Azure MFA Services. Some issues that have been experienced:

  1. Carriers not accepting two way SMS, thus MFA requests time out as user One Time Password responses are not processed.
  2. Carriers not accepting the keypad response for Phone call MFA, again resulting in MFA request time out.
  3. Carriers spamming users immediately after SMS MFA has been sent to users.
  4. Users being charged international SMS rates for two-way SMS (makes sense, but often forgotten).

In my experience SMS has proven the least reliable method; unless the Mobile MFA App or OATH tokens can be used, the Phone call method is the better choice for international users.

Often overlooked is the ability to export and import user settings using the MFA Server Console. You can export by selecting File/Export within the console. The CSV can then be updated and re-imported – this will modify existing user accounts including MFA Method preference and language defaults.
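As a sketch, the exported CSV can be bulk-edited with PowerShell before re-importing it through the console; the column names below are hypothetical, so check the header row of your actual export first:

```powershell
# Bulk-update language defaults in an MFA Server user export
# ("Country" and "Language" are placeholder column names)
$users = Import-Csv .\MfaUsers.csv
foreach ($u in $users) {
    if ($u.Country -eq 'Italy') { $u.Language = 'it' }
}
$users | Export-Csv .\MfaUsers-updated.csv -NoTypeInformation
```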

One Last Thing

For Apple iPhone users, Notifications must be enabled for the MFA Mobile App, as the App will fail to activate with a timeout error if Notifications are disabled. Notifications can be enabled in iOS by navigating to Settings/Notifications/Multi-Factor and enabling “Allow Notifications”. Changing this setting can take a little while to apply, so if the Mobile App still doesn’t activate after enabling Notifications, wait a bit then try again.