ADFS v 3.0 (2012 R2) Migration to ADFS 4.0 (2016) – Part 1


With the release of Windows Server 2016, Microsoft has introduced new and improved features. One of those features is ADFS 4.0, better known as ADFS 2016.

Organisations have already started leveraging ADFS 2016 as it covers most of their requirements, especially in terms of security.

In this series of blog posts, I will demonstrate how you can upgrade from ADFS v3.0 (running on Windows Server 2012 R2) to ADFS 2016 (running on Windows Server 2016 Datacenter). Later in the series I will also cover Web Application Proxy (WAP) migration from Windows Server 2012 R2 to Windows Server 2016, as well as integration of Azure MFA with the new ADFS 2016.

The posts in this series assume you have knowledge of Windows Server, AD, ADFS, WAP, and MFA; they will not go into detailed step-by-step installation of roles and features. They also assume you already have a running environment with AD, ADFS/WAP (2012 R2), and AAD Connect configured.

What’s New in ADFS 2016?

ADFS 2016 offers new and improved features, including:

  • Eliminate Passwords from the Extranet
  • Sign in with Azure Multi-factor Authentication
  • Password-less Access from Compliant Devices
  • Sign in with Microsoft Passport
  • Secure Access to Applications
  • Better Sign in experience
  • Manageability and Operational Enhancements

For a detailed description of the aforementioned points, please refer to this link.

Current Environment

  • 2x ADFS v3 Servers (behind an internal load balancer)
  • 2x WAP 2012 R2 Servers (behind an external load balancer)
  • 2x AD 2012 R2 Servers
  • 1x AAD Connect server

At a high level, this is how the ADFS/WAP environment looks:


Future environment:

  • 2x ADFS 2016 Servers (behind the same internal load balancer)
  • 2x WAP 2016 Servers (behind the same external load balancer)
  • 2x AD 2012 R2 Servers
  • 1x AAD Connect Server

Planning for your ADFS and WAP Migration

First, you need to make sure that your applications can support ADFS 2016; some legacy applications may not be supported.

The steps to implement SSO are as follows:

  1. Active Directory schema update using ‘ADPrep’ with the Windows Server 2016 additions
  2. Build Windows Server 2016 servers with ADFS, install them into the existing farm, and add them to the Azure load balancer
  3. Promote one of the ADFS 2016 servers to “primary” of the farm, and point all other secondary servers to the new “primary” (a PowerShell sketch for this step, and for steps 7 and 9, follows this list)
  4. Build Windows Server 2016 servers with WAP and add the servers to the Azure load balancer
  5. Remove the WAP 2012 servers from the Azure load balancer
  6. Remove the ADFSv3 servers from the Azure load balancer
  7. Raise the Farm Behavior Level (FBL) to ‘2016’
  8. Remove the WAP servers from the cluster
  9. Upgrade the WebApplicationProxyConfiguration version to ‘2016’
  10. Configure ADFS 2016 to support Azure MFA and complete the remaining configuration
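To give a flavour of steps 3, 7 and 9, here is a hedged PowerShell sketch of the key cmdlets (server names are placeholders; run each command on the server indicated):

# Step 3: on the ADFS 2016 server being promoted to primary
Set-AdfsSyncProperties -Role PrimaryComputer

# Step 3: on each remaining secondary server, point it at the new primary
Set-AdfsSyncProperties -Role SecondaryComputer -PrimaryComputerName "ADFS2016-01"

# Step 7: once all ADFSv3 servers are out of the farm, raise the Farm Behavior Level
Invoke-AdfsFarmBehaviorLevelRaise

# Step 9: on a WAP 2016 server, upgrade the proxy configuration version
Set-WebApplicationProxyConfiguration -UpgradeConfigurationVersion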

The steps for the AD schema upgrade are as follows:

  1. Prior to starting, Active Directory needs to be in a healthy state; in particular, replication needs to be performing without error.
  2. Active Directory needs to be backed up. It is best to back up (at a minimum) a few Active Directory Domain Controllers, including the ‘system state’
  3. Identify which Active Directory Domain Controller maintains the Schema Master role
  4. Perform the update using an administrative account by temporarily adding the account to the Schema Admins group
  5. Download and have handy the Windows Server 2016 installation media
  6. When ready to update the schema, perform the following:
  7. Open an elevated command prompt and navigate to the support\adprep directory in the Windows Server 2016 installation media, then run: adprep /forestprep
  8. Once that completes, run: adprep /domainprep (both commands are consolidated in the example below)
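Assuming the installation media is mounted as drive D: (adjust the drive letter for your environment), the consolidated run looks like this:

D:
cd \support\adprep
adprep /forestprep
adprep /domainprep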

Upgrading the Active Directory schema will not impact your current environment, nor will it raise the domain/forest functional levels.

Part 2 of this series will be published early next week, so make sure to come back and check out the migration process in detail.



How to configure a Graphical PowerShell Dev/Admin/Support User Interface for Azure/Office365/Microsoft Identity Manager

During the development of an identity management solution I find myself with multiple PowerShell/RDP sessions connected to multiple environments using different credentials, often just to obtain trivial data/information. It is easy to trip yourself up with remote PowerShell sessions to differing environments. If only there were a simple UI that could front-end a set of PowerShell modules and make those simple queries quick and painless. It would likewise allow support staff to execute a canned set of queries without being granted elevated permissions.

I figured someone would have already solved this problem, and after some searching with the right keywords I found the powershell-command-executor-ui from bitsofinfo. Looking into it, he had solved a lot of the issues with building a UI front-end for PowerShell with the powershell-command-executor and the stateful-process-command-proxy. That solution provided the framework for what I was thinking. The ability to provide a UI for PowerShell using PowerShell modules, including remote PowerShell, was exactly what I was after. And it was built on NodeJS and AngularJS, so it was simple enough to customize.


In this blog post I’ll detail how I’ve leveraged the projects listed above for integration with Azure/Office 365 and Microsoft Identity Manager.

Initially I had a vision of serving up the UI from an Azure WebApp. NodeJS on Azure WebApp’s is supported, however with all the solution dependencies I just couldn’t get it working.

My fallback was to then look to serve up the UI from a Windows Server 2016 Nano Server. I learnt from my efforts that a number of the PowerShell modules I was looking to provide a UI for have .NET Framework dependencies. Nano Server does not have full .NET Framework support; Microsoft state that adding it would mean the server would no longer be Nano.

For now I’ve deployed an Azure Windows Server 2016 Server secured by an Azure NSG to only allow my machine to access it. More on security later.


Simply put, the details on GitHub for the powershell-command-executor provide the architecture and integration. What I will detail is the modifications I’ve made to utilize the more recent AzureADPreview PowerShell Module over the MSOL PowerShell Module. I also updated the dependencies of the solution to the latest versions and hooked it into Microsoft Identity Manager, and made a few changes to allow different credentials to be used for Azure and Microsoft Identity Manager.

Getting Started

I highly recommend you start with your implementation on a local development workstation/development virtual machine. When you have a working version you’re happy with you can then look at other ways of presenting and securing it.


NodeJS is the webserver for this solution. Download NodeJS for your Windows host here. I’m using the 64-bit version, but have also implemented the solution on 32-bit. Install NodeJS on your local development workstation/development virtual machine.

You can accept all the defaults.

Following the installation of NodeJS, download the powershell-command-executor-ui from GitHub. Select Clone or Download, then Download ZIP, and save it to your machine.

Right-click the download when it has finished and select Extract All. Select Browse, create a folder at the root of C:\ named nodejs, and extract powershell-command-executor-ui into it.

Locate the c:\nodejs\powershell-command-executor-ui-master\package.json file.

Using an editor such as Notepad++, update the package.json file so that it utilises the latest versions of the dependencies for the solution.

From an elevated (Administrator) command prompt in the c:\nodejs\powershell-command-executor-ui-master directory, run “c:\program files\nodejs\npm” install. This will read the package.json file you edited and download the dependencies for the solution.

You can see in the screenshot below NodeJS has downloaded all the items in package.json including the powershell-command-executor and stateful-process-command-proxy.

When you now list the directories under C:\nodejs\powershell-command-executor-ui-master\node_modules you will see those packages and all their dependencies.

We can now test that we have a working PowerShell UI NodeJS website. From an elevated command prompt whilst still in the c:\nodejs\powershell-command-executor-ui-master directory run “c:\Program Files\nodejs\node.exe” bin\www

Open a browser on the same host and go to http://localhost:3000. You should see the default UI.

Configuration and Customization

Now it is time to configure and customize the PowerShell UI for our needs.

The files we are going to edit are:

  • C:\nodejs\powershell-command-executor-ui-master\routes\index.js
    • Update the paths to the encrypted credentials files used to connect to Azure and MIM. We’ll create the encrypted credentials files soon.
  • C:\nodejs\powershell-command-executor-ui-master\public\console.html
    • Update for your customizations for CSS etc.
  • C:\nodejs\powershell-command-executor-ui-master\node_modules\powershell-command-executor\O365Utils.js
    • Update for PowerShell Modules to Import
    • Update for Commands to make available in the UI

We also need to get a couple of PowerShell Modules installed on the host so they are available to the site. The two I’m using I’ve mentioned earlier. With WMF5 installed, we can simply install them using PowerShell, as per the commands below.

Install-Module AzureADPreview
Install-Module LithnetRMA

In order to connect to our Microsoft Identity Manager Synchronization Server we are going to need to enable Remote PowerShell on it. This post I wrote here details all the setup tasks to make that work. Test that you can connect via RPS to your MIM Sync Server before updating the scripts below.
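A quick way to validate remote PowerShell connectivity is a throwaway session; the server name below is a placeholder for your MIM Sync Server:

# Verify a remote PowerShell session can be established to the MIM Sync Server
$cred = Get-Credential
$session = New-PSSession -ComputerName "MIMSYNCSERVER" -Credential $cred
Invoke-Command -Session $session -ScriptBlock { $env:COMPUTERNAME }
Remove-PSSession $session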

Likewise for the Microsoft Identity Manager Service Server. Make sure after installing the LithnetRMA PowerShell Module that you can connect to the MIM Service using something similar to:

# Import LithnetRMA PS Module
import-module lithnetrma

# MIM AD User Admin
$username = ""
# Password
$password = "Secr3tSq1rr3l!" | convertto-securestring -AsPlainText -Force
# PS Creds
$credentials = New-Object System.Management.Automation.PSCredential $username, $password

# Connect to the FIM service instance
# Will require an inbound rule for TCP 5725 (or your MIM Service Server port) in your Resource Group Network Security Group config
Set-ResourceManagementClient -BaseAddress http://mymimportalserver:5725 -Credentials $credentials
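Once connected, a quick query confirms the client is talking to the MIM Service; the XPath filter below is illustrative only:

# Return a known object from the MIM Service via the LithnetRMA module
Search-Resources -XPath "/Person[AccountName = 'administrator']"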



The routes/index.js file details the encrypted credentials the site uses. You will need to generate the encrypted credentials for your environment. You can do this using the powershell-credentials-encryption-tools. Download that script to your workstation and unzip it. Open the credentialEncryptor.ps1 script using an Administrator PowerShell ISE session.

I’ve changed the index.js to accept two sets of credentials. This is because your Azure Admin credentials are going to be different from your MIM Administrator credentials (both in name and password). The username for my Azure account looks something like username@tenantname.onmicrosoft.com, whereas for MIM it is Domainname\Username.

Provide an account name for your Azure environment and the associated password.

The tool will create the encrypted credential files.

Rename the encrypted.credentials file to whatever makes sense for your environment. I’ve renamed it creds1.encrypted.credentials.

Now we re-run the script to create another set of encrypted credentials. This time for Microsoft Identity Manager. Once created, rename the encrypted.credentials file to something that makes sense in your environment. I’ve renamed the second set to creds2.encrypted.credentials.

We now need to copy the following files to your UI Website C:\nodejs\powershell-command-executor-ui-master directory:

  • creds1.encrypted.credentials
  • creds2.encrypted.credentials
  • decryptUtil.ps1
  • secret.key

Navigate back to routes\index.js and open the file in an editor such as Notepad++.

Update the index.js file with the paths to your credentials files; we also need to add in the additional credentials file. The changes to the file are the paths to the files we just copied above, along with the additional var PATH_TO_ENCRYPTED_RPSCREDENTIALS_FILE for the second set of credentials used for Microsoft Identity Manager.

var PATH_TO_DECRYPT_UTILS_SCRIPT = "C:\\nodejs\\powershell-command-executor-ui-master\\decryptUtil.ps1";
var PATH_TO_ENCRYPTED_CREDENTIALS_FILE = "C:\\nodejs\\powershell-command-executor-ui-master\\creds1.encrypted.credentials";
var PATH_TO_ENCRYPTED_RPSCREDENTIALS_FILE = "C:\\nodejs\\powershell-command-executor-ui-master\\creds2.encrypted.credentials";
var PATH_TO_SECRET_KEY = "C:\\nodejs\\powershell-command-executor-ui-master\\secret.key";

The initCommands call also needs to pass through the additional credentials file. With the extra path added, it ends up looking something like this (the parameter ordering reflects my modification and is illustrative; the trailing arguments are unchanged):

initCommands: o365Utils.getO365PSInitCommands(
                 PATH_TO_DECRYPT_UTILS_SCRIPT,
                 PATH_TO_ENCRYPTED_CREDENTIALS_FILE,
                 PATH_TO_ENCRYPTED_RPSCREDENTIALS_FILE,
                 PATH_TO_SECRET_KEY,
                 /* remaining arguments unchanged */ ),

Here is the full index.js file for reference.



The public/console.html file is for formatting and associated UI components. The key things I’ve updated are the Bootstrap and AngularJS versions; those are referenced at the top of the HTML document. A summary is below.
<link rel="stylesheet" href="">
<link rel="stylesheet" href="">

You will also need to download the updated Bootstrap UI (ui-bootstrap-tpls-2.4.0.min.js). I’m using v2.4.0 which you can download from here. Copy it to the javascripts directory.

I’ve also updated the table types, buttons, colours, header, logo etc. in the appropriate locations (CSS, tables, divs etc.). Here is my full file for reference. You’ll need to update it for your colours, branding etc.


Finally, the O365Utils.js file. This contains the commands that will be displayed, along with their options, as well as the connection information for your Microsoft Identity Manager environment.

You will need to change:

  • Line 52 for the address of your MIM Sync Server
  • Line 55 for the addresses of your MIM Service Server
  • Line 141 onwards for the commands, and parameters for those commands, you want to make available in the UI

Here is an example with a couple of AzureAD commands, a MIM Sync and a MIM Service command.
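Under the hood these menu entries are just module cmdlets; as a rough illustration (the search strings are examples only), the sort of PowerShell being executed is:

# AzureADPreview: find users matching a search string
Get-AzureADUser -SearchString "darren"

# LithnetRMA: query the MIM Service for a user object
Search-Resources -XPath "/Person[AccountName = 'drobinson']"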

Show me my PowerShell UI Website

Now that we have everything configured, let’s start the site and browse to it. If you haven’t stopped the NodeJS site from earlier, go to the command window and press Ctrl+C a couple of times. Run “c:\Program Files\nodejs\node.exe” bin\www again from the C:\nodejs\powershell-command-executor-ui-master directory, unless you have restarted the host and now have NodeJS in your environment path.

In a browser on the same host go to http://localhost:3000 again and you should see the site as it is below.

Branding and styling come from console.html, menu options from O365Utils.js, and when you select a command and execute it, data comes back from the associated service …

… and you can see the results. In the screenshot below, a Get-AzureADUser command for the associated search string executed in milliseconds.



The powershell-command-executor-ui from bitsofinfo is a very extensible and powerful NodeJS website as a front-end to PowerShell.

With a few tweaks and updates, the look and feel can easily be changed, along with the addition of any PowerShell commands that you wish to have a UI for.

As it sits, though, keep in mind you have a UI with hard-coded credentials that can run whatever commands you expose.

Personally I am running one for my use only, and I have it hosted in Azure in its own Resource Group with an NSG allowing outgoing traffic to Azure and my MIM environment. Incoming traffic is only allowed from my personal management workstation’s IP address. I also needed to allow port 3000 into the server on the NSG, as well as on the firewall on the host. I did that quickly using the command below.

# Enable the WebPort NodeJS is using on the firewall 
netsh advfirewall firewall add rule name="NodeJS WebPort 3000" dir=in action=allow protocol=TCP localport=3000

Follow Darren on Twitter @darrenjrobinson


Resolving “The Microsoft Identity Manager server database could not be successfully populated” installation error

Here is yet another of those Microsoft Identity Manager installation errors that doesn’t give you much information, and when looking for a resolution you can’t find an exact match via Dr Google.

Nearing the end of the Microsoft Identity Manager Service and Portal installation you receive the “The Microsoft Identity Manager server database could not be successfully populated” error.

Looking into the installation log (which I’m in the good practice of always initiating when doing an installation of the MIM Service/Portal these days, e.g. msiexec /i “e:\Service and Portal\Service and Portal.msi” /l*v c:\temp\install.log) didn’t give up much information at all. Fatal Error. Dialog created.

Looking at the server that the installation was being done on, I could see that it was being spanked. This server is for a customer’s development environment, hosted in Azure but also done rather frugally (my virtual machine was running SQL, MIM Sync, all the dependencies for the MIM Portal, and then the MIM Service/Portal itself). FWIW the initially provisioned VM was an Azure DS1v2 server. It seems a character may have been lost in the VM request, where an Azure DS11v2 server would have been more appropriate.

I re-sized the Azure VM, choosing to go with an Azure DS3v2 VM size, kicked off the Microsoft Identity Manager Service & Portal installation again and … success.


Hope this helps someone else who may find themselves in a similar position.

Follow Darren on Twitter @darrenjrobinson


Microsoft Identity Manager installation error “Internal Error 2337. 0,”

Today I was doing a fresh installation of Microsoft Identity Manager 2016 with Service Pack 1 into a new development environment. The exact binary is “en_microsoft_identity_manager_2016_with_service_pack_1_x64_dvd_9270854”.

Not too far into the installation of the Microsoft Identity Manager Synchronization Server, I got the “Internal Error 2337. 0,” error as shown below.

Doing a few searches didn’t throw me any bones. I could see that the installation had added the MIM Sync Server Service Account to the Logins on the SQL Server.

I then recalled that there was an updated version that was released just before Xmas (14 Dec 2016). The exact binary is “en_microsoft_identity_manager_2016_with_service_pack_1_x64_dvd_9656597” and you can get the updated Microsoft Identity Manager 2016 with SP1 media here.

Re-running the installation and … success.


Now I’m not going to spend any time trying to figure out what is bad with the base MIM 2016 SP1 media; I’m just going to pretend it never existed and use the latest. Onwards and upwards.

Follow Darren on Twitter @darrenjrobinson






Azure AD Connect pass-through authentication. Yes, no more AD FS required.

Originally posted on Lucian’s blog. Follow Lucian on Twitter: @LucianFrango.


Yesterday I received a notification email from Alex Simons (Director of PM, Microsoft Identity Division) which started like this:

Today’s news might well be our biggest news of the year. Azure AD Pass-Through Authentication and Seamless Single Sign-on are now both in public preview!

So I thought I’d put together a streamlined overview of what this means for authentication with regards to the Microsoft Cloud and my thoughts on if I’d use it.



What is Azure AD pass-through auth?

When working with the Microsoft Cloud, organisations from small business to enterprise leverage the ability to have common credentials across on-premises directories (ADDS) and the cloud. It’s the best user experience, and the best IT management experience as well. The overhead of facilitating this can be quite a large endeavour. There are all the complexities of AD FS and AADConnect to work through, built with high availability and disaster recovery in mind, as this core identity infrastructure needs to be online 24/7/365.

Azure AD Pass-through authentication (public preview) simplifies this down to Azure AD Connect. This new feature can, YES, do away with AD FS. I’m not in the “I hate AD FS” boat. I think as a tool it does a good job: proxying authentication from external to ADDS and from Kerberos to SAML. For all those out there, I know you’re out there, this would be music to your ears.

Moreover, if you wanted to enjoy SINGLE SIGN ON, you needed AD FS. Now, with pass-through authentication, SSO works with just Azure AD Connect. This is a massive win!

So, what do I need?

Nothing too complicated or intricate. The caveat is that support for this public preview feature is limited, as with all preview offerings from Azure. Don’t get too excited and roll this out into production just yet! Dev or test it as much as possible and get an understanding of it, but don’t go replacing AD FS quite yet.

Azure AD Connect generally needs a few ports to communicate with ADDS on-premises and Azure AD in the cloud, the key port being TCP 443. With pass-through authentication, there are ~17 other ports (10 of which are in a single range) that need to be opened up for communication. While this could be locked down at the firewall level to just Azure IPs, subnets and hosts, it will still make security teams question and probe for details.

The version of AADConnect also needs to be updated to the latest, released on December 7th (2016). From version 1.1.371.0, the preview feature is now publicly available. Consider upgrade requirements as well before taking the plunge.

What are the required components?

Apart from the latest version of Azure AD Connect, I touched on a couple more items required to deploy pass-through authentication. The following rapid-fire list outlines what is required:

  • A current, latest, version of Azure AD Connect (v 1.1.371.0 as mentioned above)
  • Windows Server 2012 R2 or higher is listed as the operating system for Azure AD Connect
    • If you still have Server 2008 R2, get your wiggle on and upgrade!
  • A second Windows Server 2012 R2 instance to run the Azure App Proxy .exe to leverage high availability
    • Yes, HA, but not what you think… more details below
  • New firewall rules to allow traffic to a couple of wildcard subdomains
  • A bunch of new ports to allow communication with the Azure Application proxy

For a complete list, check out the article with detailed config.

What is this second server business? AADConnect has HA now?

Much like AD FS being designed to provide high availability for that workload, there is the ability to provide some HA around the connectors required for the pass-through auth process. This is where it gets a little squirrelly. You won’t be deploying a second Azure AD Connect server and load balancing the two.

The second server is actually an additional server, I would imagine a vanilla Windows Server instance, on which a deployment of Azure AD Application Proxy is downloaded, executed and run.

The Azure Application Proxy is enabled in Azure AD, and the client (AADApplicationProxyConnectorInstaller.exe) downloaded and run (as mentioned above). This then introduces two services that run on the Windows Server instance and provide that connector to Azure AD. For more info, review the article that outlines the setup process.



Would I use this?

I’ll answer this in two ways, then I’ll give you my final verdict.

Yes, I would use this. Simplifying what was “federated identity” to “hybrid identity” has quite a few benefits, most importantly the reduced complexity and reduced requirements to maintain the solution. This reduced overhead can maintain a higher level of security: no credentials (rather, passwords) stored in the cloud at all. No hashes. Nada. Zilch. Zero. Security managers and those tightly aligned to government regulations: a small bit of rejoicing for you.

No, I would not use this. AD FS is a good solution. If I had an existing AD FS solution that tied in with other applications, I would not go out of my way to change that just for Office 365 / Azure AD. Additionally, in-preview functionality does not have the same level of SLA support. In fact, Azure preview features are provided “as is”. That is not ideal for production environments. Sure, I’d put this in a proof of concept, but I wouldn’t recommend rolling it out anytime soon.

Personally, I think this is a positive piece of SSO evolution within the Microsoft stack. I would certainly be trying to get customers to proof-of-concept this and trial it in dev or test environments. It can further streamline identity deployments and could well deter customers from ever needing or deploying AD FS, thus saving on instance cost and the managed services cost of supporting the infrastructure. Happy days!

Final thoughts

Great improvement, great step forward. If this had been announced at Microsoft Ignite a couple of months back, I think it would have been big news. Try it, play with it, send me your feedback. I want to always be learning, improving and finding out new gotchas.



Real world Azure AD Connect: the case for TWO Azure AD Connect servers

Originally posted on Lucian’s blog. Follow Lucian on Twitter @LucianFrango.


I was exchanging some emails with an account manager (Andy Walker) at Kloud and thought the exchange would make for some interesting reading. Here’s the outcome in an expanded and much more helpful (to you, dear reader) format…



When working with the Microsoft Cloud and in particular with identity, depending on some of the configuration options, it can be quite important to have Azure AD Connect highly available. Unfortunately for us, Microsoft has not developed AADConnect to be highly available. That’s not ideal in today’s 24/7/365 cloud-centric IT world.

When disaster strikes, and yes that was a deliberate use of the word WHEN, with email, files, real-time communication and everything in between now living up in the sky with those clouds, should your primary data centre be out of action, there is a high probability that a considerable chunk of functionality will be affected by your on-premises outage.

Azure AD Connect

AADConnect’s purpose, its only purpose, is to synchronise on-premises directories to Azure AD. There is no other option. You cannot sync one ADDS forest to another with Azure AD Connect. So this simple and lightweight tool only requires a single deployment in any given ADDS forest.

Even when working with multiple forests, you only need one. It can even be deployed on a domain controller, which can save an instance when working with the cloud.

The case for two Azure AD Connect servers

As of Monday April 17th 2017, just 132 days from today (Tuesday December 6th), DirSync and Azure AD Sync, the predecessors of Azure AD Connect, will be deprecated. This is good news. It means there is an opportunity to review your existing synchronisation architecture and take advantage of a feature of AADConnect: deploying two servers.


Staging mode is fantastic. Yes, short and sweet and to the point. It’s fantastic. Enough said, blog done; use it and we’re all good here? Right?

Okay, I’ll elaborate. Staging mode allows for the following rapid-fire benefits which greatly improve IT as a result:

1 – Redundancy and disaster recovery, not high availability

I’ve read in certain articles that staging mode offers high availability. I disagree and argue it offers redundancy and disaster recovery. High availability is putting into practice architecture and design around applications, systems or servers that keeps those operational and accessible as near as possible to 100% uptime, or always online, mostly through automated solutions.

I see staging mode as offering the ability to manually bring online a secondary AADConnect server should the primary be unavailable. This could be due to a disaster or a scheduled maintenance window. Therefore it makes sense to deploy a secondary server in an alternate data centre, site or geographic location to your primary server.

To do the manual update of the secondary AADConnect server config and bring it online, the following needs to take place:

  • Run Azure AD Connect
    • That is the AzureADConnect.exe, which is usually located in the Microsoft Azure Active Directory Connect folder in Program Files
  • Select Configure staging mode (current state: enabled), select next
  • Validate credentials and connect to Azure AD
  • Configure staging mode – un-tick the checkbox
  • Configure and complete the change over
  • Once this has been run, run a full import/full sync process (a PowerShell sketch of the equivalent steps follows this list)
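If you prefer PowerShell over the wizard, the ADSync module that ships with AADConnect can drive the same changeover; a minimal sketch, assuming the old primary is still reachable:

# On the current primary: put it into staging mode first
Set-ADSyncScheduler -StagingModeEnabled $true

# On the secondary: check its current state, then take it out of staging mode
Get-ADSyncScheduler | Select-Object StagingModeEnabled, SyncCycleEnabled
Set-ADSyncScheduler -StagingModeEnabled $false

# Kick off the full import/full sync cycle
Start-ADSyncSyncCycle -PolicyType Initial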

Since this is a manual process which takes time and breaks service availability to complete (importantly: you can’t have two AADConnect servers synchronising to a single Azure AD tenant), it’s not highly available.

2 – Update and test management

AADConnect now offers (since February 2016) an in-place automatic upgrade feature. Updates are rolled out by Microsoft and installed automatically. Cue the alarm bells!

I’m going to take a stab at making an analogy here: you wouldn’t have your car serviced while it’s running and taking you from A to B, so why would you have a production directory synchronisation server upgraded while it’s running? A bit of a stretch, but can you see where I was going there?

Having a secondary AADConnect server allows updates and/or server upgrades without impacting the core functionality of directory synchronisation to Azure AD. While this seems trivial, it’s certainly not trivial in my books. I’d want my AADConnect server functioning correctly all the time with minimal downtime. Having the two servers can allow for a 15-minute (depending on how fast you can click) managed changeover from server to server.

3 – Improved management and less impact on production

Lastly, this is more of a stretch, but I think it’s good practice not to have user accounts assigned Domain Admin rights. The same applies to logging into production servers to do trivial tasks. Having a secondary AADConnect server allows IT administrators or service desk engineers to complete troubleshooting of sync processes for the service or objects away from the production server. That, again in my books, is a better practice and just as efficient as using the primary or production servers.

Final words

Staging mode is fantastic. I’ve said it again. Since I found out about staging mode around this time last year, I’ve recommended two servers to every customer I’ve mentioned AADConnect to, referencing the 3 core arguments listed above. I hope that’s enough to convince you, dear reader, to do so as well.



Real world Azure AD Connect: multi forest user and resource + user forest implementation

Originally posted on Lucian’s blog. Follow Lucian on Twitter @LucianFrango.


Disclaimer: During October I spent a few weeks working on this blog post’s solution at a customer and had to do the responsible thing and pull the pin on further time as I had hit a glass ceiling. I had reached what I thought was possible with Azure AD Connect. In comes Nigel Jones (Identity Consultant @ Kloud) who, through a bit of persuasion from Darren (@darrenjrobinson), took it upon himself to smash through that glass ceiling of Azure AD Connect and figure this solution out. Full credit and a big high five!



  • Azure AD Connect multi-forest design
  • Using AADC to sync user/account + resource shared forest with another user/account only forest
  • Why it won’t work out of the box
  • How to get around the issue and leverage precedence to make it work
  • Visios on how it all works for easy digestion


In true Memento style, after the quick disclaimer above, let me take you back for a quick background of the solution and then (possibly) blow your mind with what we have ended up with.

Back to the future

A while back in the world of directory synchronisation with Azure AD, to have a user and resource forest solution synchronised required the use of Microsoft Forefront Identity Manager (FIM), now Microsoft Identity Manager (MIM). From memory, you needed the former of those products (FIM) whenever you had a multi-forest synchronisation environment with Azure AD.

Just like Marty McFly, Azure AD synchronisation went from relative obscurity to the mainstream. In doing so, there have been many advancements and improvements that negate the need to ever deploy FIM or MIM for all but the more complex environments.

When Azure AD Connect, then Azure AD Sync, introduced the ability to synchronise multiple forests in a user + resource model, it opened the door for a lot of organisations to streamline the federated identity design for Azure and Office 365.


In the beginning…

The following outlines a common real-world scenario for numerous enterprise organisations. In this environment we have an existing Active Directory forest which includes an Exchange organisation, SharePoint, Skype for Business and many more common services and infrastructure. The business grows, and with that wealth and equity purchases another business to diversify or expand. With that comes integration and the sharing of resources.

We have two companies: Contoso and Fabrikam. A two-way trust is set up between the ADDS forests and users can start to collaborate and share resources.

In order to use Exchange, which is the most common example, we need to start to treat Contoso as a resource forest for Fabrikam.

Over at the Contoso forest, IT creates disabled user objects and linked mailboxes for Fabrikam users. In the on-premises world, this works fine. I won’t go into too much more detail, but I’m sure that you, Mr or Mrs Reader, understand the particulars.

In summary, Contoso is a user and resource forest for itself, and a resource forest for Fabrikam. Fabrikam is simply a user forest with no deployment of Exchange, SharePoint etc.

How does a resource forest topology work with Azure AD Connect?

For the better part of two years now, going back to when AADConnect was AADSync, Microsoft has supported multi-forest connectivity. Last year, Jason Atherton (awesome Office 365 consultant @ Kloud) wrote a great blog post summarising this compatibility and usage.

In AADConnect, a user/account and resource forest topology is supported. The supported topology assumes that a customer has that simple, no-nonsense architecture. There’s no room for any shared funny business…

AADConnect is able to select the two forests’ common identities and merge them before synchronising to Azure AD. This process uses the attributes associated with the user objects (objectSID in the user/account forest and msExchMasterAccountSID in the resource forest) to join the user account and the resource account.

There is also the option for customers to have multiple user forests and a single resource forest. I’ve personally not tried this with more than two forests, so I’m not confident enough to say how additional user/account forests would work out. However, please try it out and be sure to let me know your results via the comments below, via Twitter, or by email!

Quick note: you can also merge two objects by a sAmAccountName-to-sAmAccountName attribute match, or by specifying any other ADDS attribute to match between the forests.



If you’d like to read up on this a little more, here are two articles that reference the above-mentioned topologies in detail:

Why won’t this work in the example shown?

Generally speaking, the first forest to sync in AADConnect in a multi-forest implementation is the user/account forest, which is likely the primary/main forest in an organisation. Let’s assume this is the Contoso forest. This will be the first connector to sync in AADConnect. It will have the lowest precedence number as well, and with AADConnect, the lower the designated precedence number, the higher the priority.

When the additional user/account forest(s) or the resource forest is added, those connectors run after the initial Contoso connector due to the default precedence set. From an external perspective, this doesn’t seem like much of a big deal. AADConnect merges two matching or mirrored user objects by way of the (commonly used) objectSID and msExchMasterAccountSID and away we go. In theory, precedence shouldn’t really matter.

Give me more detail

The issue is that precedence does indeed matter when we go back to our Contoso and Fabrikam example. Here’s what happens:


  • #1 – Contoso is sync’ed to AADC first as it was the first forest connected to AADC
    • Adding in Fabrikam first over Contoso doesn’t work either
  • #2 – The Fabrikam forest is joined with a second forest connector
  • AADC is configured with the “user identities exist across multiple directories” option
  • objectSID and msExchMasterAccountSID is selected to merge identities
  • When the objects are merged, sAmAccountName is taken from the Contoso forest – #1
    • This happens for Contoso forest users AND Fabrikam forest users
  • When the objects are merged, mail or primarySMTPaddress is taken from the Contoso forest – #1
    • This happens for Contoso forest users AND Fabrikam forest users
  • Should the two objects not have a completely identical set of attributes, the attributes that are set are pulled
    • In this case, most of the user object details come from Fabrikam – #2
    • Attributes like the user’s first name, last name, employee ID, branch / office

The result in this standard setup is having Fabrikam users with their resource accounts in Contoso synced, but with their UPN set with the suffix from the Contoso forest. An example would be a UPN of user@contoso.com rather than the desired user@fabrikam.com. When this happens, there is no SSO, as Windows Integrated Authentication in the Fabrikam forest does not recognise the Contoso forest UPN suffix of @contoso.com.

Yes, even with ADDS forest trusts configured correctly and UPN routing etc. all working correctly, authentication just does not work. AADC uses the incorrect attributes and syncs those to Azure AD.

Is there any other way around this?

I’ve touched on and referenced precedence a number of times in this blog post so far. The solution is indeed precedence. The issue I had experienced came from a lack of understanding of precedence in AADConnect. Sure, it works at a connector rule level, with precedence set by AADConnect during the configuration process as forests are connected.

Playing around with precedence was not something I wanted to do, as I didn’t have enough Microsoft Identity Manager or Forefront Identity Manager background to be certain of the outcome of the joining/merging process of user and resource account objects. I knew that FIM/MIM has the option of attribute-level precedence, which is what we really wanted here, so my thinking was that we needed FIM/MIM to do the job. Wrong!

In comes Nigel…

Nigel dissected the requirements over the course of a week. He reviewed the configuration in an existing FIM 2010 R2 deployment and worked out what was needed of AADConnect. Having got AADConnect set up, all that was required was tweaking a couple of the inbound rules and moving them higher up the precedence order.

Below is the AADConnect Sync Rules editor output from the final configuration of AADConnect:


The solution centres around the main precedence rule: rule #1 for Fabrikam (red arrows and yellow highlight) is moved above the highest (and default) Contoso rule (originally #1). When this happened, AADConnect was able to pull the correct sAmAccountName and mail attributes from Fabrikam and keep all the other attributes associated with Exchange mailboxes from Contoso. Happy days!
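If you want to eyeball the precedence order outside of the Sync Rules Editor, the ADSync module on the AADConnect server can list it; a small sketch:

# List inbound sync rules in precedence order (lowest number = highest priority)
Get-ADSyncRule | Where-Object { $_.Direction -eq 'Inbound' } |
    Sort-Object Precedence | Select-Object Precedence, Name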

Final words

Tinkering around with AADConnect shows just how powerful the “cut down FIM/MIM” application is. While AADConnect lacks the in-depth configuration and customisation that you find in FIM/MIM, it packs a lot in a small package! #Impressed



Azure AD Connect – Using AuthoritativeNull in a Sync Rule

There is a feature in Azure AD Connect that became available in the November 2015 build 1.0.9125.0 (listed here) which has not had much fanfare, but can certainly come in handy in tricky situations. I happened to be working on a project that required the DNS domain linked to an old Office 365 tenant to be removed so that it could be used in a new tenant. Although the old tenant was no longer used for Exchange Online services, it held onto the domain in question, and Azure AD Connect was being used to synchronise objects between the on-premises Active Directory and Azure Active Directory.

Trying to remove the domain using the Office 365 Portal will reveal if there are any items that need to be remediated prior to removing the domain from the tenant, and for this customer it showed that there were many user and group objects that still had the domain in the userPrincipalName value and in the mail and proxyAddresses attribute values. The AuthoritativeNull literal could be used in this situation to blank out these values against users and groups (i.e. Distribution Lists) so that the domain can be released. I’ll attempt to show the steps in a test environment, and bear with me as this is a lengthy blog.

Trying to remove the domain listed the items needing attention, as shown in the following screenshots:

This report showed that three actions are required to remove the domain values out of the attributes:

  • userPrincipalName
  • proxyAddresses
  • mail

userPrincipalName is simple to resolve by changing the value in the on-premise Active Directory using a different domain suffix, then synchronising the changes to Azure Active Directory so that the default or another accepted domain is set.

Clearing the proxyAddresses and mail attribute values is possible using the AuthoritativeNull literal in Azure AD Connect. NOTE: You will need to assess the outcome of performing these steps depending on your scenario. For my customer, we were able to perform these steps without affecting other services required from the old Office 365 tenant.

Using the Synchronization Rules Editor, locate and edit the In from AD – User Common rule. Editing the out-of-the-box rules will display a message suggesting you create an editable copy of the rule and disable the original rule, which is highly recommended, so click Yes.

The rule is cloned as shown below and we need to be mindful of the Precedence value which we will get to shortly.

Select Transformations and edit the proxyAddresses attribute, set the FlowType to Expression, and set the Source to AuthoritativeNull.

I recommend setting the Precedence value in the new cloned rule to be the same as the original rule, in this case value 104. Firstly, edit the original rule to a value such as 1001; you will also notice the original rule is already set to Disabled.

Set the cloned rule Precedence value to 104.

Prior to performing a Full Synchronization run profile to process the new logic, I prefer and recommend performing a Preview by selecting an affected user and previewing a Full Synchronization change. As can be seen below, the proxyAddresses value will be deleted.

The same process would need to be done for the mail attribute.

Once the rules are set, launch the following PowerShell command to perform a Full Import/Full Synchronization cycle in Azure AD Connect:

  • Start-ADSyncSyncCycle -PolicyType Initial

Once the cycle is completed, attempt to remove the domain again to check if any other items need remediation, or you might see a successful domain removal. I’ve seen it take up to 30 minutes or so before being able to remove the domain once all remediation tasks have been completed.
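If you prefer to script the check and removal rather than using the portal, the MSOnline module can do it; the domain name below is a placeholder, and this assumes Connect-MsolService has already been run:

# Check the domain status, then attempt removal once remediation is complete
Get-MsolDomain -DomainName "olddomain.com"
Remove-MsolDomain -DomainName "olddomain.com"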

There will be other scenarios where using the AuthoritativeNull literal in a Sync Rule will come in handy. What others can you think of? Leave a description in the comments.

Forcing MFA in Amazon Web Services

Many organisations will want to enforce MFA as an added security layer for their users. As each service is different, in some cases enforcing MFA may not be as easy as it sounds.

In AWS, an administrator cannot simply “tick” to enable MFA on all users (as of this writing). However, MFA can be enforced on API calls to “force” a user to set up MFA. Think of it as a backdoor to forcing or enabling MFA on all your IAM users.

The only way that can be achieved is by creating a policy. The policy is a JSON document. You could use the AWS console for that, or Visual Studio.

In this blog, I will demonstrate how to enforce MFA in AWS for all your IAM users. This blog will not go through setting up MFA for IAM users; that can be achieved by following the steps here.

This blog assumes you have some AWS knowledge and experience, and will not go into detail on how to attach a policy or create one step by step; that can be found here.

How Does AWS define MFA?

AWS Multi-Factor Authentication (MFA) is a simple best practice that adds an extra layer of protection on top of your user name and password. With MFA enabled, when a user signs in to an AWS website, they will be prompted for their user name and password (the first factor—what they know), as well as for an authentication code from their AWS MFA device (the second factor—what they have). Taken together, these multiple factors provide increased security for your AWS account settings and resources.

You can enable MFA for your AWS account and for individual IAM users you have created under your account. MFA can also be used to control access to AWS service APIs.

You can read more about it here.

What is API Calling?

Enforcing MFA on API calls means that an IAM user, for example, will not be able to make any changes to EC2 instances, or launch new EC2 instances, unless they’re MFA-authenticated. This applies even if the user has full admin permissions.

This is not to be confused with enforcing MFA upon signing in; the IAM user will still have to set up MFA to complete tasks in AWS.

It’s not very descriptive, but an IAM user will be faced with the following error messages if MFA isn’t set up.


In terms of Route53, IAM users will see different error messages.


In this case, it is up to administrators to let IAM users know that MFA should be set up upon signing in.

Setting up and creating the MFA Policy

The MFA policy that we will create and apply will only allow the use of IAM services, so users can set up their MFA, among other configurations.


Creating the MFA script

Before you start, you will need your IAM ARN (Amazon Resource Name). For IAM users, this will look like “arn:aws:iam::123456789:user/test”.

For groups, it will look like “arn:aws:iam::123456789:group/AmazonEC2FullAccess”.

Basically, the ARN contains your AWS account ID, which you can find under “My Account”.

The ARN will be used in the script, and also for particular users or groups if required (explained in the examples below). Nevertheless, in this script we will enforce MFA on all groups and users by default.

Don’t forget to replace the account ID in the ARNs with your own. (The action lists below follow AWS’s published sample policy for allowing users to manage their own MFA devices.)

"Version": "2012-10-17",
"Statement": [
"Sid": "AllowAllUsersToListAccounts",
"Effect": "Allow",
"Action": [
"Resource": [
"Sid": "AllowIndividualUserToSeeTheirAccountInformation",
"Effect": "Allow",
"Action": [
"Resource": [
"Sid": "AllowIndividualUserToListTheirMFA",
"Effect": "Allow",
"Action": [
"Resource": [
"Sid": "AllowIndividualUserToManageThierMFA",
"Effect": "Allow",
"Action": [
"Resource": [
"Sid": "DoNotAllowAnythingOtherThanAboveUnlessMFAd",
"Effect": "Deny",
"NotAction": "iam:*",
"Resource": "*",
"Condition": {
"Null": {
"aws:MultiFactorAuthAge": "true"

If you want to enforce MFA on particular groups or users, you will have to include their ARNs in the Resource entries, as follows.


"Resource": [
    "arn:aws:iam::123456789:group/*"
]

Note that in the script we specified all groups by adding * at the end of the resource. The same applies for users if you need to include all users.


"Resource": [
    "arn:aws:iam::123456789:user/*"
]

Previously created users may have to have this policy attached to them manually; IAM users created after this policy is in place will have it attached by default.
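As a sketch of automating the policy creation and attachment with the AWS Tools for PowerShell (the file, policy and group names are hypothetical):

# Create the managed policy from the JSON document and attach it to a group
$doc = Get-Content -Raw -Path ".\force-mfa-policy.json"
$policy = New-IAMPolicy -PolicyName "ForceMFA" -PolicyDocument $doc
Register-IAMGroupPolicy -GroupName "AllUsers" -PolicyArn $policy.Arn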

I hope you found this blog informative. Please feel free to post your comments below.

Thanks for reading.

Auto-Acceleration for SharePoint Online

Working with one of my colleagues recently, we were tasked with implementing Smart Links to speed up the login processes for a client’s SharePoint Online implementation.

The client was working towards replacing their on-premises implementation of SharePoint and OpenSpaces with SharePoint Online. The issue they faced was that when a user tries to access a SharePoint Online site collection and is not already authenticated with Office 365, the user is directed to the default Microsoft Online login page. The user then provides their email address, and something called home-realm discovery is performed: the Microsoft Online login screen uses the domain portion of the user’s email address to determine whether the domain is managed by Office 365, in which case the user needs to provide a password. If the domain is federated (i.e. not managed by Office 365), the user is redirected to the Identity Provider (IdP) for the user’s organisation, i.e. AD FS. At this point, provided internet security settings within the user’s web browser are correct, they will be authenticated using their logged-in credentials. Although this is the default SharePoint Online experience, it was different from the seamless on-premises SharePoint experience with Windows Integrated Authentication.

After spending a little time working through David Ross’ great blog on the topic, we discovered that due to a change in the URL construct with AD FS 3.0, the Smart Link process is no longer supported by Microsoft and has been replaced with Auto-Acceleration for SharePoint Online. This feature reduces logon prompts for the user by instructing SharePoint Online to use a pre-defined home-realm, rather than perform home-realm discovery, when a user accesses the site and “accelerating” the user through the logon process.

However, there are two caveats for getting Auto-Acceleration to work:

  • You can have multiple domains, but there must be a single SSO endpoint to authenticate users (AD FS, or a 3rd-party IdP such as Okta, can be used).
  • Auto-acceleration only works with sites that are accessible internally to the organisation; it will not work for external sites.

There isn’t much documentation around for this feature; however, it’s pretty simple:

  1. If you haven’t already, download the SharePoint Online Management Shell
  2. Connect to SharePoint Online using the SPO Management Shell:
    $adminUPN="[UPN for Office 365 Admin]"
    $orgName="[Your tenant name]"
    $userCredential = Get-Credential -UserName $adminUPN -Message "Type the password."
    Connect-SPOService -Url https://$orgName-admin.sharepoint.com -Credential $userCredential
  3. Type the command Set-SPOTenant -SignInAccelerationDomain "[your federated domain]"
  4. If your IdP supports guest users, you can also run Set-SPOTenant -EnableGuestSignInAcceleration $true (you can verify both settings afterwards, as shown below)
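To verify the settings took effect, the tenant properties can be read back (the property names mirror the Set-SPOTenant parameters):

Get-SPOTenant | Select-Object SignInAccelerationDomain, EnableGuestSignInAcceleration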