One of the benefits of cloud services is the continual enhancements that vendors deliver based on feedback from their customers. One piece of feedback Microsoft has heard often is the request to know what state a Guest user in Azure AD is in. In the last couple of days Microsoft exposed two additional attributes on the User objectClass in Azure AD: the invitation state of a B2B user, and the datetime that state last changed.
This means we can now query the Microsoft Graph for B2B users and understand if they have Accepted or are PendingAcceptance, and the datetime of the last change.
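As a sketch, such a query might look like the following PowerShell. The attribute names shown (externalUserState, externalUserStateChangeDateTime) are those Microsoft Graph exposes for this; the $token variable is assumed to already hold a valid access token for Microsoft Graph.

```powershell
# Sketch: list B2B guests still pending acceptance via Microsoft Graph.
# Assumes $token already holds a valid bearer token for https://graph.microsoft.com
$headers = @{
    Authorization    = "Bearer $token"
    ConsistencyLevel = "eventual"   # filtering on externalUserState requires advanced query support
}
$uri = "https://graph.microsoft.com/v1.0/users?" +
       "`$count=true&" +
       "`$filter=externalUserState eq 'PendingAcceptance'&" +
       "`$select=displayName,mail,externalUserState,externalUserStateChangeDateTime"
$pending = (Invoke-RestMethod -Uri $uri -Headers $headers -Method Get).value
$pending | Select-Object displayName, mail, externalUserStateChangeDateTime
```

Swap 'PendingAcceptance' for 'Accepted' to retrieve the other population.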
My customers have been wanting such information and would like to report on it. Here is an example PowerShell script I’ve put together that queries Microsoft Graph to locate B2B Users in the Accepted and PendingAcceptance states and outputs summary information into two HTML Reports: one for Accepted and one for PendingAcceptance.
Update the following script:
Line 2 for a Microsoft Azure PowerShell Module such as AzureAD that has the Microsoft.IdentityModel.Clients.ActiveDirectory.dll library in it
Line 5 for your Tenant name
Lines 15 and 16 for a Username and Password
Line 19 for where you want the Reports written to
Running the Script will then generate the Accepted and Pending HTML Reports.
Here is an example of the AcceptedB2BUsers HTML Report.
With the addition of these two attributes to the Microsoft Graph, we can now query for B2B users based on their state and, using PowerShell, quickly report on them.
Earlier this year Microsoft released the Microsoft Identity Manager Azure AD B2B Management Agent. I wrote about using it to write to Azure AD in this post here. As detailed in that post, my goal was to write to Azure AD using the MA, and I provided an incomplete example of doing that for Guests. This post fills in that gap, and contrary to what the note preceding that post indicates, I’ve updated my MA to use the Graph API over the Azure AD PowerShell Module. It does, though, work in unison with the Microsoft Azure AD B2B Management Agent.
The process is:
Using the Microsoft Azure B2B Management Agent, connect to an Azure AD Tenant that contains users that you want to invite as Guests to your Tenant. Flow in the naming information for users, their email address, and any other metadata you need to drive the logic for who you wish to invite.
Use my Azure AD B2B Invitation Management Agent to automate the invitation of users to your Azure AD Tenant using Azure AD B2B
My Azure AD B2B Invitation Management Agent works in two phases:
Invitation of Users as Guests
Update of Guests with naming information (Firstname, Lastname, DisplayName)
The Azure AD B2B Invite Management Agent uses my favorite PowerShell Management Agent (the Granfeldt PSMA). I’ve posted many times on how to configure it. See these posts here if you are new to it.
Setting up an Azure AD User Account for the Management Agent
In your Azure AD create a New User that will be used by the Management Agent to invite users to your Azure AD. I named mine B2B Inviter as shown below.
You then want to assign them the Guest inviter role as shown below. This will be enough permissions to invite users to the Azure AD.
However, depending on how you want these invitees to look, you probably also want their names to be kept consistent with their home Azure AD. To enable the Management Agent to do that, you also need to assign it the User administrator role as shown below.
Now log in to Azure AD using that account and change the password. The account is then ready to go.
I’ve kept the schema small with just enough interesting info to facilitate the functionality required. Expand it if you need additional attributes and update the import.ps1 accordingly.
The Import script imports users from the Azure AD Tenant that you will be inviting remote Azure AD users to (as Guests).
Change line 10 for your file path
Change line 24 for the version of an AzureAD or AzureADPreview PowerShell Module that you have installed on the MIM Sync Server so that the AuthN Helper Lib can be used. Note if using a recent version you will also need to change the AuthN calls as well as the modules change. See this post here for details.
Change line 27 for your tenant name
Change line 47/48 for a sync watermark file
The Import script also contains an attribute from the MA Schema named AADGuestUser that is a boolean attribute. I create the corresponding attribute in the MetaVerse and MIM Service Schemas for the Person/User objectClasses. This is used to determine when a Guest has been successfully created so their naming attributes can then be updated (using a second synchronisation rule).
The Export script handles the creation (invitation) of users from another Azure AD Tenant as Guests, as well as synchronising their naming data. It doesn’t include deletion logic, but it is simple enough to include a deletion API call based on your MA deprovisioning logic and requirements.
By default I’m not sending invitation notifications. If you want to send invitation notifications, change "sendInvitationMessage" = $false to $true on Line 129. You should then also change the Invitation Reply URL on line 55 to your Tenant/Application.
Change Line 10 for the path for the debug logging
Change Line 24 as per the Import Script if you are using a different version of the help lib
Change Line 27 for your Azure AD Tenant Name
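For context, the invitation the Export script performs is essentially a POST to the Graph /invitations endpoint. Below is a minimal sketch: the email address and redirect URL are placeholder values, and $token is assumed to hold a valid Graph access token.

```powershell
# Sketch: invite a single guest via the Microsoft Graph invitation manager.
# Placeholder values throughout - in the MA these come from the CS object and config.
$invitation = @{
    invitedUserEmailAddress = "rick.sanchez@example.com"   # placeholder guest address
    inviteRedirectUrl       = "https://portal.azure.com"   # your reply URL (line 55)
    sendInvitationMessage   = $false                       # line 129 - set $true to notify
} | ConvertTo-Json
Invoke-RestMethod -Uri "https://graph.microsoft.com/v1.0/invitations" `
    -Method Post `
    -Headers @{ Authorization = "Bearer $token" } `
    -Body $invitation `
    -ContentType "application/json"
```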
Declarative Sync Rules
I’m not going to cover import flow configurations on the MS Azure AD B2B MA here. See here for that. Below details how I’ve configured my Invitation MA for the Creation/Export functions. My join rule (configured in the Sync Engine Invitation MA Config) is email address as shown below. Not the best anchor as it isn’t immutable. But as we don’t know what the DN is going to be until after it is created this is the next best thing.
Creation Sync Rule
Here are the three attributes I’m syncing to the B2B Invite Management Agent to perform the invitation. I’m using the mail attribute for the DN as it matches the anchor for the schema. We don’t know what objectID will be assigned until the directory service does it. By using email/upn once created we will get the join and won’t end up with two objects on the MA for the same user.
For Inbound I’m flowing in the AADGuestUser boolean value. You will need to create this attribute in the MetaVerse and then the MIM Service. In the MIM Service allow the Sync Service Account to manage the attribute and change the MIM Service Filter Permissions to allow Admins to use the attribute in Sets. Also on the MIM Service MA add an Export flow from the MV to the MIM Service for AADGuestUser.
Naming Update Sync Rule
The second Sync Rule is to update the guest’s GivenName, Surname and DisplayName. This sync rule has a dependency on the creation sync rule and has a corresponding Set, Workflow and MPR associated with the value of the AADGuestUser boolean attribute populated by the Import script. If True (which it will be after successful creation and the confirming import), the second synchronisation rule will be applied.
It will then trigger an export flow for the three naming attributes.
Example of Inviting Guests
In this example Rick Sanchez is a member of a guest organisation and meets the criteria of my rules to be invited as a guest to our Tenant. We then see an Add for Rick on the B2B Invite MA.
On export Rick is created in our Azure AD as a Guest User
Rick appears in Azure AD as a Guest via the Azure Portal.
Following the confirming import our second sync rule fires and flows through an update to DisplayName and adds GivenName and Surname.
These naming attributes are then successfully exported.
Going to the Azure AD Portal we see that Rick has indeed been updated.
If you enable notification emails, a generic notification email is sent like the one shown below. The import.ps1 MA script has them disabled by default.
Using a combination of the Microsoft Azure AD B2B Management Agent and my Azure AD B2B Invitation Management Agent you can automate the invitation of Guest users to your Azure AD Tenant.
Unfortunately (as with most things auth-related), there are some gotchas to be aware of. One relates to how ADAL obtains refresh tokens in this crazy world of implicit auth.
Implicit Auth Flow
Implicit auth allows for the application developer to not have to host their own token authentication service. The ADAL.js and the Azure AD auth endpoint do all the heavy lifting:
It’s the bottom third of the diagram (after the token expires) that causes the issue I am addressing in this post. This is where ADAL.js creates a hidden iframe (browser fragment) that sends a request to get a fresh token. This will show up in the DOM (if you inspect in the browser dev tools) as an iframe element with an ID of “adalRenewFrame” followed by the endpoint that it is renewing the token for (in the below example this is https://graph.microsoft.com).
What’s the problem?
What’s the solution?
The ADAL.js team recommend a couple of different approaches to getting around this issue in their FAQ page on GitHub. The simpler solution is to control how your app is bootstrapped so that it only loads if window === window.parent (i.e. it isn’t in an iframe), which is fine if you have this kind of control over how your app starts (like with AngularJS or React). But this won’t always suit.
The other option is to have an alternative redirect page that is targeted after the iframe renews the token (by specifying it in the ADAL config with the redirectUri property). N.B. you have to specify the exact url in the Azure AD app settings for your app as well.
Suffice it to say, just pointing to an empty page doesn’t do the trick and there is a bunch of hassle with getting everything working (see the comments on this gist for the full adventure), but to cut a long story short – here’s what worked for me.
The redirect page itself redirects back to our SPA (in this case, the root of the web app) only if window === window.parent (i.e. it is not in an iframe), and passes the token etc. along in the window.location.hash as well.
Hope this saves you some time/grief with getting this all working. I haven’t seen a satisfactory write up of a solution to this issue before (hence this post).
Any questions, suggested improvements, please let me know in the comments!
‘Till next time…
In order to utilise Azure AD credentials that are synchronised from on-premises, synchronisation of NTLM/Kerberos credential hashes must be enabled in Azure AD Connect; this is not enabled by default.
If there are any cloud-only user accounts, all users who need to use Azure AD Domain Services must change their passwords after Azure AD Domain Services is provisioned. The password change process causes the credential hashes for Kerberos and NTLM authentication to be generated in Azure AD.
Once a cloud-only user account has changed their password, you will need to wait for a minimum of 20 minutes before you will be able to use Azure AD Domain Services (this got me as I was impatient).
Speaking of patience, the provisioning process for Azure AD Domain Services takes about an hour.
Have a dedicated subnet for Azure AD Domain services to avoid any connectivity issues that may occur with NSGs/firewalls.
You can only have one managed domain connected to your Azure Active Directory.
That’s it, hopefully this helped you get a better understanding of Azure AD Domain Services and assists with a smooth deployment.
Those who have rolled out Azure MFA (in the cloud) to non-administrative users are probably well aware of the nifty Trusted IPs feature. For those that are new to this, the short version is that this capability is designed to make it a little easier on the end user experience by allowing you to define a set of ‘trusted locations’ (e.g. your corporate network) in which MFA is not required.
This capability works via two methods:
Defining a set of ‘Trusted’ IP addresses. These IP addresses will be the public-facing IP addresses of your Web Proxies and/or network gateways and firewalls
Utilising issued claims from Federated Users. This uses the insidecorporatenetwork = true claim, sent by ADFS, to determine that the user is coming from a ‘trusted location’. Enabling this capability is discussed in this article.
Now, the latter of these is what needs further consideration when you are looking to move to the ‘modern world’ of Windows 10 and Azure AD (AAD). Unfortunately, due to some changes made in the way that Win10 Domain Joined with AAD Registration (AAD+DJ) machines perform Single Sign On (SSO) with AAD, the method of utilising federated claims to determine a ‘trusted location’ for MFA will no longer work.
To understand why this is the case, I highly encourage that you first read Jairo Cadena‘s truly excellent blog series that discuss in detail around how Win10 AAD SSO and its associated services works. The key takeaways from those posts are that Win10 now has this concept of a Primary Refresh Token (PRT) and with this approach to authentication you now have the following changes:
The PRT is what is used to obtain access tokens to AAD applications
The PRT is cached and has a sliding window lifetime from 14 days up to 90 days
The use of the PRT is built into the Windows 10 credential provider. Both IE and Edge know to utilise the PRT when communicating with AAD
It effectively replaces the ADFS with Integrated Windows Auth (IWA) approach to achieve SSO with Azure AD
That is, the auth flow is no longer: Browser –> Login to AAD –> Redirect to ADFS –> Perform IWA SSO –> SAML Token provided with claims –> AAD grants access
Instead, the auth flow is a lot more streamlined: Browser –> Login and provide PRT to AAD –> AAD grants access
Hopefully from this auth flow change you can see why Microsoft have done this. Because the old way relied on IWA to perform ‘seamless’ SSO, it only worked when the device was domain joined and you had line of sight to a DC to perform Kerberos authentication. So when connecting externally, you would always see the prompt from the ADFS forms-based authentication. In the new way, whenever an auth prompt comes in from AAD, the credential provider can see this and immediately provide the cached PRT, providing SSO regardless of your network location. It also means that you no longer need a domain joined machine to achieve ‘seamless’ SSO!
The side effect though is that because the SAML token provided by ADFS is no longer involved in gaining access, Azure AD loses visibility of those context-based claims like insidecorporatenetwork, which subsequently means that the Trusted IPs feature no longer works. While this is most commonly used for MFA scenarios, be aware that this will also apply to any Azure AD Conditional Access rules you define that use the Trusted IPs criteria (e.g. block access to app when external). Side Note: If you want to confirm this behaviour yourself, simply use a Win10 machine that is both Domain Joined and AAD Registered, perform a Fiddler capture, and compare the sign-in experience differences between IE or Edge (i.e. PRT aware) and Chrome (i.e. not PRT aware).
So, you might ask, how do you fix this apparent gap in capability? Does this mean you’re going backwards now? For any enterprise customer of decent size, managing a set of IP address ranges may not be practical or desirable in order to drive MFA (or conditional access) behaviours between internal and external users. The federated user claim method was a simple, low-admin way of solving that problem.
To answer this question, I would actually take a step back and look at the underlying problem that you’re trying to solve. If we remind ourselves of the MFA mantra, the idea is to ensure that the user provides “something they know” (e.g. a secret/password) and “something they have” (e.g. a mobile device) to prove their ‘trustworthiness’.
When we make a decision to allow an MFA bypass for internal users, we are predicating this on the fact that, from a security standpoint, they have met their ‘trustworthiness’ level through a separate means. This might be through a security access card that lets them into an office location or utilising a corporate laptop that can perform a VPN connection. Both ultimately let them connect to the internal network, and that is what you use as your criteria for granting them the luxury of not having to perform another factor of authentication.
So with that in mind, what you could then do is expand that criteria to include Domain Joined machines. That is, if a user is utilising a corporate-issued device that has been domain joined (and registered to AAD), this can now act as your “something you have” aspect of the MFA mantra to prove your trustworthiness, and so you no longer need to differentiate whether they are actually internal or external anymore.
To achieve this, you’ll need to use Azure AD Conditional Access policies, and modify your Access Grant rules to look something like that below:
You’ll also need to perform the steps outlined in the How to configure automatic registration of Windows domain-joined devices with Azure Active Directory article to ensure the devices properly identify themselves as being domain joined.
Side Note: If you include the Workplace Join packages as outlined above, this approach can also expand to Windows 7 and Windows 8.1 devices.
Side Note 2: You can also include Intune-managed mobile devices in your ‘bypass criteria’ if you include the Require device to be marked as compliant criteria as well.
Fun Fact: You’ll note that in my image the (preview) reference for ‘require one of the selected controls’ is still there. This is because until recently (approx. May/June 2017), the MFA or domain joined device criteria didn’t actually work because of the behaviour/order of how the evaluations were being done. When AAD was evaluating the domain joined criteria, if it failed it would immediately block access rather than trying the MFA approach next, thus preventing an ‘or’ scenario. This has now been fixed and I expect the (preview) tag to be removed soon.
The move to the modern ‘any where, any device’ approach to end user computing means that there is a need to start re-assessing how you approach security. Old world views of security being defined via network boundaries will eventually disappear and instead you’ll need to consider user-and device based contexts to define when to initiate security controls.
With Windows 10’s approach to authentication with AAD, internal and external access is no longer relevant and should not be used for your criteria in driving MFA or conditional access. Instead, use the device based conditions such as ‘device compliance’ or ‘domain join’ as one of your deciding factors.
As mentioned in my previous post, Using ADFS on-premises MFA with Azure AD Conditional Access, if you have implemented Azure AD Conditional Access to enforce MFA for all your Cloud Apps and you are using the SupportsMFA=true parameter to direct MFA execution to your ADFS on-premises MFA server you may have encountered what I call the ‘Double Auth’ prompt issue.
While this doesn’t happen across all Cloud Apps, you will see it on the odd occasion (in particular the Intune Company Portal and Azure AD Powershell Cmdlets) and it has the following symptoms:
User signs into Azure AD App (e.g. Azure AD Powershell with Modern Auth support)
User sees auth prompt, enters their username, which redirects to ADFS
User enters credentials and clicks enter
It looks like it signs in successfully but then ADFS reappears and the user is prompted to enter credentials again.
After the second successful attempt, the user is then prompted for MFA as expected
Understanding the reason behind why this happens is reliant on two things:
The background I provided in the blog post I referenced above, specifically that when SupportsMFA is being used, two requests to ADFS are sent by Azure AD instead of one as part of the authentication process when MFA is involved.
Configuration and behaviour of the prompt=login behaviour of Azure AD, which is discussed in this Microsoft Docs article.
So to delve into this, let’s crack out our trusty Fiddler tool to look at what’s happening:
Highlighted in the image above is the culprit. You’ll see in the request strings that the user is being sent to ADFS with two key parameters wauth=… and wfresh=0. What is happening here is that this particular Azure AD application has decided that as part of sign in, they want to ensure that ‘fresh credentials’ are being provided (say, to ensure the correct user creds are used). They do this by telling Azure AD to generate a request with prompt=login, however as noted in the article referenced, because some legacy ADFS systems don’t understand this ‘modern’ parameter, the default behaviour is for Azure AD to pre-emptively translate this request into two ‘WS-Fed’ parameters that they can understand. In particular, wfresh=0 as per the WS-Fed specs means:
… If specified as “0” it indicates a request for the IP/STS to re-prompt the user for authentication before issuing the token….
The problem of course is that ADFS sees the wfresh=0 parameter in both requests and will abide by that behaviour by prompting the user for credentials each time!
So, the fix for this is fairly simple and is in fact (very vaguely) called out in the Microsoft Docs article I’ve referenced above: ensure that Azure AD uses the NativeSupport configuration so that it sends the parameter as-is for ADFS to interpret, instead of pre-emptively translating it.
The specific command to run is:
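The following is a sketch with a placeholder domain name; substitute your own federated domain:

```powershell
# Tell Azure AD to pass prompt=login through to ADFS as-is,
# rather than translating it to wauth/wfresh=0.
# "yourdomain.com" is a placeholder - use your federated domain name.
Set-MsolDomainFederationSettings -DomainName "yourdomain.com" -PromptLoginBehavior NativeSupport
```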
The prerequisite to this fix is to ensure that you are running either:
ADFS 2016, or
ADFS 2012 R2 with the July 2016 update rollup
Once this update is applied (remember that these DomainFederationSettings changes can take up to 15-30 mins to take effect), you’ll be able to see the difference via Fiddler: ADFS is sent a prompt=login parameter instead, and only on the first request, so the overall experience is that the credential prompt occurs only once.
Hope that saves a few hairs for anyone out there who’s come across this issue!
[UPDATE 12/09/17] Looks like there’s a Microsoft KB article around this issue now! Helpful for those who need official references: https://support.microsoft.com/en-us/help/4037806/federated-users-in-azure-active-directory-may-have-to-sign-in-two-time
With the recent announcement of General Availability of the Azure AD Conditional Access policies in the Azure Portal, it is a good time to reassess your current MFA policies particularly if you are utilising ADFS with on-premises MFA; either via a third party provider or with something like Azure MFA Server.
Prior to conditional MFA policies being possible, when utilising on-premises MFA with Office 365 and/or Azure AD the MFA rules were generally enabled on the ADFS relying party trust itself. The main limitation with this of course is the inability to define different MFA behaviours for the various services behind that relying party trust. That is, within Office 365 (Exchange Online, SharePoint Online, Skype for Business Online etc.) or through different Azure AD Apps that may have been added via the app gallery (e.g. ServiceNow, SalesForce etc.). In some circumstances you may have been able to define some level of granularity utilising custom authorisation claims, such as bypassing MFA for ActiveSync and legacy authentication scenarios, but that method was reliant on special client headers or the authentication endpoints that were being used and hence was quite limited in its use.
Now with Azure AD Conditional Access policies, the definition and logic of when to trigger MFA can, and should, be driven from the Azure AD side given the high level of granularity and varying conditions you can define. This doesn’t mean though that you can’t keep using your on-premises ADFS server to perform the MFA, you’re simply letting Azure AD decide when this should be done.
In this article I’ll show you the method I like to use to ‘migrate’ from on-premises MFA rules to Azure AD Conditional Access. Note that this is only applicable for the MFA rules for your Azure AD/Office 365 relying party trust. If you are using ADFS MFA for other SAML apps on your ADFS farm, they will remain as is.
At a high level, the process is as follows:
Configure Azure AD to pass ‘MFA execution’ to ADFS using the SupportsMFA parameter
Port your existing ADFS MFA rules to an Azure AD Conditional Access (CA) Policy
Configure ADFS to send the relevant claims
“Cutover” the MFA execution by disabling the ADFS MFA rules and enabling the Azure AD CA policy
The ordering here is important, as by doing it like this, you can avoid accidentally forcing users with a ‘double MFA’ prompt.
Step 1: Using the SupportsMFA parameter
The crux of this configuration is the use of the SupportsMFA parameter within your MSOLDomainFederationSettings configuration.
Setting this parameter to True will tell Azure AD that your federated domain is running an on-premises MFA capability and that whenever it determines a need to perform MFA, it is to send that request to your STS IDP (i.e. ADFS) to execute, instead of triggering its own ‘Azure Cloud MFA’.
To perform this step is a simple MSOL PowerShell command:
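A sketch of that command follows; the domain name is a placeholder for your own federated domain:

```powershell
# Direct Azure AD to hand MFA execution off to the on-premises STS (ADFS).
# "yourdomain.com" is a placeholder - use your federated domain name.
Set-MsolDomainFederationSettings -DomainName "yourdomain.com" -SupportsMFA $true
```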
Pro Tip: This setting can take up to 15-30 mins to take effect. So make sure you factor in this into your change plan. If you don’t wait for this to kick in before cutting over your users will get ‘double MFA’ prompts.
Step 2: Porting your existing MFA Rules to Azure AD Conditional Access Policies
There’s a whole article in itself talking about what Azure AD CA policies can do nowadays, but for our purposes let’s use the two most common examples of MFA rules:
Bypass MFA for users that are a member of a group
Bypass MFA for users on the internal network*
Item 1 is pretty straight forward, just ensure our Azure AD CA policy has the following:
Assignment – Users and Groups:
Include: All Users
Exclude: Bypass MFA Security Group (simply reuse the one used for ADFS if it is synced to Azure AD)
Item 2 requires the use of the Trusted Locations feature. Note that at the time of writing, this feature is still the ‘old’ MFA Trusted IPs feature hosted in the Azure Classic Portal. Note*: If you are using Windows 10 Azure AD Join machines this feature doesn’t work. Why this is the case will be an article in itself, so I’ll add a link here when I’ve written that up.
So within your Azure AD CA policy do the following:
Conditions – Locations:
Include: All Locations
Exclude: All Trusted IPs
Then make sure you click on Configure all trusted locations to be taken to the Azure Classic Portal. From there you must set Skip multi-factor authentication for requests from federated users on my intranet
This effectively tells Azure AD that a ‘trusted location’ is any authentication requests that come in with a InsideCorporateNetwork claim. Note: If you don’t use ADFS or an IDP that can send that claim, you can always use the actual ‘Trusted IP addresses’ method.
Now you can define exactly which Azure AD apps you want MFA to be enabled for, instead of all of them as you had originally.
Pro Tip: If you are going to enable MFA on All Cloud Apps to start off with, check the end of this article for some extra caveats you should consider, else you’ll start breaking things.
Finally, to make this Azure AD CA policy actually perform MFA, set the access controls:
For now, don’t enable the policy just yet as there is more prep work to be done.
Step 3: Configure ADFS to send all the relevant claims
So now that Azure AD is ready for us, we have to configure ADFS to actually send the appropriate claims across to ‘inform’ it of what is happening or what it is doing.
The first is to make sure we send the InsideCorporateNetwork claim so Azure AD can apply the ‘bypass for all internal users’ rule. This is well documented everywhere, but the short version is: within your Microsoft Office 365 Identity Platform relying party trust in ADFS, add a new Issuance Transform Rule to pass through the Inside Corporate Network claim:
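In ADFS claim rule language, a simple pass-through rule for that claim looks something like the following sketch (equivalent to using the Pass Through or Filter an Incoming Claim template):

```
c:[Type == "http://schemas.microsoft.com/ws/2012/01/insidecorporatenetwork"]
 => issue(claim = c);
```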
Fun fact: The Inside Corporate Network claim is automatically generated by ADFS when it detects that the authentication was performed on the internal ADFS server, rather than through the external ADFS proxy (i.e. WAP). This is why it’s a good idea to always use an ADFS proxy as opposed to simply reverse proxying your ADFS. Without it you can’t easily tell whether it was an ‘internal’ or ‘external’ authentication request (plus it’s more secure).
The other important claim to send through is the authnmethodsreferences claim. Now you may already have this if you were following some online Microsoft Technet documentation when setting up ADFS MFA. If so, you can skip this step.
This claim is what is generated when ADFS successfully performs MFA. So think of it as a way for ADFS to tell Azure AD that it has performed MFA for the user.
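A pass-through rule for this claim, again sketched in ADFS claim rule language, looks like this:

```
c:[Type == "http://schemas.microsoft.com/claims/authnmethodsreferences"]
 => issue(claim = c);
```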
Step 4: “Cutover” the MFA execution
So now that everything is prepared, the ‘cutover’ can be performed by doing the following:
Disable the MFA rules on the ADFS Relying Party Trust
Enable the Azure AD Conditional Access policy
Now if it all goes as planned, what should happen is this:
User attempts sign into an Azure AD application. Since their domain is federated, they are redirected to ADFS to sign in.
User will perform standard username/password authentication.
If internal, this is generally ‘SSO’ with Windows Integrated Auth (WIA). Most importantly this user will get a ‘InsideCorporateNetwork’ = true claim
If external, this is generally a Forms Based credential prompt
Once successfully authenticated, they will be redirected back to Azure AD with a SAML token. Now is actually when Azure AD will assess the CA policy rules and determines whether the user requires MFA or not.
If they do, Azure AD actually generates a new ADFS sign in request, this time specifically stating via the wauth parameter to use multipleauthn. This will effectively tell ADFS to execute MFA using its configured providers
Once the user successfully completes MFA, they will go back to Azure AD with this new SAML token that contains a claim telling Azure AD that MFA has now been performed and subsequently lets the user through
This is what the above flow looks like in Fiddler:
This is what your end-state SAML token should like as well:
The main takeaway is that Step 4 is the new auth flow that is introduced by moving MFA evaluation into Azure AD. Prior to this, step 2 would simply have performed both username/password authentication and MFA in the same instance rather than over two requests.
Extra Considerations when enabling MFA on All Cloud Apps
If you decide to take an ‘exclusion’ approach to MFA enforcement for Cloud Apps, be very careful with this. In fact you’ll even see Microsoft giving you a little extra warning about this.
The main difference with taking this approach compared to just doing MFA enforcement at the ADFS level is that you are now enforcing MFA on all cloud identities as well! This may very well unintentionally break some things, particularly if you’re using ‘cloud identity’ service accounts (e.g. for provisioning scripts or the like). One thing that will definitely break is the AADConnect account that is created for directory synchronisation.
So at a very minimum, make sure you remember to add the On-Premises Directory Synchronization Service Account(s) into the exclusion list for your Azure AD MFA CA policy.
The very last thing to call out is that some Azure AD applications, such as the Intune Company Portal and Azure AD Powershell cmdlets, can cause a ‘double ADFS prompt’ when MFA evaluation is being done in Azure AD. The reason for this and the fix is covered in my next article Resolving the ‘double auth’ prompt issue with Azure AD Conditional Access MFA and ADFS so make sure you check that out as well.
I was exchanging some emails with an account manager (Andy Walker) at Kloud and thought the exchange would be for some interesting reading. Here’s the outcome in an expanded and much more helpful (to you dear reader) format…
When working with the Microsoft Cloud and in particular with identity, depending on some of the configuration options, it can be quite important to have Azure AD Connect highly available. Unfortunately for us, Microsoft has not developed AADConnect to be highly available. That’s not ideal in today’s 24/7/365 cloud-centric IT world.
When disaster strikes, and yes that was a deliberate use of the word WHEN, with email, files, real-time communication and everything in between now living up in the sky with those clouds, should your primary data centre be out of action, there is a high probability that a considerable chunk of functionality will be affected by your on-premises outage.
Azure AD Connect
AADConnect’s purpose, its only purpose, is to synchronise on-premises directories to Azure AD. There is no other option. You cannot sync an ADDS forest to another with Azure AD Connect. So this simple and lightweight tool only requires a single deployment in any given ADDS forest.
Even when working with multiple forests, you only need one. It can even be deployed on a domain controller; which can save on an instance when working with the cloud.
The case for two Azure AD Connect servers
As of Monday April 17th 2017, just 132 days from today (Tuesday December 6th), DirSync and Azure AD Sync, the predecessors to Azure AD Connect will be deprecated. This is good news. This means that there is an opportunity to review your existing synchronisation architecture to take advantage of a feature of AADConnect and deploy two servers.
Staging mode is fantastic. Yes, short and sweet and to the point. It’s fantastic. Enough said, blog done; use it and we’re all good here? Right?
Okay, I’ll elaborate. Staging mode allows for the following rapid fire benefits which greatly improves IT as a result:
1 – Redundancy and disaster recovery, not high availability
I’ve read in certain articles that staging mode offers high availability. I disagree and argue it offers redundancy and disaster recovery. High availability is putting into practice architecture and design around applications, systems or servers that keeps those operational and accessible as near as possible to 100% uptime, or always online, mostly through automated solutions.
I see staging mode as offering the ability to manually bring online a secondary AADConnect server should the primary be unavailable. This could be due to a disaster or a scheduled maintenance window. Therefore it makes sense to deploy a secondary server in an alternate data centre, site or geographic location to your primary server.
To do the manual update of the secondary AADConnect server config and bring it online, the following needs to take place:
Run Azure AD Connect
That is the AzureADConnect.exe which is usually located in the Microsoft Azure Active Directory Connect folder in Program Files
Select Configure staging mode (current state: enabled), select next
Validate credentials and connect to Azure AD
Configure staging mode – un-tick
Configure and complete the change over
Once this has been run, run a full import/full sync process
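If you prefer to script the change over rather than click through the wizard, the steps above can be sketched with the ADSync PowerShell module that is installed alongside AADConnect. This is a minimal sketch, run on the staging server, assuming the old primary has already been put into staging mode (remember: two servers must never export at the same time):

```powershell
# Check the current scheduler state, including whether staging mode is enabled
Get-ADSyncScheduler

# Take this server out of staging mode so exports to Azure AD resume
Set-ADSyncScheduler -StagingModeEnabled $false

# Run the full import/full sync cycle described above
Start-ADSyncSyncCycle -PolicyType Initial
```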
Because this is a manual process which takes time and breaks service availability to complete (importantly: you can’t have two AADConnect servers synchronising to a single Azure AD tenant), it’s not highly available.
2 – Update and test management
AADConnect now offers (since February 2016) an in-place automatic upgrade feature. Updates are rolled out by Microsoft and installed automatically. Cue the alarm bells!
I’m going to take a stab at making an analogy here: you wouldn’t have your car serviced while it’s running and taking you from A to B, so why would you have a production directory synchronisation server upgraded while it’s running? A bit of a stretch, but can you see where I was going there?
Having a secondary AADConnect server allows updates and/or server upgrades without impacting the core functionality of directory synchronisation to Azure AD. While this seems trivial, it’s certainly not trivial in my books. I’d want my AADConnect server functioning correctly all the time with minimal downtime. Having the two servers can allow for a 15 min (depending on how fast you can click) managed change over from server to server.
3 – Improved management and less impact on production
Lastly, this is more of a stretch, but, I think it’s a good practice to not have user accounts assigned Domain Admin rights. The same applies to logging into production servers to do trivial tasks. Having a secondary AADConnect server allows for IT administrators or service desk engineers to complete troubleshooting of sync processes for the service or objects away from the production server. That, again in my books, is a better practice and just as efficient as using primary or production servers.
Staging mode is fantastic. I’ve said it again. Since I found out about staging mode around this time last year, I’ve recommended two servers to every customer I’ve mentioned AADConnect to, referencing the 3 core arguments listed above. I hope that’s enough to convince you dear reader to do so as well.
Disclaimer: During October I spent a few weeks working on this blog post’s solution at a customer and had to do the responsible thing and pull the pin on further time as I had hit a glass ceiling. I reached what I thought was possible with Azure AD Connect. In comes Nigel Jones (Identity Consultant @ Kloud) who, through a bit of persuasion from Darren (@darrenjrobinson), took it upon himself to smash through that glass ceiling of Azure AD Connect and figured this solution out. Full credit and a big high five!
Azure AD Connect multi-forest design
Using AADC to sync user/account + resource shared forest with another user/account only forest
Why it won’t work out of the box
How to get around the issue and leverage precedence to make it work
Visio diagrams on how it all works for easy digestion
In true Memento style, after the quick disclaimer above, let me take you back for a quick background of the solution and then (possibly) blow your mind with what we have ended up with.
Back to the future
A while back in the world of directory synchronisation with Azure AD, to have a user and resource forest solution synchronised required the use of Microsoft Forefront Identity Manager (FIM), now Microsoft Identity Manager (MIM). From memory, you needed the former of those products (FIM) whenever you had a multi-forest synchronisation environment with Azure AD.
Just like Marty McFly, Azure AD synchronisation went from relative obscurity to the mainstream. In doing so, there have been many advancements and improvements that negate the need to ever deploy FIM or MIM for even the more complex environments.
When Azure AD Connect, then Azure AD Sync, introduced the ability to synchronise multiple forests in a user + resource model, it opened the door for a lot of organisations to streamline the federated identity design for Azure and Office 365.
In the beginning…
The following outlines a common real world scenario for numerous enterprise organisations. In this environment we have an existing Active Directory forest which includes an Exchange organisation, SharePoint, Skype for Business and many more common services and infrastructure. The business grows and with the wealth and equity purchases another business to diversify or expand. With that comes integration and the sharing of resources.
We have two companies: Contoso and Fabrikam. A two-way trust is set up between the ADDS forests and users can start to collaborate and share resources.
In order to use Exchange, which is the most common example, we need to start to treat Contoso as a resource forest for Fabrikam.
Over at the Contoso forest, IT creates disabled user objects and linked mailboxes for Fabrikam users. In the on-premises world, this works fine. I won’t go into too much more detail, but I’m sure that you, dear reader, understand the particulars.
In summary, Contoso is a user and resource forest for itself, and a resource forest for Fabrikam. Fabrikam is simply a user forest with no deployment of Exchange, SharePoint etc.
How does a resource forest topology work with Azure AD Connect?
For the better part of two years now, since AADConnect was AADSync, Microsoft added in support for multi-forest connectivity. Last year, Jason Atherton (awesome Office 365 consultant @ Kloud) wrote a great blog post summarising this compatibility and usage.
In AADConnect, a user/account and resource forest topology is supported. The supported topology assumes that a customer has that simple, no-nonsense architecture. There’s no room for any shared funny business…
AADConnect is able to select the two forests’ common identities and merge them before synchronising to Azure AD. This process uses the attributes associated with the user objects: objectSID in the user/account forest and the msExchMasterAccountSID in the resource forest, to join the user account and the resource account.
There is also the option for customers to have multiple user forests and a single resource forest. I’ve personally not tried this with more than two forests, so I’m not confident enough to say how additional user/account forests would work out as well. However, please try it out and be sure to let me know via comments below, via Twitter or email me your results!
Quick note: you can also merge two objects by a sAmAccountName and mailNickName attribute match, or by specifying any ADDS attribute to match between the forests.
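To illustrate the join, here is a hedged sketch using the ActiveDirectory module with hypothetical account and server names, comparing a Fabrikam user’s objectSID against the msExchMasterAccountSID stamped on its disabled Contoso resource account:

```powershell
# Account forest: get the user's objectSID (hypothetical sAMAccountName/DC)
$fabUser = Get-ADUser -Identity 'jsmith' -Server 'dc01.fabrikam.com'

# Resource forest: get the disabled linked-mailbox account, requesting
# msExchMasterAccountSID explicitly as it is not returned by default
$conUser = Get-ADUser -Identity 'jsmith' -Server 'dc01.contoso.com' `
    -Properties msExchMasterAccountSID

# If these two SIDs match, AADConnect can join the objects into one identity
$fabUser.SID.Value
$conUser.msExchMasterAccountSID.Value
```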
If you’d like to read up on this a little more, here are two articles that reference the above mentioned topologies in detail:
Generally speaking, the first forest to sync in AADConnect, in a multi-forest implementation, is the user/account forest, which is likely the primary/main forest in an organisation. Let’s assume this is the Contoso forest. This will be the first connector to sync in AADConnect. It will also have the lowest precedence, as with AADConnect, the lower the designated precedence number, the higher the priority.
When the additional user/account forest(s) or the resource forest is added, these connectors run after the initial Contoso connector due to the default precedence set. From an external perspective, this doesn’t seem like much of a big deal. AADConnect merges two matching or mirrored user objects by way of the (commonly used) objectSID and msExchMasterAccountSID and away we go. In theory, precedence shouldn’t really matter.
Give me more detail
The issue is that precedence does indeed matter when we go back to our Contoso and Fabrikam example. Here’s what happens:
#1 – Contoso is sync’ed to AADC first as it was the first forest connected to AADC
Adding in Fabrikam first over Contoso doesn’t work either
#2 – The Fabrikam forest is joined with a second forest connector
AADC is configured with the ‘user identities exist across multiple directories’ option
objectSID and msExchMasterAccountSID are selected to merge identities
When the objects are merged, sAmAccountName is taken from the Contoso forest – #1
This happens for Contoso forest users AND Fabrikam forest users
When the objects are merged, mail or primarySMTPaddress is taken from the Contoso forest – #1
This happens for Contoso forest users AND Fabrikam forest users
Should the two objects not have a completely identical set of attributes, the attributes that are set are pulled
In this case, most of the user object details come from Fabrikam – #2
Attributes like the users firstname, lastname, employee ID, branch / office
The result of this standard setup is that Fabrikam users, along with their resource accounts in Contoso, are sync’ed, but they have their UPN set with the suffix from the Contoso forest. An example would be a UPN of email@example.com rather than the desired firstname.lastname@example.org. When this happens, there is no SSO, as Windows Integrated Authentication in the Fabrikam forest does not recognise the Contoso forest UPN suffix of @contoso.com.
Yes, even with ADDS forest trusts configured correctly and UPN routing etc. all working correctly, authentication just does not work. AADC uses the incorrect attributes and syncs those to Azure AD.
Is there any other way around this?
I’ve touched on and referenced precedence a number of times in this blog post so far. The solution is indeed precedence. The issue I had experienced came from a lack of understanding of precedence in AADConnect. It works on connector rule level precedence, which is set by AADConnect during the configuration process as forests are connected.
Playing around with precedence was not something I wanted to do, as I didn’t have enough Microsoft Identity Manager or Forefront Identity Manager background to really be certain of the outcome of the joining/merging process of user and resource account objects. I knew that FIM/MIM has the option of attribute level precedence, which is what we really wanted here, so my thinking was that we needed FIM/MIM to do the job. Wrong!
In comes Nigel…
Nigel dissected the requirements over the course of a week. He reviewed the configuration in an existing FIM 2010 R2 deployment and found the requirements needed of AADConnect. Having got AADConnect set up, all that was required was tweaking a couple of the inbound rules and moving them higher up the precedence order.
Below is the AADConnect Sync Rules editor output from the final configuration of AADConnect:
The solution centres around the main precedence rule, rule #1 for Fabrikam (red arrows pointing and yellow highlight) to be above the highest (and default) Contoso rule (originally #1). When this happened, AADConnect was able to pull the correct sAmAccountName and mail attributes from Fabrikam and keep all the other attributes associated with Exchange mailboxes from Contoso. Happy days!
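You can review the effective rule order yourself with the ADSync module on the AADConnect server; a quick sketch:

```powershell
# List all sync rules in precedence order
# (lowest Precedence number = highest priority)
Get-ADSyncRule |
    Sort-Object Precedence |
    Select-Object Precedence, Name, Direction, Connector
```

This is a handy sanity check after reordering rules in the Sync Rules Editor, confirming the Fabrikam inbound rule now sits above the default Contoso rule.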
Tinkering around with AADConnect shows just how powerful the “cut down FIM/MIM” application is. While AADConnect lacks the in-depth configuration and customisation that you find in FIM/MIM, it packs a lot in a small package! #Impressed
There is a feature in Azure AD Connect that became available in the November 2015 build 1.0.9125.0 (listed here), which has not had much fanfare but can certainly come in handy in tricky situations. I happened to be working on a project that required the DNS domain linked to an old Office 365 tenant to be removed so that it could be used in a new tenant. Although the old tenant was no longer used for Exchange Online services, it held onto the domain in question, and Azure AD Connect was being used to synchronise objects between the on-premises Active Directory and Azure Active Directory.
Trying to remove the domain using the Office 365 Portal will reveal if there are any items that need to be remediated prior to removing the domain from the tenant, and for this customer it showed that there were many user and group objects that still had the domain used as the userPrincipalName value, and in the mail and proxyAddresses attribute values. The AuthoritativeNull literal could be used in this situation to blank out these values for users and groups (i.e. Distribution Lists) so that the domain can be released. I’ll attempt to show the steps in a test environment, and bear with me as this is a lengthy blog.
Trying to remove the domain minnelli.net listed the items needing attention, as shown in the following screenshots:
This report showed that three actions are required to remove the domain values out of the attributes:
userPrincipalName is simple to resolve by changing the value in the on-premise Active Directory using a different domain suffix, then synchronising the changes to Azure Active Directory so that the default onmicrosoft.com or another accepted domain is set.
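As a sketch of that UPN remediation using the ActiveDirectory module (the old suffix is from this example; the replacement suffix @contoso.com is hypothetical and should be an accepted domain, or one that maps to it, in the tenant):

```powershell
# Find users still carrying the old UPN suffix and swap it for a
# different suffix so the domain can be released from the tenant
Get-ADUser -Filter 'UserPrincipalName -like "*@minnelli.net"' |
    ForEach-Object {
        # Hypothetical replacement suffix - adjust for your environment
        $newUpn = $_.UserPrincipalName -replace '@minnelli\.net$', '@contoso.com'
        Set-ADUser -Identity $_ -UserPrincipalName $newUpn
    }
```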
Clearing the proxyAddresses and mail attribute values is possible using the AuthoritativeNull literal in Azure AD Connect. NOTE: You will need to assess the outcome of performing these steps depending on your scenario. For my customer, we were able to perform these steps without affecting other services required from the old Office 365 tenant.
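Before flowing AuthoritativeNull, it’s worth enumerating which on-premises objects still reference the domain so you can assess the impact; a hedged sketch using an LDAP filter (trailing wildcard matches work against these attributes):

```powershell
# Users and groups whose proxyAddresses or mail still reference the domain
Get-ADObject -LDAPFilter '(|(proxyAddresses=*minnelli.net)(mail=*minnelli.net))' `
    -Properties mail, proxyAddresses |
    Select-Object Name, ObjectClass, mail
```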
Using the Synchronization Rules Editor, locate and edit the In from AD – User Common rule. Editing the out-of-the-box rules will display a message to suggest you create an editable copy of the rule and disable the original rule which is highly recommended, so click Yes.
The rule is cloned as shown below and we need to be mindful of the Precedence value which we will get to shortly.
Select Transformations and edit the proxyAddresses attribute, set the FlowType to Expression, and set the Source to AuthoritativeNull.
I recommend setting the Precedence value in the new cloned rule to be the same as the original rule, in this case value 104. First, edit the original rule’s Precedence to a value such as 1001; you’ll also notice the original rule is already set to Disabled.
Set the cloned rule Precedence value to 104.
Prior to performing a Full Synchronization run profile to process the new logic, I prefer and recommend performing a Preview by selecting an affected user and previewing a Full Synchronization change. As can be seen below, the proxyAddresses value will be deleted.
The same process would need to be done for the mail attribute.
Once the rules are set, launch the following PowerShell command to perform a Full Import/Full Synchronization cycle in Azure AD Connect:
Start-ADSyncSyncCycle -PolicyType Initial
Once the cycle is completed, attempt to remove the domain again to check if any other items need remediation, or you might see a successful domain removal. I’ve seen it take up to 30 minutes or so before being able to remove the domain if all remediation tasks have been completed.
There will be other scenarios where using the AuthoritativeNull literal in a Sync Rule will come in handy. What others can you think of? Leave a description in the comments.