Windows 10 Domain Join + AAD and MFA Trusted IPs

Background

Those who have rolled out Azure MFA (in the cloud) to non-administrative users are probably well aware of the nifty Trusted IPs feature.   For those who are new to it, the short version is that this capability is designed to improve the end user experience by allowing you to define a set of ‘trusted locations’ (e.g. your corporate network) in which MFA is not required.

This capability works via two methods:

  • Defining a set of ‘Trusted’ IP addresses.  These will be the public-facing IP addresses of your web proxies and/or network gateways and firewalls
  • Utilising issued claims from Federated Users.   This uses the insidecorporatenetwork = true claim, sent by ADFS, to determine that this user is coming from a ‘trusted location’.  Enabling this capability is discussed in this article.

The Problem

Now, the latter of these needs further consideration when you are looking to move to the ‘modern world’ of Windows 10 and Azure AD (AAD).  Unfortunately, due to some changes made in the way that Win10 Domain Joined with AAD Registration (AAD+DJ) machines perform Single Sign On (SSO) with AAD, the method of utilising federated claims to determine a ‘trusted location’ for MFA will no longer work.

To understand why this is the case, I highly encourage you to first read Jairo Cadena‘s truly excellent blog series that discusses in detail how Win10 AAD SSO and its associated services work.  The key takeaways from those posts are that Win10 now has the concept of a Primary Refresh Token (PRT), and with this approach to authentication you now have the following changes:

  • The PRT is what is used to obtain access tokens to AAD applications
  • The PRT is cached and has a sliding window lifetime from 14 days up to 90 days
  • The use of the PRT is built into the Windows 10 credential provider.  Both IE and Edge know to utilise the PRT when communicating with AAD
  • It effectively replaces the ADFS with Integrated Windows Auth (IWA) approach to achieve SSO with Azure AD
    • That is, the auth flow is no longer: Browser –> Login to AAD –> Redirect to ADFS –> Perform IWA SSO –> SAML Token provided with claims –> AAD grants access
    • Instead, the auth flow is a lot more streamlined:  Browser –> Login and provide PRT to AAD –> AAD grants access

Hopefully from this auth flow change you can see why Microsoft have done this.  Because the old way relied on IWA to perform ‘seamless’ SSO, it only worked when the device was domain joined and had line of sight to a DC to perform Kerberos authentication.  So when connecting externally, you would always see the prompt from the ADFS forms based authentication.  In the new way, whenever an auth prompt comes in from AAD, the credential provider can see this and immediately provide the cached PRT, giving you SSO regardless of your network location.  It also means that you no longer need a domain joined machine to achieve ‘seamless’ SSO!

The side effect though is that because the SAML token provided by ADFS is no longer involved in gaining access, Azure AD loses visibility of those context based claims like insidecorporatenetwork, which subsequently means the Trusted IPs feature no longer works.   While this is most commonly used for MFA scenarios, be aware that this will also apply to any Azure AD Conditional Access rules you define that use the Trusted IPs criteria (e.g. block access to an app when external).

Side Note: If you want to confirm this behaviour yourself, simply use a Win10 machine that is both Domain Joined and AAD Registered, perform a Fiddler capture, and compare the sign-in experience between IE and Edge (i.e. PRT aware) and Chrome (i.e. not PRT aware)

The Solution/Workaround?

So, you might ask, how do you fix this apparent gap in capability?   Does this mean you’re going backwards now?   For any enterprise customer of decent size, managing a set of IP address ranges may not be practical or desirable in order to drive MFA (or conditional access) behaviours between internal and external users.   The federated user claim method was a simple, low admin way of solving that problem.

To answer this question, I would actually take a step back and look at the underlying problem that you’re trying to solve.  If we remind ourselves of the MFA mantra, the idea is to ensure that the user provides “something they know” (e.g. a secret/password) and “something they have” (e.g. a mobile device) to prove their ‘trustworthiness’.

When we make a decision to allow an MFA bypass for internal users, we are predicating this on the fact that, from a security standpoint, they have met their ‘trustworthiness’ level through a separate means.  This might be through a security access card that lets them into an office location, or utilising a corporate laptop that can perform a VPN connection.  Both of these ultimately let them connect to the internal network, and that is what you use as your criteria for granting them the luxury of not having to perform another factor of authentication.

So with that in mind, what you could then do is expand that criteria to include domain joined machines.  That is, if a user is utilising a corporate issued device that has been domain joined (and registered to AAD), this can now act as the “something you have” aspect of the MFA mantra to prove their trustworthiness, and so you no longer need to differentiate whether they are actually internal or external.

To achieve this, you’ll need to use Azure AD Conditional Access policies and modify your Access Grant rules to look something like the below:

Win10PRT1

You’ll also need to perform the steps outlined in the How to configure automatic registration of Windows domain-joined devices with Azure Active Directory article to ensure the devices properly identify themselves as being domain joined.

Side Note:  If you include the Workplace Join packages as outlined above, this approach can also expand to Windows 7 and Windows 8.1 devices.

Side Note 2: You can also include Intune managed mobile devices in your ‘bypass criteria’ if you include the Require device to be marked as compliant criteria as well.

Fun Fact: You’ll note that in my image the (preview) reference for ‘require one of the selected controls’ is still there.  This is because until recently (approx. May/June 2017), the MFA or domain joined device criteria didn’t actually work because of the behaviour/order of how the evaluations were being done.  When AAD was evaluating the domain joined criteria, if it failed it would immediately block access rather than trying the MFA approach next, thus preventing an ‘or’ scenario.   This has now been fixed and I expect the (preview) tag to be removed soon.
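
Side note for the scripting-inclined: the same sort of policy can be created with the AzureAD (preview) PowerShell module rather than clicking through the portal. Treat the below as a rough sketch only – the cmdlet post-dates the portal experience shown here, the display name and the report-only state are my own choices, and you should verify the object and property names against the module documentation before relying on it.

# Sketch: require MFA OR a domain joined device, for all users and all cloud apps.
# Assumes the AzureAD preview module is installed and Connect-AzureAD has been run.
$conditions = New-Object -TypeName Microsoft.Open.MSGraph.Model.ConditionalAccessConditionSet
$conditions.Applications = New-Object -TypeName Microsoft.Open.MSGraph.Model.ConditionalAccessApplicationCondition
$conditions.Applications.IncludeApplications = "All"
$conditions.Users = New-Object -TypeName Microsoft.Open.MSGraph.Model.ConditionalAccessUserCondition
$conditions.Users.IncludeUsers = "All"

$controls = New-Object -TypeName Microsoft.Open.MSGraph.Model.ConditionalAccessGrantControls
$controls._Operator = "OR"                                   # 'require one of the selected controls'
$controls.BuiltInControls = @("mfa", "domainJoinedDevice")   # MFA or (hybrid) domain joined device

New-AzureADMSConditionalAccessPolicy -DisplayName "Require MFA or domain joined device" `
    -State "enabledForReportingButNotEnforced" `
    -Conditions $conditions -GrantControls $controls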

Summary

The move to the modern ‘anywhere, any device’ approach to end user computing means that there is a need to start re-assessing how you approach security.  Old world views of security being defined via network boundaries will eventually disappear and instead you’ll need to consider user- and device-based contexts to define when to initiate security controls.

With Windows 10’s approach to authentication with AAD, internal and external access is no longer relevant and should not be used as your criteria for driving MFA or conditional access. Instead, use device based conditions such as ‘device compliance’ or ‘domain join’ as one of your deciding factors.

Resolving the ‘Double Auth’ prompt issue in ADFS with Azure AD Conditional Access MFA

As mentioned in my previous post, Using ADFS on-premises MFA with Azure AD Conditional Access, if you have implemented Azure AD Conditional Access to enforce MFA for all your Cloud Apps and you are using the SupportsMFA=true parameter to direct MFA execution to your ADFS on-premises MFA server, you may have encountered what I call the ‘Double Auth’ prompt issue.

While this doesn’t happen across all Cloud Apps, you will see it on the odd occasion (in particular with the Intune Company Portal and Azure AD PowerShell cmdlets) and it has the following symptoms:

  1. User signs into Azure AD App (e.g. Azure AD Powershell with Modern Auth support)
  2. User sees auth prompt, enters their username, which redirects to ADFS
  3. User enters credentials and clicks enter
  4. It looks like it signs in successfully but then ADFS reappears and the user is prompted to enter credentials again.
  5. After the second successful attempt, the user is then prompted for MFA as expected

 

DoubleAuth

Understanding the reason why this happens relies on two things:

  1. The background I provided in the blog post referenced above, specifically that when SupportsMFA is being used, Azure AD sends two requests to ADFS instead of one as part of the authentication process when MFA is involved.
  2. The configuration and behaviour of the prompt=login parameter in Azure AD, which is discussed in this Microsoft Docs article.

So to delve into this, let’s crack out our trusty Fiddler tool to look at what’s happening:

DoubleAuth3a.png

Highlighted in the image above is the culprit.  You’ll see in the request strings that the user is being sent to ADFS with two key parameters: wauth=… and wfresh=0.  What is happening here is that this particular Azure AD application has decided that, as part of sign in, it wants to ensure that ‘fresh credentials’ are provided (say, to ensure the correct user creds are used).  It does this by telling Azure AD to generate a request with prompt=login. However, as noted in the article referenced, because some legacy ADFS systems don’t understand this ‘modern’ parameter, the default behaviour is for Azure AD to pre-emptively translate this request into two ‘WS-Fed’ parameters that they can understand.   In particular, wfresh=0 as per the WS-Fed spec means:

…  If specified as “0” it indicates a request for the IP/STS to re-prompt the user for authentication before issuing the token….

The problem of course is that ADFS sees the wfresh=0 parameter in both requests and will abide by that behaviour by prompting the user for credentials each time!

So, the fix for this is fairly simple and is in fact (very vaguely) called out in the TechNet article I’ve referenced above – which is to ensure that Azure AD uses the NativeSupport configuration so that it sends the parameter as-is to ADFS to interpret, instead of pre-emptively translating it.

The specific command to run is:

Set-MsolDomainFederationSettings -DomainName yourFederatedDomain.com -PromptLoginBehavior NativeSupport
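
Once the change has propagated, you can read the setting back to confirm it took. A quick sketch (assuming the MSOnline module and a connected Connect-MsolService session; PromptLoginBehavior is the property name as I understand it):

Get-MsolDomainFederationSettings -DomainName yourFederatedDomain.com |
    Select-Object PromptLoginBehavior
# Expected value once applied: NativeSupport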

The prerequisite to this fix is to ensure that you are either running:

  • ADFS 2016
  • ADFS 2012 R2 with the July 2016 update rollup

Once this update is applied (remember that these DomainFederationSettings changes can take 15-30 mins to take effect) you’ll be able to see the difference via Fiddler – ADFS now receives a prompt=login parameter instead, and only on the first request, so the overall experience is that the credential prompt only occurs once.

DoubleAuth4.png

Hope that saves a few hairs for anyone out there who’s come across this issue!

Using ADFS on-premises MFA with Azure AD Conditional Access

With the recent announcement of General Availability of the Azure AD Conditional Access policies in the Azure Portal, it is a good time to reassess your current MFA policies particularly if you are utilising ADFS with on-premises MFA; either via a third party provider or with something like Azure MFA Server.

Prior to conditional MFA policies being possible, when utilising on-premises MFA with Office 365 and/or Azure AD the MFA rules were generally enabled on the ADFS relying party trust itself.  The main limitation with this of course is the inability to define different MFA behaviours for the various services behind that relying party trust.  That is, within Office 365 (Exchange Online, SharePoint Online, Skype for Business Online etc.) or through different Azure AD Apps that may have been added via the app gallery (e.g. ServiceNow, SalesForce etc.).  In some circumstances you may have been able to define some level of granularity utilising custom authorisation claims, such as bypassing MFA for ActiveSync and legacy authentication scenarios, but that method was reliant on special client headers or the authentication endpoints that were being used and hence was quite limited in its use.

Now with Azure AD Conditional Access policies, the definition and logic of when to trigger MFA can, and should, be driven from the Azure AD side given the high level of granularity and varying conditions you can define. This doesn’t mean though that you can’t keep using your on-premises ADFS server to perform the MFA, you’re simply letting Azure AD decide when this should be done.

In this article I’ll show you the method I like to use to ‘migrate’ from on-premises MFA rules to Azure AD Conditional Access.  Note that this is only applicable for the MFA rules for your Azure AD/Office 365 relying party trust.  If you are using ADFS MFA for other SAML apps on your ADFS farm, they will remain as is.

Summary

At a high level, the process is as follows:

  1. Configure Azure AD to pass ‘MFA execution’ to ADFS using the SupportsMFA parameter
  2. Port your existing ADFS MFA rules to an Azure AD Conditional Access (CA) Policy
  3. Configure ADFS to send the relevant claims
  4. “Cutover” the MFA execution by disabling the ADFS MFA rules and enabling the Azure AD CA policy

The ordering here is important, as by doing it like this, you can avoid accidentally forcing users with a ‘double MFA’ prompt.

Step 1:  Using the SupportsMFA parameter

The crux of this configuration is the use of the SupportsMFA parameter within your MSOLDomainFederationSettings configuration.

Setting this parameter to True will tell Azure AD that your federated domain is running an on-premises MFA capability and that whenever it determines a need to perform MFA, it is to send that request to your STS IDP (i.e. ADFS) to execute, instead of triggering its own ‘Azure Cloud MFA’.

To perform this step is a simple MSOL PowerShell command:

Set-MsolDomainFederationSettings -DomainName yourFederatedDomain.com -SupportsMFA $true
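
As a quick sanity check before and after making the change (and to confirm it has actually propagated), you can read the current value back. A minimal sketch, assuming the MSOnline module:

Connect-MsolService
Get-MsolDomainFederationSettings -DomainName yourFederatedDomain.com |
    Select-Object SupportsMFA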

Pro Tip:  This setting can take up to 15-30 mins to take effect, so make sure you factor this into your change plan.  If you don’t wait for it to kick in before cutting over, your users will get ‘double MFA’ prompts.

Step 2:  Porting your existing MFA Rules to Azure AD Conditional Access Policies

There’s a whole article in itself talking about what Azure AD CA policies can do nowadays, but for our purposes let’s use the two most common examples of MFA rules:

  1. Bypass MFA for users that are a member of a group
  2. Bypass MFA for users on the internal network*

Item 1 is pretty straightforward; just ensure your Azure AD CA policy has the following:

  • Assignment – Users and Groups:
    • Include:  All Users
    • Exclude:  Bypass MFA Security Group  (simply reuse the one used for ADFS if it is synced to Azure AD)

MFABypass1

Item 2 requires the use of the Trusted Locations feature.  Note that at the time of writing, this feature is still the ‘old’ MFA Trusted IPs feature hosted in the Azure Classic Portal.   Note*:  If you are using Windows 10 Azure AD Join machines this feature doesn’t work.  Why this is the case will be an article in itself, so I’ll add a link here when I’ve written that up.

So within your Azure AD CA policy do the following:

  • Conditions – Locations:
    • Include:  All Locations
    • Exclude:  All Trusted IPs

MFABypass2.png

Then make sure you click on Configure all trusted locations to be taken to the Azure Classic Portal.  From there you must set Skip multi-factor authentication for requests from federated users on my intranet.

MFABypass3.png

This effectively tells Azure AD that a ‘trusted location’ is any authentication request that comes in with an InsideCorporateNetwork claim.

Note:  If you don’t use ADFS or an IDP that can send that claim, you can always use the actual ‘Trusted IP addresses’ method.

Now you can define exactly which Azure AD apps you want MFA to be enabled for, instead of all of them as you had originally.

MFABypass7.png

Pro Tip:  If you are going to enable MFA on All Cloud Apps to start off with, check the end of this article for some extra caveats you should cater for, else you’ll start breaking things.

Finally, to make this Azure AD CA policy actually perform MFA, set the access controls:

MFABypass8.png

For now, don’t enable the policy just yet as there is more prep work to be done.

Step 3:  Configure ADFS to send all the relevant claims

So now that Azure AD is ready for us, we have to configure ADFS to actually send the appropriate claims across to ‘inform’ it of what is happening or what it is doing.

The first is to make sure we send the InsideCorporateNetwork claim so Azure AD can apply the ‘bypass for all internal users’ rule.  This is well documented everywhere, but the short version is: within your Microsoft Office 365 Identity Platform relying party trust in ADFS, add a new Issuance Transform Rule to pass through the Inside Corporate Network claim:

MFABypass4

Fun fact:   The Inside Corporate Network claim is automatically generated by ADFS when it detects that the authentication was performed on the internal ADFS server, rather than through the external ADFS proxy (i.e. WAP).  This is why it’s a good idea to always use an ADFS proxy as opposed to simply reverse proxying your ADFS.  Without it you can’t easily tell whether it was an ‘internal’ or ‘external’ authentication request (plus it’s more secure).

The other important claim to send through is the authnmethodsreferences claim.  Now, you may already have this in place if you were following the online Microsoft TechNet documentation when setting up ADFS MFA.  If so, you can skip this step.

This claim is what is generated when ADFS successfully performs MFA.  So think of it as a way for ADFS to tell Azure AD that it has performed MFA for the user.

MFABypass6
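
For reference, both pass-through rules can also be appended from PowerShell rather than via the ADFS GUI. The sketch below mirrors the commonly documented pass-through claim rules; it appends to (rather than replaces) your existing issuance transform rules, but review the rule text against your own configuration before applying it.

$rpt = Get-AdfsRelyingPartyTrust -Name "Microsoft Office 365 Identity Platform"

$passThroughRules = @'
@RuleName = "Pass through InsideCorporateNetwork"
c:[Type == "http://schemas.microsoft.com/ws/2012/01/insidecorporatenetwork"]
 => issue(claim = c);

@RuleName = "Pass through AuthN Methods References"
c:[Type == "http://schemas.microsoft.com/claims/authnmethodsreferences"]
 => issue(claim = c);
'@

# Append the new rules so the existing issuance transform rules are kept
Set-AdfsRelyingPartyTrust -TargetRelyingParty $rpt `
    -IssuanceTransformRules ($rpt.IssuanceTransformRules + $passThroughRules)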

Step 4: “Cutover” the MFA execution

So now that everything is prepared, the ‘cutover’ can be performed by doing the following:

  1. Disable the MFA rules on the ADFS Relying Party Trust
    Set-AdfsRelyingPartyTrust -TargetName "Microsoft Office 365 Identity Platform" -AdditionalAuthenticationRules $null
  2. Enable the Azure AD CA Policy
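
Before running step 1, it is worth capturing the existing MFA rules so you have a rollback path. A minimal sketch (the backup file path is just an example):

# Back up the current additional authentication (MFA) rules first
(Get-AdfsRelyingPartyTrust -Name "Microsoft Office 365 Identity Platform").AdditionalAuthenticationRules |
    Out-File "C:\Temp\O365-ADFS-MFA-Rules-Backup.txt"

# Then clear them as part of the cutover
Set-AdfsRelyingPartyTrust -TargetName "Microsoft Office 365 Identity Platform" -AdditionalAuthenticationRules $null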

Now if it all goes as planned, what should happen is this:

  1. User attempts to sign in to an Azure AD application.  Since their domain is federated, they are redirected to ADFS to sign in.
  2. User will perform standard username/password authentication.
    • If internal, this is generally ‘SSO’ with Windows Integrated Auth (WIA).  Most importantly, this user will get an ‘InsideCorporateNetwork’ = true claim
    • If external, this is generally a Forms Based credential prompt
  3. Once successfully authenticated, they will be redirected back to Azure AD with a SAML token.  This is actually when Azure AD assesses the CA policy rules and determines whether the user requires MFA or not.
  4. If they do, Azure AD actually generates a new ADFS sign in request, this time specifically stating via the wauth parameter to use multipleauthn. This will effectively tell ADFS to execute MFA using its configured providers
  5. Once the user successfully completes MFA, they will go back to Azure AD with this new SAML token that contains a claim telling Azure AD that MFA has now been performed and subsequently lets the user through

This is what the above flow looks like in Fiddler:

MFABypass9.png

This is what your end-state SAML token should look like as well:

MFABypass10

The main takeaway is that step 4 is the new auth flow that is introduced by moving MFA evaluation into Azure AD.  Prior to this, step 2 would have simply performed both username/password authentication and MFA in the same instance, rather than over two requests.

Extra Considerations when enabling MFA on All Cloud Apps

If you decide to take an ‘exclusion’ approach to MFA enforcement for Cloud Apps, be very careful with this.  In fact, you’ll even see Microsoft giving you a little extra warning about this.

MFABypass12

The main difference with taking this approach compared to just doing MFA enforcement at the ADFS level is that you are now enforcing MFA on all cloud identities as well!  This may very well unintentionally break some things, particularly if you’re using ‘cloud identity’ service accounts (e.g. for provisioning scripts or the like).  One thing that will definitely break is the AADConnect account that is created for directory synchronisation.

So at a very minimum, make sure you remember to add the On-Premises Directory Synchronization Service Account(s) into the exclusion list for your Azure AD MFA CA policy.
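
If you are not sure which accounts those are, they are easy enough to find. A quick sketch using the MSOnline module (the display name filter below matches the default name AADConnect gives its synchronisation account):

# List the directory synchronisation service account(s) to add to the CA policy exclusions
Get-MsolUser -SearchString "On-Premises Directory Synchronization Service Account" |
    Select-Object DisplayName, UserPrincipalName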

The very last thing to call out is that some Azure AD applications, such as the Intune Company Portal and Azure AD PowerShell cmdlets, can cause a ‘double ADFS prompt’ when MFA evaluation is being done in Azure AD.   The reason for this and the fix is covered in my next article, Resolving the ‘double auth’ prompt issue with Azure AD Conditional Access MFA and ADFS, so make sure you check that out as well.

 

Real world Azure AD Connect: the case for TWO Azure AD Connect servers

Originally posted on Lucian’s blog at clouduccino.com. Follow Lucian on Twitter @LucianFrango.

***

I was exchanging some emails with an account manager (Andy Walker) at Kloud and thought the exchange would make for some interesting reading. Here’s the outcome in an expanded and much more helpful (to you, dear reader) format…

***

Background

When working with the Microsoft Cloud and in particular with identity, depending on some of the configuration options, it can be quite important to have Azure AD Connect highly available. Unfortunately for us, Microsoft has not developed AADConnect to be highly available. That’s not ideal in today’s 24/7/365 cloud-centric IT world.

When disaster strikes (and yes, that was a deliberate use of the word WHEN), with email, files, real-time communication and everything in between now living up in the sky with those clouds, should your primary data centre be out of action there is a high probability that a considerable chunk of functionality will be affected by your on-premises outage.

Azure AD Connect

AADConnect’s purpose, its only purpose, is to synchronise on-premises directories to Azure AD. There is no other option. You cannot sync an ADDS forest to another with Azure AD Connect. So this simple and lightweight tool only requires a single deployment in any given ADDS forest.

Even when working with multiple forests, you only need one. It can even be deployed on a domain controller; which can save on an instance when working with the cloud.

The case for two Azure AD Connect servers

As of Monday April 17th 2017, just 132 days from today (Tuesday December 6th), DirSync and Azure AD Sync, the predecessors to Azure AD Connect, will be deprecated. This is good news. It means that there is an opportunity to review your existing synchronisation architecture to take advantage of a feature of AADConnect and deploy two servers.

2016-12-06-two-aadc-v01

Staging mode is fantastic. Yes, short and sweet and to the point. It’s fantastic. Enough said, blog done; use it and we’re all good here? Right?

Okay, I’ll elaborate. Staging mode allows for the following rapid-fire benefits, which greatly improve IT as a result:

1 – Redundancy and disaster recovery, not high availability

I’ve read in certain articles that staging mode offers high availability. I disagree and argue it offers redundancy and disaster recovery. High availability is putting into practice architecture and design around applications, systems or servers that keeps those operational and accessible as near as possible to 100% uptime, or always online, mostly through automated solutions.

I see staging mode as offering the ability to manually bring online a secondary AADConnect server should the primary be unavailable. This could be due to a disaster or a scheduled maintenance window. Therefore it makes sense to deploy a secondary server in an alternate data centre, site or geographic location to your primary server.

To do the manual update of the secondary AADConnect server config and bring it online, the following needs to take place:

  • Run Azure AD Connect
    • That is the AzureADConnect.exe which is usually located in the Microsoft Azure Active Directory Connect folder in Program Files
  • Select Configure staging mode (current state: enabled), select next
  • Validate credentials and connect to Azure AD
  • Configure staging mode – un-tick
  • Configure and complete the change over
  • Once this has been run, run a full import/full sync process
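
A couple of PowerShell checks can help during that changeover: confirming which server is currently in staging mode, and kicking off the subsequent full import/full sync. A rough sketch, run on the AADConnect server (the ADSync module ships with AADConnect):

Import-Module ADSync

# Confirm whether this server is the active server or the staging server
Get-ADSyncScheduler | Select-Object StagingModeEnabled, SyncCycleEnabled

# After un-ticking staging mode in the wizard, trigger the full import/full sync
Start-ADSyncSyncCycle -PolicyType Initial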

This being a manual process which takes time and breaks service availability to complete (importantly: you can’t have two AADConnect servers actively synchronising to a single Azure AD tenant at the same time), it’s not highly available.

2 – Update and test management

AADConnect now offers (since February 2016) an in-place automatic upgrade feature. Updates are rolled out by Microsoft and installed automatically. Cue the alarm bells!

I’m going to take a stab at making an analogy here: you wouldn’t have your car serviced while it’s running and taking you from A to B, so why would you have a production directory synchronisation server upgraded while it’s running. A bit of a stretch, but, can you see where I was going there?

Having a secondary AADConnect server allows updates and/or server upgrades without impacting the core functionality of directory synchronisation to Azure AD. While this seems trivial, it’s certainly not trivial in my books. I’d want my AADConnect server functioning correctly all the time with minimal downtime. Having the two servers allows for a 15 min (depending on how fast you can click) managed changeover from server to server.

3 – Improved management and less impact on production

Lastly, this is more of a stretch, but, I think it’s good practice not to have user accounts assigned Domain Admin rights. The same applies to logging into production servers to do trivial tasks. Having a secondary AADConnect server allows IT administrators or service desk engineers to complete troubleshooting of sync processes for the service or objects away from the production server. That, again in my books, is a better practice and just as efficient as using the primary or production servers.

Final words

Staging mode is fantastic. I’ve said it again. Since I found out about staging mode around this time last year, I’ve recommended to every customer I’ve mentioned AADConnect to that they use two servers, referencing the 3 core arguments listed above. I hope that’s enough to convince you, dear reader, to do so as well.

Best,

Lucian

Real world Azure AD Connect: multi forest user and resource + user forest implementation

Originally posted on Lucian’s blog at clouduccino.com. Follow Lucian on Twitter @LucianFrango.

***

Disclaimer: During October I spent a few weeks working on this blog post’s solution at a customer and had to do the responsible thing and pull the pin on further time as I had hit a glass ceiling. I had reached what I thought was possible with Azure AD Connect. In comes Nigel Jones (Identity Consultant @ Kloud) who, through a bit of persuasion from Darren (@darrenjrobinson), took it upon himself to smash through that glass ceiling of Azure AD Connect and figure this solution out. Full credit and a big high five!

***

tl;dr

  • Azure AD Connect multi-forest design
  • Using AADC to sync user/account + resource shared forest with another user/account only forest
  • Why it won’t work out of the box
  • How to get around the issue and leverage precedence to make it work
  • Visio diagrams showing how it all works, for easy digestion

***

In true Memento style, after the quick disclaimer above, let me take you back for a quick background of the solution and then (possibly) blow your mind with what we have ended up with.

Back to the future

A while back in the world of directory synchronisation with Azure AD, to have a user and resource forest solution synchronised required the use of Microsoft Forefront Identity Manager (FIM), now Microsoft Identity Manager (MIM). From memory, you needed the former of those products (FIM) whenever you had a multi-forest synchronisation environment with Azure AD.

Just like Marty McFly, Azure AD synchronisation went from relative obscurity to the mainstream. In doing so, there have been many advancements and improvements that negate the need to deploy FIM or MIM for ever more complex environments.

When Azure AD Connect, then Azure AD Sync, introduced the ability to synchronise multiple forests in a user + resource model, it opened the door for a lot of organisations to streamline the federated identity design for Azure and Office 365.

2016-12-02-aadc-design-02

In the beginning…

The following outlines a common real world scenario for numerous enterprise organisations. In this environment we have an existing Active Directory forest which includes an Exchange organisation, SharePoint, Skype for Business and many more common services and infrastructure. The business grows and, with that wealth and equity, purchases another business to diversify or expand. With that comes integration and the sharing of resources.

We have two companies: Contoso and Fabrikam. A two-way trust is set up between the ADDS forests and users can start to collaborate and share resources.

In order to use Exchange, which is the most common example, we need to start to treat Contoso as a resource forest for Fabrikam.

Over in the Contoso forest, IT creates disabled user objects and linked mailboxes for Fabrikam users. When we’re in the on-premises world, this works fine. I won’t go into too much more detail, but, I’m sure that you, mr or mrs reader, understand the particulars.

In summary, Contoso is a user and resource forest for itself, and a resource forest for Fabrikam. Fabrikam is simply a user forest with no deployment of Exchange, SharePoint etc.

How does a resource forest topology work with Azure AD Connect?

For the better part of two years now, since AADConnect was AADSync, Microsoft added in support for multi-forest connectivity. Last year, Jason Atherton (awesome Office 365 consultant @ Kloud) wrote a great blog post summarising this compatibility and usage.

In AADConnect, a user/account and resource forest topology is supported. The supported topology assumes that a customer has that simple, no-nonsense architecture. There’s no room for any shared funny business…

AADConnect is able to select the two forests’ common identities and merge them before synchronising to Azure AD. This process uses the attributes associated with the user objects: the objectSID in the user/account forest and the msExchMasterAccountSID in the resource forest, to join the user account and the resource account.

There is also the option for customers to have multiple user forests and a single resource forest. I’ve personally not tried this with more than two forests, so I’m not confident enough to say how additional user/account forests would work out as well. However, please try it out and be sure to let me know via comments below, via Twitter or email me your results!

Quick note: you can also merge two objects by matching on the sAMAccountName and MailNickName attributes, or by specifying any ADDS attribute to match between the forests.

Compatibility

aadc-multi-forest

If you’d like to read up on this a little more, here are two articles that reference the above-mentioned topologies in detail:

Why won’t this work in the example shown?

Generally speaking, the first forest to sync in AADConnect, in a multi-forest implementation, is the user/account forest, which is likely the primary/main forest in an organisation. Let’s assume this is the Contoso forest. This will be the first connector to sync in AADConnect. It will also have the lowest precedence number, as with AADConnect, the lower the precedence number, the higher the priority.

When the additional user/account forest(s) or the resource forest is added, these connectors run after the initial Contoso connector due to the default precedence set. From an external perspective, this doesn’t seem like much of a big deal. AADConnect merges two matching or mirrored user objects by way of the (commonly used) objectSID and msExchMasterAccountSID and away we go. In theory, precedence shouldn’t really matter.

Give me more detail

The issue is that precedence does indeed matter when we go back to our Contoso and Fabrikam example. Precedence is exactly why this does not work. Here’s what happens:

2016-12-02-aadc-whathappens-01

  • #1 – Contoso is synced to AADC first as it was the first forest connected to AADC
    • Adding in Fabrikam first over Contoso doesn’t work either
  • #2 – The Fabrikam forest is joined with a second forest connector
  • AADC is configured with ‘user identities exist across multiple directories’
  • objectSID and msExchMasterAccountSID are selected to merge identities
  • When the objects are merged, sAmAccountName is taken from the Contoso forest – #1
    • This happens for Contoso forest users AND Fabrikam forest users
  • When the objects are merged, mail or primarySMTPaddress is taken from the Contoso forest – #1
    • This happens for Contoso forest users AND Fabrikam forest users
  • Should the two objects not have a completely identical set of attributes, the attributes that are set are pulled
    • In this case, most of the user object details come from Fabrikam – #2
    • Attributes like the user’s first name, last name, employee ID, branch / office
The result of this standard setup is that Fabrikam users have their resource accounts in Contoso synced, but have their UPN set with the suffix from the Contoso forest. An example would be a UPN of user@contoso.com rather than the desired user@fabrikam.com. When this happens, there is no SSO, as Windows Integrated Authentication in the Fabrikam forest does not recognise the Contoso forest UPN suffix of @contoso.com.

Yes, even with ADDS forest trusts configured correctly and UPN routing etc. all working correctly, authentication just does not work. AADC uses the incorrect attributes and syncs those to Azure AD.

Is there any other way around this?

I’ve touched on and referenced precedence a number of times in this blog post so far. The solution is indeed precedence. The issue I had experienced was a lack of understanding of precedence in AADConnect. Sure, it works on connector rule level precedence, which is set by AADConnect during the configuration process as forests are connected.

Playing around with precedence was not something I wanted to do, as I didn’t have enough Microsoft Identity Manager or Forefront Identity Manager background to really be certain of the outcome of the joining/merging process of user and resource account objects. I know that FIM/MIM has the option of attribute level precedence, which is what we really wanted here, so my thinking was that we needed FIM/MIM to do the job. Wrong!

In comes Nigel…

Nigel dissected the requirements over the course of a week. He reviewed the configuration in an existing FIM 2010 R2 deployment and worked out what was required of AADConnect. Having got AADConnect set up, all that was required was tweaking a couple of the inbound rules and moving them higher up the precedence order.

Below is the AADConnect Sync Rules editor output from the final configuration of AADConnect:

2016-12-01-syncrules-01

The solution centres around moving the main precedence rule, rule #1 for Fabrikam (red arrows and yellow highlight), above the highest (and default) Contoso rule (originally #1). When this happened, AADConnect was able to pull the correct sAmAccountName and mail attributes from Fabrikam and keep all the other attributes associated with Exchange mailboxes from Contoso. Happy days!

Final words

Tinkering around with AADConnect shows just how powerful the “cut down FIM/MIM” application is. While AADConnect lacks the in-depth configuration and customisation that you find in FIM/MIM, it packs a lot into a small package! #Impressed

Cheers,

Lucian

Azure AD Connect – Using AuthoritativeNull in a Sync Rule

There is a feature in Azure AD Connect that became available in the November 2015 build 1.0.9125.0 (listed here), which has not had much fanfare but can certainly come in handy in tricky situations. I happened to be working on a project that required the DNS domain linked to an old Office 365 tenant to be removed so that it could be used in a new tenant. Although the old tenant was no longer used for Exchange Online services, it held onto the domain in question, and Azure AD Connect was being used to synchronise objects between the on-premises Active Directory and Azure Active Directory.

Trying to remove the domain using the Office 365 Portal will reveal if there are any items that need to be remediated prior to removing the domain from the tenant, and for this customer it showed that there were many user and group objects that still had the domain used as the userPrincipalName value, and in the mail and proxyAddresses attribute values. The AuthoritativeNull literal could be used in this situation to blank out these values against users and groups (i.e. Distribution Lists) so that the domain can be released. I’ll attempt to show the steps in a test environment, and bear with me as this is a lengthy blog.

Trying to remove the domain minnelli.net listed the items needing attention, as shown in the following screenshots:

This report showed that three actions are required to remove the domain values out of the attributes:

  • userPrincipalName
  • proxyAddresses
  • mail

userPrincipalName is simple to resolve by changing the value in the on-premises Active Directory to a different domain suffix, then synchronising the changes to Azure Active Directory so that the default onmicrosoft.com or another accepted domain is set.

Clearing the proxyAddresses and mail attribute values is possible using the AuthoritativeNull literal in Azure AD Connect. NOTE: You will need to assess the outcome of performing these steps depending on your scenario. For my customer, we were able to perform these steps without affecting other services required from the old Office 365 tenant.

Using the Synchronization Rules Editor, locate and edit the In from AD – User Common rule. Editing the out-of-the-box rules will display a message suggesting that you create an editable copy of the rule and disable the original rule, which is highly recommended, so click Yes.

The rule is cloned as shown below and we need to be mindful of the Precedence value which we will get to shortly.

Select Transformations and edit the proxyAddresses attribute, set the FlowType to Expression, and set the Source to AuthoritativeNull.

I recommend setting the Precedence value in the new cloned rule to be the same as the original rule, in this case a value of 104. First, edit the original rule to a value such as 1001; you will also notice that the original rule is already set to Disabled.

Set the cloned rule Precedence value to 104.

Prior to performing a Full Synchronization run profile to process the new logic, I prefer and recommend performing a Preview by selecting an affected user and previewing a Full Synchronization change. As can be seen below, the proxyAddresses value will be deleted.

The same process would need to be done for the mail attribute.

Once the rules are set, launch the following PowerShell command to perform a Full Import/Full Synchronization cycle in Azure AD Connect:

  • Start-ADSyncSyncCycle -PolicyType Initial

Once the cycle is completed, attempt to remove the domain again to check if any other items need remediation, or you might see a successful domain removal. I’ve seen it take up to 30 minutes or so before being able to remove the domain if all remediation tasks have been completed.
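
If you would rather check (and attempt the removal) from PowerShell instead of the portal, something like the below works. A sketch only, using the MSOnline module and my test domain minnelli.net:

# Find any synced users still referencing the old domain in their UPN or proxyAddresses
Get-MsolUser -All | Where-Object {
    $_.UserPrincipalName -like "*@minnelli.net" -or
    ($_.ProxyAddresses -like "*@minnelli.net")
} | Select-Object UserPrincipalName, DisplayName

# Once everything is clear, attempt the removal
Remove-MsolDomain -DomainName minnelli.net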

There will be other scenarios where using the AuthoritativeNull literal in a Sync Rule will come in handy. What others can you think of? Leave a description in the comments.

AAD Connect – Using Directory Extensions to add attributes to Azure AD

I was recently asked to consult on a project that was looking at the integration of Workday with Azure AD for Single Sign On. One of the requirements for the project, is that staff number be used as the NameID value for authentication.

This got me thinking as the staff number wasn’t represented in Azure AD at all at this point, and in order to use it, we will need to get it to Azure AD.

These days, this is fairly easy to achieve by using the “Directory Extensions” option in Azure AD Connect. Directory Extensions allows us to synchronise additional attributes from the on-premises environment to Azure AD.

Launch the AADC configuration utility and select “Customize synchronisation options”

aadc_config_p1

Enter your Azure AD global administrator credentials to connect to Azure AD.

aadc_config_p2

Once authenticated to Azure AD, click next through the options until we get to “Optional Features” and select “Directory extension attribute sync”

aadc_config_p3

There are two additional attributes that I want to make use of in Azure AD, employeeID and employeeNumber. As such, I have selected these attributes from the list.

aadc_config_p4

Completing the wizard will configure AAD Connect to sync the requested attributes to Azure AD. A full synchronisation is required post configuration and can be launched either from the configuration wizard itself, or from PowerShell using the following cmdlet.

Start-ADSyncSyncCycle -PolicyType initial

Once the sync was complete, I went to validate that my new attributes were visible in Azure AD. They weren’t!

After some digging, I found that my attributes had in fact synced successfully; they just weren’t in the location I wanted them to be. My attributes had synced as follows:

Source AD attribute          Azure AD attribute
employeeNumber               extension_tenantGUID_employeeNumber
employeeID                   extension_tenantGUID_employeeID

 

So… This wasn’t exactly what I was looking for, but at least the theory works in practice.
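
If you want to see where the attributes actually landed for yourself, a quick way is to dump a synced user's extension properties. A sketch, assuming the AzureAD PowerShell module:

Connect-AzureAD
# The directory extensions appear as extension_tenantGUID_employeeNumber / _employeeID
Get-AzureADUser -ObjectId user@mydomain.com | Select-Object -ExpandProperty ExtensionProperty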

Fortunately, this problem is also easy to fix. We can configure AAD Connect to synchronise to a different target attribute using the synchronisation rules editor.

When we configured Directory extensions above, a new rule was created in the synchronisation rules editor for “User Directory Extension”. If we edit this rule, we can see the source and target attributes as they are currently configured, and we are able to make changes to these rules.

Before making any changes, follow the prompt guiding you not to change the existing rule, but to instead clone the current rule and then edit the clone. The original rule will be disabled during the cloning process.

As seen below, I have now configured the employeeNumber and employeeID attributes to sync to extensionattribute5 and 6 respectively.

aadc_config_p6

A full synchronisation is required before the above changes will take effect. We are now in a position to configure Single Sign On settings in Azure AD, using extensionAttribute5 as the NameID value.

I hope this helps some of you out there.

Cheers,

Shane.

AAD Connect – Updating OU Sync Configuration Error: stopped-deletion-threshold-exceeded

I was recently working with a customer on cleaning up their Azure AD Connect synchronisation configuration.

Initially, the customer had enabled sync for all OUs in the forest (as a lot of companies do), and had now come to a point in maturity where they could look at optimising the solution.

We identified an OU with approximately 7000 objects which did not need to be synced.

So…

I logged onto the AAD Connect server and launched the configuration utility. After authenticating with my Office365 global admin account, I navigated to the OU sync configuration and deselected the required OU.

At this point, everything appeared to be working as expected. The configuration utility saved my changes successfully and started a delta sync based on the checkbox which was automatically selected in the tool. The delta sync also completed successfully.

I went to validate my results, and noticed that no changes had been made and no objects had been deleted from Azure AD.

It occurred to me that a full sync was probably required in order to force the deletion to occur. I kicked off a full synchronisation using the following command.

Start-ADSyncSyncCycle -PolicyType initial

When the sync cycle reached the export phase however, I noticed that the task had thrown an error, as seen below:

aadc_deletion_error

It would seem I’m trying to delete too many objects. Well that does make sense considering we identified 7000 objects earlier. We need to disable the Export deletion threshold before we can move forward!

Ok, so now we know what we have to do! What does the order of events look like? See below:

  1. Update the OU synchronisation configuration in the Azure AD Connect utility
  2. Deselect the run synchronisation option before saving in the AADC utility
  3. Run the following PowerShell command to disable the deletion threshold
    1. Disable-ADSyncExportDeletionThreshold
  4. Run the following PowerShell command to start the full synchronisation
    1. Start-ADSyncSyncCycle -PolicyType initial
  5. Wait for the full synchronisation cycle to complete
  6. Run the following PowerShell command to re-enable the deletion threshold
    1. Enable-ADSyncExportDeletionThreshold
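
Steps 3 to 6 as a single PowerShell sketch, run on the AADConnect server (the threshold cmdlets may prompt you for Azure AD credentials, and 500 is the default threshold value as I understand it):

Import-Module ADSync

# Step 3 - temporarily lift the export deletion threshold
Disable-ADSyncExportDeletionThreshold

# Step 4 - run a full import/full synchronisation cycle
Start-ADSyncSyncCycle -PolicyType Initial

# Step 5 - wait for the cycle to complete (watch the Synchronization Service console)

# Step 6 - turn the protection back on (500 being the default threshold)
Enable-ADSyncExportDeletionThreshold -DeletionThreshold 500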

I hope this helps save some time for some of you out there.

Cheers,

Shane.

How to export user error data from Azure AD Connect with CSExport

A short post is a good post?! – the other day I had some problems with users synchronising with Azure AD via Azure AD Connect. Ultimately Azure AD Connect was not able to meet the requirements of the particular solution, as Microsoft Identity Manager (MIM) 2016 has the final 5% of the config required for what turned out to be a complicated user+resource and user forest design.

In saying that though, during my troubleshooting, I was looking at ways to export the error data from Azure AD Connect. I wanted to have the data more accessible as sometimes looking at problematic users one by one isn’t ideal. Having it all in a CSV file makes it rather easy.

So here’s a short blog post on how to get that data out of Azure AD Connect to streamline troubleshooting.

What

Azure AD Connect has a way to make things nice and easy, but, at the same time makes you want to pull your hair out. When digging a little, you can get the information that you want. However, at first, you could be presented with a whole bunch of errors like this:

Unable to update this object because the following attributes associated with this object have values that may already be associated with another object in your local directory services: [UserPrincipalName user@domain.com]. Correct or remove the duplicate values in your local directory. Please refer to http://support.microsoft.com/kb/2647098 for more information on identifying objects with duplicate attribute values.

I believe it’s Event ID 6941 in Event Viewer as well.

It’s not a complicated error. It’s rather self explanatory. However, when you have a bunch of them, say anything more than 20 or so, as I said earlier, it’s easier to export it all for quick reference and faster review.

How

To export that error data to a CSV file, complete the following steps:

Open a cmd prompt
Change directory to "C:\Program Files\Microsoft Azure AD Sync\Bin"
Run: CSExport "[Name of Connector]" [%temp%]\Errors-Export.xml /f:x - without the [ ]

The name of the connector above can be found in the AADC Synchronisation Service.

Now to view that data in a nice CSV format, the following steps can be run to convert that into something more manageable:

Run: "CSExportAnalyzer [%temp%]\Errors-Export.xml > [%temp%]\Errors-Export.csv" - again, without the [ ] 
You now have a file in your [%temp%] directory named "Errors-Export.csv".
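
Putting it all together, here is what a run looks like from a PowerShell prompt, using a made-up connector name of contoso.com (substitute the connector name shown in your Synchronisation Service console):

cd "C:\Program Files\Microsoft Azure AD Sync\Bin"

# Export objects with errors from the named connector to XML
.\csexport.exe "contoso.com" "$env:TEMP\Errors-Export.xml" /f:x

# Convert the XML into a CSV for easier review
.\csexportanalyzer.exe "$env:TEMP\Errors-Export.xml" > "$env:TEMP\Errors-Export.csv"

# Open the result
Invoke-Item "$env:TEMP\Errors-Export.csv"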

Happy days!

Final words

So a short blog post, but, I think a valuable one in that getting the info into a more easily digestible format should result in faster troubleshooting. In saying that, this doesn’t give you all errors in all areas of AADC. Enjoy!

Best, Lucian


Originally posted on Lucian’s blog at clouduccino.com. Follow Lucian on Twitter @LucianFrango.

Complex Mail Routing in Exchange Online Staged Migration Scenario

Notes From the Field:

I was recently asked to assist an ongoing project with understanding some complex mail routing and identity scenarios which had been identified during planning for an upcoming mail migration from an external system into Exchange Online.

New user accounts were created in Active Directory for the external staff who are about to be migrated. If we were to assign the target state production email attributes now, and create the Exchange Online mailboxes, we would have a problem nearing migration.

When the new domain is verified in Office 365 & Exchange Online, new mail from staff already in Exchange Online would start delivering to the newly created mailboxes for the staff soon to be onboarded.

Not doing this, however, would delay the project, which is something we didn’t want either.

I have proposed the following in order to create a scenario whereby cutover to Exchange Online for the new domain is quick, and which does not cause user downtime during the co-existence period. We are creating some “co-existence” state attributes on the on-premises AD user objects that will allow mail flow to continue in all scenarios up until cutover (I will come back to this later).

generic_exchangeonline_migration_process_flow

We have configured the AD user objects in the following way

  1. UserPrincipalName – username@localdomainname.local
  2. mail – username@mydomain.onmicrosoft.com
  3. targetaddress – username@mydomain.com

We have configured the remote mailbox objects in the following way

  1. mail – username@mydomain.onmicrosoft.com
  2. targetaddress – username@mydomain.com

We have configured the on-premises Exchange Accepted domain in the following way

  1. Accepted Domain – External Relay

We have configured the Exchange Online Accepted domain in the following way

  1. Accepted Domain – Internal Relay
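
For illustration, here is roughly what that co-existence configuration looks like in PowerShell for a single user. This is a hedged sketch only: the variable values are examples, the attributes set are the ones listed above (the remote mailbox objects get the equivalent mail/targetaddress values), and in the real project this was driven from a CSV import.

Import-Module ActiveDirectory

# Example values only
$sam         = "jsmith"
$onmicrosoft = "jsmith@mydomain.onmicrosoft.com"
$external    = "jsmith@mydomain.com"

# AD user object: UPN stays on the local domain, mail on the tenant routing domain,
# targetaddress pointing at the external (source) email system
Set-ADUser -Identity $sam `
    -UserPrincipalName "$sam@localdomainname.local" `
    -EmailAddress $onmicrosoft `
    -Replace @{targetAddress = "SMTP:$external"}

# On-premises Exchange accepted domain set to External Relay
Set-AcceptedDomain -Identity "mydomain.com" -DomainType ExternalRelay

# Exchange Online accepted domain set to Internal Relay (run in an EXO session)
Set-AcceptedDomain -Identity "mydomain.com" -DomainType InternalRelay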

How does this all work?

Glad you asked! As I alluded to earlier, the main problem here is with staff who already have mailboxes in Exchange Online. By configuring the objects in this way, we achieve several things:

  1. We can verify the new domains successfully in Office365 without impacting existing or new users. By setting the UPN & mail attributes to @mydomain.onmicrosoft.com, Office365 & Exchange Online do not (yet) reference the newly onboarded domain to these mailboxes.
  2. By configuring the accepted domains in this way, we are doing the following:
    1. When an email is sent from Exchange Online to an email address at the new domain, Exchange Online will route the message via the hybrid connector to the Exchange on-premises environment. (the new mailbox has an email address @mydomain.onmicrosoft.com)
    2. When the on-premises environment receives the email, Exchange will look at both the remote mailbox object & the accepted domain configuration.
      1. The target address on the mail is configured @mydomain.com
      2. The accepted domain is configured as external relay
      3. Because of this, the on-premises exchange environment will forward the message externally.

Why is this good?

Again, for a few reasons!

We are now able to pre-stage content from the existing external email environment to Exchange Online by using a target address of @mydomain.onmicrosoft.com. The project is no longer at risk of being delayed! 🙂

On the night of cutover of MX records to Exchange Online (or in this case, a 3rd party email hygiene provider), we are able to use the same PowerShell code that we used in the beginning to configure the new user objects to modify the user accounts for production use (we are using a different CSV import file to achieve this).

Target State Objects

We have configured the AD user objects in the following way

  1. UserPrincipalName – username@mydomain.com
  2. mail – username@mydomain.com
  3. targetaddress – username@mydomain.mail.onmicrosoft.com

We have configured the remote mailbox objects in the following way

  1. mail
    1. username@mydomain.com (primary)
    2. username@mydomain.onmicrosoft.com
  2. targetaddress – username@mydomain.mail.onmicrosoft.com

We have configured the on-premises Exchange Accepted domain in the following way

  1. Accepted Domain – Authoritative

We have configured the Exchange Online Accepted domain in the following way

  1. Accepted Domain – Internal Relay

NOTE: AAD Connect sync is now run and a manual validation completed to confirm user accounts in both on-premises AD & Exchange, as well as Azure AD & Exchange Online to confirm that the user updates have been successful.

We can now update DNS MX records to our 3rd party email hygiene provider (or this could be Exchange Online Protection if you don’t have one).

A final synchronisation of mail from the original email system is completed once new mail is being delivered to Exchange Online.