Real-world Azure AD Connect: multi-forest user and resource + user forest implementation

Originally posted on Lucian’s blog at clouduccino.com. Follow Lucian on Twitter @LucianFrango.

***

Disclaimer: During October I spent a few weeks working on this blog post's solution at a customer and had to do the responsible thing and pull the pin on further time, as I had hit a glass ceiling. I had reached what I thought was possible with Azure AD Connect. In comes Nigel Jones (Identity Consultant @ Kloud) who, through a bit of persuasion from Darren (@darrenjrobinson), took it upon himself to smash through that glass ceiling of Azure AD Connect and figure this solution out. Full credit and a big high five!

***

tl;dr

  • Azure AD Connect multi-forest design
  • Using AADC to sync user/account + resource shared forest with another user/account only forest
  • Why it won’t work out of the box
  • How to get around the issue and leverage precedence to make it work
  • Visio diagrams of how it all works for easy digestion

***

In true Memento style, after the quick disclaimer above, let me take you back for a quick background of the solution and then (possibly) blow your mind with what we have ended up with.

Back to the future

A while back in the world of directory synchronisation with Azure AD, to have a user and resource forest solution synchronised required the use of Microsoft Forefront Identity Manager (FIM), now Microsoft Identity Manager (MIM). From memory, you needed the former of those products (FIM) whenever you had a multi-forest synchronisation environment with Azure AD.

Just like Marty McFly, Azure AD synchronisation went from relative obscurity to the mainstream. In doing so, there have been many advancements and improvements that negate the need to deploy FIM or MIM for all but the most complex environments.

When Azure AD Connect, then Azure AD Sync, introduced the ability to synchronise multiple forests in a user + resource model, it opened the door for a lot of organisations to streamline the federated identity design for Azure and Office 365.

2016-12-02-aadc-design-02

In the beginning…

The following outlines a common real world scenario for numerous enterprise organisations. In this environment we have an existing Active Directory forest which includes an Exchange organisation, SharePoint, Skype for Business and many more common services and infrastructure. The business grows and, with the wealth and equity, purchases another business to diversify or expand. With that comes integration and the sharing of resources.

We have two companies: Contoso and Fabrikam. A two-way trust is set up between the ADDS forests and users can start to collaborate and share resources.

In order to use Exchange, which is the most common example, we need to start to treat Contoso as a resource forest for Fabrikam.

Over at the Contoso forest, IT creates disabled user objects and linked mailboxes for Fabrikam users. While we're in the on-premises world, this works fine. I won't go into too much more detail, but I'm sure that you, Mr or Mrs Reader, understand the particulars.

In summary, Contoso is a user and resource forest for itself, and a resource forest for Fabrikam. Fabrikam is simply a user forest with no deployment of Exchange, SharePoint etc.

How does a resource forest topology work with Azure AD Connect?

Microsoft has supported multi-forest connectivity for the better part of two years now, since AADConnect was AADSync. Last year, Jason Atherton (awesome Office 365 consultant @ Kloud) wrote a great blog post summarising this compatibility and usage.

In AADConnect, a user/account and resource forest topology is supported. The supported topology assumes that a customer has that simple, no-nonsense architecture. There’s no room for any shared funny business…

AADConnect is able to select the two forests' common identities and merge them before synchronising to Azure AD. This process uses the attributes associated with the user objects, objectSID in the user/account forest and msExchMasterAccountSID in the resource forest, to join the user account and the resource account.

There is also the option for customers to have multiple user forests and a single resource forest. I've personally not tried this with more than two forests, so I can't say with confidence how additional user/account forests would work out. However, please try it out and be sure to let me know via comments below, via Twitter or email me your results!

Quick note: you can also merge two objects by a sAMAccountName to sAMAccountName attribute match, or by specifying any ADDS attribute to match between the forests.

Compatibility

aadc-multi-forest

If you'd like to read up on this a little more, here are two articles that cover the above-mentioned topologies in detail:

Why won’t this work in the example shown?

Generally speaking, the first forest to sync in AADConnect, in a multi-forest implementation, is the user/account forest, which is likely the primary/main forest in an organisation. Let's assume this is the Contoso forest. This will be the first connector to sync in AADConnect. It will also have the lowest precedence number which, in AADConnect, means the highest priority: the lower the designated precedence number, the higher the priority.

When the additional user/account forest(s) or the resource forest are added, these connectors run after the initial Contoso connector due to the default precedence set. From an external perspective, this doesn't seem like much of a big deal. AADConnect merges two matching or mirrored user objects by way of the (commonly used) objectSID and msExchMasterAccountSID and away we go. In theory, precedence shouldn't really matter.

Give me more detail

The issue is that precedence does indeed matter when we go back to our Contoso and Fabrikam example. Here's what happens:

2016-12-02-aadc-whathappens-01

  • #1 – Contoso is synced to AADC first as it was the first forest connected to AADC
    • Adding Fabrikam first, ahead of Contoso, doesn't work either
  • #2 – The Fabrikam forest is joined with a second forest connector
  • AADC is configured with "user identities exist across multiple directories"
  • objectSID and msExchMasterAccountSID are selected to merge identities
  • When the objects are merged, sAmAccountName is taken from the Contoso forest – #1
    • This happens for Contoso forest users AND Fabrikam forest users
  • When the objects are merged, mail or primarySMTPaddress is taken from the Contoso forest – #1
    • This happens for Contoso forest users AND Fabrikam forest users
  • Should the two objects not have a completely identical set of attributes, the attributes that are set are pulled
    • In this case, most of the user object details come from Fabrikam – #2
    • Attributes like the user's first name, last name, employee ID and branch/office

The result of this standard setup is that Fabrikam users have their resource accounts in Contoso synced but end up with their UPN set with the suffix from the Contoso forest. An example would be a UPN of user@contoso.com rather than the desired user@fabrikam.com. When this happens, there is no SSO, as Windows Integrated Authentication in the Fabrikam forest does not recognise the Contoso UPN suffix of @contoso.com.

Yes, even with ADDS forest trusts configured correctly and UPN routing etc all working correctly, authentication just does not work. AADC uses the incorrect attributes and syncs those to Azure AD.

Is there any other way around this?

I've touched on and referenced precedence a number of times in this blog post so far. The solution is indeed precedence. The issue I had experienced was a lack of understanding of precedence in AADConnect. Sure, I knew it works on connector-level rule precedence, which is set by AADConnect during the configuration process as forests are connected.

Playing around with precedence was not something I wanted to do, as I didn't have enough Microsoft Identity Manager or Forefront Identity Manager background to be certain of the outcome of the joining/merging process of user and resource account objects. I knew that FIM/MIM has the option of attribute-level precedence, which is what we really wanted here, so my thinking was that we needed FIM/MIM to do the job. Wrong!

In comes Nigel…

Nigel dissected the requirements over the course of a week. He reviewed the configuration in an existing FIM 2010 R2 deployment and worked out what was needed of AADConnect. Once AADConnect was set up, all that was required was tweaking a couple of the inbound rules and moving them higher up the precedence order.

Below is the AADConnect Sync Rules editor output from the final configuration of AADConnect:

2016-12-01-syncrules-01

The solution centres on moving the main precedence rule for Fabrikam (red arrows and yellow highlight) above the highest (and default) Contoso rule (originally #1). Once this was done, AADConnect was able to pull the correct sAmAccountName and mail attributes from Fabrikam and keep all the other attributes associated with Exchange mailboxes from Contoso. Happy days!
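If you want to sanity-check the rule precedence outside of the Sync Rules Editor GUI, the ADSync PowerShell module on the AADConnect server can list rules in precedence order. A quick hedged example (remember: the lower the precedence number, the higher the priority):

Import-Module ADSync
# List all sync rules, highest priority (lowest number) first
Get-ADSyncRule | Sort-Object Precedence | Select-Object Precedence, Direction, Name | Format-Table -AutoSize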

Final words

Tinkering around with AADConnect shows just how powerful the “cut down FIM/MIM” application is. While AADConnect lacks the in-depth configuration and customisation that you find in FIM/MIM, it packs a lot in a small package! #Impressed

Cheers,

Lucian

Completing an Exchange Online Hybrid individual MoveRequest for a mailbox in a migration batch

I can't remember for certain, however I would say that since at least Exchange Server 2010 Hybrid there has been the ability to complete a MoveRequest from on-premises to Exchange Online manually (via PowerShell) for a mailbox that was within a migration batch. It's a really important feature for all customers and something I have used on every enterprise migration to Exchange Online.

What are we trying to achieve here?

With enterprise customers and the potential for thousands of mailboxes to move from on-premises to Exchange Online, business analysts get their "kid in a candy store" on and sift through data to come up with relationships between mailboxes, so these mailboxes can be grouped together in migration batches for synchronised cutovers.

This is the tried and true method to ensure that not only business units, departments or teams cut over around the same timeframe, but also that any specialised mailbox and shared mailbox relationships cut over together. This then keeps business processes working nicely and with minimal disruption to users.

Sidebar – As of March 2016, it is now possible to have permissions to shared mailboxes work cross-organisation from cloud to on-premises, in hybrid deployments with Exchange Server 2013 or 2016 on-premises. Cloud mailboxes can access on-premises shared mailboxes, which can streamline migrations where not all shared mailboxes need to be cut over with their full-access users.

Reference: https://blogs.technet.microsoft.com/mconeill/2016/03/20/shared-mailboxes-in-exchange-hybrid-now-work-cross-premises/

How can we achieve this?

In the past, and from what I had been doing for what feels like the last four years, this required one or two PowerShell cmdlets. The main reference that I've recently gone to is Paul Cunningham's ExchangeServerPro blog post, which conveniently is also the main Google search result. The two lines of PowerShell are as follows:

Step 1 – Change the Suspend When Ready to Complete switch to $false or turn it off

Get-MoveRequest mailbox@domain.com | Set-MoveRequest -SuspendWhenReadyToComplete:$false

Step 2 – Resume the move request to complete the cutover of the mailbox

Get-MoveRequest mailbox@domain.com | Resume-MoveRequest

Often I don't even do step 1 mentioned above. I just resumed the move request. It all worked and got me the result I was after. This was across various migration projects with both Exchange Server 2010 and Exchange Server 2013 on-premises.

Things changed recently!?!? I’ve been working with a customer on their transition to Exchange Online for the last month and we’re now up to moving a pilot set of mailboxes to the cloud. Things were going great, as expected, nothing out of the ordinary. That is, until I wanted to cutover one initial mailbox in our first migration batch.

Why didn’t this work?

It's been a few months, three quarters of a year, since I've done an Office 365 Exchange Online mail migration. I did the trusted Google and found Paul's blog post. Had that "oh yeah, that's it" moment and remembered the PowerShell cmdlets. Our pilot user was cut over! Wrong…

The mailbox went from a status of "synced" to "incrementalsync" and back to "synced" with no success. I ran through the PowerShell again. No bueno. Here's a screen grab of what I was seeing.

SERIOUS FACE > Some names and identifying details have been changed to protect the privacy of individuals

And yes, that is my PowerShell colour theme. I have to get my “1337 h4x0r” going and make it green and stuff.

I see where this is going. There's a workaround. So Lucian, hurry up and tell me!

Yes, dear reader, you would be right. After a bit of back and forth with Microsoft, there is indeed a solution. It still involves two PowerShell cmdlets, but there are two additional parameters. One of those parameters is actually hidden: -PreventCompletion. The other is a date and time value normally represented like "28/11/2016 12:00:00 PM".

Let's cut to the chase; here are the two cmdlets:

Get-MoveRequest -Identity mailbox@domain.com | Set-MoveRequest -SuspendWhenReadyToComplete:$false -preventCompletion:$false -CompleteAfter 5
Get-MoveRequest -Identity mailbox@domain.com | Resume-MoveRequest

After running that, the mailbox, along with another four to be sure it worked, had all successfully cut over to Exchange Online. My -CompleteAfter value of "5" was meant to be 5 minutes. I would say it could be set more correctly with a proper date and time value of the current date and time plus 5 minutes. For me, it worked with the above PowerShell.
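For reference, if you would rather supply a proper date and time than a bare number, something along these lines should achieve the "current time plus five minutes" intent (an untested variation on the cmdlets above):

Get-MoveRequest -Identity mailbox@domain.com | Set-MoveRequest -SuspendWhenReadyToComplete:$false -PreventCompletion:$false -CompleteAfter (Get-Date).AddMinutes(5)
Get-MoveRequest -Identity mailbox@domain.com | Resume-MoveRequest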

Final words

I know it's been some time since I moved mailboxes from on-premises to the cloud, however, it's like riding a bike: you never forget. In this case, I knew the option was there to do it. I pestered Office 365 support for two days. I hope the awesome people there don't hate me for it. Although, in the long run, it's good that I did, as figuring this out and using this method of manual mailbox cutover for synced migration batches is crucial to any transition to the cloud.

Enjoy!

Lucian


AAD Connect – Using Directory Extensions to add attributes to Azure AD

I was recently asked to consult on a project that was looking at the integration of Workday with Azure AD for Single Sign On. One of the requirements for the project is that the staff number be used as the NameID value for authentication.

This got me thinking, as the staff number wasn't represented in Azure AD at all at this point, and in order to use it, we would need to get it into Azure AD.

These days, this is fairly easy to achieve by using the “Directory Extensions” option in Azure AD Connect. Directory Extensions allows us to synchronise additional attributes from the on-premises environment to Azure AD.

Launch the AADC configuration utility and select “Customize synchronisation options”

aadc_config_p1

Enter your Azure AD global administrator credentials to connect to Azure AD.

aadc_config_p2

Once authenticated to Azure AD, click next through the options until we get to “Optional Features” and select “Directory extension attribute sync”

aadc_config_p3

There are two additional attributes that I want to make use of in Azure AD, employeeID and employeeNumber. As such, I have selected these attributes from the list.

aadc_config_p4

Completing the wizard will configure AAD Connect to sync the requested attributes to Azure AD. A full synchronisation is required post configuration and can be launched either from the configuration wizard itself, or from PowerShell using the following cmdlet.

Start-ADSyncSyncCycle -PolicyType initial

Once the sync was complete, I went to validate that my new attributes were visible in Azure AD. They weren’t!

After some digging, I found that my attributes had in fact synced successfully, they just weren’t in the location I wanted them to be. My attributes had synced as follows:

Source AD                    Azure AD

employeeNumber               extension_tenantGUID_employeeNumber

employeeID                   extension_tenantGUID_employeeID

So… This wasn’t exactly what I was looking for, but at least the theory works in practice.
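If you want to confirm where the extension attributes landed, the AzureAD PowerShell module can list them for a given user. A hedged check (the GUID in the property name will be specific to your tenant's AAD Connect application):

Connect-AzureAD
Get-AzureADUserExtension -ObjectId user@yourdomain.com
# Look for extension_<GUID>_employeeNumber and extension_<GUID>_employeeID in the output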

Fortunately, this problem is also easy to fix. We can configure AAD Connect to synchronise to a different target attribute using the synchronisation rules editor.

When we configured Directory extensions above, a new rule was created in the synchronisation rules editor for “User Directory Extension”. If we edit this rule, we can see the source and target attributes as they are currently configured, and we are able to make changes to these rules.

Before making any changes, follow the prompt you receive: don't change the existing rule; instead, clone the current rule and then edit the clone. The original rule will be disabled during the cloning process.

As seen below, I have now configured the employeeNumber and employeeID attributes to sync to extensionAttribute5 and extensionAttribute6 respectively.

aadc_config_p6

A full synchronisation is required before the above changes will take effect. We are now in a position to configure Single Sign On settings in Azure AD, using extensionAttribute5 as the NameID value.

I hope this helps some of you out there.

Cheers,

Shane.

Office365 Licensing Management Agent for Microsoft Identity Manager

Licensing for Office365 has always been a moving target for enterprise customers. Over the years I've implemented a plethora of solutions to keep licensing consistent with entitlement logic. For some customers this is as simple as everyone getting, say, an E3 license. For other institutions there is often a mix of 'E' and 'K' licenses depending on EmployeeType.

Using the Granfeldt PowerShell Management Agent to import Office365 Licensing info

In this blog post I detail how I’m using Søren Granfeldt’s extremely versatile PowerShell Management Agent yet again. This time to import Office365 licensing information into Microsoft Identity Manager.

I’m bringing in the licenses associated with users as attributes on the user account. I’m also bringing in the licenses from the tenant as their own ObjectType into the Metaverse. This includes the information about each license such as how many licenses have been purchased, how many licenses have been issued etc.

Overview

I'm not showing assigning licenses. In the schema I have included the LicensesToAdd and LicensesToRemove attributes. Check out my Adding/Removing User Office365 Licences using PowerShell and the Azure AD Graph RestAPI post to see how to assign and remove licenses using PowerShell. From that you can work out your logic to implement an Export flow to manage Office365 licenses.

Getting Started with the Granfeldt PowerShell Management Agent

If you don't already have it, what are you waiting for? Go get it from here. Søren's documentation is pretty good but does assume you have a working knowledge of FIM/MIM, and this blog post is no different.

Three items I had to work out that I’ll save you the pain of are;

  • You must have a Password.ps1 file. Even though we’re not doing password management on this MA, the PS MA configuration requires a file for this field. The .ps1 doesn’t need to have any logic/script inside it. It just needs to be present
  • The credentials you give the MA to run this MA are the credentials for the account that has permissions to the Office365 Tenant. Just a normal account is enough to enumerate it, but you’ll need additional permissions to assign/remove licenses.
  • The path to the scripts in the PS MA Config must not contain spaces and be in old-skool 8.3 format. I’ve chosen to store my scripts in an appropriately named subdirectory under the MIM Extensions directory. Tip: from a command shell use dir /x to get the 8.3 directory format name. Mine looks like C:\PROGRA~1\MICROS~2\2010\SYNCHR~1\EXTENS~2\O365Li~1

Schema.ps1

My Schema is based around the core Office365 Licenses function. You’ll need to create a number of corresponding attributes in the Metaverse Schema on the Person ObjectType to flow the attributes into. You will also need to create a new ObjectType in the Metaverse for the O365 Licenses. I named mine LicensePlans. Use the Schema info below for the attributes that will be imported and the attribute object types to make sure what you create in the Metaverse aligns, so you can import the values. Note the attributes that are multi-valued.
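The original Schema.ps1 isn't reproduced here, but the Granfeldt PSMA convention is a PSCustomObject whose NoteProperty names encode "attributeName|attributeType". A cut-down, illustrative sketch follows; the attribute names are examples only, so align them with the licensing data you actually import, and refer to Søren's documentation for the multi-valued attribute convention:

$obj = New-Object -TypeName PSObject
$obj | Add-Member -Type NoteProperty -Name "Anchor-UserPrincipalName|String" -Value "user@tenant.onmicrosoft.com"
$obj | Add-Member -Type NoteProperty -Name "objectClass|String" -Value "user"
$obj | Add-Member -Type NoteProperty -Name "DisplayName|String" -Value "Display Name"
$obj | Add-Member -Type NoteProperty -Name "LicensesToAdd|String" -Value "TENANT:ENTERPRISEPACK"
$obj | Add-Member -Type NoteProperty -Name "LicensesToRemove|String" -Value "TENANT:ENTERPRISEPACK"
$obj

# Second object type for the tenant license plans (LicensePlans in the Metaverse)
$plan = New-Object -TypeName PSObject
$plan | Add-Member -Type NoteProperty -Name "Anchor-AccountSkuId|String" -Value "TENANT:ENTERPRISEPACK"
$plan | Add-Member -Type NoteProperty -Name "objectClass|String" -Value "LicensePlans"
$plan | Add-Member -Type NoteProperty -Name "ActiveUnits|String" -Value "100"
$plan | Add-Member -Type NoteProperty -Name "ConsumedUnits|String" -Value "90"
$plan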

Import.ps1

I'm not going to document the logic which the Import.ps1 implements here, as my post Enumerating all Users/Groups/Contacts in an Azure tenant using PowerShell and the Azure Graph API 'odata.nextLink' paging function goes into all the details.
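Purely as a hedged illustration of the kind of data being gathered (using the MSOnline module interactively rather than the Graph API paging approach the post above uses):

Connect-MsolService
# Tenant-level license plans: purchased vs consumed units
Get-MsolAccountSku | Select-Object AccountSkuId, ActiveUnits, ConsumedUnits
# Per-user license assignments
Get-MsolUser -All | Select-Object UserPrincipalName, @{n='Licenses';e={$_.Licenses.AccountSkuId -join ';'}}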

Password Script (password.ps1)

Empty as not implemented

Export.ps1

Empty as not implemented

Management Agent Configuration

As per the tips above, the format for the script paths must be without spaces etc. I’m using 8.3 format and I’m using an Office 365 account to connect to Office365 and import the user and license data.

As per the Schema script earlier in this post, I'm bringing in user licensing metadata as well as the Office365 Tenant Licenses info.

The attributes to bring through are aligned with what is specified in the Schema file.

Flow the attributes through to the attributes I created in the Metaverse on the Person ObjectType and to the new LicensePlans ObjectType.

Wiring it up

To finish it up you’ll need to do the usual tasks of creating run profiles, staging the connector space from Office365 and importing into the Metaverse.

Enjoy.

Follow Darren on Twitter @darrenjrobinson

Debugging an Office 365 ADFS/SSO issue when accessing Office Store in browser

We recently came across an issue with a customer where they had configured a standard SSO experience with Office 365 using ADFS and it was working perfectly except for a specific use case. When a user accesses the Office Store via the Office 365 portal (e.g. portal.office.com/store) they get into an endless SSO login loop. Specifically, they would see the following:

  1. Connection to Portal.Office.com
  2. Redirection to login.microsoftonline.com
  3. Redirection to adfs.customerdomain.com (automatically signed in because of WIA SSO)
  4. Redirection to login.microsoftonline.com
  5. Redirection to Portal.Office.com/store page but loads partially and then redirection to login.microsoftonline.com
  6. Redirection to adfs.customerdomain.com (automatically signed in because of WIA SSO)
  7. Rinse and repeat steps 4-6 ad nauseum

Normally, steps 1-4 are expected, because what is happening here, in layman's terms, is:

  1. Portal.office.com provides a response to the user’s browser saying “please provide sign in credentials, and do so via redirecting to this url”
  2. login.microsoftonline.com provides a response to the user's browser saying "we know you are trying to access @customerdomain.com resources, which is federated, so please connect to adfs.customerdomain.com and get a valid auth token to present to us"
  3. User connects to adfs.customerdomain.com, and because it's in its trusted sites list, and trusted sites is configured to perform Windows Integrated Auth (WIA), the user's browser uses the computer's cached Kerberos/NTLM auth token to sign into ADFS. ADFS responds with a valid SAML token which the user can present to Azure AD.
  4. User connects back to login.microsoftonline.com and presents the auth token.  From there it is validated and an auth token browser cookie is created.
  5. At this point, the user would normally then connect to portal.office.com and access the relevant resources.

Now this was certainly the case for the majority of services in Office 365, including the main portal, Exchange Online, SharePoint etc. It was just the Office Store that was the problem, and bizarrely it was doing a partial load and then getting into the loop.

The resolution to the problem was discovered by doing a Fiddler trace of the sign in traffic.

First, we confirmed the normal ADFS SSO components were working (highlighted in red)

  1. Connection to login.microsoftonline.com
  2. Redirection to our ADFS servers (sts.{domain}.com) against the WIA path
  3. Connection back to login.microsoftonline.com with the SAML token response
  4. Subsequent connection to portal.office.com/store which is what we’re trying to access

ADFS-SSO-Process

The cause of the looping however can be seen further down in the fiddler trace, shown here:

Store-Office-NoAuth

From this we can see an unexpected connection to a different domain URL, store.office.com, and looking at it we see an authentication error: no auth cookies are being presented to it.

A quick inventory of the configuration on the client gave us the answer as to why. While the customer had followed the common practice of including the key Office 365 URLs in their trusted sites (office365.com, outlook.com, sharepoint.com etc.), they did not include store.office.com. This in itself is not specifically a problem, BUT they also had a 'mixed' mode set up where their 'Internet Zone' was set to run in Protected Mode, while their 'Trusted Sites' and 'Intranet Zone' were configured to have Protected Mode turned off.

For a bit of background on what IE Protected Mode is, refer to this article, but the short version is that Protected Mode is a security feature in IE which ensures that sites run in an 'isolated instance'. An effect of this isolation is that sites in Protected Mode cannot access the regular cookie cache.

As such, there was a ‘zone mismatch’ between the two, and what effectively was happening was:

  1. ADFS sign in was working as expected, and the user was successfully accessing portal.office.com resources and creating an auth cookie
  2. Portal.office.com/store however was actually loading content that was hosted under the site store.office.com
  3. store.office.com needed authentication credentials in order to correctly serve the content.  But because it was not included in trusted sites, that content request was running in ‘protected mode’.
  4. Because of the isolation, it couldn’t see the valid auth token which was stored in the regular cookie cache, and so triggers an auth sign in request to login.microsoftonline.com to get one.  And thus begins the endless authentication cycle.

The fix was simply adding *.office.com into the trusted sites zone to ensure that it did not execute in Protected Mode. Once we did, we could see that when a connection to store.office.com is made, the appropriate auth token is presented and the page loads as expected:

Store-Office-Auth

Now, I'm not personally aware of any official Microsoft guidance in terms of what should go into trusted sites for Office 365 functionality, but generally at Kloud we recommend the following to avoid SSO issues (a scripted example of adding one of these entries follows the list):

  • *.sharepoint.com
  • *.office.com
  • *.microsoftonline.com
  • *.lync.com
  • *.outlook.com
  • URL of ADFS Server (e.g. sts.customerdomain.com)
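If you want to script one of these zone entries rather than rely on GPO (the Site to Zone Assignment List policy is the usual enterprise mechanism), IE zone assignments live under the ZoneMap registry key, where a value of 2 means Trusted Sites. A hedged per-user example for the *.office.com entry:

$domainKey = 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\Domains\office.com'
New-Item -Path $domainKey -Force | Out-Null
# '*' applies the zone to all protocols for office.com and its subdomains; 2 = Trusted Sites
Set-ItemProperty -Path $domainKey -Name '*' -Value 2 -Type DWord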

Hopefully that gives you a bit of insight around the thinking, components and tools needed in debugging SSO issues with ADFS and Office 365!

WPAD and Proxy Auth Cause Exchange HCW to Fail

A recent conversation with a colleague reminded me of an issue I’ve faced a number of times (and forgotten to blog about) when running the Exchange Hybrid Configuration Wizard (HCW) on Exchange 2010 or 2013 in an environment where Web Proxy Autodiscovery Protocol (WPAD) is used.

The Problem

The most common scenario where I’ve seen this come into play is along the lines of this:

  1. WPAD is used to distribute Proxy.PAC to client machines
  2. Customer permits direct connection from Exchange servers to Internet
  3. From an elevated command prompt, run “netsh winhttp reset proxy” to ensure a direct connection
  4. Change Internet Options settings from “Automatically detect settings” to “Disabled”
  5. Browse to a site restricted by the proxy to confirm proxy bypass is working
  6. Can connect to Exchange Online using Remote PowerShell
  7. Run the HCW but it fails with the following error in the logs:
    ERROR : System.Management.Automation.RemoteException: Federation information could not be received from the external organization.
    ERROR : Subtask NeedsConfiguration execution failed: Configure Organization Relationship
    Exchange was unable to communicate with the autodiscover endpoint for your Office 365 tenant. This is typically an outbound http access configuration issue. If you are using a proxy server for outbound communication, verify that Exchange is configured to use it via the “Get-ExchangeServer –InternetWebProxy” cmdlet. Use the “Set-ExchangeServer –InternetWebProxy” cmdlet to configure if needed.

At this point, you’d probably be scratching your head wondering where it’s going wrong. I certainly was. It can be difficult to troubleshoot issues with the HCW as it is over 50 configuration steps all wrapped up in a ‘friendly’ GUI. The first error gives a good indication of where to look: Federation information could not be received from the external organization.

Check whether federation information can be retrieved from Office 365 by running the cmdlet from your Exchange server:

Get-FederationInformation -DomainName YourTenant.mail.onmicrosoft.com -BypassAdditionalDomainValidation:$true -Verbose

In my case it failed and I found this little gem in the verbose output:

Exception=The remote server returned an error: (407) Proxy Authentication Required

So where is it getting this proxy configuration from? As it turns out, the HCW (and the individual cmdlets) actually run in the user context of the Local System account, rather than the current user. The default configuration for Internet Options is for "Automatically Detect Settings" to be enabled on a per-user basis, which also applies to the Local System account. This default setting, combined with WPAD and the client PAC file, is directing the HCW out through the proxy server, which is always going to fail given that Local System is not a credential that the proxy server will validate.
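Before changing anything, it is worth confirming what proxy configuration WinHTTP and Exchange think they have; both checks below follow directly from the error guidance quoted above (run from an elevated Exchange Management Shell; the server name and proxy URL are placeholders):

# WinHTTP proxy as seen by the machine (should report direct access after the netsh reset)
netsh winhttp show proxy
# Proxy explicitly configured for Exchange, if any
Get-ExchangeServer -Identity EXCH01 | Format-List Name, InternetWebProxy
# If Exchange genuinely needs a proxy, set it explicitly rather than relying on WPAD
# Set-ExchangeServer -Identity EXCH01 -InternetWebProxy http://proxy.customerdomain.com:8080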

The Workarounds

There are a few ways to address this, although the first two may be harder to implement if it’s a larger organisation:

  • Permit the server to use the proxy without authentication
  • Modify the PAC file settings being distributed by WPAD

Alternatively it is possible to change the default Internet Options settings within the Local System profile:

  1. In your own profile, configure the default Internet Option setting from “Automatically Detect Settings” to “Disabled” and export the registry key from HKCU. Import this key into the “Local System” (HKEY_USERS\.DEFAULT) hive. Always remember to back up your registry before making changes.
    HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings\Connections
  2. Use PsExec to launch Internet Explorer as “Local System”, disable the setting and save the changes
    psexec.exe -i -s -d "C:\Program Files\Internet Explorer\iexplore.exe"
Picture 7

Moving SharePoint Online workflow task metadata into the data warehouse using Nintex Flows and custom Web API

This post suggests the idea of automatically copying SharePoint Online (SPO) workflow tasks' metadata into an external data warehouse. In this scenario, workflow tasks become the subject of another workflow that performs the automatic copying of task data into the external database, using a custom Web API endpoint as the interface to that database. Commonly, the requirement to move workflow task data elsewhere arises from limitations of SPO. In particular, SPO throttles requests for access to workflow data, making it virtually impossible to create a meaningful workflow reporting system with large amounts of workflow tasks. The easiest approach to solve the problem is to use a Nintex workflow to "listen" to changes in the workflow tasks, then request the task data via the SPO REST API and, finally, send the data to the external data warehouse Web API endpoint.

Some SPO solutions require the creation of a reporting system that includes workflow tasks' metadata. For example, it could be a report about documents with the statuses of workflows linked to these documents. Using a conventional approach (e.g. the SPO REST API) to obtain the data seems unfeasible, as SPO throttles requests for workflow data. In fact, the throttling is so tight that generation of reports with more than a hundred records is unrealistic. In addition to that, many companies would like to create Business Intelligence (BI) systems analysing workflow task data. Having a data warehouse with all the workflow tasks' metadata can assist in this job very well.

A few prerequisites must be met to be able to implement the solution. You must know the basics of Nintex workflow creation and be able to create a backend solution with the database of your choice and a custom Web API endpoint that allows you to write the data model to that database. In this post we used Visual Studio 2015 and created an ordinary REST Web API 2.0 project with an Azure SQL Database.

The solution will involve following steps:

  1. Get sample of your workflow task metadata and create your data model.
  2. Create a Web API capable of writing data model to the database.
  3. Expose one POST endpoint method of the Web REST API that accepts JSON model of the workflow task metadata.
  4. Create Nintex workflow in the SPO list storing your workflow tasks.
  5. Design Nintex workflow: call SPO REST API to get JSON metadata and pass this JSON object to your Web API hook.

Below is detailed description of each step.

We are looking here to export the metadata of a workflow task. We need to find the SPO list that holds all your workflow tasks and navigate there. You will need the name of the list to be able to start calling the SPO REST API. It is better to use a REST tool to perform Web API requests. Many people use Fiddler or Postman (Chrome extension) for this job. Request the SPO REST API to get a sample of the JSON data that you want to put into your database. The request will look similar to this example:

Picture 1

The key element in this request is getbytitle("list name"), where "list name" is the SPO list name of your workflow tasks. Please remember to add the header "Accept" with the value "application/json". It tells SPO to return JSON instead of HTML. As a result, you will get one JSON object that contains the JSON metadata of Task 1. This JSON object is an example of the data that you will need to put into your database. Not all fields are required in the data warehouse. We need to create a data model containing only the fields of our choice. For example, it can look like this one in C#, with all properties based on the model returned earlier:

The next step is to create a Web API that exposes a single method that accepts our model as a parameter from the body of the request. You can choose any REST Web API design. We created a simple Web API 2.0 in Visual Studio 2015 using the general wizard for an MVC, Web API 2.0 project. Then, we added an empty controller and filled it with code that works with Entity Framework to write the data model to the database. We also created a code-first EF database context that works with just the one entity described above.

The code of the controller:

The code of the database context for Entity Framework

Once you have created the Web API, you should be able to call the Web API method like this:
https://yoursite.azurewebsites.net/api/EmployeeFileReviewTasksWebHook

You will need to put your model data in the request body as a JSON object. Also don't forget to include the proper headers for your authentication and the header "Accept" with "application/json", and set the type of the request to POST. Once you've tested the method, you can move on to the next steps. For example, below is how we tested it in our project.

Picture 4
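If you prefer scripting the test over a GUI REST client, an equivalent hedged PowerShell call would look like the following; the endpoint URL comes from the step above, the field names in the body are placeholders for whatever your data model defines, and any authentication headers your Web API requires would be added to the Headers hashtable:

$body = @{
    TaskId = 1
    Title  = 'Employee file review'
    Status = 'Completed'
} | ConvertTo-Json

Invoke-RestMethod -Uri 'https://yoursite.azurewebsites.net/api/EmployeeFileReviewTasksWebHook' `
    -Method Post `
    -ContentType 'application/json' `
    -Headers @{ Accept = 'application/json' } `
    -Body $body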

Next, we will create a new Nintex Workflow in the SPO list with our workflow tasks. It is all straightforward. Click Nintex Workflows, then create a new workflow and start designing it.

Picture 5

Picture 6

Once you've created a new workflow, click on the Workflow Settings button. In the displayed form please set the parameters as shown on the screenshot below. We set "Start when items are created" and "Start when items are modified". In this scenario, any modification of our Workflow task will start this workflow automatically. This also covers cases where Workflow tasks have been modified by other workflows.

Picture 7.1

Create 5 steps in this workflow as shown on the following screenshots, labelled 1 to 5. Please keep in mind that blocks 3 and 5 are there to assist in debugging only and are not required in production use.

Picture 7

Step 1. Create a Dictionary variable that contains SPO REST API request headers. You can add any required headers, including Authentication headers. It is essential here to include Accept header with “application/json” in it to tell SPO that we want JSON in responses. We set Output variable to SPListRequestHeaders so we can use it later.

Picture 8

Step 2. Call HTTP Web Service. We call the SPO REST API here. It is important to make sure that the getbytitle parameter is correctly set to your Workflow Tasks list, as we discussed before. The list of fields that we want returned is defined in the "$select=…" parameter of the OData request. We need only the fields that are included in our data model. Other settings are straightforward: we supply our Request Headers created in Step 1 and create two more variables for the response. SPListResponseContent will get the resulting JSON object that we are going to need at Step 4.

Picture 9
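For reference, the resulting request URL in this action ends up looking something like the line below; the list title and the fields in $select are illustrative, so substitute the ones from your own data model:

https://yourtenant.sharepoint.com/sites/yoursite/_api/web/lists/getbytitle('Workflow Tasks')/items?$select=Id,Title,Status,Modified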

Step 3 is optional. We’ve added it to debug our workflow. It will send an email with the contents of our JSON response from the previous step. It will show us what was returned by SPO REST API.

Picture 10

Step 4. Here we call our custom API endpoint, passing the JSON object model that we got from the SPO REST API. We supply the full URL of our Web Hook, set the method to POST and in the Request body we inject SPListResponseContent from Step 2. We're also capturing the response code to display later in the workflow history.

Picture 11

Step 5 is also optional. It writes a log message with the response code that we have received from our API Endpoint.

Picture 12

Once all five steps are completed, we publish this Nintex workflow. Now we are ready for testing.

To test the system, open the list of our Workflow tasks. Click on any task, modify any of the task's properties and save the task. This will initiate our workflow automatically. You can monitor workflow execution in the workflow history. Once the workflow is completed, you should be able to see messages as displayed below. Notice that our workflow has also written the Web API response code at the end.

Picture 13

To make sure that everything went well, open your database and check the records updated by your Web API. After every Workflow Task modification you will see corresponding changes in the database. For example:

Picture 14

 

In this post we have shown that automatic copying of Workflow Tasks metadata into your data warehouse can be done with a simple Nintex Workflow setup performing only two REST Web API requests. The solution is quite flexible, as you can select the required properties from the SPO list and export them into the data warehouse. We can easily add more tables in case there is more than one workflow tasks list. This solution enables the creation of a powerful reporting system using the data warehouse and also allows you to employ the BI data analytics tool of your choice.

Implementing Microsoft (Office365) Peering for ExpressRoute

Notes from the Field

I have recently been involved with an implementation of Microsoft Peering for ExpressRoute with a large Australian customer and thought I would share the experience with you.

Firstly, and secondly, make sure that you read the specific guidance from Microsoft regarding prerequisites for Microsoft Peering. (See below)

Configure Microsoft peering for the circuit

Make sure that you have the following information before you proceed.

  • A /30 subnet for the primary link. This must be a valid public IPv4 prefix owned by you and registered in an RIR / IRR.
  • A /30 subnet for the secondary link. This must be a valid public IPv4 prefix owned by you and registered in an RIR / IRR.
  • A valid VLAN ID to establish this peering on. Ensure that no other peering in the circuit uses the same VLAN ID.
  • AS number for peering. You can use both 2-byte and 4-byte AS numbers.
  • Advertised prefixes: You must provide a list of all prefixes you plan to advertise over the BGP session. Only public IP address prefixes are accepted. You can send a comma separated list if you plan to send a set of prefixes. These prefixes must be registered to you in an RIR / IRR.
  • Customer ASN: If you are advertising prefixes that are not registered to the peering AS number, you can specify the AS number to which they are registered. This is optional.
  • Routing Registry Name: You can specify the RIR / IRR against which the AS number and prefixes are registered.
  • An MD5 hash, if you choose to use one. This is optional.

https://azure.microsoft.com/en-us/documentation/articles/expressroute-howto-routing-classic/#microsoft-peering

Let me address the items in bold above.

AS Number for Peering: A public AS number is required in order to implement Microsoft Peering. The customer I was working with didn't own a public AS and, as a result, was required to apply to APNIC for one. APNIC is the registrar for AS numbers in the Asia Pacific region.

The prerequisites for applying for a public AS number in Australia are as follows, and have been taken from the APNIC site.

Your organization is eligible for an AS Number assignment if:

  • it is currently multihomed, or
  • it holds previously-allocated provider independent address space and intends to multihome in the future.

An organization will also be eligible if it can demonstrate that it will meet the above criteria upon receiving an AS Number (or within a reasonably short time afterwards).

I have heard on the grapevine from colleagues of mine that this process can be quite time-consuming, particularly when the above requirements are not clearly met; however, this wasn't the case for the customer in question, and the AS number was approved in a timely manner.

Advertised prefixes: This is the list of public IP prefixes from which Azure will receive requests.

Important Note: Microsoft will run a validation process once the dedicated circuit for Microsoft Peering is established. I attempted to find detailed information regarding the validation process, however I was not able to obtain it. What I was able to find out is that a validation check is run against the public AS number to verify that the customer who owns the tenant also owns the AS. A similar validation check is completed against the AdvertisedPrefixes.

In our case, even though the customer owned both the AS, and the AdvertisedPrefixes, automatic validation failed.

In order to fix this issue, we opened a support call with Azure support who were able to perform a manual validation within about 10 mins of being in contact.

Once the virtual circuit was up and the AdvertisedPrefixes value was showing "configured", we moved on to configuring the virtual circuit in the Equinix Cloud Exchange.

The customer's ISP configured the virtual circuits correctly; however, we were not able to successfully establish BGP peering. Another support call was created with all vendors involved and, after some troubleshooting, it was discovered that the Primary and Secondary peer subnets had been configured in reverse, which was causing a MAC address mismatch with Azure.

Fortunately, the solution to this issue was to simply recreate the virtual circuit via the cloud exchange portal using the same service key as was used originally. Once this was completed, BGP peering was successfully established.

Premium Sku

Although not documented anywhere (that I could find), Microsoft Peering requires a Premium SKU type in order to be enabled.

If you attempt to enable Microsoft Peering on a virtual circuit with a "standard" SKU type, you will be promptly rewarded with a PowerShell error telling you exactly this.

Quick Shoutout: Thanks to our Microsoft Technology Strategist "Scott Turner" for pointing this out to me ahead of time.

Implementation Process

Create Virtual Circuit

#Import Azure Expressroute modules
Import-Module 'C:\Program Files (x86)\Microsoft SDKs\Azure\PowerShell\ServiceManagement\Azure\Azure.psd1'
Import-Module 'C:\Program Files (x86)\Microsoft SDKs\Azure\PowerShell\ServiceManagement\Azure\ExpressRoute\ExpressRoute.psd1'

#Creating a new circuit for Sydney O365
$Bandwidth = 500
$CircuitName = "EquinixO365Sydney"
$ServiceProvider = "Equinix"
$Location = "Sydney"
New-AzureDedicatedCircuit -CircuitName $CircuitName `
-ServiceProviderName $ServiceProvider `
-Bandwidth $Bandwidth -Location $Location -Sku Premium

#Creating a new circuit for Melbourne O365
$Bandwidth = 500
$CircuitName = "EquinixO365Melbourne"
$ServiceProvider = "Equinix"
$Location = "Melbourne"
New-AzureDedicatedCircuit -CircuitName $CircuitName `
-ServiceProviderName $ServiceProvider `
-Bandwidth $Bandwidth -Location $Location -Sku Premium

Get your Service Keys

Get-AzureDedicatedCircuit

Enable BGP Peering

Set-AzureBGPPeering -AccessType Microsoft `
-ServiceKey "SYD Service Key" -PrimaryPeerSubnet "x.x.x.x/30" `
-SecondaryPeerSubnet "x.x.x.x/30" -VlanId "Enter Vlan Id" `
-PeerAsn "Enter Public ASN" `
-AdvertisedPublicPrefixes "Enter Advertised Prefixes" -RoutingRegistryName "APNIC"

Set-AzureBGPPeering -AccessType Microsoft `
-ServiceKey "MEL Service Key" -PrimaryPeerSubnet "x.x.x.x/30" `
-SecondaryPeerSubnet "x.x.x.x/30" -VlanId "Enter Vlan Id" `
-PeerAsn "Enter Public ASN" `
-AdvertisedPublicPrefixes "Enter Advertised Prefixes" -RoutingRegistryName "APNIC"
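Once the peering has been created, you can check its state from the same module before moving on to the Cloud Exchange configuration. A hedged check (output properties may vary between module versions):

Get-AzureBGPPeering -AccessType Microsoft -ServiceKey "SYD Service Key"
Get-AzureBGPPeering -AccessType Microsoft -ServiceKey "MEL Service Key"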

Important Note: Advertising default routes

Default routes are permitted only on Azure private peering sessions. In such a case, we will route all traffic from the associated virtual networks to your network. Advertising default routes into private peering will result in the internet path from Azure being blocked. You must rely on your corporate edge to route traffic from and to the internet for services hosted in Azure.

To enable connectivity to other Azure services and infrastructure services, you must make sure one of the following items is in place:

  • Azure public peering is enabled to route traffic to public endpoints
  • You use user defined routing to allow internet connectivity for every subnet requiring Internet connectivity.

Note: Advertising default routes will break Windows and other VM license activation. Follow instructions here to work around this.

https://azure.microsoft.com/en-us/documentation/articles/expressroute-routing/

Do not advertise the default route to Microsoft Peering as this will break things!

I hope this helps some of you down the track, as at the time we completed the implementation there was very little documentation available on the web for Microsoft Peering.

 

Managing SharePoint Online (SPO) User Profiles with FIM/MIM 2016 and the Granfeldt PowerShell MA

Forefront / Microsoft Identity Manager does not come with an out-of-the-box management agent for managing SharePoint Online.

Whilst the DirSync/AADConnect solution will allow you to synchronise attributes from your On Premise Active Directory to AzureAD, SharePoint only leverages a handful of them. It then has its own set of attributes that it leverages. Many are similarly named to the standard Azure AD attributes but with the SPS- prefix.

For example, here is a list of SPO attributes and a couple of references to associated Azure AD attributes;

  • UserProfile_GUID
  • SID
  • SPS-PhoneticFirstName
  • SPS-PhoneticLastName
  • SPS-PhoneticDisplayName
  • SPS-JobTitle
  • SPS-Department
  • AboutMe
  • PersonalSpace
  • PictureURL
  • UserName
  • QuickLinks
  • WebSite
  • PublicSiteRedirect
  • SPS-Dotted-line
  • SPS-Peers
  • SPS-Responsibility
  • SPS-SipAddress
  • SPS-MySiteUpgrade
  • SPS-ProxyAddresses
  • SPS-HireDate
  • SPS-DisplayOrder
  • SPS-ClaimID
  • SPS-ClaimProviderID
  • SPS-ClaimProviderType
  • SPS-SavedAccountName
  • SPS-SavedSID
  • SPS-ResourceSID
  • SPS-ResourceAccountName
  • SPS-ObjectExists
  • SPS-MasterAccountName
  • SPS-PersonalSiteCapabilities
  • SPS-UserPrincipalName
  • SPS-O15FirstRunExperience
  • SPS-PersonalSiteInstantiationState
  • SPS-PersonalSiteFirstCreationTime
  • SPS-PersonalSiteLastCreationTime
  • SPS-PersonalSiteNumberOfRetries
  • SPS-PersonalSiteFirstCreationError
  • SPS-DistinguishedName
  • SPS-SourceObjectDN
  • SPS-FeedIdentifier
  • SPS-Location
  • Certifications
  • SPS-Skills
  • SPS-PastProjects
  • SPS-School
  • SPS-Birthday
  • SPS-Interests
  • SPS-StatusNotes
  • SPS-HashTags
  • SPS-PictureTimestamp
  • SPS-PicturePlaceholderState
  • SPS-PrivacyPeople
  • SPS-PrivacyActivity
  • SPS-PictureExchangeSyncState
  • SPS-TimeZone
  • SPS-EmailOptin
  • OfficeGraphEnabled
  • SPS-UserType
  • SPS-HideFromAddressLists
  • SPS-RecipientTypeDetails
  • DelveFlags
  • msOnline-ObjectId
  • SPS-PointPublishingUrl
  • SPS-TenantInstanceId

My customer has AADConnect in place that is synchronising their On Premise AD to Office 365. They also have a MIM 2016 instance that is managing user provisioning and lifecycle management. I’ll be using that MIM 2016 instance to manage SPO User Profile Attributes.

The remainder of this blog post describes the PS MA I’ve developed to manage the SPO attributes to allow their SPO Online Forms etc to leverage business and organisation user metadata.

Using the Granfeldt PowerShell Management Agent to manage SharePoint Online User Profiles

In this blog post I detail how you can synchronise user attributes from your On Premise Active Directory to an associated user's SharePoint Online user profile utilising Søren Granfeldt's extremely versatile PowerShell Management Agent. Provisioning and licensing of users for SPO is performed in parallel by the DirSync/AADConnect solution. This solution just provides attribute synchronisation to SPO User Profile attributes.

Overview

In this solution I'm managing the attributes that are pertinent to the customer. If you need an additional attribute or you have created custom attributes, it is easy enough to extend.

Getting Started with the Granfeldt PowerShell Management Agent

First up, you can get it from here. Søren’s documentation is pretty good but does assume you have a working knowledge of FIM/MIM and this blog post is no different.

Three items I had to work out that I’ll save you the pain of are;

  • You must have a Password.ps1 file. Even though we’re not doing password management on this MA, the PS MA configuration requires a file for this field. The .ps1 doesn’t need to have any logic/script inside it. It just needs to be present
  • The credentials you give the MA to run this MA are the credentials for the account that has permissions to manage SharePoint Online User Profiles. More detail on that further below.
  • The path to the scripts in the PS MA Config must not contain spaces and be in old-skool 8.3 format. I’ve chosen to store my scripts in an appropriately named subdirectory under the MIM Extensions directory. Tip: from a command shell use dir /x to get the 8.3 directory format name. Mine looks like C:\PROGRA~1\MICROS~4\2010\SYNCHR~1\EXTENS~2\SPO

Managing SPO User Profiles

In order to use this working example there are a couple of items to note;

  • At the top of the Import and Export scripts you’ll need to enter your SPO Tenant Admin URL. If your tenant URL is ‘CORP.sharepoint.com’ then at the top of the scripts enter ‘CORP-admin.sharepoint.com’. The Import script will work with corp.sharepoint.com but the export won’t.
  • Give the account you’re using to connect to SPO via your MIM permissions to manage/update SPO User Profiles

Schema.ps1

As mentioned above I’m only syncing attributes pertinent to my customers’ requirements. That said I’ve selected a number of attributes that are potentials for future requirements.

Password Script (password.ps1)

Empty as described above

Import.ps1

A key part of the import script is connecting to SPO and accessing the full User Profile. In order to do this, you will need to install the SharePoint Online Client Components SDK. It’s available for download here https://www.microsoft.com/en-us/download/details.aspx?id=42038

The import script then imports two libraries that give us access to the SPO User Profiles.

Import-Module 'C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\16\ISAPI\Microsoft.SharePoint.Client.UserProfiles.dll'

Import-Module 'C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\16\ISAPI\Microsoft.SharePoint.Client.dll'

Import values for attributes defined in the schema.

Export.ps1

The business part of the MA. Basically enough to take attribute value changes from the MV to the SPO MA and export them to SPO. In the example script below I’m only exporting three attributes. Add as many as you need.
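The export script itself isn't reproduced here, but the core of writing a single profile property through the CSOM libraries imported earlier looks something like the sketch below. The tenant URL, service account, target user and property value are placeholders, and in the real Export.ps1 this sits inside the PSMA export object loop:

Import-Module 'C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\16\ISAPI\Microsoft.SharePoint.Client.dll'
Import-Module 'C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\16\ISAPI\Microsoft.SharePoint.Client.UserProfiles.dll'

$adminUrl = 'https://CORP-admin.sharepoint.com'          # your SPO tenant admin URL, as noted above
$username = 'svc-mim-spo@CORP.onmicrosoft.com'           # placeholder account with permission to manage SPO User Profiles
$password = Read-Host -AsSecureString 'SPO service account password'

$ctx = New-Object Microsoft.SharePoint.Client.ClientContext($adminUrl)
$ctx.Credentials = New-Object Microsoft.SharePoint.Client.SharePointOnlineCredentials($username, $password)

$peopleManager = New-Object Microsoft.SharePoint.Client.UserProfiles.PeopleManager($ctx)
# Account names must be passed in the SPO claims format
$account = 'i:0#.f|membership|jane.citizen@CORP.com'     # placeholder user
$peopleManager.SetSingleValueProfileProperty($account, 'SPS-Location', 'Sydney')
$ctx.ExecuteQuery()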

Wiring it all together

In order to wire the functionality together, I’m doing it just using the Sync Engine MA configuration as we’re relying on AADConnect to create the users in Office365, and we’re just flowing through attribute values.

Basically, create the PS MA, create your MA Run Profiles, import users and attributes from the PS MA, validate your joins and Export to update SPO attributes as per your flow rules.

Management Agent Configuration

As per the tips above, the format for the script paths must be without spaces etc. I’m using 8.3 format and I’m using the Office 365 account we gave permissions to manage user profiles in SPO earlier.

Password script must be specified but as we’re not doing password management it’s empty as detailed above.

If your schema.ps1 file is formatted correctly you can select your attributes.

I have a few join rules. In the pre-prod environment though I’m joining on WorkEmail => mail.

My import flow is just bringing back in users' mobile numbers, which users are able to modify in SPO. I'm exporting Title, Location and Department to SPO.

Summary

Using the Granfeldt PowerShell MA it was very easy to manage user SharePoint Online User Profile attributes.

Follow Darren on Twitter @darrenjrobinson


Feb 2016 Azure AD Connect Upgrade Fails – IndexOutOfRangeException resolution

I’ve been doing some work for a client recently who decided to upgrade their Azure AD Connect appliance to the latest February release. This was a prerequisite task for future work to follow. As an aside, it’s always nice to run the current version of the sync client. Microsoft regularly update the client to provide new features and improvements. A key driver for this client in particular was the fact that the new client (1.1.105.0 – Released 16/2/2016) will allow you to synchronise every 30 minutes, which is a welcome change from the previous 3-hour sync cycles.

The detailed Azure AD Connect version release history can be found – here

So it was decided… we would upgrade.

All sounds fairly painless really, yeah?

We completed the usual preparation tasks, such as forcing a sync, exporting the connector space, taking backups and snapshots etc. All of this completed as expected. We were ready to kick off the installation process of the new client (shown below.)

Shortly after the client upgrade process completed, I was greeted with this wonderful window.

The log file tells me this:

[11:36:13.912] [ 1] [INFO ] Found existing persisted state context.

[11:36:13.912] [ 1] [ERROR] Caught an exception while creating the initial page set on the root page.

Exception Data (Raw): System.IndexOutOfRangeException: Index was outside the bounds of the array.

at Microsoft.Online.Deployment.Types.Utility.AdDomainInfoEncoder.ConvertBack(Object value, Type targetType, Object parameter, CultureInfo culture)

at Microsoft.Online.Deployment.Types.PersistedState.PersistedStateElement.ToContext(PersistedStateContainer state, PropertyInfo propertyInfo, PersistedElementAttribute attr, Object& value)

at Microsoft.Online.Deployment.Types.Context.LocalContextBase.LoadFromState(PersistedStateContainer state, IPowerShell powerShell)

at Microsoft.Online.Deployment.OneADWizard.UI.WizardPages.RootPageViewModel.GetInitialPagesCore()

at Microsoft.Online.Deployment.OneADWizard.UI.WizardPages.RootPageViewModel.GetInitialPages()

[11:36:41.346] [ 1] [INFO ] Opened log file at path C:\Users\xxxxxx\AppData\Local\AADConnect\trace-20160218-113613.log

To resolve this issue, we need to action the following:

  1. Make sure the AAD Connect wizard is not running.

  2. Open this file: %ProgramData%\AADConnect\PersistedState.xml

  3. Take a backup (copy and rename PersistedState.xml to PersistedState.old)

  4. Proceed to edit the xml file using your favourite text editor.

  5. Find a section that looks like this:

<PersistedStateElement>
<Key>IAdfsContext.TargetForestDomainName</Key>
<Value>domain.com</Value>
</PersistedStateElement>

We need to update the <Value> element to look like this:

<PersistedStateElement>
<Key>IAdfsContext.TargetForestDomainName</Key>
<Value>domain.com,domain.com</Value>
</PersistedStateElement>

The domain.com value must be populated twice, separated by a comma.

You can now save the file and restart the AAD Connect wizard. It will now launch as expected.

Following on from this I was able to complete the upgrade as per normal.

I can also now see that the sync cycle has been reduced to 30 minutes post-upgrade (shown below).
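If you'd rather verify this from PowerShell than the GUI, the scheduler cmdlet introduced with the 1.1.x builds reports the effective interval. A quick hedged check:

Get-ADSyncScheduler
# AllowedSyncCycleInterval and CurrentlyEffectiveSyncCycleInterval should now both read 00:30:00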

It appears that Microsoft has logged an official bug fix for this; however, the above should get you out of trouble in the short term.

Happy upgrading!