Where’s the source!

In this post I will talk about data (aka the source)! In IAM there's really one simple concept that is often misunderstood or ignored: the data going out of any IAM solution is only as good as the data going in. This may seem simple enough, but if not enough attention is paid to the data source and data quality, the results are going to be unfavourable at best and catastrophic at worst.
With most IAM solutions, data is going to come from multiple sources. Most IAM professionals will agree that the best place to source the majority of your user data is the HR system. Why? Simply put, it's where all the important information about an individual is stored and, for the most part, kept up to date. For example, if you change positions within the same company, the HR system will be updated to reflect the change to your job title, as well as any direct report changes that may come as a result.
I also said that data will normally come from multiple sources. A typical example: temporary and contract staff are generally not managed within the central HR system, because the HR team simply doesn't deal with contractors. So where do they come from, and how are they managed? For smaller organisations this is usually done manually in AD with no real governance in place. For larger organisations this is less than ideal; it can be a nightmare for the IT team to manage and creates quite a large security risk to the business, so a primary data source for contractors becomes necessary. What that source is, is entirely up to the business and what works for them. I have seen a standard SQL web application being used to populate a database, I've seen ITSM tools being used, and less commonly the IAM system itself is used to manage contractor accounts (within MIM 2016 this is done through the MIM Portal).
There are many other examples of corporate applications that can be used to augment the identity information of your user data, such as email, phone systems and, to a lesser extent, physical security systems (building access and datacentre access), but we will keep it simple for the purpose of this post. The following diagram helps illustrate the data flow for the different user types.

IAM Diagram

What you will notice from the diagram above is that even though an organisation will have data coming from multiple systems, it all comes together and is stored in a central repository or "Identity Vault". This keeps an accurate record of the information coming from each source and compiles the user's complete identity profile. From this we can manage what information flows to downstream systems when provisioning accounts, and we can ensure that if any information changes, it is updated in the user's profile in every attached system managed through the enterprise IAM services.
In my next post I will go into the finer details of the central repository, or the "Identity Vault".

So in summary, the source of data is very important in defining an IAM solution; it ensures you have the right data being distributed to any managed downstream systems, regardless of what type of user base you have. In my next post we will dig into the central repository, or the Identity Vault. It will go into detail around how we can set precedence on data from specific systems, so that if the data coming from different sources disagrees, only the highest-precedence value is applied. We will also discuss how we augment the data sets to ensure we collect only the information necessary to manage that user and the applications they use within your business.

As per usual, if you have any comments or questions on this post or any of my previous posts then please feel free to comment or reach out to me directly.

Decommissioning Exchange 2016 Server

I have created many labs over the years and never really spent the time to decommission my environment; I usually just blow it away and start again.

So I finally decided to go through the process and decommission my Exchange 2016 server in my lab environment.

My lab consisted of the following:

  • Domain Controller (Windows Server 2012 R2)
  • AAD Connect Server
  • Exchange 2016 Server/ Office 365 Hybrid
  • Office 365 tenant

Being a lab, I only had one Exchange server, which had the mailbox role configured and was also my hybrid server.

Proceed with the steps below to remove the Exchange Online configuration;

  1. Connect to Exchange Online using PowerShell (https://technet.microsoft.com/en-us/library/jj984289(v=exchg.160).aspx)
  2. Remove the Inbound connector
    Remove-InboundConnector -Identity "Inbound from [Inbound Connector ID]"
  3. Remove the Outbound connector
    Remove-OutboundConnector -Identity "Outbound to [Outbound Connector ID]"
  4. Remove the Office 365 federation
    Remove-OrganizationRelationship "O365 to On-premises - [Org Relationship ID]"
  5. Remove the OAuth configuration
    Remove-IntraOrganizationConnector -Identity "HybridIOC - [Intra Org Connector ID]"

Now connect to your Exchange 2016 server and conduct the following;

  1. Open Exchange Management Shell
  2. Remove the Hybrid Config
    Remove-HybridConfig
  3. Remove the federation configuration
    Remove-OrganizationRelationship "On-premises to O365 - [Org Relationship ID]"
  4. Remove OAuth configuration
    Remove-IntraorganizationConnector -Identity "HybridIOC - [Intra Org Connector ID]"
  5. Remove all mailboxes, mailbox databases, public folders etc. (see the sketch after this list)
  6. Uninstall Exchange (either via PowerShell or programs and features)
  7. Decommission server
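
For step 5, a minimal sketch of the kind of clean-up involved (these are standard Exchange cmdlets; in anything other than a lab you would migrate or export mailbox data first):

# Disconnect any remaining user mailboxes (keeps the AD accounts)
Get-Mailbox -ResultSize Unlimited | Disable-Mailbox -Confirm:$false

# Disconnect the arbitration mailboxes (the last one may need -DisableLastArbitrationMailboxAllowed)
Get-Mailbox -Arbitration | Disable-Mailbox -Arbitration -Confirm:$false

# Remove the mailbox database(s)
Get-MailboxDatabase | Remove-MailboxDatabase -Confirm:$false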

I also confirmed Exchange was removed from Active Directory and I was able to run another installation of Exchange on the same Active Directory with no conflicts or issues.

Exchange 2010 Hybrid Auto Mapping Shared Mailboxes

Migrating shared mailboxes to Office 365 is one of those things that is starting to become easier over time, especially with full access permissions now working cross premises.

One little discovery that I thought I would share is that if you have an Exchange 2010 hybrid configuration, the auto mapping feature will not work cross premises. (Exchange 2013 and above, you are ok and have nothing to worry about).

This means if an on-premises user has access to a shared mailbox that you migrate to Office 365, it will disappear from their Outlook even though they still have full access.

Unless you migrate the shared mailbox and user at the same time, the only option is to manually add the shared mailbox back into Outlook.
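
For reference, auto mapping is tied to how full access is granted: the AutoMapping parameter on Add-MailboxPermission. Once both the user and the shared mailbox are in Exchange Online, removing and re-granting full access with auto mapping enabled restores the behaviour. A minimal sketch (the mailbox and user names are illustrative):

# Re-grant full access with auto mapping once both mailboxes are in Exchange Online
Remove-MailboxPermission -Identity "Shared Mailbox" -User user@domain.com -AccessRights FullAccess -Confirm:$false
Add-MailboxPermission -Identity "Shared Mailbox" -User user@domain.com -AccessRights FullAccess -AutoMapping $true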

Something to keep in mind especially if your user base accesses multiple shared mailboxes at a time.


Try/Catch works in PowerShell ISE and not in PowerShell console

I recently encountered an issue with one of my PowerShell scripts. It was a script to enable litigation hold on all mailboxes in Exchange Online.

I connected to Exchange Online via the usual means below.

$creds = Get-Credential
$session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://outlook.office365.com/powershell-liveid/ -Credential $Creds -Authentication Basic -AllowRedirection
Import-PSSession $session -AllowClobber

I then attempted to execute the following with no success.

try
{
    Set-Mailbox -Identity $user.UserPrincipalName -LitigationHoldEnabled $true -ErrorAction Stop
}
catch
{
    Write-Host "ERROR!" -ForegroundColor Red
}

As a test I removed the “-ErrorAction Stop” parameter and then added the following line to the top of my script.

$ErrorActionPreference = 'Stop'

That’s when it would work in Windows PowerShell ISE but not in Windows PowerShell console.

After many hours of troubleshooting I then discovered it was related to the implicit remote session established when connecting to Exchange Online.

To get my try/catch to work correctly I added the following line to the top of my script and all was well again.

$Global:ErrorActionPreference = 'Stop'
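
Putting it together, a minimal sketch of how the full script ends up looking (assuming the implicit remoting session from above has already been imported; the loop over mailboxes is illustrative):

# Set the preference in the global scope so the implicit remoting proxy functions honour it
$Global:ErrorActionPreference = 'Stop'

foreach ($user in (Get-Mailbox -ResultSize Unlimited))
{
    try
    {
        Set-Mailbox -Identity $user.UserPrincipalName -LitigationHoldEnabled $true
    }
    catch
    {
        Write-Host "ERROR enabling litigation hold for $($user.UserPrincipalName)" -ForegroundColor Red
    }
}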

Send mail to Office 365 via an Exchange Server hosted in Azure

Those of you who have attempted to send mail to Office 365 from Azure know that sending outbound mail directly from an email server hosted in Azure is not supported, due to the elastic nature of public cloud service IPs and the potential for abuse. Therefore, the Azure IP address blocks are added to public block lists with no exceptions to this policy.

To be able to send mail from an Azure hosted email server to Office 365 you need to send mail via an SMTP relay. There are a number of different SMTP relays you can utilise, including Exchange Online Protection; more information can be found here: https://blogs.msdn.microsoft.com/mast/2016/04/04/sending-e-mail-from-azure-compute-resource-to-external-domains

To configure Exchange Server 2016 hosted in Azure to send mail to Office 365 via SMTP relay to Exchange Online protection you need to do the following;

  1. Create a connector in your Office 365 tenant
  2. Configure accepted domains on your Exchange Server in Azure
  3. Create a send connector on your Exchange Server in Azure that relays to Exchange Online Protection

Create a connector in your Office 365 tenant

  1. Login to Exchange Online Admin Center
  2. Click mail flow | connector
  3. Click +
  4. Select from: “Your organisation’s email server” to: “Office 365”
  5. Enter in a Name for the Connector | Click Next
  6. Select “By verifying that the IP address of the sending server matches one of these IP addresses that belong to your organization”
  7. Add the public IP address of your Exchange Server in Azure

Configure accepted domains on your Exchange Server in Azure

  1. Open Exchange Management Shell
  2. Execute the following PowerShell command for each domain you want to send mail to in Office 365:
    New-AcceptedDomain -DomainName Contoso.com -DomainType InternalRelay -Name Contoso

Create a send connector on your Exchange Server in Azure that relays to Exchange Online Protection

  1. Execute the following PowerShell command:
    New-SendConnector -Name "My company to Office 365" -AddressSpaces * -CloudServicesMailEnabled $true -RequireTLS $true -SmartHosts yourdomain-com.mail.protection.outlook.com -TlsAuthLevel CertificateValidation
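
Once the connectors are in place, a quick way to validate the relay end to end is to send a test message through the local Exchange server and confirm it arrives in an Office 365 mailbox. A minimal sketch (the addresses and server name are illustrative):

# Send a test message via the on-premises Exchange server in Azure;
# it should relay out through Exchange Online Protection to the Office 365 recipient
Send-MailMessage -From "azure-test@contoso.com" -To "user@contoso.com" -Subject "Relay test" -Body "Test message via EOP smart host" -SmtpServer "exchange01.contoso.local" -Port 25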

Free/busy Exchange hybrid troubleshooting with Microsoft Edge

Those of you who have configured Exchange hybrid with Office 365 before know that free/busy functionality can be troublesome at times and not work correctly.

Instead of searching through Exchange logs I found that you can pinpoint the exact error message through Microsoft Edge to assist with troubleshooting.

To do so;

  1. Open Microsoft Edge and login to Office 365 OWA (https://outlook.office365.com/owa) with an Office 365 account
  2. Create a new meeting request
  3. Press F12 to launch developer tools
  4. Conduct a free/busy lookup on a person with a mailbox on-premises
  5. Select the Network tab
  6. Select the entry with “GetUserAvailability”
  7. Select the body tab (on the right hand side)
  8. The MessageText element will display the exact error message
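
As a complementary check from the shell, the organisation relationship that drives cross-premises free/busy can be reviewed on both sides. A minimal sketch (run it in Exchange Online PowerShell and in the on-premises Exchange Management Shell; the properties shown are the ones relevant to free/busy):

# Confirm the relationship is enabled and free/busy access is allowed
Get-OrganizationRelationship | Format-List Name, DomainNames, Enabled, FreeBusyAccessEnabled, FreeBusyAccessLevel, TargetSharingEpr, TargetAutodiscoverEpr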

Real world Azure AD Connect: multi forest user and resource + user forest implementation

Originally posted on Lucian’s blog at clouduccino.com. Follow Lucian on Twitter @LucianFrango.

***

Disclaimer: During October I spent a few weeks working on this blog post's solution at a customer and had to do the responsible thing and pull the pin on further time, as I had hit a glass ceiling. I had reached what I thought was possible with Azure AD Connect. In comes Nigel Jones (Identity Consultant @ Kloud) who, through a bit of persuasion from Darren (@darrenjrobinson), took it upon himself to smash through that glass ceiling of Azure AD Connect and figured this solution out. Full credit and a big high five!

***

tl;dr

  • Azure AD Connect multi-forest design
  • Using AADC to sync user/account + resource shared forest with another user/account only forest
  • Why it won’t work out of the box
  • How to get around the issue and leverage precedence to make it work
  • Visio diagrams on how it all works, for easy digestion

***

In true Memento style, after the quick disclaimer above, let me take you back for a quick background of the solution and then (possibly) blow your mind with what we have ended up with.

Back to the future

A while back in the world of directory synchronisation with Azure AD, to have a user and resource forest solution synchronised required the use of Microsoft Forefront Identity Manager (FIM), now Microsoft Identity Manager (MIM). From memory, you needed the former of those products (FIM) whenever you had a multi-forest synchronisation environment with Azure AD.

Just like Marty McFly, Azure AD synchronisation went from relative obscurity to the mainstream. In doing so, there have been many advancements and improvements that negate the need to deploy FIM or MIM for even the more complex environments.

When Azure AD Connect, then Azure AD Sync, introduced the ability to synchronise multiple forests in a user + resource model, it opened the door for a lot of organisations to streamline the federated identity design for Azure and Office 365.

2016-12-02-aadc-design-02

In the beginning…

The following outlines a common real world scenario for numerous enterprise organisations. In this environment we have an existing Active Directory forest which includes an Exchange organisation, SharePoint, Skype for Business and many more common services and infrastructure. The business grows and, with the wealth and equity, purchases another business to diversify or expand. With that comes integration and the sharing of resources.

We have two companies: Contoso and Fabrikam. A two-way trust is set up between the ADDS forests and users can start to collaborate and share resources.

In order to use Exchange, which is the most common example, we need to start to treat Contoso as a resource forest for Fabrikam.

Over in the Contoso forest, IT creates disabled user objects and linked mailboxes for Fabrikam users. In the on-premises world, this works fine. I won't go into too much more detail, but I'm sure that you, Mr or Mrs Reader, understand the particulars.

In summary, Contoso is a user and resource forest for itself, and a resource forest for Fabrikam. Fabrikam is simply a user forest with no deployment of Exchange, SharePoint etc.

How does a resource forest topology work with Azure AD Connect?

For the better part of two years now, since AADConnect was AADSync, Microsoft added in support for multi-forest connectivity. Last year, Jason Atherton (awesome Office 365 consultant @ Kloud) wrote a great blog post summarising this compatibility and usage.

In AADConnect, a user/account and resource forest topology is supported. The supported topology assumes that a customer has that simple, no-nonsense architecture. There’s no room for any shared funny business…

AADConnect is able to select the two forests' common identities and merge them before synchronising to Azure AD. This process uses attributes associated with the user objects, objectSID in the user/account forest and msExchMasterAccountSID in the resource forest, to join the user account and the resource account.
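
To see the pair of attributes that the join relies on, you can compare them directly in each forest. A minimal sketch using the ActiveDirectory module (the username and domain controller names are illustrative):

Import-Module ActiveDirectory

# The account-forest user's objectSID...
(Get-ADUser -Identity jsmith -Server dc01.fabrikam.com).SID

# ...should match the msExchMasterAccountSID stamped on the disabled linked-mailbox account in the resource forest
(Get-ADUser -Identity jsmith -Server dc01.contoso.com -Properties msExchMasterAccountSID).msExchMasterAccountSID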

There is also the option for customers to have multiple user forests and a single resource forest. I’ve personally not tried this with more than two forests, so I’m not confident enough to say how additional user/account forests would work out as well. However, please try it out and be sure to let me know via comments below, via Twitter or email me your results!

Quick note: you can also merge two objects by matching sAMAccountName to sAMAccountName, or by specifying any other ADDS attribute to match between the forests.

Compatibility

aadc-multi-forest

If you'd like to read up on this a little more, here are two articles that cover the above-mentioned topologies in detail:

Why won’t this work in the example shown?

Generally speaking, the first forest to sync in AADConnect, in a multi-forest implementation, is the user/account forest, which is likely the primary/main forest in an organisation. Let's assume this is the Contoso forest. This will be the first connector to sync in AADConnect. Its rules will also have the lowest precedence numbers; with AADConnect, the lower the precedence number, the higher the priority.

When the additional user/account forest(s) is added, or the resource forest, these connectors run after the initial Contoso connector due to the default precedence set. From an external perspective, this doesn't seem like much of a big deal. AADConnect merges two matching or mirrored user objects by way of the (commonly used) objectSID and msExchMasterAccountSID and away we go. In theory, precedence shouldn't really matter.

Give me more detail

The issue is that precedence does indeed matter when we go back to our Contoso and Fabrikam example. Here's what happens:

2016-12-02-aadc-whathappens-01

  • #1 – Contoso is sync’ed to AADC first as it was the first forest connected to AADC
    • Adding in Fabrikam first over Contoso doesn’t work either
  • #2 – The Fabrikam forest is joined with a second forest connector
  • AADC is configured with the “user identities exist across multiple directories” option
  • objectSID and msExchMasterAccountSID is selected to merge identities
  • When the objects are merged, sAmAccountName is taken from the Contoso forest – #1
    • This happens for Contoso forest users AND Fabrikam forest users
  • When the objects are merged, mail or primarySMTPAddress is taken from the Contoso forest – #1
    • This happens for Contoso forest users AND Fabrikam forest users
  • Should the two objects not have a completely identical set of attributes, the attributes that are set are pulled from whichever forest has them
    • In this case, most of the user object details come from Fabrikam – #2
    • Attributes like the users firstname, lastname, employee ID, branch / office

The result of this standard setup is that Fabrikam users with resource accounts in Contoso are synced, but their UPN is set with the suffix from the Contoso forest. An example would be a UPN of user@contoso.com rather than the desired user@fabrikam.com. When this happens, there is no SSO, as Windows Integrated Authentication in the Fabrikam forest does not recognise the Contoso forest UPN suffix of @contoso.com.

Yes, even with ADDS forest trusts configured correctly and UPN routing etc. all working correctly, authentication just does not work. AADC uses the incorrect attributes and syncs those to Azure AD.

Is there any other way around this?

I've touched on and referenced precedence a number of times in this blog post so far. The solution is indeed precedence. The issue I had experienced came from a lack of understanding of precedence in AADConnect. Sure, it works at a connector and rule level, with precedence set by AADConnect during the configuration process as forests are connected.

Playing around with precedence was not something I wanted to do, as I didn't have enough Microsoft Identity Manager or Forefront Identity Manager background to really be certain of the outcome of the joining/merging process of user and resource account objects. I knew that FIM/MIM has the option of attribute-level precedence, which is what we really wanted here, so my thinking was that we needed FIM/MIM to do the job. Wrong!

In comes Nigel…

Nigel dissected the requirements over the course of a week. He reviewed the configuration in an existing FIM 2010 R2 deployment and worked out what was needed from AADConnect. Having got AADConnect set up, all that was required was tweaking a couple of the inbound rules and moving them higher up the precedence order.

Below is the AADConnect Sync Rules editor output from the final configuration of AADConnect:

2016-12-01-syncrules-01

The solution centres around moving the main precedence rule, rule #1 for Fabrikam (red arrows and yellow highlight), above the highest (and default) Contoso rule (originally #1). Once this was done, AADConnect was able to pull the correct sAmAccountName and mail attributes from Fabrikam and keep all the other attributes associated with Exchange mailboxes from Contoso. Happy days!
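
If you prefer the shell to the Sync Rules Editor for reviewing the order, the ADSync module that ships with AADConnect can list the inbound rules by precedence. A minimal sketch (run on the AADConnect server; the precedence changes themselves are still best made through the Sync Rules Editor):

Import-Module ADSync

# List inbound sync rules in precedence order (lower number = higher priority)
Get-ADSyncRule | Where-Object { $_.Direction -eq 'Inbound' } | Sort-Object Precedence | Select-Object Precedence, Name, Connector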

Final words

Tinkering around with AADConnect shows just how powerful the “cut down FIM/MIM” application is. While AADConnect lacks the in-depth configuration and customisation that you find in FIM/MIM, it packs a lot in a small package! #Impressed

Cheers,

Lucian

Completing an Exchange Online Hybrid individual MoveRequest for a mailbox in a migration batch

I can't remember for certain, however I would say that since at least Exchange Server 2010 Hybrid there has always been the ability to complete a MoveRequest from on-premises to Exchange Online manually (via PowerShell) for a mailbox that was within a migration batch. It's really important for all customers to have this feature and something I have used on every enterprise migration to Exchange Online.

What are we trying to achieve here?

With enterprise customers and the potential for thousands of mailboxes to move from on-premises to Exchange Online, business analysts get their "kid in a candy store" on and sift through data to come up with relationships between mailboxes, so these mailboxes can be grouped together in migration batches for synchronised cutovers.

This is the tried and true method to ensure that not only business units, departments or teams cutover around the same timeframe, but, any specialised mailbox and shared mailbox relationships cutover. This then keeps business processes working nicely and with minimal disruption to users.

Sidebar – As of March 2016, it is now possible to have permissions to shared mailboxes work cross-organisation from cloud to on-premises, in hybrid deployments with Exchange Server 2013 or 2016 on-premises. Cloud mailboxes can access on-premises shared mailboxes, which can streamline migrations where not all shared mailboxes need to be cut over with their full access users.

Reference: https://blogs.technet.microsoft.com/mconeill/2016/03/20/shared-mailboxes-in-exchange-hybrid-now-work-cross-premises/

How can we achieve this?

In the past, and from what I have been doing for what feels like the last 4 years, this required one or two PowerShell cmdlets. The main reference that I've recently gone to is Paul Cunningham's ExchangeServerPro blog post that conveniently is also the main Google search result. The two lines of PowerShell are as follows:

Step 1 – Change the Suspend When Ready to Complete switch to $false or turn it off

Get-MoveRequest mailbox@domain.com | Set-MoveRequest -SuspendWhenReadyToComplete:$false

Step 2 – Resume the move request to complete the cutover of the mailbox

Get-MoveRequest mailbox@domain.com | Resume-MoveRequest

Often I don't even do step 1 mentioned above; I just resume the move request. It has always worked and got me the result I was after. This was across various migration projects with both Exchange Server 2010 and Exchange Server 2013 on-premises.

Things changed recently!?!? I’ve been working with a customer on their transition to Exchange Online for the last month and we’re now up to moving a pilot set of mailboxes to the cloud. Things were going great, as expected, nothing out of the ordinary. That is, until I wanted to cutover one initial mailbox in our first migration batch.

Why didn’t this work?

It's been a few months, three quarters of a year, since I've done an Office 365 Exchange Online mail migration. I did the trusted Google and found Paul's blog post. Had that "oh yeah, that's it" moment and remembered the PowerShell cmdlets. Our pilot user was cut over! Wrong…

The mailbox went from a status of "synced" to "incrementalsync" and back to "synced" with no success. I ran through the PowerShell again. No bueno. Here's a screen grab of what I was seeing.

SERIOUS FACE > Some names and identifying details have been changed to protect the privacy of individuals

And yes, that is my PowerShell colour theme. I have to get my “1337 h4x0r” going and make it green and stuff.

I see where this is going. There’s a work around. So Lucian, hurry up and tell me!

Yes, dear reader, you would be right. After a bit of back and forth with Microsoft, there is indeed a solution. It still involves two PowerShell cmdlets, but there are two additional parameters. One of those parameters is actually hidden: -PreventCompletion. The other is a date and time value normally represented like "28/11/2016 12:00:00 PM".

Let's cut to the chase; here are the two cmdlets:

Get-MoveRequest -Identity mailbox@domain.com | Set-MoveRequest -SuspendWhenReadyToComplete:$false -preventCompletion:$false -CompleteAfter 5
Get-MoveRequest -Identity mailbox@domain.com | Resume-MoveRequest

After running that, the mailbox, along with another four to be sure it worked, had all successfully cut over to Exchange Online. My -CompleteAfter value of "5" was meant to be 5 minutes. It could also be set with a proper date and time value of the current date and time plus 5 minutes to do it correctly. For me, it worked with the above PowerShell.
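
If you want to be explicit rather than rely on the bare value of 5, -CompleteAfter accepts a date/time, so a sketch of the same commands with the completion time set to five minutes from now would look like this (the mailbox address is illustrative):

# Allow completion and schedule it for five minutes from now
Get-MoveRequest -Identity mailbox@domain.com | Set-MoveRequest -SuspendWhenReadyToComplete:$false -PreventCompletion:$false -CompleteAfter (Get-Date).AddMinutes(5)
Get-MoveRequest -Identity mailbox@domain.com | Resume-MoveRequest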

Final words

I know it's been some time since I moved mailboxes from on-premises to the cloud, however, it's like riding a bike: you never forget. In this case, I knew the option was there to do it. I pestered Office 365 support for two days. I hope the awesome people there don't hate me for it. Although, in the long run, it's good that I did, as figuring this out and using this method of manual mailbox cutover for synced migration batches is crucial to any transition to the cloud.

Enjoy!

Lucian

AAD Connect – Using Directory Extensions to add attributes to Azure AD

I was recently asked to consult on a project that was looking at the integration of Workday with Azure AD for Single Sign On. One of the requirements for the project, is that staff number be used as the NameID value for authentication.

This got me thinking as the staff number wasn’t represented in Azure AD at all at this point, and in order to use it, we will need to get it to Azure AD.

These days, this is fairly easy to achieve by using the “Directory Extensions” option in Azure AD Connect. Directory Extensions allows us to synchronise additional attributes from the on-premises environment to Azure AD.

Launch the AADC configuration utility and select “Customize synchronisation options”

aadc_config_p1

Enter your Azure AD global administrator credentials to connect to Azure AD.

aadc_config_p2

Once authenticated to Azure AD, click next through the options until we get to “Optional Features” and select “Directory extension attribute sync”

aadc_config_p3

There are two additional attributes that I want to make use of in Azure AD, employeeID and employeeNumber. As such, I have selected these attributes from the list.

aadc_config_p4

Completing the wizard will configure AAD Connect to sync the requested attributes to Azure AD. A full synchronisation is required post configuration and can be launched either from the configuration wizard itself, or from PowerShell using the following cmdlet.

Start-ADSyncSyncCycle -PolicyType initial

Once the sync was complete, I went to validate that my new attributes were visible in Azure AD. They weren’t!

After some digging, I found that my attributes had in fact synced successfully, they just weren’t in the location I wanted them to be. My attributes had synced as follows:

  • Source AD: employeeNumber → Azure AD: extension_tenantGUID_employeeNumber
  • Source AD: employeeID → Azure AD: extension_tenantGUID_employeeID

So… This wasn’t exactly what I was looking for, but at least the theory works in practice.
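
As a side note, one way to see where synced directory extension values have landed is the AzureAD PowerShell module, which can list a user's extension properties. A minimal sketch (assuming the AzureAD module is installed; the UPN is illustrative):

# The output should include the extension_<GUID>_employeeID / employeeNumber values
Connect-AzureAD
Get-AzureADUserExtension -ObjectId user@contoso.com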

Fortunately, this problem is also easy to fix. We can configure AAD Connect to synchronise to a different target attribute using the synchronisation rules editor.

When we configured Directory extensions above, a new rule was created in the synchronisation rules editor for “User Directory Extension”. If we edit this rule, we can see the source and target attributes as they are currently configured, and we are able to make changes to these rules.

Before making any changes, note the prompt advising you not to change the existing rule; instead, clone the current rule and then edit the clone. The original rule will be disabled during the cloning process.

As seen below, I have now configured the employeeNumber and employeeID attributes to sync to extensionattribute5 and 6 respectively.

aadc_config_p6

A full synchronisation is required before the above changes will take effect. We are now in a position to configure Single Sign On settings in Azure AD, using extensionAttribute5 as the NameID value.

I hope this helps some of you out there.

Cheers,

Shane.

Office365 Licensing Management Agent for Microsoft Identity Manager

Licensing for Office365 has always been a moving target for enterprise customers. Over the years I've implemented a plethora of solutions to keep licensing consistent with entitlement logic. For some customers this is as simple as everyone gets, say, an E3 license. For other institutions there is often a mix of 'E' and 'K' licenses depending on EmployeeType.

Using the Granfeldt PowerShell Management Agent to import Office365 Licensing info

In this blog post I detail how I’m using Søren Granfeldt’s extremely versatile PowerShell Management Agent yet again. This time to import Office365 licensing information into Microsoft Identity Manager.

I’m bringing in the licenses associated with users as attributes on the user account. I’m also bringing in the licenses from the tenant as their own ObjectType into the Metaverse. This includes the information about each license such as how many licenses have been purchased, how many licenses have been issued etc.

Overview

I'm not showing assigning licenses. In the schema I have included the LicensesToAdd and LicensesToRemove attributes. Check out my Adding/Removing User Office365 Licences using PowerShell and the Azure AD Graph RestAPI post to see how to assign and remove licenses using PowerShell. From that you can work out your logic to implement an Export flow to manage Office365 licenses.

Getting Started with the Granfeldt PowerShell Management Agent

If you don't already have it, what are you waiting for? Go get it from here. Søren's documentation is pretty good but does assume you have a working knowledge of FIM/MIM, and this blog post is no different.

Three items I had to work out that I’ll save you the pain of are;

  • You must have a Password.ps1 file. Even though we’re not doing password management on this MA, the PS MA configuration requires a file for this field. The .ps1 doesn’t need to have any logic/script inside it. It just needs to be present
  • The credentials you give the MA are the credentials for the account that has permissions to the Office365 tenant. Just a normal account is enough to enumerate it, but you'll need additional permissions to assign/remove licenses.
  • The path to the scripts in the PS MA Config must not contain spaces and be in old-skool 8.3 format. I’ve chosen to store my scripts in an appropriately named subdirectory under the MIM Extensions directory. Tip: from a command shell use dir /x to get the 8.3 directory format name. Mine looks like C:\PROGRA~1\MICROS~2\2010\SYNCHR~1\EXTENS~2\O365Li~1

Schema.ps1

My Schema is based around the core Office365 Licenses function. You’ll need to create a number of corresponding attributes in the Metaverse Schema on the Person ObjectType to flow the attributes into. You will also need to create a new ObjectType in the Metaverse for the O365 Licenses. I named mine LicensePlans. Use the Schema info below for the attributes that will be imported and the attribute object types to make sure what you create in the Metaverse aligns, so you can import the values. Note the attributes that are multi-valued.
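
Purely as an illustration of the PSMA schema convention (the "AttributeName|Type" property naming and the Anchor- prefix follow Søren's documentation; the attribute names below are examples, not the full set this MA brings in):

# Illustrative schema objects only - align the attribute names with what your Import.ps1 returns
$user = New-Object -TypeName PSCustomObject
$user | Add-Member -Type NoteProperty -Name "Anchor-UserPrincipalName|String" -Value "user@domain.com"
$user | Add-Member -Type NoteProperty -Name "objectClass|String" -Value "user"
$user | Add-Member -Type NoteProperty -Name "DisplayName|String" -Value "Display Name"
$user

$plan = New-Object -TypeName PSCustomObject
$plan | Add-Member -Type NoteProperty -Name "Anchor-AccountSkuId|String" -Value "tenant:ENTERPRISEPACK"
$plan | Add-Member -Type NoteProperty -Name "objectClass|String" -Value "LicensePlans"
$plan | Add-Member -Type NoteProperty -Name "ActiveUnits|String" -Value "0"
$plan | Add-Member -Type NoteProperty -Name "ConsumedUnits|String" -Value "0"
$plan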

Import.ps1

The logic which the Import.ps1 implements I'm not going to document here, as this post goes into all the details: Enumerating all Users/Groups/Contacts in an Azure tenant using PowerShell and the Azure Graph API 'odata.nextLink' paging function.
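
Purely as an illustration of the kind of tenant license data the import is after (not the actual Import.ps1 logic, which uses the Azure AD Graph API as per the linked post), the MSOnline module exposes the license plan counts directly:

# Illustrative only - shows the per-SKU purchased/consumed counts the MA imports
Connect-MsolService
Get-MsolAccountSku | Select-Object AccountSkuId, ActiveUnits, ConsumedUnits, WarningUnits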

Password Script (password.ps1)

Empty as not implemented

Export.ps1

Empty as not implemented

Management Agent Configuration

As per the tips above, the format for the script paths must be without spaces etc. I’m using 8.3 format and I’m using an Office 365 account to connect to Office365 and import the user and license data.

As per the Schema script earlier in this post I’m bringing user licensing metadata as well as the Office365 Tenant Licenses info.

The attributes to bring through are aligned with what is specified in the Schema file.

Flow the attributes through to the attributes I created in the Metaverse on the Person ObjectType and to the new ObjectType LicensePlans.

Wiring it up

To finish it up you’ll need to do the usual tasks of creating run profiles, staging the connector space from Office365 and importing into the Metaverse.

Enjoy.

Follow Darren on Twitter @darrenjrobinson