Demystifying Managed Service Identities on Azure

Managed service identities (MSIs) are a great feature of Azure that are being gradually enabled on a number of different resource types. But when I’m talking to developers, operations engineers, and other Azure customers, I often find that there is some confusion and uncertainty about what they do. In this post I will explain what MSIs are and are not, where they make sense to use, and give some general advice on how to work with them.

What Do Managed Service Identities Do?

A managed service identity allows an Azure resource to identify itself to Azure Active Directory without needing to present any explicit credentials. Let’s explain that a little more.

In many situations, you may have Azure resources that need to securely communicate with other resources. For example, you may have an application running on Azure App Service that needs to retrieve some secrets from a Key Vault. Before MSIs existed, you would need to create an identity for the application in Azure AD, set up credentials for that application (also known as creating a service principal), configure the application to know these credentials, and then communicate with Azure AD to exchange the credentials for a short-lived token that Key Vault will accept. This requires quite a lot of upfront setup, and can be difficult to achieve within a fully automated deployment pipeline. Additionally, to maintain a high level of security, the credentials should be changed (rotated) regularly, and this requires even more manual effort.

With an MSI, in contrast, the App Service automatically gets its own identity in Azure AD, and there is a built-in way that the app can use its identity to retrieve a token. We don’t need to maintain any AD applications, create any credentials, or handle the rotation of these credentials ourselves. Azure takes care of it for us.

It can do this because Azure can identify the resource – it already knows where a given App Service or virtual machine ‘lives’ inside the Azure environment, so it can use this information to allow the application to identify itself to Azure AD without the need for exchanging credentials.

What Do Managed Service Identities Not Do?

Inbound requests: One of the biggest points of confusion about MSIs is whether they are used for inbound requests to the resource or for outbound requests from the resource. MSIs are for the latter – when a resource needs to make an outbound request, it can identify itself with an MSI and pass its identity along to the resource it’s requesting access to.

MSIs pair nicely with other features of Azure resources that allow for Azure AD tokens to be used for their own inbound requests. For example, Azure Key Vault accepts requests with an Azure AD token attached, and it evaluates which parts of Key Vault can be accessed based on the identity of the caller. An MSI can be used in conjunction with this feature to allow an Azure resource to directly access a Key Vault-managed secret.

Authorisation: Another important point is that MSIs are only directly involved in authentication, not authorisation. In other words, an MSI allows Azure AD to determine what the resource or application is, but that by itself says nothing about what the resource can do. For many Azure target resources, authorisation is handled by Azure's own Identity and Access Management (IAM) system. Key Vault is one exception – it maintains its own access control system, managed outside of Azure's IAM. For non-Azure resources, we could communicate with any authorisation system that understands Azure AD tokens; an MSI will then just be another way of getting a valid token that the authorisation system can accept.

Another important point to be aware of is that the target resource doesn’t need to run within the same Azure subscription, or even within Azure at all. Any service that understands Azure Active Directory tokens should work with tokens for MSIs.

How to Use MSIs

Now that we know what MSIs can do, let’s have a look at how to use them. Generally there will be three main parts to working with an MSI: enabling the MSI; granting it rights to a target resource; and using it.

  1. Enabling an MSI on a resource. Before a resource can identify itself to Azure AD, it needs to be configured to expose an MSI. The way you do this depends on the specific resource type you're enabling the MSI on. In App Services, an MSI can be enabled through the Azure Portal, through an ARM template, or through the Azure CLI, as documented here. For virtual machines, an MSI can be enabled through the Azure Portal or through an ARM template. Other MSI-enabled services have their own ways of doing this.

  2. Granting rights to the target resource. Once the resource has an MSI enabled, we can grant it rights to do something. The way that we do this is different depending on the type of target resource. For example, Key Vault requires that you configure its Access Policies, while to use the Event Hubs or the Azure Resource Manager APIs you need to use Azure’s IAM system. Other target resource types will have their own way of handling access control.

  3. Using the MSI to issue tokens. Finally, now that the resource’s MSI is enabled and has been granted rights to a target resource, it can be used to actually issue tokens so that a target resource request can be issued. Once again, the approach will be different depending on the resource type. For App Services, there is an HTTP endpoint within the App Service’s private environment that can be used to get a token, and there is also a .NET library that will handle the API calls if you’re using a supported platform. For virtual machines, there is also an HTTP endpoint that can similarly be used to obtain a token. Of course, you don’t need to specify any credentials when you call these endpoints – they’re only available within that App Service or virtual machine, and Azure handles all of the credentials for you.
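For App Services, the token endpoint is exposed to the app through a pair of environment variables (MSI_ENDPOINT and MSI_SECRET at the time of writing). As a rough sketch of what a call to it looks like – treat the exact API version and header name as the documented values of the day rather than gospel:

```python
import json
import os
import urllib.parse
import urllib.request


def build_token_request(endpoint, secret, resource, api_version="2017-09-01"):
    """Build the URL and headers for an App Service MSI token request."""
    query = urllib.parse.urlencode({"resource": resource, "api-version": api_version})
    # The endpoint authenticates callers with the secret, passed in a header
    return f"{endpoint}?{query}", {"Secret": secret}


def get_msi_token(resource):
    """Request a token from the local MSI endpoint (only reachable inside the App Service)."""
    url, headers = build_token_request(
        os.environ["MSI_ENDPOINT"], os.environ["MSI_SECRET"], resource
    )
    with urllib.request.urlopen(urllib.request.Request(url, headers=headers)) as response:
        return json.load(response)["access_token"]
```

The resource parameter is the AAD resource URI of whatever you're calling, e.g. https://vault.azure.net for Key Vault. If you're on .NET, the Microsoft.Azure.Services.AppAuthentication library wraps this same call for you.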

Finding an MSI’s Details and Listing MSIs

There may be situations where we need to find our MSI’s details, such as the principal ID used to represent the application in Azure AD. For example, we may need to manually configure an external service to authorise our application to access it. As of April 2018, the Azure Portal shows MSIs when adding role assignments, but the Azure AD blade doesn’t seem to provide any way to view a list of MSIs. They are effectively hidden from the list of Azure AD applications. However, there are a couple of other ways we can find an MSI.

If we want to find a specific resource’s MSI details then we can go to the Azure Resource Explorer and find our resource. The JSON details for the resource will generally include an identity property, which in turn includes a principalId:

Screenshot 1

That principalId is the object ID of the service principal in Azure AD, and it's the ID to use when creating role assignments.
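For reference, the identity block in the resource JSON looks something like this (the IDs shown are placeholders, not real values):

```json
"identity": {
  "type": "SystemAssigned",
  "tenantId": "00000000-0000-0000-0000-000000000000",
  "principalId": "11111111-1111-1111-1111-111111111111"
}
```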

Another way to find and list MSIs is to use the Azure AD PowerShell cmdlets. The Get-AzureRmADServicePrincipal cmdlet returns a complete list of service principals in your Azure AD directory, including any MSIs, which can be recognised by their service principal names. The ApplicationId property is the client ID of the service principal:

Screenshot 2

Now that we’ve seen how to work with an MSI, let’s look at which Azure resources actually support creating and using them.

Resource Types with MSI and AAD Support

As of April 2018, only a small number of Azure services support creating MSIs, and all of them are currently in preview. Additionally, while it's not yet listed on that page, Azure API Management also supports MSIs – this is primarily for handling Key Vault integration for SSL certificates.

One important note is that for App Services, MSIs are currently incompatible with deployment slots – only the production slot gets assigned an MSI. Hopefully this will be resolved before MSIs become fully available and supported.

As I mentioned above, MSIs are really just a feature that allows a resource to assume an identity that Azure AD will accept. However, in order to actually use MSIs within Azure, it’s also helpful to look at which resource types support receiving requests with Azure AD authentication, and therefore support receiving MSIs on incoming requests. Microsoft maintain a list of these resource types here.

Example Scenarios

Now that we understand what MSIs are and how they can be used with AAD-enabled services, let’s look at a few example real-world scenarios where they can be used.

Virtual Machines and Key Vault

Azure Key Vault is a secure data store for secrets, keys, and certificates. Key Vault requires that every request is authenticated with Azure AD. As an example of how this might be used with an MSI, imagine we have an application running on a virtual machine that needs to retrieve a database connection string from Key Vault. Once the VM is configured with an MSI and the MSI is granted Key Vault access rights, the application can request a token and can then get the connection string without needing to maintain any credentials to access Key Vault.
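To make that concrete, here's a rough Python sketch of the two hops involved. The endpoints and API versions are my assumptions from the documentation of the day (the instance metadata service token endpoint on the VM, and the Key Vault REST API), so treat this as illustrative rather than production code; the network calls only succeed inside the VM:

```python
import json
import urllib.parse
import urllib.request

# Instance metadata service (IMDS) token endpoint, reachable only from inside the VM
IMDS_TOKEN_URL = "http://169.254.169.254/metadata/identity/oauth2/token"


def build_imds_token_request(resource, api_version="2018-02-01"):
    """URL and headers for asking IMDS for an MSI token."""
    query = urllib.parse.urlencode({"api-version": api_version, "resource": resource})
    return f"{IMDS_TOKEN_URL}?{query}", {"Metadata": "true"}


def build_secret_request(vault_url, secret_name, token, api_version="2016-10-01"):
    """URL and headers for reading a Key Vault secret with a bearer token."""
    url = f"{vault_url}/secrets/{urllib.parse.quote(secret_name)}?api-version={api_version}"
    return url, {"Authorization": f"Bearer {token}"}


def get_connection_string(vault_url, secret_name):
    """Two hops: get a token from IMDS, then fetch the secret itself."""
    token_url, token_headers = build_imds_token_request("https://vault.azure.net")
    with urllib.request.urlopen(urllib.request.Request(token_url, headers=token_headers)) as r:
        token = json.load(r)["access_token"]
    secret_url, secret_headers = build_secret_request(vault_url, secret_name, token)
    with urllib.request.urlopen(urllib.request.Request(secret_url, headers=secret_headers)) as r:
        return json.load(r)["value"]
```

Note there are no credentials anywhere in that code – that's the whole point of the MSI.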

API Management and Key Vault

Another great example of an MSI being used with Key Vault is Azure API Management. API Management creates a public domain name for the API gateway, to which we can assign a custom domain name and SSL certificate. We can store the SSL certificate inside Key Vault, and then give Azure API Management an MSI and access to that Key Vault secret. Once it has this, API Management can automatically retrieve the SSL certificate for the custom domain name straight from Key Vault, simplifying the certificate installation process and improving security by ensuring that the certificate is not directly passed around.

Azure Functions and Azure Resource Manager

Azure Resource Manager (ARM) is the deployment and resource management system used by Azure. ARM itself supports AAD authentication. Imagine we have an Azure Function that needs to scan our Azure subscription to find resources that have recently been created. In order to do this, the function needs to log into ARM and get a list of resources. Our Azure Functions app can expose an MSI, and once that MSI has been granted reader rights over the subscription (or the relevant resource groups), the function can get a token to make ARM requests and get the list without needing to maintain any credentials.

App Services and Event Hubs/Service Bus

Event Hubs is a managed event stream. Communication to both publish onto, and subscribe to events from, the stream can be secured using Azure AD. An example scenario where MSIs would help here is when an application running on Azure App Service needs to publish events to an Event Hub. Once the App Service has been configured with an MSI, and Event Hubs has been configured to grant that MSI publishing permissions, the application can retrieve an Azure AD token and use it to post messages without having to maintain keys.

Service Bus provides a number of features related to messaging and queuing, including queues and topics (similar to queues but with multiple subscribers). As with Event Hubs, an application could use its MSI to post messages to a queue or to read messages from a topic subscription, without having to maintain keys.

App Services and Azure SQL

Azure SQL is a managed relational database, and it supports Azure AD authentication for incoming connections. A database can be configured to allow Azure AD users and applications to read or write specific types of data, to execute stored procedures, and to manage the database itself. When coupled with an App Service with an MSI, Azure SQL’s AAD support is very powerful – it reduces the need to provision and manage database credentials, and ensures that only a given application can log into a database with a given user account. Tomas Restrepo has written a great blog post explaining how to use Azure SQL with App Services and MSIs.


In this post we’ve looked into the details of managed service identities (MSIs) in Azure. MSIs provide some great security and management benefits for applications and systems hosted on Azure, and enable high levels of automation in our deployments. While they aren’t particularly complicated to understand, there are a few subtleties to be aware of. As long as you understand that MSIs are for authentication of a resource making an outbound request, and that authorisation is a separate thing that needs to be managed independently, you will be able to take advantage of MSIs with the services that already support them, as well as the services that may soon get MSI and AAD support.

Identifying Active Directory Users with Pwned Passwords using Microsoft/Forefront Identity Manager v2, k-Anonymity and Have I Been Pwned


In August 2017 Troy Hunt released a sizeable list of Pwned Passwords. 320 million, in fact.

I subsequently wrote this post on Identifying Active Directory Users with Pwned Passwords using Microsoft/Forefront Identity Manager, which called the API and set a boolean attribute in the MIM Service that could be used with business logic to force users with accounts that have compromised passwords to change their password on next logon.

Whilst that was a proof of concept/discussion point of sorts, and I had a disclaimer about sending passwords across the internet to a third-party service, there was a lot of momentum around the HIBP API, so I developed a solution and wrote this update to check the passwords locally.

Today Troy has released v2 of that list and updated the API with new features and functionality. If you’re playing catch-up I encourage you to read Troy’s post from August last year, and my two posts about checking Active Directory passwords against that list.

Leveraging V2 (with k-Anonymity) of the Have I Been Pwned API

With v2 of the HIBP password list and API, the number of leaked credentials in the list has grown to half a billion. 501,636,842 Pwned Passwords to be exact.

For the v2 list, and in conjunction with Junade Ali from Cloudflare, the API has been updated so it can be queried with a level of anonymity. Instead of sending a SHA-1 hash of the password you're checking, you can now send just a truncated version of that hash, and the HIBP v2 API returns the set of matching password hashes for you to compare locally. This uses a concept called k-anonymity, detailed brilliantly here by Junade Ali.
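To illustrate the range-query model, here's a small Python sketch (the range endpoint URL is the published HIBP v2 API; the rest is just hashing). Only the first five characters of the SHA-1 hash ever leave the machine:

```python
import hashlib
import urllib.request

RANGE_API = "https://api.pwnedpasswords.com/range/"


def split_hash(password):
    """SHA-1 the password and split it into the 5-char prefix sent to the API
    and the 35-char suffix that never leaves the machine."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]


def pwned_count(password):
    """Return how many times the password appears in the HIBP corpus (0 if absent)."""
    prefix, suffix = split_hash(password)
    with urllib.request.urlopen(RANGE_API + prefix) as response:
        for line in response.read().decode("utf-8").splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0
```

pwned_count needs network access to the HIBP API; split_hash is pure and shows exactly what gets transmitted.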

v2 of the API also returns a score for each password in the list: how many times the password has previously been seen in leaked credential lists. Brilliant.

Updated Pwned PowerShell Management Agent for Pwned Password Lookup

Below is an updated Password.ps1 script for my Pwned Password Management Agent for Microsoft Identity Manager (originally written against the previous API version). It works by:

  • taking the new password received from PCNS
  • hashing the password to SHA-1 format
  • looking up the v2 HIBP API using part of the SHA-1 hash
  • updating the MIM Service with the Pwned Password status

Checkout the original post with all the rest of the details here.


Of course you can also download the Pwned Password dataset (recommended via Torrent). Keep in mind that the compressed dataset is 8.75 GB, and uncompressed it is 29.4 GB. Convert that into on-premises SQL table(s), as I did in the post linked at the beginning of this post, and you'll need well in excess of that.

Awesome work from Troy and Junade.


Query multiple object classes from AD using LDAP Query

Recently I had to query Active Directory to get a list of users and contacts. To achieve this, I used an LDAP query. See the following function:

/// <summary>Queries the Active Directory using LDAP</summary>
/// <param name="entry">Directory entry</param>
/// <param name="search">Directory searcher with properties to load and filters</param>
/// <returns>A dictionary with ObjectGuid as the key</returns>
public static Dictionary<string, SearchResult> QueryLDAP(DirectoryEntry entry, DirectorySearcher search)
{
    entry.AuthenticationType = AuthenticationTypes.SecureSocketsLayer;
    entry.Path = ConfigurationManager.AppSettings["LDAP.URL"].ToString();
    entry.Username = ConfigurationManager.AppSettings["LDAP.Username"].ToString();
    entry.Password = ConfigurationManager.AppSettings["LDAP.Password"].ToString();

    // Load any attributes you want to retrieve into search.PropertiesToLoad here
    search.SearchRoot = entry;
    search.Filter = "(|(objectClass=user)(objectClass=contact))";
    search.SearchScope = SearchScope.Subtree;

    SearchResultCollection result = search.FindAll();
    Dictionary<string, SearchResult> dicResult = new Dictionary<string, SearchResult>();
    foreach (SearchResult profile in result)
    {
        if (profile.Properties["objectGUID"] != null && profile.Properties["objectGUID"].Count > 0)
        {
            // objectGUID is stored as a byte array; convert it to a Guid for the key
            Guid guid = new Guid((Byte[])profile.Properties["objectGUID"][0]);
            dicResult.Add(guid.ToString(), profile);
        }
    }
    return dicResult;
}

This function queries Active Directory and returns all profiles matching the filter in a dictionary object. Notice the search filter is set to return objects of class user as well as objects of class contact. The settings come from a config file, as below. Replace the tags with your settings:

<!--LDAP settings-->
<add key="LDAP.URL" value="LDAP://OU=<OU_NAME>,DC=<DC_NAME>,DC=com" />
<add key="LDAP.Username" value="<SERVICE_ACCOUNT_USERNAME>" />
<add key="LDAP.Password" value="<SERVICE_ACCOUNT_PWD>" />

So to use it, we will do:

using (DirectoryEntry entry = new DirectoryEntry())
using (DirectorySearcher search = new DirectorySearcher())
{
      // Extract all AD profiles
      sbLog.AppendLine("Preparing to query LDAP...");
      Dictionary<string, SearchResult> AD_Results = QueryLDAP(entry, search);
      foreach (SearchResult ADProfile in AD_Results.Values)
      {
         string email = ADProfile.GetDirectoryEntry().Properties["mail"].Value?.ToString();
      }
}
You can now loop through the dictionary to get each profile. 🙂

How to Synchronize users Active Directory/Azure Active Directory Photo using Microsoft Identity Manager


Whilst Microsoft FIM/MIM can be used to do pretty much anything your requirements dictate, dealing with object types other than text and references can be a little tricky the first time you manipulate them. User Profile Photos fall into that category, as they are stored in the directory as binary objects. Throw in Azure AD, and obtaining and synchronizing photos can seem like adding a double back-flip to the scenario.

This post is Part 1 of a two-part post. Part two is here. This is essentially the introduction to the how-to piece before extending the solution past a user's Active Directory Profile Photo to their Office 365 Profile Photo. Underneath, the synchronization and the method for dealing with the binary image data are the same, but the APIs and methods used are different when you are looking to implement the solution at any scale.

As for why you would want to do this, refer to Part two here, which details the reasons.


As always, I'm using my favourite PowerShell Management Agent (the Granfeldt PSMA). I've updated an existing Management Agent I had for Azure AD that is described here. I highly recommend you use that as the basis for the extra photo functionality I describe in this post. Keep in mind that the AzureADPreview (now AzureAD) PowerShell Module has changed the ADAL helper libraries. I detail the changes here so you can get AuthN to work with the new libraries.

The changes to my previous Azure AD PowerShell MA are therefore to add two additional attributes to the Schema script, and to include the logic to import a user's profile photo (if they have one) in the Import script.


Take the schema.ps1 from my Azure AD PSMA here and add the following two lines to the bottom (before the $obj in the last line where I’ve left an empty line (29)).

$obj | Add-Member -Type NoteProperty -Name "AADPhoto|Binary" -Value 0x20 
$obj | Add-Member -Type NoteProperty -Name "AADPhotoChecksum|String" -Value "23973abc382373"

The AADPhoto attribute of type Binary is where we will store the photo. The AADPhotoChecksum attribute of type String is where we will store a checksum of the photo for use in logic if we need to determine if images have changed easily during imports.
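The MA itself uses Pscx's Get-Hash for this (see below), but the idea is easy to see in a few lines of Python. MD5 is an arbitrary choice here; any digest works, since the checksum is only used for change detection, not security:

```python
import hashlib


def photo_checksum(photo_bytes):
    """Checksum of the raw image bytes, computed in memory (no temp file needed)."""
    return hashlib.md5(photo_bytes).hexdigest()


def photo_changed(new_bytes, stored_checksum):
    """Compare an incoming photo against the checksum stored on the MIM object."""
    return photo_checksum(new_bytes) != stored_checksum
```

Comparing two short checksum strings is far cheaper than comparing the binary blobs themselves on every import.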


Take the import.ps1 from my Azure AD PSMA here and make the following additions;

  • On your MIM Sync Server download/install the Pscx PowerShell Module.
    • The Pscx PowerShell Module is required for Get-Hash (to calculate the image checksum) from a variable, rather than from a file on the local disk
    • You can get the module from the Gallery using Install-Module Pscx -Force
    • Add these two lines up the top of the import.ps1 script. Around line 26 is a good spot
# Powershell Module required for Get-Hash (to calculate Image checksum)
Import-Module Pscx
  • Add the following lines into the Import.ps1 in the section where we are creating the object to pass to the MA. After the $obj.Add(“AADCity”,$ line is a good spot. 
  • What the script below does is create a WebClient, rather than use Invoke-RestMethod or Invoke-WebRequest, to get the user's Azure AD profile image – and only if the 'thumbnailPhoto@odata.mediaContentType' attribute exists, which indicates the user has a profile photo. I'm using the WebClient over the PowerShell Invoke-RestMethod or Invoke-WebRequest functions so that the returned object is in binary format (rather than being returned as a string), which saves having to convert it to binary or output it to a file and read it back in. The WebClient is also faster for transferring images/data.
  • Once the image has been returned (line 8 below) the image is added to the object as the attribute AADPhoto to be passed to the MA (line 11)
  • Line 14 gets the checksum for the image and adds that to the AADPhotoChecksum attribute in line 16.

Other changes

Now that you’ve updated the Schema and Import scripts, you will need to;

  • Refresh your schema on your Azure AD PSMA to get the new attributes (AADPhoto and AADPhotoChecksum) added
  • Select the two new attributes in the Attributes section of your Azure AD PSMA
  • Create in your MetaVerse via the MetaVerse Designer two new attributes on the person (or whatever ObjectType you are using for users), for AADPhoto and AADPhotoChecksum. Make sure that AADPhoto is of type Binary and AADPhotoChecksum is of type string.

  • Configure your Attribute Flow on your Azure AD PSMA to import the AADPhoto and AADPhotoChecksum attributes into the Metaverse. Once done and you’ve performed an Import and Sync you will have Azure AD Photos in your MV.

  • How do you know they are correct? Let's extract one from the MV, write it to a file, and have a look at it. This small script, using the Lithnet MIIS Automation PowerShell Module, makes it easy. First I get my user object from the MV. I then have a look at the text-string version of the image (to make sure it is there), then output the binary version to a file in the C:\Temp directory.
$me = Get-MVObject -ObjectType person -Attribute accountName -Value "drobinson"
[string]$myphoto = $me.Attributes.AADPhoto.Values.ValueString
[System.Io.File]::WriteAllBytes("c:\temp\UserPhoto.jpg" ,$me.Attributes.AADPhoto.Values.ValueBinary )
  • Sure enough. The image is valid.


Photos are still just bits of data. Once you know how to get them and manipulate them you can do what ever you need to with them. See Part two that takes this concept and extends it to Office 365.

Creating Organizational Units, and Groups in AD with GUID

A recent client of Kloud wanted to be able to create new organizational units and groups automatically, with a unique ID (GUID) for each organizational unit. The groups created needed to share the GUID of their OU.

In this blog, I will demonstrate how you could achieve the aforementioned, through a simple PowerShell script, naturally.

Before you start however, you may have to run PowerShell (run as Administrator), and execute the following cmdlet:

Set-ExecutionPolicy RemoteSigned

This is to allow PowerShell scripts to run on the computer.

How to define GUID?

To define the GUID in PowerShell, we use the [GUID] type accelerator: [guid]::NewGuid()

Defining the OU, and Groups.

First part of the script consists of the following parameters:

#Define GUID

$Guid = [guid]::NewGuid()

This is to define the GUID, in which a unique ID resembling this “0fda3c39-c5d8-424f-9bac-654e2cf045ed” will be added to the OU, and groups within that OU.

#Define OU Name

$OUName = (Read-Host 'Input OU name') + '-' + $Guid

Defining the name of the OU. This needs an administrator’s input.

#Define Managers Group Name

$MGroupPrefix = (Read-Host -Prompt 'Input Managers Group name')

if ($MGroupPrefix) {write-host 'The specified Managers Name will be used.' -foreground 'yellow'}

else {

$MGroupPrefix = 'Managers'

Write-Host 'Default Managers Group name will be used.' -foreground 'yellow'

}

$MGroupPrefix = $MGroupPrefix + '-' + $Guid


This is to define the name of the Managers group; it will also be followed by a “-” and the GUID.

If the Administrator selects a name, then, the selected name will be used.

#Define Supervisors Group Name

$SGroupPrefix = (Read-Host -Prompt 'Input Supervisors Group name')

if ($SGroupPrefix) {write-host 'The specified Supervisors Group name will be used.' -foreground 'yellow'}

else {

$SGroupPrefix = 'Supervisors'

Write-Host 'Default Supervisors name will be used.' -foreground 'yellow'

}

$SGroupPrefix = $SGroupPrefix + '-' + $Guid


This will define the second group name “Supervisors”, again, it will be followed by “-” and a GUID.

In this section, however, I didn't specify a name, to demonstrate how the default name “Supervisors” + “-” + GUID comes into place, as defined in the else block. Simply press Enter.

Defining the OU Name, group name, and path

#Defining Main OU Name, OUs, and path

$OUPath = 'OU=Departments,DC=contoso,DC=lab'

$GroupsOUName = 'Groups'

$GroupOUPath = 'OU=' + $OUName + ',OU=Departments,DC=contoso,DC=lab'

$UsersOUName = 'Users'

$UsersOUPath = 'OU=' + $OUName + ',OU=Departments,DC=contoso,DC=lab'

The above parameters define the OU Path in which the main OU will be created, e.g. Finance.

It will also create two other OUs, “Groups” and “Users”, in the specified path.

Defining the Groups (that will be created in “Groups” OU), the group names and path.

#Defining Groups, groups name and path

$MGroupPath = 'OU=Groups, OU=' + $OUName + ',OU=Departments,DC=contoso,DC=lab'

$MGroupSamAccountName = 'Managers-' + $Guid

$MGroupDisplayName = 'Managers'

$SGroupPath = 'OU=Groups, OU=' + $OUName + ',OU=Departments,DC=contoso,DC=lab' 

$SGroupSamAccountName = 'Supervisors-' + $Guid

$SGroupDisplayName = 'Supervisors'

The above parameters define the Groups path, the SamAccountName and Display Name, that have been created in the main OU, e.g. Finance.

Creating the Organizational units, and groups.

As you may have noticed, everything that we have defined above is just parameters. Nothing will be created without calling the cmdlets.

#Creating Tenant OU, Groups & Users OU, and Admins Groups

Import-Module ActiveDirectory

New-ADOrganizationalUnit -Name $OUName -Path $OUPath

New-ADOrganizationalUnit -Name $GroupsOUName -Path $GroupOUPath

New-ADOrganizationalUnit -Name $UsersOUName -Path $UsersOUPath

New-ADGroup -Name $MGroupPrefix -SamAccountName $MGroupSamAccountName -GroupCategory Security -GroupScope Global -DisplayName $MGroupDisplayName -Path $MGroupPath -Description "Members of this group are Managers of $OUName"

New-ADGroup -Name $SGroupPrefix -SamAccountName $SGroupSamAccountName -GroupCategory Security -GroupScope Global -DisplayName $SGroupDisplayName -Path $SGroupPath -Description "Members of this group are Supervisors of $OUName"

Once completed you will have the following OU/Department

And in the Groups OU, you will have two groups: Managers (as defined in the previous step) and “SupervisorsGroup”, the default name, as the administrator did not specify a name for the group.

Notice how the GUID is shared across the Departments, and the Groups.

Putting it all together:

Note: The SYNOPSIS, DESCRIPTION, EXAMPLE, and NOTES are used for get-help.


<#
.SYNOPSIS
This is a PowerShell script to create Organizational Units (OUs), and groups.

.DESCRIPTION
The script will create:
1) An OU, defined by the administrator. The OU name is defined by the administrator, followed by a unique ID generated through GUID. It will be created in a specific OU path.
2) Two OUs consisting of 'Groups' and 'Users'.
3) Two groups, 'Managers' and 'Supervisors'. These are the default names; an administrator can define group names as desired. The GUID will be appended by default.

.NOTES
This PowerShell script should be run as administrator.
#>



#Define GUID
$Guid = [guid]::NewGuid()

#Define OU Name
$OUName  = (Read-Host 'Input OU name') + '-' + $Guid

#Define First Group Name
$MGroupPrefix = (Read-Host -Prompt 'Input Managers Group name')
if ($MGroupPrefix) {write-host 'The specified Managers Name will be used.' -foreground 'yellow'}
else {
$MGroupPrefix = 'Managers'
Write-Host 'Default Managers Group name will be used' -foreground 'yellow'
}
$MGroupPrefix = $MGroupPrefix + '-' + $Guid

#Define Second Group Name
$SGroupPrefix = (Read-Host -Prompt 'Input Supervisors Group name')
if ($SGroupPrefix) {write-host 'The specified Supervisors Group name will be used.' -foreground 'yellow'}
else {
$SGroupPrefix = 'SupervisorsGroup'
Write-Host 'Default SupervisorsGroup name will be used' -foreground 'yellow'
}
$SGroupPrefix = $SGroupPrefix + '-' + $Guid


#Defining Main OU Name, OUs, and path
$OUPath  = 'OU=Departments,DC=contoso,DC=lab'
$GroupsOUName = 'Groups'
$GroupOUPath = 'OU=' + $OUName + ',OU=Departments,DC=contoso,DC=lab'
$UsersOUName = 'Users'
$UsersOUPath = 'OU=' + $OUName + ',OU=Departments,DC=contoso,DC=lab'

#Defining Groups, groups name and path
$MGroupPath = 'OU=Groups, OU=' + $OUName + ',OU=Departments,DC=contoso,DC=lab'
$MGroupSamAccountName = 'Managers-' + $Guid
$MGroupDisplayName = 'Managers'
$SGroupPath = 'OU=Groups, OU=' + $OUName + ',OU=Departments,DC=contoso,DC=lab'
$SGroupSamAccountName = 'Supervisors-' + $Guid
$SGroupDisplayName = 'Supervisors'

#Creating Tenant OU, Groups & Users OU, and Admins Groups
Import-Module ActiveDirectory

New-ADOrganizationalUnit -Name $OUName -Path $OUPath

New-ADOrganizationalUnit -Name $GroupsOUName -Path $GroupOUPath

New-ADOrganizationalUnit -Name $UsersOUName -Path $UsersOUPath

New-ADGroup -Name $MGroupPrefix -SamAccountName $MGroupSamAccountName -GroupCategory Security -GroupScope Global -DisplayName $MGroupDisplayName -Path $MGroupPath -Description "Members of this group are Managers of $OUName"

New-ADGroup -Name $SGroupPrefix -SamAccountName $SGroupSamAccountName -GroupCategory Security -GroupScope Global -DisplayName $SGroupDisplayName -Path $SGroupPath -Description "Members of this group are Supervisors of $OUName"

Active Directory – What are Linked Attributes?

A customer request to add some additional attributes to their Azure AD tenant via the Directory Extensions feature in the Azure AD Connect tool led me into further investigation. My last blog here set out the customer request, but what I didn't detail in that blog was that one of the attributes they also wanted to extend into Azure AD was directReports, an attribute they had used in the past in their custom-built on-premises applications to display the list of staff a user was the manager for. This led me down a rabbit hole where it took a while to reach the end.

With my past experience in using Microsoft Identity Manager (formerly Forefront Identity Manager), I knew directReports wasn't a real attribute stored in Active Directory, but rather a calculated value shown by the Active Directory Users and Computers console. directReports was calculated from the manager attribute values that referenced the user you were querying (phew, that was a mouthful). This is why directReports and other similar types of attributes, such as memberOf, were not selectable for Directory Extension in the Azure AD Connect tool. I had never bothered to understand it further than that until the customer also asked for a list of these types of attributes, so that they could tell their application developers they would need a different technique to determine these values in Azure AD. This is where the investigation started, which I would like to summarise here, as I found it very difficult to find this information in one place.

In short, these attributes in the Active Directory schema are Linked Attributes as detailed in this Microsoft MSDN article here:

Linked attributes are pairs of attributes in which the system calculates the values of one attribute (the back link) based on the values set on the other attribute (the forward link) throughout the forest. A back-link value on any object instance consists of the DNs of all the objects that have the object’s DN set in the corresponding forward link. For example, “Manager” and “Reports” are a pair of linked attributes, where Manager is the forward link and Reports is the back link. Now suppose Bill is Joe’s manager. If you store the DN of Bill’s user object in the “Manager” attribute of Joe’s user object, then the DN of Joe’s user object will show up in the “Reports” attribute of Bill’s user object.

I then found this article here, which further explains these forward and back links in terms of which are writable and which are read-only, with the example below referring to the linked attributes member/memberOf:

Not going too deep into the technical details, there’s another thing we need to know when looking at group membership and forward- and backlinks: forward-links are writable and backlinks are read-only. This means that only forward-links changed and the corresponding backlinks are computed automatically. That also means that only forward-links are replicated between DCs whereas backlinks are maintained by the DCs after that.

The takeaway from this is that the value in the forward link (the member attribute in this case) can be updated, but you cannot update the back link memberOf. Back links are always calculated automatically by the system whenever an attribute that is a forward link is modified.
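To make that concrete, here is a short PowerShell sketch using the ActiveDirectory module (the group and user names are hypothetical): membership is always written via the forward link, and the back link is only ever read.

```powershell
# Group membership is changed via the forward link (member);
# the back link (memberOf) is recalculated by the system.
# 'Sales-Managers' and 'jsmith' are hypothetical names.
Add-ADGroupMember -Identity 'Sales-Managers' -Members 'jsmith'

# Reading the computed back link on the user:
(Get-ADUser 'jsmith' -Properties memberOf).memberOf

# Attempting to write the back link directly is rejected by the DC, e.g.:
# Set-ADUser 'jsmith' -Replace @{memberOf='CN=Sales-Managers,...'}
```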

My final quest was to find the list of linked attributes without querying the Active Directory schema which then led me to this article here, which listed the common linked attributes:

  • altRecipient/altRecipientBL
  • dLMemRejectPerms/dLMemRejectPermsBL
  • dLMemSubmitPerms/dLMemSubmitPermsBL
  • msExchArchiveDatabaseLink/msExchArchiveDatabaseLinkBL
  • msExchDelegateListLink/msExchDelegateListBL
  • publicDelegates/publicDelegatesBL
  • member/memberOf
  • manager/directReports
  • owner/ownerBL
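If you do want to pull the list from your own forest’s schema, a short sketch (run from a domain-joined machine with the ActiveDirectory module): linked attribute pairs are identified by the linkID schema attribute, where the forward link has an even value and its matching back link is that value plus one.

```powershell
# Enumerate all linked attributes in the AD schema, paired by linkID.
# Forward links have even linkID values; the back link is linkID + 1.
$schemaNC = (Get-ADRootDSE).schemaNamingContext
Get-ADObject -SearchBase $schemaNC -LDAPFilter '(linkID=*)' `
    -Properties lDAPDisplayName, linkID |
    Sort-Object linkID |
    Select-Object lDAPDisplayName, linkID
```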

There is further, deeper technical information about linked attributes, such as “distinguished name tags” (DNT) and what is replicated between DCs versus what is calculated locally on a DC, which you can read at your leisure in the articles listed throughout this blog. But I hope this summary is enough information on how they work.

Create a Replica Domain Controller using Desired State Configuration

Originally posted on Nivlesh’s blog @

Welcome back. In this blog we will continue with our new Active Directory Domain and use Desired State Configuration (DSC) to add a replica domain controller to it, for redundancy.

If you have not read the first part of this blog series, I would recommend doing that before continuing (even if you need a refresher). The first blog can be found at Create a new Active Directory Forest using Desired State Configuration

Whenever you create an Active Directory Domain, you should have, at a minimum, two domain controllers. This ensures that domain services are available even if one domain controller goes down.

To create a replica domain controller we will be using the xActiveDirectory and xPendingReboot experimental DSC modules. Download these from

After downloading the zip files, extract the contents and place them in the PowerShell modules directory located at $env:ProgramFiles\WindowsPowerShell\Modules (unless your ProgramFiles location has been changed, this will be C:\Program Files\WindowsPowerShell\Modules)

With the above done, open Windows PowerShell ISE and let’s get started.

Paste the following into your PowerShell editor and save it using a filename of your choice (I have saved mine as CreateADReplicaDC.ps1 )

The above declares the parameters that need to be passed to the CreateADReplicaDC DSC configuration function ($DomainName, $Admincreds, $SafemodeAdminCreds). Some variables have also been declared ($RetryCount and $RetryIntervalSec). These will be used later within the configuration script.

Next, we have to import the experimental DSC modules that we have downloaded (unless the DSC modules are present out-of-the-box, they have to be imported so that they are available for use)

The $Admincreds parameter needs to be converted into a domain\username format. This is accomplished by using the following (the result is held in a variable called $DomainCreds)
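A sketch of that conversion, using the parameter names above (this is the pattern commonly used in ARM-deployed DSC scripts):

```powershell
# Build a domain-qualified credential (domain\username) from $Admincreds.
[System.Management.Automation.PSCredential]$DomainCreds = New-Object System.Management.Automation.PSCredential `
    ("${DomainName}\$($Admincreds.UserName)", $Admincreds.Password)
```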

Next, we need to tell DSC that the command has to be run on the local computer. This is done by the following line (localhost refers to the local computer)

Node localhost

As mentioned in the first blog, the DSC instructions are passed to the LocalConfigurationManager, which carries out the commands. We need to tell the LocalConfigurationManager that it should reboot the server if required, continue with the configuration after a reboot, and apply the settings only once. (DSC can apply a setting and constantly monitor it to check that it has not been changed; if it has, DSC can re-apply the setting. In our case we will not do this; we will apply the settings just once.)
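Those three behaviours correspond to a LocalConfigurationManager block along these lines:

```powershell
LocalConfigurationManager {
    ActionAfterReboot  = 'ContinueConfiguration'   # resume the configuration after a reboot
    ConfigurationMode  = 'ApplyOnly'               # apply once, don't continuously re-check
    RebootNodeIfNeeded = $true                     # allow DSC to reboot the server if required
}
```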

Next, we will install the Remote Server Administration Tools

Now, let’s install the Active Directory Domain Services role
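Both installs use the built-in WindowsFeature resource; a sketch (the block names are my own):

```powershell
WindowsFeature RSAT {
    Ensure = 'Present'
    Name   = 'RSAT'
}

WindowsFeature ADDSInstall {
    Ensure = 'Present'
    Name   = 'AD-Domain-Services'
}
```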

Before continuing on, we need to check if the Active Directory Domain that we are going to add the domain controller to is available. This is achieved by using the xWaitForADDomain function in the xActiveDirectory module.

Below is what we will use.
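Assuming the same resource name, DscForestWait, that the rest of the script references, the block would look something like:

```powershell
xWaitForADDomain DscForestWait {
    DomainName           = $DomainName
    DomainUserCredential = $DomainCreds
    RetryCount           = $RetryCount
    RetryIntervalSec     = $RetryIntervalSec
    DependsOn            = '[WindowsFeature]ADDSInstall'   # assumes the role install block is named ADDSInstall
}
```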

$DomainName is the name of the domain the domain controller will be joined to
$DomainCreds is the Administrator username and password for the domain (the username is in the format domain\username since we had converted this previously)

The DependsOn in the above code means that this section depends on the Active Directory Domain Services role to have been successfully installed.

So far, our script has successfully installed the Active Directory Domain Services role and has checked to ensure that the domain we will join the domain controller to is available.

So, let’s proceed. We will now promote the virtual machine to a replica domain controller of our new Active Directory domain.
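A sketch of the promotion, using the xADDomainController resource from the xActiveDirectory module (the block name and the paths are my assumptions):

```powershell
xADDomainController SecondDC {
    DomainName                    = $DomainName
    DomainAdministratorCredential = $DomainCreds
    SafemodeAdministratorPassword = $SafemodeAdminCreds
    DatabasePath                  = 'C:\NTDS'     # assumed paths on C:, see the note below
    LogPath                       = 'C:\NTDS'
    SysvolPath                    = 'C:\SYSVOL'
    DependsOn                     = '[xWaitForADDomain]DscForestWait'
}
```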

The above will only run after DscForestWait successfully signals that the Active Directory domain is available (denoted by DependsOn). The SafeModeAdministratorPassword is set to the password that was provided in the parameters. The DatabasePath, LogPath and SysvolPath are all set to locations on C:\. This might not be suitable for everyone. For those who are uncomfortable with this, I would suggest adding another data disk to your virtual machine and pointing the Database, Log and Sysvol paths to this new volume.

The only thing left now is to restart the server. So let’s go ahead and do that.
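With the xPendingReboot resource, that might look like this (names are my own):

```powershell
xPendingReboot RebootAfterPromotion {
    Name      = 'RebootAfterPromotion'
    # Assumes the promotion block above was named SecondDC; the reboot
    # only fires once the replica DC has been successfully added.
    DependsOn = '[xADDomainController]SecondDC'
}
```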

Keep in mind that the reboot will only happen if the domain controller was successfully added as a replica domain controller (this is due to the DependsOn).

That’s it. We now have a DSC configuration script that will add a replica domain controller to our Active Directory domain

The virtual machine that will be promoted to a replica domain controller needs to have the first domain controller’s IP address added to its DNS settings, and it also needs network connectivity to the first domain controller. These are required for the DSC configuration script to succeed.

The full DSC Configuration script is pasted below

If you would like to bootstrap the above into your Azure Resource Manager template, create a DSC extension and link it to the above DSC configuration script (the script, together with the experimental DSC modules, has to be packaged into a zip file and uploaded to a public location that ARM can access. I used GitHub for this).

One thing to note when bootstrapping is that after you create your first domain controller, you will need to inject its IP address into the DNS settings of the virtual machine that will become your replica domain controller. This is required so that the virtual machine can locate the Active Directory domain.

I accomplished this by using the following lines of code in my ARM template, right after the Active Directory Domain had been created.


  • DC01 is the name of the first domain controller that was created in the new Active Directory domain
  • DC01NicIPAddress is the IP address of DC01
  • DC02 is the name of the virtual machine that will be added to the domain as a replica domain controller
  • DC02NicName is the name of the network card on DC02
  • DC02NicIPAddress is the IP address of DC02
  • DC02SubnetRef is a reference to the subnet DC02 is in
  • nicTemplateUri is the URL of the ARM deployment template that will update the network interface settings for DC02 (it will inject the DNS settings). This file has to be uploaded to a location that ARM can access (for instance GitHub)

Below are the contents of the deployment template file that nicTemplateUri refers to

After the DNS settings have been updated, the following can be used to create a DSC extension to promote the virtual machine to a replica domain controller (it uses the DSC configuration script we had created earlier)

The variables used in the json file above are listed below (I have pointed repoLocation to my GitHub repository)

This concludes the DSC series for creating a new Active Directory domain and adding a replica domain controller to it.

Hope you all enjoyed this blog post.

Create a new Active Directory Forest using Desired State Configuration

Originally posted on Nivlesh’s blog @

Desired State Configuration (DSC) is a declarative language in which you state “what” you want done instead of going into the nitty gritty level to describe exactly how to get it done. Jeffrey Snover (the inventor of PowerShell) quotes Jean-Luc Picard from Star Trek: The Next Generation to describe DSC – it tells the servers to “Make it so”.

In this blog, I will show you how to use DSC to create a brand new Active Directory Forest. In my next blog post, for redundancy, we will add another domain controller to this new Active Directory Forest.

The DSC configuration script can be used in conjunction with Azure Resource Manager to deploy a new Active Directory Forest with a single click, how good is that!

A DSC Configuration script uses DSC Resources to describe what needs to be done. DSC Resources are made up of PowerShell script functions, which are used by the Local Configuration Manager to “Make it so”.

Windows PowerShell 4.0 and Windows PowerShell 5.0 come with a limited number of DSC Resources by default. Don’t let this trouble you, as you can always install more DSC modules! It’s as easy as downloading them from a trusted location and placing them in the PowerShell modules directory!

To get a list of the DSC Resources currently installed, execute the following command within PowerShell

Get-DscResource
Out of the box, there are no DSC modules available to create an Active Directory Forest. So, to start off, let’s go download a module that will do this for us.

For our purpose, we will be installing the xActiveDirectory PowerShell module (the x in front of ActiveDirectory means experimental), which can be downloaded from  (updates to the xActiveDirectory module are no longer provided at the above link; these are now published on GitHub at  . For simplicity, we will use the TechNet link above)

Download the xNetworking and xPendingReboot DSC modules as well (both are imported by the configuration script later on)

After downloading the zip files, extract the contents and place them in the PowerShell modules directory located at $env:ProgramFiles\WindowsPowerShell\Modules

($env: is a PowerShell reference to environment variables.)

For most systems, the modules folder is located at C:\Program Files\WindowsPowerShell\Modules

However, if you are unsure, run the following PowerShell command to get the location of $env:ProgramFiles 

Get-ChildItem env:ProgramFiles

Tip: To get a list of all environmental variables, run Get-ChildItem env:

With the pre-requisites taken care of, let’s start creating the DSC configuration script.

Open your Windows PowerShell ISE and let’s get started.

Paste the following into your PowerShell editor and save it using a filename of your choice (I have saved mine as CreateNewADForest.ps1)

Configuration CreateNewADForest {





The above declaration defines the parameters that will be passed to our configuration function. There are also two “constants” defined. I have named my configuration function CreateNewADForest, which coincides with the name of the file it is saved in – CreateNewADForest.ps1
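A sketch of that declaration, assuming parameter names consistent with those used later in the post ($DomainName, $AdminCreds, $myFirstUserCreds); the constant values shown are my assumptions:

```powershell
Configuration CreateNewADForest {
    param (
        [Parameter(Mandatory)]
        [String]$DomainName,

        [Parameter(Mandatory)]
        [System.Management.Automation.PSCredential]$AdminCreds,

        [Parameter(Mandatory)]
        [System.Management.Automation.PSCredential]$myFirstUserCreds
    )

    # The two "constants", used later by xWaitForADDomain (values assumed)
    $RetryCount       = 20
    $RetryIntervalSec = 30

    # ... DSC resources are added in the following sections ...
}
```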

Here is some explanation regarding the above parameters and constants

We now have to import the DSC modules that we had downloaded, so add the following line to the above Configuration Script

Import-DscResource -ModuleName xActiveDirectory, xNetworking, xPendingReboot

We now need to convert $AdminCreds to the domain\username format. We will store the result in a new object called $DomainCreds.

The next line states where the DSC commands will run. Since we are going to run this script from within the newly provisioned virtual machine, we will use localhost as the location

So enter the following

Node localhost

Next, we need to tell the Local Configuration Manager to apply the settings only once, reboot the server if needed (during our setup) and most importantly, to continue on with the configuration after reboot. This is done by using the following lines

LocalConfigurationManager
{
     ActionAfterReboot = 'ContinueConfiguration'
     ConfigurationMode = 'ApplyOnly'
     RebootNodeIfNeeded = $true
}

If you have worked with Active Directory before, you will be aware of the importance of DNS. Unfortunately, during my testing, I found that a DNS Server is not automatically deployed when creating a new Active Directory Forest. To ensure a DNS Server is present before we start creating a new Active Directory Forest, the following lines are needed in our script

WindowsFeature DNS
{
     Ensure = "Present"
     Name = "DNS"
}

The above is a declarative statement which nicely shows the “Make it so” nature of DSC. The above lines are asking the DSC Resource WindowsFeature to Ensure DNS (which refers to DNS Server) is Present. If it is not Present, then it will be installed, otherwise nothing will be done. This is the purpose of the Ensure = “Present” line. The word DNS after WindowsFeature is just a name I have given to this block of code.

Next, we will leverage the DSC function xDNSServerAddress (from the xNetworking module we had installed) to set the local computer’s DNS settings, to its loopback address. This will make the computer refer to the newly installed DNS Server.

xDnsServerAddress DnsServerAddress
{
     Address        = '127.0.0.1'
     InterfaceAlias = 'Ethernet'
     AddressFamily  = 'IPv4'
     DependsOn      = "[WindowsFeature]DNS"
}

Notice the DependsOn = “[WindowsFeature]DNS” in the above code. It tells the function that this block of code depends on the [WindowsFeature]DNS section. To put it another way, the above code will only run after the DNS Server has been installed. There you go, we have specified our first dependency.

Next, we will install the Remote Server Administration Tools

WindowsFeature RSAT
{
     Ensure = "Present"
     Name = "RSAT"
}

Now, we will install the Active Directory Domain Services feature in Windows Server. Note, this is just installation of the feature. It does not create the Active Directory Forest.

WindowsFeature ADDSInstall
{
     Ensure = "Present"
     Name = "AD-Domain-Services"
}

So far so good. Now for the most important event of all, creating the new Active Directory Forest. For this we will use the xADDomain function from the xActiveDirectory module.
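A sketch of that block, consistent with the resource name FirstDC that the xWaitForADDomain section later depends on (the paths and the choice of Safe Mode password are my assumptions):

```powershell
xADDomain FirstDC {
    DomainName                    = $DomainName
    DomainAdministratorCredential = $DomainCreds
    # Assumption: reusing the domain credential's password for DSRM
    SafemodeAdministratorPassword = $DomainCreds
    DatabasePath                  = 'C:\NTDS'    # assumed paths on C:
    LogPath                       = 'C:\NTDS'
    SysvolPath                    = 'C:\SYSVOL'
    DependsOn                     = '[xDnsServerAddress]DnsServerAddress'
}
```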

Notice above, we are pointing the DatabasePath, LogPath and SysvolPath all on to the C: drive. If you need these to be on a separate volume, then ensure an additional data disk is added to your virtual machine and then change the drive letter accordingly in the above code. This section has a dependency on the server’s DNS settings being changed.

If you have previously installed a new Active Directory forest, you will remember how long it takes for the configuration to finish. We have to cater for this in our DSC script, to wait for the Forest to be successfully provisioned, before proceeding. Here, we will leverage on the xWaitForADDomain DSC function (from the xActiveDirectoy module we had installed).

xWaitForADDomain DscForestWait
{
     DomainName = $DomainName
     DomainUserCredential = $DomainCreds
     RetryCount = $RetryCount
     RetryIntervalSec = $RetryIntervalSec
     DependsOn = "[xADDomain]FirstDC"
}

Voila! The new Active Directory Forest has been successfully provisioned! Let’s add a new user to Active Directory and then restart the server.

xADUser FirstUser
{
     DomainName = $DomainName
     DomainAdministratorCredential = $DomainCreds
     UserName = $myFirstUserCreds.Username
     Password = $myFirstUserCreds
     Ensure = "Present"
     DependsOn = "[xWaitForADDomain]DscForestWait"
}

The username and password are what had been supplied in the parameters to the configuration function.

That’s it! Our new Active Directory Forest is up and running. All that is needed now is a reboot of the server to complete the configuration!

To reboot the server, we will use the xPendingReboot resource (from the xPendingReboot module that we installed)

xPendingReboot Reboot1
{
     Name = "RebootServer"
     DependsOn = "[xWaitForADDomain]DscForestWait"
}

Your completed Configuration Script should look like the below (I have taken the liberty of closing off all brackets and parentheses)

You can copy the above script to a newly created Azure virtual machine and deploy it manually.

However, a more elegant method would be to bootstrap it into your Azure Resource Manager virtual machine template. One caveat is that the DSC configuration script has to be packaged into a zip file and then uploaded to a location that Azure Resource Manager can access (that means it can’t live on your local computer). You can upload it to an Azure Storage blob container and use a shared access token to give access to your script, or upload it to a public repository like GitHub.

I have packaged and uploaded the script to my GitHub repository at

The zip file also contains the additional DSC Modules that were downloaded. To use the above in your Azure Resource Manager template, create a PowerShell DSC Extension and link it to the Azure virtual machine that will become your Active Directory Domain Controller. Below is an example of the extension you can create in your ARM template (the server that will become the first domain controller is called DC01 in the code below).

Below is an excerpt of the relevant variables

And here are the relevant parameters

That’s it folks! Now you have a new shiny Active Directory Forest, that was created for you using a DSC configuration script.

In the second part of this two-part series, we will add an additional domain controller to this Active Directory Forest using DSC configuration script.

Let me know what you think of the above information.

Good practices for implementing a healthy Azure Active Directory identity foundation

Originally posted on Lucian.Blog. Follow Lucian on Twitter @LucianFrango for daily doses of cloud.

This is part frustrated (mild) rant, part helpful hint and, like the title says, part public service announcement. While I am most definitely a glass-half-full kind of person, and I don’t get stressed out much or fazed by pressure much, I, like anyone, do get annoyed with certain things. Let’s start with a quick vent before continuing on with the public service announcement.

Rather than just have a rant or a whinge, let me explain the situation, and by doing so I’ll most likely vent some frustration.

Deploying a Hybrid Exchange environment to integrate with Exchange Online in Office 365 can be a behemoth of a task if you lack certain things. While anyone can say any number of criteria or list off important considerations, pre-requisites or requirements, I feel that there is only one thing that needs to be addressed. One thing that sticks in my mind as being the foundation of the endeavour.

Just like the old saying goes “you can’t build a strong house on weak foundations”; the same applies to that initial journey to the cloud.

I say initial journey because, for many organisations, after setting up a website (which can be the beginning of becoming a cloud-focused organisation), Office 365 is the first move of infrastructure, processes and systems to the cloud that I see happen the most.

Importantly, though, realise that as amazing and full of features as Office 365 is, when deploying a Hybrid environment to leverage what I consider the best of both worlds – a hybrid identity – all that matters is the existing Active Directory Domain Services (ADDS) environment. That is ALL THAT MATTERS.

Step aside Active Directory Federation Services (ADFS) and Azure AD Connect (AADConnect) or even Hybrid Exchange Server 2016 itself. All those components sit on top of the foundation of the existing identity and directory services that is the ADDS environment.

ADDS is so crucial as it is the key link in the chain; so much so that if it has issues, the entire project can easily and quickly run into trouble and delays. Delays lead to cost. Delays lead to unhappy management. Delays lead to unhappy users. Delays lead to the people working on the project going grey.

I remember a time when I had blonde curly hair. I grew out of that and as I got older my hair darkened to a rich, chocolate (at least 70% cocoa) brown. Now, as each project gets notched on my belt, slowly, the slick chocolate locks are giving way to the odd silky, white sign that there’s not enough emphasis on a well-managed and organised ADDS.

Read More

Dynamic Active Directory User Provisioning placement (OU) using the Granfeldt Powershell Management Agent

When using Forefront / Microsoft Identity Manager for provisioning users into Active Directory, determining which organisational unit (OU) to place the user in varies from customer to customer. Some AD OU structures are flat, others hierarchical based on business, departmental, functional role or geography. Basically every implementation I’ve done has been different.

That said, the most recent implementation I’ve done is for an organisation that is growing, and as such the existing structure is in flux and based on differing logic depending on who you talk to. Coming soon, they’re restructuring their OU structure, but not before MIM goes live. Damn.

This blog post details my interim solution to enable provisioning of new users into AD with an existing complex OU structure until the customer simplifies their hierarchy with upcoming work and without having to maintain a matrix of mapping tables.


In this blog post I’ll document how you can perform dynamic Active Directory user provisioning placement utilising Søren Granfeldt’s extremely versatile PowerShell Management Agent.

Using the Synchronisation Rule for AD Outbound, I land the new AD account in a landing OU. My Placement MA (using the PS MA) sees accounts in this landing OU and then queries back into AD using the new user’s Title (title) and State (st) attributes to find other AD users with the same values and their locations (DN). It orders the results and moves our new AD account to the location with the greatest number of users with the same metadata.

My User Placement MA is used in conjunction with an Active Directory MA and Declarative Rules in the MIM Portal.

Getting Started with the Granfeldt PowerShell Management Agent

In case you haven’t read my other PS MA blog posts here is the background on getting started with the Granfeldt PS MA. If you have you can skip through this section.

First up, you can get it from here. Søren’s documentation is pretty good but does assume you have a working knowledge of FIM/MIM, and this blog post is no different. Configuration tasks like adding additional attributes to the User object class in the MIM Portal, updating MPRs, flow rules, Workflows, Sets etc. are assumed knowledge, and if not, are easily Bing’able for you to work out.

Three items I had to work out that I’ll save you the pain of are:

  • You must have a Password.ps1 file. Even though we’re not doing password management on this MA, the PS MA requires a file for this field. The .ps1 doesn’t need to have any logic/script inside it. It just needs to be present
  • The credentials you give the MA to run the scripts as need to be in the format of just ‘accountname’, NOT ‘domain\accountname’. I’m using the service account that I’ve used for the Active Directory MA. The target system is the same directory service and the account has the permissions required (the account needs rights to move user objects between the OUs in scope)
  • The path to the scripts in the PS MA Config must not contain spaces and be in old-skool 8.3 format. I’ve chosen to store my scripts in an appropriately named subdirectory under the MIM Extensions directory. Tip: from a command shell use “dir /x” to get the 8.3 directory format name. Mine looks like C:\PROGRA~1\MICROS~4\2010\SYNCHR~1\EXTENS~2\Placement

Schema Script (schema.ps1)

As I’m using the OOTB (out of the box) Active Directory MA to provision the AD account, and only using this MA to move the AD object, the schema only consists of the attributes needed to work out where the user should reside and to update them.
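A minimal schema.ps1 sketch in the Granfeldt PS MA’s expected format – a PSCustomObject whose note property names follow the "attributeName|Type" convention. The attribute selection here is an assumption based on the description above, and the sample values are placeholders:

```powershell
# Hedged sketch of a Granfeldt PS MA schema.ps1 for this placement MA.
$obj = New-Object -TypeName PSCustomObject
$obj | Add-Member -MemberType NoteProperty -Name 'Anchor-AccountName|String' -Value 'user1'
$obj | Add-Member -MemberType NoteProperty -Name 'objectClass|String' -Value 'user'
$obj | Add-Member -MemberType NoteProperty -Name 'distinguishedName|String' -Value 'CN=user1,OU=Landing,DC=contoso,DC=com'
$obj | Add-Member -MemberType NoteProperty -Name 'targetOU|String' -Value 'OU=Sales,DC=contoso,DC=com'
$obj
```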

Password Script (password.ps1)

Just a placeholder file as detailed above.

Import Script (import.ps1)

The import script looks to see if the user is currently residing in the LandingOU that the AD MA puts new users into. If the user is, the import script gets the user’s Title and State attribute values and searches AD to find any other users with the same State and Title, pulling back their OUs. The script discounts any users that are in the Expired OU structure and any users (including the user being processed) that are in the Landing OU. It orders the list by occurrence and then selects the first entry. This becomes the location we will move the user to. Once the user is moved, we also update the targetOU custom attribute with the user’s new location, to keep the synchronisation rule happy and no longer trigger a move on the user (unless someone moves the user back to the LandingOU). The locations of the LandingOU and the Expired OUs are set near the top of the script. If we don’t find any matches, I have a catch-all per State (st) where we put the user; they can then be moved manually later if required.
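The core of that placement logic might be sketched like this (the OU paths and variable names are hypothetical; the real script also emits the PS MA import objects):

```powershell
# Hedged sketch of the OU-selection logic described above.
$landingOU = 'OU=Landing,DC=contoso,DC=com'   # hypothetical paths
$expiredOU = 'OU=Expired,DC=contoso,DC=com'

$user  = Get-ADUser $samAccountName -Properties title, st
$peers = Get-ADUser -LDAPFilter "(&(title=$($user.title))(st=$($user.st)))" |
    Where-Object { $_.DistinguishedName -notlike "*$landingOU" -and
                   $_.DistinguishedName -notlike "*$expiredOU" }

# Parent OU of each match, grouped by occurrence; take the most common.
$targetOU = $peers |
    ForEach-Object { ($_.DistinguishedName -split ',', 2)[1] } |
    Group-Object | Sort-Object Count -Descending |
    Select-Object -First 1 -ExpandProperty Name
```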

On the $request line you’ll see I have a filter for the User object class and for users with an account name starting with ‘U’. For the customer I wrote this for, all usernames start with U. You can obviously also have filters on the MA. I have a few there as well, but limiting the size of the returned array makes full imports quicker to run.

Export Script (export.ps1)

The export script basically performs the move of the user’s AD account.
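In essence (identifiers hypothetical):

```powershell
# The export step boils down to moving the object to the OU
# computed on import ($targetOU); MIM then confirms the export.
Move-ADObject -Identity $userDN -TargetPath $targetOU
```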

Wiring it all together

In order to wire the functionality all together there are the usual number of configuration steps to be completed. Below I’ve shown a number of the key points associated with making it all work.

Basically: create the PS MA, import attributes from the PS MA, add any additional attributes to the Portal schema, update the Portal filter to allow Administrators to use the attribute, update the Synchronisation MPR to allow the Sync Engine to flow in the new attributes, create the Set used for the transition, create your Synchronisation Rule, create your Placement Workflow, create your Placement MPR, create your MA Run Profiles, and let it loose.

Management Agent Configuration

As per the tips above, the format for the script paths must be without spaces etc. I’m using 8.3 format and I’m using the same service account as my AD MA.

Password script must be specified but as we’re not doing password management it’s empty as detailed above.

If your schema.ps1 file is formatted correctly you can select your attributes.

My join rule is simple: AccountName (which, as you’ll see in the Import.ps1, is aligned with sAMAccountName) to AccountName in the Metaverse.

My import flow just brings the DN back into a custom MV attribute.

Synchronisation Rules

My Placement MA outbound sync rule relies on the value of the new OU that was brought in by the import script.

We flow out the new DN for the user object.


I created a Set that I use as a transition to trigger the move of the user object on my Placement MA. My Set checks that the user account is in AD and active, and that the user is currently in the LandingOU. (I have a Boolean attribute, not shown, and another rule in the MIM Portal that is set based on an advanced flow rule in the Sync Engine with some logic to determine whether the employment date, as sourced from my HR Management Agent, is current and the user should be active.)


An action-based workflow that will trigger the Synchronisation Rule on the Placement MA.


Finally my MPR for moving the AD object is based on the transition set,

and my Placement Workflow.


Using the Granfeldt PowerShell MA, it was very easy to accomplish something that would have been a nightmare to do with declarative rules, or unmaintainable using a matrix of mapping tables. It would also be possible to use this as a basis to move users when their State or Title changes. This example could also easily be reworked to use different attributes.

Follow Darren on Twitter @darrenjrobinson