Creating Organizational Units and Groups in AD with a GUID

A recent client of Kloud wanted to be able to create new organizational units and groups automatically, with a unique ID (GUID) for each organizational unit. The groups created needed to share the GUID of their OU.

In this blog, I will demonstrate how you can achieve this through a simple PowerShell script, naturally.

Before you start, however, you may have to launch PowerShell as Administrator and execute the following cmdlet:

Set-ExecutionPolicy RemoteSigned

This is to allow PowerShell scripts to run on the computer.

How to define a GUID

To define the GUID in PowerShell, we use the [GUID] type accelerator: [guid]::NewGuid()


Defining the OU, and Groups.

The first part of the script consists of the following definitions:


param()
#Define GUID

$Guid = [guid]::NewGuid()

This defines the GUID: a unique ID resembling “0fda3c39-c5d8-424f-9bac-654e2cf045ed” that will be added to the OU, and to the groups within that OU.


#Define OU Name

$OUName = (Read-Host 'Input OU name') + '-' + $Guid


This defines the name of the OU; it requires the administrator’s input.



#Define Managers Group Name

$MGroupPrefix = (Read-Host -Prompt 'Input Managers Group name')

if ($MGroupPrefix) {write-host 'The specified Managers Name will be used.' -foreground 'yellow'}

else {

$MGroupPrefix = 'Managers';

Write-Host 'Default Managers Group name will be used.' -foreground 'yellow'

}

$MGroupPrefix = $MGroupPrefix + '-' + $Guid

$MGroupPrefix

This defines the name of the Managers group; it will also be followed by a “-” and the GUID.

If the administrator enters a name, that name will be used; otherwise, the default name “Managers” applies.


#Define Supervisors Group Name

$SGroupPrefix = (Read-Host -Prompt 'Input Supervisors Group name')

if ($SGroupPrefix) {write-host 'The specified Supervisors Group name will be used.' -foreground 'yellow'}

else {

$SGroupPrefix = 'Supervisors'

Write-Host 'Default Supervisors name will be used.' -foreground 'yellow'

}

$SGroupPrefix = $SGroupPrefix + '-' + $Guid

$SGroupPrefix

This will define the second group name, “Supervisors”; again, it will be followed by “-” and a GUID.

In this section, however, I didn’t specify a name, to demonstrate how the default name (“Supervisors” + “-” + GUID) comes into place, as defined in the “else” block. Simply press “Enter”.

Defining the OU Name, group name, and path

#Defining Main OU Name, OUs, and path

$OUPath = 'OU=Departments,DC=contoso,DC=lab'

$GroupsOUName = 'Groups'

$GroupOUPath = 'OU=' + $OUName + ',OU=Departments,DC=contoso,DC=lab'

$UsersOUName = 'Users'

$UsersOUPath = 'OU=' + $OUName + ',OU=Departments,DC=contoso,DC=lab'

The above parameters define the OU Path in which the main OU will be created, e.g. Finance.

It will also create two other OUs, “Groups” and “Users”, in the specified path.

Defining the groups (that will be created in the “Groups” OU), the group names, and the path.


#Defining Groups, groups name and path

$MGroupPath = 'OU=Groups, OU=' + $OUName + ',OU=Departments,DC=contoso,DC=lab'

$MGroupSamAccountName = 'Managers-' + $Guid

$MGroupDisplayName = 'Managers'

$SGroupPath = 'OU=Groups, OU=' + $OUName + ',OU=Departments,DC=contoso,DC=lab' 

$SGroupSamAccountName = 'Supervisors-' + $Guid

$SGroupDisplayName = 'Supervisors'

The above parameters define the groups’ path, SamAccountName and Display Name; the groups will be created under the main OU, e.g. Finance.

Creating the Organizational units, and groups.

As you may have noticed, everything we have defined above is just variables; nothing will be created until we call the cmdlets.


#Creating Tenant OU, Groups & Users OU, and Admins Groups

Import-Module ActiveDirectory

New-ADOrganizationalUnit -Name $OUName -Path $OUPath

New-ADOrganizationalUnit -Name $GroupsOUName -Path $GroupOUPath

New-ADOrganizationalUnit -Name $UsersOUName -Path $UsersOUPath

New-ADGroup -Name $MGroupPrefix -SamAccountName $MGroupSamAccountName -GroupCategory Security -GroupScope Global -DisplayName $MGroupDisplayName -Path $MGroupPath -Description "Members of this group are Managers of $OUName"

New-ADGroup -Name $SGroupPrefix -SamAccountName $SGroupSamAccountName -GroupCategory Security -GroupScope Global -DisplayName $SGroupDisplayName -Path $SGroupPath -Description "Members of this group are Supervisors of $OUName"

Once completed, you will have the following OU/Department.


And in the Groups OU, you will have two groups: Managers (as defined in the previous step), and “Supervisors”, which got the default name, as the administrator did not specify a name for the group.


Notice how the GUID is shared across the Departments, and the Groups.

Putting it all together:

Note: The SYNOPSIS, DESCRIPTION, EXAMPLE, and NOTES sections are used by Get-Help.


<#

.SYNOPSIS
This is a PowerShell script to create Organizational Units (OUs), and groups.

.DESCRIPTION
The script will create:
1) An OU, defined by the administrator. The OU name is defined by the administrator, followed by a unique ID generated through GUID. It will be created in a specific OU path.
2) Two OUs consisting of 'Groups' and 'Users'.
3) Two groups, 'Managers' and 'Supervisors'. These are the default names, followed by the GUID. An administrator can define group names as desired; the GUID will be added by default.

.EXAMPLE
./CreateOUs.ps1

.NOTES
This PowerShell script should be run as administrator.

#>

param()

#Define GUID
$Guid = [guid]::NewGuid()

#Define OU Name
$OUName  = (Read-Host 'Input OU name') + '-' + $Guid

#Define First Group Name
$MGroupPrefix = (Read-Host -Prompt 'Input Managers Group name')
if ($MGroupPrefix) {write-host 'The specified Managers Name will be used.' -foreground 'yellow'}
else {
$MGroupPrefix = 'Managers'
Write-Host 'Default Managers Group name will be used' -foreground 'yellow'
}
$MGroupPrefix = $MGroupPrefix + '-' + $Guid

$MGroupPrefix
#Define Second Group Name
$SGroupPrefix = (Read-Host -Prompt 'Input Supervisors Group name')
if ($SGroupPrefix) {write-host 'The specified Supervisors Group name will be used.' -foreground 'yellow'}
else {
$SGroupPrefix = 'Supervisors'
Write-Host 'Default Supervisors Group name will be used' -foreground 'yellow'
}

$SGroupPrefix = $SGroupPrefix + '-' + $Guid

$SGroupPrefix

#Defining Main OU Name, OUs, and path
$OUPath  = 'OU=Departments,DC=contoso,DC=lab'
$GroupsOUName = 'Groups'
$GroupOUPath = 'OU=' + $OUName + ',OU=Departments,DC=contoso,DC=lab'
$UsersOUName = 'Users'
$UsersOUPath = 'OU=' + $OUName + ',OU=Departments,DC=contoso,DC=lab'

#Defining Groups, groups name and path
$MGroupPath = 'OU=Groups, OU=' + $OUName + ',OU=Departments,DC=contoso,DC=lab'
$MGroupSamAccountName = 'Managers-' + $Guid
$MGroupDisplayName = 'Managers'
$SGroupPath = 'OU=Groups, OU=' + $OUName + ',OU=Departments,DC=contoso,DC=lab'
$SGroupSamAccountName = 'Supervisors-' + $Guid
$SGroupDisplayName = 'Supervisors'

#Creating Tenant OU, Groups & Users OU, and Admins Groups
Import-Module ActiveDirectory

New-ADOrganizationalUnit -Name $OUName -Path $OUPath

New-ADOrganizationalUnit -Name $GroupsOUName -Path $GroupOUPath

New-ADOrganizationalUnit -Name $UsersOUName -Path $UsersOUPath

New-ADGroup -Name $MGroupPrefix -SamAccountName $MGroupSamAccountName -GroupCategory Security -GroupScope Global -DisplayName $MGroupDisplayName -Path $MGroupPath -Description "Members of this group are Managers of $OUName"

New-ADGroup -Name $SGroupPrefix -SamAccountName $SGroupSamAccountName -GroupCategory Security -GroupScope Global -DisplayName $SGroupDisplayName -Path $SGroupPath -Description "Members of this group are Supervisors of $OUName"
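
To verify the result, you can query the new objects back out of AD. A quick check, assuming the variables above are still in scope:

#Verify the OUs and groups were created
Get-ADOrganizationalUnit -SearchBase $GroupOUPath -Filter * | Select-Object Name, DistinguishedName
Get-ADGroup -SearchBase $MGroupPath -Filter * | Select-Object Name, SamAccountName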


Active Directory – What are Linked Attributes?

A customer request to add some additional attributes to their Azure AD tenant via the Directory Extensions feature in the Azure AD Connect tool led me into further investigation. My last blog here set out the customer request, but what I didn’t detail in that blog was that one of the attributes they also wanted to extend into Azure AD was directReports, an attribute they had used in the past for their custom-built on-premises applications to display the list of staff the user was a manager for. This led me down a rabbit hole, and it took a while to reach the end.

With my past experience in using Microsoft Identity Manager (formerly Forefront Identity Manager), I knew directReports wasn’t a real attribute stored in Active Directory, but rather a calculated value shown by the Active Directory Users and Computers console. directReports is based on the values of the manager attribute that contain the reference to the user you are querying (phew, that was a mouthful). This is why directReports and other similar types of attributes, such as memberOf, are not selectable for Directory Extension in the Azure AD Connect tool. I had never bothered to understand it further than that, until the customer also asked for a list of these types of attributes so that they could tell their application developers they would need a different technique to determine these values in Azure AD. This is where the investigation started, which I would like to summarise, as I found it very difficult to find this information in one place.

In short, these attributes in the Active Directory schema are Linked Attributes as detailed in this Microsoft MSDN article here:

Linked attributes are pairs of attributes in which the system calculates the values of one attribute (the back link) based on the values set on the other attribute (the forward link) throughout the forest. A back-link value on any object instance consists of the DNs of all the objects that have the object’s DN set in the corresponding forward link. For example, “Manager” and “Reports” are a pair of linked attributes, where Manager is the forward link and Reports is the back link. Now suppose Bill is Joe’s manager. If you store the DN of Bill’s user object in the “Manager” attribute of Joe’s user object, then the DN of Joe’s user object will show up in the “Reports” attribute of Bill’s user object.

I then found this article here, which further explained these forward and back links with respect to which are writable and which are read-only; the example below refers to the linked attributes member/memberOf:

Not going too deep into the technical details, there’s another thing we need to know when looking at group membership and forward- and backlinks: forward-links are writable and backlinks are read-only. This means that only forward-links changed and the corresponding backlinks are computed automatically. That also means that only forward-links are replicated between DCs whereas backlinks are maintained by the DCs after that.

The take-out from this is that the value in the forward link can be updated (the member attribute in this case), but you cannot update the back link, memberOf. Back links are always calculated automatically by the system whenever an attribute that is a forward link is modified.

My final quest was to find the list of linked attributes without querying the Active Directory schema which then led me to this article here, which listed the common linked attributes:

  • altRecipient/altRecipientBL
  • dLMemRejectPerms/dLMemRejectPermsBL
  • dLMemSubmitPerms/dLMemSubmitPermsBL
  • msExchArchiveDatabaseLink/msExchArchiveDatabaseLinkBL
  • msExchDelegateListLink/msExchDelegateListBL
  • publicDelegates/publicDelegatesBL
  • member/memberOf
  • manager/directReports
  • owner/ownerBL
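
If you do want to generate the list yourself, the schema can be queried for it: linked attribute pairs are tied together by the linkID attribute, where the forward link has an even linkID and its matching back link has that value plus one. A quick sketch using the ActiveDirectory PowerShell module:

#List linked attribute pairs from the AD schema
#Forward links have an even linkID; the matching back link is linkID + 1
Import-Module ActiveDirectory
$schemaNC = (Get-ADRootDSE).schemaNamingContext
$linked = Get-ADObject -SearchBase $schemaNC -LDAPFilter '(linkID=*)' -Properties linkID, lDAPDisplayName
foreach ($fwd in ($linked | Where-Object { $_.linkID % 2 -eq 0 })) {
    $back = $linked | Where-Object { $_.linkID -eq ($fwd.linkID + 1) }
    if ($back) { '{0} / {1}' -f $fwd.lDAPDisplayName, $back.lDAPDisplayName }
}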

There is further, deeper technical information about linked attributes such as “distinguished name tags” (DNT) and what is replicated between DCs versus what is calculated locally on a DC, which you can read in your own leisure in the articles listed throughout this blog. But I hope the summary is enough information on how they work.

Create a Replica Domain Controller using Desired State Configuration

Originally posted on Nivlesh’s blog @ nivleshc.wordpress.com

Welcome back. In this blog we will continue with our new Active Directory Domain and use Desired State Configuration (DSC) to add a replica domain controller to it, for redundancy.

If you have not read the first part of this blog series, I would recommend doing that before continuing (even if you need a refresher). The first blog can be found at Create a new Active Directory Forest using Desired State Configuration

Whenever you create an Active Directory Domain, you should have, at a minimum, two domain controllers. This ensures that domain services are available even if one domain controller goes down.

To create a replica domain controller we will be using the xActiveDirectory and xPendingReboot experimental DSC modules. Download these from

https://gallery.technet.microsoft.com/scriptcenter/xActiveDirectory-f2d573f3

https://gallery.technet.microsoft.com/scriptcenter/xPendingReboot-PowerShell-b269f154

After downloading the zip files, extract the contents and place them in the PowerShell modules directory located at $env:ProgramFiles\WindowsPowerShell\Modules (unless you have relocated your Program Files folder, this will be C:\Program Files\WindowsPowerShell\Modules).

With the above done, open Windows PowerShell ISE and let’s get started.

Paste the following into your PowerShell editor and save it using a filename of your choice (I have saved mine as CreateADReplicaDC.ps1).
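
The declaration looks something like the following sketch, modelled on the CreateNewADForest configuration from the first post (not necessarily the exact original):

Configuration CreateADReplicaDC {
param(
      [Parameter(Mandatory)]
      [String]$DomainName,

      [Parameter(Mandatory)]
      [System.Management.Automation.PSCredential]$Admincreds,

      [Parameter(Mandatory)]
      [System.Management.Automation.PSCredential]$SafemodeAdminCreds,

      [Int]$RetryCount=20,
      [Int]$RetryIntervalSec=30
)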

The above declares the parameters that need to be passed to the CreateADReplicaDC DSC configuration function ($DomainName, $Admincreds, $SafemodeAdminCreds). Some variables have also been declared ($RetryCount and $RetryIntervalSec); these will be used later within the configuration script.

Next, we have to import the experimental DSC modules that we have downloaded (unless the DSC modules are present out-of-the-box, they have to be imported so that they are available for use)
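
For this configuration, the import is:

Import-DscResource -ModuleName xActiveDirectory, xPendingReboot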

The $Admincreds parameter needs to be converted into a domain\username format. This is accomplished by using the following (the result is held in a variable called $DomainCreds)
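
The standard pattern for this conversion is the following sketch:

[System.Management.Automation.PSCredential]$DomainCreds = New-Object System.Management.Automation.PSCredential ("${DomainName}\$($Admincreds.UserName)", $Admincreds.Password)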

Next, we need to tell DSC that the command has to be run on the local computer. This is done by the following line (localhost refers to the local computer)

Node localhost

As mentioned in the first blog, the DSC instructions are passed to the LocalConfigurationManager which carries out the commands. We need to tell the LocalConfigurationManager it should reboot the server if required, to continue with the configuration after a reboot, and to just apply the settings once (DSC can apply a setting and constantly monitor it to check that it has not been changed. If the setting is found to be changed, DSC can re-apply the setting. In our case we will not do this, we will apply the setting just once).
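
The block for this is the same as in the first post:

LocalConfigurationManager
{
     ActionAfterReboot = 'ContinueConfiguration'
     ConfigurationMode = 'ApplyOnly'
     RebootNodeIfNeeded = $true
}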

Next, we will install the Remote Server Administration Tools
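
As in the first post, this is a simple WindowsFeature block:

WindowsFeature RSAT
{
     Ensure = "Present"
     Name = "RSAT"
}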

Now, let’s install the Active Directory Domain Services role.
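
Again, mirroring the first post:

WindowsFeature ADDSInstall
{
     Ensure = "Present"
     Name = "AD-Domain-Services"
}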

Before continuing on, we need to check if the Active Directory Domain that we are going to add the domain controller to is available. This is achieved by using the xWaitForADDomain function in the xActiveDirectory module.

Below is what we will use.
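
This is a sketch, reconstructed from the notes below and the matching block in the first post:

xWaitForADDomain DscForestWait
{
     DomainName = $DomainName
     DomainUserCredential = $DomainCreds
     RetryCount = $RetryCount
     RetryIntervalSec = $RetryIntervalSec
     DependsOn = "[WindowsFeature]ADDSInstall"
}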

where
$DomainName is the name of the domain the domain controller will be joined to
$DomainCreds is the Administrator username and password for the domain (the username is in the format domain\username since we had converted this previously)

The DependsOn in the above code means that this section depends on the Active Directory Domain Services role to have been successfully installed.

So far, our script has successfully installed the Active Directory Domain Services role and has checked to ensure that the domain we will join the domain controller to is available.

So, let’s proceed. We will now promote the virtual machine to a replica domain controller of our new Active Directory domain.
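
The xActiveDirectory module provides the xADDomainController resource for this. A sketch consistent with the description below (the resource name SecondDC and the C:\ paths are illustrative):

xADDomainController SecondDC
{
     DomainName = $DomainName
     DomainAdministratorCredential = $DomainCreds
     SafemodeAdministratorPassword = $SafemodeAdminCreds
     DatabasePath = "C:\NTDS"
     LogPath = "C:\NTDS"
     SysvolPath = "C:\SYSVOL"
     DependsOn = "[xWaitForADDomain]DscForestWait"
}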

The above will only run after DscForestWait successfully signals that the Active Directory domain is available (denoted by DependsOn). The SafemodeAdministratorPassword is set to the password that was provided in the parameters. The DatabasePath, LogPath and SysvolPath are all set to a location on C:\. This might not be suitable for everyone; if you are uncomfortable with this, I would suggest adding another data disk to your virtual machine and pointing the Database, Log and Sysvol paths to this new volume.

The only thing left now is to restart the server, so let’s go ahead and do that.
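
A sketch using the xPendingReboot resource (names illustrative):

xPendingReboot RebootAfterPromotion
{
     Name = "RebootAfterPromotion"
     DependsOn = "[xADDomainController]SecondDC"
}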

Keep in mind that the reboot will only happen if the domain controller was successfully added as a replica domain controller (this is due to the DependsOn).

That’s it. We now have a DSC configuration script that will add a replica domain controller to our Active Directory domain.

The virtual machine that will be promoted to a replica domain controller needs to have the first domain controller’s IP address added to its DNS settings, and it also needs network connectivity to the first domain controller. These are required for the DSC configuration script to succeed.

The full DSC Configuration script is pasted below

If you would like to bootstrap the above to your Azure Resource Manager template, create a DSC extension and link it to the above DSC Configuration script (the script, together with the experimental DSC modules has to be packaged into a zip file and uploaded to a public location which ARM can access. I used GitHub for this)

One thing to note when bootstrapping is that after you create your first domain controller, you will need to inject its IP address into the DNS settings of the virtual machine that will become your replica domain controller. This is required so that the virtual machine can locate the Active Directory domain.

I accomplished this by using the following lines of code in my ARM template, right after the Active Directory Domain had been created.

where

DC01 is the name of the first domain controller that was created in the new Active Directory domain
DC01NicIPAddress is the IP address of DC01
DC02 is the name of the virtual machine that will be added to the domain as a replica Domain Controller
DC02NicName is the name of the network card on DC02
DC02NicIPAddress is the IP address of DC02
DC02SubnetRef is a reference to the subnet DC02 is in
nicTemplateUri is the URL of the ARM deployment template that will update the network interface settings for DC02 (it will inject the DNS settings). This file has to be uploaded to a location that ARM can access (for instance GitHub)

Below are the contents of the deployment template file that nicTemplateUri refers to

After the DNS settings have been updated, the following can be used to create a DSC extension to promote the virtual machine to a replica domain controller (it uses the DSC configuration script we had created earlier)

The variables used in the json file above are listed below (I have pointed repoLocation to my GitHub repository)

This concludes the DSC series for creating a new Active Directory domain and adding a replica domain controller to it.

Hope you all enjoyed this blog post.

Create a new Active Directory Forest using Desired State Configuration

Originally posted on Nivlesh’s blog @ nivleshc.wordpress.com

Desired State Configuration (DSC) is a declarative language in which you state “what” you want done instead of going into the nitty gritty level to describe exactly how to get it done. Jeffrey Snover (the inventor of PowerShell) quotes Jean-Luc Picard from Star Trek: The Next Generation to describe DSC – it tells the servers to “Make it so”.

In this blog, I will show you how to use DSC to create a brand new Active Directory Forest. In my next blog post, for redundancy, we will add another domain controller to this new Active Directory Forest.

The DSC configuration script can be used in conjunction with Azure Resource Manager to deploy a new Active Directory Forest with a single click, how good is that!

A DSC Configuration script uses DSC Resources to describe what needs to be done. DSC Resources are made up of PowerShell script functions, which are used by the Local Configuration Manager to “Make it so”.

Windows PowerShell 4.0 and Windows PowerShell 5.0, by default, come with a limited number of DSC Resources. Don’t let this trouble you, as you can always install more DSC modules! It’s as easy as downloading them from a trusted location and placing them in the PowerShell modules directory!

To get a list of DSC Resources currently installed, within PowerShell, execute the following command

Get-DscResource

Out of the box, there are no DSC modules available to create an Active Directory Forest. So, to start off, let’s go download a module that will do this for us.

For our purpose, we will be installing the xActiveDirectory PowerShell module (the x in front of ActiveDirectory means experimental), which can be downloaded from https://gallery.technet.microsoft.com/scriptcenter/xActiveDirectory-f2d573f3 (updates to the xActiveDirectory module are no longer provided at the above link; instead they are now published on GitHub at https://github.com/PowerShell/xActiveDirectory. For simplicity, we will use the TechNet link above).

Download the following DSC modules as well

https://gallery.technet.microsoft.com/scriptcenter/xNetworking-Module-818b3583

https://gallery.technet.microsoft.com/scriptcenter/xPendingReboot-PowerShell-b269f154

After downloading the zip files, extract the contents and place them in the PowerShell modules directory located at $env:ProgramFiles\WindowsPowerShell\Modules folder

($env: is a PowerShell reference to environment variables).

For most systems, the modules folder is located at C:\Program Files\WindowsPowerShell\Modules

However, if you are unsure, run the following PowerShell command to get the location of $env:ProgramFiles 

Get-ChildItem env:ProgramFiles

Tip: To get a list of all environmental variables, run Get-ChildItem env:

With the pre-requisites taken care of, let’s start creating the DSC configuration script.

Open your Windows PowerShell ISE and let’s get started.

Paste the following into your PowerShell editor and save it using a filename of your choice (I have saved mine as CreateNewADForest.ps1)

Configuration CreateNewADForest {
param(
      [Parameter(Mandatory)]
      [String]$DomainName,

      [Parameter(Mandatory)]
      [System.Management.Automation.PSCredential]$AdminCreds,

      [Parameter(Mandatory)]
      [System.Management.Automation.PSCredential]$SafeModeAdminCreds,

      [Parameter(Mandatory)]
      [System.Management.Automation.PSCredential]$myFirstUserCreds,

      [Int]$RetryCount=20,
      [Int]$RetryIntervalSec=30
)

The above declaration defines the parameters that will be passed to our configuration function. There are also two “constants” defined ($RetryCount and $RetryIntervalSec). I have named my configuration function CreateNewADForest, which coincides with the name of the file it is saved in: CreateNewADForest.ps1

Here is some explanation regarding the above parameters and constants

We now have to import the DSC modules that we had downloaded, so add the following line to the above Configuration Script

Import-DscResource -ModuleName xActiveDirectory, xNetworking, xPendingReboot

We now need to convert $AdminCreds into the format domain\username. We will store the result in a new object called $DomainCreds.
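
The commonly used pattern for this is the following sketch:

[System.Management.Automation.PSCredential]$DomainCreds = New-Object System.Management.Automation.PSCredential ("${DomainName}\$($AdminCreds.UserName)", $AdminCreds.Password)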

The next line states where the DSC commands will run. Since we are going to run this script from within the newly provisioned virtual machine, we will use localhost as the location

So enter the following

Node localhost
{

Next, we need to tell the Local Configuration Manager to apply the settings only once, reboot the server if needed (during our setup) and most importantly, to continue on with the configuration after reboot. This is done by using the following lines

LocalConfigurationManager
{
     ActionAfterReboot = 'ContinueConfiguration'
     ConfigurationMode = 'ApplyOnly'
     RebootNodeIfNeeded = $true
}

If you have worked with Active Directory before, you will be aware of the importance of DNS. Unfortunately, during my testing, I found that a DNS Server is not automatically deployed when creating a new Active Directory Forest. To ensure a DNS Server is present before we start creating a new Active Directory Forest, the following lines are needed in our script

WindowsFeature DNS
{
     Ensure = "Present"
     Name = "DNS"
}

The above is a declarative statement which nicely shows the “Make it so” nature of DSC. The above lines are asking the DSC Resource WindowsFeature to Ensure DNS (which refers to DNS Server) is Present. If it is not Present, then it will be installed, otherwise nothing will be done. This is the purpose of the Ensure = “Present” line. The word DNS after WindowsFeature is just a name I have given to this block of code.

Next, we will leverage the DSC resource xDnsServerAddress (from the xNetworking module we installed) to set the local computer’s DNS settings to its loopback address. This will make the computer refer to the newly installed DNS Server.

xDnsServerAddress DnsServerAddress
{
     Address        = '127.0.0.1'
     InterfaceAlias = 'Ethernet'
     AddressFamily  = 'IPv4'
     DependsOn = "[WindowsFeature]DNS"
}

Notice the DependsOn = “[WindowsFeature]DNS” in the above code. It tells the function that this block of code depends on the [WindowsFeature]DNS section. To put it another way, the above code will only run after the DNS Server has been installed. There you go, we have specified our first dependency.

Next, we will install the Remote Server Administration Tools

WindowsFeature RSAT
{
     Ensure = "Present"
     Name = "RSAT"
}

Now, we will install the Active Directory Domain Services feature in Windows Server. Note, this is just installation of the feature. It does not create the Active Directory Forest.

WindowsFeature ADDSInstall
{
     Ensure = "Present"
     Name = "AD-Domain-Services"
}

So far so good. Now for the most important event of all, creating the new Active Directory Forest. For this we will use the xADDomain function from the xActiveDirectory module.
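
A sketch of that block is below; the resource name FirstDC matches the [xADDomain]FirstDC reference further down, while the C:\ paths are illustrative:

xADDomain FirstDC
{
     DomainName = $DomainName
     DomainAdministratorCredential = $DomainCreds
     SafemodeAdministratorPassword = $SafeModeAdminCreds
     DatabasePath = "C:\NTDS"
     LogPath = "C:\NTDS"
     SysvolPath = "C:\SYSVOL"
     DependsOn = "[xDnsServerAddress]DnsServerAddress"
}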

Notice above, we are pointing the DatabasePath, LogPath and SysvolPath all to the C: drive. If you need these to be on a separate volume, then ensure an additional data disk is added to your virtual machine and then change the drive letters accordingly in the above code. This section has a dependency on the server’s DNS settings being changed.

If you have previously installed a new Active Directory forest, you will remember how long it takes for the configuration to finish. We have to cater for this in our DSC script and wait for the Forest to be successfully provisioned before proceeding. Here, we will leverage the xWaitForADDomain DSC function (from the xActiveDirectory module we had installed).

xWaitForADDomain DscForestWait
{
     DomainName = $DomainName
     DomainUserCredential = $DomainCreds
     RetryCount = $RetryCount
     RetryIntervalSec = $RetryIntervalSec
     DependsOn = "[xADDomain]FirstDC"
}

Voila! The new Active Directory Forest has been successfully provisioned! Let’s add a new user to Active Directory and then restart the server.

xADUser FirstUser
{
     DomainName = $DomainName
     DomainAdministratorCredential = $DomainCreds
     UserName = $myFirstUserCreds.Username
     Password = $myFirstUserCreds
     Ensure = "Present"
     DependsOn = "[xWaitForADDomain]DscForestWait"
}

The username and password are what had been supplied in the parameters to the configuration function.

That’s it! Our new Active Directory Forest is up and running. All that is needed now is a reboot of the server to complete the configuration!

To reboot the server, we will use the xPendingReboot module (from the xPendingReboot package that we installed)

xPendingReboot Reboot1
{
     Name = "RebootServer"
     DependsOn = "[xWaitForADDomain]DscForestWait"
}

Your completed Configuration Script should look like below (I have taken the liberty of closing off all brackets and parenthesis)

You can copy the above script to a newly created Azure virtual machine and deploy it manually.

However, a more elegant method is to bootstrap it to your Azure Resource Manager virtual machine template. One caveat is that the DSC configuration script has to be packaged into a zip file and then uploaded to a location that Azure Resource Manager can access (that means it can’t live on your local computer). You can upload it to your Azure Storage blob container and use a shared access token to give access to your script, or upload it to a public repository like GitHub.

I have packaged and uploaded the script to my GitHub repository at https://raw.githubusercontent.com/nivleshc/arm/master/CreateNewADForest.zip

The zip file also contains the additional DSC Modules that were downloaded. To use the above in your Azure Resource Manager template, create a PowerShell DSC Extension and link it to the Azure virtual machine that will become your Active Directory Domain Controller. Below is an example of the extension you can create in your ARM template (the server that will become the first domain controller is called DC01 in the code below).

Below is an excerpt of the relevant variables

And here are the relevant parameters

That’s it folks! Now you have a shiny new Active Directory Forest that was created for you using a DSC configuration script.

In the second part of this two-part series, we will add an additional domain controller to this Active Directory Forest using a DSC configuration script.

Let me know what you think of the above information.

Good practices for implementing a healthy Azure Active Directory identity foundation

Originally posted on Lucian’s blog at www.clouduccino.com. Follow Lucian on Twitter @LucianFrango for daily doses of cloud.


This is part frustrated (mild) rant, part helpful hint and, like the title says, part public service announcement. While I am most definitely a glass-half-full kind of person, and I don’t get stressed out or fazed by pressure much, I, like anyone, do get annoyed with certain things. Let’s start with a quick vent before continuing on with the public service announcement.

Rather than just have a rant or a whinge, let me explain the situation; by doing so I’ll most likely vent some frustration.

Deploying a Hybrid Exchange environment to integrate with Exchange Online in Office 365 can be a behemoth of a task if you lack certain things. While anyone can say any number of criteria or list off important considerations, pre-requisites or requirements, I feel that there is only one thing that needs to be addressed. One thing that sticks in my mind as being the foundation of the endeavour.

Just like the old saying goes “you can’t build a strong house on weak foundations”; the same applies to that initial journey to the cloud.

I say initial journey as, for many organisations, Office 365 is truly the first step I see taken to move infrastructure, processes and systems to the cloud; often it is the first step, after setting up a website, towards becoming a cloud-focused organisation.

Importantly, though, realise that as amazing and full of features as Office 365 is, when deploying a Hybrid environment to leverage what I consider the best of both worlds, a hybrid identity, all that matters is the existing Active Directory Domain Services (ADDS) environment. That is ALL THAT MATTERS.

Step aside Active Directory Federation Services (ADFS) and Azure AD Connect (AADConnect) or even Hybrid Exchange Server 2016 itself. All those components sit on top of the foundation of the existing identity and directory services that is the ADDS environment.

ADDS is so crucial as it is the key link in the chain; so much so that if it has issues, the entire project can easily and quickly run into trouble and delays. Delays lead to cost. Delays lead to unhappy management. Delays lead to unhappy users. Delays lead to the people working on the project going grey.

I remember a time when I had blonde curly hair. I grew out of that and as I got older my hair darkened to a rich, chocolate (at least 70% cocoa) brown. Now, as each project gets notched on my belt, slowly, the slick chocolate locks are giving way to the odd silky, white sign that there’s not enough emphasis on a well-managed and organised ADDS.


Dynamic Active Directory User Provisioning placement (OU) using the Granfeldt Powershell Management Agent

When using Forefront / Microsoft Identity Manager for provisioning users into Active Directory, determining which organisational unit (OU) to place the user in varies from customer to customer. Some AD OU structures are flat, others hierarchical based on business, departmental, functional role or geography. Basically every implementation I’ve done has been different.

That said, the most recent implementation I’ve done is for an organisation that is growing, and as such the existing structure is in flux and based on differing logic depending on who you talk to. Coming soon they’re restructuring their OU structure, but not before MIM goes live. Damn.

This blog post details my interim solution to enable provisioning of new users into AD with an existing complex OU structure until the customer simplifies their hierarchy with upcoming work and without having to maintain a matrix of mapping tables.

Overview

In this blog post I’ll document how you can perform dynamic Active Directory user provisioning placement utilising Søren Granfeldt’s extremely versatile PowerShell Management Agent.

Using the Synchronisation Rule for AD Outbound, I land the new AD account in a landing OU. My Placement MA (using the PS MA) sees accounts in this landing OU and then queries back into AD using the new user’s Title (title) and State (st) attributes to find other AD users with the same values and their locations (DNs). It orders the results and moves our new AD account to the location with the greatest number of users sharing the same metadata.

My User Placement MA is used in conjunction with an Active Directory MA and Declarative Rules in the MIM Portal.

Getting Started with the Granfeldt PowerShell Management Agent

In case you haven’t read my other PS MA blog posts, here is the background on getting started with the Granfeldt PS MA. If you have, you can skip through this section.

First up, you can get it from here. Søren’s documentation is pretty good but does assume you have a working knowledge of FIM/MIM, and this blog post is no different. Configuration tasks like adding additional attributes to the User Object Class in the MIM Portal, updating MPRs, flow rules, Workflows, Sets etc. are assumed knowledge, and if not, it’s easily Bing’able for you to work out.

Three items I had to work out that I’ll save you the pain of are;

  • You must have a Password.ps1 file. Even though we’re not doing password management on this MA, the PS MA requires a file for this field. The .ps1 doesn’t need to have any logic/script inside it. It just needs to be present
  • The credentials you give the MA to run the scripts as need to be in the format of just ‘accountname’, NOT ‘domain\accountname’. I’m using the service account that I’ve used for the Active Directory MA. The target system is the same directory service and the account has the permissions required (you’ll need to add the service account to the appropriate Lync role group for user management)
  • The path to the scripts in the PS MA Config must not contain spaces and be in old-skool 8.3 format. I’ve chosen to store my scripts in an appropriately named subdirectory under the MIM Extensions directory. Tip: from a command shell use “dir /x” to get the 8.3 directory format name. Mine looks like C:\PROGRA~1\MICROS~4\2010\SYNCHR~1\EXTENS~2\Placement

Schema Script (schema.ps1)

As I’m using the OOTB (out of the box) Active Directory MA to provision the AD account and only using this MA to move the AD object, the schema only consists of the attributes needed to work out where the user should reside and to update them.

Password Script (password.ps1)

Just a placeholder file as detailed above.

Import Script (import.ps1)

The import script looks to see if the user is currently residing in the LandingOU that the AD MA puts new users into. If the user is, the Import Script gets the user’s ‘Title’ and ‘State’ attribute values and searches AD to find any other users with the same State and Title, pulling back their OUs. The script discounts any users that are in the Expired OU structure and any users (including the user being processed) that are in the Landing OU. It orders the list by occurrence and then selects the first entry. This becomes the location we will move the user to. We also update the targetOU custom attribute with where the user is once moved, to keep the synchronisation rule happy and no longer trigger a move on the user (unless someone moves the user back to the LandingOU). The locations of the LandingOU and the Expired OUs are set near the top of the script. If we don’t find any matches, I have a catch-all OU per State (st) where we will put the user; they can then be moved manually later if required.
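
To illustrate the core of that placement logic, here is a simplified sketch; the OU distinguished names and the $title/$state inputs are placeholders, not the script’s actual values:

#Find the OU holding the most users with the same Title and State,
#ignoring users in the Landing and Expired OU structures
$landingOU = 'OU=Landing,DC=customer,DC=com'   #placeholder
$expiredOU = 'OU=Expired,DC=customer,DC=com'   #placeholder

$peers = Get-ADUser -LDAPFilter "(&(title=$title)(st=$state))" |
    Where-Object { $_.DistinguishedName -notlike "*$landingOU" -and $_.DistinguishedName -notlike "*$expiredOU" }

$targetOU = $peers |
    ForEach-Object { ($_.DistinguishedName -split ',', 2)[1] } |   #parent OU of each match (assumes no escaped commas)
    Group-Object | Sort-Object Count -Descending |
    Select-Object -First 1 -ExpandProperty Name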

On the $request line you’ll see I have a filter for the User Object Class and users with an account name starting with ‘U’. For the customer I wrote this for, all users start with U. You can obviously also have Filters on the MA; I have a few there as well, but limiting the size of the returned array makes full imports quicker to run.

Export Script (export.ps1)

The export script basically does the move of the user’s AD account.
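
At its heart that is a Move-ADObject call, along the lines of this sketch ($userDN and $targetOU being the values determined during import):

#Move the user's AD account to the OU determined during import
Move-ADObject -Identity $userDN -TargetPath $targetOU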

Wiring it all together

In order to wire the functionality all together there are the usual number of configuration steps to be completed. Below I’ve shown a number of the key points associated with making it all work.

Basically create the PS MA, import attributes from the PS MA, add any additional attributes to the Portal Schema, update the Portal Filter to allow Administrators to use the attribute, update the Synchronisation MPR to allow the Sync Engine to flow in the new attributes, create the Set used for the transition, create your Synchronisation Rule, create your Placement Workflow, create your Placement MPR, create your MA Run Profiles and let it loose.

Management Agent Configuration

As per the tips above, the format for the script paths must be without spaces etc. I’m using 8.3 format and I’m using the same service account as my AD MA.

Password script must be specified but as we’re not doing password management it’s empty as detailed above.

If your schema.ps1 file is formatted correctly you can select your attributes.

My join rule is simple. AccountName (which as you’ll see in the Import.ps1 is aligned with sAMAccountName) to AccountName in the MetaVerse.

My import flow is just bringing back in the DN to a custom MV attribute.

Synchronisation Rules

My Placement MA Outbound Sync rule relies on the value of the new OU that was brought in by the Import script.

We flow out the new DN for the user object.

Set

I created a Set that I use as a transition to trigger the move of the user object on my Placement MA. My Set looks to see if the user account is in AD and active (I have a Boolean attribute not shown and another rule in the MIM Portal that is set based on an advanced flow rule in the Sync engine that has some logic to determine if employment date as sourced from my HR Management Agent is current and the user should be active) and that the user is currently in the LandingOU.

Workflow

An action-based workflow that will trigger the Synchronisation rule on the Placement MA.

MPR

Finally my MPR for moving the AD object is based on the transition set,

and my Placement Workflow.

Summary

Using the Granfeldt PowerShell MA it was very easy to accomplish something that would have been a nightmare to do with declarative rules, or un-maintainable using a matrix of mapping tables. It would also be possible to use this as a basis to move users when their State or Title changes. This example could also be reworked easily to use different attributes.

Follow Darren on Twitter @darrenjrobinson

Managing AD Terminal Services Configuration with FIM / MIM using the Granfeldt PowerShell Management Agent

Forefront / Microsoft Identity Manager contains numerous Management Agents (MAs) out of the box. However, an MA for managing AD Terminal Services user configuration isn’t one of them. And at first pass you’d think you could just manipulate a few attributes in AD on an AD MA, like you do for home directories (aside from creating the directory and permissions on the filesystem), and you’d be done. Don’t worry, I made that wrong assumption too.

Overview

In this blog post I’ll document how you can enable Active Directory users with the necessary attributes and file system elements utilising Søren Granfeldt’s extremely versatile PowerShell Management Agent. I’ll show you how to do the minimum: provision a terminal services home directory and a terminal services profile directory, update the associated user’s AD account with those paths and the terminal services drive letter, and allow the user to logon. Understanding how this is done, you can then easily extend the functionality for lifecycle management (e.g. changing the location of the terminal services directories, deleting them for de-provisioning). For the file system directories and permissions I’ve re-used logic from my AD home directory PS MA detailed here, which is based on Kent Nordström’s home directory MA, stripped back with a few other subtle changes.

My Terminal Services MA is used in conjunction with an Active Directory MA and Declarative Rules in the MIM Portal.

A note on managing Terminal Services Configuration

Terminal Services configuration properties are, historically, held in a single attribute ‘userParameters’ (a BLOB, a Binary Large OBject). Inside the userParameters BLOB are the four attributes we’re interested in;

  • allowLogon
  • terminalServicesHomeDirectory
  • terminalServicesProfilePath
  • terminalServicesHomeDrive

What you see via LDAP in the userParameters attribute is either an empty attribute (no terminal services configuration) or an attribute full of encoded characters. We’ll use this as our trigger for the MIM rules to determine who has an existing configuration and who doesn’t.

Getting Started with the Granfeldt PowerShell Management Agent

In case you haven’t read my other PS MA blog posts, here is the background on getting started with the Granfeldt PS MA. If you have, you can skip through this section.

First up, you can get it from here. Søren’s documentation is pretty good but does assume you have a working knowledge of FIM/MIM, and this blog post is no different. Configuration tasks like adding additional attributes to the User Object Class in the MIM Portal, updating MPRs, flow rules, Workflows, Sets etc. are assumed knowledge, and if not, it’s easily Bing’able for you to work out.

Three items I had to work out that I’ll save you the pain of are;

  • You must have a Password.ps1 file. Even though we’re not doing password management on this MA, the PS MA requires a file for this field. The .ps1 doesn’t need to have any logic/script inside it. It just needs to be present
  • The credentials you give the MA to run the scripts as need to be in the format of just ‘accountname’, NOT ‘domain\accountname’. I’m using the service account that I’ve used for the Active Directory MA. The target system is the same directory service and the account has the permissions required (you’ll need to add the service account to the appropriate Lync role group for user management)
  • The path to the scripts in the PS MA Config must not contain spaces and be in old-skool 8.3 format. I’ve chosen to store my scripts in an appropriately named subdirectory under the MIM Extensions directory. Tip: from a command shell use “dir /x” to get the 8.3 directory format name. Mine looks like C:\PROGRA~1\MICROS~4\2010\SYNCHR~1\EXTENS~2\TermServ

Schema Script (schema.ps1)

As I’m using the OOTB (out of the box) Active Directory MA to provision the AD account and only showing provisioning of the terminal services configuration, the schema only consists of the attributes needed to create the terminal services directories, set the terminal services drive letter and update the user object in AD.

Password Script (password.ps1)

Just a placeholder file as detailed above.

Import Script (import.ps1)

The import script looks to see if there is a terminal services config for the user (in the userParameters BLOB) and then uses the ADSI provider, which gives us access to the terminal services configuration attributes, to read and flow through the four attributes we’re interested in.
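
For illustration, reading those four properties via ADSI (the IADsTSUserEx interface) looks roughly like this sketch, with $userDN as a placeholder:

#Read the terminal services settings via ADSI / IADsTSUserEx
$user = [ADSI]"LDAP://$userDN"
$allowLogon = $user.PSBase.InvokeGet('AllowLogon')
$tsHomeDirectory = $user.PSBase.InvokeGet('TerminalServicesHomeDirectory')
$tsProfilePath = $user.PSBase.InvokeGet('TerminalServicesProfilePath')
$tsHomeDrive = $user.PSBase.InvokeGet('TerminalServicesHomeDrive')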

On the $request line you’ll see I have a filter for the User Object Class and users with an account name starting with ‘U’. For the customer I wrote this for, all users start with U. You can obviously also have Filters on the MA; I have a few there as well, but limiting the size of the returned array makes full imports quicker to run.

Export Script (export.ps1)

The export script is doing a couple of things. It is creating the terminal services home and profile directories, setting the necessary permissions on the home directory for the user, and then updating the AD object with the terminal services home and profile directory paths and the terminal services drive letter, whilst also allowing the user to logon.
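
The AD update side of that, sketched with the same ADSI interface (directory creation and permissions omitted; values are placeholders):

#Update the terminal services settings via ADSI / IADsTSUserEx
$user = [ADSI]"LDAP://$userDN"
$user.PSBase.InvokeSet('TerminalServicesHomeDirectory', $tsHomeDirectory)
$user.PSBase.InvokeSet('TerminalServicesProfilePath', $tsProfilePath)
$user.PSBase.InvokeSet('TerminalServicesHomeDrive', 'H:')
$user.PSBase.InvokeSet('AllowLogon', 1)
$user.SetInfo()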

Wiring it all together

In order to wire the functionality all together there are the usual number of configuration steps to be completed. Below I’ve shown a number of the key points associated with making it all work.

Basically create the PS MA, import attributes from the PS MA, add any additional attributes to the Portal Schema, update the Portal Filter to allow Administrators to use the attribute, update the Synchronisation MPR to allow the Sync Engine to flow in the new attributes, create the Set used for the transition, create your Synchronisation Rule, create your Terminal Services Workflow, create your Terminal Services MPR, create your MA Run Profiles and let it loose.

Management Agent Configuration

As per the tips above, the format for the script paths must be without spaces etc. I’m using 8.3 format and I’m using the same service account as my AD MA.

Password script must be specified but as we’re not doing password management it’s empty as detailed above.

If your schema.ps1 file is formatted correctly you can select your attributes.

My join rule is simple. AccountName (which as you’ll see in the Import.ps1 is aligned with sAMAccountName) to AccountName in the MetaVerse.

My import flow is just bringing back in the DN to a custom MV attribute.

Synchronisation Rules

My Terminal Services Outbound Sync rule doesn’t need to be very complex. All it is doing is sync’ing out the Terminal Services Profile and Directory paths, Terminal Services Drive Letter and Allow Logon attributes.

Set

I created a Set that I use as a transition to trigger provisioning to the Terminal Services MA. My Set looks to see if the user account is in AD and active (I have a Boolean attribute not shown and another rule in the MIM Portal that is set based on an advanced flow rule in the Sync engine that has some logic to determine if employment date as sourced from my HR Management Agent is current and the user should be active) and that the Terminal Services Config is not already set (this is also based on a Boolean attribute I set via an Advanced Flow rule on my AD MA based on whether there is a value in the userParameters attribute). See my Exchange mailbox provisioning post using the PS MA for an example of setting a boolean attribute based on attribute being present or not.

Workflow

An action based workflow that is used to trigger the Synchronisation rule on the Terminal Services MA.

MPR

Finally my MPR for provisioning Terminal Services config is based on the transition set,

and my Terminal Services Workflow.

Summary

Using the Granfeldt PowerShell MA it was very easy to provision a terminal services configuration to new users and permission the directories correctly. It is also easy to extend the solution to manage moving terminal services directories to different locations, deletion etc.

 

Follow Darren on Twitter @darrenjrobinson

Provisioning Home Directories for Active Directory Users with FIM / MIM using the Granfeldt PowerShell Management Agent

Forefront / Microsoft Identity Manager contains numerous Management Agents (MA’s) out of the box. However an MA for creating user home directories and setting the associated permissions isn’t one of them.

Over the years I’ve accomplished home directory provisioning and permissioning in Active Directory / Windows File Services and Novell eDirectory / Novell File Services using methods that aren’t strictly best practice / supported (e.g. calling native libraries from within a Management Agent Extension to create/manage/delete etc.). Whilst this functionally works, the ability for end customers to maintain the implementation through changes is limited.

Overview

In this blog post I’ll document how you can enable an Active Directory user with a home directory with the required permissions, and update their associated AD object with the home directory path and drive letter, utilising Søren Granfeldt’s extremely versatile PowerShell Management Agent. I’ll show you how to do the minimum of provisioning a home directory to a user. Understanding how this is done, you can then easily extend the functionality for lifecycle management (e.g. changing the location of the home directory, adding sub-directories, deleting for de-provisioning). I didn’t start from scratch on this one, as Kent Nordström had most of what I needed, as detailed here. I stripped it back to do what I wanted, with a few other subtle changes.

My Home Directory MA is used in conjunction with an Active Directory MA and Declarative Rules in the MIM Portal.

Getting Started with the Granfeldt PowerShell Management Agent

First up, you can get it from here. Søren’s documentation is pretty good but does assume you have a working knowledge of FIM/MIM, and this blog post is no different. Configuration tasks like adding additional attributes to the User Object Class in the MIM Portal, updating MPRs, flow rules, Workflows, Sets etc. are assumed knowledge, and if not, it’s easily Bing’able for you to work out.

Three items I had to work out that I’ll save you the pain of are;

  • You must have a Password.ps1 file. Even though we’re not doing password management on this MA, the PS MA requires a file for this field. The .ps1 doesn’t need to have any logic/script inside it. It just needs to be present
  • The credentials you give the MA to run the scripts as need to be in the format of just ‘accountname’, NOT ‘domain\accountname’. I’m using the service account that I’ve used for the Active Directory MA. The target system is the same directory service and the account has the permissions required (you’ll need to add the service account to the appropriate Lync role group for user management)
  • The path to the scripts in the PS MA Config must not contain spaces and be in old-skool 8.3 format. I’ve chosen to store my scripts in an appropriately named subdirectory under the MIM Extensions directory. Tip: from a command shell use “dir /x” to get the 8.3 directory format name. Mine looks like C:\PROGRA~1\MICROS~4\2010\SYNCHR~1\EXTENS~2\HomeDir

Schema Script (schema.ps1)

As I’m using the OOTB (out of the box) Active Directory MA to provision the AD account and only showing provisioning of home directories, the schema only consists of the attributes needed to create the home directory and update the user object in AD.

Password Script (password.ps1)

Just a placeholder file as detailed above.

Import Script (import.ps1)

Nothing too complex: bringing back in the two attributes that we export, to allow the confirming import to complete.
On the $request line you’ll see I have a filter for the User Object Class and users with an account name starting with ‘U’. For the customer I wrote this for, all users start with U. You can obviously also have Filters on the MA; I have a few there as well, but limiting the size of the returned array makes full imports quicker to run.

Export Script (export.ps1)

The export script is doing a couple of things. It is creating the home directory, setting the necessary permissions on the home directory for the user, and then updating the AD object with the home directory path and the home directory drive letter.
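
As a rough sketch of those steps (the share path, domain name and drive letter are illustrative, not the script’s actual values):

#Create the home directory, grant the user Modify rights, update AD
$homePath = '\\fileserver\home$\' + $sAMAccountName
New-Item -Path $homePath -ItemType Directory -Force | Out-Null

$acl = Get-Acl -Path $homePath
$rule = New-Object System.Security.AccessControl.FileSystemAccessRule("DOMAIN\$sAMAccountName", 'Modify', 'ContainerInherit,ObjectInherit', 'None', 'Allow')
$acl.AddAccessRule($rule)
Set-Acl -Path $homePath -AclObject $acl

Set-ADUser -Identity $sAMAccountName -HomeDirectory $homePath -HomeDrive 'H:'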

Wiring it all together

In order to wire the functionality all together there are the usual number of configuration steps to be completed. Below I’ve shown a number of the key points associated with making it all work.

Basically create the PS MA, import attributes from the PS MA, add any additional attributes to the Portal Schema, update the Portal Filter to allow Administrators to use the attribute, update the Synchronisation MPR to allow the Sync Engine to flow in the new attributes, create the Set used for the transition, create your Synchronisation Rule, create your Home Directory Workflow, create your Home Directory MPR, create your MA Run Profiles and let it loose.

Management Agent Configuration

As per the tips above, the format for the script paths must be without spaces etc. I’m using 8.3 format and I’m using the same service account as my AD MA.

Password script must be specified, but as we’re not doing password management it’s empty, as detailed above.

If your schema.ps1 file is formatted correctly you can select your attributes.

My join rule is simple. AccountName (which as you’ll see in the Import.ps1 is aligned with sAMAccountName) to AccountName in the MetaVerse.

My import flow is just bringing back in the DN to a custom MV attribute.

Synchronisation Rules

My Home Directory Outbound Sync rule doesn’t need to be very complex. All it is doing is sync’ing out the Home Directory Drive Letter and Home Directory Path attributes.

Set

I created a Set that I use as a transition to trigger provisioning to the Home Directory. My Set looks to see if the user account is in AD and active (I have a Boolean attribute not shown and another rule in the MIM Portal that is set based on an advanced flow rule in the Sync engine that has some logic to determine if employment date as sourced from my HR Management Agent is current and the user should be active) and that the Home Folder Path is not already set.

Workflow

An action-based workflow that will trigger the Synchronisation rule on the Home Directory MA.

MPR

Finally my MPR for provisioning Home Directories is based on the transition set,

and my Home Directory Workflow.

Summary

Using the Granfeldt PowerShell MA it was very easy to provision home directories to new users and permission them correctly. It is also easy to extend the solution to manage moving home directories to different locations, deletion etc.

Follow Darren on Twitter @darrenjrobinson

Azure Active Directory Connect high-availability using ‘Staging Mode’

With the Azure Active Directory Connect product (AAD Connect) announced as generally available to the market (more here, download here), there is a new feature available that provides faster recovery of the AAD Sync component. This feature was not available with the previous AAD Sync or DirSync tools, and there is little information about it available in the community, so hopefully this model can be considered for your synchronisation design.

Even though the AAD Sync component within the AAD Connect product is based on the Forefront Identity Manager (FIM) synchronisation service, it does not take on the same recovery techniques as FIM. For AAD Sync, you prepare two servers (ideally in different data centres) and install AAD Connect on both. Your primary server is configured to perform the main role of synchronising identities between your Active Directory and Azure Active Directory; the second server is installed in much the same way, but with the ‘enable staging mode’ setting selected. Both servers are independent and do not share any components such as SQL Server, and the second server performs the same tasks as the primary except for the following:

  • No exports occur to your on-premises Active Directory
  • No exports occur to Azure Active Directory
  • Password synchronisation and password writeback are disabled.

Should the primary server go offline for a long period of time or become unrecoverable, you can enable the second server by simply running the installation wizard again and disabling staging mode. When the task schedule next runs, it will perform a delta import and synchronisation and identify any differences between the state of the previous primary server and the current server.

Some other items you might want to consider with this design.

  • Configure the task schedule on the second server so that it runs soon after the primary server completes. By default the task schedule runs every 3 hours and launches at a time which is based on when it was installed, therefore the second server task schedule can launch up to a few hours after the primary server runs. Based on the average amount of work the primary server takes, configure the task schedule on the second server to launch say 5-10 minutes later
  • AAD Sync includes a tool called CSExportAnalyzer which displays changes that are staged to be exported to Active Directory or Azure Active Directory. This tool is useful to report on pending exports while the server is in ‘staging mode’ (see the example after this list)
  • Consider using ‘custom sync groups’ which are located in your Active Directory domain. The default installation of the AAD Sync component will create the following groups locally on the server: ADSyncAdmins, ADSyncOperators, ADSyncBrowse and ADSyncPasswordSet. With more than one AAD Sync server, these groups need to be managed on the servers and kept in sync manually. Having the groups in your Active Directory will simplify this administration.

    NOTE: This feature is not yet working with the current AAD Connect download and this blog will be updated when working 100%.
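
For reference, a typical CSExportAnalyzer run from the synchronisation service’s Bin folder looks like this sketch (the connector name is illustrative):

#Export the connector space and summarise the pending exports
.\csexport.exe "contoso.onmicrosoft.com - AAD" "$env:temp\export.xml" /f:x
.\CSExportAnalyzer.exe "$env:temp\export.xml" > "$env:temp\export.csv"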

The last two items will be detailed in future blogs.

Azure Active Directory Synchronization Tool: Password Sync as Backup for AD FS Federated Domains

Kloud has helped many Australian businesses leverage Microsoft cloud services such as Office 365, Intune and Microsoft Azure and most have implemented Active Directory Federation Services (AD FS) to provide a highly available Single Sign-On (SSO) user experience. In mid-2013, the Windows Azure Active Directory Synchronization Tool was updated to support password synchronisation with Azure Active Directory, which provided an alternative way to leverage on-premises authored identities with Microsoft’s cloud services.

Password synchronisation is a feature of the Azure Active Directory Sync Tool that synchronises the password hash from your on-premises Active Directory environment to Azure Active Directory. In this scenario users are able to log into Office 365 using the same password as they use in the on-premises environment, similar to when using AD FS; however, unlike AD FS, there is no automatic sign-in capability, so users will still be prompted to enter credentials on a domain-joined device.

For those that have already deployed AD FS or indeed those that are intending to implement AD FS in the future, one of the least publicised feature improvements in the May 2014 update to Office 365 is support for using the password sync feature as a temporary fall-back option for the primary AD FS service and federated authentication.

Another scenario now supported is the ability to have some domains configured for Password Sync while others within the same tenant are enabled for Federated Authentication with AD FS.

Mixing Password Sync and Federated Authentication

It’s quite a common scenario across many Office 365 implementations that I’ve done for customers to have a primary brand and domain such as contoso.com where the majority of users reside and is configured for federated authentication with AD FS. Contoso also owns a subsidiary called Fabrikam and there is no requirement for federated authentication or single sign on.

Previously, this scenario meant that users with a primary SMTP address of fabrikam.com would either have to maintain a separate password within the Office 365 tenant or have a sub-optimal login experience, being configured for sign-in with a UserPrincipalName in the @contoso.com format.

The recent changes to Office 365 allow for the mixed use of federated and password sync enabled domains.

Password Sync as a Temporary Fall-Back for Active Directory Federation Services

A number of smaller organisations I’ve worked with have elected to use a single instance of AD FS, taking advantage of the Single Sign-On capabilities but not including any high availability or site resilience. The Azure Active Directory Synchronization Tool is already a core component of the AD FS infrastructure so enabling Password Sync to provide a backup solution for the Single Sign-On service makes a lot of sense – and it’s free!

If you haven’t already (and you really, really should), deploy the most recent version of the Dirsync tool and enable the Password Sync option when prompted in the Configuration Wizard. A good TechNet article describing the Password Synchronization feature and how to implement it can be found here.

How to Temporarily “Switch” from Federated Authentication to Synchronised Password

The fall-back option is not automatic and requires manual configuration. Federated authentication can be changed to synchronised password authentication on a per-domain basis in the event of an outage to the AD FS infrastructure.

Detailed steps are as follows:

  1. Run the Windows Azure Active Directory Module for Windows PowerShell as an Administrator
  2. Run the following commands from the primary AD FS server:
    1. $Cred = Get-Credential
      #Enter non-federated Office 365 administrator credentials when prompted
    2. Connect-MsolService -Credential $Cred
    3. Convert-MsolDomainToStandard -DomainName <federated domain name> -SkipUserConversion $true -PasswordFile C:\Temp\passwordfile.txt
  3. Once the outage is over, use the following command to convert the domain back to federated:
    1. Convert-MsolDomainToFederated -DomainName <federated domain name> -SupportMultipleDomains

It is recommended that you do not change UserPrincipalNames or ImmutableIds after converting your domain to the managed state for users that have been switched to use synchronised passwords.

It is worth noting that switching between Federated Authentication and Synchronised Password Authentication for sign-in to Office 365 is not instant and will likely interrupt service access. This may not be a factor in the initial activation (as it’s likely an outage scenario), however it is something to bear in mind when cutting services back to Federated Authentication.