'Strong Name Verification' Issue with adding new Connectors in AAD Connect

I’ve been updating and installing the latest versions of AAD Connect recently (v1.1.750.0 to the latest v1.1.819.0) and noticed that I could not create a brand new custom ‘Connector’ using any of the following out of the box Connector templates:

  • Generic SQL
  • Generic LDAP (didn’t happen to me but I’ve seen reports it’s impacting others)
  • PowerShell
  • Web Service

The message in the AAD Connect Synchronisation Engine would appear as:
“The extension could not be loaded”
each time I tried to create a Connector with any of the above templates.
The Application Log in Event Viewer was a bit more helpful (specifying the PowerShell Connector):
“Could not load file or assembly ‘Microsoft.IAM.Connector.PowerShell, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35’ or one of its dependencies. Strong name signature could not be verified.  The assembly may have been tampered with, or it was delay signed but not fully signed with the correct private key. (Exception from HRESULT: 0x80131045)”
Screen Shot 2018-05-17 at 10.55.20 am
The text ‘strong name signature could not be verified’ led me to this article:
Basically, the .NET runtime on the AAD Connect server is rejecting the strong name signatures of the Connector DLLs.  To bypass strong name signature verification for these DLLs, you need to run the following command:
sn.exe -Vr *,31bf3856ad364e35
where ‘31bf3856ad364e35’ is the value that corresponds to the ‘PublicKeyToken’ mentioned in the Event Viewer log.  This is Microsoft’s standard public key token (i.e. a static value, also confirmed by another user reporting this issue to Microsoft), so the above command should work for you as well.
In terms of the location of the correct ‘sn.exe’, my AAD Connect had four versions installed onto it – you might have more depending on the number of versions of the .NET Framework you have installed.  I suggest you try first the latest .NET framework version, and specify the x64 folder (since AAD Connect is 64-bit).  I also strongly suggest you restart the Windows Server/s hosting AAD Connect after you apply it too.
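For illustration, the sequence looked roughly like the following from an elevated PowerShell prompt – a sketch only, as the SDK root path is an assumption and will vary with the Windows SDK and .NET Framework versions installed on your server:

```powershell
# Locate the newest 64-bit copy of sn.exe under the Windows SDK folders
# (the SDK root path below is an assumption - adjust it for your server)
$sn = Get-ChildItem 'C:\Program Files (x86)\Microsoft SDKs\Windows' -Recurse -Filter sn.exe -ErrorAction SilentlyContinue |
    Where-Object { $_.FullName -match 'x64' } |
    Sort-Object FullName -Descending |
    Select-Object -First 1

# Register a verification-skip entry for all assemblies signed with that public key token
& $sn.FullName -Vr *,31bf3856ad364e35

# List the verification-skip entries to confirm the entry was registered
& $sn.FullName -Vl
```

Remember to restart the server afterwards so the synchronisation service picks up the change.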
Good luck!

'Generic' LDAP Connector for Azure AD Connect

I’m working for a large corporate that has a large user account store in Oracle Unified Directory (LDAP).  They want to synchronise these existing accounts to Azure Active Directory for Azure application services (such as future Office 365 services).
Microsoft state here that Azure Active Directory Connect (AAD Connect) will, in a ‘Future Release’ version, provide native LDAP support (“Connect to single on-premises LDAP directory”), so timing-wise I’m in a tricky position: do I guide my customer to use the current version (v1.1.649.0 at the time of writing), or wait for this ‘future release’ version?
This blog may not have a very large lifespan – indeed a new version of AAD Connect might be released at any time with native LDAP tree support, so be sure to research AAD Connect prior to providing a design or implementation.
My customer doesn’t have any requirement for ‘write back’ services (where data is written back from Azure Active Directory to the local directory user store) so this blog post covers just a straight export from the on-premises LDAP into Azure Active Directory.
I contacted Microsoft and they stated it’s supported ‘today’ to provide connectivity from AAD Connect to LDAP, so I’ve spun up a Proof of Concept (PoC) lab to determine how to get it working in this current version of AAD Connect.
Good news first, it works!  Bad news, it’s not very well documented so I’ve created this blog just to outline my learnings in getting it working for my PoC lab.
I don’t have access to the Oracle Unified Directory in my PoC lab, so I substituted in Active Directory Lightweight Directory Services (AD LDS) so my configuration reflects the use of AD LDS.

Tip #1 – You still need an Active Directory Domain Service (AD DS) for installation purposes

During the AAD connect installation wizard (specifically the ‘Connect your directories’ page), it expects to connect to an AD DS forest to progress the installation.  In this version of AAD Connect, the ‘Directory Type’ listbox only shows ‘Active Directory’ – which I’m expecting to include more options when the ‘Future Release’ version is available.
I created a single Domain Controller (forest root domain) and used the local HOSTS file of my AAD Connect Windows Server to point the forest root domain FQDN, e.g. ‘forestAD.internal’, to the IP address of that Domain Controller.
I did not need to ‘join’ my AAD Connect Windows Server to that domain to complete the installation, which will make it easier to decommission this AD DS (if it’s put into Production) if Microsoft releases an AAD Connect version that does not require AD DS.
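For illustration, the HOSTS file entry (C:\Windows\System32\drivers\etc\hosts on the AAD Connect server) looked something like this – the IP address and domain name here are examples from my lab:

```
# Point the AD DS forest root FQDN at the Domain Controller's IP address
10.0.0.10    forestAD.internal
```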
Screen Shot 2017-10-25 at 10.39.34 am

Tip #2 – Your LDAP Connector service account needs to be a part of the LDAP tree

After getting AAD Connect installed with mostly the default options, I then went into the “Synchronization Engine” to install the ‘Generic LDAP’ connector that is available by clicking on the ‘Connector’ tab and clicking ‘Create’ under the ‘Actions’ heading on the right hand side:
Screen Shot 2017-11-01 at 11.50.41 am
For the ‘User Name’ field, I created and used an account (in this example: ‘CN=FIMLDSAdmin,DC=domain,DC=internal’) that was part of the LDAP tree itself, instead of the installation account created by AD LDS, which is part of the local Windows Server user store.
If I tried to use a local Windows Server user, i.e. ‘Server\username’, it would just give me generic connection errors and not bind to the LDAP tree, even if that user had full administrative rights to the AD LDS tree.  I gave up troubleshooting this, so I’ve resigned myself to needing a service account in the LDAP tree itself.
Screen Shot 2017-11-03 at 9.50.55 am

Tip #3 – Copy (and modify) the existing Inbound Rules for AD

When you create a custom Connector, the data mapping rules for that Connector are managed in the ‘Synchronization Rules Editor’ program and are not located in the Synchronization Engine.  There are basically two choices at the moment for a custom Connector: copy existing rules from another Connector, or create a brand new (blank) rule set for that Connector (per object type e.g. ‘User’).
I chose the first option – I copied the three existing ‘Inbound’ Rules from the AD DS connector:

  • In from AD – User Join
  • In from AD – User AccountEnabled
  • In from AD – User Common

If you ‘edit’ each of these existing AD DS rules, you’ll get a choice to create a copy of that rule set and disable the original.  Selecting ‘Yes’ will create a copy of that rule, and you can then modify the ‘Connected System’ to use the LDAP Connector instead of the AD DS Connector:
Screen Shot 2017-11-03 at 10.10.51 am
I also modified the priority numbering of the rules to ‘201’ through ‘203’, so that these new rules are applied last in my Synchronization Engine.
I ended up with the configuration of these three new ‘cloned’ rules for my LDAP Connector:
Screen Shot 2017-11-03 at 10.04.27 am
I found I had to edit or remove rules in the following ways:

  • For any rule looking for ‘sAMAccountName’, I modified the text of the rule to look for ‘UID’ instead (the schema attribute name most closely resembling that account name field in my custom LDAP)
  • I deleted the following rule from the Transformation section of the ‘In from AD – User Join’ cloned rule.  I found that it was preventing any of my LDAP accounts reaching Azure AD:

Screen Shot 2017-11-03 at 10.32.39 am

  • In the ‘In from AD – User AccountEnabled’ rule, I modified the existing Scoping Filter to use the AD LDS attribute ‘msDS-UserAccountDisabled = FALSE’ instead of checking the ‘userAccountControl’ bit:

Screen Shot 2017-11-03 at 10.35.36 am
There are obviously many, many ways of configuring the rules for your LDAP tree, but I thought I’d share how I did it with AD LDS.  The reason I ‘cloned’ existing rules was primarily to protect the data integrity of Azure AD.  There are many default data mapping rules for Azure AD that come with the AD DS rule set – a lot of them use ‘TRIM’ and ‘LEFT’ functions to ensure the data reaches Azure AD with the correct formatting.
It will be interesting to see how Microsoft tackles these rules sets in a more ‘wizard’ driven approach – particularly since LDAP trees can be highly customised with unique attribute names and data approaches.
Before closing the ‘Synchronization Rules Editor’, don’t forget to ‘re-enable’ each of the (e.g. AD DS) Connector rules you previously cloned, as the editor disables the original whenever you clone a rule.  Select each original rule you cloned, and uncheck the ‘Disabled’ box.

Tip #4 – Create ‘Delta’ and ‘Full’ Run Profiles

Lastly, you might be wondering: how does the AAD Connect Scheduler (the one based entirely in PowerShell, with seemingly no customisation commands) pick up the new LDAP Connector?
Well, it’s simply a matter of naming your ‘Run Profiles’ in the Synchronization Engine with the text: ‘Delta’ and ‘Full’ where required.  Select ‘Configure Run Profiles’ in the Engine for your LDAP Connector:
Screen Shot 2017-11-03 at 10.16.29 am.png
I then created ‘Run Profiles’ with the same naming convention as the ones created for AD DS and Azure AD:
Screen Shot 2017-11-03 at 10.17.58 am
Next time I ran an ‘Initial’ (which executes ‘Full Import’ and ‘Full Sync.’ jobs) or a ‘Delta’ AD Scheduler job (I’ve previously blogged about the AD Scheduler, but you can find the official Microsoft doc on it here), my new LDAP Connector Run Profiles were executed automatically along with the AD DS and AAD Connector Run Profiles:
Screen Shot 2017-11-03 at 10.19.57 am
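If you don’t want to wait for the next scheduled cycle to test this, you can trigger a cycle manually with the ADSync PowerShell module that ships with AAD Connect – a sketch, to be run on the AAD Connect server itself:

```powershell
# Load the ADSync module installed with AAD Connect
Import-Module ADSync

# Show the current scheduler settings, including the next scheduled run
Get-ADSyncScheduler

# Kick off an on-demand delta sync cycle; any Connector Run Profiles
# named with 'Delta' (including the new LDAP ones) run as part of it
Start-ADSyncSyncCycle -PolicyType Delta
```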
Before I finish up, my colleague David Minnelli found IDAMPundit’s blog post about a current bug when upgrading to AAD Connect v1.1.649.0 if you already have an LDAP Connector.  In a nutshell, just open up the existing LDAP Connector, step through each page, and re-save it to clear the issue.
** Edit: I have had someone query how I’m authenticating with these accounts.  Well, I’m leveraging an existing SecureAuth service that uses the WS-Federation protocol to communicate with Azure AD – so ‘Federation’, basically.  I’m not extracting passwords out of this LDAP or doing any kind of password hash synchronisation.  Thanks for the question though!
Hope this helped, good luck!

Automatically Provision Azure AD B2B Guest Accounts

Azure ‘Business to Business’ (or the catchy acronym ‘B2B’) has been an area of significant development in the last 12 months when it comes to providing access to Azure based applications and services to identities outside an organisation’s tenancy.
Recently, Ryan Murphy (who has contributed to this blog) and I have been tasked to provide an identity based architecture to share Dynamics 365 services within a large organisation, but across two ‘internal’ Azure AD tenancies.
Dynamics 365 takes its identity store from Azure AD; if you’re assigned a license for Dynamics 365 in the Azure Portal, including in a ‘B2B’ scenario, you’re granted access to the Dynamics 365 application (as outlined here).  Further information about how Dynamics 365 concepts integrate with a single or multiple Azure tenancies is outlined here.
Microsoft provide extensive guidance in completing a typical ‘invitation’ based scenario using the Azure portal (using the links above).  Essentially this involves inviting users using an email which relies on that person manually clicking on the embedded link inside the email to complete the acceptance (and the ‘guest account’ creation in the Dynamics 365 service linked to that Azure AD).
However, this obviously won’t scale when you need to invite thousands of new users initially, and then repeatedly invite new users as part of a Business-As-Usual (BAU) process as they join the organisation (‘identity churn’, as we say).
Therefore, to automate the creation of new Guest User accounts in Azure AD, without involving the user at all, this process can be followed:

  1. Create a ‘service account’ Guest User from the invited Azure AD (has to have the same UPN suffix as the users you’re inviting) to be a member of the resource Azure AD.
  2. Assign the ‘service account’ Guest User to be a member of the ‘Guest Inviter’ role of the resource Azure AD.
  3. Use PowerShell to automatically provision new Guest User accounts using the credentials of the ‘service account’ Guest User.

In this blog, we’ll use the terms ‘Resource Azure AD’ or ‘Resource Tenancy’ for the location of the resources you’re sharing out, and ‘Invited Azure AD’ or ‘Invited Tenancy’ for the Azure AD where the user accounts (including usernames & passwords) you’re inviting reside.  The invited users only ever use their credentials in their own Azure AD or tenancy – never credentials of the ‘Resource’ Azure AD or tenancy.  The ‘Guest User’ objects created in the ‘Resource Tenancy’ are essentially just linking objects without any stored password.
A ‘Service Account’ Azure AD account dedicated solely to the automatic creation of Guest Users in the Resource Tenancy will need to be created first in the ‘Invited Azure AD’ – for this blog, we used an existing Azure AD account sourced from a synchronised local Active Directory.  This account did not have any ‘special’ permissions in the ‘Invited Azure AD’, but according to some blogs it requires at least ‘read’ access to the user store in the ‘Invited Azure AD’ (which is the default).
This ‘Service Account’ Azure AD account should have a mailbox associated with it, i.e. either an Exchange Online (Office 365) mailbox, or a mail value that has a valid SMTP address for a remote mailbox.  This mailbox is needed to approve the creation of a Guest User account in the Resource Tenancy (only needed for this individual Service Account).
It is strongly recommended that this ‘Service Account’ user in the ‘Invited Azure AD’ has a very strong & complex password, and that any credential used for that account within a PowerShell script be encrypted using the method in David Lee’s blog.
The PowerShell scripts listed below to create these Guest User accounts could then be actioned by an identity management system, e.g. Microsoft Identity Manager (MIM), or a ‘Runbook’ or workflow system (e.g. SharePoint).

Task 1: Create the ‘Service Account’ Guest User using PowerShell

Step 1: Sign into the Azure AD Resource Tenancy’s web portal: ‘portal.azure.com’, using a Global Admin credential.
Step 2:  When you’re signed in, click on the account profile picture on the top right of the screen and select the correct ‘Resource Tenancy’ (There could be more than one tenant associated with the account you’re using):
Screenshot 2017-09-19 at 9.34.18 AM
Step 3:  Once the tenancy is selected, click on the ‘Azure Active Directory’ link on the left pane.
Step 4:  Click ‘User Settings’ and verify the setting (which is set by default for new Azure AD tenancies):  ‘Members can invite’.
Screenshot 2017-09-19 11.31.51
Step 5:  Using a new PowerShell session, connect and authenticate to the Azure AD tenancy where the Guest User accounts are required to be created into (i.e. the ‘Resource Azure AD’).
Be sure to specify the correct ‘Tenant ID’ of the ‘Resource Azure AD’ using the PowerShell switch ‘-TenantId‘ followed by the GUID value of your tenancy (to find that Tenant ID, follow the instructions here).
$Creds = Get-Credential
Connect-AzureAD -Credential $Creds -TenantId "aaaaa-bbbbb-ccccc-ddddd"
Step 6:  The following PowerShell command should be executed under a ‘Global Admin’ to create the ‘Service Account’ e.g. ‘serviceaccount@invitedtenancy.com’.
New-AzureADMSInvitation -InvitedUserDisplayName "Service Account Guest Inviter" -InvitedUserEmailAddress "serviceaccount@invitedtenancy.com" -SendInvitationMessage $true -InviteRedirectUrl http://myapps.microsoft.com -InvitedUserType Member
Step 7:  The ‘Service Account’ user will then need to locate the email invitation sent out by this command and click on the link embedded within, to authorise the creation of the Guest User object in the ‘Resource Tenancy’.

Task 2: Assign the ‘Service Account’ Guest Inviter Role using Azure Portal

Step 1:  Sign into the Azure web portal: ‘portal.azure.com’ with the same ‘Global Admin’ (or lower permission account) credential used in Task 1 (or re-use the same ‘Global Admin’ session from Task 1).
Step 2:  Click on the ‘Azure Active Directory’ shortcut on the left pane of the Azure Portal.
Step 3:  Click on the ‘All Users’ tab and select the ‘Service Account’ Guest User.
(I’m using ‘demo.microsoft.com’ pre-canned identities in the screenshot below; any similarity to real persons is purely coincidental – an image for ‘serviceaccount@invitedtenancy’ used as the example in Task 1 could not be reproduced)
Screenshot 2017-09-19 09.44.36
Step 4:  Once the ‘Service Account’ user is selected, click on the ‘Directory Role’ on the left pane.  Click to change their ‘Directory Role’ type to ‘Limited administrator’ and select ‘Guest Inviter’ below that radio button.  Click the ‘Save’ button.
Screenshot 2017-09-19 09.43.53
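If you prefer to script this role assignment, the same result can be achieved with the AzureAD PowerShell module – a sketch only; note the guest account’s UPN in the Resource Tenancy takes the mangled ‘#EXT#’ form, so searching by the mail attribute is easier (the address below is the example from Task 1):

```powershell
# Locate the 'Guest Inviter' directory role
$role = Get-AzureADDirectoryRole | Where-Object { $_.DisplayName -eq 'Guest Inviter' }
if (-not $role) {
    # The role must be enabled from its template the first time it is used
    $template = Get-AzureADDirectoryRoleTemplate | Where-Object { $_.DisplayName -eq 'Guest Inviter' }
    $role = Enable-AzureADDirectoryRole -RoleTemplateId $template.ObjectId
}

# Find the 'Service Account' guest user by its mail attribute and add it to the role
$svc = Get-AzureADUser -Filter "mail eq 'serviceaccount@invitedtenancy.com'"
Add-AzureADDirectoryRoleMember -ObjectId $role.ObjectId -RefObjectId $svc.ObjectId
```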
Step 5:  The next step is to test that the ‘Service Account’ Guest User can invite users from the same ‘UPN/Domain suffix’.  Click on the ‘Azure Active Directory’ link on the left pane of the main Azure Portal.
Step 6:  Click ‘Users and groups’ and click ‘Add a guest user’ on the right:
Screenshot 2017-09-19 09.36.02
Step 7:  On the ‘Invite a guest’ screen, send an email invitation to a user from the same Azure AD as the ‘Service Account’ Guest User.  For example, if your ‘Service Account’ Guest User UPN/domain suffix is ‘serviceaccount@remotetenant.com’, then invite a user with the same suffix, e.g. ‘jim@remotetenant.com’ (again, only an example – any resemblance to current or future email addresses is purely coincidental).
Screenshot 2017-09-19 09.36.03
Step 8:  When the user receives the invitation email, ensure that the following text appears at the bottom of the email:  ‘There is no action required from you at this time’:
Step 9:  If that works, then PowerShell can now automate that invitation process bypassing the need for emails to be sent out.  Automatic Guest Account creation can now leverage the ‘Service Account’ Guest User.
NOTE:  If you try to invite a user with a UPN/domain suffix that does not match the ‘Service Account’ Guest User, the invitation will still be sent, but it will require the user to accept it.  The invitation will remain in a ‘pending acceptance’ state, and the Guest User object will not be created, until that is done.

Task 3:  Auto. Provision new Guest User accounts using PowerShell

Step 1:  Open Windows PowerShell (or re-use an existing PowerShell session that has rights to the ‘Resource Tenancy’).
Step 2:  Type the following example PowerShell commands to send the invitation out, and authenticate when prompted using the ‘Invited Tenancy’ credentials of the ‘Service Account’ Guest User.
In the script, again be sure to specify the ‘Tenant ID’ of the ‘Resource Tenancy’ (not the ‘Invited Tenancy’) for the switch ‘-TenantId’.
#Connect to Azure AD
$Creds = Get-Credential
Connect-AzureAD -Credential $Creds -TenantId "aaaaa-bbbbb-ccccc-ddddd"
#Build the invitation message, then send the invitation without an email
$messageInfo = New-Object Microsoft.Open.MSGraph.Model.InvitedUserMessageInfo
$messageInfo.CustomizedMessageBody = "Hey there! Check this out. I created and approved my own invitation through PowerShell"
New-AzureADMSInvitation -InvitedUserEmailAddress "ted@invitedtenancy.com" -InvitedUserDisplayName "Ted at Invited Tenancy" -InviteRedirectUrl https://myapps.microsoft.com -InvitedUserMessageInfo $messageInfo -SendInvitationMessage $false
Compared to using the Azure portal, this time no email will be sent (the display name and message body will never be seen by the invited user; they’re just required for the command to complete).  To send a confirmation email to the user, you can change the switch -SendInvitationMessage to $true.
Step 3:  The output of the PowerShell command should show the ‘Status’ field as ‘Accepted’:
This means the Guest User object has automatically been created and approved in the ‘Resource Tenancy’.  That Guest User object will be associated with the actual Azure AD user object from the ‘Invited Tenancy’.
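To provision in bulk, the same command just needs to be wrapped in a loop – for example over a CSV export (a sketch; the file path and the ‘Email’/‘DisplayName’ column names are assumptions):

```powershell
# Invite every user listed in a CSV export from the 'Invited Tenancy'
# (assumes Connect-AzureAD has already been run as the 'Service Account')
$users = Import-Csv 'C:\Temp\InvitedUsers.csv'
foreach ($user in $users) {
    New-AzureADMSInvitation -InvitedUserEmailAddress $user.Email `
        -InvitedUserDisplayName $user.DisplayName `
        -InviteRedirectUrl https://myapps.microsoft.com `
        -SendInvitationMessage $false
}
```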
The next steps for this invited Guest User will be then to assign them a Dynamics 365 license and then a Dynamics 365 role in the ‘Resource Tenancy’ (which might be topics of future blogs).
Hope this blog has proven useful.

Migrating 'SourceAnchor' from 'ObjectGUID' using new AAD Connect 1.1.524.0

I count myself lucky every now and again, for many reasons.  I have my health.  I have my wonderful family.
Today, however, it’s finding out the latest version of AAD Connect (v1.1.524.0) will probably give me back a few more months of my life.
The reason?  My customer’s AAD Connect was configured with the default value of ‘ObjectGUID’ for the ‘SourceAnchor’ attribute.
Now, for most organizations with a single AD forest, you’re laughing.  No reason to keep reading.  Log off, go outside, enjoy the sunshine (or have a coffee if you’re in Melbourne).
But no, my customer has TWO AD forests, synchronizing to a single Azure AD tenancy.
OK? What’s the big deal?  That’s been a supported configuration for many years now.
Well…… when they configured their AAD Connect they chose to use ‘ObjectGUID’ as their ‘SourceAnchor’ value:
Why is this an issue? 
I’m trying to MIGRATE a user from one forest to another.   Has the penny dropped yet?
OK, if not, let me extract and BOLD these scary dot points from this Microsoft Support Article (https://docs.microsoft.com/en-us/azure/active-directory/connect/active-directory-aadconnect-design-concepts#sourceanchor):

  • The sourceAnchor attribute can only be set during initial installation. If you rerun the installation wizard, this option is read-only. If you need to change this setting, then you must uninstall and reinstall.
  • If you install another Azure AD Connect server, then you must select the same sourceAnchor attribute as previously used. If you have earlier been using DirSync and move to Azure AD Connect, then you must use objectGUID since that is the attribute used by DirSync.
  • If the value for sourceAnchor is changed after the object has been exported to Azure AD, then Azure AD Connect sync throws an error and does not allow any more changes on that object before the issue has been fixed and the sourceAnchor is changed back in the source directory.

Ok another link:
By default, Azure AD Connect (version 1.1.486.0 and older) uses objectGUID as the sourceAnchor attribute. ObjectGUID is system-generated. You cannot specify its value when creating on-premises AD objects.
OK, just un-install and re-install AAD Connect.   No big deal.  Change Window over a weekend.  Get it done.
No, no, no.  Keep reading.
If you browse to page 6 of this very helpful (and I’ll admit downright scary migration blog), you’ll see this text:
You need to delete your users from Azure Active Directory and you need to start again.
Come again?!  OK.  In the words of the great ‘Hitchhiker’s Guide to the Galaxy’:  DON’T PANIC.
Yes, so that is one option.  However, the MS blog goes into detail (albeit not tested by me) of another method: changing the ‘SourceAnchor’ value away from ‘objectGUID’ by changing all your users’ UPN values to ‘onmicrosoft.com’ values, uninstalling and reinstalling AAD Connect with a different ‘SourceAnchor’, then changing their UPN values back to their original values.
But yeah, scary stuff.  Doing this for all users in a very large organization?  Positively terrifying (hence the start of this article), especially with an Azure AD that integrates with Exchange and Skype for Business and a basically 24×7 global user base.  Well… you get my drift.
So good news?  Well, the new version supports the migration of ‘SourceAnchor’ values to the use of the positively joyous: msDS-ConsistencyGuid
So back to my original context, why is this important?  Well, looky here …I can see you msDS-ConsistencyGuid  (using ADSIEdit.msc):
The reason I’m excited – it’s a ‘writeable attribute’.
So forward sailing boys.  Let slip the anchor.  Let’s get sailing while the tide is high.
(In other words):
I’m going to:

  1. Upgrade my customer’s AAD Connect.
  2. Ensure that during the upgrade I select the option in the AAD Connect wizard to migrate the ‘SourceAnchor’ to the new msDS-ConsistencyGuid value in AD.
  3. Ensure all users (in both AD forests) have a new & unique msDS-ConsistencyGuid value after AAD Connect performs a full sync and export to both domains.
  4. Ensure my Active Directory Migration Tool (or PowerShell migration script) moves each user’s msDS-ConsistencyGuid value from one forest to the other (as well as retaining SIDHistory and passwords).
  5. And always: test, test, test – to ensure I don’t lose their Azure AD accounts in the process.
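Once the full sync has run, a quick way to check step 3 is to query AD directly – a sketch using the ActiveDirectory module (the search base is an example; note the attribute’s LDAP display name is ‘mS-DS-ConsistencyGuid’):

```powershell
Import-Module ActiveDirectory

# List any users that do not yet have an mS-DS-ConsistencyGuid value
# (repeat against each forest; the search base below is an example)
Get-ADUser -Filter * -SearchBase 'OU=Staff,DC=forestAD,DC=internal' -Properties 'mS-DS-ConsistencyGuid' |
    Where-Object { -not $_.'mS-DS-ConsistencyGuid' } |
    Select-Object SamAccountName
```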

Cross fingers this all works, of course.  There’s very little guidance out there that combines ADMT guidance with this latest AAD Connect versioning.  It’s not explicitly stated in the AAD Connect online documentation, but it suggests that Microsoft have made changes on the Azure AD ‘cloud’ side of the equation to also migrate the unique joins to this new value during the upgrade.
So upgrading AAD Connect and selecting to use msDS-ConsistencyGuid as your new ‘SourceAnchor’ SHOULD also trigger some back end changes to the tenancy as well (I’m hoping).
As you know, there’s nothing worse than a good plan and design spoiled by one little bug in implementation.  So come back for a future blog or two on my perilous journey, argh me maties (er, customer project friends).

Check Patch Status of 'WannaCrypt' / 'WannaCry' using PowerShell

A short but sweet blog today, mindful that today most Australians will be coming back to work after the ‘WannaCrypt’ attack that was reported in the media on Friday.
I would like to just point out the work of Kieran Walsh – he’s done the ‘hard yards’ of extracting all of the Knowledge Base (KB) article numbers that you need to be searching for, to determine your patching status of Microsoft Security Bulletin MS17-010  (https://technet.microsoft.com/en-us/library/security/ms17-010.aspx).  Microsoft’s detailed blog about the ‘WannaCrypt ransomware’ can be found here: https://blogs.technet.microsoft.com/mmpc/2017/05/12/wannacrypt-ransomware-worm-targets-out-of-date-systems/
If you don’t have an Enterprise patch deployment tool such as SCCM or WSUS (there are many, many others), Kieran’s script executes a simple ‘Get-HotFix’ PowerShell command remotely against each Windows Server or workstation, using all the computer objects in Active Directory as a reference.  I personally haven’t run this yet, so please test it first against a test AD if you have one.  The ‘Get-HotFix’ command is relatively ‘benign’, so the risk is low.
Conversely, if you’re looking to run this on your local workstation, I’ve modified his script into a simple ‘local’ check.  Copy and paste this into a PowerShell window with ‘administrator’ permissions:
#--- Script start

# List of all hotfixes (KB articles) containing the MS17-010 patch
$hotfixes = @('KB4012598', 'KB4012212', 'KB4012215', 'KB4015549', 'KB4019264', 'KB4012213', 'KB4012216', 'KB4015550', 'KB4019215', 'KB4012214', 'KB4012217', 'KB4015551', 'KB4019216', 'KB4012606', 'KB4015221', 'KB4016637', 'KB4019474', 'KB4013198', 'KB4015219', 'KB4016636', 'KB4019473', 'KB4013429', 'KB4015217', 'KB4015438', 'KB4016635', 'KB4019472', 'KB4018466')

# Search the locally installed hotfixes for any match
$found = Get-HotFix | Where-Object { $hotfixes -contains $_.HotfixID }

# Report whether a matching hotfix was found
if ($found) {
    Write-Host "Found hotfix:" ($found.HotfixID -join ', ')
} else {
    Write-Host "Didn't find hotfix"
}

#--- Script end
Please follow all official Microsoft advice in applying the correct patch as per the security bulletin link above.  Alternatively, look to disable ‘SMBv1’ services on your workstations until you can get them patched.  Good luck.
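If you do opt to disable SMBv1 in the interim, a sketch of the relevant commands (run from an elevated PowerShell prompt; check Microsoft’s current guidance for your OS versions before using them):

```powershell
# Turn off the SMBv1 protocol on the server service (Windows 8 / Server 2012 and later)
Set-SmbServerConfiguration -EnableSMB1Protocol $false -Force

# On Windows 8.1 / Server 2012 R2 and later, the SMBv1 feature can be removed entirely
Disable-WindowsOptionalFeature -Online -FeatureName SMB1Protocol
```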
** Update @ 4:30pm (15/05/2017).  In my testing, I’ve found the Windows 10 patches listed in the Security Bulletin have been superseded by newer KB numbers.  I’ve added three KBs for the 64-bit version of Windows 10, version 1511.  I’d suggest looking at the ‘Package Details’ tab of the Microsoft Update Catalog site (e.g. http://www.catalog.update.microsoft.com/Search.aspx?q=KB4013198) for the latest KB numbers.  I’ll try to add all KBs for Windows 10 by tomorrow AEST (the 16th).  Alternatively, keep an eye on updates to Kieran’s script as he gets updates from the community.

** Update @ 5pm – The MS blog about the ransomware attack itself specifically states Windows 10 machines are not impacted even though there are patches for the security bulletin that apply to Windows 10.  Ignore Windows 10 devices in your report unless there’s updated information from Microsoft.
** Update @ 8pm: Kieran has updated his script to exclude Windows 10 computer objects from the AD query.
** Update @ 9:30 am 16/05:  Updated list of KBs from Kieran’s script (who has been sourcing the latest KB list from the community)
** Updated @ 2pm 17/05:  Updated list of KBs (including Windows 10 updates) from the comments area from Kieran’s script (user: d83194).  For future updates, I’d suggest reviewing Kieran’s comments for the latest KB articles.  I’ll let you make the decision about whether to keep the Windows 10 filter (-notlike ‘Windows 10‘) in Kieran’s script.  Maybe produce two reports (with Windows 10/without Windows 10).

Azure MFA: Architecture Selection Case Study

I’ve been working with a customer on designing a new Azure Multi Factor Authentication (MFA) service, replacing an existing 2FA (Two Factor Authentication) service based on RSA Authenticator version 7.
Now, Azure MFA service solutions in the past few years have typically been architected from the detail up, i.e. a ‘bottom up’ approach to design: what apps are we enforcing MFA on?  What token are we going to use – phone call, SMS, smart phone app?  Is it a one-way message or a two-way message?  And so on.
Typically a customer knew quite quickly which MFA ‘architecture’ was required – i.e. the ‘cloud’ version of Azure MFA was really only capable of securing Azure Active Directory authenticated applications, while the ‘on-prem’ (local data centre or private cloud) version using Azure MFA Server (the server software Microsoft acquired in the PhoneFactor acquisition) was used to secure ‘on-prem’ directory integrated applications.  There wasn’t really a need to look at the ‘top down’ architecture.
In aid of a ‘bottom up’ detailed approach, my colleague Lucian posted a very handy ‘cheat sheet’ last year comparing the various architectures and the features they support, which you can find here: https://blog.kloud.com.au/2016/06/03/azure-multi-factor-authentication-mfa-cheat-sheet

New Azure MFA ‘Cloud Service’ Features

In the last few months however, Microsoft have been bulking up the Azure MFA ‘cloud’ option with new integration support for on-premise AD FS (provided with Windows Server 2016) and now on-premise Radius applications (with the recent announcement of the ‘public preview’ of the NPS Extension last month).
(On a side note: what is also interesting, and potentially reveals wider trends in token ‘popularity’ choices, is that the Azure ‘cloud’ option still does not support OATH (i.e. third-party) tokens or two-way SMS (i.e. reply with a ‘Y’ to authenticate).)
These new features have therefore forced the consideration of the primarily ‘cloud service’ architecture for both Radius and AD FS ‘on prem’ apps.

“It’s all about the Apps”

Now, in my experience, many organizations share similar application architectures that they like to secure with multi factor authentication.  These broadly fit into the following categories:
1. Network Gateway Applications that use Radius or SDI authentication protocols, such as network VPN clients and application presentation virtualisation technologies such as Citrix and Remote App
2. SaaS Applications that choose to use local directory credentials (such as Active Directory) using Federation technologies such as AD FS (which support SAML or WS-Federation protocols), and
3. SaaS applications that use remote (or ‘cloud’) directory credentials for authentication such as Azure Active Directory.
Applications that are traditionally accessed via only the corporate network are being phased out for ones that exist either purely in the Cloud (SaaS) or exist in a hybrid ‘on-prem’ / ‘cloud’ architecture.
These newer application architectures allow access methods from untrusted networks (read: the Internet) and therefore these access points also apply to trusted (read: corporate workstations or ‘Standard Operating Environment’) and untrusted (read: BYOD or ‘nefarious’) devices.
In order to secure these newer points of access, 2FA or MFA solution architectures have had to adapt (or die).
What hasn’t changed, however, is that a customer reviewing their choice of 2FA or MFA vendors will always want a low number of them (read: one), and expects that MFA provider to support all of their applications.  This keeps user training costs and operational costs low.  Many are also fed up with dealing with ‘point solutions’, i.e. products securing only one or two applications and requiring a different 2FA or MFA solution per application.

Customer Case Study

So in light of that background, this section now goes through the requirements in detail, to really ‘flesh out’ everything before making the right architectural decision.

Vendor Selection

Vendor selection took place prior to my engagement with this customer; however, it was agreed that Azure MFA and Microsoft were the ‘right’ vendor to replace RSA, primarily based on:

  • EMS (Enterprise Mobility + Security) licensing was in place, therefore the customer could take advantage of Azure Premium licensing for their user base.  Azure Premium meant we would use the ‘Per User’ charge model for Azure MFA (and not the other choice of ‘Per Authentication’ charge model ie. being charged for each Azure MFA token delivered).
  • Tight integration with existing Microsoft services including Office 365, local Active Directory and AD FS authentication services.
  • Re-use of strong IT department skills in the use of Azure AD features.


 Step 1: App Requirements Gathering

The customer I’ve been working with has two ‘types’ of applications:
1. Network Gateway Applications – Cisco VPN using an ASA appliance and SDI protocol, and Citrix NetScaler using Radius protocol.
2. SaaS Applications using local Directory (AD) credentials via the use of AD FS (on Server 2008 currently migrating to Server 2012 R2) using both SAML & WS-Federation protocols.
They wanted a service that could replace the RSA integration with their VPN & Citrix services, but also ‘extend’ that solution to integrate with AD FS as well.   They currently don’t use 2FA or MFA with their AD FS authenticated applications (which include Office 365).
They did not want to extend 2FA services to Office 365 primarily as that would incur the use of static ‘app passwords’ for their Outlook 2010 desktop version.

Step 2:  User Service Migration Requirements

The move from RSA to Azure MFA was also going to involve the following changes to the way users consumed two factor services:

  1. Retire the use of ‘physical’ RSA tokens but preserve a similar smart phone ‘soft token’ delivery capability
  2. Support two ‘token’ options going forward:  ‘soft token’ ie. use of a smart phone application or SMS received tokens
  3. Modify some applications to use the local AD password instead of the RSA ‘PIN’ as a ‘what you know’ factor
  4. Avoid the IT Service Desk for ‘soft token’ registration.  RSA required the supply of a static number to the Service Desk who would then enable the service per that user.  Azure MFA uses a ‘rotating number’ for ‘soft token’ registrations (using the Microsoft Authenticator application).  This process can only be performed on the smart phone itself.

So mapping out these requirements, I then had to find the correct architecture that met their requirements (in light of the new ‘Cloud’ Azure MFA features):

Step 3: Choosing the right Azure MFA architecture

I therefore had a unique situation, whereby I had to present an architectural selection: whether to use the on-premise Azure MFA Server solution, or the Azure MFA cloud service.  Now, both options technically use the Azure MFA ‘cloud’ to deliver the tokens, but for the sake of simplicity, it boils down to two choices:

  1. Keep the service “mostly” on premise (Solution #1), or
  2. Keep the service “mostly” ‘in the cloud’ (Solution #2)

The next section goes through the ‘on-premise’ and ‘cloud’ requirements of both options, including specific requirements that came out of a solution workshop.

Solution Option #1 – Keep it ‘On Prem’

New on-premise server hardware and services required:

  • One or two Azure MFA Servers on Windows Server integrating with local (or remote) NPS services, which performs Radius authentication for three customer applications
  • On-premise database storing user token selection preferences and mobile phone number storage requiring backup and restoration procedures
  • One or two Windows Server (IIS) hosted web servers hosting the Azure MFA User Portal and Mobile App web service
  • Use of existing reverse proxy capability to publish the user portal and mobile app web services to the Internet under a custom website FQDN.  This published mobile app website is used for Microsoft Authenticator mobile app registrations and potential user self-selection of factor (e.g. choosing between SMS and mobile app).
New Azure MFA Cloud services required:
  • Users using Azure MFA services must exist in local Active Directory as well as Azure Active Directory
  • Azure MFA Premium license assigned to user account stored in Azure Active Directory


Advantages:

  • If future requirements dictate that Office 365 services use MFA, AD FS version 3 (Windows Server 2012 R2) integrates directly with the on-premise Azure MFA Server.  Only AD FS version 4 (Windows Server 2016) is capable of integrating directly with the cloud-based Azure MFA.
  • The ability to let all MFA integrated authentications through in case Internet services (HTTPS) to the Azure cloud are unavailable.  This is configurable with the ‘Approve’ setting for the Azure MFA Server option: “when Internet is not accessible”


Disadvantages:

  • On-premise MFA Servers require uptime & maintenance (such as patching etc.)
  • Having to host the on-premise Azure MFA user portal and mobile app websites and publish them to the Internet using existing customer capability for user self-service (if required).  This includes on-premise IIS web servers to host mobile app registration and user factor selection options (choosing between SMS and mobile app etc.)
  • Disaster Recovery planning and implementation to protect the local Azure MFA Server database of user token selections and mobile phone numbers (although mobile phone numbers can be imported from local Active Directory, assuming they are present and accurate)
  • SSL certificates used to secure the on-premise Azure MFA self-service portal must already be trusted by mobile devices such as Android and Apple.  Android devices, for example, do not support installing custom certificates and require an SSL certificate from an already trusted vendor (such as THAWTE)


Solution Option #2 – Go the ‘Cloud’!

New on-prem server hardware and services required:

  • One or two Windows Servers hosting local NPS services which performs Radius authentication for three customer applications.  These can be existing available Windows Servers not utilizing local NPS services for Radius authentication but hosting other software (assuming they also fit the requirements for security and network location)
  • New Windows Server 2016 server farm operating ADFS version 4, replacing the existing ADFS v3 farm.

New Azure MFA Cloud services required:

  • Users using Azure MFA services must exist in local Active Directory as well as Azure Active Directory
  • User token selection preferences and mobile phone number storage stored in Azure Active Directory cloud directory
  • Azure MFA Premium license assigned to user account stored in Azure Active Directory
  • Use of the Azure hosted website ‘myapps.microsoft.com’ for Microsoft Authenticator mobile app registrations and potential user self-selection of factor (e.g. choosing between SMS and mobile app).
  • Configuring Azure MFA policies to avoid enabling MFA for other Azure hosted services such as Office 365.


Advantages:

  • All MFA services are public cloud based, with little maintenance required from the customer’s IT department apart from uptime for the on-premise NPS and AD FS servers (which they’re currently already doing)
  • Potential to reuse existing Windows NPS server infrastructure (would have to review existing RSA Radius servers for compatibility with the Azure MFA plug-in, i.e. Windows Server versions, cutover plans)
  • The Azure MFA user self-service portal (for users to register their own Microsoft soft token) is hosted in the cloud, requiring no on-premise web servers, certificates or reverse proxy infrastructure
  • No local disaster recovery planning and configuration required.  NPS services are stateless apart from IP addressing configuration.  User token selections and mobile phone numbers are stored in Azure Active Directory with inherent recovery options


Disadvantages:

  • Does not support AD FS version 3 (Windows Server 2012 R2) for future MFA integration with AD FS enabled SaaS apps such as Office 365 or other third party applications (i.e. those that use AD FS so users can authenticate with local AD credentials).  These applications require AD FS version 4 (Windows Server 2016), which supports the Azure MFA extension (similar to the NPS extension for Radius)
  • The Radius NPS extension and the Windows Server 2016 AD FS Azure MFA integration do not currently support approving authentications should the Internet connection to the Azure cloud go offline (i.e. the Azure MFA service cannot be reached over HTTPS) – however this may be because…
  • The Radius NPS extension is still in ‘public preview’.  Support from Microsoft at this time is limited if there are any issues with it.  It is expected that the NPS extension will go into general release shortly, however


Conclusion and Architecture Selection

After the workshop, it was generally agreed that Option #2 fit the customer’s on-going IT strategic direction of “Cloud First”.
It was agreed that the key requirement was replacing the existing RSA service integrating with Radius protocol applications in the short term, with AD FS integration viewed as very much ‘optional’, in light of Office 365 not currently being viewed as requiring two factor services.
This meant that AD FS services were not going to be upgraded to Windows Server 2016 to allow integration with the Option #2 services (particularly in light of the in-flight upgrade to Windows Server 2012 R2 needing to be completed first).
The decision was to take Option #2 into the detailed design stage, and I’m sure I’ll post future blogs, particularly on any production ‘gotchas’ regarding the Radius NPS extension for Azure MFA.
During the workshop, the customer was still deciding whether to allow a user to select their own token ‘type’ but agreed that they wanted to limit it if they did to only three choices: one way SMS (code delivered via SMS), phone call (ie. push ‘pound to continue’) or the use of the Microsoft Authenticator app.   Since these features are available in both architectures (albeit with different UX), this wasn’t a factor in the architecture choice.
The limitation for Option #2 currently around the lack of automatically approving authentications in case the Internet service ‘went down’ was disappointing to the customer, however at this stage it was going to be managed with an ‘outage process’ in case they lost their Internet service. The workaround to have a second NPS server without the Azure MFA extension was going to be considered as part of that process in the detailed design phase.

Introduction to MIM Advanced Workflows with MIMWAL


Microsoft late last year introduced the ‘MIMWAL’, or to say it in full: (inhales) ‘Microsoft Identity Manager Workflow Activity Library’ – an open source project that extends the default workflows & functions that come with MIM.
Personally I’ve been using a version of MIMWAL for a number of years, as have my colleagues, in working on MIM projects with Microsoft Consulting.   This is the first time however it’s been available publicly to all MIM customers, so I thought it’d be a good idea to introduce how to source it, install it and work with it.
Microsoft (I believe for legal reasons) don’t host a compiled version of MIMWAL; instead they host the source code on GitHub for customers to source, compile and potentially extend.  The front page of Microsoft’s MIMWAL GitHub library can be found here: http://microsoft.github.io/MIMWAL/

Compile and Deploy

Now, the official deployment page is fine (github) but I personally found Matthew’s blog to be an excellent process to use (ithinkthereforeidam.com).  Ordinarily, when it comes to installing complex software, I usually combine multiple public and private sources and write my own process but this blog is so well done I couldn’t fault it.
…however, some minor notes and comments about the overall process:

  • I found that I needed to copy the gacutil.exe and sn.exe utilities (extracted from the old FIM patch) into the ‘Solution Output’ folder as well.  The process mentions they need to be in ‘src\Scripts’ (Step 6), but they also need to be in ‘Solution Output’, which you can see in the last screenshot of that Explorer folder in Step 8 (of the ‘Configure Build/Developer Computer’ process).
  • I found the slowest tasks in the entire process was sourcing and installing Visual Studio, and extracting the required FIM files from the patch download.  I’d suggest keeping a saved Windows Server VM somewhere once you’ve completed these tasks so you don’t have to repeat them in case you want to compile the latest version of MIMWAL in the future (preferably with MIM installed so you can perform the verification as well).
  • Be sure to download the ‘AMD 64’ version of the FIM patch file if you’re installing MIMWAL onto a Windows Server 64-bit O/S (which pretty much everyone is).  I had forgotten that old 64 bit patches used to be titled after the AMD 64-bit chipset, and I instead wasted time looking for the newer ‘x64’ title of the patch which doesn’t exist for this FIM patch.


‘Bread and Butter’ MIMWAL Workflows

I’ll go through two examples of MIMWAL based Action Workflows here that I use for almost every FIM/MIM implementation.
These action workflows have been part of previous versions of the Workflow Activity Library, and you can find them among the MIMWAL Action Workflow templates.

I’ll now run through real world examples in using both Workflow templates.

Update Resource Workflow

The Update Resource MIMWAL action workflow is one I use all the time to link two different objects together – often linking a user object with a new, custom ‘location’ object.
For new users, I execute this MIMWAL workflow when a user first ‘Transitions In’ to a Set whose dynamic membership is “User has Location Code”.
For users changing location, I also execute this workflow using a Request-based MPR triggered by the Synchronisation Engine changing the “Location Code” on a user.
This workflow looks like the following:
The XPath Filter is:  /Location[LocationCode = ‘[//Target/LocationCode]’]
When you target the Workflow at the User object, it will use the Location Code stored in the User object to find the equivalent Location object and store it in a temporary ‘Query’ object (referenced by calling [//Queries]):
The full value expression used above, for example, sending the value of the ‘City’ attribute stored in the Location object into the User object is:
This custom expression determines if there is a value stored in the ‘[//Queries]’ object (i.e. a copy of the Location object found earlier by the query).  If there is a value, it sends it to the City attribute of the user object, i.e. the ‘target’ of the Workflow.  If there is no value, it sends a ‘null’ value to wipe out the existing value (in case a user changes location but the new location doesn’t have a value for one of the attributes).
It is also a good idea (not seen in this example) to send the Location’s Location Code to the User object and store it in a ‘Reference’ attribute (‘LocationReference’).  That way in future, you can directly access the Location object attributes via the User object using an example XPath:  [//Person/LocationReference/City].
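For readers who think better in code, the logic of this workflow can be sketched in plain Python.  This is illustrative pseudologic only – not MIM or MIMWAL syntax – and the sample location codes and attribute names are made up for the example:

```python
# Illustrative sketch of the Update Resource workflow logic (not MIM syntax).
# 'locations' stands in for the FIM Service Location objects matched by the
# XPath filter /Location[LocationCode = '[//Target/LocationCode]'].

locations = {
    "SYD01": {"LocationCode": "SYD01", "City": "Sydney"},
    "MEL01": {"LocationCode": "MEL01", "City": "Melbourne"},
}

def update_user_from_location(user, locations):
    # [//Queries] equivalent: the Location object found by the XPath filter.
    query = locations.get(user.get("LocationCode"))
    # Custom expression equivalent: flow City if a Location was found,
    # otherwise flow null so a stale value from the old location is wiped.
    user["City"] = query["City"] if query else None
    return user

mover = {"AccountName": "mpearn", "LocationCode": "SYD01", "City": "Brisbane"}
print(update_user_from_location(mover, locations)["City"])  # Sydney
```

The key behaviour to notice is the null flow: without it, a user moving to a location that lacks a City value would silently keep their old city.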

Generate Unique Value from AD (e.g. for sAMAccountName, CN, mailnickname)

I’ve previously worked in complex Active Directory and Exchange environments, where there can often be a lot of conflict when it comes to the following attributes:

  • sAMAccountName (used progressively less and less these days)
  • User Principal Name (used progressively more and more these days, although communicated to the end user as ’email address’)
  • CN (or ‘common name’ value, which forms part of the LDAP Distinguished Name (DN) value.  Side note: this is the attribute most commonly mistaken for the ‘Display Name’ by admins viewing it in AD Users & Computers)
  • Mailnickname (used by some Exchange environments to generate a primary SMTP address or ‘mail’ attribute values)

All AD environments require a unique sAMAccountName for any AD account to be created (otherwise you’ll get a MIM export error into AD if there’s already an account with that value).  An object also requires a CN value that is unique within its OU, otherwise the object cannot be created – and there is a much greater chance of conflict if you export all user accounts for a large organization to the same OU.
UPNs are generally unique if you copy a person’s email address, but not always – sometimes it’s best to combine a unique mailnickname with an appended suffix and send that value to the UPN.  Again, it depends on the structure and naming of your AD, and the applications that integrate with it (Exchange, Office 365 etc.).
Note: the default MIMWAL Generate Unique Value template assumes the FIM Service account has the permissions required to perform LDAP lookups against the LDAP path you specify.  There are ways to enhance the MIMWAL to add in an authentication username/password field in case there is an ‘air gap’ between the FIM server’s joined domain and the target AD you’re querying (a future blog post).
In this example in using the ‘Generate Unique Value’ MIMWAL workflow, I tend to execute as part of a multi-step workflow, such as the one below (Step 2 of 3):sam1
I use the workflow to generate an LDAP query to look for existing accounts, and then send the resulting unique value to the [//WorkflowData/AccountName] attribute.
The LDAP filter used in this example looks at all existing sAMAccountNames across the entire domain to look for an existing account:   (&(objectClass=user)(objectCategory=person)(sAMAccountName=[//Value]))
The workflow will also query the FIM Service database for existing user accounts (that may not have been provisioned yet to AD) using the XPath filter:  /Person[AccountName = ‘[//Value]’]
The Uniqueness Key Seed in this example is ‘2’, which essentially means that if you cannot resolve a conflict using other attribute values (such as a user’s middle name, or more letters of a first or last name), then you can use this ‘seed’ number to break the conflict as a last resort.  The number increments by 1 for each conflict, so if there’s already a ‘michael.pearn’ and a ‘michael.pearn2’, for example, the next candidate to test will be ‘michael.pearn3’, and so on.
The second half of the workflow shows the rules to use to generate sAMAccountName values, and the rules in order in which to break the conflict.  In this example (which is a very simple example), I use an employee’s ‘ID number’ to generate an AD account.  If there is already an account for that ID number, then this workflow will generate a new account with the string ‘-2’ added to the end of it:
Value Expression 1 (highest priority): NormalizeString([//Target/EmployeeID])
Value Expression 2 (lowest priority):  NormalizeString([//Target/EmployeeID] + “-” + [//UniquenessKey])
NOTE: The function ‘NormalizeString’ is a new MIMWAL function that is also used to strip out any diacritic characters.  More information can be found here: https://github.com/Microsoft/MIMWAL/wiki/NormalizeString-Function
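As a rough analog of what that diacritic stripping does, here’s a hedged Python sketch using Unicode decomposition (the real NormalizeString implementation may differ in detail):

```python
import unicodedata

def strip_diacritics(value: str) -> str:
    # Decompose each accented character into its base letter plus a combining
    # mark (NFD), then drop the combining marks - a rough approximation of
    # what MIMWAL's NormalizeString does to diacritics.
    decomposed = unicodedata.normalize("NFD", value)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

print(strip_diacritics("Renée Müller"))  # Renee Muller
```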

Microsoft have posted other examples of Value Expressions to use that you could follow here: https://github.com/Microsoft/MIMWAL/wiki/Generate-Unique-Value-Activity
My preference is to use as many value expressions as you can to break the conflict before having to use the uniqueness key.  Note: the sAMAccountName has a default 20 character limit, so the ‘left’ function is often used to trim the number of characters taken from a person’s name, e.g. the left 8 characters of a first name combined with the left 11 characters of a last name (not forgetting to save a character for the uniqueness seed deadlock breaker!).
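Putting the truncation, the priority-ordered value expressions and the uniqueness key seed together, here’s a hedged Python sketch of the overall conflict-breaking loop.  It is illustrative only – the ‘first.last’ format and attribute choices are assumptions for the example, not MIMWAL syntax:

```python
def left(value, n):
    """MIMWAL-style Left(): take the first n characters."""
    return value[:n]

def generate_unique_sam(first, last, existing, max_len=20, seed=2):
    """Sketch of the Generate Unique Value flow: try the value expressions
    in priority order, then fall back to an incrementing uniqueness key."""
    first, last = first.lower(), last.lower()
    base = left(first, 8) + "." + left(last, 11)   # value expression 1
    if base not in existing:
        return base
    while True:                                    # uniqueness key fallback
        # Trim the base so base + key still fits the 20 character limit.
        candidate = left(base, max_len - len(str(seed))) + str(seed)
        if candidate not in existing:
            return candidate
        seed += 1

taken = {"michael.pearn", "michael.pearn2"}
print(generate_unique_sam("Michael", "Pearn", taken))  # michael.pearn3
```

With ‘michael.pearn’ and ‘michael.pearn2’ already taken, the sketch returns ‘michael.pearn3’, matching the seed behaviour described above.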
Once the Workflow step is executed, I then send the value (using [//WorkflowData/AccountName]) to the AD Sync Rule, to pass to the outbound ‘AccountName –> sAMAccountName’ AD attribute flow.

More ideas for using MIMWAL

In my research on MIMWAL, I’ve found some very useful links to sample complex workflow chains that use the MIMWAL ‘building block’ action workflows and combine them to do complex tasks.
Some of those ideas, from Microsoft’s own MSDN blogs, can be found here: https://blogs.msdn.microsoft.com/connector_space/2016/01/15/the-mimwal-custom-workflow-activity-library/
These include:

  • Create Employee IDs
  • Create Home Directories
  • Create Admin Accounts

I particularly like the idea of using the ‘Create Employee ID’ example workflow, something that I’ve only previously done outside of FIM/MIM, for example with a SQL Trigger that updates a SQL database with a unique number.

Setting up your SP 2013 Web App for MIM SP1 & Kerberos SSO

I confess: getting a Microsoft product based website working with Kerberos and Single Sign On (i.e. without authentication prompts from a domain joined workstation or server) feels somewhat of a ‘black art’ for me.
I’m generally ok with registering SPNs, SSLs, working with load balancing IPs etc, but when it comes to the final Internet Explorer test, and it fails and I see an NTLM style auth. prompt, it’s enough to send me into a deep rage (or depression or both).
So, recently, I’ve had a chance to review the latest guidance on getting the Microsoft Identity Manager (MIM) SP1 Portal setup on Windows Server 2012 R2 and SharePoint Foundation 2013 SP1 for both of the following customer requirements:

  • SSL (port 443)
  • Single Sign On from domain joined workstations / servers

The official MIM guidance here is a good place to start if you’re building out a lab (https://docs.microsoft.com/en-us/microsoft-identity-manager/deploy-use/prepare-server-sharepoint).  There’s a major flaw in this guidance for SSL & Kerberos SSO, however – it’ll work, but you’ll still get the NTLM style auth. prompt if you configure the SharePoint Web Application initially under port 82 (as I did, following the guidance strictly) and then, in the words of the article: “Initially, SSL will not be configured. Be sure to configure SSL or equivalent before enabling access to this portal.”
Unfortunately, this article doesn’t elaborate on how to configure Kerberos and SSL post FIM portal installation, and to then get SSO working across it.
To further my understanding of the root cause, I built out two MIM servers in the same AD:

  • MIM server #1 FIM portal installed onto the Web Application on port 82, with SSL configured post installation with SSL bindings in IIS Manager and a new ‘Intranet’ Alternate Access Mapping configured in the SharePoint Central Administration
  • MIM server #2: FIM portal installed onto a Web Application built on port 443 (no Alternate Access Mappings specified) and SSL bindings configured in IIS Manager.

After completion, I found MIM Server #1 was working with Kerberos and SSO under port 82, but each time I accessed it using the SSL URL I configured post installation, I would get the NTLM style auth. prompt regardless of workstation or server used to access it.
With MIM server #2, I built the web application purely into port 443 using this command:
New-SPWebApplication -Name "MIM Portal" -ApplicationPool "MIMAppPool" -ApplicationPoolAccount $dbManagedAccount -AuthenticationMethod "Kerberos" -SecureSocketsLayer:$true -Port 443 -URL https://<snip>.mimportal.com.au
The key switches are:

  • -SecureSocketsLayer:$true
  • -Port 443
  • -URL (with URL starting with https://)

I then configured SSL after this SharePoint Web Application command in IIS Manager with a binding similar to this:
A crucial check is to test the MIM Portal FQDN (without the /identitymanagement path) you’re intending to use after you configure the SharePoint Web Application and bind the SSL certificate in IIS Manager, but BEFORE you install the FIM Service and Portal.
So in summary test this:

  • https://mimportal.somewhere.com.au

Verify it working with SSO, then install the FIM Portal to get this URL working:

  • https://mimportal.somewhere.com.au/identitymanagement

The first test should appear as a generic ‘Team Site’ in your browser without authentication prompt from a domain joined workstation or server if it’s working correctly.
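One quick, browser-independent way to see what the server is actually offering is to inspect the WWW-Authenticate headers on the 401 challenge (e.g. via `curl -v` against your portal URL) and check for ‘Negotiate’.  Here’s a hedged Python sketch of that check – the header values shown are illustrative examples, not captured from a real MIM portal:

```python
def offers_kerberos(www_authenticate_values):
    """Given the WWW-Authenticate header values from an HTTP 401 challenge,
    report whether the server offers Negotiate (Kerberos-capable)
    authentication rather than NTLM only."""
    schemes = {v.split()[0].lower() for v in www_authenticate_values if v.strip()}
    return "negotiate" in schemes

# Example header sets you might see on the 401 challenge:
print(offers_kerberos(["Negotiate", "NTLM"]))  # True  - Kerberos SSO possible
print(offers_kerberos(["NTLM"]))               # False - NTLM-only prompt territory
```

If ‘Negotiate’ is missing from the challenge, no amount of browser zone fiddling will get you Kerberos SSO – the server side (SPNs, the Web Application authentication method) is where to look.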
The other item to note is that I’ve seen guidance suggesting this won’t work from a browser locally on the MIM server – something I haven’t observed in any of my tests.  All my results are consistent whether using a browser from a domain joined workstation, a remote domain joined server, or the domain joined MIM server itself; there’s no difference in SSO behaviour in my experience.   Be sure to add the MIM portal to the browser’s ‘Intranet’ zone as well for your testing.
Also, I never had to configure ‘Require Kerberos = True’ in the web.config, which used to be part of the guidance for FIM and previous versions of SharePoint.  That might work as well, but it wouldn’t explain the port 82/443 differences for MIM Server #1 (i.e. why would it work for 443 and not 82?).
I’ve seen other MIM expert peers configure their MIM sites using custom PowerShell installations of SharePoint Foundation, putting the MIM portal under port 80 (overriding SharePoint Foundation 2013’s default behaviour of taking over port 80 during its wizard based installation).  I’m sure that might be a valid strategy as well, and SSO may then work with SSL after further configuration, but I personally can’t attest to that.
Good luck!

Avoiding Windows service accounts with static passwords using GMSAs

One of the benefits of an Active Directory (AD) running with only Windows Server 2012 domain controllers is the use of ‘Group Managed Service Accounts’ (GMSAs).
GMSAs can essentially run applications and services in much the same way as an Active Directory user account used as a ‘service account’.  GMSAs store their 120 character passwords using the Key Distribution Service (KDS) on Windows Server 2012 DCs and periodically refresh those passwords for extra security (the refresh interval is configurable).
This essentially provides the following benefits:

  1. Eliminates the need for Administrators to store static service accounts passwords in a ‘password vault’
  2. Increased security as the password is refreshed automatically, and the refresh interval is configurable (you can tell it to refresh the password every day if you want to)
  3. The password is not known even to administrators, so there is no chance for attackers to hijack the GMSA account and ‘hide their tracks’ by logging in as that account on other Windows Servers or applications
  4. An extremely long character password which would require a lot of computing power & time to break
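To put ‘a lot of computing power’ into rough numbers, here’s a quick back-of-envelope sketch.  The 64-symbol alphabet is an assumption for illustration – the real GMSA password alphabet is larger still:

```python
import math

# Back-of-envelope only: assume a random 120-character password drawn from a
# 64-symbol alphabet (an assumed figure; GMSA passwords draw from more symbols).
alphabet_size = 64
length = 120

search_space = alphabet_size ** length
entropy_bits = length * math.log2(alphabet_size)

print(f"~{entropy_bits:.0f} bits of entropy")                       # ~720 bits
print(f"search space has {len(str(search_space))} decimal digits")  # 217 digits
```

Even at 720 bits under this conservative assumption, brute force is far beyond any practical attack – compare that with the 40-60 bits of a typical human-chosen password.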

There is still overhead in using a GMSA versus a traditional AD user account:

  1. Not all applications or services support GMSAs, so if an application does not document its supportability, you will need to test it in a lab
  2. Increased overhead in the upfront configuration and testing compared with a simple AD user account creation
  3. GMSA bugs (see Appendix)

I recently had some time to develop & run a PowerShell script under Task Scheduler, and I wanted to use a GMSA to run the job under a service account whose password would not be known to any administrator and would refresh automatically (every 30 days or so).
There are quite a few blogs out there on GMSA, including this excellent PFE blog from MS from 2012 and the official TechNet library.
My blog is really a ‘beginners guide’ to GMSA in working with it in a simple Task Scheduler scenario.  I had some interesting learnings using GMSA for the first time that I thought would prove useful, plus some sample commands in other blogs are not 100% accurate.
This blog will run through the following steps:

  1. Create a GMSA and link it to two Windows Servers
  2. ‘Install’ the GMSA on the Windows Servers and test it
  3. Create a Task Scheduler job and have it execute under the GMSA
  4. Force a GMSA password refresh and verify the Task Scheduler job will still execute

An appendix at the end briefly discusses issues I’m still having when running a GMSA in conjunction with an Active Directory security group (i.e. using an AD group instead of direct server memberships on the GMSA object).
A GMSA essentially shares many attributes with a computer account in Active Directory, but it still operates as a distinct AD object class.   Its use is therefore still limited to a handful of Windows applications and services.   It seems the following apps and services can run under a GMSA, but I’d first check and test to ensure yours can:

  • A Windows Service
  • An IIS Application Pool
  • SQL 2012
  • ADFS 3.0 (although the creation and use of GMSA using ADFS 3.0 is quite ‘wizard driven’ and invisible to admins)
  • Task Scheduler jobs

This blog will create a GMSA manually, and allow two Windows Servers to retrieve the password to that single GMSA and use it to operate two Task Schedule jobs, one per each server.

Step 1: Create your KDS root key & Prep Environment

A KDS root key is required to work with GMSAs.  If you’re in a shared lab, one may already have been generated.  You can check with the PowerShell command ‘Get-KdsRootKey’ (run under ‘Run As Administrator’ with Domain Admin rights):
If you get output similar to the following, you may skip this step for the entire forest:
If there is no KDS root key present (or it has expired), the command to create a KDS root key for the entire AD forest (from which all GMSAs derive their passwords) is as follows:
Add-KDSRootKey –EffectiveImmediately
The ‘EffectiveImmediately’ switch is documented as potentially taking up to 10 hours to take effect, to allow for Domain Controller replication; however you can speed up the process (if you’re in a lab) by following this link.
The next few steps will assume you have the following configured:

  • Domain Admins rights
  • PowerShell loaded with ‘Run as Administrator’
  • Active Directory PowerShell module loaded with command:
    • import-module activedirectory


Step 2: Create a GMSA and link it to two (or more) Windows Servers

This step creates the GMSA object in AD, and links two Windows Servers to be able to retrieve (and therefore login) as that GMSA on those servers to execute the Task Schedule job.
The following commands retrieve the two computer objects, create the GMSA, and then verify which principals can retrieve its password:
$server1 = Get-ADComputer <Server1 NETBIOS name>
$server2 = Get-ADComputer <Server2 NETBIOS name>
New-ADServiceAccount -name gmsa-pwdexpiry -DNSHostName gmsa-pwdexpiry.domain.lab -PrincipalsAllowedToRetrieveManagedPassword $server1,$server2
Get-ADServiceAccount -Identity gmsa-pwdexpiry -Properties PrincipalsAllowedToRetrieveManagedPassword
The output of the ‘Get-ADServiceAccount’ command is your verification step: ensure the ‘PrincipalsAllowed…’ value contains the distinguished names of all the Windows Servers that will use the GMSA (the ones specified as variables).
The GMSA object will get added by default to the ‘Managed Service Accounts’ container object in the root of the domain (unless you specify the ‘-path’ switch to tell it to install it to a custom OU).

  1. To reiterate, many blogs point out that you can use the switch ‘PrincipalsAllowedToRetrieveManagedPassword’ (almost the longest switch name I’ve ever encountered!) to specify an ‘AD group name’.   I’m having issues using that switch with an AD group instead of direct computer account memberships on the GMSA.   I run through those issues in the Appendix.
  2. A lot of blogs state you can just specify the server NETBIOS names for the ‘principals’ switch; however, I’ve found you need to first retrieve the AD objects using the ‘Get-ADComputer’ commands shown above
  3. I did not specify a Service Principal Name (SPN) as my Task Scheduler job does not require one; be sure to do so if you’re executing an application or service that requires one
  4. I accepted the default password refresh interval of 30 days without specifying a custom password refresh interval (viewable in the attribute value: ‘msDS-ManagedPasswordInterval’).  Custom refresh intervals can only be specified during GMSA creation from what I’ve read (a topic for a future blog!).
  5. Be sure to separate the two computer account variables with a comma and no space

OPTIONAL Step 2A: Adding or Removing Computers on the GMSA

If you’ve created the GMSA but forgot to add a server account, modifying the computer account membership of a GMSA is where I found the guidance from Microsoft a little confusing. In my testing, you cannot add or remove individual computers on the GMSA without re-adding every computer back into the membership list.
You can use this command to update an existing GMSA, but you will still need to specify EVERY computer that should be able to retrieve the password for that GMSA.
For example, if I wanted to add a third server to use the GMSA I would still need to re-add all the existing servers using the ‘Set-ADServiceAccount’ command:
$server1 = Get-ADComputer <Server1 NETBIOS name>
$server2 = Get-ADComputer <Server2 NETBIOS name>
$server3 = Get-ADComputer <Server3 NETBIOS name>
Set-ADServiceAccount -Identity gmsa-pwdexpiry -PrincipalsAllowedToRetrieveManagedPassword $server1,$server2,$server3
(Another reason why I’d rather use an AD group instead!)

Step 3: ‘Install’ the Service Account

According to Microsoft TechNet, the ‘Install-ADServiceAccount’ “makes the required changes locally that the service account password can be periodically reset by the computer”.
I’m not 100% sure what these local changes to the Windows Server are, but after you run the command, the Windows Server will have permission to reset the password of the GMSA.
You run this command on a Windows Server (which should already be in the GMSA’s ‘PrincipalsAllowed…’ list):
Install-ADServiceAccount gmsa-pwdexpiry
After you run this command, verify that both the ‘PrincipalsAllowed…’ membership and the ‘Install’ step are properly configured for this Windows Server:
Test-ADServiceAccount gmsa-pwdexpiry
A value of ‘True’ for the Test command means that this server can now use the GMSA to execute the Task Scheduler.  A value of ‘False’ means that either the Windows Server was not added to the ‘Principals’ list (using either ‘New-ADServiceAccount’ or ‘Set-ADServiceAccount’) or the ‘Install-ADServiceAccount’ command did not execute properly.
Finally, in order to execute Task Scheduler jobs, be sure also to add the GMSA to the local security policy (or a GPO) to grant it the right ‘Log on as batch job’:
Without this last step, the GMSA account will log in to the Windows Server properly, but the Task Scheduler job will not execute because the GMSA will not have permission to do so.  If the Windows Server is a Domain Controller, you will need to use a GPO (either the ‘Default Domain Controllers’ GPO or a new GPO).
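If you’d rather script the user-rights assignment than use the GUI, a rough sketch with secedit follows (file paths are illustrative; ‘Log on as batch job’ corresponds to the SeBatchLogonRight privilege):

```powershell
# Export the current user-rights assignments to an INF file
secedit /export /cfg C:\Temp\rights.inf /areas USER_RIGHTS

# Edit C:\Temp\rights.inf: find the SeBatchLogonRight line and append
# the GMSA, e.g.:
#   SeBatchLogonRight = <existing entries>,DOMAIN\gmsa-pwdexpiry$

# Re-apply the edited user-rights assignments
secedit /configure /db C:\Temp\rights.sdb /cfg C:\Temp\rights.inf /areas USER_RIGHTS
gpupdate /force   # refresh policy so the right takes effect
```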

Step 4:  Create the Task Schedule Job to run under GMSA

Windows Task Scheduler (at least on Windows Server 2012) does not allow you to specify a GMSA using the GUI; instead, you have to create the Task Scheduler job using PowerShell.  If you try to create the job in the GUI, it will prompt you for a password when you go to save it (a password you will never have!)
The following four commands will instead create the Task Schedule job to execute an example PowerShell script and specifies the GMSA object to run under (using the $principal object):
$action = New-ScheduledTaskAction powershell.exe -Argument "-file c:\Scripts\Script.ps1" -WorkingDirectory "C:\WINDOWS\system32\WindowsPowerShell\v1.0"
$trigger = New-ScheduledTaskTrigger -At 12:00 -Daily
$principal = New-ScheduledTaskPrincipal -UserID domain.lab\gmsa-pwdexpiry$ -LogonType Password -RunLevel Highest
Register-ScheduledTask myAdminTask -Action $action -Trigger $trigger -Principal $principal

  1. Be sure to replace ‘domain.lab’ with the FQDN of your domain, and adjust other variables such as the script path and name
  2. The switch ‘-RunLevel Highest’ is optional; it just sets the job to ‘Run with highest privileges’.
  3. Be sure to append a ‘$’ symbol to the GMSA name in ‘-UserID’.  I also had to specify the domain’s FQDN instead of its NETBIOS name.
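Once registered, you can trigger the job immediately rather than waiting for the schedule, which is a quick sanity check that the GMSA identity works (a sketch, assuming the example task name above):

```powershell
# Run the registered task now and inspect its last result
# (a LastTaskResult of 0 means success)
Start-ScheduledTask -TaskName myAdminTask
Get-ScheduledTask -TaskName myAdminTask | Get-ScheduledTaskInfo |
    Select-Object TaskName, LastRunTime, LastTaskResult
```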

Step 5: Kick the tyres! (aka test test test)

Yes, when you’re using a GMSA you need to be confident that you’re leaving behind something that will keep working even when the password expires.
Some common tasks I like to perform to verify the GMSA is running include:

Force a GMSA password change:

You can force the GMSA to reset its password by running the command:
Reset-ADServiceAccountPassword gmsa-pwdexpiry
You can then verify the date and time of the last password set by running the command:
Get-ADServiceAccount -Identity gmsa-pwdexpiry -Properties passwordlastset
The value appears in the ‘PasswordLastSet’ field.
After forcing a password reset, I would initiate a Task Schedule job execution and be sure that it operates without failure.

Verify Last Login Time

You can also verify that the GMSA is logging in to the server properly by checking the ‘LastLogonDate’ value:
Get-ADServiceAccount -Identity gmsa-pwdexpiry -Properties LastLogonDate

View all Properties

Finally, if you’re curious about what else the object stores, the best method to review all values of the GMSA is:
Get-ADServiceAccount -Identity gmsa-pwdexpiry -Properties *
I would not recommend using ADSIEdit to review most GMSA attributes, as I find that GUI is limited in showing the correct values for these objects; for example, viewing the ‘principals…’ value (called msDS-GroupMSAMembership in ADSI) does not display in any readable form.

Appendix:  Why can’t I use an AD group with the switch: PrincipalsAllowedTo..?

Simply: you can! Just a word of warning: I’ve been having intermittent issues in my lab when using AD groups, so I decided to base this blog purely on direct computer account memberships on the GMSA, as I’ve had no issues with that approach.
I find that the commands ‘Install-ADServiceAccount’ and ‘Test-ADServiceAccount’ sometimes fail when I use group memberships.  Feel free to try it, however; it may be due to issues in my lab.  In preparing this post, I could not capture screenshots of the issues as they had mysteriously resolved themselves overnight (the worst kind of bug: an intermittent one!)
You can easily run the command to create a GMSA with a security group membership (e.g. ‘pwdexpiry’) as the sole ‘PrincipalsAllowed…’ object:
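A group-based creation would look something like this (a sketch, assuming a security group named ‘pwdexpiry’ already exists and contains the relevant computer accounts):

```powershell
# Grant password-retrieval rights to an AD group instead of
# individual computer accounts
$group = Get-ADGroup pwdexpiry
New-ADServiceAccount -Name gmsa-pwdexpiry -DNSHostName gmsa-pwdexpiry.domain.lab `
    -PrincipalsAllowedToRetrieveManagedPassword $group
```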
Then try running ‘Install-ADServiceAccount’ and ‘Test-ADServiceAccount’ on the Windows Servers whose computer accounts are members of that group.
Good luck!

Filtering images across a custom FIM / MIM ECMA import MA

A recent customer had a special request when I was designing and coding a new ECMA 2.2 based Management Agent (MA) or “Connector” for Microsoft Forefront Identity Manager (FIM).

(On a sidenote: FIM’s latest release is now Microsoft Identity Manager or “MIM”, but my customer hadn’t upgraded to the latest version).

Kloud was previously engaged to write a new ECMA-based MA for Gallagher 7.5 (a door security card system) to facilitate the provisioning and removal of access tied to an HR system.

Whilst the majority of the ECMA was ‘export’ based, i.e. FIM controlled most of the Gallagher data, one of the attributes we were importing back from this security card system was the person’s picture printed on these cards.


It seems that in the early days of the Gallagher system (maybe before digital cameras were invented?), they used to upload a static logo (similar to a WiFi symbol) in place of a person’s face.  It was only recently they changed their internal processes to upload the actual profile picture of someone rather than this logo.

The system has been upgraded a number of times, but the data migrated each time without anyone going back to update the existing people’s profile pictures.

This picture would then be physically printed on their security cards; for people whose cards showed their actual faces, the customer also wanted the picture to appear in Outlook and SharePoint.

The special request was that they wanted me to ‘filter out’ images that were just logos, and only import profile pictures into FIM from Gallagher (and then exported out of FIM into Active Directory and SharePoint).

There were many concerns with this request:

  • We had limited budget and time, so removing the offending logos manually was going to be very costly and difficult (not to mention very tiring for that person across 10,000 identities!)
  • Gallagher stores the picture in its database as a ‘byte’ value (rather than the picture filename used for the import).  That format is also what the Gallagher web service API exposes for the picture attribute.
  • Gallagher uses a ‘cropping system’ to ensure that only a 240 x 160 pixel image is selected from the much larger logo source file.  Moving the ‘crop window’ up, down, left or right changes the byte value stored in Gallagher (I know, because I tested almost 20 different combinations!)
  • The logo file itself had multiple file versions, some of which had been cropped prior to uploading into Gallagher.


My colleague Boris pointed me to an open source Image Comparison DLL written by Jakob Krarup (which you can find here).

It’s called ‘XNA.FileComparison’ and it works superbly well.  Basically, this code computes histogram values from each picture, letting you compare two different pictures and calculate a ‘percentage difference’ value between the two.

One of the methods included in this code, PercentageDifference(), compares two picture objects in C# and returns a ‘percentage difference’ value, which you can use to determine whether a picture is a logo or a human face (by comparing each image imported into the Connector Space against a reference logo picture stored on the FIM server).
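The library’s internals aren’t reproduced here, but the general histogram-comparison idea can be sketched as follows (a simplified illustration, not XNA.FileComparison’s actual algorithm; function names and the comparison formula are my own): build a normalised grayscale histogram for each image and derive a difference percentage from how little the histograms overlap.

```powershell
Add-Type -AssemblyName System.Drawing

function Get-GrayHistogram([string]$Path) {
    # Build a 256-bucket grayscale histogram, normalised by pixel count
    $bmp  = New-Object System.Drawing.Bitmap $Path
    $hist = New-Object double[] 256
    for ($x = 0; $x -lt $bmp.Width; $x++) {
        for ($y = 0; $y -lt $bmp.Height; $y++) {
            $p = $bmp.GetPixel($x, $y)
            $gray = [int](0.299 * $p.R + 0.587 * $p.G + 0.114 * $p.B)
            $hist[$gray]++
        }
    }
    $total = $bmp.Width * $bmp.Height
    $bmp.Dispose()
    for ($i = 0; $i -lt 256; $i++) { $hist[$i] /= $total }
    ,$hist
}

function Get-PercentageDifference([string]$PathA, [string]$PathB) {
    $a = Get-GrayHistogram $PathA
    $b = Get-GrayHistogram $PathB
    # Sum of absolute bucket differences ranges 0..2; halve and scale to 0..100
    $diff = 0.0
    for ($i = 0; $i -lt 256; $i++) { $diff += [math]::Abs($a[$i] - $b[$i]) }
    [math]::Round($diff / 2 * 100, 1)
}

# Hypothetical usage: block the image when it is too similar to the logo
# if ((Get-PercentageDifference 'C:\import\pic.jpg' 'C:\ref\logo.jpg') -lt 70) { 'Filtered' } else { 'Allowed' }
```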

To implement it, I did the following:

  1. Downloaded the sample ‘XNA.FileComparison’ executable (.exe) and ran a basic comparison between some source images and the reference logo image, and looked at the percentage difference values that the PercentageDifference() method would be returning.  This gave me an idea of how well the method was comparing the pictures.
  2. Downloaded the source Visual Studio solution (.SLN) file and re-compiled it for 64-bit systems (the compiled DLL version on the website only works on x86 architectures)
  3. Added the DLL as a Project reference to a newly created Management Agent Extension, whose source code you can find below

In my Management Agent code, I then used this PercentageDifference() method to compare each Connector Space image against a reference image (located in the Extensions folder of my FIM Synchronization Service).   The percentage the method returned, measured against a configurable threshold, then determined whether to allow the image into the Metaverse (and if enabled copy it to the ‘Allowed’ folder) or block it from reaching the Metaverse (and if enabled copy it to the ‘Filtered’ folder).

I also exported each image’s respective threshold value to a file called “thresholds.txt” in each of the two different folders:  ‘Allowed’ and ‘Filtered’.

Each of the options above were configurable in an XML file such as:

  • Export folder locations for Allowed & Filtered pictures
  • Threshold filter percentage
  • A ‘do you want to export images?’ Boolean Export value (True/False), allowing you to turn off the image export on the Production FIM synchronization server once a suitable threshold value was found (e.g. 75%).

A sample XML that configures this option functionality can be seen below:
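A hypothetical equivalent of that XML (element names invented for illustration; the original file’s schema may differ) might look like:

```xml
<Options>
  <AllowedFolder>D:\FIM\Pictures\Allowed</AllowedFolder>
  <FilteredFolder>D:\FIM\Pictures\Filtered</FilteredFolder>
  <ThresholdPercent>70</ThresholdPercent>
  <ExportImages>True</ExportImages>
</Options>
```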


Testing and Results

To test the method, I would run a Full Import on the Gallagher MA to get all pictures values into the Connector Space.  Then I would run multiple ‘Full Synchronizations’ on the MA to get both ‘filtered’ and ‘allowed’ pictures into the two folder locations (whose locations are specified in the XML).

After each ‘Full Synchronization’ we reviewed all threshold values (thresholds.txt) in each folder and used the ‘large icons’ view in Windows Explorer to ensure all people’s faces ended up in the ‘Allowed’ folder and all logo-type images ended up in the ‘Filtered’ folder.   I made sure to delete all pictures and the thresholds.txt in each folder between runs so I didn’t get confused on the next one.  If a profile picture ended up in the ‘Filtered’ folder or a logo ended up in the ‘Allowed’ folder, I’d modify the threshold value in the XML and run another Full Synchronization attempt.

Generally, the percentage difference for most ‘Allowed’ images was around 90-95% (i.e. the person’s face value was 90-95% different than the reference logo image).

What was interesting was that some allowed images got down as low as only 75% (ie. 75% different compared to the logo), so we set our production threshold filter to be 70%.  The reason some people’s picture was (percentage wise) “closer” to the logo, was due to some people’s profile pictures having a pure white background and the logo itself was mostly white in colour.

The highest ‘difference’ value for a logo image was 63% (i.e. the difference between that person’s logo image and the reference logo image was 63%, meaning it was a very “bad” logo image – usually heavily cropped, showing more white space than usual).

So the filter threshold of 70% fit roughly halfway between 63% and 75%.  This ended up with a 100% success rate across about 6,000 images, which isn’t too shabby.

If, in the future, a person’s face were less than 70% different from the logo (so it did not meet the threshold and was unexpectedly filtered out), the customer had the choice to update the Management Agent configuration XML to lower the threshold below 70%, or use a different picture.

Some Notes re: Code

Here are some ‘quirks’ related to my environment which you’ll see in the MA Extension code:

  • A small percentage of people in Gallagher did not have an Active Directory account (which I used for the image export filename), so in those cases I used a large random number as the filename instead (I was in a hurry!)
  • I’m writing to a custom Gallagher event log name, which saves all the logs to that custom Application event log (in case you’re trying to find the logs in the generic ‘Application’ Event Viewer log)
  • The ‘thresholds.txt’ file name and the location of the Options XML are hard coded (beware if you’re using a D:\ drive or another letter for the Synchronization Service installation path!)

Management Agent Extension Code



