How to use the FIM/MIM Azure Graph Management Agent for B2B Member/Guest Sync between Azure Tenants

Introduction

Just landed from the Microsoft Identity Manager Engineering Team is a new Management Agent built specifically for managing Azure Users, Groups and Contacts.

Microsoft have documented a number of scenarios for implementing the management agent. The scenarios the MA has been built for are valid, and I have customers that will benefit from the new MA immediately. There is however another scenario I’m seeing from a number of customers that is possible but not detailed in the release notes: B2B Sync between Azure Tenants, using Microsoft Identity Manager to automate the creation of Guests in an Azure Tenant.

This could be one-way or two-way depending on what you are looking to achieve. Essentially this is the Azure equivalent of using FIM/MIM for Global Address List Sync.

B2B MA.png

Overview

The changes required, relative to the documentation provided with the Management Agent, are minimal. Essentially;

  • ensure you enable Write Permissions to the Application you create in the AAD Tenant you will be writing to
  • Enable Invite Guest to the Organization permission on the AAD Application
  • Create an Outbound Sync Rule to an AAD Tenant with the necessary mandatory attributes
  • Configure the Management Agent for Export Sync Profiles

In the scenario I’m detailing here I’m taking a number of users from Org2 and provisioning them as Guests in Org1.

What I’m detailing here supplements the Microsoft documentation. For configuring the base MA definitely check out their documentation here.

Microsoft Graph Permissions

When setting up the Graph Permissions you will need to have Write permissions to the Target Azure AD for at least Users. If you plan to also synchronize Groups or Contacts you’ll need to have Write permissions for those too.

Graph Permissions 1

In addition, as we will be automating the invitation of users from one Tenant to another, we will need the permission ‘Invite guest users to the organization’.

Graph Permissions 2

With those permissions selected and while authenticated as an Administrator select the Grant Permissions button to assign those permissions to the Application.

Grant Permissions 1Grant Permissions 2

Repeat this in both Azure AD Tenants if you are going to do bi-directional sync. If not, you only need write and invite permissions on the Tenant you will be creating Guest accounts in.

Creating the Import/Inbound Sync Rules for the Azure Tenants

Here is an example of my Import Sync Rules to get Members (Users) in from an Azure Tenant. I have an inbound sync rule for both Azure Tenants.

Sync Rules.PNG

Make sure you have ‘Create Resource in FIM’ configured on the source (or both if doing bi-directional sync) Graph Connector.

Sync Rule Relationship.PNG

The attribute flow rules I’ve used are below. They combine the attributes necessary to create the corresponding Guest account on the associated management agent with enough information to be used as logic for scoping who gets created as a Guest in the other Tenant. I’ve also used existing attributes, negating the need to create any new ones.

Inbound SyncRule Flow.PNG

Creating the Export/Outbound Sync Rule to a Partner B2B Tenant

For your Export/Outbound rule make sure you have ‘Create resource in external system’ configured.

Export Relationship.PNG

There are a number of mandatory attributes that need to be flowed out in order to create Guests in Azure AD. The key attributes are;

  • userType = Guest
  • accountEnabled = True
  • displayName is required
  • password is required (and not export_password as normally required on AD style MA’s in FIM/MIM)
  • mailNickname is required
  • for dn and id initially I’m using the id (flowed in import to employeeID) from the source tenant. This needs to be provided to the MA to get the object created. Azure will generate new values on export so we’ll see a rename come back in on the confirming import
  • userPrincipalName is in the format of
    • SOURCEUPN (with @ replaced with _ ) #EXT# DestinationUPNSuffix
    • e.g. user1_org2.com#EXT#org1.com

Export Attributes.PNG

Here is an example of building a UPN.

UPN Rule.PNG
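As a hedged illustration of that rule, here is the equivalent transform in PowerShell (the UPN values are hypothetical; the actual rule uses MIM function expressions as shown above):

# Build the guest UPN: replace @ with _ in the source UPN, then append
# the #EXT# marker and the destination UPN suffix.
$sourceUpn    = 'user1@org2.com'
$targetSuffix = 'org1.com'
$guestUpn     = ($sourceUpn -replace '@', '_') + '#EXT#' + $targetSuffix
$guestUpn     # user1_org2.com#EXT#org1.com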

Sets, Workflows and MPR’s

I didn’t need to do anything special here. I just created a Set based on attributes coming in from the source Azure Tenant to scope who gets created in the target Tenant. Then an MPR looks for the transition into the Set and applies the Workflow that associates the Sync Rule.

End to End

After synchronizing in from the source (B2B Org 2) the provisioning rules trigger and create the Users as Guests on B2B Org 1.

Prov to Org1 1.PNG

Looking at the Pending Export we can see our rules have applied.

Pending Export.PNG

On Export the Guest accounts are successfully created.

Export Success.PNG

On the confirming import we get the rename as Azure has generated a new CN and therefore DN for the Guest user.

Rename on Import 2.PNG

Looking into Azure AD we can see one of our new Guest users.

User in AAD.PNG

Summary

We can leverage the Microsoft Azure Graph Management Agent to invite Users from one Tenant as Guests in another Tenant. Essentially an Azure version of GALSync.

 

Demystifying Managed Service Identities on Azure

Managed service identities (MSIs) are a great feature of Azure that are being gradually enabled on a number of different resource types. But when I’m talking to developers, operations engineers, and other Azure customers, I often find that there is some confusion and uncertainty about what they do. In this post I will explain what MSIs are and are not, where they make sense to use, and give some general advice on how to work with them.

What Do Managed Service Identities Do?

A managed service identity allows an Azure resource to identify itself to Azure Active Directory without needing to present any explicit credentials. Let’s explain that a little more.

In many situations, you may have Azure resources that need to securely communicate with other resources. For example, you may have an application running on Azure App Service that needs to retrieve some secrets from a Key Vault. Before MSIs existed, you would need to create an identity for the application in Azure AD, set up credentials for that application (also known as creating a service principal), configure the application to know these credentials, and then communicate with Azure AD to exchange the credentials for a short-lived token that Key Vault will accept. This requires quite a lot of upfront setup, and can be difficult to achieve within a fully automated deployment pipeline. Additionally, to maintain a high level of security, the credentials should be changed (rotated) regularly, and this requires even more manual effort.

With an MSI, in contrast, the App Service automatically gets its own identity in Azure AD, and there is a built-in way that the app can use its identity to retrieve a token. We don’t need to maintain any AD applications, create any credentials, or handle the rotation of these credentials ourselves. Azure takes care of it for us.

It can do this because Azure can identify the resource – it already knows where a given App Service or virtual machine ‘lives’ inside the Azure environment, so it can use this information to allow the application to identify itself to Azure AD without the need for exchanging credentials.

What Do Managed Service Identities Not Do?

Inbound requests: One of the biggest points of confusion about MSIs is whether they are used for inbound requests to the resource or for outbound requests from the resource. MSIs are for the latter – when a resource needs to make an outbound request, it can identify itself with an MSI and pass its identity along to the resource it’s requesting access to.

MSIs pair nicely with other features of Azure resources that allow for Azure AD tokens to be used for their own inbound requests. For example, Azure Key Vault accepts requests with an Azure AD token attached, and it evaluates which parts of Key Vault can be accessed based on the identity of the caller. An MSI can be used in conjunction with this feature to allow an Azure resource to directly access a Key Vault-managed secret.

Authorization: Another important point is that MSIs are only directly involved in authentication, and not in authorization. In other words, an MSI allows Azure AD to determine what the resource or application is, but that by itself says nothing about what the resource can do. Authorisation has to be managed separately: for most Azure resources this is done through Azure’s own Identity and Access Management (IAM) system. Key Vault is one exception – it maintains its own access control system, and is managed outside of Azure’s IAM. For non-Azure resources, we could communicate with any authorisation system that understands Azure AD tokens; an MSI will then just be another way of getting a valid token that an authorisation system can accept.

Another important point to be aware of is that the target resource doesn’t need to run within the same Azure subscription, or even within Azure at all. Any service that understands Azure Active Directory tokens should work with tokens for MSIs.

How to Use MSIs

Now that we know what MSIs can do, let’s have a look at how to use them. Generally there will be three main parts to working with an MSI: enabling the MSI; granting it rights to a target resource; and using it.

  1. Enabling an MSI on a resource. Before a resource can identify itself to Azure AD, it needs to be configured to expose an MSI. The way that you do this will depend on the specific resource type you’re enabling the MSI on. In App Services, an MSI can be enabled through the Azure Portal, through an ARM template, or through the Azure CLI, as documented here. For virtual machines, an MSI can be enabled through the Azure Portal or through an ARM template. Other MSI-enabled services have their own ways of doing this.

  2. Granting rights to the target resource. Once the resource has an MSI enabled, we can grant it rights to do something. The way that we do this is different depending on the type of target resource. For example, Key Vault requires that you configure its Access Policies, while to use the Event Hubs or the Azure Resource Manager APIs you need to use Azure’s IAM system. Other target resource types will have their own way of handling access control.

  3. Using the MSI to issue tokens. Finally, now that the resource’s MSI is enabled and has been granted rights to a target resource, it can be used to obtain tokens for requests to that target resource. Once again, the approach will be different depending on the resource type. For App Services, there is an HTTP endpoint within the App Service’s private environment that can be used to get a token (see the sketch after this list), and there is also a .NET library that will handle the API calls if you’re using a supported platform. For virtual machines, there is also an HTTP endpoint that can similarly be used to obtain a token. Of course, you don’t need to specify any credentials when you call these endpoints – they’re only available within that App Service or virtual machine, and Azure handles all of the credentials for you.
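As a minimal sketch, here is how code inside an App Service might call its local MSI endpoint from PowerShell, assuming the 2017-09-01 MSI REST API and the MSI_ENDPOINT/MSI_SECRET environment variables that App Service injects:

# Runs inside an MSI-enabled App Service; requests a token for Key Vault.
# MSI_ENDPOINT and MSI_SECRET are injected by the App Service platform.
$resource = 'https://vault.azure.net'
$uri = "$($env:MSI_ENDPOINT)?resource=$resource&api-version=2017-09-01"
$tokenResponse = Invoke-RestMethod -Method Get -Uri $uri -Headers @{ Secret = $env:MSI_SECRET }
$accessToken = $tokenResponse.access_token   # attach as a Bearer token to outbound requests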

Finding an MSI’s Details and Listing MSIs

There may be situations where we need to find our MSI’s details, such as the principal ID used to represent the application in Azure AD. For example, we may need to manually configure an external service to authorise our application to access it. As of April 2018, the Azure Portal shows MSIs when adding role assignments, but the Azure AD blade doesn’t seem to provide any way to view a list of MSIs. They are effectively hidden from the list of Azure AD applications. However, there are a couple of other ways we can find an MSI.

If we want to find a specific resource’s MSI details then we can go to the Azure Resource Explorer and find our resource. The JSON details for the resource will generally include an identity property, which in turn includes a principalId:

Screenshot 1

That principalId is the object ID of the service principal, and can be used for role assignments.

Another way to find and list MSIs is to use the Azure AD PowerShell cmdlets. The Get-AzureRmADServicePrincipal cmdlet will return a complete list of service principals in your Azure AD directory, including any MSIs. MSIs have service principal names starting with https://identity.azure.net, and the ApplicationId is the client ID of the service principal:

Screenshot 2
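A hedged example of filtering that list down to just MSIs (assumes the AzureRM module and a prior Login-AzureRmAccount session):

# List service principals whose service principal names mark them as MSIs.
Get-AzureRmADServicePrincipal |
    Where-Object { $_.ServicePrincipalNames -match 'identity\.azure\.net' } |
    Select-Object DisplayName, ApplicationId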

Now that we’ve seen how to work with an MSI, let’s look at which Azure resources actually support creating and using them.

Resource Types with MSI and AAD Support

As of April 2018, there are only a small number of Azure services with support for creating MSIs, and of these, currently all of them are in preview. Additionally, while it’s not yet listed on that page, Azure API Management also supports MSIs – this is primarily for handling Key Vault integration for SSL certificates.

One important note is that for App Services, MSIs are currently incompatible with deployment slots – only the production slot gets assigned an MSI. Hopefully this will be resolved before MSIs become fully available and supported.

As I mentioned above, MSIs are really just a feature that allows a resource to assume an identity that Azure AD will accept. However, in order to actually use MSIs within Azure, it’s also helpful to look at which resource types support receiving requests with Azure AD authentication, and therefore support receiving MSIs on incoming requests. Microsoft maintain a list of these resource types here.

Example Scenarios

Now that we understand what MSIs are and how they can be used with AAD-enabled services, let’s look at a few example real-world scenarios where they can be used.

Virtual Machines and Key Vault

Azure Key Vault is a secure data store for secrets, keys, and certificates. Key Vault requires that every request is authenticated with Azure AD. As an example of how this might be used with an MSI, imagine we have an application running on a virtual machine that needs to retrieve a database connection string from Key Vault. Once the VM is configured with an MSI and the MSI is granted Key Vault access rights, the application can request a token and can then get the connection string without needing to maintain any credentials to access Key Vault.
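A rough sketch of that flow from inside the VM, assuming the Azure Instance Metadata Service token endpoint and hypothetical vault/secret names:

# Get a token for Key Vault from the VM's local metadata endpoint (no credentials).
$tokenUri = 'http://169.254.169.254/metadata/identity/oauth2/token' +
    '?api-version=2018-02-01&resource=https%3A%2F%2Fvault.azure.net'
$token = (Invoke-RestMethod -Uri $tokenUri -Headers @{ Metadata = 'true' }).access_token

# Use the token to read the (hypothetical) connection string secret.
$secretUri = 'https://myvault.vault.azure.net/secrets/DbConnectionString?api-version=2016-10-01'
$connectionString = (Invoke-RestMethod -Uri $secretUri -Headers @{ Authorization = "Bearer $token" }).value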

API Management and Key Vault

Another great example of an MSI being used with Key Vault is Azure API Management. API Management creates a public domain name for the API gateway, to which we can assign a custom domain name and SSL certificate. We can store the SSL certificate inside Key Vault, and then give Azure API Management an MSI and access to that Key Vault secret. Once it has this, API Management can automatically retrieve the SSL certificate for the custom domain name straight from Key Vault, simplifying the certificate installation process and improving security by ensuring that the certificate is not directly passed around.

Azure Functions and Azure Resource Manager

Azure Resource Manager (ARM) is the deployment and resource management system used by Azure. ARM itself supports AAD authentication. Imagine we have an Azure Function that needs to scan our Azure subscription to find resources that have recently been created. In order to do this, the function needs to log into ARM and get a list of resources. Our Azure Functions app can expose an MSI, and so once that MSI has been granted reader rights on the resource group, the function can get a token to make ARM requests and get the list without needing to maintain any credentials.

App Services and Event Hubs/Service Bus

Event Hubs is a managed event stream. Communication to both publish onto, and subscribe to events from, the stream can be secured using Azure AD. An example scenario where MSIs would help here is when an application running on Azure App Service needs to publish events to an Event Hub. Once the App Service has been configured with an MSI, and Event Hubs has been configured to grant that MSI publishing permissions, the application can retrieve an Azure AD token and use it to post messages without having to maintain keys.

Service Bus provides a number of features related to messaging and queuing, including queues and topics (similar to queues but with multiple subscribers). As with Event Hubs, an application could use its MSI to post messages to a queue or to read messages from a topic subscription, without having to maintain keys.

App Services and Azure SQL

Azure SQL is a managed relational database, and it supports Azure AD authentication for incoming connections. A database can be configured to allow Azure AD users and applications to read or write specific types of data, to execute stored procedures, and to manage the database itself. When coupled with an App Service with an MSI, Azure SQL’s AAD support is very powerful – it reduces the need to provision and manage database credentials, and ensures that only a given application can log into a database with a given user account. Tomas Restrepo has written a great blog post explaining how to use Azure SQL with App Services and MSIs.

Summary

In this post we’ve looked into the details of managed service identities (MSIs) in Azure. MSIs provide some great security and management benefits for applications and systems hosted on Azure, and enable high levels of automation in our deployments. While they aren’t particularly complicated to understand, there are a few subtleties to be aware of. As long as you understand that MSIs are for authentication of a resource making an outbound request, and that authorisation is a separate thing that needs to be managed independently, you will be able to take advantage of MSIs with the services that already support them, as well as the services that may soon get MSI and AAD support.

Validating a Yubico YubiKeys’ One Time Password (OTP) using Single Factor Authentication and PowerShell

Multi-factor Authentication comes in many different formats. Physical tokens have historically been very common and, moving forward with FIDO v2 standards, will likely continue to be so for many security scenarios where soft tokens (think Authenticator Apps on mobile devices) aren’t possible.

Yubico YubiKeys are physical tokens that have a number of properties that make them desirable. They don’t use a battery (so aren’t limited to the life of the battery), they come in many differing formats (NFC, USB-3, USB-C), can hold multiple sets of credentials and support open standards for multi-factor authentication. You can check out Yubico’s range of tokens here.

YubiKeys ship with a configuration that allows them to be validated against YubiCloud. Before we configure them for a user I wanted a quick way to validate that the YubiKey was valid. You can do this using Yubico’s demo webpage here but for other reasons I needed to write my own. There weren’t any PowerShell examples anywhere, so now that I’ve worked it out, I’m posting it here.

Prerequisites

You will need a YubiKey, and you will need to register and obtain a Yubico API Key (using your YubiKey) from here.

Validation Script

Update the following script, changing line 2 to the ClientID that you received after registering against the Yubico API above.
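A minimal sketch of such a validation call, assuming the standard YubiCloud verify endpoint and its plain-text response (the ClientID on line 2 is illustrative):

# Replace the ClientID below with the one from your Yubico API key registration.
$clientId = '12345'
$otp      = Read-Host 'Touch your YubiKey to generate an OTP'
# The nonce must be 16-40 characters; this generates a random 32-character string.
$nonce = -join ((48..57) + (97..122) | Get-Random -Count 32 | ForEach-Object { [char]$_ })

$uri = "https://api.yubico.com/wsapi/2.0/verify?id=$clientId&otp=$otp&nonce=$nonce"
$response = Invoke-RestMethod -Uri $uri

# The response is key=value lines; status=OK means the OTP is valid,
# status=REPLAYED_REQUEST means this OTP has been submitted before.
($response -split "`n") | Where-Object { $_ -match '^status=' }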

Running the script validates that the Key is valid.

YubiKey Validation.PNG

Re-running the submission of the same key (i.e. I didn’t generate a new OTP) gets the expected response that the Request is Replayed.

YubiKey Validation Failed.PNG

Summary

Using PowerShell we can negate the need to leverage any Yubico client libraries and validate a YubiKey against YubiCloud.

 

Using Microsoft Identity Manager Synchronisation Server’s Global Address List Synchronisation feature to create a shared global address book across three Exchange Forests

First published at https://nivleshc.wordpress.com

Introduction

Over the life of a company, there can be many acquisitions and mergers. During such events, the parent and the newly acquired entities have their IT “merged”. This allows for the removal of redundant systems and the reduction of expenses. It also fosters collaboration between the two entities. Unfortunately, the marriage of the two IT systems can, at times, take a long time.

To enable a more collaborative space between the parent and the newly acquired company, a shared “global address book” can be created, which will allow employees to quickly look up each other’s contact details.

In this blog, I will show how we can use Microsoft Identity Manager (MIM) 2016 Synchronisation Server’s GALSync feature to extend the Global Address Book (GAL) of three Exchange Forests. The GAL will be populated with contacts corresponding to mailboxes in the other Exchange Forests, and this will be automatically maintained to ensure the contacts remain up-to-date.

Though this blog focuses on three Exchange Forests, it can easily be adapted for two Exchange Forests if you remove all reference to the third AD Forest, AD Domain and Exchange Forest.

For reference, we will be using the following:

Name: Contoso Limited (parent company)
Active Directory Forest: contoso.com
Active Directory Domain: contoso.com
Active Directory Forest Level: Windows Server 2008 R2
Exchange Server FQDN: CEX01.contoso.com
Exchange Server Version: Exchange 2010 SP3
Email Address Space owned: contoso.com, contoso.com.au
Number of employees: 2000

Name: Northwind Traders (newly acquired)
Active Directory Forest: northwind.com
Active Directory Domain: northwind.com
Active Directory Forest Level: Windows Server 2008 R2
Exchange Server FQDN: NWEX01.northwind.com
Exchange Server Version: Exchange 2010 SP3
Email Address Space owned: northwind.com, northwind.com.au
Number of employees: 400

Name: WingTip Toys (newly acquired)
Active Directory Forest: wingtiptoys.com
Active Directory Domain: wingtiptoys.com
Active Directory Forest Level: Windows Server 2008 R2
Exchange Server FQDN: WTTEX01.wingtiptoys.com
Exchange Server Version: Exchange 2010 SP3
Email Address Space owned: wingtiptoys.com, wingtiptoys.com.au
Number of employees: 600

 

Contoso, Northwind and WingTip Toys are connected using a wide area network and it has been decided that the MIM Synchronisation Server will be installed and configured in the Contoso domain.

Preparation

Before we start, some preparation work has to be done to ensure there are no roadblocks or issues.

  • Cleanup of “inter forest” email objects
    • This is one of the most important things that must be done and I can’t stress this enough. You will have to go through all your email objects (mailboxes, contacts, mailuser objects) in each of the three Exchange Forests (Contoso, Northwind, WingTip Toys) and find any that are forwarding to the other Exchange forests. If there are any, these must be removed. GALSync will create email-enabled contacts corresponding to the mailboxes in the other Exchange Forests, with the externalemailaddress of these new objects set to the primary email address of the other Exchange Forest’s objects. If duplicates arise because there were existing objects in the local Exchange Forest corresponding to the other Exchange Forest’s objects, the local Exchange Server will get confused: it will keep on queuing emails for these objects and will not deliver them. (If, after implementing GALSync, some users complain about not receiving emails from a certain Exchange Forest, this could be a possible reason.) A query sketch for finding such objects follows this list.
  • Creation of Organisational Units (OU) that will be used by GALSync
    • Create the following Organisational Units in the three Active Directory domains
      • contoso.com\GALSync\LocalForest\Contacts
      • contoso.com\GALSync\RemoteForest\Contacts
      • northwind.com\GALSync\LocalForest\Contacts
      • northwind.com\GALSync\RemoteForest\Contacts
      • wingtiptoys.com\GALSync\LocalForest\Contacts
      • wingtiptoys.com\GALSync\RemoteForest\Contacts
  • Service Accounts
    • The following service accounts must be created in the specified Active Directory domains. You can change the name to comply with your own naming standards
      • MIM Synchronisation Server Service Account
        • UPN: svc-mimsync@contoso.com
        • AD Domain to create in: contoso.com
        • Permissions: non-privileged Active Directory service account
      • Management Agent Account to connect to Contoso.com AD Domain
        • UPN: svc-mimadma@contoso.com
        • AD Domain to create in: contoso.com
        • Permissions
          • non-privileged Active Directory service account
          • Grant “Replicating Directory Changes” permission
          • Grant the following permissions on the GALSync OU in the Contoso AD Domain that was created above. Ensure the permissions propagate to all sub-OUs within the GALSync OU (a scripted dsacls example follows this list)
            • Create Contact Objects
            • Delete Contact Objects
            • Read all Properties
            • Write all Properties
          • Add to the Exchange Organization Management Active Directory security group in Contoso AD Domain
      • Management Agent Account to connect to Northwind.com AD Domain
        • UPN: svc-mimadma@northwind.com
        • AD Domain to create in: northwind.com
        • Permissions
          • non-privileged Active Directory service account
          • Grant “Replicating Directory Changes” permission
          • Grant the following permissions on the GALSync OU in the Northwind AD Domain that was created above. Ensure the permissions propagate to all sub-OUs within GALSync OU
            • Create Contact Objects
            • Delete Contact Objects
            • Read all Properties
            • Write all Properties
          • Add to the Exchange Organization Management Active Directory security group in Northwind AD Domain
      • Management Agent Account to connect to WingTiptoys.com AD Domain
        • UPN: svc-mimadma@wingtiptoys.com
        • AD Domain to create in: wingtiptoys.com
        • Permissions
          • non-privileged Active Directory service account
          • Grant “Replicating Directory Changes” permission
          • Grant the following permissions on the GALSync OU in the WingTipToys AD Domain that was created above. Ensure the permissions propagate to all sub-OUs within GALSync OU
            • Create Contact Objects
            • Delete Contact Objects
            • Read all Properties
            • Write all Properties
          • Add to the Exchange Organization Management Active Directory security group in WingTipToys AD Domain
      • Service account used for the scheduled task job that will run the MIM RunProfiles script on the MIM Synchronisation Server
        • UPN: svc-mimscheduler@contoso.com
        • AD Domain to create in: Contoso.com (this can also be a local account on the MIM Synchronisation Server)
        • Permissions
          • non-privileged Active Directory service account
          • Grant “Log on as a batch job” user right on the MIM Synchronisation Server
          • Add to FIMSyncOperators security group on the MIM Synchronisation Server (this security group is created locally on the MIM Synchronisation Server after MIM Synchronisation Server has been installed)
  • SQL Server Permissions
    • MIM Synchronisation Server requires a Microsoft SQL Server to host its database. On the SQL Server, grant SQL SYSADMIN role to the account that you will be logged on as when installing MIM Synchronisation Server
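As flagged in the cleanup item above, here is a hedged sketch of hunting for existing objects in the local forest that point at the other forests’ email domains (run from the Exchange Management Shell; the domain patterns come from the reference details above):

# Find mail contacts and mail users whose external address targets one of
# the other forests' email domains - candidates for cleanup before GALSync.
$remoteDomains = '*@northwind.com', '*@northwind.com.au', '*@wingtiptoys.com', '*@wingtiptoys.com.au'
foreach ($pattern in $remoteDomains) {
    Get-MailContact -ResultSize Unlimited | Where-Object { $_.ExternalEmailAddress -like $pattern }
    Get-MailUser -ResultSize Unlimited | Where-Object { $_.ExternalEmailAddress -like $pattern }
}

And a sketch of scripting the OU permission grants with dsacls, using the Contoso management agent account as the example (repeat per forest, and verify the rights against your own security standards):

# Grant create/delete of contact objects and read/write of all properties
# on the GALSync OU, inherited by all sub-OUs (/I:T).
dsacls "OU=GALSync,DC=contoso,DC=com" /I:T /G "contoso\svc-mimadma:CCDC;contact"
dsacls "OU=GALSync,DC=contoso,DC=com" /I:T /G "contoso\svc-mimadma:RPWP"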

Configuration

Provision a Microsoft Windows Server 2012 R2 in the Contoso.com Active Directory domain and install MIM 2016 Synchronisation Server. During installation, specify svc-mimsync@contoso.com as the account under which the MIM Synchronisation Service will run.

One thing to note is that GALSync will update the proxyaddress field for all mailboxes in its scope (mailboxes for which it will be creating contacts in the other Exchange Forests) with X500 entries.

Management Agent Configuration

  1. Once the MIM Synchronisation Server has been successfully installed, use the following steps to create the GALSync Management Agents. Open the Synchronisation Service Manager
    • Create GALSync Management Agent for Contoso.com AD Forest
      • From Tools menu, click Management Agents and then click Create
      •  In the Management Agent drop-down list, click Active Directory global address list (GAL) 
      • In the name type GALSyncMA for Contoso.com
      • On the Connect to an Active Directory Forest page, type the forest name, the MIM MA account details (svc-mimadma@contoso.com) and the domain name
      • In the next screen, specify the OUs that GALSync will query to find mailboxes to create contacts for in the other forests. Also, place a tick beside contoso.com\GALSync (this selects GALSync and all sub-OUs)
      • In the Containers screen, for
        • Target Container select Contoso.com\GALSync\RemoteForest\Contacts – this is the OU where MIM GALSync will create contacts corresponding to the mailboxes in the Northwind and WingTipToys Exchange Forests
        • Source Container select Contoso.com\GALSync\LocalForest\Contacts – this is where MIM GALSync will create contacts corresponding to Contoso.com mailboxes. These will be sent to the GALSync/RemoteForest/Contacts OU in the Northwind and WingTipToys AD Domains (personally, I haven’t seen any objects created in this OU)
      • In Exchange Configuration click Edit and enter all the email suffixes that belong to Contoso.com. The email suffixes listed here are used to filter which email addresses from the original email object are added to the corresponding contact in the other Exchange Forests. In this case the email suffixes will be @contoso.com and @contoso.com.au. (Note the @ before the email suffix.)
      • Leave everything else as default and proceed to the Configure Extensions section. One thing I would like to mention here is that in the Configure Connection Filter section, the Filter Type for user is supposed to be Declared (and is the default setting), not Rules extension as stated in https://technet.microsoft.com/en-us/library/cc708642(v=ws.10).aspx
      • In the Configure Extensions section, set the following
      • Click OK
    • Create GALSync Management Agent for Northwind.com AD Forest
      • From Tools menu, click Management Agents and then click Create
      •  In the Management Agent drop-down list, click Active Directory global address list (GAL) 
      • In the name type GALSyncMA for Northwind.com
      • On the Connect to an Active Directory Forest page, type the forest name, the MIM MA account details (svc-mimadma@northwind.com) and the domain name
      • In the next screen, specify the OUs that GALSync will query to find mailboxes to create contacts for in the other forests. Also, place a tick beside northwind.com\GALSync (this selects GALSync and all sub-OUs)
      • In the Containers screen, for
        • Target Container select Northwind.com\GALSync\RemoteForest\Contacts – this is the OU where MIM GALSync will create contacts corresponding to the mailboxes in the Contoso and WingTipToys Exchange Forests
        • Source Container select Northwind.com\GALSync\LocalForest\Contacts – this is where MIM GALSync will create contacts corresponding to Northwind.com mailboxes. These will be sent to the GALSync/RemoteForest/Contacts OU in the Contoso and WingTipToys AD Domains (personally, I haven’t seen any objects created in this OU)
      • In Exchange Configuration click Edit and enter all the email suffixes that belong to Northwind.com. The email suffixes listed here are used to filter which email addresses from the original email object are added to the corresponding contact in the other Exchange Forests. In this case the email suffixes will be @northwind.com and @northwind.com.au. (Note the @ before the email suffix.)
      • Leave everything else as default and proceed to the Configure Extensions section. One thing I would like to mention here is that in the Configure Connection Filter section, the Filter Type for user is supposed to be Declared (and is the default setting), not Rules extension as stated in https://technet.microsoft.com/en-us/library/cc708642(v=ws.10).aspx
      • In the Configure Extensions section, set the following
      • Click OK
    • Create GALSync Management Agent for WingTipToys.com AD Forest
      • From Tools menu, click Management Agents and then click Create
      •  In the Management Agent drop-down list, click Active Directory global address list (GAL) 
      • In the name type GALSyncMA for WingTipToys.com
      • On the Connect to an Active Directory Forest page, type the forest name, the MIM MA account details (svc-mimadma@wingtiptoys.com) and the domain name
      • In the next screen, specify the OUs that GALSync will query to find mailboxes to create contacts for in the other forests. Also, place a tick beside wingtiptoys.com\GALSync (this selects GALSync and all sub-OUs)
      • In the Containers screen, for
        • Target Container select WingTipToys.com\GALSync\RemoteForest\Contacts – this is the OU where MIM GALSync will create contacts corresponding to the mailboxes in the Contoso and Northwind Exchange Forests
        • Source Container select WingTipToys.com\GALSync\LocalForest\Contacts – this is where MIM GALSync will create contacts corresponding to WingTipToys.com mailboxes. These will be sent to the GALSync/RemoteForest/Contacts OU in the Contoso and Northwind AD Domains (personally, I haven’t seen any objects created in this OU)
      • In Exchange Configuration click Edit and enter all the email suffixes that belong to WingTipToys.com. The email suffixes listed here are used to filter which email addresses from the original email object are added to the corresponding contact in the other Exchange Forests. In this case the email suffixes will be @wingtiptoys.com and @wingtiptoys.com.au. (Note the @ before the email suffix.)
      • Leave everything else as default and proceed to the Configure Extensions section. One thing I would like to mention here is that in the Configure Connection Filter section, the Filter Type for user is supposed to be Declared (and is the default setting), not Rules extension as stated in https://technet.microsoft.com/en-us/library/cc708642(v=ws.10).aspx
      • In the Configure Extensions section, set the following
      • Click OK
  2. Enable provisioning by using the following steps
    • In the Synchronisation Service Manager, from Tools select Options
    • Under Metaverse Rules Extensions ensure the following have been ticked
      • Enable metaverse rules extensions
      • Enable Provisioning Rules Extension

Run Profiles Execution Order

Congratulations! All configuration has now been completed. All we have to do now is run the synchronisation jobs to get the mailbox object information from the three AD Forests into the MIM metaverse, let MIM GALSync do a bit of processing to find out which contacts are to be created in the other Exchange Forests, and then carry out an export to create those contacts in the other Exchange Forests. Unfortunately, MIM has no way of knowing if the exports were successful, and that is why we will have to do a confirming import on all the management agents, so that MIM can find out if everything was exported as expected.

From my testing, I have found that when MIM GALSync does its processing, it compares the mailboxes that an Exchange Forest has with what is in the MIM metaverse. MIM then exports out, as contacts, all objects that are in the metaverse but not in that particular Exchange Forest. These are populated in that AD Domain’s GALSync/RemoteForest/Contacts OU as AD objects and subsequently mail enabled using the Exchange RPS URI (remote PowerShell URL).

CAUTION! Before you continue, you need to find out if a synchronisation solution had previously been deployed in the environment.

If any of the AD Forests had previously had a synchronisation solution deployed, then we will need to follow the run profile execution order mentioned below. This is done to ensure no duplicate contacts are created during the initial GAL synchronisation.

  1. Full Import (Staging Only) on GALSyncMA for Contoso.com
  2. Full Import (Staging Only) on GALSyncMA for Northwind.com
  3. Full Import (Staging Only) on GALSyncMA for WingTipToys.com
  4. Delta Synchronisation on GALSyncMA for Contoso.com
  5. Delta Synchronisation on GALSyncMA for Northwind.com
  6. Delta Synchronisation on GALSyncMA for WingTipToys.com
  7. Repeat Delta Synchronisation on GALSyncMA for Contoso.com
  8. Repeat Delta Synchronisation on GALSyncMA for Northwind.com
  9. Repeat Delta Synchronisation on GALSyncMA for WingTipToys.com
  10. Export on GALSyncMA for Contoso.com
  11. Export on GALSyncMA for Northwind.com
  12. Export on GALSyncMA for WingTipToys.com
  13. Delta Import on GALSyncMA for Contoso.com
  14. Delta Import on GALSyncMA for Northwind.com
  15. Delta Import on GALSyncMA for WingTipToys.com

 

If there haven’t been any previous synchronisation solutions deployed in any of the AD Forests, then use the following runprofile order for the initial run

  1. Full Import (Staging Only) on GALSyncMA for Contoso.com
  2. Full Import (Staging Only) on GALSyncMA for Northwind.com
  3. Full Import (Staging Only) on GALSyncMA for WingTipToys.com
  4. Full Synchronisation on GALSyncMA for Contoso.com
  5. Full Synchronisation on GALSyncMA for Northwind.com
  6. Full Synchronisation on GALSyncMA for WingTipToys.com
  7. Export on GALSyncMA for Contoso.com
  8. Export on GALSyncMA for Northwind.com
  9. Export on GALSyncMA for WingTipToys.com
  10. Delta Import on GALSyncMA for Contoso.com
  11. Delta Import on GALSyncMA for Northwind.com
  12. Delta Import on GALSyncMA for WingTipToys.com

 

Once the initial synchronisation has completed, you will see contacts in each AD Domain’s GALSync\RemoteForest\Contacts OU corresponding to mailboxes in the other two Exchange Forests. These will have been email enabled and will show in the Exchange console and the online Global Address List.

Outlook clients that use offline address books won’t see the new contacts until the offline address book generation process has run on the Exchange servers and the updated offline address book has been downloaded by the Outlook client.

To ensure the GALSync generated contacts remain up-to-date, the following runprofile execution order must be used from here on. This should be repeated every hour (or at your required interval). Keep in mind that if anything is still pending an Export after one cycle of the following order, it will be run at the next runprofile execution, so changes might not be seen for at most two runcycle intervals.

  1. Delta Import (Staging Only) on GALSyncMA for Contoso.com
  2. Delta Import (Staging Only) on GALSyncMA for Northwind.com
  3. Delta Import (Staging Only) on GALSyncMA for WingTipToys.com
  4. Delta Synchronisation on GALSyncMA for Contoso.com
  5. Delta Synchronisation on GALSyncMA for Northwind.com
  6. Delta Synchronisation on GALSyncMA for WingTipToys.com
  7. Export on GALSyncMA for Contoso.com
  8. Export on GALSyncMA for Northwind.com
  9. Export on GALSyncMA for WingTipToys.com
  10. Delta Import on GALSyncMA for Contoso.com
  11. Delta Import on GALSyncMA for Northwind.com
  12. Delta Import on GALSyncMA for WingTipToys.com

I don’t imagine anyone would want to run the runprofiles manually every hour 😉 So below is a script that can be used to do it.

Export all the runprofiles using the Synchronisation Service Manager as vbs scripts and place them in a folder c:\scripts\runprofiles on the MIM Synchronisation Server.

Copy the below script and save it as GALSync_RunProfiles.cmd in c:\scripts

@echo off
REM This script will run the MIM RunProfiles in the correct order
REM Author nivleshc@yahoo.com

REM No quotes around the value - quotes would become part of the variable
REM and break the CSCRIPT paths below
set _script_dir=c:\scripts\runprofiles\

REM Delta Import (Stage Only)
echo ContosoGALSyncMA Delta Import -StageOnly
CSCRIPT //B %_script_dir%ContosoGALSyncMA_Delta_Import_StageOnly.vbs

echo NorthwindGALSyncMA Delta Import -StageOnly
CSCRIPT //B %_script_dir%NorthwindGALSyncMA_Delta_Import_StageOnly.vbs

echo WingTipToysGAlSyncMA Delta Import -StageOnly
CSCRIPT //B %_script_dir%WingTipToysGAlSyncMA_Delta_Import_StageOnly.vbs

REM Delta Sync
echo ContosoGALSyncMA Delta Sync
CSCRIPT //B %_script_dir%ContosoGALSyncMA_Delta_Sync.vbs

echo NorthwindGALSyncMA Delta Sync
CSCRIPT //B %_script_dir%NorthwindGALSyncMA_Delta_Sync.vbs

echo WingTipToysGAlSyncMA Delta Sync
CSCRIPT //B %_script_dir%WingTipToysGAlSyncMA_Delta_Sync.vbs

REM Export
echo ContosoGALSyncMA Export
CSCRIPT //B %_script_dir%ContosoGALSyncMA_Export.vbs

echo NorthwindGALSyncMA Export
CSCRIPT //B %_script_dir%NorthwindGALSyncMA_Export.vbs

echo WingTipToysGAlSyncMA Export
CSCRIPT //B %_script_dir%WingTipToysGAlSyncMA_Export.vbs

REM Delta Import
echo ContosoGALSyncMA Delta Import
CSCRIPT //B %_script_dir%ContosoGALSyncMA_Delta_Import.vbs

echo NorthwindGALSyncMA Delta Import
CSCRIPT //B %_script_dir%NorthwindGALSyncMA_Delta_Import.vbs

echo WingTipToysGAlSyncMA Delta Import
CSCRIPT //B %_script_dir%WingTipToysGAlSyncMA_Delta_Import.vbs

 

Create a scheduled task on the MIM Synchronisation Server to run the GALSync_RunProfiles.cmd script every hour (or for an interval of your choice). Use the task scheduler account that was created during the preparation stage to run this scheduled task.
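A hedged sketch of creating that scheduled task with the ScheduledTasks PowerShell module (the task name and repetition window are illustrative):

# Run GALSync_RunProfiles.cmd hourly as the svc-mimscheduler account
# created during preparation.
$action  = New-ScheduledTaskAction -Execute 'cmd.exe' -Argument '/c c:\scripts\GALSync_RunProfiles.cmd'
$trigger = New-ScheduledTaskTrigger -Once -At (Get-Date) `
    -RepetitionInterval (New-TimeSpan -Hours 1) -RepetitionDuration (New-TimeSpan -Days 3650)
Register-ScheduledTask -TaskName 'MIM GALSync RunProfiles' -Action $action -Trigger $trigger `
    -User 'contoso\svc-mimscheduler' -Password (Read-Host 'Password for svc-mimscheduler')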

Some Gotchas

I have found that sometimes some mailboxes fail to be imported into the MIM Metaverse, reporting an mv-constraint-violation on the msExchSafeSenderHash attribute. This error occurs because the AD attribute msExchSafeSenderHash can be much longer than the corresponding MIM Metaverse attribute allows. Since this attribute is not being used to create the contacts in the other Exchange Forests, it can be dropped from the attribute flow.

Use the steps outlined in the following article to resolve this issue. https://social.technet.microsoft.com/wiki/contents/articles/10733.troubleshooting-galsync-mv-constraint-violation-msexchsafesenderhash.aspx

 

I hope this blog helps those that might be wanting to create a shared “global address book” among multiple Exchange Forests.

As mentioned previously, the above steps can be used to create a shared “global address book” for two Exchange Forests as well. In that case, just remove any mention of the third AD Forest, AD Domain and Exchange Forest from the above mentioned steps.

Enjoy 😉

Identifying Active Directory Users with Pwned Passwords using Microsoft/Forefront Identity Manager v2, k-Anonymity and Have I Been Pwned

Background

In August 2017 Troy Hunt released a sizeable list of Pwned Passwords. 320 Million in fact.

I subsequently wrote this post on Identifying Active Directory Users with Pwned Passwords using Microsoft/Forefront Identity Manager, which called the API and set a boolean attribute in the MIM Service that could be used with business logic to force users with accounts that have compromised passwords to change their password on next logon.

Whilst that was a proof of concept/discussion point of sorts, and I had a disclaimer about sending passwords across the internet to a third-party service, there was a lot of momentum around the HIBP API, and I developed a solution and wrote this update to check the passwords locally.

Today Troy has released v2 of that list and updated the API with new features and functionality. If you’re playing catch-up I encourage you to read Troy’s post from August last year, and my two posts about checking Active Directory passwords against that list.

Leveraging V2 (with k-Anonymity) of the Have I Been Pwned API

With v2 of the HIBP password list and API the number of leaked credentials in the list has grown to half a billion. 501,636,842 Pwned Passwords to be exact.

For the v2 list, Troy worked in conjunction with Junade Ali from Cloudflare to update the API so it can be leveraged with a level of anonymity. Instead of sending a SHA-1 hash of the password to check if the password you’re checking is on the list, you can now send a truncated version of the SHA-1 hash and be returned the set of matching hash suffixes from the HIBP v2 API. This is done using a concept called k-anonymity, detailed brilliantly here by Junade Ali.

v2 of the API also returns a score for each password in the list: how many times the password has previously been seen in leaked credentials lists. Brilliant.

Updated Pwned PowerShell Management Agent for Pwned Password Lookup

Below is an updated Password.ps1 script for my Pwned Password Management Agent for Microsoft Identity Manager (replacing the previous API version’s script). It functions by;

  • takes the new password received from PCNS
  • hashes the password to SHA-1 format
  • looks up the v2 HIBP API using part of the SHA-1 hash
  • updates the MIM Service with the Pwned Password status
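The core of the lookup (independent of the MA plumbing) can be sketched as follows; the password value here is illustrative, and in the real script comes from PCNS:

# SHA-1 hash the password, send only the first 5 hex characters to the API,
# then match the remaining 35 characters locally (k-anonymity).
$password = 'P@ssw0rd'
$sha1  = [System.Security.Cryptography.SHA1]::Create()
$bytes = [System.Text.Encoding]::UTF8.GetBytes($password)
$hash  = ($sha1.ComputeHash($bytes) | ForEach-Object { $_.ToString('X2') }) -join ''
$prefix = $hash.Substring(0, 5)
$suffix = $hash.Substring(5)

# The response is SUFFIX:COUNT lines for every pwned hash sharing the prefix.
$range = Invoke-RestMethod -Uri "https://api.pwnedpasswords.com/range/$prefix"
$match = ($range -split "`r`n") | Where-Object { $_ -match "^$suffix" }
if ($match) { "Pwned - seen $(($match -split ':')[1]) times" } else { 'Not found in HIBP' }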

Check out the original post with all the rest of the details here.

Summary

Of course you can also download (recommended via torrent) the Pwned Password dataset. Keep in mind that the compressed dataset is 8.75 GB and uncompressed is 29.4 GB. Convert that into on-premises SQL Table(s) as I did in the linked post at the beginning of this post and you’ll be well in excess of that.

Awesome work from Troy and Junade.

 

Using MIMWAL to mass update users

The generalised Workflow Activity Library for Microsoft Identity Manager (MIMWAL) is not particularly new, but I’m regularly finding new ways of using it.

TL;DR: [//Queries/Key/Attribute] can be used as a target to update multiple accounts at once

Working from colleague Michael’s previous post Introduction to MIM Advanced Workflows with MIMWAL (Update Resource workflow section), user accounts can be populated with location details when a location code is set or updated.

But, consider the question: what happens when the source location object is updated with new details, without moving the user between locations? A common occurrence is when the building name/number/street changes due to typing errors. New accounts and accounts moved into the location have the updated details, but accounts already in the location are stuck with old address details. The same can also occur with department codes and department names, or a number of other value->name mappings.

This is a scenario I’ve seen built poorly several times, with a variety of external script hackery used to address it, if it is addressed at all, and I’m here to say the MIMWAL makes it ridiculously easy. If you don’t have the MIMWAL deployed into your MIM (or FIM) environment, I seriously recommend doing so – it will repay the effort taken to build and deploy very quickly (Check the post above for build/deploy notes).

Mass Updates Solution

All it takes with MIMWAL is one workflow, containing just one activity, paired with a policy rule (not documented here).

Start a new workflow definition:

  • Name: Update all people in location when Location is updated
  • Type: Action
  • Run on policy update: False

CreateWorkflowLocationUpdate1

Add Activity -> Activity Picker -> “WAL: Update Resources” -> Select

CreateWorkflowLocationUpdate2

You’ll have to tick Advanced Features, then tick Query Resources when revealed to be able to enter the query.

Here, we’re searching for all person objects which have their location reference set to the location object which has just been updated. If you’re not using location references, you could use a search such as “/Person[_locationCode = ‘[//Target/_locationCode]’]” instead.

  • Advanced Features: True
  • Query Resources: True
  • Queries:
    • Key: Users
    • XPath Filter: /Person[_locationObject = ‘[//Target/ObjectID]’]

CreateWorkflowLocationUpdate3

Here is where the magic happens. I haven’t found many examples on the web; hopefully this makes it more obvious how updating multiple objects at a time works.

The target expression is the result set from the above query, and the particular attribute required. In this example, we’re collecting the Address attribute from the updated location object ([//Target/Address]) if it exists, or null otherwise, and sending it to the Address attribute on the query result set called Users ([//Queries/Users/Address]).

Updates:

  • Value Expression: IIF(IsPresent([//Target/Address]),[//Target/Address],Null())
  • Target: [//Queries/Users/Address]
  • Allow Null: True

and so on, for all appropriate attributes.

CreateWorkflowLocationUpdate4
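For instance, the remaining address-related attributes might be mapped the same way (the attribute names here are illustrative, not prescriptive):

  • Value Expression: IIF(IsPresent([//Target/City]),[//Target/City],Null())
    Target: [//Queries/Users/City]
  • Value Expression: IIF(IsPresent([//Target/PostalCode]),[//Target/PostalCode],Null())
    Target: [//Queries/Users/PostalCode]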

Notes

Very simple to set up, but it can be slow to execute across large result sets, as each object (e.g. Person) is updated as a separate request. So try to make changes to location data in quiet processing times, or on an admin service instance … but you do that anyway, right?

Automating the creation of Azure IoT Hubs and the registration of IoT Devices with PowerShell and VS Code

The creation of an Azure IoT Hub is quick and simple, either through the Azure Portal or using PowerShell. But what can be more time-consuming is the registration of IoT Devices with the IoT Hub and the generation of SAS Tokens for them for authentication.

In my experiments with micro-controllers and their integration with Azure IoT Services I often find I keep having to manually do tasks that should have just been automated. So I did. In this post I’ll cover using PowerShell to;

  • create an Azure IoT Hub
  • register an Azure IoT Device
  • generate a SAS Token for the IoT Device to use for authentication to an Azure IoT Hub from a Mongoose OS enabled ESP8266 micro controller

IoT Integration

Prerequisites

In order to fully test this, ideally you will have a micro-controller. I’m using an ESP8266 based micro-controller like this one. If you want to test this out without physical hardware, you could generate your own DeviceID (any text string) and use the AzureIoT Library detailed further on to send MQTT messages.

You will also require an Azure Subscription. I detail using a Free Tier Azure IoT Hub, which is limited to 8,000 messages per day. And instead of PowerShell/PowerShell ISE, I’ll be using Visual Studio Code.

Finally you will need the AzureRM and AzureIoT PowerShell modules. With WMF 5.x you can get them from the PowerShell Gallery with;

install-module AzureRM
install-module AzureIoT

Create an Azure IoT Hub

The script below will create a Free Tier Azure IoT Hub. Change the location (line 15) to the Azure Region you will use (the commands on the lines above will list what regions are available), the Resource Group Name that will be created to hold it (line 18) and the name of the IoT Hub (line 23), and let it rip.
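A condensed sketch of the key commands (the line numbers above refer to the full embedded script; the names and region below are illustrative):

# List available regions, then create a resource group and a Free Tier (F1)
# IoT Hub. Requires the AzureRM module and an authenticated session.
Login-AzureRmAccount
Get-AzureRmLocation | Select-Object Location

$location      = 'eastus'
$resourceGroup = 'IoTRG'
$iotHubName    = 'MyIoTHub'
New-AzureRmResourceGroup -Name $resourceGroup -Location $location
New-AzureRmIotHub -ResourceGroupName $resourceGroup -Name $iotHubName -SkuName F1 -Units 1 -Location $location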

From your micro-controller we will need the DeviceID. I’m using the ID generated by the device which I obtained from the Device Configuration => Expert View of my Mongoose OS enabled ESP8266.

Device Config.PNG

Register the IoT Device with our Azure IoT Hub

Using the AzureIoT PowerShell module we can automate the creation/registration of the IoT Device. Update the script below with the name of your IoTHub and the Resource Group that contains it that you created earlier (lines 7 and 11). Update line 21 for the DeviceID of your new IoT Device. I’m using the AzureIoT module to do this. With WMF 5.x you can install it quickly from the gallery with install-module AzureIoT

Looking at our IoTHub in the Azure Portal we can see the newly registered IoT Device.

DeviceCreated.png

Generate an IoT Device SAS Token

The final step is to create a SAS Token for our IoT Device to use to connect to the Azure IoTHub. Historically you would use the IoT Device Explorer to do that. Alternatively you can also use the code samples to implement the SAS Device Token generation via an Azure Function App. Examples exist for JavaScript and C#. However as of mid-January 2018 you can do it directly from VS Code or Azure Cloud Shell using the Azure CLI and the IoT Extension. I’m using this method here as it is the quickest and simplest way of generating the Device SAS Token.

The command to generate a token that would work for all Devices on an IoT Hub is

az iot hub generate-sas-token --hub-name <your IoT Hub name>

Here I show executing it via the Azure Cloud Shell after installing the IoT Extensions as detailed here. To open the Bash Cloud Shell select the >_ icon next to the notification bell in the top right menu list.

Generate IOT Device SAS Token.PNG

As we have done everything else via PowerShell and VS Code we can also do it easily from VS Code. Install the Azure CLI Tools (v0.4.0 or later) in VS Code as detailed here. Then from within VS Code press Control + Shift + P to open the Command Palette and enter Azure: Sign In. Sign in to Azure. Then Control + Shift + P again and enter Azure: Open Bash in Cloud Shell to open a Bash Azure CLI Shell. You can check to see if you have the Azure CLI IoT Extension (if you’ve previously used the Azure CLI for IoT operations) by typing;

az extension show --name azure-cli-iot-ext

and install it if you don’t with;

az extension add --name azure-cli-iot-ext

Then run the same command from VS Code to generate the SAS Token

az iot hub generate-sas-token --hub-name <your IoT Hub name>

VSCode Generate SAS Token.PNG

NOTE: That token can then be used for any Device registered with that IoT Hub. Best practice is to have a token per device. To do that type

az iot hub generate-sas-token --hub-name <your IoT Hub name> --device-id <your device ID>

Generate SAS Token VS Code Per Device.PNG

By default you will get a token valid for 1 hour. Use the --duration switch to specify the duration of the token you require for your environment.

We can now take the SAS Token and put it into our MQTT Config on our Mongoose OS IoT Device. Update the Device Configuration using Expert View and Save.

Mongoose SAS Config.PNG

We can then test our IoT Device sending updates to our Azure IoT Hub. Update Init.js using the telemetry sample code from Mongoose.

load('api_config.js');
load('api_mqtt.js');
load('api_sys.js');
load('api_timer.js');

let topic = 'devices/' + Cfg.get('device.id') + '/messages/events/';

Timer.set(1000, true /* repeat */, function() {
  let msg = JSON.stringify({ ram: Sys.free_ram() });
  let ok = MQTT.pub(topic, msg, 1);
  print(ok, topic, '->', msg);
}, null);

We can then see the telemetry being sent to our Azure IoT Hub using MQTT. In the Device Logs, after the datestamp and before the devices/ topic, if you see a 0 instead of a 1 (as shown below) then your connection information or SAS Token is not correct.

Mongoose IOT Events.png

On the Azure IoT side we can then check the metrics and see the incoming telemetry using the counter Telemetry Metrics Sent as shown below.

Telemetry Metrics Sent.PNG

If you don’t have an IoT Device you can simulate one using PowerShell. The following example shows sending a message to our IoT Hub (using variables from previous scripts).

$deviceParams = @{
    iotConnString = $IoTConnectionString
    deviceId = $deviceID
}
$deviceKeys = Get-IoTDeviceKey @deviceParams

# Get Device Client using the device's URI and primary key
$device = Get-IoTDeviceClient -iotHubUri $IOTHubDeviceURI -deviceId $deviceID -deviceKey $deviceKeys.DevicePrimaryKey

# Send Message (note: splat the parameter hashtable rather than passing
# it to -deviceClient)
$deviceMessageParams = @{
    deviceClient = $device
    messageString = "Azure IOT Hub"
}
Send-IoTDeviceMessage @deviceMessageParams

Summary

Using PowerShell we have quickly been able to;

  • Create an Azure IoT Hub
  • Register an IoT Device
  • Generate the SAS Token for the IoT Device to authenticate to our IoT Hub with
  • Configure our IoT Device to send telemetry to our Azure IoT Hub and verify integration/connectivity

We are now ready to implement logic onto our IoT Device for whatever it is you are looking to achieve.

 

Using Intune and AAD to protect against Spectre and Meltdown

Kieran Jacobsen is a Melbourne based IT professional specialising in Microsoft infrastructure, automation and security. Kieran is Head of Information Technology for Microsoft partner, Readify.

I’m a big fan of Intune’s device compliance policies and Azure Active Directory’s (AAD) conditional access rules. They’re one piece of the puzzle in moving to a Beyond Corp model, which I believe is the future of enterprise networks.

Compliance policies allow us to define what it takes for a device (typically a client) to be considered secure. The rules could include the use of a password, encryption, OS version or even if a device has been jail-broken or rooted. In Intune we can define policies for Windows 8.1 and 10, Windows Phone, macOS, iOS and Android.

One critical thing to highlight is that compliance policies don’t enforce settings and don’t make changes to a device. They’re simply a decision-making tool that allows Intune (and AAD) to determine the status of the device. If we want to make changes to a device, we need to use Intune configuration policies. It’s up to the admin or the user to make a non-compliant device compliant.

A common misconception with compliance policies is that the verification process occurs in real time, that is, that a device’s compliance status is checked when a user tries to log in. In fact, the check occurs on an hourly basis, though users and admins can trigger a check manually.

The next piece of the puzzle are conditional access policies. These are policies that allow us to target different sign-in experiences for different applications, devices and user accounts. A user on a compliant device may receive a different sign-in experience to someone using a web browser on some random unknown device.

How compliance policies and conditional access work together

To understand how compliance policies and conditional access work together, let’s look at a user story.

Fred works in the Accounting department at Capital Systems. Fred has a work PC issued by Capital’s IT Team, and a home PC that he bought from a local computer store.

The IT team has defined two Conditional Access policies:

  • For Office 365: a user can connect from a compliant device, or needs to pass an MFA check.
  • For the finance system: the user can only connect from a compliant device and must pass an MFA check.

How does this work in practice?

When Fred tries to access his email from his work device, perhaps through a browser, AAD will check his device’s compliance status during login. As Fred’s work PC is compliant, it will allow access to his email.

Fred now heads home; on the train he remembers he forgot to reply to an important email. When Fred gets home, he starts his home PC and navigates to the Office 365 portal. This time, AAD doesn’t know the device, so it will treat the device as non-compliant and Fred will be prompted to complete MFA before he can access his email.

Things are different for Fred when he tries to access Capital’s finance system. Fred will be able to access this system from his work PC as it’s compliant, assuming he completes an MFA request. Fred won’t be able to access the finance system from his home PC as that device isn’t compliant.

These rules allow Capital System’s IT team to govern who can access an application, which devices they can access it from, and whether they need to complete MFA.

Ensuring Spectre and Meltdown Patches are installed

We can use compliance policies to check whether a device’s OS version contains the Spectre and Meltdown patches. When Intune checks the device’s compliance, if it isn’t running the expected patch level, it will be marked as non-compliant.

What does this mean for the user? In Fred’s case, if his work PC lacks those updates, he may receive extra MFA prompts and lose access to the finance system until he installs the right patches.

The Intune portal and Power BI can be used to generate reports on device compliance and identify devices that need attention. You can also configure Intune to email a user when their device becomes non-compliant. This email can be customised; I recommend that you include a link to a remediation guide or to your support system.

Configuring Intune Compliance Policies

Compliance policies can be created and modified in the Azure Portal via the Intune panel. Simply navigate to Device Compliance and then Policies. You’ll need to create a separate policy for each OS for which you want to manage compliance.

Within a compliance policy, we specify an OS version using a “major.minor.build” formatted string.

The major version numbers are:

  • Windows 10 – 10.0 (note that the .0 is important)
  • Windows 8.1 – 3
  • macOS – 10

We can express releases like the Windows 10 Fall Creators Update or macOS High Sierra using the minor version number.

  • Windows 10 Fall Creators Update – 10.0.16299
  • macOS High Sierra – 10.13

Finally, we can narrow down to a specific release or patch by using the build version number. For instance, the January updates for each platform are:

  • Windows 10 Fall Creators Update – 10.0.16299.192
  • macOS High Sierra – 10.13.2

You can specify the minimum and maximum OS version by navigating to Properties, Settings and then Device Properties.
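Under the hood this is a straightforward version comparison. As a rough illustration only (not Intune’s actual implementation), PowerShell’s [version] type compares “major.minor.build” strings the same way:

# Illustrative only: compare a device's OS version against a required minimum
$minimumVersion = [version]"10.0.16299.192"   # Windows 10 FCU with the January update
$deviceVersion  = [version]"10.0.16299.125"   # example device build

if ($deviceVersion -ge $minimumVersion) {
    "Compliant"
} else {
    "Non-compliant"   # this device is missing the January update
}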


Windows+10

Setting the minimum Windows 10 version in a compliance policy.

macOS

Setting the minimum macOS version in a compliance policy.

Once you have made this change, devices that don’t meet the minimum version will be marked as non-compliant during their next compliance evaluation.

Kieran Jacobsen

Automating the generation of Microsoft Identity Manager Configuration Documentation

Introduction

Last year Microsoft released the Microsoft Identity Manager Configuration Documenter which is available here. It is a fantastic little tool from Microsoft that supersedes its predecessor from the Microsoft Identity Manager 2003 Resource Toolkit (which only documented the Sync Server Configuration).

Running the tool (a PowerShell module) against a base out-of-the-box reference configuration for FIM/MIM servers, reconciled against a configuration exported from an implementation’s MIM Sync and Service servers, generates an HTML report that details the existing configuration of the MIM Service and MIM Sync.

Overview

Last year I wrote this post based on an automated solution I implemented to perform nightly backups of a FIM/MIM environment during development.

This post details how I’ve automated another daily task for a large development environment where a number of changes are going on and I wanted documentation generated each day detailing the configuration. This is partly to be able to quickly work out what has changed when needing to re-validate or roll back changes, and partly to keep the individual configs from each day so they can be used if we need to roll back.

The process uses an Azure Function App that uses Remote PowerShell into MIM to;

  1. Leverage a modified (streamlined) version of my nightly backup Azure Function to generate the Schema.xml and Policy.xml MIM Service configuration files, and use the Lithnet MIIS Automation PowerShell Module installed on the MIM Sync Server to export the MIM Sync Server configuration
  2. Create a sub-directory for each day under the MIM Documenter Tool to hold the daily configs
  3. Execute the generation of the report and copy the report into the daily config/documented solution directory

Obtaining and configuring the MIM Configuration Documenter

Download the MIM Configuration Documenter from here and extract it to somewhere like c:\FIMDoco on your FIM/MIM Sync Server. In this example in my Dev environment I have the MIM Sync and Service/Portal all on a single server.

Then update the Invoke-Documenter-Contoso.ps1 (or whatever you’ve renamed the script to) to make the following changes;

  • Update the following lines for your version, include the new variable $schedulePath and add it to the $pilotConfig variable. Create the C:\FIMDoco\Customer and C:\FIMDoco\Customer\Dev directories (replace Customer with something appropriate).
######## Edit as appropriate ####################################
$schedulePath = Get-Date -format dd-MM-yyyy
$pilotConfig = "Customer\Dev\$($schedulePath)" # the path of the Pilot / Target config export files relative to the MIM Configuration Documenter "Data" folder.
$productionConfig = "MIM-SP1-Base_4.4.1302.0" # the path of the Production / Baseline config export files relative to the MIM Configuration Documenter "Data" folder.
$reportType = "SyncAndService" # "SyncOnly" # "ServiceOnly"
#################################################################
  • Comment out the Host Settings lines, as these won’t work via a WebJob/Azure Function
#$hostSettings = (Get-Host).PrivateData
#$hostSettings.WarningBackgroundColor = "red"
#$hostSettings.WarningForegroundColor = "white"
  • Comment out the last line, as the script will be executed as part of the automation and we want it to complete silently at the end
# Read-Host "Press any key to exit"

It should then look something like this;

Azure Function to Automate execution of the Documenter

As per my nightly backup process;

  • I configured my MIM Sync Server to accept Remote PowerShell sessions. That involved enabling WinRM, creating a certificate, creating the listener, opening the firewall port and enabling the incoming port on the NSG. You can easily do all that by following my instructions here. From the same post I set up the encrypted password file, uploaded it to my Function App and set the Function App Application Settings for MIMSyncCredUser and MIMSyncCredPassword.
  • I created an Azure PowerShell Timer Function App. Pretty much the same as I show in this post, except choose Timer.
    • I configured my Schedule for 6am every morning using the following CRON configuration (Azure Functions timer triggers use six-field NCRONTAB expressions: second minute hour day month day-of-week, so this fires at 6:00am daily)
0 0 6 * * *
  • I also needed to increase the timeout for the Azure Function, as generating the config files and executing the report exceeds the default timeout of 5 mins in my environment (19 Management Agents). I increased the timeout to the maximum of 10 mins as detailed here, essentially by adding the following to the host.json file in the wwwroot directory of my Function App.
{
 "functionTimeout": "00:10:00"
}

Azure Function PowerShell Timer Script (Run.ps1)

This is the Function App PowerShell Script that uses Remote PowerShell into the MIM Sync/Service Server to export the configuration using the Lithnet MIIS Automation and Microsoft FIM Automation PowerShell modules.

Note: If your MIM Service is on a different host you will need to install the Microsoft FIM Automation PowerShell Module on your MIM Sync Server and update the script below to change references to http://localhost:5725 to whatever your MIM Service host is.
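The full script was embedded in the original post. As a rough sketch only of its shape, assuming the MIMSyncCredUser/MIMSyncCredPassword Application Settings described above (the hostname and paths are placeholders, and the Lithnet sync-config export cmdlet name should be verified against your module version):

# Build a credential from the Function App Application Settings
# (adjust the decryption to match how you encrypted your password file)
$username = $env:MIMSyncCredUser
$password = $env:MIMSyncCredPassword | ConvertTo-SecureString
$credential = New-Object System.Management.Automation.PSCredential ($username, $password)

# Remote PowerShell into the MIM Sync/Service Server (hostname is a placeholder)
$session = New-PSSession -ComputerName "mimsyncdev.customer.com" -Credential $credential -UseSSL

Invoke-Command -Session $session -ScriptBlock {
    # Daily sub-directory for this run's config exports
    $dailyPath = "C:\FIMDoco\Data\Customer\Dev\$(Get-Date -Format dd-MM-yyyy)"
    New-Item -ItemType Directory -Path $dailyPath -Force | Out-Null

    # Export the MIM Service configuration (FIM Automation snap-in)
    Add-PSSnapin FIMAutomation
    Export-FIMConfig -Uri "http://localhost:5725" -PolicyConfig -PortalConfig | ConvertFrom-FIMResource -File "$dailyPath\Policy.xml"
    Export-FIMConfig -Uri "http://localhost:5725" -SchemaConfig | ConvertFrom-FIMResource -File "$dailyPath\Schema.xml"

    # Export the MIM Sync Server configuration (Lithnet MIIS Automation module)
    Import-Module LithnetMIISAutomation
    Export-MIMSyncConfiguration -Path $dailyPath   # assumption: verify this cmdlet name for your module version

    # Generate the Documenter report
    & "C:\FIMDoco\Invoke-Documenter-Contoso.ps1"
}

Remove-PSSession $session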

Testing the Function App

With everything configured, manually run the Function App and check the output window. If you’ve configured everything correctly the Logs will show success, as shown below. In this environment with 19 Management Agents it takes 7 minutes to run.

Running the Azure Function.PNG

The Report

The outcome every day just after 6am is that I have (via automation);

  • an export of the Policy and Schema configuration from my MIM Service
  • an export of the MIM Sync Server configuration (the Metaverse and all Management Agents)
  • the MIM Configuration Documenter report generated
  • the ability to roll back changes on a daily interval (either for a MIM Service change or an individual Management Agent change)

Under the c:\FIMDoco\Data\Customer\Dev\Report directory is the HTML Configuration Report.

Report Output.PNG

Opening the report in a browser we have the configuration of the MIM Sync and MIM Service.

Report


Provisioning Hybrid Exchange/Exchange Online Mailboxes with Microsoft Identity Manager

Introduction

Working for Kloud, all our projects involve Cloud services, and all our customers have varying and unique requirements. Recently one of our customers embarked on their migration from on-premises Exchange to Exchange Online. Nothing really groundbreaking there, however they had a number of unique requirements, including management of Litigation Hold, and that needed to be integrated with their existing Microsoft Identity Manager implementation (which currently provisions new users to their Exchange 2013 environment). They also required that management of the Exchange environment still be possible via the Exchange Management Console against a local Exchange server. This post details how I integrated the environments using MIM.

Overview

In order to integrate the Provisioning and Lifecycle management of Exchange Online Mailboxes in a Hybrid Exchange with Microsoft Identity Manager I created a custom PowerShell Management Agent simply because it was going to provide the flexibility I needed.

Provisioning is based on the following process;

  1. MIM Creates new user in Active Directory (no changes to existing MIM provisioning process)
  2. Azure Active Directory Connect synchronises the user to Azure Active Directory
  3. The Exchange Online MIM Management Agent sees the corresponding AAD account for the new user
  4. MIM Declarative Rules trigger the creation of a new Remote Mailbox for the AD/AAD user against the local Exchange 2013 on-premises server. This allows the EMC to be used to manage mailboxes on-premises even though the mailbox resides in Office 365/Exchange Online
  5. AADC/Exchange synchronises the information as part of the Hybrid Exchange topology
  6. MIM sees the EXO Mailbox configuration for the new user and enables Litigation Hold against the EXO Mailbox (if required)

The following diagram graphically depicts this process.

EXO IDM Provisioning Solution.png

Exchange Online PowerShell MA

As always I’m using my favourite PowerShell Management Agent, the Granfeldt PS MA, now available on Github here.

Schema Script

The Schema script configures the schema required for current and future EXO management requirements. The Schema is based on a single Object Class “MailUser” but pulls the information from a combination of Azure AD User and Exchange Online Mailbox object classes for an associated account. Azure AD User objects are prefixed by ‘AAD’. Non AAD prefixed attributes are EXO Mailbox attributes.
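As a rough sketch (the attribute names shown are illustrative, not the full production schema), a Granfeldt PSMA schema script for this MailUser object class looks something like this:

# Granfeldt PSMA schema convention: "Name|Type" NoteProperty members with sample values
$obj = New-Object -Type PSCustomObject
$obj | Add-Member -Type NoteProperty -Name "Anchor-Id|String" -Value "334ef7a5-2f85-4aae-bdbc-e17bae07a973"
$obj | Add-Member -Type NoteProperty -Name "objectClass|String" -Value "MailUser"
# Azure AD User attributes are prefixed 'AAD'
$obj | Add-Member -Type NoteProperty -Name "AADUserPrincipalName|String" -Value "user@customer.com"
$obj | Add-Member -Type NoteProperty -Name "AADAccountEnabled|Boolean" -Value $true
# Non-prefixed attributes come from the Exchange Online Mailbox
$obj | Add-Member -Type NoteProperty -Name "RecipientTypeDetails|String" -Value "UserMailbox"
$obj | Add-Member -Type NoteProperty -Name "LitigationHoldEnabled|Boolean" -Value $true
$obj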

Import Script

The Import script connects to both Azure AD and Exchange Online to retrieve Azure AD User accounts and if present the associated mailbox for a user.

It retrieves all Member AAD User accounts and puts them into a hash table; connectivity to AAD is via the AzureADPreview PowerShell module. It also retrieves all Mailboxes and puts them into a hash table. It then processes all the mailboxes first, each along with its associated AAD User account (utilising a join via userPrincipalName).

Once all mailboxes have been processed, the remaining AAD accounts (those without mailboxes) are processed.
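As a rough illustration of that join pattern (a sketch only, assuming an authenticated AzureADPreview connection and an established Exchange Online remote session; the MA object emission is elided):

# All Member (non-Guest) AAD users, keyed by UPN
$aadUsers = @{}
Get-AzureADUser -All $true | Where-Object { $_.UserType -eq "Member" } |
    ForEach-Object { $aadUsers[$_.UserPrincipalName] = $_ }

# All mailboxes, keyed by UPN
$mailboxes = @{}
Get-Mailbox -ResultSize Unlimited |
    ForEach-Object { $mailboxes[$_.UserPrincipalName] = $_ }

# Mailboxes first, joined to their AAD user via userPrincipalName
foreach ($upn in $mailboxes.Keys) {
    $mailbox = $mailboxes[$upn]
    $aadUser = $aadUsers[$upn]
    # ... emit a MailUser object combining both ...
}

# Then the remaining AAD users that have no mailbox
foreach ($upn in ($aadUsers.Keys | Where-Object { -not $mailboxes.ContainsKey($_) })) {
    # ... emit a MailUser object from the AAD user only ...
}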

Export Script

The Export script performs the necessary integration against the on-premises Exchange Server 2013 for provisioning, and against Exchange Online for the rest of the management. Both utilise Remote PowerShell. It also leverages the Lithnet MIIS Automation PowerShell Module to query the Metaverse to validate current object statuses.
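As a sketch of the two key operations (the identities and remote routing address are placeholders; the real script processes the pending exports handed to it by the MA framework):

# Provisioning: create a Remote Mailbox against the local Exchange 2013 server
# (run in a remote session to the on-premises Exchange server)
Enable-RemoteMailbox -Identity "jane.doe@customer.com" -RemoteRoutingAddress "jane.doe@customer.mail.onmicrosoft.com"

# Lifecycle management: enable Litigation Hold once the EXO mailbox exists
# (run in a remote session to Exchange Online)
Set-Mailbox -Identity "jane.doe@customer.com" -LitigationHoldEnabled $true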

Wiring it all up

The scripts above will allow you to integrate a FIM/MIM implementation with AAD/EXO for management of users’ EXO Mailboxes. You’ll need connectivity from the MIM Sync Server to AAD/O365 in order to manage them. Everything else I wired up using a few Sets, Workflows, Sync Rules and MPRs.