Migrating SharePoint 2013 on-premises to Office 365 using Sharegate

Recently I completed a migration project which brought a number of sub-sites within SharePoint 2013 on-premises to the cloud (SharePoint Online). We decided to use Sharegate as the primary tool due to its simplicity.

Although it might sound like a straightforward process, there are a few things worth checking pre- and post-migration, and I have summarized them here. I found it easier to have this information recorded in a spreadsheet with different tabs:

Pre-migration check:

  1. First thing, Get Site Admin access!

    This is the first and most important step: get yourself admin access. It can be a lengthy process, especially in a large corporate environment. The best level of access is being granted Site Collection Admin for all sites, but sometimes this might not be possible. Site Administrator access is therefore the bare minimum needed for the migration to work.

    You will likely be granted Global Admin on the new tenant in most cases, but if not, ask for it!

  2. List down active site collection features

    Whatever features are activated on the source site will need to be activated on the destination site as well, so we need to record what has been activated on the source. If any third-party feature is activated, you will need to liaise with the relevant stakeholders as to whether it is still required on the new site. If it is, a separate license is highly likely to be required, as the new environment is cloud-based rather than on-premises. Take Nintex Workflow for example: Nintex Workflow Online is a separate license compared to Nintex Workflow 2013. A quick way to record the activated features is sketched below.
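
    As a minimal sketch (run from the SharePoint 2013 Management Shell on a farm server; the site URL and output paths are placeholder assumptions), the activated features can be recorded like this:

    # Record the activated site collection scoped features on the source
    Get-SPFeature -Site "http://intranet/sites/teamsite" |
        Select-Object DisplayName, Id |
        Export-Csv -NoTypeInformation -Path "C:\Temp\sitecollection_features.csv"
    # Record the activated web (sub-site) scoped features as well
    Get-SPFeature -Web "http://intranet/sites/teamsite/subsite" |
        Select-Object DisplayName, Id |
        Export-Csv -NoTypeInformation -Path "C:\Temp\web_features.csv"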

  3. Segregate the list of sites, inventory analysis

    I found it important to list all the sites you are going to migrate and distinguish whether they are site collections or just subsites. What I did was put each site under a new tab, with all its site contents listed. Next to each list/library, I have fields for the list type, number of items and comments (if any).

    Go through each piece of content, preferably sitting down with the site owner to get into the details. Some useful questions to ask:

  • Is this still relevant? Can it be deleted or skipped for the migration?
  • Is this heavily used? How often does it get accessed?
  • Does this list have custom edit/new forms? Sometimes owners might not even know, so you might have to take an extra look by scanning through the forms.
  • Check if pages have custom script with site URL references, as these will need to be changed to accommodate the new site URL.

It would also be useful to get a comprehensive picture of how much storage each site holds. This can help you work out which site has the most content, and hence is likely to take the longest time during the migration. Sharegate has an inventory reporting tool, which can help, but it requires Site Collection Admin access. A quick storage check is sketched below.
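
As a minimal sketch (again run from the SharePoint 2013 Management Shell; the output path is a placeholder), per-site-collection storage can be pulled straight from the farm:

# List site collections by storage used, largest first
Get-SPSite -Limit All |
    Select-Object Url, @{Name = "StorageMB"; Expression = {[Math]::Round($_.Usage.Storage / 1MB, 0)}} |
    Sort-Object StorageMB -Descending |
    Export-Csv -NoTypeInformation -Path "C:\Temp\site_storage.csv"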

  4. Discuss some of the limitations

    Pages library

    The Pages library under each site needs specific attention, especially if you don’t have site collection admin! Pages which inherit a content type or master page from the parent site will not have these migrated across by Sharegate, meaning these pages will either not be created at the new site, or they will simply show as using the default master page. This needs to be communicated and discussed with each owner.

    External Sharing

    External users will not be migrated across to the new site! These are users who won’t be provisioned in the new tenant but still require access to SharePoint. They will need to be added (invited) manually to a site using their O365 email account or a Microsoft account.

    An O365 account would be whatever account they have been using to access their own SharePoint Online. If they do not have one, they will need to use their Microsoft account, which would be a Hotmail/Outlook account. Once they have been invited, they need to respond to the email by signing into the portal in order to be provisioned. The new SPO site collection will need to have external sharing enabled before external access can happen (see the sketch below). For more information, refer to: https://support.office.com/en-us/article/Manage-external-sharing-for-your-SharePoint-Online-environment-C8A462EB-0723-4B0B-8D0A-70FEAFE4BE85
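
    As a minimal sketch (assuming the SharePoint Online Management Shell; the tenant and site URLs are placeholders), external sharing can be enabled on the new site collection like this:

    # Connect to the SPO admin endpoint, then allow sharing with authenticated external users
    Connect-SPOService -Url "https://yourtenant-admin.sharepoint.com"
    Set-SPOSite -Identity "https://yourtenant.sharepoint.com/sites/teamsite" -SharingCapability ExternalUserSharingOnly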

    What can’t Sharegate do?

    The following minor things cannot be migrated to O365:

  • User alerts – users will need to reset their alerts on the new site
  • Personal views – users will need to create their personal views again on the new site
  • Web part connections – any web part connections will not be preserved

For more, refer to: https://support.share-gate.com/hc/en-us/categories/115000076328-Limitations

Performing the migration:

  1. Pick the right time

    Doing the migration during a low-activity period would be ideal. User communications should be sent out as early as possible to inform users of the actual date. I tend to stick to the middle of the week, so that we still have a couple of days left to solve any issues, instead of doing it on a Friday or Saturday.

  2. Locking old sites

    During the migration, we do not want any users making changes to the old site. If you are migrating site collections, fortunately there’s a way to lock them down, provided you have access to the central admin portal (or the farm’s management shell, as sketched below). See https://technet.microsoft.com/en-us/library/cc263238.aspx
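
    As a minimal sketch (run from the SharePoint 2013 Management Shell on a farm server; the URL is a placeholder), the lock can also be applied and released from PowerShell:

    # Lock the source site collection read-only for the duration of the migration
    Set-SPSite -Identity "http://intranet/sites/teamsite" -LockState ReadOnly
    # Once the migration has been verified, unlock it (or leave it locked if decommissioning)
    Set-SPSite -Identity "http://intranet/sites/teamsite" -LockState Unlock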

    However, if you are migrating sub-sites, there’s no way to lock down a single sub-site except by changing its site permissions. That also means changing the site permissions risks having the permission information lost, so it would be ideal to record these permissions before making any changes (a sketch follows). Also, take extra note of lists or libraries with unique permissions; they do not inherit site permissions, and hence won’t be “locked” unless changed manually.
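
    A minimal sketch for recording a sub-site’s permissions before changing them (SharePoint 2013 Management Shell; the URL and output path are placeholders):

    # Export who holds which role on the sub-site before locking it down
    $web = Get-SPWeb "http://intranet/sites/teamsite/subsite"
    $permissions = foreach ($ra in $web.RoleAssignments) {
        [PSCustomObject]@{
            Member = $ra.Member.Name
            Roles  = ($ra.RoleDefinitionBindings | ForEach-Object { $_.Name }) -join ";"
        }
    }
    $permissions | Export-Csv -NoTypeInformation -Path "C:\Temp\subsite_permissions.csv"
    $web.Dispose()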

  3. Beware of O365 traffic jam

    Always stick to the Insane mode when running the migration in Sharegate. The Insane mode makes use of the new Office 365 Migration API, which is the fastest way to migrate huge volumes of data to Office 365. While exporting the data to Office 365 is fast, I did find a delay waiting for Office 365 to import it into the SharePoint tenant. Sometimes it could sit there for an hour before continuing with the import. Also, avoid running too many sessions if your VM is not powerful enough.

  4. Delta migration

    The good thing about using Sharegate is that you can do a delta migration, which means you only migrate those files which have been modified or added since the last migration. However, it doesn’t handle deletion! If any files have been removed since you last migrated, running a delta sync will not delete these files from the destination end. Therefore, best practice is still to delete the list from the destination site and re-create it using the Site Object wizard.

Post-migration check:


Things to check:

  • Users can still access relevant pages, lists and libraries
  • Users can still CRUD files/items
  • Users can open Office web apps (there can be a different experience related to authentication when opening Office files; in most cases, users should only get prompted the very first time)

Exchange Online & Splunk – Automating the solution

NOTES FROM THE FIELD:

I have recently been consulting on what I think is a pretty cool engagement to integrate some Office 365 mailbox data into the Splunk reporting platform.

I initially thought about using a .csv export methodology, however through trial & error (more error than trial, if I’m being honest), and realising that this method still required some manual interaction, I decided to embark on finding a fully automated solution.

The final solution comprises the following components:

  • Splunk HTTP event collector
    • Splunk hostname
    • Token from HTTP event collector config page
  • Azure automation account
    • Azure Run As Account
    • Azure Runbook
    • Exchange Online credentials (registered to the Azure automation account)

I’m not going to run through the creation of the automation account or required credentials, as these had already been created, however there is a great guide to configuring the solution I have used for this customer at https://www.splunk.com/blog/2017/10/05/splunking-microsoft-cloud-data-part-3.html

The PowerShell script we are using will achieve the following:

  • Connect to Azure and Exchange Online – Azure run as account authentication
  • Configure variables for connection to Splunk HTTP event collector
  • Collect mailbox data from the Exchange Online environment
  • Split the mailbox data into parts for faster processing
  • Specify SSL/TLS protocol settings for self-signed cert in test environment
  • Create a JSON object to be posted to the Splunk environment
  • HTTP POST the data directly to Splunk

The Code:

#Clear existing PS sessions
Get-PSSession | Remove-PSSession | Out-Null

#Create split function for the mailbox data
function Split-Array {
    param($inArray, [int]$parts, [int]$size)
    if ($parts) {
        $PartSize = [Math]::Ceiling($inArray.Count / $parts)
    }
    if ($size) {
        $PartSize = $size
        $parts = [Math]::Ceiling($inArray.Count / $size)
    }
    $outArray = New-Object 'System.Collections.Generic.List[psobject]'
    for ($i = 1; $i -le $parts; $i++) {
        $start = (($i - 1) * $PartSize)
        $end = (($i) * $PartSize) - 1
        if ($end -ge $inArray.Count) { $end = $inArray.Count - 1 }
        $outArray.Add(@($inArray[$start..$end]))
    }
    return , $outArray
}

function Connect-ExchangeOnline {
    param(
        $Creds
    )
    #Connect to Exchange Online
    $Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://outlook.office365.com/powershell-liveid/ -Credential $Creds -Authentication Basic -AllowRedirection
    $Commands = @("Add-MailboxPermission","Add-RecipientPermission","Remove-RecipientPermission","Remove-MailboxPermission","Get-MailboxPermission","Get-User","Get-DistributionGroupMember","Get-DistributionGroup","Get-Mailbox")
    Import-PSSession -Session $Session -DisableNameChecking:$true -AllowClobber:$true -CommandName $Commands | Out-Null
}

#Create variables
$SplunkHost = "Your Splunk hostname or IP Address"
$SplunkEventCollectorPort = "8088"
$SplunkEventCollectorToken = "Splunk Token from HTTP Event Collector"
$servicePrincipalConnection = Get-AutomationConnection -Name 'AzureRunAsConnection'
$credentials = Get-AutomationPSCredential -Name 'Exchange Online'

#Connect to Azure
Add-AzureRMAccount -ServicePrincipal -Tenant $servicePrincipalConnection.TenantID -ApplicationId $servicePrincipalConnection.ApplicationID -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint

#Connect to Exchange Online
Connect-ExchangeOnline -Creds $credentials

#Collect mailbox data from the Exchange Online environment
$mailboxes = Get-Mailbox -ResultSize Unlimited | Select-Object -Property DisplayName, PrimarySMTPAddress, IsMailboxEnabled, ForwardingSmtpAddress, GrantSendOnBehalfTo, ProhibitSendReceiveQuota, AddressBookPolicy

#Get current date & time
$time = Get-Date

#Convert timezone to Australia/Brisbane
$bnetime = [System.TimeZoneInfo]::ConvertTimeBySystemTimeZoneId($time, [System.TimeZoneInfo]::Local.Id, 'E. Australia Standard Time')

#Add a Time column to the output
$mailboxes = $mailboxes | Select-Object @{expression = {$bnetime}; Name = 'Time'}, DisplayName, PrimarySMTPAddress, IsMailboxEnabled, ForwardingSmtpAddress, GrantSendOnBehalfTo, ProhibitSendReceiveQuota, AddressBookPolicy

#Split the mailbox array into parts for faster processing
$recipients = Split-Array -inArray $mailboxes -parts 5

#Create JSON objects and HTTP POST them to the Splunk HTTP Event Collector
foreach ($recipient in $recipients) {
    foreach ($r in $recipient) {
        #Bypass SSL validation for the self-signed certificate used in testing
        $AllProtocols = [System.Net.SecurityProtocolType]'Ssl3,Tls,Tls11,Tls12'
        [System.Net.ServicePointManager]::SecurityProtocol = $AllProtocols
        [System.Net.ServicePointManager]::ServerCertificateValidationCallback = {$true}
        #Build the JSON string to post to Splunk
        $StringToPost = "{ `"Time`": `"$($r.Time)`", `"DisplayName`": `"$($r.DisplayName)`", `"PrimarySMTPAddress`": `"$($r.PrimarySmtpAddress)`", `"IsMailboxEnabled`": `"$($r.IsMailboxEnabled)`", `"ForwardingSmtpAddress`": `"$($r.ForwardingSmtpAddress)`", `"GrantSendOnBehalfTo`": `"$($r.GrantSendOnBehalfTo)`", `"ProhibitSendReceiveQuota`": `"$($r.ProhibitSendReceiveQuota)`", `"AddressBookPolicy`": `"$($r.AddressBookPolicy)`" }"
        $uri = "https://" + $SplunkHost + ":" + $SplunkEventCollectorPort + "/services/collector/raw"
        $header = @{"Authorization" = "Splunk " + $SplunkEventCollectorToken}
        #POST to the Splunk HTTP Event Collector
        Invoke-RestMethod -Method Post -Uri $uri -Body $StringToPost -Headers $header
    }
}
Get-PSSession | Remove-PSSession | Out-Null

The final output that can be seen in Splunk looks like the following:

11/13/17
12:28:22.000 PM
{
AddressBookPolicy:
DisplayName: Shane Fisher
ForwardingSmtpAddress:
GrantSendOnBehalfTo:
IsMailboxEnabled: True
PrimarySMTPAddress: shane.fisher@xxxxxxxx.com.au
ProhibitSendReceiveQuota: 50 GB (53,687,091,200 bytes)
Time: 11/13/2017 12:28:22
}

I hope this helps some of you out there.

Cheers,

Shane.

A tool to find mailbox permission dependencies

First published at https://nivleshc.wordpress.com

When planning to migrate mailboxes to Office 365, a lot of care must be taken around which mailboxes are moved together. The rule of thumb is “those that work together, move together”. The reason for taking this approach is that some permissions do not work cross-premises and can cause issues. For instance, if a mailbox has delegate permissions to another mailbox (permissions that have been assigned using the Outlook email client) and one is migrated to Office 365 while the other remains on-premises, the delegate capability is broken, as it does not work cross-premises.

During the recent Microsoft Ignite, it was announced that there are a lot of features coming to Office 365 which will help with the cross-premises access issues.

I have been using Roman Zarka’s Export-MailboxPermissions.ps1 (part of https://blogs.technet.microsoft.com/zarkatech/2015/06/11/migrate-mailbox-permissions-to-office-365/ bundle) script to export all on-premises mailboxes permissions then using the output to decide which mailboxes move together. Believe me, this can be quite a challenge!

Recently, while having a casual conversation with one of my colleagues, I was introduced to an Excel spreadsheet that he had created. Being the Excel guru that he is, he was doing various VLOOKUPs into the outputs from Roman Zarka’s script to find out if the mailboxes he was intending to migrate had any permission dependencies with other mailboxes. I just stared at the spreadsheet in awe, and uttered the words “dude, that is simply awesome!”

I was hooked on that spreadsheet. However, I started craving for it to do more, so I decided to take it upon myself to add some more features. Not being too savvy with Excel, I decided to use PowerShell instead. Thus was born Find_MailboxPermissions_Dependencies.ps1.

I will now walk you through the script and explain what it does.

  1. The first pre-requisite for Find_MailboxPermissions_Dependencies.ps1 is the four output files from Roman Zarka’s Export-MailboxPermissions.ps1 script (MailboxAccess.csv, MailboxFolderDelegate.csv, MailboxSendAs.csv, MailboxSendOnBehalf.csv)
  2. The next pre-requisite is details about the on-premises mailboxes. The on-premises Exchange environment must be queried and the details output into a csv file named OnPrem_Mbx_Details.csv. The csv must contain the following column headings: “DisplayName, UserPrincipalName, PrimarySmtpAddress, RecipientTypeDetails, Department, Title, Office, State, OrganizationalUnit”
  3. The last pre-requisite is information about mailboxes that are already in Office 365. Use PowerShell to connect to Exchange Online and then run the following command (where O365_Mbx_Details.csv is the output file)
    Get-Mailbox -ResultSize unlimited | Select DisplayName,UserPrincipalName,EmailAddresses,WindowsEmailAddress,RecipientTypeDetails | Export-Csv -NoTypeInformation -Path O365_Mbx_Details.csv 

    If there are no mailboxes in Office 365, then create a blank file and put the following column headings in it: “DisplayName”, “UserPrincipalName”, “EmailAddresses”, “WindowsEmailAddress”, “RecipientTypeDetails”. Save the file as O365_Mbx_Details.csv (a one-liner to create this is sketched below).
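
    As a minimal sketch, the blank file can be created from PowerShell (the output location is assumed to be the script’s working directory):

    # Create an empty O365_Mbx_Details.csv containing only the required headings
    '"DisplayName","UserPrincipalName","EmailAddresses","WindowsEmailAddress","RecipientTypeDetails"' |
        Out-File -FilePath .\O365_Mbx_Details.csv -Encoding utf8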

  4. Next, put the above files in the same folder and then update the variable $root_dir in the script with the path to the folder (the path must end with a \)
  5. It is assumed that the above files have the following names
    • MailboxAccess.csv
    • MailboxFolderDelegate.csv
    • MailboxSendAs.csv
    • MailboxSendOnBehalf.csv
    • O365_Mbx_Details.csv
    • OnPrem_Mbx_Details.csv
  6. Now that all the inputs have been taken care of, run the script.
  7. The first task the script does is to validate if the input files are present. If any of them are not found, the script outputs an error and terminates.
  8. Next, the files are read and stored in memory
  9. Now for the heart of the script. It goes through each of the mailboxes in the OnPrem_Mbx_Details.csv file and finds the following
    • all mailboxes that have been given SendOnBehalf permissions to this mailbox
    • all mailboxes that this mailbox has been given SendOnBehalf permissions on
    • all mailboxes that have been given SendAs permissions to this mailbox
    • all mailboxes that this mailbox has been given SendAs permissions on
    • all mailboxes that have been given Delegate permissions to this mailbox
    • all mailboxes that this mailbox has been given Delegate permissions on
    • all mailboxes that have been given Mailbox Access permissions on this mailbox
    • all mailboxes that this mailbox has been given Mailbox Access permissions on
    • if the mailbox that this mailbox has given the above permissions to or has got permissions on has already been migrated to Office 365
  10. The results are then output to a csv file (the name of the output file is of the format Find_MailboxPermissions_Dependencies_{timestamp of when script was run}_csv.csv)
  11. The columns in the output file are explained below:
  • PermTo_OtherMbx_Or_FromOtherMbx? – Y if the mailbox has given permissions to or has permissions on other mailboxes; N if there are no permission dependencies for this mailbox
  • PermTo_Or_PermFrom_O365Mbx? – TRUE if the mailbox that this mailbox has given permissions to or has permissions on is already in Office 365
  • Migration Readiness – a color code based on the migration readiness of this permission (further explained below)
  • DisplayName – the display name of the on-premises mailbox for which the permission dependency is being found
  • UserPrincipalName – the userPrincipalName of the on-premises mailbox for which the permission dependency is being found
  • PrimarySmtp – the primarySmtp of the on-premises mailbox for which the permission dependency is being found
  • MailboxType – the mailbox type of the on-premises mailbox for which the permission dependency is being found
  • Department – the department the on-premises mailbox belongs to (inherited from the Active Directory object)
  • Title – the title that this on-premises mailbox has (inherited from the Active Directory object)
  • SendOnBehalf_GivenTo – email address of the mailbox that has been given SendOnBehalf permissions to this on-premises mailbox
  • SendOnBehalf_GivenOn – email address of the mailbox that this on-premises mailbox has been given SendOnBehalf permissions to
  • SendAs_GivenTo – email address of the mailbox that has been given SendAs permissions to this on-premises mailbox
  • SendAs_GivenOn – email address of the mailbox that this on-premises mailbox has been given SendAs permissions on
  • MailboxFolderDelegate_GivenTo – email address of the mailbox that has been given Delegate access to this on-premises mailbox
  • MailboxFolderDelegate_GivenTo_FolderLocation – the folders of the on-premises mailbox that the delegate access has been given to
  • MailboxFolderDelegate_GivenTo_DelegateAccess – the type of delegate access that has been given on this on-premises mailbox
  • MailboxFolderDelegate_GivenOn – email address of the mailbox that this on-premises mailbox has been given Delegate Access to
  • MailboxFolderDelegate_GivenOn_FolderLocation – the folders that this on-premises mailbox has been given delegate access to
  • MailboxFolderDelegate_GivenOn_DelegateAccess – the type of delegate access that this on-premises mailbox has been given
  • MailboxAccess_GivenTo – email address of the mailbox that has been given Mailbox Access to this on-premises mailbox
  • MailboxAccess_GivenTo_DelegateAccess – the type of Mailbox Access that has been given on this on-premises mailbox
  • MailboxAccess_GivenOn – email address of the mailbox that this mailbox has been given Mailbox Access to
  • MailboxAccess_GivenOn_DelegateAccess – the type of Mailbox Access that this on-premises mailbox has been given
  • OrganizationalUnit – the Organizational Unit for the on-premises mailbox

The color codes in the column Migration Readiness correspond to the following

  • LightBlue – this on-premises mailbox has no permission dependencies and can be migrated
  • DarkGreen – this on-premises mailbox has a Mailbox Access permission dependency on another mailbox. It can be migrated while the other mailbox remains on-premises without experiencing any issues, as Mailbox Access permissions are supported cross-premises.
  • LightGreen – this on-premises mailbox can be migrated without issues, as the permission dependency is on a mailbox that is already in Office 365
  • Orange – this on-premises mailbox has SendAs permissions given to/or on another on-premises mailbox. If both mailboxes are not migrated at the same time, the SendAs capability will be broken. Lately, it has been noticed that this capability can be restored by re-applying the SendAs permissions to both the migrated and on-premises mailbox post-migration.
  • Pink – the on-premises mailbox has FolderDelegate given to/or on another on-premises mailbox. If both mailboxes are not migrated at the same time, the FolderDelegate capability will be broken. A possible workaround is to replace the FolderDelegate permission with Full Mailbox Access, as this works cross-premises; however, there are privacy concerns with this workaround, as it will enable the delegate to see all the contents of the mailbox instead of just the folders they had been given access to.
  • Red – the on-premises mailbox has SendOnBehalf permissions given to/or on another on-premises mailbox. If both mailboxes are not migrated at the same time, the SendOnBehalf capability will be broken. A possible workaround could be to replace SendOnBehalf with SendAs, however the possible implications of this change must be investigated.

Yay, the output has now been generated. All we need to do now is to make it look pretty in Excel 🙂

Carry out the following steps

  • Import the output csv file into Excel, using the semi-colon “;” as the delimiter (I couldn’t use commas as the delimiter, as department, title etc. fields sometimes contain them and this causes issues with the output file)
  • Create Conditional Formatting rules for the column Migration Readiness so that the fill color of this cell corresponds to the word in this column (for instance, if the word is LightBlue then create a rule to apply a light blue fill to the cell)

That’s it, folks! The mailbox permissions dependency spreadsheet is now ready. It provides a single-pane view of all the permissions across your on-premises mailboxes and gives a color-coded analysis of which mailboxes can be migrated on their own without any issues, and which might experience issues if they are not migrated in the same batch as the ones they have permission dependencies on.

In the output file, each line represents a permission dependency for an on-premises mailbox (unless the column PermTo_OtherMbx_Or_FromOtherMbx? is N). If more than one set of permissions applies to an on-premises mailbox, these are displayed consecutively underneath each other.

It is imperative that the migration readiness of the mailbox be evaluated based on the migration readiness of all the permissions associated with that mailbox.

Find_MailboxPermissions_Dependencies.ps1 can be downloaded from GitHub.

A sample of the spreadsheet that was created using the output from the Find_MailboxPermissions_Dependencies.ps1 script can be downloaded from https://github.com/nivleshc/arm/blob/master/Sample%20Output_MailboxPermissions%20Dependencies.xlsx

I hope this script comes in handy when you are planning your migration batches and helps alleviate some of the headache that this task brings with it.

Till the next time, have a great day 😉

Restoring deleted OneDrive sites in Office365

A customer asked whether it was possible to restore a OneDrive site that had been deleted when the user’s account was marked for deletion in AD. After a bit of research, I was able to restore the site and retrieve the files (luckily it had been deleted less than 30 days earlier).


How to configure a Graphical PowerShell Dev/Admin/Support User Interface for Azure/Office365/Microsoft Identity Manager

During the development of an identity management solution I find myself with multiple PowerShell/RDP sessions connected to multiple environments using different credentials, often to obtain trivial data/information. It is easy to trip yourself up with remote PowerShell sessions to differing environments. If only there were a simple UI that could front-end a set of PowerShell modules and make those simple queries quick and painless, and likewise allow support staff to execute a canned set of queries without being granted elevated permissions.

I figured someone would have already solved this problem, and after some searching with the right keywords I found the powershell-command-executor-ui from bitsofinfo. Looking into it, he had solved a lot of the issues with building a UI front-end for PowerShell with the powershell-command-executor and the stateful-process-command-proxy. That solution provided the framework for what I was thinking. The ability to provide a UI for PowerShell using PowerShell modules, including remote PowerShell, was exactly what I was after. And it was built on NodeJS and AngularJS, so simple enough for some customization.

Introduction

In this blog post I’ll detail how I’ve leveraged the projects listed above for integration with Azure, Office 365 and Microsoft Identity Manager.

Initially I had a vision of serving up the UI from an Azure WebApp. NodeJS on Azure WebApps is supported, however with all the solution dependencies I just couldn’t get it working.

My fallback was then to look to serve up the UI from a Windows Server 2016 Nano Server. I learnt from my efforts that a number of the PowerShell modules I was looking to provide a UI for have .NET Framework dependencies. Nano Server does not have full .NET Framework support; Microsoft state that to add it would mean the server would no longer be Nano.

For now I’ve deployed an Azure Windows Server 2016 Server secured by an Azure NSG to only allow my machine to access it. More on security later.

Overview

Simply put, the details in GitHub for the powershell-command-executor provide the architecture and integration. What I will detail is the modifications I’ve made to utilize the more recent AzureADPreview PowerShell Module over the MSOL PowerShell Module. I also updated the dependencies of the solution to the latest versions and hooked it into Microsoft Identity Manager, and made a few changes to allow different credentials to be used for Azure and Microsoft Identity Manager.

Getting Started

I highly recommend you start with your implementation on a local development workstation/development virtual machine. When you have a working version you’re happy with you can then look at other ways of presenting and securing it.

NodeJS

NodeJS is the webserver for this solution. Download NodeJS for your Windows host here. I’m using the 64-bit version, but have also implemented the solution on 32-bit. Install NodeJS on your local development workstation/development virtual machine.

You can accept all the defaults.

Following the installation of NodeJS, download the powershell-command-executor-ui from GitHub. Select Clone or Download, Download ZIP and save it to your machine.

Right click the download when it has finished and select Extract All. Select Browse and create a folder at the root of C:\ named nodejs. Extract powershell-command-executor-ui.

Locate the c:\nodejs\powershell-command-executor-ui-master\package.json file.

Using an editor such as Notepad++ update the package.json file ……

…… so that it looks like the following. This will utilise the latest versions of the dependencies for the solution.

From an elevated (Administrator) command prompt in the c:\nodejs\powershell-command-executor-ui-master directory run “c:\Program Files\nodejs\npm” install. This will read the package.json file you edited and download the dependencies for the solution.

You can see in the screenshot below NodeJS has downloaded all the items in package.json including the powershell-command-executor and stateful-process-command-proxy.

When you now list the directories under C:\nodejs\powershell-command-executor-ui-master\node_modules you will see those packages and all their dependencies.

We can now test that we have a working PowerShell UI NodeJS website. From an elevated command prompt whilst still in the c:\nodejs\powershell-command-executor-ui-master directory run “c:\Program Files\nodejs\node.exe” bin\www

Open a browser on the same host and go to http://localhost:3000. You should see the default UI.

Configuration and Customization

Now it is time to configure and customize the PowerShell UI for our needs.

The files we are going to edit are:

  • C:\nodejs\powershell-command-executor-ui-master\routes\index.js
    • Update Paths to the encrypted credentials files used to connect to Azure, MIM. We’ll create the encrypted credentials files soon.
  • C:\nodejs\powershell-command-executor-ui-master\public\console.html
    • Update for your customizations for CSS etc.
  • C:\nodejs\powershell-command-executor-ui-master\node_modules\powershell-command-executor\O365Utils.js
    • Update for PowerShell Modules to Import
    • Update for Commands to make available in the UI

We also need to get a couple of PowerShell Modules installed on the host so they are available to the site. The two I’m using I’ve mentioned earlier. With WMF5 installed, using PowerShell we can simply install them as per the commands below.

Install-Module AzureADPreview
Install-Module LithnetRMA

In order to connect to our Microsoft Identity Manager Synchronization Server, we need to enable Remote PowerShell on that server. This post I wrote here details all the setup tasks to make that work. Test that you can connect via RPS to your MIM Sync Server before updating the scripts below.

Likewise for the Microsoft Identity Manager Service Server. Make sure after installing the LithnetRMA Powershell Module you can connect to the MIM Service using something similar to:

# Import LithnetRMA PS Module
import-module lithnetrma

# MIM AD User Admin
$username = "mimadmin@mim.mydomain.com"
# Password 
$password = "Secr3tSq1rr3l!" | convertto-securestring -AsPlainText -Force
# PS Creds
$credentials = New-Object System.Management.Automation.PSCredential $Username,$password

# Connect to the FIM service instance
# Will require an inbound rule for TCP 5725 (or your MIM Service Server port) in your Resource Group Network Security Group config
Set-ResourceManagementClient -BaseAddress http://mymimportalserver:5725 -Credentials $credentials

\routes\index.js

This file details the encrypted credentials the site uses. You will need to generate the encrypted credentials for your environment. You can do this using the powershell-credentials-encryption-tools. Download that script to your workstation and unzip it. Open the credentialEncryptor.ps1 script using an Administrator PowerShell ISE session.

I’ve changed the index.js to accept two sets of credentials. This is because your Azure Admin Credentials are going to be different from your MIM Administrator Credentials (both in name and password). The username for my Azure account looks something like myname@mycompany.com whereas for MIM it is Domainname\Username.

Provide an account name for your Azure environment and the associated password.

The tool will create the encrypted credential files.

Rename the encrypted.credentials file to whatever makes sense for your environment. I’ve renamed it creds1.encrypted.credentials.

Now we re-run the script to create another set of encrypted credentials. This time for Microsoft Identity Manager. Once created, rename the encrypted.credentials file to something that makes sense in your environment. I’ve renamed the second set to creds2.encrypted.credentials.

We now need to copy the following files to your UI Website C:\nodejs\powershell-command-executor-ui-master directory:

  • creds1.encrypted.credentials
  • creds2.encrypted.credentials
  • decryptUtil.ps1
  • secret.key

Navigate back to routes\index.js and open the file in an editor such as Notepad++.

Update the index.js file with the paths to your credentials files. We also need to add in the additional credentials file.

The changes to the file are the paths to the files we just copied above, along with the additional var PATH_TO_ENCRYPTED_RPSCREDENTIALS_FILE for the second set of credentials used for Microsoft Identity Manager.

var PATH_TO_DECRYPT_UTILS_SCRIPT = "C:\\nodejs\\powershell-command-executor-ui-master\\decryptUtil.ps1";
var PATH_TO_ENCRYPTED_CREDENTIALS_FILE = "C:\\nodejs\\powershell-command-executor-ui-master\\creds1.encrypted.credentials";
var PATH_TO_ENCRYPTED_RPSCREDENTIALS_FILE = "C:\\nodejs\\powershell-command-executor-ui-master\\creds2.encrypted.credentials";
var PATH_TO_SECRET_KEY = "C:\\nodejs\\powershell-command-executor-ui-master\\secret.key";


Also update initCommands to pass through the additional credentials file:


initCommands: o365Utils.getO365PSInitCommands(
 PATH_TO_DECRYPT_UTILS_SCRIPT,
 PATH_TO_ENCRYPTED_CREDENTIALS_FILE,
 PATH_TO_ENCRYPTED_RPSCREDENTIALS_FILE,
 PATH_TO_SECRET_KEY,
 10000,30000,3600000),

Here is the full index.js file for reference.

public/console.html

The public/console.html file is for formatting and associated UI components. The key things I’ve updated are the Bootstrap and AngularJS versions. Those are contained in the top of the html document. A summary is below.

<script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.6.1/angular.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.6.1/angular-resource.min.js"></script>
<script src="javascripts/ui-bootstrap-tpls-2.4.0.min.js"></script>
<script src="javascripts/console.js"></script>
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css">
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap-theme.min.css">

You will also need to download the updated Bootstrap UI (ui-bootstrap-tpls-2.4.0.min.js). I’m using v2.4.0 which you can download from here. Copy it to the javascripts directory.

I’ve also updated the table types, buttons, colours, header, logo etc in the appropriate locations (CSS, Tables, Div’s etc). Here is my full file for reference. You’ll need to update for your colours, branding etc.

powershell-command-executor\O365Utils.js

Finally the O365Utils.js file. This contains the commands that will be displayed along with their options, as well as the connection information for your Microsoft Identity Manager environment.

You will need to change:

  • Line 52 for the address of your MIM Sync Server
  • Line 55 for the addresses of your MIM Service Server
  • Line 141 on-wards for what commands and parameters for those commands you want to make available in the UI

Here is an example with a couple of AzureAD commands, a MIM Sync and a MIM Service command.

Show me my PowerShell UI Website

Now that we have everything configured, let’s start the site and browse to it. If you haven’t stopped the NodeJS site from earlier, go to the command window and press Ctrl+C a couple of times. Run “c:\Program Files\nodejs\node.exe” bin\www again from the C:\nodejs\powershell-command-executor-ui-master directory, unless you have restarted the host and now have NodeJS in your environment path.

In a browser on the same host go to http://localhost:3000 again and you should see the site as it is below.

Branding and styling come from console.html, menu options from o365Utils.js, and when you select a command and execute it, data from the associated service …

… you can see results. In the screenshot below, a Get-AzureADUser command for the associated search string executed in milliseconds.

Summary

The powershell-command-executor-ui from bitsofinfo is a very extensible and powerful NodeJS website as a front-end to PowerShell.

With a few tweaks and updates, the look and feel can be easily changed, along with the addition of any PowerShell commands that you wish to have a UI for.

As it sits though, keep in mind you have a UI with hard-coded credentials that can run whatever commands you expose.

Personally, I am running one for my use only, hosted in Azure in its own Resource Group with an NSG allowing outgoing traffic to Azure and my MIM environment. Incoming traffic is only allowed from my personal management workstation’s IP address. I also needed to allow port 3000 into the server on the NSG, as well as on the firewall on the host. I did that quickly using the command below.

# Enable the WebPort NodeJS is using on the firewall 
netsh advfirewall firewall add rule name="NodeJS WebPort 3000" dir=in action=allow protocol=TCP localport=3000

Follow Darren on Twitter @darrenjrobinson

Azure AD Connect – Using AuthoritativeNull in a Sync Rule

There is a feature in Azure AD Connect that became available in the November 2015 build 1.0.9125.0 (listed here), which has not had much fanfare but can certainly come in handy in tricky situations. I happened to be working on a project that required the DNS domain linked to an old Office 365 tenant to be removed so that it could be used in a new tenant. Although the old tenant was no longer used for Exchange Online services, it held onto the domain in question, and Azure AD Connect was being used to synchronise objects between the on-premises Active Directory and Azure Active Directory.

Trying to remove the domain using the Office 365 Portal will reveal whether there are any items that need to be remediated prior to removing the domain from the tenant, and for this customer it showed that there were many user and group objects that still had the domain used as the userPrincipalName value, and in the mail and proxyAddresses attribute values. The AuthoritativeNull literal can be used in this situation to blank out these values on users and groups (i.e. Distribution Lists) so that the domain can be released. I’ll attempt to show the steps in a test environment, so bear with me as this is a lengthy blog.

Trying to remove the domain minnelli.net listed the items needing attention, as shown in the following screenshots:

This report showed that three actions are required to remove the domain values out of the attributes:

  • userPrincipalName
  • proxyAddresses
  • mail

userPrincipalName is simple to resolve by changing the value in the on-premises Active Directory to a different domain suffix, then synchronising the changes to Azure Active Directory so that the default onmicrosoft.com or another accepted domain is set (a scripted sketch is below).
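
As a minimal sketch (assuming the ActiveDirectory module; the replacement suffix is a placeholder for whichever accepted domain you are moving users to), the UPN suffix can be bulk-replaced like this:

# Re-suffix the UPN for every user still on the domain being removed
Get-ADUser -Filter 'UserPrincipalName -like "*@minnelli.net"' | ForEach-Object {
    $newUpn = $_.UserPrincipalName -replace '@minnelli\.net$', '@yourtenant.onmicrosoft.com'
    Set-ADUser -Identity $_ -UserPrincipalName $newUpn
}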

Clearing the proxyAddresses and mail attribute values is possible using the AuthoritativeNull literal in Azure AD Connect. NOTE: You will need to assess the outcome of performing these steps depending on your scenario. For my customer, we were able to perform these steps without affecting other services required from the old Office 365 tenant.

Using the Synchronization Rules Editor, locate and edit the In from AD – User Common rule. Editing the out-of-the-box rules will display a message suggesting you create an editable copy of the rule and disable the original, which is highly recommended, so click Yes.

The rule is cloned as shown below and we need to be mindful of the Precedence value which we will get to shortly.

Select Transformations and edit the proxyAddresses attribute, set the FlowType to Expression, and set the Source to AuthoritativeNull.

I recommend setting the Precedence value in the new cloned rule to be the same as the original rule, in this case value 104. First, edit the original rule to a value such as 1001; you will also notice the original rule is already set to Disabled.

Set the cloned rule Precedence value to 104.

Prior to performing a Full Synchronization run profile to process the new logic, I prefer and recommend performing a Preview by selecting an affected user and previewing a Full Synchronization change. As can be seen below, the proxyAddresses value will be deleted.

The same process would need to be done for the mail attribute.

Once the rules are set, launch the following PowerShell command to perform a Full Import/Full Synchronization cycle in Azure AD Connect:

  • Start-ADSyncSyncCycle -PolicyType Initial

Once the cycle is completed, attempt to remove the domain again to check if any other items need remediation, or you might see a successful domain removal. I’ve seen it take up to 30 minutes or so before being able to remove the domain once all remediation tasks have been completed.

There will be other scenarios where using the AuthoritativeNull literal in a Sync Rule will come in handy. What others can you think of? Leave a description in the comments.

Complex Mail Routing in Exchange Online Staged Migration Scenario

Notes From the Field:

I was recently asked to assist an ongoing project with understanding some complex mail routing and identity scenarios which had been identified during planning for an upcoming mail migration from an external system into Exchange Online.

New user accounts were created in Active Directory for the external staff who are about to be migrated. If we were to assign the target-state production email attributes now and create the Exchange Online mailboxes, we would have a problem nearing migration.

When the new domain is verified in Office365 & Exchange Online, new mail from staff already in Exchange Online would start delivering to the newly created mailboxes for the staff soon to be onboarded.

Not doing this, however, would delay the project, which is something we didn’t want either.

I have proposed the following in order to create a scenario whereby cutover to Exchange Online for the new domain is quick, and does not cause user downtime during the co-existence period. We are creating some “co-existence” state attributes on the on-premises AD user objects that will allow mail flow to continue in all scenarios up until cutover (I will come back to this later).

[Diagram: generic Exchange Online migration process flow]

We have configured the AD user objects in the following way (a sketch of stamping these attributes via PowerShell follows the configuration lists below)

  1. UserPrincipalName – username@localdomainname.local
  2. mail – username@mydomain.onmicrosoft.com
  3. targetaddress – username@mydomain.com

We have configured the remote mailbox objects in the following way

  1. mail – username@mydomain.onmicrosoft.com
  2. targetaddress – username@mydomain.com

We have configured the on-premises Exchange Accepted domain in the following way

  1. Accepted Domain – External Relay

We have configured the Exchange Online Accepted domain in the following way

  1. Accepted Domain – Internal Relay
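
As a minimal sketch of stamping the co-existence attributes (the ActiveDirectory module, the csv file name and its SamAccountName/Alias columns are all assumptions for illustration):

# Stamp the co-existence mail and targetAddress values from a csv import file
Import-Csv .\coexistence_users.csv | ForEach-Object {
    Set-ADUser -Identity $_.SamAccountName `
        -EmailAddress "$($_.Alias)@mydomain.onmicrosoft.com" `
        -Replace @{targetAddress = "SMTP:$($_.Alias)@mydomain.com"}
}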

How does this all work?

Glad you asked! As I alluded to earlier, the main problem here is with staff who already have mailboxes in Exchange Online. By configuring the objects in this way, we achieve several things:

  1. We can verify the new domains successfully in Office365 without impacting existing or new users. By setting the UPN & mail attributes to @mydomain.onmicrosoft.com, Office365 & Exchange Online do not (yet) reference the newly onboarded domain to these mailboxes.
  2. By configuring the accepted domains in this way, we are doing the following:
    1. When an email is sent from Exchange Online to an email address at the new domain, Exchange Online will route the message via the hybrid connector to the Exchange on-premises environment. (the new mailbox has an email address @mydomain.onmicrosoft.com)
    2. When the on-premises environment receives the email, Exchange will look at both the remote mailbox object & the accepted domain configuration.
      1. The target address on the mail is configured @mydomain.com
      2. The accepted domain is configured as external relay
      3. Because of this, the on-premises exchange environment will forward the message externally.

Why is this good?

Again, for a few reasons!

We are now able to pre-stage content from the existing external email environment to Exchange Online by using a target address of @mydomain.onmicrosoft.com. The project is no longer at risk of being delayed! 🙂

On the night of cutover of MX records to Exchange Online (or in this case, a 3rd-party email hygiene provider), we are able to use the same PowerShell code that we used in the beginning to configure the new user objects, this time to modify the user accounts for production use (we are using a different csv import file to achieve this).

Target State Objects

We have configured the AD user objects in the following way

  1. UserPrincipalName – username@mydomain.com
  2. mail – username@mydomain.com
  3. targetaddress – username@mydomain.mail.onmicrosoft.com

We have configured the remote mailbox objects in the following way

  1. mail
    1. username@mydomain.com (primary)
    2. username@mydomain.onmicrosoft.com
  2. targetaddress – username@mydomain.mail.onmicrosoft.com

We have configured the on-premises Exchange Accepted domain in the following way

  1. Accepted Domain – Authoritative

We have configured the Exchange Online Accepted domain in the following way

  1. Accepted Domain – Internal Relay

NOTE: AAD Connect sync is now run and a manual validation completed against both on-premises AD & Exchange, as well as Azure AD & Exchange Online, to confirm that the user updates have been successful (a quick validation sketch is below).
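
As a minimal sketch of that validation (the sync trigger runs on the AAD Connect server, the mailbox check in an Exchange Online PowerShell session; the identity is a placeholder):

# On the AAD Connect server, trigger a delta sync
Start-ADSyncSyncCycle -PolicyType Delta
# In Exchange Online, spot-check that a migrated user now carries the production addresses
Get-Mailbox -Identity username@mydomain.com | Format-List PrimarySmtpAddress, EmailAddresses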

We can now update DNS MX records to our 3rd party email hygiene provider (or this could be Exchange Online Protection if you don’t have one).

A final synchronisation of mail from the original email system is completed once new mail is being delivered to Exchange Online.

How to assign and remove user Office365 licenses using the AzureADPreview PowerShell Module

A couple of months ago the AzureADPreview module was released. The first cmdlet that I experimented with was Set-AzureADUserLicense. And it didn’t work; there were no working examples, so I gave up and used the Graph API instead.

Since then the AzureADPreview module has gone through a number of revisions, and I’ve been messing around a little with each update. The Set-AzureADUserLicense cmdlet has been my litmus test. Now that I have both removing and assigning Office 365 licenses working, I’ll save others the pain of working it out and give a couple of working examples.

If, like me, you have been experimenting with the AzureADPreview module, you’ll need to force the install of the newest one. And for whatever reason I was getting an error informing me that it wasn’t signed. As I’m messing around in my dev sandpit, I skipped the publisher check.
Install-Module -Name AzureADPreview -MinimumVersion 2.0.0.7 -Force -SkipPublisherCheck
Import-Module AzureADPreview -RequiredVersion 2.0.0.7

Removing an Office 365 License from a User

Removing a license with Set-AzureADUserLicense looks something like this.
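
A minimal sketch of the pattern (the UPN and the ENTERPRISEPACK SKU part number are placeholder assumptions):

# Resolve the user and the SkuId of the license to remove
$user = Get-AzureADUser -ObjectId "user@mydomain.com"
$skuId = (Get-AzureADSubscribedSku | Where-Object { $_.SkuPartNumber -eq "ENTERPRISEPACK" }).SkuId
# Build an AssignedLicenses object that adds nothing and removes that SKU
$licenses = New-Object -TypeName Microsoft.Open.AzureAD.Model.AssignedLicenses
$licenses.AddLicenses = @()
$licenses.RemoveLicenses = $skuId
Set-AzureADUserLicense -ObjectId $user.ObjectId -AssignedLicenses $licenses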

What if there are multiple licenses? Similar concept, just looping through each one to remove.
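
A sketch of that loop, using Get-AzureADUserLicenseDetail to enumerate whatever is currently assigned (the UPN is again a placeholder):

# Remove every license currently assigned to the user
$user = Get-AzureADUser -ObjectId "user@mydomain.com"
foreach ($assigned in (Get-AzureADUserLicenseDetail -ObjectId $user.ObjectId)) {
    $licenses = New-Object -TypeName Microsoft.Open.AzureAD.Model.AssignedLicenses
    $licenses.AddLicenses = @()
    $licenses.RemoveLicenses = $assigned.SkuId
    Set-AzureADUserLicense -ObjectId $user.ObjectId -AssignedLicenses $licenses
}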

Assigning an Office 365 License to a User

Now that we have the removal of licenses sorted, how about adding licenses?

Assigning a license with Set-AzureADUserLicense looks something like this:
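
Again as a minimal sketch, with the same placeholder UPN and SKU assumptions as above:

# Resolve the user and the SKU to assign
$user = Get-AzureADUser -ObjectId "user@mydomain.com"
$sku = Get-AzureADSubscribedSku | Where-Object { $_.SkuPartNumber -eq "ENTERPRISEPACK" }
# Build the AssignedLicense and wrap it in an AssignedLicenses object
$license = New-Object -TypeName Microsoft.Open.AzureAD.Model.AssignedLicense
$license.SkuId = $sku.SkuId
$licenses = New-Object -TypeName Microsoft.Open.AzureAD.Model.AssignedLicenses
$licenses.AddLicenses = $license
Set-AzureADUserLicense -ObjectId $user.ObjectId -AssignedLicenses $licenses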

Moving forward, this AzureAD PowerShell Module will replace the older MSOL Module, as I wrote about here. If you’re writing new scripts it’s a good time to start using the new modules.

Follow Darren on Twitter @darrenjrobinson