Office 365 Lessons Learned – Advice on how to approach your deployment

Hindsight sometimes gets the better of us. That poor stormtrooper. If only he'd had this advice before Obi-Wan Kenobi showed up. Clearly the Galactic Empire doesn't learn lessons. Perhaps consultants use the Jedi mind trick to persuade their customers?

I've recently completed an 18-month project that rolled out Office 365 to an ASX Top 30 company. The project successfully rolled out Office 365 to each division & team across the company, migrating content from SharePoint 2007 and file shares. It met its success criteria, came in on budget & showed positive adoption metrics.

Every project is unique and has lessons to teach, though we rarely take the time to reflect. This post is part of that learning process: a brief list of lessons we learned along the way. It adds credibility when a consultancy can honestly say – yes, we've done this before, here's what we learned & here's how it applies to your project. I believe Professional Services companies lose value when they don't reflect & learn, including on their successful projects.

Don’t try to do too much

We often get caught up in the capabilities of a platform and want to deploy them all. A platform like Office 365 has an endless list of capabilities: document management, forms, workflow, Planner, Teams, a new intranet, records management, automated governance…the list goes on. Check out my recent post on how you can use the power of limitation to create value. When we focus purely on capabilities, we can easily lose sight of how overwhelming they can be to the people who use them. After all, the more you try to do, the more you'll spend on consulting services.

Choose your capabilities, be clear on your scope, communicate your plans and road map future capability.

Design for day 1 and anticipate future changes

People in the business are not technology experts. They are experts in their relevant fields, e.g. finance, HR, risk, compliance etc. You can't expect them to know & understand all the concepts in Office 365 without using it. Once people start using Office 365, they start to understand – and then comes the Aha! moment. That's when the change requests start flowing.

Design for day 1 and anticipate future changes. Resource your project so post go-live activities include design adjustments and enhancements. Ensure these activities don't pull key resources away from the next team you are migrating.

Respect business priorities

It's easy to forget that people have a day job; they can't be available all the time for your project. This is especially true when important business processes like budgeting, results and reporting are under way. You can't expect to migrate, or even plan to migrate, during these periods – it just won't fly. If you are migrating remote teams, be mindful of events or processes that only affect them. There might be an activity which requires 100% of their time.

When planning & scheduling, be mindful of these priorities. Don't assume people can carry the extra workload you are creating for them. Work with senior stakeholders to identify the right times to engage and the times to avoid.

Focus on the basics

Legacy systems that have been in place for years mean people are very comfortable with how to use them. Office 365 often has multiple ways of doing the same thing – take sharing, for example: there are 20+ ways to share the same content, through the browser, the Office Pro Plus client, Outlook, the OneDrive client, the Office portal etc.

Too much choice is bad. Pick, train & communicate one way to do something. Once people become comfortable, they’ll find out the other ways themselves.

The lines between “Project” & “BAU” are blurred

New features & capabilities are constantly announced. We planned to deliver a modern intranet, which we initially budgeted as a 3-month custom development cycle. When it came time to start this piece of work, Microsoft had just announced Communication sites. Whilst the customer was nervous about adopting this brand-new feature, it worked out to be a good choice. The intranet now grows and morphs with the platform. Lots of new features have been announced; most recently we have megamenus, header & footer customisation plus much more. This was great during the project, but what happens when the project finishes? Who can help make sense of these new features?

Traditional plan-build-run models aren't effective for a platform that continuously evolves outside of your control. The model shifts the focus to reactive incident management rather than value creation & evolution. To unlock value, you need to build a capability that can translate new features into opportunities & pain-point fixes within business teams. This helps deepen the IT/business relationship & create value, not to mention help with adoption.

What have you recently learned? Leave a comment or drop me an email!


Office365-AzureHybrid: Building an automated solution to pull Office 365 Audit logs

Custom reporting on Office 365 Audit logs is possible using data fetched from the Security and Compliance center. In previous blogs here, we have seen how to use PowerShell and the Office 365 Management API to fetch the data. In this blog, we will look at planning, prerequisites and the rationale to help decide between the approaches.

The Office 365 Audit logs are available from the Security and Compliance center once enabled. At present, audit logging is not enabled by default; it can be turned on (if not done already) via Start recording user and admin activity on the Audit log search page in the Security & Compliance Center. Microsoft has indicated it will be turned on by default in future. Once enabled, audit information across all Office 365 services is tracked.

The Audit log search in the Security and Compliance center lets you search the audit logs, but it is limited in what it returns and can take a long time to produce results. All of the scenarios below need custom hosting to provide better efficiency and performance.

Planning and prerequisites:

A few considerations for custom processes are as follows:

  1. Need additional compute to process the data – Create a periodic job to fetch the data from the Office 365 Audit log using a custom process, since the audit log data is huge and queries take a long time. The options are a PowerShell job or an Azure Function App, as detailed below.
  2. Need additional hosting to store the Office 365 Audit log data – The records could range from 5000 to 20000 per hour depending on the data sources and data size. To make it easier to retrieve the data later, store it in a custom database. Since the data cost could be significant, use either dedicated hosting or NoSQL hosting such as Azure Tables/CosmosDB (Azure) or SimpleDB/DynamoDB (AWS).
  3. Might need an additional service account or Azure AD app – The data will be retrieved using an elevated account at runtime, so use an Azure AD app or service account to gather the data. For more information, please refer to the blog here.
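To make point 2 concrete, here is a minimal sketch of the storage side, using Python's sqlite3 as a local stand-in for Azure Tables or CosmosDB; the schema and field names are my own illustration, not a prescribed format:

```python
import json
import sqlite3

# Local stand-in for the custom audit-log store (Azure Tables / CosmosDB
# would be used in production); schema and field names are illustrative.
def store_records(conn, records):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS auditlog "
        "(id TEXT PRIMARY KEY, workload TEXT, created TEXT, raw TEXT)")
    # INSERT OR IGNORE de-duplicates records seen in overlapping pulls
    conn.executemany(
        "INSERT OR IGNORE INTO auditlog VALUES (?, ?, ?, ?)",
        [(r["Id"], r.get("Workload", ""), r.get("CreationTime", ""),
          json.dumps(r)) for r in records])
    conn.commit()

conn = sqlite3.connect(":memory:")
records = [
    {"Id": "a1", "Workload": "SharePoint", "CreationTime": "2019-01-01T10:00:00"},
    {"Id": "a2", "Workload": "Exchange", "CreationTime": "2019-01-01T10:05:00"},
]
store_records(conn, records)
store_records(conn, records)  # a second overlapping pull adds nothing new
print(conn.execute("SELECT COUNT(*) FROM auditlog").fetchone()[0])  # 2
```

Keying on the record Id keeps repeated or overlapping pulls idempotent, which matters when jobs are re-run over the same time window.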


Some of the scenarios where the Office 365 Audit log data could be useful:

  1. Creating custom reports of user activities and actions
  2. Storing audit log data for longer than 90 days
  3. Custom data reporting and alerts which are not supported in the Security and Compliance center


Below are a few approaches to pull the data from the Office 365 Audit logs, along with the benefits and limitations of each to help decide on an implementation.

Using PowerShell

The Search-UnifiedAuditLog cmdlet in Exchange Online PowerShell can be used to retrieve data from the Office 365 Audit log. More implementation details can be found in the blog here.


Benefits:

  1. Doesn't need additional compute hosting. The PowerShell job could be run on a local system with a service account, or on a server.
  2. A one-off data pull is possible, and the data can be retrieved later.
  3. Able to retrieve more than 90 days of data from the Office 365 Audit log.
  4. No session time-out constraints, as long as the PowerShell console stays active.
  5. Local date filtering applies while searching – no need to convert to GMT formats.


Limitations:

  1. Needs tenant admin rights when connecting to Exchange Online PowerShell to download the cmdlets.
  2. Needs an active connection to Exchange Online PowerShell every time it runs.
  3. Cannot currently run on Azure or AWS, as connecting to the Exchange Online PowerShell cmdlets is not possible in a serverless environment.
  4. Needs a long active window, as the job could run for hours depending on the data.

Using the Office 365 Management API

The Office 365 Management API provides another way to retrieve data from the audit logs, using a subscription service and an Azure AD app. For more detailed information, please check the blog here.


Benefits:

  1. Supports any language, such as C#, JavaScript, Python etc.
  2. Parallel processing allows greater speed and flexibility in data management.
  3. Data pulls can be sized to the data volume, increasing efficiency and performance.

Limitations:

  1. Needs additional compute hosting (serverless workloads or web jobs) to process the data.
  2. Needs an Azure AD app or OAuth layer to connect to the subscription service.
  3. Needs additional time-zone processing, since all dates are in GMT when retrieving data.
  4. Session timeouts might occur when pulling large datasets, so it is advisable to use smaller time-slot windows for the data pull.
  5. A multi-level data pull is required to fetch the audit log. Please check the blog here for more information.
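The timeout limitation is usually handled by slicing the query range into small windows, so that no single pull runs long enough to time out. A minimal sketch in Python (the 30-minute window size is an assumption to tune per tenant):

```python
from datetime import datetime, timedelta

# Split a UTC date range into fixed-size windows; each window becomes one
# request to the Management API, keeping individual pulls short.
def time_windows(start, end, minutes=30):
    windows, cursor = [], start
    while cursor < end:
        nxt = min(cursor + timedelta(minutes=minutes), end)
        windows.append((cursor, nxt))
        cursor = nxt
    return windows

wins = time_windows(datetime(2019, 1, 1, 0, 0), datetime(2019, 1, 1, 2, 0))
print(len(wins))  # 4
```

The final window is clamped to the end of the range, so odd-sized remainders are still covered.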

Final Thoughts

Both PowerShell and the Office 365 Management Activity API are great ways to fetch Office 365 Audit log data for custom reports. The points above can help decide on an approach to fetch and process the data efficiently. For more details on the steps, please check the blogs here (PowerShell) and here (Office 365 Management API).

Analogue Devices and Microsoft Teams

Last week, I was working through a technical workshop with a customer who wanted to make the move to Microsoft Teams. We’d worked through the usual questions, and then the infamous question came: So .. are there any analogue devices still in use? “Yeah, about 50 handsets”. You’d be forgiven for thinking that analogue handsets were a thing of the past. However, much like the fax machine, there’s still a whole lot of love out there for them. There are many reasons for this, but the ones often heard are:
  • A basic analogue handset fits the requirement – There’s no need for a fancy touch screen.
  • It’s a common area phone – hallways, lifts, stairwells, doors, gates etc
  • It’s a wireless DECT handset – this may include multiple handsets and base stations.
  • It’s something special – like a car park barrier phone or intercom system
  • It’s in a difficult to reach or remote location – such as a shed or building located away from the main office
  • There’s no power or ethernet cabling to this location – it’s simply using a copper pair.
Whatever the reason, in almost all cases I have encountered, the customer has a requirement to have a working phone at that location. This means we need to come up with a plan of how we’re going to handle these analogue devices once we’ve moved to Microsoft Teams. So, What’s the plan? Well, firstly check and confirm with the customer that they actually still need the handset at that location. There’s always a possibility that it’s no longer required. As mentioned above though, this seldom happens. Once you’ve confirmed the phone is still required, figure out if it can be replaced with a Microsoft Teams handset. Currently, there are a small number of Microsoft Teams handsets available from Yealink and AudioCodes:
  • Yealink T56A
  • Yealink T58A
  • AudioCodes C450HD
Some things to consider with this approach:
  • Availability of networking and PoE – These phones will require a network connection, and can be powered via PoE.
  • Is this a noisy environment? – If the old analogue device was connected to a separate external ringer like a bell or light, this will need to be replaced too.
What if I can't replace the handset with a Teams compatible device? There will be times when you simply can't replace an old analogue device with a Teams compatible handset. This could be as simple as there not being ethernet cabling at that location, or that the analogue device is built into something like a car park barrier or emergency lift phone. Most of the time, your customer is going to want to keep the same level of functionality on the device. The best news is, there are a number of ways to achieve this!

Options

You've got a few options here:

Option 1: Do… nothing

You've read that right. Do nothing. Your PABX is already configured to work with these devices. If you can leave the PABX in place, as well as the PSTN connectivity, these devices can remain connected to the PABX and happily continue to work as they always have. If you have this as an option, great! Most of us don't though.

Option 2: Deploy Microsoft Teams Direct Routing

Alright, so the PABX has to go. What now? Microsoft Teams Direct Routing is the answer. Direct Routing involves deploying a compatible session border controller (SBC) on premises, which allows you to connect up your analogue devices and convert them to SIP. Here's a simplified overview of how it works: with this approach, your analogue devices and Microsoft Teams users can call each other internally, and you get to keep your existing ISDN or SIP provider for PSTN calls. You can deploy this solution to many different sites within your organisation, and you can even route calls between SBCs so analogue devices at different sites can make internal calls to each other.

What if we've gone down the Microsoft Online-only path? If you're already making and receiving calls via Microsoft Phone System and Calling Plans in Office 365, you'll need to deploy Direct Routing at the locations where analogue devices still require connectivity.

I'm ready to delve into this. Awesome!
Microsoft have plenty of helpful documentation on Direct Routing. And as usual, if you have any questions, feel free to leave a comment below.

Create Office365 business value through the power of limitation

Recent consulting engagements have found me helping customers define what Office365 means to them & what value they see in its use. They already have the licenses and are seeking help to understand how to drive value from the investment.

You’ve heard the sales pitches: Office365 – The platform to solve ALL your needs! From meetings, to document management, working with people outside your organisation, social networking, custom applications, business process automation, forms & workflow, analytics, security & compliance, device management…the list goes on and is only getting bigger!

When I hear Office365 described – I often hear attributes religious people give to God.

  • It’s everywhere you go – Omnipresent
  • It knows everything you do – Omniscient
  • It’s so powerful it can do everything you want – Omnipotent
  • It’s unified despite having multiple components – Oneness
  • It can punish you for doing the wrong thing – Wrathful

It’s taking on a persona – how long before it becomes self-aware!?

If it can really meet ALL your needs, how do we define its use? Do we just use it for everything? Where do we start? How do we define what it means if it can do everything?

Enter limitation. Limitation is a powerful idea that brings clarity through constraint. It’s the foundation on which definition is built. Can you really define something that can do anything?

The other side would suggest limiting technology constrains thinking and prevents creativity. I don't agree. Limitation begets creativity. It focuses thinking and helps create practical, creative solutions with what you have. Moreover, having modern technology doesn't make you a creative & innovative organisation. It's about culture, people & process. As always, technology is a mere enabler.

What can’t we use Office365 for?

Sometimes it's easier to start here. Working with Architecture teams to draw boundaries around the system helps provide guidance for appropriate use. They have a good grasp of enterprise architecture and the reasons why things are the way they are. It helps clearly narrow use cases & provides a definition that makes sense to people.

  • We don't use it to share content externally because of…
  • We can't use it for customer-facing staff because of…
  • We don't use it for forms & workflow because we have <insert app name here>
  • We don't use it as a records management system because we have…

Office365 Use cases – The basis of meaning

Microsoft provide some great material on generic use cases: document collaboration, secure external sharing, workflow, managing content on-the-go, making meetings more efficient etc. These represent ideals and are sometimes too far removed from the reality of your organisation. Use them as a basis and develop them further with relevance to your business unit or organisation.

Group ideation workshops, discussions & brainstorming sessions are a great way to help draw out use cases. Make sure you have the right level of people, not too high & not too low. You can then follow up with each person, drill into the detail and see the value the use case provides.

Get some runs on the board

Once you’ve defined a few use cases, work with the business to start piloting. Prove the use case with real-life scenarios. Build the network of people around the use cases and start to draw out and refine how it solves pain, for here is where true value appears. This can be a good news story that can be told to other parts of the business to build excitement.

Plan for supply & demand

Once you have some runs on the board, if successful, word will quickly get out. People will want it. Learn to harness this excitement and build off the energy created. Be ready to deal with the sudden increase in demand.

On the supply side, plan for service management. How do people get support? Who supports it? How do we customise it? What's the back-out plan? How do we manage updates? All the typical ITIL components you'd expect should be planned for during your pilots.

Office365 Roadmap to remove limitations & create new use cases

Roadmaps are a meaningful way to communicate when value will be unlocked. IT should have a clear picture of what business value is and how it will work to unlock the capabilities the business needs in order to be successful.

Roadmaps do a good job of communicating this, though typically they are technology-focused. That might be a great way to help unify the IT team, but people on the receiving end won't quite understand. Communicate using their language, in ways they understand, i.e. what value it will provide them, when & how it will help them be successful.

Azure Automation, MS Flow and Hybrid Workers – SharePoint List upload with CSV output

In this blog I will discuss how to leverage SharePoint lists as a front end, using MS Flow to call webhooks on Azure Automation PowerShell scripts. These scripts execute via a Hybrid Worker to access on-premises resources. Results are zipped and uploaded back to the SharePoint list.


Prerequisites:

  • Azure Automation subscription and account
  • SharePoint Online site collection
  • On-premises resource (Windows 2016 server) configured as a Hybrid Worker
  • CredSSP enabled on the Hybrid Worker, as Azure launches scripts as the system account and some commands cannot use -Credential
  • Modules needed on the Hybrid Worker – from an elevated PowerShell run "Add-WindowsFeature RSAT-AD-PowerShell" and "Install-Module SharePointPnPPowerShellOnline"
  • In Azure, import the SharePointPnPPowerShellOnline module from the gallery

Create SharePoint List

Create a SharePoint list as below; this will be the input required for the script.

ServerPath = the server name, e.g. "rv16mimp"

AuditShare = the share path after the server name, e.g. "fileshare"

Within the script these combine to "\\rv16mimp\fileshare".

Adjust the SharePoint list from List Settings to include the ServerPath, AuditShare, ErrorMessage and Status columns.

Azure Automation Script and WebHook

Log in to the Azure Automation account and create a new PowerShell runbook.

This script takes the values input from the SharePoint list and uses the SharePointPnP module to update the list item to In Progress. The script executes on the Hybrid Worker, as the webhook is configured that way. It invokes a command to launch a local script using CredSSP, so that the script runs entirely as a local AD user stored as an Azure credential object. Any errors encountered in either the Azure script or the local script will appear in ErrorMessage. After the local script has completed, the Azure script gathers the zip file created and attaches it to the SharePoint list item.

Create a Webhook on an existing runbook

Create a webhook, making sure to select the Hybrid Worker. It is important to copy the webhook URL and store it safely, as you never get to see it again.

Create MS Flow

You can start the creation of a Flow from the list – click 'See your flows'.

From Flow, click the New drop-down button, then select "Create from blank".

Next, when you see the screen below, click "When an item is created" at the bottom right.

Enter the site address by selecting 'Enter custom value'.

Select the required list, then click New.

Filter the search for HTTP, then choose it.

Choose POST as the method and enter the webhook URL you saved from Azure earlier. In the headers we use ItemID to match the list item's ID from SharePoint. ServerPath and AuditShare are the input fields passed from the SharePoint list to the script parameters.
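To make the request shape concrete, here is a small Python sketch of what the Flow effectively sends; the webhook URL is a placeholder, and the header and body field names simply mirror the list columns described above (the request is built, not sent):

```python
import json

def build_webhook_request(webhook_url, item_id, server_path, audit_share):
    # ItemID header lets the runbook update the matching SharePoint list item
    headers = {"ItemID": str(item_id)}
    # Body carries the script parameters taken from the list columns
    body = json.dumps({"ServerPath": server_path, "AuditShare": audit_share})
    return webhook_url, headers, body

url, headers, body = build_webhook_request(
    "https://<region>.webhook.azure-automation.net/webhooks?token=REDACTED",
    42, "rv16mimp", "fileshare")
print(headers["ItemID"])  # 42
```

Passing the list item ID in a header keeps the body free for the script parameters the runbook consumes.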

Hybrid Worker Script

This script executes on the Hybrid Worker using the credentials passed from the Azure Automation script and stored as a credential object. The main tasks it performs are a small audit of a file share and a check of the members of the global groups it finds. Lastly it zips up the files, ready for upload back to SharePoint. I have used the $Date from the Azure script to identify the filename and avoid conflicts.
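The date-stamped zip naming can be sketched as below; this is an illustrative Python equivalent, since the actual script runs in PowerShell on the Hybrid Worker:

```python
import io
import zipfile
from datetime import datetime

# Zip the audit output into a date-stamped archive so repeated runs don't
# produce conflicting filenames (mirrors the $Date idea mentioned above).
def zip_results(files, stamp=None):
    stamp = stamp or datetime.now().strftime("%Y%m%d%H%M%S")
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, content in files.items():
            zf.writestr(name, content)
    return "audit-%s.zip" % stamp, buf.getvalue()

name, data = zip_results(
    {"share-audit.csv": "path,owner\r\n\\\\rv16mimp\\fileshare,admin"},
    stamp="20190101120000")
print(name)  # audit-20190101120000.zip
```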

After a successful run, the list item will look like the picture below, where you can download the zip file.

The following picture shows the output of the files above.

Retrieve Office 365 audit logs using Office Management API and Azure Functions

For creating custom reports on Office 365 content, the best approach is to fetch the audit data from the Office 365 Management audit log, store it in a custom database and then create reports from it. In an earlier blog here, we looked at the steps to retrieve Office 365 Audit log data using PowerShell. In this blog, we look at a similar process to gather audit data using the Office 365 Management API in Azure Functions.


To start with, we will create an Azure AD app to connect to the Office 365 Audit log data store. Even though it might sound difficult, creating the Azure AD app is quite easy. Here is a quick blog with the steps.

After the Azure AD app is created, we will create an Azure Function to pull the data from the Office 365 Azure content blob. To do that, we will need to subscribe to the service first.

There are a few prerequisites for setting up the Azure content blob service, which are as follows:

  1. Enable the audit log service in the Security and Compliance center. This can be turned on (if not done already) via Start recording user and admin activity on the Audit log search page in the Security & Compliance Center. Microsoft will turn this on by default in future.
  2. Turn on the subscription service of the Office 365 Management API. To do this, hit the below URL to start the subscription service on your tenancy, replacing the tenant ID with the tenant ID from Azure Active Directory: {tenant_id}/activity/feed/subscriptions/start?contentType=Audit.SharePoint

Next, back in the Azure Function, we will connect to the subscription service using the Azure AD app ID and secret. The process is a back-and-forth data pull from the Azure content blob, so read through the steps carefully as it might be a little confusing otherwise.
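The connection itself boils down to a client-credentials token request against Azure AD. Here is a sketch in Python of how that request is assembled; the tenant, app ID and secret values are placeholders, and the POST itself is not made here:

```python
from urllib.parse import urlencode

# Assemble the client-credentials token request for the Office 365
# Management API; tenant/app/secret values are placeholders.
def token_request(tenant_id, client_id, client_secret):
    url = "https://login.microsoftonline.com/%s/oauth2/token" % tenant_id
    body = urlencode({
        "grant_type": "client_credentials",
        "resource": "https://manage.office.com",  # Management API resource
        "client_id": client_id,
        "client_secret": client_secret,
    })
    return url, body

url, body = token_request("contoso.onmicrosoft.com", "app-id", "app-secret")
print(url)
```

The access token in the response is then sent as a Bearer header on the subscription and content calls that follow.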

After connecting to the subscription service, we can request content logs for SharePoint events within a time window. Note that the date-times must be in UTC format.

The detailed audit log data is not provided in the initial data pull. The initial pull from the Office 365 Management API returns content URIs pointing to the detailed audit log data, so retrieval is a two-step process: the first call returns the content blob URIs, and a second call to each content URI returns the detailed audit log entries from the Azure subscription service.

Since the audit log data returned from the Office Management subscription service is paged, we need to loop through the NextPageUri to get the URI for each subsequent data pull.

The process breaks up the data calls and loops on the next-page URI. A brief overview is as follows:

  1. Use a do-while loop to call the initial data URI
  2. Call the initial data URI and get the response data
  3. Process the initial log data and convert it to JSON data objects
  4. Get the ContentUri property from each object
  5. Call each content URI to get the detailed audit log data
  6. After the data is fetched, convert it to JSON data objects
  7. Add these to the final data collection
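The loop above can be sketched as follows; fetch_json is a hypothetical stand-in for the authenticated HTTP calls, returning a payload plus the next-page URI:

```python
# fetch_json stands in for an authenticated GET against the Management API;
# PAGES fakes two list pages and the content blobs they point to.
PAGES = {
    "page-1": ([{"contentUri": "blob-1"}], "page-2"),
    "page-2": ([{"contentUri": "blob-2"}], None),
    "blob-1": ([{"Id": "1", "Operation": "FileAccessed"}], None),
    "blob-2": ([{"Id": "2", "Operation": "FileModified"}], None),
}

def fetch_json(uri):
    return PAGES[uri]  # (items, next_page_uri)

def pull_audit_entries(first_uri):
    entries, uri = [], first_uri
    while uri:                        # loop on NextPageUri (steps 1-2)
        blobs, uri = fetch_json(uri)  # initial pull: content URIs only
        for blob in blobs:            # second pull per ContentUri (steps 4-5)
            details, _ = fetch_json(blob["contentUri"])
            entries.extend(details)   # accumulate final objects (steps 6-7)
    return entries

ops = [e["Operation"] for e in pull_audit_entries("page-1")]
print(ops)  # ['FileAccessed', 'FileModified']
```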

After the data retrieval is complete, the final data set can be stored in an Azure Table for further processing.

Final Thoughts

The above custom process using an Azure Function and the Office 365 Management API allows us to connect to the audit log data through a custom job hosted in Azure. After getting the data, we can create reports or filter it as needed.

Use Azure AD Apps to connect with Office 365 and Cloud Services securely

Azure AD apps provide a fast and secure way to connect to an Office 365 tenancy and carry out automation tasks. Azure AD apps have many advantages and can be used to authenticate against various Microsoft services such as Graph, the Office 365 Management API, SharePoint etc.

In this blog, we will look at the steps to set up an Azure AD app for Office 365 Management API, however the steps are mostly the same for other Office 365 services too.

In order to create an Azure AD App, first we need to have an Azure AD setup for Office 365. For more information about setting up Azure AD, please check the article here.

Note: The screenshots and examples used in this blog refer to the latest App registrations (preview) page. The same is also possible through the old App registrations page, though the names and locations of controls might differ. Below are the front pages of both App registrations pages.

App Registrations (preview)

App Registrations

After Azure AD is set up, we will open the latest App registrations (preview) section to set up the new app. This is also possible through the old App registrations page, but in my personal opinion the new one is much better laid out than the old one.

In the App registrations page, let's create a new registration. For account types, let's leave the default of the current organisation only. We can leave the redirect URI blank or set it to localhost for now; we will change it later when the app is ready.


After the app is created, there are a few important sections to note. Below is a screenshot of the same.

  1. Application ID – This is the identity of the Azure AD app and will be used to connect to Office 365 using this app.
  2. Application secret – This is the password of the app. Please note that the secret is only displayed once, when it is created.
  3. API permissions – The permissions granted to the app for accessing various Office 365 services. The services can be accessed as a standalone app or under delegated user permissions.
  4. Reply URLs – Reply URLs are important for redirecting to the correct page after authentication is successful. If building a custom application, set the redirect to the app's authentication redirect page.
  5. Directory ID or Tenant ID – This is the Azure AD directory ID for the tenancy.


The next step in the process is to create a secret (password) for the app. Click on the Certificates & secrets link, then click on New client secret. While creating the secret, select "Never" as the expiry (if the secret doesn't need to expire) and then click create.


Note: It is important to note that the secret is only displayed on the page once, so copy the secret to use it later.

For API permissions, click on API Permissions and then click on Add a permission. This will display a list of pre-set services and the permissions required to access those services. Some of the Office 365 services are highlighted in the screenshot below.


After the API is selected, select the permissions the Azure AD app will run with, i.e. with user permissions (delegated) or as a service without a user, as shown in the screenshot below.


The next step is for a tenant administrator to grant permissions to the Azure AD app for accessing the services. This can be done from the Azure AD directory or through an admin consent URL. For the Office 365 Management API, the admin consent URL is of the below format. Please make sure the reply-back URL is a valid URL and is added under Authentication. {your_client_id}&redirect_uri={your_redirect_url}
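For reference, a consent link of this general shape can be assembled as below; the endpoint and parameter names follow the common Azure AD pattern and are my assumption here, so verify the exact URL for your API in Microsoft's documentation:

```python
from urllib.parse import urlencode

# Assemble an admin-consent URL; endpoint and parameter names follow the
# common Azure AD pattern and should be verified for your scenario.
def admin_consent_url(tenant_id, client_id, redirect_uri):
    qs = urlencode({"client_id": client_id, "redirect_uri": redirect_uri})
    return "https://login.microsoftonline.com/%s/adminconsent?%s" % (tenant_id, qs)

consent = admin_consent_url("contoso.onmicrosoft.com", "app-id",
                            "https://localhost/auth")
print(consent)
```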

After the consent is provided, the app is ready to connect to Office 365 services.

The app can now be used in applications and development code to access the Office 365 services set up in the Azure AD app.


Selectively prevent and secure content from External sharing using Labels and DLP policies in Office 365

In a recent project, we had a requirement to prevent specific content from being shared externally while still allowing the flexibility of external sharing for all users. We were able to achieve this through the Security and Compliance Center. There are a few ways to do it: auto-classification (see the conclusion section below for more info), selective application via labels, or both.

Note: Until recently (Dec 2018), there was a bug in Office 365 which prevented this DLP-policy-with-labels approach from working. This is fixed in the latest release, so it is available for use.

In this blog, we will look at the process where business users decide whether content is to be shared externally or not. This is a nifty feature, because there are cases when content should be classified as secure even when it doesn't contain any sensitive info, such as contracts (without business info) or invoices (with only the business name). Also, there are cases when content could be public even when the document has sensitive info, because the company has decided to make it public. In the end it is up to the discretion of the owner to decide the content's privacy, and hence this feature has great value in these scenarios.

Note: If you would like to auto-classify the content using sensitive info types, please refer to the great article here. That process leverages the machine learning capabilities of the Office 365 engine to identify secure content and automatically apply the security policy to it.

The first step is to create a retention label (somehow this doesn't work with security labels, so a retention label must be created). After creating the label, publish it to the selected locations; for our use case we will publish it to SharePoint sites only. While the label is publishing, we can go ahead and create a DLP policy to prevent sharing with external users (I was not able to make it work while set to Test with notification, so set it to the On state to test as well). After this, when you apply the label to a document, after a short wait (it takes about 1–2 min to take effect) the content can no longer be shared with external users. Let's look at each of the above steps in detail below.


  1. The first step is to create a retention label in the Security and Compliance Center. To my astonishment, the selective process works with retention labels rather than security labels, so we will create retention labels. A retention period is optional and not required for this exercise, so it can be left blank.

  2. Next, we publish the label, in our case to SharePoint sites. I haven't tried the process with other sources such as Outlook and OneDrive, but it should work the same way when applied.
    Note: It takes about a day for retention labels to publish to SharePoint sites, so please wait for the label to become available. We can move on to the next configuration step right away, but sharing will not be blocked until the label has finished publishing.
  3. Next, we create the DLP policy that will apply to the labelled content, following the configuration steps below. Once created, we may have to turn it on in order to test it.
  4. The first step of policy creation is to select Custom Policy as the DLP policy type and give it a name.
  5. Then we select the sources this policy applies to. In our case, it is SharePoint only.
  6. After that, we configure the rule settings for the DLP policy: select the label the policy applies to, then set the policy tips, sharing-block rules and override rules as shown in the screenshots below. We can also nominate the admins to be notified when such content is shared externally.
  7. Next, we can choose whether users may override the policy. For this blog and our requirement, we decided not to allow it.
  8. Once this is set up, we turn on the DLP policy so that it starts applying the rules. There doesn't seem to be any wait time when enabling the policy later, but give it some time if you don't see it taking effect right away.
  9. Now that the policy is enabled and the label is published, users can apply the label to content as shown in the screenshot below.
    Note: In some cases it takes about 2-3 minutes for the policy to take effect on the content after the label is applied, so give it some time.
  10. After the label takes effect (the 2-3 minute wait), sharing the same content with an external user produces the following error.
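For those who prefer scripting, the steps above can be sketched in Security & Compliance Center PowerShell. This is a sketch only: the label, policy and rule names are made up, and the parameters (particularly the label condition on the DLP rule) should be verified against your tenant before use.

```powershell
# Connect to Security & Compliance Center PowerShell
Connect-IPPSSession -UserPrincipalName admin@yourtenant.onmicrosoft.com

# Step 1: create the retention label (no retention period required)
New-ComplianceTag -Name "No External Sharing" `
    -Comment "Owner-applied label that blocks external sharing"

# Step 2: publish the label to SharePoint sites via a label policy
New-RetentionCompliancePolicy -Name "No External Sharing - Label Policy" -SharePointLocation All
New-RetentionComplianceRule -Policy "No External Sharing - Label Policy" `
    -PublishComplianceTag "No External Sharing"

# Step 3: DLP policy that blocks labelled content from being shared
# outside the organisation
New-DlpCompliancePolicy -Name "Block external sharing of labelled content" `
    -SharePointLocation All -Mode Enable
New-DlpComplianceRule -Policy "Block external sharing of labelled content" `
    -Name "Block labelled content externally" `
    -ContentPropertyContainsWords "ComplianceTag:No External Sharing" `
    -AccessScope NotInOrganization `
    -BlockAccess $true
```

As in the wizard, allow time for the label policy to publish (up to a day) before expecting the DLP rule to block sharing.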


Active Directory User Migration in Hybrid Exchange Environment Using ADMT – Part 6

Security Translation – Local Profiles and things to consider for end user experience

The last bit of any migration project is keeping the end user experience as simple and smooth as possible. By now we have successfully migrated the groups, migrated the users with their mailboxes intact, and provided them access to all their resources using SID history. As the final part of the migration, I would like to discuss a few things that should be considered from an end user's perspective so that their experience is good when they log in to the new domain. Depending on the nature of your organisational structure, there may be numerous things you want to consider; however, a few that I consider important for most migrations are:

  • The user might have multiple aliases (they can receive email on any alias but always send as the primary email address). Have all of these aliases been configured for the users?
  • Is the end user going to have a new UPN post-migration? If yes, what is the impact and which applications will need to be reconfigured?
    • O365 account
    • SIP URI
    • Authenticator app (if using Microsoft authenticator for MFA)
    • OneDrive for business
    • OneNote
    • ……..
  • Which account does the user sign in to their machine with?
  • Are there any login scripts that the users were using in the old domain?
  • What about their local user profile? Does it need to be migrated, or will the user get a new profile (losing access to profile-specific settings from the old domain such as profile files, browser favourites, shortcuts etc.)?
    • While performing the local profile migration, the user's machine needs to be connected to the office network and reachable from the ADMT machine, otherwise the profile migration will fail.

And again, the list can keep growing depending on factors specific to your environment.

So, before we discuss the steps involved in user profile migration, let's look at a couple of pre-requisites.

Windows Firewall

If a firewall (Windows or any other third-party) is enabled, it might prevent local profile migration from the source domain to the target domain. This is more relevant when computer migration is performed, but I would suggest disabling it for local profile migration as well. This can be achieved quite easily via group policy.

ADMT console uses Lightweight Directory Access Protocol (LDAP) port 389 to communicate with domain controllers and Remote Procedure Call (RPC) to communicate with ADMT agents. For RPC communication, any available RPC port in the range between 1024 and 5000 can be used.

For more information, see KB836429. If the error messages ERR3:7075, ERR2:7625, ERR2:7014, WRN1:7372 or ERR2:7228 are logged in the ADMT migration log, see KB929493.

  • On a domain controller in the source domain, click Start, point to Administrative Tools, and then click Group Policy Management.
  • Create a new group policy object and link it to the OU containing the user computer objects.
  • In the Group Policy Management Editor, go to Computer Configuration\Administrative Templates\Network\Network Connections\Windows Firewall\Domain Profile. Right-click Windows Firewall: Protect all network connections, and then click Properties.
  • On the Windows Firewall: Protect all network connections properties page, select Disabled, and then click OK.
  • Close everything, open a command prompt and run the GPUPDATE /force command.
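If you want to confirm the effect on a single test machine before the GPO propagates, the same change can be made locally from an elevated prompt (remember to re-enable the firewall afterwards):

```powershell
# Check the current firewall state on this machine
netsh advfirewall show allprofiles

# One-off test on a single machine only; the GPO above is the proper
# fleet-wide approach
netsh advfirewall set allprofiles state off
```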

Local Administrators access to ADMT account

The ADMT migration account used for profile migration needs local administrator rights on machines in the source domain. If the ADMT account is not granted administrator privileges, the ADMT agent cannot be deployed on user machines, which is required for carrying settings over from the old profile to the new profile. This can be achieved by creating a group policy and linking it to the same OU of user computers in the source domain as above.

  • Create a domain local security group in the source domain and add the ADMT service account to it.
  • Create a new GPO and link it to the OU containing the user machines.
  • In the Group Policy Management Editor, go to Computer Configuration\Windows Settings\Security Settings\Restricted Groups.
  • Add the ADMT admin local security group created earlier.
  • Under "This group is a member of:", click Add and type Administrators.
  • Close everything, open a command prompt and run the GPUPDATE /force command.
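As a quick sanity check, you can verify on a single machine that the policy has landed. The group name below is an illustrative placeholder:

```powershell
# List members of the local Administrators group; the ADMT group added
# by the GPO above should appear once policy has applied
net localgroup Administrators

# One-off test on a single machine (the GPO is the proper approach)
net localgroup Administrators SOURCE\ADMT-Admins /add
```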

Translate Local User Profile

User profiles store user data and information about the personal settings for the user. These may include files, printer settings, desktop settings including shortcuts and browser settings.

While user profile translation is performed, the machine must be powered on, connected to the network and reachable from the ADMT machine. Preferably, the user should not be logged in to the machine at the time of translation.

User profiles can be local or roaming.

  • Roaming profiles: user profiles are stored centrally on servers and are available to the user regardless of the workstation in use. Migration requirements: the migration account needs full access to the profile folders, which typically means taking ownership of the folders to grant the account permissions. Must be followed by a local profile migration to update the local copy of the profile.
  • Local profiles: user profiles are stored locally on the workstation; when a user logs on to another workstation, a new local user profile is created. Migration requirements: the migration account needs administrative access to the PC. Permissions can be set by GPO.
  • On the computer in the target domain on which ADMT is installed, log on using the ADMT migration account.
  • Use the Security Translation Wizard by performing the steps in the following table.
  • Security Translation Options: Click Previously migrated objects.
  • Domain Selection: Under Source, in the Domain drop-down list, type or select the NetBIOS or Domain Name System (DNS) name of the source domain. In the Domain controller drop-down list, type or select the name of the domain controller, or select Any domain controller. Under Target, in the Domain drop-down list, type or select the NetBIOS or DNS name of the target domain. In the Domain controller drop-down list, type or select the name of the domain controller, or select Any domain controller, and then click Next.
  • Computer Selection: Click Select computers from domain, and then click Next. On the Computer Selection page, click Add to select the computers in the source domain for which you want to translate security, click OK, and then click Next. Alternatively, click Read objects from an include file, click Next, type the location of the include file, and then click Next.
  • Translate Objects: Click User Profiles.
  • Security Translation Options: Click Replace.
  • ADMT Agent Dialog: Select Run pre-check and agent operation, and then click Start.
  • Review the results that are displayed on the screen for any errors. After the wizard completes, click View Migration Log to see the list of computers, completion status, and the path to the log file for each computer. If an error is reported for a computer, you will have to refer to the log file on that computer to review any problems with local groups. The log file for each computer is named MigrationTaskID.log, and it is stored in the Windows\ADMT\Logs\Agents folder.
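The wizard steps above can also be scripted via the ADMT command line, which is handy when translating profiles on many machines. This is a sketch only: the computer and domain names are placeholders, and the switch names (/TOT for the translation option, /TUP to translate user profiles) should be verified against `ADMT SECURITY /?` for your ADMT version:

```powershell
# Translate user profiles in Replace mode for a previously migrated
# computer (names below are placeholders)
ADMT SECURITY /N "WKS001" /SD:"source.local" /TD:"target.local" `
    /TOT:REPLACE /TUP:YES
```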

And job done!!!


Part 1. Introduction and high-level migration approach
Part 2. Configuring source and target domains for SID history and accepted-domains
Part 3. Installation and configuration of ADMT tool and Password Export Server
Part 4. Groups Migration
Part 5. Users Migration
Part 6. Security Translation Wizard – Local Profiles and things to consider for end user experience

Active Directory User Migration in Hybrid Exchange Environment Using ADMT – Part 5

Users Migration

The gun seems to be pretty much loaded with all the ammunition, ready to fire? Probably not yet …

Here I want to discuss a few basic things that are easily missed and can cause the migration to fail or go wrong. A few things worth noting before getting into the migration:

  • Make sure you have a plan for re-assigning the permissions on file shares that built-in groups in the source domain have access to. I used universal groups in the target domain and granted those file share permissions via the universal groups.
  • Make sure the default O365 email address policy in the target domain includes the new OU. Otherwise you will wonder where those funny email addresses on migrated users are coming from.
  • Add the new OUs to the AAD Connect sync configuration in the target domain and make sure the AADC sync service account has write permissions on the new OU containers. If this step is missed, you'll get permissions issues for migrated users in the AADC console.
  • Disable the mail capability of mail-enabled security and distribution groups in the source domain.
  • Modify the attributes of distribution groups and mail-enabled security groups so they sync to O365.
  • Enable the mail capability of mail-enabled security and distribution groups in the target domain with the appropriate attributes (email address and mailNickname).

Migrating Users

During the user account migration, audit events are logged in both the source and target domains.

  • On the ADMT server in the target domain, log on using the ADMT migration account.
  • Use the User Account Migration Wizard by performing the steps in the following table.
  • Domain Selection: Under Source, in the Domain drop-down list, type or select the source domain. In the Domain controller drop-down list, type or select the name of the domain controller, or select Any domain controller. Under Target, in the Domain drop-down list, type or select the target domain. In the Domain controller drop-down list, type or select the name of the domain controller, or select Any domain controller, and then click Next.
  • User Selection: Click Select users from the domain, and then click Next. On the User Selection page, click Add to select the users identified for migration in the source domain, click OK, and then click Next.
  • Organizational Unit Selection: Ensure that ADMT lists the correct target OU. If it is not correct, type the correct OU, or click Browse. In the Browse for Container dialog box, locate the target domain and OU, and then click OK.
  • Password Options: Click Do not update passwords for existing users.
  • Account Transition Options: In Target Account State, click Disable target accounts. In Source Account Disabling Options, click Days until source accounts expire and enter 5 days to keep the source account. Select the Migrate user SIDs to target domains check box.
  • User Account: Type the user name, password, and domain of an ADMT admin migration account in the source domain.
  • User Options: Select the Update user rights check box. Select the Migrate associated user groups check box. Select the Fix users' group memberships check box.
  • Object Property Exclusion: Clear the Exclude specific object properties from migration check box.
  • Conflict Management: Click Do not migrate source object if a conflict is detected in the target domain. Ensure that the Before merging remove user rights for existing target accounts and Move merged objects to specified target Organizational Unit check boxes are not selected.
  • When the wizard has finished running, click View Log, and then review the migration log for any errors.
  • Start Active Directory Users and Computers, and then verify that the user accounts exist in the appropriate OU in the target domain.
  • After the user migration with ADMT, update the Exchange-related attributes using a PowerShell script:
    • mailNickname
    • proxyAddresses
    • legacyExchangeDN, added as an X500 address in proxyAddresses
  • Run a manual sync to Azure AD using AAD Connect.
  • Enable the remote mailbox for each migrated user (if this command is not run, the user will not show up in the on-prem Exchange admin center):
    • Enable-RemoteMailbox -Identity <mailnickname> -RemoteRoutingAddress <id>
  • Use the command below to verify that SID history has been copied over:
    • dsquery * -filter "(&(objectCategory=user)(sAMAccountName=xxx.xxxx))" -attr objectSid sIDHistory
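The post-migration attribute steps above can be sketched in PowerShell. This is illustrative only: the user name, addresses and domain names are placeholders, and it assumes the ActiveDirectory module, the on-prem Exchange management shell and (on the AAD Connect server) the ADSync module are available:

```powershell
# Illustrative placeholders - replace with your own values
$sam  = "jane.doe"
$smtp = "jane.doe@target.com"

# Read the legacyExchangeDN from the source domain so replies to old
# messages keep working after migration
$legacyDN = (Get-ADUser $sam -Server "source.local" `
    -Properties legacyExchangeDN).legacyExchangeDN

# Stamp the Exchange-related attributes on the migrated account
Set-ADUser $sam -Replace @{
    mailNickname   = $sam
    proxyAddresses = @("SMTP:$smtp", "X500:$legacyDN")
}

# Trigger a delta sync to Azure AD (run on the AAD Connect server)
Start-ADSyncSyncCycle -PolicyType Delta

# Enable the remote mailbox so the user shows up in the on-prem EAC
Enable-RemoteMailbox -Identity $sam `
    -RemoteRoutingAddress "jane.doe@target.mail.onmicrosoft.com"
```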


