Office 365 Lessons Learned – Advice on how to approach your deployment

Hindsight sometimes gets the better of us. That poor storm trooper. If only he'd had that advice before Obi-Wan Kenobi showed up. Clearly the Galactic Empire doesn't learn its lessons. Perhaps consultants use the Jedi mind trick to persuade their customers?

I’ve recently completed an 18-month project that rolled out Office 365 to an ASX Top 30 company. The project successfully rolled out Office 365 to each division & team across the company, migrating content from SharePoint 2007 and file shares. It hit its success criteria, came in on budget & showed positive adoption metrics.

Every project is unique and has lessons to teach, though we rarely take the time to reflect. This post is part of that learning process: a brief list of lessons we learned along the way. It adds credibility when a consultancy can honestly say – yes, we’ve done this before, here’s what we learned & here’s how it applies to your project. I believe Professional Services companies lose value when they don’t reflect & learn, including on their successful projects.

Don’t try to do too much

We often get caught up in the capabilities of a platform and want to deploy them all. A platform like Office 365 has an endless list of capabilities: document management, forms, workflow, Planner, Teams, a new intranet, records management, automated governance…the list goes on. Check out my recent post on how you can use the power of limitation to create value. When we focus purely on capabilities, we easily lose sight of how overwhelming they can be to the people who use them. After all, the more you try to do, the more you’ll spend on consulting services.

Choose your capabilities, be clear on your scope, communicate your plans and roadmap future capability.

Design for day 1 and anticipate future changes

People in the business are not technology experts. They are experts in their own fields – finance, HR, risk, compliance etc. You can’t expect them to know & understand all the concepts in Office 365 before they’ve used it. Once people start using Office 365 they begin to understand, and then comes the Aha! moment. Now the change requests start flowing.

Design for day 1 and anticipate future changes. Resource your project so that post-go-live activities include design adjustments and enhancements. Ensure these activities don’t pull key resources away from the next team you are migrating.

Respect business priorities

It’s easy to forget that people have a day job; they can’t be available to your project all the time. This is especially the case when important business processes like budgeting, results and reporting are under way. You can’t expect to migrate, or even plan a migration, during these periods – it just won’t fly. If you are migrating remote teams, be mindful of events or processes that only affect them. There might be an activity that requires 100% of their time.

When planning & scheduling, be mindful of these priorities. Don’t assume people can carry the extra workload you are creating for them. Work with senior stakeholders to identify when to engage and when to stay away.

Focus on the basics

When legacy systems have been in place for years, people are very comfortable with how to use them. Office 365 often has multiple ways of doing the same thing – take sharing, for example: there are 20+ ways to share the same content, through the browser, the Office Pro Plus client, Outlook, the OneDrive client, the Office portal etc.

Too much choice is bad. Pick, train & communicate one way to do something. Once people become comfortable, they’ll find the other ways themselves.

The lines between “Project” & “BAU” are blurred

New features & capabilities are constantly announced. We planned to deliver a modern intranet, which we initially budgeted as a 3-month custom development cycle. When it came time to start this piece of work, Microsoft had just announced Communication sites. Whilst the customer was nervous about adopting a brand-new feature, it turned out to be a good choice. The intranet now grows and morphs with the platform. Plenty of new features have been announced since; most recently we have megamenus, header & footer customisation plus much more. This was great during the project, but what happens when the project finishes? Who can help make sense of these new features?

Traditional plan-build-run models aren’t effective for a platform that continuously evolves outside of your control. That model lacks value creation & evolution; it makes the focus reactive incident management. To unlock value, you need to build a capability that can translate new features into opportunities & pain points within business teams. This deepens the IT/Business relationship & creates value, not to mention helps with adoption.

What have you recently learned? Leave a comment or drop me an email!


Office365-AzureHybrid: Building an automated solution to pull Office 365 Audit logs

Custom reporting on Office 365 Audit logs is possible using data fetched from the Security and Compliance center. In the previous blogs here, we saw how to use PowerShell and the Office 365 Management API to fetch the data. In this blog, we will look at the planning, prerequisites and rationale to help you decide between the approaches.

The Office 365 Audit logs are available from the Security and Compliance center once audit logging is enabled. At present it is not enabled by default; turn it on (if not done already) via Start recording user and admin activity on the Audit log search page in the Security & Compliance Center. Microsoft has indicated it will be turned on by default in future. Once enabled, audit information is tracked across all Office 365 services.

The Audit log search in the Security and Compliance center lets you search the audit logs, but what it provides is limited, and it takes a long time to return results. All of the cases below need custom hosting to provide better efficiency and performance.

Planning and prerequisites:

A few considerations for custom processes are as follows:

  1. Additional compute to process the data – Create a periodic job to fetch the data from the Office 365 audit log using a custom process, since the audit log data is large and queries take a long time. The options are a PowerShell job or an Azure Function App, as detailed below.
  2. Additional hosting to store the Office 365 Audit log data – The records could range from 5,000 to 20,000 per hour depending on the data sources and data size. To make it easier to retrieve the data later, store it in a custom database. Since the storage cost could be significant, use either dedicated hosting or NoSQL hosting such as Azure Tables/Cosmos DB (Azure) or SimpleDB/DynamoDB (AWS).
  3. An additional service account or Azure AD app – The data will be retrieved with an elevated account at runtime, so use an Azure AD app or a service account to gather the data. For more information, please refer to the blog here.


Some scenarios where the Office 365 Audit log data could be useful:

  1. Creating custom reports of user activities and actions
  2. Storing audit log data for more than 90 days
  3. Custom data reporting and alerts that are not supported in the Security and Compliance center


Below are a few approaches to pull the data from the Office 365 Audit logs, along with the benefits and limitations of each to help decide on an implementation.

Using PowerShell

The Search-UnifiedAuditLog cmdlet in Exchange Online PowerShell can be used to retrieve data from the Office 365 Audit log. More implementation details can be found in the blog here.


Benefits:

  1. Doesn’t need additional compute hosting. The PowerShell job can run on a local system with a service account, or on a server.
  2. A one-off data pull is possible, and the data can be retrieved later
  3. Able to retrieve data older than 90 days from the Office 365 Audit log
  4. No session timeout constraints, as long as the PowerShell console stays active
  5. Local date filtering applies while searching; no need to convert to GMT formats


Limitations:

  1. Needs tenant admin rights when connecting to Exchange Online PowerShell to download the cmdlets
  2. Needs an active connection to Exchange Online PowerShell every time it runs
  3. Not currently possible to run on Azure or AWS, as connecting to the Exchange Online PowerShell cmdlets isn’t possible in a serverless environment
  4. Needs a long active window, as the job could run for hours depending on the data

Using Office 365 Management API

The Office 365 Management API provides another way to retrieve the audit log data, using a subscription service and an Azure AD app. For more detailed information, please check the blog here.


Benefits:

  1. Support for any language, such as C#, JavaScript or Python
  2. Parallel processing allows greater speed and flexibility of data management
  3. Controlled data pull depending on data size, to increase efficiency and performance


Limitations:

  1. Needs additional compute hosting (serverless workloads or web jobs) to process the data
  2. Needs an Azure AD app or OAuth layer to connect to the subscription service
  3. Needs additional time zone handling, since all dates are in GMT when retrieving data
  4. Session timeouts might occur when pulling large datasets, so it is advisable to use smaller time windows for each pull
  5. A multi-level data pull is required to fetch the audit log. Please check the blog here for more information
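Limitations 3 and 4 above (GMT-only dates and session timeouts on large pulls) can be handled up front by converting the query range to UTC and splitting it into small windows. A minimal Python sketch of the idea – the function name and the 60-minute default are my own, not anything prescribed by the API:

```python
from datetime import datetime, timedelta, timezone

def to_utc_windows(start_local, end_local, window_minutes=60):
    """Convert a local datetime range to UTC and split it into small
    windows, so each Management API query stays small enough to avoid
    long-running sessions."""
    start = start_local.astimezone(timezone.utc)
    end = end_local.astimezone(timezone.utc)
    windows = []
    cursor = start
    while cursor < end:
        nxt = min(cursor + timedelta(minutes=window_minutes), end)
        windows.append((cursor, nxt))
        cursor = nxt
    return windows
```

Each (start, end) pair can then be issued as its own query, keeping individual pulls small.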

Final Thoughts

Both PowerShell and the Office 365 Management Activity API are great ways to fetch Office 365 Audit log data for creating custom reports. The points above can be used to decide on an approach that fetches and processes the data efficiently. For more details on the steps involved, please check the blogs here (PowerShell) and here (Office 365 Management API).

Analogue Devices and Microsoft Teams

Last week, I was working through a technical workshop with a customer who wanted to make the move to Microsoft Teams. We’d worked through the usual questions, and then the infamous question came: “So… are there any analogue devices still in use?” “Yeah, about 50 handsets”. You’d be forgiven for thinking that analogue handsets were a thing of the past. However, much like the fax machine, there’s still a whole lot of love out there for them. There are many reasons for this, but the ones most often heard are:
  • A basic analogue handset fits the requirement – There’s no need for a fancy touch screen.
  • It’s a common area phone – hallways, lifts, stairwells, doors, gates etc
  • It’s a wireless DECT handset – this may include multiple handsets and base stations.
  • It’s something special – like a car park barrier phone or intercom system
  • It’s in a difficult to reach or remote location – such as a shed or building located away from the main office
  • There’s no power or ethernet cabling to this location – it’s simply using a copper pair.
Whatever the reason, in almost all cases I have encountered, the customer has a requirement for a working phone at that location. This means we need a plan for how we’re going to handle these analogue devices once we’ve moved to Microsoft Teams.

So, what’s the plan? Firstly, check and confirm with the customer that they actually still need the handset at that location. There’s always a possibility that it’s no longer required. As mentioned above, though, this seldom happens. Once you’ve confirmed the phone is still required, figure out if it can be replaced with a Microsoft Teams handset. Currently, there is a small number of Microsoft Teams handsets available from Yealink and AudioCodes:
  • Yealink T56A
  • Yealink T58A
  • AudioCodes C450HD
Some things to consider with this approach:
  • Availability of networking and PoE – These phones will require a network connection, and can be powered via PoE.
  • Is this a noisy environment? – If the old analogue device was connected to a separate external ringer like a bell or light, this will need to be replaced too.
What if I can’t replace the handset with a Teams compatible device? There will be times when you simply can’t replace an old analogue device with a Teams compatible handset. This could be as simple as there being no ethernet cabling at that location, or the analogue device being built into something like a car park barrier or an emergency lift phone. Most of the time, your customer is going to want to keep the same level of functionality on the device. The best news is, there are a number of ways to achieve this! You’ve got a few options here:

Option 1: Do… nothing. You’ve read that right. Do nothing. Your PABX is already configured to work with these devices. If you can leave the PABX in place, as well as the PSTN connectivity, these devices can remain connected to the PABX and happily continue to work as they always have. If you have this as an option, great! Most of us don’t, though.

Option 2: Deploy Microsoft Teams Direct Routing. Alright, so the PABX has to go. What now? Microsoft Teams Direct Routing is the answer. Direct Routing involves deploying a compatible session border controller (SBC) on premises, which allows you to connect up your analogue devices and convert them to SIP. With this approach, your analogue devices and Microsoft Teams users can call each other internally, and you get to keep your existing ISDN or SIP provider for PSTN calls. You can deploy this solution to many different sites within your organisation, and you can even route calls between SBCs so analogue devices at different sites can make internal calls to each other.

What if we’ve gone down the Microsoft online-only path? If you’re already making and receiving calls via Microsoft Phone System and Calling Plans in Office 365, you’ll need to deploy Direct Routing at the locations where analogue devices still require connectivity.

I’m ready to delve into this! Awesome. Microsoft has plenty of helpful documentation on Direct Routing. And as usual, if you have any questions, feel free to leave a comment below.

Create Office365 business value through the power of limitation

Recent consulting engagements have found me helping customers define what Office365 means to them & what value they see in its use. They are lucky enough to have the licenses, and are seeking help to understand how to drive value from the investment.

You’ve heard the sales pitches: Office365 – The platform to solve ALL your needs! From meetings, to document management, working with people outside your organisation, social networking, custom applications, business process automation, forms & workflow, analytics, security & compliance, device management…the list goes on and is only getting bigger!

When I hear Office365 described, I often hear the attributes religious people give to God.

  • It’s everywhere you go – Omnipresent
  • It knows everything you do – Omniscient
  • It’s so powerful it can do everything you want – Omnipotent
  • It’s unified despite having multiple components – Oneness
  • It can punish you for doing the wrong thing – Wrathful

It’s taking on a persona – how long before it becomes self-aware!?

If it can really meet ALL your needs, how do we define its use? Do we just use it for everything? Where do we start? How do we define what it means, if it can do everything?

Enter limitation. Limitation is a powerful idea that brings clarity through constraint. It’s the foundation on which definition is built. Can you really define something that can do anything?

The other side would suggest that limiting technology constrains thinking and prevents creativity. I don’t agree. Limitation begets creativity. It helps zero in your thinking and helps create practical, creative solutions with what you have. Moreover, having modern technology doesn’t make you a creative & innovative organisation. That’s about culture, people & process. As always, technology is a mere enabler.

What can’t we use Office365 for?

Sometimes it’s easier to start here. Working with Architecture teams to draw boundaries around the system helps provide guidance for appropriate use. They have a good grasp of enterprise architecture and the reasons why things are the way they are. It helps clearly narrow the use cases & provides a definition that makes sense to people.

  • We don’t use it to share content externally because of…
  • We can’t use it for customer facing staff because of…
  • We don’t use it for Forms & Workflow because we have <insert app name here>
  • We don’t use it as a records management system because we have…

Office365 Use cases – The basis of meaning

Microsoft provide some great material on generic use cases: document collaboration, secure external sharing, workflow, managing content on the go, making meetings more efficient etc. These represent ideals, and are sometimes too far removed from the reality of your organisation. Use them as a basis and develop them further with relevance to your business unit or organisation.

Group ideation workshops, discussions & brainstorming sessions are a great way to draw out use cases. Make sure you have the right level of people, not too high & not too low. You can then follow up with each, drill into the detail and see the value the use case provides.

Get some runs on the board

Once you’ve defined a few use cases, work with the business to start piloting. Prove the use case with real-life scenarios. Build the network of people around the use cases and start to draw out and refine how it solves pain, for here is where true value appears. This can be a good news story that can be told to other parts of the business to build excitement.

Plan for supply & demand

Once you have some runs on the board, word will quickly get out if you’re successful. People will want it. Learn to harness this excitement and build off the energy created. Be ready to deal with a sudden increase in demand.

On the supply side, plan for service management. How do people get support? Who supports it? How do we customise it? What’s the back-out plan? How do we manage updates? All the typical ITIL components you’d expect should be planned for during your pilots.

Office365 Roadmap to remove limitations & create new use cases

Roadmaps are a meaningful way to communicate when value will be unlocked. IT should have a clear picture of what business value is, and how it will work to unlock the capabilities the business needs in order to be successful.

Roadmaps do a good job of communicating this, though typically they are technology focused. That might be a great way to help unify the IT team, but people on the receiving end won’t quite understand. Communicate using their language, in ways they understand, i.e. what value it will provide them, and when & how it will help them be successful.

Azure Automation MS Flow Hybrid Workers SharePoint List upload CSV output

In this blog I will discuss how to leverage SharePoint lists as a front end, using MS Flow to call webhooks on Azure Automation PowerShell scripts. The scripts execute via a Hybrid Worker to access on-premises resources. The results are zipped and uploaded back to the SharePoint list.


Prerequisites:

  • Azure Automation subscription and account
  • SharePoint Online site collection
  • On-premises resource (Windows 2016 server) configured as a Hybrid Worker
  • CredSSP enabled on the Hybrid Worker, as Azure launches scripts as the system account and some commands cannot use ‘-Credential’
  • Modules needed on the Hybrid Worker: from an elevated PowerShell prompt run “Add-WindowsFeature RSAT-AD-PowerShell” and “Install-Module SharePointPnPPowerShellOnline”
  • In Azure, import the SharePointPnPPowerShellOnline module from the gallery

Create SharePoint List

Create a SharePoint list as below; this will be the input required for the script.

ServerPath = the server name, e.g. “rv16mimp”

AuditShare = the full path after the server name, e.g. “fileshare”

Within the script this will become \\rv16mimp\fileshare

Adjust the SharePoint list from List Settings to include the ServerPath, AuditShare, ErrorMessage and Status columns.

Azure Automation Script and WebHook

Log in to Azure Automation Account and create a new PowerShell Runbook.

This script takes the values input from the SharePoint list and uses the SharePointPnP module to update the list item to In Progress. The script executes on the Hybrid Worker, as the webhook is configured to target it. It invokes a command to launch a local script using CredSSP, in order to run the script entirely as a local AD user stored as an Azure credential object. Any errors encountered, in both the Azure script and the local script, end up in ErrorMessage. After the local script has completed, the Azure script gathers the zip file created and attaches it to the SharePoint list item.

Create a Webhook on an existing runbook

Create a webhook, making sure to select the Hybrid Worker. It is important to copy the webhook URL and store it safely, as you never get to see it again.

Create MS Flow

You can start the creation of a Flow from the list: click ‘See your flows’.

From Flow, click the New drop-down button, then select “Create from blank”.

Next, when you see the image below, click “When an item is created” at the bottom right.

Enter the Site Address by selecting ‘Enter custom value’.

Select the list required, then click New.

You can filter the search for HTTP, then choose it.

Choose POST as the method and enter the webhook URL you saved from Azure above. In the headers we use ItemID to match the list item’s ID from SharePoint. ServerPath and AuditShare are the input fields from the SharePoint list, passed to the script parameters.
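Put together, the HTTP action configuration ends up looking something like the sketch below. The expression syntax is standard Flow dynamic content; the field names (ItemID, ServerPath, AuditShare) are the list columns created earlier, and the exact shape depends on how the runbook parses the webhook payload:

```json
{
  "method": "POST",
  "uri": "<webhook URL saved from Azure>",
  "headers": {
    "ItemID": "@{triggerBody()?['ID']}"
  },
  "body": {
    "ServerPath": "@{triggerBody()?['ServerPath']}",
    "AuditShare": "@{triggerBody()?['AuditShare']}"
  }
}
```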

Hybrid Worker Script

This script executes on the Hybrid Worker using the credentials passed from the Azure Automation script and stored as a credential object. The main tasks it performs are a small audit of a file share and a check of the members of the global groups it finds. Lastly, it zips up the files ready to upload back to SharePoint. I have used the $Date from the Azure script in order to identify the filename and avoid conflicts.

After a successful run, the list item will look like the picture below, where you can download the zip file.

The following picture shows the output of the files above.

Retrieve Office 365 audit logs using Office Management API and Azure Functions

For creating custom reports on Office 365 content, the best approach is to fetch the audit data from the Office 365 Management audit log, store it in a custom database and then create reports from there. In an earlier blog here, we looked at the steps to retrieve Office 365 Audit log data using PowerShell. In this blog, we look at a similar process to gather audit data, this time using the Office 365 Management API in Azure Functions.


To start with, we will create an Azure AD app to connect to the Office 365 Audit log data store. Even though it might sound difficult, creating the Azure AD app is quite easy and simple: it is just a matter of registering an app in Azure AD. Here is a quick blog with the steps.

After the Azure AD app is created, we will create an Azure Function to pull the data from the Office 365 content blobs. To do that, we will need to subscribe to the service first.

There are a few prerequisites for setting up the content blob subscription, which are as follows:

  1. Enable the audit log service in the Security and Compliance center. This can be turned on (if not done already) via Start recording user and admin activity on the Audit log search page in the Security & Compliance Center. Microsoft plans to turn this on by default in future.
  2. Turn on the subscription service in the Office 365 Management API. To do this, hit the below URL to start the subscription service on your tenancy, replacing {tenant_id} with the tenant Id from Azure Active Directory: https://manage.office.com/api/v1.0/{tenant_id}/activity/feed/subscriptions/start?contentType=Audit.SharePoint
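The same start-subscription call can be made from code instead of the browser. A hedged Python sketch (the helper name is mine; the endpoint is the documented Management Activity API subscription URL, and the token must come from the Azure AD app):

```python
import urllib.request

# Documented Office 365 Management Activity API root.
API_ROOT = "https://manage.office.com/api/v1.0"

def build_start_subscription_request(tenant_id, access_token,
                                     content_type="Audit.SharePoint"):
    """Build the POST request that starts an audit subscription for
    one content type on the tenancy."""
    url = (f"{API_ROOT}/{tenant_id}/activity/feed/subscriptions/start"
           f"?contentType={content_type}")
    return urllib.request.Request(
        url,
        method="POST",
        headers={"Authorization": f"Bearer {access_token}"},
    )

# To actually start the subscription (needs a real token from the Azure AD app):
# urllib.request.urlopen(build_start_subscription_request(tenant_id, token))
```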

Next, back in the Azure Function, we connect to the subscription service using the Azure AD app Id and secret. The process below is a back-and-forth data pull from the content blobs, so read through the steps and code carefully, as it might otherwise be a little confusing.
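The connection itself is a standard Azure AD client-credentials token request, with the resource set to the Management API. The original code for this step isn't reproduced here; a rough Python equivalent under those assumptions (function names are illustrative):

```python
import json
import urllib.parse
import urllib.request

def build_token_request(tenant_id, client_id, client_secret):
    """Client-credentials token request against the Azure AD v1
    endpoint, with resource set to the Management API."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/token"
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "resource": "https://manage.office.com",
    }).encode()
    return urllib.request.Request(url, data=body, method="POST")

def get_access_token(tenant_id, client_id, client_secret):
    """POST the token request and return the bearer token (network call)."""
    req = build_token_request(tenant_id, client_id, client_secret)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["access_token"]
```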

After connecting to the subscription, we can request content logs for SharePoint events within a time window. Note that the date-times have to be in UTC format.

The detailed audit log data is not provided in the initial data pull. The initial pull from the Office 365 Management API returns content URIs pointing to the detailed audit log data, so retrieval is a two-step process: first get the content blob URIs from the initial call, then call each of those URIs to get the detailed log entries from the subscription service.

Since the audit log data returned from the subscription service is paged, you need to loop through the NextPageUri to get the URI for the next data pull.

The code below breaks up the data calls and the looping over the next page URI. A brief overview of the code is as follows:

  1. Use a do-while loop to call the initial data URI
  2. Call the initial data URI and get the response data
  3. Process the initial log data and convert it to JSON data objects
  4. Get the ContentUri property for each entry
  5. Call the content URI to get the detailed audit log data
  6. After the data is fetched, convert it to JSON data objects
  7. Add them to the final data collection
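The steps above can be sketched language-agnostically. Here is a small Python version with the HTTP call injected as a parameter, so the paging and two-step logic is visible on its own (the NextPageUri header and the contentUri field come from the Management API; everything else is illustrative):

```python
def fetch_audit_records(list_content_url, http_get):
    """Walk the paged content listing and pull the detailed audit
    entries behind each contentUri.

    `http_get(url)` must return (parsed_json, next_page_uri), where
    next_page_uri is taken from the NextPageUri response header
    (None on the last page).
    """
    records = []
    url = list_content_url
    while url:                                        # loop page by page
        blobs, next_page = http_get(url)              # initial data pull
        for blob in blobs:                            # each blob holds a contentUri
            detail, _ = http_get(blob["contentUri"])  # detailed data pull
            records.extend(detail)                    # accumulate JSON objects
        url = next_page
    return records
```

Because the HTTP call is injected, the loop can be exercised with canned responses before pointing it at the live subscription service.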

After the data retrieval is complete, the final data set could be stored in an Azure Table for further processing.
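Azure Table storage needs a PartitionKey and RowKey on every entity. One simple scheme is to partition by day and key by the audit record Id; a sketch of the shaping step (the key choice is an assumption, not prescribed by the API):

```python
def to_table_entity(record):
    """Shape one audit record into an Azure Table entity.
    Partitioning by creation date and keying by record Id is one
    possible scheme, not the only one."""
    return {
        "PartitionKey": record["CreationTime"][:10],  # e.g. "2018-11-05"
        "RowKey": record["Id"],
        "Operation": record.get("Operation", ""),
        "UserId": record.get("UserId", ""),
        "Raw": str(record),  # keep the full payload for later reporting
    }
```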

Final Thoughts

The above custom process using an Azure Function and the Office 365 Management API allows us to connect to the audit log data through a custom job hosted in Azure. After getting the data, we can create reports or filter the data as needed.

Skype for Business Standard Edition – Unable to failback once DR is invoked

Running “Invoke-CsPoolFailover” changes the “PoolState” of the primary server from Active to FailedOver. If this is not addressed after the restoration of the primary server, failback will not work.

Figure 1: Primary Server FailedOver State

In order to fail the pool back to the primary server, the “PoolState” will need to be set back to Active. This can be done by running the following command:

PS C:\> Set-CsRegistrarConfiguration -Identity “” -PoolState Active

Log in to the restored primary frontend server and, using Windows PowerShell, start all the Skype for Business services by running the following command:

PS C:\> Start-CsWindowsService

Once the above is done, you can follow the blog listed below for the failover process:

DR Failover for Skype for Business Standard Edition

DR Failover for Skype for Business Standard Edition

This article takes you through, step by step, both a health check and invoking disaster recovery (DR) in a Standard Edition environment. The diagram below shows the layout of the environment the DR was carried out on:

Figure 1 – Environment Overview

Before proceeding to test DR, you need to make sure the appropriate registrar information is available and configured in the environment, otherwise you will get the following error during the pool failover process:

Please check that the pool <Prod_S4B> is healthy as conditions such as high CPU, low available memory
 or any disabled services can delay (or in some cases result in unsuccessful) fail over operations.
[Y] Yes  [A] Yes to All  [N] No  [L] No to All  [S] Suspend  [?] Help (default is “Y”):
Get-CsRegistrarConfiguration -Identity ‘service:Registrar:<Prod_S4B>’
WARNING: Cannot find “RegistrarConfiguration” “Registrar:<Prod_S4B>” because it does not exist.
Get-CsRegistrarConfiguration -Identity ‘service:Registrar:<DR_S4B>’
Invoke-CsPoolFailOver : Microsoft.Rtc.Management.Hadr.ManagementCOMException: Version check failed. This cmdlet works
only on servers running Lync Server 2013 or later.
   at Microsoft.Rtc.Management.Hadr.InvokePoolFailOverCmdlet.Action()
At line:1 char:1
+ Invoke-CsPoolFailOver -PoolFqdn <DR_S4B>
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidResult: (:) [Invoke-CsPoolFailOver], ManagementCOMException
    + FullyQualifiedErrorId : Microsoft.Rtc.Management.Hadr.InvokePoolFailOverCmdlet
WARNING: Invoke-CsPoolFailOver encountered errors. Consult the log file for a detailed analysis, and ensure all errors
(2) and warnings (0) are addressed before continuing.
WARNING: Detailed results can be found at
Invoke-CsPoolFailOver : Microsoft.Rtc.Management.Hadr.ManagementCOMException: Version check failed. This cmdlet works
only on servers running Lync Server 2013 or later.
   at Microsoft.Rtc.Management.Hadr.InvokePoolFailOverCmdlet.Action()
At line:1 char:1
+ Invoke-CsPoolFailOver -PoolFqdn <DR_S4B>
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidResult: (:) [Invoke-CsPoolFailOver], ManagementCOMException
    + FullyQualifiedErrorId : Microsoft.Rtc.Management.Hadr.InvokePoolFailOverCmdlet

First, run the following command in the Skype for Business Management Shell:

PS C:\> Get-CsRegistrarConfiguration

Output of the above command is:

Figure 2: Registrar Output

If you do not have the additional service-level registrars pointing to the frontends, please run the following command:

PS C:\> New-CsRegistrarConfiguration -Identity -EnableDHCPServer $true -PoolState $true

Note: Please replace “” with your own server FQDN. Run the command for each standard edition server.

Once the above is done, rerun the command:

PS C:\> Get-CsRegistrarConfiguration

To ensure the service level information has been loaded correctly.

Now check to make sure there are no issues with the backup state of the environment. To do this, run the following command:

PS C:\> Get-CsBackupServiceStatus -PoolFqdn “”

Output of the above command is:

Figure 3: Primary Backup Status Output

To expand and see the BackupModules, you can run the command as:

PS C:\> Get-CsBackupServiceStatus -PoolFqdn “” | Select BackupModules | fl

Figure 3.1: Backup Modules Output

Looking closer at Figure 3.1 you will note the line: CentralMgmt.CMSMaster: [FinalState,NotInitialized]}

Matching this to Figure 3, this actually refers to:

Figure 3.2: CMS State on Primary Server

The reason the CentralMgmt.CMSMaster “OverallImportStatus” is “NotInitialized” is that there can only be one CMS Master; in this instance, the primary frontend is the CMS Master.

The table below shows the meaning of the various status outputs:

Table 1: Export States

When you run the same commands against the secondary frontend:

PS C:\> Get-CsBackupServiceStatus -PoolFqdn “”

Output should be as follows:

Figure 4: Secondary Backup Status Output

To expand and see the BackupModules, you can run the command as:

Figure 4.1: Backup Modules Output

Looking closer at Figure 4.1 you will note the line: CentralMgmt.CMSMaster: [NotInitialized,NormalState]}

Matching this to Figure 4, this actually refers to:

Figure 4.2: CMS State on Secondary Server

The reason the CentralMgmt.CMSMaster “OverallExportStatus” is “NotInitialized” is that there can only be one CMS Master; in this instance, the secondary frontend is not the CMS Master.

Note: There is no need to check the Pool Fabric State, as this is not an Enterprise pool containing multiple frontends.

To test the “Central Management Store (CMS)”, run the following command:

On the Primary server:

PS C:\> Test-CsDatabase -CentralManagementDatabase

Output should look like:

Figure 5: Primary CMS Output

Run the following command from the Secondary Server:

PS C:\> Test-CsDatabase -CentralManagementDatabase

Output of the above command is as follows:

Figure 6: Secondary CMS Output

Please also test all database connectivity, both to the locally running SQL services and between the frontend servers.

To test the local databases, run the following command:

PS C:\> Test-CsDatabase -LocalService

To test databases across servers:

From Primary Server run the following command to the opposing server:

PS C:\> Test-CsDatabase -ConfiguredDatabases -SqlServerFqdn ""

The reason to do this is to ensure that the primary server can connect to the secondary server’s SQL database prior to initiating failover or putting the environment into production. Run the same command from the Secondary Server to the Primary Server. The command you run from the Secondary server is as follows:

PS C:\> Test-CsDatabase -ConfiguredDatabases -SqlServerFqdn ""

Check the proposed state of the CMS failover by running the following command:

PS C:\> Invoke-CsManagementServerFailover -WhatIf

Note: If this is a true DR situation most of the above health checks will fail as the query cannot communicate with the CMS Master server.

Once the health checks are done, run the invoke command to failover the CMS to the secondary server:

PS C:\> Invoke-CsManagementServerFailover -BackupSQLServerFqdn "Secondary Frontend Server" -BackupSQLInstanceName RTC

Check that the failover has been successful by running:

PS C:\> Get-CsManagementStoreReplicationStatus -CentralManagementStoreStatus

Note: If the "ActiveMasterFqdn" is not populated, do not worry; it takes a few minutes to update. While this is happening, you can launch Topology Builder and verify that it has failed over.

On the main deployment page select “Skype for Business Server” and on the right hand pane under “Central Management Server” you should see a green tick next to the secondary frontend, example below:

Figure 7: Topology

As per Figure 1, in this deployment there is only a single edge pool, which is the next hop for the primary frontend server. The servers are separated into their respective sites: the primary in Site 1 and the secondary in Site 2.

As such, you cannot use Topology Builder to fail over edge services to the secondary frontend, because the edge pool is physically configured under Site 1. The services have to be failed over using PowerShell, as below:

PS C:\> Set-CsEdgeServer -Identity "EdgePool FQDN" -Registrar "Registrar:Secondary Frontend FQDN"

Once the command completes, publish the topology by running the following command:

PS C:\> Enable-CsTopology

Once this completes, you can fail over the pool for users and services by running the following command on the secondary server:

PS C:\> Invoke-CsPoolFailOver -PoolFqdn "Secondary frontend server"

You have now successfully failed over the environment to the Secondary Frontend.

Use Azure AD Apps to connect with Office 365 and Cloud Services securely

Azure AD apps provide a fast and secure way to connect to an Office 365 tenancy and carry out automation tasks. They offer many advantages and can be used to authenticate against various Microsoft services such as Graph, the Office 365 Management API, SharePoint, etc.

In this blog, we will look at the steps to set up an Azure AD app for Office 365 Management API, however the steps are mostly the same for other Office 365 services too.

In order to create an Azure AD app, we first need to have Azure AD set up for Office 365. For more information about setting up Azure AD, please check the article here.

Note: Screenshots and examples used in this blog refer to the latest App registrations (preview) page. The same is also possible through the old App registrations page, though the names and locations of some controls differ. Below are the front pages of both App registration pages.

App Registrations (preview)

App Registrations

After Azure AD is set up, we will open the latest App registrations (preview) section to set up the new app. This is also possible through the old App registrations page, but in my opinion the new one is much better laid out.

In the App registrations page, let's create a new registration. For account types, let's leave it at the default (current organisation only). We can leave the redirect URI blank or set it to localhost for now; we will change it later when the app is ready.


After the app is created, there are a few important sections to note. Below is a screenshot of the same.

  1. Application ID – This is the identity of the Azure AD app and will be used to connect to Office 365 using this app.
  2. Application secret – This is the password of the app. Please note that the secret is displayed only once, when it is created.
  3. API permissions – The permissions granted to the app for accessing various Office 365 services. The services can be accessed as a standalone app or with delegated user permissions.
  4. Reply URLs – Reply URLs are important for redirecting to the correct page after authentication succeeds. If building a custom application, point the redirect at the app's authentication redirect page.
  5. Directory ID or Tenant ID – This is the Azure AD directory ID for the tenancy.


The next step in the process is to create a secret (password) for the app. Click on the Certificates & secrets link, then click New client secret. While creating the secret, select "Never" as the expiry (if the secret does not need to expire) and then click create.


Note: The secret is only displayed on the page once, so copy it now to use later.

For API permissions, click on API permissions and then click Add a permission. This displays a list of pre-set services and the permissions required to access those services. Some of the Office 365 services are highlighted in the screenshot below.


After the API is selected, choose how the Azure AD app will run: with user permissions (delegated) or as a service without a user, as shown in the screenshot below.


The next step is for a tenant administrator to grant the Azure AD app permission to access the services. This can be done from the Azure AD directory or through an admin consent URL. For the Office 365 Management API, the admin consent URL is of the format below; please make sure the redirect URL is a valid URL and is added under Authentication.

https://login.microsoftonline.com/common/adminconsent?client_id={your_client_id}&redirect_uri={your_redirect_url}
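As an illustrative sketch, the admin consent URL can also be built programmatically. This assumes the common Azure AD v1 admin consent endpoint; the client ID and redirect URL values are placeholders to be replaced with your app's details:

```python
# Sketch: building the admin consent URL for an Azure AD app registration.
# The client_id and redirect_url arguments are placeholders, not real values.
from urllib.parse import urlencode

def admin_consent_url(client_id: str, redirect_url: str) -> str:
    """Build the Azure AD (v1) admin consent URL for an app registration."""
    query = urlencode({"client_id": client_id, "redirect_uri": redirect_url})
    return f"https://login.microsoftonline.com/common/adminconsent?{query}"

# Example: URL a tenant admin would open to grant consent
print(admin_consent_url("{your_client_id}", "https://localhost"))
```

Opening the generated URL as a tenant administrator presents the consent prompt for the requested permissions.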

After the consent is provided, the app is now ready to connect to Office 365 services.


The app can now be used in applications and custom code to access the Office 365 services configured in the Azure AD app.
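To illustrate how the Application ID, secret and Tenant ID noted earlier fit together, here is a minimal Python sketch of an app-only (client credentials) token request against the Azure AD v1 endpoint, followed by an authorised call to the Management Activity API. The functions only build the requests; all tenant, client and token values are placeholders:

```python
# Sketch: using the Azure AD app's credentials to call the Office 365
# Management API with an app-only (client credentials) token.
# Tenant, client ID and secret values are placeholders, not real credentials.
import urllib.request
from urllib.parse import urlencode

def build_token_request(tenant_id, client_id, client_secret,
                        resource="https://manage.office.com"):
    """Return a POST request for the Azure AD v1 token endpoint."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/token"
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "resource": resource,
    }).encode()
    return urllib.request.Request(url, data=body, method="POST")

def build_api_request(tenant_id, access_token, path):
    """Return an authorised GET request to the Management Activity API,
    e.g. path="subscriptions/list"."""
    url = f"https://manage.office.com/api/v1.0/{tenant_id}/activity/feed/{path}"
    return urllib.request.Request(
        url, headers={"Authorization": f"Bearer {access_token}"})

# Sending the token request with urllib.request.urlopen() returns JSON
# containing "access_token", which is then passed to build_api_request().
```

The same pattern applies to other resources (e.g. Graph) by swapping the `resource` value for that service.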


Selectively prevent and secure content from External sharing using Labels and DLP policies in Office 365

In a recent project, we had a requirement to prevent selected content from being shared externally while still allowing the flexibility of external sharing for all users. We were able to achieve this through the Security and Compliance Center. There are a few ways to do it: auto-classification (see the conclusion section below for more info), selective application via labels, or both.

Note: Until recently (Dec 2018), there was a bug in Office 365 that prevented DLP policies with labels from working. This is fixed in the latest release, so the feature is now available for use.

In this blog, we will look at the process where business users decide whether content can be shared externally. This is a nifty feature, because there are cases where content should be classified as secure even though it contains no sensitive information, such as contracts (without business info) or invoices (with only a business name). Conversely, there are cases where content containing sensitive information can be public because the company has decided to make it so. In the end, it is up to the discretion of the owner to decide the content's privacy, and hence this feature adds great value in these scenarios.

Note: If you would like to auto classify the content using Sensitive info types, please refer to the great article here. This process leverages the machine learning capabilities of Office 365 engine to identify secure content and automatically apply the security policy on it.

The first step is to create a retention label (somehow this doesn't work with security labels, so we must create a retention label). After creating the label, publish it to the selected locations; for our use case we will publish it to SharePoint sites only. While the label is being published, we can go ahead and create a DLP policy to prevent sharing with external users (I was not able to make it work while set to "Test with notifications", so set it to the "on" state to test as well). After this, when you apply the label to a document, it takes about 1–2 minutes to take effect, after which the content can no longer be shared with external users. Let's look at each of the above steps in detail below.


  1. The first step is to create a retention label in the Security and Compliance Center. To my astonishment, the selective process doesn't work with security labels but with retention labels, so we will create a retention label. Applying a retention period to the content is optional and is not required for this exercise.

  2. Secondly, we will publish the label to SharePoint sites, as per our requirement. I haven't tried the process with other sources such as Outlook and OneDrive, but it should work the same when applied.
    Note: It takes about a day for retention labels to publish to SharePoint sites, so please wait for the label to become available. We can move on to the next configuration step right away, but sharing will not be blocked until the label is published.
  3. Next, we create a DLP policy for the content. To create the DLP policy, follow the configuration steps below. Once created, we may have to turn it on in order to test it.
  4. The first step of policy creation is to select Custom policy for the DLP policy and give it a name.
  5. Then, we would select the sources to be included for this policy. In our case, it is only SharePoint.
  6. After the above, we set the rule settings for the DLP policy, where we select the label to which the policy applies, then configure the policy tips, block-sharing rules and override rules as shown in the screenshots below. We can also set the specified admins to be notified when such content is shared externally.
  7. Next, we could allow users to override the policy if needed. For this blog and our requirement, we decided not to allow it.
  8. After this is set up, we can turn on the DLP policy so that it starts applying the rules. There doesn't seem to be any wait time for the policy to apply, but give it some time if you don't see it taking effect right away.
  9. Now that the policy is enabled and the label is published, users can apply the label to content as shown in the screenshot below.
    Note: In some cases, it takes about 2-3 min for the policy to be effective on the content after applying the label so give it some time.
  10. Once the label takes effect (after the 2–3 minute wait), sharing the same content with an external user produces the following error.
