A tool to find mailbox permission dependencies

First published at https://nivleshc.wordpress.com

When planning to migrate mailboxes to Office 365, a lot of care must be taken around which mailboxes are moved together. The rule of thumb is “those that work together, move together”. The reason for this approach is that some permissions do not work cross-premises and can cause issues. For instance, if a mailbox has delegate permissions to another mailbox (permissions that are assigned using the Outlook email client) and one is migrated to Office 365 while the other remains on-premises, the delegate capability breaks, as it does not work cross-premises.

During the recent Microsoft Ignite, it was announced that a lot of features are coming to Office 365 which will help with these cross-premises access issues.

I have been using Roman Zarka’s Export-MailboxPermissions.ps1 script (part of the https://blogs.technet.microsoft.com/zarkatech/2015/06/11/migrate-mailbox-permissions-to-office-365/ bundle) to export all on-premises mailbox permissions, then using the output to decide which mailboxes move together. Believe me, this can be quite a challenge!

Recently, while having a casual conversation with one of my colleagues, I was introduced to an Excel spreadsheet that he had created. Being the Excel guru that he is, he was doing various VLOOKUPs into the outputs from Roman Zarka’s script to find out if the mailboxes he was intending to migrate had any permission dependencies with other mailboxes. I just stared at the spreadsheet in awe and uttered the words “dude, that is simply awesome!”

I was hooked on that spreadsheet. However, I soon found myself craving for it to do more, so I decided to take it upon myself to add some more features. Not being too savvy with Excel, though, I decided to use PowerShell instead. Thus was born Find_MailboxPermissions_Dependencies.ps1.

I will now walk you through the script and explain what it does.

 

  1. The first pre-requisite for Find_MailboxPermissions_Dependencies.ps1 is the four output files from Roman Zarka’s Export-MailboxPermissions.ps1 script (MailboxAccess.csv, MailboxFolderDelegate.csv, MailboxSendAs.csv, MailboxSendOnBehalf.csv)
  2. The next pre-requisite is details about the on-premises mailboxes. The on-premises Exchange environment must be queried and the details output to a csv file named OnPrem_Mbx_Details.csv. The csv must contain the following information (with the following column headings): “DisplayName, UserPrincipalName, PrimarySmtpAddress, RecipientTypeDetails, Department, Title, Office, State, OrganizationalUnit”
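
    One way to generate this file is sketched below. Note this is only a sketch: in on-premises Exchange, Department, Title and State come from Get-User rather than Get-Mailbox, so the two outputs are combined here (the State column is assumed to map to the StateOrProvince attribute)

    Get-Mailbox -ResultSize Unlimited | ForEach-Object {
        $user = Get-User $_.Identity
        [pscustomobject]@{
            DisplayName          = $_.DisplayName
            UserPrincipalName    = $_.UserPrincipalName
            PrimarySmtpAddress   = $_.PrimarySmtpAddress
            RecipientTypeDetails = $_.RecipientTypeDetails
            Department           = $user.Department
            Title                = $user.Title
            Office               = $_.Office
            State                = $user.StateOrProvince
            OrganizationalUnit   = $_.OrganizationalUnit
        }
    } | Export-Csv -NoTypeInformation -Path OnPrem_Mbx_Details.csv
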
  3. The last pre-requisite is information about mailboxes that are already in Office 365. Use PowerShell to connect to Exchange Online and then run the following command (where O365_Mbx_Details.csv is the output file):
    Get-Mailbox -ResultSize unlimited | Select DisplayName,UserPrincipalName,EmailAddresses,WindowsEmailAddress,RecipientTypeDetails | Export-Csv -NoTypeInformation -Path O365_Mbx_Details.csv 

    If there are no mailboxes in Office 365, then create a blank file and put the following column headings in it “DisplayName”, “UserPrincipalName”, “EmailAddresses”, “WindowsEmailAddress”, “RecipientTypeDetails”. Save the file as O365_Mbx_Details.csv

  4. Next, put the above files in the same folder and then update the variable $root_dir in the script with the path to the folder (the path must end with a backslash “\”)
  5. It is assumed that the above files have the following names
    • MailboxAccess.csv
    • MailboxFolderDelegate.csv
    • MailboxSendAs.csv
    • MailboxSendOnBehalf.csv
    • O365_Mbx_Details.csv
    • OnPrem_Mbx_Details.csv
  6. Now that all the inputs have been taken care of, run the script.
  7. The first task the script does is validate that the input files are present. If any of them are not found, the script outputs an error and terminates.
  8. Next, the files are read and stored in memory
  9. Now for the heart of the script. It goes through each of the mailboxes in the OnPrem_Mbx_Details.csv file and finds the following
    • all mailboxes that have been given SendOnBehalf permissions to this mailbox
    • all mailboxes that this mailbox has been given SendOnBehalf permissions on
    • all mailboxes that have been given SendAs permissions to this mailbox
    • all mailboxes that this mailbox has been given SendAs permissions on
    • all mailboxes that have been given Delegate permissions to this mailbox
    • all mailboxes that this mailbox has been given Delegate permissions on
    • all mailboxes that have been given Mailbox Access permissions on this mailbox
    • all mailboxes that this mailbox has been given Mailbox Access permissions on
    • whether the mailbox that this mailbox has given the above permissions to, or has permissions on, has already been migrated to Office 365
  10. The results are then output to a csv file (the name of the output file is of the format Find_MailboxPermissions_Dependencies_{timestamp of when the script was run}_csv.csv)
  11. The columns in the output file are explained below:

  • PermTo_OtherMbx_Or_FromOtherMbx? – Y if the mailbox has given permissions to, or has permissions on, other mailboxes; N if there are no permission dependencies for this mailbox
  • PermTo_Or_PermFrom_O365Mbx? – TRUE if the mailbox that this mailbox has given permissions to, or has permissions on, is already in Office 365
  • Migration Readiness – a color code based on the migration readiness of this permission (explained further below)
  • DisplayName – the display name of the on-premises mailbox for which the permission dependency is being found
  • UserPrincipalName – the UserPrincipalName of the on-premises mailbox for which the permission dependency is being found
  • PrimarySmtp – the PrimarySmtp of the on-premises mailbox for which the permission dependency is being found
  • MailboxType – the mailbox type of the on-premises mailbox for which the permission dependency is being found
  • Department – the department the on-premises mailbox belongs to (inherited from the Active Directory object)
  • Title – the title that this on-premises mailbox has (inherited from the Active Directory object)
  • SendOnBehalf_GivenTo – email address of the mailbox that has been given SendOnBehalf permissions to this on-premises mailbox
  • SendOnBehalf_GivenOn – email address of the mailbox that this on-premises mailbox has been given SendOnBehalf permissions to
  • SendAs_GivenTo – email address of the mailbox that has been given SendAs permissions to this on-premises mailbox
  • SendAs_GivenOn – email address of the mailbox that this on-premises mailbox has been given SendAs permissions on
  • MailboxFolderDelegate_GivenTo – email address of the mailbox that has been given Delegate access to this on-premises mailbox
  • MailboxFolderDelegate_GivenTo_FolderLocation – the folders of the on-premises mailbox that the delegate access has been given to
  • MailboxFolderDelegate_GivenTo_DelegateAccess – the type of delegate access that has been given on this on-premises mailbox
  • MailboxFolderDelegate_GivenOn – email address of the mailbox that this on-premises mailbox has been given Delegate access to
  • MailboxFolderDelegate_GivenOn_FolderLocation – the folders that this on-premises mailbox has been given delegate access to
  • MailboxFolderDelegate_GivenOn_DelegateAccess – the type of delegate access that this on-premises mailbox has been given
  • MailboxAccess_GivenTo – email address of the mailbox that has been given Mailbox Access to this on-premises mailbox
  • MailboxAccess_GivenTo_DelegateAccess – the type of Mailbox Access that has been given on this on-premises mailbox
  • MailboxAccess_GivenOn – email address of the mailbox that this mailbox has been given Mailbox Access to
  • MailboxAccess_GivenOn_DelegateAccess – the type of Mailbox Access that this on-premises mailbox has been given
  • OrganizationalUnit – the Organizational Unit of the on-premises mailbox

The color codes in the Migration Readiness column correspond to the following:

  • LightBlue – this on-premises mailbox has no permission dependencies and can be migrated
  • DarkGreen  – this on-premises mailbox has a Mailbox Access permission dependency on another mailbox. It can be migrated while the other mailbox remains on-premises, without experiencing any issues, as Mailbox Access permissions are supported cross-premises.
  • LightGreen – this on-premises mailbox can be migrated without issues as the permission dependency is on a mailbox that is already in Office 365
  • Orange – this on-premises mailbox has SendAs permissions given to/or on another on-premises mailbox. If both mailboxes are not migrated at the same time, the SendAs capability will be broken. Lately, it has been noticed that this capability can be restored by re-applying the SendAs permissions to both the migrated and on-premises mailbox post migration
  • Pink – the on-premises mailbox has FolderDelegate given to/or on another on-premises mailbox. If both mailboxes are not migrated at the same time, the FolderDelegate capability will be broken. A possible workaround is to replace the FolderDelegate permission with Full Mailbox access as this works cross-premises, however there are privacy concerns around this workaround as this will enable the delegate to see all the contents of the mailbox instead of just the folders they had been given access on.
  • Red – the on-premises mailbox has SendOnBehalf permissions given to/or on another on-premises mailbox. If both mailboxes are not migrated at the same time, the SendOnBehalf capability will be broken. A possible workaround could be to replace SendOnBehalf with SendAs however the possible implications of this change must be investigated

Yay, the output has now been generated. All we need to do now is to make it look pretty in Excel 🙂

Carry out the following steps

  • Import the output csv file into Excel, using the semicolon “;” as the delimiter (commas couldn’t be used as the delimiter because fields such as department and title sometimes contain them, which causes issues with the output file)
  • Create Conditional Formatting rules for the Migration Readiness column so that the fill color of each cell corresponds to the word in that column (for instance, if the word is LightBlue then create a rule to apply a light blue fill to the cell)
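
For reference, producing semicolon-delimited output is just a matter of passing -Delimiter to Export-Csv – a sketch, where $results and $outputFile stand in for the script’s own variables:

$results | Export-Csv -Path $outputFile -Delimiter ";" -NoTypeInformation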

That’s it, folks! The mailbox permissions dependency spreadsheet is now ready. It provides a single-pane view of all the permissions across your on-premises mailboxes and gives a color-coded analysis of which mailboxes can be migrated on their own without any issues, and which might experience issues if they are not migrated in the same batch as the ones they have permission dependencies on.

In the output file, for each on-premises mailbox, each line represents a permission dependency (unless the PermTo_OtherMbx_Or_FromOtherMbx? column is N). If more than one set of permissions applies to an on-premises mailbox, these are displayed consecutively underneath each other.

It is imperative that the migration readiness of the mailbox be evaluated based on the migration readiness of all the permissions associated with that mailbox.

Find_MailboxPermissions_Dependencies.ps1 can be downloaded from GitHub

A sample of the spreadsheet that was created using the output from the Find_MailboxPermissions_Dependencies.ps1 script can be downloaded from https://github.com/nivleshc/arm/blob/master/Sample%20Output_MailboxPermissions%20Dependencies.xlsx

I hope this script comes in handy when you are planning your migration batches and helps alleviate some of the headache that this task brings with it.

Till the next time, have a great day 😉

Azure Functions Cold Start Workaround

Intro

I love Azure Functions. So much power for so little effort or cost. The only downside is that the consumption model that keeps the cost so dirt-cheap means that unless you are using your Function constantly (in which case, you might be better off with the non-consumption options anyway), you will often be hit with a long delay as your Function wakes up from hibernation.

So very cold…

This isn’t a big deal if you are dealing with a fire-and-forget queue trigger scenario, but if you have a web app that is calling the HTTP trigger and you need to wait for the Function to do its job before responding with a 200 OK… that’s a long wait (well over 15 seconds in my experience with a PowerShell Function that loads a bunch of modules).

Now, the blunt way to mitigate this (as suggested by some in GitHub issues on the subject) is to set up a timer function in the same Function App that runs every 5 minutes to keep things warm. To me this seems like a wasteful and potentially expensive approach. For my use-case, there was a better way that would work.

The Classic CRUD Use-case

Here’s my use case: I’m building some custom SharePoint forms for a customer and using my preferred JS framework, good old Angular 1.x. Don’t believe the hype around the newer frameworks, ng1 still gets the job done with no performance problems at the scale I’m dealing with in SharePoint. It also comes with a very strong ecosystem of libraries to support doing amazing things in the browser. But that’s a topic for another blog.

Anyway, the only thing I couldn’t do effectively on the client side was break permissions on the list item created using the form and secure it to the creator and some other users (e.g. their manager). You need elevated permissions for that. I called on the awesome power of the PnP PowerShell library (specifically the Set-PnPListItemPermission cmdlet) to do this and wrapped it in a PowerShell Azure Function:
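
A minimal sketch of what such a function might look like, using the v1 ‘experimental’ PowerShell model where $req is a file containing the request body and $res is the file the response is written to (the site URL, list title and the SP_USERNAME/SP_PASSWORD app settings are my assumptions, not the actual values used):

# run.ps1 - HTTP-triggered PowerShell Azure Function
$requestBody = Get-Content $req -Raw | ConvertFrom-Json

# build a credential from app settings and connect with PnP PowerShell
$securePwd = ConvertTo-SecureString $env:SP_PASSWORD -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential ($env:SP_USERNAME, $securePwd)
Connect-PnPOnline -Url $requestBody.siteUrl -Credentials $cred

# break inheritance on the new item and secure it to the nominated users
Set-PnPListItemPermission -List $requestBody.listTitle -Identity $requestBody.itemId `
    -User $requestBody.userEmail -AddRole 'Contribute' -ClearExisting

Out-File -Encoding Ascii -FilePath $res -InputObject 'OK'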

Pretty simple. Nice and clean – gets the job done. My Angular service calls this right after saving the item based on the form input by the user. If the Azure Function is starting from cold, that adds an extra 20 seconds to the save operation. Unacceptable and avoidable.

A more nuanced warmup for those who can…

This seemed like such an obvious solution once I hit on it – but I hadn’t thought of it before. When the form is first opened, it’s pretty reasonable to assume that (unless the form is a monster) the user will hit that submit/save button within 5 minutes. So that’s when we hit our Function with a modified HTTP payload of ‘WARMUP’.

Just a simple ‘if’ statement to bypass the function body when the ‘WARMUP’ payload is detected. The Function immediately responds with a 200. We ignore that – this is effectively fire-and-forget. Yes, this would be even simpler if we had a separate warmup Function that did absolutely nothing except warm up the Function App, but I like that this ensures that my dependent PnP dlls (in the ‘modules’ folder of my Function) have been fired up on the current instance before I hit the function for real. Maybe it makes no difference. Don’t really care – this is simple enough anyway.
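
A sketch of that guard, under the same assumptions as the function above – it sits at the top of run.ps1, before any modules are loaded or PnP work is done:

$raw = Get-Content $req -Raw
if ($raw -match 'WARMUP') {
    # warmup call - respond immediately and skip the real work
    Out-File -Encoding Ascii -FilePath $res -InputObject 'Warmed up'
    return
}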

Here’s the Angular code that calls the Function (both as a warmup and for real):

Anyway, nothing revolutionary here, I know. But I hadn’t come across this approach before, so I thought it was worth writing up as it suits this standard CRUD forms over data scenario so nicely.

Till next time!

‘Generic’ LDAP Connector for Azure AD Connect

I’m working for a large corporate that has a large user account store in Oracle Unified Directory (LDAP). They want to use these existing accounts and synchronise them to Azure Active Directory for Azure application services (such as future Office 365 services).

Microsoft state here that Azure Active Directory Connect (AAD Connect) will, in a ‘Future Release’ version, provide native LDAP support (“Connect to single on-premises LDAP directory”), so timing-wise I’m in a tricky position – do I guide my customer to attempt to use the current version (at the time of writing: v1.1.649.0), or wait for this ‘future release’ version?

This blog may not have a very long lifespan – indeed a new version of AAD Connect might be released at any time with native LDAP tree support, so be sure to research AAD Connect prior to providing a design or implementation.

My customer doesn’t have any requirement for ‘write back’ services (where data is written back from Azure Active Directory to the local directory user store) so this blog post covers just a straight export from the on-premises LDAP into Azure Active Directory.

I contacted Microsoft and they stated it’s supported ‘today’ to provide connectivity from AAD Connect to LDAP, so I’ve spun up a Proof of Concept (PoC) lab to determine how to get it working in this current version of AAD Connect.

Good news first, it works!  Bad news, it’s not very well documented so I’ve created this blog just to outline my learnings in getting it working for my PoC lab.

I don’t have access to the Oracle Unified Directory in my PoC lab, so I substituted in Active Directory Lightweight Directory Services (AD LDS) so my configuration reflects the use of AD LDS.

Tip #1 – You still need an Active Directory Domain Service (AD DS) for installation purposes

During the AAD Connect installation wizard (specifically the ‘Connect your directories’ page), it expects to connect to an AD DS forest to progress the installation.  In this version of AAD Connect, the ‘Directory Type’ listbox only shows ‘Active Directory’ – I expect it to include more options when the ‘Future Release’ version is available.

I created a single Domain Controller (forest root domain) and used the local HOST file of my AAD Connect Windows Server to point the forest root domain FQDN e.g. ‘forestAD.internal’ to the IP address of that Domain Controller.
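
The HOSTS file entry (C:\Windows\System32\drivers\etc\hosts) is just a one-liner of the following form, where the IP address is an example:

10.0.0.4    forestAD.internal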

I did not need to ‘join’ my AAD Connect Windows Server to that domain to complete the installation, which will make it easier to decommission this AD DS (if it’s put into Production) if Microsoft releases an AAD Connect version that does not require AD DS.

Screen Shot 2017-10-25 at 10.39.34 am

Tip #2 – Your LDAP Connector service account needs to be a part of the LDAP tree

After getting AAD Connect installed with mostly the default options, I then went into the “Synchronization Engine” to install the ‘Generic LDAP’ connector that is available by clicking on the ‘Connector’ tab and clicking ‘Create’ under the ‘Actions’ heading on the right hand side:

Screen Shot 2017-11-01 at 11.50.41 am

For the ‘User Name’ field, I created and used an account (in this example: ‘CN=FIMLDSAdmin,DC=domain,DC=internal’) that was part of the LDAP tree itself, instead of the installation account created by AD LDS which is part of the local Windows Server user store.

If I tried to use a local Windows Server user, i.e. ‘Server\username’, it would just give me generic connection errors and not bind to the LDAP tree, even if that user had full administrative rights to the AD LDS tree.  I gave up troubleshooting this – so I’ve resigned myself to needing a service account in the LDAP tree itself.

Screen Shot 2017-11-03 at 9.50.55 am

Tip #3 – Copy (and modify) the existing Inbound Rules for AD

When you create a custom Connector, the data mapping rules for that Connector are managed in the ‘Synchronization Rules Editor’ program and are not located in the Synchronization Engine.  There are basically two choices at the moment for a custom connector: copy existing rules from another Connector, or create a brand new (blank) rule set for that connector (per object type, e.g. ‘User’).

I chose the first option – I copied the three existing ‘Inbound’ Rules from the AD DS connector:

  • In from AD – User Join
  • In from AD – User AccountEnabled
  • In from AD – User Common

If you ‘edit’ each of these existing AD DS rules, you’ll get a choice to create a copy of that rule set and disable the original.  Selecting ‘Yes’ will create a copy of that rule, and you can then modify the ‘Connected System’ to use the LDAP Connector instead of the AD DS Connector:

Screen Shot 2017-11-03 at 10.10.51 am

I also modified the priority numbering of each of the rules from ‘201’ to ‘203’ to have these new rules applied last in my Synchronization Engine.

I ended up with the configuration of these three new ‘cloned’ rules for my LDAP Connector:

Screen Shot 2017-11-03 at 10.04.27 am

I found I had to edit or remove any rules that required the following:

  • Any rule looking for ‘sAMAccountName’, I modified the text of the rule to look for ‘UID’ instead (which is the schema attribute name most closely resembling that account name field in my custom LDAP)
  • I deleted the following rule from the Transformation section of the ‘In from AD – User Join’ cloned rule.  I found that it was preventing any of my LDAP accounts reaching Azure AD:

Screen Shot 2017-11-03 at 10.32.39 am

  • In the ‘In from AD – User AccountEnabled’ rule, I modified the existing Scoping Filter to not look at the ‘userAccountControl’ bit, and instead used the AD LDS attribute ‘msDS-UserAccountDisabled = FALSE’:

Screen Shot 2017-11-03 at 10.35.36 am

There are obviously many, many ways of configuring the rules for your LDAP tree, but I thought I’d share how I did it with AD LDS.  The reason I ‘cloned’ existing rules was primarily to protect the data integrity of Azure AD.  There are many, many default data mapping rules for Azure AD that come with the AD DS rule set – a lot of them use ‘TRIM’ and ‘LEFT’ functions to ensure the data reaches Azure AD with the correct formatting.

It will be interesting to see how Microsoft tackles these rules sets in a more ‘wizard’ driven approach – particularly since LDAP trees can be highly customised with unique attribute names and data approaches.

Before closing the ‘Synchronization Rules Editor’, don’t forget to re-enable each of the (e.g. AD DS) Connector rules you previously cloned, because the Synchronization Rules Editor assumes you’re not modifying the Connector they’re using.  Select the original rule you cloned, and uncheck the ‘Disabled’ box.

Tip #4 – Create ‘Delta’ and ‘Full’ Run Profiles

Lastly, you might be wondering: how does the AAD Connect Scheduler (the one based entirely in PowerShell, with seemingly no customisation commands) pick up the new LDAP Connector?

Well, it’s simply a matter of naming your ‘Run Profiles’ in the Synchronization Engine with the text: ‘Delta’ and ‘Full’ where required.  Select ‘Configure Run Profiles’ in the Engine for your LDAP Connector:

Screen Shot 2017-11-03 at 10.16.29 am

I then created ‘Run Profiles’ with the same naming convention as the ones created for AD DS and Azure AD:

Screen Shot 2017-11-03 at 10.17.58 am

Next time I ran an ‘Initial’ (which executes ‘Full Import’ and ‘Full Sync.’ jobs) or a ‘Delta’ AD Scheduler job (I’ve previously blogged about the AD Scheduler, but you can find the official Microsoft doc on it here), my new LDAP Connector Run Profiles were executed automatically along with the AD DS and AAD Connector Run Profiles:

Screen Shot 2017-11-03 at 10.19.57 am
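
For reference, the scheduler can also be invoked manually with the built-in cmdlets – ‘Initial’ executes the ‘Full’ Run Profiles and ‘Delta’ the ‘Delta’ ones:

# delta synchronisation cycle
Start-ADSyncSyncCycle -PolicyType Delta
# full import and full synchronisation
Start-ADSyncSyncCycle -PolicyType Initial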

Before I finish up: my colleague David Minnelli has found IDAMPundit’s blog post about a current bug when upgrading to AAD Connect v1.1.649.0 if you already have an LDAP Connector.  In a nutshell, just open up the existing LDAP Connector, step through each page and re-save it to clear the issue.

** Edit: I have had someone ask how I’m authenticating with these accounts. I’m leveraging an existing SecureAuth service that uses the WS-Federation protocol to communicate with Azure AD.  So ‘Federation’, basically – I’m not extracting passwords out of this LDAP or doing any kind of password hash synchronisation.  Thanks for the question though!

Hope this helped, good luck!

 

Recursive Lambda

In this blog post we explore a recursive pattern for AWS Lambda. This pattern allows us to tackle potentially long-running tasks, such as processing multiple items in an input or tracking the progress of a long-running task.

Read More

Global Navigation and Branding for Modern Site using SharePoint Framework Extensions

Last month at Microsoft Ignite 2017, SharePoint Framework Extensions became generally available (GA), giving us whole new capabilities for customising Modern Team sites and Communication sites.

Even though there are lots of PnP examples of SPFx extensions, while presenting at the Office 365 Bootcamp, Melbourne and running a hands-on lab, I realised not many people are aware of the new capabilities that SPFx extensions provide. One of the burning questions we often get from clients is whether we can have a custom header, footer and global navigation in modern sites, and the answer is YES. Here is an example where the Global Navigation has been derived from Managed Metadata:

Communication Site with header and footer:

Modern Team Site with header and footer (same navigation):

With the latest Yeoman SharePoint generator, along with the SPFx Web Part we now have options to create extensions:

To create header and footer for the modern site, we need to select the Application Customizer extension.

After the solution has been created, one noticeable difference is that TenantGlobalNavBarApplicationCustomizer extends BaseApplicationCustomizer rather than BaseClientSideWebPart:

export default class TenantGlobalNavBarApplicationCustomizer
extends BaseApplicationCustomizer

Basic Header and Footer

Now, to create a very basic Application Customizer with a header/footer, make sure to import React, ReactDom, PlaceholderContent and PlaceholderName:

import * as ReactDom from 'react-dom';
import * as React from 'react';
import {
  BaseApplicationCustomizer,
  PlaceholderContent,
  PlaceholderName
} from '@microsoft/sp-application-base';

In the onInit() function, the top (header) and bottom (footer) placeholders need to be created:

const topPlaceholder =  this.context.placeholderProvider.tryCreateContent(PlaceholderName.Top);
const bottomPlaceholder =  this.context.placeholderProvider.tryCreateContent(PlaceholderName.Bottom);

Create the appropriate elements for the header and footer:

const topElement =  React.createElement('h1', {}, 'This is Header');
const bottomElement =   React.createElement('h1', {}, 'This is Footer');

Those elements can then be rendered within the placeholders’ domElement:

ReactDom.render(topElement, topPlaceholder.domElement);
ReactDom.render(bottomElement, bottomPlaceholder.domElement);

If you now run the solution:

  • gulp serve --nobrowser
  • Copy the id from the src\extensions\.manifest.json file, e.g. “7650cbbb-688f-4c62-b5e3-5b3781413223”
  • Open a modern site and append the following to the URL (change the id as per your solution):
    ?loadSPFX=true&debugManifestsFile=https://localhost:4321/temp/manifests.js&customActions={"7650cbbb-688f-4c62-b5e3-5b3781413223":{"location":"ClientSideExtension.ApplicationCustomizer"}}

The above six lines of code will give the following outcome:

Managed Metadata Navigation

Now back to the first example. That solution has been copied from an SPFx sample and updated.

To get the above header and footer:

Go to command line and run

  • npm i
  • gulp serve --nobrowser
  • To see the code running, go to the SharePoint Online Modern site and append the following with the site URL:

    ?loadSPFX=true&debugManifestsFile=https://localhost:4321/temp/manifests.js&customActions={"b1efedb9-b371-4f5c-a90f-3742d1842cf3":{"location":"ClientSideExtension.ApplicationCustomizer","properties":{"TopMenuTermSet":"TopNav","BottomMenuTermSet":"Footer"}}}

Deployment

Create an Office 365 CDN
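
If you are using the Office 365 Public CDN, it can be enabled from the SharePoint Online Management Shell along these lines (a sketch – the origin URL is an example, not a required value):

Connect-SPOService -Url https://<YourTenantName>-admin.sharepoint.com
Set-SPOTenantCdnEnabled -CdnType Public -Enable $true
Add-SPOTenantCdnOrigin -CdnType Public -OriginUrl "sites/<YourSiteName>/<YourLibName>"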

Update the “cdnBasePath” in the config\write-manifests.json file to point to the new location, e.g.

"cdnBasePath":"https://publiccdn.sharepointonline.com/<YourTenantName>.sharepoint.com/<YourSiteName>/<YourLibName>/<FolderNameIfAny>"

In the command line, run

  • gulp bundle --ship
  • gulp package-solution --ship

and upload all artefacts from \temp\deploy\ to the CDN location

Upload \sharepoint\solution.sppkg to the App Library

Go to the Modern Site > Site Content > New > App > select and add the app:

Hopefully this post has given an overview of how to implement SPFx Application Customizers. There are many samples available in the SharePoint GitHub repositories.

Using Visual Studio with Github to Test New Azure CLI Features

Following the Azure Managed Kubernetes announcement yesterday, I immediately upgraded my Azure CLI on Windows 10 so I could try it out.

Unfortunately I discovered there was a bug with retrieving credentials for your newly created Kubernetes cluster – the command bombs with the following error:

C:\Users\rafb> az aks get-credentials --resource-group myK8Group --name myCluster
[Errno 13] Permission denied: 'C:\\Users\\rafb\\AppData\\Local\\Temp\\tmpn4goit44'
Traceback (most recent call last):
 File "C:\Program Files (x86)\Microsoft SDKs\Azure\CLI2\lib\site-packages\azure\cli\main.py", line 36, in main
 cmd_result = APPLICATION.execute(args)
(...)

A Github Issue had already been created by someone else and, a few hours later, the author of the offending code submitted a Pull Request (PR) fixing the issue. While this is great, until the PR is merged into master and a new release of the Azure CLI for Windows is out, the fix will be unavailable.

Given this delay I thought it would be neat to setup an environment in order to try out the fix and also contribute to the Azure CLI in future. Presumably any other GitHub project can be accessed this way so it is by no means restricted to the Azure CLI.

Prerequisites

I will assume you have a fairly recent version of Visual Studio 2017 running on Windows 10, with Python support installed. By going into the Tools menu and choosing “Get Extensions and Features” in Visual Studio, you can add Python support as follows:

Install Python

By clicking in the “Individual Components” tab in the Visual Studio installer, install the “GitHub Extension for Visual Studio”:

Install GitHub support

You will need a GitHub login, and for the purpose of this blog post you will also need a Git client such as Git Bash (https://git-for-windows.github.io/).

Getting the Code

After launching Visual Studio 2017, you’ll notice a “Github” option under “Open”:

Screen Shot 2017-10-26 at 13.51.56

If you click on Github, you will be asked to enter your Github credentials and you will be presented with a list of repositories you own, but you won’t be able to clone the Azure CLI repository (unless you have your own fork of it).

The trick is to go to the Team Explorer window (View > Team Explorer) and choose to clone to a Local Git Repository:

Clone to local Repository

After a little while the Azure CLI Github repo (https://github.com/Azure/azure-cli) will be cloned to the directory of your choosing (“C:\Users\rafb\Source\Repos\azure-cli2” in my case).

Running Azure CLI from your Dev Environment

Before we can start hacking the Azure CLI, we need to make sure we can run the code we have just cloned from Github.

In Visual Studio, go to File > Open > Project/Solutions and open
“azure-cli2017.pyproj” from your freshly cloned repo. This type of project is handled by the Python Support extension for Visual Studio.

Once opened you will see this in the Solution Explorer:

Solution Explorer

As is often the case with Python projects, you usually run programs in a virtual environment. Looking at the screenshot above, the environment pre-configured in the Python project has a little yellow warning sign. This is because it is looking for Python 3.5 and the Python Support extension for Visual Studio comes with 3.6.

The solution is to create an environment based on Python 3.6. In order to do so, right click on “Python Environments” and choose “Add Virtual Environment…”:

Add Virtual Python Environment

Then choose the default in the next dialog box  (“Python 3.6 (64 bit)”) and click on “Create”:

Create Environment

After a few seconds, you will see a brand new Python 3.6 virtual environment:

View Virtual Environment

Right click on “Python Environments” and Choose “View All Python Environments”:

View All Python Environments

Make sure the environment you created is selected and click on “Open in PowerShell”. The PowerShell window should open straight in the cloned repository directory – the “src/env” sub-directory, where your newly created Python Environment lives.

Run the “Activate.ps1” PowerShell script to activate the Python Environment.

Activate Python

Now we can get the dependencies for our version of Azure CLI to run (we’re almost there!).

Run “python scripts/dev_setup.py” from the base directory of the repo as per the documentation.

Getting the dependencies will take a while (in the order of 20 to 30 minutes on my laptop).

zzzzzzzzz

Once the environment is set up, you can execute the “az” command and it is indeed the version you have cloned that executes:

It's alive!

Trying Pull Request Modifications Straight from Github

Running “git status” will show which branch you are on. Here it is “dev”, which does not have the “aks get-credentials” fix yet:

dev branch

And indeed, we are getting this now familiar error:

MY EYES!

Back in Visual Studio, go to the Github Window (View > Other Windows > GitHub) and enter your credentials if need be.

Once authenticated click on the “Pull Requests” icon and you will see a list of current Pull Requests against the Azure CLI Github repo:

Don't raise issues, create Pull Requests!

Now, click on pull request 4762, which has the fix for the “aks get-credentials” issue, and you will see an option to check out the branch containing the fix. Click on it:

Fixed!

And lo and behold, we can see that the branch has been changed to the PR branch and that the fix is working:

Work Work Work

You now have a dev environment for the Azure CLI with the ability to test the latest features even before they are merged and in general use!

You can also easily visualise the differences between the code in dev and PR branches, which is a good way to learn from others 🙂

Diff screen

The same environment can probably be used to create Pull Requests straight from Visual Studio although I haven’t tried this yet.

HoloLens – Using the Windows Device Portal

Windows Device Portal is a web-based tool first introduced by Microsoft in 2016. Its main purpose is to facilitate application management, performance management and advanced remote troubleshooting for Windows devices.  The Device Portal is a lightweight web server built into the Windows device, which can be enabled in developer mode. On a HoloLens, developer mode can be enabled from the settings application (Settings > Update > For Developers > Developer Mode).

Connecting the Device

Once the developer mode on the device is enabled, the following steps must be performed to connect to the device portal.

  1. Connect the device to the same Wi-Fi network as the computer used to access the portal. Wi-Fi settings on the HoloLens can be reached through the settings application (Settings > Network & Internet > Wi-Fi).
  2. Note the IP address assigned to the HoloLens. This can be found in the ‘Advanced Options’ screen under the Wi-Fi settings (Settings > Network & Internet > Wi-Fi > Advanced Options).
  3. On the web browser of your computer, navigate to “https://<HoloLens IP Address>”
  4. As you don’t have a valid certificate installed, you will be prompted with a warning. Proceed to the website by ignoring the warning.

Holo WDP1

  5. For the first connection, you will be prompted with a credential reset screen. Your HoloLens should now black out, with only a number displayed in the centre of your viewport. Note this number and enter it in the textbox highlighted below.

Holo WDP2

  6. You should also enter a ‘username’ and ‘password’ for the portal to use when accessing the device in future.
  7. Click the ‘Pair’ button.
  8. You will now be prompted with a form to enter the username and password. Fill in this form and click on the ‘Login’ button.

Holo WDP3

  9. A successful login will take you to the home screen of the Windows Device Portal

Holo WDP4

Portal features

The portal has a main menu (towards the left of the screen) and a header menu (which also serves as a status bar). The following diagram elaborates on the controls available in the header menu:

Holo WDP5

The main menu is divided into three sections

  • Views – groups the Home page, 3D view and the Mixed Reality Capture page, which can be used to record the device perspective.
  • Performance – groups the tools primarily used for troubleshooting applications installed on the device, or the device itself (hardware and software).
  • System – functionalities to retrieve and control device behaviour, ranging from managing the applications installed on the device to controlling the network settings.

A detailed description of the functionalities available under each menu item can be found on the following link:

https://developer.microsoft.com/en-us/windows/mixed-reality/using_the_windows_device_portal

Tips and tricks

The Windows Device Portal is an elaborate tool and some of its functions are tailor-made for advanced troubleshooting. The following are a few features of interest to get started with development.

  • Device name – the device name can be accessed from the home screen under the Views category. It is helpful in the long run to meaningfully name your HoloLens for easier identification over a network. You can change this from the Device Portal.

Holo WDP6

  • Sleep settings – HoloLens has a very limited battery life, ranging from 2 to 6 hours depending on the applications running on it. This makes the sleep settings important. You can control the sleep settings from the home screen of the Device Portal. Keep in mind that during development you will usually leave the device plugged in (using a USB cable to your laptop/desktop). To avoid the pain of repeatedly waking the device, it is recommended to set the ‘sleep time when plugged in’ setting to a larger number.

Holo WDP7

  • Mixed reality capture – this feature is very handy for capturing the device perspective for application demonstrations. You can capture the video through the PV (Picture/Video) camera, the audio from the microphones, audio from the app and the holograms projected by the app in a collated media stream. The captured media content is saved on the device and can be downloaded to your machine from the portal. However, the camera channel cannot be shared between two applications, so if you have an application using the device’s camera, your recording will not work by default.
  • Capturing ETW traces – the performance tracking feature under the Performance category is very handy for collecting ETW traces for a specific period. The portal enables you to start and stop the trace monitoring. The events captured during this period are logged into an ETL file which can be downloaded for further analysis.
  • Application management – the Device Portal provides a set of features to manage the applications installed on the HoloLens, including deploying, starting and stopping applications. This is very useful when you want to remotely launch an application for a user during a demo.

Holo WDP8

  • Virtual keyboard – HoloLens is not engineered to efficiently capture keyboard input. Pointing at a virtual keyboard using HoloLens and making an ‘air tap’ for every key press can be very stressful. The Device Portal gives you an alternative way to input text into your application from the ‘virtual input’ screen.

Holo WDP9

Windows Device Portal APIs

Windows Device Portal is built on top of a collection of public REST APIs. The collection consists of a set of core APIs common to all Windows devices, and specialised sets of APIs for specific devices such as HoloLens and Xbox. The HoloLens Device Portal APIs are bundled under the following controllers:

  • App deployment – Deploying applications on HoloLens
  • Dump collection – Managing crash dumps from installed applications
  • ETW – Managing ETW events
  • Holographic OS – Managing the OS level ETW events and states of the OS services
  • Holographic Perception – Manage the perception client
  • Holographic Thermal – Retrieve thermal states
  • Perception Simulation Control – Manage perception simulation
  • Perception Simulation Playback – Manage perception playback
  • Perception Simulation Recording – Manage perception recording
  • Mixed Reality Capture – Manage A/V capture
  • Networking – Retrieve information about IP configuration
  • OS Information – Retrieve information about the machine
  • Performance data – Retrieve performance data of a process
  • Power – Retrieve power state
  • Remote Control – Remotely shutdown or restart the device
  • Task Manager – Start/Stop an installed application
  • Wi-Fi Management – Manage Wi-Fi connections

The following link captures the documentation for HoloLens Device Portal APIs.

https://developer.microsoft.com/en-us/windows/mixed-reality/device_portal_api_reference.
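
As a quick illustration, the core OS information endpoint can be queried from PowerShell along the following lines (a sketch – Device Portal uses basic authentication with the username/password created during pairing, and its self-signed certificate must be trusted or bypassed):

# bypass the self-signed certificate check for this session (Windows PowerShell)
[System.Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }

$cred = Get-Credential    # the credentials set when pairing with the portal
Invoke-RestMethod -Uri "https://<HoloLens IP Address>/api/os/info" -Credential $cred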

The library behind this tool is an open source project hosted on GitHub. Following is a link to the source repository

https://github.com/Microsoft/WindowsDevicePortalWrapper

Continuous Deployment for Docker with VSTS and Azure Container Registry

siliconvalve

I’ve been watching with interest the growing maturity of Containers, and in particular their increasing penetration as a hosting and deployment artefact in Azure. While I’ve long believed them to be the next logical step for many developers, until recently they have had limited appeal to many everyday developers as the tooling hasn’t been there, particularly in the Microsoft ecosystem.

Starting with Visual Studio 2015, and with the support of Docker for Windows I started to see this stack as viable for many.

In my current engagement we are starting on new features and decided that we’d look to ASP.Net Core 2.0 to deliver our REST services and host them in Docker containers running in Azure’s Web App for Containers offering. We’re heavy users of Visual Studio Team Services, and given Microsoft’s focus on Docker we didn’t see that there would be any blockers.

Our flow at high level is…


Royal Flying Doctor Service empowers employees with Office 365

Customer Overview

The Royal Flying Doctor Service (Queensland) is a part of one of the largest and most comprehensive aero-medical organisations in the world. It provides emergency and primary health care services for those living in rural, remote and regional Queensland. A not-for-profit organisation, the Flying Doctor relies on the support of the community and in Queensland the Service needs to raise over $12 million per year to keep the Flying Doctor flying.

Business Situation

The Royal Flying Doctor Service (Queensland) provides patient care to 95,000 people in remote communities, as well as life-saving transfers for those in need of organ transplants and heart surgery. Its 19 aircraft fly more than 22,000 hours per year, operating 24 hours a day and spanning thousands of kilometres.

Their internal infrastructure was dated, unreliable and expensive, which hindered the quality of the emergency evacuation and transportation services they provide. Issues in network performance and security provisioning led to staff complaints and impacted service delivery.

RFDS had recently adopted a cloud-first strategy to encourage innovation and harness technology to enhance the delivery of emergency and primary health services for patients. RFDS wanted to empower their employees with 24/7 access to find information within seconds to help them fix a problem or receive important updates. The solution would need to maintain a central point of administration for their electronic flight bag (EFB) devices, which ensure flight crew can perform flight management tasks and access critical aircraft documentation.

Charged with a mission to transform their workplace through a modern set of collaboration tools, Kloud partnered with RFDS to deliver a solution to provide flexible, transparent and simplified communications and connections across their business.

Solution

Kloud proposed a hybrid approach for the deployment and adoption of Office 365 services including Exchange Online, OneDrive for Business, Skype for Business and Yammer. Utilising Telstra’s Cloud Gateway, RFDS is able to leverage Microsoft Azure to host their ADFS farm and Azure AD Connect server, which is integral to this operating model. Transitioning RFDS (Queensland) to Office 365 and Azure has enabled the business to provide staff with a more robust, up-to-date messaging system and collaboration services that will lead to improved productivity and user satisfaction.

The change to Office 365 provides RFDS (Queensland) with an evergreen environment, delivering more functionality and features than they ever had previously, resolving many operational and infrastructure concerns, and proving more cost effective than running their own Exchange service. Ultimately, the solution means critical patient and business information is centrally maintained in a secure, cloud-first infrastructure with multiple copies of data, whether in transit or at rest. In parallel, an additional program of work has commenced to migrate RFDS’ existing SharePoint services to SharePoint Online, which will continue to improve the employee experience through an enhanced collaboration and sharing network.

Technologies used in the solution include:

  • Skype for Business Online
  • Office 365 Exchange Online 2016
  • OneDrive for Business
  • Azure Active Directory
  • Azure AD authentication system
  • Exchange Server Hybrid Configuration
  • Azure Tenancy
  • Telstra Cloud Gateway
  • Azure ExpressRoute

Benefits

  • Highly available “Evergreen” cloud based solution
  • Reduced administrative overhead for IT teams
  • Better collaboration among users and teams
  • Efficiency gains by use of Skype for Business
  • Larger user mailboxes and personal archive mailboxes in Exchange Online

Customer Testimonial 

The technical resources provided have been outstanding, as well as from a teaming perspective. Kloud have often gone the extra mile for us. The RFDS business in Queensland will now benefit from utilising Office 365 as it matures into the new set of products and features. – Colin Denzel, ICT Portfolio Manager

 

[Updated] How are email addresses created for Office 365 Mailboxes?

First published at https://nivleshc.wordpress.com

Background

Over the past few weeks, I have been doing some Cloud-Only Office 365 deployments using Azure AD Connect. As you might imagine, this deployment is a bit different to a Hybrid Office 365 deployment.

One of the things that got me thinking was: how are the email addresses created for my Office 365 mailboxes? As I was synchronising objects from my on-premises Active Directory, this question held the answer to which values I needed to change on my on-premises Active Directory user objects to get the desired email addresses populated in the Office 365 mailboxes that would be created for them.

Preparation

To find the above answer, I devised a simple experiment.

I decided to create four on-premises Active Directory user objects with different combinations of NetBIOS name, UserPrincipalName (UPN), Mail attribute and ProxyAddresses, and trace what happens to these values as they are used to create their corresponding Azure AD objects. I would then assign an Office 365 Exchange Online Plan 2 license to these Azure AD objects and see what email addresses got assigned to the resulting Office 365 mailboxes.

Simple? Let’s begin.

The four user accounts I created in the on-premises Active Directory had the following properties (the ProxyAddresses were populated using ADSIEdit). The domains used by the UPN, Email and ProxyAddresses attributes are all internet-routable domains.

FirstName     :Ross
LastName      :McCary
DisplayName   :Ross McCary
Netbios       :CONTOSO\R.McCary
UPN           :Ross.McCary@contoso.com
Email         :{blank}
ProxyAddresses:{blank}

FirstName     :Angela
LastName      :Jones
DisplayName   :Angela Jones
Netbios       :CONTOSO\A.Jones
UPN           :Angela.Jones@contoso.com
Email         :An.Jones@contoso.com
ProxyAddresses:{blank}

FirstName     :Zada
LastName      :Daley
DisplayName   :Zada Daley
Netbios       :CONTOSO\Z.Daley
UPN           :Zada.Daley@contoso.com
Email         :Zada.Daley@tailspintoys.com
ProxyAddresses:{blank}

FirstName     :Bob
LastName      :Brown
DisplayName   :Bob Brown
Netbios       :CONTOSO\B.Brown
UPN           :Bob.Brown@contoso.com
Email         :Bob.Brown@contoso.com
ProxyAddresses:SMTP:Bo.Brown@contoso.com
               smtp:Bob@contoso.com
               smtp:Bobi@tailspintoys.com
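
As an aside, the ProxyAddresses for Bob Brown could equally have been populated with PowerShell instead of ADSIEdit – a sketch, using the ActiveDirectory module:

Set-ADUser -Identity B.Brown -Add @{
    proxyAddresses = 'SMTP:Bo.Brown@contoso.com','smtp:Bob@contoso.com','smtp:Bobi@tailspintoys.com'
}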

The Experiment

I initiated an Azure AD Connect delta synchronisation cycle and waited. After a few minutes, I saw new objects created in my Office 365 tenant’s Azure AD that corresponded to the ones that I had created in the on-premises Active Directory.

Here are the values that Azure AD Connect (AADC) added for the newly created AAD objects (tenantid is the id of the Office 365 tenant):

FirstName     :Ross
LastName      :McCary
DisplayName   :Ross McCary
SignInName    :Ross.McCary@contoso.com
UPN           :Ross.McCary@contoso.com
ProxyAddresses:{blank}

FirstName     :Angela
LastName      :Jones
DisplayName   :Angela Jones
SignInName    :Angela.Jones@contoso.com
UPN           :Angela.Jones@contoso.com
ProxyAddresses:{SMTP:An.Jones@contoso.com}

FirstName     :Zada
LastName      :Daley
DisplayName   :Zada Daley
SignInName    :Zada.Daley@contoso.com
UPN           :Zada.Daley@contoso.com
ProxyAddresses:{SMTP:Zada.Daley@tailspintoys.com}

FirstName     :Bob
LastName      :Brown
DisplayName   :Bob Brown
SignInName    :Bob.Brown@contoso.com
UPN           :Bob.Brown@contoso.com
ProxyAddresses:{SMTP:Bo.Brown@contoso.com,
                smtp:Bob@contoso.com,
                smtp:Bobi@tailspintoys.com,
                smtp:Bo.Brown@tenantid.onmicrosoft.com,
                smtp:Bob.Brown@contoso.com}

Now, this was quite interesting. AADC added proxyaddresses only for those Azure AD (AAD) objects that had the email field populated in their corresponding on-premises Active Directory user objects. Also, for Bob Brown, two additional proxy addresses had been added. Interesting indeed! (The additional attributes are shown in orange above.)

I then proceeded to assign an Office 365 Exchange Online Plan 2 license to all the above AAD objects so that a mailbox would be provisioned for each. After waiting a few minutes, I checked to confirm that the mailboxes had been successfully provisioned. I then went back to Azure AD and checked the attributes again.

Below is what I saw (additional values that got added after license assignment are in Orange)

FirstName     :Ross
LastName      :McCary
DisplayName   :Ross McCary
SignInName    :Ross.McCary@contoso.com
UPN           :Ross.McCary@contoso.com
ProxyAddresses:{SMTP:Ross.McCary@tenantid.onmicrosoft.com,
                smtp:Ross.McCary@contoso.com}

FirstName     :Angela
LastName      :Jones
DisplayName   :Angela Jones
SignInName    :Angela.Jones@contoso.com
UPN           :Angela.Jones@contoso.com
ProxyAddresses:{SMTP:An.Jones@contoso.com,
                smtp:Angela.Jones@contoso.com,
                smtp:An.Jones@tenantid.onmicrosoft.com}

FirstName     :Zada
LastName      :Daley
DisplayName   :Zada Daley
SignInName    :Zada.Daley@contoso.com
UPN           :Zada.Daley@contoso.com
ProxyAddresses:{SMTP:Zada.Daley@tailspintoys.com,
                smtp:Zada.Daley@contoso.com,
                smtp:Zada.Daley@tenantid.onmicrosoft.com}

FirstName     :Bob
LastName      :Brown
DisplayName   :Bob Brown
SignInName    :Bob.Brown@contoso.com
UPN           :Bob.Brown@contoso.com
ProxyAddresses:{SMTP:Bo.Brown@contoso.com,
                smtp:Bob@contoso.com,
                smtp:Bobi@tailspintoys.com,
                smtp:Bo.Brown@tenantid.onmicrosoft.com,
                smtp:Bob.Brown@contoso.com}

For the mailboxes, below is what I saw

FirstName     :Ross
LastName      :McCary
DisplayName   :Ross McCary
UserID        :Ross.McCary@contoso.com
ProxyAddresses:{SMTP:Ross.McCary@tenantid.onmicrosoft.com,
                smtp:Ross.McCary@contoso.com}

FirstName     :Angela
LastName      :Jones
DisplayName   :Angela Jones
UserID        :Angela.Jones@contoso.com
ProxyAddresses:{SMTP:An.Jones@contoso.com,
                smtp:Angela.Jones@contoso.com,
                smtp:An.Jones@tenantid.onmicrosoft.com}

FirstName     :Zada
LastName      :Daley
DisplayName   :Zada Daley
UserID        :Zada.Daley@contoso.com
ProxyAddresses:{SMTP:Zada.Daley@tailspintoys.com,
                smtp:Zada.Daley@contoso.com,
                smtp:Zada.Daley@tenantid.onmicrosoft.com}

FirstName     :Bob
LastName      :Brown
DisplayName   :Bob Brown
UserID        :Bob.Brown@contoso.com
ProxyAddresses:{SMTP:Bo.Brown@contoso.com,
                smtp:Bob@contoso.com,
                smtp:Bobi@tailspintoys.com,
                smtp:Bo.Brown@tenantid.onmicrosoft.com,
                smtp:Bob.Brown@contoso.com}

Quite interesting (hmm, well not really 😉 ). The newly created mailboxes had the same proxyaddresses as those assigned to their corresponding AAD objects.

The Result

After looking at the results, I could easily see a pattern between the on-premises Active Directory user object’s email and proxyaddresses values and the Azure AD and Office 365 mailbox email addresses.

From my experiment, I deduced the following process used to create email addresses for Office 365 mailboxes:

  1. When provisioning Azure AD objects, AADC first checks whether the on-premises AD user object has proxyaddresses assigned. If there are any, these are added as proxyaddresses for the Azure AD object (the proxyaddress with the SMTP: prefix (uppercase smtp) becomes the primarySMTP for the AAD object). If the email attribute is not already among the proxyaddresses, it is added as an additional proxyaddress. Next, the part of the primarySMTP before the @ is prefixed to the tenant email domain to create the mailbox’s routing email address, which is added as an additional proxyaddress (for instance Ross.McCary@tenantid.onmicrosoft.com)
  2. If the on-premises Active Directory user object does not have any proxyaddresses, the email attribute is assigned as the primarySMTP address for the AAD object. If the email attribute is not the same as the UPN, the UPN is added as an additional proxyaddress. Next, the part of the primarySMTP before the @ is prefixed to the tenant email domain to create the mailbox’s routing email address, which is added as an additional proxyaddress
  3. In cases where both the email attribute and proxyAddresses are blank, the part of the UPN before the @ is prefixed to the tenant email domain to create the mailbox’s routing email address. In this instance, the routing email address is set as the primarySMTP address, and the UPN is added as an additional proxyaddress
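
The three rules above can be expressed as a rough PowerShell sketch (illustrative only – the function name and parameters are mine, not anything AADC actually exposes):

function Get-ExpectedProxyAddresses {
    param($Upn, $Mail, [string[]]$ProxyAddresses, $TenantDomain)
    $proxies = @()
    if ($ProxyAddresses) {
        # rule 1: on-premises proxyaddresses are taken as-is; the SMTP: entry is the primary
        $proxies += $ProxyAddresses
        if ($Mail -and -not ($ProxyAddresses -match [regex]::Escape($Mail))) { $proxies += "smtp:$Mail" }
        $primary = ($ProxyAddresses | Where-Object { $_ -clike 'SMTP:*' }) -replace '^SMTP:'
    }
    elseif ($Mail) {
        # rule 2: the email attribute becomes the primary SMTP; add the UPN if it differs
        $proxies += "SMTP:$Mail"
        if ($Mail -ne $Upn) { $proxies += "smtp:$Upn" }
        $primary = $Mail
    }
    else {
        # rule 3: no email and no proxyaddresses - the routing address becomes the primary
        $proxies += ('SMTP:{0}@{1}' -f ($Upn -split '@')[0], $TenantDomain)
        $proxies += "smtp:$Upn"
        return $proxies
    }
    # rules 1 and 2: add the routing address, built from the alias of the primary SMTP
    $proxies += ('smtp:{0}@{1}' -f ($primary -split '@')[0], $TenantDomain)
    return $proxies
}

Running this against the four accounts above reproduces the proxyaddresses listed earlier (for instance, Bob Brown yields his three original entries plus smtp:Bob.Brown@contoso.com and the routing address).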

An interesting thing to note is that even before assigning an Office 365 license, you can see what email addresses will be assigned, by using PowerShell to check the ProxyAddresses attribute of the Azure AD object.
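
For example, assuming the AzureAD PowerShell module and an existing Connect-AzureAD session:

# Get-AzureADUser accepts the UPN as the ObjectId
Get-AzureADUser -ObjectId 'Bob.Brown@contoso.com' | Select-Object -ExpandProperty ProxyAddresses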

I hope the above provides some clarity around how email addresses are created for Office 365 mailboxes and helps with your Cloud-Only Office 365 architectures.

Have a great day 😉

[Update] There could be instances where, by some mistake:

  • (a) a new user is assigned an email attribute that is already attached to an existing mailbox in Office 365
  • (b) a new user is assigned a proxyaddress that is already attached to an existing mailbox in Office 365

For issue (a), AADC will not create the new user in AAD, and will instead display an error in the AADC console when exporting to AAD. The error will be similar to the one below:

“Unable to update this object because the following attributes associated with this object have values that may already be associated with another object in your local directory services: [ProxyAddresses xxxxxxxxx;]. Correct or remove the duplicate values in your local directory.”

The error will provide detailed information regarding the values that are causing the issue. It will also contain the ObjectIdInConflict – the id of the existing object that the new user is in conflict with.

Using the ObjectIdInConflict value, search the AADC Metaverse with the clause “cloudAnchor contains ObjectIdInConflict” (replace ObjectIdInConflict with the value shown in the error). This will show the metaverse record of the object that the new user conflicts with. In this case, remediate the issue in the on-premises Active Directory and then initiate another AADC delta synchronisation cycle.

 

For issue (b), AADC will create the new user in AAD, however it will remove the conflicting proxyAddress from the new user object in AAD. It will also create a record in the Office 365 Admin Center, under Settings\DirSync errors, with details of the new user, the existing user it is in conflict with, and the attribute that is in conflict. In this case, remediate the issue in the on-premises Active Directory and initiate another AADC delta synchronisation cycle.

Note: In both cases above, the technical contact for the Office 365 tenant gets sent an email with details of the errors.