Create reports using a Power BI Gateway

Background

Once you have a Power BI gateway set up to enable data flow from your on-premises data sources to the Power BI service in the cloud, the next step is to build reports in Power BI Desktop using data from multiple on-premises data sources.

Note: If you don’t have a gateway set up already, please follow my earlier post to set it up before you continue reading this post.

Scenario

All on-premises data is stored in SQL Server instances, spread across a few data warehouses and multiple databases built and managed by your internal IT teams.

Before building reports, you need to ensure the following key points:

  1. Each data source has connectivity to your gateway with minimal latency.
  2. Every data source intended to be used within reports is configured against a gateway in the Power BI service.
  3. For each data source, the list of people who can publish reports using that data source is configured.

The interaction between on-premises data sources and cloud services is depicted below:

Pre-requisites

Before you build reports, you need to set up your on-premises data sources in the gateway so that the Power BI service knows which data sources the gateway administrator has allowed for pulling data from on-premises sources.

Log in to https://app.powerbi.com with your Power BI service administrator credentials.

  1. Click Manage gateways to modify settings.
  2. You will see a screen with options for the gateway that you set up earlier while configuring the gateway on-premises.
  3. Next, set up gateway administrators, who will have permission to configure on-premises data sources as and when required.
  4. After the gateway configuration, add data sources one by one so that published reports can use the on-premises data sources pre-configured within the gateway.
  5. For each data source within the gateway, configure the users who can use that data source to pull data from on-premises sources within their published reports.
  6. Repeat the steps above for each of your on-premises data sources, selecting the appropriate data source type and allowing the users who can use them while building reports.
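If you want to double-check what has been configured, the gateways and their data sources can also be listed through the Power BI REST API. The sketch below is illustrative only: it assumes you already have an Azure AD access token for the Power BI service in $accessToken and that your account has permission on the gateway.

# Hedged sketch: list each gateway and its configured data sources via the Power BI REST API
$headers = @{ Authorization = "Bearer $accessToken" }   # token acquisition not shown here

$gateways = Invoke-RestMethod -Method Get -Headers $headers `
    -Uri "https://api.powerbi.com/v1.0/myorg/gateways"

foreach ($gateway in $gateways.value)
{
    $dataSources = Invoke-RestMethod -Method Get -Headers $headers `
        -Uri "https://api.powerbi.com/v1.0/myorg/gateways/$($gateway.id)/datasources"

    $dataSources.value | Select-Object datasourceType, connectionDetails
}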

Reports

Once you reach this step, you are all set to create reports.

  1. Open Power BI Desktop.
  2. Select the sources you want to retrieve data from.
  3. While creating reports, make sure the data source details are the same as what was configured in the Power BI service when you set up the data sources.
  4. Great! Once you publish reports to your Power BI service, the gateway will be able to connect to the relevant on-premises data sources, provided you have followed the steps above.


Where’s the source!

In this post I will talk about data (aka the source)! In IAM there’s really one simple concept that is often misunderstood or ignored: the data going out of any IAM solution is only as good as the data going in. This may seem simple enough, but if not enough attention is paid to the data source and data quality, then the results are going to be unfavourable at best and catastrophic at worst.
With most IAM solutions, data is going to come from multiple sources. Most IAM professionals will agree that the best place to source the majority of your user data is the HR system. Why? Simply put, it’s where all the important information about the individual is stored and, for the most part, kept up to date. For example, if you were to change positions within the same company, the HR systems are going to be updated to reflect the change to your job title, as well as any direct report changes which may come as a result of this sort of move.
I also said that data can, and normally will, come from multiple sources. A typical example: generally speaking, temporary and contract staff will not be managed within the central HR system; simply put, the HR team doesn’t look after contractors. So where do they come from, and how are they managed? For smaller organisations this is usually done manually in AD with no real governance in place. For larger organisations this is less than ideal, can be a nightmare for the IT team to manage, and can create quite a large security risk to the business, so a primary data source for contractors becomes necessary. What that source is is entirely up to the business and what works for them: I have seen a standard SQL web application being used to populate a database, I’ve seen ITSM tools being used, and, less commonly, the IAM system itself being used to manage contractor accounts (within MIM 2016 this is done through the MIM Portal).
There are many other examples of how different corporate applications can be used to augment the identity information of your user data, such as email, phone systems and, to a lesser extent, physical security systems (building access and datacentre access), but we will try to keep it simple for the purpose of this post. The following diagram helps illustrate the data flow for the different user types.

IAM Diagram

What you will notice from the diagram above is that even though an organisation will have data coming from multiple systems, it all comes together and is stored in a central repository, or “Identity Vault”. This keeps an accurate record of the information coming from multiple sources to compile the user’s complete identity profile. From this we can then start to manage what information flows to downstream systems when provisioning accounts, and we can also ensure that if any information changes, it is updated in the user’s profile in any attached system managed through the enterprise IAM services.
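As a purely conceptual illustration (not the behaviour of any particular product), the sketch below joins a hypothetical HR record and a contractor-database record for the same person into a single profile, with the HR feed winning where both sources hold an attribute:

# Conceptual sketch only: aggregate two hypothetical source records into one identity profile
$contractorRecord = @{ EmployeeId = "1001"; GivenName = "Jane"; Title = "Contractor"; Mobile = "0400 000 000" }
$hrRecord         = @{ EmployeeId = "1001"; GivenName = "Jane"; Surname = "Citizen"; Title = "Analyst" }

$identityProfile = @{}
# Apply the lower-precedence source first, then let the higher-precedence (HR) source overwrite it
foreach ($source in @($contractorRecord, $hrRecord))
{
    foreach ($key in $source.Keys) { $identityProfile[$key] = $source[$key] }
}

$identityProfile   # the "identity vault" view of this user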
In my next post I will go into the finer details of the central repository, or “Identity Vault”.

So, in summary, the source of data is very important in defining an IAM solution; it ensures you have the right data being distributed to any managed downstream systems, regardless of what type of user base you have. In my next post we will dig into the central repository, or Identity Vault. That post will go into detail around how we can set precedence for data from specific systems, to ensure that if there is a difference in the data coming from the different sources, only the highest-precedence value is applied. We will also discuss how we augment the data sets to ensure that we are only collecting the information necessary for the management of that user and the applications they use within your business.

As per usual, if you have any comments or questions on this post or any of my previous posts then please feel free to comment or reach out to me directly.

Windows 10 Domain Join + AAD and MFA Trusted IPs

Background

Those who have rolled out Azure MFA (in the cloud) to non-administrative users are probably well aware of the nifty Trusted IPs feature. For those who are new to this, the short version is that this capability is designed to make the end user experience a little easier by allowing you to define a set of ‘trusted locations’ (e.g. your corporate network) in which MFA is not required.

This capability works via two methods:

  • Defining a set of ‘trusted’ IP addresses. These will be the public-facing IP addresses of your web proxies and/or network gateways and firewalls.
  • Utilising issued claims from federated users. This uses the insidecorporatenetwork = true claim, sent by ADFS, to determine that the user is coming from a ‘trusted location’. Enabling this capability is discussed in this article, and a sketch of the ADFS side follows below.
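As a hedged sketch of that second method (the relying party display name below is the common default for Office 365, and the rule is appended to your existing issuance transform rules rather than replacing them; adjust both for your environment):

# Hedged sketch: pass the InsideCorporateNetwork claim through ADFS to Azure AD
$rule = @'
@RuleName = "Pass through InsideCorporateNetwork"
c:[Type == "http://schemas.microsoft.com/ws/2012/01/insidecorporatenetwork"]
 => issue(claim = c);
'@

$rpt = Get-AdfsRelyingPartyTrust -Name "Microsoft Office 365 Identity Platform"
Set-AdfsRelyingPartyTrust -TargetName "Microsoft Office 365 Identity Platform" `
    -IssuanceTransformRules ($rpt.IssuanceTransformRules + $rule)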

The Problem

Now, the latter of these is what needs further consideration when you are looking to move to the ‘modern world’ of Windows 10 and Azure AD (AAD). Unfortunately, due to some changes in the way that Win10 Domain Joined with AAD Registration (AAD+DJ) machines perform Single Sign On (SSO) with AAD, the method of utilising federated claims to determine a ‘trusted location’ for MFA will no longer work.

To understand why this is the case, I highly encourage you to first read Jairo Cadena‘s truly excellent blog series that discusses in detail how Win10 AAD SSO and its associated services work. The key takeaways from those posts are that Win10 now has the concept of a Primary Refresh Token (PRT) and that, with this approach to authentication, you now have the following changes:

  • The PRT is what is used to obtain access tokens to AAD applications
  • The PRT is cached and has a sliding window lifetime from 14 days up to 90 days
  • The use of the PRT is built into the Windows 10 credential provider.  Both IE and Edge know to utilise the PRT when communicating with AAD
  • It effectively replaces the ADFS with Integrated Windows Auth (IWA) approach to achieve SSO with Azure AD
    • That is, the auth flow is no longer: Browser –> Login to AAD –> Redirect to ADFS –> Perform IWA SSO –> SAML Token provided with claims –> AAD grants access
    • Instead, the auth flow is a lot more streamlined:  Browser –> Login and provide PRT to AAD –> AAD grants access

Hopefully from this auth flow change you can see why Microsoft have done this. Because the old way relied on IWA to perform ‘seamless’ SSO, it only worked when the device was domain joined and had line of sight to a DC to perform Kerberos. So when connecting externally, you would always see the prompt from the ADFS forms-based authentication. In the new way, whenever an auth prompt comes in from AAD, the credential provider can see this and immediately provide the cached PRT, providing SSO regardless of your network location. It also means that you no longer need a domain joined machine to achieve ‘seamless’ SSO!

The side effect, though, is that because the SAML token provided by ADFS is no longer involved in gaining access, Azure AD loses visibility of those context-based claims like insidecorporatenetwork, which subsequently means the Trusted IPs feature no longer works. While this is most commonly used for MFA scenarios, be aware that this will also apply to any Azure AD Conditional Access rules you define that use the Trusted IPs criteria (e.g. block access to an app when external).

Side Note: If you want to confirm this behaviour yourself, simply use a Win10 machine that is both Domain Joined and AAD Registered, perform a Fiddler capture, and compare the sign-in experience differences between IE or Edge (i.e. PRT aware) and Chrome (i.e. not PRT aware).

The Solution/Workaround?

So, you might ask, how do you fix this apparent gap in capability? Does this mean you’re going backwards now? For any enterprise customer of decent size, managing a set of IP address ranges may not be practical or desirable as a way to drive MFA (or conditional access) behaviours between internal and external users. The federated user claim method was a simple, low-admin way of solving that problem.

To answer this question, I would actually take a step back and look at the underlying problem that you’re trying to solve.  If we remind ourselves of the MFA mantra, the idea is to ensure that the user provides “something they know” (e.g. a secret/password) and “something they have” (e.g. a mobile device) to prove their ‘trustworthiness’.

When we make a decision to allow an MFA bypass for internal users, we are predicating this on the fact that, from a security standpoint, they have met their ‘trustworthiness’ level through a separate means. This might be through a security access card that lets them into an office location, or utilising a corporate laptop that can perform a VPN connection. Both ultimately let them connect to the internal network, and that is what you use as your criteria for granting them the luxury of not having to perform another factor of authentication.

So with that in mind, what you could then do is expand that criteria to include domain joined machines. That is, if a user is utilising a corporate-issued device that has been domain joined (and registered to AAD), this can now act as the “something you have” aspect of the MFA mantra to prove their trustworthiness, and so you no longer need to differentiate whether they are actually internal or external anymore.

To achieve this, you’ll need to use Azure AD Conditional Access policies and modify your Access Grant rules to look something like the below:

Win10PRT1
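The portal is the usual place to build this, but purely as an illustration, a roughly equivalent policy expressed against the Microsoft Graph conditional access endpoint might look like the sketch below (the endpoint version, display name, scoping and token handling are all assumptions for the example):

# Hedged sketch: grant access when the user performs MFA OR is on a (hybrid) domain joined device
$policy = @{
    displayName   = "Require MFA or domain joined device"
    state         = "enabled"
    conditions    = @{
        applications = @{ includeApplications = @("All") }
        users        = @{ includeUsers = @("All") }
    }
    grantControls = @{
        operator        = "OR"
        builtInControls = @("mfa", "domainJoinedDevice")
    }
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Method Post `
    -Uri "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies" `
    -Headers @{ Authorization = "Bearer $accessToken" } `
    -ContentType "application/json" -Body $policy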

You’ll also need to perform the steps outlined in the How to configure automatic registration of Windows domain-joined devices with Azure Active Directory article to ensure the devices properly identify themselves as being domain joined.

Side Note:  If you include the Workplace Join packages as outlined above, this approach can also expand to Windows 7 and Windows 8.1 devices.

Side Note 2: You can also include Intune-managed mobile devices in your ‘bypass criteria’ if you include the Require device to be marked as compliant control as well.

Fun Fact: You’ll note that in my image the (preview) reference for ‘require one of the selected controls’ is still there. This is because, until recently (approx. May/June 2017), the MFA or domain joined device criteria didn’t actually work because of the behaviour/order of how the evaluations were being done. When AAD was evaluating the domain joined criterion, if it failed it would immediately block access rather than trying the MFA approach next, thus preventing an ‘or’ scenario. This has now been fixed and I expect the (preview) tag to be removed soon.

Summary

The move to the modern ‘anywhere, any device’ approach to end user computing means that there is a need to start re-assessing how you approach security. Old-world views of security being defined via network boundaries will eventually disappear, and instead you’ll need to consider user- and device-based contexts to define when to initiate security controls.

With Windows 10’s approach to authentication with AAD, internal versus external access is no longer relevant and should not be used as your criteria for driving MFA or conditional access. Instead, use device-based conditions such as ‘device compliance’ or ‘domain join’ as one of your deciding factors.

Setup a Power BI Gateway

Scenario

So, you have explored Power BI (free) and want to start some action in the cloud. Suddenly you realise that your data is stored in an on-premises SQL data source, and you still want to get insights up in the cloud and share them with your senior business management.

Solution

Microsoft’s on-premises data gateway is a bridge that can securely transfer your data to the Power BI service from your on-premises data sources.

Assumptions

  • Power BI Pro licences have been procured already for the required number of users (this is a MUST)
  • Users are already part of Azure AD and can sign in to the Power BI service as part of the Office 365 offering

Pre-requisites

You can build and set up a machine to act as a gateway between your Azure cloud services and on-premises data sources. The following are the pre-requisites for that machine:

1) Server Requirements

Minimum Requirements:
  • .NET 4.5 Framework
  • 64-bit version of Windows 7 / Windows Server 2008 R2 (or later)

Recommended:

  • 8 Core CPU
  • 8 GB Memory
  • 64-bit version of Windows Server 2012 R2 (or later)

Considerations:

  • The gateway is supported only on 64-bit Windows operating systems.
  • You can’t install the gateway on a domain controller.
  • Only one gateway can be installed on a single machine.
  • Your gateway should not be turned off, disconnected from the Internet, or configured with a power plan that lets the machine sleep – in all cases, it should be ‘always on’.
  • Avoid wireless network connections; always use a LAN connection.
  • It is recommended to have the gateway as close to the data source as possible to avoid network latency. The gateway requires connectivity to the data source.
  • It is recommended to have good throughput for your network connection.

Notes:

  • Once a gateway is configured, if you need to change its name you will need to reinstall and configure a new gateway.

2) Service Account

If your company/client is using a proxy server and your gateway does not have a direct connection to the Internet, you may need to configure a Windows service account for authentication purposes and change the default log on credential (NT SERVICE\PBIEgwService) to a service account of your choice that has the ‘Log on as a service’ right. An example of switching the service account is shown below.
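As a hedged example (PBIEgwService is the default service name created by the gateway installer; confirm the service name on your own server and substitute your own account), the log on account can be changed from an elevated PowerShell prompt:

# Hedged sketch: point the gateway Windows service at a custom service account, then restart it
sc.exe config "PBIEgwService" obj= "CONTOSO\svc-pbigateway" password= "<service-account-password>"
Restart-Service -Name "PBIEgwService"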

3) Ports

The gateway creates an outbound connection to Azure Service Bus and does not require inbound ports for communication. You may need to whitelist the IP addresses listed in the Azure Datacentre IP list. You can test outbound connectivity as shown below.
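As a quick, hedged sanity check of outbound connectivity from the gateway machine (the host below is a placeholder; 443, 5671–5672 and 9350–9354 are the ports commonly listed for Azure Service Bus):

# Hedged sketch: test outbound connectivity to the Service Bus ports used by the gateway
$serviceBusHost = "<your-region-or-namespace>.servicebus.windows.net"
foreach ($port in 443, 5671, 5672, 9350, 9351, 9352, 9353, 9354)
{
    Test-NetConnection -ComputerName $serviceBusHost -Port $port |
        Select-Object ComputerName, RemotePort, TcpTestSucceeded
}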

Installation

Once you have covered the pre-requisites listed in the previous section, you can proceed to the gateway installation.

  1. Log in to Power BI with your organisation credentials and download the data gateway setup.
  2. While installing, select the highlighted option so that a single gateway can be shared among multiple users.
  3. As listed in the pre-requisites section, if your network has a proxy requirement, you can change the service account for the following Windows service:

  4. You will notice the gateway is installed on your server.
  5. Once you open the gateway application, you will see a success message.

Configuration

Post installation, you need to configure the gateway to be used within your organisation.

  1. Open the gateway application and sign in with your credentials.
  2. Set a name and a recovery key for the gateway; the recovery key can be used later to reconfigure/restore the gateway.
  3. Once it is configured successfully, you will see a message window confirming it is ready to use.

  4. Switch to the gateway’s Network tab and you will see its status as Connected – great!
  5. You are all set; the gateway is up and running, and you can start building reports that use data from your on-premises servers via the gateway you just configured.


Brisbane O365 Saturday

On the weekend I had the pleasure of presenting at the O365 Saturday Brisbane event. Link below.

http://o365saturdayaustralia.com/

In my presentation I demonstrated a new feature within Azure AD that allows the automatic assignment of licences from any of your Azure subscriptions to users, using Dynamic Groups. So what’s cool about this feature?

Well, if you have a well-established organisational structure within your on-premises AD, and you are synchronising the attributes needed to identify that structure, then you can have your users automatically assigned licences based on their job type, department or even location. The neat thing about this is that it drastically simplifies the management of your licence allocation, which until now has largely been done through complicated scripting processes, either through an enterprise IAM system or just through the service desk when users are being set up for the first time.
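As a hedged illustration of what such a rule can look like (the group name and rule are made up, and the cmdlet comes from the AzureAD Preview module; licences are then attached to the group via group-based licensing in the Azure portal):

# Hedged sketch: a dynamic Azure AD group whose membership follows the synced department attribute
Connect-AzureAD
New-AzureADMSGroup -DisplayName "Licensing - Sales" `
    -MailEnabled $false -MailNickname "licensing-sales" -SecurityEnabled $true `
    -GroupTypes "DynamicMembership" `
    -MembershipRule '(user.department -eq "Sales")' `
    -MembershipRuleProcessingState "On"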

You can view or download the presentation by clicking on the following link.

O365 Saturday Automate Azure Licencing

Throughout the Kloud Blog there is an enormous amount of material featuring new and innovative ways to use Azure AD to advance your business, and if you would like a more detailed post on this topic then please leave a comment below and I’ll put something together.


Implementing Bootstrap and Font-awesome in SharePoint Framework solutions using React

Responsive design has been the biggest driving factor for SharePoint Framework (SPFx) solutions. In a recent SPFx project for a customer, we developed a component using React, Bootstrap and Font-awesome icons for a responsive look and feel. While building the UI piece, we encountered many issues during the initial set up, so I am writing this blog with detailed steps for future reference. One of the key fixes mentioned in this post is for the WOFF2 font-type file, which is a component of Font-awesome and Bootstrap.

In this blog post, I will not be detailing the technical implementation (business logic and functionality), just focusing on the UI implementation. The following steps outline how to configure SPFx React client-side web parts to use Bootstrap and Font-awesome CSS styles.

Steps:

  1. Create a SharePoint Framework project using Yeoman. Refer to this link from Microsoft docs for reference.
  2. Next, install jQuery, Bootstrap and Font-awesome using npm so that they are available from within node_modules
    npm install jquery --save
    npm install @types/jquery --save-dev
    npm install bootstrap --save
    npm install @types/bootstrap --save-dev
    npm install font-awesome --save
    

    Check the node_modules folder to make sure they got installed successfully

  3. Now locate the config.json file in the config folder and add the entry below for third-party JS library references.
    "externals": {
      "jquery": {
        "path": "node_modules/jquery/dist/jquery.min.js",
        "globalName": "jQuery"
      },
      "bootstrap": {
        "path": "node_modules/bootstrap/dist/js/bootstrap.min.js",
        "globalName": "bootstrap"
      }
    }

    Then reference them in the .ts file in the src folder using import

    import * as jQuery from "jquery";
    import * as bootstrap from "bootstrap";
    
  4. For CSS references, we can either refer to the public CDN links using SPComponentLoader.loadCss() or refer to the local versions, as below, in the .tsx file
    Note: Don’t use ‘require’ for the JS scripts as they are already imported in the step above. If included again, it will cause a component load error.

    require('../../../../node_modules/bootstrap/dist/css/bootstrap.css');
    require('../../../../node_modules/bootstrap/dist/css/bootstrap.min.css');
    require('../../../../node_modules/bootstrap/dist/css/bootstrap-theme.css');
    require('../../../../node_modules/bootstrap/dist/css/bootstrap-theme.min.css');
    require('../../../../node_modules/font-awesome/css/font-awesome.css');
    require('../../../../node_modules/font-awesome/css/font-awesome.min.css');
    
  5. When using React, copy the HTML to the .tsx file in the components folder. If you want to use the HTML CSS classes as-is and not the SASS way, refer to this blog post. For image references, here is a good post to refer to.
    For anyone new to React like me, a few styling tips:
    1. Use className instead of the HTML class attribute
    2. For inline styles, use style={{...}} or define a style object, since inline styles in JSX are JavaScript objects rather than strings
  6. When ready, use gulp serve to launch your solution in the local workbench.
    Important: If you’re using custom icons or fonts from the above CSS libraries, you will receive TypeScript errors saying that a loader module was not found for the WOFF2 font type. You will need to push a custom loader for the WOFF2 font type through gulpfile.js as below. First, install url-loader from npm.

     npm install url-loader --save-dev

    Then modify gulpfile.js at the root directory to load the custom loader.

    build.configureWebpack.mergeConfig({ 
      additionalConfiguration: (generatedConfiguration) => { 
        generatedConfiguration.module.loaders.push( 
          { test: /\.woff2(\?v=[0-9]\.[0-9]\.[0-9])?$/, loader: 'url-loader', query: { limit: 10000, mimetype: 'application/font-woff2' } } 
        ); 
        return generatedConfiguration; 
      } 
    });
    

    Now gulp serve your solution and it should work fine.

You might still face CSS issues in the solution where the referenced CSS doesn’t exactly match the HTML/CSS implementation. To resolve any conflicts, use a CSS override (!important) wherever necessary.

In this post I have shown how we can configure Bootstrap and Font-awesome third-party CSS files to work with SharePoint Framework solutions while leveraging the React framework.

Decommissioning Exchange 2016 Server

I have created many labs over the years and never really spent the time to decommission my environment; I usually just blow it away and start again.

So I finally decided to go through the process and decommission my Exchange 2016 server in my lab environment.

My lab consisted of the following:

  • Domain Controller (Windows Server 2012 R2)
  • AAD Connect Server
  • Exchange 2016 Server/ Office 365 Hybrid
  • Office 365 tenant

Being a lab, I only had one Exchange server, which had the mailbox role configured and was also my hybrid server.

Proceed with the steps below to remove the Exchange Online configuration:

  1. Connect to Exchange Online using PowerShell (https://technet.microsoft.com/en-us/library/jj984289(v=exchg.160).aspx)
  2. Remove the Inbound connector
    Remove-InboundConnector -Identity "Inbound from [Inbound Connector ID]"
  3. Remove the Outbound connector
    Remove-OutboundConnector -Identity "Outbound to [Outbound Connector ID]"
  4. Remove the Office 365 federation
    Remove-OrganizationRelationship "O365 to On-premises - [Org Relationship ID]"
  5. Remove the OAuth configuration
    Remove-IntraOrganizationConnector -Identity "HybridIOC - [Intra Org Connector ID]"

Now connect to your Exchange 2016 server and complete the following:

  1. Open Exchange Management Shell
  2. Remove the Hybrid Config
    Remove-HybridConfig
  3. Remove the federation configuration
    Remove-OrganizationRelationship "On-premises to O365 - [Org Relationship ID]"
  4. Remove OAuth configuration
    Remove-IntraorganizationConnector -Identity "HybridIOC - [Intra Org Connector ID]"
  5. Remove all mailboxes, mailbox databases, public folders etc.
  6. Uninstall Exchange (either via PowerShell or programs and features)
  7. Decommission server

I also confirmed Exchange was removed from Active Directory and I was able to run another installation of Exchange on the same Active Directory with no conflicts or issues.

DKIM for Custom Domain in Office 365


As the Office 365 service keeps adding new features and functions, it is important for global admins to keep up with the latest offerings and service enhancements Office 365 provides. In this blog post I am going to discuss one of the security features offered by Office 365 and how it can be beneficial to organizations when it comes to securing their Office 365 tenants. This feature is called DKIM. DKIM has been offered by Microsoft for some time now and most organizations are using it quite effectively, but I was surprised to find that there is still a huge gap in knowledge around DKIM basics and its implementation. Let’s start with a brief description of DKIM.

DKIM stands for DomainKeys Identified Mail. The name gives us a clue that it involves digital keys, i.e. private/public keys, which means there will be digital signing and verification of messages. Yes, that’s right. In simple words, it is the digital signing of outbound emails by the sending party using its private domain key, and then verification of that signature by the receiving party using the domain’s public key. This looks simple, and to be honest it is very simple, provided you understand the answers to these questions: Why do you need it? What are the benefits? How hard is it to configure and manage? Let’s answer them one by one.

The primary purpose of DKIM is to prove your identity on the internet: that you are who you claim to be. Meaning your domain, e.g. @contoso.com, is a legitimate domain, and the recipient can trust emails coming from this source since it has successfully identified itself. An analogy would be the SSL certificates used by websites across the internet to validate their identity to clients. This also provides the first benefit: your domain is recognised as a trusted source across the internet, and attempts to send spoofed emails using your domain will fail. The second big benefit is that your email security strategy gets a big boost, allowing you to use more advanced security features such as DMARC for securing your email.

Now let’s get to the question of how hard it is to implement. The answer is that it’s simple and does not take much effort in terms of configuration, though it does require a significant amount of planning to ensure a smooth and fruitful outcome. In fact, DKIM is enabled by default for your default Office 365 domain, i.e. onmicrosoft.com. Let’s review step by step how it is configured in Office 365; a quick way to check the current state is shown below.
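If you’re connected to Exchange Online PowerShell, a quick check of which accepted domains currently have DKIM signing enabled (the same configuration object also exposes the selector CNAME values you need to publish):

# Quick check of DKIM signing state per accepted domain
Get-DkimSigningConfig | Format-Table Domain, Enabled -AutoSize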


DKIM Configuration:

DKIM requires just two steps for its configuration.

  1. Add CNAME records in public DNS for your custom domain.
  2. Enable DKIM for your custom domain in Office 365.

The CNAME records will have the following format.
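In general terms (verify the exact values for your tenant against the current Microsoft documentation), the two records take this shape:

selector1._domainkey.<domain>    CNAME    selector1-<domainGUID>._domainkey.<initialDomain>
selector2._domainkey.<domain>    CNAME    selector2-<domainGUID>._domainkey.<initialDomain>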

Key things to remember here are:

  • domainGUID is the same as the domainGUID in the customized MX record for your custom domain
  • initialDomain is the domain that you used when you signed up for Office 365
  • For Office 365, the selectors will always be “selector1” or “selector2”

For example, if your custom domain is contoso.com, this record will look like this:
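Assuming an initial domain of contoso.onmicrosoft.com and a domainGUID of contoso-com (taken from an MX record of contoso-com.mail.protection.outlook.com), the records would look roughly like this:

selector1._domainkey.contoso.com    CNAME    selector1-contoso-com._domainkey.contoso.onmicrosoft.com
selector2._domainkey.contoso.com    CNAME    selector2-contoso-com._domainkey.contoso.onmicrosoft.com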

To enable DKIM signing for your custom domain through the Office 365 admin center

  1. Sign in to Office 365 with your work or school account.
  2. Select the app launcher icon in the upper-left and choose Admin.
  3. In the lower-left navigation, expand Admin and choose Exchange.
  4. Go to Protection > dkim.
  5. Select the domain for which you want to enable DKIM and then, for Sign messages for this domain with DKIM signatures, choose Enable. Repeat this step for each custom domain.

To enable DKIM signing for your custom domain by using PowerShell

  1. Connect to Exchange Online using remote PowerShell.
  2. Run the following cmdlet:
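    The cmdlet is along these lines (standard Exchange Online syntax; substitute your own domain name):

    New-DkimSigningConfig -DomainName <domain> -Enabled $true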

Where domain is the name of the custom domain for which you want to enable DKIM signing.

For example, for the domain contoso.com:
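New-DkimSigningConfig -DomainName contoso.com -Enabled $true    # assuming the standard cmdlet shown above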

The above steps will allow you to set up DKIM for your custom domain in Office 365. I will discuss implementation scenarios and the test process to verify your settings in a future blog post.

Exchange 2010 Hybrid Auto Mapping Shared Mailboxes

Migrating shared mailboxes to Office 365 is one of those things that is becoming easier over time, especially with full access permissions now working cross-premises.

One little discovery that I thought I would share is that if you have an Exchange 2010 hybrid configuration, the auto mapping feature will not work cross-premises. (On Exchange 2013 and above, you are OK and have nothing to worry about.)

This means if an on-premises user has access to a shared mailbox that you migrate to Office 365, it will disappear from their Outlook even though they still have full access.

Unless you migrate the shared mailbox and user at the same time, the only option is to manually add the shared mailbox back into Outlook.
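For reference, the auto mapping behaviour hangs off the full access grant itself; the mailbox and user names below are made-up examples. If you ever want to suppress auto mapping deliberately (for instance while mailboxes and users are migrated in separate batches), the permission can be granted without it:

# Hedged sketch: grant full access to a shared mailbox without auto mapping (example names only)
Add-MailboxPermission -Identity "Shared-Accounts" -User "jane.citizen@contoso.com" `
    -AccessRights FullAccess -AutoMapping $false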

Something to keep in mind, especially if your user base accesses multiple shared mailboxes at a time.



Try/Catch works in PowerShell ISE and not in PowerShell console

I recently encountered an issue with one of my PowerShell scripts. It was a script to enable litigation hold on all mailboxes in Exchange Online.

I connected to Exchange Online via the usual means below.

$creds = Get-Credential
$session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://outlook.office365.com/powershell-liveid/ -Credential $Creds -Authentication Basic -AllowRedirection
Import-PSSession $session -AllowClobber

I then attempted to execute the following with no success.

try
{
Set-Mailbox -Identity $user.UserPrincipalName -LitigationHoldEnabled $true -ErrorAction Stop
}
catch
{
Write-Host "ERROR!" -ForegroundColor Red
}

As a test I removed the “-ErrorAction Stop” parameter and then added the following line to the top of my script.

$ErrorActionPreference = 'Stop'

That’s when it would work in Windows PowerShell ISE but not in the Windows PowerShell console.

After many hours of troubleshooting, I discovered it was related to the implicit remoting session established when connecting to Exchange Online.

To get my try/catch to work correctly I added the following line to the top of my script and all was well again.

$Global:ErrorActionPreference = 'Stop'
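
For completeness, a minimal sketch of the working pattern is below. It assumes the Exchange Online session has already been imported as shown earlier, and the mailbox identity is a made-up example. The usual explanation is that commands imported from an implicit remoting session live in their own temporary module, whose scope chain ends at the global scope rather than at your script’s scope, so only a globally scoped preference variable reliably reaches them.

# Minimal sketch (assumes the implicit remoting session from above is already imported;
# the identity below is a made-up example)
$Global:ErrorActionPreference = 'Stop'

try
{
    # Proxy function from the imported session - errors now terminate and are catchable
    Set-Mailbox -Identity "no.such.user@contoso.com" -LitigationHoldEnabled $true
}
catch
{
    Write-Host "ERROR! $($_.Exception.Message)" -ForegroundColor Red
}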