Exchange Online & Splunk – Automating the solution

NOTES FROM THE FIELD:

I have recently been consulting on what I think is a pretty cool engagement: integrating some Office365 mailbox data into the Splunk reporting platform.

I initially thought about using a .csv export approach; however, through trial and error (more error than trial, if I'm being honest) I realised that this method still required some manual interaction, so I decided to find a fully automated solution.

The final solution comprises the following components:

  • Splunk HTTP event collector
    • Splunk hostname
    • Token from HTTP event collector config page
  • Azure automation account
    • Azure Run As Account
    • Azure Runbook
    • Exchange Online credentials (registered to the Azure Automation account)

I'm not going to run through the creation of the automation account or the required credentials, as these had already been created. However, there is a great guide to configuring the solution I have used for this customer at https://www.splunk.com/blog/2017/10/05/splunking-microsoft-cloud-data-part-3.html

The PowerShell script we are using achieves the following:

  • Connect to Azure and Exchange Online – Azure run as account authentication
  • Configure variables for connection to Splunk HTTP event collector
  • Collect mailbox data from the Exchange Online environment
  • Split the mailbox data into parts for faster processing
  • Specify SSL/TLS protocol settings for self-signed cert in test environment
  • Create a JSON object to be posted to the Splunk environment
  • HTTP POST the data directly to Splunk

The Code:

#Clear Existing PS Sessions
Get-PSSession | Remove-PSSession | Out-Null
#Create Split Function for CSV file
function Split-array {
    param($inArray, [int]$parts, [int]$size)
    if ($parts) {
        $PartSize = [Math]::Ceiling($inArray.count / $parts)
    }
    if ($size) {
        $PartSize = $size
        $parts = [Math]::Ceiling($inArray.count / $size)
    }
    $outArray = New-Object 'System.Collections.Generic.List[psobject]'
    for ($i = 1; $i -le $parts; $i++) {
        $start = (($i - 1) * $PartSize)
        $end = (($i) * $PartSize) - 1
        if ($end -ge $inArray.count) { $end = $inArray.count - 1 }
        $outArray.Add(@($inArray[$start..$end]))
    }
    return , $outArray
}
function Connect-ExchangeOnline {
    param(
        $Creds
    )
    #Connect to Exchange Online
    $Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://outlook.office365.com/powershell-liveid/ -Credential $Creds -Authentication Basic -AllowRedirection
    $Commands = @("Add-MailboxPermission","Add-RecipientPermission","Remove-RecipientPermission","Remove-MailboxPermission","Get-MailboxPermission","Get-User","Get-DistributionGroupMember","Get-DistributionGroup","Get-Mailbox")
    Import-PSSession -Session $Session -DisableNameChecking:$true -AllowClobber:$true -CommandName $Commands | Out-Null
}
#Create Variables
$SplunkHost = "Your Splunk hostname or IP Address"
$SplunkEventCollectorPort = "8088"
$SplunkEventCollectorToken = "Splunk Token from Http Event Collector"
$servicePrincipalConnection = Get-AutomationConnection -Name 'AzureRunAsConnection'
$credentials = Get-AutomationPSCredential -Name 'Exchange Online'
#Connect to Azure
Add-AzureRMAccount -ServicePrincipal -Tenant $servicePrincipalConnection.TenantID -ApplicationId $servicePrincipalConnection.ApplicationID -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint
#Connect to Exchange Online
Connect-ExchangeOnline -Creds $credentials
#Invoke Script
$mailboxes = Get-Mailbox -ResultSize Unlimited | Select-Object -Property DisplayName, PrimarySMTPAddress, IsMailboxEnabled, ForwardingSmtpAddress, GrantSendOnBehalfTo, ProhibitSendReceiveQuota, AddressBookPolicy
#Get Current Date & Time
$time = Get-Date -Format s
#Convert Timezone to Australia/Brisbane
$bnetime = [System.TimeZoneInfo]::ConvertTimeBySystemTimeZoneId($time, [System.TimeZoneInfo]::Local.Id, 'E. Australia Standard Time')
#Adding Time Column to Output
$mailboxes = $mailboxes | Select-Object @{expression = {$bnetime}; Name = 'Time'}, DisplayName, PrimarySMTPAddress, IsMailboxEnabled, ForwardingSmtpAddress, GrantSendOnBehalfTo, ProhibitSendReceiveQuota, AddressBookPolicy
#Create Split Array for Mailboxes Spreadsheet
$recipients = Split-array -inArray $mailboxes -parts 5
#Create JSON objects and HTTP Post to Splunk HTTP Event Collector
foreach ($recipient in $recipients) {
    foreach ($r in $recipient) {
        #Create SSL Validation Bypass for Self-Signed Certificate in Testing
        $AllProtocols = [System.Net.SecurityProtocolType]'Ssl3,Tls,Tls11,Tls12'
        [System.Net.ServicePointManager]::SecurityProtocol = $AllProtocols
        [System.Net.ServicePointManager]::ServerCertificateValidationCallback = {$true}
        #Get JSON string to post to Splunk
        $StringToPost = "{ `"Time`": `"$($r.Time)`", `"DisplayName`": `"$($r.DisplayName)`", `"PrimarySMTPAddress`": `"$($r.PrimarySmtpAddress)`", `"IsMailboxEnabled`": `"$($r.IsMailboxEnabled)`", `"ForwardingSmtpAddress`": `"$($r.ForwardingSmtpAddress)`", `"GrantSendOnBehalfTo`": `"$($r.GrantSendOnBehalfTo)`", `"ProhibitSendReceiveQuota`": `"$($r.ProhibitSendReceiveQuota)`", `"AddressBookPolicy`": `"$($r.AddressBookPolicy)`" }"
        $uri = "https://" + $SplunkHost + ":" + $SplunkEventCollectorPort + "/services/collector/raw"
        $header = @{"Authorization" = "Splunk " + $SplunkEventCollectorToken}
        #Post to Splunk Http Event Collector
        Invoke-RestMethod -Method Post -Uri $uri -Body $StringToPost -Headers $header
    }
}
Get-PSSession | Remove-PSSession | Out-Null
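As a side note, the hand-built JSON string above could be swapped for ConvertTo-Json, which avoids the escaped-quote noise. The snippet below is a minimal sketch of a drop-in replacement for the body of the inner loop (it assumes the same $r, $uri and $header variables as the script above), not the code used for this customer:

#Sketch only: build the event payload with ConvertTo-Json instead of manual escaping
$StringToPost = [ordered]@{
    Time                     = "$($r.Time)"
    DisplayName              = "$($r.DisplayName)"
    PrimarySMTPAddress       = "$($r.PrimarySmtpAddress)"
    IsMailboxEnabled         = "$($r.IsMailboxEnabled)"
    ForwardingSmtpAddress    = "$($r.ForwardingSmtpAddress)"
    GrantSendOnBehalfTo      = "$($r.GrantSendOnBehalfTo)"
    ProhibitSendReceiveQuota = "$($r.ProhibitSendReceiveQuota)"
    AddressBookPolicy        = "$($r.AddressBookPolicy)"
} | ConvertTo-Json -Compress
Invoke-RestMethod -Method Post -Uri $uri -Body $StringToPost -Headers $header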

 

The final output that can be seen in Splunk looks like the following:

11/13/17
12:28:22.000 PM
{
AddressBookPolicy:
DisplayName: Shane Fisher
ForwardingSmtpAddress:
GrantSendOnBehalfTo:
IsMailboxEnabled: True
PrimarySMTPAddress: shane.fisher@xxxxxxxx.com.au
ProhibitSendReceiveQuota: 50 GB (53,687,091,200 bytes)
Time: 11/13/2017 12:28:22
}

I hope this helps some of you out there.

Cheers,

Shane.

 

 

 

AAD Connect – Using Directory Extensions to add attributes to Azure AD

I was recently asked to consult on a project that was looking at the integration of Workday with Azure AD for Single Sign On. One of the requirements for the project is that the staff number be used as the NameID value for authentication.

This got me thinking, as the staff number wasn't represented in Azure AD at all at this point; in order to use it, we would first need to get it into Azure AD.

These days, this is fairly easy to achieve by using the “Directory Extensions” option in Azure AD Connect. Directory Extensions allows us to synchronise additional attributes from the on-premises environment to Azure AD.

Launch the AADC configuration utility and select “Customize synchronisation options”

aadc_config_p1

Enter your Azure AD global administrator credentials to connect to Azure AD.

aadc_config_p2

Once authenticated to Azure AD, click next through the options until we get to “Optional Features” and select “Directory extension attribute sync”

aadc_config_p3

There are two additional attributes that I want to make use of in Azure AD, employeeID and employeeNumber. As such, I have selected these attributes from the list.

aadc_config_p4

Completing the wizard will configure AAD Connect to sync the requested attributes to Azure AD. A full synchronisation is required post configuration and can be launched either from the configuration wizard itself, or from PowerShell using the following cmdlet:

Start-ADSyncSyncCycle -PolicyType initial

Once the sync was complete, I went to validate that my new attributes were visible in Azure AD. They weren’t!

After some digging, I found that my attributes had in fact synced successfully, they just weren’t in the location I wanted them to be. My attributes had synced as follows:

Source AD attribute → Azure AD attribute

employeeNumber → extension_tenantGUID_employeeNumber

employeeID → extension_tenantGUID_employeeID

 

So… This wasn’t exactly what I was looking for, but at least the theory works in practice.
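If you want to confirm from PowerShell exactly where the attributes have landed, the AzureAD module can show the extension values on a user. A minimal sketch, assuming the AzureAD module is installed and you have already run Connect-AzureAD (the UPN below is a placeholder):

#Sketch only: list the directory extension values synced for a given user
$user = Get-AzureADUser -ObjectId "some.user@mydomain.com"
Get-AzureADUserExtension -ObjectId $user.ObjectId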

Fortunately, this problem is also easy to fix. We can configure AAD Connect to synchronise to a different target attribute using the synchronisation rules editor.

When we configured Directory extensions above, a new rule was created in the synchronisation rules editor for “User Directory Extension”. If we edit this rule, we can see the source and target attributes as they are currently configured, and we are able to make changes to these rules.

Before making any changes, note that you will be prompted not to edit the existing rule directly; instead, clone the current rule and then edit the clone. The original rule will be disabled as part of the cloning process.

As seen below, I have now configured the employeeNumber and employeeID attributes to sync to extensionattribute5 and 6 respectively.

aadc_config_p6

A full synchronisation is required before the above changes will take effect. We are now in a position to configure Single Sign On settings in Azure AD, using extensionAttribute5 as the NameID value.
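If you would rather double-check the cloned rule from PowerShell than from the GUI, the ADSync module on the AAD Connect server exposes the synchronisation rules. This is a sketch only; the rule name filter and property names are assumptions based on the default naming:

#Sketch only: find the directory extension sync rules and show their attribute flows
Import-Module ADSync
Get-ADSyncRule | Where-Object { $_.Name -like "*Directory Extension*" } |
    ForEach-Object { $_.AttributeFlowMappings | Select-Object Source, Destination }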

I hope this helps some of you out there.

Cheers,

Shane.

AAD Connect – Updating OU Sync Configuration Error: stopped-deletion-threshold-exceeded

I was recently working with a customer on cleaning up their Azure AD Connect synchronisation configuration.

Initially, the customer had enabled sync for all OUs in the forest (as a lot of companies do), and had now reached a point of maturity where they could look at optimising the solution.

We identified an OU with approximately 7000 objects which did not need to be synced.

So…

I logged onto the AAD Connect server and launched the configuration utility. After authenticating with my Office365 global admin account, I navigated to the OU sync configuration and deselected the required OU.

At this point, everything appeared to be working as expected. The configuration utility saved my changes successfully and started a delta sync based on the checkbox which was automatically selected in the tool. The delta sync also completed successfully.

I went to validate my results and noticed that no changes had been made; no objects had been deleted from Azure AD.

It occurred to me that a full sync was probably required in order to force the deletion to occur. I kicked off a full synchronisation using the following command.

Start-ADSyncSyncCycle -PolicyType initial

When the sync cycle reached the export phase, however, I noticed that the task had thrown an error, as seen below:

aadc_deletion_error

It would seem I was trying to delete too many objects. Well, that does make sense considering we identified 7000 objects earlier. We need to disable the export deletion threshold before we can move forward!

Ok, so now we know what we have to do! What does the order of events look like? See below:

  1. Update the OU synchronisation configuration in the Azure AD Connect utility
  2. Deselect the option to run synchronisation before saving the configuration in the AADC utility
  3. Run the following PowerShell command to disable the deletion threshold
    1. Disable-ADSyncExportDeletionThreshold
  4. Run the following PowerShell command to start the full synchronisation
    1. Start-ADSyncSyncCycle -PolicyType initial
  5. Wait for the full synchronisation cycle to complete
  6. Run the following PowerShell command to re-enable the deletion threshold
    1. Enable-ADSyncExportDeletionThreshold
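For reference, steps 3 to 6 can be run as a single block from PowerShell on the AAD Connect server once the OU change has been saved. This is a sketch only; the cmdlets will prompt for Azure AD credentials where required, and 500 is the product's default threshold value:

#Sketch only: lift the export deletion threshold, run a full sync, then restore the default
Import-Module ADSync
Disable-ADSyncExportDeletionThreshold
Start-ADSyncSyncCycle -PolicyType Initial
#Wait for the sync cycle to finish (Get-ADSyncConnectorRunStatus returns nothing when idle)
Get-ADSyncConnectorRunStatus
Enable-ADSyncExportDeletionThreshold -DeletionThreshold 500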

I hope this helps save some time for some of you out there.

Cheers,

Shane.

Complex Mail Routing in Exchange Online Staged Migration Scenario

Notes From the Field:

I was recently asked to assist an ongoing project with understanding some complex mail routing and identity scenarios which had been identified during planning for an upcoming mail migration from an external system into Exchange Online.

New user accounts were created in Active Directory for the external staff who are about to be migrated. If we were to assign the target-state production email attributes now and create the Exchange Online mailboxes, we would have a problem as the migration approached.

When the new domain is verified in Office365 & Exchange Online, new mail from staff already in Exchange Online would start delivering to the newly created mailboxes for the staff soon to be onboarded.

On the other hand, not assigning the attributes and creating the mailboxes now would delay the project, which is something we didn't want either.

I have proposed the following in order to create a scenario whereby cutover to Exchange Online for the new domain is quick, as well as not causing user downtime during the co-existence period. We are creating some “co-existence” state attributes on the on-premises AD user objects that will allow mail flow to continue in all scenarios up until cutover. (I will come back to this later).

generic_exchangeonline_migration_process_flow

We have configured the AD user objects in the following way

  1. UserPrincipalName – username@localdomainname.local
  2. mail – username@mydomain.onmicrosoft.com
  3. targetaddress – username@mydomain.com

We have configured the remote mailbox objects in the following way

  1. mail – username@mydomain.onmicrosoft.com
  2. targetaddress – username@mydomain.com

We have configured the on-premises Exchange Accepted domain in the following way

  1. Accepted Domain – External Relay

We have configured the Exchange Online Accepted domain in the following way

  1. Accepted Domain – Internal Relay
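Because the co-existence values are just standard AD and remote mailbox attributes, they can be stamped in bulk from a CSV. The following is a minimal, hypothetical sketch of that kind of update, run from the on-premises Exchange Management Shell; the file path, CSV columns and domain names are illustrative only, not the customer's actual script:

#Sketch only: stamp the co-existence mail and target addresses from a CSV (columns: SamAccountName, Alias)
$users = Import-Csv "C:\Temp\coexistence-users.csv"
foreach ($u in $users) {
    Set-RemoteMailbox -Identity $u.SamAccountName `
        -EmailAddressPolicyEnabled $false `
        -PrimarySmtpAddress "$($u.Alias)@mydomain.onmicrosoft.com" `
        -RemoteRoutingAddress "$($u.Alias)@mydomain.com"
}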

How does this all work?

Glad you asked! As I alluded to earlier, the main problem here is with staff who already have mailboxes in Exchange Online. By configuring the objects in this way, we achieve several things:

  1. We can verify the new domains successfully in Office365 without impacting existing or new users. By setting the UPN & mail attributes to @mydomain.onmicrosoft.com, Office365 & Exchange Online do not (yet) associate the newly onboarded domain with these mailboxes.
  2. By configuring the accepted domains in this way, we are doing the following:
    1. When an email is sent from Exchange Online to an email address at the new domain, Exchange Online will route the message via the hybrid connector to the Exchange on-premises environment. (the new mailbox has an email address @mydomain.onmicrosoft.com)
    2. When the on-premises environment receives the email, Exchange will look at both the remote mailbox object & the accepted domain configuration.
      1. The target address on the mailbox is configured as @mydomain.com
      2. The accepted domain is configured as external relay
      3. Because of this, the on-premises exchange environment will forward the message externally.

Why is this good?

Again, for a few reasons!

We are now able to pre-stage content from the existing external email environment to Exchange Online by using a target address of @mydomain.onmicrosoft.com. The project is no longer at risk of being delayed! 🙂

On the night of cutover for MX records to Exchange Online (or in this case, a 3rd party email hygiene provider), we are able to use the same PowerShell code that we used in the beginning to configure the new user objects to modify the user accounts for production use. (We are using a different CSV import file to achieve this.)

Target State Objects

We have configured the AD user objects in the following way

  1. UserPrincipalName – username@mydomain.com
  2. mail – username@mydomain.com
  3. targetaddress – username@mydomain.mail.onmicrosoft.com

We have configured the remote mailbox objects in the following way

  1. mail
    1. username@mydomain.com (primary)
    2. username@mydomain.onmicrosoft.com
  2. targetaddress – username@mydomain.mail.onmicrosoft.com

We have configured the on-premises Exchange Accepted domain in the following way

  1. Accepted Domain – Authoritative

We have configured the Exchange Online Accepted domain in the following way

  1. Accepted Domain – Internal Relay
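The accepted domain change at cutover is essentially a one-liner on each side. A hedged sketch, run from the on-premises Exchange Management Shell and an Exchange Online session respectively; the domain name is the placeholder used throughout this post, and the Exchange Online side only needs to be confirmed as Internal Relay:

#Sketch only: on-premises Exchange - switch the new domain to Authoritative at cutover
Set-AcceptedDomain -Identity "mydomain.com" -DomainType Authoritative
#Exchange Online - confirm the domain is still configured as Internal Relay
Get-AcceptedDomain -Identity "mydomain.com" | Select-Object Name, DomainType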

NOTE: AAD Connect sync is now run and a manual validation is completed against user accounts in both on-premises AD & Exchange, as well as Azure AD & Exchange Online, to confirm that the user updates have been successful.

We can now update DNS MX records to our 3rd party email hygiene provider (or this could be Exchange Online Protection if you don’t have one).

A final synchronisation of mail from the original email system is completed once new mail is being delivered to Exchange Online.

Fixing the Windows 10 Insider 14946 Bitdefender Update Issue

I have been part of the Windows 10 Insider program for some time now, and as usual the time had come around again to install the latest fast ring update 14946.

However, when I went to download the update via the usual Windows Update channel, I found I could not download the update at all (or rather, the progress bar showed zero progress).

I started to go looking for an explanation and came across the following post on the Microsoft Forum site.

https://social.technet.microsoft.com/Forums/windows/en-US/e984c816-5c21-47f9-8d9d-94dd1d0137de/insider-preview-build-14946-at-fast-ring?forum=win10itprosetup

I am running Bitdefender 2016 on my machine, so I guessed this might be the problem. Now, I didn't want to leave my machine unprotected, so I thought I would see if I could get the problem fixed.

  1. Run a repair of Bitdefender 2016
    • Open control panel and launch Programs & Features
    • Locate Bitdefender 2016 and select Uninstall (This won’t uninstall the product)
    • Choose the repair option in the popup menu
    • Restart the computer when prompted
    • Update Bitdefender once the machine has restarted
  2. Open the Bitdefender control panel from the desktop or taskbar
    • Open the Firewall module using the modules button on the front panel
    • Click the gear icon next to the firewall module
    • Select the adaptors tab
  3. Update the wifi or ethernet connection that is active to the following
    • Network Type – Trusted or home/office
    • Stealth Mode – Off
    • Generic – On
  4. Close the Bitdefender control panel

Note: I needed to toggle the firewall module off/on before I could edit the network adapter configuration. I was running a ping to my local gateway while making changes to the Bitdefender adapter configuration in order to see when the network connection became active.

Hope this helps some of you out as well.

Shane.

 

Implementing Microsoft (Office365) Peering for ExpressRoute

Notes from the Field

I have recently been involved with an implementation of Microsoft Peering for ExpressRoute with a large Australian customer and thought I would share the experience with you.

First and foremost, make sure that you read the specific guidance from Microsoft regarding the prerequisites for Microsoft Peering (see below).

Configure Microsoft peering for the circuit

Make sure that you have the following information before you proceed.

  • A /30 subnet for the primary link. This must be a valid public IPv4 prefix owned by you and registered in an RIR / IRR.
  • A /30 subnet for the secondary link. This must be a valid public IPv4 prefix owned by you and registered in an RIR / IRR.
  • A valid VLAN ID to establish this peering on. Ensure that no other peering in the circuit uses the same VLAN ID.
  • AS number for peering. You can use both 2-byte and 4-byte AS numbers.
  • Advertised prefixes: You must provide a list of all prefixes you plan to advertise over the BGP session. Only public IP address prefixes are accepted. You can send a comma separated list if you plan to send a set of prefixes. These prefixes must be registered to you in an RIR / IRR.
  • Customer ASN: If you are advertising prefixes that are not registered to the peering AS number, you can specify the AS number to which they are registered. This is optional.
  • Routing Registry Name: You can specify the RIR / IRR against which the AS number and prefixes are registered.
  • An MD5 hash, if you choose to use one. This is optional.

https://azure.microsoft.com/en-us/documentation/articles/expressroute-howto-routing-classic/#microsoft-peering

Let me address two of the items above: the AS number for peering, and the advertised prefixes.

AS Number for Peering: A public AS number is required in order to implement Microsoft Peering. The customer I was working with didn't own a public AS and, as a result, was required to apply to APNIC for one. APNIC is the registry for AS numbers in the Asia Pacific region.

The prerequisites for applying for a public AS number in Australia are as follows, and have been taken from the APNIC site.

Your organization is eligible for an AS Number assignment if:

  • it is currently multihomed, or
  • it holds previously-allocated provider independent address space and intends to multihome in the future.

An organization will also be eligible if it can demonstrate that it will meet the above criteria upon receiving an AS Number (or within a reasonably short time afterwards).

I have heard through the grapevine from colleagues of mine that this process can be quite time consuming, particularly when the above requirements are not clearly met; however, this wasn't the case for the customer in question, and the AS number was approved in a timely manner.

Advertised prefixes: This is the list of public IP prefixes from which Azure will receive requests.

Important Note: Microsoft will run a validation process once the dedicated circuit for Microsoft Peering is established. I attempted to find out detailed information regarding the validation process; however, I was not able to obtain it. What I was able to find out is that a validation check is run against the public AS number to verify that the customer who owns the tenant also owns the AS. A similar validation check is completed against the advertised prefixes.

In our case, even though the customer owned both the AS and the advertised prefixes, automatic validation failed.

In order to fix this issue, we opened a support call with Azure support who were able to perform a manual validation within about 10 mins of being in contact.

Once the virtual circuit was up and the advertised prefixes were showing as "Configured", we moved on to configuring the virtual circuit in the Equinix Cloud Exchange.
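As an aside, the peering configuration and its state can be reviewed with the classic Azure PowerShell module used later in this post. This is a sketch only; the service key is a placeholder, and the exact properties returned may differ between module versions:

#Sketch only: review the Microsoft peering configuration for a circuit
Get-AzureBGPPeering -ServiceKey "SYD Service Key" -AccessType Microsoft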

The customer's ISP configured the virtual circuits correctly; however, we were not able to successfully establish BGP peering. Another support call was created with all vendors involved, and after some troubleshooting it was discovered that the primary and secondary peer subnets had been configured in reverse, which was causing a MAC address mismatch with Azure.

Fortunately, the solution to this issue was to simply recreate the virtual circuit via the cloud exchange portal using the same service key as was used originally. Once this was completed, BGP peering was successfully established.

Premium SKU

Although not documented anywhere (that I could find), Microsoft Peering requires the Premium SKU in order to be enabled.

If you attempt to enable Microsoft Peering on a virtual circuit with a "Standard" SKU, you will be promptly rewarded with a PowerShell error telling you exactly this.

Quick Shoutout: Thanks to our Microsoft Technology Strategist "Scott Turner" for pointing this out to me ahead of time.

Implementation Process

Create Virtual Circuit

#Import Azure Expressroute modules
Import-Module 'C:\Program Files (x86)\Microsoft SDKs\Azure\PowerShell\ServiceManagement\Azure\Azure.psd1'
Import-Module 'C:\Program Files (x86)\Microsoft SDKs\Azure\PowerShell\ServiceManagement\Azure\ExpressRoute\ExpressRoute.psd1'

#Creating a new circuit for Sydney O365
$Bandwidth = 500
$CircuitName = "EquinixO365Sydney"
$ServiceProvider = "Equinix"
$Location = "Sydney"
New-AzureDedicatedCircuit -CircuitName $CircuitName `
-ServiceProviderName $ServiceProvider `
-Bandwidth $Bandwidth -Location $Location -Sku Premium

#Creating a new circuit for Melbourne O365
$Bandwidth = 500
$CircuitName = "EquinixO365Melbourne"
$ServiceProvider = "Equinix"
$Location = "Melbourne"
New-AzureDedicatedCircuit -CircuitName $CircuitName `
-ServiceProviderName $ServiceProvider `
-Bandwidth $Bandwidth -Location $Location -Sku Premium

Get your Service Keys

Get-AzureDedicatedCircuit

Enable BGP Peering

Set-AzureBGPPeering -AccessType Microsoft `
-ServiceKey "SYD Service Key" -PrimaryPeerSubnet "x.x.x.x/30" `
-SecondaryPeerSubnet "x.x.x.x/30" -VlanId "Enter Vlan Id" `
-PeerAsn "Enter Public ASN" `
-AdvertisedPublicPrefixes "Enter Advertised Prefixes" -RoutingRegistryName "APNIC"

Set-AzureBGPPeering -AccessType Microsoft `
-ServiceKey "MEL Service Key" -PrimaryPeerSubnet "x.x.x.x/30" `
-SecondaryPeerSubnet "x.x.x.x/30" -VlanId "Enter Vlan Id" `
-PeerAsn "Enter Public ASN" `
-AdvertisedPublicPrefixes "Enter Advertised Prefixes" -RoutingRegistryName "APNIC"

Important Note: Advertising default routes

Default routes are permitted only on Azure private peering sessions. In such a case, we will route all traffic from the associated virtual networks to your network. Advertising default routes into private peering will result in the internet path from Azure being blocked. You must rely on your corporate edge to route traffic from and to the internet for services hosted in Azure.

To enable connectivity to other Azure services and infrastructure services, you must make sure one of the following items is in place:

  • Azure public peering is enabled to route traffic to public endpoints
  • You use user defined routing to allow internet connectivity for every subnet requiring Internet connectivity.

Note: Advertising default routes will break Windows and other VM license activation. Follow instructions here to work around this.

https://azure.microsoft.com/en-us/documentation/articles/expressroute-routing/

Do not advertise the default route to Microsoft Peering as this will break things!

I hope this helps some of you down the track as at the time we completed implementation, there was very little documentation available on the Web for Microsoft Peering.

 

Autodiscover Troubleshooting

Notes from the Field

I have been onsite working on remediating a partially completed Exchange 2007 to Exchange 2010 migration. This environment was then configured for Exchange Online Hybrid using ADFS 2.0 and Dirsync.

After reviewing the Autodiscover configuration, I discovered that something wasn’t right. In addition to this, I had received the following issues list from the customer.

Symptoms

  1. Outlook for Office 365 mailboxes is not able to be configured using Autodiscover. This occurred on both domain and non-domain joined machines.
  2. Outlook on domain joined machines is intermittently unable to be configured using Autodiscover
  3. Outlook on domain joined machines is intermittently very slow

These symptoms combined told me that there was likely more than one issue in effect here due to the fact that both domain joined and non-domain joined machines were affected.

Topology

The network architecture had multiple Active Directory sites across all regions with Client Access Servers located in all regions but not all sites.

Customer_Autodiscover_Generic

Resolution

1. Office 365 mailboxes unable to be configured using autodiscover

Upon investigating where the internal DNS record for autodiscover.domain.com pointed, I discovered that it resolved to one of the Exchange 2007 Client Access Servers. As part of the Hybrid Configuration for Office 365, Autodiscover records are required to be updated to point to the Exchange 2010 hybrid servers.

After updating the internal DNS record, Autodiscover began working correctly for non-domain joined machines. For domain joined machines, this process began working intermittently however another piece of the puzzle was required in order to fix this issue.

2. Outlook on domain joined machines is intermittently unable to be configured using Autodiscover

and

3. Outlook on domain joined machines is intermittently very slow

Knowing that Active Directory SCP records will be in effect here, I began looking into the configuration of two attributes assigned to the various SCP records.

  • serviceBindingInformation – This attribute returns what is effectively the Autodiscover URL to the domain joined machine
  • keywords – This attribute returns a variety of values depending on configuration, but in this case, the value we are interested in is Site=

In this case, all SCP records, except for the two Exchange 2007 Client Access Servers, were configured with a Site= keyword.

This value, however, only contained the site name for the location where the Client Access Server was installed.

When an SCP lookup is conducted, AD is queried for the site name of the client. The lookup then tries to match this with a Site= value in the keywords attribute of the SCP record.

Considering this, a site name will almost never be matched, as 90% of the Client Access Servers are located in datacentres in this environment.

The fallback behaviour of the SCP lookup, when no exact match is found, is to use any SCP record that contains at least one keyword starting with Site=, even though none of those keywords read Site=siteName.

So what does this all mean? This means that in effect, any SCP record with a value of Site=Anyvalue could be returned to the client. My testing showed that this is in fact what was occurring. The first SCP record in the list by alphabetical order was being returned to the client. It just turned out that this server was located in a datacentre in the UK.
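If you want to see exactly which SCP records (and which Site= keywords) your clients could be handed, they can be queried straight out of the configuration partition. A sketch only, assuming the ActiveDirectory RSAT module; the GUID in the filter is the well-known Autodiscover SCP keyword, so it is worth double-checking in your own environment:

#Sketch only: list Autodiscover SCP records with their URLs and Site= keywords
Import-Module ActiveDirectory
$configNC = (Get-ADRootDSE).ConfigurationNamingContext
Get-ADObject -SearchBase $configNC `
    -LDAPFilter "(&(objectClass=serviceConnectionPoint)(keywords=77378F46-2C66-4aa9-A6A6-3E7A48B19596))" `
    -Properties serviceBindingInformation, keywords |
    Select-Object Name, serviceBindingInformation, keywords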

Armed with all of this information, I recalled a Set-ClientAccessServer setting called AutoDiscoverSiteScope. This setting allows us to bind Active Directory site names to specific Client Access Servers. In effect, what this does on the back end is populate the keywords attribute with Site= values for all sites that are defined in the AutoDiscoverSiteScope.

Once all of the Active Directory sites were assigned to the AutoDiscoverSiteScope attribute on the appropriate regional Client Access Servers, all clients were redirected to their local servers for connection.
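As an illustration, the site scoping looks something like the following. The server and site names here are hypothetical, not the customer's:

#Sketch only: bind the AD sites a regional CAS should answer Autodiscover SCP lookups for
Set-ClientAccessServer -Identity "SYDCAS01" -AutoDiscoverSiteScope "Sydney","Brisbane","Canberra"
#Confirm the result (the keywords attribute on the SCP record is updated behind the scenes)
Get-ClientAccessServer -Identity "SYDCAS01" | Select-Object Name, AutoDiscoverSiteScope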

This configuration resolved both of the identified issues above as SCP lookups were now being completed locally on the Client Access Servers.

Wave 15 Shared Mailboxes in a Hybrid Configuration

Notes from the Field

I have been working on a customer site for some time now, and they have recently been migrated to Wave 15 of Exchange Online.

It was brought to my attention during the week that, since the migration, shared mailboxes which were created via the Exchange Online EAC could not receive external email. Shared mailboxes which were created in the on-premises environment and then migrated to Exchange Online were working as expected.

Note: The support staff have already created the shared mailboxes using the Exchange Online EAC, and these mailboxes already contain a significant amount of mail.

In the scenario where emails were not delivered, an NDR was sent to the sender advising that the maximum hop count for the email had been exceeded and that this was the reason for the delivery failure.

So, I decided to take a look at the NDR (as you would), and discovered that there did appear to be a routing loop in play. But how could this happen when other shared mailboxes on the same email domain, hosted in Exchange Online, are working fine?

The offending email was first routed to FOPE via the MX record on the domain. This was expected. Then it was routed to the On-premise hybrid server. Also expected, as this is the default routing connector at Exchange Online. But then, the message was routed through the external send connector for some reason. This turned out to be the key to solving this riddle.

Why would Exchange route what is effectively an internal email externally to the organization?

Because the on-premises Active Directory knew nothing about this email address. There was no AD object, as the mailbox was created directly in Exchange Online and DirSync only synchronises from on-premises to the cloud.

The solution turned out to be remarkably simple after a little bit of thought: create an on-premises remote user mailbox using the Exchange 2010 EMC.

O365_SharedMailboxes_SS1

Active Directory now knew about the address mytestuser@contoso.com, and also knew that it needed to route this address through the outbound Office 365 connector using mytestuser@contoso.mail.onmicrosoft.com.
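For completeness, the same result can be achieved from the Exchange 2010 Management Shell rather than the EMC. This is a sketch only; the name, alias, UPN and routing address are placeholders based on the example above:

#Sketch only: create an on-premises remote mailbox object so AD knows about the cloud-created address
New-RemoteMailbox -Name "My Test User" -Alias "mytestuser" `
    -UserPrincipalName "mytestuser@contoso.com" `
    -RemoteRoutingAddress "mytestuser@contoso.mail.onmicrosoft.com" `
    -Password (Read-Host "Password" -AsSecureString)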

I hope this saves some head scratching for those of you out there.