Complex Mail Routing in Exchange Online Staged Migration Scenario

Notes From the Field:

I was recently asked to assist an ongoing project with understanding some complex mail routing and identity scenarios that had been identified during planning for an upcoming mail migration from an external system into Exchange Online.

New user accounts were created in Active Directory for the external staff who are about to be migrated. If we were to assign the target-state production email attributes now and create the Exchange Online mailboxes, we would have a problem as the migration approaches.

Once the new domain is verified in Office 365 & Exchange Online, new mail from staff already in Exchange Online would start delivering to the newly created mailboxes for the staff soon to be onboarded, rather than to their existing external mailboxes.

Holding off on this entirely would delay the project, which is something we didn’t want either.

I have proposed the following approach so that cutover to Exchange Online for the new domain is quick and there is no user downtime during the co-existence period. We are creating some “co-existence” state attributes on the on-premises AD user objects that allow mail flow to continue in all scenarios up until cutover (I will come back to this later).

[Image: generic_exchangeonline_migration_process_flow]

We have configured the AD user objects in the following way

  1. UserPrincipalName – username@localdomainname.local
  2. mail – username@mydomain.onmicrosoft.com
  3. targetaddress – username@mydomain.com

We have configured the remote mailbox objects in the following way

  1. mail – username@mydomain.onmicrosoft.com
  2. targetaddress – username@mydomain.com
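
As a rough sketch of how these co-existence values can be stamped in bulk, the attributes above can be set from a CSV with the Active Directory module; the CSV path and column names (samAccountName, upn, mail, targetAddress) below are illustrative placeholders rather than the exact script used on the project.

# Sketch only: stamp the co-existence attributes from a CSV (column names are examples)
Import-Module ActiveDirectory
foreach ($u in (Import-Csv "C:\Temp\coexistence-users.csv"))
{
    # UPN stays on the .local domain, mail uses the tenant routing domain,
    # targetAddress points at the production domain
    Set-ADUser -Identity $u.samAccountName `
               -UserPrincipalName $u.upn `
               -EmailAddress $u.mail `
               -Replace @{targetAddress = "SMTP:$($u.targetAddress)"}
}

The remote mailbox objects can be adjusted in a similar loop from the Exchange Management Shell (for example with Set-RemoteMailbox), driven from the same CSV.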

We have configured the on-premises Exchange Accepted domain in the following way

  1. Accepted Domain – External Relay

We have configured the Exchange Online Accepted domain in the following way

  1. Accepted Domain – Internal Relay
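
For completeness, the accepted domain types above map to a single cmdlet in each environment. This is a minimal sketch assuming the new domain is mydomain.com; run the first command in the on-premises Exchange Management Shell and the second in an Exchange Online PowerShell session.

# On-premises Exchange: relay mail for the new domain onwards if no local recipient exists
Set-AcceptedDomain -Identity "mydomain.com" -DomainType ExternalRelay

# Exchange Online: treat the new domain as an internal relay during co-existence
Set-AcceptedDomain -Identity "mydomain.com" -DomainType InternalRelay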

How does this all work?

Glad you asked! As I alluded to earlier, the main problem here is with staff who already have mailboxes in Exchange Online. By configuring the objects in this way, we achieve several things:

  1. We can verify the new domains successfully in Office 365 without impacting existing or new users. Because the UPN & mail attributes are set to @mydomain.onmicrosoft.com, Office 365 & Exchange Online do not (yet) associate the newly onboarded domain with these mailboxes.
  2. By configuring the accepted domains in this way, we are doing the following:
    1. When an email is sent from Exchange Online to an email address at the new domain, Exchange Online will route the message via the hybrid connector to the Exchange on-premises environment. (the new mailbox has an email address @mydomain.onmicrosoft.com)
    2. When the on-premises environment receives the email, Exchange will look at both the remote mailbox object & the accepted domain configuration.
      1. The target address on the remote mailbox object is configured as @mydomain.com.
      2. The accepted domain is configured as External Relay.
      3. Because of this, the on-premises Exchange environment will forward the message externally, on to the existing external email system where the user’s mailbox still lives.

Why is this good?

Again, for a few reasons!

We are now able to pre-stage content from the existing external email environment into Exchange Online by using a target address of @mydomain.onmicrosoft.com. The project is no longer at risk of being delayed! 🙂

On the night of cutover of MX records to Exchange Online (or in this case, a 3rd party email hygiene provider), we are able to use the same PowerShell code that we used in the beginning to configure the new user objects, this time to modify the user accounts for production use (we are using a different CSV import file to achieve this).

Target State Objects

We have configured the AD user objects in the following way

  1. UserPrincipalName – username@mydomain.com
  2. mail – username@mydomain.com
  3. targetaddress – username@mydomain.mail.onmicrosoft.com

We have configured the remote mailbox objects in the following way

  1. mail
    1. username@mydomain.com (primary)
    2. username@mydomain.onmicrosoft.com
  2. targetaddress – username@mydomain.mail.onmicrosoft.com

We have configured the on-premises Exchange Accepted domain in the following way

  1. Accepted Domain – Authoritative

We have configured the Exchange Online Accepted domain in the following way

  1. Accepted Domain – Internal Relay
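
The flip to these target-state values can follow the same CSV-driven pattern shown earlier, just with the production values; the snippet below is an illustrative sketch (the CSV path and column names are placeholders), not the project’s actual script.

# Sketch only: promote @mydomain.com to the primary SMTP address on the remote mailbox
foreach ($u in (Import-Csv "C:\Temp\production-users.csv"))
{
    Set-RemoteMailbox -Identity $u.samAccountName `
                      -EmailAddressPolicyEnabled $false `
                      -PrimarySmtpAddress $u.mail
}

# Re-point the on-premises accepted domain once we own mail flow for it
Set-AcceptedDomain -Identity "mydomain.com" -DomainType Authoritative

The matching Set-ADUser changes (UserPrincipalName, mail and targetAddress) are applied with the same loop used for the co-existence state.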

NOTE: AAD Connect sync is now run, and a manual validation is completed against on-premises AD & Exchange as well as Azure AD & Exchange Online, to confirm that the user updates have been successful.
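
If you want to script that step, a delta sync can be kicked off from the AAD Connect server and a few spot checks run afterwards; a minimal sketch (the sample account name is a placeholder):

# On the AAD Connect server: trigger a delta synchronisation
Start-ADSyncSyncCycle -PolicyType Delta

# Spot-check a migrated user on-premises and in Exchange Online
Get-RemoteMailbox username | Format-List PrimarySmtpAddress, RemoteRoutingAddress
Get-Mailbox username | Format-List UserPrincipalName, PrimarySmtpAddress, EmailAddresses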

We can now update DNS MX records to our 3rd party email hygiene provider (or this could be Exchange Online Protection if you don’t have one).

A final synchronisation of mail from the original email system is completed once new mail is being delivered to Exchange Online.

SharePoint content migration using Sharegate and PowerShell

Content Migration

When it comes to content migration we have the option to write code (script), use a migration toolset, or a combination of both, so it is important to identify the appropriate toolset based on ease of use and what we need to achieve.

I have evaluated several migration toolsets; however, in this blog I am going with Sharegate as I have used this product extensively in recent projects.

Sharegate is a toolset used to “Manage, Migrate and Secure SharePoint & Office 365”.

We will look at migrating a document library in a SharePoint Online (Office 365) environment to another document library in the same environment, and see how we can use Sharegate to speed up this process. I will incorporate some of my experiences from working with a customer on a document migration strategy.

What are we trying to achieve?
Migrate document libraries in SharePoint and apply metadata in the process, using a combination of Excel and PowerShell scripting along with Sharegate.

[Image 1: sharegate-content-migration-using-powershell]

 

  1. Select the source document library to migrate.
  2. Create the spreadsheet using Sharegate’s “Export to Excel” function.
  3. Update the metadata within the Excel spreadsheet (this can be done manually or with a console app).
  4. Use Sharegate PowerShell to automate the import/migration process.

 

To achieve this, we need to do the following:

  1. Log on to Sharegate and click on the “Copy SharePoint Content” option as depicted below.
[Image 2: Sharegate1.png]

2. Connect to your Office 365 SharePoint tenant using your credentials.

3. As you can see below, I have a set of test documents that I need to migrate from the source document library to the target document library.

[Image 3: Sharegate 3 1.png]

4. Click on Excel to export a spreadsheet (which we will use to update the metadata columns).

5. Select the documents that you would like to copy across; however, before you start copying you will need to set up a custom property template as below:

Property templates allow you to select the options used for bulk editing and to set custom actions for all of the list or library’s columns.

[Image 4: sharegate4]

6. Give the template a name and set up the template properties as per your requirements, as shown below.

[Image 5: sharegate5]

7. Click “Save & Start” to begin copying the files across using the Sharegate UI.

Using PowerShell script to automate this migration process

  1. Save the Excel file locally, e.g. “MigrationTest.xlsx”.
[Image 6: Sharegate excel.png]

2. Update the columns in the Excel spreadsheet with the appropriate metadata and save the file.

3. Click on Sharegate PowerShell to open the PowerShell console, as shown in the image below.

[Image 7: sgps]

4. Run the below script.

PowerShell script using Sharegate’s “Copy-Content” cmdlet:

#PowerShell script to migrate documents using Sharegate PowerShell
# Connect to the source and destination sites in the Office 365 tenant
$mypassword = ConvertTo-SecureString "******" -AsPlainText -Force
$srcSite = Connect-Site -Url "https://yourtenant.sharepoint.com/sites/test/" -Username "user@domain.com.au" -Password $mypassword
Write-Host "Connected to source"
$dstSite = Connect-Site -Url "https://yourtenant.sharepoint.com/sites/test2/" -Username "user@domain.com.au" -Password $mypassword
Write-Host "Connected to target"

# Get the source and destination document libraries
$srcList = Get-List -Name "SrcDocLib" -Site $srcSite
Write-Host $srcList
$dstList = Get-List -Name "DestDocLib" -Site $dstSite
Write-Host $dstList

# Copy the content, applying the metadata from the exported Excel file via the property template
$Template = "TestTemplate"
Write-Host "Copying..."
Copy-Content -TemplateName $Template -SourceList $srcList -DestinationList $dstList -ExcelFilePath "C:\POC\MigrationTest.xlsx"
Write-Host "Done Copying"

 

PowerShell to “export to excel” from Sharegate

As of today, Sharegate does not have an “export to excel” cmdlet, so this step has to be done through the Sharegate UI. I have talked to the Sharegate support team and they came back to me saying “this is one of the most requested features and will be released soon”. Please refer to the Sharegate documentation for updates on new features: http://help.share-gate.com/

Conclusion

Using Sharegate and PowerShell we can automate the document migration and metadata tagging. Going further, you can create a number of Excel export files using Sharegate and script the iteration through each Excel file as an input parameter to the above PowerShell script, as sketched below.
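
A minimal sketch of that iteration, assuming the $Template, $srcList and $dstList variables from the earlier script are already populated and that the export files live in a hypothetical C:\POC\Exports folder:

# Sketch only: run one Copy-Content pass per exported Excel file
foreach ($file in (Get-ChildItem -Path "C:\POC\Exports" -Filter *.xlsx))
{
    Write-Host "Copying content described in $($file.Name)..."
    Copy-Content -TemplateName $Template -SourceList $srcList -DestinationList $dstList -ExcelFilePath $file.FullName
}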

 

 

Migrating resources from AWS to Microsoft Azure

Kloud receives a lot of communications in relation to the work we do and the content we publish on our blog. My colleague Hugh Badini recently published a blog about Azure deployment models from which we received the following legitimate follow up question…

So, Murali, thanks for letting us know you’d like to know more about this… consider this blog a starting point :).

Firstly though…

this topic (inter-cloud migrations), as you might guess, isn’t easily captured in a single blog post, nor, realistically, in a series, so what I’m going to do here is provide some basics to consider. I may not answer your specific scenario, but hopefully I can provide some guidance on approach.

Every cloud has a silver lining

The good news is that if you’re already operating in a cloud environment then you have likely had to deal with many of the fundamental differences between traditional application hosting and architecture and that of cloud platforms.

You will have dealt with how you ensure availability of your application(s) across outages, dealing with spikes in traffic via use of elastic compute resources, and will have come to recognise that in many ways Infrastructure-as-a-Service (IaaS) in the cloud has many similarities to the way you’ve always done things on-prem (such as backups).

Clearly you have less of a challenge in approaching a move to another cloud provider.

Where to start

When we talk about moving from AWS to Azure we need to consider a range of things – let’s take a look at some key ones.

Understand what’s the same and what’s different

Both platforms have very similar offerings, and Microsoft provides many great resources to help those utilising AWS to build an understanding of which services in AWS map to which services in Azure. As you can see the majority of AWS’ services have an equivalent in Azure.

Microsoft’s Channel 9 is also a good place to start to learn about the similarities, with there being no better place than the Microsoft Azure for Amazon AWS Professional video series.

So, at a platform level, we are pretty well covered, but…

the one item to be wary of in planning any move of an existing application is how it has been developed. If we are moving components from, say, an EC2 VM environment to an Azure VM environment then we will probably have less work to do as we can build our Azure VM as we like (yes, as we know, even Linux!) and install whatever languages, frameworks or runtimes we need.

If, however, we are considering moving an application from a more Platform-as-a-Service capability such as AWS Lambda, we need to look at the programming model required by its equivalent in Azure – Azure Functions. While AWS Lambda and Azure Functions are functionally the same (no pun intended) we cannot simply take our Lambda code and drop it into an Azure Function and have it work. It may not even make sense to utilise Azure Functions depending on what you are shifting.
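
As one illustration of the difference (a hedged sketch using the PowerShell runtimes the two platforms offer, not a working migration): the AWS Lambda PowerShell runtime hands your script its event through the automatic $LambdaInput variable, whereas an HTTP-triggered Azure Functions PowerShell function declares its bindings and writes its result back through an output binding.

# Azure Functions (PowerShell) HTTP trigger – run.ps1
using namespace System.Net
param($Request, $TriggerMetadata)

# In Lambda the equivalent input would arrive as $LambdaInput rather than $Request
$name = $Request.Query.Name

Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = [HttpStatusCode]::OK
    Body       = "Hello $name"
})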

It’s also important to consider the differences in the availability models in use today in AWS and Azure. AWS uses Availability Zones to help you manage the uptime of your application and its components. In Azure we manage availability at two levels – locally via Availability Sets and then geographically through use of Regions. As these models differ, it’s an important area to consider for any migration.

Tools are good, but are no magic wand

Microsoft provides a way to migrate AWS EC2 instances to Azure using Azure Site Recovery (ASR) and while there are many tools for on-prem to cloud migrations and for multi-cloud management, they mostly steer away from actual migration between cloud providers.

Kloud specialises in assessing application readiness for cloud migrations (and then helping with the migration), and we’ve found inter-cloud migration is no different – understanding the integration points an application has and the SLAs it must meet are a big part of planning what your target cloud architecture will look like. Taking into consideration underlying platform services in use is also key as we can see from the previous section.

If you’re re-platforming an application you’ve built or maintain in-house, make sure to review your existing deployment processes to leverage features available to you for modern Continuous Deployment (CD) scenarios which are certainly a strength of Azure.

Data has a gravitational pull

The modern application world is entirely a data-driven one. One advantage to cloud platforms is the logically bottomless pit of storage you have at your disposal. This presents a challenge, though, when moving providers where you may have spent years building data stores containing Terabytes or Petabytes of data. How do you handle this when moving? There are a few strategies to consider:

  • Leave it where it is: you may decide that you don’t need all the data you have to be immediately available. Clearly this option requires you to continue to manage multiple clouds but may make economic sense.
  • Migrate via physical shipping: AWS provides Snowball as a way to extract data out of AWS without needing to pull it over a network connection. If your solution allows it you could ship your data out of AWS to a physical location, extract that data, and then prepare it for import into Azure, either over a network connection using ExpressRoute or through the Azure Import/Export service.
  • Migrate via logical transfer: you may have access to a service such as Equinix’s Cloud Exchange that allows you to provision inter-connects between cloud and other network providers. If so, you may consider using this as your migration enabler. Ensure you consider how much data you will transfer and what, if any, impact the data transfer might have on existing network services.

Outside of the above strategies on transferring of data, perhaps you can consider a staged migration where you only bring across chunks of data as required and potentially let older data expire over time. The type and use of data obviously impacts on which approach to take.

Clear as…

Hopefully this post has provided a bit more clarity around what you need to consider when migrating resources from AWS to Azure. What’s been your experience? Feel free to leave comments if you have feedback or recommendations based on the paths you’ve followed.

Happy dragon slaying!


Azure Deployment Models And How To Migrate From ASM to ARM

This is a post about the two deployment models currently available in Azure, Service Management (ASM) and Resource Manager (ARM), and how to migrate from one to the other if necessary.

About the Azure Service Management deployment model

The ASM model, also known as version 1 and Classic mode, started out as a web interface and a backend API for the PaaS services Azure opened with at launch.

Features

  1. ASM deployments are based on an XML schema.
  2. ASM operations are based at the cloud service level.
  3. Cloud services are the logical containers for IaaS VMs and PaaS services.
  4. ASM is managed through the CLI, the old and new portals, and PowerShell.

In ASM mode the cloud service acts as a container for VMs and PaaS services.

About the Resource Manager deployment model

The ARM model consists of a new web interface and API for resource management in Azure which came out of preview in 2016 and introduced several new features.

Features

  1. ARM deployments are based on a JSON schema.
  2. Templates, which can be imported and exported, define deployments.
  3. RBAC support.
  4. Resources can be tagged for logical access and grouping.
  5. Resource groups are the logical containers for all resources.
  6. ARM is managed through PowerShell (PS), the CLI and new portal only.

In ARM mode the resource group acts as a container for all resources.

Why use Service Management mode?

  1. Support for all features that are not exclusive to ARM mode.
  2. No new features will be made available in this mode.
  3. Cannot process operations in parallel (e.g. VM start, VM create, etc.).
  4. ASM virtual networks need a VPN or ExpressRoute connection to communicate with ARM virtual networks.
  5. In Classic mode templates cannot be used to configure resources.

Users should therefore only be using service management mode if they have legacy environments to manage which include features exclusive to it.

Why use Resource Manager mode?

  1. Support for all features that are not exclusive to ASM mode.
  2. Can process multiple operations in parallel.
  3. JSON templates are a practical way of managing resources.
  4. RBAC, resource groups and tags!

Resource manager mode is the recommended deployment model for all Azure environments going forward.
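
Since template-driven deployments are the core of the ARM model, a minimal deployment from PowerShell looks something like the sketch below (using the AzureRM cmdlets of the era; the resource group name, location and template paths are placeholders):

# Sketch only: deploy an exported or authored JSON template into a resource group
Login-AzureRmAccount
New-AzureRmResourceGroup -Name "myapp-rg" -Location "Australia East"
New-AzureRmResourceGroupDeployment -Name "myapp-deployment" `
                                   -ResourceGroupName "myapp-rg" `
                                   -TemplateFile "C:\Templates\azuredeploy.json" `
                                   -TemplateParameterFile "C:\Templates\azuredeploy.parameters.json"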

Means of migration

The following tools and software are available to help with migrating environments.

  1. The ASM2ARM custom PowerShell script module.
  2. Platform supported migrations using PowerShell or the Azure CLI.
  3. The MigAz tool.
  4. Azure Site Recovery.

About ASM2ARM

ASM2ARM is a custom PowerShell script module for migrating a single Virtual Machine from the Azure Service Management stack to the Resource Manager stack, which makes two new cmdlets available.

Cmdlets: Add-AzureSMVmToRM & New-AzureSmToRMDeployment

Code samples:

$vm = Get-AzureVm -ServiceName acloudservice -Name atestvm
Add-AzureSMVmToRM -VM $vm -ResourceGroupName aresourcegroupname -DiskAction CopyDisks -OutputFileFolder D:\myarmtemplates -AppendTimeStampForFiles -Deploy

Using the service name and VM name parameters directly.

Add-AzureSMVmToRM -ServiceName acloudservice -Name atestvm -ResourceGroupName aresourcegroupname -DiskAction CopyDisks -OutputFileFolder D:\myarmtemplates -AppendTimeStampForFiles -Deploy

Features

  1. Copy the VM’s disks to an ARM storage account or create a new one.
  2. Create a destination vNet and subnet for migrated VMs.
  3. Create ARM JSON templates and PS script for deployment of resources.
  4. Create an availability set if one exists at source.
  5. Create a public IP if the VM is open to the internet.
  6. Create network security groups for the source VM’s public endpoints.

Limitations

  1. Cannot migrate running VMs.
  2. Cannot migrate multiple VMs.
  3. Cannot migrate a whole ASM network.
  4. Cannot create load balanced VMs.

For more information: https://github.com/fullscale180/asm2arm

About platform supported migrations using PowerShell

Consists of standard PowerShell cmdlets from Microsoft for migrating resources to ARM (a sketch of the typical flow follows the limitations below).

Features

  1. Migration of virtual machines not in a virtual network (disruptive!).
  2. Migration of virtual machines in a virtual network (non-disruptive!).
  3. Storage accounts are cross compatible but can also be migrated.

Limitations

  1. Cloud services that contain more than one availability set.
  2. Cloud services that contain one or more availability sets as well as VMs not in an availability set.
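
As a sketch of the typical flow under these platform-supported migrations (the virtual network name is a placeholder; -Abort is available to roll back a prepared migration):

# One-off: register the classic-to-ARM migration provider against the subscription (ARM side)
Register-AzureRmResourceProvider -ProviderNamespace Microsoft.ClassicInfrastructureMigrate

# Classic (ASM) side: validate, prepare and then commit the virtual network migration
Move-AzureVirtualNetwork -Validate -VirtualNetworkName "MyClassicVNet"
Move-AzureVirtualNetwork -Prepare -VirtualNetworkName "MyClassicVNet"
Move-AzureVirtualNetwork -Commit -VirtualNetworkName "MyClassicVNet"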

About platform supported migrations using the Azure CLI

Consists of standard Azure CLI commands from Microsoft for migrating resources to ARM.

Features & Limitations

See above.

A video on the subject of platform supported migrations using PowerShell or the CLI.

About MigAz

MigAz comes with an executable which outputs reference JSON files and makes available a PowerShell script capable of migrating ASM resources and blob files to ARM mode environments.

Features

  1. MigAz exports JSON templates from REST API calls for migration.
  2. New resources are created in and disk blobs copied to their destination, all original resources left intact.
  3. Exported JSON can (and should) be reviewed and customized before use.
  4. Export creates all new resources in a single resource group.
  5. Supports using any subscription target, same or different.
  6. With JSON being at the core of ARM, templates can be used for DevOps.
  7. Can be used to clone existing environments or create new ones for testing.

A screenshot of the MigAZ frontend GUI.

About Azure Site Recovery (ASR)

ASR is a backup, continuity and recovery solution set which can also be used for migrating resources to ARM.

Features

  1. Cold backup and replication of both on-premises and off-premises virtual machines.
  2. Cross compatible between ASM and ARM deployment models.
  3. ASM virtual machines can be restored into ARM environments.


Pros and cons

ASM2ARM: Requires downtime, but can be scripted, which has potential; however, this approach only allows for the migration of one VM at a time, which is a sizable limitation.

Azure PowerShell and CLI: This approach is well rounded. It can be scripted and allows for rollbacks. Supported migration scenarios include some caveats, however, and you cannot migrate a whole vNet into an existing network.

MigAz Tool: Exports JSON of ASM resources for customization and uses a PowerShell script for deployment to ARM. Downtime is required when going to the same address space or cutting over to new services, but this is easily your best and most comprehensive option at this time.

Site Recovery: Possibly the easiest way of migrating resources and managing the overall process but requires a lot of work to set up. Downtime is required in all cases.

Migrating Sitecore 7.0 to Azure IaaS Virtual Machines – Part 1

INTRODUCTION

Recently, I had the opportunity of working on a Sitecore migration project. I was tasked with moving a third-party hosted Sitecore 7.0 instance to Azure IaaS. The task sounds simple enough but if only life was that simple. A new requirement was to improve upon the existing infrastructure by making the new Sitecore environment highly available and the fun begins right there.

To give some context, the CURRENT Sitecore environment is not highly available and has the following server topology:

  • Single Sitecore Content Delivery (CD) Instance
  • Single Sitecore Content Management (CM) Instance
  • Single SQL Server 2008 Instance for Sitecore Content and Configurations
  • Single SQL Server 2008 Instance for Sitecore Analytics

The NEW Sitecore Azure environment is highly available and has the following server topology:

  • Load-balanced Sitecore CD Instances (2 servers)
  • Single Sitecore CM Instance (single server)
  • SQL Server 2012 AlwaysOn Availability Group (AAG) for Sitecore Content (2 servers)
  • SQL Server 2012 AlwaysOn Availability Group (AAG) for Sitecore Analytics (2 servers)

In this tutorial I will walk you through the processes required to provision a brand new Azure environment and migrate Sitecore.

This tutorial will be split into three parts and they are:

  1. Part 1 – Provision the Azure Sitecore Environment
  2. Part 2 – SQL Server 2012 AlwaysOn Availability Group Configuration (coming soon)
  3. Part 3 – Sitecore Configuration and Migration (coming soon)

 

PART 1 – Provision the Azure Sitecore Environment

In Part 1 of the tutorial, we’ll look at building the foundations required for the Sitecore migration.

1. Sitecore Web Servers

  • First we need to create the two Sitecore CD instances. In the Azure Management Portal, navigate to Virtual Machines and create a new virtual machine from the gallery. Find the Windows Server 2012 R2 Datacenter template, go through the creation wizard and fill out all the required information.


  • When creating a new VM it must be assigned to a Cloud Service; you will get the opportunity to create a new Cloud Service if you don’t already have one. For load-balanced configurations, you also need to create a new Availability Set, so let’s create that too.


  • Repeat the above steps to create the second Sitecore CD instance and assign it to the same Cloud Service and Availability Set.
  • Repeat the above steps to create the Sitecore CM instance and create a new Cloud Service for it (you don’t need an Availability Set for a single instance).
  • Once all the VMs have been provisioned and configured properly, make sure they are all joined to the same AD domain. (These portal steps can also be scripted – see the sketch below.)
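
For repeatability, the VM builds above can be scripted with the classic (Service Management) Azure PowerShell module. This is a minimal sketch for one CD instance; the cloud service, availability set, VM name, size and credentials are placeholders, and the image lookup simply grabs the latest Windows Server 2012 R2 Datacenter image.

# Sketch only: build one Sitecore CD instance into a cloud service and availability set
$image = (Get-AzureVMImage |
          Where-Object { $_.ImageFamily -eq "Windows Server 2012 R2 Datacenter" } |
          Sort-Object PublishedDate -Descending |
          Select-Object -First 1).ImageName

New-AzureVMConfig -Name "SC-CD-01" -InstanceSize "Medium" -ImageName $image `
                  -AvailabilitySetName "SC-CD-AS" |
    Add-AzureProvisioningConfig -Windows -AdminUsername "scadmin" -Password "P@ssw0rd!" |
    New-AzureVM -ServiceName "sitecore-cd-svc" -Location "Australia East"

For the second CD instance, run the same block with a different VM name and drop the -Location parameter, since the cloud service will already exist.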

2. Sitecore SQL Servers

  • Now we need to create two SQL Server 2012 clusters – one for Sitecore content and the other for Sitecore analytics.
  • In the Azure Management Portal, navigate to Virtual Machines and create a new virtual machine from the gallery. Find the SQL Server 2012 SP2 Enterprise template (it will also work with SQL Server 2012 Standard and Web editions), go through the creation wizard and fill out all the required information.

Please note: by creating a new VM based on the SQL Server template, you are automatically assigned a pre-bundled SQL Server licence. If you want to use your own SQL Server licence, you’ll have to manually install SQL Server after spinning up a standard Windows Server VM.


  • During the creation process, create a new Cloud Service and Availability Set, and assign them to this VM.


  • Repeat the above steps to create the second Sitecore SQL Server instance and assign it to the same Cloud Service and Availability Set. These two SQL Servers will form the SQL Server cluster.
  • Repeat the above steps for the second SQL Server cluster, used for Sitecore Analytics.
  • Once all the VMs have been provisioned and configured properly, make sure they are all joined to the same AD domain.

3. Enable Load-balanced Sitecore Web Servers

In order to make the Sitecore CD instances highly available, we need to configure a load balancer that will handle traffic for those two Sitecore CD instances. In Azure terms, it just means adding a new endpoint, clicking on a few check boxes and you are ready to go. If only everything in life was that easy 🙂

  • In the Azure Management Portal, find your first Sitecore CD VM instance, open the Dashboard, navigate to the Endpoints tab and add a new endpoint (located at the bottom left hand corner).


  • You will need to add two new load-balanced endpoints – one for normal web traffic (port 80) and another for secure web traffic (port 443). In the creation wizard, specify the type of traffic for the endpoint – in this case HTTP on port 80. Make sure you check the Create a Load-balanced Set check box.


  • In the next screen, you’ll have to give the load-balanced set a name; leave the rest of the options at their defaults, then confirm and create.


  • Do the same for secure web traffic and create a new endpoint for HTTPS Port 443.
  • Find your second Sitecore CD VM instance, open the Dashboard, navigate to the Endpoints tab and add a new endpoint (located at the bottom left hand corner). You’ll also need to add two load-balanced endpoints – one for normal web traffic (port 80) and another for secure web traffic (port 443). But this time around, you’ll create the endpoints based on the existing Load-Balanced Sets.


  • On the next screen, give the endpoint a name, confirm and create. Repeat the same steps for the HTTPS endpoint.


  • You should now have load-balance-ready Sitecore CD instances. (The same endpoints can be added from PowerShell – see the sketch below.)
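
If you prefer to script the endpoint work, the classic Azure module can add the load-balanced endpoints directly; a minimal sketch in which the cloud service, VM names and load-balanced set name are placeholders (repeat with port 443 for the HTTPS endpoint):

# Sketch only: add the HTTP load-balanced endpoint to both CD instances
"SC-CD-01", "SC-CD-02" | ForEach-Object {
    Get-AzureVM -ServiceName "sitecore-cd-svc" -Name $_ |
        Add-AzureEndpoint -Name "HTTP" -Protocol tcp -LocalPort 80 -PublicPort 80 `
                          -LBSetName "SC-CD-LB-HTTP" -ProbeProtocol tcp -ProbePort 80 |
        Update-AzureVM
}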

 

In the next part of this tutorial, we’ll look at how to install and configure a SQL Server 2012 AlwaysOn Availability Group. Please stay tuned for Part 2 of this tutorial.

Migrating Azure Virtual Machines to another Region

I have a number of DEV/TEST Virtual Machines (VMs) deployed to Azure Regions in Southeast Asia (Singapore) and West US, as these were the closest to those of us living in Australia. Now that the new Azure Regions in Australia have been launched, it’s time to start migrating those VMs closer to home. Manually moving VMs between Regions is pretty straightforward and a number of articles already exist outlining the manual steps.

To migrate an Azure VM to another Region

  1. Shutdown the VM in the source Region
  2. Copy the underlying VHDs to storage accounts in the new Region
  3. Create OS and Data disks in the new Region
  4. Re-create the VM in the new Region.

Simple enough, but it involves tedious manual configuration, switching between tools and long waits while tens or hundreds of GBs are transferred between Regions.

What’s missing is the automation…

Automating the Migration

In this post I will share a Windows PowerShell script that automates the migration of Azure Virtual Machines between Regions. I have made the full script available via GitHub.

Here is what we are looking to automate:

[Image: Migrate-AzureVM]

  1. Shutdown and Export the VM configuration
  2. Setup async copy jobs for all attached disks and wait for them to complete
  3. Restore the VM using the saved configuration.

The Migrate-AzureVM.ps1 script assumes the following:

  • Azure Service Management certificates are installed on the machine running the script for both source and destination Subscriptions (same Subscription for both is allowed)
  • Azure Subscription profiles have been created on the machine running the script. Use Get-AzureSubscription to check.
  • Destination Storage Accounts, Cloud Services, VNets etc. have already been created.

The script accepts the following input parameters:

.\Migrate-AzureVM.ps1 -SourceSubscription "MySub" `
                      -SourceServiceName "MyCloudService" `
                      -VMName "MyVM" `
                      -DestSubscription "AnotherSub" `
                      -DestStorageAccountName "mydeststorage" `
                      -DestServiceName "MyDestCloudService" `
                      -DestVNETName "MyRegionalVNet" `
                      -IsReadOnlySecondary $false `
                      -Overwrite $false `
                      -RemoveDestAzureDisk $false

SourceSubscription – Name of the source Azure Subscription
SourceServiceName – Name of the source Cloud Service
VMName – Name of the VM to migrate
DestSubscription – Name of the destination Azure Subscription
DestStorageAccountName – Name of the destination Storage Account
DestServiceName – Name of the destination Cloud Service
DestVNETName – Name of the destination VNet (blank if none used)
IsReadOnlySecondary – Indicates if we are copying from the source storage account’s read-only secondary location
Overwrite – Indicates if we are overwriting if the VHD already exists in the destination storage account
RemoveDestAzureDisk – Indicates if we remove an Azure Disk if it already exists in the destination disk repository

To ensure that the Virtual Machine configuration is not lost (and to avoid having to re-create it by hand) we must first shut down the VM and export the configuration, as shown in the PowerShell snippet below.

# Set source subscription context
Select-AzureSubscription -SubscriptionName $SourceSubscription -Current

# Stop VM
Stop-AzureVMAndWait -ServiceName $SourceServiceName -VMName $VMName

# Export VM config to temporary file
$exportPath = "{0}\{1}-{2}-State.xml" -f $ScriptPath, $SourceServiceName, $VMName
Export-AzureVM -ServiceName $SourceServiceName -Name $VMName -Path $exportPath

Once the VM configuration is safely exported and the machine shutdown we can commence copying the underlying VHDs for the OS and any data disks attached to the VM. We’ll want to queue these up as jobs and kick them off asynchronously as they will take some time to copy across.

# Get the list of Azure disks that are currently attached to the VM
$disks = Get-AzureDisk | ? { $_.AttachedTo.RoleName -eq $VMName }

# Loop through each disk
foreach($disk in $disks)
{
    try
    {
        # Start the async copy of the underlying VHD to
        # the corresponding destination storage account
        $copyTasks += Copy-AzureDiskAsync -SourceDisk $disk
    }
    catch {}   # Support for existing VHD in destination storage account
}

# Monitor async copy tasks and wait for all to complete
WaitAll-AsyncCopyJobs

Tip: You’ll probably want to run this overnight. If you are copying between Storage Accounts within the same Region, copy times can vary between 15 mins and a few hours; it all depends on which storage cluster the accounts reside on. Michael Washam provides a good explanation of this and shows how you can check if your accounts live on the same cluster. Between Regions will always take longer (and don’t forget it will incur data egress charges!)… see below for a nice work-around that could save you heaps of time if you happen to be migrating within the same Geo.

You’ll notice the script also supports being re-run as you’ll have times when you can’t leave the script running during the async copy operation. A number of switches are also provided to assist when things might go wrong after the copy has completed.

Now that we have our VHDs in our destination Storage Account we can begin putting our VM back together again.

We start by re-creating the logical OS and Azure Data disks that take a lease on our underlying VHDs. So we don’t get clashes, I use a convention based on Cloud Service name (which must be globally unique), VM name and disk number.

# Set destination subscription context
Select-AzureSubscription -SubscriptionName $DestSubscription -Current

# Load VM config
$vmConfig = Import-AzureVM -Path $exportPath

# Loop through each disk again
$diskNum = 0
foreach($disk in $disks)
{
    # Construct new Azure disk name as [DestServiceName]-[VMName]-[Index]
    $destDiskName = "{0}-{1}-{2}" -f $DestServiceName,$VMName,$diskNum   

    Write-Log "Checking if $destDiskName exists..."

    # Check if an Azure Disk already exists in the destination subscription
    $azureDisk = Get-AzureDisk -DiskName $destDiskName `
                              -ErrorAction SilentlyContinue `
                              -ErrorVariable LastError
    if ($azureDisk -ne $null)
    {
        Write-Log "$destDiskName already exists"

        if ($RemoveDestAzureDisk -eq $true)
        {
            # Remove the disk from the repository
            Remove-AzureDisk -DiskName $destDiskName

            Write-Log "Removed AzureDisk $destDiskName"
            $azureDisk = $null
        }
        # else keep the disk and continue
    }

    # Determine media location
    $container = ($disk.MediaLink.Segments[1]).Replace("/","")
    $blobName = $disk.MediaLink.Segments | Where-Object { $_ -like "*.vhd" }
    $destMediaLocation = "http://{0}.blob.core.windows.net/{1}/{2}" -f $DestStorageAccountName,$container,$blobName

    # Attempt to add the azure OS or data disk
    if ($disk.OS -ne $null -and $disk.OS.Length -ne 0)
    {
        # OS disk
        if ($azureDisk -eq $null)
        {
            $azureDisk = Add-AzureDisk -DiskName $destDiskName `
                                      -MediaLocation $destMediaLocation `
                                      -Label $destDiskName `
                                      -OS $disk.OS `
                                      -ErrorAction SilentlyContinue `
                                      -ErrorVariable LastError
        }

        # Update VM config
        $vmConfig.OSVirtualHardDisk.DiskName = $azureDisk.DiskName
    }
    else
    {
        # Data disk
        if ($azureDisk -eq $null)
        {
            $azureDisk = Add-AzureDisk -DiskName $destDiskName `
                                      -MediaLocation $destMediaLocation `
                                      -Label $destDiskName `
                                      -ErrorAction SilentlyContinue `
                                      -ErrorVariable LastError
        }

        # Update VM config
        #   Match on source disk name and update with dest disk name
        $vmConfig.DataVirtualHardDisks.DataVirtualHardDisk | ? { $_.DiskName -eq $disk.DiskName } | ForEach-Object {
            $_.DiskName = $azureDisk.DiskName
        }
    }              

    # Next disk number
    $diskNum = $diskNum + 1
}
# Restore VM
$existingVMs = Get-AzureService -ServiceName $DestServiceName | Get-AzureVM
if ($existingVMs -eq $null -and $DestVNETName.Length -gt 0)
{
    # Restore first VM to the cloud service specifying VNet
    $vmConfig | New-AzureVM -ServiceName $DestServiceName -VNetName $DestVNETName -WaitForBoot
}
else
{
    # Restore VM to the cloud service
    $vmConfig | New-AzureVM -ServiceName $DestServiceName -WaitForBoot
}

# Startup VM
Start-AzureVMAndWait -ServiceName $DestServiceName -VMName $VMName

For those of you looking at migrating VMs between Regions within the same Geo and have GRS enabled, I have also provided an option to use the secondary storage location of the source storage account.

To support this you will need to enable RA-GRS (read access) and wait a few minutes for access to be made available by the storage service. Copying your VHDs will be very quick (in comparison to egress traffic) as the copy operation will use the secondary copy in the same region as the destination. Nice!
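
Enabling read access on the source account is a single call with the classic PowerShell module; a minimal sketch (the storage account name is a placeholder):

# Sketch only: switch the source storage account to read-access geo-redundant storage
Set-AzureStorageAccount -StorageAccountName "mysourcestorage" -Type "Standard_RAGRS"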

Enabling RA-GRS can be done at any time but you will be charged for a minimum of 30 days at the RA-GRS rate even if you turn it off after the migration.

# Check if we are copying from a RA-GRS secondary storage account
if ($IsReadOnlySecondary -eq $true)
{
    # Append "-secondary" to the media location URI to reference the RA-GRS copy
    $sourceUri = $sourceUri.Replace($srcStorageAccount, "$srcStorageAccount-secondary")
}

Don’t forget to clean up your source Cloud Services and VHDs once you have tested the migrated VMs are running fine so you don’t incur ongoing charges.

Conclusion

In this post I have walked through the main sections of a Windows PowerShell script I have developed that automates the migration of an Azure Virtual Machine to another Azure data centre. The full script has been made available on GitHub. The script also supports a number of other migration scenarios (e.g. cross Subscription, cross Storage Account, etc.) and will be a handy addition to your Microsoft Azure DevOps Toolkit.