Migrating VirtualBox VDI Virtual Machines to Azure

Overview

Over the years I’ve transitioned through a number of laptops, and for whatever reason they never fully get put out to pasture. Two specific laptops are used semi-regularly for functions associated with a few virtual machines they hold. Over the last 10 years or so, I’ve been a big proponent of VirtualBox. Its footprint and functionality aligned with my needs. The downside these days is needing to sometimes carry two laptops just to use an application or two contained inside a virtual machine on VirtualBox.

It’s 2017 and time to get with the times: dedicate an evening to working through the process of migrating those VMs.

DISCLAIMER and CONSIDERATIONS: Keep in mind that if you are migrating legacy operating systems, you’ll need a method to remote into them once they are in Azure. Check their configuration before you convert and migrate them. Do they have firewalls enabled? Is the network interface on the VM configured for dynamic or static addressing? Do the VMs have remote access configured (VNC, RDP, SSH)? As they are also likely to be less secure, my process below includes a Network Security Group as part of the Azure Resource Group with no rules specified. You’ll need to add inbound rules for the method you’ll be using to remote into your virtual machine, and I STRONGLY suggest locking those rules down to a single host or your home subnet.
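
For example, once the Network Security Group exists (it gets created as part of the VM build later in this post), a rule like the following permits RDP from a single trusted address; a sketch using the AzureRM cmdlets, with the group name and source address as placeholders:

# Example: allow RDP (3389) inbound from a single trusted IP only
$nsg = Get-AzureRmNetworkSecurityGroup -ResourceGroupName "<resource-group>" -Name "<nsg-name>"
$nsg | Add-AzureRmNetworkSecurityRuleConfig -Name "Allow-RDP-Home" -Protocol Tcp -Direction Inbound -Priority 100 `
      -SourceAddressPrefix "203.0.113.50/32" -SourcePortRange "*" -DestinationAddressPrefix "*" -DestinationPortRange "3389" -Access Allow
$nsg | Set-AzureRmNetworkSecurityGroup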

The VM Conversion Process

This blog post covers the migration of a Windows Virtual Machine in VDI format from VirtualBox on SUSE Linux to Azure.

  • With the VM started, uninstall the VirtualBox Guest Additions from the virtual machine

RemoveExtensions2

  • Shutdown the VM
  • In VirtualBox Manager, select the VM and open Settings
    1. Select Storage. If the VBoxGuestAdditions CD/DVD is attached then remove it.
    2. Take note of the VM’s disk(s) location (WinXPv2.vdi in my case) and naming. Mine just had a single hard disk. You’ll need the path for the conversion utility.

RemoveVBoxAdditions

RemoveAdditionsDVD

  • VirtualBox includes a utility named vboxmanage. We can use that to convert the VM virtual hard disk from VDI to VHD format. Simply run vboxmanage clonehd <source.vdi> <target.vhd> --format VHD --variant Fixed (a complete example follows the screenshots below)
    • You will need to make sure you have enough space on your laptop hard disk for the VHD, which will be about the same size as your VDI hard disk
      • If you don’t, on Linux you’ll get a slightly cryptic message like
        • Could not create the clone medium (VERR_EOF)
        • VBOX_E_FILE_ERROR (0x80bb0004)
      • the --variant Fixed switch is not shown in the virtual disk conversion screenshot (three images further down the page). One of my other VMs was Dynamic. The variant needs to be Fixed for the VHD to be associated with a VM in Azure
      • Below shows determining an existing disk that is Dynamic and needs to be converted to Fixed

DynamicDisk

  • Below shows determining an existing disk that is Fixed and doesn’t need to be converted

FixedDisk

  • Converting the VDI virtual disk to VHD

Convert60Percent
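
For reference, the complete commands on the Linux host look like the following; the paths are examples from my environment, so substitute your own:

# Check whether an existing disk is Dynamic or Fixed (see "Format variant" in the output)
vboxmanage showhdinfo "VirtualBox VMs/WinXP/WinXPv2.vdi"

# Convert the VDI to a fixed-size VHD
vboxmanage clonehd "VirtualBox VMs/WinXP/WinXPv2.vdi" "VirtualBox VMs/WinXP/WinXPv2.vhd" --format VHD --variant Fixed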

Preparing our Azure Environment for our new Virtual Machine

  • Whilst the conversion was taking place I logged into the Azure Portal and created a new Resource Group for my VM to go into. I also created a new Storage Account in that Resource Group to put the VM’s VHD into. Basically I’m keeping these specific individual VMs that serve a very specific purpose in their own little compartment.

RGandSG

  • Using the fantastic Azure Storage Explorer, which works on Linux, Mac and Windows, I created a Blob Container named vhds in my newly created Storage Account.

CreateBlobContainer

Upload the Converted Virtual Hard Disk

  • By now my virtual disk had converted. Using the Azure Storage Explorer I uploaded my converted virtual disk. NOTE: Make sure you have the ‘upload vhd/vhdx files as Page Blobs’ option selected.

UploadVHD

For a couple of other VMs I wrote a little PowerShell script to upload the VHDs to blob storage, along the lines of the sketch below.
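
A minimal version, assuming the AzureRM module; the account, container and paths are placeholders (Add-AzureRmVhd conveniently uploads the file as a Page Blob for you):

# Upload a local VHD to the vhds container as a Page Blob
$rgName = "<resource-group>"
$destUri = "https://<storage-account>.blob.core.windows.net/vhds/WinXPv2.vhd"
Add-AzureRmVhd -ResourceGroupName $rgName -Destination $destUri -LocalFilePath "C:\VHDs\WinXPv2.vhd" -NumberOfUploaderThreads 4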

Create the Azure VM

The following script follows on from the Resource Group, Storage Account and Virtual Machine virtual disk we created and uploaded to Azure, and creates the VM attached to that virtual disk.

All the variables are up front: we create the Network Security Group, Subnet and Virtual Network, then the Public IP and Network Interface. Finally we define the details for the VM with the networking and the uploaded VHD before creating the VM.
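
In outline it looks like the following sketch; the names, VM size and address ranges here are examples rather than my exact values, so adjust to suit:

# Variables - all names and sizes here are examples
$rgName = "<resource-group>"
$location = "australiaeast"
$vmName = "WinXPv2"
$vhdUri = "https://<storage-account>.blob.core.windows.net/vhds/WinXPv2.vhd"

# Network Security Group (no rules - add your own remote access rules)
$nsg = New-AzureRmNetworkSecurityGroup -ResourceGroupName $rgName -Location $location -Name "$vmName-nsg"

# Subnet and Virtual Network
$subnet = New-AzureRmVirtualNetworkSubnetConfig -Name "default" -AddressPrefix "10.0.0.0/24" -NetworkSecurityGroup $nsg
$vnet = New-AzureRmVirtualNetwork -ResourceGroupName $rgName -Location $location -Name "$vmName-vnet" -AddressPrefix "10.0.0.0/16" -Subnet $subnet

# Public IP and Network Interface
$pip = New-AzureRmPublicIpAddress -ResourceGroupName $rgName -Location $location -Name "$vmName-pip" -AllocationMethod Dynamic
$nic = New-AzureRmNetworkInterface -ResourceGroupName $rgName -Location $location -Name "$vmName-nic" -SubnetId $vnet.Subnets[0].Id -PublicIpAddressId $pip.Id

# Define the VM and attach the uploaded VHD as its OS disk
$vm = New-AzureRmVMConfig -VMName $vmName -VMSize "Basic_A1"
$vm = Add-AzureRmVMNetworkInterface -VM $vm -Id $nic.Id
$vm = Set-AzureRmVMOSDisk -VM $vm -Name "$vmName-osdisk" -VhdUri $vhdUri -CreateOption Attach -Windows

# Create the VM
New-AzureRmVM -ResourceGroupName $rgName -Location $location -VM $vm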

And we’re done. VM created and started.

VMCreated
Happy days and goodbye to a number of old laptops.

How to make a copy of a virtual machine running Windows in Azure

I was called upon recently to help a customer create copies of some of their Windows virtual machines. The idea was to quickly deploy copies of these hosts at any time as opposed to using a system image or point in time copy.

The following PowerShell will therefore allow you to make a copy or clone of a Windows virtual machine using a copy of its disks in Azure Resource Manager mode.

Create a new virtual machine from a copy of the disks of another

Having finalized the configuration of the source virtual machine the steps required are as follows.

  1. Stop the source virtual machine, then using Storage Explorer copy its disks to a new location and rename them in line with the target name of the new virtual machine.

  2. Run the following in PowerShell making the required configuration changes.

Login-AzureRmAccount
Get-AzureRmSubscription -SubscriptionName "<subscription-name>" | Select-AzureRmSubscription

$location = (get-azurermlocation | out-gridview -passthru).location
$rgName = "<resource-group>"
$vmName = "<vm-name>"
$nicname = "<nic-name>"
$subnetID = "<subnetID>"
$datadisksize = "<sizeinGB>"
$vmsize = (Get-AzureRmVMSize -Location $location | Out-GridView -PassThru).Name
$osDiskUri = "https://<storage-acccount>.blob.core.windows.net/vhds/<os-disk-name.vhd>"
$dataDiskUri = "https://<storage-acccount>.blob.core.windows.net/vhds/<data-disk-name.vhd>"

Notes: The URIs above belong to the copies, not the original disks, and the SubnetID refers to the subnet’s resource ID.

$nic = New-AzureRmNetworkInterface -Name $nicname -ResourceGroupName $rgName -Location $location -SubnetId $subnetID
$vmConfig = New-AzureRmVMConfig -VMName $vmName -VMSize $vmsize
$vm = Add-AzureRmVMNetworkInterface -VM $vmConfig -Id $nic.Id
$osDiskName = $vmName + "os-disk"
$vm = Set-AzureRmVMOSDisk -VM $vm -Name $osDiskName -VhdUri $osDiskUri -CreateOption attach -Windows
$dataDiskName = $vmName + "data-disk"
$vm = Add-AzureRmVMDataDisk -VM $vm -Name $dataDiskName -VhdUri $dataDiskUri -Lun 0 -Caching 'none' -DiskSizeInGB $datadisksize -CreateOption attach
New-AzureRmVM -ResourceGroupName $rgName -Location $location -VM $vm

List virtual machines in a resource group.

$vmList = Get-AzureRmVM -ResourceGroupName $rgName
$vmList.Name

Having run the above, log on to the new host in order to make the required changes.

Exchange in Azure: NIC disabled/in error state

I recently had the need to build my own Exchange server within Azure and connect it to my Office 365 tenant.
I loosely followed the steps in this Microsoft article: https://technet.microsoft.com/library/mt733070(v=exchg.160).aspx to get my Azure (ARM) VMs and infrastructure deployed.

I initially decided to utilise an A1 Azure VM for my Exchange server to reduce my costs; however, upon successfully installing Exchange it was extremely slow, and basic things like the EAC and creating mailboxes would not function correctly due to the lack of resources. I found that resizing my VM to an A3 Azure VM resolved my issues and Exchange then functioned correctly.

The issues began after I powered down the Azure VM to a stopped (deallocated) state.

I found that after I powered the VM back up I could no longer connect to it, and once I enabled boot diagnostics I discovered that the NIC was disabled/in an error state.

After going down multiple troubleshooting paths such as redeploying the VM, resizing the VM and changing subnets, I discovered that patience was the key: after about 20 minutes the NIC re-enabled itself and all was well.

I have run multiple tests with an A3 Azure VM and found that in some cases it could take anywhere from 20 to 40 minutes to successfully boot up, with 10 minutes being the quickest boot time.

Hopefully this assists someone out there banging their head against the wall trying to get this to work!

Hands Free VM Management with Azure Automation and Resource Manager – Part 2

In this two part series, I am looking at how we can leverage Azure Automation and Azure Resource Manager to schedule the shutting down of tagged Virtual Machines in Microsoft Azure.

  • In Part 1 we walked through tagging resources using the Azure Resource Manager PowerShell module
  • In Part 2 we will set up Azure Automation to schedule a runbook to execute nightly and shut down tagged resources.

Azure Automation Runbook

At the time of writing, the tooling support around Azure Automation can politely be described as hybrid. For starters, there is no support for Azure Automation in the preview portal, and the Azure command line tools only support basic automation account and runbook management, leaving the current management portal as the most complete tool for the job.

As I mentioned in Part 1, Azure Automation does not yet support the new Azure Resource Manager PowerShell module out-of-the-box, so we need to import that module ourselves. We will then set up the service management credentials that our runbook will use (recall the ARM module doesn’t use certificates anymore; we need to supply user account credentials).

We then create our PowerShell workflow to query for tagged virtual machine resources and ensure they are shut down. Lastly, we set up our schedule and enable the runbook… let’s get cracking!

When we first create an Azure Automation account, the Azure PowerShell module is already imported as an Asset for us (v0.8.11 at the time of writing) as shown below.

Clean Azure Automation Screen.
To import the Azure Resource Manager module we need to zip it up and upload it to the portal using the following process. In Windows Explorer on your PC:

  1. Navigate to the Azure PowerShell modules folder (typically C:\Program Files (x86)\Microsoft SDKs\Azure\PowerShell\ResourceManager\AzureResourceManager)
  2. Zip the AzureResourceManager sub-folder.

Local folder to zip.
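
If you are running PowerShell 5.0 or later, Compress-Archive can create the zip for you; a one-line sketch (the destination path is an example):

# Zip the AzureResourceManager module folder ready for upload
Compress-Archive -Path "C:\Program Files (x86)\Microsoft SDKs\Azure\PowerShell\ResourceManager\AzureResourceManager" -DestinationPath "C:\Temp\AzureResourceManager.zip"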

In the Automation pane of the current Azure Portal:

  1. Select an existing Automation account (or create a new one)
  2. Navigate to the Asset tab and click the Import Module button
  3. Browse to the AzureResourceManager.zip file you created above.

ARM Module Import

After the import completes (this usually takes a few minutes) you should see the Azure Resource Manager module imported as an Asset in the portal.

ARM Module Imported

We now need to set up the credentials the runbook will use, and for this we will create a new user in Azure Active Directory (AAD) and add that user as a co-administrator of our subscription (we need to query resource groups and shut down our virtual machines).

In the Azure Active Directory pane:

  1. Add a new user of type New user in your organisation
  2. Enter a meaningful user name to distinguish it as an automation account
  3. Select User as the role
  4. Generate a temporary password, which we’ll need to change later.

Tell us about this user.

Now go to the Settings pane and add the new user as a co-administrator of your subscription:

Add user as co-admin.

Note: Azure generated a temporary password for the new user. Log out and sign in as the new user to get prompted to change the password and confirm the user has service administration permissions on your subscription.

We now need to add our user’s credentials to our Azure Automation account assets.

In the Automation pane:

  1. Select the Automation account we used above
  2. Navigate to the Asset tab and click on the Add Setting button on the bottom toolbar
  3. Select Add Credential
  4. Choose Windows PowerShell Credential from type dropdown
  5. Enter a meaningful name for the asset (e.g. runbook-account)
  6. Enter username and password of the AAD user we created above.

Runbook credentials

With the ARM module imported and credentials setup we can now turn to authoring our runbook. The completed runbook script can be found on Github. Download the script and save it locally.

Open the script in PowerShell ISE and change the Automation settings to match the name you gave to your Credential asset created above and enter your Azure subscription name.

workflow AutoShutdownWorkflow
{
    #$VerbosePreference = "continue"

    # Automation Settings
    $pscreds = Get-AutomationPSCredential -Name "runbook-account"
    $subscriptionName = "[subscription name here]"
    $tagName = "autoShutdown"

    # Authenticate using WAAD credentials
    Add-AzureAccount -Credential $pscreds | Write-Verbose 

    # Set subscription context
    Select-AzureSubscription -SubscriptionName $subscriptionName | Write-Verbose

    Write-Output "Checking for resources with $tagName flag set..."

    # Get virtual machines within tagged resource groups
    $vms = Get-AzureResourceGroup -Tag @{ Name=$tagName; Value=$true } | `
    Get-AzureResource -ResourceType "Microsoft.ClassicCompute/virtualMachines"

    # Shutdown all VMs tagged
    $vms | ForEach-Object {
        Write-Output "Shutting down $($_.Name)..."
        # Gather resource details
        $resource = $_
        # Stop VM
        Get-AzureVM | ? { $_.Name -eq $resource.Name } | Stop-AzureVM -Force
    }

    Write-Output "Completed $tagName check"
}

Walking through the script, the first thing we do is gather the credentials we will use to manage our subscription. We then authenticate using those credentials and select the Azure subscription we want to manage. Next we gather all virtual machine resources in resource groups that have been tagged with autoShutdown.

We then loop through each VM resource and force a shutdown. One thing you may notice about our runbook is that we don’t explicitly “switch” between the Azure module and Azure Resource Management module as we must when running in PowerShell.

This behaviour may change over time as the Automation service is enhanced to support ARM out-of-the-box, but for now the approach appears to work fine… at least on my “cloud” [developer joke].

We should now have our modified runbook script saved locally and ready to be imported into the Azure Automation account we used above. We will use the Azure Service Management cmdlets to create and publish the runbook, create the schedule asset and link it to our runbook.

Copy the following script into a PowerShell ISE session and configure it to match your subscription and location of the workflow you saved above. You may need to refresh your account credentials using Add-AzureAccount if you get an authentication error.

$automationAccountName = "[your account name]"
$runbookName = "autoShutdownWorkflow"
$scriptPath = "c:\temp\AutoShutdownWorkflow.ps1"
$scheduleName = "ShutdownSchedule"

# Create a new runbook
New-AzureAutomationRunbook -AutomationAccountName $automationAccountName -Name $runbookName

# Import the autoShutdown runbook from a file
Set-AzureAutomationRunbookDefinition -AutomationAccountName $automationAccountName -Name $runbookName -Path $scriptPath -Overwrite

# Publish the runbook
Publish-AzureAutomationRunbook -AutomationAccountName $automationAccountName -Name $runbookName

# Create the schedule asset (daily at 1:00 AM starting tomorrow)
New-AzureAutomationSchedule -AutomationAccountName $automationAccountName -Name $scheduleName -StartTime $([DateTime]::Today.Date.AddDays(1).AddHours(1)) -DayInterval 1

# Link the schedule to our runbook
Register-AzureAutomationScheduledRunbook -AutomationAccountName $automationAccountName -Name $runbookName -ScheduleName $scheduleName

Switch over to the portal and verify your runbook has been created and published successfully…

Runbook published.

…drilling down into details of the runbook, verify the schedule was linked successfully as well…

Linked Schedule.

To start your runbook (outside of the schedule) navigate to the Author tab and click the Start button on the bottom toolbar. Wait for the runbook to complete and click on the View Job icon to examine the output of the runbook.

Manual Start

Run Output
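
If you prefer not to click through the portal, the runbook can also be started with the service management cmdlets; a sketch, reusing the variables from the deployment script above:

# Start the runbook on demand; returns a job object you can inspect
Start-AzureAutomationRunbook -AutomationAccountName $automationAccountName -Name $runbookName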

Note: Create a draft version of your runbook to troubleshoot failing runbooks using the built-in testing features. Refer to this link for details on testing your Azure Automation runbooks.

Our schedule will now execute the runbook each night to ensure virtual machine resources tagged with autoShutdown are always shut down. Navigating to the Dashboard tab of the runbook will display the runbook history.

Runbook Dashboard

Considerations

1. The AzureResourceManager module is not officially supported out-of-the-box yet, so a breaking change may come down the release pipeline that will require our workflow to be modified. The switch behaviour will be the most likely candidate. Watch that space!

2. Azure Automation is not available in all Azure Regions. At the time of writing it is available in East US, West EU, Japan East and Southeast Asia. However, region affinity isn’t a primary concern as we are merely invoking the service management API where our resources are located. Where we host our automation service is not as important from a performance point of view, but it may factor into organisational security policy constraints.

3. Azure Automation comes in two tiers (Free and Basic). Free provides 500 minutes of job execution per month. The Basic tier charges $0.002 USD per minute for unlimited minutes per month (e.g. 1,000 job execution minutes will cost $2). Usage details will be displayed on the Dashboard of your Azure Automation account.

Account Usage

In this two part post we have seen how we can tag resource groups to provide more granular control when managing resource lifecycles and how we can leverage Azure Automation to schedule the shutting down of these tagged resources to operate our infrastructure in Microsoft Azure more efficiently.

Microsoft Azure Pricing Calculator

Originally posted in Lucian’s blog over at www.clouduccino.com.

Whether you’re wanting to deploy a new workload in Microsoft Azure, wanting to extend an existing workload via a hybrid scenario or, like me, wanting to use Azure outside of work to gain more knowledge and experience, the pay-as-you-go charge model can often intimidate and even deter many from using a cloud service like Azure. From a lab or dev point of view, it is all well and good to dabble in Azure at the various tiers of engagement, but at the end of the day you could be left with a credit card bill a lot larger than expected. Enter the Microsoft Azure Pricing Calculator, where you can accurately estimate your potential usage for any given service.

2015-03-16-APC-001


Hands Free VM Management with Azure Automation and Resource Manager – Part 1

Over the past six months, Microsoft have launched a number of features in Azure to enable you to better manage your resources hosted there.

In this two part series, I will show how we can leverage two of these new features – Azure Automation and Azure Resource Manager – to schedule the shutting down of tagged Virtual Machines in Microsoft Azure.

  • In Part 1 we will walk through tagging resources using the Azure Resource Manager features and
  • In Part 2 we will set up Azure Automation to schedule a runbook to execute nightly and shut down tagged VM resources.

About Azure Resource Manager and Azure Automation

Azure Resource Manager (ARM) provides the capability to create reusable deployment templates and provide a common way to manage the resources that make up your deployment. ARM is the foundation of the ALM and DevOps tooling being developed by Microsoft and investments in it are ongoing.

Another key service in this space is the Azure Automation service which provides the ability to create, monitor, manage, and deploy resources using runbooks based on Windows PowerShell workflows. Automation assets can be defined to share credentials, modules, schedules and runtime configuration between runbooks and a runbook gallery provides contributions from both Microsoft and the community that can be imported and used within your Automation account.

Operating infrastructure effectively in the cloud is the new holy grail of today’s IT teams: scaling out to meet demand and back in to reduce unnecessary operating costs, running only when needed and de-allocating when not. Automating elastic scale is one area cloud providers and 3rd party tooling ISVs have invested in, and the capability is pretty solid.

The broader resource lifecycle management story is not so compelling. Shutting down resources when they are not needed is still largely a manual process unless heavy investments are made in infrastructure monitoring and automation tools, or via 3rd party SaaS offerings such as Azure Watch.

Objectives

We will start by identifying and tagging the resource groups we want to ensure are automatically shut down overnight. Following this we will author an Azure Automation runbook that looks for tagged resources and shuts them down, configuring it to run every night.

Setting this up is not as straightforward as you would think (and we will need to bend the rules a little), so I will spend some time going through these steps in detail.

Tagging Resource Groups

Azure Resource Management features are currently only available via the preview portal and command line tools (PowerShell and cross-platform CLI). Using the preview portal we can manually create tags and assign them to our resource groups. A decent article on the Azure site walks you through how to perform these steps. Simon also introduced the ARM command line tools in his post a few months ago.

Performing these tasks in PowerShell is a little different to previous service management tasks you may be used to. For starters, the Azure Resource Manager cmdlets cannot be used in the same PS session as the “standard” Azure cmdlets; we must now switch between the two modes. The other main difference is the way we authenticate to the ARM API. We no longer use service management certificates with expiry periods of two to three years, but user accounts backed by Azure Active Directory (AAD). These user tokens have expiry periods of two to three days, so we must constantly refresh them.

Note: Switching “modes” just removes one set of Azure modules and imports another. So switching to Azure Resource Manager mode removes the Azure module and imports the Azure Resource Manager and Azure Profile modules. We will need this understanding when we set up our Azure Automation later on.

Let’s walk through the steps to switch into ARM mode, add our user account and tag the resource groups we want to automate.

Before you start, ensure you have the latest drop of the Azure PowerShell cmdlets from GitHub. With the latest version installed, switch to Azure Resource Manager mode and add your AAD account to your local PS profile:

# Switch to Azure Resource Manager mode
Switch-AzureMode AzureResourceManager

# Sign in to add our account to our PS profile
Add-AzureAccount

# Check our profile has the newly gathered credentials
Get-AzureAccount

Add-AzureAccount will prompt us to sign-in using either our Microsoft Account or Organisational account with service management permissions. Get-AzureAccount will list the profiles we have configured locally.

Now that we are in ARM mode and authenticated, we can examine and manage our tag taxonomy using the Get-AzureTag cmdlet:

#Display current taxonomy
Get-AzureTag

This displays all tags in our taxonomy and how many resource groups they are assigned to. To add tags to our taxonomy we use the New-AzureTag cmdlet. To remove, we use the Remove-AzureTag cmdlet. We can only remove tags that are not assigned to resources (that is, have a zero count).

In this post we want to create and use a tag named “autoShutdown” that we can then assign a value of “True” against resource groups that we want to automatically shutdown.

# Add Tag to Taxonomy
New-AzureTag -Name "autoShutdown"

azure_tag_-_get-azuretag

Now, let’s tag our resource groups. This can be performed using the preview portal as mentioned above, but if you have many resource groups to manage, PowerShell is still the best approach. To manage resource groups using the ARM module we use the following cmdlets:

azure_tag_-_azure_resource_groups

# Assign tag to DEV/TEST resource groups
Get-AzureResourceGroup | ? { $_.ResourceGroupName -match "dev-cloud" } | `
    Set-AzureResourceGroup -Tag @( @{ Name="autoShutdown"; Value=$true } )

The above statement finds all resource groups that contain the text “dev-cloud” in the name (a naming convention I have adopted; yours will be different) and sets the autoShutdown tag with a value of True on each resource group. If we list the resource groups using the Get-AzureResourceGroup cmdlet we can see the results of the tagging.

azure_tag_-_tagged_resource_groups

Note: Our resource group contains many resources. However, for our purposes we are only interested in managing our virtual machines. These resource types shown above will come in handy when we look to filter the tagged resources to only return our VMs.

We can also un-tag resource groups using the following statement

# Reset tags on DEV/TEST resource groups
Get-AzureResourceGroup | ? { $_.ResourceGroupName -match "dev-cloud" } | `
    Set-AzureResourceGroup -Tag @{}

We can see these tagged resource groups in the preview portal as well…

azure_tag_-_portal_tags

…and if we drill into one of the resource groups we can see our tag has the value True assigned

azure_tag_-_resource_group_tags

In this post we have tagged our resource groups so that they can be managed separately. In Part 2 of the post, we will move on to creating an Azure Automation runbook to automate the shutting down of our tagged resources on a daily schedule.

Considerations

At the time of writing there is no support in the preview portal for tagging individual resources directly (just Resource Groups). The PowerShell cmdlets suggest it is possible; however, I always get an error indicating that setting the tags property is not permitted yet. This may change in the near future and provide more granular control.

Migrating Azure Virtual Machines to another Region

I have a number of DEV/TEST Virtual Machines (VMs) deployed to Azure Regions in Southeast Asia (Singapore) and West US, as these were the closest to those of us living in Australia. Now that the new Azure Regions in Australia have been launched, it’s time to start migrating those VMs closer to home. Manually moving VMs between Regions is pretty straightforward, and a number of articles already exist outlining the manual steps.

To migrate an Azure VM to another Region

  1. Shutdown the VM in the source Region
  2. Copy the underlying VHDs to storage accounts in the new Region
  3. Create OS and Data disks in the new Region
  4. Re-create the VM in the new Region.

Simple enough, but it involves tedious manual configuration, switching between tools, and long waits while tens or hundreds of GBs are transferred between Regions.

What’s missing is the automation…

Automating the Migration

In this post I will share a Windows PowerShell script that automates the migration of Azure Virtual Machines between Regions. I have made the full script available via GitHub.

Here is what we are looking to automate:

Migrate-AzureVM

  1. Shutdown and Export the VM configuration
  2. Setup async copy jobs for all attached disks and wait for them to complete
  3. Restore the VM using the saved configuration.

The Migrate-AzureVM.ps1 script assumes the following:

  • Azure Service Management certificates are installed on the machine running the script for both source and destination Subscriptions (same Subscription for both is allowed)
  • Azure Subscription profiles have been created on the machine running the script. Use Get-AzureSubscription to check.
  • Destination Storage Accounts, Cloud Services, VNets etc. have already been created.

The script accepts the following input parameters:

.\Migrate-AzureVM.ps1 -SourceSubscription "MySub" `
                      -SourceServiceName "MyCloudService" `
                      -VMName "MyVM" `
                      -DestSubscription "AnotherSub" `
                      -DestStorageAccountName "mydeststorage" `
                      -DestServiceName "MyDestCloudService" `
                      -DestVNETName "MyRegionalVNet" `
                      -IsReadOnlySecondary $false `
                      -Overwrite $false `
                      -RemoveDestAzureDisk $false

SourceSubscription: Name of the source Azure Subscription
SourceServiceName: Name of the source Cloud Service
VMName: Name of the VM to migrate
DestSubscription: Name of the destination Azure Subscription
DestStorageAccountName: Name of the destination Storage Account
DestServiceName: Name of the destination Cloud Service
DestVNETName: Name of the destination VNet (blank if none used)
IsReadOnlySecondary: Indicates if we are copying from the source storage account's read-only secondary location
Overwrite: Indicates if we overwrite the VHD if it already exists in the destination storage account
RemoveDestAzureDisk: Indicates if we remove an Azure Disk if it already exists in the destination disk repository

To ensure that the Virtual Machine configuration is not lost (and to avoid having to re-create it by hand) we must first shut down the VM and export the configuration, as shown in the PowerShell snippet below.

# Set source subscription context
Select-AzureSubscription -SubscriptionName $SourceSubscription -Current

# Stop VM
Stop-AzureVMAndWait -ServiceName $SourceServiceName -VMName $VMName

# Export VM config to temporary file
$exportPath = "{0}\{1}-{2}-State.xml" -f $ScriptPath, $SourceServiceName, $VMName
Export-AzureVM -ServiceName $SourceServiceName -Name $VMName -Path $exportPath

Once the VM configuration is safely exported and the machine shut down, we can commence copying the underlying VHDs for the OS and any data disks attached to the VM. We’ll want to queue these up as jobs and kick them off asynchronously as they will take some time to copy across.

# Get the list of Azure disks that are currently attached to the VM
$disks = Get-AzureDisk | ? { $_.AttachedTo.RoleName -eq $VMName }

# Loop through each disk
foreach($disk in $disks)
{
    try
    {
        # Start the async copy of the underlying VHD to
        # the corresponding destination storage account
        $copyTasks += Copy-AzureDiskAsync -SourceDisk $disk
    }
    catch {}   # Support for existing VHD in destination storage account
}

# Monitor async copy tasks and wait for all to complete
WaitAll-AsyncCopyJobs
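
Copy-AzureDiskAsync and WaitAll-AsyncCopyJobs are helper functions from the full script on GitHub. Under the covers they boil down to the storage cmdlets; a simplified sketch (not the script’s exact code, and the storage keys are placeholders):

# Build contexts for the source and destination storage accounts
$srcContext  = New-AzureStorageContext -StorageAccountName $srcStorageAccount -StorageAccountKey "<source-key>"
$destContext = New-AzureStorageContext -StorageAccountName $DestStorageAccountName -StorageAccountKey "<dest-key>"

# Kick off a server-side async copy of the VHD blob
$copy = Start-AzureStorageBlobCopy -SrcContainer $container -SrcBlob $blobName -Context $srcContext `
                                   -DestContainer "vhds" -DestBlob $blobName -DestContext $destContext

# Block until the copy completes
$copy | Get-AzureStorageBlobCopyState -WaitForComplete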

Tip: You’ll probably want to run this overnight. If you are copying between Storage Accounts within the same Region, copy times can vary between 15 minutes and a few hours; it all depends on which storage cluster the accounts reside on. Michael Washam provides a good explanation of this and shows how you can check if your accounts live on the same cluster. Copies between Regions will always take longer (and incur data egress charges, don’t forget!)… see below for a nice work-around that could save you heaps of time if you happen to be migrating within the same Geo.

You’ll notice the script also supports being re-run, as there will be times when you can’t leave the script running during the async copy operation. A number of switches are also provided to assist when things go wrong after the copy has completed.

Now that we have our VHDs in our destination Storage Account we can begin putting our VM back together again.

We start by re-creating the logical OS and Azure Data disks that take a lease on our underlying VHDs. So we don’t get clashes, I use a convention based on Cloud Service name (which must be globally unique), VM name and disk number.

# Set destination subscription context
Select-AzureSubscription -SubscriptionName $DestSubscription -Current

# Load VM config
$vmConfig = Import-AzureVM -Path $exportPath

# Loop through each disk again
$diskNum = 0
foreach($disk in $disks)
{
    # Construct new Azure disk name as [DestServiceName]-[VMName]-[Index]
    $destDiskName = "{0}-{1}-{2}" -f $DestServiceName,$VMName,$diskNum   

    Write-Log "Checking if $destDiskName exists..."

    # Check if an Azure Disk already exists in the destination subscription
    $azureDisk = Get-AzureDisk -DiskName $destDiskName `
                              -ErrorAction SilentlyContinue `
                              -ErrorVariable LastError
    if ($azureDisk -ne $null)
    {
        Write-Log "$destDiskName already exists"

        if ($RemoveDestAzureDisk -eq $true)
        {
            # Remove the disk from the repository
            Remove-AzureDisk -DiskName $destDiskName

            Write-Log "Removed AzureDisk $destDiskName"
            $azureDisk = $null
        }
        # else keep the disk and continue
    }

    # Determine media location
    $container = ($disk.MediaLink.Segments[1]).Replace("/","")
    $blobName = $disk.MediaLink.Segments | Where-Object { $_ -like "*.vhd" }
    $destMediaLocation = "http://{0}.blob.core.windows.net/{1}/{2}" -f $DestStorageAccountName,$container,$blobName

    # Attempt to add the azure OS or data disk
    if ($disk.OS -ne $null -and $disk.OS.Length -ne 0)
    {
        # OS disk
        if ($azureDisk -eq $null)
        {
            $azureDisk = Add-AzureDisk -DiskName $destDiskName `
                                      -MediaLocation $destMediaLocation `
                                      -Label $destDiskName `
                                      -OS $disk.OS `
                                      -ErrorAction SilentlyContinue `
                                      -ErrorVariable LastError
        }

        # Update VM config
        $vmConfig.OSVirtualHardDisk.DiskName = $azureDisk.DiskName
    }
    else
    {
        # Data disk
        if ($azureDisk -eq $null)
        {
            $azureDisk = Add-AzureDisk -DiskName $destDiskName `
                                      -MediaLocation $destMediaLocation `
                                      -Label $destDiskName `
                                      -ErrorAction SilentlyContinue `
                                      -ErrorVariable LastError
        }

        # Update VM config
        #   Match on source disk name and update with dest disk name
        $vmConfig.DataVirtualHardDisks.DataVirtualHardDisk | ? { $_.DiskName -eq $disk.DiskName } | ForEach-Object {
            $_.DiskName = $azureDisk.DiskName
        }
    }              

    # Next disk number
    $diskNum = $diskNum + 1
}
# Restore VM
$existingVMs = Get-AzureService -ServiceName $DestServiceName | Get-AzureVM
if ($existingVMs -eq $null -and $DestVNETName.Length -gt 0)
{
    # Restore first VM to the cloud service specifying VNet
    $vmConfig | New-AzureVM -ServiceName $DestServiceName -VNetName $DestVNETName -WaitForBoot
}
else
{
    # Restore VM to the cloud service
    $vmConfig | New-AzureVM -ServiceName $DestServiceName -WaitForBoot
}

# Startup VM
Start-AzureVMAndWait -ServiceName $DestServiceName -VMName $VMName

For those of you looking at migrating VMs between Regions within the same Geo and have GRS enabled, I have also provided an option to use the secondary storage location of the source storage account.

To support this you will need to enable RA-GRS (read access) and wait a few minutes for access to be made available by the storage service. Copying your VHDs will then be very quick compared with a cross-Region copy (and avoids the egress traffic), as the copy operation uses the secondary copy that lives in the same Region as the destination. Nice!

Enabling RA-GRS can be done at any time but you will be charged for a minimum of 30 days at the RA-GRS rate even if you turn it off after the migration.
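
Switching the account over can be scripted too; a one-liner sketch using the service management storage cmdlets (the account name is a placeholder):

# Enable read-access geo-redundant storage on the source account
Set-AzureStorageAccount -StorageAccountName "<source-storage-account>" -Type "Standard_RAGRS"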

# Check if we are copying from a RA-GRS secondary storage account
if ($IsReadOnlySecondary -eq $true)
{
    # Append "-secondary" to the media location URI to reference the RA-GRS copy
    $sourceUri = $sourceUri.Replace($srcStorageAccount, "$srcStorageAccount-secondary")
}

Don’t forget to clean up your source Cloud Services and VHDs once you have tested the migrated VMs are running fine so you don’t incur ongoing charges.

Conclusion

In this post I have walked through the main sections of a Windows PowerShell script I have developed that automates the migration of an Azure Virtual Machine to another Azure data centre. The full script has been made available in GitHub. The script also supports a number of other migration scenarios (e.g. cross Subscription, cross Storage Account, etc.) and will be a handy addition to your Microsoft Azure DevOps Toolkit.

How to create custom images for use in Microsoft Azure

In this post I will discuss how we can create custom virtual machine images and deploy them to the Microsoft Azure platform. To complete this process you will need an Azure Subscription, the Azure PowerShell module installed and a pre-prepared VHD which you would like to use (VHDX is not supported at present.)

You can sign up for a free trial of Microsoft Azure here if you don’t currently hold a subscription.

Completing this process will allow you to take advantage of platforms which aren’t offered “out of the box” on Microsoft Azure, e.g. Server 2003 and Server 2008, for testing and development. Currently Microsoft offers Server 2008 R2 as the minimum level from the Azure Image Gallery.

What do I need to do to prepare my image?

To complete this process, I built a volume license copy of Windows Server 2008 Standard inside a generation one Hyper-V guest virtual machine. Once the installation of the operating system completed, I installed Adobe Acrobat Reader. I then ran sysprep.exe to generalise the image. This is important: if you don’t generalise your images, they will fail to deploy on the Azure platform.

I will detail the steps carried out after the operating system install below.

  1. Log into the newly created virtual machine
  2. Install the Hyper-V virtual machine additions (if your guest doesn’t already have it installed)
  3. Install any software that is required in your image (I installed Acrobat Reader)
  4. From an Administrative command prompt, navigate to %windir%\system32\sysprep and then execute the command “sysprep.exe”

  5. Once the Sysprep window has opened, select Enter System Out-of-Box Experience (OOBE) and tick the Generalize check box. The shutdown action should be set to Shutdown; this will shut down the machine gracefully once the Sysprep process has completed.
  6. Once you are ready, select OK and wait for the process to complete.

I built my machine inside a dynamically expanding VHD; the main reason for doing so was to avoid having to upload a file larger than necessary. As a result, I chose to compact the VHD before moving on to the next step by using the disk wizard inside the Hyper-V management console. To complete this process, follow the steps below.

  1. From the Hyper-V Host pane, select Edit Disk
  2. Browse to the path of the VHD we were working on, in my case “C:\VHDs\Server2008.vhd”, and select Next
  3. Select Compact and Finish.
  4. Wait for the process to complete. Your VHD file is now ready to upload.

What’s next?

We are now ready to upload the virtual machine image. To complete this process you will need access to the Azure PowerShell cmdlets and a storage account for the source VHD. If you do not already have a storage account created, you can follow the documentation provided by Microsoft here.

IMPORTANT: Once you have a storage account in Azure, ensure that you have a container called vhds. If you don’t have one, you can create one by selecting Add from the bottom toolbar; name it vhds and ensure the access is set to Private (container shown below.)
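
If you’d rather script this step, the storage cmdlets can create the container; a sketch, with the account name and key as placeholders:

# Create a private 'vhds' container in the storage account
$ctx = New-AzureStorageContext -StorageAccountName "<storage-account>" -StorageAccountKey "<storage-key>"
New-AzureStorageContainer -Name "vhds" -Permission Off -Context $ctx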


We are now ready to connect to the Azure account to kick off the upload process. To do so, launch an administrative Azure PowerShell console and follow the steps below.

  1. Run the cmdlet Add-AzureAccount; this will present a window which will allow you to authenticate to Azure.

  2. On the next screen, enter your password. The PowerShell session is now connected.
  3. To verify that the session connected successfully, run the cmdlet Get-AzureAccount; you should see your account listed below.

We are now ready to commence the upload process. You will need your storage blob URL. You can find this on the container page we visited previously to create the vhds container.

The complete command is as follows.

Add-AzureVhd -Destination "<StorageBlobURL>/vhds/Server2008.vhd" -LocalFilePath "C:\VHDs\Server2008.vhd"

Once you have executed the command, two things happen:

  1. The VHD file is indexed by calculating the MD5 hash

  2. Once the indexing process is complete, the upload starts.


This is very neat, as the demo gods often fail us… (my upload actually failed partway through.) Thankfully I was able to re-execute the command, which resumed the upload process where the first pass left off (see below.)

  3. Wait for the upload process to complete.

Creating the Image in the Azure console.

Now that our upload has completed, we are ready to create an image in the Azure console. This will allow us to easily spawn virtual machines based on the image we uploaded earlier. To complete this process you will need access to the Azure console and your freshly uploaded image.

  1. Select Virtual Machines from the management portal.
  2. Select Images from the virtual machines portal.
  3. Select Create an Image

  4. A new window titled Create an image from a VHD will pop up. Enter the following details (as shown below.)
  • Name
  • Description
  • VHD URL (from your storage blob)
  • Operating System Family


Ensure you have ticked I have run Sysprep on the virtual machine or you will not be able to proceed.

  5. The image will now appear under MY IMAGES in the image gallery.
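
As an aside, the image can also be registered from PowerShell rather than the console; a sketch using the service management cmdlets (the image name and label are examples):

# Register the uploaded, sysprepped VHD as an OS image in the gallery
Add-AzureVMImage -ImageName "Server2008-Custom" -MediaLocation "<StorageBlobURL>/vhds/Server2008.vhd" -OS Windows -Label "Server 2008 Standard"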

Deploying the image!

All the work we have completed so far won’t be much use if the deployment phase fails. In this part of the process we will deploy the image to ensure it will work as expected.

  1. Select Virtual Machines from the management portal.
  2. Select New > Compute > Virtual Machine > From Gallery
  3. From the Choose an Image screen, select MY IMAGES. You should see the image that we just created in the gallery (shown below.)

  4. Select the Image and click Next
  5. Complete the Virtual Machine Configuration with your desired settings.
  6. Wait for the virtual machine to complete provisioning.

Connecting to the virtual machine.

The hard work is done! We are now ready to connect to our newly deployed machine to ensure it is functioning as expected.

  1. Select Virtual Machines from the management portal.
  2. Select the Virtual Machine and then click Connect from the toolbar at the bottom. This will kick off the download of an RDP file which will allow you to connect to the virtual machine.
  3. Launch the RDP file; you will be asked to authenticate. Enter the credentials you specified during the deployment phase and click OK


  4. You will now be presented with your remote desktop session, connected to your custom image deployed on Microsoft Azure.

I went ahead and activated my Virtual Machine. To prove there is no funny business involved, I have provided one final screenshot showing the machine activation status (which details the Windows version) and a snip showing the results of the ipconfig command. This lists the internal.cloudapp.net addresses, showing that the machine is running on Microsoft Azure.

Enjoy!