Migrating SharePoint 2013 on-premises to Office 365 using Sharegate

Recently I completed a migration project that moved a number of sub-sites from SharePoint 2013 on-premises to the cloud (SharePoint Online). We decided to use Sharegate as the primary tool because of its simplicity.

Although it might sound like a straightforward process, there are a few things worth checking before and after the migration, and I have summarized them here. I found it easiest to record this information in a spreadsheet with different tabs:

Pre-migration check:

  1. First thing, Get Site Admin access!

    This is the first and most important step: get yourself admin access. It can be a lengthy process, especially in a large corporate environment. The best level of access is being granted Site Collection Admin for all sites, but sometimes this might not be possible. Hence, Site Administrator access is the bare minimum needed for the migration to work.

    In most cases you will be granted Global Admin on the new tenant, but if not, ask for it!

  2. List down active site collection features

    Whatever features are activated on the source site will need to be activated on the destination site as well, so we need to record what has been activated on the source site. If any third-party feature is activated, you will need to liaise with the relevant stakeholders about whether it is still required on the new site. If it is, a separate license is very likely required, as the new environment is cloud based rather than on-premises. Take Nintex Workflow for example: Nintex Workflow Online is a separate license compared to Nintex Workflow 2013.

  3. Segregate the list of sites, inventory analysis

    I found it important to list all the sites you are going to migrate and distinguish whether they are site collections or just sub-sites. What I did was put each site under a new tab, with all its site contents listed. Next to each list/library, I have fields for the list type, number of items and comments (if any).

    Go through each piece of content, preferably sitting down with the site owner, and get into the details. Some useful questions to ask:

  • Is this still relevant? Can it be deleted or skipped for the migration?
  • Is this heavily used? How often does it get accessed?
  • Does this list have custom edit/new forms? Sometimes owners might not even know, so you might have to take an extra look by scanning through the forms.
  • Check whether pages have custom script containing site URL references, as these will need to be changed to accommodate the new site URL.

It would also be useful to get a comprehensive picture of how much storage each site holds. This helps you work out which site has the most content and is therefore likely to take the longest during the migration. Sharegate has an inventory reporting tool which can help, but it requires Site Collection Admin access; a scripted alternative covering steps 2 and 3 is sketched below.
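A rough inventory sketch for steps 2 and 3, using the SharePoint 2013 Management Shell on the source farm, is shown below. The site URL and output paths are placeholders of my own, not from the original project.

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$siteUrl = "https://intranet.contoso.local/sites/teamsite"

# Step 2: record the activated site collection features (use -Web for web-scoped features)
Get-SPFeature -Site $siteUrl |
    Select-Object DisplayName, Id |
    Export-Csv .\ActivatedFeatures.csv -NoTypeInformation

# Step 3: list every list/library with its type and item count
$web = Get-SPWeb $siteUrl
$web.Lists | ForEach-Object {
    [PSCustomObject]@{
        Title     = $_.Title
        Type      = $_.BaseTemplate
        ItemCount = $_.ItemCount
    }
} | Export-Csv .\SiteInventory.csv -NoTypeInformation
$web.Dispose()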

  4. Discuss some of the limitations

    Pages library

    The Pages library under each site needs specific attention, especially if you don’t have Site Collection Admin access! Pages that inherit a content type or master page from the parent site will not have these migrated across by Sharegate, meaning these pages will either not be created on the new site, or they will simply show as using the default master page. This needs to be communicated and discussed with each owner.

    External Sharing

    External users will not be migrated across to the new site! These are users who won’t be provisioned in the new tenant but still require access to SharePoint. They will need to be added (invited) manually to a site using their O365 email account or a Microsoft account.

    An O365 account is whatever account they have been using to get onto their own SharePoint Online. If they do not have one, they will need to use a Microsoft account, i.e. a Hotmail/Outlook account. Once they have been invited, they need to respond to the email by signing into the portal in order to get provisioned. The new SPO site collection will need to have external sharing enabled before external access can happen. For more information, refer to: https://support.office.com/en-us/article/Manage-external-sharing-for-your-SharePoint-Online-environment-C8A462EB-0723-4B0B-8D0A-70FEAFE4BE85
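    As a rough illustration (not from the original project), enabling external sharing on a migrated site collection with the SharePoint Online Management Shell could look like the following; the tenant and site URLs are placeholders, and the tenant-level sharing setting must already allow external users.

    Connect-SPOService -Url "https://yourtenant-admin.sharepoint.com"
    # Allow authenticated external users (invited via their O365 or Microsoft account)
    Set-SPOSite -Identity "https://yourtenant.sharepoint.com/sites/migratedsite" -SharingCapability ExternalUserSharingOnly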

    What can’t Sharegate do?

    The following minor things cannot be migrated to O365:

  • User alerts – users will need to reset their alerts on the new site
  • Personal views – users will need to create their personal views again on the new site
  • Web part connections – any web part connections will not be preserved

For more, refer: https://support.share-gate.com/hc/en-us/categories/115000076328-Limitations

Performing the migration:

  1. Pick the right time

    Doing the migration during a low-activity period is ideal. User communications should be sent out as early as possible to inform people of the actual date. I tend to stick to the middle of the week so that we still have a couple of days to solve any issues, rather than doing it on a Friday or Saturday.

  2. Locking old sites

    During the migration, we do not want any users making changes to the old site. If you are migrating site collections, fortunately there’s a way to lock them down, provided you have access to the Central Administration portal. See https://technet.microsoft.com/en-us/library/cc263238.aspx

    However, if you are migrating sub-sites, there’s no way to lock down a single sub-site other than changing its site permissions. Changing the site permissions also risks losing that permission information, so it is best to record these permissions before making any changes (a sketch follows below). Also take extra note of lists or libraries with unique permissions; they do not inherit site permissions, so they won’t be “locked” unless changed manually.
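    A minimal sketch for recording a sub-site’s permissions before changing them, using the SharePoint 2013 Management Shell; the URL and output path are placeholders of my own.

    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

    $web = Get-SPWeb "https://intranet.contoso.local/sites/teamsite/subsite"
    # One row per principal with its role definitions (e.g. Read, Contribute)
    $web.RoleAssignments | ForEach-Object {
        [PSCustomObject]@{
            Principal = $_.Member.Name
            Roles     = ($_.RoleDefinitionBindings | ForEach-Object { $_.Name }) -join ";"
        }
    } | Export-Csv .\SubsitePermissions.csv -NoTypeInformation
    $web.Dispose()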

  3. Beware of O365 traffic jam

    Always stick to Insane mode when running the migration in Sharegate. Insane mode makes use of the new Office 365 Migration API, which is the fastest way to migrate huge volumes of data to Office 365. While exporting the data to Office 365 is fast, I did find a delay waiting for Office 365 to import it into the SharePoint tenant. Sometimes it could sit there for an hour before continuing with the import. Also, avoid running too many sessions if your VM is not powerful enough.

  4. Delta migration

    The good thing about using Sharegate is that you can do a delta migration, which means you only migrate files that have been modified or added since the last migration. However, it doesn’t handle deletions! If any files have been removed since you last migrated, running a delta sync will not delete them from the destination. Therefore, best practice is still to delete the list from the destination site and re-create it using the Site Object wizard.

Post-migration check:


Things to check:

  • Users can still access the relevant pages, lists and libraries
  • Users can still CRUD files/items
  • Users can open Office web apps (the authentication experience when opening Office files can vary, but in most cases users should only get prompted the very first time)

A tool to find mailbox permission dependencies

First published at https://nivleshc.wordpress.com

When planning to migrate mailboxes to Office 365, a lot of care must be taken around which mailboxes are moved together. The rule of thumb is “those that work together, move together”. The reason for taking this approach is that some permissions do not work cross-premises and can cause issues. For instance, if a mailbox has delegate permissions to another mailbox (permissions assigned using the Outlook email client) and one is migrated to Office 365 while the other remains on-premises, the delegate capability is broken, as it does not work cross-premises.

During the recent Microsoft Ignite, it was announced that a lot of features are coming to Office 365 which will help with these cross-premises access issues.

I have been using Roman Zarka’s Export-MailboxPermissions.ps1 (part of the https://blogs.technet.microsoft.com/zarkatech/2015/06/11/migrate-mailbox-permissions-to-office-365/ bundle) to export all on-premises mailbox permissions and then using the output to decide which mailboxes move together. Believe me, this can be quite a challenge!

Recently, while having a casual conversation with one of my colleagues, I was introduced to an Excel spreadsheet that he had created. Being the Excel guru that he is, he was doing various VLOOKUPs into the outputs from Roman Zarka’s script to find out whether the mailboxes he was intending to migrate had any permission dependencies with other mailboxes. I just stared at the spreadsheet in awe and uttered the words “dude, that is simply awesome!”

I was hooked on that spreadsheet, but I started craving for it to do more. So I decided to take it upon myself to add some more features to it. Not being too savvy with Excel, I decided to use PowerShell instead. Thus was born Find_MailboxPermissions_Dependencies.ps1.

I will now walk you through the script and explain what it does

 

  1. The first pre-requisite for Find_MailboxPermissions_Dependencies.ps1 is the four output files from Roman Zarka’s Export-MailboxPermissions.ps1 script (MailboxAccess.csv, MailboxFolderDelegate.csv, MailboxSendAs.csv, MailboxSendOnBehalf.csv).
  2. The next pre-requisite is details about the on-premises mailboxes. The on-premises Exchange environment must be queried and the details output to a csv file named OnPrem_Mbx_Details.csv. The csv must contain the following column headings: “DisplayName, UserPrincipalName, PrimarySmtpAddress, RecipientTypeDetails, Department, Title, Office, State, OrganizationalUnit” (a sketch of one way to produce this file follows step 3 below).
  3. The last pre-requisite is information about mailboxes that are already in Office 365. Use PowerShell to connect to Exchange Online and then run the following command (where O365_Mbx_Details.csv is the output file)
    Get-Mailbox -ResultSize unlimited | Select DisplayName,UserPrincipalName,EmailAddresses,WindowsEmailAddress,RecipientTypeDetails | Export-Csv -NoTypeInformation -Path O365_Mbx_Details.csv 

    If there are no mailboxes in Office 365, create a blank file and put the following column headings in it: “DisplayName”, “UserPrincipalName”, “EmailAddresses”, “WindowsEmailAddress”, “RecipientTypeDetails”. Save the file as O365_Mbx_Details.csv.
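    For step 2, the on-premises details file can be produced along the lines of the sketch below, run from the on-premises Exchange Management Shell. This is not part of Roman Zarka’s bundle; note that Department, Title and State live on the user object, hence the extra Get-User lookup.

    Get-Mailbox -ResultSize Unlimited | ForEach-Object {
        $user = Get-User -Identity $_.Identity
        [PSCustomObject]@{
            DisplayName          = $_.DisplayName
            UserPrincipalName    = $_.UserPrincipalName
            PrimarySmtpAddress   = $_.PrimarySmtpAddress
            RecipientTypeDetails = $_.RecipientTypeDetails
            Department           = $user.Department
            Title                = $user.Title
            Office               = $_.Office
            State                = $user.StateOrProvince
            OrganizationalUnit   = $_.OrganizationalUnit
        }
    } | Export-Csv -NoTypeInformation -Path OnPrem_Mbx_Details.csv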

  4. Next, put the above files in the same folder and then update the variable $root_dir in the script with the path to the folder (the path must end with a backslash \).
  5. It is assumed that the above files have the following names
    • MailboxAccess.csv
    • MailboxFolderDelegate.csv
    • MailboxSendAs.csv
    • MailboxSendOnBehalf.csv
    • O365_Mbx_Details.csv
    • OnPrem_Mbx_Details.csv
  6. Now that all the inputs have been taken care of, run the script.
  7. The first task the script does is to validate if the input files are present. If any of them are not found, the script outputs an error and terminates.
  8. Next, the files are read and stored in memory
  9. Now for the heart of the script. It goes through each of the mailboxes in the OnPrem_Mbx_Details.csv file and finds the following
    • all mailboxes that have been given SendOnBehalf permissions to this mailbox
    • all mailboxes that this mailbox has been given SendOnBehalf permissions on
    • all mailboxes that have been given SendAs permissions to this mailbox
    • all mailboxes that this mailbox has been given SendAs permissions on
    • all mailboxes that have been given Delegate permissions to this mailbox
    • all mailboxes that this mailbox has been given Delegate permissions on
    • all mailboxes that have been given Mailbox Access permissions on this mailbox
    • all mailboxes that this mailbox has been given Mailbox Access permissions on
    • if the mailbox that this mailbox has given the above permissions to or has got permissions on has already been migrated to Office 365
  10. The results are then output to a csv file (the name of the output file is of the format Find_MailboxPermissions_Dependencies_{timestamp of when the script was run}_csv.csv).
  11. The columns in the output file are explained below
  • PermTo_OtherMbx_Or_FromOtherMbx? – Y if the mailbox has given permissions to, or has permissions on, other mailboxes; N if there are no permission dependencies for this mailbox
  • PermTo_Or_PermFrom_O365Mbx? – TRUE if the mailbox that this mailbox has given permissions to, or has permissions on, is already in Office 365
  • Migration Readiness – a color code based on the migration readiness of this permission; this is further explained below
  • DisplayName – the display name of the on-premises mailbox for which the permission dependency is being found
  • UserPrincipalName – the UserPrincipalName of the on-premises mailbox for which the permission dependency is being found
  • PrimarySmtp – the PrimarySmtp of the on-premises mailbox for which the permission dependency is being found
  • MailboxType – the mailbox type of the on-premises mailbox for which the permission dependency is being found
  • Department – the department the on-premises mailbox belongs to (inherited from the Active Directory object)
  • Title – the title that this on-premises mailbox has (inherited from the Active Directory object)
  • SendOnBehalf_GivenTo – email address of the mailbox that has been given SendOnBehalf permissions to this on-premises mailbox
  • SendOnBehalf_GivenOn – email address of the mailbox that this on-premises mailbox has been given SendOnBehalf permissions to
  • SendAs_GivenTo – email address of the mailbox that has been given SendAs permissions to this on-premises mailbox
  • SendAs_GivenOn – email address of the mailbox that this on-premises mailbox has been given SendAs permissions on
  • MailboxFolderDelegate_GivenTo – email address of the mailbox that has been given Delegate access to this on-premises mailbox
  • MailboxFolderDelegate_GivenTo_FolderLocation – the folders of the on-premises mailbox that the delegate access has been given to
  • MailboxFolderDelegate_GivenTo_DelegateAccess – the type of delegate access that has been given on this on-premises mailbox
  • MailboxFolderDelegate_GivenOn – email address of the mailbox that this on-premises mailbox has been given Delegate access to
  • MailboxFolderDelegate_GivenOn_FolderLocation – the folders that this on-premises mailbox has been given delegate access to
  • MailboxFolderDelegate_GivenOn_DelegateAccess – the type of delegate access that this on-premises mailbox has been given
  • MailboxAccess_GivenTo – email address of the mailbox that has been given Mailbox Access to this on-premises mailbox
  • MailboxAccess_GivenTo_DelegateAccess – the type of Mailbox Access that has been given on this on-premises mailbox
  • MailboxAccess_GivenOn – email address of the mailbox that this mailbox has been given Mailbox Access to
  • MailboxAccess_GivenOn_DelegateAccess – the type of Mailbox Access that this on-premises mailbox has been given
  • OrganizationalUnit – the Organizational Unit of the on-premises mailbox

The color codes in the column Migration Readiness correspond to the following

  • LightBlue – this on-premises mailbox has no permission dependencies and can be migrated
  • DarkGreen  – this on-premises mailbox has got a Mailbox Access permission dependency to another mailbox. It can be migrated while the other mailbox can remain on-premises, without experiencing any issues as Mailbox Access permissions are supported cross-premises.
  • LightGreen – this on-premises mailbox can be migrated without issues as the permission dependency is on a mailbox that is already in Office 365
  • Orange – this on-premises mailbox has SendAs permissions given to/or on another on-premises mailbox. If both mailboxes are not migrated at the same time, the SendAs capability will be broken. Lately, it has been noticed that this capability can be restored by re-applying the SendAs permissions to both the migrated and on-premises mailbox post migration
  • Pink – the on-premises mailbox has FolderDelegate given to/or on another on-premises mailbox. If both mailboxes are not migrated at the same time, the FolderDelegate capability will be broken. A possible workaround is to replace the FolderDelegate permission with Full Mailbox access as this works cross-premises, however there are privacy concerns around this workaround as this will enable the delegate to see all the contents of the mailbox instead of just the folders they had been given access on.
  • Red – the on-premises mailbox has SendOnBehalf permissions given to/or on another on-premises mailbox. If both mailboxes are not migrated at the same time, the SendOnBehalf capability will be broken. A possible workaround could be to replace SendOnBehalf with SendAs however the possible implications of this change must be investigated

Yay, the output has now been generated. All we need to do now is to make it look pretty in Excel 🙂

Carry out the following steps

  • Import the output csv file into Excel, using the semi-colon “;” as the delimiter (I couldn’t use commas as the delimiter because fields such as department and title sometimes contain them, and this causes issues with the output file)
  • Create Conditional Formatting rules for the column Migration Readiness so that the fill color of this cell corresponds to the word in this column (for instance, if the word is LightBlue then create a rule to apply a light blue fill to the cell)

That’s it folks! The mailbox permissions dependency spreadsheet is now ready. It provides a single-pane view of all the permissions across your on-premises mailboxes and gives a color-coded analysis of which mailboxes can be migrated on their own without any issues, and which might experience issues if they are not migrated in the same batch as the ones they have permission dependencies on.

In the output file, each line represents a permission dependency for an on-premises mailbox (unless the column PermTo_OtherMbx_Or_FromOtherMbx? is N). If more than one set of permissions applies to an on-premises mailbox, these are displayed on consecutive lines.

It is imperative that the migration readiness of the mailbox be evaluated based on the migration readiness of all the permissions associated with that mailbox.

Find_MailboxPermissions_Dependencies.ps1 can be downloaded from  GitHub

A sample of the spreadsheet that was created using the output from the Find_MailboxPermissions_Dependencies.ps1 script can be downloaded from https://github.com/nivleshc/arm/blob/master/Sample%20Output_MailboxPermissions%20Dependencies.xlsx

I hope this script comes in handy when you are planning your migration batches and helps alleviate some of the headache that this task brings with it.

Till the next time, have a great day 😉

Exchange 2010 Hybrid Auto Mapping Shared Mailboxes

Migrating shared mailboxes to Office 365 is one of those things that is starting to become easier over time, especially with full access permissions now working cross premises.

One little discovery that I thought I would share: if you have an Exchange 2010 hybrid configuration, the auto mapping feature will not work cross-premises. (On Exchange 2013 and above, you are OK and have nothing to worry about.)

This means if an on-premises user has access to a shared mailbox that you migrate to Office 365, it will disappear from their Outlook even though they still have full access.

Unless you migrate the shared mailbox and user at the same time, the only option is to manually add the shared mailbox back into Outlook.

Something to keep in mind especially if your user base accesses multiple shared mailboxes at a time.
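For reference, auto mapping is driven by the Full Access grant. Once both the user and the shared mailbox are in Exchange Online, re-applying the permission with auto mapping enabled looks something like the sketch below (mailbox and user names are placeholders).

# Exchange Online PowerShell - re-grant Full Access with auto mapping once both sides are in O365
Add-MailboxPermission -Identity "shared-mailbox" -User "user@mydomain.com" -AccessRights FullAccess -AutoMapping $true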

 

 

Complex Mail Routing in Exchange Online Staged Migration Scenario

Notes From the Field:

I was recently asked to assist an ongoing project with understanding some complex mail routing and identity scenarios which had been identified during planning for an upcoming mail migration from an external system into Exchange Online.

New user accounts were created in Active Directory for the external staff who are about to be migrated. If we were to assign the target state production email attributes now and create the Exchange Online mailboxes, we would have a problem as the migration approaches.

When the new domain is verified in Office 365 & Exchange Online, new mail from staff already in Exchange Online would start being delivered to the newly created mailboxes for the staff soon to be onboarded.

Not doing this will delay the project, which is something we didn’t want either.

I have proposed the following in order to create a scenario whereby cutover to Exchange Online for the new domain is quick, without causing user downtime during the co-existence period. We are creating some “co-existence” state attributes on the on-premises AD user objects that will allow mail flow to continue in all scenarios up until cutover (I will come back to this later).

generic_exchangeonline_migration_process_flow

We have configured the AD user objects in the following way

  1. UserPrincipalName – username@localdomainname.local
  2. mail – username@mydomain.onmicrosoft.com
  3. targetaddress – username@mydomain.com

We have configured the remote mailbox objects in the following way

  1. mail – username@mydomain.onmicrosoft.com
  2. targetaddress – username@mydomain.com

We have configured the on-premises Exchange Accepted domain in the following way

  1. Accepted Domain – External Relay

We have configured the Exchange Online Accepted domain in the following way

  1. Accepted Domain – Internal Relay
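To make the co-existence state concrete, the sketch below shows how these settings might be applied with PowerShell. The attribute values mirror the lists above, but the cmdlet usage and domain names are illustrative assumptions rather than the project’s actual scripts.

# Illustrative only - values mirror the co-existence state described above
$user = "username"

# AD user object: UPN stays on the local domain
Set-ADUser -Identity $user -UserPrincipalName "$user@localdomainname.local"

# Remote mailbox object: primary SMTP on the tenant routing domain, target address on the new domain
Set-RemoteMailbox -Identity $user -EmailAddressPolicyEnabled $false `
    -PrimarySmtpAddress "$user@mydomain.onmicrosoft.com" `
    -RemoteRoutingAddress "$user@mydomain.com"

# Accepted domains: External Relay on-premises, Internal Relay in Exchange Online
Set-AcceptedDomain -Identity "mydomain.com" -DomainType ExternalRelay   # on-premises shell
Set-AcceptedDomain -Identity "mydomain.com" -DomainType InternalRelay   # Exchange Online shell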

How does this all work?

Glad you asked! As I alluded to earlier, the main problem here is with staff who already have mailboxes in Exchange Online. By configuring the objects in this way, we achieve several things:

  1. We can verify the new domains successfully in Office365 without impacting existing or new users. By setting the UPN & mail attributes to @mydomain.onmicrosoft.com, Office365 & Exchange Online do not (yet) reference the newly onboarded domain to these mailboxes.
  2. By configuring the accepted domains in this way, we are doing the following:
    1. When an email is sent from Exchange Online to an email address at the new domain, Exchange Online will route the message via the hybrid connector to the Exchange on-premises environment. (the new mailbox has an email address @mydomain.onmicrosoft.com)
    2. When the on-premises environment receives the email, Exchange will look at both the remote mailbox object & the accepted domain configuration.
      1. The target address on the mailbox is configured as @mydomain.com
      2. The accepted domain is configured as external relay
      3. Because of this, the on-premises exchange environment will forward the message externally.

Why is this good?

Again, for a few reasons!

We are now able to pre-stage content from the existing external email environment to Exchange Online by using a target address of @mydomain.onmicrosoft.com. The project is no longer at risk of being delayed! 🙂

On the night of cutover of MX records to Exchange Online (or in this case, a third-party email hygiene provider), we are able to use the same PowerShell code that we used in the beginning to configure the new user objects, in order to modify the user accounts for production use (we are using a different csv import file to achieve this).

Target State Objects

We have configured the AD user objects in the following way

  1. UserPrincipalName – username@mydomain.com
  2. mail – username@mydomain.com
  3. targetaddress – username@mydomain.mail.onmicrosoft.com

We have configured the remote mailbox objects in the following way

  1. mail
    1. username@mydomain.com (primary)
    2. username@mydomain.onmicrosoft.com
  2. targetaddress – username@mydomain.mail.onmicrosoft.com

We have configured the on-premises Exchange Accepted domain in the following way

  1. Accepted Domain – Authoritative

We have configured the Exchange Online Accepted domain in the following way

  1. Accepted Domain – Internal Relay
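As noted above, the cutover reuses the earlier PowerShell approach with a different csv import file. A hypothetical sketch is below; the csv layout, file name and column names are my own, not from the project.

# Hypothetical cutover pass applying the target state attributes listed above
Import-Csv .\TargetState_Users.csv | ForEach-Object {
    Set-ADUser -Identity $_.SamAccountName -UserPrincipalName "$($_.Alias)@mydomain.com"

    Set-RemoteMailbox -Identity $_.Alias -EmailAddressPolicyEnabled $false `
        -EmailAddresses "SMTP:$($_.Alias)@mydomain.com", "smtp:$($_.Alias)@mydomain.onmicrosoft.com" `
        -RemoteRoutingAddress "$($_.Alias)@mydomain.mail.onmicrosoft.com"
}

# The on-premises accepted domain switches to Authoritative; Exchange Online stays Internal Relay
Set-AcceptedDomain -Identity "mydomain.com" -DomainType Authoritative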

NOTE: AAD Connect sync is now run and a manual validation completed against on-premises AD & Exchange, as well as Azure AD & Exchange Online, to confirm that the user updates have been successful.

We can now update DNS MX records to our 3rd party email hygiene provider (or this could be Exchange Online Protection if you don’t have one).

A final synchronisation of mail from the original email system is completed once new mail is being delivered to Exchange Online.

SharePoint content migration using Sharegate and Powershell

Content Migration

When it comes to content migration we have the option to write code (script), use a migration toolset, or a combination of both, so it is important to identify the appropriate toolset based on ease of use and what we need to achieve.

I have evaluated several migration toolsets; however, in this blog I am going with Sharegate as I have used this product extensively in recent times.

Sharegate is a toolset used to “Manage, Migrate and Secure SharePoint & Office 365”.

We will look at migrating a document library in a SharePoint O365 environment to another document library in the same environment, and see how we can use Sharegate to speed up this process. I will incorporate some of my experiences working with a customer on a document migration strategy.

What are we trying to achieve?
Migrate document libraries in SharePoint and apply metadata in the process, using a combination of Excel and PowerShell scripting along with Sharegate.

sharegate-content-migration-using-powershell

Image 1

 

  1. Select the Source document library to Migrate.
  2. Create the spreadsheet using Sharegate “Export to Excel” function.
  3. Update the Metadata within the excel spreadsheet (this can be done manually or by a console app).
  4. Using Sharegate PowerShell automate the Import/Migration process.

 

To achieve this, we need to do the following:

  1. Logon to Sharegate and click on the “Copy SharePoint Content” option as depicted below.
Sharegate1.png

Image 2

2. Connect to your O365 SharePoint tenant using your credentials.

3. Below, as you can see I have a bunch of test documents that I need to migrate from source document library to the target document library.

Sharegate 3 1.png

Image 3

4. Click on Excel to export an excel spreadsheet (to update the metadata columns).

  5. Select the documents that you would like to copy across; however, before you start copying you will need to set up a custom property template as below:

Property templates allow you to select the options used for bulk editing and to set custom actions for all of the list or library’s columns.

sharegate4

Image 4

6. Give the template a name and set up the template properties as shown below, as per your requirements.

sharegate5

Image 5

7. When you click on “Save & Start” you can start copying the files across using the Sharegate UI.

Using PowerShell script to automate this migration process

  1. Save the Excel file locally, for e.g. “MigrationTest.xlsx”
Sharegate excel.png

Image 6

2. Update the columns in the excel spreadsheet with the appropriate metadata and save the file.

3. Click on the Sharegate PowerShell icon to open the PowerShell console, as shown in the image below.

sgps

Image 7

4. Run the below script.

PowerShell script using Sharegate’s “Copy-Content” cmdlet :

#PowerShell script to migrate documents using Sharegate PowerShell
#Connect to the O365 account
$mypassword = ConvertTo-SecureString "******" -AsPlainText -Force
$srcSite = Connect-Site -Url "https://yourtenant.sharepoint.com/sites/test/" -Username "user@domain.com.au" -Password $mypassword
Write-Host "Connected to source"
$dstSite = Connect-Site -Url "https://yourtenant.sharepoint.com/sites/test2/" -Username "user@domain.com.au" -Password $mypassword
Write-Host "Connected to target"

# Get the source and destination lists
$srcList = Get-List -Name "SrcDocLib" -Site $srcSite
Write-Host $srcList
$dstList = Get-List -Name "DestDocLib" -Site $dstSite
Write-Host $dstList

# Custom property template created earlier in the Sharegate UI
$Template = "TestTemplate"
Write-Host "Copying..."
Copy-Content -TemplateName $Template -SourceList $srcList -DestinationList $dstList -ExcelFilePath "C:\POC\MigrationTest.xlsx"
Write-Host "Done Copying"

 

PowerShell to “export to excel” from Sharegate

As of today, Sharegate does not have an “export to Excel” cmdlet, so this step has to be done through the Sharegate UI. I have talked to the Sharegate support team and they came back to me saying “this is one of the most requested features and will be released soon”. Please refer to the Sharegate documentation for updates on new features: http://help.share-gate.com/

Conclusion

Using Sharegate and PowerShell we can automate the document migration and metadata tagging. Going further, you can create a number of Excel export files using Sharegate and script the iteration through each Excel file as an input parameter to the above PowerShell script, as sketched below.
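A rough sketch of that iteration is below; the folder path and the assumption that every workbook targets the same source and destination libraries are mine, so adjust to suit.

# Run one Copy-Content pass per exported workbook (sketch only)
$mypassword = ConvertTo-SecureString "******" -AsPlainText -Force
$srcSite = Connect-Site -Url "https://yourtenant.sharepoint.com/sites/test/" -Username "user@domain.com.au" -Password $mypassword
$dstSite = Connect-Site -Url "https://yourtenant.sharepoint.com/sites/test2/" -Username "user@domain.com.au" -Password $mypassword
$srcList = Get-List -Name "SrcDocLib" -Site $srcSite
$dstList = Get-List -Name "DestDocLib" -Site $dstSite

Get-ChildItem "C:\POC\Exports\*.xlsx" | ForEach-Object {
    Write-Host "Copying using $($_.Name)..."
    Copy-Content -TemplateName "TestTemplate" -SourceList $srcList -DestinationList $dstList -ExcelFilePath $_.FullName
}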

 

 

Migrating resources from AWS to Microsoft Azure

Kloud receives a lot of communications in relation to the work we do and the content we publish on our blog. My colleague Hugh Badini recently published a blog about Azure deployment models from which we received the following legitimate follow up question…

So, Murali, thanks for letting us know you’d like to know more about this… consider this blog a starting point :).

Firstly though…

this topic (inter-cloud migrations), as you might guess, isn’t easily captured in a single blog post, nor, realistically in a series, so what I’m going to do here is provide some basics to consider. I may not answer your specific scenario but hopefully provide some guidance on approach.

Every cloud has a silver lining

The good news is that if you’re already operating in a cloud environment then you have likely had to deal with many of the fundamental differences between traditional application hosting and architecture and that of cloud platforms.

You will have dealt with how to ensure availability of your application(s) across outages; dealt with spikes in traffic via elastic compute resources; and will have come to recognise that in many ways, Infrastructure-as-a-Service (IaaS) in the cloud has many similarities to the way you’ve always done things on-prem (such as backups).

Clearly you have less of a challenge in approaching a move to another cloud provider.

Where to start

When we talk about moving from AWS to Azure we need to consider a range of things – let’s take a look at some key ones.

Understand what’s the same and what’s different

Both platforms have very similar offerings, and Microsoft provides many great resources to help those utilising AWS to build an understanding of which services in AWS map to which services in Azure. As you can see the majority of AWS’ services have an equivalent in Azure.

Microsoft’s Channel 9 is also a good place to start to learn about the similarities, with there being no better place than the Microsoft Azure for Amazon AWS Professional video series.

So, at a platform level, we are pretty well covered, but…

the one item to be wary of in planning any move of an existing application is how it has been developed. If we are moving components from, say, an EC2 VM environment to an Azure VM environment then we will probably have less work to do as we can build our Azure VM as we like (yes, as we know, even Linux!) and install whatever languages, frameworks or runtimes we need.

If, however, we are considering moving an application from a more Platform-as-a-Service capability such as AWS Lambda, we need to look at the programming model required to move to its equivalent in Azure – Azure Functions. While AWS Lambda and Azure Functions are functionally the same (no pun intended), we cannot simply take our Lambda code, drop it into an Azure Function and have it work. It may not even make sense to utilise Azure Functions depending on what you are shifting.

It’s also important to consider the differences between the availability models in use today in AWS and Azure. AWS uses Availability Zones to help you manage the uptime of your application and its components. In Azure we manage availability at two levels – locally via Availability Sets and then geographically through the use of Regions. As these models differ, it’s an important area to consider for any migration.

Tools are good, but are no magic wand

Microsoft provides a way to migrate AWS EC2 instances to Azure using Azure Site Recovery (ASR) and while there are many tools for on-prem to cloud migrations and for multi-cloud management, they mostly steer away from actual migration between cloud providers.

Kloud specialises in assessing application readiness for cloud migrations (and then helping with the migration), and we’ve found inter-cloud migration is no different – understanding the integration points an application has and the SLAs it must meet are a big part of planning what your target cloud architecture will look like. Taking into consideration underlying platform services in use is also key as we can see from the previous section.

If you’re re-platforming an application you’ve built or maintain in-house, make sure to review your existing deployment processes to leverage features available to you for modern Continuous Deployment (CD) scenarios which are certainly a strength of Azure.

Data has a gravitational pull

The modern application world is entirely a data-driven one. One advantage to cloud platforms is the logically bottomless pit of storage you have at your disposal. This presents a challenge, though, when moving providers where you may have spent years building data stores containing Terabytes or Petabytes of data. How do you handle this when moving? There are a few strategies to consider:

  • Leave it where it is: you may decide that you don’t need all the data you have to be immediately available. Clearly this option requires you to continue to manage multiple clouds but may make economic sense.
  • Migrate via physical shipping: AWS provides Snowball as a way to extract data out of AWS without needing to pull it over a network connection. If your solution allows it you could ship your data out of AWS to a physical location, extract that data, and then prepare it for import into Azure, either over a network connection using ExpressRoute or through the Azure Import/Export service.
  • Migrate via logical transfer: you may have access to a service such as Equinix’s Cloud Exchange that allows you to provision inter-connects between cloud and other network providers. If so, you may consider using this as your migration enabler. Ensure you consider how much data you will transfer and what, if any, impact the data transfer might have on existing network services.

Outside of the above strategies on transferring of data, perhaps you can consider a staged migration where you only bring across chunks of data as required and potentially let older data expire over time. The type and use of data obviously impacts on which approach to take.

Clear as…

Hopefully this post has provided a bit more clarity around what you need to consider when migrating resources from AWS to Azure. What’s been your experience? Feel free to leave comments if you have feedback or recommendations based on the paths you’ve followed.

Happy dragon slaying!

Azure Deployment Models And How To Migrate From ASM to ARM

This is a post about the two deployment models currently available in Azure, Service Management (ASM) and Resource Manager (ARM). And how to migrate from one to the other if necessary.

About the Azure Service Management deployment model

The ASM model, also known as version 1 or Classic mode, started out as a web interface and a backend API for the PaaS services Azure launched with.

Features

  1. ASM deployments are based on an XML schema.
  2. ASM operations are based at the cloud service level.
  3. Cloud services are the logical containers for IaaS VMs and PaaS services.
  4. ASM is managed through the CLI, old and new portals (features) and PowerShell.
Picture1

In ASM mode the cloud service acts as a container for VMs and PaaS services.

About the Resource Manager deployment model

The ARM model consists of a new web interface and API for resource management in Azure which came out of preview in 2016 and introduced several new features.

Features

  1. ARM deployments are based on a JSON schema.
  2. Templates, which can be imported and exported, define deployments.
  3. RBAC support.
  4. Resources can be tagged for logical access and grouping.
  5. Resource groups are the logical containers for all resources.
  6. ARM is managed through PowerShell (PS), the CLI and new portal only.
Picture2

In ARM mode the resource group acts as a container for all resources.

Why use Service Management mode?

  1. Support for all features that are not exclusive to ARM mode.

Be aware, however, of its drawbacks:

  1. No new features will be made available in this mode.
  2. It cannot process operations in parallel (e.g. VM start, VM create, etc.).
  3. ASM needs a VPN or ExpressRoute connection to communicate with ARM.
  4. In Classic mode, templates cannot be used to configure resources.

Users should therefore only be using service management mode if they have legacy environments to manage which include features exclusive to it.

Why use Resource Manager mode?

  1. Support for all features that are not exclusive to ASM mode.
  2. Can process multiple operations in parallel.
  3. JSON templates are a practical way of managing resources.
  4. RBAC, resource groups and tags!

Resource manager mode is the recommended deployment model for all Azure environments going forward.

Means of migration

The following tools and software are available to help with migrating environments.

ASM2ARM custom PowerShell script module.
Platform supported migrations using PowerShell or the Azure CLI.
The MigAz tool.
Azure Site Recovery.

About ASM2ARM

ASM2ARM is a custom PowerShell script module for migrating a single virtual machine from the Azure Service Management stack to Resource Manager, and it makes two new cmdlets available.

Cmdlets: Add-AzureSMVmToRM & New-AzureSmToRMDeployment

Code samples:

$vm = Get-AzureVm -ServiceName acloudservice -Name atestvm
Add-AzureSMVmToRM -VM $vm -ResourceGroupName aresourcegroupname -DiskAction CopyDisks -OutputFileFolder D:\myarmtemplates -AppendTimeStampForFiles -Deploy

Using the service name and VM name parameters directly.

Add-AzureSMVmToRM -ServiceName acloudservice -Name atestvm -ResourceGroupName aresourcegroupname -DiskAction CopyDisks -OutputFileFolder D:\myarmtemplates -AppendTimeStampForFiles -Deploy

Features

  1. Copy the VM’s disks to an ARM storage account or create a new one.
  2. Create a destination vNet and subnet for migrated VMs.
  3. Create ARM JSON templates and PS script for deployment of resources.
  4. Create an availability set if one exists at source.
  5. Create a public IP if the VM is open to the internet.
  6. Create network security groups for the source VMs public endpoints.

Limitations

  1. Cannot migrate running VMs.
  2. Cannot migrate multiple VMs.
  3. Cannot migrate a whole ASM network
  4. Cannot create load balanced VMs.

For more information: https://github.com/fullscale180/asm2arm

About platform supported migrations using PowerShell

Consists of standard PowerShell cmdlets from Microsoft for migrating resources to ARM.

Features

  1. Migration of virtual machines not in a virtual network (disruptive!).
  2. Migration of virtual machines in a virtual network (non-disruptive!).
  3. Storage accounts are cross compatible but can also be migrated.

Limitations

The following configurations are not supported by the platform migration:

  1. More than one availability set in a single cloud service.
  2. One or more availability sets combined with VMs that are not in an availability set in a single cloud service.
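As a rough outline only, a virtual network migration with the classic Azure PowerShell module follows the validate/prepare/commit pattern below; the vNet name is a placeholder and each step should be reviewed against the current documentation before committing.

# One-off: register the migration resource provider (ARM side)
Register-AzureRmResourceProvider -ProviderNamespace Microsoft.ClassicInfrastructureMigrate

$vnetName = "MyClassicVNet"
Move-AzureVirtualNetwork -Validate -VirtualNetworkName $vnetName   # check for blockers
Move-AzureVirtualNetwork -Prepare  -VirtualNetworkName $vnetName   # resources become visible in ARM
# Review the prepared resources, then either commit or roll back
Move-AzureVirtualNetwork -Commit   -VirtualNetworkName $vnetName
# Move-AzureVirtualNetwork -Abort  -VirtualNetworkName $vnetName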

About platform supported migrations using the Azure CLI

Consists of standard Azure CLI commands from Microsoft for migrating resources to ARM.

Features & Limitations

See above.

A video on the subject of platform supported migrations using PowerShell or the CLI.

About MigAz

MigAz comes with an executable which outputs reference JSON files and makes available a PowerShell script capable of migrating ASM resources and blob files to ARM mode environments.

Features

  1. MigAz exports JSON templates from REST API calls for migration.
  2. New resources are created in and disk blobs copied to their destination, all original resources left intact.
  3. Exported JSON can (and should) be reviewed and customized before use.
  4. Export creates all new resources in a single resource group.
  5. Supports using any subscription target, same or different.
  6. With JSON being at the core of ARM, templates can be used for DevOps.
  7. Can be used to clone existing environments or create new ones for testing.
main

A screenshot of the MigAZ frontend GUI.

About Azure Site Recovery (ASR)

ASR is a backup, continuity and recovery solution set which can also be used for migrating resources to ARM.

Features

  1. Cold backup and replication of both on and off premise virtual machines.
  2. Cross compatible between ASM and ARM deployment models.
  3. ASM virtual machines can be restored into ARM environments.

Picture1

Pros and cons

ASM2ARM: requires downtime, but can be scripted, which has potential; however, this approach only allows for the migration of one VM at a time, which is a sizeable limitation.

Azure PowerShell and CLI: this approach is well rounded. It can be scripted and allows for rollbacks. Supported migration scenarios come with some caveats, however, and you cannot migrate a whole vNet into an existing network.

MigAz Tool: exports JSON of ASM resources for customization and uses a PowerShell script for deployment to ARM. Downtime is required whether you are going to the same address space or cutting over to new services, but this is easily your best and most comprehensive option at this time.

Site Recovery: possibly the easiest way of migrating resources and managing the overall process, but it requires a lot of work to set up. Downtime is required in all cases.

Migrating Sitecore 7.0 to Azure IaaS Virtual Machines – Part 1

INTRODUCTION

Recently, I had the opportunity to work on a Sitecore migration project. I was tasked with moving a third-party hosted Sitecore 7.0 instance to Azure IaaS. The task sounds simple enough, but if only life were that simple. A new requirement was to improve upon the existing infrastructure by making the new Sitecore environment highly available, and the fun begins right there.

To give some context, the CURRENT Sitecore environment is not highly available and has the following server topology:

  • Single Sitecore Content Delivery (CD) Instance
  • Single Sitecore Content Management (CM) Instance
  • Single SQL Server 2008 Instance for Sitecore Content and Configurations
  • Single SQL Server 2008 Instance for Sitecore Analytics

The NEW Sitecore Azure environment is highly available and has the following server topology:

  • Load-balanced Sitecore CD Instances (2 servers)
  • Single Sitecore CM Instance (single server)
  • SQL Server 2012 AlwaysOn Availability Group (AAG) for Sitecore Content (2 servers)
  • SQL Server 2012 AlwaysOn Availability Group (AAG) for Sitecore Analytics (2 servers)

In this tutorial I will walk you through the processes required to provision a brand new Azure environment and migrate Sitecore.

This tutorial will be split into three parts and they are:

  1. Part 1 – Provision the Azure Sitecore Environment
  2. Part 2 – SQL Server 2012 AlwaysOn Availability Group Configuration (coming soon)
  3. Part 3 – Sitecore Configuration and Migration (coming soon)

 

PART 1 – Provision the Azure Sitecore Environment

In the Part 1 of the tutorial, we’ll look at building the foundations required for the Sitecore migration.

1. Sitecore Web Servers

  • First we need to create the two Sitecore CD instances. In Azure Management Portal, navigate to Virtual Machines and create a new virtual machine from gallery. Find the Windows Server 2012 R2 Datacenter template, go through the creation wizard and fill out all the required information.

1

  • When creating a new VM it must be assigned to a Cloud Service, you will get the opportunity to create a new Cloud Service if you don’t already have one. For load-balanced configurations, you also need to create a new Availability Set. Let’s create that too.

2

  • Repeat the above steps to create the second Sitecore CD instance and assign it to the same Cloud Service and Availability Set.
  • Repeat the above steps to create the Sitecore CM instance and create a new Cloud Service for it (you don’t need to create an Availability Set for a single instance).
  • Once all the VMs have been provisioned and configured properly, make sure they are all joined to the same domain in AD.

2. Sitecore SQL Servers

  • Now we need to create two SQL Server 2012 clusters – one for Sitecore content and the other for Sitecore analytics.
  • In Azure Management Portal, navigate to Virtual Machines and create a new virtual machine from gallery. Find the SQL Server 2012 SP2 Enterprise template (it will also work with SQL Server 2012 Standard and Web editions), go through the creation wizard and fill out all the required information.

Please Note: It’s important to note that by creating a new VM based on the SQL Server template, you are automatically assigned a pre-bundled SQL Server licence. If you want to use your own SQL Server licence, you’ll have to manually install SQL Server after spinning up a standard Windows Server VM.

3

  • During the creation process, create a new Cloud Service and Availability Set, and assign them to this VM.

4

  • Repeat the above steps to create the second Sitecore SQL Server instance and assign it to the same Cloud Service and Availability Set. These two SQL Servers will form the SQL Server cluster.
  • Repeat the above steps for the second SQL Server cluster, for Sitecore Analytics.
  • Once all the VMs have been provisioned and configured properly, make sure they are all joined to the same domain in AD.

3. Enable Load-balanced Sitecore Web Servers

In order to make the Sitecore CD instances highly available, we need to configure a load balancer that will handle traffic for those two Sitecore CD instances. In Azure terms, it just means adding a new endpoint, clicking on a few check boxes and you are ready to go. If only everything in life was that easy 🙂

  • In Azure Management Portal, find your Sitecore CD VM instance, open the Dashboard, navigate to the Endpoints tab and add a new endpoint (located at the bottom left hand corner).

5

  • You would need to add two new load-balanced endpoints – one for normal web traffic (port 80) and another for secure web traffic (port 443). In the creation wizard, specify the type of traffic for the endpoint, in this case it’s for HTTP Port 80. Make sure you check the Create a Load-balanced Set check box.

6

  • In the next screen, you’ll have to give the load-balanced set a name and leave the rest of the options as the default, confirm and create.

7

  • Do the same for secure web traffic and create a new endpoint for HTTPS Port 443.
  • Find your second Sitecore CD VM instance, open the Dashboard, navigate to the Endpoints tab and add a new endpoint (located at the bottom left hand corner). You’ll also need to add two load-balanced endpoints – one for normal web traffic (port 80) and another for secure web traffic (port 443). But this time around, you’ll create the endpoints based on the existing Load-Balanced Sets.

8

  • On the next screen, give the endpoint a name, confirm and create. Repeat the same steps for the HTTPS endpoint.

9

  • You should now have load-balanced ready Sitecore CD instances.
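For those who prefer scripting the portal steps above, a sketch of the equivalent classic (ASM) Azure PowerShell is below; the service, VM and load-balanced set names are placeholders of my own.

# Add load-balanced HTTP/HTTPS endpoints to one CD instance (repeat for the second instance
# using the same -LBSetName values so both join the same load-balanced sets)
Get-AzureVM -ServiceName "sitecore-cd-svc" -Name "sitecore-cd-01" |
    Add-AzureEndpoint -Name "HTTP" -Protocol tcp -LocalPort 80 -PublicPort 80 `
        -LBSetName "sitecore-cd-http" -ProbePort 80 -ProbeProtocol tcp |
    Add-AzureEndpoint -Name "HTTPS" -Protocol tcp -LocalPort 443 -PublicPort 443 `
        -LBSetName "sitecore-cd-https" -ProbePort 443 -ProbeProtocol tcp |
    Update-AzureVM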

 

In the next part of this tutorial, we’ll look at how to install and configure the SQL Server 2012 AlwaysOn Availability Group. Please stay tuned for Part 2 of this tutorial.

Migrating Azure Virtual Machines to another Region

I have a number of DEV/TEST Virtual Machines (VMs) deployed to Azure Regions in Southeast Asia (Singapore) and West US, as these were the closest to those of us living in Australia. Now that the new Azure Regions in Australia have been launched, it’s time to start migrating those VMs closer to home. Manually moving VMs between Regions is pretty straightforward and a number of articles already exist outlining the manual steps.

To migrate an Azure VM to another Region

  1. Shutdown the VM in the source Region
  2. Copy the underlying VHDs to storage accounts in the new Region
  3. Create OS and Data disks in the new Region
  4. Re-create the VM in the new Region.

Simple enough, but it involves tedious manual configuration, switching between tools and long waits while tens or hundreds of GBs are transferred between Regions.

What’s missing is the automation…

Automating the Migration

In this post I will share a Windows PowerShell script that automates the migration of Azure Virtual Machines between Regions. I have made the full script available via GitHub.

Here is what we are looking to automate:

Migrate-AzureVM

  1. Shutdown and Export the VM configuration
  2. Setup async copy jobs for all attached disks and wait for them to complete
  3. Restore the VM using the saved configuration.

The Migrate-AzureVM.ps1 script assumes the following:

  • Azure Service Management certificates are installed on the machine running the script for both source and destination Subscriptions (same Subscription for both is allowed)
  • Azure Subscription profiles have been created on the machine running the script. Use Get-AzureSubscription to check.
  • Destination Storage accounts, Cloud Services, VNets etc. already have been created.

The script accepts the following input parameters:

.\Migrate-AzureVM.ps1 -SourceSubscription "MySub" `
                      -SourceServiceName "MyCloudService" `
                      -VMName "MyVM" `
                      -DestSubscription "AnotherSub" `
                      -DestStorageAccountName "mydeststorage" `
                      -DestServiceName "MyDestCloudService" `
                      -DestVNETName "MyRegionalVNet" `
                      -IsReadOnlySecondary $false `
                      -Overwrite $false `
                      -RemoveDestAzureDisk $false
  • SourceSubscription – name of the source Azure Subscription
  • SourceServiceName – name of the source Cloud Service
  • VMName – name of the VM to migrate
  • DestSubscription – name of the destination Azure Subscription
  • DestStorageAccountName – name of the destination Storage Account
  • DestServiceName – name of the destination Cloud Service
  • DestVNETName – name of the destination VNet (blank if none used)
  • IsReadOnlySecondary – indicates if we are copying from the source storage account’s read-only secondary location
  • Overwrite – indicates if we overwrite a VHD that already exists in the destination storage account
  • RemoveDestAzureDisk – indicates if we remove an Azure Disk if it already exists in the destination disk repository

To ensure that the Virtual Machine configuration is not lost (and to avoid having to re-create it by hand) we must first shut down the VM and export the configuration, as shown in the PowerShell snippet below.

# Set source subscription context
Select-AzureSubscription -SubscriptionName $SourceSubscription -Current

# Stop VM
Stop-AzureVMAndWait -ServiceName $SourceServiceName -VMName $VMName

# Export VM config to temporary file
$exportPath = "{0}\{1}-{2}-State.xml" -f $ScriptPath, $SourceServiceName, $VMName
Export-AzureVM -ServiceName $SourceServiceName -Name $VMName -Path $exportPath

Once the VM configuration is safely exported and the machine shutdown we can commence copying the underlying VHDs for the OS and any data disks attached to the VM. We’ll want to queue these up as jobs and kick them off asynchronously as they will take some time to copy across.

# Get list of Azure disks that are currently attached to the VM
$disks = Get-AzureDisk | ? { $_.AttachedTo.RoleName -eq $VMName }

# Loop through each disk
foreach($disk in $disks)
{
    try
    {
        # Start the async copy of the underlying VHD to
        # the corresponding destination storage account
        $copyTasks += Copy-AzureDiskAsync -SourceDisk $disk
    }
    catch {}   # Support for existing VHD in destination storage account
}

# Monitor async copy tasks and wait for all to complete
WaitAll-AsyncCopyJobs

Tip: You’ll probably want to run this overnight. If you are copying between Storage Accounts within the same Region copy times can vary between 15 mins and a few hours. It all depends on which storage cluster the accounts reside. Michael Washam provides a good explanation of this and shows how you can check if your accounts live on the same cluster. Between Regions will always take a longer time (and incur data egress charges don’t forget!)… see below for a nice work-around that could save you heaps of time if you happen to be migrating within the same Geo.

You’ll notice the script also supports being re-run as you’ll have times when you can’t leave the script running during the async copy operation. A number of switches are also provided to assist when things might go wrong after the copy has completed.

Now that we have our VHDs in our destination Storage Account we can begin putting our VM back together again.

We start by re-creating the logical OS and Azure Data disks that take a lease on our underlying VHDs. So we don’t get clashes, I use a convention based on Cloud Service name (which must be globally unique), VM name and disk number.

# Set destination subscription context
Select-AzureSubscription -SubscriptionName $DestSubscription -Current

# Load VM config
$vmConfig = Import-AzureVM -Path $exportPath

# Loop through each disk again
$diskNum = 0
foreach($disk in $disks)
{
    # Construct new Azure disk name as [DestServiceName]-[VMName]-[Index]
    $destDiskName = "{0}-{1}-{2}" -f $DestServiceName,$VMName,$diskNum   

    Write-Log "Checking if $destDiskName exists..."

    # Check if an Azure Disk already exists in the destination subscription
    $azureDisk = Get-AzureDisk -DiskName $destDiskName `
                              -ErrorAction SilentlyContinue `
                              -ErrorVariable LastError
    if ($azureDisk -ne $null)
    {
        Write-Log "$destDiskName already exists"

        if ($RemoveDestAzureDisk -eq $true)
        {
            # Remove the disk from the repository
            Remove-AzureDisk -DiskName $destDiskName

            Write-Log "Removed AzureDisk $destDiskName"
            $azureDisk = $null
        }
        # else keep the disk and continue
    }

    # Determine media location
    $container = ($disk.MediaLink.Segments[1]).Replace("/","")
    $blobName = $disk.MediaLink.Segments | Where-Object { $_ -like "*.vhd" }
    $destMediaLocation = "http://{0}.blob.core.windows.net/{1}/{2}" -f $DestStorageAccountName,$container,$blobName

    # Attempt to add the azure OS or data disk
    if ($disk.OS -ne $null -and $disk.OS.Length -ne 0)
    {
        # OS disk
        if ($azureDisk -eq $null)
        {
            $azureDisk = Add-AzureDisk -DiskName $destDiskName `
                                      -MediaLocation $destMediaLocation `
                                      -Label $destDiskName `
                                      -OS $disk.OS `
                                      -ErrorAction SilentlyContinue `
                                      -ErrorVariable LastError
        }

        # Update VM config
        $vmConfig.OSVirtualHardDisk.DiskName = $azureDisk.DiskName
    }
    else
    {
        # Data disk
        if ($azureDisk -eq $null)
        {
            $azureDisk = Add-AzureDisk -DiskName $destDiskName `
                                      -MediaLocation $destMediaLocation `
                                      -Label $destDiskName `
                                      -ErrorAction SilentlyContinue `
                                      -ErrorVariable LastError
        }

        # Update VM config
        #   Match on source disk name and update with dest disk name
        $vmConfig.DataVirtualHardDisks.DataVirtualHardDisk | ? { $_.DiskName -eq $disk.DiskName } | ForEach-Object {
            $_.DiskName = $azureDisk.DiskName
        }
    }              

    # Next disk number
    $diskNum = $diskNum + 1
}
# Restore VM
$existingVMs = Get-AzureService -ServiceName $DestServiceName | Get-AzureVM
if ($existingVMs -eq $null -and $DestVNETName.Length -gt 0)
{
    # Restore first VM to the cloud service specifying VNet
    $vmConfig | New-AzureVM -ServiceName $DestServiceName -VNetName $DestVNETName -WaitForBoot
}
else
{
    # Restore VM to the cloud service
    $vmConfig | New-AzureVM -ServiceName $DestServiceName -WaitForBoot
}

# Startup VM
Start-AzureVMAndWait -ServiceName $DestServiceName -VMName $VMName

For those of you looking at migrating VMs between Regions within the same Geo and have GRS enabled, I have also provided an option to use the secondary storage location of the source storage account.

To support this you will need to enable RA-GRS (read access) and wait a few minutes for access to be made available by the storage service. Copying your VHDs will be very quick (in comparison to egress traffic) as the copy operation will use the secondary copy in the same region as the destination. Nice!

Enabling RA-GRS can be done at any time but you will be charged for a minimum of 30 days at the RA-GRS rate even if you turn it off after the migration.
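Switching the source account to RA-GRS is a one-liner with the classic module's Set-AzureStorageAccount; the account name below is a placeholder, and the account type value should be confirmed for your module version.

# Sketch: enable read-access geo-redundant storage on the source storage account
Set-AzureStorageAccount -StorageAccountName "mysourcestorage" -Type "Standard_RAGRS"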

# Check if we are copying from a RA-GRS secondary storage account
if ($IsReadOnlySecondary -eq $true)
{
    # Append "-secondary" to the media location URI to reference the RA-GRS copy
    $sourceUri = $sourceUri.Replace($srcStorageAccount, "$srcStorageAccount-secondary")
}

Don’t forget to clean up your source Cloud Services and VHDs once you have tested the migrated VMs are running fine so you don’t incur ongoing charges.

Conclusion

In this post I have walked through the main sections of a Windows PowerShell script I have developed that automates the migration of an Azure Virtual Machine to another Azure data centre. The full script has been made available on GitHub. The script also supports a number of other migration scenarios (e.g. cross Subscription, cross Storage Account, etc.) and will be a handy addition to your Microsoft Azure DevOps Toolkit.