The Present [and Future] Landscape of Data Migrations

A rite of passage for most of us in the tech consultancy world is being part of a medium- to large-scale data migration at some stage in our careers. No, I don’t mean dragging files from a PC to a USB drive, though that may well have factored into the equation for some of us. What I’m referring to is a planned piece of work where the objective is to move an entire data set from a legacy storage system to a target system. Presumably, a portion of this data is actively used, so the migration usually happens during a planned downtime period, backed by a communication strategy, staging, resourcing, and so on.

Yes, a lot of us can say ‘been there, done that’. And for some of us, it can seem simple when broken down as above. But what does it mean for the end user? The recurring cycle of change is never an easy one, and the impact of a data migration is often a big change. For the team delivering it, the experience can be just as stress-inducing: sleepless shift cycles, out-of-hours and late-night calls, and project scope creep (note: avoid being vague in work requests, especially when it comes to data migration work) are just a few of the issues that will shave years off anyone who’s unprepared for what a data migration encompasses.

Back to the end users: it’s a big change. New applications, new front-end interfaces, new operating procedures, a potential shake-up of business processes, and so on. Most opt, in agreement with the client, to cut the pain of the transition period short, ‘rip the Band-Aid right off’ and move the entire dataset from one system to another in a single operation. Sometimes, depending on context and platforms, this is a completely seamless exercise: the end user logs in on a Monday and is mostly unaware of the switch. Whether you take this approach or a phased one, there are signs in today’s technology services landscape that these operations are aging and becoming somewhat outdated.

Data Volumes Are Climbing…

… to put it mildly. We’re in a world of Big Data, and not only for global enterprises and large companies; mid-sized organisations and even some individuals are there too. Weekend downtimes aren’t going to be enough – or already aren’t, as this BA discovered on a recent assignment – and when your data volumes aren’t proportional to the number of end users you’re transitioning (the bigger goal, in my mind, is really the transformation of the user experience), you’re left with finite amounts of time to actually perform tests, gain user acceptance, and plan and strategise for mitigation and potential rollback.

Cloud Platforms Are Not Yet Well-Optimized for Effective (Pain-Free) Migrations

Imagine you have a billing system that contains somewhere up to 100 million fixed assets (active and backlog). The requirement is to migrate these all to a new system that is more intuitive for the accountants of your business. The new app has a built-in API that supports 500 asset migrations a second. Not bad; even so, the raw transfer alone works out to more than two days of continuous running (100 million ÷ 500 per second ≈ 55 hours), and it stretches towards weeks once the work is confined to overnight or weekend windows. Not optimal for a project, no matter how much planning goes into the delivery phase. On top of this, consider the performance degradation for users whose access goes through the same API or load gateway. Not fun.

What’s the Alternative?

In a world where we’re looking to make technology and solution delivery faster and more efficient, the future of data migration may, in fact, be headed in the opposite direction.

Rather than phasing your migrations over outage windows of days or weeks, or from weekend-to-weekend, why not stretch this out to months even?

Now, before anyone cries ‘exorbitant bill-ables’, I’m not suggesting that the migration project itself be drawn out for an overly long period of time (months, a year).

No, the idea is not to keep a project team around for the unforeseen yet to-be-expected challenges mentioned above. Rather, as tech and business consultants, a possible alternative is to redirect our efforts towards quality of service: to focus on the change management aspects of end-user adoption of a new platform and its associated processes, and on building the capability of a given company’s managed IT services to not only support the change but incorporate the migration itself as a standard service offering.

The Bright(er) Future for Data Migrations

How can managed services support a data migration without prior specialisation in, say, PowerShell scripting, or experience in performing a migration with a particular tool? Nowadays we are fortunate that vendors are developing migration tools to be highly user-friendly and purposed for ongoing enterprise use. They are doing this to shift the view that a relationship with a solution provider for projects such as this should simply be a one-off, and to make the capability of the migration software matter more than the capability of the resource performing the migration (still important, but ‘technical skills’ in this space are becoming more of a level playing field).

From a business consultancy angle, an opportunity to provide an improved quality of service presents itself when we use our engagement and discovery skills to bridge the gaps that often exist between managed services and an understanding of the business’s everyday processes. A lot of this will hinge on the very data being migrated. Given time, and with full support from managed services, this can prompt positive action from the business. Data migrations as a BAU activity can become iterative and request-driven: active and relevant data first, followed potentially by a ‘house-cleaning’ activity where the business effectively declutters data it no longer needs or that is no longer relevant.

It’s early days and we’re likely still straddling the line between the old data migration methodology and exploring what could be. But ultimately, enabling a client or company to become more technologically capable, starting with data migrations, is definitely worth a cent or two.

Automate network share migrations to SharePoint Online using ShareGate PowerShell

Sharegate supports PowerShell scripting, which can be used to automate and schedule migrations. In this post, I am going to demonstrate an example of end-to-end automation to migrate network shares to SharePoint Online. The process effectively reduces the task of executing migrations to “just flicking a switch”.

Pre-Migration

The following pre-migration activities were conducted before the actual migration:

  1. Analysis of Network Shares
  2. Discussions with stakeholders from different business units to identify content needs
  3. Pilot migrations to identify average throughput capability of migration environment
  4. Identification of acceptable data filtration criteria, and preparation of Sharegate migration template files based on business requirements
  5. Derivation of a migration plan from the above steps

Migration Automation flow

The diagram represents a high-level flow of the process:

 

The migration automation was implemented to execute the following steps:

  1. The migration team indicates that migration(s) are ready to be initiated by updating the list item(s) in the SharePoint list
  2. Updated item(s) are detected by a PowerShell script polling the SharePoint list
  3. The list item data is downloaded as a CSV file, one CSV file per list item. The list item status is updated to “started” so that it is not read again
  4. The CSV file(s) are picked up by another PowerShell script to initiate the migration using Sharegate
  5. The required migration template is selected, based on the options specified in the migration list item / CSV, to create a migration package
  6. The prepared migration task is queued with Sharegate, and the migration is executed
  7. Information mails are “queued” to be dispatched to the migration team
  8. Emails are sent out to the recipients
  9. The migration reports are extracted as CSV files and stored at a network location

Environment Setup

Software Components

The following software components were utilized for implementing the automation:

  1. SharePoint Online Management Shell
  2. SharePoint PnP PowerShell
  3. Sharegate migration tool

Environment considerations

Master and Migration terminals hosted as virtual machines – Each terminal is a Windows 10 virtual machine. The use of virtual machines provides the following advantages over using desktops:

  • VMs are generally deployed directly in datacenters, and hence close to the data source
  • They are available all the time and are not affected by power outages or manual shutdowns
  • They can be easily scaled up or down based on the project requirements
  • They benefit from better internet connectivity, and separate internet routing can be arranged based on requirements

Single Master Terminal – A single master terminal is useful to centrally manage all other migration terminals. Using a single master terminal offers the following advantages:

  • Single point of entry to migration process
  • Acts as central store for scripts, templates, aggregated reports
  • Acts as a single agent to execute non-sequential tasks such as sending out communication mails

Multiple Migration terminals – To expedite overall throughput, it is optimal to run migrations in parallel across multiple machines (terminals) within the available migration window (generally non-business hours). Sharegate gives the option to use either 1 or 5 licenses at once during a migration; we utilized 5 ShareGate licenses on 5 separate migration terminals.

PowerShell Remoting – Using PowerShell remoting allows opening remote PowerShell sessions to other Windows machines. This allows the migration team to control and manage migrations from just one terminal (the master terminal) and simplifies monitoring of simultaneous migration tasks. More information about PowerShell remoting can be found here.
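As a rough illustration (the terminal names are placeholders), remoting is enabled once on each migration terminal and is then driven from the master terminal:

# On each migration terminal (run once, from an elevated prompt): allow incoming remote sessions
Enable-PSRemoting -Force

# From the master terminal: open sessions to the migration terminals and run a command on all of them
$terminals = "MIGTERM01", "MIGTERM02", "MIGTERM03", "MIGTERM04", "MIGTERM05"
$sessions = New-PSSession -ComputerName $terminals
# Example: check the state of a (hypothetical) migration task on every terminal
Invoke-Command -Session $sessions -ScriptBlock { Get-ScheduledTask -TaskName "MigrationTask" } |
    Select-Object PSComputerName, TaskName, State
Remove-PSSession $sessions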

PowerShell execution policy – The scripts running on the migration terminals are stored at a network location on the master terminal. This allows changing / updating scripts on the fly without copying them over to the other migration terminals. The script execution policy of the PowerShell session needs to be set to “Bypass” to allow execution of scripts stored at a network location (for quick reference, the command is “Set-ExecutionPolicy -ExecutionPolicy Bypass”).

Windows Scheduled Tasks – The PowerShell scripts are scheduled as tasks through Windows Task Scheduler, and these tasks can be managed remotely using scripts running on the migration terminals. The scripts are stored at a network location on the master terminal.

Basic task in a windows scheduler

PowerShell script file configured to run as a Task
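A task like the ones shown above can also be created from PowerShell; a minimal sketch (the script path, schedule, task name and service account are assumptions):

# Run the migration script from the network share on the master terminal every evening
$action  = New-ScheduledTaskAction -Execute "powershell.exe" `
    -Argument "-ExecutionPolicy Bypass -File \\masterterminal\c$\AutomatedMigrationData\scripts\Execute-Migration.ps1"
$trigger = New-ScheduledTaskTrigger -Daily -At 7pm
Register-ScheduledTask -TaskName "MigrationTask" -Action $action -Trigger $trigger -User "DOMAIN\svc-migration" -Password "********"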

Hardware specifications

Master terminal (Manage migrations)

  • 2 cores, 4 GB RAM, 100 GB HDD
  • Used for managing script execution tasks on the other terminals (start, stop, disable, enable)
  • Used for centrally storing all scripts, ShareGate property mappings and migration templates
  • Used for initiating mails (configured as basic tasks in Task Scheduler)
  • Used for detecting and downloading the migration configuration of tasks ready to be initiated (configured as basic tasks in Task Scheduler)
  • Windows 10 virtual machine installed with the required software.
  • Script execution policy set as “Bypass”

Migration terminals (Execute migrations)

  • 8 cores, 16 GB RAM, 100 GB HDD
  • Used for processing migration tasks (configured as basic tasks in windows task scheduler)
  • Multiple migration terminals may be set up based on the available Sharegate licenses
  • Windows 10 Virtual machines each installed with the required software.
  • Activated Sharegate license on each of the migration terminals
  • PowerShell remoting needs to be enabled
  • Script execution policy set as “Bypass”

Migration Process

Initiate queueing of Migrations

Before migration, the migration team must perform any manual pre-migration tasks defined by the migration process agreed with stakeholders. Some of the pre-migration tasks / checks may be:

  • Inform other teams about a possible network surge
  • Confirming whether another activity is consuming bandwidth (e.g. scheduled updates)
  • Inform the impacted business users about the migration – this would be generally set up as the communication plan
  • Freezing the source data store as “read-only”

A list was created on a SharePoint Online site to enable users to indicate that a migration is ready to be processed. Updates in this list trigger the actual migration downstream. The migration plan is pre-populated into this list as part of the migration planning phase. The migration team can then update one of the fields (ReadyToMigrate in this case) to initiate the migration, or skip a planned migration if so desired. Migration status is also updated back to this list by the automation process.

The list serves as a single point of entry to initiate and monitor migrations. In other words, we are abstracting the migration processing behind this list, and it can be an effective tool for the migration and communication teams.

The list was created with the following columns:

  • Source => Network share root path
  • Destination site => https://yourtenant.sharepoint.com/sites/<Sitename>
  • Destination Library => Destination library on the site
  • Ready to migrate => Indicates that the migration is ready to be triggered
  • Migrate all data => Indicates whether all data from the source is to be migrated (default is No, in which case only data matching the predefined filter options is migrated – more on filtered options can be found here)
  • Started => Updated by the automation when the migration package has been downloaded
  • Migrated => Updated by the automation after migration completion
  • Terminal Name => Updated by the automation, specifying the terminal used for the migration

 

Migration configuration list

 

When the migration team is ready to initiate the migration, the field “ReadyToMigrate” for the migration item in the SharePoint list is updated to “Yes”.


“Flicking the switch”

 

Script to create the migration configuration list

The script below creates the source list in SharePoint Online.
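A minimal sketch of such a script using the SharePoint PnP PowerShell cmdlets (the site URL, list title and internal column names are assumptions based on the columns described earlier):

# Connect to the site that will host the migration configuration list
$cred = Get-Credential
Connect-PnPOnline -Url "https://yourtenant.sharepoint.com/sites/migration" -Credentials $cred

# Create the list and the columns used by the automation
New-PnPList -Title "MigrationConfiguration" -Template GenericList
Add-PnPField -List "MigrationConfiguration" -DisplayName "Source" -InternalName "Source" -Type Text -AddToDefaultView
Add-PnPField -List "MigrationConfiguration" -DisplayName "Destination Site" -InternalName "DestinationSite" -Type Text -AddToDefaultView
Add-PnPField -List "MigrationConfiguration" -DisplayName "Destination Library" -InternalName "DestinationLibrary" -Type Text -AddToDefaultView
Add-PnPField -List "MigrationConfiguration" -DisplayName "Ready to migrate" -InternalName "ReadyToMigrate" -Type Boolean -AddToDefaultView
Add-PnPField -List "MigrationConfiguration" -DisplayName "Migrate all data" -InternalName "MigrateAllData" -Type Boolean -AddToDefaultView
Add-PnPField -List "MigrationConfiguration" -DisplayName "Started" -InternalName "Started" -Type Boolean -AddToDefaultView
Add-PnPField -List "MigrationConfiguration" -DisplayName "Migrated" -InternalName "Migrated" -Type Boolean -AddToDefaultView
Add-PnPField -List "MigrationConfiguration" -DisplayName "Terminal Name" -InternalName "TerminalName" -Type Text -AddToDefaultView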

Script to store credentials

The file stores the credentials and can be used by subsequent scripts.
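One simple way to do this (a sketch; the path is a placeholder) is to export a PSCredential with Export-Clixml, which protects the password with Windows DPAPI for the current user and machine:

# Run once, as the account that will execute the scheduled tasks
Get-Credential | Export-Clixml -Path "C:\AutomatedMigrationData\creds\migration.xml"

# In subsequent scripts, load the stored credential
# (the file can only be decrypted by the same user on the same machine)
$cred = Import-Clixml -Path "C:\AutomatedMigrationData\creds\migration.xml"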

Queuing the migration tasks

A PowerShell script polls the migration configuration list in SharePoint at regular intervals to determine whether a migration task is ready to be initiated. The available migration configurations are then downloaded as CSV files, one item per file, and stored in a migration packages folder on the master terminal. Each CSV file maps to one migration task to be executed by a migration terminal and ensures that the same migration task is not executed by more than one terminal. It is important that this script runs on a single terminal only, to ensure that only one migration is executed for each source item.
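A cut-down sketch of such a polling script (the list name, internal column names, site URL and folder paths are assumptions consistent with the configuration list described above):

# Connect to the site hosting the migration configuration list
$cred = Import-Clixml -Path "C:\AutomatedMigrationData\creds\migration.xml"
Connect-PnPOnline -Url "https://yourtenant.sharepoint.com/sites/migration" -Credentials $cred

# Find items flagged as ready but not yet started
$items = Get-PnPListItem -List "MigrationConfiguration" |
    Where-Object { $_["ReadyToMigrate"] -eq $true -and $_["Started"] -ne $true }

foreach ($item in $items) {
    # One CSV per list item; a migration terminal will pick each file up independently
    [PSCustomObject]@{
        ItemId             = $item.Id
        Source             = $item["Source"]
        DestinationSite    = $item["DestinationSite"]
        DestinationLibrary = $item["DestinationLibrary"]
        MigrateAllData     = $item["MigrateAllData"]
    } | Export-Csv -NoTypeInformation -Path ("C:\AutomatedMigrationData\packages\item_{0}.csv" -f $item.Id)

    # Mark the item as started so it is not picked up again
    Set-PnPListItem -List "MigrationConfiguration" -Identity $item.Id -Values @{ "Started" = $true } | Out-Null
}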

 

 

Execute Migration

The downloaded migration configuration CSV files are detected by migration script tasks executing on each of the migration terminals. Based on the specified source, destination and migration options the following tasks are executed:

  1. Reads the item from the configuration list to retrieve updated data based on the item ID
  2. Verifies the source; sends a failure mail if the source is invalid or unavailable
  3. Revalidates that the migration has not already been initiated by another terminal
  4. Updates the “TerminalName” field in the SharePoint list to indicate an initiated migration
  5. Checks if the destination site exists; creates it if not already available
  6. Checks if the destination library exists; creates it if not already available
  7. Triggers an information mail announcing the migration start
  8. Loads the required configurations based on the required migration outcome. The migration configurations specify options such as cut-over dates, source data filters, security and metadata. More about this can be found here.
  9. Initiates the migration task
  10. Extracts the migration report and stores it as CSV
  11. Extracts a secondary migration report as CSV to derive the paths of all files successfully migrated. These CSVs can be read by an optional downstream process.
  12. Triggers an information mail announcing that the migration is completed
  13. Checks for another queued migration and repeats the procedure

The automation script is given below.
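The full script is environment-specific; a simplified sketch of the core loop, using Sharegate’s Import-Document cmdlet, is shown here (folder paths, template names and the mail-queue convention are assumptions, and the reporting and list-update steps are indicative only):

# Load the stored credential and pick up any downloaded migration packages
$cred     = Import-Clixml -Path "\\masterterminal\c$\AutomatedMigrationData\creds\migration.xml"
$packages = Get-ChildItem -Path "\\masterterminal\c$\AutomatedMigrationData\packages" -Filter "item_*.csv"

foreach ($package in $packages) {
    $task = Import-Csv -Path $package.FullName

    # Verify the source network share before doing anything else
    if (-not (Test-Path -Path $task.Source)) {
        # Queue a failure mail; the send-mail task described later dispatches it
        [PSCustomObject]@{ Subject = "Migration failed: $($task.Source)"; Body = "Source path not found." } |
            Export-Csv -NoTypeInformation -Path "\\masterterminal\c$\AutomatedMigrationData\mails\input\fail_$($task.ItemId).csv"
        continue
    }

    # Connect to the destination site and resolve the destination library using Sharegate cmdlets
    $dstSite = Connect-Site -Url $task.DestinationSite -Username $cred.UserName -Password $cred.Password
    $dstList = Get-List -Name $task.DestinationLibrary -Site $dstSite

    # Pick the migration template based on the options in the configuration item
    $templateName = if ($task.MigrateAllData -eq "True") { "FullMigration" } else { "FilteredMigration" }

    # Run the migration and export the Sharegate migration report for this run (report location is an assumption)
    $result = Import-Document -SourceFolder $task.Source -DestinationList $dstList -TemplateName $templateName
    Export-Report $result -Path "\\masterterminal\c$\AutomatedMigrationData\reports"

    # Updating the SharePoint list (TerminalName, Migrated) and queuing the completion mail follow the same
    # patterns shown elsewhere in this post and are omitted here for brevity
}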

 


Additional Scripts

Send Mails

The script triggers emails to required recipients.

This script polls a folder (‘\\masterterminal\c$\AutomatedMigrationData\mails\input’) for any files to be sent out as emails. The CSV files specify the subject and body of the emails to be sent to the recipients configured in the script. Processed CSV files are moved to a final folder.
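A sketch of the dispatch loop (the SMTP server, sender and recipient addresses are placeholders):

$inputFolder = "\\masterterminal\c$\AutomatedMigrationData\mails\input"
$finalFolder = "\\masterterminal\c$\AutomatedMigrationData\mails\processed"

Get-ChildItem -Path $inputFolder -Filter "*.csv" | ForEach-Object {
    $mail = Import-Csv -Path $_.FullName
    Send-MailMessage -SmtpServer "smtp.yourdomain.com" -From "migration@yourdomain.com" `
        -To "migrationteam@yourdomain.com" -Subject $mail.Subject -Body $mail.Body
    # Move the processed file so it is not sent again
    Move-Item -Path $_.FullName -Destination $finalFolder
}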

 

Manage migration tasks (scheduled tasks)

The PowerShell script utilizes PowerShell remoting to manage Windows Task Scheduler tasks configured on the other terminals.
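For example (terminal and task names are placeholders), the migration tasks on every terminal can be stopped or started from the master terminal:

$terminals = "MIGTERM01", "MIGTERM02", "MIGTERM03", "MIGTERM04", "MIGTERM05"

# Disable the migration task on all terminals, e.g. ahead of a maintenance window
Invoke-Command -ComputerName $terminals -ScriptBlock { Disable-ScheduledTask -TaskName "MigrationTask" }

# Re-enable the task and kick it off again when ready
Invoke-Command -ComputerName $terminals -ScriptBlock {
    Enable-ScheduledTask -TaskName "MigrationTask"
    Start-ScheduledTask -TaskName "MigrationTask"
}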

 


Conclusion

The migration automation process described above helps automate the migration project and reduces manual overhead during the migration. Since the scripts utilize pre-configured migration options / templates, the outcome is consistent with the plan. Controlling and monitoring migration tasks through a SharePoint list introduces transparency into the system and abstracts the migration complexity. Business stakeholders can review migration status easily from the SharePoint list, which makes it an effective communication channel. Automated mails provide additional information about migration status. The migration tasks are executed in parallel across multiple migration machines, which aids better utilization of the available migration window.

 

 

Migrating SharePoint 2013 on-premises to Office 365 using Sharegate

Recently I completed a migration project which brought a number of sub-sites from SharePoint 2013 on-premises to the cloud (SharePoint Online). We decided to use Sharegate as the primary tool due to its simplicity.

Although it might sound like a straightforward process, there are a few things worth checking pre- and post-migration, and I have summarized them here. I found it easier to have this information recorded in a spreadsheet with different tabs:

Pre-migration check:

  1. First thing, Get Site Admin access!

    This is the first and most important step: get yourself admin access. It can be a lengthy process, especially in a large corporate environment. The best level of access is being granted Site Collection Admin for all sites, but sometimes this might not be possible. Getting Site Administrator access is the bare minimum needed to get the migration to work.

    You will likely be granted Global Admin on the new tenant in most cases, but if not, ask for it!

  2. List down active site collection features

    Whatever features are activated on the source site will need to be activated on the destination site as well, so we need to record what has been activated on the source site. If any third-party feature is activated, you will need to liaise with the relevant stakeholders as to whether it is still required on the new site. If it is, it is highly likely that a separate license is required, as the new environment will be cloud-based rather than on-premises. Take Nintex Workflow, for example: Nintex Workflow Online is a separate license compared to Nintex Workflow 2013.

  3. Segregate the list of sites, inventory analysis

    I found it important to list all the sites you are going to migrate and distinguish whether they are site collections or just subsites. What I did was put each site under a new tab, with all its site contents listed. Next to each list / library, I have fields for the list type, number of items and a comment (if any).

    Go through each of the contents, preferably sitting down with the site owner, and get into the details. Some useful questions to ask:

  • Is this still relevant? Can it be deleted or skipped for the migration?
  • Is this heavily used? How often does it get accessed?
  • Does this list have custom edit / new forms? Sometimes owners might not even know, so you might have to take an extra look by scanning through the forms.
  • Check if pages have custom script with site URL references, as these will need to be changed to accommodate the new site URL.

It would also be useful to get a comprehensive picture of how much storage each site holds. This can help you work out which site has the most content and is therefore likely to take the longest time during the migration. Sharegate has an inventory reporting tool, which can help, but it requires Site Collection Admin access.

  4. Discuss some of the limitations

    Pages library

    The Pages library under each site needs specific attention, especially if you don’t have site collection admin! Pages which inherit a content type or master page from the parent site will not have these migrated across by Sharegate, meaning these pages will either not be created at the new site, or they will simply show as using the default master page. This needs to be communicated and discussed with each owner.

    External Sharing

    External users will not be migrated across to the new site! These are users who won’t be provisioned in the new tenant but still require access to SharePoint. They will need to be added (invited) manually to a site using their O365 email account or a Microsoft account.

    An O365 account is whatever account they have been using to get onto their own SharePoint Online. If they have not had one, they will need to use their Microsoft account, which would be a Hotmail / Outlook account. Once they have been invited, they will need to respond to the email by signing into the portal in order to get provisioned. The new SPO site collection will need to have external sharing enabled before external access can happen. For more information, refer to: https://support.office.com/en-us/article/Manage-external-sharing-for-your-SharePoint-Online-environment-C8A462EB-0723-4B0B-8D0A-70FEAFE4BE85

    What can’t Sharegate do?

    Some of the following minor things cannot be migrated to O365:

  • User alerts – user will need to reset their alerts on new site
  • Personal views – user will need to create their personal views again on new site
  • Web part connections – any web part connections will not be preserved

For more, refer: https://support.share-gate.com/hc/en-us/categories/115000076328-Limitations

Performing the migration:

  1. Pick the right time

    Doing the migration during a low-activity period is ideal. User communications should be sent out to inform users of the actual date as early as possible. I tend to stick to the middle of the week, so we still have a couple of days left to solve any issues, instead of doing it on a Friday or Saturday.

  2. Locking old sites

    During the migration, we do not want any users making changes to the old site. If you are migrating site collections, fortunately there’s a way to lock them down, provided you have access to the central admin portal. See https://technet.microsoft.com/en-us/library/cc263238.aspx

    However, if you are migrating sub-sites, there’s no way to lock down a single sub-site except by changing its site permissions. Changing the site permissions risks losing that permissions information, so it is a good idea to record the permissions before making any changes. Also, take extra note of lists or libraries with unique permissions: because they do not inherit site permissions, they won’t be “locked” unless they are changed manually as well.

  3. Beware of O365 traffic jam

    Always stick to Insane mode when running the migration in Sharegate. Insane mode makes use of the new Office 365 Migration API, which is the fastest way to migrate huge volumes of data to Office 365. While exporting the data to Office 365 is fast, I did find a delay waiting for Office 365 to import it into the SharePoint tenant. Sometimes it could sit there for an hour before continuing with the import. Also, avoid running too many sessions if your VM is not powerful enough.

  4. Delta migration

    The good thing about using Sharegate is that you can do a delta migration, which means you only migrate files which have been modified or added since the last migration. However, it doesn’t handle deletion! If any files have been removed since you last migrated, running a delta sync will not delete these files from the destination. Therefore, best practice is still to delete the list from the destination site and re-create it using the Site Object wizard.

Post-migration check:


Things to check:

  • Users can still access relevant pages, list and libraries
  • Users can still CRUD files/ items
  • Users can open Office web apps (there can be a different experience related to authentication when opening Office files; in most cases, users should only get prompted the very first time)

A tool to find mailbox permission dependencies

First published at https://nivleshc.wordpress.com

When planning to migrate mailboxes to Office 365, a lot of care must be taken around which mailboxes are moved together. The rule of thumb is “those that work together, move together”. The reason for this approach is that some permissions do not work cross-premises and can cause issues. For instance, if a mailbox has delegate permissions to another mailbox (permissions that have been assigned using the Outlook email client) and one is migrated to Office 365 while the other remains on-premises, the delegate permissions capability is broken, as it does not work cross-premises.

During the recent Microsoft Ignite, it was announced that there are a lot of features coming to Office 365 which will help with the cross-premises access issues.

I have been using Roman Zarka’s Export-MailboxPermissions.ps1 (part of https://blogs.technet.microsoft.com/zarkatech/2015/06/11/migrate-mailbox-permissions-to-office-365/ bundle) script to export all on-premises mailboxes permissions then using the output to decide which mailboxes move together. Believe me, this can be quite a challenge!

Recently, while having a casual conversation with one of my colleagues, I was introduced to an Excel spreadsheet that he had created. Being the Excel guru that he is, he was doing various VLOOKUPs into the outputs from Roman Zarka’s script to find out whether the mailboxes he was intending to migrate had any permission dependencies with other mailboxes. I just stared at the spreadsheet in awe and uttered the words “dude, that is simply awesome!”

I was hooked on that spreadsheet. However, I started craving for it to do more, so I decided to take it upon myself to add some more features. Not being too savvy with Excel, I decided to use PowerShell instead. Thus was born Find_MailboxPermissions_Dependencies.ps1.

I will now walk you through the script and explain what it does

 

  1. The first pre-requisite for Find_MailboxPermissions_Dependencies.ps1 is the set of four output files from Roman Zarka’s Export-MailboxPermissions.ps1 script (MailboxAccess.csv, MailboxFolderDelegate.csv, MailboxSendAs.csv, MailboxSendOnBehalf.csv)
  2. The next pre-requisite is details about the on-premises mailboxes. The on-premises Exchange environment must be queried and the details output to a CSV file named OnPrem_Mbx_Details.csv. The CSV must contain the following information (with the following column headings): “DisplayName, UserPrincipalName, PrimarySmtpAddress, RecipientTypeDetails, Department, Title, Office, State, OrganizationalUnit”
  3. The last pre-requisite is information about mailboxes that are already in Office 365. Use PowerShell to connect to Exchange Online and then run the following command (where O365_Mbx_Details.csv is the output file)
    Get-Mailbox -ResultSize unlimited | Select DisplayName,UserPrincipalName,EmailAddresses,WindowsEmailAddress,RecipientTypeDetails | Export-Csv -NoTypeInformation -Path O365_Mbx_Details.csv 

    If there are no mailboxes in Office 365, then create a blank file and put the following column headings in it “DisplayName”, “UserPrincipalName”, “EmailAddresses”, “WindowsEmailAddress”, “RecipientTypeDetails”. Save the file as O365_Mbx_Details.csv

  4. Next, put the above files in the same folder and then update the variable $root_dir in the script with the path to the folder (the path must end with a \)
  5. It is assumed that the above files have the following names
    • MailboxAccess.csv
    • MailboxFolderDelegate.csv
    • MailboxSendAs.csv
    • MailboxSendOnBehalf.csv
    • O365_Mbx_Details.csv
    • OnPrem_Mbx_Details.csv
  6. Now that all the inputs have been taken care of, run the script.
  7. The first task the script does is to validate if the input files are present. If any of them are not found, the script outputs an error and terminates.
  8. Next, the files are read and stored in memory
  9. Now for the heart of the script. It goes through each of the mailboxes in the OnPrem_Mbx_Details.csv file and finds the following
    • all mailboxes that have been given SendOnBehalf permissions to this mailbox
    • all mailboxes that this mailbox has been given SendOnBehalf permissions on
    • all mailboxes that have been given SendAs permissions to this mailbox
    • all mailboxes that this mailbox has been given SendAs permissions on
    • all mailboxes that have been given Delegate permissions to this mailbox
    • all mailboxes that this mailbox has been given Delegate permissions on
    • all mailboxes that have been given Mailbox Access permissions on this mailbox
    • all mailboxes that this mailbox has been given Mailbox Access permissions on
    • if the mailbox that this mailbox has given the above permissions to or has got permissions on has already been migrated to Office 365
  10. The results are then output to a CSV file (the name of the output file is of the format Find_MailboxPermissions_Dependencies_{timestamp of when the script was run}_csv.csv)
  11. The columns in the output file are explained below
  • PermTo_OtherMbx_Or_FromOtherMbx? – Y if the mailbox has given permissions to or has permissions on other mailboxes; N if there are no permission dependencies for this mailbox
  • PermTo_Or_PermFrom_O365Mbx? – TRUE if the mailbox that this mailbox has given permissions to or has permissions on is already in Office 365
  • Migration Readiness – a color code based on the migration readiness of this permission (explained further below)
  • DisplayName – the display name of the on-premises mailbox for which the permission dependency is being found
  • UserPrincipalName – the userprincipalname of the on-premises mailbox for which the permission dependency is being found
  • PrimarySmtp – the primarySmtp of the on-premises mailbox for which the permission dependency is being found
  • MailboxType – the mailbox type of the on-premises mailbox for which the permission dependency is being found
  • Department – the department the on-premises mailbox belongs to (inherited from the Active Directory object)
  • Title – the title that this on-premises mailbox has (inherited from the Active Directory object)
  • SendOnBehalf_GivenTo – email address of the mailbox that has been given SendOnBehalf permissions to this on-premises mailbox
  • SendOnBehalf_GivenOn – email address of the mailbox that this on-premises mailbox has been given SendOnBehalf permissions to
  • SendAs_GivenTo – email address of the mailbox that has been given SendAs permissions to this on-premises mailbox
  • SendAs_GivenOn – email address of the mailbox that this on-premises mailbox has been given SendAs permissions on
  • MailboxFolderDelegate_GivenTo – email address of the mailbox that has been given Delegate access to this on-premises mailbox
  • MailboxFolderDelegate_GivenTo_FolderLocation – the folders of the on-premises mailbox that the delegate access has been given to
  • MailboxFolderDelegate_GivenTo_DelegateAccess – the type of delegate access that has been given on this on-premises mailbox
  • MailboxFolderDelegate_GivenOn – email address of the mailbox that this on-premises mailbox has been given Delegate Access to
  • MailboxFolderDelegate_GivenOn_FolderLocation – the folders that this on-premises mailbox has been given delegate access to
  • MailboxFolderDelegate_GivenOn_DelegateAccess – the type of delegate access that this on-premises mailbox has been given
  • MailboxAccess_GivenTo – email address of the mailbox that has been given Mailbox Access to this on-premises mailbox
  • MailboxAccess_GivenTo_DelegateAccess – the type of Mailbox Access that has been given on this on-premises mailbox
  • MailboxAccess_GivenOn – email address of the mailbox that this mailbox has been given Mailbox Access to
  • MailboxAccess_GivenOn_DelegateAccess – the type of Mailbox Access that this on-premises mailbox has been given
  • OrganizationalUnit – the Organizational Unit of the on-premises mailbox

The color codes in the column Migration Readiness correspond to the following

  • LightBlue – this on-premises mailbox has no permission dependencies and can be migrated
  • DarkGreen  – this on-premises mailbox has got a Mailbox Access permission dependency to another mailbox. It can be migrated while the other mailbox can remain on-premises, without experiencing any issues as Mailbox Access permissions are supported cross-premises.
  • LightGreen – this on-premises mailbox can be migrated without issues as the permission dependency is on a mailbox that is already in Office 365
  • Orange – this on-premises mailbox has SendAs permissions given to/or on another on-premises mailbox. If both mailboxes are not migrated at the same time, the SendAs capability will be broken. Lately, it has been noticed that this capability can be restored by re-applying the SendAs permissions to both the migrated and on-premises mailbox post migration
  • Pink – the on-premises mailbox has FolderDelegate given to/or on another on-premises mailbox. If both mailboxes are not migrated at the same time, the FolderDelegate capability will be broken. A possible workaround is to replace the FolderDelegate permission with Full Mailbox access as this works cross-premises, however there are privacy concerns around this workaround as this will enable the delegate to see all the contents of the mailbox instead of just the folders they had been given access on.
  • Red – the on-premises mailbox has SendOnBehalf permissions given to/or on another on-premises mailbox. If both mailboxes are not migrated at the same time, the SendOnBehalf capability will be broken. A possible workaround could be to replace SendOnBehalf with SendAs however the possible implications of this change must be investigated

Yay, the output has now been generated. All we need to do now is to make it look pretty in Excel 🙂

Carry out the following steps

  • Import the output CSV file into Excel, using the semi-colon “;” as the delimiter (I couldn’t use commas as the delimiter because fields such as department and title sometimes contain them, which causes issues with the output file)
  • Create Conditional Formatting rules for the column Migration Readiness so that the fill color of this cell corresponds to the word in this column (for instance, if the word is LightBlue then create a rule to apply a light blue fill to the cell)

That’s it, folks! The mailbox permissions dependency spreadsheet is now ready. It provides a single-pane view of all the permissions across your on-premises mailboxes and gives a color-coded analysis of which mailboxes can be migrated on their own without any issues and which might experience issues if they are not migrated in the same batch as the ones they have permission dependencies on.

In the output file, for each on-premises mailbox, each line represents a permission dependency (unless the column PermTo_OtherMbx_Or_FromOtherMbx? is N). If more than one set of permissions applies to an on-premises mailbox, they are displayed consecutively, one underneath the other.

It is imperative that the migration readiness of the mailbox be evaluated based on the migration readiness of all the permissions associated with that mailbox.

Find_MailboxPermissions_Dependencies.ps1 can be downloaded from  GitHub

A sample of the spreadsheet that was created using the output from the Find_MailboxPermissions_Dependencies.ps1 script can be downloaded from https://github.com/nivleshc/arm/blob/master/Sample%20Output_MailboxPermissions%20Dependencies.xlsx

I hope this script comes in handy when you are planning your migration batches and helps alleviate some of the headache that this task brings with it.

Till the next time, have a great day 😉

Exchange 2010 Hybrid Auto Mapping Shared Mailboxes

Migrating shared mailboxes to Office 365 is one of those things that is starting to become easier over time, especially with full access permissions now working cross premises.

One little discovery that I thought I would share is that if you have an Exchange 2010 hybrid configuration, the auto-mapping feature will not work cross-premises. (With Exchange 2013 and above you are OK and have nothing to worry about.)

This means if an on-premises user has access to a shared mailbox that you migrate to Office 365, it will disappear from their Outlook even though they still have full access.

Unless you migrate the shared mailbox and user at the same time, the only option is to manually add the shared mailbox back into Outlook.

Something to keep in mind especially if your user base accesses multiple shared mailboxes at a time.

 

 

Complex Mail Routing in Exchange Online Staged Migration Scenario

Notes From the Field:

I was recently asked to assist an ongoing project with understanding some complex mail routing and identity scenarios which had been identified during planning for an upcoming mail migration from an external system into Exchange Online.

New user accounts were created in Active Directory for the external staff who are about to be migrated. If we were to assign the target-state production email attributes now and create the Exchange Online mailboxes, we would have a problem as migration neared.

When the new domain is verified in Office365 & Exchange Online, new mail from staff already in Exchange Online would start delivering to the newly created mailboxes for the staff soon to be onboarded.

Not doing this will delay the project, which is something we didn’t want either.

I have proposed the following in order to create a scenario whereby the cutover to Exchange Online for the new domain is quick, without causing user downtime during the co-existence period. We are creating some “co-existence” state attributes on the on-premises AD user objects that will allow mail flow to continue in all scenarios up until cutover (I will come back to this later).

Generic Exchange Online migration process flow (diagram)

We have configured the AD user objects in the following way

  1. UserPrincipalName – username@localdomainname.local
  2. mail – username@mydomain.onmicrosoft.com
  3. targetaddress – username@mydomain.com

We have configured the remote mailbox objects in the following way

  1. mail – username@mydomain.onmicrosoft.com
  2. targetaddress – username@mydomain.com

We have configured the on-premises Exchange Accepted domain in the following way

  1. Accepted Domain – External Relay

We have configured the Exchange Online Accepted domain in the following way

  1. Accepted Domain – Internal Relay

How does this all work?

Glad you asked! As I alluded to earlier, the main problem here is with staff who already have mailboxes in Exchange Online. By configuring the objects in this way, we achieve several things:

  1. We can verify the new domains successfully in Office 365 without impacting existing or new users. By setting the UPN & mail attributes to @mydomain.onmicrosoft.com, Office 365 & Exchange Online do not (yet) associate the newly onboarded domain with these mailboxes.
  2. By configuring the accepted domains in this way, we are doing the following:
    1. When an email is sent from Exchange Online to an email address at the new domain, Exchange Online will route the message via the hybrid connector to the Exchange on-premises environment. (the new mailbox has an email address @mydomain.onmicrosoft.com)
    2. When the on-premises environment receives the email, Exchange will look at both the remote mailbox object & the accepted domain configuration.
      1. The target address on the mail is configured @mydomain.com
      2. The accepted domain is configured as external relay
      3. Because of this, the on-premises exchange environment will forward the message externally.

Why is this good?

Again, for a few reasons!

We are now able to pre-stage content from the existing external email environment to Exchange Online by using a target address of @mydomain.onmicrosoft.com. The project is no longer at risk of being delayed! 🙂

On the night of cutover of MX records to Exchange Online (or in this case, a 3rd-party email hygiene provider), we are able to use the same PowerShell code that we used in the beginning to configure the new user objects, this time to modify the user accounts for production use (we are using a different CSV import file to achieve this).

Target State Objects

We have configured the AD user objects in the following way

  1. UserPrincipalName – username@mydomain.com
  2. mail – username@mydomain.com
  3. targetaddress – username@mydomain.mail.onmicrosoft.com

We have configured the remote mailbox objects in the following way

  1. mail
    1. username@mydomain.com (primary)
    2. username@mydomain.onmicrosoft.com
  2. targetaddress – username@mydomain.mail.onmicrosoft.com

We have configured the on-premises Exchange Accepted domain in the following way

  1. Accepted Domain – Authoritative

We have configured the Exchange Online Accepted domain in the following way

  1. Accepted Domain – Internal Relay
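A simplified sketch of how the target-state attributes above might be applied from a CSV (the CSV path and column name are assumptions; this is intended to run from the on-premises Exchange Management Shell with the ActiveDirectory module available):

Import-Module ActiveDirectory

# users.csv is assumed to contain a samAccountName column for each account being cut over
Import-Csv -Path "C:\migration\users.csv" | ForEach-Object {
    $sam = $_.samAccountName

    # Target-state AD attributes: UPN and mail move to the production domain
    Set-ADUser -Identity $sam `
        -UserPrincipalName "$sam@mydomain.com" `
        -EmailAddress "$sam@mydomain.com"

    # Target-state remote mailbox: primary SMTP on the production domain,
    # targetaddress pointing at the Exchange Online routing domain
    Set-RemoteMailbox -Identity $sam `
        -EmailAddressPolicyEnabled $false `
        -PrimarySmtpAddress "$sam@mydomain.com" `
        -RemoteRoutingAddress "$sam@mydomain.mail.onmicrosoft.com"
}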

NOTE: AAD Connect sync is now run, and a manual validation is completed against the user accounts in both on-premises AD & Exchange and Azure AD & Exchange Online to confirm that the updates have been successful.

We can now update DNS MX records to our 3rd party email hygiene provider (or this could be Exchange Online Protection if you don’t have one).

A final synchronisation of mail from the original email system is completed once new mail is being delivered to Exchange Online.

SharePoint content migration using Sharegate and Powershell

Content Migration

When it comes to content migration we have the option to write code (script), use a migration toolset, or a combination of both; it is therefore important to identify the appropriate toolset based on ease of use and what we need to achieve.

I have evaluated several migration toolsets; however, in this blog I am going with Sharegate, as I have used this product extensively recently.

Sharegate is a toolset used to  “Manage, Migrate and Secure SharePoint & Office 365”. 

We will look at migrating a document library in a SharePoint O365 environment to another document library in the same environment, and see how we can use Sharegate to speed up this process. I will incorporate some of my experiences from working with a customer on a document migration strategy.

What are we trying to achieve?
Migrate document libraries in SharePoint and apply metadata in the process, using a combination of Excel and PowerShell scripting along with Sharegate.


Image 1

 

  1. Select the Source document library to Migrate.
  2. Create the spreadsheet using Sharegate “Export to Excel” function.
  3. Update the Metadata within the excel spreadsheet (this can be done manually or by a console app).
  4. Using Sharegate PowerShell automate the Import/Migration process.

 

To achieve this, we need to do the following:

  1. Log on to Sharegate and click on the “Copy SharePoint Content” option as depicted below.

Image 2

2. Connect to your O365 SharePoint tenant using your credentials.

3. Below, as you can see, I have a bunch of test documents that I need to migrate from the source document library to the target document library.


Image 3

4. Click on Excel to export an Excel spreadsheet (to update the metadata columns).

5. Select the documents that you would like to copy across; however, before you start copying you will need to set up a custom property template as below:

Property templates allow you to select the options used for the bulk edition and to set custom actions for all of the list or library’s columns.


Image 4

6. Give the template a name and set up the template properties as per your requirements, as shown below.


Image 5

7. When you click on “Save & Start” you can start copying the files across using the Sharegate UI.

Using PowerShell script to automate this migration process

  1. Save the Excel file locally, e.g. “MigrationTest.xlsx”

Image 6

2. Update the columns in the excel spreadsheet with the appropriate metadata and save the file.

3. Click on Sharegate PowerShell to open the PowerShell console, as shown in the image below.


Image 7

4. Run the below script.

PowerShell script using Sharegate’s “Copy-Content” cmdlet :

#PowerShell script to migrate documents using Sharegate PowerShell
#Connect to the source and destination O365 sites
$mypassword = ConvertTo-SecureString "******" -AsPlainText -Force
$srcSite = Connect-Site -Url "https://yourtenant.sharepoint.com/sites/test/" -Username "user@domain.com.au" -Password $mypassword
Write-Host "Connected to source"
$dstSite = Connect-Site -Url "https://yourtenant.sharepoint.com/sites/test2/" -Username "user@domain.com.au" -Password $mypassword
Write-Host "Connected to target"

#Resolve the source and destination document libraries
$srcList = Get-List -Name "SrcDocLib" -Site $srcSite
Write-Host $srcList
$dstList = Get-List -Name "DestDocLib" -Site $dstSite
Write-Host $dstList

#Copy the content using the pre-configured property template and the updated Excel file
$Template = "TestTemplate"
Write-Host "Copying..."
Copy-Content -TemplateName $Template -SourceList $srcList -DestinationList $dstList -ExcelFilePath "C:\POC\MigrationTest.xlsx"
Write-Host "Done Copying"

 

PowerShell to “export to excel” from Sharegate

As of today, Sharegate does not yet have an “export to excel” script, so this step has to be done through the Sharegate UI. I have talked to the Sharegate support team and they came back to me saying “this is one of the most requested features and will be released soon”. Please refer to the Sharegate documentation for updates on new features: http://help.share-gate.com/

Conclusion

Using Sharegate and PowerShell we can automate the document migration and metadata tagging. Going further, you can create a number of Excel export files using Sharegate and script the iteration through each Excel file as an input parameter to the above PowerShell script.
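For example, a rough sketch that reuses the variables from the script above (and assumes the exported Excel files are saved under C:\POC\Exports):

Get-ChildItem -Path "C:\POC\Exports" -Filter "*.xlsx" | ForEach-Object {
    Write-Host "Copying content defined in $($_.Name)..."
    Copy-Content -TemplateName $Template -SourceList $srcList -DestinationList $dstList -ExcelFilePath $_.FullName
}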

 

 

Migrating resources from AWS to Microsoft Azure

Kloud receives a lot of communications in relation to the work we do and the content we publish on our blog. My colleague Hugh Badini recently published a blog about Azure deployment models, from which we received a legitimate follow-up question about how to migrate resources from AWS to Azure.

So, Murali, thanks for letting us know you’d like to know more about this… consider this blog a starting point :).

Firstly though…

this topic (inter-cloud migrations), as you might guess, isn’t easily captured in a single blog post, nor, realistically in a series, so what I’m going to do here is provide some basics to consider. I may not answer your specific scenario but hopefully provide some guidance on approach.

Every cloud has a silver lining

The good news is that if you’re already operating in a cloud environment then you have likely had to deal with many of the fundamental differences between traditional application hosting and architecture and that of cloud platforms.

You will have dealt with how to ensure availability of your application(s) across outages; dealing with spikes in traffic via elastic compute resources; and will have come to recognise that in many ways, Infrastructure-as-a-Service (IaaS) in the cloud has many similarities to the way you’ve always done things on-prem (such as backups).

Clearly you have less of a challenge in approaching a move to another cloud provider.

Where to start

When we talk about moving from AWS to Azure we need to consider a range of things – let’s take a look at some key ones.

Understand what’s the same and what’s different

Both platforms have very similar offerings, and Microsoft provides many great resources to help those utilising AWS to build an understanding of which services in AWS map to which services in Azure. As you can see the majority of AWS’ services have an equivalent in Azure.

Microsoft’s Channel 9 is also a good place to start to learn about the similarities, with there being no better place than the Microsoft Azure for Amazon AWS Professional video series.

So, at a platform level, we are pretty well covered, but…

the one item to be wary of in planning any move of an existing application is how it has been developed. If we are moving components from, say, an EC2 VM environment to an Azure VM environment then we will probably have less work to do as we can build our Azure VM as we like (yes, as we know, even Linux!) and install whatever languages, frameworks or runtimes we need.

If, however, we are considering moving an application from a more Platform-as-a-Service capability such as AWS Lambda, we need to look at the programming model required to move to its equivalent in Azure – Azure Functions. While AWS Lambda and Azure Functions are functionally the same (no pun intended), we cannot simply take our Lambda code, drop it into an Azure Function and have it work. It may not even make sense to utilise Azure Functions, depending on what you are shifting.

It’s also important to consider the differences between the availability models in use today in AWS and Azure. AWS uses Availability Zones to help you manage the uptime of your application and its components. In Azure we manage availability at two levels – locally via Availability Sets and then geographically through the use of Regions. As these models differ, it’s an important area to consider for any migration.

Tools are good, but are no magic wand

Microsoft provides a way to migrate AWS EC2 instances to Azure using Azure Site Recovery (ASR) and while there are many tools for on-prem to cloud migrations and for multi-cloud management, they mostly steer away from actual migration between cloud providers.

Kloud specialises in assessing application readiness for cloud migrations (and then helping with the migration), and we’ve found inter-cloud migration is no different – understanding the integration points an application has and the SLAs it must meet are a big part of planning what your target cloud architecture will look like. Taking into consideration underlying platform services in use is also key as we can see from the previous section.

If you’re re-platforming an application you’ve built or maintain in-house, make sure to review your existing deployment processes to leverage features available to you for modern Continuous Deployment (CD) scenarios which are certainly a strength of Azure.

Data has a gravitational pull

The modern application world is entirely a data-driven one. One advantage to cloud platforms is the logically bottomless pit of storage you have at your disposal. This presents a challenge, though, when moving providers where you may have spent years building data stores containing Terabytes or Petabytes of data. How do you handle this when moving? There are a few strategies to consider:

  • Leave it where it is: you may decide that you don’t need all the data you have to be immediately available. Clearly this option requires you to continue to manage multiple clouds but may make economic sense.
  • Migrate via physical shipping: AWS provides Snowball as a way to extract data out of AWS without needing to pull it over a network connection. If your solution allows it you could ship your data out of AWS to a physical location, extract that data, and then prepare it for import into Azure, either over a network connection using ExpressRoute or through the Azure Import/Export service.
  • Migrate via logical transfer: you may have access to a service such as Equinix’s Cloud Exchange that allows you to provision inter-connects between cloud and other network providers. If so, you may consider using this as your migration enabler. Ensure you consider how much data you will transfer and what, if any, impact the data transfer might have on existing network services.

Outside of the above strategies on transferring of data, perhaps you can consider a staged migration where you only bring across chunks of data as required and potentially let older data expire over time. The type and use of data obviously impacts on which approach to take.

Clear as…

Hopefully this post has provided a bit more clarity around what you need to consider when migrating resources from AWS to Azure. What’s been your experience? Feel free to leave comments if you have feedback or recommendations based on the paths you’ve followed.

Happy dragon slaying!

Azure Deployment Models And How To Migrate From ASM to ARM

This is a post about the two deployment models currently available in Azure, Service Management (ASM) and Resource Manager (ARM), and how to migrate from one to the other if necessary.

About the Azure Service Management deployment model

The ASM model, also known as version 1 and Classic mode, started out as a web interface and a backend API for the PaaS services Azure opened with at launch.

Features

  1. ASM deployments are based on an XML schema.
  2. ASM operations are based at the cloud service level.
  3. Cloud services are the logical containers for IaaS VMs and PaaS services.
  4. ASM is managed through the CLI, the old portal, the new portal (with limited features) and PowerShell.

In ASM mode the cloud service acts as a container for VMs and PaaS services.

About the Resource Manager deployment model

The ARM model consists of a new web interface and API for resource management in Azure which came out of preview in 2016 and introduced several new features.

Features

  1. ARM deployments are based on a JSON schema.
  2. Templates, which can be imported and exported, define deployments.
  3. RBAC support.
  4. Resources can be tagged for logical access and grouping.
  5. Resource groups are the logical containers for all resources.
  6. ARM is managed through PowerShell (PS), the CLI and new portal only.

In ARM mode the resource group acts as a container for all resources.

Why use Service Management mode?

  1. Support for all features that are not exclusive to ARM mode.
  2. However, no new features will be made available in this mode.
  3. Cannot process operations in parallel (e.g. vm start, vm create, etc.).
  4. ASM needs a VPN or ExpressRoute connection to communicate with ARM.
  5. In Classic mode, templates cannot be used to configure resources.

Users should therefore only be using service management mode if they have legacy environments to manage which include features exclusive to it.

Why use Resource Manager mode?

  1. Support for all features that are not exclusive to ASM mode.
  2. Can process multiple operations in parallel.
  3. JSON templates are a practical way of managing resources.
  4. RBAC, resource groups and tags!

Resource manager mode is the recommended deployment model for all Azure environments going forward.

Means of migration

The following tools and software are available to help with migrating environments.

  • The ASM2ARM custom PowerShell script module.
  • Platform supported migrations using PowerShell or the Azure CLI.
  • The MigAz tool.
  • Azure Site Recovery.

About ASM2ARM

ASM2ARM is a custom PowerShell script module for migrating a single virtual machine from the Azure Service Management stack to Resource Manager, and it makes two new cmdlets available.

Cmdlets: Add-AzureSMVmToRM & New-AzureSmToRMDeployment

Code samples:

# Retrieve the classic (ASM) VM, then copy its disks and deploy it into the target resource group
$vm = Get-AzureVM -ServiceName acloudservice -Name atestvm
Add-AzureSMVmToRM -VM $vm -ResourceGroupName aresourcegroupname -DiskAction CopyDisks -OutputFileFolder D:\myarmtemplates -AppendTimeStampForFiles -Deploy

Using the service name and VM name parameters directly.

Add-AzureSMVmToRM -ServiceName acloudservice -Name atestvm -ResourceGroupName aresourcegroupname -DiskAction CopyDisks -OutputFileFolder D:\myarmtemplates -AppendTimeStampForFiles -Deploy

Features

  1. Copy the VM’s disks to an ARM storage account or create a new one.
  2. Create a destination vNet and subnet for migrated VMs.
  3. Create ARM JSON templates and PS script for deployment of resources.
  4. Create an availability set if one exists at source.
  5. Create a public IP if the VM is open to the internet.
  6. Create network security groups for the source VM's public endpoints.

Limitations

  1. Cannot migrate running VMs.
  2. Cannot migrate multiple VMs.
  3. Cannot migrate a whole ASM network.
  4. Cannot create load balanced VMs.

For more information: https://github.com/fullscale180/asm2arm

About platform supported migrations using PowerShell

Consists of standard PowerShell cmdlets from Microsoft for migrating resources to ARM.

Features

  1. Migration of virtual machines not in a virtual network (disruptive!).
  2. Migration of virtual machines in a virtual network (non-disruptive; see the sketch below).
  3. Storage accounts are cross-compatible between the two models but can also be migrated.

Limitations

  1. Cloud services that contain more than one availability set.
  2. Cloud services that contain a mix of one or more availability sets and VMs not in an availability set.
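
To make the non-disruptive virtual network scenario a little more concrete, the platform supported migration follows a prepare/commit (or abort) pattern along these lines; the provider registration is a one-off per subscription, the vNet name is a placeholder, and you should rehearse the flow in a non-production subscription first.

# One-off: register the classic infrastructure migration provider
Register-AzureRmResourceProvider -ProviderNamespace Microsoft.ClassicInfrastructureMigrate

# Validate and prepare the classic virtual network (and the VMs inside it)
Move-AzureVirtualNetwork -Validate -VirtualNetworkName avirtualnetwork
Move-AzureVirtualNetwork -Prepare -VirtualNetworkName avirtualnetwork

# Review the prepared resources, then either commit or roll back
Move-AzureVirtualNetwork -Commit -VirtualNetworkName avirtualnetwork
# Move-AzureVirtualNetwork -Abort -VirtualNetworkName avirtualnetwork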

About platform supported migrations using the Azure CLI

Consists of standard Azure CLI commands from Microsoft for migrating resources to ARM.

Features & Limitations

See above.

A video on the subject of platform supported migrations using PowerShell or the CLI.

About MigAz

MigAz comes with an executable that outputs reference JSON files and makes available a PowerShell script capable of migrating ASM resources and blob files to ARM mode environments.

Features

  1. MigAz exports JSON templates from REST API calls for migration.
  2. New resources are created in and disk blobs copied to their destination, all original resources left intact.
  3. Exported JSON can (and should) be reviewed and customized before use.
  4. Export creates all new resources in a single resource group.
  5. Supports using any subscription target, same or different.
  6. With JSON being at the core of ARM, templates can be used for DevOps.
  7. Can be used to clone existing environments or create new ones for testing.

A screenshot of the MigAz frontend GUI.
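
Once you've reviewed and customized the exported JSON, the final step is usually an ordinary template deployment; the resource group, location and file path below are illustrative only, as the actual output names come from the MigAz export.

# Deploy the reviewed MigAz export into a new resource group
New-AzureRmResourceGroup -Name aresourcegroupname -Location "Australia East"
New-AzureRmResourceGroupDeployment -ResourceGroupName aresourcegroupname -TemplateFile D:\MigAzExport\export.json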

About Azure Site Recovery (ASR)

ASR is a backup, continuity and recovery solution set which can also be used for migrating resources to ARM.

Features

  1. Cold backup and replication of both on-premises and off-premises virtual machines.
  2. Cross compatible between ASM and ARM deployment models.
  3. ASM virtual machines can be restored into ARM environments.


Pros and cons

ASM2ARM: Requires downtime, but can be scripted, which has potential. However, this approach only allows for the migration of one VM at a time, which is a sizeable limitation.

Azure PowerShell and CLI: This approach is well rounded. It can be scripted and allows for rollbacks. The supported migration scenarios come with some caveats, however, and you cannot migrate a whole vNet into an existing network.

MigAz Tool: Exports the JSON of your ASM resources for customization and uses a PowerShell script for deployment to ARM. Downtime is required whether you move to the same address space or cut over to new services, but this is easily the best and most comprehensive option at this time.

Site Recovery: Possibly the easiest way of migrating resources and managing the overall process, but it requires a lot of work to set up. Downtime is required in all cases.

Migrating Sitecore 7.0 to Azure IaaS Virtual Machines – Part 1

INTRODUCTION

Recently, I had the opportunity to work on a Sitecore migration project. I was tasked with moving a third-party hosted Sitecore 7.0 instance to Azure IaaS. The task sounds simple enough, but if only life were that simple. A new requirement was to improve upon the existing infrastructure by making the new Sitecore environment highly available, and that's where the fun begins.

To give some context, the CURRENT Sitecore environment is not highly available and has the following server topology:

  • Single Sitecore Content Delivery (CD) Instance
  • Single Sitecore Content Management (CM) Instance
  • Single SQL Server 2008 Instance for Sitecore Content and Configurations
  • Single SQL Server 2008 Instance for Sitecore Analytics

The NEW Sitecore Azure environment is highly available and has the following server topology:

  • Load-balanced Sitecore CD Instances (2 servers)
  • Single Sitecore CM Instance (single server)
  • SQL Server 2012 AlwaysOn Availability Group (AAG) for Sitecore Content (2 servers)
  • SQL Server 2012 AlwaysOn Availability Group (AAG) for Sitecore Analytics (2 servers)

In this tutorial I will walk you through the processes required to provision a brand new Azure environment and migrate Sitecore.

This tutorial is split into three parts:

  1. Part 1 – Provision the Azure Sitecore Environment
  2. Part 2 – SQL Server 2012 AlwaysOn Availability Group Configuration (coming soon)
  3. Part 3 – Sitecore Configuration and Migration (coming soon)

 

PART 1 – Provision the Azure Sitecore Environment

In Part 1 of this tutorial, we'll look at building the foundations required for the Sitecore migration.

1. Sitecore Web Servers

  • First, we need to create the two Sitecore CD instances. In the Azure Management Portal, navigate to Virtual Machines and create a new virtual machine from the gallery. Find the Windows Server 2012 R2 Datacenter template, go through the creation wizard and fill out all the required information.


  • When creating a new VM, it must be assigned to a Cloud Service; you'll get the opportunity to create a new Cloud Service if you don't already have one. For load-balanced configurations, you also need to create a new Availability Set, so let's create that too.


  • Repeat the above steps to create the second Sitecore CD instance and assign it to the same Cloud Service and Availability Set.
  • Repeat the above steps to create the Sitecore CM instance and create a new Cloud Service for it (you don't need an Availability Set for a single instance).
  • Once all the VMs have been provisioned and configured properly, make sure they are all joined to the same domain in AD. If you prefer scripting to clicking through the portal, a rough PowerShell equivalent is sketched below.
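
Here is a rough classic-mode (ASM) PowerShell sketch of the same CD instance build; the image name, instance size, credentials, availability set and cloud service names are all placeholders, so adjust them to your environment.

# Build the first CD instance configuration, placing it in an availability set
$image = "aWindowsServer2012R2ImageName"   # placeholder - find a real one with Get-AzureVMImage
$cd1 = New-AzureVMConfig -Name "sitecore-cd1" -InstanceSize Medium -ImageName $image -AvailabilitySetName "sitecore-cd-as" |
       Add-AzureProvisioningConfig -Windows -AdminUsername "sitecoreadmin" -Password "aStrongPassword!"

# Create the VM inside a new cloud service (repeat with a second config for sitecore-cd2)
New-AzureVM -ServiceName "sitecore-cd-cs" -Location "Australia East" -VMs $cd1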

2. Sitecore SQL Servers

  • Now we need to create two SQL Server 2012 clusters – one for Sitecore content and the other for Sitecore analytics.
  • In the Azure Management Portal, navigate to Virtual Machines and create a new virtual machine from the gallery. Find the SQL Server 2012 SP2 Enterprise template (it will also work with SQL Server 2012 Standard and Web editions), go through the creation wizard and fill out all the required information. A scripted way to locate the gallery image is sketched at the end of this section.

Please Note: It’s important to note that by creating a new VM based on the SQL Server template, you are automatically assigned a pre-bundled SQL Server licence. If you want to use your own SQL Server licence, you’ll have to manually install SQL Server after spinning up a standard Windows Server VM.


  • During the creation process, create a new Cloud Service and Availability Set, and assign them to this VM.


  • Repeat the above steps to create the second Sitecore SQL Server instance and assign it to the same Cloud Service and Availability Set. These two SQL Servers will form the SQL Server cluster.
  • Repeat the above steps for the second SQL Server cluster for Sitecore Analytics.
  • Once all the VMs have been provisioned and configured properly, make sure they are all joined to the same domain in AD.
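
As a side note, if you're scripting rather than browsing the gallery, the SQL Server images can be located with something along these lines; the ImageFamily filter is an assumption, so check the actual family names available in your subscription.

# Find the most recent SQL Server 2012 SP2 Enterprise gallery image
Get-AzureVMImage |
    Where-Object { $_.ImageFamily -like "SQL Server 2012 SP2 Enterprise*" } |
    Sort-Object PublishedDate -Descending |
    Select-Object -First 1 ImageName, PublishedDate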

3. Enable Load-balanced Sitecore Web Servers

In order to make the Sitecore CD instances highly available, we need to configure a load balancer that will handle traffic for the two Sitecore CD instances. In Azure terms, it just means adding a new endpoint and ticking a few check boxes, and you are ready to go. If only everything in life was that easy 🙂

  • In the Azure Management Portal, find your first Sitecore CD VM instance, open the Dashboard, navigate to the Endpoints tab and add a new endpoint (located at the bottom left-hand corner).


  • You will need to add two new load-balanced endpoints – one for normal web traffic (port 80) and another for secure web traffic (port 443). In the creation wizard, specify the type of traffic for the endpoint; in this case it's HTTP on port 80. Make sure you check the Create a Load-balanced Set check box.


  • On the next screen, you'll have to give the load-balanced set a name and leave the rest of the options at their defaults, then confirm and create.


  • Do the same for secure web traffic and create a new endpoint for HTTPS Port 443.
  • Find your second Sitecore CD VM instance, open the Dashboard, navigate to the Endpoints tab and add a new endpoint (located at the bottom left hand corner). You’ll also need to add two load-balanced endpoints – one for normal web traffic (port 80) and another for secure web traffic (port 443). But this time around, you’ll create the endpoints based on the existing Load-Balanced Sets.


  • On the next screen, give the endpoint a name, confirm and create. Repeat the same steps for the HTTPS endpoint.


  • Your Sitecore CD instances should now be load-balanced. (A scripted equivalent of the endpoint configuration is sketched below.)
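
For reference, the same endpoint configuration can be scripted with the classic cmdlets; the cloud service, VM, endpoint and load-balanced set names below are placeholders matching the earlier sketch.

# Add a load-balanced HTTP endpoint to the first CD instance
# (repeat for the second VM, and again for HTTPS on port 443)
Get-AzureVM -ServiceName "sitecore-cd-cs" -Name "sitecore-cd1" |
    Add-AzureEndpoint -Name "HTTP" -Protocol tcp -LocalPort 80 -PublicPort 80 -LBSetName "sitecore-cd-http" -ProbeProtocol tcp -ProbePort 80 |
    Update-AzureVM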

 

In the next part of this tutorial, we'll look at how to install and configure a SQL Server 2012 AlwaysOn Availability Group. Please stay tuned for Part 2.