The 5 ways to migrate from Skype for Business to Microsoft Teams

Microsoft recently published a TechNet article outlining the different ways to migrate away from Skype for Business to Microsoft Teams. The article currently describes five different migration methods. Let’s take a closer look at each of them, and how they might be used within your organisation.

The 5 migration methods

They say good things come in threes, but in this case they come in fives: five different methods of moving from SfB to Microsoft Teams. When it comes to migration planning, choice is a good thing.

[Image: the five migration methods]

 

Migration Method 1: Skype for Business with Teams Collaboration

Ok, so you have a Skype for Business deployment right now, and are looking at moving to Teams. The problem is that Teams just doesn’t meet your requirements yet. This could be due to:

  • Running custom SfB applications, such as a call centre app or an on-premises UCMA/UCWA app.
  • Teams missing a feature that you currently use in Skype for Business.

If this is you and your goal is to adopt Teams quickly, you can easily start using Teams for collaboration. Your users can quickly familiarise themselves with Teams and how they can use it to work with colleagues on documents in SharePoint and OneDrive, as well as to share ideas within their newly created Teams and Channels.

Advantages:

  • No overlapping capabilities between Teams and Skype for Business.
  • Instant messaging and chat will reside in Skype for Business (tied to calling).

Caveats:

  • None!

 

Migration Method 2: Skype for Business with Teams Collaboration and Meetings

Maybe you already have a Skype for Business deployment with significant use of enterprise voice, but right now some of your calling or meeting requirements aren’t yet met by Teams (such as integration with a third-party meeting service).

If this sounds like you, consider enabling Teams for Collaboration as well as Meetings. Existing Skype for Business scheduled meetings will work as normal, but users will be able to create new meetings within Teams.

Advantages:

  • Start Teams adoption quickly, going beyond group collaboration.
  • Improve your users’ meetings experience.

Caveats:

  • Instant messaging and chat will reside in Skype for Business (tied to calling).

 

Migration Method 3: Islands – The default option

If you choose to do nothing, Office 365 enables “Islands” mode by default.

Both Skype for Business and Teams continue to run within their own “island” and all features and functions are enabled within both products.

You may consider this approach if you’re running a PoC with a number of users, and want them to experience the full range of Teams features whilst still having the ability to use Skype for Business.

Of course, without the right user adoption and communications, things can get messy fast. Be sure that your communication around how each product should be used is solid.

Advantages:

  • Simple to operate, no interoperability.
  • Best Teams experience up-front for all capabilities.

Caveats:

  • Requires good user communication to avoid confusion and to drive usage toward Teams.
  • Exit strategy requires users to have fully adopted Teams by the time Skype for Business is decommissioned.

 

Migration Method 4: Teams only

Alright, so you’re ready to take the plunge and use Microsoft Teams. Of course, you may still have users on Skype for Business on-premises, but you want all of your cloud-based users to use Teams.

Advantages:

  • Limits user confusion by providing only one client to work with.

Caveats:

  • Interoperability only supports basic chat and calling between Skype for Business and Teams.

 

Migration Method 5: Skype for Business only

And lastly, you may choose to avoid Teams (for now at least), and wish to stick with Skype for Business only.

Keep in mind that at some point you’re going to need to make the move to Teams anyway, but at least you still have the option for now.

Advantages:

  • Continue to meet business requirements that currently can only be met by Skype for Business.

Caveats:

  • Interoperability only supports basic chat and calling between Skype for Business and Teams.

 

Upgrade Journeys

Still following? Good. Maybe go and grab a coffee. Don’t worry, I’ll wait. Ok, you’re back? Let’s push on.

There are two recommended upgrade journeys: one “simple”, and the other… not so much.

Simple Upgrade

If you like keeping things simple (and who doesn’t), there’s a three-step process:

  1. Select users for a PoC
  2. Enable Teams Collaboration mode
  3. Enable Teams-Only mode

That process is outlined in the below graphic:

 

[Image: simple upgrade journey]

This is a nice and simple way of selecting your users based upon their job roles (and their eagerness for change), then slowly introducing them to Teams before enabling Teams-Only mode for them.
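For reference, coexistence modes are assigned per user with the Skype for Business Online PowerShell module. The below is a hedged sketch only: the policy instance name “UpgradeToTeams” (which corresponds to Teams-Only mode) and the UPNs are illustrative values you should verify against your own tenant.

# Connect to Skype for Business Online (requires the SkypeOnlineConnector module)
Import-Module SkypeOnlineConnector
$session = New-CsOnlineSession -UserName admin@yourtenant.com
Import-PSSession $session

# Move a PoC user to Teams-Only mode ("UpgradeToTeams" is the assumed instance name)
Grant-CsTeamsUpgradePolicy -PolicyName UpgradeToTeams -Identity poc.user@yourtenant.com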

 

Gradual Upgrade

I hope you’ve finished that coffee! The other recommended upgrade path is the gradual path. I’ll give you a moment to absorb the below graphic:

[Image: gradual upgrade journey]

As you can see, it is possible to migrate different users at different rates. You may choose to move IT into Teams-Only mode quickly, but move HR and Sales at a slower pace. Whichever method you choose, you’ll more than likely want to end up in Teams-Only mode.

 

Upgrade Scenarios

Alright. At this point you’re probably saying “That’s great Craig. I have a choice. 5 choices to be exact. But uh … which one do I choose?”.

Great question! Of course, you’ll need to have a think about how your organisation responds to change, and how you’ll equip your user base to start using and adopting Microsoft Teams. The below may help steer you in the right direction, though!

 

Scenario 1: I’m Running Pure Skype For Business Online

In short, move to Teams. You’re not running any custom applications and have no on-premises servers to deal with. Sort out your user adoption comms, select some users for a PoC, and get them up and running with Teams.

Once everyone is trained and happy, enable Teams Only.

 

Scenario 2: I have Cloud Connector Edition (CCE) deployed

Firstly, kudos. CCE is an awesome product. Secondly, you’re in a great position to deploy Direct Routing to Microsoft Teams and continue using your existing Sonus or AudioCodes SBC and phone company.

Consider the approach of enabling Teams in Collaboration and Meetings only mode first of all. You’ll be able to continue using CCE to route calls to Skype for Business, as well as Direct Routing to route calls to Teams.

 

Scenario 3: Skype for Business Hybrid with SfB / Teams Online

This one is a popular scenario. The good news is you have many options available to you. You could enable Skype for Business with Teams collaboration–only mode, Skype for Business with Teams collaboration and meetings mode, keep Islands mode enabled or jump ship and enable Teams-only mode.

Keep in mind that your existing on-premises SfB users will be unaffected by this change. Only your cloud users will be able to communicate with Teams users, and vice versa.

Work up a plan to move as many (or all) users from SfB on-premises to SfB Online, and then to Teams. Leave only those users that absolutely must remain on-premises (because of specific SfB on-premises requirements).

 

Scenario 4: Skype for Business Server on-premises – no Hybrid

There’s good news for you. Microsoft have announced Skype for Business Server 2019 for on-premises deployments, which we’re told will help you to eventually move your users to Office 365 and Microsoft Teams.

If you have no desire to move users to Office 365 or to Teams, consider upgrading your Skype for Business on-premises environment to Server 2019, once it’s released.

 

Next Steps

Microsoft have outlined the steps mentioned above in their own documentation, which can be found here: https://docs.microsoft.com/en-au/MicrosoftTeams/upgrade-and-coexistence-of-skypeforbusiness-and-teams

 

 

Measure O365 ATP Safe Attachments Latency using PowerShell

Microsoft Office 365 Advanced Threat Protection (ATP) is a cloud-based security service that is part of the O365 E5 offering; it can also be added separately to other O365 subscriptions. A lot can be learned about ATP from here, but in this post we’re going to extract data corresponding to one of ATP’s primary features: ATP Safe Attachments.

In short, ATP Safe Attachments scans documents for malicious content and can block these attachments depending on the policy configuration. More details about ATP Safe Attachments can be found here. Safe Attachments can react to emails with attachments in multiple ways: it can deliver the email without the attachment, showing just a thumbnail that tells the user ATP is currently scanning it, or it can delay the message until scanning is complete. Microsoft has been improving ATP Safe Attachments with a lot of new features, such as reduced message delays, support for more file formats, dynamic delivery and previewing. What we’re interested in for this post is the delay that ATP Safe Attachments introduces when the policy is set to delay emails, as some of the other features may confuse users and would require educating the end user if enabled.

But why is there a delay in the first place?

ATP Safe Attachments works by creating a small sandbox and then detonating / opening the attachments in that sandbox. It checks what types of changes the attachment makes to the sandbox, and also uses machine learning, to decide whether the attachment is safe or not. It is a nice feature, and it actually simulates how users open attachments when they receive them.

So how fast is this thing?

Recently I’ve been working on a project with a customer who wanted to test ATP Safe Attachments with the policy configured to delay messages, rather than deliver and scan at the same time (dynamic delivery). The question they had in mind was: how can we ensure that the ATP Safe Attachments delay is acceptable? (I tried hard to find an official document from Microsoft with the numbers, but couldn’t.) The delay is not that big a deal: about a minute or two, where it used to be a lot longer when ATP was first introduced. That is really fast considering the amount of work that needs to be done in the backend for this thing to work. Still, the customer wanted this measured, and I had to think of a way of doing that. I tried to find a direct way, but for all the advanced reporting that ATP brings to the table, there is no report that tells you how long messages actually take to arrive in users’ mailboxes when an ATP Safe Attachments policy is in place and set to delay messages.

Now let us define ‘delay‘ here. The measurement starts when a message arrives at Exchange Online; you may have other solutions in front of O365, and the time spent there doesn’t count. So we are going to measure from the time the message hits Exchange Online Protection (EOP) and ATP until O365 delivers the message to the recipient. Keep in mind that the size of the attachment(s) also plays a role in the process.

The solution!

The following assumes you have the Microsoft Exchange Online modules installed. If you have MFA enabled, you need to authenticate in a different way; that was my case, and I will show you how I did it.

First, jump on to the Office 365 Admin Centre (assuming you’re an admin). Then navigate to the Exchange Online admin centre. From the left side, search for ‘Hybrid‘, and then click on the second ‘Configure’ button (this one installs the Exchange Online PowerShell modules for users with MFA). Once the file is downloaded, click on it and let it do its magic. Once you’re in there, you should see this:

[Screenshot: the Exchange Online PowerShell window]

Connect-EXOPSSession -UserPrincipalName yourAccount@testcorp.com 

Now go through the authentication process and let PowerShell load the modules into this session. You should see the following:

[Screenshot: the modules loaded into the PowerShell session]

Now this means all of the modules have been loaded into the PowerShell session. Before we do any PowerShell scripting, let us do the following:

  • Send two emails to an address that has ATP Safe Attachment policy enabled.
  • The first email is simple and has no attachment in it.
  • The second contains at least one attachment (i.e. a DOCX or PDF File).
  • Now the subject name shouldn’t matter, but for the sake of this test, try to name the subject in a way that you can tell which email contains the attachment.
  • Wait five minutes. Why? Because ATP Safe Attachments should do some magic before releasing that email (assuming you have it turned on and set to delay messages only). A scripted way of sending the two test messages is sketched after this list.
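If you’d prefer to script those two test messages, here is a minimal sketch using Send-MailMessage; the SMTP endpoint, sender, recipient and attachment path are all assumptions to replace with your own values.

# Assumed values - replace with your own details
$To   = 'FirstName.LastName@testcorp.com'  # recipient covered by the ATP Safe Attachments policy
$From = 'sender@testcorp.com'
$Cred = Get-Credential

# Email 1: no attachment (control message)
Send-MailMessage -To $To -From $From -Subject 'ATP test - no attachment' -Body 'Control' -SmtpServer 'smtp.office365.com' -UseSsl -Port 587 -Credential $Cred

# Email 2: with an attachment, so ATP Safe Attachments will scan it
Send-MailMessage -To $To -From $From -Subject 'ATP test - with attachment' -Body 'Test' -Attachments 'C:\Temp\TestDoc.docx' -SmtpServer 'smtp.office365.com' -UseSsl -Port 587 -Credential $Cred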

Let us do some PowerShell scripting 🙂

Declare a variable that contains the recipient email address that has ATP Safe Attachments enabled. Then use the cmdlet (Get-MessageTrace) to fetch emails delivered to that address in the last hour.

$RecipientAddress = 'FirstName.LastName@testcorp.com';

$Messages = Get-MessageTrace -RecipientAddress $RecipientAddress -StartDate (Get-Date).AddHours(-1) -EndDate (get-date) 

Expand the $Messages variable, and you should see:

[Screenshot: the expanded $Messages output]

Here is what we are going to do next. I wish I could explain each line of code, but that would make this blog post too lengthy, so I’ll summarise the steps:

  • Do a For each message loop.
  • Use the ‘Get-MessageTraceDetail’ cmdlet to extract details for each message and filter out events related to ATP.
  • Create a custom object containing two main properties:
    • The recipient address (of type String)
    • The message trace ID (of type GUID).
  • Remove all temp variables used in this loop.
$Custom_Object = @() # initialise the collection we will add results to

foreach ($Message in $Messages)
{
    $Message_RecipientAddress = $Message.RecipientAddress
    # Get the detailed events for this message and keep only the ATP events
    $Message_Detail = $Message | Get-MessageTraceDetail | Where-Object -FilterScript {$PSItem.'Event' -eq "Advanced Threat Protection"}
    if ($Message_Detail)
    {
        $Message_Detail = $Message_Detail | Select-Object -Property MessageTraceId -Unique
        $Custom_Object += New-Object -TypeName psobject -Property ([ordered]@{'RecipientAddress'=$Message_RecipientAddress;'MessageTraceId'=$Message_Detail.'MessageTraceId'})
    } # End if Message_Detail variable
    Remove-Variable -Name Message_Detail,Message_RecipientAddress -ErrorAction SilentlyContinue
} # End foreach Message

The expected outcome is a single row. Why? Because we sent only two messages: one with an attachment (captured by the above script), and one without an attachment (which contains no ATP events).

[Screenshot: the single-row result]

Just in case you want to do multiple tests, remember to empty your custom object ($Custom_Object = @()) before retesting, so you do not get duplicate results.

The next step is to extract details for the message in order to measure the start time and end time until it was sent to the recipient. Here’s what we are going to do:

  • Loop over each item in that custom object.
  • Run a ‘Get-MessageTraceDetail’ again to extract all of the message events. Sort the data by Date.
  • Measure the time difference between the last event and the first event.
  • Store the result into a new custom object for further processing.
  • Remove the temp variables used in the loop.
$Final_Data = @() # initialise the results collection

foreach ($MessageTrace in $Custom_Object)
{
    # Pull all events for this message and sort them chronologically
    $Message = $MessageTrace | Get-MessageTraceDetail | Sort-Object Date
    # Time difference between the last event and the first event
    $Message_TimeDiff = ($Message | Select-Object -Last 1).Date - ($Message | Select-Object -First 1).Date
    $Final_Data += New-Object -TypeName psobject -Property ([ordered]@{'RecipientAddress'=$MessageTrace.'RecipientAddress';'MessageTraceId'=$MessageTrace.'MessageTraceId';'TotalMinutes'="{0:N3}" -f [decimal]$Message_TimeDiff.'TotalMinutes';'TotalSeconds'="{0:N2}" -f [decimal]$Message_TimeDiff.'TotalSeconds'})
    Remove-Variable -Name Message,Message_TimeDiff -ErrorAction SilentlyContinue
} # End foreach MessageTrace in the custom object

The expected output should look like:

[Screenshot: the final output with total minutes and seconds per message]

So here are some tips you can use to extract more valuable data from this:

  • You can try this on a Distribution Group to cover more users, but you will need a larger foreach loop for that.
  • If your final data object has more than one result, you can use the ‘Measure-Object’ cmdlet to find the average time for you (see the sketch after this list).
  • If you have a user complaining that they are experiencing delays in receiving messages, you can use this script to measure the delay.
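As an example of that second tip, here is a minimal sketch that averages the delay across all rows in $Final_Data, assuming the loop above has populated it:

# Average, minimum and maximum delay in seconds across all traced messages
# (assumes the formatted TotalSeconds values contain no thousands separators)
$Final_Data |
    ForEach-Object { [decimal]$PSItem.TotalSeconds } |
    Measure-Object -Average -Minimum -Maximum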

Just keep in mind that if you’re doing this on a large number of users, it might take some time to process all that data, so patience is needed 🙂

Happy scripting 🙂

 

 

IaaS – Application Migration Management Tracker

What is IaaS Application Migration

Application migration is the process of moving an application program, or a set of applications, from one environment to another. This includes migration from an on-premises enterprise server to a cloud provider’s environment, or from one cloud environment to another. This post covers Infrastructure as a Service (IaaS) application migration.

Application Migration Management Tracker

Having a visual IaaS application migration tracker helps to clearly identify all dependencies and blockers, and to manage your end-to-end migration tracking. In addition to the project plan, this artefact will help you run daily stand-ups and produce accurate weekly status reporting.

Benefits

  • Clear visibility of current status
  • Ownerships/accountability
  • Assist escalation
  • Clear overall status
  • Lead time to CAB and preparation times
  • Allows time to agree and test key firewall/network configurations
  • Assist go/no-go decisions
  • Cutover communications
  • All dependencies
  • Warranty period tracking
  • BAU sign-off
  • Decommission of old systems if required

When to use and why?

  • Daily stand-ups
  • Go/no-go meetings, to agree clear next steps and accountability
  • Risks and issues preparation and mitigation steps
  • During change advisory board (CAB) meetings, to provide an accurate report and obtain approval to implement
  • Traceability to tick off and progress BAU activities, and preparation of operational support activities

Application Migration Approach

[Image: application migration approach]

Example of IaaS Application Migration Tracker

Below is an example which may assist with your application migration tracking in detail. It covers:

  • Application list
  • Quarterly timelines
  • Clear ownerships
  • Migration tracking sub tasks
  • Warranty tracking sub tasks
  • Current status
  • Final status

[Image: example IaaS application migration tracker]

Summary

Hope this example helps. It can be customised as per organisational processes and priorities, and the tracker can be used for both non-complex and complex application migrations. Thanks.

SharePoint site template error: IsProduction field is not accessible or does not exist

Introduction

In this post I will be talking about the exception “IsProduction field not accessible or does not exist”. In our case we had saved an existing site as a site template in the solution gallery and created a new site collection from the saved site template, but provisioning was breaking with the below exception message.

Error message:

“The field specified with the name IsProduction is not accessible or does not exist”.


Background

The Site Templates feature in SharePoint on-premises helps with saving a site as a template and reusing that template to pre-provision the standard site elements in a new site collection, such as lists, libraries, views, workflows, logos, branding and other elements, for different departments. Site templates are a blueprint for the site, which can be used when we create new site collections.

Here the requirement was to save the existing site collection as a site template with all the custom lists, libraries, pages, content types and Nintex workflows. When the site collection is saved as a site template, it gets saved in the solution gallery and then becomes available under the custom templates section in the new site collection wizard.

The issue was that when a new site was created using the saved custom template, provisioning terminated with the error “The field specified with the name IsProduction is not accessible or does not exist”. The error was not very descriptive, and checking the SharePoint logs did not provide much information either.

To understand the root cause of the error, I checked the field references in site columns and content types but could not find any. The next step was to check the site template cab file (which can be downloaded from the solution gallery) and look for the reference in the site artifacts’ schema definition files, which pointed me to the Nintex list definition.

Nintex maintains an internal list to manage the site workflow definitions, and this list had a reference to the column “IsProduction”.

From the Nintex documentation and forums: the “IsProduction” field was introduced in 3.1.7.0 for subscription-based Nintex, and was later removed due to a few critical bugs.

Resolution:

To resolve the issue, the reference to the column “IsProduction” had to be removed from the site template; the package then had to be rebuilt and deployed to SharePoint.

I have briefly put together the steps to remove the field reference and deploy the wsp to SharePoint.

Steps

  1. Download the solution package for the site template from the solution gallery in the SharePoint site.
  2. Change the extension of the wsp package to .cab. To unzip the cab file we can use a tool or the command prompt; I used the command prompt:

    Expand -R “Filename.cab” “Destination Folder” -F:*

  3. Once the cab has been unzipped, go to the Files folder.
  4. Under Files => Lists => NintexWorkflows => Schema.xml.

  5. In the schema definition file, remove the reference to the IsProduction field and save the file.

  6. The last step is to rebuild the wsp from the command line and deploy it: after the wsp is built, it has to be uploaded to the solution gallery and activated again. A hedged example of these commands follows.
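For completeness, below is a hedged example of repacking and redeploying the template. The .ddf directive file (which lists every extracted file) and the paths are illustrative, and the upload uses the sandboxed-solution cmdlets from the SharePoint Management Shell:

# Repack the extracted files into a cab, then rename it back to .wsp
makecab /f "C:\Temp\SiteTemplate.ddf"
Rename-Item -Path "C:\Temp\SiteTemplate.cab" -NewName "SiteTemplate.wsp"

# Upload and re-activate the rebuilt template in the site collection's solution gallery
Add-SPUserSolution -LiteralPath "C:\Temp\SiteTemplate.wsp" -Site "https://yourfarm/sites/yoursite"
Install-SPUserSolution -Identity "SiteTemplate.wsp" -Site "https://yourfarm/sites/yoursite"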

With the new custom template, I was able to create the site collection without any issues. I hope this helps you solve the issue. Happy Coding!!

Web Application ADFS integration error: Invalid Cryptographic Algorithm

Introduction

In this post I will be talking about an invalid cryptographic algorithm exception in a web application. We have a multi-tenant single sign-on ASP.NET application which connects with different identity providers to enable a single sign-on experience.

Background

Single sign-on across multiple applications has been a sought-after feature lately, as it makes the user experience seamless across applications. In this case the web application (service provider) was integrating with ADFS 2.0 hosted on Windows Server 2012 R2 to implement single sign-on for end users on their network.

The application code, written in C#, uses the ComponentSpace helper facade to build the HTTP request using the following service provider configuration input parameters:

  1. Service provider name
  2. Assertion service endpoint URL
  3. Service provider sign-on certificate and certificate password.

The certificate previously used in the application for the assertion request had expired. When the newly issued certificate was added to the ADFS server (identity provider) and the web application (service provider), the application threw the exception “Cryptographic Exception: Invalid Algorithm specified”.

On looking closely and debugging the code, I noticed the exception “SAMLSignatureException: Failed to generate signature” was being thrown while stepping through the code segment that reads the certificate.


Resolution:

The certificate used by the assertion service is expected to have the Microsoft Cryptographic Service Provider (CSP) attribute set to “Microsoft Enhanced RSA and AES Cryptographic Provider”.

In this case the default certificate had the provider set as “Microsoft RSA SChannel Cryptographic Provider”.

The difference is in the list of supported algorithms, key operations and key sizes: the Microsoft RSA SChannel Cryptographic Provider doesn’t support SHA-256 signatures.

To check the certificate’s CSP, we can use the below command; you need OpenSSL on your system.

Command Prompt

\bin\openssl pkcs12 -in WebAppSelfSignedSSO.pfx

Make sure you point to the correct path of your OpenSSL installation.

After the command is executed, look for the “Microsoft CSP Name” attribute to confirm whether the CSP supports SHA-256 signatures or not.

In this case we need to change the attribute to “Microsoft Enhanced RSA and AES Cryptographic Provider” to support SHA-256.

Then, to update the CSP attribute so the assertion request supports SHA-256 signatures, we need to run the below commands:

  1. Convert the pfx file to .pem from the command prompt. Once the command executes successfully it will generate a .pem file.

  2. Next, convert the .pem back to a pfx, updating the CSP attribute property.

  3. Verify that the CSP property has been changed to “Microsoft Enhanced RSA and AES Cryptographic Provider”. A hedged example of these commands follows.
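Putting those steps together, a hedged example of the OpenSSL commands (the file names are illustrative, and OpenSSL will prompt for the pfx passwords):

rem Step 1: convert the pfx file to .pem
\bin\openssl pkcs12 -in WebAppSelfSignedSSO.pfx -out WebAppSelfSignedSSO.pem

rem Step 2: convert the .pem back to pfx, setting the CSP attribute
\bin\openssl pkcs12 -export -in WebAppSelfSignedSSO.pem -out WebAppSelfSignedSSO-AES.pfx -CSP "Microsoft Enhanced RSA and AES Cryptographic Provider"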

I hope this will help solve the issue. Happy Coding!!

IT Service Management (ITSM) – Continual Service Improvement (CSI) Process and Approach

Continual Service Improvement (CSI) Process

The CSI process defines specific initiatives aimed at improving services and processes, based on the results of service reviews and process evaluations. The resulting initiatives are either internal initiatives pursued by the service provider on its own behalf, or initiatives which require the customer’s cooperation (from ITIL).

Continual Service Improvement (CSI) Purpose, Goals and Objectives

  • Continually align IT services to changing business needs
  • Identify and implement improvements throughout the service life cycle
  • Determine what to measure, why to measure it and define successful outcomes
  • Implement processes with clearly defined goals, objectives and measures
  • Review service level achievement results
  • Ensure quality management methods are used

[Image: Continual Service Improvement]

Continual Service Improvement (CSI) Values

  • Enables continuous monitoring and feedback through all life cycle stages
  • Sets targets for improvement
  • Calculates Return on Investment (ROI)
  • Calculates Value on Investment (VOI)

Business Value of Measurement

Consider the following factors when measuring process or service efficiency.

[Image: business value of measurement]

  • Why are we monitoring and measuring?
  • When do we stop?
  • Is anyone using the data?
  • Do we still need this?

Metric Types

  • Service metrics
  • Technology metrics
  • Process metrics

Continual Service Improvement (CSI) Supporting Models and Processes

  1. Plan-Do-Check-Act (PDCA) Model
  2. 7-Step Improvement Process
  3. Continual Service Improvement Model

1. Plan-Do-Check-Act (PDCA) Model

[Image: Plan-Do-Check-Act (PDCA) model]

2. 7-Step Improvement Process

[Image: 7-Step Improvement Process]

3. Continual Service Improvement Model

[Image: Continual Service Improvement model]

Key Takeaways

  1. Once you have implemented a new process, tool or event, plan for improvement, as end users will expect the next level of service.
  2. Obtain feedback from end users, and always encourage them to provide it.
  3. Plan it, do it (implement), check it (assess, metrics) and act (take actions to align or rectify).
  4. Always look to improve your service, through benefit, cost, risk and strategy.

Summary

Hope you find this useful in implementing your CSI journey.

Cloud Operations – Key Service Operation Principles – Consideration

Below are some good IT Service Management operational principles to consider when migrating applications into the cloud. These will help to align your operational goals with your organisation’s strategic initiatives.

Principle #1

Organisation’s IT Service Management will govern and lead all IT services utilising strategic processes and technology enablers based on industry best practices.

Implications / Outcomes

  • The selected process and technology will be fit for purpose
  • Suppliers and Service Partners will be required to integrate with strategic processes and technologies
  • Process re-engineering including training will be required
  • Everyone uses or integrates with a central platform
  • Process efficiency through effective process integration
  • Reduced operating cost
  • Ensures contestability of services for Organisation

Principle #2

Contestability between IT Service providers is a key outcome for service management across IT@Organisation, where it does not have a negative impact on the customer experience.

Implications / Outcomes

  • Avoid vendor lock-in
  • Requires strategic platforms
  • Sometimes greater complexity
  • More ownership of process by Organisation
  • Better cost outcomes through competition
  • Improved performance, incumbent advantage is earned
  • Access to innovation
  • Access to capacity

Principle #3

The Organisation’s IT operating model will be based on the principles of Customer-centricity (Organisation’s business and IT), consistency and quality of service and continual improvement of process maturity.

Implications / Outcomes

  • More extensive process integration
  • Possible constraints – cost, time, resources, agility
  • Additional internal expertise
  • Governance as a focal point
  • Continual improvement
  • Improved process alignment with business alignment
  • Quantitative, demonstrable benefits
  • Improved customer satisfaction

Principle #4

Organisation will retain and own all IP for Organisation’s Service Management knowledge assets and processes.

Implications / Outcomes

  • Strong asset, capacity, knowledge management
  • Service provider governance
  • Improved internal capability
  • Service provider independence
  • Reduced risk
  • Exploitation of skills and experience gained
  • Encourage self-healing culture

Principle #5

Changes to existing Organisation processes and procedures will only be made where those changes are necessary to deliver benefits from the Cloud platform.

Implications / Outcomes

  • Vendors adapt to Organisation’s processes
  • Existing process needs to be critically assessed
  • Reduced exposure to risk
  • Reduced levels of disruption
  • Faster adoption of new processes through familiarity
  • Faster Implementation due to less change

Principle #6

Before beginning process design, ownership of the process and its outcomes, resource availability, cost benefit analysis and performance measurements will be defined and agreed.

Implications / Outcomes

  • Ownership of process is known
  • The process is appropriately resourced
  • Alignment of activities with desired outcomes
  • Improved process effectiveness
  • Reduced risk of failure
  • Resourcing cost

 

Summary

Please note that there will be practical implications for your organisation’s service management processes (typically incident management, problem management, capacity management, service restoration, change management, configuration management and release management). These are some good principles to consider, and they can be customised as per organisational strategy and priorities.

 

IT Service Management (ITSM) – Process Maturity Evaluation

ITSM Process Maturity Evaluation

Why are we doing this?

It is useful to measure Service Management process maturity for a number of reasons:

  • To understand the strengths and weaknesses of existing processes.
  • As a guide to planning continual process improvement.
  • As a mechanism to measure the impact of process improvement activities.

What is it?

The Process Maturity Evaluation is a method of assessing the current state of Service Management processes and procedures. It needs to:

  • Have the right balance between detail and effort.
  • Be flexible enough to be applied to any Service Management Process.
  • Capture input from all stakeholders, not just Service Management.
  • Deliver actionable information to drive improvement.
  • Be usable to drive Continual Service Improvement.

How does it work?

  • Using a set of standard process maturity questions, we assess the current state of each process/procedure, evaluating nine aspects of maturity (typically pre-requisites, management intent, process capability, internal integration, products, quality control, management information, external integration and customer interface; these may vary depending on organisation type and priorities), grading on a scale from 0-5.
  • Based on this we then identify and execute a process improvement plan to address the maturity issues that we have identified.
  • Using the same approach, we re-evaluate process maturity to measure the improvement achieved and to identify the next cycle of improvement.

Process Maturity Evaluation Cycle

[Image: process maturity evaluation cycle]

Sample Process Maturity Questions and Model

[Image: sample process maturity questions and model]

Sample Process Maturity Evaluation Results (Visuals)

[Image: sample process maturity evaluation results]

Summary

ITSM process maturity questions can vary depending on organisation type and priorities, and the baseline questions can be changed to track improvements against initial scores. Hope you found this useful.

 

Standard Operational Checks for IT Service Management Processes – Once Implemented

Why Operational Process Checks are Required in Service Management?

  • In order to sustain our processes and measure how effectively we are executing them, we need operational process checks once the processes are implemented.
  • This will allow us to identify inefficiencies and subsequently improve the current Service Management processes.
  • These checks will also provide input into the Continual Service Improvement (CSI) programme.


Operational Checks for Service Management Processes – Once Implemented

[Image: operational checks for Service Management processes]

Standard Guidelines

  • The operational process checks will be managed by IT Service Management.
  • Through process governance meetings, agreement can be established on who will update a section of a process or the whole document.
  • IT Service Management will need to be involved in all process governance meetings.
  • IT Service Management will conduct an internal audit on all Service Management processes at least once annually.

Review and Measure at Regular Intervals

It is very important that IT Service Management reviews all processes and measurements at least once every year, making appropriate changes to meet target IT and business goals.


 

Thank you.

Automate network share migrations to SharePoint Online using ShareGate PowerShell

Sharegate supports PowerShell scripting, which can be used to automate and schedule migrations. In this post, I am going to demonstrate an example of end-to-end automation to migrate network shares to SharePoint Online. The process effectively reduces the task of executing migrations to “just flicking a switch”.

Pre-Migration

The following pre-migration activities were conducted before the actual migration:

  1. Analysis of Network Shares
  2. Discussions with stakeholders from different business units to identify content needs
  3. Pilot migrations to identify average throughput capability of migration environment
  4. Identification of acceptable data filtering criteria, and preparation of Sharegate migration template files based on business requirements
  5. Derivation of a migration plan from the above steps

Migration Automation flow

The diagram represents a high-level flow of the process:

 

The migration automation was implemented to execute the following steps:

  1. Migration team indicates that migration(s) are ready to be initiated by updating the list item(s) in the SharePoint list
  2. Updated item(s) are detected by a PowerShell script polling the SharePoint list
  3. The list item data is downloaded as a CSV file. It is one CSV file per list item. The list item status is updated to “started”, so that it would not be read again
  4. The CSV file(s) are picked up by another migration PowerShell script to initiate migration using Sharegate
  5. The required migration template is selected based on the options specified in the migration list item / csv to create a migration package
  6. The prepared migration task is queued for migration with Sharegate, and migration is executed
  7. Information mails are “queued” to be dispatched to migration team
  8. Emails are sent out to the recipients
  9. The migration reports are extracted out as CSV and stored at a network location.

Environment Setup

Software Components

The following software components were utilized for implementing the automation:

  1. SharePoint Online Management shell
  2. SharePoint PnP PowerShell
  3. Sharegate migration Tool

Environment considerations

Master and migration terminals hosted as virtual machines – each terminal is a Windows 10 virtual machine. The use of virtual machines provides the following advantages over using desktops:

  • VMs are generally deployed directly in datacenters, and hence near the data source.
  • They are available all the time and are not affected by power outages or manual shutdowns.
  • They can be easily scaled up or down based on the project requirements.
  • They benefit from better internet connectivity, and separate internet routing can be drawn up based on requirements.

Single Master Terminal – A single master terminal is useful to centrally manage all other migration terminals. Using a single master terminal offers following advantages:

  • Single point of entry to migration process
  • Acts as central store for scripts, templates, aggregated reports
  • Acts as a single agent to execute non-sequential tasks such as sending out communication mails

Multiple migration terminals – it is optimal to initiate parallel migrations across multiple machines (terminals) to expedite overall throughput in the available migration window (generally non-business hours). Sharegate has the option to use either 1 or 5 licenses at once during migration; we utilized 5 Sharegate licenses on 5 separate migration terminals.

PowerShell Remoting – Using PowerShell remoting allows opening remote PowerShell sessions to other windows machines. This will allow the migration team to control and manage migrations using just one terminal (Master Terminal) and simplify monitoring of simultaneous migration tasks. More information about PowerShell remoting can be found here.
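As a hedged illustration, remoting is enabled once on each migration terminal, after which the master terminal can run commands against all of them; the machine names below are examples:

# Run once on each migration terminal (elevated PowerShell)
Enable-PSRemoting -Force

# From the master terminal: run a command against all migration terminals at once
$Terminals = 'MIGTERM01','MIGTERM02','MIGTERM03','MIGTERM04','MIGTERM05'
Invoke-Command -ComputerName $Terminals -ScriptBlock { $env:COMPUTERNAME; Get-Date }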

PowerShell execution policy – the scripts running on the migration terminals are stored at a network location on the master terminal. This allows changing / updating scripts on the fly without copying them over to the other migration terminals. The script execution policy of the PowerShell window needs to be set to “Bypass” to allow execution of scripts stored on a network location (for quick reference, the command is “Set-ExecutionPolicy -ExecutionPolicy Bypass”).

Windows Scheduled Tasks – the PowerShell scripts are scheduled as tasks through Windows Task Scheduler, and the tasks configured on the migration terminals can be managed remotely. The scripts themselves are stored at a network location on the master terminal.

[Screenshot: basic task in Windows Task Scheduler]

[Screenshot: PowerShell script file configured to run as a task]

Hardware specifications

Master terminal (Manage migrations)

  • 2 cores, 4 GB RAM, 100 GB HDD
  • Used for managing scripts execution tasks on other terminals (start, stop, disable, enable)
  • Used for centrally storing all scripts and ShareGate property mapping and migration templates
  • Used for Initiating mails (configured as basic tasks in task scheduler)
  • Used for detecting and downloading migration configuration of tasks ready to be initiated (configured as basic tasks in task scheduler)
  • Windows 10 virtual machine installed with the required software.
  • Script execution policy set as “Bypass”

Migration terminals (Execute migrations)

  • 8 cores, 16 GB RAM, 100 GB HDD
  • Used for processing migration tasks (configured as basic tasks in windows task scheduler)
  • Multiple migration terminals may be set up based on the available Sharegate licenses
  • Windows 10 Virtual machines each installed with the required software.
  • Activated Sharegate license on each of the migration terminals
  • PowerShell remoting needs to be enabled
  • Script execution policy set as “Bypass”

Migration Process

Initiate queueing of Migrations

Before migration, the migration team must perform any manual pre-migration tasks defined by the migration process agreed with stakeholders. Some of the pre-migration tasks / checks may be:

  • Inform other teams about a possible network surge
  • Confirming whether another activity is consuming bandwidth (e.g. scheduled updates)
  • Inform the impacted business users about the migration – this would be generally set up as the communication plan
  • Freezing the source data store as “read-only”

A list was created on a SharePoint Online site to enable users to indicate that a migration is ready to be processed; updates to this list trigger the actual migration downstream. The migration plan is pre-populated in this list as part of the migration planning phase. The migration team can then update one of the fields (“ReadyToMigrate” in this case) to initiate a migration, or skip a planned migration if so desired. Migration status is also updated back to this list by the automation process.

The list serves as a single point of entry to initiate and monitor migrations. In other words, it abstracts away the migration processing, and can be an effective tool for the migration and communication teams.

The list was created with the following columns:

  • Source => Network Share root path
  • Destination site => https://yourtenant.sharepoint.com/sites/<Sitename>
  • Destination Library => Destination library on the site
  • Ready to migrate => Indicates that the migration is ready to be triggered
  • Migrate all data => Indicates if all data from the source is to be migrated (default is No); otherwise only data matching the predefined filter options will be migrated (more on filter options can be found here)
  • Started => updated by automation when the migration package has been downloaded
  • Migrated => updated by automation after migration completion
  • Terminal Name => updated by automation specifying the terminal being used to migrate the task

 

[Screenshot: migration configuration list]

 

After the migration team is ready to initiate the migration, the field “ReadyToMigrate” for the migration item in the SharePoint list is updated to “Yes”.


[Screenshot: “flicking the switch” – setting ReadyToMigrate to Yes]

 

Script to create the migration configuration list

The script below creates the source list in SharePoint Online.
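The original script is not reproduced here; below is a hedged sketch of what it does using SharePoint PnP PowerShell, where the site URL, list title and internal field names are assumptions matching the columns described above:

# Connect to the site that hosts the migration configuration list
Connect-PnPOnline -Url "https://yourtenant.sharepoint.com/sites/migration" -Credentials (Get-Credential)

# Create the list and the columns described above
New-PnPList -Title "MigrationTasks" -Template GenericList
Add-PnPField -List "MigrationTasks" -DisplayName "Source" -InternalName "Source" -Type Text -AddToDefaultView
Add-PnPField -List "MigrationTasks" -DisplayName "DestinationSite" -InternalName "DestinationSite" -Type Text -AddToDefaultView
Add-PnPField -List "MigrationTasks" -DisplayName "DestinationLibrary" -InternalName "DestinationLibrary" -Type Text -AddToDefaultView
Add-PnPField -List "MigrationTasks" -DisplayName "ReadyToMigrate" -InternalName "ReadyToMigrate" -Type Boolean -AddToDefaultView
Add-PnPField -List "MigrationTasks" -DisplayName "MigrateAllData" -InternalName "MigrateAllData" -Type Boolean -AddToDefaultView
Add-PnPField -List "MigrationTasks" -DisplayName "Started" -InternalName "Started" -Type Boolean -AddToDefaultView
Add-PnPField -List "MigrationTasks" -DisplayName "Migrated" -InternalName "Migrated" -Type Boolean -AddToDefaultView
Add-PnPField -List "MigrationTasks" -DisplayName "TerminalName" -InternalName "TerminalName" -Type Text -AddToDefaultView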

Script to store credentials

This script stores the credentials so they can be used by subsequent scripts.
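The file itself is not shown; a common pattern (and a hedged assumption of what the script did) is to export the credential as encrypted CLIXML, which the scheduled tasks then re-import. Note that DPAPI ties the file to the account and machine that created it, so it must be generated by the same account that runs the tasks:

# Save the credential once (password encrypted with DPAPI for the current user/machine)
Get-Credential | Export-Clixml -Path "C:\AutomatedMigrationData\creds\migration.xml"

# In subsequent scripts, load it back
$Credentials = Import-Clixml -Path "C:\AutomatedMigrationData\creds\migration.xml"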

Queuing the migration tasks

A PowerShell script polls the migration configuration list in SharePoint at regular intervals to determine whether a migration task is ready to be initiated. The available migration configurations are then downloaded as CSV files, one item per file, and stored in a migration packages folder on the master terminal. Each CSV file maps to one migration task to be executed by a migration terminal, which ensures that the same migration task is not executed by more than one terminal. It is important that this script runs on a single terminal, so that only one migration is executed per source item.
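The polling script is not reproduced here; the sketch below shows one hedged way to implement the described behaviour with PnP PowerShell, reusing the assumed list, field and folder names from earlier:

# Poll the configuration list for items flagged ready but not yet started
Connect-PnPOnline -Url "https://yourtenant.sharepoint.com/sites/migration" -Credentials $Credentials
$ReadyItems = Get-PnPListItem -List "MigrationTasks" | Where-Object {
    $PSItem["ReadyToMigrate"] -eq $true -and $PSItem["Started"] -ne $true
}

foreach ($Item in $ReadyItems) {
    # One CSV per list item; the file name carries the item ID for later status updates
    [pscustomobject]@{
        Id                 = $Item.Id
        Source             = $Item["Source"]
        DestinationSite    = $Item["DestinationSite"]
        DestinationLibrary = $Item["DestinationLibrary"]
        MigrateAllData     = $Item["MigrateAllData"]
    } | Export-Csv -Path ("C:\AutomatedMigrationData\packages\Task_{0}.csv" -f $Item.Id) -NoTypeInformation

    # Mark the item as started so it is not picked up twice
    Set-PnPListItem -List "MigrationTasks" -Identity $Item.Id -Values @{ "Started" = $true }
}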

 

 

Execute Migration

The downloaded migration configuration CSV files are detected by migration script tasks executing on each of the migration terminals. Based on the specified source, destination and migration options the following tasks are executed:

  1. Reads the item from configuration list to retrieve updated data based on item ID
  2. Verifies the source; additionally, sends a failure mail if the source is invalid or not available
  3. Revalidates if a migration is already initiated by another terminal
  4. Updates the “TerminalName” field in the SharePoint list to indicate an initiated migration
  5. Checks if the destination site is created. Creates if not already available
  6. Checks if the destination library is created. Creates if not already available
  7. Triggers an information mail informing migration start
  8. Loads the required configurations based on the required migration outcome. The migration configurations specify migration options such as cut over dates, source data filters, security and metadata. More about this can be found here.
  9. Initiates the migration task
  10. Extracts the migration report and stores as CSV
  11. Extracts the secondary migration report as CSV to derive the paths of all files successfully migrated; these CSVs can be read by an optional downstream process
  12. Triggers an information mail informing migration is completed
  13. Checks for another queued migration to repeat the procedure.

The automation script is given below.
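The full script is not reproduced here; stripped of the mail and re-queueing plumbing, its core might look like the hedged sketch below. The Sharegate cmdlet names follow Sharegate's PowerShell documentation, but the template name, credential handling and paths are assumptions:

Import-Module Sharegate

# Pick up the next queued migration package (CSV produced by the polling script)
$PackageFile = Get-ChildItem "\\masterterminal\c$\AutomatedMigrationData\packages\*.csv" | Select-Object -First 1
$Task = Import-Csv -Path $PackageFile.FullName

# Illustrative credentials; in practice load these from the stored CLIXML file
$Password = (Import-Clixml "C:\AutomatedMigrationData\creds\migration.xml").Password

# Connect to the destination site and library
$DstSite = Connect-Site -Url $Task.DestinationSite -Username "svc-migration@yourtenant.com" -Password $Password
$DstList = Get-List -Site $DstSite -Name $Task.DestinationLibrary

# Run the migration using a pre-agreed template of filters and property mappings
$Result = Import-Document -SourceFolder $Task.Source -DestinationList $DstList -TemplateName "StandardFiltered"

# Persist the migration report for later analysis
Export-Report $Result -Path ("C:\AutomatedMigrationData\reports\Report_{0}.csv" -f $Task.Id)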

 


Additional Scripts

Send Mails

The script triggers emails to required recipients.

This script polls a folder (‘\\masterterminal\c$\AutomatedMigrationData\mails\input’) to check for any files to be sent out as emails. The CSV files specify the subject and body to be sent to the recipients configured in the script. Processed CSV files are moved to a final folder.
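A hedged sketch of that dispatcher; the SMTP details, recipients and CSV column names (Subject, Body) are assumptions:

$InputFolder = '\\masterterminal\c$\AutomatedMigrationData\mails\input'
$FinalFolder = '\\masterterminal\c$\AutomatedMigrationData\mails\final'

foreach ($File in Get-ChildItem -Path $InputFolder -Filter '*.csv') {
    # Each CSV row carries the subject and body composed by the migration scripts
    $Mail = Import-Csv -Path $File.FullName
    Send-MailMessage -To 'migrationteam@testcorp.com' -From 'migrations@testcorp.com' -Subject $Mail.Subject -Body $Mail.Body -SmtpServer 'smtp.office365.com' -UseSsl -Port 587 -Credential $Credentials
    # Move the processed file so it is not sent twice
    Move-Item -Path $File.FullName -Destination $FinalFolder
}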

 

Manage migration tasks (scheduled tasks)

The PowerShell script utilizes PowerShell remoting to manage the Windows Task Scheduler tasks configured on the other terminals.
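A hedged sketch of that management script, using the ScheduledTasks cmdlets over PowerShell remoting; the task and machine names are examples:

$Terminals = 'MIGTERM01','MIGTERM02','MIGTERM03','MIGTERM04','MIGTERM05'

# Disable the migration task on every terminal (e.g. outside the migration window)
Invoke-Command -ComputerName $Terminals -ScriptBlock {
    Disable-ScheduledTask -TaskName 'ExecuteMigration'
}

# Re-enable the task and start it immediately
Invoke-Command -ComputerName $Terminals -ScriptBlock {
    Enable-ScheduledTask -TaskName 'ExecuteMigration'
    Start-ScheduledTask -TaskName 'ExecuteMigration'
}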

 


Conclusion

The migration automation process as described above helps in automating the migration project and reduces manual overhead during the migration. Since the scripts utilize pre-configured migration options / templates, the outcome is consistent with the plan. Controlling and monitoring migration tasks utilizing a SharePoint list introduces transparency in the system and abstracts the migration complexity. Business stakeholders can review migration status easily from the SharePoint list and this ensures an effective communication channel. Automated mails informing about migration status provide additional information about the migration. The migration tasks are executed in parallel across multiple migration machines which aids in a better utilization of available migration window.