Deploy and Add SPFx webparts to Modern Pages using OfficeDevPnP CSOM library

In the previous blog here, we looked at how to install apps on a SharePoint site. With SharePoint and the Office Dev PnP CSOM library, we can also add web parts to Modern Pages, both out of the box (OOB) web parts and custom web parts. For OOB web parts, refer to Chris O’Brien’s article here, where he provides the steps and the web part IDs for the OOB web parts, which is really helpful.

In this blog, we will look at the steps to add a custom web part and set its properties.

For the below code, we will use OfficeDevPnP CSOM library. The high level steps for implementation are below:

1. Find the page the web part is to be added to. For creating a modern page programmatically, refer to this link here.

2. Check whether the web part has already been added to the page; if not, add it using its web part ID. We will read the web part ID from a JSON class where we keep the details of the custom web part.

3. After the web part is added, set the web part properties using the JSON schema of the properties, serialised with Newtonsoft.Json.

Note: If the web part app has only just been installed, it might take some time for the web part to become ready for use. Hence, put a wait state in place until the web part is loaded and available.

The below code has the steps for adding the web part programmatically using CSOM.
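
The original CSOM snippet is not reproduced in this extract, so as an illustrative stand-in (not the author's original code) here is a rough PnP PowerShell sketch of the same steps; the site URL, page name, component ID and property JSON are placeholder assumptions.

Connect-PnPOnline -Url "https://tenant.sharepoint.com/sites/TargetSite" -Credentials (Get-Credential)

# Load the modern page the web part should be added to (placeholder page name)
$page = Get-PnPClientSidePage -Identity "Home.aspx"

# Web part (component) ID taken from the SPFx web part manifest - placeholder GUID
$webPartId = "89dd5f83-0e3b-4c3e-9f5a-111111111111"

# Only add the web part if it is not already on the page
if (-not ($page.Controls | Where-Object { $_.WebPartId -eq $webPartId })) {
    # Properties are passed as a JSON string matching the web part's property schema
    $props = '{ "description": "My custom web part", "listTitle": "Documents" }'
    Add-PnPClientSideWebPart -Page $page -Component $webPartId -WebPartProperties $props
    $page.Publish()
}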

Conclusion

The above process can be used to programmatically add a web part to a page and set its properties.

Happy Coding !!

Programmatically deploy and add SharePoint Framework Extensions using SharePoint CSOM and PowerShell

In the previous blog here, we looked at how to deploy and install SharePoint Apps. Now let’s look at installing SharePoint Framework extensions – Listview command sets programmatically.

SharePoint CSOM

SharePoint Framework has three types of extensions that can be created – Application Customisers, ListView Command Sets and Field Customisers. In this blog, we will look at adding ListView Command Sets programmatically.

ListView Command Set extensions are essentially custom actions installed on a library or list. Hence, to activate one we go to the library/list, look through the installed custom actions, and if it is not already there, add the new custom action. Below is the code for that.

PowerShell

We could also use PnP PowerShell to add the library extension using the code below.
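
The original snippet is not reproduced in this extract; below is a hedged PnP PowerShell sketch of registering a ListView Command Set as a custom action. The component ID, names, properties and registration ID are placeholder assumptions.

Connect-PnPOnline -Url "https://tenant.sharepoint.com/sites/TargetSite" -Credentials (Get-Credential)

# ClientSideComponentId from the extension's manifest - placeholder GUID
$componentId = "5f8c1e2a-2222-4c3e-9f5a-333333333333"

# Check whether the command set is already registered on the web
$existing = Get-PnPCustomAction -Scope Web | Where-Object { $_.ClientSideComponentId -eq $componentId }

if (-not $existing) {
    # RegistrationId 101 targets document libraries; use 100 for generic lists
    Add-PnPCustomAction -Name "MyCommandSet" `
        -Title "My Command Set" `
        -Location "ClientSideExtension.ListViewCommandSet.CommandBar" `
        -ClientSideComponentId $componentId `
        -ClientSideComponentProperties '{ "sampleText": "Hello" }' `
        -RegistrationId "101" `
        -RegistrationType List `
        -Scope Web
}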

Hence, above we saw how we can add extensions to a library or list using CSOM or PowerShell.

Happy Coding!!

Deploy and Install SharePoint Apps using SharePoint CSOM and PnP PowerShell

In this blog, we will look at steps to install and deploy SharePoint apps to Modern Sites using SharePoint ALM CSOM and PnP PowerShell. Using the below steps, it is possible to programmatically deploy and install custom SharePoint Framework apps using an Azure Function or a Local PowerShell script.

Installing SharePoint Apps

SharePoint apps can be deployed to a site using the ALM (Application Lifecycle Management) APIs. After the app package is added to the App Catalog, we can deploy and install it on a SharePoint site.

SharePoint CSOM

The steps are simple. The below snippet has the code for deploying and installing apps.

1. Get the App ID
2. Create an AppManager object
3. Deploy the app
4. After deploying the app, install it
5. Use a try-catch block to handle the case where the installation has already been done
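
The original CSOM snippet is not reproduced in this extract. As a rough illustration of the same five steps, here is a hedged PnP PowerShell sketch (the cmdlets shown individually in the next section wrap the same ALM APIs); the app title is a placeholder assumption.

Connect-PnPOnline -Url "https://tenant.sharepoint.com/sites/TargetSite" -Credentials (Get-Credential)

# 1. Get the app from the tenant App Catalog by its title (placeholder title)
$app = Get-PnPApp -Scope Tenant | Where-Object { $_.Title -eq "My SPFx App" }

# 2./3. Deploy (publish) the app if it has not been deployed yet
if (-not $app.Deployed) {
    Publish-PnPApp -Identity $app.Id -Scope Tenant
}

# 4./5. Install the app on the current site; catch the error thrown
#       if the app is already installed there
try {
    Install-PnPApp -Identity $app.Id
}
catch {
    Write-Output "App may already be installed: $($_.Exception.Message)"
}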

PnP PowerShell

First, let's get a list of apps in the App Catalog.

Note: There are two values supported by the Scope parameter for apps – Tenant and Site.

Get-PnPApp -Scope Tenant
or
Get-PnPApp -Scope Site

If the app is not already added, then we will add it to the App Catalog

Add-PnPApp -Path "<.sppkg file location>" -Scope Site

Then, publish the App

Publish-PnPApp -Identity  -Scope Tenant -SkipFeatureDeployment

The -SkipFeatureDeployment switch is helpful to deploy apps across the SharePoint tenancy if the app is developed for tenant-wide deployment.

After the above, we will install the App

Install-PnPApp -Identity  -Scope Site

After the app is installed, it is ready to be added or used at the site.

In the upcoming blog, we will see how to add SharePoint Framework extensions and web parts programmatically.

Happy Coding!!

Office 365 URLs and IP address updates for firewall and proxy configuration, using Flow and Azure Automation

tl;dr

To use Microsoft Office 365, an organisation must allow traffic to [and sometimes from] the respective cloud services via the internet on specific ports and protocols to various URLs and/or IP addresses, or if you meet the requirements via Azure ExpressRoute. Oh duh?!

To further expand on that, connections to trusted networks (a category we assume Office 365 falls into) that are also high in volume (since most communication and collaboration infrastructure resides there) should be via a low latency egress that is as close to the end user as possible.

As more and more customers use the service, and as more and more services and functionality are added, so too will the URLs and IP addresses need to change over time. Firewalls and proxies need to be kept up to date with the destination details of Office 365 services. This is an evergreen solution, let's not forget. So, it's important to put the processes in place to correctly optimise connectivity to Office 365. It's also very important to note that these change management processes, if left ignored, will result in services being blocked or delivering inconsistent experiences for end users.

Change is afoot

Come October 2nd 2018, Microsoft will change the way customers keep up to date with changes to these URLs and IP addresses. A new web service is coming online that publishes Office 365 endpoints, making it easier for you to evaluate, configure, and stay up to date with changes.

Furthermore, the holistic overview of these URLs and IP addresses is being broken down into three new key categories: OPTIMISE, ALLOW and DEFAULT.

You can get more details on these 3x categories from the following blog post on TechNet: https://blogs.technet.microsoft.com/onthewire/2018/04/06/new-office-365-url-categories-to-help-you-optimize-the-traffic-which-really-matters/

 

It's not all doom and gloom, though, even if your RSS feed no longer works. The new web service (still in public preview at the time of writing this blog) is rather zippy and allows for some great automation. So, that's the target state: automation.

Microsoft wants to make it nice and easy for firewall, proxy or whatever edge security appliance vendor or service provider to programmatically interact with the web service and offer dynamic updates for Office 365 URL and IP address information. In practice, change management and governance processes will evidently still be followed. In most circumstances, organisations are following whatever ITIL or ITIL like methodologies are in place for those sorts of things.

The dream Microsoft has, though, is actually one that is quite compelling.

Before we get to this streamlined utopia where my customers' edge devices update automatically, I've needed to come up with a process for the interim tactical state. This process runs as follows:

  • Check daily for changes in Office 365 URLs and IP addresses
  • Download changes in a user readable format (So, ideally, no XML or JSON. Perhaps CSV for easy data manipulation or even ingestion into production systems)
  • Email intended parties that there has been a change in the global version number of the current Office 365 URLs and IP addresses
  • Allow intended parties to download the output

NOTE – for my use case here, the output contains IP addresses only. That's because the infrastructure run by the teams I'll be sending this information to only allows for that data type. If you tweak the web service request (details further down), you can grab both URLs and IP addresses, or one or the other.
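
For example, a hedged sketch of pulling URLs instead of (or alongside) the IP addresses, using the same worldwide endpoint and ClientRequestId that the runbooks further down use:

$endpoints = Invoke-RestMethod -Uri "https://endpoints.office.com/endpoints/Worldwide?ClientRequestId=b10c5ed1-bad1-445f-b386-b919946339a7"

# URLs only, for the Optimize category
$endpoints | Where-Object { $_.category -eq 'Optimize' } | ForEach-Object { $_.urls } | Sort-Object -Unique

# Both URLs and IP addresses, per category
$endpoints | Select-Object category, urls, ips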

 

Leveraging Microsoft Flow and Azure Automation

My first instinct here was to use Azure Automation and run a very long PowerShell script with If's and Then's and so on. However, when working through the script, 1) my PowerShell skills are not at the level needed to bang this out quickly, and 2) Flow is an amazing tool for running through some of the tricky bits in a more effortless way.

So, leveraging the goodness of Flow, here’s a high level rundown of what the solution looks like:

 

The workflow runs as follows:

  1. Microsoft Flow
  2. On a daily schedule, the flow is triggered at 6am
  3. Runbook #1
    1. Runbook is initiated
    2. Runbook imports CSV from Azure Blob
    3. PowerShell runs a command to query the web service and saves the output to CSV
    4. CSV is copied to Azure Blob
  4. Runbook #2 imports a CSV
    1. Runbook is initiated
    2. Runbook imports CSV from Azure Blob
    3. The last cell in the version column is compared to the previous
    4. An Output is saved to Azure Automation if a newer version found, “NEW-VERSION-FOUND”
  5. The Output is taken from the previous Azure Automation Runbook run
  6. A Flow Condition is triggered – YES if Output is found, NO if nothing found

Output = YES

  • 7y1 = Runbook #3 is run
    • Runbook queries web service for all 3 conditions: optimise, allow and default
    • Each query for that day's IP address information is saved into 3 separate CSV files
  • 7y2 = CSV files are copied to Azure Blob
  • 7y3 = Microsoft Flow queries Azure Blob for the three files
  • 7y4 = An email template is used to email respective interested parties about change to the IP address information
    • The 3x files are added as attachments

Output = Nothing or NO

  • 7n1 = Send an email to the service account mailbox to say there were no changes to the IP address information for that day

 

The process

Assuming, dear reader, that you have some background with Azure and Flow, here's a detailed outline of the process I went through (and one that you can replicate) to automate checking and providing relevant parties with updates to the Office 365 URLs and IP address data.

Let's begin!

Step 1 – Azure AD
  • I created a service account in Azure AD that has been given an Office 365 license for Exchange Online and Flow
  • The user details don’t really matter here as you can follow your own naming convention
  • My example username is as follows: svc-as-aa-01@[mytenant].onmicrosoft.com
    • Naming convention being: “Service account – Australia South East – Azure Automation – Sequence number”
Step 2 – Azure setup – Resource Group
  • I logged onto my Azure environment and created a new Resource Group
  • My solution has a couple of components (Azure Automation account and a Storage account), so I put them all in the same RG. Nice and easy
  • My Resource Group details
    • Name = [ASPRODSVCAA01RG]
    • Region = Australia South East as that’s the local Azure Automation region
    • That’s a basic naming convention of: “Australia South East – Production environment – Purpose, being for the SVC account and Azure Automation – Sequence number – Resource Group”
  • Once the group was created, I added my service account as a Contributor to the group
    • This allows the account downstream permissions to the Azure Automation and Storage accounts I’ll add to the resource group
Step 3 – Azure Setup – Storage account
  • I created a storage account and stored that in my resource group
  • The storage account details are as follows
    • Name = [asprodsvcaa01] = Again, follow your own naming convention
    • Deployment model = Resource manager
    • Storage General Purpose v2
    • Locally redundant storage only
    • Standard performance
    • Hot access tier
  • Within the storage account, I’ve used Blob storage
    • There are two containers that I used:
      • Container #1 = “daily”
      • Container #2 = “ipaddresses”
    • This is where the output CSV files will be stored
  • Again, we don’t need to assign any permissions as we assigned Contributor permissions to the resource group
Step 4 – Azure Setup – Azure Automation
  • I created a new Azure Automation account with the following parameters
    • Name = [SVCASPROD01AA] = Again, follow your own naming convention
    • Default parameters, matching my resource group settings
    • Yes, I also had a Run As account created (default option)
  • I created three Runbooks, as per below

 

  • Step1-GetGlobalVersion = Again, follow your own naming convention
  • This is a Powershell runbook
  • Here’s the example script I put together:
#region SETUP
Import-Module AzureRM.Profile
Import-Module AzureRM.Resources
Import-Module AzureRM.Storage
#endregion

#region CONNECT
$pass = ConvertTo-SecureString "[pass phrase here]" -AsPlainText -Force
$cred = New-Object -TypeName pscredential -ArgumentList "[credential account]", $pass
Login-AzureRmAccount -Credential $cred -ServicePrincipal -TenantId "[tenant id]"
#endregion

#region IMPORT CSV FILE FROM BLOB
$acctKey = (Get-AzureRmStorageAccountKey -Name [name here] -ResourceGroupName [name here]).Value[0]
$storageContext = New-AzureStorageContext -StorageAccountName "[name here]" -StorageAccountKey $acctKey
Get-AzureStorageBlob -Context $storageContext -Container "[name here]" | Get-AzureStorageBlobContent -Destination . -Context $storageContext -Force
#endregion

#region GET CURRENT VERSION
$DATE = $(((get-date).ToUniversalTime()).ToString("yyyy-MM-dd"))
Invoke-RestMethod -Uri https://endpoints.office.com/version/Worldwide?ClientRequestId=b10c5ed1-bad1-445f-b386-b919946339a7 | Select-Object @{Label="VERSION";Expression={($_.Latest)}},@{Label="DATE";Expression={($Date)}} | Export-Csv [daily-export.csv] -NoTypeInformation -Append 

# SAVE TO BLOB
$acctKey = (Get-AzureRmStorageAccountKey -Name [name here] -ResourceGroupName [name here]).Value[0]
$storageContext = New-AzureStorageContext -StorageAccountName "[name here]" -StorageAccountKey $acctKey
Set-AzureStorageBlobContent -File [.\daily-export.csv] -Container "[name here]" -BlobType "Block" -Context $storageContext -Force
#endregion

#region OUTPUT
Write-Output "SCRIPT-COMPLETE"
#endregion

 

  • Step2-CheckGlobalVersion = Again, follow your own naming convention
  • This is a Powershell runbook
  • Here’s the example script I put together:
#region SETUP
Import-Module AzureRM.Profile
Import-Module AzureRM.Resources
Import-Module AzureRM.Storage
#endregion

#region CONNECT 
$pass = ConvertTo-SecureString "[pass phrase here]" -AsPlainText -Force
$cred = New-Object -TypeName pscredential -ArgumentList "[credential account]", $pass
Login-AzureRmAccount -Credential $cred -ServicePrincipal -TenantId "[tenant id]"
#endregion

#region IMPORT CSV FILE FROM BLOB
$acctKey = (Get-AzureRmStorageAccountKey -Name [name here] -ResourceGroupName [name here]).Value[0]
$storageContext = New-AzureStorageContext -StorageAccountName "[name here]" -StorageAccountKey $acctKey
Get-AzureStorageBlob -Context $storageContext -Container [name here] | Get-AzureStorageBlobContent -Destination . -Context $storageContext -Force
#endregion

#region CHECK IF THERE IS A DIFFERENCE IN THE VERSION
$ExportedCsv = import-csv [.\daily-export.csv]
$Last = $ExportedCsv | Select-Object -Last 1 -ExpandProperty Version # Last value in Version column
$SecondLast = $ExportedCsv | Select-Object -Last 1 -Skip 1 -ExpandProperty Version #Second last value in version column
If ($Last -gt $SecondLast) {
Write-Output '[NEW-VERSION-FOUND]'
}

 

  • Step3-GetURLsAndIPAddresses = Again, follow your own naming convention
  • This is a Powershell runbook
  • Here’s the example script I put together:
#region SETUP
Import-Module AzureRM.Profile
Import-Module AzureRM.Resources
Import-Module AzureRM.Storage
#endregion

#region EXECUTE PROCESS TO DOWNLOAD NEW VERSION
$endpoints = Invoke-RestMethod -Uri https://endpoints.office.com/endpoints/Worldwide?ClientRequestId=b10c5ed1-bad1-445f-b386-b919946339a7
$endpoints | Foreach {if ($_.category -in ('Optimize')) {$_.IPs}} | Sort-Object -unique | Out-File [.\OptimizeFile.csv]
$endpoints | Foreach {if ($_.category -in ('Allow')) {$_.IPs}} | Sort-Object -unique | Out-File [.\AllowFile.csv]
$endpoints | Foreach {if ($_.category -in ('Default')) {$_.IPs}} | Sort-Object -unique | Out-File [.\DefaultFile.csv]

$acctKey = (Get-AzureRmStorageAccountKey -Name [name here] -ResourceGroupName [name here]).Value[0]
$storageContext = New-AzureStorageContext -StorageAccountName "[name here]" -StorageAccountKey $acctKey
Set-AzureStorageBlobContent -File [.\OptimizeFile.csv] -Container "[name here]" -BlobType "Block" -Context $storageContext -Force
Set-AzureStorageBlobContent -File [.\AllowFile.csv] -Container "[name here]" -BlobType "Block" -Context $storageContext -Force
Set-AzureStorageBlobContent -File [.\DefaultFile.csv] -Container "[name here]" -BlobType "Block" -Context $storageContext -Force
#endregion

#region OUTPUT
Write-Output "SCRIPT COMPLETE"
#endregion
  • Note that we don’t need to import the complete AzureRM Powershell modules
  • You’ll find that if you do something “lazy” like that, there’s a whole lot of dependencies in Azure Automation
    • You’ll need to manually add in all the sub-modules which is very time consuming
Step 5 – Microsoft Flow
  • With my service account having a Flow license, I created my Flow there
  • This means that I can pass this onto Managed Services to run with and maintain going forward
  • I started with a blank Flow
  • I added a schedule
    • The schedule runs at 6am every day

  • Step 1 is to add in an Azure Automation Create Job task
    • This is to execute the Runbook “Step1-GetGlobalVersion”
    • Flow will try and connect to Azure with our Service account
    • Because we added all the relevant permissions earlier in Azure, the Resource Group and downstream resources will come up automatically
    • Enter in the relevant details

  • Step 2 is to add in another Azure Automation Create Job task
    • This is to execute the Runbook “Step2-CheckGlobalVersion”
    • Again, Flow will connect and allow you to select resources that the service account has Contributor permissions to

  • Step 3 is to add in an Azure Automation Get Job Output
    • This is to grab the Output data from the previous Azure Automation runbook
    • The details are pretty simple
    • I selected the “JobID” from the Step 2 Azure Automation runbook job

  • Step 4 is where things get interesting
  • This is a Flow Condition
  • This is where we need to specify if a Value of “NEW-VERSION-FOUND” is found in the content of the Output from the Step 2 Job, Do something or Not do something

  • Step 5 is where I added in all the “IF YES” flow to Do something because we have an output of “NEW-VERSION-FOUND”
  • The first sub-process is another Azure Automation Create Job task
  • This is to execute the Runbook “Step3-GetURLsandIPaddresses”
  • Again, Flow will connect and allow you to select resources that the service account has Contributor permissions to

  • Step 6 is to create 3 x Get Blob Content actions
  • This will allow us to connect to Azure Blob storage and grab the 3x CSV files that the previous steps output to Blob created
  • We’re doing this so we can embed them in an email we’re going to send to relevant parties in need of this information

  • Step 7 is to create an email template
  • As we added an Exchange Online license to our service account earlier, we’ll have the ability to send email as the service accounts mailbox
  • The details are pretty straight forward here:
    • Enter in the recipient address
    • The sender
    • The subject
    • The email body
      • This can be a little tricky, but, I’ve found that if you enable HTML (last option in the Send An Email action), you can use <br> or line break to space out your email nicely
    • Lastly, we’ll attach the 3x Blobs that we picked up in the previous step
    • We just need to manually set the name of the email attachment file
    • Then select the Content via the Dynamic Content option
      • Note: if you see this error “We can’t find any outputs to match this input format.Select to see all outputs from previous actions.” – simply hit the “See more” button
      • The See more button will show you the content option in the previous step (step 6 above)

  • Step 8 is to go over to the If No condition
  • This is probably optional because, as the old saying goes, "no news is good news"
  • However, for the purposes of tracking how often changes happen easily, I thought I’d email the service account and store a daily email if no action was taken
    • I'll probably add myself as a recipient here as well to keep an eye on Flow and make sure it's running
    • I can use inbox rules to move the emails out of my inbox and into a folder to streamline it further and keep my inbox clean
  • The details are pretty much the same as the previous Step 7
    • However, there’s no attachments required here
    • This is a simple email notification where I entered the following in the body: “### NO CHANGE IN O365 URLs and IP ADDRESSES TODAY ###”

 

Final words

Having done many Office 365 email migrations, I've come to use PowerShell and CSVs quite a lot to make my life easier when there are thousands of records to work with. This process uses that experience and that speed of working up a solution using CSV files. I'm sure there are better ways to streamline that component, for example by using Azure Table Storage.

I'm also sure there are better ways of storing credential information which, for the time being, isn't a problem while I work out this new process. The overall governance will get ironed out and I'll likely leverage the Azure Automation credential store, or even Azure Key Vault.

If you, dear reader, have found a more streamlined and novel way to achieve this that requires even less effort to setup, please share!

Best,

Lucian

#WorkSmarterNotHarder

 

SharePoint Integration for Health Care eLearning – Moving LMS to the Cloud

Health care systems often face the challenges of being unkempt, unmaintained, or managed by too many hands without consistency in content, and of harbouring outdated resources. A lot of these legacy training and development systems also wear the pain of constant record churn without a supportable records management system. With the accrual of these records over time forming a 'Big Data' concern, modernising these eLearning platforms may be the right call to action for medical professionals and researchers. Gone should be the days of manually updating Web Vista on a regular basis.

Cloud solutions for Health Care and Research should be well on their way, but the better utilisation of these new technologies will play a key part in how much confidence health professionals invest in IT as a means of departmental education and career development moving forward.

Why SharePoint Makes Sense (versus further developing Legacy Systems)

Every day, each document, slide image and scan matters when the paying customer’s dollar is placed on your proficiency to solve pressing health care challenges. Compliance and availability of resources aren’t enough – streamlined and collaborative processes, from quality control to customer relationship management, module testing and internal review are all minimum requirements for building a modern, online eLearning centre i.e. a ‘Learning Management System’.

ELearningIndustry.com has broken down ten key components that a Learning Management System (LMS) requires in order to be effective. From previous experience developing an LMS, or OLC (Online Learning Centre) site, using SharePoint, these ten components can indeed be designed within the online platform:

  1. Strong Analytics and Report Generation – for the purposes of eLearning, e.g. dashboards which contain progress reports, exam scores and other learner data, SharePoint workflows allow for progress tracking of training and users' engagement with content and course materials, while versioning ensures that learning managers, content builders (subject matter experts) and the learners themselves are on the same page (literally).
  2. Course Authoring Capability – SharePoint access and user permissions are directly linked to your Active Directory. Access to content can be managed both from a hierarchical standpoint and on a role basis if we're talking content authors. Furthermore, learners can have access to specific 'modules' allocated to them based on department, vocation, etc.
  3. Scalable Content Hosting – flexibility of content, workflows or plug-ins (using 'app parts') to adapt functionality to welcome new learners where learning requirements may shift to align with organisational needs.
  4. Certifications – due to the availability and popularity of SharePoint Online in large/global enterprises, certifications for anywhere from smart to super users are available from Microsoft affiliated authorities or verified third parties.
  5. Integrations (with other SaaS software, communication tools, etc.) – allow for exchange of information through APIs for content feeds and record management, e.g. with virtual classrooms, HR systems, Google Analytics.
  6. Community and Collaboration – the added benefit of integrated and packaged Microsoft apps to create channels for live group study or learner feedback, for instance (Skype for Business, Yammer, Microsoft Teams).
  7. White Labelling vs. Branding – UI friendly, fully customisable appearance. The modern layout is design flexible, allowing the institute's branding to be proliferated throughout the tenant's SharePoint sites.
  8. Mobile Capability – SharePoint has both a mobile app and can be designed to be responsive to multiple mobile device types.
  9. Customer Support and Success – as it is a common enterprise tool, support by local IT should be feasible, with any critical product support inquiries routed to Microsoft.
  10. Support of the Institute's Mission and Culture – in Health Care Services, where the churn of information and data pushes for an innovative, rapid response, SharePoint can be designed to meet these needs where, as an LMS, it can adapt to continuously represent the expertise and knowledge of Health Professionals.

Outside of the above, the major advantage for health services to make the transition to the cloud is the improved information security experience. There are still plenty of cases today where patients are at risk of medical and financial identity fraud due to inadequate information security and manual (very implicitly hands-on) records management processes. Single platform databasing, as well as the from-anywhere accessibility of SharePoint as a Cloud platform meets the challenge of maintaining networks, PCs, servers and databases, which can be fairly extensive due to many health care institutions existing beyond hospitals, branching off into neighbourhood clinics, home health providers and off-site services.

Provisioning complex Modern Sites with Azure Functions and Flow – Part 2 – Create and Apply Template

In the previous blog here, we got an overview of the high-level architecture of a complex Modern team site provisioning process. In this blog, we will look at step 1 of the process – the Create and Apply Template process – in detail.

Before that, below are a few links to earlier blogs, as a refresher, covering the prerequisites for this blog.

  1. Set up a Graph App to call Graph Service using App ID and Secret – link
  2. Sequencing HTTP Trigger Azure Functions for simultaneous calls – link
  3. Adding and Updating owners using Microsoft Graph Async calls – link

Overview

The Create and Apply Template process aims at the following

1. Create a blank modern team site using Groups Template (Group#0 Site template)

2. Apply the provisioning template on the created site.

Step 1 : Create a blank Modern team site

For creating a modern team site using CSOM, we will use the TeamSiteCollectionCreationInformation class of OfficeDevPnP. Before we create the site, we will make sure the site doesn't already exist.

Note: There is an issue with the Site Assets library not getting initialized when the site is created using the below code. Hence, calling the EnsureSiteAssets library method is necessary.
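
The original CSOM snippet is not reproduced in this extract; as an illustrative stand-in, here is a hedged PnP PowerShell sketch of the same check-then-create logic (the admin URL, site URL, title and alias are placeholder assumptions).

Connect-PnPOnline -Url "https://tenant-admin.sharepoint.com" -Credentials (Get-Credential)

$siteUrl = "https://tenant.sharepoint.com/sites/ProjectX"

# Make sure the site doesn't already exist before creating it
try {
    $existingSite = Get-PnPTenantSite -Url $siteUrl
}
catch {
    $existingSite = $null
}

if (-not $existingSite) {
    # Group#0 template: a group-connected (modern) team site
    New-PnPSite -Type TeamSite -Title "Project X" -Alias "ProjectX" -IsPublic
}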

Step 2:  Apply the Provisioning Template

Note: The Apply Template process is a long running process and takes 60-90 min to complete for a complex provisioning template with many site columns, content types and libraries. In order to prevent the Azure Function from timing out, it is necessary to host the Azure Function on an App Service Plan instead of a Consumption plan, so the function is not affected by the 10 min timeout.

For the Apply Provisioning Template process, use the below steps.

1. Reading the Template

It is important to note that the XMLPnPSchemaFormatter version (in the code below) must match the PnP version used to generate the PnP template. If the version is older, then set the XMLPnPSchemaFormatter to read from that older version. In order to find the version of the PnP template, open the XML file and look at the start of the file.

[Image: PnPTemplateVersion]

2. Apply the Template

For applying the template, we will use the ProvisioningTemplateApplyingInformation class of the OfficeDevPnP module. ProvisioningTemplateApplyingInformation also has a property called HandlersToProcess which can be used to invoke only particular handlers in the provisioning template process. Below is the code for the same.
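
The original CSOM snippet is not reproduced in this extract; as an illustrative stand-in, here is a hedged PnP PowerShell sketch of applying the template to the newly created site and limiting the handlers that run (the URL, path and handler list are placeholder assumptions).

Connect-PnPOnline -Url "https://tenant.sharepoint.com/sites/ProjectX" -Credentials (Get-Credential)

# Apply the previously generated template; -Handlers plays the same role
# as HandlersToProcess in the CSOM approach described above
Apply-PnPProvisioningTemplate -Path ".\ProvisioningTemplate.xml" `
    -Handlers Fields,ContentTypes,Lists,Navigation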

After the apply template process is complete, since the flow will have timed out, we will invoke another flow to do the post process by updating a list item in the SharePoint list.

Conclusion

In this blog, we saw how we could create a modern team site and apply the template to it. In the next blog, we will finalize the process by making site-specific changes after applying the template.

Provisioning complex Modern Sites with Azure Functions and Microsoft Flow – Part 1 – Architecture

In one of my previous blogs here, I discussed creating Office 365 Groups using Azure Functions and Flow. The same process can also be used to provision Modern Team Sites in SharePoint Online, because Modern Team Sites are Office 365 Groups too. However, if you are creating a complex Modern Team Site with lots of libraries, content types, term store associated columns etc., it will be challenging to do it with a single Azure Function.

Thus, in this blog (part 1), we will look at the architecture of a solution to provision a complex Modern Team Site using multiple Azure Functions and Flows. This is an approach that went through four months of validation and testing. There might be other options, but this one worked for a complex team site which takes around 45-90 min to provision.

Solution Design

To start with, let's look at the solution design. The solution consists of two major components:

1. Template Creation – Create a SharePoint Modern Team site to be used as a template and generate a Provisioning template from it

2. Provisioning Process – Create a SharePoint inventory list to drive the Flows and Azure Functions. There will be three Azure Functions that run three separate parts of the provisioning lifecycle. More details about the Azure Functions will be in an upcoming blog.

Get the Provisioning Template

The first step in the process is to create a clean site that will be used as a reference template site for the provisioning template. In this site, create all the lists, libraries, site columns and content types, and set any other necessary site settings.

In order to make sure that the generated template doesn't have any elements which are not needed for provisioning, use the following PnP PowerShell cmdlet. The below cmdlet removes any content type hub association and excludes the ALM and Site Security handlers, which are not needed for provisioning.

Get-PnPProvisioningTemplate -Out "" -ExcludeHandlers ApplicationLifecycleManagement, SiteSecurity -ExcludeContentTypesFromSyndication

The output of the above cmdlet is ProvisioningTemplate.xml file which could be applied to new sites for setting up the same SharePoint elements. To know more about the provisioning template file, schema and allowed tags, check the link here.

[Image: ModernSitesProvisioningFlow_GetTemplate]

Team Site Provisioning Process

The second step in the process is to create and apply the template to a Modern SharePoint Team site using Flow and Azure Functions. The detailed steps are as follows:

1. Create an Inventory list to capture all the requirements for Site Creation

2. Create two flows

a) Create and Apply Template flow, and

b) Post Provisioning Flow

3. Create three Azure Functions –

a) Create a blank Modern Team Site

b) Apply Provisioning Template on the above site. This is a long running process and can take about 45-90 min for applying a complex template with about 20 libraries, 20-30 site columns and 10-15 content types

Note: Azure Functions on Consumption plan have a timeout of 10 min. Host the Azure function on an App Service Plan for the above to work without issues

c) Post Provisioning to apply changes that are not supported by Provisioning Template such as Creating default folders etc.

Below is the process flow for the provisioning process. It has steps from 1 – 11, going from creating the site to applying the template. A brief list of the steps is as follows:

  1. Call the Create Site flow to start the Provisioning Process
  2. Call the Create Site Azure Function
  3. Create the Modern Team Site in Azure Function and set any dependencies required for the Apply template such as Navigation items, pages etc, and then return to flow
  4. Call the Apply Template Azure Function.
  5. Get the previously generated ProvisioningTemplate.xml file from a shared location
  6. Apply the Template onto the newly created Modern site. Note: The flow call times out because it cannot wait for such a long running process
  7. Update the status column in the Site Directory for the post provisioning flow to start
  8. Call the Post provisioning flow to run the Post provisioning azure function
  9. The post provisioning Azure Function will complete the remaining SharePoint changes which were not handled by the apply template step, such as setting field default values, creating folders in libraries, associating default values to taxonomy fields, etc.

[Image: ModernSitesProvisioningFlow_ProvisioningProcess]

Conclusion:

Hence, in the above blog, we saw how to create a provisioning process to handle complex modern team site creation at a high architectural level. We will deep dive into the Azure Functions for creating the site, applying the template and post-processing in the upcoming blogs.

Happy Coding!!!

The 5 ways to migrate from Skype for Business to Microsoft Teams

Microsoft recently published a TechNet article outlining the different ways to migrate away from Skype for Business to Microsoft Teams. The article currently contains 5 different migration methods. Let's take a closer look at each of them, and how they might be used within your organisation.

The 5 migration methods

They say good things come in threes, but in this case they come in fives! Five different methods of moving from SfB to Microsoft Teams. When it comes to migration planning, choice is a good thing.

[Image: 5-migration-methods-teams]

 

Migration Method 1: Skype for Business with Teams Collaboration

Ok, so you have a Skype for Business deployment right now, and are looking at moving to Teams. The problem is that Teams just doesn’t meet your requirements right now. This could be due to:

  • Running custom SfB applications such as a call centre app, or on premise UCMA/UCWA app.
  • Teams is missing a feature that you currently use in Skype for Business

If this is you and your goal is to adopt Teams quickly, you can easily start using Teams for collaboration. Your users can quickly familiarise themselves with Teams and how they can use it to work with colleagues on documents in SharePoint and OneDrive, as well as sharing ideas within their newly created Teams and Channels.

Advantages:

  • No overlapping capabilities between Teams and Skype for Business.
  • Instant messaging and chat will reside in Skype for Business (tied to calling).

Caveats:

  • None!

 

Migration Method 2: Skype for Business with Teams Collaboration and Meetings.

Maybe you already have a Skype for Business deployment with significant use of enterprise voice, but right now some of your calling requirements aren’t yet met by Teams calling (such as a third party meeting service).

If this sounds like you, consider enabling Teams for Collaboration as well as Meetings. Existing Skype for Business scheduled meetings will work as normal, but users will be able to create new meetings within Teams.

Advantages:

  • Start Teams adoption quickly, going beyond group collaboration.
  • Improve your users’ meetings experience.

Caveats:

  • Instant messaging and chat will reside in Skype for Business (tied to calling)

 

Migration Method 3: Islands – The default option

If you choose to do nothing, Office 365 enables “Islands” mode by default.

Both Skype for Business and Teams continue to run within their own “island” and all features and functions are enabled within both products.

You may consider this approach if you’re running a PoC with a number of users, and want them to experience the full range of Teams features whilst still having the ability to use Skype for Business.

Of course, without the right user adoption and communications, things can get messy fast. Be sure that your communication around how each product should be used is solid.

Advantages:

  • Simple to operate, no interoperability.
  • Best Teams experience up-front for all capabilities

Caveats:

  • Requires good user communication to avoid confusion and to drive usage toward Teams.
  • Exit strategy requires users to have fully adopted Teams by the time Skype for Business is decommissioned.

 

Migration Method 4: Teams only.

Alright, so you’re ready to take the plunge and use Microsoft Teams. Of course, you may still have users using Skype for Business on premise, but you want all of your cloud based users to use Teams.

Advantages:

  • Limits user confusion by providing only one client to work with.

Caveats:

  • Interoperability only supports basic chat and calling between Skype for Business and Teams

 

Migration Method 5: Skype for Business only

And lastly, you may choose to avoid Teams (for now at least), and wish to stick with Skype for Business only.

Keep in mind that at some point you’re going to need to make the move to Teams anyway, but at least you still have the option for now.

Advantages:

  • Continue to meet business requirements that currently can only be met by Skype for Business.

Caveats:

  • Interoperability only supports basic chat and calling between Skype for Business and Teams.

 

Upgrade Journeys

Still following? Good. Maybe go and grab a coffee. Don't worry, I'll wait. Ok, you're back? Let's push on.

There are two recommended upgrade journeys, one "simple" and the other… not so much.

Simple Upgrade

If you like keeping things simple (and who doesn't), there's a three-step process: selecting users for a PoC, enabling Teams Collaboration Mode, and then enabling Teams-Only Mode.

That process is outlined in the below graphic:

 

[Image: Simple-upgrade-teams]

This is a nice and simple way of selecting your users based upon their job roles (and their eagerness for change), then slowly introducing them to Teams before enabling Teams only mode for them.

 

Gradual Upgrade

I hope you’ve finished that coffee! The other recommended upgrade path is the gradual path. I’ll give you a moment to absorb the below graphic:

[Image: gradual-upgrade-teams]

As you can see, it is possible to migrate different users at different rates. You may choose to move IT into Teams-only mode quickly, but choose to move HR and Sales at a slower pace. Whichever method you choose, you'll more than likely want to end up in Teams-Only mode.

 

Upgrade Scenarios

Alright. At this point you’re probably saying “That’s great Craig. I have a choice. 5 choices to be exact. But uh … which one do I choose?”.

Great question! Of course, you’ll need to have a think about how your organisation responds to change, and how you’ll equip your userbase to start using and adopting Microsoft Teams. The below may help steer you in the right direction, though!

 

Scenario 1: I’m Running Pure Skype For Business Online

In short, move to Teams. You’re not using any custom applications or have any on premise servers to deal with. Sort your user adoption comms out, select some users for a PoC and get them up and running with Teams.

Once everyone is trained and happy, enable Teams Only.

 

Scenario 2: I have Cloud Connector Edition (CCE) deployed

Firstly, kudos. CCE is an awesome product. Secondly, you're in a great position to deploy Direct Routing to Microsoft Teams and continue using your existing Sonus or AudioCodes SBC and phone company.

Consider the approach of enabling Teams in Collaboration and Meetings only mode first of all. You'll be able to continue using CCE to route calls to Skype for Business, as well as Direct Routing to route calls to Teams.

 

Scenario 3: Skype for Business Hybrid with Sfb / Teams Online

This one is a popular scenario. The good news is you have many options available to you. You could enable Skype for Business with Teams collaboration–only mode, Skype for Business with Teams collaboration and meetings mode, keep Islands mode enabled or jump ship and enable Teams-only mode.

Keep in mind that your existing on premise SfB users will be unaffected by this change. Only your cloud users will be able to communicate with Teams users, and vice versa.

Work to create a plan of moving as many (or all) users from SfB on premise to SfB online, and then to Teams. Leave only those users that absolutely must remain on premise (because of specific SfB on premise requirements).

 

Scenario 4: Skype for Business Server on premise – no Hybrid

There’s good news for you. Microsoft have announced Skype for Business server 2019 for on premise, which we’re told will help you to eventually move your users to Office 365 and Microsoft Teams.

If you have no desire to move users to Office 365 or to Teams, consider upgrading your Skype for Business on premise environment to 2019, once it’s released.

 

Next Steps

Microsoft have outlined the steps mentioned above in their own technet post, which can be found here: https://docs.microsoft.com/en-au/MicrosoftTeams/upgrade-and-coexistence-of-skypeforbusiness-and-teams

 

 

Measure O365 ATP Safe Attachments Latency using PowerShell

Microsoft Office 365 Advanced Threat Protection (ATP) is a cloud based security service that is part of the O365 E5 offering. It can also be added separately to other O365 subscriptions. A lot can be learned about ATP from here. But in this post we're going to extract data corresponding to one of ATP's primary features: ATP Safe Attachments.

In short, ATP Safe Attachments scans documents for malicious content and can block these attachments depending on the policy configuration. More details can be learned about ATP Safe Attachments from here. Now, Safe Attachments can react to emails with attachments in multiple ways. It can deliver the email without the attachment, with just a thumbnail that tells the user that ATP is currently scanning the attachment. Or it can delay the message until scanning is complete. Microsoft has been improving ATP Safe Attachments with a lot of new features like reduced message delays, support for different file formats, dynamic delivery, previewing, etc. But what we're interested in for this post is the delay that ATP Safe Attachments introduces when the policy is set to delay emails, since the other delivery options can sometimes confuse users and would require educating the end user if enabled.

But why is there a delay in the first place?

The way ATP Safe Attachments works is by creating a small sandbox and then detonating/opening the attachments in that sandbox. Basically, it checks what types of changes the attachment makes to the sandbox and also uses machine learning to decide whether the attachment is safe or not. It is a nice feature, and it actually simulates how users open attachments when receiving them.

So how fast is this thing?

Recently I've been working on a project with a customer who wanted to test ATP Safe Attachments with the policy configured to delay messages, rather than deliver and scan at the same time (dynamic delivery). The question they had in mind was: how can we ensure that the ATP Safe Attachments delay is acceptable? (I tried hard to find an official document from Microsoft around the numbers, but couldn't.) The delay is not that big of a deal; it is about a minute or two (it used to be a lot longer when ATP was first introduced). That is really fast considering the amount of work that needs to be done in the backend for this thing to work. Still, the customer wanted this measured, and I had to think of a way of doing that. I tried to find a direct way to do this, but with all of the advanced reporting that ATP brings to the table, there isn't a report that tells you how long messages actually take to arrive in the user's mailbox when an ATP Safe Attachments policy is in place and set to delay messages.

Now let us define 'delay' here. The measurement starts when messages arrive at Exchange Online; you may have other solutions in front of O365, and that time doesn't count. So we are going to measure from the time the message hits Exchange Online Protection (EOP) and ATP until O365 delivers the message to the recipient. Also, the size of the attachment(s) plays a role in the process.

The solution!

This assumes you have the Microsoft Exchange Online modules installed; if you have MFA enabled, you need to authenticate in a different way (that was my case, and I will show you how I did it).

First, jump on to the Office 365 Admin Centre (assuming you're an admin). Then navigate to the Exchange Online admin centre. From the left side, search for 'Hybrid', and then click on the second 'Configure' button (this one installs the Exchange Online PowerShell modules for users with MFA). Once the file is downloaded, click on it and let it do its magic. Once you're in there, you should see this:

[Screenshot]

Connect-EXOPSSession -UserPrincipalName yourAccount@testcorp.com 

Now go through the authentication process and let PowerShell load the modules into this session. You should see the following:

[Screenshot]

Now this means all of the modules have been loaded into the PowerShell session. Before we do any PowerShell scripting, let us do the following:

  • Send two emails to an address that has ATP Safe Attachment policy enabled.
  • The first email is simple and has no attachments into it.
  • The second contains at least one attachment (i.e. a DOCX or PDF File).
  • Now the subject name shouldn’t matter, but for the sake of this test, try to name the subject in a way that you can tell which email contains the attachment.
  • Wait 5 minutes, why? Because ATP Safe Attachments should do some magic before releasing that email (assuming you have it turned ON and for delaying messages only).

Let us do some PowerShell scripting 🙂

Declare a variable that contains the recipient email address that has ATP Safe Attachments enabled. Then use the cmdlet (Get-MessageTrace) to fetch emails delivered to that address in the last hour.

$RecipientAddress = 'FirstName.LastName@testcorp.com';

$Messages = Get-MessageTrace -RecipientAddress $RecipientAddress -StartDate (Get-Date).AddHours(-1) -EndDate (get-date) 

Expand the $Messages variable, and you should see:

[Screenshot]

Here is what we are going to do next. I wish I could explain each line of code, but that would make this blog post too lengthy. So I'll summarise the steps:

  • Do a For each message loop.
  • Use the ‘Get-MessageTraceDetail’ cmdlet to extract details for each message and filter out events related to ATP.
  • Create a custom object containing two main properties:
    • The recipient address (of type String)
    • The message trace ID (of type GUID).
  • Remove all temp variables used in this loop.
# Initialise the collection so that += appends to an array (needed when more than one message matches)
$Custom_Object = @()

foreach($Message in $Messages)
{
    $Message_RecipientAddress = $Message.RecipientAddress
    $Message_Detail = $Message | Get-MessageTraceDetail | Where-Object -FilterScript {$PSItem.'Event' -eq "Advanced Threat Protection"}
    if($Message_Detail)
    {
        $Message_Detail = $Message_Detail | Select-Object -Property MessageTraceId -Unique
        $Custom_Object += New-Object -TypeName psobject -Property ([ordered]@{'RecipientAddress'=$Message_RecipientAddress;'MessageTraceId'=$Message_Detail.'MessageTraceId'})
    } #End If Message_Detail Variable
    Remove-Variable -Name Message_Detail,Message_RecipientAddress -ErrorAction SilentlyContinue
} #End For Each Message

The expected outcome is a single row. Why? Because we sent only two messages: one with an attachment (captured by the above script), and one without an attachment (which doesn't generate ATP events).

[Screenshot]

Just in case you want to do multiple tests, remember to empty your Custom Object before retesting, so you do not get duplicate results.

The next step is to extract details for the message in order to measure the start time and end time until it was sent to the recipient. Here’s what we are going to do:

  • Loop through each item in that custom object.
  • Run a ‘Get-MessageTraceDetail’ again to extract all of the message events. Sort the data by Date.
  • Measure the time difference between the last event and the first event.
  • Store the result into a new custom object for further processing.
  • Remove the temp variables used in the loop.
# Initialise the results collection
$Final_Data = @()

foreach($MessageTrace in $Custom_Object)
{
    $Message = $MessageTrace | Get-MessageTraceDetail | Sort-Object Date
    $Message_TimeDiff = ($Message | Select-Object -Last 1 | Select-Object Date).Date - ($Message | Select-Object -First 1 | Select-Object Date).Date
    $Final_Data += New-Object -TypeName psobject -Property ([ordered]@{'RecipientAddress'=$MessageTrace.'RecipientAddress';'MessageTraceId'=$MessageTrace.'MessageTraceId';'TotalMinutes'="{0:N3}" -f [decimal]$Message_TimeDiff.'TotalMinutes';'TotalSeconds'="{0:N2}" -f [decimal]$Message_TimeDiff.'TotalSeconds'})
    Remove-Variable -Name Message,Message_TimeDiff -ErrorAction SilentlyContinue
} # End For each Message Trace in the custom object

 The expected output should look like:

[Screenshot]

So here are some tips you can use to extract more valuable data from this:

  • You can try to do this on a Distribution Group and get more users, but you need a larger foreach loop for that.
  • If your final data object has more than one result, you can use the 'Measure-Object' cmdlet to find the average time for you (see the example after this list).
  • If you have a user complaining that they are experiencing delays in receiving messages, you can use this script to extract the delay.
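
For example, a quick hedged sketch of averaging the results collected above ($Final_Data stores the durations as formatted strings, so they are cast back to numbers first):

$Final_Data | ForEach-Object { [double]$_.TotalSeconds } | Measure-Object -Average -Maximum -Minimum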

Just keep in mind that if you’re doing this on a large number of users, it might take some time to process all that data, so patience is needed 🙂

Happy scripting 🙂

 

 

Planning Site structure and Navigation in SharePoint Modern Experience Communication and Team sites

If you are planning to implement, or are already implementing, Modern Team sites or Communication sites, there is a change in best practices for planning and managing site structure, site hierarchy and navigation. This is a very common question during my presentations – how do we manage site structures, navigation and content in the Modern experience?

So, in this blog, we will look at a few strategies for planning site structure and navigation in Modern Experience sites.

1. First and foremost, get rid of nested subsites and site hierarchy navigation. Recently Microsoft has been pushing for a flat site collection structure with Modern Team and Communication sites, which adds a lot of benefit for managing isolation and content. So, the new approach – flat site collections and no subsites. (The various advantages of flat-structure site collections will be listed in another upcoming blog.)

2. Secondly, to achieve a hierarchical relationship among sites for things such as navigation, news items, search etc., use Hub sites. Hub sites are the new way of connecting SharePoint site collections together. Besides, they have the added advantage of aggregating information such as news and search results from the associated sites. So, create a Hub site for navigation across related sites.

[Image: HubSiteAssociatedTeam]

3. The best candidate for a Hub site, in my opinion, is a Communication site. Communication sites have a top navigation that can be easily extended for Hub sites. They are perfect for publishing information and showing aggregated content. However, it also depends on whether the Hub site is meant for a team or business unit, or for the company as a whole. So, use a Communication site as a Hub site if targeting the whole company or a major group.

[Image: QuickLaunchNestedCommunicationSite]

4. One navigation structure – the quick launch (left hand) navigation is the top navigation for Communication sites, so there is no need to maintain two navigations. If you ask me, this is a big win and removes a lot of confusion for end users.

[Image: QuickLaunchEdit_CommSite]

5. The quick launch for Modern Team and Communication sites allows a three-level sub-hierarchy, which lets you define a nested custom navigation structure that can be different from the content structure and site structure.

6. Focus on content, not on navigation or the location of content, through new Modern web parts such as Highlighted Content, Quick Links etc., which allow you to surface content from anywhere easily.

[Image: HighlightedContent]

7. Finally, a few limitations of Modern site structure and navigation (as of June 2018), for reference. Hopefully, these will be changed soon.

    • Permissions still need to be managed at each site collection; there is no nested structure there yet. However, it is possible to use AD groups for a consistent permissions set-up
    • Office 365 Unified (Security) Groups cannot have AD or other Office 365 groups nested for Modern Team sites. But SharePoint site permissions can be managed through AD groups
    • Contextual navigation bolding is missing in Hub sites, i.e. if you click a link to move to a child site, the navigation item is not automatically bolded, but this might be coming soon
    • Navigation headers in Modern sites cannot be blank and need to be set to a link

Conclusion:

Hence in this blog, we looked at an approach for Modern site structures, hierarchy and navigation.