SharePoint Integration for Health Care eLearning – Moving LMS to the Cloud

Health care systems often face challenges with training platforms that are unkept and unmaintained, managed by too many hands without consistency in content, and harbouring outdated resources. Many of these legacy training and development systems also wear the pain of constant record churn without a supportable records management system. With the accrual of these records over time forming a ‘Big Data’ concern, modernising these eLearning platforms may be the right call to action for medical professionals and researchers. Gone should be the days of manually updating Web Vista on a regular basis.

Cloud solutions for Health Care and Research are well on their way, but better utilisation of these new technologies will be a key factor in how much confidence health professionals invest in IT as a means for departmental education and career development moving forward.

Why SharePoint Makes Sense (versus further developing Legacy Systems)

Every day, each document, slide image and scan matters when the paying customer’s dollar is placed on your proficiency in solving pressing health care challenges. Compliance and availability of resources aren’t enough – streamlined and collaborative processes, from quality control to customer relationship management, module testing and internal review, are all minimum requirements for building a modern, online eLearning centre, i.e. a ‘Learning Management System’.

ELearningIndustry.com has broken down ten key components that a Learning Management System (LMS) requires in order to be effective. From previous experience developing an LMS, or Online Learning Centre (OLC), site using SharePoint, these ten components can indeed be designed within the online platform:

  1. Strong Analytics and Report Generation – for eLearning purposes (e.g. dashboards containing progress reports, exam scores and other learner data), SharePoint workflows allow for tracking of training progress and user engagement with content and course materials, while versioning ensures that learning managers, content builders (subject matter experts) and the learners themselves are on the same page (literally).
  2. Course Authoring Capability – SharePoint access and user permissions are linked directly to your Active Directory. Access to content can be managed hierarchically or by role for content authors, and learners can be given access to specific ‘modules’ allocated to them based on department, vocation, etc.
  3. Scalable Content Hosting – flexibility of content, workflows and plug-ins (using ‘app parts’) to adapt functionality and welcome new learners as learning requirements shift to align with organisational needs.
  4. Certifications – due to the availability and popularity of SharePoint Online in large/global enterprises, certifications for everyone from smart users to super users are available from Microsoft-affiliated authorities or verified third parties.
  5. Integrations (with other SaaS software, communication tools, etc.) – allow for the exchange of information through APIs for content feeds and records management, e.g. with virtual classrooms, HR systems and Google Analytics.
  6. Community and Collaboration – the added benefit of integrated and packaged Microsoft apps (Skype for Business, Yammer, Microsoft Teams) to create channels for live group study or learner feedback, for instance.
  7. White Labelling vs. Branding – a UI-friendly, fully customisable appearance. The modern layout is design-flexible, allowing the institute’s branding to be carried throughout the tenant’s SharePoint sites.
  8. Mobile Capability – SharePoint has a mobile app and can be designed to be responsive across multiple mobile device types.
  9. Customer Support and Success – as SharePoint is a common enterprise tool, support by local IT should be feasible, with any critical product support enquiries routed to Microsoft.
  10. Support of the Institute’s Mission and Culture – in health care services, where the churn of information and data demands an innovative, rapid response, SharePoint can be designed to meet these needs; as an LMS it can adapt to continuously represent the expertise and knowledge of health professionals.

Outside of the above, the major advantage for health services in making the transition to the cloud is the improved information security experience. There are still plenty of cases today where patients are at risk of medical and financial identity fraud due to inadequate information security and manual, very hands-on records management processes. Single-platform databasing, as well as the from-anywhere accessibility of SharePoint as a cloud platform, meets the challenge of maintaining networks, PCs, servers and databases, which can be fairly extensive given that many health care institutions exist beyond hospitals, branching off into neighbourhood clinics, home health providers and off-site services.

Cloud PABX with On-premises PSTN connectivity

Sometimes my consulting engagements require creative thinking on how we can deliver Skype for Business services based on the customer’s needs and the timing of suitable products becoming available to the market. In this case my customer wanted Skype for Business Online with Enterprise Voice using Telstra Calling for Office 365. At the time, the Telstra PSTN calling plan was not generally available. Business issues and time constraints required the business to implement a new greenfield solution within a week. Normal telco lead times and gateway acquisitions can take four to six weeks before SIP infrastructure is ready. Gateway acquisitions can also be expensive, especially if the hardware becomes redundant once the customer moves to a full cloud solution as soon as Telstra PSTN calling plans for SFB are available.
In this design I implemented a hybrid solution using a new Skype for Business Server deployment and PSTN connectivity through a third-party SIP trunk provider for Skype for Business. Through this provider we could purchase PSTN numbers and connectivity without the need for a hardware gateway appliance. The solution required a hybrid topology. The initial implementation required an on-premise solution with a single Skype for Business front end server and an SFB edge.

[Figure: Skype for Business hybrid topology]

The users are initially homed on-premises, with PSTN connectivity delivered through third-party SIP trunks terminating on the Mediation Server role. The PSTN media is anchored through a registered public IP address that the telco provider allows. On the Skype for Business server, the third-party SIP hosting service is configured as a standard PSTN gateway.

To take advantage of Microsoft Cloud PABX features we can simply migrate the on-premise user to the cloud. In this topology, users are homed in the cloud on Skype for Business Online instead of being homed on the on-premise deployment.
With this option, your Skype for Business Online users get their Enterprise Voice PSTN connectivity through  the  on-premises Skype for Business Server deployment.

So how does Cloud PABX know to associate the on-premise PSTN with the user?

Through an Office 365 online PowerShell session we can look at the user’s online properties. The Get-CsOnlineUser command needs to show that on-premises Enterprise Voice is enabled, a SIP address, and the on-premises line URI, as in the example below.
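A rough sketch of that check (the user identity below is a placeholder):

Get-CsOnlineUser -Identity "user1@contoso.com" |
    Select-Object SipAddress, EnterpriseVoiceEnabled, HostingProvider, OnPremLineURI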

Next, in the on-premises SFB Management Shell, I run the Get-CsUser command to retrieve the user’s on-premises properties and find that the user is assigned the voice routing policy Global-Hybrid. I then run the Get-CsVoiceRoutingPolicy command to check the Global-Hybrid voice routing policy and determine the PSTN usages assigned to the user. The PSTN usage configuration in the on-premises server will determine the route used to dial out.
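A rough sketch of those checks (the user identity is a placeholder):

# Retrieve the user's on-premises voice properties, including the assigned voice routing policy
Get-CsUser -Identity "user1@contoso.com" |
    Select-Object EnterpriseVoiceEnabled, LineURI, VoiceRoutingPolicy

# Inspect the Global-Hybrid voice routing policy to see which PSTN usages it contains
Get-CsVoiceRoutingPolicy -Identity "Global-Hybrid" | Select-Object Identity, PstnUsages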

The Cloud PABX user and an on-premises SFB user in this SFB hybrid scenario will both follow the on-premises call routing logic. The PSTN usage configuration in the on-premises server will send the call to the third-party SIP trunk provider.

Note: Telstra calling for Office 365 with Skype for Business is now available. This will allow a PSTN calling solution without any on-premises infrastructure.

Keith

Removing blockers for Cloud PABX with On-Premise PSTN for Skype for Business Online.

Overcoming obstacles to migrating to cloud-based enterprise voice solutions is achievable through clever design options. Some enterprise infrastructure managers may feel that their legacy voice environment is restricting the migration of voice services to cloud-based offerings like Skype for Business or Microsoft Teams. However, Microsoft offers a variety of design options for enabling PSTN connectivity for Office 365 or Skype for Business Online accounts through your on-premises Skype for Business Server deployment. Microsoft Cloud PBX with PSTN access through a hybrid Skype for Business on-premises deployment can provide a strategic migration path.

Your goal may be a full unified communications cloud vision, for example Skype for Business Cloud PABX with Telstra Calling for Office 365 plans, or a hybrid; either way, your organisation’s migration strategy may be blocked. This issue may arise for some of the following reasons:

  1. Locked-in telco contracts or investments in on-premises PSTN connectivity.
  2. Legacy PABX infrastructure capital investment.
  3. Existing on-premises server applications, e.g. contact centre and call recording solutions.
  4. Dependency on analogue infrastructure.

With a hybrid SFB solution it is possible to realise a staged migration to hosted Cloud PABX services now. Your migration strategy may therefore be to migrate some of the user base now and test cloud-based telephony until the restricting forces are removed.

In this scenario you can move users to Skype for Business Online Cloud PBX with SFB on-premise PSTN connectivity.

Skype for Business Server Hybrid deployment

[Figure: Skype for Business Server hybrid deployment with Office 365]

The diagram shows a hybrid design that consists of a Skype for Business Server on-premises deployment federated with Office 365. Users can be homed on-premises or online in the cloud. The users will be able to make and receive calls through the existing on-premises voice infrastructure. Notice that the existing legacy PABX infrastructure is integrated into the solution. Additional third-party gateways from Microsoft-qualified gateway providers can deliver multiple connectivity options for legacy infrastructure to integrate into an SFB solution.

Overcoming the barriers for migration to Skype for Business:

  1. The business can maintain existing telco PSTN contracts. Existing PSTN services can terminate on SFB session border controllers that are controlled through the on-premises server. The business can then simply port the existing number range to new Office 365 Telstra PSTN plans when the existing contracts expire.
  2. Integration with legacy PABX solutions can be maintained through SIP or ISDN links between SFB and third-party PABX solutions via a session border controller. This preserves investments in legacy services while also enjoying the benefits of Microsoft Cloud PABX and the digital workspace.
  3. Key users on legacy contact centre and call recording applications can remain on existing platforms. Cloud-based contact centre solutions can be evaluated, and migration strategies separating contact centre users from the rest of the business can be created.
  4. Legacy analogue requirements may be maintained through SFB on-premises servers and the provision of analogue gateways.

Therefore, if your business objective is to progress to cloud-based enterprise voice infrastructure and you are committed to Office 365 Cloud PABX as your voice solution, then an SFB hybrid design gives you workable migration strategies.

Keith Coutlemanis

The Present [and Future] Landscape of Data Migrations

A rite of passage for the majority of us in the tech consultancy world is being part of a medium- to large-scale data migration at some stage in our careers. No, I don’t mean dragging files from a PC to a USB drive, though this may very well have factored into the equation for some of us. What I’m referencing is a planned piece of work where the objective is to move an entire data set from a legacy storage system to a target system. Presumably, a portion of this data is actively used, so the migration usually occurs during a planned downtime period and involves a communication strategy, staging, resourcing, etc.

Yes, a lot of us can say ‘been there, done that’. And for some of us, it can seem simple when broken down as above. But what does it mean for the end user? The recurring cycle of change is never an easy one, and the impact of a data migration is often a big change. For the team delivering it, the work can be just as stress-inducing – sleepless shift cycles, out-of-hours and late-night calls, and project scope creep (note: avoid being vague in work requests, especially when it comes to data migration work) are just a few of the issues that will shave years off anyone who’s unprepared for what a data migration encompasses.

Back to the end users: it’s a big change – new applications, new front-end interfaces, new operating procedures, a potential shake-up of business processes, and so on. Most teams agree with the client that, rather than drawing out the pain of the transition period, they will ‘rip the Band-Aid right off’ and move the entire dataset from one system to another in one operation. Sometimes, depending on context and platforms, this is a completely seamless exercise – the end user logs in on a Monday and is mostly unaware of the switch. Whether taking this or a phased approach to the migration, there are signs in today’s technology services landscape that these operations are aging and becoming somewhat outdated.

Data Volumes Are Climbing…

… to put it mildly. We’re in a world of Big Data, and this isn’t only for global enterprises and large companies, but for mid-sized ones and even some individuals too. Weekend downtimes aren’t going to be enough – or aren’t, as this BA discovered on a recent assignment – and when your data volumes aren’t proportionate to the actual number of end users you’re transitioning (the bigger goal being, in my mind, the transformation of the user experience), you’re left with finite amounts of time to actually perform tests, gain user acceptance, and plan and strategise for mitigation and potential rollback.

Cloud Platforms are not yet well-optimised for effective (pain-free) Migrations

Imagine you have a billing system that contains somewhere up to 100 million fixed assets (active and backlog). The requirement is to migrate all of these to a new system that is more intuitive for the accountants in your business. The new app has a built-in API that supports 500 asset migrations a second. Not bad, but even at that rate the migration needs well over two days of continuous, uninterrupted processing, and considerably longer once it is constrained to approved outage windows. Not optimal for a project, no matter how much planning goes into the delivery phase. On top of this, consider the slow-down in performance as user access competes with the migration through the same API or load gateway. Not fun.

What’s the Alternative?

In a world where we’re looking to make technology and solution delivery faster and more efficient, the future of data migration may, in fact, be headed in the opposite direction.

Rather than phasing your migrations over outage windows of days or weeks, or from weekend-to-weekend, why not stretch this out to months even?

Now, before anyone cries ‘exorbitant billables’, I’m not suggesting that the migration project itself be drawn out for an overly long period of time (months, a year).

No, the idea is not to keep a project team around for the unforeseen, yet to-be-expected, challenges mentioned above. Rather, as tech and business consultants and experts, a possible alternative is to redirect our efforts towards quality of service: focusing on the change management aspects of end-user adoption of a new platform and its associated processes, and on the capability of a given company’s managed IT services to not only support the change but incorporate the migration itself as a standard service offering.

The Bright(er) Future for Data Migrations

How can managed services support a data migration without prior specialisation in, say, PowerShell scripting, or prior experience performing a migration via a tool or otherwise? Nowadays we are fortunate that vendors are developing migration tools to be highly user-friendly and purposed for ongoing enterprise use. They are doing this to shift the view that a relationship with a solution provider for projects such as this should simply be a one-off, and that the capability of the migration software matters more than the capability of the resource performing the migration (still important, but ‘technical skills’ in this space are becoming more of a level playing field).

From a business consultancy angle, an opportunity to provide an improved quality of service presents itself when we look at ways to utilise our engagement and discovery skills to bridge the gaps that can often exist between managed services and an understanding of the business’s everyday processes. A lot of this will hinge on the very data being migrated. Given time, and with full support from managed services, this can prompt positive action from the business. Data migrations as a BAU activity can become iterative and on request: active and relevant data first, followed potentially by a ‘house-cleaning’ activity where the business effectively declutters data which it no longer needs or which is no longer relevant.

It’s early days, and we’re likely still straddling the line between old data migration methodology and exploring what could be. But ultimately, enabling a client or company to be more technologically capable, starting with data migrations, is definitely worth a cent or two.

Deploying Azure Functions with ARM Templates

There are many different ways in which an Azure Function can be deployed. In a future blog post I plan to go through the whole list. There is one deployment method that isn’t commonly known though, and it’s of particular interest to those of us who use ARM templates to deploy our Azure infrastructure. Before I describe it, I’ll quickly recap ARM templates.

ARM Templates

Azure Resource Manager (ARM) templates are JSON files that describe the state of a resource group. They typically declare the full set of resources that need to be provisioned or updated. ARM templates are idempotent, so a common pattern is to run the template deployment regularly—often as part of a continuous deployment process—which will ensure that the resource group stays in sync with the description within the template.

In general, the role of ARM templates is typically to deploy the infrastructure required for an application, while the deployment of the actual application logic happens separately. However, Azure Functions’ ARM integration has a feature whereby an ARM template can be used to deploy the files required to make the function run.

How to Deploy Functions in an ARM Template

In order to deploy a function through an ARM template, we need to declare a resource of type Microsoft.Web/sites/functions, like this:
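A minimal sketch of such a resource, assuming a C# script (run.csx) HTTP-triggered function and placeholder names, might look like the following:

{
  "type": "Microsoft.Web/sites/functions",
  "apiVersion": "2015-08-01",
  "name": "[concat(parameters('functionAppName'), '/MyHttpFunction')]",
  "dependsOn": [
    "[resourceId('Microsoft.Web/sites', parameters('functionAppName'))]"
  ],
  "properties": {
    "config": {
      "disabled": false,
      "bindings": [
        { "name": "req", "type": "httpTrigger", "direction": "in", "authLevel": "function" },
        { "name": "res", "type": "http", "direction": "out" }
      ]
    },
    "files": {
      "run.csx": "[parameters('functionFileContents')]"
    }
  }
}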

There are two important parts to this.

First, the config property is essentially the contents of the function.json file. It includes the list of bindings for the function, and in the example above it also includes the disabled property.

Second, the files property is an object that contains key-value pairs representing each file to deploy. The key represents the filename, and the value represents the full contents of the file. This only really works for text files, so this deployment method is probably not the right choice for precompiled functions and other binary files. Also, the file needs to be inlined within the template, which may quickly get unwieldy for larger function files—and even for smaller files, the file needs to be escaped as a JSON string. This can be done using an online tool like this, or you could use a script to do the escaping and pass the file contents as a parameter into the template deployment.

Importantly, in my testing I found that using this method to deploy over an existing function will remove any files that are not declared in the files list, so be careful when testing this approach if you’ve modified the function or added any files through the portal or elsewhere.

Examples

There are many different ways you can insert your function file into the template, but one of the ways I tend to use is a PowerShell script. Inside the script, we can read the contents of the file into a string, and create a HashTable for the ARM template deployment parameters:
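A rough sketch of such a script, assuming the function code lives in run.csx and the template exposes functionAppName and functionFileContents parameters:

# Read the function source file into a single string
$functionFileContents = Get-Content -Path '.\run.csx' -Raw

# Build the parameters hashtable for the ARM template deployment
$templateParameters = @{
    functionAppName      = 'my-function-app'      # placeholder value
    functionFileContents = $functionFileContents
}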

Then we can use the New-AzureRmResourceGroupDeployment cmdlet to execute the deployment, passing in $templateParameters to the -TemplateParameterObject argument.
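For example (the resource group and template file names are placeholders):

New-AzureRmResourceGroupDeployment `
    -ResourceGroupName 'my-resource-group' `
    -TemplateFile '.\azuredeploy.json' `
    -TemplateParameterObject $templateParameters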

You can see the full example here.

Of course, if you have a function that doesn’t change often then you could instead manually convert the file into a JSON-encoded string using a tool like this one, and paste the function right into the ARM template. To see a full example of how this can be used, check out this example ARM template from a previous blog article I wrote.

When to Use It

Deploying a function through an ARM template can make sense when you have a very simple function that is comprised of one, or just a few, files to be deployed. In particular, if you already deploy the function app itself through the ARM template then this might be a natural extension of what you’re doing.

This type of deployment can also make sense if you’re wanting to quickly deploy and test a function and don’t need some of the more complex deployment-related features like control over handling locked files. It’s also a useful technique to have available for situations where a full deployment script might be too heavyweight.

However, for precompiled functions, functions that have binary files, and for complex deployments, it’s probably better to use another deployment mechanism. Nevertheless, I think it’s useful to know that this is a tool in your Azure Functions toolbox.

Provisioning complex Modern Sites with Azure Functions and Flow – Part 2 – Create and Apply Template

In the previous blog here, we got an overview of the high-level architecture of a complex modern team site provisioning process. In this blog, we will look at step 1 of the process – the Create and Apply Template process – in detail.

Before that, below are a few links to earlier blogs, as a refresher, on the prerequisites for this blog.

  1. Set up a Graph App to call Graph Service using App ID and Secret – link
  2. Sequencing HTTP Trigger Azure Functions for simultaneous calls – link
  3. Adding and Updating owners using Microsoft Graph Async calls – link

Overview

The Create and Apply Template process aims at the following

1. Create a blank modern team site using Groups Template (Group#0 Site template)

2. Apply the provisioning template on the created site.

Step 1 : Create a blank Modern team site

For creating a modern team site using CSOM, we will use the TeamSiteCollectionCreationInformation class of OfficeDevPnP. Before we create the site, we will make sure the site doesn’t already exist.

Note: There is an issue with the Site Assets library not getting initialised when the site is created using the below code. Hence, a call to ensure the Site Assets library (EnsureSiteAssetsLibrary) is necessary.
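The author’s implementation uses CSOM inside the Azure Function; as a rough illustration of the same step, here is a PnP PowerShell sketch (the URLs, title and alias are placeholders):

# Connect to the tenant admin site
Connect-PnPOnline -Url "https://contoso-admin.sharepoint.com" -Credentials (Get-Credential)

# Only create the site if it doesn't already exist
$siteUrl = "https://contoso.sharepoint.com/sites/ProjectX"
$existingSite = Get-PnPTenantSite -Url $siteUrl -ErrorAction SilentlyContinue

if (-not $existingSite) {
    # Creates a group-connected (GROUP#0) modern team site
    New-PnPSite -Type TeamSite -Title "Project X" -Alias "ProjectX" -IsPublic
}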

Step 2:  Apply the Provisioning Template

Note: The apply template process is a long-running process and takes 60-90 minutes to complete for a complex provisioning template with many site columns, content types and libraries. In order to prevent the Azure Function from timing out, it is required to host the Azure Function on an App Service plan instead of a Consumption plan, so that the function is not affected by the 10-minute timeout.

For the Apply Provisioning Template process, use the below steps.

1. Reading the Template

It is important to note that the XMLPnPSchemaFormatter version (in the code below) must match the PnP version used to generate the PnP template. If the version is older, then set the XMLPnPSchemaFormatter to read from the older version. In order to find the version of the PnP template, open the XML file and look at the start of the file.

[Figure: PnP provisioning template version at the start of the XML file]

2. Apply the Template

For applying the template, we will use the ProvisioningTemplateApplyingInformation class of the OfficeDevPnP module. The ProvisioningTemplateApplyingInformation class also has a property called HandlersToProcess, which can be used to invoke particular handlers in the provisioning template process. Below is the code for the same.
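The code itself is CSOM-based; as a rough PnP PowerShell illustration of the same idea, selecting specific handlers while applying the template (the site URL and template path are placeholders):

# Connect to the newly created modern team site
Connect-PnPOnline -Url "https://contoso.sharepoint.com/sites/ProjectX" -Credentials (Get-Credential)

# Apply the provisioning template, restricting which handlers run
Apply-PnPProvisioningTemplate -Path ".\ProvisioningTemplate.xml" -Handlers Fields, ContentTypes, Lists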

After the apply template process is complete, since the flow will have timed out, we will invoke another flow to do the post process by updating a list item in the SharePoint list.

Conclusion

In this blog, we saw how we could create a modern team site and apply the template on it. In the next blog we will finalise the process by making site-specific changes after applying the template.

Processing Azure Event Grid events across Azure subscriptions

Consider a scenario where you need to listen to Azure resource events happening in one Azure subscription from another Azure subscription. A use case for such a scenario can be when you are developing a solution where you listen to events happening in your customers’ Azure subscriptions, and then you need to handle those events from an Azure Function or Logic App running in your subscription.

A solution for such a scenario could be:
1. Create an Azure Function in your subscription that will handle Azure resource events received from Azure Event Grid.
2. Handle event validation in the above function, which is required to perform a handshake with Event Grid.
3. Create an Azure Event Grid subscription in the customers’ Azure subscriptions.

Before I go into details, let’s have a brief overview of Azure Event Grid.

Azure Event Grid is a routing service based on a publish/subscribe model, which is used for developing event-based applications. Event sources publish events, and event handlers can subscribe to these events via Event Grid subscriptions.

Figure 1. Azure event grid publishers and handlers

Azure Event Grid subscriptions can be used to subscribe to system topics as well as custom topics. Various Azure services automatically send events to Event Grid. The system-level event sources that currently send events to Event Grid are Azure subscriptions, resource groups, Event Hubs, IoT Hubs, Azure Media Services, Service Bus, and Blob storage.

You can listen to these events by creating an event handler. Azure Event Grid supports several Azure services and custom webhooks as event handlers. There are a number of Azure services that can be used as event handlers, including Azure Functions, Logic Apps, Event Hubs, Azure Automation, Hybrid Connections, and storage queues.

In this post I’ll focus on using an Azure Function as an event handler to which an Event Grid subscription will send events whenever an event occurs at the level of the whole Azure subscription. You can also create an Event Grid subscription at a resource group level to be notified only for the resources belonging to a particular resource group. Figure 1 above shows the various event sources that can publish events and the various supported event handlers; Azure subscriptions and Azure Functions are the ones used in our solution.

Create an Azure Function in your subscription and handle the validation event from Event Grid

If our Event Grid subscription and function were in the same subscription, then we could have simply created an Event Grid-triggered Azure Function and specified it as the endpoint when creating the Event Grid subscription. However, in our case this cannot be done, as we need to have the Event Grid subscription in the customer’s subscription and the Azure Function in our subscription. Therefore, we will simply create an HTTP-triggered function or a webhook function.

Because we’re not selecting an Event Grid-triggered function, we need to do an extra validation step. At the time of creating a new Azure Event Grid subscription, Event Grid requires the endpoint to prove ownership of the webhook, so that Event Grid can deliver events to that endpoint. For built-in event handlers such as Logic Apps, Azure Automation, and Event Grid-triggered functions, this validation process is not necessary. However, in our scenario, where we are using an HTTP-triggered function, we need to handle the validation handshake.

When an Event Grid subscription is created, it sends a subscription validation event in a POST request to the endpoint. All we need to do is to handle this event, read the request body, read the validationCode property in the data object in the request, and send it back in the response. Once Event Grid receives the same validation code back it knows that endpoint is validated, and it will start delivering events to our function. Following is an example of a POST request that Event Grid sends to the endpoint for validation.
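The exact payload varies, but a subscription validation request has roughly the following shape (the IDs and validation code below are example values only):

[
  {
    "id": "2d1781af-3a4c-4d7c-bd0c-e34b19da4e66",
    "topic": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    "subject": "",
    "data": {
      "validationCode": "512d38b6-c7b8-40c8-89fe-f46f9e9622b6"
    },
    "eventType": "Microsoft.EventGrid.SubscriptionValidationEvent",
    "eventTime": "2018-01-25T22:12:19.4556811Z",
    "metadataVersion": "1",
    "dataVersion": "1"
  }
]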

Our function can check if the eventType is Microsoft.EventGrid.SubscriptionValidationEvent, which indicates it is meant for validation, and send back the value in data.validationCode. In all other scenarios, eventType will be based on the resource on which the event occurred, and the function can process those events accordingly. The subscription validation request also contains a header, aeg-event-type, with the value SubscriptionValidation; you should validate this header as well.

Following is sample code for a Node.js function that handles the validation event and sends back the validation code, thus completing the validation handshake.
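A minimal sketch, assuming the classic (v1) Node.js programming model for Azure Functions; Event Grid expects the validation code echoed back in a validationResponse property:

// HTTP-triggered function that completes the Event Grid validation handshake
module.exports = function (context, req) {
    var events = req.body || [];
    var response = { status: 200, body: '' };

    events.forEach(function (event) {
        if (event.eventType === 'Microsoft.EventGrid.SubscriptionValidationEvent') {
            // Echo the validation code back so Event Grid marks the endpoint as validated
            response.body = { validationResponse: event.data.validationCode };
        } else {
            // All other events are resource events - process them here
            context.log('Received event: ' + event.eventType);
        }
    });

    context.res = response;
    context.done();
};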

Processing Resource Events

To process the resource events, you can filter them on the resourceProvider or operationName properties. For example, the operationName property for a VM create event is set to Microsoft.Compute/virtualMachines/write. The event payload follows a fixed schema as described here. An event for a virtual machine creation looks like below:
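For illustration, a successful VM write event has roughly the following (abridged) shape; the subscription, resource group and VM names are placeholders:

[
  {
    "topic": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    "subject": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myRG/providers/Microsoft.Compute/virtualMachines/myVM",
    "eventType": "Microsoft.Resources.ResourceWriteSuccess",
    "eventTime": "2018-02-08T10:11:12.3456789Z",
    "id": "c3f1c2e4-1d2b-4a5e-9f3a-0e1d2c3b4a5f",
    "data": {
      "correlationId": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",
      "resourceProvider": "Microsoft.Compute",
      "resourceUri": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myRG/providers/Microsoft.Compute/virtualMachines/myVM",
      "operationName": "Microsoft.Compute/virtualMachines/write",
      "status": "Succeeded",
      "subscriptionId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
      "tenantId": "yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy"
    },
    "dataVersion": "2",
    "metadataVersion": "1"
  }
]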

Authentication

The Event Grid subscription, detailed in the next section, should be created with the endpoint pointing to the function URL, including the function key. The event validation done for the handshake also acts as a basic means of authentication. To add an extra layer of authentication, you can generate your own access token and append it to the function URL when specifying the endpoint for the Event Grid subscription. Your function can then also validate this access token before further processing.

Create an Azure Event Grid Subscription in customer’s subscription

A subscription owner/administrator should be able to run an Azure CLI or PowerShell command to create the Event Grid subscription in the customer’s subscription.

Important: This step must be done after the above step of creating the Azure Function is done. Otherwise, when you try to create an Event Grid subscription, and it raises the subscription validation event, Event Grid will not get a valid response back, and the creation of the Event Grid subscription will fail.

You can add filters to your Event Grid subscription to filter the events by subject. Currently, events can only be filtered with text comparison of the subject property value starting with or ending with some text. The subject filter doesn’t support a wildcard or regex search.

Azure CLI or PowerShell

An example Azure CLI command to create an Event Grid Subscription, which receives all the events occurring at subscription level is as below:

Here https://myhttptriggerfunction.azurewebsites.net/api/f1?code= is the URL of the function app.
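A roughly equivalent sketch using the AzureRM PowerShell module, run in the context of the customer’s subscription (the subscription ID and function key are placeholders):

# Select the customer's Azure subscription
Select-AzureRmSubscription -SubscriptionId 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'

# Create a subscription-scoped Event Grid subscription pointing at the HTTP-triggered function
New-AzureRmEventGridSubscription `
    -EventSubscriptionName 'eg-subscription-test' `
    -Endpoint 'https://myhttptriggerfunction.azurewebsites.net/api/f1?code=<function-key>'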

Azure REST API

Instead of asking the customer to run a CLI or PowerShell script to create the Event Grid subscription, you can automate this process by writing another Azure Function that calls the Azure REST API. The API call can be invoked using a service principal with rights on the customer’s subscription.

To create an Event Grid subscription for the customer’s Azure Subscription, you submit the following PUT request:

PUT https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/providers/Microsoft.EventGrid/eventSubscriptions/eg-subscription-test?api-version=2018-01-01

Request Body:

{
  "properties": {
    "destination": {
      "endpointType": "WebHook",
      "properties": {
        "endpointUrl": "https://myhttptriggerfunction.azurewebsites.net/api/f1?code="
      }
    },
    "filter": {
      "isSubjectCaseSensitive": false
    }
  }
}

Hub-Spoke communication using vNet Peering and User Defined Routes

Introduction

Recently, I was working on a solution for a customer where they wanted to implement a Hub-Spoke virtual network topology that enabled the HUB to communicate with its Spoke networks via vNet Peering. They also required the SPOKE networks to be able to communicate with each other but peering between them was NOT allowed.

[Figure: Hub-Spoke virtual network topology]

As we know, vNet peering is non-transitive – which means that even though SPOKE 1 is peered with the HUB network and the HUB is peered with SPOKE 2, this does not enable automatic communication between SPOKE 1 and SPOKE 2 unless they are explicitly peered with each other, which in our requirement we were not allowed to do.

So, let’s explore a couple of options on how we can enable communication between the Spoke networks without peering.

Solutions

There are several ways to implement Spoke to Spoke communication, but in this blog I’d like to provide details of the 2 feasible options that worked for us.

Option 1 is to place a Network Virtual Appliance (NVA), basically a virtual machine with a configured firewall/router, within the HUB and configure it to forward traffic to and from the SPOKE networks.

If you search the Azure Market Place with the keywords “Network Virtual Appliance“, you will be presented with several licensed products that you could install and configure in the HUB network to establish this communication. Configuration of these virtual appliances varies and installation instructions can easily be found on their product websites.

Option 2 is to attach a Virtual Network Gateway to the HUB network and make use of User Defined Routes to enable communication between the SPOKES.

The above information was sourced from this very helpful blog post.

The rest of this blog is a detailed step by step guide and the testing performed for implementing the approach mentioned in Option 2.

Implementation

1.) Create 3 Virtual Networks with non-overlapping IP addresses

  • Log on to the Azure Portal and create the Hub Virtual Network as follows

  • Create the 2 additional virtual networks as the SPOKES with the following settings:


2.) Now that we have the 3 Virtual Networks provisioned, let’s start Peering them as follows:

a.) HubNetwork <> Spoke1Network

b.) HubNetwork <> Spoke2Network

  • Navigate to the Hub Virtual Network and create a new peering with the following settings:

Select the “Allow gateway transit” option.

  • Repeat the above step to create a peering with Spoke2Network as well.

3.) To establish a successful connection, we will have to create a peering to the HUB Virtual Network from each of the SPOKE Networks too

  • Navigate to Spoke1Network and create a new Peering

Notice that when we select the “Use remote gateways” option, we get an error, as we haven’t yet attached a Virtual Network Gateway to the HUB network. Once a gateway has been attached, we will come back to re-configure this.

For now, Do Not select this option and click Create.

  • Repeat the above step for Spoke2 Virtual Network

4.) Let’s now provision a Virtual Network Gateway

  • Before provisioning a gateway, a Gateway Subnet is required within the Hub Virtual Network. To create this, click on the “Subnets” option in the blade of the Hub Virtual Network and then click on “Gateway subnet”.

For the purpose of this demo, we will create a Gateway Subnet with the smallest possible network address space, a /29 CIDR, which provides us with 8 addresses, of which the first and last IP are reserved for protocol conformance and x.x.x.1 – x.x.x.3 for Azure services. For production environments, a Gateway Subnet with at least a /27 address space is advised.

Let’s assume for now that when we provision the Virtual Network Gateway, the internal IP address it gets assigned will be from the 4th address onwards, which in our case would be 10.4.1.4.

  • Provision the Virtual Network Gateway

Create a new Virtual Network Gateway with the following settings:

Ensure that you select the Hub Virtual Network in the Virtual network field which is where we want the Gateway to be attached. Click Create.

  • The Gateway provisioning process may take a while to complete and you will need to wait for the Updating status to disappear. It can take anywhere between 30-45 mins.

5.) Once the Gateway has been provisioned, let’s now go back to the Peering section of each of the SPOKE networks and configure the “Use remote gateways” option

  • Repeat the above step for Spoke2ToHub peering

6.) We will now create the Route Tables and define user routes needed for the SPOKE to SPOKE communication

  • Create 2 new Route tables in the portal with the following settings:


  • Define the User Routes as follows:

In the Address Prefix field, insert the CIDR Subnet address of the Spoke2 Virtual Network which in our case is 10.6.0.0/16

Select Next hop type as Virtual appliance and the Next hop address as the internal address of the Virtual Network Gateway. In our case, we are going to have this set as 10.4.1.4 as mentioned earlier.

  • Repeat this step to create a new Route in the Spoke2RouteTable as well, by inserting the Subnet CIDR address of the Spoke1 Virtual Network (a PowerShell sketch of the same route configuration follows below)
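For reference, a rough AzureRM PowerShell sketch of the Spoke1 route table and route, assuming a hypothetical resource group and location:

# Create the route table for Spoke1 (resource group and location are placeholders)
$routeTable = New-AzureRmRouteTable -Name 'Spoke1RouteTable' `
    -ResourceGroupName 'hub-spoke-rg' -Location 'australiaeast'

# Route Spoke2-bound traffic (10.6.0.0/16) via the gateway's internal IP (10.4.1.4)
Add-AzureRmRouteConfig -RouteTable $routeTable -Name 'ToSpoke2Network' `
    -AddressPrefix '10.6.0.0/16' -NextHopType VirtualAppliance -NextHopIpAddress '10.4.1.4' |
    Set-AzureRmRouteTable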

7.) Let’s now associate these Route tables with our Virtual Networks

  • Navigate to the Spoke1Network and in the “Subnets” section of the blade, select the default subnet

In the Route table field, select Spoke1RouteTable and click Save

  • Repeat the above step to associate Spoke2RouteTable with the Spoke2 Virtual Network

We have now completed the required steps to ensure that both SPOKE Virtual Networks are able to communicate with each other via the HUB

Testing

  • In order to test our configurations, let’s provision a virtual machine in each of the Spoke networks and conduct a simple ping test

1.) Provision a basic Virtual Machine in each of the Spoke networks

2.) Run the following PowerShell command in each VM to allow ICMP ping through the Windows firewall, as this traffic is blocked by default:

New-NetFirewallRule -DisplayName "Allow ICMPv4-In" -Protocol ICMPv4

3.) In my testing the VMs had the following internal IPs:

The VM running in Spoke 1 network: 10.5.0.4

The VM running in Spoke 2 network: 10.6.0.4

Pinging 10.6.0.4 from 10.5.0.4 returns a successful response!

Provisioning complex Modern Sites with Azure Functions and Microsoft Flow – Part 1 – Architecture

In one of my previous blogs here, I discussed creating Office 365 Groups using Azure Functions and Flow. The same process can also be used to provision modern team sites in SharePoint Online, because modern team sites are Office 365 Groups too. However, if you are creating a complex modern team site with lots of libraries, content types, term store associated columns etc., it will be challenging to do it with a single Azure Function.

Thus, in this blog (part 1), we will look at the architecture of a solution to provision a complex modern team site using multiple Azure Functions and Flows. This is an approach that went through four months of validation and testing. There might be other options, but this one worked for a complex team site which takes around 45-90 minutes to provision.

Solution Design

To start with, let’s look at the solution design. The solution consists of two major components:

1. Template Creation – Create a SharePoint Modern Team site to be used as a template and generate a Provisioning template from it

2. Provisioning Process – Create a SharePoint inventory list to run the Flow and Azure Functions. There will be three Azure Functions that run three separate parts of the provisioning lifecycle. More details about the Azure Functions will follow in the upcoming blogs.

Get the Provisioning Template

The first step in the process is to create a clean site that will be used as a reference template site for the provisioning template. In this site, create all the lists, libraries, site columns and content types, and set any other necessary site settings.

In order to make sure that the generated template doesn’t have any elements which are not needed for provisioning, use the following PnP PowerShell cmdlet. The cmdlet below excludes the Application Lifecycle Management (ALM) and Site Security handlers and removes the content type hub association, as these are not required for provisioning.

Get-PnPProvisioningTemplate -Out "" -ExcludeHandlers ApplicationLifecycleManagement, SiteSecurity -ExcludeContentTypesFromSyndication

The output of the above cmdlet is ProvisioningTemplate.xml file which could be applied to new sites for setting up the same SharePoint elements. To know more about the provisioning template file, schema and allowed tags, check the link here.

[Figure: Modern site provisioning – get template process]

Team Site Provisioning Process

The second step in the process is to create and apply the template to a modern SharePoint team site using Flow and Azure Functions. The detailed steps are as follows:

1. Create an Inventory list to capture all the requirements for Site Creation

2. Create two flows

a) Create and Apply Template flow, and

b) Post Provisioning Flow

3. Create three Azure Functions –

a) Create a blank Modern Team Site

b) Apply Provisioning Template on the above site. This is a long running process and can take about 45-90 min for applying a complex template with about 20 libraries, 20-30 site columns and 10-15 content types

Note: Azure Functions on a Consumption plan have a timeout of 10 minutes. Host the Azure Function on an App Service plan for the above to work without issues

c) Post Provisioning to apply changes that are not supported by Provisioning Template such as Creating default folders etc.

Below is the process flow for the provisioning process. It has steps 1 to 11, going from creating the site to applying the template. A brief list of the key steps is as follows:

  1. Call the Create Site flow to start the provisioning process
  2. Call the Create Site Azure Function
  3. Create the modern team site in the Azure Function, set any dependencies required for the apply template step such as navigation items, pages etc., and then return to the flow
  4. Call the Apply Template Azure Function
  5. Get the previously generated ProvisioningTemplate.xml file from a shared location
  6. Apply the template to the newly created modern site. Note: the flow call times out because it cannot wait for such a long-running process
  7. Update the status column in the Site Directory for the post provisioning flow to start
  8. Call the Post Provisioning flow to run the Post Provisioning Azure Function
  9. The Post Provisioning Azure Function completes the remaining SharePoint changes which were not covered by the apply template step, such as setting field default values, creating folders in libraries, and associating default values to taxonomy fields

[Figure: Modern site provisioning – provisioning process flow]

Conclusion:

Hence, in the above blog, we saw how to create a provisioning process to handle complex modern team site creation at a high architectural level. In the upcoming blogs, we will deep dive into the Azure Functions that create the site, apply the template and run the post-processing.

Happy Coding!!!

IaaS Application Migration – Go Live Decision Responsibility Model, High Level Run Sheet and Change Management Life Cycle

Go Live Decision Responsibility Model

A go-live decision model helps assign accountability to key project stakeholders in order to decide whether or not to proceed with go-live on an agreed date. Below is an example responsibility model that can guide you in creating the required decision responsibility model.

[Figure: Example go-live decision responsibility model]

High Level Run Sheet

A run sheet is a list of procedures or events organised in a progressive sequence to execute the required agreed outcome. The sheet below is an example that can be used as part of an application migration to the cloud.

[Figure: Example high-level run sheet]

Change Management Life Cycle

The objective of change management in this context is to ensure that standardised methods and procedures are used for efficient and prompt handling of all changes to control IT infrastructure, in order to minimise the number and impact of any related incidents upon service (ITSM Best Practice).

In this context, below is a simple change management practice model that can be used to control all changes to IT infrastructure in an IaaS application migration.

[Figure: Example change management life cycle]

Summary

Hope you find these examples useful in assisting and completing your application migration to the cloud.