Promoting and Demoting Site pages to News in Modern SharePoint Sites using SPFx extension and Azure Function

The requirement I will be addressing in this blog is how to promote and demote site pages to news articles in Modern SharePoint sites. This approach allows us to promote any site page to news, add approval steps, and demote news articles back to site pages when the news needs to be updated. Once a site page is promoted, the news also shows in the modern News web part.

Solution Approach:

To start with, create a site page. For creating a Modern page using Azure Function, please refer to this blog. After the site page is created, we will be able to use a status column to track the news status and promote a site page to news status. The status column could have three values – draft, pending approval and published.

We will use an SPFx extension to set the values of the status column and call an Azure Function that promotes the site page to a news page using SharePoint Online CSOM.

Promoting a site page to news page

Below are the attributes that need to be set to promote a site page to a news article.

1. Promoted State Column set to 2 – set through SPFx extension
2. First Published date value set to published date – set through SPFx extension
3. Promoted state tag in the news site page to be set to value 2 – done in Azure Function
4. Site page needs to be published – done in Azure Function

For a detailed walkthrough on how to create a custom site page with metadata values, please refer to this blog. After the page is created, set the 'Promoted State' and 'First Published Date' fields on the page's list item.
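
The extension updates these fields on the page's list item; as a rough illustration of the same update, sketched here in PnP PowerShell rather than the extension's TypeScript ($pageItemId is assumed to be the list item id of the newly created page):

# Set the promotion fields on the page's list item in the Site Pages library.
Set-PnPListItem -List "Site Pages" -Identity $pageItemId -Values @{
    "PromotedState"      = 2           # 2 marks the page as a news post
    "FirstPublishedDate" = (Get-Date)  # the published date
}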

The SPFx extension then calls the Azure Function, which promotes the site page to news.

Inside the Azure Function, SharePoint Online CSOM is used to promote the site page to a news article.
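
As a rough PnP PowerShell illustration of what the function does (the real code is C# CSOM; the -Publish switch assumes a recent SharePointPnPPowerShellOnline build):

# Connect with an identity that has edit rights on the site (app-only or stored credentials).
Connect-PnPOnline -Url $siteUrl -Credentials $credentials

# Ensure the promoted state is carried on the page, then publish a major version.
Set-PnPListItem -List "Site Pages" -Identity $pageItemId -Values @{ "PromotedState" = 2 }
Set-PnPClientSidePage -Identity $pageName -Publish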

Demoting a news article to a site page

Below are the attributes that need to be set to demote a news article to a site page.

1. Promoted State Column set to 0 – set through SPFx extension
2. First Published date value set to blank – set through SPFx extension
3. Promoted state tag in the news site page to be set to value 0 – done in Azure Function
4. Site page needs to be published – done in Azure Function

To set the metadata values, the same method calls used above during promotion can be reused, with 'Promoted State' set to 0 and 'First Published Date' cleared. The Azure Function then updates the promoted state tag and republishes the page to complete the demotion.

Conclusion:

Above we saw how we can use an SPFx extension and an Azure Function to promote and demote site pages to news articles in Modern SharePoint sites.

How to quickly copy an Azure Web App between Azure Tenants using ‘Zip Push Deploy’

In the last couple of weeks I've had to copy a bunch of Azure WebApps and Functions from one Azure Tenant to another. I hadn't had to do this for a while and went looking for the quickest and easiest way to accomplish it. As with anything cloud based, things move fast. Some of the methods I found were more onerous and complex than they needed to be. There is of course the Backup option as well; however, for WebApps that is only available on a Standard or higher tier plan. Mine weren't, and I didn't have the desire to uplift just to get that feature.

Overview

In this post I show my method to quickly copy an Azure WebApp from one Azure Tenant to another. I cover copying Azure Functions in another post. My approach is;

  • In the Source Tenant from the WebApp
    • Download the Automation Scripts for the WebApp
    • Using Kudu take a backup of the wwwroot folder
  • In the Target Tenant
    • Create a new Resource from a Template
    • Import the Deployment Automation Scripts from above
    • Modify for any changes, Resource Group, Location etc
    • Use Zip Push Deploy to upload the wwwroot archive and deploy it

Backing up the WebApp in the Source Tenant

Open your WebApp in the Azure Portal. Select Automation Script

WebApp Deployment Script

Download the Automation Script

Save Deployment Script

Select Advanced Tools

Kudu Adv Tools

Select the Site Folder then on the right menu of wwwroot select the download icon and save the backup of the WebApp.

Download WWWRoot Folder 3.png

Expand the Deployment Script archive file from the first step above. The contents will look like those below.

Expand the Deploy Script Archive.PNG

Deploy the WebApp to another Tenant

In the Azure Portal select Create a Resource from the top of the menu list on the left hand side. Type Template in the search box and select Template Deployment then select Create. Select Build your own template in the editor. Select Load File and select the parameters.json file. Then select Load File again and select the template.json file. Select Save.

Load Parameters then Template JSON Files

Make any changes to naming, and provide an existing or new Resource Group for the WebApp. Select Purchase.

New Template Deployment - Change Parameters

The WebApp will be created. Once completed select it from the Resource Group you specified and select Advanced Tools. From the Tools menu select Zip Push Deploy.

Tools Zip Push Deploy

Drag and drop the Zip file with the archive of the wwwroot folder you created earlier.

Drop WebApp ZipFile Export via Kudu

The zip will be processed and the WebApp deployed.
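
As an aside, if you'd rather script this step than drag and drop, the same Zip Push Deploy endpoint can be called directly with the app's deployment credentials; a sketch is below (the app name, file path and credentials are placeholders):

# Upload the wwwroot archive to the target WebApp's Kudu zipdeploy endpoint.
$appName  = "my-target-webapp"
$zipFile  = "C:\Temp\wwwroot.zip"                 # the wwwroot archive saved earlier
$user     = '$my-target-webapp'                   # deployment username (note the leading $)
$password = "<deployment password>"

$token = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("$($user):$($password)"))

Invoke-RestMethod -Uri "https://$appName.scm.azurewebsites.net/api/zipdeploy" `
    -Method POST -InFile $zipFile -ContentType "multipart/form-data" `
    -Headers @{ Authorization = "Basic $token" }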

Deployed WebApp

Selecting the App in the new Tenant we can see it is deployed and running.

App Running.PNG

Hitting the App URL we can see that it is being served.

Deployed App.PNG

This WebApp is the Microsoft Identity Manager User Object Report that I detailed in this post here.

Summary

In less than 10 minutes the WebApp is copied. No modifying JSON files, no long command lines, no FTP clients. Pretty simple. In the next post I'll detail how I copied Azure Functions using a similar process.

Keep in mind that if your WebApp uses Application Settings, Key Vault references, Managed Service Identity or similar options, you'll need to add those settings and certificates/credentials in the target environment.

Demystifying Managed Service Identities on Azure

Managed service identities (MSIs) are a great feature of Azure that are being gradually enabled on a number of different resource types. But when I’m talking to developers, operations engineers, and other Azure customers, I often find that there is some confusion and uncertainty about what they do. In this post I will explain what MSIs are and are not, where they make sense to use, and give some general advice on how to work with them.

What Do Managed Service Identities Do?

A managed service identity allows an Azure resource to identify itself to Azure Active Directory without needing to present any explicit credentials. Let’s explain that a little more.

In many situations, you may have Azure resources that need to securely communicate with other resources. For example, you may have an application running on Azure App Service that needs to retrieve some secrets from a Key Vault. Before MSIs existed, you would need to create an identity for the application in Azure AD, set up credentials for that application (also known as creating a service principal), configure the application to know these credentials, and then communicate with Azure AD to exchange the credentials for a short-lived token that Key Vault will accept. This requires quite a lot of upfront setup, and can be difficult to achieve within a fully automated deployment pipeline. Additionally, to maintain a high level of security, the credentials should be changed (rotated) regularly, and this requires even more manual effort.

With an MSI, in contrast, the App Service automatically gets its own identity in Azure AD, and there is a built-in way that the app can use its identity to retrieve a token. We don’t need to maintain any AD applications, create any credentials, or handle the rotation of these credentials ourselves. Azure takes care of it for us.

It can do this because Azure can identify the resource – it already knows where a given App Service or virtual machine ‘lives’ inside the Azure environment, so it can use this information to allow the application to identify itself to Azure AD without the need for exchanging credentials.

What Do Managed Service Identities Not Do?

Inbound requests: One of the biggest points of confusion about MSIs is whether they are used for inbound requests to the resource or for outbound requests from the resource. MSIs are for the latter – when a resource needs to make an outbound request, it can identify itself with an MSI and pass its identity along to the resource it’s requesting access to.

MSIs pair nicely with other features of Azure resources that allow for Azure AD tokens to be used for their own inbound requests. For example, Azure Key Vault accepts requests with an Azure AD token attached, and it evaluates which parts of Key Vault can be accessed based on the identity of the caller. An MSI can be used in conjunction with this feature to allow an Azure resource to directly access a Key Vault-managed secret.

Authorization: Another important point is that MSIs are only directly involved in authentication, and not in authorization. In other words, an MSI allows Azure AD to determine what the resource or application is, but that by itself says nothing about what the resource can do. Authorisation is handled by the target service: for some Azure resources this is Azure's own Identity and Access Management system (IAM). Key Vault is one exception – it maintains its own access control system, and is managed outside of Azure's IAM. For non-Azure resources, we could communicate with any authorisation system that understands Azure AD tokens; an MSI will then just be another way of getting a valid token that an authorisation system can accept.

Another important point to be aware of is that the target resource doesn’t need to run within the same Azure subscription, or even within Azure at all. Any service that understands Azure Active Directory tokens should work with tokens for MSIs.

How to Use MSIs

Now that we know what MSIs can do, let’s have a look at how to use them. Generally there will be three main parts to working with an MSI: enabling the MSI; granting it rights to a target resource; and using it.

  1. Enabling an MSI on a resource. Before a resource can identify itself to Azure AD, it needs to be configured to expose an MSI. The way that you do this will depend on the specific resource type you're enabling the MSI on. In App Services, an MSI can be enabled through the Azure Portal, through an ARM template, or through the Azure CLI, as documented here. For virtual machines, an MSI can be enabled through the Azure Portal or through an ARM template. Other MSI-enabled services have their own ways of doing this.

  2. Granting rights to the target resource. Once the resource has an MSI enabled, we can grant it rights to do something. The way that we do this is different depending on the type of target resource. For example, Key Vault requires that you configure its Access Policies, while to use the Event Hubs or the Azure Resource Manager APIs you need to use Azure’s IAM system. Other target resource types will have their own way of handling access control.

  3. Using the MSI to issue tokens. Finally, now that the resource’s MSI is enabled and has been granted rights to a target resource, it can be used to actually issue tokens so that a target resource request can be issued. Once again, the approach will be different depending on the resource type. For App Services, there is an HTTP endpoint within the App Service’s private environment that can be used to get a token, and there is also a .NET library that will handle the API calls if you’re using a supported platform. For virtual machines, there is also an HTTP endpoint that can similarly be used to obtain a token. Of course, you don’t need to specify any credentials when you call these endpoints – they’re only available within that App Service or virtual machine, and Azure handles all of the credentials for you.
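
As a sketch of what this looks like from PowerShell running inside an App Service, requesting a token for Key Vault (the api-version shown was current at the time of writing):

# MSI_ENDPOINT and MSI_SECRET are injected into the App Service environment once an MSI is enabled.
$resource = "https://vault.azure.net"
$tokenUri = "$($env:MSI_ENDPOINT)?resource=$resource&api-version=2017-09-01"

$tokenResponse = Invoke-RestMethod -Method Get -Uri $tokenUri -Headers @{ Secret = $env:MSI_SECRET }
$accessToken = $tokenResponse.access_token   # bearer token to present to Key Vault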

Finding an MSI’s Details and Listing MSIs

There may be situations where we need to find our MSI’s details, such as the principal ID used to represent the application in Azure AD. For example, we may need to manually configure an external service to authorise our application to access it. As of April 2018, the Azure Portal shows MSIs when adding role assignments, but the Azure AD blade doesn’t seem to provide any way to view a list of MSIs. They are effectively hidden from the list of Azure AD applications. However, there are a couple of other ways we can find an MSI.

If we want to find a specific resource’s MSI details then we can go to the Azure Resource Explorer and find our resource. The JSON details for the resource will generally include an identity property, which in turn includes a principalId:

Screenshot 1

That principalId is the object ID of the service principal, and can be used when creating role assignments.

Another way to find and list MSIs is to use the Azure AD PowerShell cmdlets. The Get-AzureRmADServicePrincipal cmdlet will return back a complete list of service principals in your Azure AD directory, including any MSIs. MSIs have service principal names starting with https://identity.azure.net, and the ApplicationId is the client ID of the service principal:
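
For example, to pull out just the MSIs and their client IDs:

# MSIs are the service principals whose names start with https://identity.azure.net
Get-AzureRmADServicePrincipal |
    Where-Object { $_.ServicePrincipalNames -like "https://identity.azure.net*" } |
    Select-Object DisplayName, ApplicationId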

Screenshot 2

Now that we’ve seen how to work with an MSI, let’s look at which Azure resources actually support creating and using them.

Resource Types with MSI and AAD Support

As of April 2018, only a small number of Azure services support creating MSIs, and all of them are currently in preview; Microsoft maintains a list of them here. Additionally, while it's not yet listed on that page, Azure API Management also supports MSIs – this is primarily for handling Key Vault integration for SSL certificates.

One important note is that for App Services, MSIs are currently incompatible with deployment slots – only the production slot gets assigned an MSI. Hopefully this will be resolved before MSIs become fully available and supported.

As I mentioned above, MSIs are really just a feature that allows a resource to assume an identity that Azure AD will accept. However, in order to actually use MSIs within Azure, it’s also helpful to look at which resource types support receiving requests with Azure AD authentication, and therefore support receiving MSIs on incoming requests. Microsoft maintain a list of these resource types here.

Example Scenarios

Now that we understand what MSIs are and how they can be used with AAD-enabled services, let’s look at a few example real-world scenarios where they can be used.

Virtual Machines and Key Vault

Azure Key Vault is a secure data store for secrets, keys, and certificates. Key Vault requires that every request is authenticated with Azure AD. As an example of how this might be used with an MSI, imagine we have an application running on a virtual machine that needs to retrieve a database connection string from Key Vault. Once the VM is configured with an MSI and the MSI is granted Key Vault access rights, the application can request a token and can then get the connection string without needing to maintain any credentials to access Key Vault.

API Management and Key Vault

Another great example of an MSI being used with Key Vault is Azure API Management. API Management creates a public domain name for the API gateway, to which we can assign a custom domain name and SSL certificate. We can store the SSL certificate inside Key Vault, and then give Azure API Management an MSI and access to that Key Vault secret. Once it has this, API Management can automatically retrieve the SSL certificate for the custom domain name straight from Key Vault, simplifying the certificate installation process and improving security by ensuring that the certificate is not directly passed around.

Azure Functions and Azure Resource Manager

Azure Resource Manager (ARM) is the deployment and resource management system used by Azure. ARM itself supports AAD authentication. Imagine we have an Azure Function that needs to scan our Azure subscription to find resources that have recently been created. In order to do this, the function needs to log into ARM and get a list of resources. Our Azure Functions app can expose an MSI, and so once that MSI has been granted reader rights on the resource group, the function can get a token to make ARM requests and get the list without needing to maintain any credentials.

App Services and Event Hubs/Service Bus

Event Hubs is a managed event stream. Communication to both publish onto, and subscribe to events from, the stream can be secured using Azure AD. An example scenario where MSIs would help here is when an application running on Azure App Service needs to publish events to an Event Hub. Once the App Service has been configured with an MSI, and Event Hubs has been configured to grant that MSI publishing permissions, the application can retrieve an Azure AD token and use it to post messages without having to maintain keys.

Service Bus provides a number of features related to messaging and queuing, including queues and topics (similar to queues but with multiple subscribers). As with Event Hubs, an application could use its MSI to post messages to a queue or to read messages from a topic subscription, without having to maintain keys.

App Services and Azure SQL

Azure SQL is a managed relational database, and it supports Azure AD authentication for incoming connections. A database can be configured to allow Azure AD users and applications to read or write specific types of data, to execute stored procedures, and to manage the database itself. When coupled with an App Service with an MSI, Azure SQL’s AAD support is very powerful – it reduces the need to provision and manage database credentials, and ensures that only a given application can log into a database with a given user account. Tomas Restrepo has written a great blog post explaining how to use Azure SQL with App Services and MSIs.

Summary

In this post we’ve looked into the details of managed service identities (MSIs) in Azure. MSIs provide some great security and management benefits for applications and systems hosted on Azure, and enable high levels of automation in our deployments. While they aren’t particularly complicated to understand, there are a few subtleties to be aware of. As long as you understand that MSIs are for authentication of a resource making an outbound request, and that authorisation is a separate thing that needs to be managed independently, you will be able to take advantage of MSIs with the services that already support them, as well as the services that may soon get MSI and AAD support.

Deploy active/active FortiGate NGFW in Azure

I recently was tasked with deploying two Fortinet FortiGate firewalls in Azure in a highly available active/active model. I quickly discovered that there are currently only two deployment types available in the Azure marketplace: a single VM deployment and a high availability deployment (which is an active/passive model and wasn't what I was after).

FG NGFW Marketplace Options

I did some digging around on the Fortinet support sites and discovered that you can achieve an active/active model in Azure using dual load balancers (a public and an internal Azure load balancer), as indicated in this Fortinet document: https://www.fortinet.com/content/dam/fortinet/assets/deployment-guides/dg-fortigate-high-availability-azure.pdf.

Deployment

To achieve an active/active model you must deploy two separate FortiGates using the single VM deployment option and then deploy the Azure load balancers separately.

I will not be going through how to deploy the FortiGates and the required VNets, subnets, route tables, etc., as that information can be found here on Fortinet's support site: http://cookbook.fortinet.com/deploying-fortigate-azure/.

NOTE: When deploying the FortiGates, ensure each one is deployed into different frontend and backend subnets, otherwise the route tables will end up routing all traffic to one FortiGate.

Once you have two FortiGates, a public load balancer and an internal load balancer deployed in Azure, you are ready to configure the FortiGates.

Configuration

NOTE: Before proceeding, ensure you have configured static routes for all your Azure subnets on each FortiGate, otherwise the FortiGates will not be able to route Azure traffic correctly.

Outbound traffic

Directing all internet traffic from Azure via the FortiGates requires some configuration on the Azure internal load balancer and a user defined route.

  1. Create a load balance rule with:
    • Port: 443
    • Backend Port: 443
    • Backend Pool:
      1. FortiGate #1
      2. FortiGate #2
    • Health probe: Health probe port (e.g. port 22)
    • Session Persistence: Client IP
    • Floating IP: Enabled
  2. Repeat step 1 for port 80 and any other ports you require
  3. Create an Azure route table with a default route to the Azure internal load balancer IP address (see the PowerShell sketch after this list)
  4. Assign the route table to the required Azure subnets
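
A rough Azure PowerShell sketch of the route table from steps 3 and 4 (names, resource group, location and the load balancer IP are placeholders):

# Default route sending all outbound traffic to the internal load balancer's frontend IP.
$rt = New-AzureRmRouteTable -Name "rt-via-fortigate" -ResourceGroupName "MyResourceGroup" -Location "australiaeast"

Add-AzureRmRouteConfig -RouteTable $rt -Name "DefaultViaFortiGate" `
    -AddressPrefix "0.0.0.0/0" -NextHopType VirtualAppliance -NextHopIpAddress "10.0.2.100" |
    Set-AzureRmRouteTable

# The route table is then associated with each required subnet (step 4) on the virtual network.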

IMPORTANT: In order for the load balance rules to work you must add a static route on each FortiGate for IP address 168.63.129.16. This is required for the Azure health probe to communicate with the FortiGates and perform health checks.

FG Azure Health Probe Cfg

Once complete the outbound internet traffic flow will be as follows:

FG Internet Traffic Flow

Inbound traffic

Publishing something like a web server to the internet through the FortiGates requires some configuration on the Azure public load balancer.

Let’s say I have a web server that resides on my Azure DMZ subnet that hosts a simple website on HTTPS/443. For this example the web server has IP address: 172.1.2.3.

  1. Add an additional public IP address to the Azure public load balancer (for this example let’s say the public IP address is: 40.1.2.3)
  2. Create a load balance rule with:
    • Frontend IP address: 40.1.2.3
    • Port: 443
    • Backend Port: 443
    • Backend Pool:
      1. FortiGate #1
      2. FortiGate #2
    • Session Persistence: Client IP
  3. On each FortiGate create a VIP address with:
    • External IP Address: 40.1.2.3
    • Mapped IP Address: 172.1.2.3
    • Port Forwarding: Enabled
    • External Port: 443
    • Mapped Port: 443

FG WebServer VIP Cfg

You can now create a policy on each FortiGate to allow HTTPS to the VIP you just created; HTTPS traffic will then be allowed through to your web server.

For details on how to create policies/VIPs on FortiGate’s refer to the Fortinet support website: http://cookbook.fortinet.com.

Once complete the traffic flow to the web server will be as follows:

FG Web Traffic Flow

Building a Teenager Notification Service using Azure IoT an Azure Function, Microsoft Flow, Mongoose OS and a Micro Controller

Introduction

This is the third and final post on my recent experiments integrating small micro controllers (ESP8266) running Mongoose OS with Azure IoT Services.

In the first post in this series I detailed creating the Azure IoT Hub and registering a NodeMCU (ESP8266 based) micro controller with it. The post detailing that can be found here: Automating the creation of Azure IoT Hubs and the registration of IoT Devices with PowerShell and VS Code.

In the second post I detailed communicating with the micro controller (IoT device) using MQTT and PowerShell. That post can be found here: Integrating Azure IoT Devices with MongooseOS MQTT and PowerShell.

Now that we have end to end functionality it’s time to do something with it.

I have two teenagers who've been trained well to use headphones. Whilst this is great for not having to hear the popular teen bands of today, and the numerous Facetime, Skype, Snapchat and similar communications, it does come with the downside of them not hearing us when we require their attention and they are at the other end of the house. I figured that, to avoid the need to shout to get their attention, a simple visual notification could be built to achieve the desired result. Different colours for different requests? Sure, why not. This is that project, and the end device looks like this.

Overview

Quite simply the solution goes like this;

  • With the Microsoft Flow App on our phones we can select the Flow that will send a notification

2018-03-25 18.56.38 500px.png

  • Choose the Notification intent which will drive the color displayed on the Teenager Notifier.

2018-03-25 18.56.54 500px

  • The IoT Device will then display the color in a revolving pattern as shown below.

The Architecture

The end to end architecture of the solution looks like this.

IoT Cloud to Device - NeoPixel - 640px

Using the Microsoft Flow app on a mobile device gives a nice, simple interface that can be used to trigger the notification. Microsoft Flow sends the desired message, along with the details of the device to send it to, to an Azure Function that puts a message into the MQTT queue associated with the Mongoose OS driven Azure IoT Device (an ESP8266 based NodeMCU micro controller) connected to an Azure IoT Hub. The Mongoose OS driven Azure IoT Device takes the message and displays the visual notification in the color associated with the notification type chosen in Microsoft Flow at the beginning of the process.

The benefits of this architecture are;

  • the majority of the orchestration happens in Azure, yet thanks to Azure IoT and MQTT no inbound connection is required where the IoT device resides. No port forwarding / inbound rules to configure on your home router. The micro controller is registered with our Azure IoT Hub and makes an outbound connection to subscribe to its MQTT topic. As soon as there is a message for the device it triggers its logic and does what we’ve configured
  • You can initiate a notification from anywhere in the world (most simply using the Flow mobile app as shown above)
  • And using Mongoose OS allows the device to be managed remotely via the Mongoose OS Dashboard. This means that if I want to add an additional notification (color), I can update Flow with a new option to select and update the configuration on the Notifier device to display the new color when it receives such a command.

Solution Prerequisites

This post builds on the previous two. As such the prerequisites are;

  • you have an Azure account and have set up an IoT Hub, and registered an IoT Device with it
  • your IoT device is a micro controller that can run Mongoose OS. I'm using a NodeMCU ESP8266 that I purchased from Amazon here.
  • an RGB LED light ring (generic Neopixel), which I purchased from Amazon here.
  • 3D printer if you want to print an enclosure for the IoT device

With those sorted we can;

  • Install and configure my Mongoose OS Application. It includes all the necessary libraries and sample config to integrate with a Neopixel, Azure IoT, Mongoose Dashboard etc.
  • Create the Azure PowerShell Function App that will publish the MQTT message the IoT Device will consume
  • Create the Microsoft Flow that will kick off the notifications and give us a nice interface to send what we want
  • Build an enclosure for our IoT device

How to build this project

The order I’ve detailed the elements of the architecture here is how I’d recommend approaching this project. I’d also recommend working through the previous two blog posts linked at the beginning of this one as that will get you up to speed with Mongoose OS, Azure IoT Hub, Azure IoT Devices, MQTT etc.

Installing the AzureIoT-Neopixel-js Application

I’ve made the installation of my solution easy by creating a Mongoose OS Application. It includes all the libraries required and sample code for the functionality I detail in this post.

Clone it from Github here and put it into your .mos directory, which should be in the root of your Windows profile directory, e.g. C:\Users\Darren\.mos\apps-1.26. Then from the MOS Configuration page select Projects, select AzureIoT-Neopixel-JS, then select the Rebuild App spanner icon from the toolbar. When it completes, select the Flash icon from the toolbar. When your micro controller restarts, select Device Setup from the top menu bar and configure it for your WiFi network. Finally, configure your device for Azure MQTT as per the details in my first post in this series (which will also require you to create an Azure IoT Hub if you don't already have one and register your micro controller with it as an Azure IoT Device). You can then test sending a message to the device using PowerShell or Device Explorer as shown in post two in this series.

I have the Neopixel connected to D1 (GPIO 5) on the NodeMCU. If you use a different micro controller and a different GPIO then update the init.js configuration accordingly.

Creating the Azure Function App

Now that you have the micro controller configured and working with Azure IoT, let's abstract the sending of the MQTT messages into an Azure Function. We can't send MQTT messages from Microsoft Flow, so I've created an Azure Function that uses the AzureIoT PowerShell module to do that.

Note: You can send HTTP messages to an Azure IoT device, but:

"Under current HTTPS guidelines, each device should poll for messages every 25 minutes or more. MQTT and AMQP support server push when receiving cloud-to-device messages."

That doesn't suit my requirements.

I'm using the Managed Service Identity functionality to access the Azure Key Vault where credentials for the identity that can interact with my Azure IoT Hub are stored. To enable and use that (which I highly recommend), follow the instructions in my blog post here to configure MSI on an Azure Function App. If you don't already have an Azure Key Vault, then follow my blog post here to quickly set one up using PowerShell.

Azure PowerShell Function App

The Function App is an HTTP Trigger Based one using PowerShell. In order to interact with Azure IoT Hub and integrate with the IoT Device via Azure I’m using the same modules as in the previous posts. So they need to be located within the Function App.

Specifically they are;

  • AzureIoT v1.0.0.5
  • AzureRM v5.5.0
  • AzureRM.IotHub v3.1.0
  • AzureRM.profile v4.2.0

I’ve put them in a bin directory (which I created) under my Function App. Even though AzureRM.EventHub is shown below, it isn’t required for this project. I uploaded the modules from my development laptop (C:\Program Files\WindowsPowerShell\Modules) using WinSCP after configuring Deployment Credentials under Platform Features for my Azure Function App. Note the path relative to mine as you will need to update the Function App script to reflect this path so the modules can be loaded.

Azure Function PS Modules.PNG

The configuration in WinSCP to upload to the Function App for me is

WinSCP Configuration

Edit the AzureRM.IotHub.psm1 file

By default, AzureRM.IotHub.psm1 will try to load the older version of AzureRM.Profile that ships with Azure Functions. As we've uploaded the version we need, comment out the following lines in AzureRM.IotHub.psm1 so that it doesn't do a version check. Put a # in front of the lines indicated below, which are near the start of the module; the AzureRM.IotHub.psm1 file can be edited via WinSCP and Notepad.

#$module = Get-Module AzureRM.Profile
#if ($module -ne $null -and $module.Version.ToString().CompareTo("4.2.0") -lt 0)
#{
# Write-Error "This module requires AzureRM.Profile version 4.2.0. An earlier version of AzureRM.Profile is imported in the current PowerShell session. Please open a new session before importing this module. This error could indicate that multiple incompatible versions of the Azure PowerShell cmdlets are installed on your system. Please see https://aka.ms/azps-version-error for troubleshooting information." -ErrorAction Stop
#}
#elseif ($module -eq $null)
#{
# Import-Module AzureRM.Profile -MinimumVersion 4.2.0 -Scope Global
#}

HTTP Trigger Azure PowerShell Function App

Here is my Function App Script. You’ll need to update it for the location of your PowerShell Modules (I created a bin directory under my Function App D:\home\site\wwwroot\myFunctionApp\bin), your Key Vault details and the user account you will be using. The User account will need permissions to your Key Vault to retrieve the password (credential) for the account you will run the process as and to your Azure IoT Hub.

You can test the Function App from within the Azure Portal where you created the Function App as shown below. Update for the names of the IoT Hub, IoT Device and the Resource Group in your associated environment.

Testing Function App.PNG

Microsoft Flow Configuration

The Flow is very simple. A manual button and a resulting HTTP Post.

Microsoft Flow Config 1

For the message I have configured a list. This is where you can choose the color of the notification.

Manual Trigger.PNG

The Action is an HTTP Post to the Azure Function URL. The body has the configuration for the IoTHub, IoTDevice, Resource Group Name, IoTKeyName and the Message selected from the manual button above. You will have the details for those settings from your initial testing via the Function App (or PowerShell).

The Azure Function URL you get from the top of the Azure Portal screen where you configure your Function App. Look for “Get Function URL”.
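
As an illustration, a test call from PowerShell might look like the following; the URL is a placeholder, and the JSON property names are assumptions that must match whatever your Function App script expects:

$functionUrl = "https://<your-function-app>.azurewebsites.net/api/<function-name>?code=<function-key>"

$body = @{
    IoTHub            = "MyIoTHub"            # name of your Azure IoT Hub
    IoTDevice         = "TeenagerNotifier"    # the registered IoT device
    ResourceGroupName = "MyIoTResourceGroup"  # resource group containing the IoT Hub
    IoTKeyName        = "iothubowner"         # IoT Hub policy/key name
    Message           = "Red"                 # notification colour chosen in Flow
} | ConvertTo-Json

Invoke-RestMethod -Uri $functionUrl -Method Post -ContentType "application/json" -Body $body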

HTTP Post

Testing

Now that you have all the elements configured, install the Microsoft Flow app on your mobile if you don't already have it (available from the Apple iOS App Store and Android Google Play). Log in with the account you created the Flow as, select the Flow, select the message, and you're done. Depending on your internet connectivity you should see the notification displayed on the Notifier device in less than 10 seconds.

Case 3D Printer Files

Lastly, we need to make it look all pretty and make the notification really pop. I’ve created a housing for the neopixel that sits on top of a little case for the NodeMCU.

As you can see from the final unit, I’ve printed the neopixel holder in a white PLA that allows the RGB LED light to be diffused nicely and display prominently even in brightly lit conditions.

Neopixel Enclosure

I’ve printed the base that holds the micro controller in a different color. The top fits snugly through the hole in the micro controller case. The wires from the neopixel to connect it to the micro controller slide through the shaft of the top housing. It also has a backplate that attaches to the back of the enclosure that I secure with a little hot glue.

Here is a link to the Neopixel (WS2812) 16 RGB LED light holder I created on Thingiverse.

NodeMCU Enclosure.PNG

Depending on your micro controller you will also need an appropriately sized case for that. I’ve designed the neopixel light holder top assembly to sit on top of my micro controller case. Also available on Thingiverse here.

Summary

Using a combination of Azure IoT, Azure PaaS Services, Mongoose OS and a cheap micro controller with an RGB LED light ring we have a very versatile Internet of Things device. The application here is a simple visual notifier. A change of output device or even in conjunction with an input device could change the application, whilst still re-using all the elements of the solution that glues it all together (micro-controller, Mongoose OS, Azure IoT, Azure PaaS). Did you build one? Did you use this as inspiration to build something else? Let me know.

Creating SharePoint Modern Team sites using Site Scripts, Flow and Azure Function

With Site Scripts and Site Designs, it is possible to invoke custom PnP provisioning for Modern Team Sites from a Site Script. In the previous blog, we saw how we can provision simple modern sites using Site Script JSON. However, there are some scenarios where we would need a custom provisioning template or process, such as those listed below:

  • Auto-deploying custom web components such as SPFx extension apps
  • Complex site templates which can't be configured through the JSON schema
  • Complex document libraries and content types beyond what the JSON schema provides. For an idea of the supported items in the OOB schema, please check here.

Hence, in this blog, we will see how we can use Flow and Azure Functions to apply more complex templates and customization on SharePoint Modern Sites.

Software Prerequisites:

  • Azure Subscription
  • Office 365 subscription or MS Flow subscription
  • PowerShell 3.0 or above
  • SharePoint Online Management Shell
  • PnP PowerShell
  • Azure Storage Emulator*
  • Postman*

* Optional, helpful for Dev and Testing

High Level Overview Steps:

1. Create an Azure Queue Storage Container
2. Create a Microsoft Flow with Request Trigger
3. Put an item into Azure Queue from Flow
4. Create an Azure Function to trigger from the Queue
5. Use the Azure Function to apply the PnP Provisioning template

Detailed Steps:

This can get quite elaborate, so hold on!!

Azure

1. Create an Azure Queue Store.

Note: For dev and testing, you can use the Azure Storage Emulator to emulate the queue requirements. For more details on configuring the Azure Storage Emulator on your system, please check here.

Microsoft Flow

2. Create a Microsoft Flow with a Request trigger and add a JSON schema for the request body, as shown below.
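
A minimal request body schema, assuming the only value the Flow needs is the target site URL (this matches the test body used in step 12):

{
  "type": "object",
  "properties": {
    "webUrl": {
      "type": "string"
    }
  }
}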

Note: If you have an Office 365 Enterprise E3 license, you get a free Flow subscription; otherwise you can register for a trial here.

3. Enter a message into the Queue in the Flow using the “Add message to Azure Queue” action.

FlowSiteDesignAzureQueue

Note: The flow trigger URL has an access key which allows it to be called from any tenant. For security reasons, please don’t share it with any third parties unless needed.

Custom SharePoint Site Template (PowerShell)

4. Next, create a template site for provisioning and make all the configurations that you will need for the initial implementation. Then create the template using PnP PowerShell, with the provisioning command shown below.

Get-PnPProvisioningTemplate -Out .\TestCustomTeamTemplate.xml -ExcludeHandlers Navigation, ApplicationLifecycleManagement -IncludeNativePublishingFiles

Note: The ExcludeHandlers option depends on your requirements, but the configuration in the above command will avoid a lot of issues which you could otherwise encounter while applying the template later. So, use the above as a starting template.

Note: Another quick tip, if you have any custom theme applied on the template site, then the provisioning template doesn’t carry it over. You might have to apply the theme again!

5. Export and save the PowerShell PnP Module to a local drive location. We will use it later in the Azure Function.

Save-Module -Name SharePointPnPPowerShellOnline -Path "[Location on your system or shared drive]"

SharePoint

6. Register an App Id and App Secret at https://yourtenant.sharepoint.com/_layouts/appregnew.aspx and provide the below settings.
7. Copy the App Id and Secret which we will use later for Step 9 and 10. Below is a screenshot of the App registration page.
8. Trust the app at https://yourtenant-admin.sharepoint.com/_layouts/appinv.aspx by providing permission request XML like the below. Fill in the App Id to get the details of the App.
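
The permission request XML looks like the following; tenant-scoped FullControl is shown here as an assumption, so scope it down if your provisioning only needs site collection rights:

<AppPermissionRequests AllowAppOnlyPolicy="true">
  <AppPermissionRequest Scope="http://sharepoint/content/tenant" Right="FullControl" />
</AppPermissionRequests>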

Azure Function

9. Create a Queue Trigger PowerShell Azure function
10. After the function is created, go to Advanced Tools (Kudu) and create a sub folder "SharePointPnPPowerShellOnline" under site -> wwwroot -> [function_name] -> modules. Upload all the files from the PowerShell module folder saved in the step above into this folder.
11. Add PowerShell along the lines of the below to the Azure Function.
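
A minimal sketch of the queue-triggered function, assuming the queue message is the JSON body posted to the Flow, that the App Id/Secret from steps 6-8 are stored as application settings, and that the provisioning template XML has been uploaded alongside the function (binding name, setting names and paths are placeholders):

# $triggerInput holds the queue message content (the binding name must match function.json).
$message = Get-Content $triggerInput -Raw | ConvertFrom-Json
$webUrl  = $message.webUrl

# The SharePointPnPPowerShellOnline module uploaded to the function's "modules" folder
# is loaded automatically by the Functions runtime.
Connect-PnPOnline -Url $webUrl -AppId $env:SPO_AppId -AppSecret $env:SPO_AppSecret

# Apply the provisioning template exported earlier (path is a placeholder).
Apply-PnPProvisioningTemplate -Path "D:\home\site\wwwroot\ApplyPnPTemplate\TestCustomTeamTemplate.xml"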

12. Test the Function with the below input in PowerShell.

$uri = "[the HTTP POST URL from the Flow's Request trigger created earlier]"
$body = "{webUrl:'somesiteurl'}"
Invoke-RestMethod -Uri $uri -Method Post -ContentType "application/json" -Body $body

PowerShell and JSON

13. Create a Site Script with JSON like the below and add it to a Site Design. For more detailed steps, please check the link here.
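
A minimal site script along these lines (the Flow URL is the Request trigger URL from step 2; the parameters are arbitrary values passed through to the Flow):

{
  "$schema": "schema.json",
  "actions": [
    {
      "verb": "triggerFlow",
      "url": "[the HTTP POST URL of the Flow's Request trigger]",
      "name": "Apply custom provisioning template",
      "parameters": {
        "event": "site creation",
        "product": "SharePoint Online"
      }
    }
  ],
  "bindata": {},
  "version": 1
}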

14. After the above, you are finally ready to run the provisioning process. Yay!!

But before we finish off, one quick tip: when you refresh the site, the changes are not visible immediately. It may take a while, but the template will apply.

Conclusion:

In the above blog we saw how we can apply a custom provisioning template to modern sites using Flow and an Azure Function, triggered from SharePoint site scripts and site designs.

Create Modern Pages and update metadata using SPFx Extensions, SP PnP JS and Azure Functions

Modern site pages (the "Site Page" content type) have a constraint on associating custom metadata with them. In other words, the "Site Page" content type cannot have other site columns added to it, as can be seen below.

SitePageContentTypeMissing

On another note, even though we can create child content types from the Site Page content type, the new site page creation process (screenshot below) doesn't associate the new content type when the page is created, so the fields from the child content type can't be used.

For example, in the below screenshot we have created a new site page, test.aspx, using the "Intranet Site Page" content type, which is a child of the "Site Page" content type. After the page is created, it gets associated with the Site Page content type instead of the Intranet Site Page content type. We can edit it again to associate it with the Intranet Site Page content type, but that adds another step for end users and additional training effort.

Solution Approach:

To overcome the above constraints, we implemented a solution to associate custom metadata into Modern Site Pages creation using SharePoint Framework (SPFx) List View Command Set extension and Azure Function. In this blog, I am going to briefly talk about the approach so it could be useful for anyone trying to do the same.

1. Create a List View Command Item for creating site pages, editing properties of site pages and promoting site pages to news
2. Create an Azure function that will create the Page using SharePoint Online CSOM
3. Call the Azure Function from the SPFx command.

A brief screenshot of the resulting SPFx extension dialog is below.

NewSitePage

Steps:

To override the process for modern page creation, we will use an Azure Function with the SharePoint Online PnP Core CSOM library. On a broad level, the Azure Function does the following:

1. Get the value of the Site Url and Page name from the Query parameters
2. Check if the Site page is absent
3. Create the page if absent
4. Save the page

Note: The page-creation code also needs to check whether the page already exists, as in the sketch below.
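
The function itself is C# using the PnP Core CSOM library; as a rough PnP PowerShell illustration of the same flow, with the site URL and page name coming from the query string parameters:

# Connect to the target site with an identity that can create pages.
Connect-PnPOnline -Url $siteUrl -Credentials $credentials

# Check whether the page already exists in the Site Pages library.
$existingPage = Get-PnPFile -Url "SitePages/$pageName.aspx" -ErrorAction SilentlyContinue

if ($null -eq $existingPage) {
    # Create (and save) the modern page only if it is absent.
    Add-PnPClientSidePage -Name $pageName
}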

Next, create an SPFx extension List View Command Set and an SP dialog component that will allow us to call the Azure Function from the Site Pages library to create pages. The code uses the fetch API to call the Azure Function, passing the site URL and page name parameters the Azure Function needs to create the page. After the page is created, the Azure Function responds with a success status, which can be used to confirm the page creation.

Note: Make sure that the dialog is locked while this operation is running, so implement code to prevent the form from being closed or resubmitted.

After the page is created, let's update the properties of the item using the PnP JS library.

Conclusion:

As we can see above, we have overridden the page creation process with our own Azure Function, using an SPFx List View Command Set and PnP JS. I will be detailing the SP dialog for SPFx extensions in another upcoming blog, so keep an eye out for it.

There are still some limitations to the above approach, listed below. You might have to get business approval for these.

1. The out-of-the-box 'New Page' option cannot be hidden from within the extension.
2. The order of the command control cannot be rearranged; it is displayed after the out-of-the-box SharePoint elements.

Cosmos DB Server-Side Programming with TypeScript – Part 6: Build and Deployment

So far in this series we’ve been compiling our server-side TypeScript code to JavaScript locally on our own machines, and then copying and pasting it into the Azure Portal. However, an important part of building a modern application – especially a cloud-based one – is having a reliable automated build and deployment process. There are a number of reasons why this is important, ranging from ensuring that a developer isn’t building code on their own machine – and therefore may be subject to environmental variations or differences that cause different outputs – through to running a suite of tests on every build and release. In this post we will look at how Cosmos DB server-side code can be built and released in a fully automated process.

This post is part of a series:

  • Part 1 gives an overview of the server side programmability model, the reasons why you might want to consider server-side code in Cosmos DB, and some key things to watch out for.
  • Part 2 deals with user-defined functions, the simplest type of server-side programming, which allow for adding simple computation to queries.
  • Part 3 talks about stored procedures. These provide a lot of powerful features for creating, modifying, deleting, and querying across documents – including in a transactional way.
  • Part 4 introduces triggers. Triggers come in two types – pre-triggers and post-triggers – and allow for behaviour like validating and modifying documents as they are inserted or updated, and creating secondary effects as a result of changes to documents in a collection.
  • Part 5 discusses unit testing your server-side scripts. Unit testing is a key part of building a production-grade application, and even though some of your code runs inside Cosmos DB, your business logic can still be tested.
  • Finally, part 6 (this post) explains how server-side scripts can be built and deployed into a Cosmos DB collection within an automated build and release pipeline, using Microsoft Visual Studio Team Services (VSTS).

Build and Release Systems

There are a number of services and systems that provide build and release automation. These include systems you need to install and manage yourself, such as Atlassian Bamboo, Jenkins, and Octopus Deploy, through to managed systems like Amazon CodePipeline/CodeBuild, Travis CI, and AppVeyor. In our case, we will use Microsoft’s Visual Studio Team System (VSTS), which is a managed (hosted) service that provides both build and release pipeline features. However, the steps we use here can easily be adapted to other tools.

I will assume that you have a VSTS account, that you have loaded the code into a source code repository that VSTS can access, and that you have some familiarity with the VSTS build and release system.

Throughout this post, we will use the same code that we used in part 5 of this series, where we built and tested our stored procedure. The exact same process can be used for triggers and user-defined functions as well. I’ll assume that you have a copy of the code from part 5 – if you want to download it, you can get it from the GitHub repository for that post. If you want to refer to the finished version of the whole project, you can access it on GitHub here.

Defining our Build Process

Before we start configuring anything, let’s think about what we want to achieve with our build process. I find it helpful to think about the start point and end point of the build. We know that when we start the build, we will have our code within a Git repository. When we finish, we want to have two things: a build artifact in the form of a JavaScript file that is ready to deploy to Cosmos DB; and a list of unit test results. Additionally, the build should pass if all of the steps ran successfully and the tests passed, and it should fail if any step or any test failed.

Now that we have the start and end points defined, let’s think about what we need to do to get us there.

  • We need to install our NPM packages. On VSTS, every time we run a build our build environment will be reset, so we can’t rely on any files being there from a previous build. So the first step in our build pipeline will be to run npm install.
  • We need to build our code so it’s ready to be tested, and then we need to run the unit tests. In part 5 of this series we created an NPM script to help with this when we run locally – and we can reuse the same script here. So our second build step will be to run npm run test.
  • Once our tests have run, we need to report their results to VSTS so it can visualise them for us. We’ll look at how to do this below. Importantly, VSTS won’t fail the build automatically if there are any test failures, so we’ll look at how to do this ourselves shortly.
  • If we get to this point in the build then our code is successfully passing the tests, so now we can create the real release build. Again we have already defined an NPM script for this, so we can reuse that work and call npm run build.
  • Finally, we can publish the release JavaScript file as a build artifact, which makes it available to our release pipeline.

We’ll soon see how we can actually configure this. But before we can write our build process, we need to figure out how we’ll report the results of our unit tests back to VSTS.

Reporting Test Results

When we run unit tests from inside a VSTS build, the unit test runner needs some way to report the results back to VSTS. There are some built-in integrations with common tools like VSTest (for testing .NET code). For Jasmine, we need to use a reporter that we configure ourselves. The jasmine-tfs-reporter NPM package does this for us – its reporter will emit a specially formatted results file, and we’ll tell VSTS to look at this.

Let's open up our package.json file and add jasmine-tfs-reporter to the devDependencies section (running npm install --save-dev jasmine-tfs-reporter will do this for you), then run npm install to install the package.

Next, create a file named spec/vstsReporter.ts that configures Jasmine to send its results to the reporter we just installed, by registering it with jasmine.getEnv().addReporter().

Finally, let's edit the jasmine.json file. We'll add a new helpers section, which will tell Jasmine to run that script before it starts running our tests.
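
A minimal jasmine.json looks something like the below; keep your project's existing spec_dir and spec_files values, as the new piece is just the helpers entry:

{
  "spec_dir": "spec",
  "spec_files": ["**/*.spec.ts"],
  "helpers": ["vstsReporter.ts"]
}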

Now run npm run test. You should see that a new testresults folder has been created, and it contains an XML file that VSTS can understand.

That’s the last piece of the puzzle we need to have VSTS build our code. Now let’s see how we can make VSTS actually run all of these steps.

Creating the Build Configuration

VSTS has a great feature – currently in preview – that allows us to specify our build definition in a YAML file, check it into our source control system, and have the build system execute it. More information on this feature is available in a previous blog post I wrote. We’ll make use of this feature here to write our build process.

Create a new file named build.yaml. This file will define all of our build steps.
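
A definition along the following lines covers the steps we listed; the agent queue name, task versions, and the folder that npm run build emits to are assumptions to adjust for your project:

queue: Hosted VS2017

steps:
- script: npm install
  displayName: Install NPM packages

- script: npm run test
  displayName: Build and run unit tests

- task: PublishTestResults@2
  displayName: Publish test results
  condition: always()
  inputs:
    testResultsFormat: JUnit
    testResultsFiles: 'testresults/*.xml'

- script: npm run build
  displayName: Build release version of the script

- task: PublishBuildArtifacts@1
  displayName: Publish release script as an artifact
  inputs:
    pathToPublish: 'output'   # the folder your build script emits to
    artifactName: 'drop'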

This YAML file tells VSTS to do the following:

  • Run the npm install command.
  • Run the npm run test command. If we get any test failures, this command will cause VSTS to detect an error.
  • Regardless of whether an error was detected, take the test results that have been saved into the testresults folder and publish them. (Publishing just means showing them within the build; they won’t be publicly available.)
  • If everything worked up till now, run npm run build to build the releaseable JavaScript file.
  • Publish the releasable JavaScript file as a build artifact, so it’s available to the release pipeline that we’ll configure shortly.

Commit this file and push it to your Git repository. In VSTS, we can now set up a new build configuration, point it to the YAML file, and let it run. After it finishes, you should see something like this:

release-1

We can see that four tests ran and passed. If we click on the Artifacts tab, we can view the artifacts that were published:

release-2

And by clicking the Explore button and expanding the drop folder, we can see the exact file that was created:

release-3

You can even download the file from here, and confirm that it looks like what we expect to be able to send to Cosmos DB. So, now we have our code being built and tested! The next step is to actually deploy it to Cosmos DB.

Deciding on a Release Process

Cosmos DB can be used in many different types of applications, and the way that we deploy our scripts can differ as well. In some applications, like those that are heavily server-based and have initialisation logic, we might provision our database, collections, and scripts through our application code. In other systems, like serverless applications, we want to provision everything we need during our deployment process so that our application can immediately start to work. This means there are several patterns we can adopt for installing our scripts.

Pattern 1: Use Application Initialisation Logic

If we have an Azure App Service, Cloud Service, or another type of application that provides initialisation lifecycle events, we can use the initialisation code to provision our Cosmos DB database and collection, and to install our stored procedures, triggers, and UDFs. The Cosmos DB client SDKs provide a variety of helpful methods to do this. For example, the .NET and .NET Core SDKs provide this functionality. If the platform you are using doesn’t have an SDK, you can also use the REST API provided by Cosmos DB.

This approach is also likely to be useful if we dynamically provision databases and collections while our application runs. We can also use this approach if we have an application warmup sequence where the existence of the collection can be confirmed and any missing pieces can be added.

Pattern 2: Initialise Serverless Applications with a Custom Function

When we’re using serverless technologies like Azure Functions or Azure Logic Apps, we may not have the opportunity to initialise our application the first time it loads. We could check the existence of our Cosmos DB resources whenever we are executing our logic, but this is quite wasteful and inefficient. One pattern that can be used is to write a special ‘initialisation’ function that is called from our release pipeline. This can be used to prepare the necessary Cosmos DB resources, so that by the time our callers execute our code, the necessary resources are already present. However, this presents some challenges, including the fact that it necessitates mixing our deployment logic and code with our main application code.

Pattern 3: Deploying from VSTS

The approach that I will adopt in this post is to deploy the Cosmos DB resources from our release pipeline in VSTS. This means that we will keep our release process separate from our main application code, and provide us with the flexibility to use the Cosmos DB resources at any point in our application logic. This may not suit all applications, but for many applications that use Cosmos DB, this type of workflow will work well.

There is a lot more to release configuration than I’ll be able to discuss here – that could easily be its own blog series. I’ll keep this particular post focused just on installing server-side code onto a collection.

Defining the Release Process

As with builds, it’s helpful to think through the process we want the release to follow. Again, we’ll think first about the start and end points. When we start the release pipeline, we will have the build that we want to release (which will include our compiled JavaScript script). For now, I’ll also assume that you have a resource group containing a Cosmos DB account with an existing database and collection, and that you know the account key. In a future post I will elaborate how some of this process can also be automated, but this is outside of the scope of this series. Once the release process finishes, we expect that the collection will have the server-side resource installed and ready to use.

VSTS doesn’t have built-in support for Cosmos DB. However, we can easily use a custom PowerShell script to install Cosmos DB scripts on our collection. I’ve written such a script, and it’s available for download here. The script uses the Cosmos DB API to deploy stored procedures, triggers, and user-defined functions to a collection.

We need to include this script in our build artifacts so that we can use it from our deployment process. So, download the file and save it into a deploy folder in the project's source repository. Now that we have that there, we need to tell the VSTS build process to include it as an artifact, so open the build.yaml file and add an extra step to the end of it, being careful to align the spaces and indentation with the sections above it.
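
The addition is another artifact publish step, something like this:

- task: PublishBuildArtifacts@1
  displayName: Publish deployment scripts
  inputs:
    pathToPublish: 'deploy'
    artifactName: 'deploy'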

Commit these changes, and then run a new build.

Now we can set up a release definition in VSTS and link it to our build configuration so it can receive the build artifacts. We only need one step currently, which will deploy our stored procedure using the PowerShell script we included as a build artifact. Of course, a real release process is likely to do a lot more, including deploying your application. For now, though, let's just add a single PowerShell step, and configure it to run an inline script.
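
Something along these lines works; the .ps1 file name, stored procedure name, and output file name are placeholders, and the account details should really come from (secret) release variables, as discussed below:

# Load the deployment functions published in the 'deploy' build artifact.
. "$(System.DefaultWorkingDirectory)/CosmosServer-CI/deploy/DeployCosmosDBScripts.ps1"

# Install the compiled stored procedure onto the target collection.
DeployStoredProcedure `
    -AccountName "myCosmosAccount" `
    -AccountKey "<account key - use a secret variable>" `
    -DatabaseName "Orders" `
    -CollectionName "Orders" `
    -StoredProcedureName "addOrder" `
    -SourceFilePath "$(System.DefaultWorkingDirectory)/CosmosServer-CI/drop/addOrder.js"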

This inline script does the following:

  • It loads in the PowerShell file from our build artifact, so that the functions within that file are available for us to use.
  • It then runs the DeployStoredProcedure function, which is defined in that PowerShell file. We pass in some parameters so the function can contact Cosmos DB:
    • AccountName – this is the name of your Cosmos DB account.
    • AccountKey – this is the key that VSTS can use to talk to Cosmos DB’s API. You can get this from the Azure Portal – open up the Cosmos DB account and click the Keys tab.
    • DatabaseName – this is the name of the database (in our case, Orders).
    • CollectionName – this is the name of the collection (in our case again, Orders).
    • StoredProcedureName – this is the name we want our stored procedure to have in Cosmos DB. This doesn’t need to match the name of the function inside our code file, but I recommend it does to keep things clear.
    • SourceFilePath – this is the path to the JavaScript file that contains our script.

Note that in the script above I’ve assumed that the build configuration’s name is CosmosServer-CI, so that appears in the two file paths. If you have a build configuration that uses a different name, you’ll need to replace it. Also, I strongly recommend you don’t hard-code the account name, account key, database name, and collection name like I’ve done here – you would instead use VSTS variables and have them dynamically inserted by VSTS. Similarly, the account key should be specified as a secret variable so that it is encrypted. There are also other ways to handle this, including creating the Cosmos DB account and collection within your deployment process, and dynamically retrieving the account key. This is beyond the scope of this series, but in a future blog post I plan to discuss some ways to achieve this.

After configuring our release process, it will look something like this:

[Screenshot release-4: the configured release process]

Now that we’ve configured our release process we can create a new release and let it run. If everything has been configured properly, we should see the release complete successfully:

[Screenshot release-5: a successful release]

And if we check the collection through the Azure Portal, we can see the stored procedure has been deployed:

[Screenshot release-6: the stored procedure deployed to the collection]

This is pretty cool. It means that whenever we commit a change to our stored procedure’s TypeScript file, it can automatically be compiled, tested, and deployed to Cosmos DB – without any human intervention. We could now adapt the exact same process to deploy our triggers (using the DeployTrigger function in the PowerShell script) and UDFs (using the DeployUserDefinedFunction function). Additionally, we can easily make our build and deployments into true continuous integration (CI) and continuous deployment (CD) pipelines by setting up automated builds and releases within VSTS.

Summary

Over this series of posts, we’ve explored Cosmos DB’s server-side programming capabilities. We’ve written a number of server-side scripts including a UDF, a stored procedure, and two triggers. We’ve written them in TypeScript to ensure that we’re using strongly typed objects when we interact with Cosmos DB and within our own code. We’ve also seen how we can unit test our code using Jasmine. Finally, in this post, we’ve looked at how our server-side scripts can be built and deployed using VSTS and the Cosmos DB API.

I hope you’ve found this series useful! If you have any questions or similar topics that you’d like to know more about, please post them in the comments below.

Key Takeaways

  • Having an automated build and release pipeline is very important to ensure reliable, consistent, and safe delivery of software. This should include our Cosmos DB server-side scripts.
  • It’s relatively easy to adapt the work we’ve already done with our build scripts to work on a build server. Generally it will simply be a matter of executing npm install and then npm run build to create a releasable build of our code.
  • We can also run our unit tests by simply executing npm run test.
  • Test results from Jasmine can be published into VSTS using the jasmine-tfs-reporter package. Other integrations are available for other build servers too.
  • Deploying our server-side scripts onto Cosmos DB can be handled in different ways for different applications. With many applications, having server-side code deployed within an existing release process is a good idea.
  • VSTS doesn’t have built-in support for Cosmos DB, but I have provided a PowerShell script that can be used to install stored procedures, triggers, and UDFs.
  • You can view the code for this post on GitHub.

Cosmos DB Server-Side Programming with TypeScript – Part 5: Unit Testing

Over the last four parts of this series, we’ve discussed how we can write server-side code for Cosmos DB, and the types of situations where it makes sense to do so. If you’re building a small sample application, you now have enough knowledge to go and build out UDFs, stored procedures, and triggers. But if you’re writing production-grade applications, there are two other major topics that need discussion: how to unit test your server-side code, and how to build and deploy it to Cosmos DB in an automated and predictable manner. In this part, we’ll discuss testing. In the next part, we’ll discuss build and deployment.

This post is part of a series:

  • Part 1 gives an overview of the server-side programmability model, the reasons why you might want to consider server-side code in Cosmos DB, and some key things to watch out for.
  • Part 2 deals with user-defined functions, the simplest type of server-side programming, which allow for adding simple computation to queries.
  • Part 3 talks about stored procedures. These provide a lot of powerful features for creating, modifying, deleting, and querying across documents – including in a transactional way.
  • Part 4 introduces triggers. Triggers come in two types – pre-triggers and post-triggers – and allow for behaviour like validating and modifying documents as they are inserted or updated, and creating secondary effects as a result of changes to documents in a collection.
  • Part 5 (this post) discusses unit testing your server-side scripts. Unit testing is a key part of building a production-grade application, and even though some of your code runs inside Cosmos DB, your business logic can still be tested.
  • Finally, part 6 explains how server-side scripts can be built and deployed into a Cosmos DB collection within an automated build and release pipeline, using Microsoft Visual Studio Team Services (VSTS).

Unit Testing Cosmos DB Server-Side Code

Testing JavaScript code can be complex, and there are many different ways to do it and different tools that can be used. In this post I will outline one possible approach for unit testing. There are other ways that we could also test our Cosmos DB server-side code, and your situation may be a bit different to the one I describe here. Some developers and teams place different priorities on some of the aspects of testing, so this isn’t a ‘one size fits all’ approach. In this post, the testing approach we will build out allows for:

  • Mocks: mocking allows us to pass in mocked versions of our dependencies so that we can test how our code behaves independently of a working external system. In the case of Cosmos DB, this is very important: the getContext() method, which we’ve looked at throughout this series, provides us with access to objects that represent the request, response, and collection. Our code needs to be tested without actually running inside Cosmos DB, so we mock out the objects it sends us.
  • Spies: spies are often a special type of mock. They allow us to inspect the calls that have been made to the object to ensure that we are triggering the methods and side-effects that we expect.
  • Type safety: as in the rest of this series, it’s important to use strongly typed objects where possible so that we get the full benefit of the TypeScript compiler’s type system.
  • Working within the allowed subset of JavaScript: although Cosmos DB server-side code is built using the JavaScript language, it doesn’t provide all of the features of JavaScript. This is particularly important when testing our code, because many test libraries make assumptions about how the code will be run and the level of JavaScript support that will be available. We need to work within the subset of JavaScript that Cosmos DB supports.

I will assume some familiarity with these concepts, but even if they’re new to you, you should be able to follow along. Also, please note that this series only deals with unit testing. Integration testing your server-side code is another topic, although it should be relatively straightforward to write integration tests against a Cosmos DB server-side script.

Challenges of Testing Cosmos DB Server-Side Code

Cosmos DB ultimately executes JavaScript code, and so we will use JavaScript testing frameworks to write and run our unit tests. Many of the popular JavaScript and TypeScript testing frameworks and helpers are designed specifically for developers who write browser-based JavaScript or Node.js applications. Cosmos DB has some properties that can make these frameworks difficult to work with.

Specifically, Cosmos DB doesn’t support modules. Modules in JavaScript allow individual JavaScript files to expose a public interface to other blocks of code in different files. When I was preparing this blog post I spent a lot of time trying to figure out how to handle the myriad testing and mocking frameworks that assume modules can be used in our code. Ultimately I came to the conclusion that it doesn’t really matter if we use modules inside our TypeScript files, as long as the module code doesn’t make it into our release JavaScript files. This means that we’ll have to build our code twice – once for testing (which includes the module information we need), and again for release (which doesn’t include modules). This isn’t uncommon – many development environments have separate ‘Debug’ and ‘Release’ build configurations, for example – and we can use some tricks to achieve our goals while still getting the benefit of a good design-time experience.

Defining Our Tests

We’ll be working with the stored procedure that we built out in part 3 of this series. The same concepts could be applied to unit testing triggers, and also to user-defined functions (UDFs) – and UDFs are generally easier to test as they don’t have any context variables to mock out.

Looking back at the stored procedure, its purpose is to return the list of customers who have ordered any of a specified list of product IDs, grouped by product ID, so an initial set of test cases might be as follows:

  1. If the productIds parameter is empty, the method should return an empty array.
  2. If the productIds parameter contains one item, it should execute a query against the collection containing the item’s identifier as a parameter.
  3. If the productIds parameter contains one item, the method should return a single CustomersGroupedByProduct object in the output array, which should contain the productId that was passed in, and whatever customerIds the mocked collection query returned.
  4. If the method is called with a valid productIds array, and the queryDocuments method on the collection returns false, an error should be returned by the function.

You might have others you want to focus on, and you may want to split some of these out – but for now we’ll work with these so we can see how things work. Also, in this post I’ll assume that you’ve got a copy of the stored procedure from part 3 ready to go – if you haven’t, you can download it from the GitHub repository for that part.

If you want to see the finished version of the whole project, including the tests, you can access it on GitHub here.

Setting up TypeScript Configurations

The first change we’ll need to make is to rearrange our TypeScript configuration a little. Currently we have a single tsconfig.json file that we use to build. Now we’ll need to add a second file. The two files will be used in different situations:

  • tsconfig.json will be the one we use for local builds, and for running unit tests.
  • tsconfig.build.json will be the one we use for creating release builds.

First, open up the tsconfig.json file that we already have in the repository. We need to change it to the following:
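
The exact compiler options will depend on what you carried over from the earlier parts of the series, but a sketch of the updated file with the changes described below might look like this (the es5 target is an assumption):

```json
{
  "compilerOptions": {
    "target": "es5",
    "module": "commonjs",
    "outDir": "output/test"
  },
  "include": [
    "src/**/*.ts",
    "spec/**/*.ts"
  ]
}
```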

The key changes we’re making are:

  • We’re now including files from the spec folder in our build. This folder will contain the tests that we’ll be writing shortly.
  • We’ve added the line "module": "commonjs". This tells TypeScript that we want to compile our code with module support. Again, this tsconfig.json will only be used when we run our builds locally or for running tests, so we’ll later make sure that the module-related code doesn’t make its way into our release builds.
  • We’ve changed from using outFile to outDir, and set the output directory to output/test. When we use modules like we’re doing here, we can’t use the outFile setting to combine our files together, but this won’t matter for our local builds and for testing. We also put the output files into a test subfolder of the output folder so that we keep things organised.

Now we need to create a tsconfig.build.json file with the following contents:
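
Again as a sketch, with the output file name matching the one we’ll reference later in this post (the es5 target is an assumption):

```json
{
  "compilerOptions": {
    "target": "es5",
    "module": "none",
    "outFile": "output/build/sp-getGroupedOrders.js"
  },
  "include": [
    "src/**/*.ready.ts"
  ]
}
```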

This looks more like the original tsconfig.json file we had, but there are a few minor differences:

  • The include element now looks for files matching the pattern *.ready.ts. We’ll look at what this means later.
  • The module setting is explicitly set to none. As we’ll see later, this isn’t sufficient to get everything we need, but it’s good to be explicit here for clarity.
  • The outFile setting – which we can use here because module is set to none – is going to emit a JavaScript file within the build subfolder of the output folder.

Now let’s add the testing framework.

Adding a Testing Framework

In this post we’ll use Jasmine, a testing framework for JavaScript. We can import it using NPM. Open up the package.json file and replace it with this:
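
A minimal sketch of the updated file follows; the package name and version numbers are placeholders, and any scripts and dependencies you already have should be kept alongside these:

```json
{
  "name": "cosmosdb-server-side-scripts",
  "version": "1.0.0",
  "scripts": {
    "build": "tsc",
    "test": "npm run build && jasmine --config=jasmine.json"
  },
  "devDependencies": {
    "@types/jasmine": "^2.8.0",
    "jasmine": "^3.0.0",
    "moq.ts": "^2.0.0",
    "typescript": "^2.7.0"
  }
}
```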

There are a few changes to our previous version:

  • We’ve now imported the jasmine module, as well as the Jasmine type definitions, into our project; and we’ve imported moq.ts, a mocking library, which we’ll discuss below.
  • We’ve also added a new test script, which will run a build and then execute Jasmine, passing in a configuration file that we will create shortly.

Run npm install from a command line/terminal to restore the packages, and then create a new file named jasmine.json with the following contents:
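
As a sketch, this configuration simply points Jasmine at the compiled test output produced by the tsconfig.json above:

```json
{
  "spec_dir": "output/test/spec",
  "spec_files": [
    "**/*.spec.js"
  ],
  "stopSpecOnExpectationFailure": false,
  "random": false
}
```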

We’ll understand a little more about this file as we go on, but for now, we just need to know that it tells Jasmine where to find the specification files we’ll be running. Now let’s add our Jasmine test specification so we can see this in action.

Starting Our Test Specification

Let’s start by writing a simple test. Create a folder named spec, and within it, create a file named getGroupedOrdersImpl.spec.ts. Add the following code to it:
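
A first version of the spec might look like the following; it assumes the stored procedure code lives in src/getGroupedOrders.ts as in part 3, so the relative import path is an assumption:

```typescript
import { getGroupedOrdersImpl } from '../src/getGroupedOrders';

describe('getGroupedOrdersImpl', () => {
    it('should return an empty array', () => {
        // Pass an empty list of product IDs and a null collection - the
        // collection should never be touched in this case.
        const result = getGroupedOrdersImpl([], null);

        expect(result).toEqual([]);
    });
});
```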

This code does the following:

  • It sets up a new Jasmine spec named getGroupedOrdersImpl. This matches the name of the method we’re testing, for clarity, but it doesn’t need to – you could name the spec whatever you want.
  • Within that spec, we have a test case named should return an empty array.
  • That test executes the getGroupedOrdersImpl function, passing in an empty array, and a null object to represent the Collection.
  • Then the test confirms that the result of that function call is an empty array.

This is a fairly simple test – we’ll see a slightly more complex one in a moment. For now, though, let’s get this running.

There’s one step we need to take before we can execute our test. If we tried to run it now, Jasmine would complain that it can’t find the getGroupedOrdersImpl method. This is because of the way that JavaScript modules work. Our code needs to export its externally accessible methods so that the Jasmine test can see them. Normally, exporting a module from a Cosmos DB JavaScript file would mean that Cosmos DB no longer accepts the file – we’ll see a solution to that shortly.

Open up the src/getGroupedOrders.ts file, and add the following at the very bottom of the file:
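
The line to add is:

```typescript
export { getGroupedOrdersImpl };
```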

The export statement sets up the necessary TypeScript compilation instruction to allow our Jasmine test spec to reach this method.

Now let’s run our test. Execute npm run test, which will compile our stored procedure (including the export), compile the test file, and then execute Jasmine. You should see that Jasmine executes the test and shows 1 spec, 0 failures, indicating that our test successfully ran and passed. Now let’s add some more sophisticated tests.

Adding Tests with Mocks and Spies

When we’re testing code that interacts with external services, we often will want to use mock objects to represent those external dependencies. Most mocking frameworks allow us to specify the behaviour of those mocks, so we can simulate various conditions and types of responses from the external system. Additionally, we can use spies to observe how our code calls the external system.

Jasmine provides a built-in mocking framework, including spy support. However, the Jasmine mocks don’t support TypeScript types, and so we lose the benefit of type safety. In my opinion this is an important downside, and so instead we will use the moq.ts mocking framework. You’ll see we have already installed it in the package.json.

Since we’ve already got it available to us, we just need to add this line to the top of our spec/getGroupedOrdersImpl.spec.ts file:
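
The import looks like this, assuming the Mock, It, and Times exports from the current moq.ts package – these are the pieces we’ll use below:

```typescript
import { Mock, It, Times } from 'moq.ts';
```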

This tells TypeScript to import the relevant mocking types from the moq.ts module. Now we can use the mocks in our tests.

Let’s set up another test, in the same file, as follows:
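
A sketch of this test is shown below. The exact moq.ts setup and callback syntax differs slightly between versions, and ICollection here is the Cosmos DB server-side collection type used in the earlier parts of the series, so treat the details as assumptions rather than a definitive implementation:

```typescript
it('should query the collection for a single product ID', () => {
    // Mock the Cosmos DB collection so we can emulate its behaviour and spy on it.
    const collectionMock = new Mock<ICollection>();

    collectionMock
        .setup(c => c.getSelfLink())
        .returns('dbs/test/colls/test');

    // Adjust the argument matchers to match how your implementation calls queryDocuments.
    collectionMock
        .setup(c => c.queryDocuments(It.IsAny(), It.IsAny(), It.IsAny(), It.IsAny()))
        .callback(({ args: [link, query, options, callback] }) => {
            // Hand back a single (empty) customer ID and report that the query was accepted.
            callback(undefined, ['']);
            return true;
        });

    const result = getGroupedOrdersImpl(['product-1'], collectionMock.object());

    expect(result.length).toEqual(1);
    expect(result[0].productId).toEqual('product-1');

    // The spy part: the stored procedure should have issued exactly one query.
    collectionMock.verify(
        c => c.queryDocuments(It.IsAny(), It.IsAny(), It.IsAny(), It.IsAny()),
        Times.Once());
});
```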

This test does a little more than the last one:

  • It sets up a mock of the ICollection interface.
  • This mock will send back a hard-coded string (self-link) when the getSelfLink() method is called.
  • It also provides mock behaviour for the queryDocuments method. When the method is called, it invokes the callback function, passing back a list of documents with a single empty string, and then returns true to indicate that the query was accepted.
  • The mock.object() method is used to convert the mock into an instance that can be provided to the getGroupedOrdersImpl function, which then uses it in place of the real Cosmos DB collection. This means we can test how our code will behave, and emulate the behaviour of Cosmos DB as we wish.
  • Finally, we call mock.verify to ensure that the getGroupedOrdersImpl function executed the queryDocuments method on the mock collection exactly once.

You can run npm run test again now, and verify that it shows 2 specs, 0 failures, indicating that our new test has successfully passed.

Now let’s fill out the rest of the spec file – here’s the complete file with all of our test cases included:

You can execute the tests again by calling npm run test. Try tweaking the tests so that they fail, then re-run them and see what happens.

Building and Running

All of the work we’ve just done means that we can run our tests. However, if we try to build our code to submit to Cosmos DB, it won’t work anymore. This is because the export statement we added to make our tests work will emit code that Cosmos DB’s JavaScript engine doesn’t understand.

We can remove this code at build time by using a preprocessor. This will remove the export statement – or anything else we want to take out – from the TypeScript file. The resulting cleaned file is the one that then gets sent to the TypeScript compiler, and it emits a Cosmos DB-friendly JavaScript file.

To achieve this, we need to chain together a few pieces. First, let’s open up the src/getGroupedOrders.ts file. Replace the line that says export { getGroupedOrdersImpl } with this section:
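
As a sketch, the replacement wraps the export in conditional comments; the symbol name TESTBUILD is arbitrary, and you should check the jspreproc documentation for the exact directive syntax it expects:

```typescript
//#if TESTBUILD
export { getGroupedOrdersImpl };
//#endif
```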

The extra lines we’ve added are preprocessor directives. TypeScript itself doesn’t understand these directives, so we need to use an NPM package to process them. The one I’ve used here is jspreproc. It looks through the file, handles the directives it finds in specially formatted comments, and then emits the resulting cleaned file. Unfortunately, the preprocessor only works on a single file at a time. This is fine for our situation, as we have all of our stored procedure code in one file, but that won’t be the case for every project. Therefore, I have also used the foreach-cli NPM package to search for all of the *.ts files within our src folder and process each of them. It saves the cleaned files with a .ready.ts extension, which is the pattern our tsconfig.build.json file refers to.

Open the package.json file and replace it with the following contents:

Now we can run npm install to install all of the packages we’re using. You can then run npm run test to run the Jasmine tests, and npm run build to build the releasable JavaScript file. This is emitted into the output/build/sp-getGroupedOrders.js file, and if you inspect that file, you’ll see it doesn’t have any trace of module exports. It looks just like it did back in part 3, which means we can send it to Cosmos DB without any trouble.

Summary

In this post, we’ve built out the necessary infrastructure to test our Cosmos DB server-side code. We’ve used Jasmine to run our tests, and moq.ts to mock out the Cosmos DB server objects in a type-safe manner. We also adjusted our build script so that we can compile a clean copy of our stored procedure (or trigger, or UDF) while keeping the necessary export statements to enable our tests to work. In the final post of this series, we’ll look at how we can automate the build and deployment of our server-side code using VSTS, and integrate it into a continuous integration and continuous deployment pipeline.

Key Takeaways

  • It’s important to test Cosmos DB server-side code. Stored procedures, triggers, and UDFs contain business logic and should be treated as a fully fledged part of our application code, with the same quality criteria we would apply to other types of source code.
  • Because Cosmos DB server-side code is written in JavaScript, it is testable using JavaScript and TypeScript testing frameworks and libraries. However, the lack of support for modules means that we have to be careful in how we use these since they may emit release code that Cosmos DB won’t accept.
  • We can use Jasmine for testing. Jasmine also has a mocking framework, but it is not strongly typed.
  • We can get strong typing using a TypeScript mocking library like moq.ts.
  • By structuring our code correctly – using a single entry-point function, which calls out to getContext() and then sends the necessary objects into a function that implements our actual logic – we can easily mock and spy on our calls to the Cosmos DB server-side libraries.
  • We need to export the functions we are testing using the export statement. This makes them available to the Jasmine test spec.
  • However, these export statements need to be removed before we can compile our release version. We can use a preprocessor to remove those statements.
  • You can view the code for this post on GitHub.

Recommendations on using Terraform to manage Azure resources

siliconvalve

If you’ve been working in the cloud infrastructure space for the last few years you can’t have missed the buzz around HashiCorp’s Terraform product. Terraform provides a declarative model for infrastructure provisioning that spans multiple cloud providers as well as on-premises services from the likes of VMware.

I’ve recently had the opportunity to use Terraform to do some Azure infrastructure provisioning so I thought I’d share some recommendations on using Terraform with Azure (as at January 2018). I’ll also preface this post by saying that I have only been provisioning Azure PaaS services (App Service, Cosmos DB, Traffic Manager, Storage and Application Insights) and haven’t used any IaaS components at all.

In the beginning

I needed to provide an easy way to provision around 30 inter-related services that together constitute the hosting environment for a single customer solution. Ideally I wanted a way to make it easy to re-provision these…
