How to quickly copy Azure Functions between Azure Tenants and implement ‘Run From Zip’

As mentioned in yesterday's post, I needed to copy a bunch of Azure WebApps and Functions from one Tenant to another. With anything cloud based, things move fast, and some of the methods I found were more onerous and complex than they needed to be. There is of course the Backup option for Azure Functions, but that requires a storage account associated with the Function App Plan. My Functions didn't need storage, and the plan tier they were on meant a storage account wasn't a prerequisite, so I had no desire to add one just to back up and then migrate.

Overview

In this post I show my method to quickly copy Azure Functions from one Azure Tenant to another. My approach is:

  • In the Source Tenant from the Azure Functions App
    • Using Kudu take a backup of the wwwroot folder (that will contain one or more Functions)
  • In the Target Tenant
    • Create an Azure Function App
    • Using Kudu locate the wwwroot archive in the new Azure Function App
    • Configure Azure Function Run From Zip

Backing up the Azure Functions in the Source Tenant

Using the Azure Portal in the Source Tenant, go to your Function App => Application Settings and select Advanced Tools. Select Debug Console – PowerShell and navigate to the site folder. Next to wwwroot, select the download icon to obtain an archive of your Functions.

Download WWWRoot Folder 2.PNG

Copying the Azure Functions to the Target Tenant

In the Target Tenant, first create a new Azure Function App. I did this as I wanted to change the naming, the plan and a few other configuration items. Then, using the Azure Portal, go to your new Function App => Application Settings and select Advanced Tools.

Function Advanced Tools

Create a folder under D:\home\data named SitePackages.

Create Site Packages Folder

Drag and drop your wwwroot.zip file into the SitePackages Folder.

Drag Drop wwwroot

In the same folder select the + icon to create a file named siteversion.txt

Site Packages

Inside the file, enter the name of your archive file, e.g. wwwroot.zip, then select Save.

Siteversion.txt.png
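
If you'd rather script these Kudu steps than drag and drop, a rough sketch using the Kudu VFS REST API is below. The app name, credentials and local path are placeholders; take your deployment credentials from the app's publish profile.

# Illustrative sketch using the Kudu VFS REST API; app name, credential values
# and local paths are placeholders, not values from this walkthrough.
$user = 'deployment-user'
$pass = 'deployment-password'
$creds = 'Basic ' + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("$user`:$pass"))
$kudu = 'https://my-new-function-app.scm.azurewebsites.net/api/vfs/data/SitePackages/'

# Create the SitePackages folder, upload the archive, then point siteversion.txt at it
Invoke-RestMethod -Uri $kudu -Method Put -Headers @{ Authorization = $creds }
Invoke-RestMethod -Uri ($kudu + 'wwwroot.zip') -Method Put -Headers @{ Authorization = $creds; 'If-Match' = '*' } -InFile 'C:\Temp\wwwroot.zip'
Invoke-RestMethod -Uri ($kudu + 'siteversion.txt') -Method Put -Headers @{ Authorization = $creds; 'If-Match' = '*' } -Body 'wwwroot.zip'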

Back in your new Function App select Application Settings

Application Settings

Under Application Settings, add a new setting named WEBSITE_USE_ZIP with a value of '1'.

Website Use Zip.PNG

Refresh your Function App and you’ll notice it is now Read Only as it is running from Zip. All the Functions that were in the Zip are displayed.

Functions Migrated.PNG

Summary

This is a quick and easy method to get your Functions copied from one Tenant to another. Keep in mind that if your Functions use Application Settings, Key Vaults, Managed Service Identity or similar options, you'll need to add those settings, certificates and credentials in the target environment.

Adding a Display to the Teenager Notification Service Azure IoT Device

Overview

A couple of weeks back I wrote this post detailing how to build a Teenager Notification Service using Azure IoT, an Azure Function, Microsoft Flow, Mongoose OS and a micro controller.

Over the Easter break I enhanced it with the inclusion of a display. I was rummaging around in a box of parts when I found a few LCD displays I'd purchased on speculation some time ago. They are SSD1306-driven units that can be found on Amazon here. A quick upgrade later and …

… scrolling text to go with rotating lights. The addition of the display requires the following changes to the previous project, which are detailed in this post:

  • inclusion of the SSD1306 library
  • configure your micro controller for the display
  • a few changes in the Mongoose OS Init.JS file to have the appropriate text displayed for the notification
  • change to the Notifier Base case to integrate the display
    • it is available in the Thingiverse Project for this thing here and named NodeMCU with Display Window.stl

Incorporating the SSD1306 Library

Before starting, with your micro controller connected and using the MOS UI, take a copy of your Init.js configuration file by selecting Device Files, then Init.js, and copying the content somewhere safe. Also back up the Device Config by choosing Device Config, Expert View and Save Configuration.

From the MOS UI select Projects, select the AzureIoT-Neopixel-js project then from the drop down menu select mos.yml.

Add the line "- origin: https://github.com/mongoose-os-libs/arduino-adafruit-ssd1306", then select the Spanner icon to rebuild the app. Once completed, select the Flash icon to update your micro controller.

Include SSD1306 Library.PNG

Once written to your micro controller, check your Init.js and copy back your backup. Check your configuration and make sure your MQTT settings are still present; copy your previous config back if required.

Configure your Micro Controller for the SSD1306 Display

We need to tell your micro controller which GPIO pins we have attached the display to. I also moved the GPIO pin used for the Neopixel as part of this. The configuration is:

  • Neopixel connected to GPIO 12
  • SSD1306 SDA connected to GPIO 4
  • SSD1306 SCL connected to GPIO 5

In the Expert Device Config mode update the I2C section as shown below. Save the configuration.

 "i2c": {
 "enable": true,
 "freq": 100000,
 "debug": false,
 "sda_gpio": 4,
 "scl_gpio": 5
 },

Wiring the SSD1306 to the Micro Controller

Looking at the NodeMCU diagram you can see where the connections need to be made for the Neopixel and SSD1306 display: SSD1306 SCL to D1, SDA to D2. The Neopixel data connection is now on D6. Power and GND use the PWR and GND pins. I'm using them all on the same side of the NodeMCU so it fits cleanly into the case later.

NodeMCU.png

Init.js code additions

Incorporate the display library in your Init.js by including the line below.

load('api_arduino_ssd1306.js');

With that done we need to initialize the display, also in Init.js. The following lines initialize the display address, the GPIO pin it uses, the size of the text we are going to display, and the color. Put them before or after the initialization for the Neopixel.

//------------ Setting up Display ----------------
let oled_addr = 0x3C; // I2C Address for SSD1306
let oled = Adafruit_SSD1306.create_i2c(5 /* RST GPIO */, Adafruit_SSD1306.RES_128_32);

// Initialize the display. 
oled.clearDisplay();
oled.setTextSize(2);
oled.setTextColor(Adafruit_SSD1306.WHITE);

In the MQTT subscriber section, where you look at the MQTT message sent from Microsoft Flow and display a color on the Neopixel, add the following lines to send output to the display. The example below outputs Pink to the display. If Pink indicates some task, then change oled.write('PINK'); to oled.write('TASK'); or similar.

 if (msg === "Pink") {
   // PINK
   oled.clearDisplay();
   oled.setTextSize(2);
   oled.setCursor(1, 10);
   oled.write('PINK');
   oled.display();
   oled.startScrollLeft(0x00, 0x0F);
 }

Following the Neopixel loop after

 strip.clear();
 strip.show(strip);

add the following to clear the display once the Neopixel has finished displaying its color notification.

 oled.clearDisplay();
 oled.display();

Repeat for the differing colors and their tasks/meanings.

Summary

Now the notifier includes both a visual color notification AND the text associated with the notification. No confusion here, or does it need a buzzer as well?


Evaluating the migration of Azure Functions to Microsoft Flow – Twitter IoT Integration

Introduction

Almost 18 months ago I wrote this post on integrating Twitter with Azure Functions to Tweet IoT data. A derivative of that solution has been running successfully for about the same period. Azure Functions have been bulletproof for me.

After recently implementing Microsoft Flow, as detailed in my Teenager Notification Device post here, I started looking at a number of the Azure Functions I have running and considering which would be better suited to Flow. What could I simplify by migrating to Microsoft Flow?

The IoT Twitter Function linked above was one of the simpler Functions I had running; I've since transposed it and it has been running seamlessly. I chose this particular Function to migrate because the actions it performed were ones Microsoft Flow supports. Keep in mind (see the Summary) that there isn't a one-size-fits-all: Flow and Functions each have their place and often work even better together.

Comparison

Transposing the IoT Twitter Function App to Microsoft Flow provided me with the same outcome, however the effort to get to that outcome is considerably less. As a quick comparison I’ve compared the key steps I needed to perform with the Azure Function to enable the integration vs what it took to implement with Microsoft Flow.

Function vs Flow.PNG

That’s pretty compelling. For the Azure Function I needed to register an App with Twitter and I needed to create an Azure Function App Plan to host my Azure Function. With Microsoft Flow I just created a Flow.

To set up and configure the Azure Function I needed to configure Deployment Options to upload the Twitter PowerShell Module (a third-party module), and I needed to store the two credential sets associated with the Twitter Account/App. In Microsoft Flow I just chose Twitter as an Action and provided consent to the OAuth2 challenge.

Finally for the logic of the Azure Function I had to write the script to retrieve the data, manipulate it, and then post it to Twitter. In Microsoft Flow it was simply a case of configuring the workflow logic.

Microsoft Flow

As detailed above, the logic is still the same: on a schedule, get the data from the IoT devices via a REST API, manipulate/parse the response, and output a Tweet with the environment info. Doing that in Flow, though, means selecting an action and configuring it. No code, no modules, no keys.

Below is the resulting Flow (overview) that achieves the same result as the Azure Function I originally implemented, as detailed here.

MS Flow - Twitter.PNG

The schedule part is triggered hourly. Using Recurrence it is easy to set the schedule (much easier than a CRON expression in Azure Functions), complete with timezone (within the advanced section). I then get the current time so I can obtain the date and time in a format I will use in the resulting Tweet.

Schedule

Next is to perform the first RestAPI call to get the data from the first of the IoT devices. Parse the JSON response to get the temperature value.

GET

Repeat the above step for the other IoT Device located in a different environment and parse that. Formulate the Tweet using elements of information from the Flow.

Repeat and Tweet

Looking at Twitter we see a resultant Tweet from the Flow.

Tweet.PNG

Summary

This is a relatively simple Flow. Bear in mind I haven't included any logic to validate what is returned or to perform any conditional operations during processing. But it shows how quickly it is possible to retrieve, manipulate and output data to a different medium.

So why don't I use Flow for everything? The recent post I mentioned at the beginning, for the Teenager Notification Device, uses an Azure Function alongside a Flow. For that use case the integration of the IoT device with Azure IoT is via MQTT, and there isn't currently that capability in Flow. Instead, Flow was used to trigger an Azure Function that in turn sent an MQTT message to the IoT device. The combination of Flow with Functions provides a lot of flexibility and power.


Create Modern Pages and update metadata using SPFx Extensions, SP PnP JS and Azure Functions

Modern Site Pages (the "Site Page" content type) have a constraint around associating custom metadata with them. In other words, the "Site Page" content type cannot have other site columns added to it, as can be seen below.

SitePageContentTypeMissing

On another note, even though we can create a child content type from the Site Page content type, the new Site Page creation process (screenshot below) doesn't associate the new content type when the page is created, so the fields from the child content type can't be associated.

For example, in the screenshot below we created a new site page – test.aspx – using the "Intranet Site Page Content Type", which is a child of the "Site Page" content type. After the page is created, it is associated with the Site Page content type instead of the Intranet Site Page content type. We can edit it again to associate it with the Intranet Page content type, but that adds another step for end users and extra training effort.


Solution Approach:

To overcome the above constraints, we implemented a solution to associate custom metadata with Modern Site Page creation using a SharePoint Framework (SPFx) List View Command Set extension and an Azure Function. In this blog I am going to briefly talk about the approach, so it can be useful for anyone trying to do the same.

1. Create a List View Command Item for creating site pages, editing properties of site pages and promoting site pages to news
2. Create an Azure function that will create the Page using SharePoint Online CSOM
3. Call the Azure Function from the SPFx command.

A brief screenshot of the resulting SPFx extension dialog is below.

NewSitePage

Steps:

To override the modern page creation process, we will use an Azure Function with the SharePoint Online PnP Core CSOM. An outline of the code is below. At a broad level, the Azure Function does the following:

1. Get the Site Url and page name from the query parameters
2. Check whether the site page already exists
3. Create the page if it is absent
4. Save the page

Note: The below code also includes the code to check if the page exists.
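
The original code was embedded in the post; as a rough guide, a hedged C# sketch of the existence check and page creation, using the OfficeDevPnP.Core CSOM extensions, could look like this (authentication details and names are illustrative, not the original code):

// Hedged sketch, not the original function. Assumes the OfficeDevPnP.Core
// (SharePoint PnP CSOM) package and app-only credentials held in app settings.
using Microsoft.SharePoint.Client;
using OfficeDevPnP.Core;

public static bool PageExists(ClientContext context, string serverRelativePageUrl)
{
    try
    {
        var file = context.Web.GetFileByServerRelativeUrl(serverRelativePageUrl);
        context.Load(file, f => f.Exists);
        context.ExecuteQuery();
        return file.Exists;
    }
    catch (ServerException)
    {
        return false; // older CSOM versions throw "File Not Found" here
    }
}

// siteUrl and pageName come from the query string parameters
using (var context = new AuthenticationManager()
    .GetAppOnlyAuthenticatedContext(siteUrl, appId, appSecret))
{
    var pageUrl = $"{new Uri(siteUrl).AbsolutePath}/SitePages/{pageName}";
    if (!PageExists(context, pageUrl))
    {
        // Create and save (publish) the modern page
        var page = context.Web.AddClientSidePage(pageName, true);
    }
}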

Next, create an SPFx List View Command Set extension and an SP dialog component that will allow us to call the Azure Function from the Site Pages library to create pages. The code uses the Fetch API to call the Azure Function, passing the Site Url and page name parameters the Azure Function needs to create the page. After the page is created, the Azure Function responds with a success status, which can be used to confirm the page creation.
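
A minimal sketch of that call from the command's handler is below; the Function URL (including its code key) and the query parameter names are placeholders:

// Hedged sketch; runs inside an async onExecute handler. The Function URL
// and parameter names are placeholders, not the original values.
const functionUrl = "https://myfunctions.azurewebsites.net/api/CreateModernPage";

const response = await fetch(
  `${functionUrl}?siteUrl=${encodeURIComponent(siteUrl)}&pageName=${encodeURIComponent(pageName)}`,
  { method: "POST" }
);

if (response.ok) {
  // Page created - safe to close the dialog and refresh the list view
}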

Note: Make sure that the dialog is locked while this operation is in progress, so implement code to prevent closing or resubmitting the form.

After the pages are created, let's update the properties of the item using the PnP JS library with the code below.
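
Something along these lines, using the sp-pnp-js library (the content type ID, field name and item ID are illustrative):

// Hedged sketch using sp-pnp-js; inside an async method.
import pnp from "sp-pnp-js";

// intranetPageContentTypeId and itemId are illustrative placeholders
await pnp.sp.web.lists.getByTitle("Site Pages")
  .items.getById(itemId)
  .update({
    ContentTypeId: intranetPageContentTypeId, // associate the child content type
    PageCategory: "News"                      // example custom metadata column
  });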

Conclusion:

As we can see above, we have overridden the page creation process with our own Azure Function using an SPFx List View command and PnP JS. I will be detailing the SP dialog for SPFx extensions in an upcoming blog, so keep an eye out for it.

There are still some limitations to the above approach, listed below; you might have to get business approval for them.

1. Cannot hide the out-of-the-box 'New Page' option from inside the extension.
2. Cannot rearrange the order of the command control; it is displayed after the out-of-the-box SharePoint elements.

Automate SharePoint Document Libraries Management using Flow, Azure Functions and SharePoint CSOM

I've been working on a client requirement to automate SharePoint library management via scripts, implementing a document lifecycle across many document libraries that have custom content types and require regular housekeeping of ownership and permissions.

Solution overview

To provide a seamless user experience, we decided to do the following:

1. Create a document library template (.stp) with all the prerequisite folders and content types applied.

2. Create a list to store the data about entries for said libraries. Add the owner and contributors for the library as columns in that list.

3. Whenever the title, owners or contributors are changed, the destination document library will be updated.

Technical Implementation

The solution has two main elements to automate this process

1. Microsoft Flow – Trigger when an item is created or modified

2. Two Azure Functions – Create the library and update permissions

The broad steps and code are as follows:

1. When the flow is triggered, we would check the status field to find if it is a new entry or a change.

Note: Since Microsoft Flow doesn't have conditional triggers to differentiate between created and modified list item events, use a text column in the SharePoint list, set to start, in progress and completed values, to identify create and update events.

2. The Flow will call an Azure Function via an HTTP POST action. Below is the configuration of this.

AzureFunctionFromFlow

3. For the "Create Library" Azure Function, create an HTTP-triggered C# Function in your Azure subscription.

4. In the Azure Function, open Properties -> App Service Editor. Then add a folder called bin and copy these two files into it:

  • Microsoft.SharePoint.Client.dll
  • Microsoft.SharePoint.Client.Runtime.dll

KuduTools_AzureFunction

Create Lib App Service Editor

Please make sure to get the latest copy of the SharePointPnPCoreOnline NuGet package. To do that, you can set up a Visual Studio solution and copy the files from there, or download the NuGet package directly and extract the files from it.

5. After copying the files, reference them in the Azure function using the below code

#r "Microsoft.SharePoint.Client.dll"
#r "Microsoft.SharePoint.Client.Runtime.dll"
#r "System.Configuration"
#r "System.Web"

6. Then create the SharePoint client context and create a connection to the source list.

7. After that, use the ListCreationInformation class to create the Document library from the library template using the code below.

8. After the library is created, break the role inheritance for the library as per the requirement

9. Update the library permissions using the role assignment object

10. To differentiate between People, SharePoint Groups and AD Groups, find the unique ID and add the group as per the script below.

Note: In case you have people objects that are not in AD anymore because they have left the organisation, please refer to this blog for validating them before updating – Resolving “User not found” issue while assigning permissions using SharePoint CSOM

Note: Avoid item.Update() in the Azure Function, as it will trigger a second Flow run and cause an iterative loop; use item.SystemUpdate() instead.

11. After the update is done, return to the Flow with the success value from the Azure Function which will complete the loop.
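
Pulling steps 6–10 together, a condensed, hedged sketch is below; the site URL, credentials, template name and owner login are illustrative, not the original scripts.

// Hedged sketch of steps 6-10, not the original code. Assumes the SharePoint
// CSOM assemblies referenced above.
using System.Linq;
using Microsoft.SharePoint.Client;

var context = new ClientContext(siteUrl)
{
    Credentials = new SharePointOnlineCredentials(userName, securePassword)
};

// 7. Create the document library from the custom library template (.stp)
var templates = context.Site.GetCustomListTemplates(context.Web);
context.Load(templates);
context.ExecuteQuery();
var template = templates.First(t => t.Name == "DocLifecycleTemplate");

var creation = new ListCreationInformation
{
    Title = libraryTitle,
    TemplateFeatureId = template.FeatureId,
    TemplateType = template.ListTemplateTypeKind
};
var library = context.Web.Lists.Add(creation);

// 8. Break role inheritance so the library carries its own permissions
library.BreakRoleInheritance(false, false);

// 9./10. Grant the owner Full Control via a role assignment
var owner = context.Web.EnsureUser(ownerLogin);
var roleDef = context.Web.RoleDefinitions.GetByType(RoleType.Administrator);
var bindings = new RoleDefinitionBindingCollection(context) { roleDef };
library.RoleAssignments.Add(owner, bindings);

context.ExecuteQuery();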

As shown above, we can automate document library creation from a template, and the associated permissions management, using Flow and Azure Functions.

Automating the generation of Microsoft Identity Manager Configuration Documentation

Introduction

Last year Microsoft released the Microsoft Identity Manager Configuration Documenter which is available here. It is a fantastic little tool from Microsoft that supersedes its predecessor from the Microsoft Identity Manager 2003 Resource Toolkit (which only documented the Sync Server Configuration).

Running the tool (a PowerShell module) against a base out-of-the-box reference configuration for FIM/MIM servers, reconciled against an exported configuration from an implementation's MIM Sync and Service servers, generates an HTML report that details the existing configuration of the MIM Service and MIM Sync.

Overview

Last year I wrote this post based on an automated solution I implemented to perform nightly backups of a FIM/MIM environment during development.

This post details how I've automated another daily task for a large development environment where a number of changes are going on: I wanted documentation generated that detailed the configuration for each day, partly to quickly work out what has changed when needing to roll back or re-validate changes, and also to keep the individual configs from each day so they can be used if we need to roll back.

The process uses an Azure Function App that leverages Remote PowerShell into MIM to:

  1. Leverage a modified (streamlined) version of my nightly backup Azure Function to generate the Schema.xml and Policy.xml MIM Service configuration files, and the Lithnet MIIS Automation PowerShell Module (installed on the MIM Sync Server) to export the MIM Sync Server configuration
  2. Create a sub-directory for each day under the MIM Documenter Tool to hold the daily configs
  3. Execute the generation of the Report and have the Report copied to the daily config/documented solution

Obtaining and configuring the MIM Configuration Documenter

Download the MIM Configuration Documenter from here and extract it to somewhere like c:\FIMDoco on your FIM/MIM Sync Server. In this example in my Dev environment I have the MIM Sync and Service/Portal all on a single server.

Then update Invoke-Documenter-Contoso.ps1 (or whatever you've renamed the script to) to make the following changes:

  • Update the following lines for your version, include the new variable $schedulePath, and add it to the $pilotConfig variable. Create the C:\FIMDoco\Customer and C:\FIMDoco\Customer\Dev directories (replace Customer with something appropriate).
######## Edit as appropriate ####################################
$schedulePath = Get-Date -format dd-MM-yyyy
$pilotConfig = "Customer\Dev\$($schedulePath)" # the path of the Pilot / Target config export files relative to the MIM Configuration Documenter "Data" folder.
$productionConfig = "MIM-SP1-Base_4.4.1302.0" # the path of the Production / Baseline config export files relative to the MIM Configuration Documenter "Data" folder.
$reportType = "SyncAndService" # "SyncOnly" # "ServiceOnly"
#################################################################
  • Remark out the Host Settings as these won’t work via a WebJob/Azure Function
#$hostSettings = (Get-Host).PrivateData
#$hostSettings.WarningBackgroundColor = "red"
#$hostSettings.WarningForegroundColor = "white"
  • Remark out the last line as this will be executed as part of the automation and we want it to complete silently at the end.
# Read-Host "Press any key to exit"

It should then look something like this:

Azure Function to Automate execution of the Documenter

As per my nightly backup process:

  • I configured my MIM Sync Server to accept Remote PowerShell sessions. That involved enabling WinRM, creating a certificate, creating the listener, opening the firewall port and enabling the incoming port on the NSG. You can easily do all that by following my instructions here. From the same post I set up the encrypted password file, uploaded it to my Function App, and set the Function App Application Settings for MIMSyncCredUser and MIMSyncCredPassword.
  • I created an Azure PowerShell Timer Function App. Pretty much the same as I show in this post, except choose Timer.
    • I configured my Schedule for 6am every morning using the following CRON configuration
0 0 6 * * *
  • I also needed to increase the timeout for the Azure Function, as the generation of the configuration files and the execution of the report exceeded the default timeout of 5 minutes in my environment (19 Management Agents). I increased the timeout to the maximum of 10 minutes as detailed here; essentially, I added the following to the host.json file in the wwwroot directory of my Function App.
{
 "functionTimeout": "00:10:00"
}

Azure Function PowerShell Timer Script (Run.ps1)

This is the Function App PowerShell Script that uses Remote PowerShell into the MIM Sync/Service Server to export the configuration using the Lithnet MIIS Automation and Microsoft FIM Automation PowerShell modules.

Note: If your MIM Service is on a different host, you will need to install the Microsoft FIM Automation PowerShell Module on your MIM Sync Server and update the script below, changing references to http://localhost:5725 to whatever your MIM Service host is.
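
The full script was embedded in the original post; the outline below is a hedged sketch of its shape only. The server name, path and credential handling are illustrative (the original used an encrypted password file rather than a plain app setting).

# Hedged outline of Run.ps1; server name, paths and credential handling are
# illustrative, not the original code.
$user = $env:MIMSyncCredUser
$securePassword = $env:MIMSyncCredPassword | ConvertTo-SecureString -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential ($user, $securePassword)

$session = New-PSSession -ComputerName 'mimsyncserver.customer.com' -Credential $cred -UseSSL

Invoke-Command -Session $session -ScriptBlock {
    # Export the MIM Service Schema and Policy configs, export the Sync Server
    # config with the Lithnet MIIS Automation module, then run the Documenter
    & 'C:\FIMDoco\Invoke-Documenter-Contoso.ps1'
}

Remove-PSSession $session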

Testing the Function App

With everything configured, manually run the Function App and check the output window; if you've configured everything correctly you will see success in the Logs, as shown below. In this environment with 19 Management Agents it takes 7 minutes to run.

Running the Azure Function.PNG

The Report

The outcome every day, just after 6am, is that I have (via automation):

  • an Export of the Policy and Schema Configuration from my MIM Service
  • an Export of the MIM Sync Server Configuration (the Metaverse and all Management Agents)
  • the MIM Configuration Documenter Report generated
  • the ability to roll back changes on a daily interval if I need to (either for a MIM Service change or an individual Management Agent change)

Under the c:\FIMDoco\Data\Customer\Dev\Report directory is the HTML Configuration Report.

Report Output.PNG

Opening the report in a browser we have the configuration of the MIM Sync and MIM Service.

Report


Azure Functions Cold Start Workaround

Intro

I love Azure Functions. So much power for so little effort or cost. The only downside is that the consumption model that keeps the cost so dirt-cheap means that unless you are using your Function constantly (in which case, you might be better off with the non-consumption options anyway), you will often be hit with a long delay as your Function wakes up from hibernation.

So very cold…

This isn't a big deal if you are dealing with a fire-and-forget queue trigger scenario, but if you have a web app that is calling the HTTP trigger and you need to wait for the Function to do its job before responding with a 200 OK… that's a long wait (well over 15 seconds in my experience with a PowerShell function that loads a bunch of modules).

Now, the blunt way to mitigate this (as suggested by some in GitHub issues on the subject) is to set up a timer function in the same Function App to run every 5 minutes to keep things warm. That seems to me a wasteful and potentially expensive approach. For my use case, there was a better way.

The Classic CRUD Use-case

Here’s my use case: I’m building some custom SharePoint forms for a customer and using my preferred JS framework, good old Angular 1.x. Don’t believe the hype around the newer frameworks, ng1 still gets the job done with no performance problems at the scale I’m dealing with in SharePoint. It also comes with a very strong ecosystem of libraries to support doing amazing things in the browser. But that’s a topic for another blog.

Anyway, the only thing I couldn't do effectively on the client side was break permissions on the list item created via the form and secure it to the creator and some other users (e.g. their manager). You need elevated permissions for that. I called on the awesome power of the PnP PowerShell library (specifically the Set-PnPListItemPermission cmdlet) to do this and wrapped it in a PowerShell Azure Function:
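
Something along these lines – a hedged sketch rather than the original function, with illustrative parameter names and app-only credentials held in app settings:

# Hedged sketch, not the original function; parameter names are illustrative.
# Assumes the SharePointPnPPowerShellOnline module sits in the Function's 'modules' folder.
$requestBody = Get-Content $req -Raw | ConvertFrom-Json

Connect-PnPOnline -Url $requestBody.siteUrl -AppId $env:PnPAppId -AppSecret $env:PnPAppSecret

# Secure the new item to the creator (and e.g. their manager) only
Set-PnPListItemPermission -List $requestBody.listName -Identity $requestBody.itemId `
    -User $requestBody.userLogin -AddRole 'Contribute' -ClearExisting

Out-File -Encoding Ascii -FilePath $res -InputObject 'Permissions set'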

Pretty simple. Nice and clean – gets the job done. My Angular service calls this right after saving the item based on the user's form input. If the Azure Function is starting from cold, that adds an extra 20 seconds to the save operation. Unacceptable and avoidable.

A more nuanced warmup for those who can…

This seemed like such an obvious solution once I hit on it – but I hadn’t thought of it before. When the form is first opened, it’s pretty reasonable to assume that (unless the form is a monster), the user should be hitting that submit/save button in under 5 mins. So that’s when we hit our function with a modified HTTP payload of ‘WARMUP’.

Just a simple 'if' statement bypasses the Function body when the 'WARMUP' payload is detected, and the Function immediately responds with a 200. We ignore the response – this is effectively fire-and-forget. Yes, this would be even simpler with a separate warmup Function that did absolutely nothing except warm up the Function App, but I like that this approach ensures my dependent PnP DLLs (in the 'modules' folder of my Function) have been loaded on the current instance before I hit the Function for real. Maybe it makes no difference. Don't really care – this is simple enough anyway.
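
The guard looks something like this sketch (the payload shape is assumed from the description above):

# Hedged sketch of the warmup guard at the top of the Function
if ($requestBody.payload -eq 'WARMUP') {
    # Fire-and-forget warmup call - respond 200 and skip the real work
    Out-File -Encoding Ascii -FilePath $res -InputObject 'Warmed up'
    return
}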

Here’s the Angular code that calls the Function (both as a warmup and for real):
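
The original snippet was embedded in the post; a hedged Angular 1.x sketch of the same idea is below (the module, service and Function URL names are placeholders):

// Hedged sketch; the Function URL (with its code key) and names are placeholders.
angular.module('app').factory('permissionsService', function ($http) {
  var functionUrl = 'https://myfunctions.azurewebsites.net/api/SetItemPermissions';

  return {
    // Fire-and-forget warmup, called when the form first loads
    warmup: function () {
      $http.post(functionUrl, { payload: 'WARMUP' });
    },
    // The real call, made right after the list item is saved
    setPermissions: function (siteUrl, listName, itemId, userLogin) {
      return $http.post(functionUrl, {
        siteUrl: siteUrl, listName: listName, itemId: itemId, userLogin: userLogin
      });
    }
  };
});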

Anyway, nothing revolutionary here, I know. But I hadn’t come across this approach before, so I thought it was worth writing up as it suits this standard CRUD forms over data scenario so nicely.

Till next time!

Azure Function Proxies for API Mocking

In my previous posts, Is Your Serverless Application Testable? – Azure Logic Apps and API Mocking for Developers, we looked at how to mock APIs with various approaches including Azure API Management, AWS API Gateway, MuleSoft and Azure Functions. Quite recently, the Azure Functions team released a new mocking feature in Azure Function Proxies. With this feature, API mocking couldn't be easier. In this post, I'm going to show how to use this mocking feature in several ways.

Enable API Mocking via Azure Portal

This is pretty straight forward. Within Azure Portal, simply enable the Azure Function Proxies feature:

If you have already enabled this feature, make sure that its runtime version is 0.2. Then click the plus sign next to Proxies (preview) and put details like:

At this stage, we may notice something interesting: this proxy doesn't have to link to an existing HTTP endpoint. In other words, we don't need a real API for mocking – just create a mocked HTTP endpoint and use it. Once we save this, we'll have an endpoint like:

Use Postman to hit this mocked URL:

We receive the mocked HTTP status code and response body. How easy!

But if we have an existing Azure Function app and want to integrate this mocking feature via CI/CD pipeline, what can we do? Let’s move on.

Enable Azure Function Proxy Option via ARM Template

An Azure Functions app instance is basically the same as a web app instance, so we can use the same ARM template definition for app settings configuration. In order to enable Azure Function Proxies, we simply add a ROUTING_EXTENSION_VERSION key with a value of ~0.2. Here's a sample ARM template definition:
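
The template was embedded in the original post; a minimal fragment, nested under the Microsoft.Web/sites resource (the parameter name is illustrative), might look like:

{
  "type": "config",
  "apiVersion": "2016-08-01",
  "name": "appsettings",
  "dependsOn": [
    "[resourceId('Microsoft.Web/sites', parameters('functionAppName'))]"
  ],
  "properties": {
    "ROUTING_EXTENSION_VERSION": "~0.2"
  }
}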

Once we update our ARM template, we can simply deploy this template to Azure.

Enable Azure Function Proxy Option via PowerShell

If an ARM template is not feasible for enabling this proxy option, we can use a PowerShell script (or its Azure CLI equivalent). Here's a sample script:
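
The script was embedded in the original post; a hedged sketch of the same idea using the AzureRM module (parameter names are illustrative):

# Hedged sketch; assumes the AzureRM module and an existing Login-AzureRmAccount session.
param(
    [Parameter(Mandatory = $true)][string]$ResourceGroupName,
    [Parameter(Mandatory = $true)][string]$FunctionAppName
)

# Read the existing app settings, add/overwrite the proxies key, and write them back
$app = Get-AzureRmWebApp -ResourceGroupName $ResourceGroupName -Name $FunctionAppName
$settings = @{}
foreach ($setting in $app.SiteConfig.AppSettings) { $settings[$setting.Name] = $setting.Value }
$settings['ROUTING_EXTENSION_VERSION'] = '~0.2'

Set-AzureRmWebApp -ResourceGroupName $ResourceGroupName -Name $FunctionAppName -AppSettings $settings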

This PowerShell script can then be invoked with parameters like:
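
# Illustrative invocation; the script file name is a placeholder
.\Enable-FunctionProxies.ps1 -ResourceGroupName 'my-resource-group' -FunctionAppName 'my-function-app'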

After either approach – ARM template or PowerShell – is executed, the Function App instance will have a value like:

Now, we have proxies feature enabled through code! Let’s move on.

Deploy Azure Function Proxy with Azure Function App

Defining Azure Function Proxies is just a matter of adding another JSON file, proxies.json, to the app instance. The JSON schema definition can be found at http://json.schemastore.org/proxies. So, basically, proxies.json looks like:
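
A minimal example, reconstructed from the description that follows:

{
  "$schema": "http://json.schemastore.org/proxies",
  "proxies": {
    "mock-hello-world": {
      "matchCondition": {
        "methods": [ "GET" ],
        "route": "/mock/hello-world"
      },
      "responseOverrides": {
        "response.statusCode": "200",
        "response.headers.Content-Type": "application/json",
        "response.body": { "message": "Hello World" }
      }
    }
  }
}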

As defined above, the mocked API has a URL of /mock/hello-world with response code of 200 and body { "message": "Hello World" }. As long as we follow this structure, we can add as many mocked APIs as we like.

Once we add this proxies.json into our Azure Functions project, deploy it to Azure. Then we’ll be able to see the list of mocked APIs.

So far, we have looked at how to add mocked APIs to an Azure Functions instance in various ways. Depending on your preferences, any approach would be fine – but this is surely one of the easiest and cheapest ways to mock API endpoints.

Monitoring Azure Storage Queues with Application Insights and Azure Monitor

Azure Queues provides an easy queuing system for cloud-based applications. Queues allow for loose coupling between application components, and applications that use queues can take advantage of features like peek-locking and multiple retry attempts to enable application resiliency and high availability. Additionally, when Azure Queues are used with Azure Functions or Azure WebJobs, the built-in poison queue support allows for messages that repeatedly fail processing attempts to be moved to a dedicated queue for later inspection.

An important part of operating a queue-based application is monitoring the length of queues. This can tell you whether the back-end parts of the application are responding, whether they are keeping up with the amount of work they are being given, and whether there are messages that are causing problems. Most applications will have messages being added to and removed from queues as part of their regular operation. Over time, an operations team will begin to understand the normal range for each queue’s length. When a queue goes out of this range, it’s important to be alerted so that corrective action can be taken.

Azure Queues don’t have a built-in queue length monitoring system. Azure Application Insights allows for the collection of large volumes of data from an application, but it does not support monitoring queue lengths with its built-in functionality. In this post, we will create a serverless integration between Azure Queues and Application Insights using an Azure Function. This will allow us to use Application Insights to monitor queue lengths and set up Azure Monitor alert emails if the queue length exceeds a given threshold.

Solution Architecture

There are several ways that Application Insights could be integrated with Azure Queues. In this post we will use Azure Functions. Azure Functions is a serverless platform, allowing for blocks of code to be executed on demand or at regular intervals. We will write an Azure Function to poll the length of a set of queues, and publish these values to Application Insights. Then we will use Application Insights’ built-in analytics and alerting tools to monitor the queue lengths.

Base Application Setup

For this sample, we will use the Azure Portal to create the resources we need. You don’t even need Visual Studio to follow along. I will assume some basic familiarity with Azure.

First, we’ll need an Azure Storage account for our queues. In our sample application, we already have a storage account with two queues to monitor:

  • processorders: this is a queue that an API publishes to, and a back-end WebJob reads from the queue and processes its items. The queue contains orders that need to be processed.
  • processorders-poison: this is a queue that WebJobs has created automatically. Any messages that cannot be processed by the WebJob (by default after five attempts) will be moved into this queue for manual handling.

Next, we will create an Azure Functions app. When we create this through the Azure Portal, the portal helpfully asks if we want to create an Azure Storage account to store diagnostic logs and other metadata. We will choose to use our existing storage account, but if you prefer, you can have a different storage account than the one your queues are in. Additionally, the portal offers to create an Application Insights account. We will accept this, but you can create it separately later if you want.

1-FunctionsApp

Once all of these components have been deployed, we are ready to write our function.

Azure Function

Now we can write an Azure Function to connect to the queues and check their length.

Open the Azure Functions account and click the + button next to the Functions menu. Select a Timer trigger. We will use C# for this example. Click the Create this function button.

2-Function

By default, the function will run every five minutes. That might be sufficient for many applications. If you need to run the function on a different frequency, you can edit the schedule element in the function.json file and specify a cron expression.

Next, paste the following code over the top of the existing function:
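
A hedged reconstruction of that code, assembled from the key parts described below, is shown here; the queue names match the sample application.

// Hedged reconstruction, not the post's exact code. Runs as a v1 C# script
// (.csx) timer function.
#r "Microsoft.WindowsAzure.Storage"

using System;
using System.Configuration;
using Microsoft.WindowsAzure.Storage;

private static string[] queueNames = new[] { "processorders", "processorders-poison" };

public static void Run(TimerInfo myTimer, TraceWriter log)
{
    // Connect to the storage account the Functions app itself uses
    var connectionString = ConfigurationManager.AppSettings["AzureWebJobsStorage"];
    var storageAccount = CloudStorageAccount.Parse(connectionString);
    var queueClient = storageAccount.CreateCloudQueueClient();

    foreach (var queueName in queueNames)
    {
        // FetchAttributes is required before reading ApproximateMessageCount
        var queue = queueClient.GetQueueReference(queueName);
        queue.FetchAttributes();
        var length = queue.ApproximateMessageCount;
        log.Info($"{queueName}: {length}");
    }
}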

This code connects to an Azure Storage account and retrieves the length of each queue specified. The key parts here are:

var connectionString = System.Configuration.ConfigurationManager.AppSettings["AzureWebJobsStorage"];

Azure Functions has an application setting called AzureWebJobsStorage. By default this refers to the storage account created when we provisioned the functions app. If you wanted to monitor a queue in another account, you could reference the storage account connection string here.

var queue = queueClient.GetQueueReference(queueName);
queue.FetchAttributes();
var length = queue.ApproximateMessageCount;

When you obtain a reference to a queue, you must explicitly fetch the queue attributes in order to read the ApproximateMessageCount. As the name suggests, this count may not be completely accurate, especially in situations where messages are being added and removed at a high rate. For our purposes, an approximate message count is enough for us to monitor.

log.Info($"{queueName}: {length}");

For now, this line will let us view the length of the queues within the Azure Functions log window. Later, we will switch this out to log to Application Insights instead.

Click Save and run. You should see something like the following appear in the log output window below the code editor:

2017-09-07T00:35:00.028 Function started (Id=57547b15-4c3e-42e7-a1de-1240fdf57b36)
2017-09-07T00:35:00.028 C# Timer trigger function executed at: 9/7/2017 12:35:00 AM
2017-09-07T00:35:00.028 processorders: 1
2017-09-07T00:35:00.028 processorders-poison: 0
2017-09-07T00:35:00.028 Function completed (Success, Id=57547b15-4c3e-42e7-a1de-1240fdf57b36, Duration=9ms)

Now we have our function polling the queue lengths. The next step is to publish these into Application Insights.

Integrating into Azure Functions

Azure Functions has integration with Application Insights for logging each function execution. In this case, we want to save our own custom metrics, which the built-in integration does not currently support. Thankfully, integrating the full Application Insights SDK into our function is very easy.

First, we need to add a project.json file to our function app. To do this, click the View files tab on the right pane of the function app. Then click the + Add button, and name your new file project.json. Paste in the following:
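
Something like the following should work on the v1 (.NET 4.6) runtime; the package version shown is simply one current at the time of writing:

{
  "frameworks": {
    "net46": {
      "dependencies": {
        "Microsoft.ApplicationInsights": "2.4.0"
      }
    }
  }
}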

This adds a NuGet reference to the Microsoft.ApplicationInsights package, which allows us to use the full SDK from our function.

Next, we need to update our function so that it writes the queue length to Application Insights. Click on the run.csx file on the right-hand pane, and replace the current function code with the following:

The key new parts that we have just added are:

private static string key = TelemetryConfiguration.Active.InstrumentationKey = System.Environment.GetEnvironmentVariable("APPINSIGHTS_INSTRUMENTATIONKEY", EnvironmentVariableTarget.Process);
private static TelemetryClient telemetry = new TelemetryClient() { InstrumentationKey = key };

This sets up an Application Insights TelemetryClient instance that we can use to publish our metrics. Note that in order for Application Insights to route the metrics to the right place, it needs an instrumentation key. Azure Functions’ built-in integration with Application Insights means that we can simply reference the instrumentation key it has set up for us. If you did not set up Application Insights at the same time as the function app, you can configure this separately and set your instrumentation key here.

Now we can publish our custom metrics into Application Insights. Note that Application Insights has several different types of custom diagnostics that can be tracked. In this case, we use metrics since they provide the ability to track numerical values over time, and set up alerts as appropriate. We have added the following line to our foreach loop, which publishes the queue length into Application Insights.

telemetry.TrackMetric($"Queue length - {queueName}", (double)length);

Click Save and run. Once the function executes successfully, wait a few minutes before continuing with the next step – Application Insights takes a little time (usually less than five minutes) to ingest new data.

Exploring Metrics in Application Insights

In the Azure Portal, open your Application Insights account and click Metrics Explorer in the menu. Then click the + Add chart button, and expand the Custom metric collection. You should see the new queue length metrics listed.

4-AppInsights

Select the metrics, and as you do, notice that a new chart is added to the main panel showing the queue length count. In our case, we can see the processorders queue count fluctuate between one and five messages, while the processorders-poison queue stays empty. Set the Aggregation property to Max to better see how the queue fluctuates.

5-AppInsightsChart

You may also want to click the Time range button in the main panel, and set it to Last 30 minutes, to fully see how the queue length changes during your testing.

Setting up Alerts

Now that we can see our metrics appearing in Application Insights, we can set up alerts to notify us whenever a queue exceeds some length we specify. For our processorders queue, this might be 10 messages – we know from our operations history that if we have more than ten messages waiting to be processed, our WebJob isn’t processing them fast enough. For our processorders-poison queue, though, we never want to have any messages appear in this queue, so we can have an alert if more than zero messages are on the queue. Your thresholds may differ for your application, depending on your requirements.

Click the Alert rules button in the main panel. Azure Monitor opens. Click the + Add metric alert button. (You may need to select the Application Insights account in the Resource drop-down list if this is disabled.)

On the Add rule pane, set the values as follows:

  • Name: Use a descriptive name here, such as `QueueProcessOrdersLength`.
  • Metric: Select the appropriate `Queue length – queuename` metric.
  • Condition: Set this to the value and time period you require. In our case, I have set the rule to `Greater than 10 over the last 5 minutes`.
  • Notify via: Specify how you want to be notified of the alert. Azure Monitor can send emails, call a webhook URL, or even start a Logic App. In our case, I have opted to receive an email.

Click OK to save the rule.

6-Alert.PNG

If the queue count exceeds your specified limit, you will receive an email shortly afterwards with details:

7-Alert

Summary

Monitoring the length of your queues is a critical part of operating a modern application, and getting alerted when a queue is becoming excessively long can help to identify application failures early, and thereby avoid downtime and SLA violations. Even though Azure Storage doesn’t provide a built-in monitoring mechanism, one can easily be created using Azure Functions, Application Insights, and Azure Monitor.

Receive Push Notifications from Microsoft Identity Manager on your Mobile/Tablet/Computer

Background

Recently, in a FIM/MIM environment, a daily automated process was executing, but the task it was performing depended on an upstream process that generates a feed, and the schedule for that feed had changed (without notice to me). Needless to say, FIM/MIM wasn't getting the information it needed to process. This got me thinking about notifications.

If you're anything like me, you probably have numerous email accounts and your subconscious has all but programmed itself to ignore "new email" notifications. Push notifications, however, I typically do notice. Whilst in the example above I did have some error handling in place for complete failure (it is a development environment), I didn't have anything for partial failures. Anyway, it got me thinking that I'd like to receive a notification if something that should happen didn't.

Overview

This post details using push notifications to advise when expected events don't transpire. In this particular example, I have an Azure Function App that connects once a day to an FTP server, retrieves a series of exports and puts them on my FIM/MIM Synchronisation Server. The push notification service I am using is Push Bullet. Free Push Bullet accounts (without a Pro subscription) are limited to 500 pushes per month. That should be more than enough – if I've got errors in excess of 500 per month, I've got much bigger problems.

Getting Started

First up, you will need to sign up for Push Bullet. It is very straightforward if you have a Facebook or Google account. As you probably want multiple people to receive the notifications, it pays to set up a shared Google account that your team can use to connect their devices. Once you have an account, head to your new Account Settings page and create an Access Token. Record it for use in the scripts below.

Connecting to the API

Test that you can access the Push Bullet API using your Access Token and PowerShell. Update the following script with your Access Token on line 3 and execute it. You should see information returned that is associated with your new Push Bullet account.
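
The script was embedded in the original post; a hedged equivalent looks like this:

# Hedged equivalent of the embedded script
$apiURI = 'https://api.pushbullet.com/'
$accessToken = 'YOUR-ACCESS-TOKEN-HERE'  # line 3 - your Access Token
$header = @{ 'Access-Token' = $accessToken }

# Returns the details of your Push Bullet account
Invoke-RestMethod -Method Get -Headers $header -Uri ($apiURI + 'v2/users/me')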

Next you will want to install the Push Bullet App on the device(s) you want to get the notification(s) on. I installed it on my Apple iPhone and also installed the Chrome Browser extension.

Using PowerShell we can then query the devices connected to the account. In the same PowerShell session you tested the API with above, run this API call:

$devices = Invoke-RestMethod -Method Get -Headers $header -Uri ($apiURI +"v2/devices")
$devices

This will return your registered devices.

Devices

If we want a notification to target a particular device, we need to provide the Iden value associated with that device. If we don't specify a target, the push notification will hit all devices. In my example above, with two devices registered, my iPhone was device two, so I could get its target Iden with:

$iphoneIden = $devices.devices[1].iden

Push Bullet allows for different notification types (Note, Link and File). Note is the one I'll be using. More info on the other types is here.

Sending Test Notification

To perform a notification test, update the following script with your Access Token (line 3). I've omitted the Device Identifier so the message goes to all devices. I also had to log out of the iOS Push Bullet app and log in again to get the notifications to show.
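
As with the earlier script, the original was embedded in the post; a hedged equivalent:

# Hedged equivalent of the embedded script
$apiURI = 'https://api.pushbullet.com/'
$accessToken = 'YOUR-ACCESS-TOKEN-HERE'  # line 3 - your Access Token
$header = @{ 'Access-Token' = $accessToken }

$push = @{
    type  = 'note'
    title = 'FIM/MIM Notification Test'
    body  = 'If you can read this, push notifications are working.'
    # to target one device, add: device_iden = $iphoneIden
} | ConvertTo-Json

Invoke-RestMethod -Method Post -Headers $header -Uri ($apiURI + 'v2/pushes') -Body $push -ContentType 'application/json'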

Success. I received the notification on my iPhone and also in my Chrome browser.

IMG-5194

Implementation

Getting back to my requirement of being notified when a job didn't find what it expected: I updated my PowerShell Function App (based off this blog post here) to evaluate what it processed and, if it didn't find what was expected, send me a notification. I already had some error handling in my implementation based on that blog post, but it covered full failure, not partial failure (which is what I was experiencing, whereby only one part of the process wasn't returning data).

NOTE: I also had to add the ServerCertificateValidationCallback line to my Function App script before calling the API POST to send the notification, as I was getting the dreaded PowerShell Invoke-RestMethod / Invoke-WebRequest error below when sending the notification via the Function App. I didn't get that error on my dev workstation, which is a bit weird.

Invoke-WebRequest : The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure 
channel.

If you also receive the error above (or you will be sending push notifications via Azure Function Apps), insert this line before your Invoke-RestMethod call.

 [System.Net.ServicePointManager]::ServerCertificateValidationCallback = {$true}

Summary

Essentially this is my first foray into enabling anything for push notifications, and this post is food for thought on what can easily be enabled within FIM/MIM to give timely visibility to automated scheduled functions when they don't perform as expected. It was incredibly simple to set up and get working. I see myself enabling more FIM/MIM functions with push notifications in the future.