Automatic Key Rotation for Azure Services

Securely managing keys for the services we use is an important, and sometimes difficult, part of building and running a cloud-based application. In general I prefer not to handle keys at all, and instead rely on approaches like managed service identities with role-based access control, which allow applications to authenticate and authorise themselves without any keys being explicitly exchanged. However, there are a number of situations where we do need to use and manage keys, such as when we use services that don't support role-based access control. One best practice that we should adopt when handling keys is to rotate (change) them regularly.

Key rotation is important to cover situations where your keys may have been compromised. Common attack vectors include keys being committed to a public GitHub repository, a key accidentally written to a log file, or a disgruntled ex-employee retaining a key that had previously been issued. Rotating the keys limits the scope of the damage; if keys aren't changed regularly, these types of vulnerability can be severe.

In many applications, keys are used in complex ways and require manual intervention to rotate. But in other applications, it’s possible to completely automate the rotation of keys. In this post I’ll explain one such approach, which rotates keys every time the application and its infrastructure components are redeployed. Assuming the application is deployed regularly, for example using a continuous deployment process, we will end up rotating keys very frequently.

Approach

The key rotation process I describe here relies on the fact that the services we’ll be dealing with – Azure Storage, Cosmos DB, and Service Bus – have both a primary and a secondary key. Both keys are valid for any requests, and they can be changed independently of each other. During each release we will pick one of these keys to use, and we’ll make sure that we only use that one. We’ll deploy our application components, which will include referencing that key and making sure our application uses it. Then we’ll rotate the other key.

The flow of the script is as follows:

  1. Decide whether to use the primary key or the secondary key for this deployment. There are several approaches to do this, which I describe below.
  2. Deploy the ARM template. In our example, the ARM template is the main thing that reads the keys. The template copies the keys into an Azure Function application’s configuration settings, as well as into a Key Vault. You could, of course, output the keys and have your deployment script put them elsewhere if you want to.
  3. Run the other deployment logic. For our simple application we don’t need to do anything more than run the ARM template deployment, but for many deployments  you might copy your application files to a server, swap the deployment slots, or perform a variety of other actions that you need to run as part of your release.
  4. Test the application is working. The Azure Function in our example will perform some checks to ensure the keys are working correctly. You might also run other ‘smoke tests’ after completing your deployment logic.
  5. Record the key we used. We need to keep track of the keys we’ve used in this deployment so that the next deployment can use the other one.
  6. Rotate the other key. Now we can rotate the key that we are not using. The way that we rotate keys is a little different for each service.
  7. Test the application again. Finally, we run one more check to ensure that our application works. This is mostly a last check to ensure that we haven’t accidentally referenced any other keys, which would break our application now that they’ve been rotated.

We don’t rotate any keys until after we’ve already switched the application to using the other set of keys, so we should never end up in a situation where we’ve referenced the wrong keys from the Azure Functions application. However, if we wanted to have a true zero-downtime deployment then we could use something like deployment slots to allow for warming up our application before we switch it into production.

A Word of Warning

If you're going to apply the approach in this post, or the code below, to your own applications, it's important to be aware of a significant limitation. The approach described here only works if your deployments are completely self-contained, with the keys only used inside the deployment process itself. If you provide keys for your components to any other systems or third parties, rotating keys in this manner will likely break their systems.

Importantly, any shared access signatures and tokens you issue will likely be broken by this process too. For example, if you provide third parties with a SAS token to access a storage account or blob, then rotating the account keys will cause the SAS token to be invalidated. There are some ways to avoid this, including generating SAS tokens from your deployment process and sending them out from there, or by using stored access policies; these approaches are beyond the scope of this post.

The next sections provide some detail on the important steps in the list above.

Step 1: Choosing a Key

The first step we need to perform is to decide whether we should use the primary or secondary keys for this deployment. Ideally each deployment would switch between them – so deployment 1 would use the primary keys, deployment 2 the secondary, deployment 3 the primary, deployment 4 the secondary, etc. This requires that we store some state about the deployments somewhere. Don’t forget, though, that the very first time we deploy the application we won’t have this state set. We need to allow for this scenario too.

The option that I’ve chosen to use in the sample is to use a resource group tag. Azure lets us use tags to attach custom metadata to most resource types, as well as to resource groups. I’ve used a custom tag named CurrentKeys to indicate whether the resources in that group currently use the primary or secondary keys.
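For example, here's a sketch (assuming the AzureRM PowerShell module and a $resourceGroupName variable set by the deployment script) of reading the tag to pick the key set, and writing it back at step 5:

# Read the CurrentKeys tag; it will be $null on the very first deployment
$resourceGroup = Get-AzureRmResourceGroup -Name $resourceGroupName
$currentKeys = $resourceGroup.Tags['CurrentKeys']

# If the last deployment used the primary keys, switch to secondary, and vice versa
$keysToUse = if ($currentKeys -eq 'Primary') { 'Secondary' } else { 'Primary' }

# Later, at step 5, record our choice for the next deployment to flip
# (note that Set-AzureRmResourceGroup -Tag replaces all tags on the group)
Set-AzureRmResourceGroup -Name $resourceGroupName -Tag @{ CurrentKeys = $keysToUse } | Out-Null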

There are other places you could store this state too – some sort of external configuration system, or within your release management tool. You could even have your deployment scripts look at the keys currently used by the application code, compare them to the keys on the actual target resources, and then infer which key set is being used that way.

A simpler alternative to maintaining state is to randomly choose to use the primary or secondary keys on every deployment. This may sometimes mean that you end up reusing the same keys repeatedly for several deployments in a row, but in many cases this might not be a problem, and may be worth the simplicity of not maintaining state.

Step 2: Deploy the ARM Template

Our ARM template includes the resource definitions for all of the components we want to create – a storage account, a Cosmos DB account, a Service Bus namespace, and an Azure Function app to use for testing. You can see the full ARM template here.

Note that we are deploying the Azure Function application code using the ARM template deployment method.

Additionally, we copy the keys for our services into the Azure Function app’s settings, and into a Key Vault, so that we can access them from our application.

Step 4: Testing the Keys

Once we’ve finished deploying the ARM template and completing any other deployment steps, we should test to make sure that the keys we’re trying to use are valid. Many deployments include some sort of smoke test – a quick test of core functionality of the application. In this case, I wrote an Azure Function that will check that it can connect to the Azure resources in question.

Testing Azure Storage Keys

To test connectivity to Azure Storage, we run a query against the storage API to check if a blob container exists. We don’t actually care if the container exists or not; we just check to see if we can successfully make the request:
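The sample's function is C#; the check looks something like this sketch (assuming the WindowsAzure.Storage SDK, with the account name and key read from the app settings):

// Build a client from the account credentials and issue a simple request.
// We only care that the call succeeds - an invalid key throws a 403.
var credentials = new StorageCredentials(storageAccountName, storageAccountKey);
var account = new CloudStorageAccount(credentials, useHttps: true);
var blobClient = account.CreateCloudBlobClient();
var container = blobClient.GetContainerReference("healthcheck");
await container.ExistsAsync();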

Testing Cosmos DB Keys

To test connectivity to Cosmos DB, we use the Cosmos DB SDK to try to retrieve some metadata about the database account. Once again we’re not interested in the results, just in the success of the API call:
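Again as a sketch, using the Microsoft.Azure.DocumentDB SDK:

// Reading the database account metadata is a lightweight way to prove the key works
var client = new DocumentClient(new Uri(cosmosDbEndpoint), cosmosDbKey);
await client.GetDatabaseAccountAsync();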

Testing Service Bus Keys

And finally, to test connectivity to Service Bus, we try to get a list of queues within the Service Bus namespace. As long as we get something back, we consider the test to have passed:
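A sketch using the NamespaceManager from the WindowsAzure.ServiceBus package:

// Listing queues requires a valid key; any result (even an empty list) means success
var namespaceManager = NamespaceManager.CreateFromConnectionString(serviceBusConnectionString);
var queues = await namespaceManager.GetQueuesAsync();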

You can view the full Azure Function here.

Step 6: Rotating the Keys

One of the last steps we perform is to actually rotate the keys for the services. The way in which we request key rotations is different depending on the services we’re talking to.

Rotating Azure Storage Keys

Azure Storage provides an API that can be used to regenerate an account key. From PowerShell we can use the New-AzureRmStorageAccountKey cmdlet to access this API:
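For example (where $keyName is 'key1' or 'key2' - whichever key set this deployment is not using):

New-AzureRmStorageAccountKey `
    -ResourceGroupName $resourceGroupName `
    -Name $storageAccountName `
    -KeyName $keyName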

Rotating Cosmos DB Keys

For Cosmos DB, there is a similar API to regenerate an account key. There are no first-party PowerShell cmdlets for Cosmos DB, so we can instead use a generic Azure Resource Manager cmdlet to invoke the API:
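For example, calling the regenerateKey action (where keyKind is 'primary' or 'secondary'):

Invoke-AzureRmResourceAction `
    -Action regenerateKey `
    -ResourceType 'Microsoft.DocumentDb/databaseAccounts' `
    -ResourceGroupName $resourceGroupName `
    -ResourceName $cosmosDbAccountName `
    -Parameters @{ keyKind = $keyKind } `
    -Force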

Rotating Service Bus Keys

Service Bus provides an API to regenerate the keys for a specified authorization rule. For this example we’re using the default RootManageSharedAccessKey authorization rule, which is created automatically when the Service Bus namespace is provisioned. The PowerShell cmdlet New-AzureRmServiceBusKey can be used to access this API:
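For example (RegenerateKey accepts PrimaryKey or SecondaryKey; parameter names vary slightly across AzureRM.ServiceBus versions):

New-AzureRmServiceBusKey `
    -ResourceGroupName $resourceGroupName `
    -Namespace $serviceBusNamespaceName `
    -Name RootManageSharedAccessKey `
    -RegenerateKey $keyToRegenerate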

You can see the full script here.

Conclusion

Key management and rotation is often a painful process, but if your application deployments are completely self-contained then the process described here is one way to ensure that you continuously keep your keys changing and up-to-date.

You can download the full set of scripts and code for this example from GitHub.

Deploying Azure Functions with ARM Templates

There are many different ways in which an Azure Function can be deployed. In a future blog post I plan to go through the whole list. There is one deployment method that isn’t commonly known though, and it’s of particular interest to those of us who use ARM templates to deploy our Azure infrastructure. Before I describe it, I’ll quickly recap ARM templates.

ARM Templates

Azure Resource Manager (ARM) templates are JSON files that describe the state of a resource group. They typically declare the full set of resources that need to be provisioned or updated. ARM templates are idempotent, so a common pattern is to run the template deployment regularly—often as part of a continuous deployment process—which will ensure that the resource group stays in sync with the description within the template.

In general, the role of ARM templates is typically to deploy the infrastructure required for an application, while the deployment of the actual application logic happens separately. However, Azure Functions’ ARM integration has a feature whereby an ARM template can be used to deploy the files required to make the function run.

How to Deploy Functions in an ARM Template

In order to deploy a function through an ARM template, we need to declare a resource of type Microsoft.Web/sites/functions, like this:
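Something like the following sketch (the app name, function name and bindings are illustrative only):

{
  "type": "Microsoft.Web/sites/functions",
  "apiVersion": "2015-08-01",
  "name": "[concat(parameters('functionAppName'), '/MyFunction')]",
  "dependsOn": [
    "[resourceId('Microsoft.Web/sites', parameters('functionAppName'))]"
  ],
  "properties": {
    "config": {
      "bindings": [
        {
          "type": "httpTrigger",
          "name": "req",
          "direction": "in",
          "authLevel": "function"
        },
        {
          "type": "http",
          "name": "res",
          "direction": "out"
        }
      ],
      "disabled": false
    },
    "files": {
      "run.csx": "[parameters('functionCode')]"
    }
  }
}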

There are two important parts to this.

First, the config property is essentially the contents of the function.json file. It includes the list of bindings for the function, and in the example above it also includes the disabled property.

Second, the files property is an object that contains key-value pairs representing each file to deploy. The key represents the filename, and the value represents the full contents of the file. This only really works for text files, so this deployment method is probably not the right choice for precompiled functions and other binary files. Also, the file needs to be inlined within the template, which may quickly get unwieldy for larger function files—and even for smaller files, the file needs to be escaped as a JSON string. This can be done using an online tool like this, or you could use a script to do the escaping and pass the file contents as a parameter into the template deployment.

Importantly, in my testing I found that using this method to deploy over an existing function will remove any files that are not declared in the files list, so be careful when testing this approach if you’ve modified the function or added any files through the portal or elsewhere.

Examples

There are many different ways you can insert your function file into the template, but one of the ways I tend to use is a PowerShell script. Inside the script, we can read the contents of the file into a string, and create a HashTable for the ARM template deployment parameters:
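A minimal sketch (assuming the template exposes a functionCode parameter):

# Read the function source file into a single string
$functionCode = Get-Content -Path .\run.csx -Raw

# Build the parameters for the template deployment
$templateParameters = @{
    functionAppName = 'my-function-app'   # hypothetical parameter values
    functionCode    = $functionCode
}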

Then we can use the New-AzureRmResourceGroupDeployment cmdlet to execute the deployment, passing in $templateParameters to the -TemplateParameterObject argument.

You can see the full example here.

Of course, if you have a function that doesn’t change often then you could instead manually convert the file into a JSON-encoded string using a tool like this one, and paste the function right into the ARM template. To see a full example of how this can be used, check out this example ARM template from a previous blog article I wrote.

When to Use It

Deploying a function through an ARM template can make sense when you have a very simple function that comprises one, or just a few, files to be deployed. In particular, if you already deploy the function app itself through the ARM template then this might be a natural extension of what you're doing.

This type of deployment can also make sense if you want to quickly deploy and test a function and don't need some of the more complex deployment-related features, like control over the handling of locked files. It's also a useful technique to have available for situations where a full deployment script might be too heavyweight.

However, for precompiled functions, functions that have binary files, and for complex deployments, it’s probably better to use another deployment mechanism. Nevertheless, I think it’s useful to know that this is a tool in your Azure Functions toolbox.

Provisioning complex Modern Sites with Azure Functions and Flow – Part 2 – Create and Apply Template

In the previous blog here, we got an overview of the high-level architecture of a complex Modern team site provisioning process. In this blog, we will look at step 1 of the process – the Create and Apply Template process – in detail.
Before that, below are a few links to earlier blogs, as a refresher on the prerequisites for this blog.

  1. Set up a Graph App to call Graph Service using App ID and Secret – link
  2. Sequencing HTTP Trigger Azure Functions for simultaneous calls – link
  3. Adding and Updating owners using Microsoft Graph Async calls – link

Overview
The Create and Apply Template process aims at the following:
1. Create a blank modern team site using the Groups template (GROUP#0 site template)
2. Apply the provisioning template on the created site.
Step 1: Create a blank Modern team site
For creating a modern team site using CSOM we will use the TeamSiteCollectionCreationInformation class of OfficeDevPnP. Before we create the site, we will make sure the site doesn't already exist.

Note: There is an issue with the Site Assets library not getting initialized when the site is created using the code below. Hence, calling the EnsureSiteAssetsLibrary method is necessary.
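A sketch of the creation code (assuming OfficeDevPnP.Core and an authenticated ClientContext; the alias and names are illustrative):

// Define the new team site - the alias becomes the group mail nickname and site URL
var creationInfo = new OfficeDevPnP.Core.Sites.TeamSiteCollectionCreationInformation
{
    Alias = "projectx",            // hypothetical alias
    DisplayName = "Project X",
    Description = "Project X team site",
    IsPublic = false
};

// Create the site and get back a ClientContext for the new site
var newSiteContext = await OfficeDevPnP.Core.Sites.SiteCollection.CreateAsync(clientContext, creationInfo);

// Work around the Site Assets issue mentioned in the note above
newSiteContext.Web.Lists.EnsureSiteAssetsLibrary();
newSiteContext.ExecuteQueryRetry();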

Step 2: Apply the Provisioning Template

Note: The Apply Template process is a long-running one, taking 60-90 minutes to complete for a complex provisioning template with many site columns, content types and libraries. To prevent the Azure Function from timing out, it needs to be hosted on an App Service plan instead of a Consumption plan, so that it is not affected by the 10-minute timeout.

For the Apply Provisioning Template process, use the below steps.
1. Reading the Template
It is important to note that the XMLPnPSchemaFormatter version (in the code below) must match the PnP version used to generate the PnP template. If the version is older, then set the XMLPnPSchemaFormatter to read from the older version. To find the version of the PnP template, open the XML and look at the schema namespace at the start of the file.
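A sketch of reading the template with a matching formatter (the folder path, file name and schema namespace are illustrative):

// Open the template from a folder and parse it with the schema version it was generated with
XMLTemplateProvider provider = new XMLFileSystemTemplateProvider(templateFolderPath, "");

// Use the namespace shown at the top of your template XML
var formatter = XMLPnPSchemaFormatter.GetSpecificFormatter("http://schemas.dev.office.com/PnP/2018/01/ProvisioningSchema");

var template = provider.GetTemplate("ProvisioningTemplate.xml", formatter);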

2. Apply the Template
For applying the template, we will use the ProvisioningTemplateApplyingInformation class of the OfficeDevPnP module. ProvisioningTemplateApplyingInformation also has a property called HandlersToProcess, which can be used to invoke only particular handlers in the provisioning process. Below is the code for the same.
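A sketch (assuming web is the root web of the newly created site and template was read as above):

var applyingInfo = new ProvisioningTemplateApplyingInformation
{
    // Surface progress in the function log while the long-running apply executes
    ProgressDelegate = (message, progress, total) => log.Info($"{progress}/{total} - {message}"),

    // Or restrict to specific handlers, e.g. Handlers.Lists | Handlers.ContentTypes
    HandlersToProcess = Handlers.All
};

web.ApplyProvisioningTemplate(template, applyingInfo);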

After the apply template process is complete, since the flow will have timed out, we will invoke another flow to do the post process by updating a list item in the SharePoint list.
Conclusion
In this blog, we saw how we could create a modern team site and apply the template to it. In the next blog we will finalize the process by making site-specific changes after applying the template.

Processing Azure Event Grid events across Azure subscriptions

Consider a scenario where you need to listen to Azure resource events happening in one Azure subscription from another Azure subscription. A use case for such a scenario can be when you are developing a solution where you listen to events happening in your customers’ Azure subscriptions, and then you need to handle those events from an Azure Function or Logic App running in your subscription.
A solution for such a scenario could be:
1. Create an Azure Function in your subscription that will handle Azure resource events received from Azure Event Grid.
2. Handle event validation in the above function, which is required to perform a handshake with Event Grid.
3. Create an Azure Event Grid subscription in the customers’ Azure subscriptions.
Before I go into details, let's have a brief overview of Azure Event Grid.
Azure Event Grid is a routing service based on a publish/subscribe model, which is used for developing event-based applications. Event sources publish events, and event handlers can subscribe to these events via Event Grid subscriptions.


Figure 1. Azure event grid publishers and handlers


Azure Event Grid subscriptions can be used to subscribe to system topics as well as custom topics. Various Azure services automatically send events to Event Grid. The system-level event sources that currently send events to Event Grid are Azure subscriptions, resource groups, Event Hubs, IoT Hubs, Azure Media Services, Service Bus, and Blob storage.
You can listen to these events by creating an event handler. Azure Event Grid supports several Azure services and custom webhooks as event handlers. There are a number of Azure services that can be used as event handlers, including Azure Functions, Logic Apps, Event Hubs, Azure Automation, Hybrid Connections, and storage queues.
In this post I'll focus on using an Azure Function as the event handler, to which an Event Grid subscription will send events whenever an event occurs across the whole Azure subscription. You can also create an Event Grid subscription at a resource group level to be notified only for the resources belonging to a particular resource group. Figure 1 above shows the various event sources that can publish events and the supported event handlers; the ones used in our solution – Azure subscriptions and Azure Functions – are marked.

Create an Azure Function in your subscription and handle the validation event from Event Grid

If our Event Grid subscription and function were in the same subscription, then we could have simply created an Event Grid-triggered Azure Function, specifying the Event Grid subscription details with the function as its endpoint. However, in our case this cannot be done, as we need the Event Grid subscription in the customer's subscription and the Azure Function in our subscription. Therefore, we will create an HTTP-triggered function or a webhook function.
Because we're not using an Event Grid-triggered function, we need to do an extra validation step. At the time a new Azure Event Grid subscription is created, Event Grid requires the endpoint to prove ownership of the webhook, so that Event Grid can safely deliver events to that endpoint. For built-in event handlers such as Logic Apps, Azure Automation, and Event Grid-triggered functions, this validation process is not necessary. However, in our scenario, where we are using an HTTP-triggered function, we need to handle the validation handshake.
When an Event Grid subscription is created, it sends a subscription validation event in a POST request to the endpoint. All we need to do is handle this event, read the validationCode property in the data object of the request body, and send it back in the response. Once Event Grid receives the same validation code back, it knows the endpoint is validated and will start delivering events to our function. Following is an example of the POST request that Event Grid sends to the endpoint for validation (the IDs and codes are illustrative):
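[
  {
    "id": "2d1781af-3a4c-4d7c-bd0c-e34b19da4e66",
    "topic": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    "subject": "",
    "data": {
      "validationCode": "512d38b6-c7b8-40c8-89fe-f46f9e9622b6"
    },
    "eventType": "Microsoft.EventGrid.SubscriptionValidationEvent",
    "eventTime": "2018-01-25T22:12:19.4556811Z",
    "metadataVersion": "1",
    "dataVersion": "1"
  }
]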

Our function can check whether the eventType is Microsoft.EventGrid.SubscriptionValidationEvent, which indicates it is meant for validation, and send back the value in data.validationCode. In all other cases, the eventType will be based on the resource on which the event occurred, and the function can process those events accordingly. The validation request also contains a header, aeg-event-type, with the value SubscriptionValidation; you should validate this header too.
Following is a sample sketch of a Node.js (Functions v1 programming model) function that handles the validation event and sends back the validation code, completing the validation handshake:
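module.exports = function (context, req) {
    var events = req.body;

    // Event Grid sends events as an array; look for the validation event
    var validationEvent = (events || []).find(function (e) {
        return e.eventType === 'Microsoft.EventGrid.SubscriptionValidationEvent';
    });

    if (validationEvent && req.headers['aeg-event-type'] === 'SubscriptionValidation') {
        // Echo the validation code back to complete the handshake
        context.res = {
            status: 200,
            body: { validationResponse: validationEvent.data.validationCode }
        };
    } else {
        // Not a validation call - process the resource events here
        (events || []).forEach(function (e) {
            context.log('Received event: ' + e.eventType);
        });
        context.res = { status: 200 };
    }

    context.done();
};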

Processing Resource Events

To process the resource events, you can filter them on the resourceProvider or operationName properties. For example, the operationName property for a VM create event is set to Microsoft.Compute/virtualMachines/write. The event payload follows a fixed schema as described here. An event for a virtual machine creation looks like below:
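An abridged, illustrative example of the payload (IDs and timestamps elided or made up):

[
  {
    "topic": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    "subject": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myrg/providers/Microsoft.Compute/virtualMachines/myvm",
    "eventType": "Microsoft.Resources.ResourceWriteSuccess",
    "eventTime": "2018-02-08T10:11:06.5542169Z",
    "id": "...",
    "data": {
      "correlationId": "...",
      "resourceProvider": "Microsoft.Compute",
      "resourceUri": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/myrg/providers/Microsoft.Compute/virtualMachines/myvm",
      "operationName": "Microsoft.Compute/virtualMachines/write",
      "status": "Succeeded",
      "subscriptionId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
      "tenantId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
    },
    "dataVersion": "2",
    "metadataVersion": "1"
  }
]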

Authentication

While creating the Event Grid subscription (detailed in the next section), it should be created with the endpoint URL pointing to the function URL, including the function key. The event validation done for the handshake also acts as another means of authentication. To add an extra layer of authentication, you can generate your own access token and append it to your function URL when specifying the endpoint for the Event Grid subscription. Your function can then also validate this access token before further processing.

Create an Azure Event Grid Subscription in the customer's subscription

A subscription owner/administrator should be able to run an Azure CLI or PowerShell command to create the Event Grid subscription in the customer's subscription.
Important: This step must be done after the Azure Function above has been created. Otherwise, when you try to create the Event Grid subscription and it raises the subscription validation event, Event Grid will not get a valid response back, and the creation of the Event Grid subscription will fail.
You can add filters to your Event Grid subscription to filter the events by subject. Currently, events can only be filtered with a text comparison of the subject property value, matching a prefix (begins with) or suffix (ends with); the subject filter doesn't support wildcard or regex searches.

Azure CLI or PowerShell

An example Azure CLI command to create an Event Grid subscription which receives all the events occurring at the subscription level is below (a sketch; exact parameters vary between CLI versions, with more recent versions taking the scope explicitly via --source-resource-id):
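# endpoint is the HTTP-triggered function's URL, with the function key appended after code=
az eventgrid event-subscription create \
    --name eg-subscription-test \
    --endpoint "https://myhttptriggerfunction.azurewebsites.net/api/f1?code=<function-key>"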

Here https://myhttptriggerfunction.azurewebsites.net/api/f1?code= is the URL of the function app.

Azure REST API

Instead of asking the customer to run a CLI or PowerShell script to create the Event Grid subscription, you can automate this process by writing another Azure Function that calls the Azure REST API. The API call can be invoked using a service principal with rights on the customer's subscription.
To create an Event Grid subscription for the customer’s Azure Subscription, you submit the following PUT request:
PUT https://management.azure.com/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/providers/Microsoft.EventGrid/eventSubscriptions/eg-subscription-test?api-version=2018-01-01
Request Body:
{
  "properties": {
    "destination": {
      "endpointType": "WebHook",
      "properties": {
        "endpointUrl": "https://myhttptriggerfunction.azurewebsites.net/api/f1?code="
      }
    },
    "filter": {
      "isSubjectCaseSensitive": false
    }
  }
}

Provisioning complex Modern Sites with Azure Functions and Microsoft Flow – Part 1 – Architecture

In one of my previous blogs here, I discussed creating Office 365 groups using an Azure Function and Flow. The same process can also be used to provision Modern Team Sites in SharePoint Online, because Modern Team Sites are Office 365 groups too. However, if you are creating a complex Modern Team Site with lots of libraries, content types, term store-associated columns etc., it will be challenging to do it with a single Azure Function.
Thus, in this blog (part 1), we will look at the architecture of a solution to provision a complex Modern Team Site using multiple Azure Functions and Flows. This is an approach that went through four months of validation and testing. There might be other options, but this one worked for our complex team site, which takes around 45-90 minutes to provision.
Solution Design
To start with, let's look at the solution design. The solution consists of two major components:
1. Template Creation – Create a SharePoint Modern Team site to be used as a template and generate a provisioning template from it
2. Provisioning Process – Create a SharePoint inventory list to run the Flow and Azure Functions. There will be three Azure Functions that will run three separate parts of the provisioning lifecycle. More details about the Azure Functions will be in an upcoming blog.
Get the Provisioning Template
The first step in the process is to create a clean site that will be used as a reference template site for the provisioning template. In this site, create all the lists, libraries, site columns and content types, and set any other necessary site settings.
To make sure that the generated template doesn't have any elements which are not needed for provisioning, use the following PnP PowerShell cmdlet. It excludes the Application Lifecycle Management and Site Security handlers and any content type hub associations, which aren't needed for our provisioning requirements.

Get-PnPProvisioningTemplate -Out "" -ExcludeHandlers ApplicationLifecycleManagement, SiteSecurity -ExcludeContentTypesFromSyndication

The output of the above cmdlet is a ProvisioningTemplate.xml file, which can be applied to new sites to set up the same SharePoint elements. To learn more about the provisioning template file, its schema and allowed tags, check the link here.
Team Site Provisioning Process
The second step in the process is to create and apply the template to a Modern SharePoint Team site using Flow and Azure Functions. The detailed steps are as follows:
1. Create an Inventory list to capture all the requirements for Site Creation
2. Create two flows
a) Create and Apply Template flow, and
b) Post Provisioning Flow
3. Create three Azure Functions –
a) Create a blank Modern Team Site
b) Apply Provisioning Template on the above site. This is a long running process and can take about 45-90 min for applying a complex template with about 20 libraries, 20-30 site columns and 10-15 content types
Note: Azure Functions on a Consumption plan have a timeout of 10 minutes. Host the Azure Function on an App Service plan for the above to work without issues
c) Post Provisioning to apply changes that are not supported by the provisioning template, such as creating default folders etc.
Below is the process flow for the provisioning process, which goes from creating the site through to applying the template. A brief list of the steps is as follows:

  1. Call the Create Site flow to start the Provisioning Process
  2. Call the Create Site Azure Function
  3. Create the Modern Team Site in the Azure Function, set any dependencies required for the Apply Template step (such as navigation items, pages etc.), and then return to the flow
  4. Call the Apply Template Azure Function.
  5. Get the previously generated ProvisioningTemplate.xml file from a shared location
  6. Apply the Template onto the newly created Modern site. Note: The flow call times out because it cannot wait for such a long running process
  7. Update the status column in the Site Directory for the post provisioning flow to start
  8. Call the Post Provisioning flow to run the Post Provisioning Azure Function
  9. The Post Provisioning Azure Function will complete the remaining SharePoint changes which were not done by the Apply Template step, such as setting field default values, creating folders in libraries, and associating default values to taxonomy fields

Conclusion:
In this blog, we saw, at a high architectural level, how to create a provisioning process to handle complex modern team site creation. Next, we will deep dive into the Azure Functions that create the site, apply the template and run the post-processing, in the upcoming blogs.
Happy Coding!!!

A Voice Assistant for Microsoft Identity Manager

This is the third and final post in my series around using your voice to query/search Microsoft Identity Manager or, as I'm now calling it, the Voice Assistant for Microsoft Identity Manager.
The two previous posts in this series detail some of my steps and processes in developing and fleshing out this concept. The first post detailed the majority of the base functionality whilst the second post detailed the auditing and reporting aspects into Table Storage and Power BI.
My final architecture is depicted below.
Figure: Identity Manager integration with Cognitive Services and IoT Hub
I’ve put together more of an overview in a presentation format using GitPitch you can checkout here.
The why and how of the Voice Assistant for Microsoft Identity Manager
If you’re interested in building the solution checkout the Github Repo here which includes the Respeaker Python Script, Azure Function etc.
Let me know how you go @darrenjrobinson

Avoiding Cosmos DB Bill Shock with Azure Functions

Cosmos DB is a fantastic database service for many different types of applications. But it can also be quite expensive, especially if you have a number of instances of your database to maintain. For example, in some enterprise development teams you may need to have dev, test, UAT, staging, and production instances of your application and its components. Assuming you’re following best practices and keeping these isolated from each other, that means you’re running at least five Cosmos DB collections. It’s easy for someone to accidentally leave one of these Cosmos DB instances provisioned at a higher throughput than you expect, and before long you’re racking up large bills, especially if the higher throughput is left overnight or over a weekend.

In this post I’ll describe an approach I’ve been using recently to ensure the Cosmos DB collections in my subscriptions aren’t causing costs to escalate. I’ve created an Azure Function that will run on a regular basis. It uses a managed service identity to identify the Cosmos DB accounts throughout my whole Azure subscription, and then it looks at each collection in each account to check that they are set at the expected throughput. If it finds anything over-provisioned, it sends an email so that I can investigate what’s happening. You can run the same function to help you identify over-provisioned collections too.

Step 1: Create Function App

First, we need to set up an Azure Functions app. You can do this in many different ways; for simplicity, we’ll use the Azure Portal for everything here.

Click Create a Resource on the left pane of the portal, and then choose Serverless Function App. Enter the information it prompts for – a globally unique function app name, a subscription, a region, and a resource group – and click Create.


Step 2: Enable a Managed Service Identity

Once we have our function app ready, we need to give it a managed service identity. This will allow us to connect to our Azure subscription and list the Cosmos DB accounts within it, but without us having to maintain any keys or secrets. For more information on managed service identities, check out my previous post.

Open up the Function Apps blade in the portal, open your app, and click Platform Features, then Managed service identity.
Switch the feature to On and click Save.

Step 3: Create Authorisation Rules

Now we have an identity for our function, we need to grant it access to the parts of our Azure subscription we want it to examine for us. In my case I’ll grant it the rights over my whole subscription, but you could just give it rights on a single resource group, or even just a single Cosmos DB account. Equally you can give it access across multiple subscriptions and it will look through them all.

Open up the Subscriptions blade and choose the subscription you want it to look over. Click Access Control (IAM).

Click the Add button to create a new role assignment.

The minimum role we need to grant the function app is called Cosmos DB Account Reader Role. This allows the function to discover the Cosmos DB accounts, and to retrieve the read-only keys for those accounts, as described here. The function app can’t use this role to make any changes to the accounts.

Finally, enter the name of your function app, click it, and click Save.
This will create the role assignment. Your function app is now authorised to enumerate and access Cosmos DB accounts throughout the subscription.

Step 4: Add the Function

Next, we can actually create our function. Go back into the function app and click the + button next to Functions. We'll choose to create a custom function.

Then choose a timer trigger.

Choose C# for the language, and enter the name CosmosChecker. (Feel free to use a name with more panache if you want.) Leave the timer settings alone for now.

Your function will open up with some placeholder code. We’ll ignore this for now. Click the View files button on the right side of the page, and then click the Add button. Create a file named project.json, and then open it and paste in the following, then click Save:
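As a sketch, the file can look like this (package versions are indicative of the v1 runtime era; the SendGrid package provides the Mail helper classes used in the function):

{
  "frameworks": {
    "net46": {
      "dependencies": {
        "Microsoft.Azure.DocumentDB": "1.22.0",
        "Microsoft.Azure.Management.Fluent": "1.14.0",
        "SendGrid": "8.0.5"
      }
    }
  }
}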

This will add the necessary package references that we need to find and access our Cosmos DB collections, and then to send alert emails using SendGrid.

Now click on the run.csx file and paste in the following:
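(The full run.csx is linked at the end of this post; the following is just an abridged sketch of the kind of logic involved, assuming the packages above, with the per-collection policy lookup omitted.)

using System;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.Documents.Client;
using Microsoft.Azure.Management.Fluent;
using Microsoft.Azure.Management.ResourceManager.Fluent;
using Microsoft.Azure.Management.ResourceManager.Fluent.Authentication;
using SendGrid.Helpers.Mail;

private const int DefaultMaximumThroughput = 2000; // default alert threshold in RU/s

public static async Task Run(TimerInfo myTimer, IAsyncCollector<Mail> messages, TraceWriter log)
{
    // Authenticate with the function app's managed service identity
    var credentials = new AzureCredentialsFactory()
        .FromMSI(new MSILoginInformation(MSIResourceType.AppService), AzureEnvironment.AzureGlobalCloud);
    var azure = Azure.Configure().Authenticate(credentials).WithDefaultSubscription();

    // Enumerate every Cosmos DB account the identity can see
    foreach (var account in azure.CosmosDBAccounts.List())
    {
        // The Cosmos DB Account Reader role lets us retrieve the read-only keys
        var keys = account.ListReadOnlyKeys();

        using (var client = new DocumentClient(new Uri(account.DocumentEndpoint), keys.PrimaryReadonlyMasterKey))
        {
            // Offers represent the provisioned throughput of each collection
            var offers = client.CreateOfferQuery().AsEnumerable().OfType<OfferV2>();

            foreach (var offer in offers)
            {
                if (offer.Content.OfferThroughput > DefaultMaximumThroughput)
                {
                    log.Info($"Account {account.Name}: collection provisioned at {offer.Content.OfferThroughput} RU/s.");

                    // Queue an alert email through the SendGrid output binding
                    var mail = new Mail { Subject = $"Cosmos DB over-provisioning alert: {account.Name}" };
                    mail.AddContent(new Content("text/plain",
                        $"A collection in account {account.Name} is provisioned at {offer.Content.OfferThroughput} RU/s."));
                    await messages.AddAsync(mail);
                }
            }
        }
    }
}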

I won’t go through the entire script here, but I have added comments to try to make its purpose a little clearer.

Finally, click on the function.json file and replace the contents with the following:
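A sketch (the binding names need to match the Run signature, and the %settings% refer to the app settings configured in step 6):

{
  "bindings": [
    {
      "type": "timerTrigger",
      "name": "myTimer",
      "direction": "in",
      "schedule": "0 0 * * * *"
    },
    {
      "type": "sendGrid",
      "name": "messages",
      "direction": "out",
      "apiKey": "SendGridKey",
      "from": "%AlertFromAddress%",
      "to": "%AlertToAddress%"
    }
  ],
  "disabled": false
}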

This will configure the function app with the necessary timer, as well as an output binding to send an email. We’ll discuss most of these settings later, but one important setting to note is the schedule setting. The value I’ve got above means the function will run every hour. You can change it to other values using CRON expressions, such as:

  • Run every day at 9.30am UTC: 0 30 9 * * *
  • Run every four hours: 0 0 */4 * * *
  • Run once a week, at midnight UTC on Sundays: 0 0 0 * * 0

You can decide how frequently you want this to run and replace the schedule with the appropriate value from above.

Step 5: Get a SendGrid Account

We’re using SendGrid to send email alerts. SendGrid has built-in integration with Azure Functions so it’s a good choice, although you’re obviously welcome to switch out for anything else if you’d prefer. You might want an SMS message to be sent via Twilio, or a message to be sent to Slack via the Slack webhook API, for example.

If you don’t already have a SendGrid account you can sign up for a free account on their website. Once you’ve got your account, you’ll need to create an API key and have it ready for the next step.

Step 6: Configure Function App Settings

Click on your function app name and then click on Application settings.

Scroll down to the Application settings section. We’ll need to enter three settings here:

  1. Setting name: SendGridKey. This should have a value of your SendGrid API key from step 5.
  2. Setting name: AlertToAddress. This should be the email address that you want alerts to be sent to.
  3. Setting name: AlertFromAddress. This should be the email address that you want alerts to be sent from. This can be the same as the ‘to’ address if you want.


Step 7: Run the Function

Now we can run the function! Click on the function name again (CosmosChecker), and then click the Run button. You can expand out the Logs pane at the bottom of the screen if you want to watch it run.

Depending on how many Cosmos DB accounts and collections you have, it may take a minute or two to complete.

If you've got any collections provisioned over 2000 RU/s, you should receive an email telling you this fact.

Configuring Alert Policies

By default, the function is configured to alert whenever it sees a Cosmos DB collection provisioned over 2000 RU/s. However, your situation may be quite different to mine. For example, you may want to be alerted whenever you have any collections provisioned over 1000 RU/s. Or, you may have production applications that should be provisioned up to 100,000 RU/s, but you only want development and test collections provisioned at 2000 RU/s.

You can configure alert policies in two ways.

First, if you have a specific collection that should have a specific policy applied to it – like the production collection I mentioned that should be allowed to go to 100,000 RU/s – then you can create another application setting. Give it the name MaximumThroughput:{account_name}:{database_name}:{collection_name}, and set the value to the limit you want for that collection.

For example, a collection named customers in a database named customerdb in an account named myaccount-prod would have a setting named MaximumThroughput:myaccount-prod:customerdb:customers. The value would be 100000, assuming you wanted the function to check this collection against a limit of 100,000 RU/s.

Second, by default the function has a default quota of 2000 RU/s. You can adjust this to whatever value you want by altering the value on line 17 of the function code file (run.csx).

ARM Template

If you want to deploy this function for yourself, you can also use an ARM template I have prepared. This performs all the steps listed above except step 3, which you still need to do manually.

Of course, you are also welcome to adjust the actual logic involved in checking the accounts and collections to suit your own needs. The full code is available on GitHub and you are welcome to take and modify it as much as you like! I hope this helps to avoid some nasty bill shocks.

Using your Voice to Search Microsoft Identity Manager – Part 2

Introduction

Last month I wrote this post detailing how to use your voice to search/query Microsoft Identity Manager. That post demonstrated a working solution (GitHub repository coming next month) but was still incomplete if it was to be used in production within an enterprise. I hinted then that there were additional enhancements I was looking to make; one is an auditing/reporting aspect, and that is what I cover in this post.

Overview

The one element of the solution that has visibility of each search scenario is the IoT Device. As a potential future enhancement this could also be a Bot. For each request I wanted to log/audit;

  • Device the query was initiated from (it is possible to have many IoT devices, physical or bot, leveraging this function)
  • The query
  • The response
  • Date and Time of the event
  • User the query targeted

To achieve this my solution is to;

  • On my IoT Device the query, target user and date/time is held during the query event
  • At the completion of the query the response along with the earlier information is sent to the IoT Hub using the IoT Hub REST API
  • The event is consumed from the IoT Hub by an Azure Event Hub
  • The message containing the information is processed by Stream Analytics and put into Azure Table Storage and Power BI.

Azure Table Storage provides the logging/auditing trail of what requests have been made and the responses. Power BI provides the reporting aspect. These two services provide visibility into what requests have been made, against whom, when, etc. The graphic below shows this in the bottom portion of the image.
Figure: Auditing and reporting for searching MIM with speech

Sending IoT Device Events to IoT Hub

I covered this piece in PowerShell in a previous post here, and converted it from PowerShell to Python to run on my device. For initial end-to-end testing while developing the solution, though, building the body of the message and sending it in PowerShell looks like this:

# Assumes $deviceID, $Headers (containing a SAS token) and $iotHubRestURI were set up earlier in the script
[string]$datetime = get-date
$datetime = $datetime.Replace("/","-")   # make the timestamp safe to use as a message ID
$body = @{
 deviceId = $deviceID
 messageId = $datetime
 messageString = "$($deviceID)-to-Cloud-$($datetime)"
 MIMQuery = "Does the user Jerry Seinfeld have an Active Directory Account"
 MIMResponse = "Yes. Their LoginID is jerry.seinfeld"
 User = "Jerry Seinfeld"
}

# Serialise the payload and POST it to the IoT Hub REST API
$body = $body | ConvertTo-Json
Invoke-RestMethod -Uri $iotHubRestURI -Headers $Headers -Method Post -Body $body

Event Hub and IoT Hub Configuration

First I created an Event Hub. Then on my IoT Hub I added an Event Subscription and pointed it to my Event Hub.

Stream Analytics

I then created a Stream Analytics Job. I configured two Inputs. One each from my IoT Hub and from my Event Hub.
I then created two Outputs: one for Table Storage, for which I used an existing storage account from my solution, and the other for Power BI, using an existing workspace but creating a new dataset. For the Table Storage output I specified deviceId as the Partition key and messageId as the Row key.
Finally, as I'm keeping all the data I'm sending simple, my query is basically copying from the Inputs to the Outputs – one statement to get the events to Table Storage and the other to get them to Power BI. The query looks something like this:
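As a sketch (the input and output aliases are whatever you named them in the job):

SELECT * INTO [TableStorageOutput] FROM [IoTHubInput]

SELECT * INTO [PowerBIOutput] FROM [EventHubInput]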

Events in Table Storage

After sending through some events I could see rows being added to Table Storage. When I added an additional column to the data, the schema-less Table Storage obliged and dynamically added another column to the table.

Events in Power BI

Just like in Table Storage, in Power BI I could see the dataset and the table with the event data, and I could create a report with some nice visuals just as you would with any other dataset. When I added an additional field to the event being sent from the IoT device, it magically showed up in the Power BI dataset table.

Summary

Using the Azure IoT Hub REST API I can easily send information from my IoT Device and then have it processed through Stream Analytics into Table Storage and Power BI. Instant auditing and reporting functionality.
Let me know what you think on twitter @darrenjrobinson

Using your Voice to Search Microsoft Identity Manager – Part 1

Introduction

Yes, you’ve read the title correctly. Speaking to Microsoft Identity Manager. The concept behind this was born off the back of some other work I was doing with Microsoft Cognitive Services. I figured it shouldn’t be that difficult if I just break down the concept into individual elements of functionality and put together a proof of concept to validate the idea. That’s what I did and this is the first post of the solution as an overview.
Here’s a quick demo.

Overview

The diagram below details the basis of the solution. There are a few extra elements I’m still working on that I’ll cover in a future post if there is any interest in this.
Figure: Searching MIM with Speech – solution overview
The solution works like this;

  1. You speak to a microphone connected to a single board computer with the query for Microsoft Identity Manager
  2. The spoken phrase is converted to text using Cognitive Speech to Text (Bing Speech API)
  3. The text phrase is;
    1. sent to Cognitive Services Language Understanding Intelligent Service (LUIS) to identify the target of the query (firstname lastname) and the query entity (e.g. Mailbox)
    2. Microsoft Identity Manager is queried via API Management and the Lithnet REST API for the MIM Service
  4. The result is returned to the single board computer as a text result phrase, which it then converts to audio using Cognitive Services Text to Speech
  5. The result is spoken back

Key Functional Elements

  • The microphone array I’m using is a ReSpeaker Core v1 with a ReSpeaker Mic Array
  • All credentials are stored in an Azure Key Vault
  • An Azure Function App (PowerShell) interfaces with the majority of the Cognitive Services being used
  • Azure API Management is used to front end the Lithnet MIM Webservice
  • The Lithnet REST API for the MIM Service provides easy integration with the MIM Service

Summary

Leveraging a lot of Serverless (PaaS) Services, a bunch of scripting (Python on the ReSpeaker and PowerShell in the Azure Function) and the Lithnet REST API it was pretty simple to integrate the ReSpeaker with Microsoft Identity Manager. An alternative to MIM could be any other service you have an API interface into. MIM is obviously a great choice as it can aggregate from many other applications/services.
Why a female voice? From a small poll, it was the popular majority.
Let me know what you think on twitter @darrenjrobinson

Automation and Creation of Office 365 groups using Flow, Microsoft Graph and Azure Function – Part 2

In the Part 1 blog here, we discussed an approach for the group creation process and important considerations for provisioning groups. In this blog, we will look at getting a Graph App ID and app secret for invoking the Graph service, and then at the implementation of the group provisioning process.
MS Graph App Set up
Before we start creating groups, we will need to set up a Graph App that will be used to create the groups in the Office 365 tenancy. The details on how to create a Microsoft Graph app are in this blog here.
Regarding permissions, below are the settings that are necessary to allow creating groups through the Graph service.
Figure: Graph app permission settings (typically the Group.ReadWrite.All application permission)
Creating a Group
As discussed in Part 1 here, below are the broad level steps for automating group creation using a SharePoint inventory list, Microsoft Flow and Azure Function
1. Create a SharePoint list, with the metadata necessary for Group and SharePoint assets provisioning
We can use a SharePoint list to act as a trigger to create groups with the custom metadata necessary for provisioning them, such as owners, plus the metadata necessary for creating site assets for the SharePoint sites. As a best practice, I recommend creating multiple master lists to manage the details separately if there are too many to manage. In our case, we have created three separate lists for managing the group details:
1. Group details and metadata
2. Owners and Team Members list
3. Site Assets configuration list
2. Create a Microsoft flow. The flow will validate a new or existing group and pick the unique Group Alias from the list which will allow us to find the group if it exists.
The flow will act as a trigger to start the provisioning process and call the Azure Function, passing the appropriate metadata as shown below. The flow also allows for the error handling scenarios described in the Part 1 blog here.
Note: The GroupAlias is the unique name of the Group and is not necessarily the SharePoint URL. For example, in the case where a group was created and subsequently deleted, the unique alias could be used again but the Site URL will be different (unless cleared from the SharePoint recycle bin).
Figure: The Flow calling the Azure Function with the group metadata
3. Create the Group in an Azure Function using SharePoint Online CSOM
In order to create the group, we will need to authenticate to the Graph service using the Graph App created earlier. For authenticating the app through Azure AD, install the NuGet package Microsoft.IdentityModel.Clients.ActiveDirectory.
After authenticating, we will create the group using the UnifiedGroupsUtility class provided through the Graph module of OfficeDevPnP.
Below is a quick snapshot of the code.
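A sketch of the authentication and creation calls (the app details, group values and variable names are illustrative):

using Microsoft.IdentityModel.Clients.ActiveDirectory;
using OfficeDevPnP.Core.Framework.Graph;

// Get an app-only access token for Microsoft Graph using the Graph App ID and secret
var authContext = new AuthenticationContext("https://login.microsoftonline.com/" + tenantId);
var clientCredential = new ClientCredential(graphAppId, graphAppSecret);
var authResult = await authContext.AcquireTokenAsync("https://graph.microsoft.com", clientCredential);
string accessToken = authResult.AccessToken;

// Create the Office 365 group; keep the owners and members arrays identical (see the note below)
var group = UnifiedGroupsUtility.CreateUnifiedGroup(
    displayName: "Project X",
    description: "Project X team",
    mailNickname: groupAlias,
    accessToken: accessToken,
    owners: ownersEmails,
    members: ownersEmails,
    isPrivate: true);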

Note: One important bit to note is that, in the code above, the owners and members email arrays are the same. If the owners and members arrays differ, group provisioning is delayed significantly. It is also important to keep the other parameters the same as they were during creation when calling the method subsequently, because otherwise it might reset those properties to their defaults. For example, if isPrivate is not set, the group becomes public.

4. After the group is created, we can fetch the details of the group as below.
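For example (again a sketch using the PnP utility; the returned entity includes the group ID and, once provisioned, the site URL):

// Fetch the group details, including its SharePoint site URL
var createdGroup = UnifiedGroupsUtility.GetUnifiedGroup(group.GroupId, accessToken);
string siteUrl = createdGroup.SiteUrl;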

5. The group provisioning generally takes about 2-3 mins to provision. However, if there are multiple hits, then one request might override the second request causing the group creation to fail. In such cases, we can sequence the Azure Functions to run by modifying the host.json file. A quick blog covering this can be found here.
Provisioning SharePoint Assets in Azure Function after Group Creation
1. For provisioning of the SharePoint assets, we might have to wait for the Office 365 AD sync to finish before access is granted to the admin account.
Sometimes the AD sync process takes much longer, so to grant direct access to the SharePoint site collection using the tenant admin, we could use the code below. Recommendation: only proceed with this approach if access is still failing after more than a few minutes.
Note: As a best practice, I would recommend using a Service Account when working on the SharePoint Site. We could also use an App as suggested in the Site Scripting blog here.
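A sketch of granting access directly via the tenant API (assuming an authenticated ClientContext against the tenant admin site; variable names are illustrative):

using Microsoft.Online.SharePoint.TenantAdministration;

// Make the provisioning account a site collection admin without waiting for AD sync
var tenant = new Tenant(adminContext);
tenant.SetSiteAdmin(newSiteUrl, adminAccountLogin, true);
adminContext.ExecuteQueryRetry();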

2. Once you have access, you can use the normal SharePoint CSOM to do the activities pertaining to SharePoint asset provisioning, such as libraries, site pages content, lists, etc.
3. After you're done, you can return success from the Azure Function as below.
Note: Use HttpStatusCode.Accepted instead of an error status code when the error handling is done in the Flow, or else the Flow will trigger another instance of the flow when the Azure Function fails.
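For example (a sketch for an HTTP-triggered v1 function):

// Return Accepted so the calling flow treats the run as complete
return req.CreateResponse(HttpStatusCode.Accepted, "Site assets provisioning completed");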

Conclusion:
Above we saw how we can use a SharePoint inventory list and create groups using Flow and Azure Functions. For a quick reference, below are the links to the other related blogs.
Part 1 – Automation and Creation of Office 365 groups approach
How to create a Microsoft Graph App
Sequencing calls in Azure Functions
