Swashbuckle Pro Tips for ASP.NET Web API – XML

In the previous post, we implemented IOperationFilter of Swashbuckle to handle example objects in combination with AutoFixture. The Swagger spec doesn't only cover JSON payloads; it also copes with XML payloads. In this post, we're going to use the Swashbuckle library again to handle XML payloads properly.

The sample codes used in this post can be found here.

Acknowledgement

The sample application uses the following spec:

Built-In XML Support of Web API

ASP.NET Web API contains the HttpConfiguration instance out-of-the-box and it handles serialisation/deserialisation of both JSON and XML payloads. Therefore, without any extra effort, we can see the Swagger UI pages showing both JSON and XML payloads.


As Swashbuckle is a JSON-biased library, camel-casing in JSON payloads is well supported, while XML payloads are not. All XML nodes are Pascal-cased, which is inconsistent. How can we support camel-case in both JSON and XML? Here's the tip:
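A minimal sketch of that tip (assuming a standard Web API HttpConfiguration instance; not necessarily the exact code from the post) is to switch the JSON formatter's contract resolver, which Swashbuckle also uses when building its schemas:

using Newtonsoft.Json.Serialization;
using System.Web.Http;

public static class FormatterConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Swashbuckle generates its schemas from the JSON serialiser settings,
        // so camel-casing here also camel-cases the example payload nodes.
        config.Formatters.JsonFormatter.SerializerSettings.ContractResolver =
            new CamelCasePropertyNamesContractResolver();
    }
}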

It's a little bit strange that, in order for XML to support camel-casing, we have to change the JSON serialisation settings. Go back to the Swagger UI screen and we'll find that all XML nodes except the root one are now camel-cased. The root node will be dealt with later on.

Let’s send a REST API request to the endpoint using an XML payload.

However, the model instance is null! The Web API action can’t recognise the XML payload.

This is because the HttpConfiguration instance uses the DataContractSerializer for payload serialisation/deserialisation by default, and that doesn't work as expected here. What if we change the serialiser from DataContractSerializer to XmlSerializer? That would work. Add the following line of code:
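That single line (assuming the same HttpConfiguration instance as above) is simply:

// Switch the XML formatter from DataContractSerializer to XmlSerializer.
config.Formatters.XmlFormatter.UseXmlSerializer = true;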

Go back to the screen again and we’ll confirm the same screen as before.

What was the change? Let’s send an XML request again with some value modifications.

Now, the Web API action recognises the XML payload, but something weird has happened. We sent lorem ipsum and 123, but they weren't passed through. What's wrong this time? The casing issue might still be there. So, instead of a camel-cased payload, use a Pascal-cased XML payload and see how it goes.

Here we go! Now the API action method can get the XML payload properly!

In other words, the Swagger UI screen showed a nicely camel-cased payload, but that isn't the payload the action actually accepts. What could be the cause? Setting the UseXmlSerializer property to true earlier doesn't come for free: with XmlSerializer, all payload models should carry XmlRoot and/or XmlElement decorators. Let's update the payload models like:
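A hedged sketch of the decorated models (the property names here are illustrative, not necessarily the ones from the post):

using System.Xml.Serialization;

[XmlRoot("request")]
public class ValueRequestModel
{
    [XmlElement("text")]
    public string Text { get; set; }

    [XmlElement("number")]
    public int Number { get; set; }
}

[XmlRoot("response")]
public class ValueResponseModel
{
    [XmlElement("text")]
    public string Text { get; set; }

    [XmlElement("number")]
    public int Number { get; set; }
}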

Send the XML request again and we’ll get the payload!


However, it’s still not quite right. Let’s have a look at the actual payload and example payload.

The example payload defines ValueRequestModel as the root node. However, the actual payload uses request, as declared in the XmlRoot decorator. Where should we change it? The Swagger spec defines the xml object for this case, and this can easily be done by implementing the ISchemaFilter interface of Swashbuckle.

Defining xml in Schema Object

Here’s the sample code defining the request root node by implementing the ISchemaFilter interface.
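A sketch of such a filter might look like this (the filter class name is an assumption, and the lowercase property names follow Swashbuckle's Swagger object model, which can vary slightly between versions):

using System;
using Swashbuckle.Swagger;

public class XmlRootNameSchemaFilter : ISchemaFilter
{
    public void Apply(Schema schema, SchemaRegistry schemaRegistry, Type type)
    {
        // Visitor-style: only the designated payload types are touched.
        if (type == typeof(ValueRequestModel))
        {
            schema.xml = new Xml { name = "request" };
        }
        else if (type == typeof(ValueResponseModel))
        {
            schema.xml = new Xml { name = "response" };
        }
    }
}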

It applies the Visitor Pattern to filter out designated types only.

Once implemented, the Swagger configuration needs to register the filter for both ValueRequestModel and ValueResponseModel, so that the request model gets request as its root node and the response model gets response. Register it as sketched below, then try the Swagger UI screen again.
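Assuming the filter class above, the registration sits alongside the rest of the Swagger configuration:

config.EnableSwagger(c =>
    {
        // ... existing Swagger configuration ...
        c.SchemaFilter<XmlRootNameSchemaFilter>();
    })
    .EnableSwaggerUi();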

Now the XML payload looks great and works great.

So far, we’ve briefly walked through how Swagger document deals with XML payloads, using Swashbuckle.

Swashbuckle Pro Tips for ASP.NET Web API – Example(s) Using AutoFixture

In the previous post, we implemented IOperationFilter of Swashbuckle to emit the consumes and produces properties in a Swagger document. This post will implement another IOperationFilter to emit example(s) properties containing auto-generated values by AutoFixture.

The sample codes used in this post can be found here.

Acknowledgement

The sample application uses the following spec:

Defining example(s) in Operation Object

There are a few places, in a Swagger definition, where example objects are declared:

  • example property in parameter
  • examples property in responses
  • example field in definitions

The Swashbuckle library automatically generates those example data with default values based on their data types. In other words, if the data type is string, the example value becomes string; if the data type is number, the example value becomes 0; and so on:

Is there any way to put more meaningful data into those example objects? Of course there is. This NuGet package helps us create more meaningful example objects. Download the package and write some code like:
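A hedged sketch of such an example provider (recent versions of the Swashbuckle.Examples package name the interface IExamplesProvider; the model and values here are illustrative):

using Swashbuckle.Examples;

public class ValueResponseModel
{
    public string Text { get; set; }
    public int Number { get; set; }
}

public class ValueResponseModelExample : IExamplesProvider
{
    public object GetExamples()
    {
        // Hard-coded, but more meaningful than the type-based defaults.
        return new ValueResponseModel
        {
            Text = "lorem ipsum",
            Number = 123
        };
    }
}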

Then add a decorator to an action of Web API:
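Inside the controller, the pair of attributes might look like this (status code and types are assumptions):

[SwaggerResponse(HttpStatusCode.OK, Type = typeof(ValueResponseModel))]
[SwaggerResponseExample(HttpStatusCode.OK, typeof(ValueResponseModelExample))]
public IHttpActionResult Get()
{
    return Ok(new ValueResponseModel { Text = "lorem ipsum", Number = 123 });
}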

The first one, SwaggerResponse, is a built-in Swashbuckle decorator, and the second one, SwaggerResponseExample, is what we're going to use for example data generation. Once the decorator is added, we need to add another operation filter to the Swagger config file like:
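Assuming the Swashbuckle.Examples package, that registration typically looks like:

config.EnableSwagger(c =>
{
    // ... existing configuration ...
    c.OperationFilter<ExamplesOperationFilter>();
});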

Reload the Swagger UI page and we can see the example object with more meaningful values:

This is what the Swagger definition looks like:

This is certainly a good way to show example data. But when we refresh the page, the example objects still show the same values, because we hard-coded them.

Automatic Example Data Generation with AutoFixture

We were able to generate an example object by implementing IExampleProvider from the Swashbuckle.Examples library. What if we want to create an example object with automatically generated values? AutoFixture is a unit-testing library that generates fixture instances with random values. Why not use it to generate our examples randomly? Let's have a look. This time, we create an abstract class that applies to both requests and responses.

The abstract class, ModelExample, internally uses IFixture from AutoFixture to create a random instance of the type T. Here are the request and response classes inheriting the abstract class:
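A hedged reconstruction of that idea, superseding the hard-coded provider shown earlier (class names, including the ValueRequestModel/ValueResponseModel payload classes, are assumptions; older AutoFixture versions use the Ploeh.AutoFixture namespace):

using AutoFixture;
using Swashbuckle.Examples;

public abstract class ModelExample<T> : IExamplesProvider where T : class
{
    public object GetExamples()
    {
        // Build a fixture and return a T populated with random values.
        IFixture fixture = new Fixture();
        return fixture.Create<T>();
    }
}

public class ValueRequestModelExample : ModelExample<ValueRequestModel> { }

public class ValueResponseModelExample : ModelExample<ValueResponseModel> { }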

And this is the action method of a Web API. SwaggerRequestModel defines the request example and SwaggerResponseExample defines the response example.

Reload the Swagger UI page several times.

Each time the page is refreshed, the example objects have different values. If we need to put some restrictions like string length or value range, we can simply put other decorators like StringLength or Range on those request/response models.


So far, we have walked through how the Swashbuckle library can generate example(s) objects with meaningful values rather than default values. There are many other ways we can customise the Swashbuckle library to enrich a Swagger document. I'll introduce some other cases later on.

Swashbuckle Pro Tips for ASP.NET Web API – Content Types

Open API 2.0 (AKA Swagger) is a de-facto standard for documenting Web APIs. For ASP.NET Web API applications, Swashbuckle makes building the Swagger definition a lot easier. As Swashbuckle hasn't fully implemented the Swagger specification, we need to develop some extensions using a few interfaces provided by Swashbuckle. In this post we're going to talk about a couple of extensions that make the Swagger definition more complete.

The sample codes used in this post can be found here.

Acknowledgement

The sample application uses the following spec:

Defining consumes and produces in Operation Object

In Swagger, an HTTP verb like GET, POST, PUT, PATCH or DELETE is referred to as an Operation Object. Each Operation Object can define which content types are accepted in requests (consumes) and which content types are returned in responses (produces). Therefore, with Swashbuckle, the Swagger document page looks like:

In other words, Swashbuckle assumes those five content types as defaults for requests – application/json, text/json, application/xml, text/xml, and application/x-www-form-urlencoded – and those four content types as defaults for responses – application/json, text/json, application/xml and text/xml. Here's a part of the automatically generated Swagger definition.

If we want to globally apply those content types, that can be done within the global configuration. Here’s the sample OWIN configuration:
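A hedged sketch of that OWIN configuration:

using System.Net.Http.Formatting;
using System.Web.Http;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        var config = new HttpConfiguration();

        // Clear every default formatter and add JSON back as the only one.
        config.Formatters.Clear();
        config.Formatters.Add(new JsonMediaTypeFormatter());

        // ... Swagger and route configuration ...
        app.UseWebApi(config);
    }
}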

It clears everything and adds only one content type – application/json. By doing so, we can globally set one content type for both requests and responses. If we want more control over each endpoint, Swashbuckle's IOperationFilter interface gives us that flexibility.

Implementing SwaggerConsumesAttribute Decorator

First of all, we need to write a simple decorator, called SwaggerConsumesAttribute which handles the consumes field of the Operation Object.
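A sketch of the attribute (the exact shape in the post may differ):

using System;

[AttributeUsage(AttributeTargets.Method, AllowMultiple = false)]
public class SwaggerConsumesAttribute : Attribute
{
    public SwaggerConsumesAttribute(params string[] contentTypes)
    {
        this.ContentTypes = contentTypes;
    }

    // Content types to be emitted into the operation's consumes field.
    public string[] ContentTypes { get; private set; }
}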

Through this decorator, we simply pass any number of content types. How is it applied?
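For instance, on an action (the endpoint and content types are illustrative):

[SwaggerConsumes("application/json", "application/xml")]
public IHttpActionResult Post(ValueRequestModel request)
{
    // ... action implementation ...
    return Ok();
}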

Let’s move on.

Implementing Consumes Filter

We now write the Consumes filter class by implementing the IOperationFilter interface like:
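A hedged sketch of the filter against Swashbuckle's Web API IOperationFilter signature:

using System.Linq;
using System.Web.Http.Description;
using Swashbuckle.Swagger;

public class Consumes : IOperationFilter
{
    public void Apply(Operation operation, SchemaRegistry schemaRegistry, ApiDescription apiDescription)
    {
        // Check whether the action carries the SwaggerConsumesAttribute decorator.
        var attribute = apiDescription.ActionDescriptor
                                      .GetCustomAttributes<SwaggerConsumesAttribute>()
                                      .FirstOrDefault();
        if (attribute == null)
        {
            return;
        }

        // Replace the defaults with the content types passed from the decorator.
        operation.consumes = attribute.ContentTypes.Distinct().ToList();
    }
}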

What it does is:

  • check the content type values passed from the SwaggerConsumesAttribute decorator, and
  • add all content types passed from the decorator to operation.consumes.

This needs to be added to the Swashbuckle configuration like:
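And the corresponding registration, alongside any other filters:

config.EnableSwagger(c =>
{
    // ... existing configuration ...
    c.OperationFilter<Consumes>();
});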

Now run the Web API again and we’ll see the result like:

Implementing SwaggerProducesAttribute and Produces

Both the SwaggerProducesAttribute decorator and the Produces filter can be written in the same way as we did above for consumes.

The SwaggerProducesAttribute decorator can be applied to:

The picture below is the Swagger definition based on the extensions above:

We've so far had a look at how to extend Swashbuckle to fill in the missing parts of the Swagger definition. In the next post, we'll walk through another extension for the Swagger definition.

Tools for Testing Webhooks

In a microservices environment, APIs are the main communication method between services. Most of the time each API call merely sends a request and waits for its response, but there are definitely cases that need a longer period to complete, and some even stop processing at a certain stage until they get a signal to continue. Those are not uncommon in the API world. However, implementing those features can be a little tricky because most APIs use HTTP, which is web-based and has timeout constraints; that timeout limitation is pretty critical when trying to finish a long-running process within a given period of time. Fortunately, there are already solutions (or patterns) to overcome those sorts of challenges – asynchronous patterns and/or webhook patterns. As the two are quite similar, they are used interchangeably or together. While those approaches sort out the long-running process issues, they are hard to test or debug without some external tools. In this post, we are going to have a look at a couple of useful tools for debugging and testing when we develop REST API applications, especially webhook APIs.

Disclaimer: those services introduced in this post do not have any sort of relationship to us, Kloud.

RequestBin

RequestBin is an online webhook request sneaking tool. It has a very simple user interface so that developers can hop into the service straight away. If we want to check webhook request data, follow the steps below:

  1. Click the Create a RequestBin button at the first page.

  2. A temporary bin URL is generated. Grab it.

  3. Use this bin URL as a webhook callback URL. Here’s the screenshot using Postman sending a request to the bin URL.

  4. The request data sent from Postman is captured like below. If the bin URL is https://requestb.in/1fbhrlf1, just append the querystring parameter ?inspect so that we can inspect the request data. Can we find the same request data we sent?

This service brings a few merits. Firstly, we can simply use the bin URL when we register a webhook; the webhook request body is captured at the bin URL. Secondly, we don't have to develop a test application to analyse the webhook payload, because the data has already been captured by the bin URL. Thirdly, we save the time and resources that such a testing application would cost. And finally, it's free!

Of course, there are demerits. Each bin URL is only available for a limited period of time. According to the service website, the bin URL is only valid for the first 48 hours after it's generated; in fact, the lifetime varies from 5 minutes to 48 hours. If we close the web browser and open it again, the bin URL is no longer valid. Therefore, this is only for testing webhooks that have a short lifecycle. In addition, RequestBin isn't a good fit for local debugging. There must be cases where the webhook receives data and processes it; RequestBin only shows how the webhook request data is captured, nothing further.

If RequestBin is not our solution, what else can we use? How can we debug or test the webhook functions?

ngrok

According to this post, tunneling services take traffic from the Internet to a local development environment. ngrok is one of those tunneling services. It has both a free version and a paid one, but the free one is enough for our purpose.

As it supports multiple platforms, download the binary suitable for your OS. For Windows, there is only one binary, ngrok.exe. Copy it to the C:\ngrok folder (or wherever preferred) and enter the command below:

ngrok http 7071 -host-header=localhost
  • http: This specifies the protocol to watch incoming traffic.
  • 7071: This is the port number. Default is 80. If we debug Azure Functions, set this port number to 7071.
  • -host-header=localhost: Without this option, ngrok still captures the traffic but it can't reach the local debugging environment.

After typing the command above, we can see the screen like below:

As we can see, external traffic hitting the endpoint of http://b46c7c81.ngrok.io can reach our locally running Azure Functions app. Let’s run Azure Functions in a local debugging mode.

Azure Functions is now up and running. Run Postman and send a request to the endpoint that ngrok has generated.

We can now confirm that the code stops at the break point within Visual Studio.

ngrok also provides a replay feature. If we access http://localhost:4040, we can see how ngrok has captured all requests since it started.

The free version of ngrok changes the endpoint every time it runs, and the run history is also blown away. But this doesn't matter for our purpose.

So far, we have briefly looked at a couple of tools to sneak at webhook traffic for debugging purposes. If we utilise those tools well, we can much more easily check how API request calls are made. They are definitely worth noting.

Synchronizing Exchange Online/Office 365 User Profile Photos with FIM/MIM

Introduction

This is Part Two of a two-part blog post on managing user profile photos with Microsoft FIM/MIM. Part one, here, detailed managing a user's Azure AD/Active Directory profile photo. This post delves deeper into photos, specifically around Office 365 and the reasons why you may want to manage these via FIM/MIM.

Background

User profile photos should be simple to manage. But in a rapidly moving hybrid cloud world it can be a lot more complex than it needs to be. The best summary I’ve found of this evolving moving target is from Paul Ryan here.

Using Paul's sound advice, we too are advising our customers to let users manage their profile photo (within corporate guidelines) via Exchange Online. However, as described in this article, photos managed in OnPremise Active Directory are synchronized to Azure AD, and on to other Office365 services, only once. And of course we want them to be consistent across AD DS, Azure AD, Exchange Online and all other Office365 services.

This post details synchronizing user profile photos from Exchange Online to MIM for further synchronization to other systems. The approach uses a combination of Azure GraphAPI and Exchange Remote PowerShell to manage Exchange Online User Profile Photos.

The following graphic depicts what the end goal is;

Current State

  • Users historically had a photo in Active Directory. DirSync/ADSync/AzureADConnect then synchronized that to Azure AD (and once only into Office 365).
  • Users update their photo in Office365 (via Exchange Online and Outlook Web Access)
    • the photo is synchronized across Office365 Services

Desired State

  • An extension of the Current State is the requirement to be able to take the image uploaded by users in Exchange Online, and synchronize it back to the OnPremise AD, and any other relevant services that leverage a profile photo
  • Have AzureADConnect keep AzureAD consistent with the new photo obtained from Office365 that is synchronized to the OnPrem Active Directory
  • Sync the current photo to the MIM Portal

Synchronizing Office365 Profile Photos

Whilst Part-one dealt with the AzureAD side of profile photos as an extension to an existing AzureAD PowerShell Management Agent for FIM/MIM, I’ve separated out the Office365 side to streamline it and make it as efficient as possible. More on that later. As such I’ve created a new PowerShell Management Agent specifically for Office365 User Profile Photos.

I'm storing the Exchange Online photo in the MIM Metaverse as a binary object just as I did for the AzureAD photo (but in a different attribute). I'm also storing a checksum of the photos (as I did for the AzureAD photo, again in a different attribute) to make it easier to compare what is in Azure AD and Exchange Online, which can then be used to determine if changes have been made (e.g. a user updated their profile photo).

Photo Checksum

For generating the hash of the profile photos I'm using Get-Hash from the Powershell Community Extensions. Whilst PowerShell has Get-FileHash, I don't want to write the profile photos out to disk and read them back in just to get the checksum; that slows the process down by 25%. You can get the checksum using a number of different methods and algorithms. Just be consistent and use the same method across both profile photos and you'll be comparing apples with apples, and the comparison logic will work.
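As a hedged illustration of the no-temp-file idea (this is not the exact code from the MA; it simply shows that a checksum can be computed over the photo bytes in memory):

# $photoBytes is assumed to already hold the binary photo returned from Exchange Online.
$md5 = [System.Security.Cryptography.MD5]::Create()
$hashBytes = $md5.ComputeHash($photoBytes)
# Store the checksum as a hex string so the comparison logic is a simple string match.
$checksum = [System.BitConverter]::ToString($hashBytes) -replace '-', ''
$checksum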

Some notes on Photos and Exchange Online (and MFA)

This is where things went off on a number of tangents. Initially I tried accessing the photos using Exchange Online Remote PowerShell.

CAVEAT 1: If your Office365 Tenant is enabled for Multi-Factor Authentication (which it should be) you will need to get the Exchange Online Remote PowerShell Module as detailed here. Chances are you won't have full Office365 Admin access though, so as long as the account you will be using is in the Recipient Management Role you should be able to go to the Exchange Control Panel using a URL like https://outlook.office365.com/ecp/?realm=<tenantname>&wa=wsignin1.0 where tenantname is something like customer.com.au. From the Hybrid menu in the right-hand pane you will then be able to download the Microsoft.Online.CSE.PSModule.Client.application. I had to use Internet Explorer to download the file and get it installed successfully. Once installed I used a few lines from this script here to load the Function and start my RPS session from within PowerShell ISE during solution development.

CAVEAT 2: The EXO RPS MFA PS Function doesn’t allow you to pass it your account password. You can pass it the identity you want to use, but not the password. That makes scheduled process automation with it impossible.

CAVEAT 3: The RPS session exposes the Get-UserPhoto cmdlet, which is great. But the RPS session leverages the GraphAPI, and the RPS PS Module doesn't refresh its tokens, so if the import takes longer than 60 minutes then using this method you're a bit stuffed.

CAVEAT 4: Using the Get-UserPhoto cmdlet detailed above, the syncing of photos is slow. As in I was only getting ~4 profile photos per minute slow. This also goes back to the token refresh issue as for pretty much any environment of the size I deal with, this is too slow and will timeout.

CAVEAT 5: You can whitelist the IP Address (or subnet) of your host so MFA is not required using Contextual IP Addressing Whitelisting. At that point there isn’t really a need to use the MFA Enabled PREVIEW EXO RPS function anyway. That said I still needed to whitelist my MIM Sync Server(s) from MFA to allow integration into the Graph API. I configured just the single host. The whitelist takes CIDR format so that looks like /32 (eg. 11.2.33.4/32)

Performance Considerations

As I mentioned above,

  • using the Get-UserPhoto cmdlet was slow. ~4 per minute slow
  • using the GraphAPI into Exchange Online and looking at each user and determining if they had a photo then downloading it, was also slow. Slow because at this customer only ~50% of their users have a photo on their mailbox. As such I was only able to retrieve ~145 photos in 25 minutes. *Note: all timings listed above were during development and actually outputting the images to disk to verify functionality. 

Implemented Solution

After all my trial and error on this, here is my final approach and working solution;

  1. Use the Exchange Online Remote PowerShell (non-MFA version) to query and return a collection of all mailboxes with an image *Note, add an exception for your MIM Sync host to the white-listed hosts for MFA (if your Office365 Tenant is enabled for MFA) so the process can be automated
  2. Use the Graph API to obtain those photos
    • with this I was able to retrieve ~1100 profile photos in ~17* minutes (after ~2 minutes to query and get the list of mailboxes with a profile photo)

Pre-requisites

There's a lot of info above, so let me summarise the pre-requisites;

  • The Granfeldt PowerShell MA
  • Whitelist your FIM/MIM Sync Server from MFA (if your Office 365 environment is enabled for MFA)
  • Add the account you will run the MA as, that will in turn connect to EXO via RPS to the Recipient Management Role
  • Create a WebApp for the PS MA to use to access users Profile Photos via the Graph API (fastest method)
  • Powershell Community Extensions to generate the image checksum

Creating the WebApp to access Office365 User Profile Photos

Go to your Azure Portal and select the Azure Active Directory Blade from the Resource Menu bar on the left. Then select App Registrations and from the Manage Section of the Azure Active Directory menu, and finally from the top of the main pane select “New Application Registration“.

Give it a name and select Web app/API as the type of app. Make the sign-in URL https://localhost and then select Create.

Record the ApplicationID that you see in the Registered App Essentials window. You’ll need this soon.

Now select All Settings => Required Permissions. Select Read all users basic profiles in addition to Sign in and read user profile. Select Save.

Under Required Permissions select Add and then select 1 Select an API, and select Office 365 Exchange Online then click Select.

Choose 2 Select Permissions and then select Read user profiles and Read all users’ basic profiles. Click Select.

Select Grant Permissions

From Settings select Keys, give your key a Description, choose a key lifetime and select Save. RECORD the key value. You’ll need this along with the WebApp ApplicationID/ClientID for the Import.ps1 script.

Using the information from your newly registered WebApp, we need to perform the first authentication (and authorization of the WebApp) to the Graph API. Taking your ApplicationID, Key (Client Secret) and the account you will use on the Management Agent (and that you have assigned the Recipient Management Role in Exchange Online), run the script detailed in this post here. It will authenticate you to your new WebApp via the GraphAPI after asking you to provide the account you will use on the MA and authorizing the permissions you selected when registering the app. It will also create a refresh.token file which we will give to the MA to automate our connection. The Authorization dialog looks like this.

Creating the Management Agent

Now we can create our Management Agent using the Granfeldt PowerShell Management Agent. If you haven't created one before, check out a post like this one, which further down shows the creation of a Granfeldt PSMA. Don't forget to provide blank export.ps1 and password.ps1 files in the directory where you place the PSMA scripts.

PowerShell Management Agent Schema.ps1

PowerShell Management Agent Import.ps1

As detailed above the PSMA will leverage the WebApp to read users Exchange Profile Photos via the Graph API. The Import script also leverages Remote Powershell into Exchange Online (for reasons also detailed above). The account you run the Management Agent as will need to be added to the Recipient Management Role Group in order to use Remote PowerShell into Exchange Online and get the information required.

Take the Import.ps1 script below and update;

  • Update lines 11, 24 and 42 for the path to where you have put your PSMA. Mine is under the Extensions directory in a directory named EXOPhotos.
  • copy the refresh.token generated when authenticating and authorizing the WebApp earlier into the directory you specified in line 42 above.
  • Create a Debug directory under the directory you specified in lines 11,24 and 42 above so you can see what the MA is doing as you implement and debug it the first few times.
  • I’ve written the Import to use Paged Imports, so make sure you tick the Paged Imports checkbox on the configuration of the MA
  •  Update Lines 79 and 80 with your ApplicationID and Client Secret that you recorded when creating your WebApp

Running the Exchange User Profile Photos MA

Now that you have created the MA, you should select the EXOUser ObjectClass and the attributes defined in the schema. You should also create the EXOPhoto (as Binary) and EXOPhotoChecksum (as String) attributes in the Metaverse on the person ObjectType (assuming you are using the built-in person ObjectType).

Configure your flow rules to flow the EXOPhoto and EXOPhotoChecksum on the MA to their respective attributes in the MV.

Create a Stage Only run profile and run it. If you have done everything correctly you will see photos come into the Connector Space.

Looking at the Connector Space, I can see EXOPhoto and EXOPhotoChecksum have been imported.

After performing a Synchronization to get the data from the Connector Space into the Metaverse it is time to test the image that lands in the Metaverse. That is quick and easy via PowerShell and the Lithnet MIIS Automation PowerShell Module.

# Grab the Metaverse object for a user and inspect the photo attribute
$me = Get-MVObject -ObjectType person -Attribute accountName -Value "drobinson"
$me.Attributes.EXOPhoto.Values.ValueBinary
# Write the photo bytes out to a file so it can be opened and checked
[System.IO.File]::WriteAllBytes("c:\temp\myOutlookphoto.jpg", $me.Attributes.EXOPhoto.Values.ValueBinary)

The file is output to the directory with the filename specified.

Opening the file reveals correctly my Profile Photo.

Summary

In Part one we got the AzureAD/Active Directory photo. In this post we got the Office365 photo.

Now that we have the images from Office365, we need to synchronize any photo updates to Active Directory (and in turn, via AADConnect, to Azure AD). Keep in mind the image size limits for Active Directory and that we retrieved the largest photo available from Office365 when synchronizing the photo onwards. There are a number of PowerShell modules for image manipulation that will allow you to resize accordingly.

A quick start guide to leveraging the Azure Graph API with PowerShell and oAuth 2.0

Introduction

In September 2016 I wrote this post detailing integrating with the Azure Graph API via PowerShell and oAuth 2.0.

Since that point in time I’ve found myself doing considerably more via PowerShell and the Graph API using oAuth. I regularly find myself leveraging previous scripts to generate a new script for the initial connection. To the point that I decided to make this simpler and provide a nice clean starting point for new scripts.

This blog post details a simple script to generate a couple of PowerShell Functions that can be the basis for integration with Graph API using PowerShell via a WebApp using oAuth2.

Overview

This script will request the necessary information required to call into the Graph API and establish a session: specifically, the WebApp's Application (Client) ID, the Client Secret and the Reply URL recorded when registering the WebApp (detailed below).

Armed with this information the shell of a PowerShell script will be created that will;

  • Authenticate a user to Graph API via Powershell and oAuth 2.0
  • Request Authorization for the WebApp to access the Scope provided (if Admin approval scope is requested and the AuthN is performed by a non-admin an authorization failure message will appear detailing an Administrator must authorize).
  • Obtain an Authorization Code which will contain the Bearer Token and Refresh Token.
    • The Bearer token can be used to make Graph API calls for up to 1 hour.
    • The Refresh token will allow you to request a new token and allow your script to be used again to interact via Graph API without going through the Authentication process again.

The following graphic shows this flow.

Create/Register your Application

Go to the Application Registration Portal https://apps.dev.microsoft.com/ and sign in. This is the new portal for registering your apps. It will show any previous apps you registered within AzureAD and any of the new “Converged Apps” you’ve created via the new Application Registration Portal.

Select Add an app from the Converged applications list.

Give your app a name and select Create

Record the Application ID (previously known as the Client ID) and select Generate New Password.

You will be provided your Client Secret. Record this now as it is the only time you will see it. Select Ok.

By default you will get User.Read permissions on the API. That is enough for this sample. Depending on what you will do with the API you will probably need to come and change the permissions or do it dynamically via the values you supply the $resource setting in your API calls.

Select Platforms, select Web and add a reply URL of https://localhost

Scroll to the bottom of the Registration windows and select Save.

Generate your PowerShell Graph API oAuth Script

Copy the following script and put it into an Administrator PowerShell/PowerShell ISE session and run it.

It will ask you to choose a folder to output the resultant PowerShell script to. You can create a new folder through this dialog window if required.

The script will prompt you for the Client/Application ID, Client Secret and the Reply URL you obtained when registering the Web App in the steps above.

The script will be written out to the folder you chose in the first step and it will be executed. It will prompt you to authenticate. Provide the credentials you used when you created the App in the Application Registration Portal.

You will be prompted to Authorize the WebApp. Select Accept

If you’ve executed the previous steps correctly you’ll receive an AuthCode in your PowerShell output window

You’ll then see the output for a sample query for your user account and below that the successful call for a refresh of the tokens.

Summary

In the folder you chose you will find a PowerShell script with the name Connect-to-Microsoft-Graph.ps1. You will also find a file named refresh.token. You can use the script to authenticate with your new app, but more simply use the Get-NewTokens function to refresh your tokens and then write your own API queries to your app using the tokens. Unless you change the scope you don't need to run Get-AzureAuthN again. Just use Get-NewTokens before your API calls.

e.g.

Get-NewTokens  
$myManager = Invoke-RestMethod -Method Get -Headers @{Authorization = "Bearer $accesstoken"
 'Content-Type' = 'application/json'} `
 -Uri "https://graph.microsoft.com/v1.0/me/manager"

 $myManager

Change the scope of your app to get more information. If you add a scope that requires Admin consent (and you’re not an admin), when prompted to authenticate you will need to get an Admin to authenticate and authorize the scope. Because you’ve changed the scope you will need to run the Get-AzureAuthN function again after updating $scope (as per below) and the dependent $scopeEncoded.

As the screenshot below shows, I added the Mail.Read permission. I changed the $scope in the script so that it reflected the changes, e.g.:

#Scope
$scope = "User.Read Mail.Read"
$scopeEncoded = [System.Web.HttpUtility]::UrlEncode($scope)

When running the script again (because of the change of scope) you will be prompted to confirm the change of access.

You can then query your inbox, e.g.

 $myMail = Invoke-RestMethod -Method Get -Headers @{Authorization = "Bearer $accesstoken"
 'Content-Type' = 'application/json'} `
 -Uri "https://graph.microsoft.com/v1.0/me/messages"
 $mymail

And there are the mail messages from your inbox.

I hope that makes getting started with the oAuth2 Graph API via PowerShell a lot simpler than it was for me initially, with the differing endpoints, evolving API and the associated documentation somewhere in-between.

How to use a Powershell Azure Function to Tweet IoT environment data

Overview

This blog post details how to use a Powershell Azure Function App to get information from a RestAPI and send a social media update.

The data can come from anywhere, and in the case of this example I’m getting the data from WioLink IoT Sensors. This builds upon my previous post here that details using Powershell to get environmental information and put it in Power BI.  Essentially the difference in this post is outputting the manipulated data to social media (Twitter) whilst still using a TimerTrigger Powershell Azure Function App to perform the work and leverage the “serverless” Azure Functions model.

Prerequisites

The following are prerequisites for this solution;

Create a folder on your local machine for the Powershell Module, then save the module to your local machine using the Powershell command 'Save-Module' as per below.

Save-Module -Name InvokeTwitterAPIs -Path c:\temp\twitter

Create a Function App Plan

If you don’t already have a Function App Plan create one by searching for Function App in the Azure Management Portal. Give it a Name, Select Consumption so you only pay for what you use, and select an appropriate location and Storage Account.

Create a Twitter App

Head over to http://dev.twitter.com and create a new Twitter App so you can interact with Twitter using their API. Give your Twitter App a name. Don't worry about the URL too much or the need for the Callback URL. Select Create your Twitter Application.

Select the Keys and Access Tokens tab and take a note of the API Key and the API Secret. Select the Create my access token button.

Take a note of your Access Token and Access Token Secret. We’ll need these to interact with the Twitter API.

Create a Timer Trigger Azure Function App

Create a new TimerTrigger Azure Powershell Function. For my app I'm changing from the default 5-minute schedule to hourly, on the top of the hour. I did this after I'd already created the Function App, as shown below. To update the schedule I edited the function.json file and changed the schedule to "schedule": "0 0 * * * *".
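For reference, the timer binding in function.json ends up looking something like this (the binding name will match whatever your function template generated):

{
  "bindings": [
    {
      "name": "myTimer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 0 * * * *"
    }
  ],
  "disabled": false
}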

Give your Function App a name and select Create.

Configure Azure Function App Application Settings

In your Azure Function App select "Configure app settings". Create new App Settings for your Twitter Account, Twitter AccessToken, AccessTokenSecret, APIKey and APISecret, using the values from when you created your Twitter App earlier.
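Inside the Function those App Settings surface as environment variables, so they can be read like this (the setting names below are whatever you chose above, i.e. assumptions):

# App Settings are exposed to the Function as environment variables.
$twitterAccount    = $env:TwitterAccount
$accessToken       = $env:TwitterAccessToken
$accessTokenSecret = $env:TwitterAccessTokenSecret
$apiKey            = $env:TwitterAPIKey
$apiSecret         = $env:TwitterAPISecret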

Deployment Credentials

If you haven’t already configured Deployment Credentials for your Azure Function Plan do that and take note of them so you can upload the Twitter Powershell module to your app in the next step.

Take note of your Deployment Username and FTP Hostname.

Upload the Twitter Powershell Module to the Azure Function App

Create a sub-directory under your Function App named bin and upload the Twitter Powershell Module using a FTP Client. I’m using WinSCP.

From the Applications Settings option start Kudu.

Traverse the folder structure to get the path to the Twitter Powershell Module and note it.


Validating our Function App Environment

Update the code, replacing the sample from the creation of the TimerTrigger Azure Function as shown below, to import the Twitter Powershell Module. Include the get-help line for the module so we can see in the logs that the module was imported and we can see the cmdlets it contains. Select Save and Run.

Below is my output. I can see the output from the Twitter Module.

Function Application Script

Below is my sample script. It has no error handling etc so isn’t production ready, but gives a working example of getting data in from an API (in this case IoT sensors) and sends a tweet out to Twitter.

Viewing the Tweet

And here is the successful tweet.

Summary

This shows how easy it is to utilise Powershell and Azure Function Apps to get data and transform it for use in other ways; in this example, a social media platform. The input could easily be business data from an API and the output a corporate social platform such as Yammer.

Follow Darren on Twitter @darrenjrobinson

How to use a Powershell Azure Function App to get RestAPI IoT data into Power BI for Visualization

Overview

This blog post details using a Powershell Azure Function App to get IoT data from a RestAPI and update a table in Power BI with that data for visualization.

The data can come from anywhere, however in the case of this post I’m getting the data from WioLink IoT Sensors. This builds upon my previous post here that details using Powershell to get environmental information and put it in Power BI.  Essentially the major change is to use a TimerTrigger Azure Function to perform the work and leverage the “serverless” Azure Functions model. No need for a reporting server or messing around with Windows scheduled tasks.

Prerequisites

The following are the prerequisites for this solution;

  • The Power BI Powershell Module
  • Register an application for RestAPI Access to Power BI
  • A Power BI Dataset ready for the data to go into
  • AzureADPreview Powershell Module

Create a folder on your local machine for the Powershell modules, then save the modules to your local machine using the Powershell command 'Save-Module' as per below.

Save-Module -Name PowerBIPS -Path C:\temp\PowerBI
Save-Module -Name AzureADPreview -Path c:\temp\AzureAD 

Create a Function App Plan

If you don’t already have a Function App Plan create one by searching for Function App in the Azure Management Portal. Give it a Name, Select Consumption Plan for the Hosting Plan so you only pay for what you use, and select an appropriate location and Storage Account.

Register a Power BI Application

Register a Power BI App if you haven’t already using the link and instructions in the prerequisites. Take a note of the ClientID. You’ll need this in the next step.

Configure Azure Function App Application Settings

In this example I'm using Azure Functions Application Settings for the Azure AD AccountName, Password and the Power BI ClientID. In your Azure Function App select "Configure app settings". Create new App Settings for your UserID and Password for Azure (to access Power BI) and your Power BI Application Client ID. Select Save.

Not shown here, I've also placed the URLs for the RestAPIs that I'm calling to get the IoT environment data as Application Settings variables.

Create a Timer Trigger Azure Function App

Create a new TimerTrigger Azure Powershell Function App. The default of a 5 min schedule should be perfect. Give it a name and select Create.

Upload the Powershell Modules to the Azure Function App

Now that we have created the base of our Function App we’re going to need to upload the Powershell Modules we’ll be using that are detailed in the prerequisites. In order to upload them to your Azure Function App, go to App Service Settings => Deployment Credentials and set a Username and Password as shown below. Select Save.

Take note of your Deployment Username and FTP Hostname.

Create a sub-directory under your Function App named bin and upload the Power BI Powershell Module using a FTP Client. I’m using WinSCP.

To make sure you get the correct path to the Powershell module, start Kudu from the Application Settings option.

Traverse the folder structure to get the path to the Power BI Powershell Module and note the path and the name of the psm1 file.

Now upload the Azure AD Preview Powershell Module in the same way as you did the Power BI Powershell Module.

Again using Kudu, validate the path to the Azure AD Preview Powershell Module. The file you are looking for is the Microsoft.IdentityModel.Clients.ActiveDirectory.dll file. My file after uploading is located at "D:\home\site\wwwroot\MyAzureFunction\bin\AzureADPreview\2.0.0.33\Microsoft.IdentityModel.Clients.ActiveDirectory.dll".

This library is used by the Power BI Powershell Module.

Validating our Function App Environment

Update the code, replacing the sample from the creation of the TimerTrigger Azure Function as shown below, to import the Power BI Powershell Module. Include the get-help line for the module so we can see in the logs that the module was imported and we can see the cmdlets it contains. Select Save and Run.
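A hedged sketch of that import block (the path follows the Kudu path noted above; your function name and module version folder will differ):

# Import the Power BI Powershell Module that was uploaded to the Function's bin folder.
Import-Module "D:\home\site\wwwroot\MyAzureFunction\bin\PowerBIPS\PowerBIPS.psm1"

# The post uses get-help to show the module's cmdlets in the log; Get-Command does the same job.
Get-Command -Module PowerBIPS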

Below is my output. I can see the output from the Power BI Module get-help command. I can see that the module was successfully loaded.

Function Application Script

Below is my sample script. It has no error handling etc so isn’t production ready, but gives a working example of getting data in from an API (in this case IoT sensors) and puts the data directly into Power BI.

Viewing the data in Power BI

In Power BI it is then quick and easy to select our Inside and Outside temperature readings referenced against time. This timescale is overnight so both sensors are reading quite close to each other.

Summary

This shows how easy it is to utilise Powershell and Azure Function Apps to get data and transform it for use in other ways; in this example, a visualization of IoT data in Power BI. The input could easily be business data from an API and the output a real-time reporting dashboard.

Follow Darren on Twitter @darrenjrobinson

Forcing MFA in Amazon Web Services

Many organisations will want to enforce MFA as an added security layer for their users. As each service is different, in some cases enforcing MFA may not be as easy as it sounds.

In AWS, an administrator cannot simply "tick" a box to enable MFA for all users (as of this writing). However, MFA can be enforced on API calls to "force" users to set up MFA. Think of it as a backdoor to forcing or enabling MFA on all your IAM users.

The only way that can be achieved is by creating a policy. The policy is a JSON document; you could write it in the AWS console, or in an editor such as Visual Studio.

In this blog, I will demonstrate how to enforce MFA in AWS for all your IAM users. This blog will not go through setting up MFA for IAM users; that can be achieved by following the steps here.

This blog assumes you have some AWS knowledge and experience, and will not go into detail on how to attach a policy or create one step-by-step; that can be found here.

How Does AWS define MFA?

AWS Multi-Factor Authentication (MFA) is a simple best practice that adds an extra layer of protection on top of your user name and password. With MFA enabled, when a user signs in to an AWS website, they will be prompted for their user name and password (the first factor—what they know), as well as for an authentication code from their AWS MFA device (the second factor—what they have). Taken together, these multiple factors provide increased security for your AWS account settings and resources.

You can enable MFA for your AWS account and for individual IAM users you have created under your account. MFA can also be used to control access to AWS service APIs.

You can read more about it here.

What is API Calling?

Enforcing MFA on API calls means that an IAM user, for example, will not be able to make changes to EC2 instances or launch new EC2 instances unless they're MFA-authenticated, even if they have full admin permissions.

This is not to be confused with enforcing MFA upon signing in; the IAM user will still have to set up MFA to complete tasks in AWS.

The error messages are not very descriptive, but an IAM user will be faced with the following if MFA isn't set up.

ec2

In terms of Route53, the IAM users will have different error messages.

route53

In this case, it is up to administrators to let IAM users know that MFA should be set up upon signing in.

Setting up and creating the MFA Policy

The MFA policy that we will create and apply will only allow the use of IAM services, so that users can set up their MFA, among other configurations.

iam

Creating the MFA script

Before you start, you will need your IAM ARN (Amazon Resource Name). For IAM users, it looks like this: "arn:aws:iam::123456789:user/test".

For groups, it looks like this: "arn:aws:iam::123456789:group/AmazonEC2FullAccess".

The number in the ARN is your AWS Account ID, which you can find under "My Account".

The ARN will be used in the script, and also, if required, for particular users or groups (explained in the examples below). Nevertheless, in this script we will allow MFA to be enforced on all groups and users by default.

Don’t forget to replace your AWS ARN.


{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAllUsersToListAccounts",
      "Effect": "Allow",
      "Action": [
        "iam:ListAccountAliases",
        "iam:ListUsers"
      ],
      "Resource": [
        "arn:aws:iam::123456789:group/*"
      ]
    },
    {
      "Sid": "AllowIndividualUserToSeeTheirAccountInformation",
      "Effect": "Allow",
      "Action": [
        "iam:ChangePassword",
        "iam:CreateLoginProfile",
        "iam:DeleteLoginProfile",
        "iam:GetAccountPasswordPolicy",
        "iam:GetAccountSummary",
        "iam:GetLoginProfile",
        "iam:UpdateLoginProfile"
      ],
      "Resource": [
        "arn:aws:iam::123456789:group/${aws:username}"
      ]
    },
    {
      "Sid": "AllowIndividualUserToListTheirMFA",
      "Effect": "Allow",
      "Action": [
        "iam:ListVirtualMFADevices",
        "iam:ListMFADevices"
      ],
      "Resource": [
        "arn:aws:iam::123456789:group/mfa/*",
        "arn:aws:iam::123456789:user/${aws:username}"
      ]
    },
    {
      "Sid": "AllowIndividualUserToManageThierMFA",
      "Effect": "Allow",
      "Action": [
        "iam:CreateVirtualMFADevice",
        "iam:DeactivateMFADevice",
        "iam:DeleteVirtualMFADevice",
        "iam:EnableMFADevice",
        "iam:ResyncMFADevice"
      ],
      "Resource": [
        "arn:aws:iam::123456789:group/mfa/${aws:username}",
        "arn:aws:iam::123456789:group/${aws:username}"
      ]
    },
    {
      "Sid": "DoNotAllowAnythingOtherThanAboveUnlessMFAd",
      "Effect": "Deny",
      "NotAction": "iam:*",
      "Resource": "*",
      "Condition": {
        "Null": {
          "aws:MultiFactorAuthAge": "true"
        }
      }
    }
  ]
}

If you want to enforce MFA on particular groups or users, then you will have to include them as follows.

Example:

"Resource": [
  "arn:aws:iam::123456789:group/AmazonEC2FullAccess:mfa/*",
  "arn:aws:iam::123456789:group/AmazonEC2FullAccess:user/${aws:username}",
  "arn:aws:iam::123456789:group/AmazonAppStreamFullAccess:mfa/*",
  "arn:aws:iam::123456789:group/AmazonAppStreamFullAccess:user/${aws:username}"
]
},

Note that in the script, under Resource, we specified all groups by adding * at the end. The same applies to users if you need to include all users.

Example:

"Resource": [
  "arn:aws:iam::123456789:group/*"
]
},

Previously created users may have to have this policy attached to them manually. IAM users created after the policy is in place will have it attached by default.
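Attaching the policy is out of scope for this post (see the link above), but for completeness a hedged AWS CLI example would look like this – the policy name, group name and file name are assumptions:

# Create the managed policy from the JSON document above, then attach it to a group.
aws iam create-policy --policy-name ForceMFA --policy-document file://force-mfa.json
aws iam attach-group-policy --group-name AllUsers --policy-arn arn:aws:iam::123456789:policy/ForceMFA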

I hope you found this blog informative. Please feel free to post your comments below.

Thanks for reading.

Using an Azure Function to search the FIM/MIM Metaverse, create a Set and update the Set membership in the FIM/MIM Service

Introduction

This is the third and last post in this series of integrating Microsoft Identity Manager with Azure Functions.

The first detailed how to use an Azure Function to retrieve data from the MIM Service Server. The second detailed how to use an Azure Function to retrieve data from the MIM Sync (Metaverse) Server.

This third post combines the two and then performs an action in the MIM Service. The practical purpose of this could be functions like “find all users in location y” and “enable them for entitlement x” or “add an attribute value on each of their objects”.

Overview

The reasoning for the two-stage approach is that, in my experience, it is a lot easier to search the Metaverse than the MIM Service to find an object (or objects), and the Metaverse also has all the information about objects, whereas the MIM Service is a ShadowVerse of the Metaverse containing a subset of the managed objects' metadata.

Moving forward then the architecture is a hybrid of the first two posts that introduced the concepts associated with integrating MIM with Azure Functions. As per the other two posts this is a base working example and concept.

Prerequisites

The prerequisites are the same as for the 1st and 2nd posts in this series. You’ll need to work through those examples to setup the dependencies and prerequisites. From there you can create one more Azure Function that brings everything together. That’s what I’m covering in this post.

Therefore the prerequisites are;

  • Azure Tenant and a Function Plan
  • Microsoft Identity Manager implementation
  • Remote Powershell configured for your MIM Sync Server
  • Lithnet FIM MIIS Automation Powershell Module installed on your MIM Sync Server
  • The necessary Firewall Rules on your MIM Sync Server and your Azure Network Security Group (assuming your MIM Infrastructure is in Azure) to allow Azure Functions to communicate with MIM Sync and Service Servers

 

This Example performs the following

In this example the HTTP Trigger Azure Function;

  • Takes input for ObjectType, Attribute, AttributeValue, SetName
  • Searches the MIM Sync Metaverse for the input ObjectType, with the AttributeValue in the Attribute
  • Connects to the MIM Service
  • Creates the Set based on the input SetName if it doesn't exist
  • Adds the objects from the search to the Set
  • Returns the objects added to the Set

In a real-world implementation you'd do the above with a criteria-based Set. This post is a proof of concept: search and find, then create and update. That has many practical applications.

Create your new Azure Function

Just like the other two posts, we’re going to create a new Powershell HTTP Trigger Azure Function.

Upload the Lithnet RMA PS Module to your new Azure Function (as per blog post 1 in this series). You should also be using protected credentials now, so upload your username/password encryption key as well.

 

Here is the Azure Function Powershell Script that performs the process detailed above.
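A hedged reconstruction of how such a script could be put together is below. It is not the exact script from the post: server names, module paths, credentials handling and attribute names are all assumptions, and it follows the classic PowerShell v1 HTTP-trigger pattern ($req/$res files) with the Lithnet MIIS Automation and Lithnet RMA modules described in the prerequisites.

# Read the POSTed input: ObjectType, Attribute, AttributeValue, SetName.
$requestBody    = Get-Content $req -Raw | ConvertFrom-Json
$objectType     = $requestBody.ObjectType
$attribute      = $requestBody.Attribute
$attributeValue = $requestBody.AttributeValue
$setName        = $requestBody.SetName

# Credentials for the MIM Sync and MIM Service servers (the post uses protected credentials).
$secPassword = ConvertTo-SecureString $env:MIMPassword -AsPlainText -Force
$creds       = New-Object System.Management.Automation.PSCredential ($env:MIMUsername, $secPassword)

# 1. Search the Metaverse on the MIM Sync server via Remote Powershell (Lithnet MIIS Automation).
$mvObjects = Invoke-Command -ComputerName "mimsyncserver.yourdomain.com" -Credential $creds -ScriptBlock {
    param ($objectType, $attribute, $attributeValue)
    Get-MVObject -ObjectType $objectType -Attribute $attribute -Value $attributeValue
} -ArgumentList $objectType, $attribute, $attributeValue

# 2. Connect to the MIM Service (Lithnet RMA module uploaded to the Function's bin folder).
Import-Module "D:\home\site\wwwroot\MIMSetFunction\bin\LithnetRMA\LithnetRMA.psd1"
Set-ResourceManagementClient -BaseAddress "http://mimservice.yourdomain.com:5725"

# 3. Create the Set if it doesn't already exist.
$set = Search-Resources -XPath "/Set[DisplayName='$setName']"
if (-not $set)
{
    $set = New-Resource -ObjectType Set
    $set.DisplayName = $setName
    Save-Resource $set
    $set = Search-Resources -XPath "/Set[DisplayName='$setName']"
}

# 4. Add each matching user to the Set's explicit members (looked up by accountName).
$added = @()
foreach ($mvObject in $mvObjects)
{
    $accountName = $mvObject.Attributes.accountName.Values.ValueString
    $person = Search-Resources -XPath "/Person[AccountName='$accountName']"
    if ($person)
    {
        # Append to the multivalued ExplicitMember attribute.
        $set.ExplicitMember.Add($person.ObjectID)
        $added += $accountName
    }
}
Save-Resource $set

# 5. Return the users that were added to the Set.
Out-File -Encoding Ascii -FilePath $res -InputObject ($added | ConvertTo-Json)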

Test it out. Looks good. 88 users matched the value of Sydney in their location attribute.

Verify that the Set was created and the membership updated.

Test calling the Azure Function remotely

Now that it is all working in the Azure Function, let's try doing it from Powershell remotely. This time I'm again looking for Person objects that have Sydney in their location attribute, and I'll create a Set named Sydney-NSW and put them in it.
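Calling it remotely is just a POST to the function's URL; a hedged example (the URL and function key are placeholders):

# Call the Azure Function from a remote Powershell session.
$body = @{
    ObjectType     = "person"
    Attribute      = "location"       # assumption: the attribute holding the city
    AttributeValue = "Sydney"
    SetName        = "Sydney-NSW"
} | ConvertTo-Json

Invoke-RestMethod -Method Post `
    -Uri "https://yourfunctionapp.azurewebsites.net/api/YourFunctionName?code=<your function key>" `
    -Body $body -ContentType "application/json"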

Brilliant, that works nicely. Let’s verify that the Set was created and has the correct number of users in it. Yes, a perfect match.

Summary

Putting Azure Functions and Powershell together along with the Lithnet Powershell Modules opens up a world of possibilities for automation and integration of the MIM Service without the need for any additional infrastructure or any considerable effort.

Experiment and let me know what you do with this style of integration.

Follow Darren on Twitter @darrenjrobinson