Auto-redirect ADFS 4.0 home realm discovery based on client IP

As promised in my previous post here, this post explains how to auto-redirect the home realm discovery page to an ADFS namespace (claims provider trust) based on the client’s IP address.

Let’s say you have many ADFS servers (claims provider trusts) linked to a central ADFS 4.0 server, and you want to auto-redirect the user to a linked ADFS server’s login page based on the user’s IP address, instead of letting the user choose the respective ADFS server from the list on the home realm discovery page, as explained in the request flow diagram below.

You can do so with the customizations described below:

  1. Create a database of IP ranges mapped to ADFS namespaces
  2. Develop a Web API which returns the relevant ADFS namespace based on request IP
  3. Add custom code to the onload.js file on the central ADFS 4.0 server to call the Web API and perform the redirection

It is assumed that all the boxes, including the central ADFS server, linked ADFS servers, web server and SQL Server, are set up, and that the nitty-gritty details such as firewall rules, DNS lookups and SSL certificates are sorted out. If not, get help from an infrastructure engineer.

Let’s perform the required actions on the SQL, web and ADFS servers.

SQL Server

Perform the following actions on the SQL Server:

  1. Create a new database
  2. Create a new table called Registration, as sketched below
  3. Insert records in the table for each linked ADFS server’s IP range, for example:

Start IP: 172.31.117.1, End IP: 172.31.117.254, Redirect Name: http://adfs.adminlab.com/adfs/services/trust
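The table screenshot from the original post isn’t reproduced here, so below is a minimal T-SQL sketch; the column names are assumed from the example record above.

-- Minimal sketch: column names assumed from the example record above
CREATE TABLE dbo.Registration
(
    Id           INT IDENTITY(1,1) PRIMARY KEY,
    StartIP      VARCHAR(15)  NOT NULL,
    EndIP        VARCHAR(15)  NOT NULL,
    RedirectName VARCHAR(255) NOT NULL
);

INSERT INTO dbo.Registration (StartIP, EndIP, RedirectName)
VALUES ('172.31.117.1', '172.31.117.254', 'http://adfs.adminlab.com/adfs/services/trust');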

Web Server

Perform the following actions for the Web API development and deployment:

  1. Create a new ASP.NET Web API project using Visual Studio
  2. Create a new class called Redirect.cs, as sketched below (I would have used the same name as the database table, ‘Registration’, but it’s OK for now)
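A minimal sketch of Redirect.cs, assuming its properties mirror the table columns:

// Sketch only: properties assumed to mirror the Registration table columns
public class Redirect
{
    public string StartIP { get; set; }
    public string EndIP { get; set; }
    public string RedirectName { get; set; }
}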

  3. Insert a new Web API controller class called ResolverController.cs, as sketched below. It gets the request IP address and the IP ranges from the database, converts both to long integers so they can be compared, and, if the request IP is within a range, returns the matching redirect object.
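The original controller code isn’t reproduced here; the sketch below shows the core logic, with an assumed LoadRegistrations helper standing in for the database query against DbConnectionString.

using System.Collections.Generic;
using System.Net;
using System.Web;
using System.Web.Http;

// Sketch only: LoadRegistrations() stands in for the actual database query
public class ResolverController : ApiController
{
    public Redirect Get()
    {
        // client IP of the incoming request (IIS-hosted Web API)
        var context = (HttpContextWrapper)Request.Properties["MS_HttpContext"];
        long requestIp = ToLong(context.Request.UserHostAddress);

        foreach (var range in LoadRegistrations())
        {
            if (requestIp >= ToLong(range.StartIP) && requestIp <= ToLong(range.EndIP))
            {
                return range;   // request IP is in range: return the redirect object
            }
        }

        return null;            // no match: HRD falls back to the default list
    }

    // convert a dotted-quad IPv4 address to a long for numeric comparison
    private static long ToLong(string ipAddress)
    {
        byte[] bytes = IPAddress.Parse(ipAddress).GetAddressBytes();
        return ((long)bytes[0] << 24) | ((long)bytes[1] << 16) | ((long)bytes[2] << 8) | bytes[3];
    }

    private IEnumerable<Redirect> LoadRegistrations()
    {
        // query the Registration table via the DbConnectionString connection string here
        yield break;
    }
}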

  4. Add a connection string named DbConnectionString in the web.config, pointing to the database we created above
  5. Deploy this Web API project to IIS on the web server
  6. Configure an HTTPS binding for this Web API project using the SSL certificate
  7. Note down the URL of the Web API, something like ‘https://{Web-Server-Web-API-URL}/api/resolver/get’; this will be used in onload.js

Central ADFS 4.0 Server

Perform the following actions on the central ADFS 4.0 server:

  1. Run the following PowerShell command to export the current theme to a location:

Export-AdfsWebTheme -Name default -DirectoryPath D:\Themes\Custom

  2. Run the following PowerShell command to create a new custom theme based on the current theme:

New-AdfsWebTheme -Name custom -SourceName default 

  3. Update the onload.js file extracted in step 1 at D:\Themes\Custom\theme\script with the code sketched below added at the end of the file. It calls the Web API, which returns the matched Redirect object whose RedirectName is an ADFS namespace, and sets HRD.selection to that redirect name.
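The original snippet isn’t reproduced here; below is a minimal sketch of the idea. The Web API URL is the placeholder noted earlier, and HRD.selection is the function the stock HRD page script exposes.

// Sketch only: appended to the end of onload.js
if (typeof HRD !== 'undefined') {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', 'https://{Web-Server-Web-API-URL}/api/resolver/get', true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
            var redirect = JSON.parse(xhr.responseText);
            if (redirect && redirect.RedirectName) {
                // select the matched claims provider trust; ADFS then redirects the user
                HRD.selection(redirect.RedirectName);
            }
        }
    };
    xhr.send();
}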

  4. Run the following PowerShell command to update the onload.js file in the theme:

Set-AdfsWebTheme -TargetName custom -AdditionalFileResource @{Uri='/adfs/portal/script/onload.js';path="D:\Themes\Custom\theme\script\onload.js"}

  5. Run the following PowerShell command to make the custom theme your default theme:

Set-AdfsWebConfig -ActiveThemeName custom -HRDCookieEnabled $false

Now when you test from your linked ADFS server, or from a client machine in that server’s IP range, the auto-redirect kicks in from onload.js: it calls the Web API, which gets the client IP, matches it against the registered ranges, and returns the relevant ADFS namespace, and the user is redirected to the relevant ADFS login page instead of selecting the ADFS namespace from the list on the home realm discovery page.

If no relevant match is found, the default home realm discovery page with the list of available ADFS namespaces is shown.

Implementing Azure API Management with the Lithnet Microsoft Identity Manager Rest API

Introduction

Earlier this week I wrote this post that detailed implementing the Lithnet REST API for FIM/MIM Service. I also detailed using PowerShell to interact with the API Endpoint.

Now let’s imagine you are looking to have a number of Azure serverless features leverage your REST-API-enabled Microsoft Identity Manager environment, or even offer it “as-a-Service”. You’ll want some visibility into how it is performing, and you’ll probably want to implement features such as caching and rate limiting, let alone putting more security controls around it. Enter Azure API Management, which provides all those functions and more.

In this post I detail getting started with Azure API Management by using it to front-end the Lithnet FIM/MIM Rest API.

Overview

In this post I will detail;

  • Enabling Azure API Management
  • Configuring the Lithnet FIM/MIM Rest API integration with Azure API Management
  • Accessing MIM via Azure API Management and the Lithnet FIM/MIM Rest API using PowerShell
  • Reporting

Prerequisites

For this particular scenario I’m interfacing Azure API Management with a REST API that uses Digest Authentication. Even though it is a Windows WCF web service, you could do something similar with any comparable API endpoint. If the backend API endpoint is using SSL it will need to have a valid certificate. Azure API Management allows you to add your own certificates, but I had issues with self-signed certificates; I have it working fine with Let’s Encrypt issued certificates. Obviously you’ll need an Azure subscription as well as an app/service with an API.

Enabling Azure API Management

From the Azure Portal select Create a resource and search for API management and select it.

Add API Mgmt.PNG

Select Create

Create API Mgmt.PNG

Give your API Management Service a name, select a subscription, resource group etc and select Create.

API Mgmt Config 1.PNG

Once you select Create it will take about 30 minutes to be deployed.

Configuring the Lithnet FIM/MIM Rest API integration with Azure API Management

Once your new API Management service has been deployed, from the Azure Portal select the API Management services blade and select the API Management service that you just created. Select APIs.

API Config 1.PNG

Select Add API and then select Add a new API

API Mgmt Config 2.PNG

Give the API a name and description, enter the URI for your API endpoint, and select HTTPS. I’m going to call this MIMSearcher, so I have entered that under API URL Suffix. For initial testing, under Products select Starter. Finally select Create.

API Mgmt Config 4.PNG

We now have our base API set up. From the Backend tile select the Edit icon.

API Mgmt Config 5.PNG

As the backend is authenticated using Basic Authentication, select Basic in Gateway credentials and enter the details of an account with access that will be used by the API Gateway. Select Save.

API Mgmt Config 6.PNG

Now from our API Configuration select Add operation.

API Mgmt Config 7.PNG

First we will create a test operation for the Help page on the Lithnet FIM/MIM Rest API. Provide a Display name, and for the URL add /v2/help. Give it a description and select Create.

Note: I could have had v2 as part of the base URI for the API in the previous steps. I didn’t, as I will be using APIs from both v1 and v2 and didn’t want to create multiple operations.

API Mgmt Config 8.PNG

Select the new Operation (Help)

API Mgmt Config 9.PNG

Select the Test menu. Select Send.

API Mgmt Config 10.PNG

If everything is set up correctly you will get a 200 Success OK response as below.

API Mgmt Config 11.PNG

Accessing MIM via Azure API Management and the Lithnet FIM/MIM Rest API using PowerShell

Head over to your API portal. The URL is https://<yourservicename>.portal.azure-api.net/ where <yourservicename> is the name you gave your API Management Service, shown in the third screenshot at the top of this post. If you are doing this from the browser you used to create the API Management Service you should be signed in already. From the Administrator menu on the right select Profile.

Test API Mgmt 1.PNG

Click on Show under one of the keys and record its value.

Test API Mgmt 2.PNG

Using PowerShell ISE or VSCode update the following Code Snippet and test.

$APIURL = 'https://<yourservicename>.azure-api.net/<APIUrlSuffix>/v2/help'
$secret = 'yourSecret'
$Headers = @{'Ocp-Apim-Subscription-Key' = $secret} 
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12

$response = Invoke-RestMethod -Uri $APIURL -Headers $Headers -ContentType "application/json" -UseBasicParsing -Method Get
$response

The snippet will create a Web Request to the new API and display the results.

Test API Mgmt 3.PNG

Querying the Lithnet Rest API via Azure API Management

Now that we have a working solution end-to-end, let’s do something useful with it. Looking at the Lithnet Rest API, the Resources URI is the key one exposing Resources from the MIM Service.

Resources.PNG

Let’s create a new Operation for Resources, similar to what we did for Help. After selecting Create, configure the Backend for Basic Authentication as we did for Help.

Get Resources.PNG

Testing out the newly exposed endpoint is very similar to before: just a new $APIURL with the addition of /?Person to return all Person resources from the MIM Portal. It lets us know it returned 7256 Person objects, and the results are still paged (100 by default).

Get Persons.PNG

Let’s now Search for just a single user. Search for a Person object whose Display Name is ‘darrenjrobinson’.

Add-Type -AssemblyName System.Web   # provides System.Web.HttpUtility for UrlEncode

$query = "Person[DisplayName='darrenjrobinson']"
$queryEncoded = [System.Web.HttpUtility]::UrlEncode($query)

$APIURL = "https://<yourservicename>.azure-api.net/<APIUrlSuffix>/v2/resources/?filter=/$($queryEncoded)"
$secret = 'yourSecret'
$Headers = @{'Ocp-Apim-Subscription-Key' = $secret}
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12

$user = Invoke-RestMethod -Uri $APIURL -Headers $Headers -ContentType "application/json" -UseBasicParsing -Method Get
$user

Executing, we get a single user returned.

Search for User.PNG

Reporting

Using the Publisher Portal we can get some Stats on what is happening with our API Management implementation.

Go to https://<yourservicename>.portal.azure-api.net/admin and select Analytics.

We then have visibility of what has been using the API Management Service. At a Glance gives an overview, and you can drill down into;

  • Top Users
  • Top Products
  • Top subscriptions
  • Top APIs
  • Top Operations

At a glance looks like this;

At a Glance Stats.PNG

And Top Operations looks like this;

Top Operations.PNG

Summary

That is a quick start guide to implementing Azure API Management in front of a REST API and using PowerShell to integrate with it. Next steps would be to enable caching and to get into more of the advanced features. Enjoy.

 

Swashbuckle Pro Tips for ASP.NET Web API – XML

In the previous post, we implemented Swashbuckle’s IOperationFilter, in combination with AutoFixture, to handle example objects. According to the Swagger spec, Swagger doesn’t only handle JSON payloads; it also copes with XML payloads. In this post, we’re going to use the Swashbuckle library again to handle XML payloads properly.

The sample codes used in this post can be found here.

Acknowledgement

The sample application uses the following spec:

Built-In XML Support of Web API

ASP.NET Web API contains the HttpConfiguration instance out-of-the-box, and it handles serialisation/deserialisation of both JSON and XML payloads. Therefore, without any extra effort, we can see Swagger UI pages showing both JSON and XML payloads.


As Swashbuckle is a JSON-biased library, camelCasing in JSON payloads is well supported, while XML payloads are not: all XML nodes are PascalCased, which is inconsistent. How can we support camel-casing in both JSON and XML? Here’s the tip:
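The original snippet isn’t shown here; as a sketch, the trick is that Swashbuckle builds its schemas through the JSON formatter’s contract resolver, so switching that resolver to camelCase also camel-cases the XML examples:

// Sketch only: Swashbuckle generates example schemas via the JSON contract resolver
var formatters = GlobalConfiguration.Configuration.Formatters;
formatters.JsonFormatter.SerializerSettings.ContractResolver =
    new Newtonsoft.Json.Serialization.CamelCasePropertyNamesContractResolver();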

It’s a little bit strange, by the way, that in order for XML to support camel-casing we should change the JSON serialisation settings. Go back to the Swagger UI screen and we’ll find that all XML nodes except the root one are now camel-cased. The root node will be dealt with later on.

Let’s send a REST API request to the endpoint using an XML payload.

However, the model instance is null! The Web API action can’t recognise the XML payload.

This is because the HttpConfiguration instance uses the DataContractSerializer instance for payload serialisation/deserialisation by default, and that doesn’t work as expected here. What if we change the serialiser from DataContractSerializer to XmlSerializer? That would work. Add the following line of code:
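That line, assuming config is the application’s HttpConfiguration instance:

// switch the XML formatter from DataContractSerializer to XmlSerializer
config.Formatters.XmlFormatter.UseXmlSerializer = true;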

Go back to the screen again and we’ll confirm the same screen as before.

What was the change? Let’s send an XML request again with some value modifications.

Now, the Web API action recognises the XML payload, but something weird happened. We sent lorem ipsum and 123, but they weren’t passed through. What’s wrong this time? There might still be a casing issue. So, instead of the camel-cased payload, use a Pascal-cased XML payload and see how it goes.

Here we go! Now the API action method can get the XML payload properly!

In other words, the Swagger UI screen showed the proper camel-cased payload, but in fact that wasn’t the payload the serialiser accepted. What could be the cause? Previously we set the UseXmlSerializer property value to true, but by itself that achieved nothing: all payload models should also have XmlRoot and/or XmlElement decorators. Let’s update the payload models like:
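A sketch of the updated models, assuming a single Value property (the post’s actual models may have more members):

using System.Xml.Serialization;

// Sketch only: XmlRoot/XmlElement give camel-cased node names that XmlSerializer honours
[XmlRoot("request")]
public class ValueRequestModel
{
    [XmlElement("value")]
    public string Value { get; set; }
}

[XmlRoot("response")]
public class ValueResponseModel
{
    [XmlElement("value")]
    public string Value { get; set; }
}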

Send the XML request again and we’ll get the payload!


However, it’s still not quite right. Let’s have a look at the actual payload and example payload.

The example payload defines ValueRequestModel as the root node. However, the actual payload uses request, as declared in the XmlRoot decorator. Where should I change it? The Swagger spec defines the xml object for this case, and this can easily be done by implementing the ISchemaFilter interface of Swashbuckle.

Defining xml in Schema Object

Here’s the sample code defining the request root node by implementing the ISchemaFilter interface.
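A minimal sketch of such a filter (the class name is assumed):

using System;
using Swashbuckle.Swagger;

// Sketch only: set the Swagger xml object's name to match each model's XmlRoot name
public class XmlSchemaFilter : ISchemaFilter
{
    public void Apply(Schema schema, SchemaRegistry schemaRegistry, Type type)
    {
        if (type == typeof(ValueRequestModel))
        {
            schema.xml = new Xml { name = "request" };
        }
        else if (type == typeof(ValueResponseModel))
        {
            schema.xml = new Xml { name = "response" };
        }
    }
}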

It applies the Visitor Pattern to filter out designated types only.

Once implemented, the Swagger configuration needs to include the filter, registering both ValueRequestModel and ValueResponseModel so that the request model has request as its root node and the response model has response.
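Registering the filter is a one-liner inside the EnableSwagger configuration (a sketch, using the class name assumed above):

c.SchemaFilter<XmlSchemaFilter>();

Try the Swagger UI screen again.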

Now the XML payload looks great and works great.

So far, we’ve briefly walked through how Swagger document deals with XML payloads, using Swashbuckle.

Swashbuckle Pro Tips for ASP.NET Web API – Example(s) Using AutoFixture

In the previous post, we implemented Swashbuckle’s IOperationFilter to emit the consumes and produces properties in a Swagger document. This post will implement another IOperationFilter to emit example(s) properties containing values auto-generated by AutoFixture.

The sample codes used in this post can be found here.

Acknowledgement

The sample application uses the following spec:

Defining example(s) in Operation Object

There are a few places, in a Swagger definition, where example objects are declared:

  • example property in parameter
  • examples property in responses
  • example field in definitions

The Swashbuckle library automatically generates that example data with default values based on the data types. In other words, if the data type is string, the example value becomes string; if the data type is number, the example value becomes 0; and so on:

Is there any way we can have more meaningful data in those example objects? Of course there is. This NuGet package helps us create more meaningful example objects. Download the package and write some code like:
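The original code isn’t reproduced here; a minimal sketch follows. The post calls the interface IExampleProvider; the Swashbuckle.Examples builds I know spell it IExamplesProvider, which is used here, and the hard-coded value is illustrative.

// Sketch only: returns the hard-coded example shown on the Swagger UI page
public class ValueResponseExample : IExamplesProvider
{
    public object GetExamples()
    {
        return new ValueResponseModel { Value = "lorem ipsum" };
    }
}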

Then add decorators to an action of the Web API:
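A sketch of the decorated action, assuming names from this post and the sketch above:

// Sketch only: SwaggerResponse describes the response type;
// SwaggerResponseExample points at the example provider class
[SwaggerResponse(HttpStatusCode.OK, "Value response", typeof(ValueResponseModel))]
[SwaggerResponseExample(HttpStatusCode.OK, typeof(ValueResponseExample))]
public IHttpActionResult Get()
{
    // action body elided in this sketch
    return Ok(new ValueResponseModel { Value = "lorem ipsum" });
}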

The first one, SwaggerResponse, is a built-in Swashbuckle decorator, and the second one, SwaggerResponseExample, is what we’re going to use for example data generation. Once the decorators are added, we need to add another OperationFilter action in the Swagger config file like:
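A sketch of that registration (Swashbuckle.Examples ships an ExamplesOperationFilter for the purpose):

GlobalConfiguration.Configuration
    .EnableSwagger(c =>
    {
        c.SingleApiVersion("v1", "Sample API");
        c.OperationFilter<ExamplesOperationFilter>();
    })
    .EnableSwaggerUi();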

Reload the Swagger UI page and we can see the example object with more meaningful values:

This is how the Swagger definition looks:

This is certainly a good way to show example data. But when we refresh the page, the example objects still show the same values, as we hard-coded them.

Automatic Example Data Generation with AutoFixture

We were able to generate an example object by implementing IExampleProvider of the Swashbuckle.Examples library. What if we want to create an example object with automatically generated values? AutoFixture is a unit-testing library that generates fixture instances with random values. Why not use it to randomly generate our examples? Let’s have a look. This time, we create an abstract class to apply to both requests and responses.

The abstract class, ModelExample, internally uses IFixture from AutoFixture and creates a random instance of the type T, as sketched below together with request and response classes inheriting the abstract class:
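A sketch under those assumptions (older AutoFixture releases live in the Ploeh.AutoFixture namespace):

// using Ploeh.AutoFixture;  (or AutoFixture in v4+)
// Sketch only: a fresh random instance of T is created on every call, so each
// page refresh shows different example values
public abstract class ModelExample<T> : IExamplesProvider where T : class
{
    private readonly IFixture _fixture = new Fixture();

    public object GetExamples()
    {
        return _fixture.Create<T>();
    }
}

public class ValueRequestModelExample : ModelExample<ValueRequestModel> { }
public class ValueResponseModelExample : ModelExample<ValueResponseModel> { }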

And this is the action method of the Web API: SwaggerRequestModel defines the request example and SwaggerResponseExample defines the response example.

Reload the Swagger UI page several times.

Each time the page is refreshed, the example objects have different values. If we need to put some restrictions in place, like string length or value range, we can simply put other decorators such as StringLength or Range on those request/response models.


So far, we have walked through how the Swashbuckle library can generate example(s) objects with meaningful values rather than default ones. There are many other cases where we can customise the Swashbuckle library to enrich a Swagger document. I’ll introduce some other cases later on.

Swashbuckle Pro Tips for ASP.NET Web API – Content Types

Open API 2.0 (AKA Swagger) is a de-facto standard for documenting Web APIs. For ASP.NET Web API applications, Swashbuckle helps developers build the Swagger definition a lot more easily. As Swashbuckle hasn’t fully implemented the Swagger specification, we need to develop some extensions using a few interfaces provided by Swashbuckle. In this post we’re going to talk about a couple of extensions that make a Swagger definition more complete.

The sample codes used in this post can be found here.

Acknowledgement

The sample application uses the following spec:

Defining consumes and produces in Operation Object

In Swagger, an HTTP verb like GET, POST, PUT, PATCH or DELETE is referred to as an Operation Object. Each Operation Object can define which content types are to be requested (consumes) and which content types are to be returned (produces). Therefore, with Swashbuckle, the Swagger document page comes out like:

In other words, Swashbuckle assumes those five content types as the default for requests – application/json, text/json, application/xml, text/xml, and application/x-www-form-urlencoded – and those four content types as the default for responses – application/json, text/json, application/xml and text/xml. Here’s a part of the Swagger definition automatically generated.

If we want to apply particular content types globally, that can be done within the global configuration. Here’s the sample OWIN configuration:
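The original isn’t reproduced here; a minimal sketch, assuming an OWIN Startup class:

using System.Net.Http.Formatting;
using System.Web.Http;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        var config = new HttpConfiguration();

        // Sketch only: keep application/json as the sole formatter, globally
        config.Formatters.Clear();
        config.Formatters.Add(new JsonMediaTypeFormatter());

        // ... routes, Swashbuckle registration, etc.
        app.UseWebApi(config);
    }
}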

It clears everything and adds only one content type – application/json. By doing so, we can globally set one content type for both requests and responses. If we want to have more control over each endpoint, the IOperationFilter interface of Swashbuckle gives us that flexibility.

Implementing SwaggerConsumesAttribute Decorator

First of all, we need to write a simple decorator, called SwaggerConsumesAttribute, which handles the consumes field of the Operation Object.

Through this decorator, we simply pass in any number of content types. How is it applied? Both the decorator and an example application are sketched below.
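A minimal sketch:

// Sketch only: names follow the post; the implementation details are assumed
[AttributeUsage(AttributeTargets.Method)]
public class SwaggerConsumesAttribute : Attribute
{
    public SwaggerConsumesAttribute(params string[] contentTypes)
    {
        this.ContentTypes = contentTypes;
    }

    public string[] ContentTypes { get; private set; }
}

// applied to an action, for example:
[SwaggerConsumes("application/json", "text/json")]
public IHttpActionResult Post(ValueRequestModel request)
{
    // action body elided in this sketch
    return Ok();
}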

Let’s move on.

Implementing Consumes Filter

We now write the Consumes filter class by implementing the IOperationFilter interface, as sketched below.
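A minimal sketch of the filter:

using System.Linq;
using System.Web.Http.Description;
using Swashbuckle.Swagger;

// Sketch only: copy the decorator's content types into operation.consumes
public class Consumes : IOperationFilter
{
    public void Apply(Operation operation, SchemaRegistry schemaRegistry, ApiDescription apiDescription)
    {
        var attribute = apiDescription.ActionDescriptor
                                      .GetCustomAttributes<SwaggerConsumesAttribute>()
                                      .FirstOrDefault();
        if (attribute == null)
        {
            return;
        }

        operation.consumes = attribute.ContentTypes.Distinct().ToList();
    }
}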

What it does is:

  • check the content type values passed from the SwaggerConsumesAttribute decorator, and
  • add all content types passed from the decorator to operation.consumes.

This needs to be added to the Swashbuckle configuration like:
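And the registration, sketched:

GlobalConfiguration.Configuration
    .EnableSwagger(c =>
    {
        c.SingleApiVersion("v1", "Sample API");
        c.OperationFilter<Consumes>();
    })
    .EnableSwaggerUi();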

Now run the Web API again and we’ll see the result like:

Implementing SwaggerProducesAttribute and Produces

Both the SwaggerProducesAttribute decorator and the Produces filter can be written in the same way as we did above for consumes.

The SwaggerProducesAttribute decorator can be applied to:

The picture below is the Swagger definition based on the extensions above:

We’ve so far had a look at how to extend Swashbuckle to fill in the missing parts of a Swagger definition. In the next post, we’ll walk through another extension for the Swagger definition.

Tools for Testing Webhooks

In a microservices environment, APIs are the main communication method between services. Most of the time each API merely sends a request and waits for its response, but there are definitely cases that need a longer period to complete a request, and some that even stop processing at some stage until they get a signal to continue. These are not uncommon in the API world. However, implementing those features can be a little bit tricky because most APIs use the HTTP protocol, and web-based applications have timeout constraints. The timeout limitation is pretty critical to achieving those long-running processes within a given period of time. Fortunately, there are already solutions (or patterns) to overcome these sorts of challenges – asynchronous patterns and/or webhook patterns. As the two are quite similar to each other, they are used interchangeably or together. While those approaches sort out the long-running process issues, they are hard to test or debug without some external tools. In this post, we are going to have a look at a couple of tools that are useful for debugging and testing when we develop REST API applications, especially webhook APIs.

Disclaimer: those services introduced in this post do not have any sort of relationship to us, Kloud.

RequestBin

RequestBin is an online tool for sneaking a look at webhook requests. It has a very simple user interface, so developers can hop into the service straight away. If we want to check webhook request data, follow the steps below:

  1. Click the Create a RequestBin button at the first page.

  2. A temporary bin URL is generated. Grab it.

  3. Use this bin URL as a webhook callback URL. Here’s the screenshot using Postman sending a request to the bin URL.

  4. The request data sent from Postman is captured like below. If the bin URL is https://requestb.in/1fbhrlf1, just append the querystring parameter ?inspect so that we can inspect the request data. Can we find the same request data that was sent?

This service brings a few merits. Firstly, we can simply use the bin URL when we register a webhook; the webhook request body is captured at the bin URL. Secondly, we don’t have to develop a test application to analyse the webhook payload, because the data has already been captured by the bin URL. Thirdly, we save time and resources on that test application development. And finally, it’s free!

Of course, there are demerits. Each bin URL is only available for a limited period of time. According to the service website, a bin URL is only valid for the first 48 hours after it’s generated; in fact, the lifetime varies from 5 minutes to 48 hours. If we close the web browser and open it again, the bin URL is no longer valid. Therefore, this is only for testing webhooks that have a short lifecycle. In addition, RequestBin isn’t a good fit when we try local debugging. There are cases where webhooks receive data and process it; RequestBin only shows how the webhook request data is captured, nothing further.

If RequestBin is not our solution, what else can we use? How can we debug or test the webhook functions?

ngrok

According to this post, tunneling services take traffic from the Internet to a local development environment. ngrok is one of those tunneling services. It has both a free version and a paid one, but the free one is enough for our purpose.

As it supports multiple platforms, download the binary suitable for your OS. For Windows, there is only one binary, ngrok.exe. Copy it to the C:\ngrok folder (or wherever preferred) and enter the command below:

ngrok http 7071 -host-header=localhost
  • http: This specifies the protocol of the incoming traffic to watch.
  • 7071: This is the port number; the default is 80. If we debug Azure Functions, set this port number to 7071.
  • -host-header=localhost: Without this option, ngrok only captures the traffic; it can’t reach the local debugging environment.

After typing the command above, we can see the screen like below:

As we can see, external traffic hitting the endpoint http://b46c7c81.ngrok.io can reach our locally running Azure Functions app. Let’s run Azure Functions in local debugging mode.

Azure Functions is now up and running. Run Postman and send a request to the endpoint that ngrok has generated.

We can now confirm that the code stops at the break point within Visual Studio.

ngrok also provides a replay feature. If we browse to http://localhost:4040, we can see how ngrok has captured all requests since it started.

The free version of ngrok changes the endpoint every time it runs, and the run history is also blown away, but this doesn’t matter for our purpose.

So far, we have briefly looked at a couple of tools for sneaking a look at webhook traffic for debugging purposes. If we utilise these tools well, we can much more easily check how API request calls are made. They are definitely worth noting.

Synchronizing Exchange Online/Office 365 User Profile Photos with FIM/MIM

Introduction

This is Part Two in the two-part blog post on managing users’ profile photos with Microsoft FIM/MIM. Part one, here, detailed managing users’ Azure AD/Active Directory profile photos. This post delves deeper into photos, specifically around Office 365, and the reasons why you may want to manage them via FIM/MIM.

Background

User profile photos should be simple to manage. But in a rapidly moving hybrid cloud world it can be a lot more complex than it needs to be. The best summary I’ve found of this evolving, moving target is from Paul Ryan here.

Following Paul’s sound advice, we too are advising our customers to let users manage their profile photo (within corporate guidelines) via Exchange Online. However, as described in this article, photos managed in on-premises Active Directory are synchronized to Azure AD, and on to other Office 365 services, only once. And of course we want them to be consistent across AD DS, Azure AD, Exchange Online and all other Office 365 services.

This post details synchronizing user profile photos from Exchange Online to MIM for further synchronization to other systems. The approach uses a combination of Azure GraphAPI and Exchange Remote PowerShell to manage Exchange Online User Profile Photos.

The following graphic depicts what the end goal is;

Current State

  • Users historically had a photo in Active Directory. DirSync/ADSync/AzureADConnect then synchronized that to Azure AD (and once only into Office 365).
  • Users update their photo in Office365 (via Exchange Online and Outlook Web Access)
    • the photo is synchronized across Office365 Services

Desired State

  • An extension of the Current State is the requirement to be able to take the image uploaded by users in Exchange Online, and synchronize it back to the OnPremise AD, and any other relevant services that leverage a profile photo
  • Have AzureADConnect keep AzureAD consistent with the new photo obtained from Office365 that is synchronized to the OnPrem Active Directory
  • Sync the current photo to the MIM Portal

Synchronizing Office365 Profile Photos

Whilst Part-one dealt with the AzureAD side of profile photos as an extension to an existing AzureAD PowerShell Management Agent for FIM/MIM, I’ve separated out the Office365 side to streamline it and make it as efficient as possible. More on that later. As such I’ve created a new PowerShell Management Agent specifically for Office365 User Profile Photos.

I’m storing the Exchange Online photo in the MIM Metaverse as a binary object, just as I did for the AzureAD photo (but in a different attribute). I’m also storing a checksum of the photo (as I did for the AzureAD photo, again in a different attribute) to make it easier to compare what is in Azure AD and Exchange Online, and thereby determine whether changes have been made (e.g. a user updated their profile photo).

Photo Checksum

For generating the hash of the profile photos I’m using Get-Hash from the PowerShell Community Extensions. Whilst PowerShell has Get-FileHash, I don’t want to write the profile photos out to disk and read them back in just to get the checksum; that slows the process down by 25%. You can get the checksum using a number of different methods and algorithms. Just be consistent and use the same method across both profile photos and you’ll be comparing apples with apples, and the comparison logic will work.
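As a sketch of the idea using .NET’s MD5 directly instead (the post itself uses PSCX’s Get-Hash; the variable names here are illustrative), the photo bytes never touch disk:

# Sketch only: hash the in-memory photo bytes; any algorithm works if used consistently
$md5 = [System.Security.Cryptography.MD5]::Create()
$hashBytes = $md5.ComputeHash([byte[]]$photoBytes)
$photoChecksum = [System.BitConverter]::ToString($hashBytes).Replace('-','')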

Some notes on Photos and Exchange Online (and MFA)

This is where things went off on a number of tangents. Initially I tried accessing the photos using Exchange Online Remote PowerShell.

CAVEAT 1: If your Office 365 tenant is enabled for Multi-Factor Authentication (which it should be), you will need to get the Exchange Online Remote PowerShell module as detailed here. Chances are you won’t have full Office 365 admin access though, so as long as the account you will be using is in the Recipient Management role, you should be able to go to the Exchange Control Panel using a URL like https://outlook.office365.com/ecp/?realm=<tenantname>&wa=wsignin1.0, where tenantname is something like customer.com.au. From the Hybrid menu in the right-hand pane you will then be able to download the Microsoft.Online.CSE.PSModule.Client.application. I had to use Internet Explorer to download the file and get it installed successfully. Once installed, I used a few lines from this script here to load the function and start my RPS session from within PowerShell ISE during solution development.

CAVEAT 2: The EXO RPS MFA PS Function doesn’t allow you to pass it your account password. You can pass it the identity you want to use, but not the password. That makes scheduled process automation with it impossible.

CAVEAT 3: The RPS session exposes the Get-UserPhoto cmdlet, which is great. But the RPS session leverages the Graph API, and the RPS PS module doesn’t refresh its tokens, so if the import takes longer than 60 minutes then using this method you’re a bit stuffed.

CAVEAT 4: Using the Get-UserPhoto cmdlet detailed above, the syncing of photos is slow – as in, I was only getting ~4 profile photos per minute. This also goes back to the token refresh issue: for pretty much any environment of the size I deal with, this is too slow and will time out.

CAVEAT 5: You can whitelist the IP address (or subnet) of your host so MFA is not required, using contextual IP address whitelisting. At that point there isn’t really a need to use the MFA-enabled PREVIEW EXO RPS function anyway. That said, I still needed to whitelist my MIM Sync server(s) from MFA to allow integration with the Graph API. I configured just the single host. The whitelist takes CIDR format, so a single host looks like /32 (e.g. 11.2.33.4/32).

Performance Considerations

As I mentioned above,

  • using the Get-UserPhoto cmdlet was slow – ~4 photos per minute
  • using the Graph API into Exchange Online, looking at each user, determining if they had a photo and then downloading it, was also slow – slow because at this customer only ~50% of the users have a photo on their mailbox. As such I was only able to retrieve ~145 photos in 25 minutes. *Note: all timings listed above were taken during development while actually outputting the images to disk to verify functionality.

Implemented Solution

After all my trial and error on this, here is my final approach and working solution;

  1. Use Exchange Online Remote PowerShell (the non-MFA version) to query and return a collection of all mailboxes with an image. *Note: add an exception for your MIM Sync host to the whitelisted hosts for MFA (if your Office 365 tenant is enabled for MFA) so the process can be automated
  2. Use the Graph API to obtain those photos, as sketched after this list
    • with this I was able to retrieve ~1100 profile photos in ~17* minutes (after ~2 minutes to query and get the list of mailboxes with a profile photo)
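A hedged sketch of step 2 only (the post doesn’t reproduce its script at this point; the Microsoft Graph photo endpoint shown is one option, and all variable names are illustrative):

# Sketch only: fetch one mailbox's photo with an already-acquired bearer token
$photoUri = "https://graph.microsoft.com/v1.0/users/$($mailbox.UserPrincipalName)/photo/`$value"
$response = Invoke-WebRequest -Uri $photoUri -UseBasicParsing `
    -Headers @{ Authorization = "Bearer $accesstoken" }
[byte[]]$exoPhoto = $response.Content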

Pre-requisites

There’s a lot of info above, so let me summarize the prerequisites;

  • The Granfeldt PowerShell MA
  • Whitelist your FIM/MIM Sync Server from MFA (if your Office 365 environment is enabled for MFA)
  • Add the account you will run the MA as, that will in turn connect to EXO via RPS to the Recipient Management Role
  • Create a WebApp for the PS MA to use to access users Profile Photos via the Graph API (fastest method)
  • Powershell Community Extensions to generate the image checksum

Creating the WebApp to access Office365 User Profile Photos

Go to your Azure Portal and select the Azure Active Directory blade from the resource menu bar on the left. Then select App Registrations from the Manage section of the Azure Active Directory menu, and finally, from the top of the main pane, select “New Application Registration”.

Give it a name and select Web app/API as the type of app. Make the sign-in URL https://localhost and then select Create.

Record the ApplicationID that you see in the Registered App Essentials window. You’ll need this soon.

Now select All Settings => Required Permissions. Select Read all users basic profiles in addition to Sign in and read user profile. Select Save.

Under Required Permissions select Add and then select 1 Select an API, and select Office 365 Exchange Online then click Select.

Choose 2 Select Permissions and then select Read user profiles and Read all users’ basic profiles. Click Select.

Select Grant Permissions

From Settings select Keys, give your key a Description, choose a key lifetime and select Save. RECORD the key value. You’ll need this along with the WebApp ApplicationID/ClientID for the Import.ps1 script.

Using the information from your newly registered WebApp, we need to perform the first authentication (and authorization of the WebApp) to the Graph API. Take your ApplicationID, Key (Client Secret) and the account you will use on the Management Agent (the one you have assigned the Recipient Management role in Exchange Online), and run the script detailed in this post here. It will authenticate you to your new WebApp via the Graph API, after asking you to provide the account you will use on the MA and to authorize the permissions you selected when registering the app. It will also create a refresh.token file, which we will give to the MA to automate our connection. The authorization dialog looks like this.

Creating the Management Agent

Now we can create our Management Agent using the Granfeldt PowerShell Management Agent. If you haven’t created one before, check out a post like this one, which further down shows the creation of a Granfeldt PSMA. Don’t forget to provide blank export.ps1 and password.ps1 files in the directory where you place the PSMA scripts.

PowerShell Management Agent Schema.ps1

PowerShell Management Agent Import.ps1

As detailed above the PSMA will leverage the WebApp to read users Exchange Profile Photos via the Graph API. The Import script also leverages Remote Powershell into Exchange Online (for reasons also detailed above). The account you run the Management Agent as will need to be added to the Recipient Management Role Group in order to use Remote PowerShell into Exchange Online and get the information required.

Take the Import.ps1 script below and update;

  • Update lines 11, 24 and 42 for the path to where you have put your PSMA. Mine is under the Extensions directory in a directory named EXOPhotos.
  • Copy the refresh.token generated when authenticating and authorizing the WebApp earlier into the directory you specified in line 42 above.
  • Create a Debug directory under the directory you specified in lines 11, 24 and 42 above so you can see what the MA is doing as you implement and debug it the first few times.
  • I’ve written the Import to use Paged Imports, so make sure you tick the Paged Imports checkbox in the configuration of the MA.
  • Update lines 79 and 80 with the ApplicationID and Client Secret that you recorded when creating your WebApp.

Running the Exchange User Profile Photos MA

Now that you have created the MA, you should select the EXOUser ObjectClass and the attributes defined in the schema. You should also create the EXOPhoto (as Binary) and EXOPhotoChecksum (as String) attributes in the Metaverse on the person ObjectType (assuming you are using the built-in person ObjectType).

Configure your flow rules to flow the EXOPhoto and EXOPhotoChecksum on the MA to their respective attributes in the MV.

Create a Stage Only run profile and run it. If you have done everything correctly you will see photos come into the Connector Space.

Looking at the Connector Space, I can see EXOPhoto and EXOPhotoChecksum have been imported.

After performing a Synchronization to get the data from the Connector Space into the Metaverse it is time to test the image that lands in the Metaverse. That is quick and easy via PowerShell and the Lithnet MIIS Automation PowerShell Module.

$me = Get-MVObject -ObjectType person -Attribute accountName -Value "drobinson"
$me.Attributes.EXOPhoto.Values.ValueBinary
[System.IO.File]::WriteAllBytes("c:\temp\myOutlookphoto.jpg", $me.Attributes.EXOPhoto.Values.ValueBinary)

The file is output to the directory with the filename specified.

Opening the file correctly reveals my profile photo.

Summary

In Part one we got the AzureAD/Active Directory photo. In this post we got the Office365 photo.

Now that we have the images from Office 365, we need to synchronize any photo updates to Active Directory (and, in turn, via AADConnect to Azure AD). Keep in mind the image size limits for Active Directory, and that we retrieved the largest photo available from Office 365 when synchronizing the photo onwards. There are a number of PowerShell modules for photo manipulation that will allow you to resize accordingly.

A quick start guide to leveraging the Azure Graph API with PowerShell and oAuth 2.0

Introduction

In September 2016 I wrote this post detailing integrating with the Azure Graph API via PowerShell and oAuth 2.0.

Since that point in time I’ve found myself doing considerably more via PowerShell and the Graph API using oAuth. I regularly find myself leveraging previous scripts to generate a new script for the initial connection. To the point that I decided to make this simpler and provide a nice clean starting point for new scripts.

This blog post details a simple script that generates a couple of PowerShell functions which can be the basis for integration with the Graph API via a WebApp, using oAuth 2.0.

Overview

This script will request the necessary information required to call into the Graph API and establish a session; specifically, the Application (Client) ID, Client Secret and Reply URL from the WebApp you register below.

Armed with this information, the shell of a PowerShell script will be created that will;

  • Authenticate a user to the Graph API via PowerShell and oAuth 2.0
  • Request authorization for the WebApp to access the scope provided (if admin-approval scope is requested and the AuthN is performed by a non-admin, an authorization failure message will appear stating an administrator must authorize)
  • Obtain an Authorization Code, which is exchanged for the Bearer Token and Refresh Token
    • The Bearer Token can be used to make Graph API calls for up to 1 hour.
    • The Refresh Token will allow you to request a new token, so your script can be used again to interact with the Graph API without going through the authentication process again.

The following graphic shows this flow.
active-directory-oauth-code-flow-native-app
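At the heart of the generated script is the oAuth 2.0 authorization code exchange. As a hedged sketch (variable names are illustrative, and the v2.0 token endpoint shown is one option):

# Sketch only: exchange the authorization code for bearer and refresh tokens
$body = @{
    client_id     = $clientId
    client_secret = $clientSecret
    grant_type    = 'authorization_code'
    code          = $authCode
    redirect_uri  = $redirectUri
    scope         = $scope
}
$tokens = Invoke-RestMethod -Method Post -Body $body `
    -Uri 'https://login.microsoftonline.com/common/oauth2/v2.0/token'
$accesstoken  = $tokens.access_token
$refreshtoken = $tokens.refresh_token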

Create/Register your Application

Go to the Application Registration Portal https://apps.dev.microsoft.com/ and sign in. This is the new portal for registering your apps. It will show any previous apps you registered within AzureAD and any of the new “Converged Apps” you’ve created via the new Application Registration Portal.

Select Add an app from the Converged applications list.

New Converged App.PNG

Give your app a name and select Create

AppReg2.PNG

Record the Application ID (previously known as the Client ID) and select Generate New Password.

AppReg3.PNG

You will be provided your Client Secret. Record this now as it is the only time you will see it. Select Ok.

AppReg4.PNG

By default you will get User.Read permissions on the API. That is enough for this sample. Depending on what you will do with the API, you will probably need to come back and change the permissions, or do it dynamically via the values you supply in the $resource setting in your API calls.

AppReg5

Select Platforms, select Web and add a reply URL of https://localhost

AppReg6

Scroll to the bottom of the Registration window and select Save.

Generate your PowerShell Graph API oAuth Script

Copy the following script and put it into an Administrator PowerShell/PowerShell ISE session and run it.

It will ask you to choose a folder to output the resultant PowerShell script to. You can create a new folder through this dialog window if required.

OutputFolder.PNG

The script will prompt you for the Client/Application ID, Client Secret and the Reply URL you obtained when registering the Web App in the steps above.

ScriptPrompts.PNG

The script will be written out to the folder you chose in the first step and it will be executed. It will prompt you to authenticate. Provide the credentials you used when you created the App in the Application Registration Portal.

OfficeAuthN1.png

You will be prompted to Authorize the WebApp. Select Accept

AuthNtoAuthZ.PNG

If you’ve executed the previous steps correctly you’ll receive an AuthCode in your PowerShell output window

AuthCode.PNG

You’ll then see the output for a sample query for your user account and below that the successful call for a refresh of the tokens.

UserQueryOutput.png

Summary

In the folder you chose you will find a PowerShell script with the name Connect-to-Microsoft-Graph.ps1. You will also find a file named refresh.token. You can use the script to authenticate with your new app, but more simply, use the Get-NewTokens function to refresh your tokens and then write your own API queries to your app using the tokens. Unless you change the scope you don’t need to run Get-AzureAuthN again; just use Get-NewTokens before your API calls.

e.g.

Get-NewTokens
$myManager = Invoke-RestMethod -Method Get -Uri "https://graph.microsoft.com/v1.0/me/manager" `
    -Headers @{ Authorization = "Bearer $accesstoken"; 'Content-Type' = 'application/json' }
$myManager

Change the scope of your app to get more information. If you add a scope that requires admin consent (and you’re not an admin), when prompted to authenticate you will need to get an admin to authenticate and authorize the scope. Because you’ve changed the scope, you will need to run the Get-AzureAuthN function again after updating $scope (as per below) and the dependent $scopeEncoded.

As the screenshot below shows, I added the Mail.Read permission. I changed the $scope in the script so that it reflected the changes, e.g.

#Scope
$scope = "User.Read Mail.Read"
$scopeEncoded = [System.Web.HttpUtility]::UrlEncode($scope)

MailRead.png

When running the script again (because of the change of scope) you will be prompted to confirm the change of access.

Scope Change.PNG

You can then query your inbox, e.g.

$myMail = Invoke-RestMethod -Method Get -Uri "https://graph.microsoft.com/v1.0/me/messages" `
    -Headers @{ Authorization = "Bearer $accesstoken"; 'Content-Type' = 'application/json' }
$myMail

And there are the mail messages from your inbox.

MailAPI.png

I hope that makes getting started with the oAuth2 Graph API via PowerShell a lot simpler than it was for me initially, with the differing endpoints, the evolving API and the associated documentation somewhere in between.

How to use a Powershell Azure Function to Tweet IoT environment data

Overview

This blog post details how to use a Powershell Azure Function App to get information from a RestAPI and send a social media update.

The data can come from anywhere, and in the case of this example I’m getting the data from WioLink IoT Sensors. This builds upon my previous post here that details using Powershell to get environmental information and put it in Power BI.  Essentially the difference in this post is outputting the manipulated data to social media (Twitter) whilst still using a TimerTrigger Powershell Azure Function App to perform the work and leverage the “serverless” Azure Functions model.

Prerequisites

The following are prerequisites for this solution;

Create a folder on your local machine for the PowerShell module, then save the module to your local machine using the PowerShell command 'Save-Module', as per below.

Save-Module -Name InvokeTwitterAPIs -Path c:\temp\twitter

Create a Function App Plan

If you don’t already have a Function App Plan create one by searching for Function App in the Azure Management Portal. Give it a Name, Select Consumption so you only pay for what you use, and select an appropriate location and Storage Account.

Create a Twitter App

Head over to http://dev.twitter.com and create a new Twitter App so you can interact with Twitter using their API. Give your Twitter App a name. Don’t worry about the URL too much or the need for the Callback URL. Select Create your Twitter Application.

Select the Keys and Access Tokens tab and take a note of the API Key and the API Secret. Select the Create my access token button.

Take a note of your Access Token and Access Token Secret. We’ll need these to interact with the Twitter API.

Create a Timer Trigger Azure Function App

Create a new TimerTrigger Azure PowerShell Function. For my app I’m changing from the default of a 5-minute schedule to hourly, on the top of the hour. I did this after I’d already created the Function App, as shown below. To update the schedule I edited the function.json file and changed the schedule to "schedule": "0 0 * * * *"

Give your Function App a name and select Create.

Configure Azure Function App Application Settings

In your Azure Function App select “Configure app settings”. Create new App Settings for your Twitter Account, Access Token, Access Token Secret, API Key and API Secret, using the values from when you created your Twitter App earlier.

Deployment Credentials

If you haven’t already configured Deployment Credentials for your Azure Function plan, do that now and take note of them so you can upload the Twitter PowerShell module to your app in the next step.

Take note of your Deployment Username and FTP Hostname.

Upload the Twitter Powershell Module to the Azure Function App

Create a sub-directory under your Function App named bin and upload the Twitter Powershell Module using a FTP Client. I’m using WinSCP.

From the Application Settings option, start Kudu.

Traverse the folder structure to get the path to the Twitter PowerShell module and note it.

Validating our Function App Environment

Update the code, replacing the sample from the creation of the TimerTrigger Azure Function, as sketched below, to import the Twitter PowerShell module. Include the get-help line for the module so we can see in the logs that the module was imported and which cmdlets it contains. Select Save and Run.
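A sketch of that import code (the module path is illustrative; use the path you noted in Kudu):

# Sketch only: import the module from the Function App's bin directory and
# list its cmdlets so they show up in the log
Import-Module "D:\home\site\wwwroot\<YourFunctionName>\bin\InvokeTwitterAPIs\InvokeTwitterAPIs.psm1"
Get-Command -Module InvokeTwitterAPIs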

Below is my output. I can see the output from the Twitter Module.

Function Application Script

Below is my sample script. It has no error handling etc., so it isn’t production ready, but it gives a working example of getting data in from an API (in this case IoT sensors) and sending a tweet out to Twitter.
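The script itself isn’t reproduced here, so the following is a hedged sketch only: the App Setting names and the sensor response property are illustrative, and the Invoke-TwitterRestMethod call is an assumption about the InvokeTwitterAPIs module’s surface – confirm the actual cmdlets and parameters with Get-Command as above.

# Sketch only: get a reading from the sensor RestAPI (URL held in an App Setting)
$reading = Invoke-RestMethod -Method Get -Uri $env:SensorTempURL
$status = "Current temperature at my place is $($reading.temperature)C"

# OAuth settings sourced from the Function App's Application Settings
$oauth = @{
    ApiKey            = $env:TwitterAPIKey
    ApiSecret         = $env:TwitterAPISecret
    AccessToken       = $env:TwitterAccessToken
    AccessTokenSecret = $env:TwitterAccessTokenSecret
}

# assumed cmdlet/parameter names: verify against Get-Command -Module InvokeTwitterAPIs
Invoke-TwitterRestMethod -ResourceURL 'https://api.twitter.com/1.1/statuses/update.json' `
    -RestVerb 'POST' -Parameters @{ status = $status } -OAuthSettings $oauth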

Viewing the Tweet

And here is the successful tweet.

Summary

This shows how easy it is to utilise PowerShell and Azure Function Apps to get data and transform it for use in other ways – in this example, a social media platform. The input could easily be business data from an API and the output a corporate social platform such as Yammer.

Follow Darren on Twitter @darrenjrobinson

How to use a Powershell Azure Function App to get RestAPI IoT data into Power BI for Visualization

Overview

This blog post details using a Powershell Azure Function App to get IoT data from a RestAPI and update a table in Power BI with that data for visualization.

The data can come from anywhere, however in the case of this post I’m getting the data from WioLink IoT Sensors. This builds upon my previous post here that details using Powershell to get environmental information and put it in Power BI.  Essentially the major change is to use a TimerTrigger Azure Function to perform the work and leverage the “serverless” Azure Functions model. No need for a reporting server or messing around with Windows scheduled tasks.

Prerequisites

The following are the prerequisites for this solution;

  • The Power BI Powershell Module
  • Register an application for RestAPI Access to Power BI
  • A Power BI Dataset ready for the data to go into
  • AzureADPreview Powershell Module

Create a folder on your local machine for the PowerShell modules, then save the modules to your local machine using the PowerShell command 'Save-Module', as per below.

Save-Module -Name PowerBIPS -Path C:\temp\PowerBI
Save-Module -Name AzureADPreview -Path c:\temp\AzureAD 

Create a Function App Plan

If you don’t already have a Function App Plan create one by searching for Function App in the Azure Management Portal. Give it a Name, Select Consumption Plan for the Hosting Plan so you only pay for what you use, and select an appropriate location and Storage Account.

Register a Power BI Application

Register a Power BI App if you haven’t already using the link and instructions in the prerequisites. Take a note of the ClientID. You’ll need this in the next step.

Configure Azure Function App Application Settings

In this example I’m using Azure Functions Application Settings for the Azure AD account name, password and the Power BI ClientID. In your Azure Function App select “Configure app settings”. Create new App Settings for your UserID and Password for Azure (to access Power BI) and your Power BI application Client ID. Select Save.

Not shown here, I’ve also placed the URLs for the RestAPIs that I’m calling to get the IoT environment data in Application Settings variables.

Create a Timer Trigger Azure Function App

Create a new TimerTrigger Azure Powershell Function App. The default of a 5 min schedule should be perfect. Give it a name and select Create.

Upload the Powershell Modules to the Azure Function App

Now that we have created the base of our Function App, we’re going to need to upload the PowerShell modules we’ll be using, as detailed in the prerequisites. In order to upload them to your Azure Function App, go to App Service Settings => Deployment Credentials and set a username and password as shown below. Select Save.

Take note of your Deployment Username and FTP Hostname.

Create a sub-directory under your Function App named bin and upload the Power BI Powershell Module using a FTP Client. I’m using WinSCP.

To make sure you get the correct path to the PowerShell module for Application Settings, start Kudu.

Traverse the folder structure to get the path to the Power BI Powershell Module and note the path and the name of the psm1 file.

Now upload the Azure AD Preview Powershell Module in the same way as you did the Power BI Powershell Module.

Again using Kudu, validate the path to the Azure AD Preview PowerShell module. The file you are looking for is the Microsoft.IdentityModel.Clients.ActiveDirectory.dll file. After uploading, my file is located at "D:\home\site\wwwroot\MyAzureFunction\bin\AzureADPreview\2.0.0.33\Microsoft.IdentityModel.Clients.ActiveDirectory.dll".

This library is used by the Power BI Powershell Module.

Validating our Function App Environment

Update the code, replacing the sample from the creation of the TimerTrigger Azure Function, as shown below, to import the Power BI PowerShell module. Include the get-help line for the module so we can see in the logs that the module was imported and which cmdlets it contains. Select Save and Run.

Below is my output. I can see the output from the Power BI module's get-help command, confirming the module was successfully loaded.

Function Application Script

Below is my sample script. It has no error handling etc., so it isn’t production ready, but it gives a working example of getting data in from an API (in this case IoT sensors) and putting the data directly into Power BI.
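Again, the script itself isn’t reproduced here; the following is a hedged sketch using cmdlets from the PowerBIPS module (the dataset/table names, App Setting names and sensor response property are all illustrative – verify the cmdlet parameters with Get-Help):

# Sketch only: authenticate to Power BI using the App Settings created earlier
$password = ConvertTo-SecureString $env:AzurePassword -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential ($env:AzureAccount, $password)
$authToken = Get-PBIAuthToken -ClientId $env:PowerBIClientId -Credential $cred

# get a reading from the sensor RestAPI and shape it as a row
$reading = Invoke-RestMethod -Method Get -Uri $env:SensorTempURL
$row = @{ Sensor = 'Inside'; Temperature = $reading.temperature; DateTime = Get-Date }

# append the row to an existing dataset's table
$dataSet = Get-PBIDataSet -authToken $authToken -name 'IoTSensors'
Add-PBITableRows -authToken $authToken -dataSetId $dataSet.id -tableName 'Readings' -rows @($row)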

Viewing the data in Power BI

In Power BI it is then quick and easy to select our Inside and Outside temperature readings referenced against time. This timescale is overnight so both sensors are reading quite close to each other.

Summary

This shows how easy it is to utilise PowerShell and Azure Function Apps to get data and transform it for use in other ways – in this example, a visualization of IoT data in Power BI. The input could easily be business data from an API and the output a real-time reporting dashboard.

Follow Darren on Twitter @darrenjrobinson