
How to configure a Graphical PowerShell Dev/Admin/Support User Interface for Azure/Office365/Microsoft Identity Manager

During the development of an identity management solution I find myself with multiple PowerShell/RDP sessions connected to multiple environments, using different credentials, often just to obtain trivial data. It is also easy to trip yourself up when juggling remote PowerShell sessions to differing environments. If only there was a simple UI that could front-end a set of PowerShell modules and make those simple queries quick and painless, and likewise allow support staff to execute a canned set of queries without being granted elevated permissions.

I figured someone would have already solved this problem, and after some searching with the right keywords I found the powershell-command-executor-ui from bitsofinfo. He had already solved many of the issues with building a UI front-end for PowerShell with the powershell-command-executor and the stateful-process-command-proxy. That solution provided the framework for what I was thinking: the ability to provide a UI for PowerShell using PowerShell modules, including remote PowerShell, was exactly what I was after. And it is built on NodeJS and AngularJS, so it is simple enough to customise.

Introduction

In this blog post I'll detail how I've leveraged the projects listed above for integration with Azure Active Directory, Office 365 and Microsoft Identity Manager.

Initially I had a vision of serving up the UI from an Azure WebApp. NodeJS on Azure WebApps is supported, however with all the solution dependencies I just couldn't get it working.

My fallback was to look at serving up the UI from a Windows Server 2016 Nano Server. I learnt from my efforts that a number of the PowerShell modules I was looking to provide a UI for have .NET Framework dependencies. Nano Server does not have full .NET Framework support; Microsoft state that adding it would mean the server would no longer be Nano.

For now I’ve deployed an Azure Windows Server 2016 Server secured by an Azure NSG to only allow my machine to access it. More on security later.

Overview

Simply put, the details in GitHub for the powershell-command-executor provide the architecture and integration. What I will detail is the modifications I've made to utilize the more recent AzureADPreview PowerShell Module over the MSOL PowerShell Module. I also updated the dependencies of the solution to the latest versions, hooked it into Microsoft Identity Manager, and made a few changes to allow different credentials to be used for Azure and Microsoft Identity Manager.

Getting Started

I highly recommend you start with your implementation on a local development workstation/development virtual machine. When you have a working version you’re happy with you can then look at other ways of presenting and securing it.

NodeJS

NodeJS is the webserver for this solution. Download NodeJS for your Windows host here. I’m using the 64-bit version, but have also implemented the solution on 32-bit. Install NodeJS on your local development workstation/development virtual machine.

You can accept all the defaults.

Following the installation of NodeJS, download the powershell-command-executor-ui from GitHub. Select Clone or Download, then Download ZIP, and save it to your machine.

Right-click the download when it has finished and select Extract All. Select Browse, create a folder at the root of C:\ named nodejs, and extract powershell-command-executor-ui into it.

Locate the c:\nodejs\powershell-command-executor-ui-master\package.json file.

Using an editor such as Notepad++, update the package.json file so that it looks like the following. This will utilise the latest versions of the dependencies for the solution.

From an elevated (Administrator) command prompt in the c:\nodejs\powershell-command-executor-ui-master directory run "c:\program files\nodejs\npm" install. This will read the package.json file you edited and download the dependencies for the solution.

You can see in the screenshot below NodeJS has downloaded all the items in package.json including the powershell-command-executor and stateful-process-command-proxy.

When you now list the directories under C:\nodejs\powershell-command-executor-ui-master\node_modules you will see those packages and all their dependencies.

We can now test that we have a working PowerShell UI NodeJS website. From an elevated command prompt, whilst still in the c:\nodejs\powershell-command-executor-ui-master directory, run "c:\Program Files\nodejs\node.exe" bin\www

Open a browser on the same host and go to http://localhost:3000. You should see the default UI.

Configuration and Customization

Now it is time to configure and customize the PowerShell UI for our needs.

The files we are going to edit are:

  • C:\nodejs\powershell-command-executor-ui-master\routes\index.js
    • Update the paths to the encrypted credentials files used to connect to Azure and MIM. We'll create the encrypted credentials files shortly.
  • C:\nodejs\powershell-command-executor-ui-master\public\console.html
    • Update with your customisations for CSS etc.
  • C:\nodejs\powershell-command-executor-ui-master\node_modules\powershell-command-executor\O365Utils.js
    • Update the PowerShell modules to import
    • Update the commands to make available in the UI

We also need to get a couple of PowerShell modules installed on the host so they are available to the site. The two I'm using I've mentioned earlier. With WMF5 installed, we can simply install them from PowerShell as per the commands below.

Install-Module AzureADPreview
Install-Module LithnetRMA

In order to connect to our Microsoft Identity Manager Synchronization Server we are going to need to enable Remote PowerShell on it. This post I wrote here details all the setup tasks to make that work. Test that you can connect via Remote PowerShell to your MIM Sync Server before updating the scripts below.
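
If you want to quickly validate Remote PowerShell connectivity before wiring it into the UI, a test along the lines of the following will confirm a session can be established (the server name here is a placeholder for your MIM Sync Server):

# Placeholder server name - substitute your MIM Sync Server
$mimSyncServer = "mimsyncserver.mydomain.com"
# Your MIM Sync admin account (Domain\Username format)
$creds = Get-Credential
# Establish a remote session and run a trivial command to prove connectivity
$session = New-PSSession -ComputerName $mimSyncServer -Credential $creds
Invoke-Command -Session $session -ScriptBlock { $env:COMPUTERNAME }
Remove-PSSession $session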

Likewise for the Microsoft Identity Manager Service Server. Make sure after installing the LithnetRMA Powershell Module you can connect to the MIM Service using something similar to:

# Import LithnetRMA PS Module
import-module lithnetrma

# MIM AD User Admin
$username = "mimadmin@mim.mydomain.com"
# Password 
$password = "Secr3tSq1rr3l!" | convertto-securestring -AsPlainText -Force
# PS Creds
$credentials = New-Object System.Management.Automation.PSCredential $username,$password

# Connect to the FIM service instance
# Will require an inbound rule for TCP 5725 (or your MIM Service Server port) in your Resource Group Network Security Group config
Set-ResourceManagementClient -BaseAddress http://mymimportalserver:5725 -Credentials $credentials


\routes\index.js

This file details the encrypted credentials the site uses. You will need to generate the encrypted credentials for your environment. You can do this using the powershell-credentials-encryption-tools. Download that script to your workstation and unzip it. Open the credentialEncryptor.ps1 script using an Administrator PowerShell ISE session.
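
If you're curious what the tool does under the hood, conceptually it is similar to the sketch below (illustrative only; the real logic lives in credentialEncryptor.ps1, and the file names here are assumptions): generate a random AES key, encrypt the password with that key, and write out the secret.key and encrypted.credentials files.

# Illustrative sketch only - credentialEncryptor.ps1 is the actual implementation
$key = New-Object byte[] 32
[System.Security.Cryptography.RandomNumberGenerator]::Create().GetBytes($key)
[System.IO.File]::WriteAllBytes("$PWD\secret.key", $key)
# Prompt for the account name and password to encrypt
$cred = Get-Credential
$encrypted = $cred.Password | ConvertFrom-SecureString -Key $key
$cred.UserName, $encrypted | Out-File "$PWD\encrypted.credentials"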

I’ve changed the index.js to accept two sets of credentials. This is because your Azure Admin Credentials are going to be different from your MIM Administrator Credentials (both in name and password). The username for my Azure account looks something like myname@mycompany.com whereas for MIM it is Domainname\Username.

Provide an account name for your Azure environment and the associated password.

The tool will create the encrypted credential files.

Rename the encrypted.credentials file to whatever makes sense for your environment. I’ve renamed it creds1.encrypted.credentials.

Now we re-run the script to create another set of encrypted credentials. This time for Microsoft Identity Manager. Once created, rename the encrypted.credentials file to something that makes sense in your environment. I’ve renamed the second set to creds2.encrypted.credentials.

We now need to copy the following files to your UI Website C:\nodejs\powershell-command-executor-ui-master directory:

  • creds1.encrypted.credentials
  • creds2.encrypted.credentials
  • decryptUtil.ps1
  • secret.key

Navigate back to routes\index.js and open the file in an editor such as Notepad++.

Update the index.js file with the paths to your credentials files. We also need to add in the additional credentials file.

The changes to the file are the paths to the files we just copied above, along with the addition of the variable PATH_TO_ENCRYPTED_RPSCREDENTIALS_FILE for the second set of credentials used for Microsoft Identity Manager.

var PATH_TO_DECRYPT_UTILS_SCRIPT = "C:\\nodejs\\powershell-command-executor-ui-master\\decryptUtil.ps1";
var PATH_TO_ENCRYPTED_CREDENTIALS_FILE = "C:\\nodejs\\powershell-command-executor-ui-master\\creds1.encrypted.credentials";
var PATH_TO_ENCRYPTED_RPSCREDENTIALS_FILE = "C:\\nodejs\\powershell-command-executor-ui-master\\creds2.encrypted.credentials";
var PATH_TO_SECRET_KEY = "C:\\nodejs\\powershell-command-executor-ui-master\\secret.key";


Also update initCommands to pass through the additional credentials file:


initCommands: o365Utils.getO365PSInitCommands(
 PATH_TO_DECRYPT_UTILS_SCRIPT,
 PATH_TO_ENCRYPTED_CREDENTIALS_FILE,
 PATH_TO_ENCRYPTED_RPSCREDENTIALS_FILE,
 PATH_TO_SECRET_KEY,
 10000,30000,3600000),

Here is the full index.js file for reference.


public/console.html

The public/console.html file is for formatting and associated UI components. The key things I’ve updated are the Bootstrap and AngularJS versions. Those are contained in the top of the html document. A summary is below.

<script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.6.1/angular.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/angular.js/1.6.1/angular-resource.min.js"></script>
<script src="javascripts/ui-bootstrap-tpls-2.4.0.min.js"></script>
<script src="javascripts/console.js"></script>
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css">
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap-theme.min.css">

You will also need to download the updated Bootstrap UI (ui-bootstrap-tpls-2.4.0.min.js). I’m using v2.4.0 which you can download from here. Copy it to the javascripts directory.

I've also updated the table types, buttons, colours, header, logo etc. in the appropriate locations (CSS, tables, divs etc.). Here is my full file for reference. You'll need to update it for your colours, branding etc.

powershell-command-executor\O365Utils.js

Finally the O365Utils.js file. This contains the commands that will be displayed along with their options, as well as the connection information for your Microsoft Identity Manager environment.

You will need to change:

  • Line 52 for the address of your MIM Sync Server
  • Line 55 for the address of your MIM Service Server
  • Line 141 onwards for the commands, and their parameters, that you want to make available in the UI

Here is an example with a couple of AzureAD commands, a MIM Sync and a MIM Service command.
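
Entries in the O365Utils.js command registry follow the powershell-command-executor format: a PowerShell command template with named argument placeholders, metadata for each argument, and a return parser. A minimal sketch of what an AzureAD entry can look like (the entry name and parameters are illustrative assumptions) is:

'getAzureADUser': {
    'command': 'Get-AzureADUser -SearchString {{searchString}} | ConvertTo-Json',
    'arguments': {
        'searchString': {
            'quoted': true,   // wrap the substituted value in quotes
            'valued': true,   // this argument takes a value
            'default': ''
        }
    },
    'return': function(cmdResponse) {
        // the command pipes to ConvertTo-Json, so parse the raw output
        return JSON.parse(cmdResponse);
    }
},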

Show me my PowerShell UI Website

Now that we have everything configured, let's start the site and browse to it. If you haven't stopped the NodeJS site from earlier, go to the command window and press Ctrl+C a couple of times. Run "c:\Program Files\nodejs\node.exe" bin\www again from the C:\nodejs\powershell-command-executor-ui-master directory (unless you have restarted the host and now have NodeJS in your environment path).

In a browser on the same host go to http://localhost:3000 again and you should see the site as it is below.

Branding and styling come from console.html, the menu options from O365Utils.js, and when you select and execute a command you see results from the associated service. In the screenshot below a Get-AzureADUser command for the associated search string executed in milliseconds.


Summary

The powershell-command-executor-ui from bitsofinfo is a very extensible and powerful NodeJS website as a front-end to PowerShell.

With a few tweaks and updates, the look and feel can easily be changed, along with the addition of any PowerShell commands that you wish to have a UI for.

As it stands though, keep in mind you have a UI with hard-coded credentials that can execute whatever commands you expose.

Personally I am running one for my use only, hosted in Azure in its own Resource Group with an NSG allowing outgoing traffic to Azure and my MIM environment. Incoming traffic is only allowed from my personal management workstation's IP address. I also needed to allow port 3000 into the server on the NSG, as well as on the firewall on the host. I did the latter quickly using the command below.

# Enable the WebPort NodeJS is using on the firewall 
netsh advfirewall firewall add rule name="NodeJS WebPort 3000" dir=in action=allow protocol=TCP localport=3000
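
The matching inbound NSG rule can also be scripted; a sketch using the AzureRM PowerShell module (the NSG and Resource Group names are assumptions) looks like this:

# Sketch with assumed NSG/Resource Group names - adjust for your environment
Get-AzureRmNetworkSecurityGroup -Name "PSUI-NSG" -ResourceGroupName "PSUI-RG" |
    Add-AzureRmNetworkSecurityRuleConfig -Name "Allow-NodeJS-3000" -Access Allow `
        -Protocol Tcp -Direction Inbound -Priority 100 `
        -SourceAddressPrefix "<your-workstation-ip>/32" -SourcePortRange "*" `
        -DestinationAddressPrefix "*" -DestinationPortRange "3000" |
    Set-AzureRmNetworkSecurityGroup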

Follow Darren on Twitter @darrenjrobinson

Azure AD Connect – Using AuthoritativeNull in a Sync Rule

There is a feature in Azure AD Connect that became available in the November 2015 build 1.0.9125.0 (listed here) which has not had much fanfare, but can certainly come in handy in tricky situations. I happened to be working on a project that required the DNS domain linked to an old Office 365 tenant to be removed so that it could be used in a new tenant. Although the old tenant was no longer used for Exchange Online services, it held onto the domain in question, and Azure AD Connect was being used to synchronise objects between the on-premise Active Directory and Azure Active Directory.

Trying to remove the domain using the Office 365 Portal will reveal if there are any items that need to be remediated prior to removing the domain from the tenant, and for this customer it showed that there were many user and group objects that still had the domain used as the userPrincipalName value, and in the mail and proxyAddresses attribute values. The AuthoritativeNull literal could be used in this situation to blank out these values against user and groups (ie. Distribution Lists) so that the domain can be released. I’ll attempt to show the steps in a test environment, and bear with me as this is a lengthy blog.

Trying to remove the domain minnelli.net listed the items needing attention, as shown in the following screenshots:

This report showed that three actions are required to remove the domain values out of the attributes:

  • userPrincipalName
  • proxyAddresses
  • mail

userPrincipalName is simple to resolve by changing the value in the on-premise Active Directory to a different domain suffix, then synchronising the changes to Azure Active Directory so that the default onmicrosoft.com or another accepted domain is set.
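
For a large number of users this can be scripted; a hedged sketch using the ActiveDirectory module (the replacement suffix is an assumption, and you should test on a pilot OU first) is:

# Swap the retiring UPN suffix for another accepted domain
Get-ADUser -Filter 'userPrincipalName -like "*@minnelli.net"' |
    ForEach-Object {
        $newUpn = $_.UserPrincipalName -replace '@minnelli\.net$', '@mytenant.onmicrosoft.com'
        Set-ADUser -Identity $_ -UserPrincipalName $newUpn
    }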

Clearing the proxyAddresses and mail attribute values is possible using the AuthoritativeNull literal in Azure AD Connect. NOTE: You will need to assess the outcome of performing these steps depending on your scenario. For my customer, we were able to perform these steps without affecting other services required from the old Office 365 tenant.

Using the Synchronization Rules Editor, locate and edit the In from AD – User Common rule. Editing the out-of-the-box rules displays a message suggesting you create an editable copy of the rule and disable the original, which is highly recommended, so click Yes.

The rule is cloned as shown below and we need to be mindful of the Precedence value which we will get to shortly.

Select Transformations and edit the proxyAddresses attribute, set the FlowType to Expression, and set the Source to AuthoritativeNull.

I recommend setting the Precedence value in the new cloned rule to be the same as the original rule, in this case 104. First edit the original rule's Precedence to a value such as 1001; you will also notice the original rule is already set to Disabled.

Set the cloned rule Precedence value to 104.

Prior to performing a Full Synchronization run profile to process the new logic, I prefer and recommend performing a Preview by selecting an affected user and previewing a Full Synchronization change. As can be seen below, the proxyAddresses value will be deleted.

The same process would need to be done for the mail attribute.

Once the rules are set, launch the following PowerShell command to perform a Full Import/Full Synchronization cycle in Azure AD Connect:

  • Start-ADSyncSyncCycle -PolicyType Initial

Once the cycle is completed, attempt to remove the domain again to check whether any other items need remediation, or you might see a successful domain removal. I've seen it take up to 30 minutes or so before being able to remove the domain once all remediation tasks have been completed.

There will be other scenarios where using the AuthoritativeNull literal in a Sync Rule will come in handy. What others can you think of? Leave a description in the comments.

Active Directory – What are Linked Attributes?

A customer request to add some additional attributes to their Azure AD tenant via the Directory Extensions feature in the Azure AD Connect tool led me into further investigation. My last blog here set out the customer request, but what I didn't detail in that blog was that one of the attributes they also wanted to extend into Azure AD was directReports, an attribute they had used in the past for their custom-built on-premise applications to display the list of staff the user was a manager for. This led me down a rabbit hole where it took a while to reach the end.

With my past experience in using Microsoft Identity Manager (formerly Forefront Identity Manager), I knew directReports wasn't a real attribute stored in Active Directory, but rather a calculated value shown in the Active Directory Users and Computers console. The directReports value was based on the manager attribute of other users containing a reference to the user you were querying (phew, that was a mouthful). This is why directReports, and other similar types of attributes such as memberOf, were not selectable for Directory Extension in the Azure AD Connect tool. I had never bothered to understand it further than that until the customer also asked for a list of these types of attributes so that they could tell their application developers they would need a different technique to determine these values in Azure AD. This is where the investigation started, which I would like to summarise as I found it very difficult to find this information in one place.

In short, these attributes in the Active Directory schema are Linked Attributes as detailed in this Microsoft MSDN article here:

Linked attributes are pairs of attributes in which the system calculates the values of one attribute (the back link) based on the values set on the other attribute (the forward link) throughout the forest. A back-link value on any object instance consists of the DNs of all the objects that have the object’s DN set in the corresponding forward link. For example, “Manager” and “Reports” are a pair of linked attributes, where Manager is the forward link and Reports is the back link. Now suppose Bill is Joe’s manager. If you store the DN of Bill’s user object in the “Manager” attribute of Joe’s user object, then the DN of Joe’s user object will show up in the “Reports” attribute of Bill’s user object.

I then found this article here which further explained these forward and back links with respect to which are writable and which are read-only, the example below referring to the linked attributes member/memberOf:

Not going too deep into the technical details, there’s another thing we need to know when looking at group membership and forward- and backlinks: forward-links are writable and backlinks are read-only. This means that only forward-links changed and the corresponding backlinks are computed automatically. That also means that only forward-links are replicated between DCs whereas backlinks are maintained by the DCs after that.

The take-out from this is that the value in the forward link can be updated (the member attribute in this case), but you cannot update the back link memberOf. Back links are always calculated automatically by the system whenever an attribute that is a forward link is modified.
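
You can see this behaviour for yourself with the manager/directReports pair from the example above (the account names here are assumptions):

# The forward link (manager) is writable...
Set-ADUser -Identity joe -Manager bill
# ...and the back link (directReports) is then calculated by the system
(Get-ADUser -Identity bill -Properties directReports).directReports
# Writing the back link directly fails because back links are read-only:
# Set-ADUser -Identity bill -Replace @{directReports="CN=Joe,OU=Staff,DC=mydomain,DC=com"}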

My final quest was to find the list of linked attributes without querying the Active Directory schema which then led me to this article here, which listed the common linked attributes:

  • altRecipient/altRecipientBL
  • dLMemRejectPerms/dLMemRejectPermsBL
  • dLMemSubmitPerms/dLMemSubmitPermsBL
  • msExchArchiveDatabaseLink/msExchArchiveDatabaseLinkBL
  • msExchDelegateListLink/msExchDelegateListBL
  • publicDelegates/publicDelegatesBL
  • member/memberOf
  • manager/directReports
  • owner/ownerBL

There is further, deeper technical information about linked attributes, such as "distinguished name tags" (DNT) and what is replicated between DCs versus what is calculated locally on a DC, which you can read at your own leisure in the articles listed throughout this blog. But I hope the summary is enough information on how they work.

Azure AD Connect – Multi-valued Directory Extensions

I happened to be at a customer site working on an Azure project when I was asked to cast a quick eye over an issue they had been battling with. They had an Azure AD Connect server synchronising user and group objects between their corporate Active Directory and their Azure AD, used for Office 365 services and other Azure-based applications. Their intention was to synchronise some additional attributes from their Active Directory to Azure AD so that they could be used by some of their custom built Azure applications. These additional attributes were a combination of standard Active Directory attributes as well as some custom schema extended attributes.

They were following the guidance from the Microsoft article listed here. As mentioned in the article, ‘Directory extensions allows you to extend the schema in Azure AD with your own attributes from on-premises Active Directory‘. When running the Azure AD Connect installation wizard and trying to find the attributes in the dropdown list, some of their desired attributes were not listed as shown below. An example attribute they wanted to synchronise was postalAddress which was not in the list.

After browsing the dropdown list trying to determine why some of their desired attributes were missing, I noticed multi-valued attributes were missing, such as the description standard Active Directory attribute which I knew was a multi-valued attribute.

I checked the schema in Active Directory and it was clear the postalAddress attribute was multi-valued.
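
You can confirm this yourself with PowerShell; the isSingleValued attribute on the schema object tells you whether an attribute is single- or multi-valued:

# Returns isSingleValued = False for a multi-valued attribute such as postalAddress
Get-ADObject -SearchBase (Get-ADRootDSE).schemaNamingContext `
    -LDAPFilter "(lDAPDisplayName=postalAddress)" -Properties isSingleValued |
    Select-Object Name, isSingleValued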

The customer pointed me back to the Microsoft article which clearly stated that the valid attribute candidates included multi-valued strings and binary attributes. With the time I had remaining, I reviewed their Azure AD Connect implementation and tried a few techniques in the synchronisation service such as:

  • refreshing the schema of the on-premise Active Directory management agent
  • enabling the attribute in the properties of the on-premise Active Directory management agent, as by default it was not checked

I next checked the Azure AD Connect release notes (here) and quickly noticed the cause of the issue, which had to do with the version of Connect they were using, a few releases old. Support for multi-valued attributes in Directory Extensions was added in version 1.1.130.0, released in April 2016, while the version running at the customer was 1.1.110.0 from only a couple of months earlier.

After upgrading Azure AD Connect to the currently released version, the customer was able to successfully select these multi-valued attributes.

Microsoft are very good at keeping the release notes up to date as new versions of Azure AD Connect are released, currently every 1-2 months. The lesson learned is to check the release notes for the new features and bug fixes in each release to determine whether you need to upgrade the tool.

Azure AD Application SSO and Provisioning – Things to consider

I’ve had the opportunity to work on a couple of customer engagements recently integrating SaaS based cloud applications with Azure Active Directory, one being against a cloud-only Azure AD tenant and the other federated with on-premises Active Directory using ADFS. The Azure AD Application Gallery now has over 2,700 applications listed which provide a supported and easy process to integrate applications with Azure AD, although not every implementation is the same. Most of them have a prescribed tutorial on how to perform the integration (listed here), while some application vendors have their own guides.

This blog won’t describe what is Single Sign-On (SSO) or User Provisioning with Azure AD (which is detailed here), but rather to highlight some things to consider when you start planning your application integrations with Azure AD.

User provisioning into the application

Azure AD supports user provisioning and de-provisioning into some target SaaS applications based on changes made in Windows Server Active Directory and/or Azure AD. Organisations will generally either be managing user accounts in these SaaS applications manually, using scripts or some other automated method. Some notes about provisioning in Azure AD:

  • ‘Featured’ apps in the Azure AD Application Gallery support automatic provisioning and de-provisioning. A privileged service account with administrative permissions within the SaaS application is required to allow Azure AD the appropriate access
  • Some applications (i.e. like Lucidchart and Aha!) are able to perform the function of provisioning of new users on their own, which is managed directly by the application with no control from Azure AD. Some applications (i.e. like Lucidchart) will also automatically apply a license to the new user account, which makes the correct user assignment to the application important. Other applications (i.e. like Aha!) will automatically create the new accounts but not assign a license or provide access to any information. License assignment in this case would be a manual task by an administrator in the target SaaS application
  • All other applications that do not provide the capability for automatic provisioning require the user accounts to be present within the target application. User accounts will therefore need to be either manually created, scripted, or provisioned through another form of Identity Management. You need to ensure the method by which the application matches the user from Azure AD is known so that accurate matching can be performed. For example, the 'UserPrincipalName' value from the Azure AD user account matches the 'NameID' value within the application

Role mapping between Azure AD and the application

Access to applications can be assigned either directly against user accounts in Azure AD or by using groups. Using groups is recommended to ease administration, with the simplest form using a single Security Group to allow users access to the application. Some applications like Splunk support the capability to assign user access based on roles, which is performed using 'Role Mapping'. This provides the capability to have a number of Security Groups represent different roles, and assigning users to these groups in Azure AD not only enables access to the application but also assigns the user's role. Consider how you manage user memberships to these groups, how they are named so that they are easily identified for management, and how the application knows about the groups. Splunk, for example, requires groups to be created using the 'Group Object Id' of the Azure AD group and not its name, as shown in this example:

The Group Object Id for groups can be found by going to your Directory Page and then navigating to the group whose Object Id is to be retrieved.
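
You can also retrieve it with the AzureAD PowerShell module (the group name here is just an example):

# Look up the Group Object Id by display name
Get-AzureADGroup -SearchString "Splunk Users" | Select-Object DisplayName, ObjectId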


User interface changes

Some applications have a modified interface when SSO is enabled, allowing users to select whether to login with userID/password credentials or a federated login. You need to consider end-user communications for any sudden changes to the application that users may face and let them know which option they should select. For example, the Aha! application changes from this:

to this when SSO is enabled:

Have a backout plan

In addition to the point above, some applications require an all-or-nothing approach when SSO is enabled. If you have not sufficiently planned and prepared, you may be locked out of the application as an administrator. If you have access to a test subscription of your application: test, test and test! If you only have your production subscription, I would suggest having an open dialog with the application support team in case you inadvertently lock yourself out of authenticating. For example, the New Relic application allows the SSO configuration to be made in advance, which then requires only the account owner to enable it. Once enabled, all authentication uses SSO, and you had better hope the configuration is correct or else you'll be asking support to back out the changes.

Applications such as ServiceNow have the ability to have multiple SSO providers where you can implement a phased approach for the enabling of SSO to users with the ultimate goal to make SSO authentication default. ServiceNow also has a ‘side door’ feature where you can by-pass external authentication and login with a local ServiceNow user (as detailed here).

Accessing applications

Users will be able to access applications configured for SSO with Azure AD using either of the following methods:

  • the Microsoft MyApps application panel (https://myapps.microsoft.com)
  • the Office 365 application launch portal
  • Service Provider Initiated Authentication, where authentication is initiated directly at the SaaS application

Most federated applications that support SAML 2.0 also support the ability for users to start at the application login page and then be signed in through Azure AD, either by automatic redirection or by clicking on an SSO link to sign in, as shown in the Aha! images above. This is known as Service Provider Initiated Authentication, and most federated applications in the Azure AD Application Gallery support this capability. Some applications such as AWS do not support Service Provider Initiated Authentication, and SSO does not work if users attempt to authenticate from the application's login screen. The alternate methods to access the application via SSO need to be followed and communicated to end-users so they know how to access these types of applications. For AWS, you can access the application using SSO from the MyApps application panel, with an alternate method being to provide users with a 'Single Sign-On URL' which can be used as a direct, deep link to access the application (as detailed here).

Conclusion

Hopefully you can see that although it can seem quite simple to integrate an application with Azure AD, you should take the time to plan and test the integration of your applications.

Developing and configuring Multi-tenant applications using AngularJs, WebAPI and Azure Active Directory

In this post I am going to share my experience of publishing multi-tenant applications in Azure Active Directory, where Azure Active Directory's role is that of the OAuth server.

You can read more about OAuth 2.0 at https://oauth.net/2/. I am going to use the implicit flow, where the client is an untrusted application, for instance an AngularJS application or a phone application. These clients are called untrusted because they cannot hide the secrets given/shared by the OAuth server.

Let's have a look at the OAuth 2.0 actors in the implicit flow. Below is a diagram of the implicit flow in OAuth 2.0:

[Diagram: OAuth 2.0 implicit flow]

The implicit flow:

  1. The user tries to access the resource (e.g. WebAPI) via the client application (e.g. AngularJS app)
  2. The OAuth 2.0 server presents the consent screen to the user (the resource owner)
  3. The user accepts/rejects the consent
  4. If the user accepts the consent, the OAuth 2.0 server grants the client application access to the resource by presenting the auth token to the resource


Now that we have got some terminology related to the OAuth 2.0 implicit flow, we can map it to the actual implementation. Please see the high-level diagram below:

[Diagram: high-level implementation]

We can see from the diagram that we have got all four actors, but the resource owners (users) are spread across various Azure Active Directories.

I am extending the AngularJS todo list application published at https://azure.microsoft.com/en-us/documentation/articles/active-directory-devquickstarts-angular/ and have moved all the AngularJS code to its own project called "TodoAngularJsClient", as shown below:

[Screenshot: solution structure with the TodoAngularJsClient project]

The following changes configure the WebAPI to use Bearer tokens for authentication:

NOTE: We are not validating the issuer as we don't know the issuer in advance, but we can control which tenants are allowed access via a custom Authorize filter if desired.
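
As a sketch of what that configuration typically looks like in Startup.Auth.cs (assuming the OWIN Microsoft.Owin.Security.ActiveDirectory package; the audience URI matches the APP ID URI configured in the steps below):

// Sketch: multi-tenant Bearer token authentication for the WebAPI
public void ConfigureAuth(IAppBuilder app)
{
	app.UseWindowsAzureActiveDirectoryBearerAuthentication(
		new WindowsAzureActiveDirectoryBearerAuthenticationOptions
		{
			Tenant = "common", // multi-tenant: accept tokens from any directory
			TokenValidationParameters = new System.IdentityModel.Tokens.TokenValidationParameters
			{
				ValidAudience = "https://<YOUR-TENANT-NAME>/todo-webapi",
				ValidateIssuer = false // issuer is not known in advance; see the NOTE above
			}
		});
}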

You can clone the project from https://github.com/mmasooddatascience/angularjs-webapi-multi-tenant-app and follow the steps below to configure the applications in your Active Directory.

  1. Configure AngularJs Client application
    1. Navigate to Azure Active Directory > Applications
    2. Click on Add link at the bottom
    3. Click on “Add an application my organization is developing” link
    4. In the Name field, provide a descriptive name which will be shown in the consent screen, for instance "Todo Application"
    5. Select “Web Application And/Or Web API” as type
    6. Click Next Arrow
    7. In the Sign-On URL field, type the URL where you are planning to deploy the application, for instance https://angularjs-client-app-url/
    8. In the APP ID URI field, type https://<YOUR-TENANT-NAME>/todo-angularjs-client, replacing <YOUR-TENANT-NAME> with your actual tenant name.
    9. Click Tick button at the bottom. It will take you to the configuration page of the application.
    10. Now download the Manifest and make the following changes:
      1. Set oauth2AllowImplicitFlow to true.
      2. Set availableToOtherTenants to true.
      3. Save the .json file and upload it again.
  2. Configure WebAPI application
    1. Navigate to Azure Active Directory > Applications
    2. Click on Add link at the bottom
    3. Click on “Add an application my organization is developing” link
    4. In the Name field, provide a descriptive name, for instance Todo WebAPI.
    5. Select “Web Application And/Or Web API” as type
    6. Click Next Arrow
    7. In the Sign-On URL field, type the URL where you are planning to deploy the application, for instance https://todo-webapiurl/
    8. In the APP ID URI field, type https://<YOUR-TENANT-NAME>/todo-webapi, replacing <YOUR-TENANT-NAME> with your actual tenant name.
    9. Click Tick button at the bottom. It will take you to the configuration page of the application.
    10. Now download the Manifest and make the following changes:
      1. Add the client ID GUID from the AngularJS client configuration to the knownClientApplications array, e.g. "knownClientApplications": ["client-id-guid"]
      2. Set oauth2AllowImplicitFlow to true.
      3. Set availableToOtherTenants to true.
      4. Save the .json file and upload it again.
  3. Now update your AngularJS client application code
    1. Update the ADAL configuration in app.js
    2. Update todoListSvc.js
    3. Re-publish the AngularJS client application to the App Service

ADAL Configuration changes:
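
The key multi-tenant change in the ADAL configuration in app.js is to initialise against the 'common' endpoint rather than a specific tenant. A sketch (the client ID and endpoint values are placeholders) looks like this:

// Sketch: adal-angular init for multi-tenant sign-in (placeholder values)
adalProvider.init({
    instance: 'https://login.microsoftonline.com/',
    tenant: 'common',                      // multi-tenant: any Azure AD directory
    clientId: '<angularjs-client-app-id>', // client ID of the AngularJS client app
    endpoints: {
        // map the WebAPI URL to its APP ID URI so ADAL attaches the right token
        'https://todo-webapi-url/': 'https://<YOUR-TENANT-NAME>/todo-webapi'
    }
}, $httpProvider);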

todoListSvc.js changes:

Before the application can be accessed by the users of a directory, we need to initiate a process that copies the application's metadata to the tenant's directory. For this purpose a signup process is introduced, as shown below, that initiates the consent process:

[Screenshot: signup page]

The following code snippet initiates and presents the consent screen to the administrator:
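
A sketch of what that snippet can look like (assuming adal-angular; the prompt=admin_consent query parameter is what triggers the administrator consent screen):

// Sketch: the Signup button forces an admin-consent sign-in
$scope.signup = function () {
    adalService.config.extraQueryParameter = 'prompt=admin_consent';
    adalService.login();
};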

When an administrator clicks on the Signup button, ADAL presents a consent screen on behalf of every user in his/her directory, as shown below:

[Screenshot: consent screen]

The numbered items shown in the consent screen above have the following characteristics:

  1. Todo Application (this is the name you configured in Azure Active Directory, so use a proper name)
  2. It lists all the scopes you assigned when configuring the application.


Once the signup process has been successfully completed, we need to let the admin know about it, as shown below.

[Screenshot: signup confirmation]

This is also the point where we can capture the Active Directory information (e.g. tenant ID, admin name). This information can be used to control which tenants have access to the application.

If you want your application to be accessed only by specific tenants, you can introduce a workflow into the signup process that lets you (the configured admin) know that a tenant has just subscribed to use your application. If you approve that tenant, then its users can use it. You can implement this control by writing a custom Authorize filter that queries a database and validates the incoming tenant.

For simplicity we allow every Azure Active Directory tenant to have access to this application. After the signup process, Azure Active Directory copies the application details to the tenant's directory. Once the above process is completed, you can access the application and start using it, as shown below:

[Screenshot: the Todo application in use]


That’s it for now. Please provide your feedback.


Configuring Proxy for Azure AD Connect V1.1.105.0 and above

My colleague David Ross has written a previous blog about configuring proxy server settings to allow Azure AD Sync (the previous name of Azure AD Connect) to use a proxy server.

Starting with version 1.1.105.0, Azure AD Connect has completely changed the configuration steps required to allow the Azure AD Connect configuration wizard and Sync Engine to use a proxy.

I ran into a specific proxy failure scenario that I thought I’d share to provide further help.

My Azure AD Connect (v.1.1.110.0) installation reached the following failure at the end of the initial installation wizard:

[Screenshot: installation wizard failure]

The trace log just stated the following:

Apply Configuration Page: Failed to configure directory extension (True). Details: System.Management.Automation.CmdletInvocationException: user_realm_discovery_failed: User realm discovery failed —> Microsoft.IdentityManagement.PowerShell.ObjectModel.SynchronizationConfigurationValidationException: user_realm_discovery_failed: User realm discovery failed


In this environment, I had the following components:

  • The AAD Connect software was going to operate under a service account
  • All Internet connectivity was through a proxy server which required authentication
  • Windows Server 2012 R2 platform
  • Two factor authentication was enabled for O365 Admin accounts

Previously, in order to get authentication working for O365, I set the proxy server settings in Internet Explorer. I tested browsing and it appeared fine. I also had to add a number of Microsoft login URLs to Internet Explorer's 'Trusted Sites' to allow the new forms-based authentication (which allowed the second factor to be entered) to work properly with the Azure AD Connect wizard.

So even though my Internet proxy appeared to be working under my admin account, and Office 365 was authenticating properly during the O365 'User Sign-In' screen, I was still receiving a 'User Realm Discovery' error message at the end of the installation.

This is when I turned to online help and found this Microsoft article on the way Azure AD Connect now handles proxy authentication. It can be found here and is by and large an excellent guide.

Following Microsoft’s guidance, I ran the following proxy connectivity command and verified my proxy server was not blocking my access:

Invoke-WebRequest -Uri https://adminwebservice.microsoftonline.com/ProvisioningService.svc

[Screenshot: successful response from the provisioning service endpoint]

So that appeared to be fine and not the cause of my issue. Reading further, the guidance had stated at the start that my 'machine.config' file had to be properly configured. When I re-read that, I wondered aloud "what file?". Digging deeper into the guidance, I ran into this step.

It appears that Azure AD Connect now uses Modern Authentication to connect to Office 365 during the final part of the configuration wizard, and that the 'machine.config' file has to be modified with your proxy server settings for it to complete properly.

Since this environment has a proxy which requires authentication, I added the following to the end of the file:

C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Config\machine.config

All the new required text sits within the '<system.net>' tags. NOTE: The guidance from Microsoft states that the new code has to be 'at the end of the file', but be sure to place it BEFORE the closing '</configuration>' tag.
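
The snippet I used was along these lines (reconstructed from the Microsoft guidance; substitute your own proxy address and port):

<system.net>
    <defaultProxy enabled="true" useDefaultCredentials="true">
        <proxy
            usesystemdefault="true"
            proxyaddress="http://<PROXYADDRESS>:<PROXYPORT>"
            bypassonlocal="true"
        />
    </defaultProxy>
</system.net>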


I saved the file, then clicked the 'Retry' button on my original 'user realm discovery failure' message (thankfully not having to attempt a completely new install of Azure AD Connect), and the problem was resolved.

Hope this helps!


Tips on moving your Visual Studio Online from Microsoft to Organisational Accounts

If like me you’ve been a keen user of Visual Studio Online since it first came into existence way back in 2012 you’ve probably gotten used to using it with Microsoft Accounts (you know, the ones everyone writes “formerly Live ID” after), and when, in 2014, Microsoft enabled the use of Work (or Organisational) Accounts you either thought “that’s nice” and immediately got back to writing code, or went ahead and migrated to Work Accounts.

If you are yet to cutover your Visual Studio Online (VSO) tenant to use Work Accounts, here are a few tips and gotchas to be aware of as part of your switch.

The VSO owner Microsoft Account must be in Azure AD

Yes, you read that correctly.

Azure Active Directory supports the invitation of users from other Azure AD instances as well as users with Microsoft Accounts (MSAs).

If you haven’t added the MSA that is currently listed as “Account owner” on the Settings page of your VSO tenant (https://yourvsotenant.visualstudio.com/_admin/_home/settings) to the Azure Active Directory instance you intend to use for Work Account authentication then you will have problems.

For the duration of the cutover you also need to have the account as an Azure Co-admin. This is because the directory connection process for VSO, which you perform via the Azure Management Portal, assumes that the user you are logged into the Portal as is also the Account owner of your VSO tenant. If this isn't the case you'll see this error:

HTTP403 while loading resource VS27043 from visualstudio : User “currentazureadmin@your.org.acc” is not the account owner of “yourvsotenant”.

Users with MSAs that match their Work Accounts are OK

Many organisations worked around the use of non-Azure AD accounts early on by having users create MSAs with logins matching their Work Accounts. As I previously blogged about, it isn't always possible to do this, but if your organisation has, then you're in a good state (with one drawback).

After you cut over to Azure AD, these users will no longer be able to use their MSA and will have to log out and then log back in. When they do so they will be required to select whether they wish to use the MSA or the Work Account for access, which, while painful, is hardly the end of the world.

On the upside, the existing Team Project membership the user had with their old MSA is retained as their user identifier (UPN / email address) hasn’t changed.

On the downside the user will need to create new Alternative Credentials in order to access Git repositories they previously had access to. Note that you can't reuse the old Git credentials you had.

In summary on this one – it is quite a subtle change, particularly as the username is the same, but the password (and login source) changes. Expect to do some hand holding with people who don’t understand that they were not previously using an Azure AD-based identity.

Don’t forget they’ll need to change their Visual Studio settings too by using the ‘Switch User’ option on the bottom left of the “Connect to Team Foundation Server” dialog as shown below.

[Screenshot: Switch User on the Connect to Team Foundation Server dialog]

Don’t delete MSAs until you have mapped user access

Seems simple in hindsight, but deleting an MSA before you know what Team Projects it has access to will mean you can no longer determine which projects the new Work Account needs to be added to.

The easiest way to determine which groups to assign Work Accounts to is to switch to the VSO Admin Security Page (https://yourvsotenant.visualstudio.com/DefaultCollection/_admin/_security#_a=memberOf) and open the User view which will show you Team Project membership. Then for each MSA view the groups it is a member of and add the new Work Account to the same. This is maybe a time to pull access from old projects nobody really uses any more ;).

TFSVC – new user = new workspace

Ah yes, this old chestnut :).

As the Work Account represents a new user it isn’t possible to reuse a previous workspace which does mean… re-syncing code down into a new workspace on your machine again.

The tip here (and for Git also) is that you need to plan the cutover and ensure that developers working on the source trees in your tenant either check-in pending changes, or shelve them for later retrieval and re-application to a new workspace.

Get external users into Azure AD before the cutover

If you need to continue to have MSA logins for customers or third party services, make sure you add them to Azure AD prior to the cutover as this will make life afterwards so much easier.

If you have a large number of developers using MSAs that can’t be disrupted then you should consider adding their MSAs temporarily to Azure AD and then migrate them across gradually.

So there we have it, hopefully some useful content for you to use in your own migration. Do you have tips you want to share? Please go ahead and add them in the comments below.

Simon.

Sharing Azure SSO Access Tokens Across Multiple Native Mobile Apps

This blog post is the fourth and final in the series that cover Azure AD SSO in native mobile applications.

  1. Authenticating iOS app users with Azure Active Directory
  2. How to Best handle AAD access tokens in native mobile apps
  3. Using Azure SSO tokens for Multiple AAD Resources From Native Mobile Apps
  4. Sharing Azure SSO Access Tokens Across Multiple Native Mobile Apps (this post).

Introduction

Most enterprises have more than one mobile app and it’s not unusual for these mobile apps to interact with some back-end services or APIs to fetch and update data. In the previous posts of this series we looked at how to manage access to APIs and share tokens as a way to enable a mobile app to interact with multiple AAD-secured resources.

This post will cover how to share access tokens to AAD resources across multiple mobile apps. This is very useful if the enterprise (or any mobile app vendor) wants to provide convenience for the users by not asking them to authenticate on every mobile app the user has.

Once a mobile user logs into Azure AD and gets a token we want to reuse the same token with other apps. This is suitable for some scenarios, but it might not be ideal for apps and resources that deal with sensitive data – the judgement is yours.

Sharing Tokens Across Multiple Mobile Apps

Moving on from the previous posts, it is now time to enable our mobile apps to share the access token. In this scenario we are covering iOS devices (iOS 6 and above), however other mobile platforms provide similar capabilities too. So what are the options for sharing data on iOS?

KeyChain Store

iOS offers developers a simple utility for storing and sharing keys called SecKeyChain. The API has been part of the iOS platform since before iOS 6, but in iOS 6 Apple integrated this tool with iCloud to make it even easier to push any saved passwords and keys to Apple iCloud, and then share them across multiple devices.

We could use iOS SecKeyChain to store the token (and the refreshToken) once the user logs in on any of the apps. When the user starts using any of the other apps, we check the SecKeyChain first before attempting to authenticate the user.

public async Task AsyncInit(UIViewController controller, ITokensRepository repository)
{
	_controller = controller;
	_repository = repository;
	_authContext = new AuthenticationContext(authority);
}

public async Task<string> RefreshTokensLocally()
{
	var refreshToken = _repository.GetKey(Constants.CacheKeys.RefreshToken, string.Empty);
	var authorizationParameters = new AuthorizationParameters(_controller);

	var result = "Refreshed an existing Token";
	bool hasARefreshToken = true;

	if (string.IsNullOrEmpty(refreshToken)) 
	{
		var localAuthResult = await _authContext.AcquireTokenAsync(
			resourceId1, clientId, 
                        new Uri (redirectUrl), 
                        authorizationParameters, 
                        UserIdentifier.AnyUser, null);

		refreshToken = localAuthResult.RefreshToken;
		_repository.SaveKey(Constants.CacheKeys.WebService1Token, localAuthResult.AccessToken, null);

		hasARefreshToken = false;
		result = "Acquired a new Token"; 
	} 

	var refreshAuthResult = await _authContext.AcquireTokenByRefreshTokenAsync(refreshToken, clientId, resourceId2);
	_repository.SaveKey(Constants.CacheKeys.WebService2Token, refreshAuthResult.AccessToken, null);

	if (hasARefreshToken) 
	{
		// this will only be called when we are refreshing the tokens (not when we are acquiring new tokens)
		refreshAuthResult = await _authContext.AcquireTokenByRefreshTokenAsync(refreshAuthResult.RefreshToken, clientId, resourceId1);
		_repository.SaveKey(Constants.CacheKeys.WebService1Token, refreshAuthResult.AccessToken, null);
	}

	_repository.SaveKey(Constants.CacheKeys.RefreshToken, refreshAuthResult.RefreshToken, null);

	return result;
}

Some of the above code will be familiar from previous posts; what has changed is that we now pass in an ITokensRepository which saves any tokens (and refresh tokens) once the user logs in, to make them available to the other mobile apps.

I have intentionally passed an interface (ITokensRepository) to allow for different implementations, in case you opt to use a different approach for sharing the tokens. The internal implementation of the concrete TokensRepository is something like this:

public interface ITokensRepository 
{
	bool SaveKey(string key, string val, string keyDescription);
	string GetKey(string key, string defaultValue);
	bool SaveKeys(Dictionary<string,string> secrets);
}

public class TokensRepository : ITokensRepository
{
	private const string _keyChainAccountName = "myService";

	public bool SaveKey(string key, string val, string keyDescription)
	{
		var setResult = KeychainHelpers.SetPasswordForUsername(key, val, _keyChainAccountName, SecAccessible.WhenUnlockedThisDeviceOnly, false );

		return setResult == SecStatusCode.Success;
	}

	public string GetKey(string key, string defaultValue)
	{
		return KeychainHelpers.GetPasswordForUsername(key, _keyChainAccountName, false) ?? defaultValue;
	}
		
	public bool SaveKeys(Dictionary<string,string> secrets)
	{
		var result = true;
		foreach (var key in secrets.Keys) 
		{
			result = result && SaveKey(key, secrets[key], string.Empty);
		}

		return result;
	}
}

iCloud

We could use Apple iCloud to push the access tokens to the cloud and share them with other apps. The approach would be similar to what we have done above, the only difference being the way we store these keys: instead of storing them locally, we push them to Apple iCloud directly. As the SecKeyChain implementation above supports pushing data to iCloud, I won't go through the implementation details here; simply note the option is available to you.

Third Party Cloud Providers (ie Azure)

Similar to the previous option, but offering more flexibility. This is a very good solution if we are already using Azure Mobile Services for our mobile app. We can create one more table and use it to store and share access tokens. The implementation could be similar to the following:

public async Task<string> RefreshTokensInAzureTable()
{
	var tokensListOnAzure = await tokensTable.ToListAsync();
	var tokenEntry = tokensListOnAzure.FirstOrDefault();
	var authorizationParameters = new AuthorizationParameters(_controller);

	var result = "Refreshed an existing Token";
	bool hasARefreshToken = true;

	if (tokenEntry == null) 
	{
		var localAuthResult = await _authContext.AcquireTokenAsync(resourceId1, clientId, new Uri (redirectUrl),  authorizationParameters, UserIdentifier.AnyUser, null);

		tokenEntry = new Tokens {
			WebApi1AccessToken = localAuthResult.AccessToken,
			RefreshToken = localAuthResult.RefreshToken,
			Email = localAuthResult.UserInfo.DisplayableId,
			ExpiresOn = localAuthResult.ExpiresOn
		};
		hasARefreshToken = false;
		result = "Acquired a new Token"; 
	} 
		
	var refreshAuthResult = await _authContext.AcquireTokenByRefreshTokenAsync(tokenEntry.RefreshToken, 
                                                                                    clientId, 
                                                                                    resourceId2);
	tokenEntry.WebApi2AccessToken = refreshAuthResult.AccessToken;
	tokenEntry.RefreshToken = refreshAuthResult.RefreshToken;
	tokenEntry.ExpiresOn = refreshAuthResult.ExpiresOn;

	if (hasARefreshToken) 
	{
		// this will only be called when we are refreshing the tokens (not when we are acquiring new tokens)
		refreshAuthResult = await _authContext.AcquireTokenByRefreshTokenAsync (refreshAuthResult.RefreshToken, clientId, resourceId1);
		tokenEntry.WebApi1AccessToken = refreshAuthResult.AccessToken;
		tokenEntry.RefreshToken = refreshAuthResult.RefreshToken;
		tokenEntry.ExpiresOn = refreshAuthResult.ExpiresOn;
	}

	if (hasARefreshToken)
		await tokensTable.UpdateAsync (tokenEntry);
	else
		await tokensTable.InsertAsync (tokenEntry);

	return result;
}

Words of Warning

Bearer Tokens

Developers need to understand Bearer Tokens when using Azure AD authentication. A Bearer Token means anybody who has the token (the bearer of the token) can access and interact with your AAD resource. This offers high flexibility, but it could also be a security risk if your token is somehow exposed. This needs to be thought about when implementing any token-sharing mechanism.

iOS SecKeyChain is “Secure”

iOS SecKeyChain is "secure", right? No, not at all. Apple calls it secure, but on jailbroken devices you can see the key store as a normal file. Thus, I would highly recommend encrypting these access tokens, and any key that you might want to store, before persisting them. The same goes for iCloud, Azure, or any of the other approaches we went through above.
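
As an illustration, a minimal sketch of AES-encrypting a token before persisting it (key management, i.e. where you store the key and IV, is the hard part and is out of scope here):

// Sketch: AES-encrypt a token before writing it to the keychain/iCloud/Azure
using System;
using System.Security.Cryptography;
using System.Text;

public static class TokenProtector
{
	public static string Encrypt(string token, byte[] key, byte[] iv)
	{
		using (var aes = Aes.Create())
		{
			aes.Key = key; // e.g. 32 bytes for AES-256
			aes.IV = iv;   // 16 bytes
			using (var encryptor = aes.CreateEncryptor())
			{
				var plain = Encoding.UTF8.GetBytes(token);
				var cipher = encryptor.TransformFinalBlock(plain, 0, plain.Length);
				return Convert.ToBase64String(cipher);
			}
		}
	}
}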

Apple AppStore Verification

If you intend to submit your app to the Apple AppStore, then you need to be extra careful with the approach you take to share data between your apps. For enterprises (locally deployed apps), you have the control and can make the call based on your use case. However, Apple has a history of rejecting apps (e.g. PastePane) for using some iOS APIs in an "unintended" manner.

I hope you found this series of posts useful, and as usual, if there is something that is not clear or you need some help with similar projects that you are undertaking, then get in touch and we will do our best to help. I have pushed the sample code from this post and the previous ones to GitHub, and it can be found here.

Has.


Using Azure SSO Tokens for Multiple AAD Resources From Native Mobile Apps

This blog post is the third in a series that cover Azure Active Directory Single Sign-On (SSO) authentication in native mobile applications.

  1. Authenticating iOS app users with Azure Active Directory
  2. How to Best handle AAD access tokens in native mobile apps
  3. Using Azure SSO tokens for Multiple AAD Resources From Native Mobile Apps (this post)
  4. Sharing Azure SSO access tokens across multiple native mobile apps.

Introduction

In an enterprise context it is highly likely there are multiple web services that your native mobile app needs to consume. I had exactly this scenario at one of my clients, who asked if they could maintain the same SSO token in the background in the mobile app and use it to access multiple web services. I spent some time digging through the documentation and conducting some experiments to confirm some points, and this post shares my findings.

Cannot Share Azure AD Tokens for Multiple Resources

The first thing that comes to mind is to use the same access token for multiple Azure AD resources. Unfortunately this is not allowed. Azure AD issues a token for a certain resource (which is mapped to an Azure AD app). When we call AcquireToken, we need to provide a single resourceID, and the resulting token can only be used against the resource matching that identifier.

There are ways you could use the same token for multiple resources (as we will see later in this post), but doing so is not recommended because it complicates operational logging, tracing of the authentication process, and the like. It is therefore better to look at the other options Azure and the ADAL library provide.

Use Refresh-Token to Acquire Tokens for Multiple Resources

The ADAL library supports acquiring multiple access tokens for multiple resources using a “refresh token”. This means that once a user is authenticated, ADAL’s authentication context can produce an access token for each resource without authenticating the user again. This is covered briefly in the MSDN documentation. A sample implementation is shown below.

public async Task<string> RefreshTokens()
{
	var tokenEntry = await tokensRepository.GetTokens();
	var authorizationParameters = new AuthorizationParameters(_controller);

	var result = "Refreshed an existing Token";
	bool hasARefreshToken = true;

	if (tokenEntry == null)
	{
		// No stored tokens yet: authenticate the user interactively
		// against the first resource.
		var localAuthResult = await _authContext.AcquireTokenAsync(
			resourceId1,
			clientId,
			new Uri(redirectUrl),
			authorizationParameters,
			UserIdentifier.AnyUser,
			null);

		tokenEntry = new Tokens {
			WebApi1AccessToken = localAuthResult.AccessToken,
			RefreshToken = localAuthResult.RefreshToken,
			Email = localAuthResult.UserInfo.DisplayableId,
			ExpiresOn = localAuthResult.ExpiresOn
		};
		hasARefreshToken = false;
		result = "Acquired a new Token";
	}

	// Use the refresh token to get an access token for the second
	// resource without prompting the user again.
	var refreshAuthResult = await _authContext.AcquireTokenByRefreshTokenAsync(
		tokenEntry.RefreshToken,
		clientId,
		resourceId2);

	tokenEntry.WebApi2AccessToken = refreshAuthResult.AccessToken;
	tokenEntry.RefreshToken = refreshAuthResult.RefreshToken;
	tokenEntry.ExpiresOn = refreshAuthResult.ExpiresOn;

	if (hasARefreshToken)
	{
		// This branch only runs when we are refreshing existing tokens
		// (not when we have just acquired new ones interactively).
		refreshAuthResult = await _authContext.AcquireTokenByRefreshTokenAsync(
			refreshAuthResult.RefreshToken,
			clientId,
			resourceId1);

		tokenEntry.WebApi1AccessToken = refreshAuthResult.AccessToken;
		tokenEntry.RefreshToken = refreshAuthResult.RefreshToken;
		tokenEntry.ExpiresOn = refreshAuthResult.ExpiresOn;
	}

	await tokensRepository.InsertOrUpdateAsync(tokenEntry);

	return result;
}

As you can see above, we check whether we have an access token from previous calls, and if we do, we refresh the access tokens for both web services. Notice how the _authContext.AcquireTokenByRefreshTokenAsync() method provides an overload that takes a resourceId. This enables us to get access tokens for multiple resources without having to re-authenticate the user. The rest of the code is similar to what we have seen in the previous two posts.
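As a usage sketch (reusing the tokensRepository and the Tokens entity from the listing above), the caller’s side is simply:

// Illustrative caller: ensure both access tokens are current before calling the APIs.
var status = await RefreshTokens();  // "Acquired a new Token" or "Refreshed an existing Token"
var tokens = await tokensRepository.GetTokens();
// tokens.WebApi1AccessToken and tokens.WebApi2AccessToken are now ready to use.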

ADAL Library Can Produce New Tokens For Other Resources

In the previous two posts we looked at ADAL and how it uses the TokenCache. Although ADAL does not yet support persistent caching of tokens on mobile apps, it still uses the TokenCache for in-memory caching. This enables ADAL to generate new access tokens as long as the AuthenticationContext from the earlier authentication call still exists. Remember how in the previous post we recommended keeping a reference to the authentication context? Here it comes in handy: it enables us to generate new access tokens for multiple Azure AD resources.

var localAuthResult = await _authContext.AcquireTokenAsync (
                                   resourceId2, 
                                   clientId, 
                                   new Uri(redirectUrl),
                                   authorizationParameters,
                                   UserIdentifier.AnyUser, 
                                   null
                                 );

Calling AcquireToken() (even without passing a refresh token) gives us a new access token for the requested resource. This works because ADAL checks whether it holds a refresh token in its in-memory cache and, if it does, uses that refresh token to generate a new access token for the resource.
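To make “keep a reference to the authentication context” concrete, a minimal sketch might look like the following. The AuthContextHolder class and the authority URL are illustrative placeholders, not something prescribed by ADAL.

using Microsoft.IdentityModel.Clients.ActiveDirectory;

// Illustrative: hold one AuthenticationContext for the app's lifetime so its
// in-memory TokenCache (including the refresh token) survives between calls.
public static class AuthContextHolder
{
	private static AuthenticationContext _instance;

	public static AuthenticationContext Instance
	{
		get
		{
			if (_instance == null)
				_instance = new AuthenticationContext("https://login.windows.net/your-tenant.onmicrosoft.com");
			return _instance;
		}
	}
}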

An alternative

The third alternative is the simplest, but not necessarily the best. In this option we use the same access token to consume multiple Azure AD resources. To do this, we need to use the same Azure AD app ID when configuring both APIs for authentication via Azure AD. This requires some understanding of how Azure AD authentication happens in our web apps.

If you refer to Taiseer Joudeh’s tutorial you will see that, in our web app, we need to tell the authentication framework our Authority and our Audience (the Azure AD app Id). If we set up both of our web APIs with the same Audience, we link them both to the same Azure AD application, which allows the same access token to be used for both web APIs.

// linking our web app authentication to an Azure AD application
private void ConfigureAuth(IAppBuilder app)
{
	app.UseWindowsAzureActiveDirectoryBearerAuthentication(
		new WindowsAzureActiveDirectoryBearerAuthenticationOptions
		{
			Audience = ConfigurationManager.AppSettings["Audience"],
			Tenant = ConfigurationManager.AppSettings["Tenant"]
		});
}

<!-- web.config: both web APIs share the same Tenant and Audience -->
<appSettings>
    <add key="Tenant" value="hasaltaiargmail.onmicrosoft.com" />
    <add key="Audience" value="http://my-Azure-AD-Application-Id" />	
</appSettings>

As I said before, this option is very simple and requires less code, but it can cause complications for security logging and maintenance. At the end of the day, it depends on your context and what you are trying to achieve.
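On the client side, the pay-off is that one token works against both APIs. A short sketch follows; the API URLs and the accessToken variable are placeholders, not values from the sample project.

using System.Net.Http;
using System.Net.Http.Headers;

// Illustrative: both APIs trust the same Azure AD application (same Audience),
// so a single access token can be presented to both.
var client = new HttpClient();
client.DefaultRequestHeaders.Authorization =
	new AuthenticationHeaderValue("Bearer", accessToken);

var api1Response = await client.GetAsync("https://my-first-api.example.com/api/values");
var api2Response = await client.GetAsync("https://my-second-api.example.com/api/values");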

Conclusion

We looked at how we can use Azure AD SSO with ADAL to access multiple resources from native mobile apps. As we saw, there are three main options, and the right choice depends on the context of your app. I hope you find this useful, and if you have any questions or need help with development you are undertaking, just get in touch.

This blog post is the third in a series that covers Azure Active Directory Single Sign-On (SSO) authentication in native mobile applications.

  1. Authenticating iOS app users with Azure Active Directory
  2. How to Best handle AAD access tokens in native mobile apps
  3. Using Azure SSO tokens for Multiple AAD Resources From Native Mobile Apps (this post)
  4. Sharing Azure SSO access tokens across multiple native mobile apps.