Where’s the source!

In this post I will talk about data (aka the source)! In IAM there’s really one simple concept that is often misunderstood or ignored: the data going out of any IAM solution is only as good as the data going in. This may seem simple enough, but if not enough attention is paid to the data source and data quality, the results are going to be unfavourable at best and catastrophic at worst.
With most IAM solutions, data is going to come from multiple sources. Most IAM professionals will agree that the best place to source the majority of your user data is the HR system. Why? Simply put, it’s where all the important information about an individual is stored and, for the most part, kept up to date. For example, if you were to change positions within the same company, the HR system would be updated to reflect the change to your job title, as well as any direct report changes which may come as a result of that sort of move.
I also said that data can, and normally will, come from multiple sources. A typical example: generally speaking, temporary and contract staff are not managed within the central HR system; simply put, the HR team doesn’t care about contractors. So where do they come from, and how are they managed? For smaller organisations this is usually something that’s done manually in AD with no real governance in place. For larger organisations this is far from ideal; it can be a nightmare for the IT team to manage and can create quite a large security risk for the business, so a primary data source for contractors becomes necessary. What that source is depends entirely on the business and what works for them. I have seen a standard SQL web application being used to populate a database, I’ve seen ITSM tools being used, and, less commonly, the IAM system itself being used to manage contractor accounts (within MIM 2016 this is done through the MIM Portal).
There are many other examples of how different corporate applications can be used to augment the identity information of your user data, such as email, phone systems and, to a lesser extent, physical security systems (building access and datacentre access), but we will try to keep it simple for the purpose of this post. The following diagram helps illustrate the dataflow for the different user types.

IAM Diagram

What you will notice from the diagram above is that even though an organisation will have data coming from multiple systems, it all comes together and is stored in a central repository, or an “Identity Vault”. This keeps an accurate record of the information coming from multiple sources to compile the user’s complete identity profile. From this we can then start to manage what information flows to downstream systems when provisioning accounts, and we can also ensure that if any information changes, it is updated in the user’s profile in any attached system managed through the enterprise IAM services.
In my next post I will go into the finer details of the central repository, or the “Identity Vault”.

So in summary, the source of data is very important in defining an IAM solution; it ensures you have the right data being distributed to any managed downstream systems, regardless of what type of user base you have. In my next post we will dig into the central repository, or the Identity Vault. We will go into details around how we can set precedence on data from specific systems, so that if there is a difference in the data coming from the different sources, only the highest-precedence value is applied. We will also discuss how we augment the data sets to ensure we are only collecting the information necessary for the management of that user and of the applications they use within your business.

As per usual, if you have any comments or questions on this post or any of my previous posts then please feel free to comment or reach out to me directly.

Decommissioning Exchange 2016 Server

I have created many labs over the years and never really spent the time to decommission my environments; I usually just blow them away and start again.

So I finally decided to go through the process and decommission my Exchange 2016 server in my lab environment.

My lab consisted of the following:

  • Domain Controller (Windows Server 2012 R2)
  • AAD Connect Server
  • Exchange 2016 Server/ Office 365 Hybrid
  • Office 365 tenant

Being a lab, I only had one Exchange server, which had the mailbox role configured and was also my hybrid server.

Proceed with the steps below to remove the Exchange Online configuration:

  1. Connect to Exchange Online using PowerShell (https://technet.microsoft.com/en-us/library/jj984289(v=exchg.160).aspx); see the connection example after this list
  2. Remove the Inbound connector
    Remove-InboundConnector -Identity "Inbound from [Inbound Connector ID]"
  3. Remove the Outbound connector
    Remove-OutboundConnector -Identity "Outbound to [Outbound Connector ID]"
  4. Remove the Office 365 federation
    Remove-OrganizationRelationship "O365 to On-premises - [Org Relationship ID]"
  5. Remove the OAuth configuration
    Remove-IntraOrganizationConnector -Identity "HybridIOC - [Intra Org Connector ID]"
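
For step 1, the remote PowerShell connection to Exchange Online generally looks like the sketch below (basic-auth remoting as per the linked article; your tenant may require a different connection method):

$credential = Get-Credential
$session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri "https://outlook.office365.com/powershell-liveid/" -Credential $credential -Authentication Basic -AllowRedirection
Import-PSSession $session

# When you're done, clean up the session
Remove-PSSession $session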

Now connect to your Exchange 2016 server and conduct the following:

  1. Open Exchange Management Shell
  2. Remove the Hybrid Config
    Remove-HybridConfiguration
  3. Remove the federation configuration
    Remove-OrganizationRelationship "On-premises to O365 - [Org Relationship ID]"
  4. Remove OAuth configuration
    Remove-IntraOrganizationConnector -Identity "HybridIOC - [Intra Org Connector ID]"
  5. Remove all mailboxes, mailbox databases, public folders, etc.
  6. Uninstall Exchange (either via PowerShell or Programs and Features); example commands are shown after this list
  7. Decommission server
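
For steps 5 and 6, the commands below are a rough sketch only (run from the Exchange Management Shell, with setup run from an elevated prompt in the Exchange installation media folder); arbitration/system mailboxes, public folders and database names will need handling specific to your environment:

# Disable the remaining user mailboxes (arbitration and system mailboxes need separate handling)
Get-Mailbox -ResultSize Unlimited | Disable-Mailbox

# Remove the mailbox databases
Get-MailboxDatabase | Remove-MailboxDatabase

# Uninstall Exchange 2016 from the command line (or use Programs and Features)
.\Setup.exe /mode:Uninstall /IAcceptExchangeServerLicenseTerms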

I also confirmed Exchange was removed from Active Directory and I was able to run another installation of Exchange on the same Active Directory with no conflicts or issues.

Implementing Application with Office 365 Graph API in App-only Mode

Microsoft has recently released Microsoft Graph to easily integrate Office 365 resources with applications. The Graph API basically provides one single endpoint to call a bunch of Web APIs to access Office 365 resources.

In order to use the Graph API from another application, the application must be registered in Azure Active Directory (AAD) first. When the application is registered, we can choose how the application is permitted to use resources – application permissions or delegated permissions. The latter typically requires users to provide their credentials, like a username and password, to get a proper access token. However, there are cases where the application shouldn’t require user credentials but should access Office 365 resources directly; this is called app-only mode. For example, a back-end Web API application doesn’t require user credentials to communicate with the Graph API. There are many articles on the Internet dealing with user credentials but not many covering this app-only mode. In this post, I will walk through how to register an application in app-only mode and consume the Graph API through the registered application.

The sample code used here can be found here. This sample application is built in ASP.NET 5 and ASP.NET MVC 6. Therefore, in order to get the best development experience, I would recommend using Visual Studio 2015.

Registering Application

OK. First things first. As the sample application is using AAD and the Graph API, it’s mandatory to register this app on your AAD first. Please follow the steps below.

Create User Account on AAD

You can skip this step, if you have already got a user account on AAD.
NOTE: Microsoft Account won’t work with this sample app.

Create a user account on AAD tenant. Once created, the user account should be a co-administrator of the Azure subscription that is currently being used.

Register Application to AAD

  • Create a new application.

  • Choose the "Add an application my organization is developing" option.

  • Enter the application name like "Graph API App-only Sample" and select the "Web Application and/or Web API" option.

  • Enter "https://(tenant-name)/GraphApiAppOnlySample" for both fields. (tenant-name) should look like contoso.onmicrosoft.com. Please note that neither value will actually be used.

Now the app has been registered.

Configure Application

  • Once the app is registered, click the configure tab.

  • Get the Client ID.

  • Get the secret key. Note that the key is only displayed once, after clicking the Save button at the bottom.

  • Add another application called "Microsoft Graph".

  • Give "Read directory data" permission to Microsoft Graph, as only this permission is necessary to run the sample app.

ATTENTION: In a production environment, only the application permissions that are actually required MUST be granted, to avoid any security breach.

The app has been configured.

Update Settings in Sample Application

As the app has been registered and configured, the sample Web API app should now be set up with the appropriate settings. Firstly, open appsettings.json.

Then change the following values (a sketch of the file is shown after this list):

  • Tenant: change contoso.onmicrosoft.com to your tenant name.
  • ClientId: the Client ID from the app.
  • ClientSecret: the secret key from the app.
  • AppId: change contoso.onmicrosoft.com to your tenant name.
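
As a purely hypothetical sketch (the key names and nesting in the actual sample’s appsettings.json may differ), the file ends up looking something like this:

{
  "Tenant": "yourtenant.onmicrosoft.com",
  "ClientId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
  "ClientSecret": "<secret key copied from the AAD app configuration>",
  "AppId": "https://yourtenant.onmicrosoft.com/GraphApiAppOnlySample"
}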

Trust IIS or IIS Express with a Self-signed Certificate

  • You can skip this step, if you intend to publish this app to Azure.
  • You can skip this step, if you already have a self-signed certificate on your root certificate storage.

As all communications with AAD and the Graph API are performed through a secure channel (SSL/TLS), this sample app MUST be served with a certificate that chains to a trusted root. However, as this is a developer’s local environment, a self-signed certificate should be issued and stored as a root certificate. If you don’t store the self-signed certificate, you’ll see the following message pop up when you run VS2015.

The following steps show how to register a self-signed certificate to the root certificate store using PowerShell.

Check Self-signed Certificate

First, check if you have a self-signed certificate in your personal certificate store.
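
A quick way to check from PowerShell (this assumes the certificate, if it exists, has the subject CN=localhost):

Get-ChildItem Cert:\CurrentUser\My | Where-Object { $_.Subject -eq "CN=localhost" }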

If there’s no certificate with a subject name of CN=localhost, you should create one using makecert.exe. The easiest way to execute makecert.exe is to run the Developer Command Prompt for VS2015; an example command using the options below follows the list.

  • -r: Create a self-signed certificate.
  • -pe: Mark the generated private key as exportable.
  • -n: Certificate subject X509 name. eg) -n "CN=localhost"
  • -b: Start of the validity period in mm/dd/yyyy format; defaults to now.
  • -e: End of the validity period in mm/dd/yyyy format; defaults to 2039.
  • -ss: Subject’s certificate store name that stores the output certificate. eg) -ss My
  • -len: Generated key length (bits). Defaults to 2048 for ‘RSA’ and 512 for ‘DSS’.
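
Putting those options together, the command looks something like this (the dates are just placeholders):

makecert.exe -r -pe -n "CN=localhost" -b 01/01/2016 -e 12/31/2039 -ss My -len 2048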

Store Self-signed Certificate to Root Store

Second, store the self-signed certificate to the root store.

Finally, you can verify that the self-signed certificate has been stored in the root store.
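
A sketch of both steps using the PKI PowerShell cmdlets; this assumes the certificate subject is CN=localhost and uses the current user’s stores (importing into the Root store will pop up a confirmation prompt):

# Export the self-signed certificate from the personal store
$cert = Get-ChildItem Cert:\CurrentUser\My | Where-Object { $_.Subject -eq "CN=localhost" } | Select-Object -First 1
Export-Certificate -Cert $cert -FilePath "$env:TEMP\localhost.cer"

# Import it into the trusted root store
Import-Certificate -FilePath "$env:TEMP\localhost.cer" -CertStoreLocation Cert:\CurrentUser\Root

# Verify it is now in the root store
Get-ChildItem Cert:\CurrentUser\Root | Where-Object { $_.Subject -eq "CN=localhost" }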

Build and Run Sample Web API Application

All setup has been completed! Now, build the solution in Visual Studio 2015 and press the F5 key to get into Debug mode. You’ll then get a JSON response with your tenant organisation details. How does it work? Let’s look into the code.

Get Access Token from AAD Using ADAL

In order to get an access token from AAD, we need to set up an AuthenticationContext instance and call a method to get one. The code snippet below is an extract from the sample application.

The graphApp instance is created from appsettings.json and injected into the controller. With this instance, both the AuthenticationContext and ClientCredential instances are created. Note that the ClientCredential instance only uses the clientId and clientSecret, not user credentials. In other words, this sample application won’t require any user to log in and provide their login details. Let’s see the actual Get() method.
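
The snippet below is a minimal, self-contained sketch of those ADAL calls rather than the sample’s exact code; in the sample this logic sits inside the controller’s Get() action, and the class and member names here (GraphTokenProvider and its constructor parameters) are illustrative only.

// A minimal sketch of the app-only token request using ADAL
// (Microsoft.IdentityModel.Clients.ActiveDirectory); names are illustrative.
using System.Threading.Tasks;
using Microsoft.IdentityModel.Clients.ActiveDirectory;

public class GraphTokenProvider
{
    private readonly string _tenant;
    private readonly string _clientId;
    private readonly string _clientSecret;

    public GraphTokenProvider(string tenant, string clientId, string clientSecret)
    {
        _tenant = tenant;
        _clientId = clientId;
        _clientSecret = clientSecret;
    }

    public async Task<string> GetAccessTokenAsync()
    {
        // The authority is the AAD tenant the app is registered in
        var authContext = new AuthenticationContext($"https://login.microsoftonline.com/{_tenant}");

        // App-only: only the client ID and secret are used, no user credentials
        var credential = new ClientCredential(_clientId, _clientSecret);

        // "https://graph.microsoft.com" is the resource identifier for Microsoft Graph
        AuthenticationResult authResult =
            await authContext.AcquireTokenAsync("https://graph.microsoft.com", credential);

        return authResult.AccessToken;
    }
}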

This is the part that gets the access token. The AuthenticationContext instance sends a request to the Graph API with the ClientCredential instance containing only the clientId and clientSecret. If authentication fails, the Get() action will return the exception details in JSON format. The authResult instance now contains the access token.

Call Graph API with Access Token

Now we have the access token. In the same action method, let’s see how it is consumed to call the Graph API.
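
Again as a hedged sketch rather than the sample’s exact code, and assuming the Graph v1.0 organization endpoint (the original sample may target a different Graph version), the call looks something like this:

// A minimal sketch of calling Microsoft Graph with the app-only access token.
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

public static class GraphApiClient
{
    public static async Task<string> GetOrganisationAsync(string accessToken)
    {
        using (var client = new HttpClient())
        {
            // Pass the access token as a bearer token on the request header
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", accessToken);

            // Request the tenant organisation details
            var response = await client.GetAsync("https://graph.microsoft.com/v1.0/organization");
            response.EnsureSuccessStatusCode();

            // The organisation details come back as JSON
            return await response.Content.ReadAsStringAsync();
        }
    }
}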

The access token is set on the request header. In this sample code, we request the organisation details through the Graph API. If you use Fiddler, send a request to the application and you will see the full response details.

So far, we have had a brief look at how to implement a simple Web API application in app-only mode to consume the Graph API. As stated above, this sample application is not for production use unless proper permissions are given. Once all appropriate permissions are set up, you can easily customise this app for your integration purposes. Hope this helps your development.

Azure Active Directory Connect high-availability using ‘Staging Mode’

With the Azure Active Directory Connect product (AAD Connect) being announced as generally available to the market (more here, download here), there is a new feature available that will provide a greater speed of recovery of the AAD Sync component. This feature was not available with the previous AAD Sync or DirSync tools and there is little information about it available in the community, so hopefully this model can be considered for your synchronisation design.

Even though the AAD Sync component within the AAD Connect product is based on the Forefront Identity Manager (FIM) synchronisation service, it does not take on the same recovery techniques as FIM. For AAD Sync, you prepare two servers (ideally in different data centres) and install AAD Connect on both. Your primary server is configured to perform the main role of synchronising identities between your Active Directory and Azure Active Directory, and the second server is installed in much the same way but with the ‘enable staging mode’ setting selected. Both servers are independent and do not share any components such as SQL Server, and the second server performs the same tasks as the primary except for the following:

  • No exports occur to your on-premises Active Directory
  • No exports occur to Azure Active Directory
  • Password synchronisation and password writeback are disabled.

Should the primary server go offline for a long period of time or become unrecoverable, you can enable the second server by simply running the installation wizard again and disabling staging mode. When the task schedule next runs, it will perform a delta import and synchronisation and identify any differences between the state of the previous primary server and the current server.

Here are some other items you might want to consider with this design:

  • Configure the task schedule on the second server so that it runs soon after the primary server completes. By default the task schedule runs every 3 hours and launches at a time based on when it was installed, so the second server’s task schedule can launch up to a few hours after the primary server runs. Based on the average amount of work the primary server takes, configure the task schedule on the second server to launch, say, 5-10 minutes later.
  • AAD Sync includes a tool called CSExportAnalyzer which displays changes that are staged to be exported to Active Directory or Azure Active Directory. This tool is useful for reporting on pending exports while the server is in ‘staging mode’ (see the example after this list).
  • Consider using ‘custom sync groups’ which are located in your Active Directory domain. The default installation of the AAD Sync component will create the following groups locally on the server: ADSyncAdmins, ADSyncOperators, ADSyncBrowse and ADSyncPasswordSet. With more than one AAD Sync server, these groups need to be managed on the servers and kept in sync manually. Having the groups in your Active Directory will simplify this administration.

    NOTE: This feature is not yet working with the current AAD Connect download and this blog will be updated when working 100%.
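
As an example of how CSExportAnalyzer is typically run from a command prompt on the staging server (assuming the default AAD Sync installation path; the connector name here is illustrative):

cd "C:\Program Files\Microsoft Azure AD Sync\Bin"

:: Stage the pending export changes for a connector to an XML file
csexport.exe "contoso.com - AAD" %temp%\export.xml /f:x

:: Summarise the pending adds, updates and deletes into a CSV for review
CSExportAnalyzer.exe %temp%\export.xml > %temp%\export.csv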

The last two items will be detailed in future blogs.

Microsoft Office 365 readiness assessment

Originally posted on Lucian’s blog over at clouduccino.com

Okay, you have the green light and it’s time to get cracking deploying Office 365. Before a mailbox can be migrated, before even an account can be AADSync’ed, before you even provision the O365 tenant, there is the matter of checking if the existing infrastructure is ready to handle the great features of Office 365.

What is always recommended before the design phase of a project even starts is to conduct an Office 365 readiness assessment. Working on a project recently and having it fresh in my mind, I thought I’d put finger to keyboard (pen to paper) and jot down the key items to check.

There are a lot of IT companies out there who offer this discovery and assessment process, which is great. As a handy reference point, here’s the approach I take, with a focus on Exchange Online messaging as that’s what I’m pretty good at…

Read More

Connecting Salesforce and SharePoint Online with MuleSoft – Nothing but NET

Often enterprises will choose their integration platform based on the development platform required to build integration solutions. That is, Java shops typically choose Oracle ESB, JBoss, IBM WebSphere or MuleSoft, to name but a few. Microsoft shops have less choice and typically choose to build custom .NET solutions or use Microsoft BizTalk Server. Choosing an integration platform based on the development platform should not be a driving factor and may limit your options.

Your integration platform should be focused on interoperability. It should support common messaging standards, transport protocols, integration patterns and adapters that allow you to connect to a wide range of line of business systems and SaaS applications. Your integration platform should provide frameworks and pattern based templates to reduce implementation costs and improve the quality and robustness of your interfaces.

Your integration platform should allow your developers to use their development platform of choice…no…wait…what!?!

In this post I will walk through integrating Salesforce.com and SharePoint Online using the Java-based Mule ESB platform while writing nothing but .NET code.

MuleSoft .NET Connector

The .NET connector allows developers to use .NET code in their flows enabling you to call existing .NET Framework assemblies, 3rd party .NET assemblies or custom .NET code. Java Native Interface (JNI) is used to communicate between the Java Virtual Machine (JVM), in which your MuleSoft flow is executing, and the Microsoft Common Language Runtime (CLR) where your .NET code is executed.

[Diagram: the .NET connector bridging the JVM and the CLR via JNI]

To demonstrate how we can leverage the MuleSoft .NET Connector in our flows, I have put together a typical SaaS cloud integration scenario.

Our Scenario

  • Customers (Accounts) are entered into Salesforce.com by the Sales team
  • The team use O365 and SharePoint Online to manage customer and partner related documents.
  • When new customers are entered into Salesforce, corresponding document library folders need to be created in SharePoint.
  • Our interface polls Salesforce for changes and needs to create a new document library folder in SharePoint for this customer according to some business rules
  • Our developers prefer to use .NET and Visual Studio to implement the business logic required to determine the target document library based on Account type (Customer or Partner)

Our MuleSoft flow looks something like this:

[Diagram: the MuleSoft flow]

  • Poll Salesforce for changes based on a watermark timestamp
  • For each change detected:
    • Log and update the last time we sync’d
    • Call our .NET business rules to determine target document library
    • Call our .NET helper class to create the folder in SharePoint

Salesforce Connector

We configure the MuleSoft Salesforce Cloud Connector to point to our Salesforce environment and query for changes to the Accounts entity. Using DataSense, we configure which data items to pull back into our flow to form the payload message we wish to process.

[Screenshot: Salesforce connector configuration]

Business Rules

Our business rules are implemented using a bog-standard .NET class library that checks the account type and assigns either “Customers” or “Partners” as the target document library. We then enrich the message payload with this value and return it to our flow.

public object GetDocumentLibrary(SF_Account account)
{
    var docLib = "Unknown";     // default

    // Check for customer accounts
    if (account.Type.Contains("Customer"))
        docLib = "Customers";

    // Check for partner accounts
    if (account.Type.Contains("Partner"))
        docLib = "Partners";

    return new 
        { 
            Name = account.Name, 
            Id = account.Id, 
            LastModifiedDate = account.LastModifiedDate, 
            Type = account.Type,
            DocLib = docLib 
        };
}

Note: JSON is used to pass non-primitive types between our flow and our .NET class.

So our message payload looks like

{
	"LastModifiedDate":"2014-11-02T11:11:07.000Z",
	"Type":"TechnologyPartner",
	"Id":"00190000016hEhxAAE",
	"type":"Account",
	"Name":"Kloud"
}

and is de-serialised by the .NET connector into our .NET SF_Account object that looks like

public class SF_Account
{
    public DateTime LastModifiedDate;
    public string Type;
    public string Id;
    public string type;
    public string Name;
    public string DocLib;
}

Calling our .NET business rules assembly is a simple matter of configuration.

[Screenshot: .NET connector configuration calling the business rules assembly]

SharePoint Online Helper

Now that we have enriched our message payload with the target document library

{
	"LastModifiedDate":"2014-11-02T11:11:07.000Z",
	"Type":"TechnologyPartner",
	"Id":"00190000016hEhxAAE",
	"type":"Account",
	"Name":"Kloud",
	"DocLib":"Partners"
}

we can pass this to our .NET SharePoint client library to connect to SharePoint using our O365 credentials and create the folder in the target document library

public object CreateDocLibFolder(SF_Account account)
{
    using (var context = new Microsoft.SharePoint.Client.ClientContext(url))
    {
        try
        {
            // Provide client credentials
            System.Security.SecureString securePassword = new System.Security.SecureString();
            foreach (char c in password.ToCharArray()) securePassword.AppendChar(c);
            context.Credentials = new Microsoft.SharePoint.Client.SharePointOnlineCredentials(username, securePassword);

            // Get library
            var web = context.Web;
            var list = web.Lists.GetByTitle(account.DocLib);
            var folder = list.RootFolder;
            context.Load(folder);
            context.ExecuteQuery();

            // Create folder
            folder = folder.Folders.Add(account.Name);
            context.ExecuteQuery();
        }
        catch (Exception ex)
        {
            System.Diagnostics.Debug.WriteLine(ex.ToString());
        }
    }

    // Return payload to the flow 
    return new { Name = account.Name, Id = account.Id, LastModifiedDate = account.LastModifiedDate, Type = account.Type, DocLib = account.DocLib, Site = string.Format("{0}/{1}", url, account.DocLib) };
}

Calling our helper is the same as for our business rules

[Screenshot: .NET connector configuration calling the SharePoint helper]

Configuration

Configuring MuleSoft to know where to load our .NET assemblies from is best done using global configuration references.

[Screenshot: global element configuration for the .NET connector]

We have three options to reference our assembly:

  1. Local file path – suitable for development scenarios.
  2. Global Assembly Cache (GAC) – suitable for shared or .NET framework assemblies that are known to exist on the deployment server.
  3. Packaged – suitable for custom and 3rd party .NET assemblies that get packaged with your MuleSoft project and deployed together.

[Screenshot: assembly reference options for the global element]

Deployment

With our flow completed, coding in nothing but .NET, we are good to test and deploy our package to our Mule ESB integration server. At the time of writing, CloudHub does not support the .NET Connector, but this should be available in the not too distant future. To test my flow I simply spin up an instance on the development server and watch the magic happen.

We enter an Account in Salesforce with Type of “Customer – Direct”…

[Screenshot: new Account entered in Salesforce]

and we see a new folder in our “Customers” document library for that Account name in a matter of seconds

[Screenshot: the new folder created in the SharePoint “Customers” document library]

SLAM DUNK!… Nothing but NET.

Conclusion

Integration is all about interoperability, and not just at runtime; it should be a core capability of our integration framework. In this post we saw how we can increase our integration capability without sacrificing our development platform of choice, by using MuleSoft and the MuleSoft .NET Connector.