Set your eyes on the Target!


In my previous posts I’ve discussed a couple of key points in what I define as the basic principles of Identity and Access Management.

Now that we have all the information needed, we can start to look at your target systems. In the simplest terms this could be your local Active Directory (authentication domain), but it could be almost anything, and with the adoption of cloud services these target systems are often what drives the need for robust IAM services.

Something we are often asked as IAM consultants is: why? Why should corporate applications be integrated with an IAM service at all? It’s a valid question. Depending on what the system is and what it does, integrating with an IAM system isn’t always practical, but more often than not there are many benefits to having your applications integrated with an IAM system. These benefits include:

  1. Automated account provisioning
  2. Data consistency
  3. Central authentication services (where supported)

Requirements

With any target system, much like the IAM system itself, the one thing you must know before you go into any detail is the requirements. Every target system will have individual requirements. Some could be as simple as needing basic information: first name, last name and date of birth. But for most applications there is a lot more to it, and the requirements will be driven largely by the application vendor, and to a lesser extent by the application owners and the business.

IAM systems are for the most part extremely flexible in what they can do; they are built to be customised to an enormous degree, and the target systems used by the business will play a large part in determining the amount of customisation within the IAM system.

This could be as simple as requiring additional attributes that are not standard within either the IAM system or your source systems, or it could be the way in which you want the IAM system to interact with the application, for example utilising web services and building custom management agents to connect and synchronise data sets between the two.

At the root of all this is that when you use an IAM system you have a constant flow of data, all stored within the “Vault”. This helps ensure that any change to a user flows to all systems, not just the phone book, and it also ensures that changes are tracked through the governance processes established and implemented as part of the IAM system. Changes made to a user’s identity information within a target application can be easily identified, to the point of saying this change was made on this date/time because a change to this person’s data occurred within the HR system at that time.

Integration

Most IAM systems will have management agents or connectors (the terms vary depending on the vendor you use) built for the typical “out of the box” systems, and these will for the most part satisfy the requirements of many organisations, so you don’t tend to have to worry too much about those. But if you have “bespoke” systems that have been developed and built up over the years for your business, then this is where custom management agents play a key part, and how they are built will depend on the applications themselves. In a Microsoft IAM service the custom management agents would be built using an Extensible Connectivity Management Agent (ECMA). How you build and develop management agents for FIM or MIM is quite an extensive discussion and something better left for a separate post.

One of the “sticky” points here is that, most of the time, integrating an application requires elevated access to the application’s back end in order to push data to and pull data from it. In any IAM system, however, this is done through specific service accounts that are restricted to performing only the functions required for that application.

Authentication and SSO

Application integration also tightens the security of your data, with access to applications controlled through various mechanisms, and authentication plays a large part in the IAM process.

During the provisioning process, passwords are usually set when an account is created, either by using a random password generator (preferred) or by setting a specific temporary password. Either way, it’s always done with the intent that the user resets their password when they first log on. The self-service functionality that can be introduced to do this enables the user to reset their password without ever having to know what the initial password was.
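As a simple illustration of the random-generator approach, here is a hedged PowerShell sketch (the length and character set are arbitrary choices for illustration, not a recommendation from any particular product):

# Generate a random 16-character initial password for a newly provisioned account.
$chars = [char[]]'ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnpqrstuvwxyz23456789!@#$%'
$initialPassword = -join (1..16 | ForEach-Object { $chars | Get-Random })
# The value is set on the new account and flagged for change at first logon,
# so no human ever needs to know it.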

Depending on the application, separate passwords might be created that then need to be managed. In most cases IAM consultants and architects will try to avoid this being required at all, but that isn’t always possible. In these situations the IAM system has methods to manage this as well. In the Microsoft space this can be controlled through password synchronisation using the Password Change Notification Service (PCNS), which basically means that if a user changes their main password, that change can be propagated to all the systems that hold separate passwords.

Most applications today use standard LDAP authentication to provide access to their application services, which keeps the password management process much simpler. Cloud services, however, generally need to be set up to do one of two things:

  1. Store local passwords
  2. Utilise Single Sign-On Services (SSO)

SSO uses standards-based protocols to allow users to authenticate to applications with managed accounts and credentials that you control. Examples of these standard protocols include SAML, OAuth, WS-Fed/WS-Trust and many more.

There is a growing shift in the industry for these to be cloud services, the likes of Microsoft Azure Active Directory or any number of other services available today.
The obvious benefit of SSO is that you have a single username and password to remember, but it also greatly reduces your business’s security risk: from an auditing and compliance perspective, having a single authentication directory can help reduce the overall exposure your business has to compromise from external or internal threats.

Well, that about wraps it up. IAM is for the most part an enabler: it enables your business to be adequately prepared for the consumption of cloud services, which can help reduce your overall IT spend over the coming years. But the one thing I think I’ve highlighted throughout this particular series is requirements, requirements, requirements… repetitive I know, but for IAM so crucially important.

If you have any questions about this post or any of my others please feel free to drop a comment or contact me directly.

 

The Vault!


The vault, or more precisely the “Identity Vault”, is a single-pane view of all the collated data about your users, drawn from the various source data repositories. This sounds like a lot of jargon but it’s quite simple really.

In the diagram below we look at a really simple attribute, firstName (givenName within AD), and its data flow.

As you will see, at the centre is the attribute, and branching off it are all the connected systems, for example Active Directory. What this doesn’t illustrate very well is the specific data flow: where the data is coming from and where it’s going to. That comes down to import and export rules, as well as any precedence rules that you need to put in place.

The Identity Vault, or central data repository, provides a central store of an identity’s information aggregated from a number of sources. It’s also able to identify the data that exists within each of the connected systems, from which it either collects identity information or to which it provides that information as a target system. Sounds pretty simple, right?

Further to the basics described above, each object in the Vault has a unique identifier, or anchor. This is a unique value that is automatically generated when the user is created, to ensure that regardless of what happens to the user’s details throughout the lifecycle of the user object, we are able to track the user and apply changes accordingly. This is particularly useful when you have multiple users with the same name, for example: it avoids the wrong person being updated when changes occur.

Attribute  | User 1   | User 2
FirstName  | John     | John
LastName   | Smith    | Smith
Department | Sales    | Sales
UniqueGUID | 10294132 | 18274932
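A tiny PowerShell sketch of why the anchor matters (the GUID values are the illustrative ones from the table): keyed on the anchor rather than the name, the two John Smiths can never be confused.

# Two users with identical names, keyed by their unique anchor rather than by name.
$vault = @{
    '10294132' = @{ FirstName = 'John'; LastName = 'Smith'; Department = 'Sales' }
    '18274932' = @{ FirstName = 'John'; LastName = 'Smith'; Department = 'Sales' }
}
# Updates always target the anchor, so only the intended object changes.
$vault['18274932'].Department = 'Marketing'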

The table above shows one of the simplest forms of a user’s identity profile, whereas a complete identity profile will consist of many more attributes, some of which may be custom attributes for specific purposes, as in the example demonstrated below:

Attribute            | ContributingMA          | Value
AADAccountEnabled    | AzureAD Users           | TRUE
AADObjectID          | AzureAD Users           | 316109a6-7178-4ba5-b87a-24344ce1a145
accountName          | MIM Service             | jsmith
cn                   | PROD CORP AD            | Joe Smith
company              | PROD CORP AD            | Contoso Corp
csObjectID           | AzureAD Users           | 316109a6-7178-4ba5-b87a-24344ce1a145
displayName          | MIM Service             | Joe Smith
domain               | PROD CORP AD            | CORP
EXOPhoto             | Exchange Online Photos  | System.Byte[]
EXOPhotoChecksum     | Exchange Online Photos  | 617E9052042E2F77D18FEFF3CE0D09DC621764EC8487B3517CCA778031E03CEF
firstName            | PROD CORP AD            | Joe
fullName             | PROD CORP AD            | Joe Smith
mail                 | PROD CORP AD            | joe.smith@contoso.com.au
mailNickname         | PROD CORP AD            | jsmith
o365AccountEnabled   | Office365 Licensing     | TRUE
o365AssignedLicenses | Office365 Licensing     | 6fd2c87f-b296-42f0-b197-1e91e994b900
o365AssignedPlans    |                         | Deskless, MicrosoftCommunicationsOnline, MicrosoftOffice, PowerAppsService, ProcessSimple, ProjectWorkManagement, RMSOnline, SharePoint, Sway, TeamspaceAPI, YammerEnterprise, exchange
o365ProvisionedPlans |                         | MicrosoftCommunicationsOnline, SharePoint, exchange
objectSid            | PROD CORP AD            | AQUAAAAAAAUVAAAA86Yu54D8Hn5pvugHOA0CAA==
sn                   | PROD CORP AD            | Smith
source               | PROD CORP AD            | WorkDay
userAccountControl   | PROD CORP AD            | 512
userPrincipalName    | PROD CORP AD            | jsmith@contoso.com.au

So now we have a more complete picture of the data, where it’s come from and how we connect it to a user’s identity profile, and we can start to look at how we synchronise that data to any and all managed targets. It’s very important to control this flow, though, so we need to have strict governance controls in place about what data is distributed throughout the environment.

One practical approach to managing this is a data exchange agreement. This gives the organisation a more defined understanding of what data is being used by which application and for what purpose, and it also helps define strict controls on what the application owners can do with the data being consumed; for example, strictly prohibiting application owners from sharing that data with anyone without the written consent of the data owners.

In my next post we will start to discuss how we then manage target systems, how we use the data we have to provision services and manage the user information through what’s referred to as synchronisation rules.

As with all my posts, if you have any questions please drop me a note.

 

Where’s the source!

In this post I will talk about data (aka the source)! In IAM there’s really one simple concept that is often misunderstood or ignored: the data going out of any IAM solution is only as good as the data going in. This may seem simple enough, but if not enough attention is paid to the data source and data quality then the results are going to be unfavourable at best and catastrophic at worst.
With most IAM solutions, data is going to come from multiple sources. Most IAM professionals will agree the best place to source the majority of your user data is the HR system. Why? Simply put, it’s where all the important information about the individual is stored and, for the most part, kept up to date. For example, if you were to change positions within the same company, the HR system is going to be updated to reflect the change to your job title, as well as any direct-report changes which may come as a result.
I also said that data can, and normally will, come from multiple sources. A typical example: generally speaking, temporary and contract staff will not be managed within the central HR system; put simply, the HR team don’t care about contractors. So where do they come from, and how are they managed? For smaller organisations this is usually something that’s done manually in AD with no real governance in place. For larger organisations this is far from ideal, can be a nightmare for the IT team to manage, and can create quite a large security risk for the business, so a primary data source for contractors becomes necessary. What that is is entirely up to the business and what works for them: I have seen a simple SQL web application being used to populate a database, I’ve seen ITSM tools being used, and, less commonly, the IAM system itself being used to manage contractor accounts (within MIM 2016 this is done through the MIM Portal).
There are many other examples of how different corporate applications can be used to augment the identity information of your user data, such as email and phone systems and, to a lesser extent, physical security systems (building access and datacentre access), but we will try and keep it simple for the purpose of this post. The following diagram helps illustrate the data flow for the different user types.

IAM Diagram

What you will notice from the diagram above is that even though an organisation will have data coming from multiple systems, it all comes together and is stored in a central repository, or “Identity Vault”. This keeps an accurate record of the information coming from multiple sources to compile the user’s complete identity profile. From this we can then start to manage what information flows to downstream systems when provisioning accounts, and we can also ensure that if any information changes, it is updated in the user’s profile in any attached system that is managed through the enterprise IAM services.
In my next post I will go into the finer details of the central repository, or the “Identity Vault”.

So, in summary, the source of data is very important in defining an IAM solution; it ensures you have the right data being distributed to any managed downstream systems, regardless of what type of user base you have. In my next post we will dig into the central repository, or Identity Vault. That post will go into detail about how we set precedence on data from specific systems, to ensure that if there is a difference in the data coming from different sources, only the highest-precedence value is applied. We will also discuss how we augment the data sets to ensure that we only collect the information necessary for the management of each user and the applications they use within your business.
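As a hedged sketch of the precedence idea (the source names and ordering below are assumptions for illustration only, not a product configuration): when the same attribute arrives from several connected systems, only the value from the highest-precedence source is applied to the vault.

# Attribute precedence, illustrated: lower number = higher precedence.
$precedence = @{ 'HR System' = 1; 'Active Directory' = 2; 'Contractor DB' = 3 }

# The same attribute contributed by more than one source.
$contributions = @(
    @{ Source = 'Active Directory'; Value = 'IT' }
    @{ Source = 'HR System';        Value = 'Information Technology' }
)

# Only the highest-precedence contribution wins.
$winner = $contributions | Sort-Object { $precedence[$_.Source] } | Select-Object -First 1
"department flows from $($winner.Source) with value '$($winner.Value)'"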

As per usual, if you have any comments or questions on this post or any of my previous posts then please feel free to comment or reach out to me directly.

An Identity Consultant’s Summary of the recent Cloud Identity Summit 2017

I’ve just returned from Chicago and the Cloud Identity Summit that was held at the Sheraton Grand Chicago. It was my first CIS conference and reminded me a lot of the now defunct Quest Experts Conference and The Burton Group Conference, both in terms of the content and scale. It definitely had a more intimate feel than the massive Microsoft Ignite category of event which attracts 25k+ attendees. 1400 attendees at CIS was a record for this event, but it still meant you got the 1:1 time with vendors and speakers which is fantastic.

Just like the Quest “The Experts Conference” (TEC) and The Burton Group Conference, if you pick your sessions based on the synopsis and the speaker, the sessions can be highly technical (400+ level) and worthy of the 30-hour journey to get to the conference. I focused on my particular subject of Identity, so this summary is biased towards that track.

A summary of my takeaways that I’ll briefly detail in the post are:

  • ID Pro
  • Strong Authentication / Goodbye Passwords
  • PAM and IGA
  • SCIM 2.0
  • FIDO 2.0

And before I forget, CIS is dead; long live CIS, now known as Identiverse which will be in Boston in June (24-27) 2018. Ping Identity have renamed the conference moving forward.

 

ID Pro

In his keynote on Tuesday, Ian Glazer announced what has been missing from the IDAM community: a professional organisation to represent it. Named ID Pro, with all the details available here, it is a professional organisation for IDAM practitioners. Join now here for US$150. Supported by the Kantara Initiative, this organisation already has the support and backing of the industry.

 

Strong Authentication / Goodbye Passwords

There were numerous sessions around this topic, and it was fantastic to see that the ecosystem to support the holy-grail future of no passwords, but strong authentication, is now present. Alex Simons summed it up nicely in his keynote on Wednesday by setting the goal of one billion logins without passwords by 2018, launching the hashtag to go with it: #1Billionby2018. Check out the FIDO 2.0 summary further below.

 

Privileged Account Management and Identity Governance & Administration

Privileged Account Management (PAM) and Identity Governance & Administration (IGA) are better together. We knew this anyway, and I’ve been approaching it this way with my solutions. It was therefore refreshing to be entertained by Kelly Grizzle in his session about how PAM and IGA can meet through their mutual friend SCIM. In essence, SailPoint have been working heavily on their IGA offering, and with the help of SCIM, now at 2.0, they’ve been working with PAM vendors such as CyberArk to provide the integration and visibility the two need. Kelly’s entertaining and informative presentation can be found here.

 

SCIM 2.0

Mentioned above in the PAM and IGA summary, SCIM 2.0 is now ready for prime time. Whilst SCIM has been around for some time it hasn’t seen widespread adoption in my circles. But that’s about to change. Microsoft have been using it as a primary integration method with Azure AD with the likes of Facebook for Work. Microsoft also have a SCIM MA for Microsoft Identity Manager. I’ll be experimenting with it in the near future.
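To make the protocol a bit more concrete, here is a hedged PowerShell sketch of creating a user against a generic SCIM 2.0 endpoint; the endpoint URL and bearer token are placeholders, while the schema URN, the /Users resource and the application/scim+json content type come from the SCIM 2.0 specifications.

# Minimal SCIM 2.0 user creation against a hypothetical endpoint.
$body = @{
    schemas  = @('urn:ietf:params:scim:schemas:core:2.0:User')
    userName = 'jsmith@contoso.com.au'
    name     = @{ givenName = 'Joe'; familyName = 'Smith' }
    active   = $true
} | ConvertTo-Json -Depth 3

Invoke-RestMethod -Method Post `
    -Uri 'https://scim.example.com/scim/v2/Users' `
    -Headers @{ Authorization = 'Bearer <token>' } `
    -ContentType 'application/scim+json' `
    -Body $body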

 

FIDO 2.0

FIDO first came onto my radar about four years ago. It is in a lot of the workflows we use every day (if you have a modern operating system, Windows 8+, and biometrics on your device or a FIDO-compliant token). With FIDO 2.0, U2F v1.1 and UAF v1.1 now complete, the foundation and enabling services for strong authentication are ready to go.

 

Summary of the Summary

I’ve tried hard to not make this too wordy, but the takeaway is this. Identity is the foundation of who you are and what you do. With all the other disruption in the IT industry around cloud and mobility, identity is always the enabler. Get it right and you can make life easier for your users, more visible for your information security officers and auditable for your compliance requirements. Just keep up, as it’s moving very fast.

It all comes down to “Requirements”

In my last post, I discussed the basic concepts of Identity Management. In this post I’m going to talk about the need to clearly identify the business requirements for IAM as a whole, and not just for a specific technical need. More often than not, IAM projects are spawned to satisfy a specific need, and realistically these days that need is usually the adoption of specific cloud technologies, with Office 365 being the most obvious one.

The issue with doing this is that the solution gets implemented for a single purpose, without any consideration of the “what else?” question, which is always overlooked. Over the course of my career I’ve found that when the next requirement comes up within the business that needs an Identity Management solution, organisations are often left in a situation where they either have to completely rebuild the solution that was built to provide the initial service offering, or they have to set up a completely separate environment, which complicates and increases the overall support effort for solutions that could, more often than not, have been delivered within a single system.

This brings us back to the title, requirements! So, what are requirements? And why are they so important? Simply put, requirements tell you what you want to achieve within the business with the technology you have or are about to invest in.

When thinking of requirements for any IAM solution, there are basic principles that you should always take into consideration. An IAM solution is built to satisfy four basic functions:

  1. Authentication
  2. Authorization
  3. User Management
  4. Central User Store

When establishing requirements within your business for any IAM solution it’s important to understand these basic functions and what they could mean for your business. So let’s break them down.

Authentication

Most would say this is pretty self-explanatory, but there’s often a lot more to it than many think. For example, say you want to enable a SaaS application such as TechnologyOne Finance. Do you want to provide a seamless authentication model with Single Sign-On (SSO), or do you want to keep authentication separate from your local security domains to provide a higher level of security? Or do you want to provide more of a same sign-on solution, where users use their local username and password but are forced to log in every time, which can be a less inviting end-user experience?

Authorization

Authorization within IAM, simply put, provides approval workflows for requests for access to resources, or for the creation of new resources, managed by the IAM solution. Authorization ensures that access compliance and governance processes are followed for all managed resources.

User Management

User management is simply that: it manages the user objects it knows about. This includes any adds, moves or changes that occur to these objects throughout their lifecycle, and when a user leaves the organisation for whatever reason, they are terminated through the standard business processes established as part of the requirements that have been defined.

Central User Repository

This is effectively where everything is stored. In the Microsoft space it’s referred to as the metaverse; in the Novell space it’s referred to as the Identity Vault. But it’s simply a central repository of all user objects, along with configuration items such as workflows and policy rules, as well as various other configuration items.

A common misconception with requirements is that technical teams will look only for the technical requirements needed to satisfy the solution being built. This is where things often go wrong: business requirements come down to the simple business logic of what the primary business objectives are. These business requirements will often generate a raft of technical requirements, but it must start with the basic business requirements.

Summary

So, to summarise, complex IAM solutions all come down to one basic principle: what am I trying to achieve as a business? It is for this reason you need to start with the basics and understand the purpose of the solution being built. You wouldn’t build a house without plans, and you wouldn’t build a road without understanding where it’s going. So why would you do the same with your Identity Management solution?

It’s time to get your head out of the clouds!


For those of you who know me, you are probably thinking: “Why on earth would we want to get our heads out of the cloud, when all you’ve been telling me for years is that we need to adopt cloud!”

This is true for the most part, but my point here is that many businesses are being flooded from every direction by service providers pushing them to adopt or subscribe to their “cloud” based offerings; furthermore, ICT budgets are being squeezed, forcing organisations into SaaS applications. The question that is often forgotten is the how. What do you need to access any of these services? The answer is: an identity!

It is for this reason I make the title statement. For many organisations, preparing to utilise a cloud offering can require months if not years of planning and implementing organisational changes to better enable the ICT teams to offer such services to the business. Historically, “enterprises” have used local datacentres and infrastructure to provide all business services, where everything is centrally managed and controlled locally; controlling access to applications is as easy as adding a user to a group, for example. But to utilise any cloud service you need to take a step back, get your head back down to ground level, and look at whether your environment is suitably ready. Do you have a simple, automated method of providing seamless access to these services, or do you need to invest in getting your business “cloud ready”?

This is where identity comes front and centre, and it’s really the first time in a long time that Identity Management has become a centrepiece of the way a business can potentially operate. Identity provides your business with the “how” factor in a more secure model.

So, if you have an existing identity management platform, great; but that’s not to say it’s ready to consume any of these offerings, it just means you have the potential means. For those who don’t have something in place, welcome! Now, I’m not going to tell you which products you should or shouldn’t use; that is dictated by your requirements. There is no “one size fits all” platform out there; in fact, some businesses may need a multitude of applications to deliver on their requirements. But one thing every business needs is a strategy: a roadmap on how to get to that elusive finish line!

A strategy is not just about the technical changes that need to be made, but also about the organisational changes, processes and policies that will inevitably change as a consequence of adopting a cloud model. One example is the service management model for consumption-based services: the way you manage services provided under a consumption-based model is not the same as a typical managed service model.

These strategies start from the ground up: they work with the business to determine its requirements and use cases. Those two aspects alone can make or break an IAM service. If you do not have a thorough and complete understanding of your business’s requirements, then whatever solution is built will never succeed.

So now you may be asking: what next? What do we as an organisation need to do to get ourselves ready for adopting cloud? All in all, there are basically five principles you need to follow to set the foundation of being “cloud ready”:

  1. Understand your requirements
  2. Know your source of truth/s (there could be more than one)
  3. Determine your central repository (Metaverse/Vault)
  4. Know your targets
  5. Know your universal authentication model

Now, I’m not going to cover all of these in this one post, otherwise you would be reading a novel; I will touch on each of them separately. I would point out that you have probably already read about many of these: they are common discussion points online, and the top two topics are discussed extensively by my peers. The point of difference I will be making is around the cloud, and determining the best approach to these discussions so that when you do have your head in the clouds, you’re not going to get any unexpected surprises.

Using an Azure Function to search the FIM/MIM Metaverse, create a Set and update the Set membership in the FIM/MIM Service

Introduction

This is the third and last post in this series of integrating Microsoft Identity Manager with Azure Functions.

The first detailed how to use an Azure Function to retrieve data from the MIM Service Server. The second detailed how to use an Azure Function to retrieve data from the MIM Sync (Metaverse) Server.

This third post combines the two and then performs an action in the MIM Service. The practical purpose of this could be functions like “find all users in location y” and “enable them for entitlement x” or “add an attribute value on each of their objects”.

Overview

The reasoning for the two-stage approach is that in my experience it is a lot easier to search the Metaverse than the MIM Service to find an object (or objects), and the Metaverse also has all the information about objects, whereas the MIM Service is a ShadowVerse of the Metaverse, containing a subset of the managed objects’ metadata.

Moving forward, the architecture is a hybrid of the first two posts, which introduced the concepts associated with integrating MIM with Azure Functions. As per the other two posts, this is a base working example and concept.

Prerequisites

The prerequisites are the same as for the first and second posts in this series. You’ll need to work through those examples to set up the dependencies and prerequisites. From there you can create one more Azure Function that brings everything together. That’s what I’m covering in this post.

Therefore the prerequisites are;

  • Azure Tenant and a Function Plan
  • Microsoft Identity Manager implementation
  • Remote Powershell configured for your MIM Sync Server
  • Lithnet FIM MIIS Automation Powershell Module installed on your MIM Sync Server
  • The necessary Firewall Rules on your MIM Sync Server and your Azure Network Security Group (assuming your MIM Infrastructure is in Azure) to allow Azure Functions to communicate with MIM Sync and Service Servers

 

This Example performs the following

In this example the HTTP Trigger Azure Function;

  • Takes input for ObjectType, Attribute, AttributeValue, SetName
  • Searches the MIM Sync Metaverse for the input ObjectType, with the AttributeValue in the Attribute
  • Connects to the MIM Service
  • Creates the Set based on the input SetName if it doesn’t exist
  • Adds the objects from the search to the Set
  • Returns the objects added to the Set

In a real-world implementation you’d do the above with a criteria-based Set. This post is about the concept of searching and finding objects, then performing a create and an update. That has many practical applications.

Create your new Azure Function

Just like the other two posts, we’re going to create a new Powershell HTTP Trigger Azure Function.

Upload the Lithnet RMA PowerShell module to your new Azure Function (as per blog post one in this series). You should also be using protected credentials by now, so upload your username/password encryption key as well.
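One common way to handle the protected credentials piece (a sketch only; the file names, paths and account are placeholders) is to encrypt the MIM service account password with an AES key offline, upload the encrypted string and the key file alongside the function, and rebuild the PSCredential inside the function:

# One-off, on your admin workstation: generate a 256-bit AES key and encrypt the password with it.
$key = New-Object byte[] 32
[System.Security.Cryptography.RandomNumberGenerator]::Create().GetBytes($key)
$key | Set-Content .\mim.key
Read-Host -Prompt 'MIM service account password' -AsSecureString |
    ConvertFrom-SecureString -Key $key | Set-Content .\mim.pwd

# Inside the Azure Function: rebuild the credential from the two uploaded files.
$key    = Get-Content "D:\home\site\wwwroot\<functionname>\mim.key"
$secure = Get-Content "D:\home\site\wwwroot\<functionname>\mim.pwd" | ConvertTo-SecureString -Key $key
$cred   = New-Object System.Management.Automation.PSCredential('CORP\svc-mim', $secure)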

 

The Azure Function PowerShell script performs the process detailed above.
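As a rough, hedged sketch of the shape such a script might take (not a full production script): it assumes the classic PowerShell HTTP trigger bindings ($req in, $res out), the Lithnet Resource Management PowerShell module for the MIM Service side (the Get-Resource/New-Resource/Save-Resource usage here is from memory, so verify parameter names and the multivalued attribute handling against the module documentation), and a remote PowerShell session to the Sync server for the metaverse search. Find-MetaverseObject inside the script block is a hypothetical placeholder for whatever query you built in post two.

# Read the POSTed JSON input (classic PowerShell HTTP trigger pattern: $req in, $res out).
$requestBody    = Get-Content $req -Raw | ConvertFrom-Json
$objectType     = $requestBody.ObjectType
$attribute      = $requestBody.Attribute
$attributeValue = $requestBody.AttributeValue
$setName        = $requestBody.SetName

# Rebuild the protected credential from the files uploaded with the function (sketched earlier).
$key    = Get-Content "D:\home\site\wwwroot\<functionname>\mim.key"
$secure = Get-Content "D:\home\site\wwwroot\<functionname>\mim.pwd" | ConvertTo-SecureString -Key $key
$cred   = New-Object System.Management.Automation.PSCredential('CORP\svc-mim', $secure)

# 1. Search the Metaverse on the MIM Sync server over remote PowerShell.
#    Find-MetaverseObject is a placeholder for the metaverse query from post two of this series
#    (e.g. via the Lithnet MIIS Automation module); it is assumed to return matching objects
#    with an AccountName property.
$accountNames = Invoke-Command -ComputerName 'mimsync.corp.local' -Credential $cred -ScriptBlock {
    param($objectType, $attribute, $attributeValue)
    Find-MetaverseObject -ObjectType $objectType -Attribute $attribute -Value $attributeValue |
        ForEach-Object { $_.AccountName }
} -ArgumentList $objectType, $attribute, $attributeValue

# 2. Connect to the MIM Service using the Lithnet Resource Management module.
Import-Module LithnetRMA
Set-ResourceManagementClient -BaseAddress 'http://mimservice.corp.local:5725' -Credentials $cred

# 3. Get the Set if it already exists, otherwise create it.
$set = Get-Resource -ObjectType Set -AttributeName DisplayName -AttributeValue $setName
if ($set -eq $null) {
    $set = New-Resource -ObjectType Set
    $set.DisplayName = $setName
}

# 4. Resolve each matched metaverse object to its MIM Service Person and add it to the Set.
$added = @()
foreach ($accountName in $accountNames) {
    $person = Get-Resource -ObjectType Person -AttributeName AccountName -AttributeValue $accountName
    if ($person) {
        $set.ExplicitMember.Add($person.ObjectID)
        $added += $accountName
    }
}
Save-Resource $set

# 5. Return the members that were added.
Out-File -Encoding ascii -FilePath $res -InputObject ($added | ConvertTo-Json)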

Test it out. Looks good. 88 users matched the value of Sydney in their location attribute.

Verify that the Set was created and the membership updated.

Test calling the Azure Function remotely

Now that it is all working in the Azure Function, let’s try doing it from PowerShell remotely. This time I’m again looking for Person objects that have Sydney in their location attribute, and I’ll create a Set named Sydney-NSW and put them in it.
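A hedged sketch of that remote call (the function URL, function key and attribute name are placeholders for your own values):

# Call the HTTP trigger function remotely: find Person objects with Sydney in their location
# attribute and put them into a Set named Sydney-NSW.
$body = @{
    ObjectType     = 'person'
    Attribute      = 'location'   # whatever the attribute is called in your Metaverse schema
    AttributeValue = 'Sydney'
    SetName        = 'Sydney-NSW'
} | ConvertTo-Json

Invoke-RestMethod -Method Post `
    -Uri 'https://<yourfunctionapp>.azurewebsites.net/api/<functionname>?code=<functionkey>' `
    -ContentType 'application/json' `
    -Body $body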

Brilliant, that works nicely. Let’s verify that the Set was created and has the correct number of users in it. Yes, a perfect match.

Summary

Putting Azure Functions and PowerShell together along with the Lithnet PowerShell modules opens up a world of possibilities for automation and integration of the MIM Service without the need for any additional infrastructure or any considerable effort.

Experiment and let me know what you do with this style of integration.

Follow Darren on Twitter @darrenjrobinson

Windows Server 2012 R2 (ADFS 3.0): Migrating ADFS Configuration Database from WID to SQL

You already have a working ADFS setup which has been configured to use the Windows Internal Database (WID) to store its configuration database. However, things may have changed since you implemented it, and you may now have one (or more) of the below requirements, which will require a move to SQL Server.

  • Need more than five federation servers in the ADFS Farm (supporting more than 10 relying parties)
  • Leverage high availability features of SQL or
  • Enable support for SAML artefact resolution or WS Federation token replay detection.

The below step-by-step procedure should help you with the migration of the ADFS configuration database from WID to SQL with minimal or no downtime (however, plan accordingly such that it has the least impact in case something goes wrong).

The steps also cover configuration of each of the ADFS servers (Primary and Secondary) in the farm to use the SQL Server for its configuration database.

For simplicity, I have used the below scenario, comprising:

Proposed Design

DMZ

  • Two Web Application Proxies (WAP) – wap1 and wap2
  • External load balancer (ELB) in front of the WAPs.

Private / Corporate network

  • Two ADFS Servers – adfs1 and adfs2
  • Internal Load Balancer (ILB) in front of the ADFS Servers
  • SQL Server (Standalone). Additional steps need to be performed (not covered in this blog) when using SQL Server with high availability options such as SQL Always-On or Merge Replication

Backups

Ensure you have a complete backup of your ADFS servers. You can use Windows Server Backup or your third-party backup solution to back up the ADFS servers.

Load Balancer Configuration

During the course of this exercise the internal load balancer will be configured multiple times to ensure a smooth migration with minimal impact to end users.

Remove the primary ADFS Server (adfs1) from the internal load balancer configuration such that all traffic is directed to the secondary server (adfs2).

Primary ADFS Server steps

  • Stop the ADFS windows service by issuing “net stop adfssrv” in an elevated command prompt or via the Windows Services Manager.

net stop adfssrv

  • Download and install SQL Server Management Studio (SSMS) (if not already present)
  • Launch SSMS in Administrator mode
  • Connect to your WID using \\.\pipe\MICROSOFT##WID\tsql\query as the server name in SSMS.

SSMS connect dialog

You should be able to see the two ADFS databases (AdfsArtifactStore and AdfsConfiguration) as shown below:

SSMS showing the two ADFS databases

  • To find the physical location of the ADFSConfiguration and ADFSArtifactStore in WID, run the below query  by starting up a ‘New Query’. The default path is C:\Windows\WID\Data\.
SELECT name, physical_name AS current_file_location FROM sys.master_files

Results showing physical location of DB files

  • Restart WID from SSMS. This is just to ensure that there is no lock on the databases. Right Click on the WID db and select ‘Restart‘.

Restarting the database

  • Now we need to detach both the databases. Run the below query on the WID using SSMS
USE [master]
GO
EXEC master.dbo.sp_detach_db @dbname = N'AdfsArtifactStore'
GO
EXEC master.dbo.sp_detach_db @dbname = N'AdfsConfiguration'
GO

Running the commands on the WID

  • Now copy the database files identified earlier from the Primary ADFS Server to your SQL Server’s Data directory (for example C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA).


SQL Server – Steps

  • On the SQL Server, bring up the SQL Server Management Studio (SSMS) and connect to the SQL instance (or default instance) where the ADFS databases will be hosted.
  • Create a login for the ADFS Windows service account (the account which was used for the initial ADFS setup and configuration). I used Contoso\svcadfs.

Adding SQL Server user

  • Now attach the databases copied earlier to the SQL Server. Run the below using SQL Server Management Studio. Modify the path as appropriate if the db files were copied to a location other than ‘C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA’

 USE [master]
 GO
 CREATE DATABASE [AdfsConfiguration] ON
 ( FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\AdfsConfiguration.mdf' ),
 ( FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\AdfsConfiguration_log.ldf' )
 FOR ATTACH
 GO

 CREATE DATABASE [AdfsArtifactStore] ON
 ( FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\AdfsArtifactStore.mdf' ),
 ( FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\AdfsArtifactStore_log.ldf' )
 FOR ATTACH
 GO
 ALTER DATABASE AdfsConfiguration set enable_broker with rollback immediate
 GO

  • On successful execution of the above, you should be able to see the two ADFS databases in SSMS (you may need to do a refresh if not displayed automatically)

Two databases shown in SSMS

  • Ensure that the ADFS service account has the “db_genevaservice” role membership on both databases (a scripted alternative is sketched below)

Grant service account right database role
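If you would rather script that grant than click through SSMS, here is a sketch using Invoke-Sqlcmd (from the SqlServer/SQLPS PowerShell module), assuming the service account’s Windows login already maps to a user in each database, which it normally will after the attach:

# Add the ADFS service account to the db_genevaservice role in both ADFS databases.
$query = "ALTER ROLE [db_genevaservice] ADD MEMBER [Contoso\svcadfs];"
foreach ($db in 'AdfsConfiguration', 'AdfsArtifactStore') {
    Invoke-Sqlcmd -ServerInstance 'SQLServer' -Database $db -Query $query
}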

Firewall Configuration

Ensure that the SQL Server is reachable from the ADFS servers on TCP port 1433. You may need to update network firewalls and/or the host firewall configuration on the SQL Server (depending on the type of network setup you have).
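On the SQL Server itself, the host firewall piece can be scripted with a quick rule (a sketch; scope it to your ADFS servers’ addresses as appropriate):

# Allow inbound SQL Server traffic (default instance, TCP 1433) through the host firewall.
New-NetFirewallRule -DisplayName 'SQL Server 1433 (ADFS configuration DB)' `
    -Direction Inbound -Protocol TCP -LocalPort 1433 -Action Allow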

Primary ADFS Server Steps

  • Start the ADFS windows service by issuing “net start adfssrv” from an elevated command prompt or from the Windows Services Manager
  • Launch a PowerShell console in Administrator Mode and execute the below lines in order

 # Point the ADFS configuration database connection string at the SQL Server instance.
 $temp = Get-WmiObject -Namespace root/ADFS -Class SecurityTokenService
 $temp.ConfigurationdatabaseConnectionstring = "data source=[sqlserver\instance];initial catalog=adfsconfiguration;integrated security=true"
 $temp.put()

Note: replace [sqlserver\instance] with your actual server\instance (if not running as a named instance, just the server name). I am using ‘SQLServer’ as it is the hostname of the SQL Server being used in this example.

PowerShell Configuration

  • Change the connection string property in “AdfsProperties” by issuing the below command from the PowerShell console

Set-AdfsProperties -ArtifactDbConnection "Data Source=[sqlserver\instance];Initial Catalog=AdfsArtifactStore;Integrated Security=True"

Note: replace [sqlserver\instance] with the name of your SQL Server and instance (as applicable)

PowerShell Configuration

  • Restart the ADFS service by executing “net stop adfssrv” and “net start adfssrv” from an elevated command prompt or from the Windows Services Manager.

Restarting service

  • To check if the configuration has been successful, run “Get-AdfsProperties” from a PowerShell console. You should see the ADFS properties listed (as below), with the key value being ArtifactDbConnection: Data Source=SQLServer; Initial Catalog=AdfsArtifactStore; Integrated Security=True

Output from Get-AdfsProperties

This completes the migration of the ADFS configuration database from WID to SQL and also the configuration of the Primary ADFS server to use the SQL Database. Now we need to configure the secondary ADFS server(s) to use the SQL Database.

Load Balancer Configuration

Update the internal load balancer to:

  • Add the Primary ADFS (adfs1) to the load balance configuration and
  • Remove the secondary ADFS (adfs2) server which needs to be reconfigured to point to the SQL Server.

Secondary ADFS Server steps

  • Stop the ADFS Windows service by issuing “net stop adfssrv” in an elevated command prompt
  • To change the configuration database connection string to point to the new SQL ADFS configuration database run the below command lines (in order) from a PowerShell Console
$temp = Get-WmiObject -Namespace root/ADFS -Class SecurityTokenService
$temp.ConfigurationdatabaseConnectionstring = "data source=<SQLServer\SQLInstance>;initial catalog=adfsconfiguration;integrated security=true"
$temp.put()

Note: replace <SQLServer\SQLInstance> with the name of your SQL Server / instance as used for the primary server configuration.

PowerShell Configuration

  • Start the ADFS service by executing “net start adfssrv” from an elevated command prompt and verify that the service starts up successfully. (I have had an issue where my ADFS server was, strangely, not able to resolve the NetBIOS name of the SQL Server, hence the service wouldn’t start properly. Also, check that the federation service is running as the service account that was granted the login to the SQL database.)
  • To check if the configuration has been successful, run “Get-AdfsProperties” from a PowerShell console. You should see the ADFS properties listed (as below), with the key value being ArtifactDbConnection: Data Source=SQLServer; Initial Catalog=AdfsArtifactStore; Integrated Security=True

Output from Get-AdfsProperties

Repeat the above steps for each of the secondary servers (if you have more than one) and ensure that all ADFS servers are added back to the internal load balancer configuration.

I hope this post has been useful.