Building a Microsoft Identity Manager PowerShell Management Agent for Workday HR

Before I even get started with this post, let me state that the integration I describe here is not a standalone solution. Integrating with Workday for any organisation of significant size will require multiple integration points, each providing coverage for the scenarios of your implementation. I list a few in this post, but Alexander Filipin has already done an awesome job here.

You may state that there is, of course, the Azure Active Directory Provisioning Service for Workday. But what if you need more granular customisation than that provides, or you have requirements to get that data to a number of other systems and want connectivity to the authoritative source? Those are the requirements I had, and why I built a Management Agent for Workday to consume Workday HR data directly.

As the title implies, it uses the ever-versatile Granfeldt PowerShell Management Agent. The other key component is a PowerShell Module that eases integration with Workday's SOAP API: specifically, the Workday API PowerShell Module available here.

Enabling the Workday (Get_Workers) API

In order to access the Workday API you need to have an API account created. I pointed the Workday Support guys to the Microsoft Azure Inbound Workday Provisioning documentation, specifically the ‘Configure a system integration user in Workday‘ section in that link.

Once enabled they were able to give me a Service and Tenant name along with a Username and Password.

  • when using this information, the Username you log in with is the username combined with the tenant. So if the username is ‘API User’ and the Tenant is ‘Identity_Corp’, then the loginID for our purposes is API User@Identity_Corp
  • the URL you are provided combines the Service and Tenant names. For the Human Resources endpoint it will look something like https://wd3-impl-services1.workday.com/ccx/service/TENANTNAME/Human_Resources/v30.2
    • where wd3-impl-services1 is the Service Name (the snippet below shows how these pieces fit together)
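To make that concrete, here is a small sketch of how those pieces combine (the values are placeholders for your own Service name, Tenant name and API account);

# Placeholder values - substitute your own Service name, Tenant name and API account
$serviceName = 'wd3-impl-services1'
$tenantName  = 'Identity_Corp'
$apiUser     = 'API User'

# Username is username@tenant
$username = "$($apiUser)@$($tenantName)"

# The Human Resources endpoint combines the Service and Tenant names
$hrUri = "https://$($serviceName).workday.com/ccx/service/$($tenantName)/Human_Resources/v30.2"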

Install the WorkdayAPI PowerShell Module

On your FIM/MIM Sync Server you will need to install the Workday API PowerShell Module (available here), using an elevated PowerShell session.
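Assuming the module is published to the PowerShell Gallery as WorkdayApi (if it isn't available that way in your environment, copy the module folder from the repository instead), installation from an elevated session looks like this;

# Run from an elevated PowerShell session on the FIM/MIM Sync Server
Install-Module -Name WorkdayApi -Scope AllUsers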

Unblock the PowerShell Module and Scripts

After installing the Workday API PowerShell Module it should be located in ‘C:\Program Files\WindowsPowerShell\Modules\WorkdayApi’. You will need to unblock the module and scripts. Run the following two commands in an elevated PowerShell session.

Get-ChildItem 'C:\Program Files\WindowsPowerShell\Modules\WorkdayApi' | Unblock-File
Get-ChildItem 'C:\Program Files\WindowsPowerShell\Modules\WorkdayApi\scripts' | Unblock-File

Verify your Execution Policy

As the PowerShell Module is unsigned you might need to do something similar to the following. The Get-ExecutionPolicy -List command will show you what the Execution Policy settings currently are.

Set-ExecutionPolicy "Unrestricted" -Scope Process -Confirm:$false
Set-ExecutionPolicy "Unrestricted" -Scope LocalMachine -Confirm:$false

Import Analytics

50k records with just the base profile (no -include work or -include personal options) takes ~7 minutes to ‘stage’ into the connector space. 50k records WITH work and personal metadata takes ~32 hours at a pretty consistent rate of ~20 mins/500 user records.

If you are retrieving just the Base record then the networking receive bandwidth consumption is ~240kbps. When retrieving the full records as a batch process the networking receive bandwidth consumption is ~3Mbps as shown below.

Full Object Network Graph

Why is this important?

The approach you need to take for the first “FULL” Sync will depend on how many records you have in Workday. I found that trying to retrieve full records in one call for anything over ~5000 records became inconsistent: I wouldn’t get the full dataset, and the machine running it would start to run out of resources (processing power and memory). If you have only a few thousand records, requesting full records in one call will probably suffice.

Now, I have ~100k records to return. What I found worked best is to get just the base record for all users and then the full record for each user using pagination (via PSMA Paged Imports; I have mine set to 500). The PSMA Paged Imports feature will then process the objects through the MA 500 at a time. That way you're not stressing the host running the Sync Engine to the maximum, and you don't have to wait an hour or more to see any processing of objects on the MA.
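As a rough sketch of that approach (the Get-WorkdayWorker cmdlet and its -IncludeWork/-IncludePersonal switches come from the WorkdayApi module referenced above; verify the exact parameter names against the module's help);

# Stage 1: base records only for all workers - fast and light on bandwidth
$baseWorkers = Get-WorkdayWorker

# Stage 2: full record per worker, processed in batches of 500 (mirroring the PSMA Paged Import size)
$pageSize = 500
for ($i = 0; $i -lt $baseWorkers.Count; $i += $pageSize) {
    $upper = [math]::Min($i + $pageSize, $baseWorkers.Count) - 1
    foreach ($worker in $baseWorkers[$i..$upper]) {
        # Full object including work and personal metadata
        Get-WorkdayWorker -WorkerId $worker.WorkerId -IncludeWork -IncludePersonal
    }
}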

Once you have completed a Full Sync, if you are at any significant scale you will want to perform Delta Syncs for the objects that have changed since your last sync. I’m not going to cover that in this post, but in a separate one in the future.

Here is a screenshot showing the time taken for a Stage (Import) of 50k objects: just under 33 hours.

Import - Stage Only.PNG

Other Options for Scale

If you are a large organisation this solution isn’t necessarily a valid one in isolation, as I indicated in the opening paragraph. Consider it ancillary augmentation to a multi-pronged implementation (as described nicely by Alexander Filipin here). Potentially something like;

  • Azure Active Directory Inbound Provisioning for object creation
  • A Management Agent such as the one I describe in this post for certain aspects
    • and a modification or two to identify new accounts from a Base Workday discovery and import the full object only for those during the week, with a full sync on the weekends; or
    • delta syncs using the Workday Transaction Log Criteria Data and Transaction_Date_Range_Data
      • I’ll cover this in a future post, but essentially on every sync I store a cookie file with a watermark of the time of the sync. On the next delta sync I retrieve the cookie file with the timestamp and make a call to get all objects changed since the previous sync up to the current time (a rough sketch of this pattern follows this list)
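The watermark pattern itself is simple enough to sketch (the file name and location are arbitrary);

# Arbitrary location for the watermark cookie file
$cookieFile = 'C:\PSMA\Workday\lastSync.xml'

# At the start of a delta sync, read the previous watermark (fall back to a full import if none exists)
$lastSync = if (Test-Path $cookieFile) { Import-Clixml $cookieFile } else { $null }
$now = (Get-Date).ToUniversalTime()

# ... retrieve the objects changed between $lastSync and $now via the Transaction Log criteria ...

# On successful completion, store the new watermark for the next run
$now | Export-Clixml $cookieFile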

PSMA Workday Management Agent Script Files

Wow, what a lot of caveats and clarifications. But with all that said, below are base Schema and Import script examples for the Granfeldt PowerShell Management Agent that leverage the Workday API PowerShell module.

Schema.ps1

The schema is the base schema for my tenant. You shouldn’t have to change anything here unless you are retrieving additional attributes you want in MIM.

Import.ps1

The import script leverages the AuthN creds from the MA config. Make sure the Username is in the format UserID@TenantName. Also update the following (a minimal sketch of the overall script structure follows this list);

  • Line 10 for the location you put your extension as well as the 8.3 format path to the MA Debug folder
  • Line 30 for the correct Service and Tenant info
  • Make sure you have Paged Imports selected on the Global Parameters screen of the MA Configuration
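For orientation, the overall shape of a Granfeldt PSMA import script is sketched below. The parameter names follow the common PSMA conventions and the attribute names are illustrative only; they need to match your Schema.ps1 and the way your MA is configured.

param (
    $Username,        # passed in from the MA configuration
    $Password,
    $OperationType    # 'Full' or 'Delta'
)

# Build a credential from the MA config values
$securePassword = ConvertTo-SecureString $Password -AsPlainText -Force
$credential = New-Object System.Management.Automation.PSCredential ($Username, $securePassword)

# ... connect to Workday with the WorkdayApi module and retrieve $workers here ...

# Emit each object to the Sync Engine as a hashtable keyed by the schema attribute names
foreach ($worker in $workers) {
    @{
        'objectClass' = 'WorkdayUser'
        'WorkerId'    = $worker.WorkerId
        'FirstName'   = $worker.FirstName
        'LastName'    = $worker.LastName
    }
}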

Export.ps1

I haven’t provided an example. The Workday API PowerShell Module has examples for updating Email, Phone and Photos. You can implement what you require.

Summary

The sample Workday MA config in this post will give you a base integration with Workday. It is unlikely to give you everything you need, and there probably isn’t a single solution that will, unless your organisation is quite small. There are other options as mentioned in this post, and also the Workday Reports REST API. But those are topics for future posts.

Automate the Generation of a Granfeldt PowerShell Management Agent Schema Definition File

Generating Schema.ps1 for the Granfeldt FIM/MIM PowerShell Management Agent

Getting started writing your first Forefront/Microsoft Identity Manager Granfeldt PowerShell Management Agent can be a bit daunting. Before you can do pretty much anything you need to define the schema for the PSMA. Likewise if you have written many, the generation of the schema file often seems to take longer than it should and can be a little tedious when all you want to do is write the logic for the Import and Export scripts.

After a few chats with Soren around enhancements for the PSMA I suggested it would be awesome if the generation of the schema.ps1 file could be (semi)automated. So here is my first stab at doing just that.

My approach is;

  • Using PowerShell, get an object representative of the objects that will be managed by the PSMA
  • Enumerate the Properties of the PSObject and generate the Schema script accordingly
  • All that is left to do afterwards is;
    • define your anchor
    • define the name of the ObjectType
    • combine multiples if your MA will have multiple ObjectClasses
      • Update $obj to $obj2 etc for any additional object classes residing in the same schema file

Below I provide four examples covering the script that generates the schema definition along with the output. The four examples cover;

  • Azure AD User
  • Azure AD Group
  • Workday User
  • Flat File CSV

Example 1: Azure Active Directory User

The example below utilises the AzureAD PowerShell Module to connect to Azure AD. It then gets a User Object (update line 7 for a user to retrieve) and enumerates the properties of the User to generate the Schema file.
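The generator itself boils down to something like the following sketch (the property-type mapping is approximate, the AzureAD module is assumed to be installed, and the UPN is a placeholder);

# Connect and grab a representative user (placeholder UPN - update for a user in your tenant)
Connect-AzureAD
$user = Get-AzureADUser -ObjectId 'jane.doe@yourtenant.onmicrosoft.com'

# Start with the Anchor and objectClass lines the PSMA schema requires
$schema = @()
$schema += '$obj = New-Object -Type PSCustomObject'
$schema += '$obj | Add-Member -Type NoteProperty -Name "Anchor-YourAnchorATTR|String" -Value ""'
$schema += '$obj | Add-Member -Type NoteProperty -Name "objectClass|String" -Value "YourObjectClass"'

# Enumerate the object's properties and map each to a PSMA schema line
foreach ($prop in ($user | Get-Member -MemberType Property)) {
    $name = $prop.Name
    if ($prop.Definition -like '*Boolean*') {
        $schema += '$obj | Add-Member -Type NoteProperty -Name "' + $name + '|boolean" -Value $true'
    }
    elseif ($prop.Definition -like '*List*' -or $prop.Definition -like '*Collection*') {
        $schema += '$obj | Add-Member -Type NoteProperty -Name "' + $name + '|String[]" -Value ("","")'
    }
    else {
        $schema += '$obj | Add-Member -Type NoteProperty -Name "' + $name + '|string" -Value "string"'
    }
}

# Write the generated schema out, ready to drop into the MA folder
$schema | Out-File .\schema.ps1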

The output looks like this:

$obj = New-Object -Type PSCustomObject
$obj | Add-Member -Type NoteProperty -Name "Anchor-YourAnchorATTR|String" -Value ""
$obj | Add-Member -Type NoteProperty -Name "objectClass|String" -Value "YourObjectClass"
$obj | Add-Member -Type NoteProperty -Name "AccountEnabled|boolean" -Value $true
$obj | Add-Member -Type NoteProperty -Name "AgeGroup|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "AssignedLicenses|String[]" -Value ("","")
$obj | Add-Member -Type NoteProperty -Name "AssignedPlans|String[]" -Value ("","")
$obj | Add-Member -Type NoteProperty -Name "City|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "CompanyName|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "ConsentProvidedForMinor|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "Country|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "CreationType|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "DeletionTimestamp|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "Department|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "DirSyncEnabled|boolean" -Value $true
$obj | Add-Member -Type NoteProperty -Name "DisplayName|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "ExtensionProperty|String[]" -Value ("","")
$obj | Add-Member -Type NoteProperty -Name "FacsimileTelephoneNumber|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "GivenName|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "ImmutableId|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "IsCompromised|boolean" -Value $true
$obj | Add-Member -Type NoteProperty -Name "JobTitle|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "LastDirSyncTime|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "LegalAgeGroupClassification|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "Mail|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "MailNickName|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "Mobile|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "ObjectId|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "ObjectType|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "OnPremisesSecurityIdentifier|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "OtherMails|String[]" -Value ("","")
$obj | Add-Member -Type NoteProperty -Name "PasswordPolicies|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "PhysicalDeliveryOfficeName|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "PostalCode|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "PreferredLanguage|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "ProvisionedPlans|String[]" -Value ("","")
$obj | Add-Member -Type NoteProperty -Name "ProvisioningErrors|String[]" -Value ("","")
$obj | Add-Member -Type NoteProperty -Name "ProxyAddresses|String[]" -Value ("","")
$obj | Add-Member -Type NoteProperty -Name "RefreshTokensValidFromDateTime|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "ShowInAddressList|boolean" -Value $true
$obj | Add-Member -Type NoteProperty -Name "SignInNames|String[]" -Value ("","")
$obj | Add-Member -Type NoteProperty -Name "SipProxyAddress|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "State|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "StreetAddress|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "Surname|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "TelephoneNumber|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "UsageLocation|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "UserPrincipalName|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "UserType|string" -Value "string"

Update the Anchor with the attribute you’d like to use (I recommend ObjectId), give the ObjectClass a name for how you’d like it represented on your MA (User, AADUser or similar), save it as something like schema.ps1 in your MA folder, and you can get started.

Example 2: Azure Active Directory Group

The example below utilises the AzureAD PowerShell Module to connect to Azure AD. It then gets a Group Object (update line 7 for a group to retrieve) and enumerates the properties of the Group to generate the Schema file.

The output looks like this:

$obj = New-Object -Type PSCustomObject
$obj | Add-Member -Type NoteProperty -Name "Anchor-YourAnchorATTR|String" -Value ""
$obj | Add-Member -Type NoteProperty -Name "objectClass|String" -Value "YourObjectClass"
$obj | Add-Member -Type NoteProperty -Name "DeletionTimestamp|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "Description|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "DirSyncEnabled|boolean" -Value $true
$obj | Add-Member -Type NoteProperty -Name "DisplayName|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "LastDirSyncTime|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "Mail|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "MailEnabled|boolean" -Value $true
$obj | Add-Member -Type NoteProperty -Name "MailNickName|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "ObjectId|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "ObjectType|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "OnPremisesSecurityIdentifier|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "ProvisioningErrors|String[]" -Value ("","")
$obj | Add-Member -Type NoteProperty -Name "ProxyAddresses|String[]" -Value ("","")
$obj | Add-Member -Type NoteProperty -Name "SecurityEnabled|boolean" -Value $true

Update the Anchor with the attribute you’d like to use (I recommend ObjectId), give the ObjectClass a name for how you’d like it represented on your MA (Group, AADGroup or similar), save it as something like schema.ps1 in your MA folder, and you can get started.

Example 3: Workday User

The example below utilises the Workday API PowerShell Module to connect to Workday. It then gets a User Object and enumerates the properties of the User to generate the Schema file.

Update

  • Line 6 for your ServiceName and Tenant.
  • Line 13 for an object to retrieve

This script differs from the AAD User and Group examples above in that the PowerShell object returned uses NoteProperty as the member type, so I updated Line 14 for that. Also, when parsed by Get-Member the attribute definition includes a value, so I had to take a substring of the result to get the attribute name. That is what this change does:

$d[1].substring(0,$d[1].indexof("="))

The output looks like this:

$obj = New-Object -Type PSCustomObject
$obj | Add-Member -Type NoteProperty -Name "Anchor-YourAnchorATTR|String" -Value ""
$obj | Add-Member -Type NoteProperty -Name "objectClass|String" -Value "YourObjectClass"
$obj | Add-Member -Type NoteProperty -Name "BusinessTitle|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "FirstName|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "JobProfileName|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "LastName|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "Location|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "PreferredName|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "UserId|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "WorkerDescriptor|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "WorkerId|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "WorkerType|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "WorkerTypeReference|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "WorkSpace|string" -Value "string"

Example 4: Flat File CSV

The example below utilises a sample CSV file with headers. It uses the Header row to generate the Schema file. It defaults all columns to strings.

Update;

  • Line 2 for your CSV file name (a rough sketch of the generator follows below)
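A rough sketch of the CSV variant (the file name is a placeholder);

# Read the CSV and take the column names from the header row
$csv = Import-Csv -Path '.\users.csv'    # placeholder - update for your CSV file
$columns = $csv[0].psobject.Properties.Name

$schema = @()
$schema += '$obj = New-Object -Type PSCustomObject'
$schema += '$obj | Add-Member -Type NoteProperty -Name "Anchor-YourAnchorATTR|String" -Value ""'
$schema += '$obj | Add-Member -Type NoteProperty -Name "objectClass|String" -Value "YourObjectClass"'

# Every column defaults to a string attribute
foreach ($column in $columns) {
    $schema += '$obj | Add-Member -Type NoteProperty -Name "' + $column + '|string" -Value "string"'
}

$schema | Out-File .\schema.ps1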

The output looks like this (for my CSV File):

$obj = New-Object -Type PSCustomObject
$obj | Add-Member -Type NoteProperty -Name "Anchor-YourAnchorATTR|String" -Value ""
$obj | Add-Member -Type NoteProperty -Name "objectClass|String" -Value "YourObjectClass"
$obj | Add-Member -Type NoteProperty -Name "id|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "name|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "displayName|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "comments|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "created|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "endDate|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "lastLogon|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "modified|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "startDate|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "status|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "type|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "groups|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "costCenter|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "country|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "department|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "division|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "email|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "employeeNumber|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "familyName|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "givenName|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "honorificPrefix|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "honorificSuffix|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "locale|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "location|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "manager|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "middleName|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "organization|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "phoneNumber|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "preferredLanguage|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "preferredName|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "secondaryEmail|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "secondaryPhoneNumber|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "timezone|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "title|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "risk|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "WorkerWid|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "WorkerDescriptor|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "WorkerId|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "OtherId|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "JobProfileName|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "WorkSpace|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "WorkerTypeReference|string" -Value "string"
$obj | Add-Member -Type NoteProperty -Name "WorkerType|string" -Value "string"
$obj

Summary

Using a simple script and an example object we can quickly create the basis for a Granfeldt PSMA Schema Definition script.
As shown with the Workday example, a minor tweak was required, but it was still a lot quicker than generating the schema manually.

Hopefully this helps you get started quickly with your first, or next PSMA that you are building.

Squeezing the Design Process into an Agile world – a real world story

You’ve just been assigned to your first project. It’s to build a product. You’re excited and nervous at the same time, and you think – “finally, a project I can sink my teeth into. I can adopt design thinking, do my user research, find user pain points with the client and come up with a killer design that everyone loves, and I’ll be the new star of my organisation.”

You walk into your client’s office and meet the “scrum master”. What’s that again? you think. Oh yeah – this is one of those Agile projects. That’s cool, you think, as you quickly rummage through your notes from class and try to remember all the lingo, like product backlog, sprints, scrums, features, user stories, epics… hang on, where do I fit into all this? Never mind, I have to rush to my first scrum meeting.

Everyone is standing around a wall of post-it notes. Post-it notes! A UX designer’s best friend. I’m home. Wait a minute. They’re all saying a bunch of things I don’t understand, and it turns out we are already halfway through the project and all the product features have already been decided on by a BA or, if you’re lucky, a previous UX designer who was involved in the ‘discovery phase’.

“Peter – can you make this section stand out a bit more, try a few things, and can it be done so we can release it this sprint?” asks the scrum master. “Sure,” you reply. Wait a minute, did I just get cornered into making things look pretty? Is this what I signed up for? Why am I making it stand out more?

You take your seat next to a developer after the scrum meeting feeling a bit deflated and introduce yourself. After speaking briefly, you realise he has a very different take on what UX design is and you think to yourself this is going to be a long uphill battle.

Fast forward to the end of your first sprint: you’ve slightly tweaked some features that needed to be finished and helped the dev guys put them into HTML and CSS. You think you’ve gotten your head around the whole agile way of working, and are beginning to understand things like velocity points. You haven’t exactly gotten to work in the design process, but at least you’re learning how agile works. It’s time for the first ‘showcase’ meeting, and as it is an internal project you manage to sneak in some guerrilla testing that morning by walking around and asking users to have a look at the new features that have just been released. Generally, they seem to get it, but you do notice a few holes in the design after seeing them struggle with some features. At least now you can somewhat justify your design decisions and have some ideas for the next sprint.

It’s come time for a ‘retrospective’, and your team mates were nice enough to say that what went well was your onboarding. You personally thought it was a train wreck, but at the same time you’re relieved they have some trust in your UX abilities, even if you feel you didn’t really do any UX work. It comes time to voice what didn’t go so well, and you say something along the lines of “the designer should be working on designs a sprint ahead of the dev team” – you’re not quite sure if you even understood what you just said. Then the product owner chimes in and says “my boss doesn’t really get the my favourites section”. Finally! A chance to state that users weren’t getting it either. “I could use some of the next sprint to test out the ‘my favourites’ section with users to see where they are struggling and update the designs based on that,” you say. “Great! How many points?” she replies.

How many points? How do I answer that? I’ll have to first think of scenarios to test, then test the current product, then update the design based on my findings, then make a prototype, then test again, then update, then…. Oh no, they’re looking at me. “Umm, 8 points,” you say. “Can the developers work on other backlog items while you’re working on that?” she asks. “Yeah, there’s other features to work on in this sprint, we can do that in the first week,” replies one of the developers. OK, so now I have not the whole sprint, but half the sprint to do this. Oh well. Looking back, I could have said “how about we leave the development of the newly designed feature till the next sprint while I work on the design this sprint”. Experience, and a bit of confidence that you will inevitably build, will ensure you say this next time.

And thus, you can see the conundrum that many designers and no doubt, you, will face when working in an agile project.

“Although any sprint can theoretically aim to address design issues, the pressure on designers to not hold up the rest of the development team once normal sprinting has begun is enormous. However, the time needed to explore very complex design decisions could drastically slow down team momentum.”

 

Fitting Big-Picture UX into Agile Development, Smashing Magazine, Nov 6, 2012

I could have written a blog about things like sprint zero, with diagrams demonstrating how a UX designer should be involved in the discovery phase before user stories are created, or how a UX designer can work a sprint ahead, but there’s plenty on that. In reality this doesn’t always happen. You may be thrown in mid-project, or mid-sprint! And it’s going to take time to get to know your team and how they work. The real world is a mess; so is design. Yes, there is a design process, but as a designer you will realise that the process isn’t always linear. You’re going back and forth through that double diamond (or whatever dogma you’ve been fed). You sometimes have to make compromises, you sometimes have to wait it out, and gradually, hopefully, with your speaking prowess, more user adoption and your passion for design thinking, you will convince your agile team mates to start allowing more time for human-centred design before and during an agile project. Don’t be disheartened, pick your battles, and remember that at the end of the day it’s not a process war: everyone on the team wants to build a great product, and it’s how you get there that can get messy – something that design and agile have in common.

PS – I will include a diagram after all.

messydesign

Deploy and Add SPFx webparts to Modern Pages using OfficeDevPnP CSOM library

In the previous blog here, we looked at how to install apps on a SharePoint site. With SharePoint and OfficeDevPnP CSOM, we can also add web parts to Modern Pages, both out-of-the-box (OOB) web parts and custom web parts. For OOB web parts, refer to Chris O’Brien’s article here, where he has provided the steps and also the web part IDs for the OOB web parts, which is really helpful.

In this blog, we will look at the steps to add a custom web part and set its properties.

For the code below, we will use the OfficeDevPnP CSOM library. The high-level steps for the implementation are below:

1. Find the page to add the web part to. For creating a modern page programmatically, refer to this link here.

2. Check if the web part is already added to the page; if not, add it using the web part ID. We will read the web part ID from a JSON class where we have kept the details of the custom web part.

3. After the web part is added, set the web part properties using the JSON schema of the properties, via Newtonsoft.Json.

Note: If the web part app has only just been installed, it might take some time for the web part to be ready for use. Hence, put a wait state in place until the web part is loaded and ready.

The code below has the steps for adding the web part programmatically using CSOM.
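If you would rather script it, the same steps can also be expressed with PnP PowerShell. This is a hedged sketch only: the Add-PnPClientSideWebPart cmdlet comes from the SharePointPnPPowerShellOnline module, and the site URL, page name, component name and properties are placeholders.

# Connect to the target site
Connect-PnPOnline -Url 'https://yourtenant.sharepoint.com/sites/targetsite' -Credentials (Get-Credential)

# Add the custom web part to a modern page by its component name (or ID) and set its properties
Add-PnPClientSideWebPart -Page 'Home' -Component 'My Custom WebPart' -WebPartProperties @{ description = 'My custom web part'; listName = 'Documents' }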

Conclusion

The above process can be used to add a web part programmatically onto a page and set its properties.

Happy Coding !!

Integrating with SailPoint IdentityNow Private (v1) APIs using PowerShell

How to generate the ‘Password Hash’ to leverage the IdentityNow Private APIs

Recently I’ve posted about integrating with the SailPoint IdentityNow APIs.

So why another post on a very similar subject? Well, not all IdentityNow APIs are exposed on the v2 API endpoints that were leveraged in the previous posts. The v1 (Private) API list is detailed on SailPoint Compass here and contains a number of functions that are extremely useful. And here is where it gets interesting: the authentication methods between v1 and v2 are not the same. There is a document, also on SailPoint Compass here, that gives you most of what you need and describes IdentityNow’s oAuth methods.

But here is the kicker. In order to use the older v1 API endpoints you need to generate a hash for the user associated with the oAuth 2.0 authentication, which is in addition to the Client ID and Client Secret. The only method that I’m aware of (and believe me, I lost time and effort searching) that SailPoint provides to generate the hash is to use the File Upload Utility, which is available on Compass here. And that method only came to my attention after posting to the IdentityNow Community.

Now let’s say you run into the problem I did with the version of Java (yes it is a Java Utility) I had installed (v1.7);

java version "1.7.0_71"
Java(TM) SE Runtime Environment (build 1.7.0_71-b14)
Java HotSpot(TM) 64-Bit Server VM (build 24.71-b01, mixed mode)

Wrong Version of Java.PNG

I didn’t have a need for Java, in fact I was surprised it was even installed on my laptop. Seems it was from an installation of Virtual Box some time ago. I didn’t want to go and get another version etc etc. But I did spin up a VM and get the Java App to run and generate the hashed password. A hashed password for IdentityNow v1 API integration looks like this;

160a028019a1ce58c679f2216a7f707fa666a674772b4742cef7b08eab99de7b

Looking at that I guessed SHA256, but there was something else going on (check the case).

The Premise and Intent

I’ve generated the password hash, so why not just move on? What I was looking to achieve was the ability for a number of services to interact with IdentityNow via the v1 Private APIs. As part of those processes, and for management of that privileged access, it needs to be automated. Running a Java app to generate the hashed password for each identity and on each password change wasn’t an option.

The Solution

What I ended up doing, as there was no accessible documentation around how the hashed password is generated for v1 API integration, still makes me feel a bit dirty and guilty that I had to do it. Maybe I shouldn’t even be documenting it? But we are licensed for this service, and integration with it is still limited to the functions available via the API and the identity you are authenticating with.

I decompiled the Java App, peeked inside and looked to see how the hash was being generated. Then I mimicked the process in PowerShell.

Direct Integration with IdentityNow Private (v1) APIs

Direct integration is essentially like any other API. What you need is your Client ID and Client Secret, the IdentityNow Login Name of the admin account you will authenticate as, the hashed password for that account (covered below), and your tenant org name.

The process is;

  1. Generate the Request to the Token Endpoint
  2. Submit the request and obtain a token
  3. Execute subsequent requests with the Bearer Token received in Step 2
  4. Renew token before it expires if continuing to make requests longer than the token life (default 60 mins)

That all looks simple, right? Except for that bit about the ‘hashed password’.

The Special Sauce

You’ve made it this far, so you’re probably looking to do this. Here it is. The inputs are the IdentityNow Login Name for an Admin account and the plaintext password for that account.

In order to simplify generating the hash you will need to install the PowerShell Community Extensions (PSCX) Module which is in the PowerShell Gallery here. This provides the super handy Get-Hash cmdlet that I use extensively.

You should be on PowerShell 5.1 or later so the following will install it for you (from an elevated PowerShell console);

Install-Module -Name Pscx

Generate passwordhash Script

Update lines 2 and 3 for the Admin Identity you are generating the hash for. Lines 8 and 9 are the lines that generate the hash and prepare it for use in requesting a token.

Calling IdentityNow Private APIs with PowerShell

Here is an example of using PowerShell to generate the hashed password, obtain a token and then call the API to return the list of Profiles in IdentityNow (a simplified sketch of the token request and API call follows these notes).

  • Update the obvious lines for your instance (Lines 2, 4, 8, 10, 11)
  • Update Line 25 if you want to call a different v1 API
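For reference, the core of that flow looks roughly like this. The org name, credentials, hashed password and the profile endpoint are placeholders/examples; adjust them for your tenant and the v1 API you are calling.

$orgName      = 'yourOrgName'
$clientId     = 'yourClientID'
$clientSecret = 'yourClientSecret'
$adminUser    = 'your.admin'   # IdentityNow Login Name of the admin account
$passwordHash = 'hash-generated-by-the-passwordhash-script'

# Client ID and Secret are passed as a Basic authorisation header on the token request
$encodedAuth = [Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes("$($clientId):$($clientSecret)"))

# Steps 1 and 2: request a token using the admin Login Name and the hashed password
$tokenUri = "https://$($orgName).api.identitynow.com/oauth/token?grant_type=password&username=$($adminUser)&password=$($passwordHash)"
$token = Invoke-RestMethod -Method Post -Uri $tokenUri -Headers @{ Authorization = "Basic $($encodedAuth)" }

# Step 3: call a v1 (Private) API with the Bearer token - here, the list of Profiles
$profiles = Invoke-RestMethod -Method Get -Uri "https://$($orgName).api.identitynow.com/cc/api/profile/list" -Headers @{ Authorization = "Bearer $($token.access_token)" }
$profiles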

Summary

Hopefully I’ve saved others a couple of hours of messing around working out how to programmatically generate the needed password hash to allow automated integration with the SailPoint IdentityNow v1 Private APIs.

Automatic Key Rotation for Azure Services

Securely managing keys for services that we use is an important, and sometimes difficult, part of building and running a cloud-based application. In general I prefer not to handle keys at all, and instead rely on approaches like managed service identities with role-based access control, which allow for applications to authenticate and authorise themselves without any keys being explicitly exchanged. However, there are a number of situations where we do need to use and manage keys, such as when we use services that don’t support role-based access control. One best practice that we should adopt when handling keys is to rotate (change) them regularly.

Key rotation is important to cover situations where your keys may have been compromised. Common attack vectors include keys having been committed to a public GitHub repository, a log file having a key accidentally written to it, or a disgruntled ex-employee retaining a key that had previously been issued. Changing the keys means that the scope of the damage is limited, and if keys aren’t changed regularly then these types of vulnerability can be severe.

In many applications, keys are used in complex ways and require manual intervention to rotate. But in other applications, it’s possible to completely automate the rotation of keys. In this post I’ll explain one such approach, which rotates keys every time the application and its infrastructure components are redeployed. Assuming the application is deployed regularly, for example using a continuous deployment process, we will end up rotating keys very frequently.

Approach

The key rotation process I describe here relies on the fact that the services we’ll be dealing with – Azure Storage, Cosmos DB, and Service Bus – have both a primary and a secondary key. Both keys are valid for any requests, and they can be changed independently of each other. During each release we will pick one of these keys to use, and we’ll make sure that we only use that one. We’ll deploy our application components, which will include referencing that key and making sure our application uses it. Then we’ll rotate the other key.

The flow of the script is as follows:

  1. Decide whether to use the primary key or the secondary key for this deployment. There are several approaches to do this, which I describe below.
  2. Deploy the ARM template. In our example, the ARM template is the main thing that reads the keys. The template copies the keys into an Azure Function application’s configuration settings, as well as into a Key Vault. You could, of course, output the keys and have your deployment script put them elsewhere if you want to.
  3. Run the other deployment logic. For our simple application we don’t need to do anything more than run the ARM template deployment, but for many deployments  you might copy your application files to a server, swap the deployment slots, or perform a variety of other actions that you need to run as part of your release.
  4. Test the application is working. The Azure Function in our example will perform some checks to ensure the keys are working correctly. You might also run other ‘smoke tests’ after completing your deployment logic.
  5. Record the key we used. We need to keep track of the keys we’ve used in this deployment so that the next deployment can use the other one.
  6. Rotate the other key. Now we can rotate the key that we are not using. The way that we rotate keys is a little different for each service.
  7. Test the application again. Finally, we run one more check to ensure that our application works. This is mostly a last check to ensure that we haven’t accidentally referenced any other keys, which would break our application now that they’ve been rotated.

We don’t rotate any keys until after we’ve already switched the application to using the other set of keys, so we should never end up in a situation where we’ve referenced the wrong keys from the Azure Functions application. However, if we wanted to have a true zero-downtime deployment then we could use something like deployment slots to allow for warming up our application before we switch it into production.

A Word of Warning

If you’re going to apply the principle in this post or the code below to your own applications, it’s important to be aware of a key limitation. The particular approach described here only works if your deployments are completely self-contained, with the keys only used inside the deployment process itself. If you provide keys for your components to any other systems or third parties, rotating keys in this manner will likely cause their systems to break.

Importantly, any shared access signatures and tokens you issue will likely be broken by this process too. For example, if you provide third parties with a SAS token to access a storage account or blob, then rotating the account keys will cause the SAS token to be invalidated. There are some ways to avoid this, including generating SAS tokens from your deployment process and sending them out from there, or by using stored access policies; these approaches are beyond the scope of this post.

The next sections provide some detail on the important steps in the list above.

Step 1: Choosing a Key

The first step we need to perform is to decide whether we should use the primary or secondary keys for this deployment. Ideally each deployment would switch between them – so deployment 1 would use the primary keys, deployment 2 the secondary, deployment 3 the primary, deployment 4 the secondary, etc. This requires that we store some state about the deployments somewhere. Don’t forget, though, that the very first time we deploy the application we won’t have this state set. We need to allow for this scenario too.

The option that I’ve chosen to use in the sample is to use a resource group tag. Azure lets us use tags to attach custom metadata to most resource types, as well as to resource groups. I’ve used a custom tag named CurrentKeys to indicate whether the resources in that group currently use the primary or secondary keys.
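Reading and updating that tag from PowerShell looks something like this (the resource group name is a placeholder);

$resourceGroupName = 'my-app-rg'   # placeholder

# Read the CurrentKeys tag (it won't exist on the very first deployment)
$resourceGroup = Get-AzureRmResourceGroup -Name $resourceGroupName
$currentKeys = $resourceGroup.Tags['CurrentKeys']

# Default to the primary keys on the first deployment, otherwise alternate
$keysToUse = if ($currentKeys -eq 'Primary') { 'Secondary' } else { 'Primary' }

# After a successful deployment, record which keys we used so the next release can switch
$tags = $resourceGroup.Tags
if ($null -eq $tags) { $tags = @{} }
$tags['CurrentKeys'] = $keysToUse
Set-AzureRmResourceGroup -Name $resourceGroupName -Tag $tags | Out-Null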

There are other places you could store this state too – some sort of external configuration system, or within your release management tool. You could even have your deployment scripts look at the keys currently used by the application code, compare them to the keys on the actual target resources, and then infer which key set is being used that way.

A simpler alternative to maintaining state is to randomly choose to use the primary or secondary keys on every deployment. This may sometimes mean that you end up reusing the same keys repeatedly for several deployments in a row, but in many cases this might not be a problem, and may be worth the simplicity of not maintaining state.

Step 2: Deploy the ARM Template

Our ARM template includes the resource definitions for all of the components we want to create – a storage account, a Cosmos DB account, a Service Bus namespace, and an Azure Function app to use for testing. You can see the full ARM template here.

Note that we are deploying the Azure Function application code using the ARM template deployment method.

Additionally, we copy the keys for our services into the Azure Function app’s settings, and into a Key Vault, so that we can access them from our application.

Step 4: Testing the Keys

Once we’ve finished deploying the ARM template and completing any other deployment steps, we should test to make sure that the keys we’re trying to use are valid. Many deployments include some sort of smoke test – a quick test of core functionality of the application. In this case, I wrote an Azure Function that will check that it can connect to the Azure resources in question.

Testing Azure Storage Keys

To test connectivity to Azure Storage, we run a query against the storage API to check if a blob container exists. We don’t actually care if the container exists or not; we just check to see if we can successfully make the request:

Testing Cosmos DB Keys

To test connectivity to Cosmos DB, we use the Cosmos DB SDK to try to retrieve some metadata about the database account. Once again we’re not interested in the results, just in the success of the API call:

Testing Service Bus Keys

And finally, to test connectivity to Service Bus, we try to get a list of queues within the Service Bus namespace. As long as we get something back, we consider the test to have passed:

You can view the full Azure Function here.

Step 6: Rotating the Keys

One of the last steps we perform is to actually rotate the keys for the services. The way in which we request key rotations is different depending on the services we’re talking to.

Rotating Azure Storage Keys

Azure Storage provides an API that can be used to regenerate an account key. From PowerShell we can use the New-AzureRmStorageAccountKey cmdlet to access this API:
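For example, to regenerate whichever key we are not currently using (the resource names are placeholders);

# Regenerate the storage account key we are NOT using this deployment ('key1' or 'key2')
New-AzureRmStorageAccountKey -ResourceGroupName 'my-app-rg' -Name 'myappstorage' -KeyName 'key2'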

Rotating Cosmos DB Keys

For Cosmos DB, there is a similar API to regenerate an account key. There are no first-party PowerShell cmdlets for Cosmos DB, so we can instead use a generic Azure Resource Manager cmdlet to invoke the API:
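A sketch of that call (the resource names are placeholders);

# Regenerate the Cosmos DB key we are NOT using ('primary' or 'secondary')
Invoke-AzureRmResourceAction -Action 'regenerateKey' `
    -ResourceType 'Microsoft.DocumentDb/databaseAccounts' `
    -ResourceGroupName 'my-app-rg' `
    -ResourceName 'my-cosmos-account' `
    -Parameters @{ keyKind = 'secondary' } `
    -ApiVersion '2015-04-08' `
    -Force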

Rotating Service Bus Keys

Service Bus provides an API to regenerate the keys for a specified authorization rule. For this example we’re using the default RootManageSharedAccessKey authorization rule, which is created automatically when the Service Bus namespace is provisioned. The PowerShell cmdlet New-AzureRmServiceBusKey can be used to access this API:
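A sketch of that call (the resource names are placeholders);

# Regenerate the Service Bus key we are NOT using (PrimaryKey or SecondaryKey)
New-AzureRmServiceBusKey -ResourceGroupName 'my-app-rg' `
    -Namespace 'my-servicebus-namespace' `
    -Name 'RootManageSharedAccessKey' `
    -RegenerateKey 'SecondaryKey'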

You can see the full script here.

Conclusion

Key management and rotation is often a painful process, but if your application deployments are completely self-contained then the process described here is one way to ensure that you continuously keep your keys changing and up-to-date.

You can download the full set of scripts and code for this example from GitHub.

Programmatically deploy and add SharePoint Framework Extensions using SharePoint CSOM and PowerShell

In the previous blog here, we looked at how to deploy and install SharePoint Apps. Now let’s look at installing SharePoint Framework extensions, specifically ListView Command Sets, programmatically.

SharePoint CSOM

SharePoint Framework has three types of extensions that can be created: Application Customisers, ListView Command Sets and Field Customisers. In this blog, we will look at adding ListView Command Sets programmatically.

ListView Command Set extensions are actually custom actions installed on a library or list. Hence, to activate one we go to the library/list and check the installed custom actions; if ours is not installed, we add the new custom action. Below is the code for that.

PowerShell

We can also use PnP PowerShell to add the extension to a library using the code below.
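Here is a hedged sketch of that; the Add-PnPCustomAction cmdlet is from the SharePointPnPPowerShellOnline module, and the site URL, names and ClientSideComponentId are placeholders (the real component ID comes from the extension's manifest).

# Connect to the site containing the target library
Connect-PnPOnline -Url 'https://yourtenant.sharepoint.com/sites/targetsite' -Credentials (Get-Credential)

# Register the ListView Command Set as a custom action against document libraries (RegistrationId 101)
Add-PnPCustomAction -Name 'MyCommandSet' `
    -Title 'My Command Set' `
    -Location 'ClientSideExtension.ListViewCommandSet.CommandBar' `
    -ClientSideComponentId '00000000-0000-0000-0000-000000000000' `
    -ClientSideComponentProperties '{"sampleText":"Hello"}' `
    -RegistrationId 101 `
    -RegistrationType List `
    -Scope Web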

Hence, above we saw how we can add extensions onto a library or list using CSOM or PnP PowerShell.

Happy Coding!!

Lifecycle Management of Identities in SailPoint IdentityNow via API and PowerShell

Introduction

If you’ve been following along, I’ve been posting about leveraging the SailPoint IdentityNow API for searching/reporting and for authoring identities.

Now that I’ve covered Searching and Authoring, all that is left is lifecycle management, and that’s what I’ll cover in this post: updating and deleting entities via the API.

Updating SailPoint IdentityNow Entities

If you have not read the first post in this series, start there as ‘updating’ builds on top of Search/Reporting. It also covers enabling the API.

My quick start guide to updating IdentityNow Entities starts with searching to find the Entities (probably Users) you want to update. In my example below I’m searching for all objects on a Source. Then I iterate through the results and update them. I’m updating the Country attribute.

When updating an entity (e.g. a User) you need to perform a PATCH web request specifying the underlying ID (objectID) of the object. The URI format looks like;

https://orgName.api.identitynow.com/v2/accounts/2c91808365bd1f010165caf761625bcd?org=orgName

Example Script

Here is an example script. As per the previous two posts, change all the relevant lines for your tenant and your API details (a simplified sketch of the PATCH call follows these notes).

  • Line 16 is the query for objects to update
  • Lines 39-41 are where the attribute to update is set
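A simplified sketch of the PATCH call itself; the account objectID, org and attribute values are placeholders, the v2 API uses the same Basic authentication covered in the earlier posts, and the body shape may vary with your Source schema.

$orgName      = 'orgName'
$clientId     = 'yourClientID'
$clientSecret = 'yourClientSecret'
$encodedAuth  = [Convert]::ToBase64String([System.Text.Encoding]::UTF8.GetBytes("$($clientId):$($clientSecret)"))

# objectID of the account to update (placeholder)
$accountId = '2c91808365bd1f010165caf761625bcd'
$uri = "https://$($orgName).api.identitynow.com/v2/accounts/$($accountId)?org=$($orgName)"

# Attribute(s) to update - here the Country attribute
$body = @{ country = 'Australia' } | ConvertTo-Json

Invoke-RestMethod -Method Patch -Uri $uri -Body $body -ContentType 'application/json' -Headers @{ Authorization = "Basic $($encodedAuth)" }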

Updating Manager

For manager, the attribute is a reference on the IdentityNow Source to the Manager. On my “External Entities” Source I locate the object representing the Manager and obtain their accountId (which in my case is firstname.lastname) and set that as the ManagerID. I then find the users that I want to update for this manager and update them as we did in the previous example, but with a reference to accountId of the Manager for the Manager attribute.

NOTE: When querying IdentityNow via the API, the syntax is very important, especially when also incorporating variables. If I have a variable $manager with a displayName value, that will normally contain a space, so we need to capture the whole string. Here is an example of doing that. In order to query for $manager = “Rick Sanchez” in PowerShell, that would be:

$queryManager = "attributes.displayName:"+'"'+"$($manager)"+'"'

which gives us attributes.displayName:”Rick Sanchez” and, in my case, returns the single object for Rick Sanchez rather than a list of references to Rick Sanchez.

Deleting SailPoint IdentityNow Entities

Deleting is very similar to updating. Again, the easiest method is to search and obtain the object(s) to be deleted, and then delete via a DELETE web request specifying the underlying ID (objectID) of the object to be deleted. The URI looks like;

https://orgName.api.identitynow.com/v2/accounts/2c91808565bd1f110165cb628d1a702f?org=orgName

Example Script

Here is an example script. It searches IdentityNow based on object naming (see line 14), then narrows the results to objects connected to the Source we wish to delete from. In this example the Source is the one I created in the last post, “External Entities”. Update it for the name of your Source (line 25).

Summary

Using the API we can search for identities, and author, update and delete them.

Deploy and Install SharePoint Apps using SharePoint CSOM and PnP PowerShell

In this blog, we will look at the steps to install and deploy SharePoint apps to Modern Sites using SharePoint ALM CSOM and PnP PowerShell. Using the steps below, it is possible to programmatically deploy and install custom SharePoint Framework apps using an Azure Function or a local PowerShell script.

Installing SharePoint Apps

SharePoint Apps can be deployed to a site using the ALM (Application Lifecycle Management) APIs. After the app is added to the App catalog, we can install it on a SharePoint site.

SharePoint CSOM

The steps are simple. The below snippet has the code for deploying and installing apps.

1. Get the App ID
2. Create an App Manager object
3. Deploy the App
4. After the app is deployed, install the app
5. Use try-catch to handle the case where the installation has already been done

PnP PowerShell

First, let’s get a list of apps in the App catalog.

Note: There are two values supported by the Scope parameter for Apps – Tenant and Site.

Get-PnPApp -Scope Tenant
or
Get-PnPApp -Scope Site

If the App is not already in the App catalog, then we will add it

Add-PnPApp -Path "<.sppkg file location>" -Scope Site

Then, publish the App

Publish-PnPApp -Identity <app id> -Scope Tenant -SkipFeatureDeployment

-SkipFeatureDeployment is helpful for deploying Apps across the SharePoint tenancy if the app is developed for tenant-wide deployment

After the above, we will install the App

Install-PnPApp -Identity <app id> -Scope Site

After the app is installed, it is ready to be added or used at the site.
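Putting those commands together, an end-to-end deployment to a site collection App Catalog might look like the following sketch (URLs, paths and the app title are placeholders);

# Connect to the target site
Connect-PnPOnline -Url 'https://yourtenant.sharepoint.com/sites/targetsite' -Credentials (Get-Credential)

$appTitle = 'My SPFx WebPart'   # placeholder

# Add and publish the package if it isn't already in the App catalog
$app = Get-PnPApp -Scope Site | Where-Object { $_.Title -eq $appTitle }
if (-not $app) {
    $app = Add-PnPApp -Path 'C:\Deploy\my-spfx-webpart.sppkg' -Scope Site
    Publish-PnPApp -Identity $app.Id -Scope Site
}

# Install it to the site if it isn't installed yet
if (-not $app.InstalledVersion) {
    Install-PnPApp -Identity $app.Id -Scope Site
}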

In the upcoming blog, we will see how to add SharePoint Framework extensions and web parts programmatically.

Happy Coding!!

Authoring Identities in SailPoint IdentityNow via the API and PowerShell

Introduction

A key aspect of any Identity Management project is having an Authoritative Source for Identity. Typically this is a Human Resources system. But what about identity types that aren’t in the authoritative source? External Vendors, contingent contractors and identities that are used by End User Computing systems such as Privileged Accounts, Service Accounts, Training Accounts.

Now, some Identity Management solutions allow you to author identity through their portals and provide a nice GUI to create a user/training/service account. SailPoint IdentityNow, however, doesn’t have that functionality. It does have an API though, and I’ll show you in this post how you can use it to author identity into IdentityNow.

Overview

So now you’re thinking, great, I can author identity into IdentityNow via the API. But am I supposed to get managers to interface with an API to kick off a workflow to create identities? Um, no. I don’t think we want to be giving them API access into our Identity Management solution.

The concept is this: an Identity Request WebApp collects the necessary information for the identities to be authored and facilitates their creation in IdentityNow via the API. SailPoint kindly provide a sample project that does just that; it is available on GitHub here. Through looking at this project and the IdentityNow API, I worked out how to author identity via the API using PowerShell. There were a few gotchas I had to work through, so I’m providing a working example for you to base a solution around.

Getting Started

There are a couple of things to note.

  • Obviously you’ll need API access
  • You’ll want to create a Source that is of the Flat File type (Generic or Delimited File)
    • We can’t create accounts against Directly Connected Sources
  • There are a few attributes that are mandatory for the creation
    • At a minimum I supply id, name, givenName, familyName, displayName and e-mail
    • At an absolute bare minimum you need the following. Otherwise you will end up with an account in IdentityNow that will show as “Identity Exception”
      • id, name, givenName, familyName, e-mail*

* see note below on e-mail/email attribute format based on Source type

Creating a Flat File Source to be used for Identity Authoring

In the IdentityNow Portal under Admin => Connections => Sources select New.

Create New Source.PNG

I’m using Generic as the Source Type. Give it a name and description. Select Continue

New Generic Source.PNG

Assign an Owner for the Source and check the Accounts checkbox. Select Save.

New Source Properties.PNG

From the end of the URL of the newly saved Source, get and record the SourceID. This will be required so that when we create users via the API, they are created against this Source.

SourceID.PNG

If we look at the Accounts on this Source we see there are obviously none.

Accounts.PNG

We’d better create some. But first you need to complete the configuration for this Source. Go and create an Identity Profile for this Source, and configure your Identity Mappings as per your requirements. This is the same as you would do for any other IdentityNow Source.

Authoring Identities in IdentityNow with PowerShell

The following script is the bare minimum to use PowerShell to create an account in IdentityNow. Change;

  • line 2 for your Client ID
  • line 4 for your Client Secret
  • line 8 for your Tenant Org Name
  • line 12 for your Source ID
  • the body of the request for the account to be created (lines 16-21)

NOTE: By default on the Generic Source the email attribute is ’email’, while by default on the Delimited Source the email attribute is ‘e-mail’. If, after executing the script and a correlation, your identities are showing as ‘Identity Exception’, it’s probably because this field is incorrect for the Source type. If in doubt, check the Account Schema on the Source.

Execute the script and refresh the Accounts page. You’ll see we now have an account for Rick.

Rick Sanchez.PNG

Expanding Rick’s account we can see the full details.

Rick Full Details.PNG

Testing it out for a Bulk Creation

A few weeks ago I wrote this post about generating user data from public datasets. I’m going to take that and generate 50 accounts. I’ve added additional attributes to the Account Schema (for suburb, state, postcode, street). Here is a script combining the two.

Running the script creates our 50 users in addition to the couple I already had present.

Bulk Accounts Created.PNG

Summary

Using the IdentityNow API we can quickly author identity into SailPoint IdentityNow. That’s the easy bit sorted. Now to come up with a pretty UI and a UX that passes the end-user usability tests. I’ll leave that with you.