Azure Active Directory B2B Pending and Accepted User Reports

One of the benefits of cloud services is the continual enhancement that vendors deliver based on customer feedback. One request Microsoft has heard often is to know what state a Guest user in Azure AD is in. In the last couple of days Microsoft exposed two additional attributes on the User object class in Azure AD:

  • externalUserState
  • externalUserStateChangeDateTime

[Image: B2B state announcement tweet]

This means we can now query the Microsoft Graph for B2B users and understand if they have Accepted or are PendingAcceptance, and the datetime of the last change.

My customers have been wanting such information and would like to report on it. Here is an example PowerShell script I’ve put together that queries Microsoft Graph to locate B2B Users in the Accepted and PendingAcceptance states and outputs summary information into two HTML Reports. One for Accepted and one for PendingAcceptance.

Update the following script:

  • Line 2 for a Microsoft Azure PowerShell Module such as AzureAD that has the Microsoft.IdentityModel.Clients.ActiveDirectory.dll library in it
  • Line 5 for your Tenant name
  • Lines 15 and 16 for a Username and Password
  • Line 19 for where you want the Reports written to

Running the Script will then generate the Accepted and Pending HTML Reports.
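The core of the script can be sketched as follows. This is a minimal sketch only: it assumes you have already obtained an OAuth2 access token ($accessToken) for Microsoft Graph, for example via the Microsoft.IdentityModel.Clients.ActiveDirectory library referenced above, and that $reportPath points at your report folder. The externalUserState attribute is exposed on the beta endpoint at the time of writing.

```powershell
# Sketch: query Microsoft Graph for Guest users and their B2B state,
# then write Accepted and PendingAcceptance HTML reports.
$reportPath = 'C:\Reports'
$headers = @{ Authorization = "Bearer $accessToken" }
$uri = "https://graph.microsoft.com/beta/users?`$filter=userType%20eq%20'Guest'&`$select=displayName,mail,externalUserState,externalUserStateChangeDateTime"

$guests = @()
do {
    $result = Invoke-RestMethod -Uri $uri -Headers $headers -Method Get
    $guests += $result.value
    $uri = $result.'@odata.nextLink'   # page through all results
} while ($uri)

$guests | Where-Object { $_.externalUserState -eq 'Accepted' } |
    ConvertTo-Html -Title 'Accepted B2B Users' | Out-File "$reportPath\AcceptedB2BUsers.html"
$guests | Where-Object { $_.externalUserState -eq 'PendingAcceptance' } |
    ConvertTo-Html -Title 'Pending B2B Users' | Out-File "$reportPath\PendingB2BUsers.html"
```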

[Image: output report files]

Here is an example of the AcceptedB2BUsers HTML Report.

[Image: Accepted B2B Users HTML report]


With these two attributes now exposed in the Microsoft Graph, we can query B2B users by state and quickly report on them with PowerShell.

Address Space maintenance with VNet Peering

I recently had a scenario where I wanted to add an address space to a virtual network, but found that the address space cannot be modified while VNet peering is in place. This is likely because the routes to the peered VNet applied through the peering are only set when the peer is created and cannot be updated dynamically.

The following error detailed this.

Failed to save virtual network changes Failed to save address space changes to virtual network ‘vnet1’. Error: Address space of the virtual network /subscriptions/xxxxxxx/resourceGroups/rg1/providers/Microsoft.Network/virtualNetworks/vnet1 cannot change when virtual network has peerings.


On the surface this isn't a big deal: just delete the peering, modify the address space and re-create the peering. While the steps themselves are straightforward, the VNet I was working with was a hub VNet, so there were multiple peerings in place to spoke VNets. Additionally, there was no documentation of the configuration specific to each peering.

I wanted to export all of the peering configurations so that I had a backup and record of the configurations so that they could be re-applied the same way after the address space was added to the Hub network.

Exporting VNet Peering Configuration

The following snippet will export all of the peering configurations for the specified VNet to a file named “vnetname-peerings.csv”. The configuration recorded for each peering includes:

  • Name
  • ResourceGroupName
  • VirtualNetworkName
  • RemoteVirtualNetwork
  • AllowVirtualNetworkAccess
  • AllowForwardedTraffic
  • AllowGatewayTransit
  • UseRemoteGateways
  • RemoteGateways
  • RemoteVirtualNetworkAddressSpace

Important Note: Removing the VNet peering will disrupt communication between the VNets. You should plan for and accommodate this within your change control processes.

Now that all of the peering configurations are exported, we can proceed to delete the peering and then make the required modifications to the address space.

Re-create VNet Peering Configuration

After the modifications were applied to the VNet, we are ready to re-create the peering configuration using the exported configuration. You can do this from the portal or PowerShell. In my example, I simply re-added it from the portal using the values recorded in the exported CSV. If you wanted to use PowerShell, the Add-AzureRmVirtualNetworkPeering cmdlet can be used, substituting the values that were exported to the CSV.
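As a sketch, re-creating one peering with PowerShell might look like the following; the names and resource IDs are from the example environment, so substitute your own values from the CSV:

```powershell
# Re-create a peering on the hub VNet using values recorded in the exported CSV
$vnet = Get-AzureRmVirtualNetwork -Name 'vnet1' -ResourceGroupName 'rg1'
Add-AzureRmVirtualNetworkPeering -Name 'peer1' `
    -VirtualNetwork $vnet `
    -RemoteVirtualNetworkId '/subscriptions/xxxxxxx/resourceGroups/rg2/providers/Microsoft.Network/virtualNetworks/vnet3' `
    -AllowForwardedTraffic
```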

However, when I saved the configuration the following error was produced:

Failed to add virtual network peering ‘peer1’. Error: Cannot create or update peering /subscriptions/xxxxxxx/resourceGroups/rg1/providers/Microsoft.Network/virtualNetworks/vnet1/virtualNetworkPeerings/peer1 because remote peering /subscriptions/xxxxxxx/resourceGroups/rg2/providers/Microsoft.Network/virtualNetworks/vnet3/virtualNetworkPeerings/peer1 referencing parent virtual network /subscriptions/xxxxxxx/resourceGroups/rg1/providers/Microsoft.Network/virtualNetworks/vnet1 is in Disconnected state. Update or re-create the remote peering to get it back to Initiated state. Peering gets Disconnected when remote vnet or remote peering is deleted and re-created.


The reason for this error is that the corresponding peering on the remote VNet was in a disconnected state.

As the error suggests, you have two options to resolve this: update or re-create the remote peering to get it back into the Initiated state. Once this is done we can create the peering in the parent VNet. You could use the script snippet above to back up the peering configuration and then delete and re-create it, but the easiest option is simply to update the peering with the following command:

Get-AzureRmVirtualNetworkPeering -VirtualNetworkName vnet3 -ResourceGroupName rg2 | Set-AzureRmVirtualNetworkPeering

Once this is done, if you check the remote peering you will see that it is back in the initiated state and ready for the remote peering to establish the connection.

We can now finish by creating the peering configuration and both sides of the peering will now show as connected.

Options to consider for SharePoint Framework solutions deployment

There are various options for packaging and deploying a SharePoint Framework (SPFx) solution, and as part of that process developers have to identify the best approach for their team. Planning the right approach for your solution can become a nightmare if you haven't weighed the options properly.

Having worked on multiple SPFx implementations for some time now, I have formed a view of the various options and approaches. In this blog we will look at these options and the merits and limitations of each.

At a high level, below are the main components that are deployed as part of SPFx package deployment:

  1. The minified js file for all code
  2. The webpart manifest file
  3. Webpart compiled file
  4. The package (.sppkg) file with all package information

Deployment Options

Please check this blog for an overview of the steps for building and packaging an SPFx solution. The packaged solution (.sppkg) file can then be deployed to an App Catalog site. The assets of the package (points 1-3 above) can be deployed using any of the four options below. We will look at the merits and limitations of each.
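For reference, a packaged solution is typically produced with the standard SPFx gulp tasks, run from the solution root:

```powershell
gulp bundle --ship            # produce the minified production assets
gulp package-solution --ship  # produce the .sppkg file under sharepoint/solution
```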

1. Deploy to Azure CDN

The assets could be deployed to an Azure CDN. The deployment script is already a part of SPFx solution and could be done from within the solution. More detailed steps for setting this up are here.

Note: Please remember to enable CORS on the storage account before deployment of the package.

If CORS is not enabled before the CDN profile is used, you might have to delete and re-create the storage account.


Merits:

  • Easy deployment using gulp
  • Faster access to assets and web part resources because of CDN hosting
  • Add geographical restrictions (if needed)


Limitations:

  • Dependency on an Azure subscription
  • Proper setup steps are required for the Azure CDN. If the CDN is not set up properly, the deployment has to be done again.
  • Since the assets are deployed to a CDN endpoint, this may not be recommended if the assets need restricted access

2. Deploy to Office 365 Public CDN

For this option, you will need to enable and set up Office 365 CDN in your tenancy before deployment. For more details of setting this up, check the link here.


Merits:

  • Faster access to assets and web part resources because of CDN hosting
  • No Azure subscription requirement
  • Content is accessible from the SharePoint interface


Limitations:

  • Manual copy of asset files to the CDN-enabled SharePoint library
  • Office 365 CDN is a tenant setting and has to be enabled for the whole tenancy
  • Since the assets are deployed to a CDN endpoint, this may not be recommended if the assets need restricted access
  • Accidental deletion could cause issues

3. Deploy to SharePoint document library

Another option is to copy the compiled assets to a SharePoint document library anywhere in the tenancy. Setting this up is quite simple: first set “includeClientSideAssets”: false in the package-solution.json file, and then set the CDN details in the write-manifests.json file.
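For example, with “includeClientSideAssets”: false set in package-solution.json, write-manifests.json points the web part manifests at the library hosting the assets; the URL below is a placeholder for your own document library:

```json
{
  "cdnBasePath": "https://yourtenant.sharepoint.com/sites/assets/SPFxAssets"
}
```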


Merits:

  • No need for additional Azure hosting or enabling the Office 365 CDN
  • Access to asset files from the SharePoint interface


Limitations:

  • Manual copy of asset files to the SharePoint library
  • Accidental deletion could cause issues

4. Deploy to ClientAssets in App Catalog

From SPFx version 1.4, it is possible to include assets as part of the package file and deploy them to the hidden ClientAssets library in the App Catalog, by setting “includeClientSideAssets”: true in the package-solution.json file.


Merits:

  • No extra steps needed to package and deploy assets


Limitations:

  • Increases the payload of the package file
  • Risk for tenant admins in deploying script files to the tenant App Catalog


In this blog, we looked at the various options for SPFx deployment and the merits and limitations of each approach.

Happy Coding !!!


Managing SailPoint IdentityNow Applications via API with PowerShell

The SailPoint IdentityNow Request Center comes pre-populated with 130 Applications (as shown below) that by default are visible to users in the Dashboard and can be requested via the Request Center. Whilst this is great, the majority are often not applicable, and you need to configure each individual application to remove visibility and requestability. You could of course ask your IdentityNow support representative to do this for you, or you could manage it yourself. Let's go with option B and I'll show you how.

[Image: Request Center application list]

To disable visibility of an Application, and to also remove it from being requested through the Request Center, there are two options that need to be toggled off: Enabled For Users, and Visible in the Request Center.

[Image: application settings]

Say you want to remove all of them from being visible and requestable. You would need to open each app, toggle the slider and the radio button, and select Save. That's a minimum of four mouse clicks and some scrolling, times 130. Or do it via the IdentityNow API in under 60 seconds. Option B please.

Retrieving Applications

The URI to return all IdentityNow Applications is the application list endpoint for your IdentityNow tenant.


Before you can call that URI you will need to be authenticated to IdentityNow. Follow this post and make sure you have the headers in the WebSession configured with the Bearer Access Token.

Then using PowerShell you can return all Applications with;

$appList = Invoke-RestMethod -Uri $appListURI -Method Get -WebSession $IDN
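Putting the pieces together as a sketch; $orgName and the URI path are assumptions to verify for your own tenant and API version:

```powershell
# Assumes $IDN is a WebSession whose headers already carry the Bearer access
# token (see the authentication post referenced above). The org name and the
# app-list path below are placeholders to confirm against your tenant.
$orgName    = 'yourOrg'
$appListURI = "https://$orgName.api.identitynow.com/cc/api/app/list"

$appList = Invoke-RestMethod -Uri $appListURI -Method Get -WebSession $IDN
```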

If you want to find a single app, find it by name using Where-Object;

$myApp = $appList | Where-Object {$_.name -eq "New York Times"}

The Application PowerShell Object for the New York Times looks like;

id : 24184
appId : 11
serviceId : 32896
serviceAppId : 24184
name : New York Times
description : American daily newspaper
appCenterEnabled : False
provisionRequestEnabled : False
controlType : PERSONAL
mobile : True
privateApp : False
scriptName : C:1-new-york-times
status : ACTIVE
icon :
health : @{status=HEALTHY; lastChanged=1539766560496; since=0; healthy=True}
enableSso : True
ssoMethod : PASSWORD
hasLinks : True
hasAutomations : True
primaryLink :
primaryMobileLink :
stepUpAuthData :
stepUpAuthType : NONE
usageAnalytics : False
usageCertRequired : False
usageCertText :
launchpadEnabled : False
passwordManaged : False
owner :
dateCreated : 1522393052000
lastUpdated : 1539766536000
defaultAccessProfile :
service : New York Times
selectedSsoMethod : PASSWORD
supportedSsoMethods : 2
authenticationCookie : []
directoryPassword_supported : false
none_supported : true
passwordReplay_supported : true
proxy_supported : false
saml_supported : false
wsfed_supported : false
accountServiceId : -1
launcherCount : 0
accountServiceName :
accountServiceExternalId :
accountServiceMatchAllAccounts : True
externalId :
passwordServiceId : -1

Removing Applications from User Visibility

Let’s remove all Applications from user visibility (and the Dashboard). The process is simply to retrieve all Applications, then update each one to toggle off the options for visibility. The following script does just that.
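A sketch of such a script follows. The update endpoint and the mapping of fields to the two UI toggles are assumptions based on the Application object shown earlier; confirm both against your own tenant before running anything like this:

```powershell
# Sketch only: retrieve every Application, then toggle off its visibility.
# The /app/update path and the field-to-toggle mapping are assumptions.
$orgBase = "https://$orgName.api.identitynow.com/cc/api"
$appList = Invoke-RestMethod -Uri "$orgBase/app/list" -Method Get -WebSession $IDN

foreach ($app in $appList) {
    $update = @{
        launchpadEnabled        = $false   # assumed to back 'Enabled For Users'
        provisionRequestEnabled = $false   # assumed to back 'Visible in the Request Center'
    }
    Invoke-RestMethod -Uri "$orgBase/app/update/$($app.id)" -Method Post `
        -WebSession $IDN -Body $update
}
```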

After updating each app the Request Center is empty. Much quicker than hundreds of mouse clicks.

[Image: empty Request Center]


With the ability to retrieve Applications and update them via the API, repetitive configuration becomes super quick.

WorkdayAPI PowerShell Module

Obtaining Workday HR Supervisory Hierarchy, Provisioning Flags and Photos with PowerShell

A few weeks back I posted this regarding using PowerShell and the Granfeldt PowerShell Management Agent to interface Microsoft Identity Manager with Workday HR. The core of this functionality is the WorkdayAPI PowerShell Module which I forked from Nathan and added additional functionality.

New WorkdayAPI PowerShell Module Cmdlets

This post details additional functionality I've added to the WorkdayAPI PowerShell Module. Updates include the following additional cmdlets:

  • Get-WorkdayWorkerProvData
  • Get-WorkdayWorkerMgmtData
  • Get-WorkdayWorkerPhoto

Implementations of Workday obviously vary from organisation to organisation. Workday HR being an authoritative source of identity means it is also often the initiator of Identity Management processes. For one of our customers, base entitlements are derived from the Workday Job Profile and may specify an Active Directory account and an Office 365 License.

The Get-WorkdayWorkerProvData cmdlet is invoked when calling Get-WorkdayWorkerAdv with the -IncludeWork flag. The result now returns ProvisioningGroup information as part of the returned PowerShell object. As shown below, Workday has indicated my identity requires Active Directory and Office 365 (email).

[Image: Workday provisioning group data]
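For example (the WorkerId value is illustrative, and the property name follows the module's output as shown in this post):

```powershell
# -IncludeWork also populates provisioning data on the returned worker object
$worker = Get-WorkdayWorkerAdv -WorkerId 181123 -WorkerType Employee_ID -IncludeWork
$worker.ProvisioningGroup
```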

If you just want to return the Account Provisioning information you can with Get-WorkdayWorkerProvData.
Get-WorkdayWorkerProvData -WorkerId 181123 -WorkerType Employee_ID
Provisioning_Group Status   Last_Changed
------------------ ------   ------------
Office 365 (email) Assigned 1/05/2017 9:31:21 PM
Active Directory   Assigned 1/05/2017 9:57:37 PM


The Get-WorkdayWorkerMgmtData cmdlet is invoked when calling Get-WorkdayWorkerAdv with the -IncludeWork flag. The result now returns MgmtData information as part of the returned PowerShell object. The collection is an ordered list of the Supervisory Hierarchy for the object.

[Image: MgmtData collection]

Expanding the collection we can see the Supervisory Hierarchy. Note: the top of the hierarchy is listed twice, as in this organisation the top position reports to itself.

[Image: management hierarchy]

If you just want to return the Management Hierarchy you can with Get-WorkdayWorkerMgmtData.

Get-WorkdayWorkerMgmtData -WorkerId 1234 -WorkerType Employee_ID


The Get-WorkdayWorkerPhoto cmdlet is invoked when calling Get-WorkdayWorkerAdv with the -IncludePhoto and -PhotoPath flags. The photo is then written to the path provided in the -PhotoPath option.

If you just want to export the Photo for a single user you can with Get-WorkdayWorkerPhoto.

Get-WorkdayWorkerPhoto -WorkerId 1234 -WorkerType Employee_ID -PhotoPath 'C:\temp\workday'


Using the WorkdayAPI PowerShell Module we can now access information to drive the provisioning process, as well as understand an identity's placement in the Supervisory Hierarchy. We can also obtain their Workday profile photo and sync it to other places if required.

Step-by-step: Using Azure DevOps Services to deploy ARM templates with CI/ CD – Part 1

In this blog, we will see how to get started with Azure DevOps for an Infrastructure background person.

We will familiarize ourselves with deploying your Azure resources with ARM templates by using Azure DevOps with Continuous Integration (CI) and Continuous Deployment (CD).

I have made this entire post into two parts for easier understanding:

Part 1: Creating your first project in Azure DevOps

Part 2: Creating the first project in Azure DevOps for Continuous Integration (CI) / Continuous Deployment (CD).

This article will focus on Part 1. The things needed to make this successful include:

        1. Visual Studio software (free edition) – you can download this from the Visual Studio website.
        2. Azure subscription access. If you don't have one, you can create a free Azure account.
        3. An account in Visual Studio. If you don't have one, create a new account by signing in and enabling the Azure DevOps service.
        4. Click on Azure DevOps and select Sign in.
        5. Once you sign in with your Microsoft account, click Continue.
  1. Creating the first project in Azure DevOps:

When you log into Azure DevOps for the first time with your MSDN/Microsoft account:
    • Now, click on New project and provide name (Eg: Firstproject) & add a Description for project.
    • Select visibility options: Private (with this setting, only you can access the content; you can grant access to people who should be able to view this project).
    • Under Firstproject , Click on Repos.
    • Since the project folder is empty, we need to create a new file. We can use Visual Studio to create it; click on Clone in the Visual Studio options:

      • Visual studio software will open its console.
      • Provide your Microsoft account credentials, which has been used for Azure DevOps and Azure account.
      • The project needs to be cloned on local disk. Click on clone.
      • This will pop-up for Azure DevOps credentials.
      • This may result in authentication failed or fatal error. To resolve this, follow below steps:
      • In Visual studio, select team explorer and select manage connections and click connect to project.
      • Select your user id for Azure DevOps and provide credentials. Then your Project (First project) will be listed for connect.
      • Now you will get clone options:
      • On Team Explorer view, click on Create a new project or solution in this repository.
      • Select Installed -> cloud and Azure Resource group

      • Select Blank template for deployment.

    • Select solution explorer view on Visual Studio
    • Select AzureResourceGroup and click on Azuredeploy.json
    • Click on Resources in the JSON outline and select Virtual network for deployment. Provide a name for the VNet, e.g. firstnetwork01.
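A minimal azuredeploy.json for this virtual network might look like the following sketch; the address ranges, subnet name and apiVersion are illustrative, not values taken from the walkthrough:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Network/virtualNetworks",
      "apiVersion": "2018-08-01",
      "name": "firstnetwork01",
      "location": "[resourceGroup().location]",
      "properties": {
        "addressSpace": { "addressPrefixes": [ "10.0.0.0/16" ] },
        "subnets": [
          { "name": "subnet1", "properties": { "addressPrefix": "10.0.0.0/24" } }
        ]
      }
    }
  ]
}
```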

  • At the bottom of Visual Studio you will find an icon showing the number of changes performed. Click on it to commit the changes.
    • Provide comments for commit and select commit all.

    • The change has been committed locally; we now need to push the changes to the Azure DevOps project repository. Click on Sync for the change.

  • Click on push for changes to cloud (Azure DevOps).
  • Now, go back to Azure DevOps portal and select your project (First project) and select repos.
  • You will be able to find the AzureResourceGroup you created in Visual Studio.
  • Click on Azuredeploy.json file to verify your file.
  2. Enabling deployment of the ARM Template in Azure DevOps:
  • Log on Azure DevOps portal and open Firstproject (your project name), then click on Builds.
  • On the new page, click on New Pipeline. Select “Use the visual designer to create a pipeline without YAML”.
  • Ensure your project & repository is selected and click on continue.
  • Select “Start with an Empty Job”
  • Click on + item on Agent Job.
  • On the new pane, select deploy and click on Azure Resource Group deployment and click ADD.
  • On the left pane, select the Azure Deployment: Create Or Update Resource Group action.
  • Select Azure Subscription and click on Authorize.
  • Select your resource group on your Azure subscription and location.
  • The template location will be linked artefact.
  • Select your template file (azuredeploy.json) from the selection menu.
  • Select your template parameter file (azuredeploy.parameters.json) from the selection menu.
  • Deployment mode: complete.
  • Click save and queue and provide your comment on the file changes.
  • Once it has saved, the build operation will commence deployment on your Azure tenant.
  • You can view the deployment logs from the Azure DevOps portal. In addition, you will receive an email (email which has been used for Azure DevOps account) with deployment status.
  • Verify your network (Azure Resource which we added on ARM template) has been created on Azure tenant.
This concludes Part 1: creating and deploying ARM templates with Azure DevOps.

In Part 2, I will take you through enabling Continuous Integration (CI) / Continuous Deployment (CD).

A Lean Approach to UX design – ASOS case study – Part 1 of 2

The double diamond seems to be a popular method of approaching design thinking for most UX designers. Discover, Define, Develop, Deliver. But often clients and stakeholders start to run for the hills when they realise that the discover phase involves time consuming user research, research that the client believes they don’t need to do because “they already know their users”. A lean approach to user experience design may be an easier way to sell design thinking to a client as it involves starting with assumptions and creating hypothesis that may solve a problem, then testing these hypotheses with real users in a short time frame. I find this to be a better starting point with clients as it saves time and engages them more from the beginning of the project, making the design process more transparent.

Just in case that first paragraph made no sense to you, don't worry. I find the best way of learning is by seeing an example. I decided to adopt a lean approach to a little challenge given to me: improve the ASOS app in order to increase conversion rates.

First Step: Declaring Assumptions

Problem Statement

Before diving into assumptions there needs to be a clear problem statement. I treated my challenge as the problem statement: improve the ASOS app in order to increase conversion rates. Probably not the best wording. Problem statements should follow a more structured format addressing the criteria below:

[Our service/product] was designed to achieve [these goals]. We have observed that the product/service isn’t meeting [these goals], which is causing [this adverse effect] to our business. How might we improve [service/product] so that our customers are more successful based on [these measurable criteria]?

Lean UX, O’Reilly

As this is hypothetical, I can't say I know the goals of the ASOS app, but I can take a good guess based on a quick Google search and their press releases.

“Our core customer is the twenty-something fashion-lover: an avid consumer and communicator who is inspired by friends, celebrities and the media.”

From this I created a more fleshed out problem statement: The ASOS app was designed to be the favoured place for twenty-something fashion lovers to be inspired by fashion and buy clothes. The ASOS app isn’t meeting these goals which is causing a stagnation in sales and profit loss to our business. How might we improve the ASOS app so that our customers will want to buy more clothes and follow through with sales.


Proto-personas

Once you have a clear problem statement it's time to capture the audience you are focusing on. Proto-personas are an archetype of the key users you will be focusing on. A proto-persona differs from a regular persona in that it is purely based on assumptions which you will later validate. As I did not have access to the ASOS team, my assumptions were based on a few press releases and previous case studies on ASOS. From that I created one proto-persona.

In a client setting this is good in two ways. Firstly, you are getting them to think about their users. Secondly, it's a quick and dirty way to capture their current knowledge of their users.

Hypothesis Statement

Now that I have a primary persona (in most cases you’d have more than one) I could come up with ideas on how I might address their pain points. I had a look at the current ASOS app, did a quick heuristic evaluation, looked at what competitors did, and apps such as Pinterest that had features that could potentially address some pain points. From that I came up with the following hypotheses using the following template:

I believe that we will achieve [business outcome] if the user [persona] can achieve [user outcome] with this feature [feature].

Second Step: Testing my assumptions

I then wanted to test these assumptions with real users, via a quick survey which got 20 respondents, 5 user interviews, and 1 guerrilla test of the current ASOS app. From this I gathered the data into themes (affinity mapping) and got the following insights.

From this quick research, which only took a couple of days, I updated my proto-persona and re-prioritised my hypotheses based on the pain points that would critically affect both the business and users.

The key point I am trying to make is that proto-personas and hypotheses are fluid artefacts; they are not set in stone, and the beauty of a lean approach is to constantly validate and update your assumptions. It's also a good way of quickly getting the client to see the benefit of a design thinking approach in a 'sneak peek' way. Once they start to see the benefits that a bit of research and user input can bring, they may be more inclined to invest in a human-centred design approach in the future.

In my next blog I’ll be going through the prototype I made based on these insights and how I iterated its design based on testing it with only 5 users.

Replacing your Secure FTP Server with Amazon Simple Storage Service

First published at


What if I told you that you could get rid of most of your servers, however still consume the services that you rely on them for? No longer will you have to worry about ensuring the servers are up all the time, that they are regularly patched and updated. Would you be interested?

To quote Werner Vogels: “No server is easier to manage than no server”.

In this blog, I will show you how you can potentially replace your secure ftp servers by using Amazon Simple Storage Service (S3). Amazon S3 provides additional benefits, for instance, lifecycle policies which can be used to automatically move older files to a cheaper storage, which could potentially save you lots of money.


The solution is quite simple and is illustrated in the following diagram.

Replacing Secure FTP with Amazon S3 - Architecture

We will create an Amazon S3 bucket, which will be used to store files. This bucket will be private. We will then create some policies that will allow our users to access the Amazon S3 bucket, to upload/download files from it. We will be using the free version of CloudBerry Explorer for Amazon S3 to transfer the files to/from the Amazon S3 bucket. CloudBerry Explorer is an awesome tool; its interface is quite intuitive, and for those that have used a GUI secure FTP client it looks very similar.

With me so far? Perfect. Let the good times begin 😉

Let's first configure the AWS side of things and then we will move on to the client configuration.

AWS Configuration

In this section we will configure the AWS side of things.

  1. Login to your AWS Account
  2. Create a private Amazon S3 bucket (for the purpose of this blog, I have created an S3 bucket in the region US East (North Virginia) called secureftpfolder)
  3. Use the JSON below to create an AWS Identity and Access Management (IAM) policy called secureftp-policy. This policy will allow access to the newly created S3 bucket (change the Amazon S3 bucket arn in the JSON to your own Amazon S3 bucket’s arn)
     {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "SecureFTPPolicyBucketAccess",
                "Effect": "Allow",
                "Action": "s3:ListBucket",
                "Resource": [
                    "arn:aws:s3:::secureftpfolder"
                ]
            },
            {
                "Sid": "SecureFTPPolicyObjectAccess",
                "Effect": "Allow",
                "Action": "s3:*",
                "Resource": [
                    "arn:aws:s3:::secureftpfolder/*"
                ]
            }
        ]
     }
    4. Create an AWS IAM group called secureftp-users and attach the policy created above (secureftp-policy) to it.

  5. Create AWS IAM Users with Programmatic access and add them to the AWS IAM group secureftp-users. Note down the access key and secret access key for the user accounts as these will have to be provided to the users.

That's all that needs to be configured on the AWS side. Simple, isn't it? Now let's move on to the client configuration.

Client Configuration

In this section, we will configure CloudBerry Explorer on a computer, using one of the usernames created above.

  1. On your computer, download CloudBerry Explorer for Amazon S3 from the CloudBerry website. Note down the access key that is provided during the download, as this will be required when you install it.
  2. Open the downloaded file to install it, and choose the free version when you are provided a choice between the free version and the trial for the pro version.
  3. After installation has completed, open CloudBerry Explorer.
  4. Click on File from the top menu and then choose New Amazon S3 Account.
  5. Provide a meaningful name for the Display Name (you can set this to the username that will be used)
  6. Enter the Access key and Secret key for the user that was created for you in AWS.
  7. Ensure Use SSL is ticked and then click on Advanced and change the Primary region to the region where you created the Amazon S3 bucket.
  8. Click OK to close the Advanced screen and return to the previous screen.
  9. Click on Test Connection to verify that the entered settings are correct and that you can access the AWS account using the access key and secret access key.
  10. Once the settings have been verified, return to the main screen for CloudBerry Explorer. The main screen is divided into two panes, left and right. For our purposes, we will use the left-hand side pane to pick files in our local computer and the right-hand side pane to correspond to the Amazon S3 bucket.
  11. In the right-hand side pane, click on Source and from the drop down, select the name you gave the account that was created in step 4 above.
  12. Next, in the right-hand side pane, click on the green icon that corresponds to External bucket. In the window that comes up, for Bucket or path to folder/subfolder enter the name of the Amazon S3 bucket you had created in AWS (I had created secureftpfolder) and then click OK.
  13. You will now be returned to the main screen, and the Amazon S3 bucket will be visible in the right-hand side pane. Double click on the Amazon S3 bucket name to open it. Voila! You have successfully created a connection to the Amazon S3 bucket.
  14. To copy files/folders from your local computer to the Amazon S3 bucket, select the file/folder in the left-hand pane and then drag and drop it to the right-hand pane.
  15. To copy files/folders from the Amazon S3 bucket to your local computer, drag and drop the files/folder from the right-hand pane to the appropriate folder in the left-hand pane.


So, tell me honestly, was that easy or what?

Just to ensure I have covered all bases (for now), here are a few questions I would like to answer.

A. Is the transfer of files between the local computer and Amazon S3 bucket secure?

Yes, it is secure. This is due to the Use SSL setting that we saw when configuring the account within CloudBerry Explorer.

B. Can I protect subfolders within the Amazon S3 bucket, so that different users have different access to the subfolders?

Yes, you can. You will have to modify the AWS IAM policy to do this.

C. Instead of a GUI client, can I access the Amazon S3 bucket via a script?

Yes, you can. You can download the AWS tools to access the Amazon S3 bucket using the command line interface or PowerShell. The AWS CLI and the AWS Tools for PowerShell are both available from the AWS website.
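As a quick illustrative sketch using the AWS Tools for PowerShell (the bucket name is the example one from this post, and the region and file names are placeholders; substitute your own values):

```powershell
# Requires the AWS Tools for PowerShell module (AWSPowerShell or AWS.Tools.S3)
# Use the same access key and secret key configured in CloudBerry Explorer
Set-AWSCredential -AccessKey $accessKey -SecretKey $secretKey -StoreAs s3user
Set-DefaultAWSRegion -Region ap-southeast-2   # region is illustrative

# Upload a local file to the bucket (secureftpfolder is the bucket from this post)
Write-S3Object -BucketName secureftpfolder -File .\report.csv -Key report.csv -ProfileName s3user

# Download a file from the bucket back to the local computer
Read-S3Object -BucketName secureftpfolder -Key report.csv -File .\report.csv -ProfileName s3user

# List the contents of the bucket
Get-S3Object -BucketName secureftpfolder -ProfileName s3user
```

Like the CloudBerry Explorer transfers, these calls go over HTTPS, so the data is encrypted in transit.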

I hope the above comes in handy to anyone thinking of moving their secure ftp (or normal ftp) servers to a serverless architecture.

User Psychology and Experience

Often, when designing a product or solution for a customer, in planning and concept development, we might consider the user experience to be one of two (or both) things:

  1. User feedback regarding their interaction with their technological environment/platforms
  2. The experience the user is likely to have with given technology based on various factors that contribute to delivering that technology to them: presentation, training, accessibility, necessity, and intuitiveness, just to name a few.

These factors are not solely focused on the user and their role in the human-technology interaction process, but also their experience of dealing with us as solution providers. That is to say, the way in which we engage the experience and behaviour of the user is just as important to the delivery of new technology as is developing our own broader understanding of human-technology interaction behaviour. UX is a colourful (pun intended) profession/skill to have within this industry. Sales pitches, demos and generally ‘wowing the crowd’ are a few of the ways in which UX-ers can deploy their unique set of skills to steer user behaviour and responsiveness in a positive direction, for the supplier especially.

Not all behavioural considerations with regards to technology are underpinned by the needs or requirements of a user, however. There are more general patterns of behaviour and characteristics within people, particularly in a working environment, that can be observed to indicate how a user experiences [new] technology, including the functionality and valued content that, at a base level, captures a user’s attention. The psychology of this attention can be broken down into a simplified model: the working mechanisms of perception as a reaction to stimulus, and how consistent the behaviour is that develops out of this. The stimuli in question are usually the most common ones relating to technology: visual and auditory.

You’ve likely heard of, or experienced first-hand, the common types of attention in everyday life. The main three are identified as selective, divided and undivided. Through consistency of behavioural outcomes, or observing in a use case a consistent reaction to stimuli, we look to observe a ‘sustainability of attention or interest’ over an extended period of time, even if repetition of an activity or a set of activities is involved. This means that the solution, or at the very least the awareness and training developed to sell a solution, should serve the goal of achieving sustainable attention.

How Can We Derive a Positive User Experience through Psychology?

Too much information equals lack of cognitive intake. From observation and general experience, a person’s attention, especially when captured within a session, a day or a week, is a finite resource. Many other factors of an individual’s life can create a cocktail of emotions which makes people, in general, unpredictable in a professional environment. The right amount of information, training and direct experience should be segmented based on a gauge of the audience’s attention. Including reflection exercises or on-the-spot feedback, especially in user training, can give you a good measure of this. The mistake of cognitively overloading the user is most visible when a series of options are presented as viable routes to the desired solution or outcome. Too many options can, at worst, create confusion, an aversion to the solution and to new technologies in general, and an overall messy user experience.

Psychology doesn’t have to be a full immersion into the human psyche, especially when it comes to understanding the user experience. Simple empathy can be a powerful tool to mitigate some of the aforementioned issues of attention and to prevent the cultivation of repeated adverse behaviour from users. When it boils down to the users, most scenarios in the way of behaviour and reaction have been seen and experienced before, irrespective of the technology being provided. Fundamentally, it is a human experience that [still] comes first before we look at bridging the user and the technology. For UX practitioners, tools are already in place to achieve this, such as user journey maps and story capturing.

There are new ideas still emerging around the discipline of user experience, ‘UX’. From my experience with it thus far, it presents a case that it could integrate very well with modern business analysis methodologies. It’s more than designing the solution, it’s solutions based on how we, the human element, are designed.

Managing SailPoint IdentityNow Roles via API and PowerShell

Managing SailPoint IdentityNow Role Groups typically involves leveraging the SailPoint IdentityNow Portal for the creation and on-going management. That’s because the API for Roles is not published or documented.

What happens then if you have many to create or update/manage? How does the IdentityNow Portal use the non-published, undocumented APIs to create and manage them? I’ve worked it out and am documenting it here in the interim until the APIs get versioned and published.

Note: There is a chance that the Roles API may change, so use at your own risk.

Roles API Authentication

First up, the Roles API URI looks like this:

The /cc/ part is a good indication that the API isn’t documented, but also that the Authentication mechanism to interact with it uses a JWT Bearer Token. I covered how to obtain a JWT Bearer Token specifically for interacting with these APIs in this post here. I’m not going to cover that here, so read that post to get up to speed with that process.
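As an illustrative sketch (the org-based base URI and the header shape are my assumptions, not published documentation), an authenticated session for these APIs might be set up like this:

```powershell
# Assumed IdentityNow org name; replace with your own tenant's org name
$orgName = "mytenant"
$baseURI = "https://$($orgName).api.identitynow.com"

# $JWTToken is the JWT Bearer Token obtained as per the post referenced above
$IDNHeaders = @{ Authorization = "Bearer $($JWTToken)" }
```

Subsequent calls in this post can then pass these headers via -Headers (or a -WebSession built from them).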

Associated with managing the criteria of Roles we will also need to reference IdentityNow Sources. We can do that via a non-published API too.

Listing all IdentityNow Roles

Similar to the Governance Groups that I detailed in this post, Roles can be returned on an individual basis (if you know the ID of the Role, which you won’t). So the simplest way is to query and return them all (no, the Search (BETA) doesn’t support Roles either). The API to list all roles is:

Doing this then via PowerShell to return all roles looks like this:

$Roles = Invoke-RestMethod -Uri "" -WebSession $IDN
and finding the Roles you are interested in can be done using Where-Object
$Roles.items | Where-Object {$_.displayName -like "Kloud*"}

SailPoint Roles.PNG

Creating Roles

The example script below shows the generation of information for a Role Object. Specifically:

  • name
  • displayname
  • description
  • disabled (whether the Role is enabled or disabled)
  • owner (the IdentityNow NAME attribute value of the owner of the Role)
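As a hedged sketch only (the /cc/api/role/create endpoint name and the payload shape are assumptions inferred from observing the Portal, and may differ), creating a Role with those properties might look like:

```powershell
# Assumed values; update for your tenant and token
$orgName    = "mytenant"
$IDNHeaders = @{ Authorization = "Bearer $($JWTToken)" }

# Role object properties as per the list above (values are illustrative)
$roleBody = @{
    name        = "Kloud Staff"
    displayName = "Kloud Staff"
    description = "All Kloud Staff"
    disabled    = $false
    owner       = "darren.robinson"   # IdentityNow NAME attribute of the Role owner
} | ConvertTo-Json

# POST to the (assumed) unpublished Role create endpoint
$newRole = Invoke-RestMethod -Method Post `
    -Uri "https://$($orgName).api.identitynow.com/cc/api/role/create" `
    -Headers $IDNHeaders -ContentType "application/json" -Body $roleBody
```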

Executing this creates the Role.

Role Created.PNG

Looking in the Portal we can then see the Role.

Role Created in Portal.PNG

Managing Role Criteria

The Role Criteria will obviously depend on what your criteria are. Are they based on Standard Group Rules, a List of Identities, or Custom? I’ll cover a List of Identities and Group-based criteria.

Adding a List of Identities to a Role

This isn’t too dissimilar to Governance Groups. We search for the list of users we want to add to the Role and add them to a collection. The key difference is that we use Name instead of ID. We then specify the type of Criteria (IDENTITY_LIST) and the ID of the Group to update.

Executing the Script

The example script below uses Basic Auth headers against the v2 endpoint to search for and locate the users to add to the Role. It then switches the headers over to JWT Bearer Auth, generates the body for the request to update the Role Group, and updates the Role Group.
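A sketch of that flow, assuming hypothetical endpoint URIs and an IDENTITY_LIST selector shape (inferred from observation, not documented):

```powershell
# v2 search with Basic Auth headers to locate the users (URI shape assumed)
$v2Headers = @{ Authorization = "Basic $($encodedAuth)" }
$query = Invoke-RestMethod -Headers $v2Headers `
    -Uri "https://$($orgName).api.identitynow.com/v2/search/identities?query=kloud"
$identityNames = $query | Select-Object -ExpandProperty name

# Switch to JWT Bearer headers and build the Identity List update for the Role
$IDNHeaders = @{ Authorization = "Bearer $($JWTToken)" }
$update = @{
    id       = $RoleID
    selector = @{
        type      = "IDENTITY_LIST"
        aliasList = $identityNames   # NAME attribute values, not IDs
    }
} | ConvertTo-Json -Depth 4

# Update the Role Group via the (assumed) unpublished update endpoint
Invoke-RestMethod -Method Post `
    -Uri "https://$($orgName).api.identitynow.com/cc/api/role/update" `
    -Headers $IDNHeaders -ContentType "application/json" -Body $update
```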

Executing the script looks as per below and runs through successfully.

Adding IdentityList Members to Role Group.PNG

Checking the Role Group in the IdentityNow Portal

Identity List Updated - Portal.PNG

Adding a Members Based on Criteria to a Role

Adding users to a Role based on Criteria is similar to above except rather than searching for Identities we are adding the criteria for a role.

The criteria can be based on an Identity Attribute, an Account Attribute, or an Entitlement. I’ll show using an Identity Attribute and an Account Attribute.

Account Attribute based Criteria

Account criteria are based on attributes available in the Schema of a Source. In order to reference the Source we need the ID of the Source. This is available via the non-published Source API cc/api/source.

The example script below shows getting all Sources and finding the one you want to base your Role Criteria on (based on name). You can of course mix criteria across many sources, but I’m showing it from one. Running the following script will return your Sources. You require the ID of the Source whose attributes you want to base your criteria on. The variable that contains this, used further along in these examples, is $RoleSource.
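A sketch of that lookup, with the list URI assumed from the cc/api/source base mentioned above (the Source name is illustrative):

```powershell
# Return all Sources via the unpublished Source API (URI shape assumed)
$sources = Invoke-RestMethod `
    -Uri "https://$($orgName).api.identitynow.com/cc/api/source/list" `
    -Headers @{ Authorization = "Bearer $($JWTToken)" }

# Find the Source to base the Role Criteria on (by name) and keep its ID
$RoleSource = ($sources | Where-Object { $_.name -like "Active Directory*" }).id
```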

Now that we have the Source ID for the Source that contains the attribute that we want to base our Role Criteria on, we need to build the criteria. Let’s start with a simple one, a Single Criterion.

Here is an example JSON document with a Single Criteria that contains the variables for the Role that we just created (the Role ID), the Source ID for the Source with the attribute we are using for the criteria, and the value we want to match ($RoleCriteriaValue). The Attribute from my Source that I’m using is Supplier. Update for the attribute you’re using on your Source.
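Mirroring the structure of the multi-criteria template shown later in this post, a single-criterion version with those variables might look like this:

```powershell
# $RoleID, $RoleSource and $RoleCriteriaValue are set earlier; attribute.Supplier is
# the Source attribute used in this example - update for your own Source attribute
$RoleCriteria = "{`"id`":`"$($RoleID)`",`"accessProfileIds`":[],`"selector`":{`"type`":`"COMPLEX_CRITERIA`",`"entitlementIds`":[],`"aliasList`":[],`"valueMap`":[],`"complexRoleCriterion`":{`"operation`":`"OR`",`"children`":[{`"operation`":`"AND`",`"children`":[{`"operation`":`"EQUALS`",`"key`":{`"type`":`"ACCOUNT`",`"property`":`"attribute.Supplier`",`"sourceId`":`"$($RoleSource)`"},`"value`":`"$($RoleCriteriaValue)`"}]}]}}}"
```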

After converting the JSON object to a PowerShell Object using ConvertFrom-JSON it looks like this:
Single Role Criteria.PNG

Having it as a PowerShell Object also makes it easy to modify. If you want to change the criteria value to match against, the Operator, and/or the operation, just change the value, e.g. the following will change the value to match from “Kloud Solutions Pty Ltd” to “Darren J Robinson”.

$RoleCriteria.selector.complexRoleCriterion.children.value = "Darren J Robinson"

Convert back to JSON to POST via the web request to the API, thereby updating the Role with the criteria:
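As a sketch (the update endpoint URI is an assumption), that conversion and POST might look like:

```powershell
# Convert the modified PowerShell object back to JSON for the update request
$updateBody = $RoleCriteria | ConvertTo-Json -Depth 10

# POST the criteria update to the (assumed) unpublished Role update endpoint
Invoke-RestMethod -Method Post `
    -Uri "https://$($orgName).api.identitynow.com/cc/api/role/update" `
    -Headers @{ Authorization = "Bearer $($JWTToken)" } `
    -ContentType "application/json" -Body $updateBody
```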

Update Single Criteria.PNG

In the Portal we can see that the Criteria has been added.

Update Single Criteria - Portal.PNG

And you will notice I have a Role Refresh call after the update, to recalculate the Role Membership now that it is based on Criteria. If you are updating/adding criteria to numerous roles, only call the Refresh once at the end.
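The refresh call itself might look like this (the endpoint URI is an assumption):

```powershell
# Trigger recalculation of criteria-based Role membership (assumed endpoint)
Invoke-RestMethod -Method Post `
    -Uri "https://$($orgName).api.identitynow.com/cc/api/role/refresh" `
    -Headers @{ Authorization = "Bearer $($JWTToken)" }
```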

Role Refresh.PNG

Adding Multiple Criterion

Multiple criteria are just more information to pass in the update. The format can be a little confusing, so here is a template for three criteria showing the three Operators (EQUALS, CONTAINS and NOT_EQUALS).

$RoleCriteria = "{`"id`":`"$($RoleID)`",`"accessProfileIds`":[],`"selector`":{`"type`":`"COMPLEX_CRITERIA`",`"entitlementIds`":[],`"aliasList`":[],`"valueMap`":[],`"complexRoleCriterion`":{`"operation`":`"OR`",`"children`":[{`"operation`":`"AND`",`"children`":[{`"operation`":`"EQUALS`",`"key`":{`"type`":`"ACCOUNT`",`"property`":`"attribute.Supplier`",`"sourceId`":`"$($RoleSource)`"},`"value`":`"$($RoleCriteriaValue)`"},{`"operation`":`"NOT_EQUALS`",`"key`":{`"type`":`"ACCOUNT`",`"property`":`"`",`"sourceId`":`"$($RoleSource)`"},`"value`":`"New Zealand`"},{`"operation`":`"CONTAINS`",`"key`":{`"type`":`"ACCOUNT`",`"property`":`"`",`"sourceId`":`"$($RoleSource)`"},`"value`":`"`"}]}]}}}"

When converted to a PowerShell Object and displayed, it looks like this:

3 Account Criteria Options.PNG

Identity Attribute based Criteria

The last example is Identity Attribute Criteria. This is simpler than the Account Attribute Criteria as we just need to reference the Identity Attribute and don’t need to reference a Source. A single criterion for isEmployee = True looks like this:
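Drawing on the IDENTITY criterion visible in the raw JSON further below, a single Identity Attribute criterion might be built like this ($RoleID is assumed set from the earlier Role creation):

```powershell
# Single IDENTITY criterion for attribute.isemployee = true; note no sourceId is needed
$RoleCriteria = "{`"id`":`"$($RoleID)`",`"accessProfileIds`":[],`"selector`":{`"type`":`"COMPLEX_CRITERIA`",`"entitlementIds`":[],`"aliasList`":[],`"valueMap`":[],`"complexRoleCriterion`":{`"operation`":`"OR`",`"children`":[{`"operation`":`"EQUALS`",`"key`":{`"type`":`"IDENTITY`",`"property`":`"attribute.isemployee`"},`"value`":`"true`"}]}}}"
```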


As a second Criteria Group, though, in the Portal it looks like this:

2nd Criteria Group.PNG

And the JSON object (raw, without variables) looks like this:

"{`"id`":`"2c918086663fbbd0016612345678909876`",`"accessProfileIds`":[],`"selector`":{`"type`":`"COMPLEX_CRITERIA`",`"entitlementIds`":[],`"aliasList`":[],`"valueMap`":[],`"complexRoleCriterion`":{`"operation`":`"OR`",`"children`":[{`"operation`":`"AND`",`"children`":[{`"operation`":`"EQUALS`",`"key`":{`"type`":`"ACCOUNT`",`"property`":`"attribute.Supplier`",`"sourceId`":`"2c91808365f742620165f9ba0e831bf8`"},`"value`":`"Kloud Solutions Pty Ltd`"},{`"operation`":`"NOT_EQUALS`",`"key`":{`"type`":`"ACCOUNT`",`"property`":`"`",`"sourceId`":`"2c91808365f742620165f9ba0e831bf8`"},`"value`":`"New Zealand`"},{`"operation`":`"CONTAINS`",`"key`":{`"type`":`"ACCOUNT`",`"property`":`"`",`"sourceId`":`"2c91808365f742620165f9ba0e831bf8`"},`"value`":`"`"}]},{`"operation`":`"EQUALS`",`"key`":{`"type`":`"IDENTITY`",`"property`":`"attribute.isemployee`"},`"value`":`"true`"}]}}}"


Using the cc/api/role APIs we can Create Role Groups, Update Role Groups to contain Criteria, and Refresh Role Groups to have the membership calculated. With the base functionality detailed, we can now use this to create and manage our Role Groups through automation. Happy Days.
