One of the benefits of cloud services is the continual enhancement that vendors provide based on feedback from their customers. One item of feedback that Microsoft has heard often is the request to know what state a Guest user in Azure AD is in. In the last couple of days Microsoft exposed two additional attributes on the User object class in Azure AD: externalUserState and externalUserStateChangeDateTime.
This means we can now query Microsoft Graph for B2B users and see whether they are Accepted or PendingAcceptance, along with the date and time of the last change.
My customers have been wanting such information and would like to report on it. Here is an example PowerShell script I’ve put together that queries Microsoft Graph to locate B2B Users in the Accepted and PendingAcceptance states and outputs summary information into two HTML Reports. One for Accepted and one for PendingAcceptance.
Update the following script:
Line 2 for a Microsoft Azure PowerShell module (such as AzureAD) that contains the Microsoft.IdentityModel.Clients.ActiveDirectory.dll library
Line 5 for your tenant name
Lines 15 and 16 for a username and password
Line 19 for the location you want the reports written to
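The core of the query can be sketched as follows. This is an illustrative Python sketch rather than the original PowerShell, and it assumes the two new attributes surface as externalUserState and externalUserStateChangeDateTime on the Graph beta endpoint; it simply builds the request URL for guests in a given state.

```python
# Build the Microsoft Graph request URL for B2B guests in a given state.
# Attribute names are assumptions based on the Graph beta endpoint at the
# time of writing; verify against your tenant before relying on them.
from urllib.parse import quote

GRAPH_BASE = "https://graph.microsoft.com/beta"

def b2b_state_query(state: str) -> str:
    """Return the Graph URL that filters Guest users by external user state."""
    filter_expr = f"userType eq 'Guest' and externalUserState eq '{state}'"
    select = "displayName,mail,externalUserState,externalUserStateChangeDateTime"
    return f"{GRAPH_BASE}/users?$filter={quote(filter_expr)}&$select={select}"

accepted_url = b2b_state_query("Accepted")
pending_url = b2b_state_query("PendingAcceptance")
```

Each URL can then be called with a Bearer token obtained for your tenant, and the results fed into the HTML reports.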
Running the Script will then generate the Accepted and Pending HTML Reports.
Here is an example of the AcceptedB2BUsers HTML Report.
With the addition of these two attributes to the Microsoft Graph, we can now query for B2B users based on their state and quickly report on them using PowerShell.
I recently had a scenario where I wanted to add an address space to a Virtual Network and encountered an issue where it was not possible to modify the address space while VNet Peering was in use. This is likely due to the fact that the routes to the peered VNet that are applied through the peering only get updated at the time the peer is created and cannot be dynamically updated.
The following error details this:
Failed to save virtual network changes Failed to save address space changes to virtual network ‘vnet1’. Error: Address space of the virtual network /subscriptions/xxxxxxx/resourceGroups/rg1/providers/Microsoft.Network/virtualNetworks/vnet1 cannot change when virtual network has peerings.
On the surface this isn’t a big deal: just delete the peering, modify the address space, and re-create the peering. While the actual steps to achieve the changes are straightforward, the VNet I was working with was a Hub VNet; as such, there were multiple VNet peerings in place to spoke VNets. Additionally, there was no documentation of the configurations specific to each peering.
I wanted to export all of the peering configurations so that I had a backup and record of them, and could re-apply them the same way after the address space was added to the Hub network.
Exporting VNet Peering Configuration
The following snippet will export all of the peering configurations for the specified VNet to a file named “vnetname-peerings.csv”. The configurations recorded for the peering include:
Important Note: Removing the VNet peering will disrupt communication between the VNets. You should plan for and accommodate this within your change control processes.
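The shape of the export can be sketched as follows. This is an illustrative Python sketch rather than the original PowerShell, and the peering dictionaries are hypothetical stand-ins for the objects Get-AzureRmVirtualNetworkPeering returns; the property names mirror the real Azure peering settings.

```python
# Illustrative sketch of the peering export: render each peering's
# configuration as one CSV row so it can be re-applied later.
import csv
import io

FIELDS = ["Name", "RemoteVirtualNetwork", "AllowVirtualNetworkAccess",
          "AllowForwardedTraffic", "AllowGatewayTransit", "UseRemoteGateways"]

# Hypothetical stand-ins for the peering objects returned by Azure PowerShell.
peerings = [
    {"Name": "peer1", "RemoteVirtualNetwork": "vnet2",
     "AllowVirtualNetworkAccess": True, "AllowForwardedTraffic": True,
     "AllowGatewayTransit": True, "UseRemoteGateways": False},
    {"Name": "peer2", "RemoteVirtualNetwork": "vnet3",
     "AllowVirtualNetworkAccess": True, "AllowForwardedTraffic": False,
     "AllowGatewayTransit": True, "UseRemoteGateways": False},
]

def export_peerings(vnet_name: str, peerings: list) -> str:
    """Render the peering configurations as CSV text (one row per peering)."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(peerings)
    # In the real script this text would be written to f"{vnet_name}-peerings.csv"
    return buf.getvalue()

csv_text = export_peerings("vnet1", peerings)
```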
Now that all of the peering configurations were exported, we can delete the peering and then make the required modifications to the address space.
Re-create VNet Peering Configuration
After the modifications were applied to the VNet, we are ready to re-create the peering configuration using the exported configuration. You can do this from the Portal or with PowerShell. In my example, I simply re-added it from the Portal using the values recorded in the exported CSV. If you wanted to use PowerShell, the Add-AzureRmVirtualNetworkPeering cmdlet can be used, substituting the values that were exported to the CSV.
However, when I saved the configuration the following error was produced:
Failed to add virtual network peering ‘peer1’. Error: Cannot create or update peering /subscriptions/xxxxxxx/resourceGroups/rg1/providers/Microsoft.Network/virtualNetworks/vnet1/virtualNetworkPeerings/peer1 because remote peering /subscriptions/xxxxxxx/resourceGroups/rg2/providers/Microsoft.Network/virtualNetworks/vnet3/virtualNetworkPeerings/peer1 referencing parent virtual network /subscriptions/xxxxxxx/resourceGroups/rg1/providers/Microsoft.Network/virtualNetworks/vnet1 is in Disconnected state. Update or re-create the remote peering to get it back to Initiated state. Peering gets Disconnected when remote vnet or remote peering is deleted and re-created.
The reason for this error is that the corresponding peering on the remote VNet was in a disconnected state.
As the error suggests, you have two options to resolve this: update or re-create the peering to get it back into the Initiated state. Once this is done, we can create the peering in the parent VNet. You could use the script snippet above to back up the peering configuration and then delete and re-create it. However, the easiest option is simply to update the peering using the following command:
There are various options to package and deploy a SharePoint Framework solution, and as part of the packaging and deployment process developers have to identify the best approach for their team. Planning the right approach for your solution can become a nightmare if you haven’t weighed the options properly.
Having worked on multiple SPFx implementations for some time now, I have formed a view of the various options and approaches. Hence in this blog, we will look at these options and the merits and limitations of each.
At a high level, below are the main components that are deployed as part of SPFx package deployment:
The minified JavaScript file for all code
The web part manifest file
The web part compiled file
The package (.sppkg) file with all package information
Please check this blog for an overview of the steps for building and packaging an SPFx solution. The packaged solution (.sppkg) file can then be deployed to an App Catalog site. The assets of the package (items 1 to 3 above) can be deployed using any of the four options below. We will look at the merits and limitations of each.
1. Deploy to Azure CDN or Office 365 CDN
The assets could be deployed to an Azure CDN. The deployment script is already a part of SPFx solution and could be done from within the solution. More detailed steps for setting this up are here.
Note: Please remember to enable CORS on the storage account before deployment of the package.
If CORS is not enabled before the CDN profile is used, you might have to delete and re-create the storage account.
Easy deployment using gulp
Faster access to assets and web part resources because of CDN hosting
Add geographical restrictions (if needed)
Dependency on Azure Subscription
Proper set-up steps are required for Azure CDN. In some cases, if the CDN is not set up properly, the deployment has to be done again.
Since the assets are deployed to a CDN endpoint, this may not be recommended if the assets need restricted access
2. Deploy to Office 365 Public CDN
For this option, you will need to enable and set up Office 365 CDN in your tenancy before deployment. For more details of setting this up, check the link here.
Faster access to assets and web part resources because of CDN hosting
No Azure subscription requirement
Content is accessible from SharePoint Interface
Manual copy of asset files to the CDN-enabled SharePoint library
Office 365 CDN is a tenant setting and has to be enabled for the whole tenancy
Since the assets are deployed to a CDN endpoint, this may not be recommended if the assets need restricted access
Accidental deletion could cause issues
3. Deploy to SharePoint document library
Another option is to copy the compiled assets to a SharePoint document library anywhere in the tenancy. Setting this up is quite simple: first set “includeClientSideAssets”: false in the package-solution.json file, and then set the CDN details in the write-manifests.json file.
No need of additional Azure hosting or enabling Office 365 CDN
Access to asset files from the SharePoint interface
Manual copy of asset files to the SharePoint library
Accidental deletion could cause issues
4. Deploy to ClientAssets in App Catalog
From SPFx version 1.4, it is possible to include assets as part of the package file and deploy them to the hidden ClientAssets library residing in the App Catalog. This is enabled by setting “includeClientSideAssets”: true in the package-solution.json file.
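As a sketch, the relevant portion of package-solution.json might look like the following; the solution name, id, and paths are illustrative placeholders, with only the includeClientSideAssets flag being the setting discussed here.

```json
{
  "solution": {
    "name": "my-spfx-solution",
    "id": "00000000-0000-0000-0000-000000000000",
    "version": "1.0.0.0",
    "includeClientSideAssets": true
  },
  "paths": {
    "zippedPackage": "solution/my-spfx-solution.sppkg"
  }
}
```

Setting the flag to false instead is what options 1 to 3 above rely on, with the CDN or library URL then supplied in write-manifests.json.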
No extra steps needed to package and deploy assets
Increases the payload of the package file
Risk to tenant admins of deploying script files to the tenant App Catalog
In this blog, we looked at the various options for SPFx deployment and the merits and limitations of each approach.
The SailPoint IdentityNow Request Center comes pre-populated with 130 Applications (as shown below) that by default are visible to users in the Dashboard and can be requested via the Request Center. Whilst this is great, the majority are often not applicable, and you need to configure each individual application to remove visibility and requestability. You could of course ask your IdentityNow Support representative to do this for you, or you could manage it yourself. Let’s go with option B and I’ll show you how.
To disable visibility of an Application, and to also remove it from being requested through the Request Center, there are two options that need to be toggled off: Enabled For Users, and Visible in the Request Center.
Say you want to remove all of them from being visible and requestable. You would need to open each app, toggle the slider and the radio button, and select Save. That’s a minimum of four mouse clicks and some mouse scrolling, times 130 apps; or you can do it via the IdentityNow API in under 60 seconds. Option B please.
Let’s remove all Applications from user visibility (and the Dashboard). The process is simply to retrieve all Applications, then update each one to toggle off the options for visibility. The following script does just that.
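The toggle logic can be sketched as below. This is an illustrative Python sketch rather than the original PowerShell, and the flag names (launchpadEnabled, appCenterEnabled) are assumptions about the app object; inspect a real app object returned by your tenant’s (undocumented) app API before relying on them.

```python
# Sketch: build the update body for each app that turns off both the
# "Enabled For Users" and "Visible in the Request Center" options.
# Flag names are assumptions, not confirmed API fields.

def disable_visibility(app: dict) -> dict:
    """Return the minimal update body that hides an app from users
    and removes it from the Request Center."""
    return {
        "id": app["id"],
        "launchpadEnabled": False,   # assumed "Enabled For Users" flag
        "appCenterEnabled": False,   # assumed "Visible in Request Center" flag
    }

# Hypothetical app list as returned by the list-apps call.
apps = [{"id": "101", "name": "App A"}, {"id": "102", "name": "App B"}]
updates = [disable_visibility(a) for a in apps]
```

Each update body would then be posted back per app with the JWT Bearer token, exactly as the loop in the script does.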
After updating each app the Request Center is empty. Much quicker than hundreds of mouse clicks.
With the ability to retrieve Applications and update them via the API repetitive configuration becomes super quick.
Implementations of Workday obviously vary from organisation to organisation. Workday HR being an authoritative source of identity means it is also often the initiator of processes for Identity Management. For one of our customers, base entitlements are derived from the Workday Job Profile and may specify an Active Directory account and an Office 365 licence.
The Get-WorkdayWorkerProvData cmdlet is invoked when calling Get-WorkdayWorkerAdv with the -IncludeWork flag. The result now returns ProvisioningGroup information as part of the returned PowerShell object. As shown below, Workday has indicated my identity requires Active Directory and Office 365 (email).
If you just want to return the Account Provisioning information you can with Get-WorkdayWorkerProvData.
Provisioning_Group Status Last_Changed
------------------ ------ ------------
Office 365 (email) Assigned 1/05/2017 9:31:21 PM
Active Directory Assigned 1/05/2017 9:57:37 PM
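The returned object is easy to filter for downstream provisioning. As a small illustration (in Python rather than the PowerShell object pipeline), the assigned provisioning groups above can be extracted like so:

```python
# The provisioning data shown above, mirrored as dictionaries for illustration.
prov_groups = [
    {"Provisioning_Group": "Office 365 (email)", "Status": "Assigned",
     "Last_Changed": "1/05/2017 9:31:21 PM"},
    {"Provisioning_Group": "Active Directory", "Status": "Assigned",
     "Last_Changed": "1/05/2017 9:57:37 PM"},
]

# Base entitlements to provision are the groups currently assigned.
assigned = [g["Provisioning_Group"] for g in prov_groups
            if g["Status"] == "Assigned"]
```

In PowerShell this is the equivalent of piping the result through Where-Object on the Status property.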
The Get-WorkdayWorkerMgmtData cmdlet is invoked when calling Get-WorkdayWorkerAdv with the -IncludeWork flag. The result now returns MgmtData information as part of the returned PowerShell object. The collection is an ordered list of the supervisory hierarchy for the object.
Expanding the collection we can see the supervisory hierarchy. Note: the top of the hierarchy is listed twice, as in this organisation the top reports to himself.
If you just want to return the Management Hierarchy you can with Get-WorkdayWorkerMgmtData.
The Get-WorkdayWorkerPhoto cmdlet is invoked when calling Get-WorkdayWorkerAdv with the -IncludePhoto and -PhotoPath flags. The result will output the photo to the path provided in the -PhotoPath option.
If you just want to export the Photo for a single user you can with Get-WorkdayWorkerPhoto.
The double diamond seems to be a popular method of approaching design thinking for most UX designers. Discover, Define, Develop, Deliver. But often clients and stakeholders start to run for the hills when they realise that the discover phase involves time-consuming user research, research that the client believes they don’t need to do because “they already know their users”. A lean approach to user experience design may be an easier way to sell design thinking to a client, as it involves starting with assumptions and creating hypotheses that may solve a problem, then testing these hypotheses with real users in a short time frame. I find this to be a better starting point with clients as it saves time and engages them more from the beginning of the project, making the design process more transparent.
Just in case that first paragraph made no sense to you, don’t worry. I find the best way of learning is by seeing an example. I decided to adopt a lean approach to a little challenge given to me – improve the ASOS app in order to increase conversion rates.
First Step: Declaring Assumptions
Before diving into assumptions there needs to be a clear problem statement. I treated my challenge as the problem statement – improve the ASOS app in order to increase conversion rates. Probably not worded the best. Problem statements should follow a more structured format addressing the criteria below:
[Our service/product] was designed to achieve [these goals]. We have observed that the product/service isn’t meeting [these goals], which is causing [this adverse effect] to our business. How might we improve [service/product] so that our customers are more successful based on [these measurable criteria]?
Lean UX, O’Reilly
As this is hypothetical, I can’t say I know the goals of the ASOS app, but I can take a good guess based on a quick Google search and their press releases.
“Our core customer is the twenty-something fashion-lover: an avid consumer and communicator who is inspired by friends, celebrities and the media.”
From this I created a more fleshed out problem statement: The ASOS app was designed to be the favoured place for twenty-something fashion lovers to be inspired by fashion and buy clothes. The ASOS app isn’t meeting these goals, which is causing a stagnation in sales and profit loss to our business. How might we improve the ASOS app so that our customers will want to buy more clothes and follow through with their purchases?
Once you have a clear problem statement it’s time to capture the audience you are focusing on. Proto-personas are an archetype of key users you will be focusing on, however a proto-persona is different to a regular persona as it is purely based on assumptions which you will later validate. As I did not have access to the ASOS team my assumptions were based on a few press releases and previous case studies on ASOS. From that I created one Proto-persona.
In a client setting this is good in two ways. Firstly, you are getting them to think about their users. Secondly, it’s a quick and dirty way to capture their current knowledge of their users.
Now that I have a primary persona (in most cases you’d have more than one) I could come up with ideas on how I might address their pain points. I had a look at the current ASOS app, did a quick heuristic evaluation, looked at what competitors did, and apps such as Pinterest that had features that could potentially address some pain points. From that I came up with the following hypotheses using the following template:
I believe that we will achieve [business outcome] if the user [persona] can achieve [user outcome] with this feature [feature]
Second Step: Testing my assumptions
I then wanted to test these assumptions with real users by sending out a quick survey (which got 20 respondents), conducting 5 user interviews, and running 1 guerrilla test of the current ASOS app. From this I gathered the data into themes (affinity mapping) and got the following insights.
From this quick research, which only took a couple of days, I updated my proto-persona and re-prioritised my hypotheses based on the pain points that would critically affect both the business and users.
The key point I am trying to make is that proto-personas and hypotheses are fluid artefacts; they are not set in stone, and the beauty of a lean approach is to constantly validate and update your assumptions. It’s also a good way of quickly getting the client to see the benefit of a design thinking approach in a ‘sneak peek’ way. Once they start to see the benefits that a bit of research and user input can have, they may be more inclined to invest more in a human centred design approach in the future.
In my next blog I’ll be going through the prototype I made based on these insights and how I iterated its design based on testing it with only 5 users.
What if I told you that you could get rid of most of your servers, yet still consume the services that you rely on them for? No longer would you have to worry about ensuring the servers are up all the time, or that they are regularly patched and updated. Would you be interested?
In this blog, I will show you how you can potentially replace your secure FTP servers using Amazon Simple Storage Service (S3). Amazon S3 provides additional benefits, for instance lifecycle policies, which can be used to automatically move older files to cheaper storage, potentially saving you lots of money.
The solution is quite simple and is illustrated in the following diagram.
We will create an Amazon S3 bucket, which will be used to store files. This bucket will be private. We will then create some policies that will allow our users to access the Amazon S3 bucket, to upload/download files from it. We will be using the free version of CloudBerry Explorer for Amazon S3 to transfer the files to/from the Amazon S3 bucket. CloudBerry Explorer is an awesome tool; its interface is quite intuitive, and for those that have used a GUI secure FTP client, it looks very similar.
With me so far? Perfect. Let the good times begin 😉
Lets first configure the AWS side of things and then we will move on to the client configuration.
In this section we will configure the AWS side of things.
Login to your AWS Account
Create a private Amazon S3 bucket (for the purpose of this blog, I have created an S3 bucket in the region US East (North Virginia) called secureftpfolder)
Use the JSON below to create an AWS Identity and Access Management (IAM) policy called secureftp-policy. This policy will allow access to the newly created S3 bucket (change the Amazon S3 bucket ARN in the JSON to your own bucket’s ARN).
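As a sketch, a minimal policy of this shape grants listing of the bucket plus upload/download/delete of its objects; the Sid values are illustrative, and you would substitute your own bucket ARN for secureftpfolder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowBucketListing",
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::secureftpfolder"
    },
    {
      "Sid": "AllowObjectTransfer",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::secureftpfolder/*"
    }
  ]
}
```

Note that bucket-level actions (s3:ListBucket) apply to the bucket ARN, while object-level actions apply to the bucket ARN with the /* suffix.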
Create an AWS IAM group called secureftp-users and attach the policy created above (secureftp-policy) to it.
Create AWS IAM Users with Programmatic access and add them to the AWS IAM group secureftp-users. Note down the access key and secret access key for the user accounts as these will have to be provided to the users.
That’s all that needs to be configured on the AWS side. Simple, isn’t it? Now let’s move on to the client configuration.
In this section, we will configure CloudBerry Explorer on a computer, using one of the usernames created above.
Open the downloaded file to install it, and choose the free version when you are provided a choice between the free version and the trial for the pro version.
After installation has completed, open CloudBerry Explorer.
Click on File from the top menu and then choose New Amazon S3 Account.
Provide a meaningful name for the Display Name (you can set this to the username that will be used)
Enter the Access key and Secret key for the user that was created for you in AWS.
Ensure Use SSL is ticked and then click on Advanced and change the Primary region to the region where you created the Amazon S3 bucket.
Click OK to close the Advanced screen and return to the previous screen.
Click on Test Connection to verify that the entered settings are correct and that you can access the AWS Account using the access key and secret access key.
Once the settings have been verified, return to the main screen for CloudBerry Explorer. The main screen is divided into two panes, left and right. For our purposes, we will use the left-hand side pane to pick files in our local computer and the right-hand side pane to correspond to the Amazon S3 bucket.
In the right-hand side pane, click on Source and from the drop down, select the name you gave the account that was created in step 4 above.
Next, in the right-hand side pane, click on the green icon that corresponds to External bucket. In the window that comes up, for Bucket or path to folder/subfolder enter the name of the Amazon S3 bucket you had created in AWS (I had created secureftpfolder) and then click OK.
You will now be returned to the main screen, and the Amazon S3 bucket will now be visible in the right-hand side pane. Double click on the Amazon S3 bucket name to open it. Voila! You have successfully created a connection to the Amazon S3 bucket.
To copy files/folders from your local computer to the Amazon S3 bucket, select the file/folder in the left-hand pane and then drag and drop it to the right-hand pane.
To copy files/folders from the Amazon S3 bucket to your local computer, drag and drop the files/folder from the right-hand pane to the appropriate folder in the left-hand pane.
So, tell me honestly, was that easy or what?
Just to ensure I have covered all bases (for now), here are a few questions I would like to answer.
A. Is the transfer of files between the local computer and Amazon S3 bucket secure?
Yes, it is secure. This is due to the Use SSL setting that we saw when configuring the account within CloudBerry Explorer.
B. Can I protect subfolders within the Amazon S3 bucket, so that different users have different access to the subfolders?
Yes, you can. You will have to modify the AWS IAM policy to do this.
C. Instead of a GUI client, can I access the Amazon S3 bucket via a script?
Yes, you can. You can download AWS tools to access the Amazon S3 bucket using the command line interface or PowerShell. AWS tools are available from https://aws.amazon.com/tools/
I hope the above comes in handy to anyone thinking of moving their secure FTP (or plain FTP) servers to a serverless architecture.
Often, when designing a product or solution for a customer, in planning and concept development, we might consider the user experience to be one (or both) of two things:
User feedback regarding their interaction with their technological environment/platforms
The experience the user is likely to have with given technology based on various factors that contribute to delivering that technology to them; presentation, training, accessibility, necessity, intuitiveness, just to name a few.
These factors are not solely focused on the user and their role in the human – technology interaction process, but also their experience of dealing with us as solution providers. That is to say, the way in which we engage the experience and behaviour of the user is just as important to the delivery of new technology to them, as is developing our own understanding of a broader sense of human interfacing technology behaviour. UX is a colourful – pun intended – profession/skill to have within this industry. Sales pitches, demos and generally ‘wowing the crowd’ are a few of the ways in which UX-ers can deploy their unique set of skills to curve user behaviour and responsiveness in a positive direction, for the supplier especially.
Not all behavioural considerations with regards to technology are underpinned by the needs or requirements of a user, however. There are more general patterns of behaviour and characteristics within people, particularly in a working environment, that can be observed to indicate how a user experiences [new] technology, including functionality and valued content that, at a base level, captures a user’s attention. The psychology of this attention can be broken down into a simplified model: the working mechanisms of perception as a reaction to stimulus, and how consistent the behaviour is that develops out of this. The stimuli mentioned are usually the most common ones relating to technology: visual and auditory.
You’ve likely heard of, or experienced first-hand, the common types of attention in everyday life. The main three are identified as selective, divided and undivided. Through consistency of behavioural outcomes, or by observing in a use case a consistent reaction to stimuli, we look to observe a ‘sustainability of attention or interest’ over an extended period of time, even if repetition of an activity or a set of activities is involved. This means that the solution, or at very least the awareness and training developed to sell a solution, should serve a goal of achieving sustainable attention.
How Can We Derive a Positive User Experience through Psychology?
Too much information equals lack of cognitive intake. From observation and general experience, a person’s attention, especially when captured within a session, a day or a week, is a finite resource. Many other factors of an individual’s life can create a cocktail of emotions which makes people, in general, unpredictable in a professional environment. The right amount of information, training and direct experience should be segmented based on a gauge of the audience’s attention. Including reflection exercises or on-the-spot feedback, especially in user training, can give you a good measure of this. The mistake of cognitively overloading the user is best seen when a series of options are presented as viable routes to the desired solution or outcome. Too many options can, at worst, create confusion, an aversion to the solution and new technologies in general, and an overall messy user experience.
Psychology doesn’t have to be a full submersion into the human psyche, especially when it comes to understanding the user experience. Simple empathy can be a powerful tool to mitigate some of the aforementioned issues of attention and to prevent the cultivation of repeated adverse behaviour from users. When it boils down to the users, most scenarios in the way of behaviour and reaction have been seen and experienced before, irrespective of the technology being provided. Fundamentally, it is a human experience that [still] comes first before we look at bridging the user and the technology. For UX practitioners, tools are already in place to achieve this, such as user journey maps and story capturing.
There are new ideas still emerging around the discipline of user experience, ‘UX’. From my experience with it thus far, it presents a case that it could integrate very well with modern business analysis methodologies. It’s more than designing the solution, it’s solutions based on how we, the human element, are designed.
Managing SailPoint IdentityNow Role Groups typically involves leveraging the SailPoint IdentityNow Portal for the creation and on-going management. That’s because the API for Roles is not published or documented.
What happens then if you have many to create or update/manage? How does the IdentityNow Portal use these non-published, undocumented APIs to create and manage them? I’ve worked it out and am documenting it here in the interim until the APIs are versioned and published.
Note: There is a chance that the Roles API may change, so use it at your own risk.
Roles API Authentication
First up, the Roles API URI looks like this:
The /cc/ part is a good indication that the API isn’t documented, but also that the authentication mechanism for interacting with it uses a JWT Bearer Token. I covered how to obtain a JWT Bearer Token specifically for interacting with these APIs in this post. I’m not going to cover that here, so read that post to get up to speed with that process.
Associated with managing the criteria of Roles we will also need to reference IdentityNow Sources. We can do that via a non-published API too.
Listing all IdentityNow Roles
Similar to the Governance Groups that I detailed in this post, Roles can be returned on an individual basis (if you know the ID of the Role, which you won’t). So the simplest way is to query and return them all (no, the Search (BETA) doesn’t support Roles either). The API to list all Roles is:
The example script below shows the generation of information for a Role object. Specifically:
disabled (whether the Role is enabled or disabled)
owner (the IdentityNow NAME attribute value of the owner of the Role)
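The body posted to create the Role can be sketched as follows. This is a hypothetical Python sketch, not the original PowerShell; the property names (name, description, disabled, owner) are assumptions inferred from the fields listed above, so inspect a Role returned by the list API to confirm them for your tenant.

```python
# Sketch: build the JSON body for creating a Role via the unpublished API.
# Property names are assumptions, not a documented contract.
import json

def new_role_body(name: str, description: str, owner: str) -> str:
    """Serialise a minimal Role creation body."""
    role = {
        "name": name,
        "description": description,
        "disabled": True,   # create disabled; enable once criteria are set
        "owner": owner,     # IdentityNow NAME attribute of the Role owner
    }
    return json.dumps(role)

body = new_role_body("Contractors-AD", "AD access for contractors",
                     "darren.robinson")
```

The serialised body is then posted with the JWT Bearer token headers described above.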
Executing this creates the Role.
Looking in the Portal we can then see the Role.
Managing Role Criteria
The Role Criteria will obviously depend on what your criteria is. Is it based on Standard Group Rules, a List of Identities or Custom? I’ll cover a List of Identities and Group based criteria.
Adding a List of Identities to a Role
This isn’t too dissimilar to Governance Groups. We search for the list of users we want to add to the Role and add them to a collection. The key difference is that we use Name instead of ID. We then specify the type of criteria (Identity_List) and the ID of the Role Group to update.
Executing the Script
The example script below switches the headers to Basic Auth to use the v2 endpoint to search for and locate the users to add to the Role. It then switches the headers over to JWT Bearer Auth, generates the body for the request to update the Role Group, and updates the Role Group.
Executing the script looks as per below and runs through successfully.
Checking the Role Group in the IdentityNow Portal
Adding Members Based on Criteria to a Role
Adding users to a Role based on Criteria is similar to above except rather than searching for Identities we are adding the criteria for a role.
The criteria can be based on an Identity Attribute, an Account Attribute, or an Entitlement. I’ll show using an Identity Attribute and an Account Attribute.
Account Attribute based Criteria
Account criteria are based on attributes available in the schema of a Source. In order to reference the Source we need its ID, which is available via the non-published Source API /cc/api/source.
The example script below shows getting all Sources and finding the one you want to base your Role criteria on (by name). You can of course mix criteria across many Sources, but I’m showing it from one. Running the following script will return your Sources. You require the ID of the Source whose attributes you want to use for your criteria. The variable that contains this ID, used further along in these examples, is $RoleSource.
Now that we have the Source ID for the Source that contains the attribute that we want to base our Role Criteria on, we need to build the criteria. Let’s start with a simple one and Single Criteria.
Here is an example JSON document with a Single Criteria that contains the variables for the Role that we just created (Role ID), the Source ID for the Source with the attribute we are using for the criteria and the value we want to match (RoleCriteriaValue). The Attribute from my Source that I’m using is Supplier. Update for the attribute you’re using on your Source.
After converting the JSON object to a PowerShell Object using ConvertFrom-JSON it looks like this:
Having it as a PowerShell Object also makes it easy to modify. If you wanted to change the criteria to match against, the Operator and/or the operation then just change the value. e.g. the following will change the value to match from “Kloud Solutions Pty Ltd” to “Darren J Robinson”
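The same round-trip can be illustrated in Python: parse the criteria JSON, change the value to match, and serialise it back for the POST. The criteria structure below is a simplified stand-in for the real document, not the exact IdentityNow schema.

```python
# Illustrative equivalent of ConvertFrom-Json / modify / ConvertTo-Json.
# The structure is a simplified stand-in, not the exact criteria schema.
import json

criteria_json = '''{
  "criteria": {
    "operation": "EQUALS",
    "key": {"type": "ACCOUNT", "property": "attribute.Supplier"},
    "value": "Kloud Solutions Pty Ltd"
  }
}'''

doc = json.loads(criteria_json)                   # ConvertFrom-Json equivalent
doc["criteria"]["value"] = "Darren J Robinson"    # change the match value
updated_json = json.dumps(doc)                    # ConvertTo-Json equivalent
```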
Convert back to JSON to POST via a web request to the API. Doing that, and updating the Role with the criteria, looks like this:
In the Portal we can see that the Criteria has been added.
You will notice I make a Role refresh call after the update, to recalculate the Role membership now that it is based on criteria. If you are updating or adding criteria to numerous Roles, only call the refresh at the end.
Adding Multiple Criterion
Multiple criteria are just more information to pass to the update. The format can be a little confusing, so here is a template for three criteria showing the three operators (Equals, Contains and Not_Equals).
When converted to a PowerShell Object and displayed it looks like this;
Identity Attribute based Criteria
The last example is Identity Attribute criteria. This is simpler than the Account Attribute criteria, as we just need to reference the Identity Attribute and don’t need to reference a Source. A single criterion for isEmployee = True looks like this:
Using the /cc/api/role APIs we can create Role Groups, update Role Groups to contain criteria, and refresh Role Groups to have their membership calculated. With this base functionality detailed, we can now create and manage our Role Groups through automation. Happy days.