Understanding Password Sync and Write-back

For anyone who has worked with Office 365/Azure AD and AADConnect, you will of course be aware that we can now sync passwords both ways between our on-premises AD and Azure AD. This is obviously a very handy thing to do for myriad reasons, and an obvious suggestion for a business intending to utilise Office 365. The conversation with the security bod, however, might be a different kettle of fish. In this post, I aim to explain how the password sync and write-back features work, and hopefully arm you with enough information to have that chat with the security guys.

Password Sync to AAD

If you’ve been half-listening to any talks around password sync, the phrase ‘it’s not the password, it’s a hash of a hash’ is probably the line you walked away with, so let’s break down what that actually means. First up, a quick explanation of what it actually means to hash a value. Hashing a value means applying a cryptographic algorithm to a string to irreversibly convert that string into another string of a fixed length. This differs from encryption in that with encryption we intend to be able to retrieve the data we have stored, so we must hold a decryption key – with hashing, we are unable to derive the original string from the hashed string, and nor do we want to. If it makes no sense to you that we wouldn’t want to reverse a hashed value, it will by the end of this post.

So, in Active Directory when a user sets their password, the value stored is not actually the password itself, it’s an MD4 hash of the password once it’s been converted to Unicode Little Endian format. Using this process, my password “nicedog” ends up in Active Directory as 2993E081C67D79B9D8D3D620A7CD058E. So, this is the first part of the “hash of a hash”.

The second part of the “hash of a hash” is where we go buck wild with the hashing. Once the password hash has been securely transmitted to AADConnect (I won’t detail that process, because we end up with our original MD4 hash anyway), the MD4 hash is then:

  1. Converted to a 32-byte hexadecimal string
  2. Converted into binary with UTF-16 encoding
  3. Salted with 10 additional bytes
  4. Resultant string hashed and re-hashed 1000 times with HMAC-SHA256 via PBKDF2

So, as you can see, we’re actually hashing the hash enough to make Snoop Dogg blush. Now, we finally have the hash which is securely sent to Azure AD over an SSL channel. At this point, the hashed value we send is so far removed from the original hash that even the most pony-tailed, black tee-shirt and sandal wearing security guy would have to agree it’s probably pretty secure. Most importantly, if Azure AD was ever compromised and an attacker did obtain your users’ password hashes, they cannot be used to attack any on-premises infrastructure with a “pass the hash” attack, which would be possible if AD’s ntds.dit was compromised and password hashes extracted.
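If you want a feel for what that PBKDF2 step looks like in practice, here is a rough PowerShell illustration. This is emphatically not the actual AADConnect code – the salt here is just ten random bytes for demonstration, and the HashAlgorithmName overload assumes .NET Framework 4.7.2 or later:

# Rough illustration only - not the real AADConnect implementation
# Start from the MD4 hash of the password, expressed as a hex string
$md4HexString = "2993E081C67D79B9D8D3D620A7CD058E"

# Steps 1 and 2: take the hex string and encode it as UTF-16 bytes
$passwordBytes = [System.Text.Encoding]::Unicode.GetBytes($md4HexString)

# Step 3: salt with 10 additional bytes (random here, per-user in the real process)
$salt = New-Object byte[] 10
[System.Security.Cryptography.RandomNumberGenerator]::Create().GetBytes($salt)

# Step 4: PBKDF2 with HMAC-SHA256 and 1000 iterations (requires .NET 4.7.2+)
$pbkdf2 = [System.Security.Cryptography.Rfc2898DeriveBytes]::new($passwordBytes, $salt, 1000, [System.Security.Cryptography.HashAlgorithmName]::SHA256)
$finalHash = [System.BitConverter]::ToString($pbkdf2.GetBytes(32)) -replace '-'
$finalHash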

So now Azure AD has these hashes, but doesn’t know your passwords – how are users’ passwords validated? Well, it’s just a matter of doing the whole thing again with the password the user provides when they log on. The password is converted, salted, hashed and rehashed 1000 times in exactly the same manner, meaning the final hash is exactly the same as the one stored in Azure AD – the two hashes are compared, and if we have a match, we’re in. Easy. Now let’s go the other way.

Password Write-back to AD

So how does it work going back the other way? Differently. Going back the other way, we can’t write a hash directly to Active Directory ourselves, so we need to get the actual password back from AAD to AD and essentially perform a password reset on-prem, allowing AD to then hash and store the password. There are a couple of things which happen when we install AADConnect and enable Password Write-back to allow us to do this:

  1. Symmetric keys are created by AAD and shared with AADConnect
  2. A tenant specific Service Bus is created, and a password for it shared with AADConnect

The scenario in which we are writing a password back from AAD to AD is obviously a password reset event. This is the only applicable scenario, because a password is only ever set directly in AAD for “cloud only” users, who of course have no on-premises directory to write back to. Only synced users need password write-back, and only upon password reset. So AAD gets the password back on-premises by doing the following:

  1. The user’s submitted password is encrypted with the 2048-bit RSA key generated when you set up write-back
  2. Some metadata is added to the package, and it is re-encrypted with AES-GCM
  3. The message is sent to the Service Bus via an SSL/TLS channel
  4. The on-premises AADConnect agent wakes up and authenticates to the Service Bus
  5. The on-premises agent decrypts the package
  6. AADConnect traces the cloud anchor attribute back to the connected AD account via the AADConnect sync engine
  7. A password reset is performed for that user against a Primary Domain Controller

So, that’s pretty much it. That’s how we’re able to securely write passwords back and forth between our on-premises environment and AAD without Microsoft ever needing to store a user’s password. It’s worth noting for that extra-paranoid security guy, though, that if he’s worried about any of the encryption keys or shared passwords being compromised, you can re-generate the lot simply by disabling and re-enabling password write-back.

Hopefully this post has gone some way towards helping you have this particular security conversation, no matter what year the security guy’s Metallica tour tee-shirt is from.

Applying Business Rules to Profile Photos Using Microsoft Cognitive Services

A customer I am working with at the moment is in the (very) early stages of discussion around the gathering and application of profile photos across their internal systems. In this particular case, the photos themselves do not yet exist. Sure, there are ID card photos of startled staff taken on day one of their employment, but people being people, they would rather not be forever digitally represented by their former selves – particularly not the version of themselves which had an ID photo taken in a poorly lit, unused meeting room 7 years ago, before they got that gym membership. There are many technical considerations when embarking on something like this: Where will we store the data? What file formats do we need to support? What is the maximum size and resolution for photos across our systems? But before answering any of these questions we should first think about how we are actually going to gather these photos, and once we have them, how we ensure they comply with whatever business rules we wish to apply to them. Not very long ago, the answer to this question would have been ‘hire a grad’ (sorry grads) – but we live in the future now, and we have artificial intelligence to do our bidding, so let’s take a look at how we do just that.

The Rules

Let’s make up some rules which might be applicable to corporate profile photos.

  1. The photo should be of your face
  2. The photo should contain only one person
  3. The face should be framed in the photo similarly to a passport photo
  4. The photo should not contain any adult content

The APIs

Our rules can be satisfied using two of Microsoft’s Cognitive Services APIs, namely the Face API and the Computer Vision API. Both of these APIs have free tiers which more than satisfy the requirements for this little demo, but have paid tiers which are actually very fairly priced. To sign up for the APIs, head over to portal.azure.com and click new (A), Intelligence (B), then Cognitive Services API (C).

cog_1

And then fill in the relevant details.

cog_2

We are using both the Face API and the Computer Vision API in this example, so the above steps will be repeated for each API.

Once you have completed this process, you will find details of your new accounts in the portal under “Cognitive Services accounts”. This is where you’ll find the details you need to interact with the APIs.

cog_3

Now that we have an account setup and ready to go, we can start to play! Because I am an infrastructure guy rather than a dev, I will be using PowerShell for this demonstration. Let’s work through the rules we defined earlier.

Rule #1: The photo should be of your face

To check our image against this rule, we want a response from the Face API to simply confirm that the image is of a face. As I already own a face, I will use my own for this example. The image “C:\temp\DanThom_Profile.jpg” which is referenced in the code snippet is the same image used as my profile photo on this blog.
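A minimal sketch of that call might look something like the following – the subscription key is obviously a placeholder, and the endpoint region and API version are assumptions, so check the base URL against the details shown for your own Cognitive Services account:

$faceApiKey = "your-face-api-key"
$faceApiUri = "https://westus.api.cognitive.microsoft.com/face/v1.0/detect"

# Read the image and post it to the Face API as a raw octet stream
$imageBytes = [System.IO.File]::ReadAllBytes("C:\temp\DanThom_Profile.jpg")
$faces = Invoke-RestMethod -Uri $faceApiUri -Method Post -ContentType "application/octet-stream" -Headers @{"Ocp-Apim-Subscription-Key"=$faceApiKey} -Body $imageBytes

# True if the API detected at least one face in the photo
$FaceDetected = (@($faces).Count -gt 0)
$FaceDetected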

Executing the above code will give you a simple true/false set against the variable $FaceDetected. This gives us some confidence that the uploaded photo is in fact a face – it doesn’t necessarily mean it’s my face, but I will talk about that a little later.

Rule #2: The photo should contain only one person

We’re going to reuse the same API and almost the same code to validate this rule. Feeding the API a crudely photoshopped version of my original photo with my face duplicated using the snippet below, the variable $MultipleFaces is set to true. Feeding in the original photo sets the variable as false.
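The call itself is the same as the one above (reusing $faceApiUri and $faceApiKey); only the evaluation changes, and the doctored file name below is purely for illustration:

# Post the doctored photo and count the faces the API returns
$imageBytes = [System.IO.File]::ReadAllBytes("C:\temp\DanThom_Profile_TwoFaces.jpg")
$faces = Invoke-RestMethod -Uri $faceApiUri -Method Post -ContentType "application/octet-stream" -Headers @{"Ocp-Apim-Subscription-Key"=$faceApiKey} -Body $imageBytes

# True when more than one face is detected
$MultipleFaces = (@($faces).Count -gt 1)
$MultipleFaces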

Rule #3: The face should be framed in the photo similarly to a passport photo

For this rule, we will use a combination of the Computer Vision and the Face API. The Face API is going to give us some data about how many pixels are occupied by the face in the photo, and we’re simply using the Computer Vision API to get the dimensions of the photo. I appreciate there are many other ways you can retrieve this data without having to call out to an external API, but seeing as we’re playing with these APIs today, why not?

The following snippet of code will get the dimensions of the photo, get the width of the Face Rectangle (the width of the detected face), then work out the percentage of the width of the photo which is consumed by the face. My profile picture is a good example of good framing, and the width of my face consumes 43.59375% of the width of the photo. Based on this, I’m going to say a ‘good’ photo ranges somewhere between 35% and 65%. The following code snippet will work out whether the picture meets these criteria, and return a true/false for the variable $GoodFraming.
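Here is roughly what that looks like, again reusing the Face API variables from earlier, and treating the Computer Vision endpoint and the response property names as assumptions based on the v1.0 APIs:

$visionApiKey = "your-computer-vision-key"
$visionApiUri = "https://westus.api.cognitive.microsoft.com/vision/v1.0/analyze?visualFeatures=Description"
$imageBytes = [System.IO.File]::ReadAllBytes("C:\temp\DanThom_Profile.jpg")

# The Computer Vision API returns the overall image dimensions in its metadata
$vision = Invoke-RestMethod -Uri $visionApiUri -Method Post -ContentType "application/octet-stream" -Headers @{"Ocp-Apim-Subscription-Key"=$visionApiKey} -Body $imageBytes
$photoWidth = $vision.metadata.width

# The Face API returns a faceRectangle describing the detected face
$faces = Invoke-RestMethod -Uri $faceApiUri -Method Post -ContentType "application/octet-stream" -Headers @{"Ocp-Apim-Subscription-Key"=$faceApiKey} -Body $imageBytes
$faceWidth = @($faces)[0].faceRectangle.width

# Percentage of the photo's width occupied by the face, and our 35-65% framing test
$widthPercentage = ($faceWidth / $photoWidth) * 100
$GoodFraming = ($widthPercentage -ge 35 -and $widthPercentage -le 65)
$GoodFraming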

Rule #4: The photo should not contain any adult content

We are all decent human beings, so it seems like this shouldn’t be a concern, but the reality is if you work in a larger organisation, someone may choose to perform a ‘mic drop’ by updating their profile picture to something unsavoury in their last weeks of employment. Luckily the Computer Vision API also has adult content detection. The following code snippet will return a simple true/false against the variable $NSFW.
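A sketch of that call is below – the property names come from the v1.0 Adult response as I remember it, so verify them against the API documentation:

$adultUri = "https://westus.api.cognitive.microsoft.com/vision/v1.0/analyze?visualFeatures=Adult"
$imageBytes = [System.IO.File]::ReadAllBytes("C:\temp\DanThom_Profile.jpg")
$result = Invoke-RestMethod -Uri $adultUri -Method Post -ContentType "application/octet-stream" -Headers @{"Ocp-Apim-Subscription-Key"=$visionApiKey} -Body $imageBytes

# Flag the photo if the API classifies it as adult content
$NSFW = $result.adult.isAdultContent
$NSFW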

Interestingly, the ‘visualFeatures=Adult’ query returns a true/false for ‘isAdult’ and for ‘isRacy’ as well as numerical results for ‘adultScore’ and ‘racyScore’. I was wondering what might be considered ‘racy’, so I fed the API the following image.

racybill

As it turns out, old mate Bill gets himself a racyScore of 0.2335. My profile picture gets 0.0132, and an actual racing car got 0.0087. Bill Gates is twenty times as racy as I am, and off the charts compared to a racing car.

Other Cool Things

There are all sorts of other neat things these APIs can return, which would be even more helpful in vetting corporate profile pictures. Things such as the landmarks of the face, whether or not the person is wearing sunglasses, the individual’s gender, the photo’s similarity to other photos, or whether or not the person is a celebrity would all be helpful in a fully developed solution for vetting corporate profile photos.

Conclusion

Hopefully the above examples have provided a little insight into how the Microsoft Cognitive Services APIs might be useful for business applications. It amazes me how much affordable, readily available cognitive power is now at the fingertips of anyone with even minimal coding skills.

You can see from the examples above how easily you could scale something like this. You could offer users an application where they take their own profile picture and have it immediately reviewed by Microsoft Cognitive Services – either approved on the spot, or rejected, giving the user the option to submit the photo for manual approval (hooray! one of the grads got their job back!) or discard it and try again.

I have yet to see any of the Microsoft Cognitive Services implemented in any of the businesses I have been involved with, but I suspect in the coming years we will be seeing more and more of it, and that is certainly something I look forward to.

Managing SPO User Profiles with FIM/MIM and the Microsoft PowerShell Connector

Back in March, my colleague Darren Robinson published this post which nicely explains how to use Søren Granfeldt’s FIM/MIM PowerShell MA to manage SharePoint Online profiles. While Darren’s post covers everything you need to connect to SPO and manage user profiles via FIM/MIM, some of your clients may prefer to use the Microsoft equivalent for reasons of perceived support and product quality. This post will cover off what is required to get the Connector up and running.

Prerequisites

To get this show on the road, you’re going to need the following

Setting up the Connector

First up, if you’re here because you just need to get an SPO connector up and going and get some attributes flowing, I’m going to make your life real easy: here is a full export of a working MA with a bunch of attributes configured in the scripts and ready to go. Just add your own credentials, tenant SPO admin URL, join rules and attribute flows and you’re done. You’re welcome. If you do however have some time to fill in your day, feel free to follow along.

Create a new Connector in the Sync Engine. Provided you have correctly installed the PowerShell connector linked in the prerequisites, you will see “PowerShell (Microsoft)” as an available Connector type. Give your Connector a name and click next, where you will see the connectivity tab appear.

Connectivity Tab Settings

Plug in the following configuration items:

Server: https://yourtenant-admin.sharepoint.com/
User: yourSPOserviceaccount@yourtenant.onmicrosoft.com
Password: yourSPOserviceaccountpassword
Common Module Script Name: FIMPowerShellConnectorModule.psm1

You will now see where you can start to paste some scripts. On this tab, we provide two scripts – the common module script, which was written by Microsoft and contains functions used in our import and export scripts, and a schema script. The schema script contains the attributes I was interested in for now, which you can add to using the same formatting. My scripts are as follows:

Common Module Script

Schema Script
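The full script is in the MA export linked above; as a rough idea of the shape, a schema script for the Microsoft connector builds up the standard ECMA2 schema objects along these lines (the attribute list is illustrative only, and the type and method names here are from memory, so verify them against the connector documentation):

# Illustrative only - add your own attributes using the same pattern
$SchemaType = [Microsoft.MetadirectoryServices.SchemaType]::Create("user", $false)
$SchemaType.Attributes.Add([Microsoft.MetadirectoryServices.SchemaAttribute]::CreateAnchorAttribute("AccountName", "String"))
$SchemaType.Attributes.Add([Microsoft.MetadirectoryServices.SchemaAttribute]::CreateSingleValuedAttribute("Department", "String"))
$SchemaType.Attributes.Add([Microsoft.MetadirectoryServices.SchemaAttribute]::CreateSingleValuedAttribute("Title", "String"))

$Schema = [Microsoft.MetadirectoryServices.Schema]::Create()
$Schema.Types.Add($SchemaType)
Write-Output $Schema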

Capabilities Tab Settings

On the capabilities tab, after much trial and error, I settled on the configuration as per the screenshots below.

PSMASS3

Global Parameters Tab Settings

Import Script

Export Script

Join Rules and Attribute Flows

I am not going to go into Join Rules or Attribute Flows in any detail here, as those things are specific to your organisation and its requirements. The process for creating Join Rules and Attribute Flows is exactly the same as for every other FIM MA you’ve ever worked with.

Troubleshooting

As with any PowerShell scripting or FIM work, it’s not just going to work as expected the first time you hit the go button. Particularly with this Connector, the default level of logging will just tell you something didn’t work, and offer you no real detail as to why. For this reason, during development you’ll want to crank up the logging. The following steps on enabling logging are shamelessly plagiarised from Technet.

Open the %ProgramFiles%\Microsoft Forefront Identity Manager\2010\Synchronization Service\bin\miiserver.exe.config file using a text editor and paste the following XML into the file on the line immediately following the <sources> tag.
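The snippet in question looks roughly like the following – the trace source name in particular is from memory, so check it against the Technet article before relying on it:

<source name="ConnectorsLog" switchValue="Verbose">
  <listeners>
    <add initializeData="C:\Logs\ConnectorsLog.log" type="System.Diagnostics.TextWriterTraceListener" name="ConnectorsLogListener" traceOutputOptions="DateTime">
      <filter type="" />
    </add>
  </listeners>
</source>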

Create the directory c:\logs, grant the service account for the synchronization service Modify permissions to the c:\logs directory, then restart the synchronization service. By default, PowerShell errors and some other data from the MA will be logged here, but you can be as verbose as you like in your script by including cmdlets like Write-Debug.

Conclusion

Following the steps above, it should be relatively straightforward to spin up an SPO Connector and get some attributes flowing. Hopefully this post has saved you a bit of time and effort!

 

 

Sending SMS Through PowerShell with Telstra’s New API

Recently, Telstra released their first public API, which in true telco fashion leverages an existing product in their stable: SMS. The service allows anyone with a Telstra t.dev account (get one here) to get an API key which will allow you to send up to 100 messages per day, and 1000 per month, to Australian mobiles. Obviously, this is going to be great for anyone wishing to use a free SMS service for labbing, testing, or sending your buddies anonymous cat facts.

I’m not so much a dev, so the first thing I wanted to do was to test this out using PowerShell. Using PowerShell, I get to look like I’m doing something super-important whilst I send my buddies cat facts. The following is the code I used to make this happen.

First, we want to get ourselves an access token, so we can auth to the service.

$app_key = "Th1SiSn0TreAllYmYAppK3ybUtTHanKsAnyW4y"
$app_secret = "n0rmYS3cr3t"
$auth_string = "https://api.telstra.com/v1/oauth/token?client_id=" + $app_key + "&client_secret=" + $app_secret + "&grant_type=client_credentials&scope=SMS"
$auth_values = Invoke-RestMethod $auth_string

Now that we have an auth token, we can use it to send, receive, and check the status of messages.

# Send SMS
$tel_number = "0488888888"
$token = $auth_values.access_token
$body = "On average, cats spend 2/3 of every day sleeping. That means a nine-year-old cat has been awake for only three years of its life"
$sent_message = Invoke-RestMethod "https://api.telstra.com/v1/sms/messages" -ContentType "application/json" -Headers @{"Authorization"="Bearer $token"} -Method Post -Body "{`"to`":`"$tel_number`", `"body`":`"$body`"}"
$sent_message

At this point, I receive an SMS to my phone, which I can reply to

telstraSMS_reply

The message can also be queried to check its delivery status, and check if the message has been replied to, as below:

# Get Message Status
$messageid = $sent_message.messageId
$message_status = Invoke-RestMethod "https://api.telstra.com/v1/sms/messages/$messageid" -Headers @{"Authorization"="Bearer $token"}
$message_status
# Get Message Response
$message_response = Invoke-RestMethod "https://api.telstra.com/v1/sms/messages/$messageid/response" -Headers @{"Authorization"="Bearer $token"}
$message_response

Executing the above code gives me the following

telstraSMS_powershell

Now obviously, you can wrap all these up in some functions, pass in external parameters, strap them into PowerShell workflows in FIM, incorporate them into SSPR, and just about anything else you can think of (in your labs). There are some caveats to using the service, some obvious of course:

  • It’s SMS, so a 160 character limit applies
  • You can send only one message at a time
  • The service is not intended for large volumes of messages
  • 100 messages per day/1000 per month limit
  • The service is in beta
  • Telstra cannot guarantee delivery once the message is passed to another telco
  • Australian mobiles only

Initially in my testing, I found messages sat in the state of “SENT” and would not update to “DELIVERED”. After some general fist waving and mutterings about beta services, I rebooted my phone and the messages I had queued came through. Although I have had no issue with SMS delivery in the past, I’m happy to put this down to the handset bugging out. In all subsequent testing, the messages came through so quickly that my phone buzzed immediately after hitting enter on the command.

I hope the code snippets provided help you out with spinning this up in your labs, but please check the Telstra T’s and C’s before sending out some informative cat facts.

 

The FIM User Experience

A recent post by my colleague Jamie Skella “What UX Isn’t” started me thinking about how UX applies to FIM. Throughout my career as an Identity Management Consultant, I’ve seen projects reach a point in maturity where stakeholders are walked through the tasks an admin or user will perform in the portal, and the average eyebrow height in the room rises exponentially.

Those of us who have been working with Microsoft’s identity products for a while are used to seeing the glitz and glamour of the Sync Engine console, previously the only interface available with the product, so when the FIM Portal was introduced with FIM 2010, it gave us a “user friendly” interface to work with. Sure, it was a bit clunky here and there, but hey, we’ve got a nice user interface now! The problem, however, is that we’re not the users. The users are a completely separate group of people, who are not Identity Management Consultants and who do not find this a refreshing change.

In this post, I will cover some of the user experience pain points in the FIM Portal which I believe should be called out early in the consulting piece. The fact of the matter is, what may seem like a trivial user experience change to the casual observer, may be a significant piece of development work for you, your UX guys, and your developers. Calling these things out early will give you the opportunity to talk about scope, budgets, or simply get an agreement up-front about how it is.

The Lack of Formatting Flexibility in RCDCs

An RCDC is essentially a bunch of XML which tells the FIM Portal what items, representing which attributes, to present. The FIM Portal takes all that information and presents it to the user in the only way it knows how. What this means is that each item laid out in the XML renders itself as a single item in the UI.

The problem here is that there is not much flexibility in how the portal will render each item. Each control on the page appears on its own line, one after the other in a stack, in the order you define them in the XML. In demoing the portal in the past and showing off these screens, I’ve had project stakeholders say things like “That’s fine, but just put those options in two columns” or “Great, you just need to indent the options below that first one to show they are related” or “Group all those options tightly together across the page”. Cue the shocked look when the answer is “Easier said than done”.

RCDCs

Nothing Happens Until You Hit Finish

Typically in FIM, we have forms (RCDCs) which we use to enter a bunch of information, then do something with it. We flow that information somewhere, we kick off a workflow based on the data, and we add or remove sync rules. If we didn’t want to do something useful with the data, it’s fair to say we don’t want it at all. The issue is that nothing happens with this data until we hit that finish button. The forms are essentially static. Yes, of course we can use auto-post-back to make the forms more dynamic, but how useful would it be if, when we are creating a user account, the form could query Active Directory and let us know that there happens to already be a Gordon Shumway in the directory, and that this is going to result in the new user’s account name being gshumway1? Perhaps someone has already created that exact account in AD directly, and we’re actually busy creating a duplicate? This is just one example where truly dynamic forms would be advantageous; I’m sure, based on your experience and your customers’ needs, you could think of dozens more.

Adding and Removing Users from Groups and Sets is Clunky

When adding and removing users from a set or a group, we have a whole stack of page real estate dedicated to this one task. Why? Because you need the box showing the current group membership, you need the box and corresponding buttons for adding users, and you need a box and corresponding buttons for removing users. If we forget for a minute that this is what we have become used to in FIM, we quickly realise that this is not pretty. Considering that adding and removing users is a task which would typically be assigned to IT Admins who are probably most familiar with performing this task in Active Directory Users and Computers, you can see how the new interface we are presenting may seem like a step backwards.

setsandgroups

The Date Picker is not a Date Picker

For the longest time on the internet, we’ve known that if we need to enter a date into a website, we click on the date field and a date picker pops up. We can quickly select an appropriate date by evaluating, say, what day of the week the 20th of March happens to be in 2015. Default FIM behaviour does not afford us this opportunity; instead we need to enter a date in a specified format. Once again, if we consider that our audience here is likely to be either IT Admins or even end users, this is going to seem like a backwards step.

employeestartend

So What Can We Do?

There are many options for customising the portal to increase usability and to tighten up the interface. We can plug in community-provided features which replace the calendar picker, we can play with the CSS behind the pages and change the feel of the portal with our own custom themes, and we can strip down or beef up the RCDCs to include or exclude the parts we require. Ultimately, we should take a step back at the top of the engagement, ask the basic question “Who is going to use this portal and what are they going to use it for?”, and take a realistic approach by thinking like the end user.

If the requirement is for an admin to be able to manage user accounts and nothing more, is the FIM Portal really the best solution? How much effort would be required for a Developer and a UX guy to spin up a tailored solution to perform this task? How different might that time be, compared to the time taken for an Identity Management Consultant to hammer the FIM Portal into the required shape? We can still use the functionality of both the FIM Synchronisation Engine and the FIM Service to handle the workflows and data flow, so all we stand to gain is a better user experience, and a happier customer, right?

Conversing with my colleagues on this topic, it seems one of the reasons why clients shy away from complete customisation in this area is the perception that a custom solution will be less supported, or supported only by the vendor who installed it. How could this be true? If we are writing a custom front end to known Web Services end-points, and supplying the source code and appropriate documentation to the client as part of the engagement, where are the concerns? Code is code is code.

My TL;DR (Too Long; Didn’t Read) line is this: start thinking about the FIM User Experience now and keep your client’s eyebrow height at an appropriate level.

Controlling UI Elements with Sets in FIM 2010

Out of the box, FIM 2010’s methodology for handling which UI elements are visible on the FIM homepage is limited to “are you an administrator” or “are you not”. This is governed by the Usage Keyword “BasicUI”. This guide will demonstrate how you can create additional Usage Keywords tied to sets, which will allow for granular control over which navigation bar and homepage elements are visible to a user.

Before we get into how to create a Usage Keyword, let’s understand what it actually is. A usage keyword is basically a set. The set targets resources where the “Usage Keyword” multivalued attribute is equal to a string you define. The best way to understand this is to have a look at the membership of the existing sets for the “BasicUI” Usage Keyword.

So all we have here is a bunch of resources which have the string “BasicUI” populated in the “Usage Keyword” multivalued attribute which is bound to them. As you can probably tell from the membership list, these resources are all links on the Home Page. So of course, where we have a grouping of resources, we can use an MPR to control access to them. This is essentially how Usage Keywords work.

Now that we have an understanding of the concept, let’s build it.

In this example I have a pre-existing criteria based set of users called “_SET: Kloud Users” which contains users that belong to the KLOUD Active Directory domain. I would like to grant these users access to the navigation bar resource “Security Groups (SGs)”.

The first step towards achieving this is to create the Usage Keyword in the context of the navigation bar resource:

  1. Create a new set. You can name this set whatever you like in line with your naming conventions. I will call mine “_SET: NavBar Usage Keyword Kloud Users”
  2. Create the membership criteria: select navigation bar resource that matches the condition “Usage Keyword contains Kloud Users”
  3. Click finish

You now have a usage keyword of “Kloud Users” which applies to the navigation bar resource. We will now create another couple of sets. We will repeat the above steps, replacing “navigation bar resources” with “home page resource” and “search scope”. If you follow along with me, create these sets and call them “_SET: Homepage Usage Keyword Kloud Users” and “_SET: Search Scope Usage Keyword Kloud Users”
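If you want to sanity-check what those sets are actually doing, their criteria boil down to XPath filters along these lines (the object type names are from the standard FIM Service schema, so confirm them in your own environment):

/NavigationBarConfiguration[UsageKeyword = 'Kloud Users']
/HomepageConfiguration[UsageKeyword = 'Kloud Users']
/SearchScopeConfiguration[UsageKeyword = 'Kloud Users']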

Now that we have our Usage Keywords in place, we need to make them do something. We can of course achieve this using MPRs.

  1. Create a new request type MPR. Again, you can name this MPR whatever you like in line with your naming conventions. I will call mine “_MPR: NavBar Read Usage Keyword Kloud Users”
  2. On the “Requestors and Operations” tab, select the set of users that you would like your newly created Usage Keyword to apply to. In my example, I would like my “Kloud Users” keyword to relate to the “_SET: Kloud Users” set.
  3. Again on the “Requestors and Operations” tab, tick the boxes for “Read resource” and “Grants permission”, then click next
  4. On the “Target Resources” tab, define the set we created earlier as the “Target Resource Definition Before Request”, select “All Attributes”, then click finish.

So now our Usage Keyword for the navigation bar resource is ready to go. As we wish to apply this to the homepage resources and search scopes, we must repeat the MPR creation steps for each resource, replacing the “Target Resource Definition Before Request” with the relevant set. I now have three sets and three MPRs as follows:

_SET: NavBar Usage Keyword Kloud Users
_SET: Homepage Usage Keyword Kloud Users
_SET: Search Scope Usage Keyword Kloud Users

_MPR: NavBar Read Usage Keyword Kloud Users
_MPR: Homepage Read Usage Keyword Kloud Users
_MPR: Search Scope Read Usage Keyword Kloud Users

The final step is now to employ our newly defined usage keyword. As mentioned, my desire is to make the “Security Groups (SGs)” navigation bar item visible to all users which are part of the “_SET: Kloud Users” set.

  1. From the administration menu, select “Navigation Bar Resource”
  2. In the “Usage Keyword” box, enter your newly created usage keyword “Kloud Users”
  3. Click next, then finish
  4. Perform an iisreset

That’s all there is to it, all that remains now is to log in as a user which belongs to the set you’ve targeted to ensure they can in fact see the element you’ve granted them access to read. The screenshot below shows what the navigation bar looks like to a member of the KLOUD domain versus a user not in KLOUD domain.

You will find when editing search scopes and home page resources that they too have a field for “Usage Keyword”. If you have followed through with me, you will now be able to use your new usage keyword to control the visibility of these elements.