Sending SMS Through PowerShell with Telstra’s New New API


Back in the before time, I wrote a blog post entitled “Sending SMS Through PowerShell with Telstra’s New API”. Using the PowerShell scripts I provided back then, you could have a little play with Telstra’s SMS gateways to amuse and annoy your pals (and, I imagine, apply some valid business use cases in there somewhere too). Fast forward to a few weeks ago, and I received an email from somebody who had read that post and reached out to say “Hey Dan, your scripts don’t work.” That person is 100% correct, because much like time, APIs do not stand still, and given enough change and too few updates, code you were using to send stuff might just break. That’s the case with the code I published back in the day, which prompts me (for that one guy, and perhaps for you too) to write this follow-up post, which I have creatively titled “Sending SMS Through PowerShell with Telstra’s New New API”.

At the time of writing, the current version of the Telstra SMS API available to the public is v2.2.9. Let’s quickly go over the changes in the API since the olden days when I wrote the other blog with the similar title.

  • Old: Randomly created number on each SMS send → New: provision a dedicated number and send from that
  • Old: Send SMS to only a single number at a time → New: broadcast to up to 10 recipients
  • Old: Send SMS to Australian mobiles only → New: send internationally (annoy your friends in many countries)
  • Old: SMS only → New: MMS joins the party
  • Old: 1000 free messages per month → New: still 1000, but come on. That’s pretty good.

Lots of cool new features. So, let’s update that dusty old PowerShell and show you how you can make use of the new API.

Getting Started

The very first thing you’re going to need to do is create an account over at (if you don’t already have one) and create an app. This is a very painless affair: all you have to do is hit the “develop” drop-down box at the top, then “my apps and keys”, and then hit the plus button to “create application”. Once you’ve done that, you will be issued a client key and client secret, which we will make use of in our code.
Then, you can start playing! For the purposes of this example, I’m just going to paste my client key and secret as variables within the script. If you’re doing this in an actual production environment, however, you should not do that. How you end up deploying this thing will determine the best way to keep your secrets secret, but this Kloud blog by my former colleague Dave Lee might give you some clues on the best way to approach that.

Generate an Access Token

The first thing we need to do code-wise is create an access token. We can do this easily with the following snippet:
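The original snippet isn’t reproduced here, so below is a minimal sketch of the token request. The token endpoint, the client credentials grant, and the NSMS scope are taken from Telstra’s v2 API documentation as I recall it – treat them as assumptions and check the current docs.

```powershell
# Client key and secret from your app - don't hard-code these in production
$ClientKey    = "YourClientKey"
$ClientSecret = "YourClientSecret"

# Request an OAuth 2.0 access token using the client credentials grant
$TokenBody = @{
    grant_type    = "client_credentials"
    client_id     = $ClientKey
    client_secret = $ClientSecret
    scope         = "NSMS"
}
$Token = Invoke-RestMethod -Method Post -Uri "https://tapi.telstra.com/v2/oauth/token" `
    -ContentType "application/x-www-form-urlencoded" -Body $TokenBody

# Build the Authorization header we'll reuse for every subsequent call
$Authorization = @{ "Authorization" = "Bearer $($Token.access_token)" }
```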

Now, if you take a peek at the $Authorization variable (by of course just typing $Authorization) you should see something like the following:

Create a Subscription

Now that we’re authorised, we need to perform a step which is new for this updated version of the API: creating a “subscription”. What’s that exactly? Well, more or less, we are creating a phone number within Telstra which becomes ours – but only for the next 30 days on the free tier. This is, however, an improvement on the olden days, when we got a new number each and every time we sent a message. At this point, you can also specify a “notify URL” in your POST, which would POST any responses that number receives back to the URL you specify – so if you had a fully-fledged app, you could start getting into some automated replies. We’re keeping it simple for this example. The following code snippet will help you create a subscription:
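A sketch of the provisioning call follows; the endpoint path and the activeDays field are assumptions based on Telstra’s v2 docs, and $Authorization is the header built in the token step above.

```powershell
# Header from the token step (placeholder shown so this snippet stands alone)
$Authorization = @{ "Authorization" = "Bearer YourAccessToken" }

# Provision a dedicated virtual number for this app.
# Add a "notifyURL" property to the body to have replies POSTed to your own endpoint.
$SubscriptionBody = @{ activeDays = 30 } | ConvertTo-Json

$Subscription = Invoke-RestMethod -Method Post `
    -Uri "https://tapi.telstra.com/v2/messages/provisioning/subscriptions" `
    -Headers $Authorization -ContentType "application/json" -Body $SubscriptionBody

# Shows destinationAddress (your new number) and the expiry date
$Subscription
```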

Once we execute that code, we get a subscription which consists of a destinationAddress (the mobile number you will send from and receive replies on) and an expiry date in UNIX time. This is all we need to start bouncing out messages and receiving replies, so let’s do that! Executing the following code is going to fire an SMS out into the world.

Send an SMS
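A minimal sketch of the send call, assuming the /v2/messages/sms endpoint and the to/body JSON fields from Telstra’s v2 docs; the recipient number and message text are placeholders.

```powershell
# Header from the token step (placeholder shown so this snippet stands alone)
$Authorization = @{ "Authorization" = "Bearer YourAccessToken" }

# Destination number in E.164 format (placeholder recipient)
$SMSBody = @{
    to   = "+61400000000"
    body = "Did you know a group of cats is called a clowder?"
} | ConvertTo-Json

Invoke-RestMethod -Method Post -Uri "https://tapi.telstra.com/v2/messages/sms" `
    -Headers $Authorization -ContentType "application/json" -Body $SMSBody
```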

The SMS is received quicker than the time it took me to lift my phone after hitting enter – not bad.

And now, of course, I can reply to that message from my phone as I would any other informative cat-based message, and retrieve said reply via the following PowerShell command:
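Polling for replies is a simple GET against the same messages endpoint – again, the path is an assumption from the v2 docs:

```powershell
# Header from the token step (placeholder shown so this snippet stands alone)
$Authorization = @{ "Authorization" = "Bearer YourAccessToken" }

# Retrieve any replies sent to our provisioned number
Invoke-RestMethod -Method Get -Uri "https://tapi.telstra.com/v2/messages/sms" `
    -Headers $Authorization
```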

Which results in the following:

Bonus Round: MMS

OK, this was just going to be an update on my original samples of sending SMS via Telstra’s APIs, but one of the new capabilities is MMS, so why the heck not – let’s have a play with that functionality too.

All of the authentication bits and pieces are identical, and we can send using the same subscription (mobile number) we created before, but the JSON we end up posting is structured quite differently and contains an embedded image in Base64. Here’s what it looks like:
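The sketch below shows the shape of that payload. The MMSContent structure (type/filename/payload) is my best recollection of Telstra’s v2 MMS docs, and the image path and recipient are placeholders – verify the field names against the current documentation.

```powershell
# Header from the token step (placeholder shown so this snippet stands alone)
$Authorization = @{ "Authorization" = "Bearer YourAccessToken" }

# Read the image and convert it to a Base64 string for embedding in the JSON
$ImageBytes  = [System.IO.File]::ReadAllBytes("C:\temp\cat.jpg")
$ImageBase64 = [System.Convert]::ToBase64String($ImageBytes)

# MMS payload - note the structure differs from the plain SMS call
$MMSBody = @{
    to         = "+61400000000"
    subject    = "Important cat news"
    MMSContent = @(
        @{
            type     = "image/jpeg"
            filename = "cat.jpg"
            payload  = $ImageBase64
        }
    )
} | ConvertTo-Json -Depth 4

Invoke-RestMethod -Method Post -Uri "https://tapi.telstra.com/v2/messages/mms" `
    -Headers $Authorization -ContentType "application/json" -Body $MMSBody
```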

This time, the message did take slightly longer to deliver to my phone. I imagine this is something I could complain to my local MP about to have rectified. Based on my experience alone, I would suggest if you’re building out any sort of application which requires a timely back and forth between your app and the user, use SMS rather than MMS, and where timeliness is irrelevant but you want to embed lovely images – well, use MMS.


To summarise: yes indeed, astute reader, my old code no longer works with the Telstra SMS API, because we have a brand spanking new API! I hope the update hasn’t broken your automated cat fact spammer, and that you enjoy updating your code to get the app back on its feet.

This post has provided a simple view of how you can make use of the Telstra API at no cost. PowerShell is my choice of language due to my background, but you can really write this type of thing in any old language you like. If you are interested in playing with this stuff yourself, I would encourage you to sign up over at and also check out the wealth of documentation they’ve got on this API (and many others). Happy tinkering!


Plugging the Gaps in Azure Policy – Part Two


Welcome to the second and final part of my blogs on how to plug some gaps in Azure Policy. If you missed part one, this second part isn’t going to be a lot of use without the context from that, so maybe head on back and read part one before you continue.

In part one, I gave an overview of Azure Policy, a basic idea of how it works, what the gap in the product is in terms of resource evaluation, and a high-level view of how we plug that gap. In this second part, I’m going to show you how I built that idea out and provide you some scripts and a policy so you can spin up the same idea in your own Azure environments.

Just a quick note: this is not a “next, next, finish” tutorial – if you do need something like that, there are plenty to be found online for each component I describe. My assumption is that you have a general familiarity with Azure as a whole, and the detail provided here should be enough to muddle your way through.

I’ll use the same image I used in part one to show you which bits we’re building, and where that bit fits in to the grand scheme of things.

We’re going to create a singular Azure Automation account, and we’re going to have two PowerShell scripts under it. One of those scripts will be triggered by a webhook, which will receive a POST from Event Grid, and the other will be fired by a good old-fashioned scheduler. I’m not going to include all the evaluations performed in my production version of this (hey, gotta hold back some IP right?) but I will include enough for you to build on for your own environment.

The Automation Account

When creating the automation account, you don’t need to put a lot of thought into it. By default, when you create an automation account, it will automatically create an Azure Run As account on your behalf. If you’re doing this in your own lab, or an environment you have full control over, you’ll be able to do this step without issue. Typically, however, in an Azure environment you may have access to build resources within a subscription but not to create Azure AD objects – if that level of control applies to your environment, you will likely need to get someone to manually create an Azure AD Service Principal on your behalf. For this example, we’ll just let Azure Automation create the Run As account, which, by default, will have contributor access on the subscription you are creating the account under (which is plenty for what we are doing). You will also notice a “Classic” Run As account is also created – we’re not going to be using that, so you can scrap it. Good consultants like you will of course figure out the least permissions required for the production account and implement that accordingly, rather than relying on these defaults.

The Event-Based Runbook

The event-based Runbook grabs parameters from the POSTed JSON we get from Event Grid. The JSON we get contains enough information about an individual resource which has been created or modified that we are able to perform an evaluation on that resource alone. In the Event Grid section below, I will give you a sample of what that JSON looks like.

When we create this event-based Runbook, we obviously need somewhere to receive the POSTed JSON, so we need to create a Webhook. If you’ve never done this before, it’s a fairly straightforward exercise, but you need to be aware of the following things:

  • When creating the Webhook, you are shown the tokenised URL only at the point of creation. Take note of it – you won’t be seeing it again, and you’ll have to re-create the Webhook if you didn’t save your notepad.
  • This URL is open out to the big bad internet. Although the damage you can cause in this instance is limited, you need to be aware that anyone with the right URL can hit that Webhook and start poking.
  • The security of the Webhook is contained solely in that tokenised URL (you can do some trickery around this, but it’s out of scope for this conversation) so in case the previous two points weren’t illustrative enough, the point is that you should be careful with Webhook security.

Below is the script we will use for the event-driven Runbook.
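As the full production script is held back, here is an abbreviated sketch of the event-driven Runbook. The compliance tag name (“Compliant”) is my own placeholder, error handling is omitted, and the AzureRM cmdlets reflect the era this was written in.

```powershell
param (
    [Parameter(Mandatory = $false)]
    [object]$WebhookData
)

# Parse the JSON POSTed by Event Grid into the variables described below
$InputJSON   = $WebhookData.RequestBody | ConvertFrom-Json
$resourceUri = $InputJSON.data.resourceUri
$status      = $InputJSON.data.status
$subject     = $InputJSON.subject

# Authenticate using the automation account's Run As connection
$Conn = Get-AutomationConnection -Name "AzureRunAsConnection"
Add-AzureRmAccount -ServicePrincipal -TenantId $Conn.TenantId `
    -ApplicationId $Conn.ApplicationId -CertificateThumbprint $Conn.CertificateThumbprint

# Tagging function: apply or update the compliance tag on a resource
function Set-ComplianceTag {
    param ([string]$ResourceId, [bool]$Compliant)
    $Resource = Get-AzureRmResource -ResourceId $ResourceId
    $Tags = $Resource.Tags
    if ($null -eq $Tags) { $Tags = @{} }
    $Tags["Compliant"] = if ($Compliant) { "True" } else { "False" }
    Set-AzureRmResource -ResourceId $ResourceId -Tag $Tags -Force
}

# Evaluation: only bother evaluating resource types we care about,
# and only when the operation actually succeeded
if ($status -eq "Succeeded" -and $subject -match "Microsoft.Compute/virtualMachines") {
    $VM = Get-AzureRmResource -ResourceId $resourceUri
    $NicCount = @($VM.Properties.networkProfile.networkInterfaces).Count
    Set-ComplianceTag -ResourceId $resourceUri -Compliant ($NicCount -le 1)
}
```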

So, what are the interesting bits in there we need to know about? Firstly, the webhook data. We ingest the data initially into the $WebhookData variable, store it in a more useful format in the $InputJSON variable, and then break it up into a bunch of other more useful variables: $resourceUri, $status and $subject. The purpose of each of those variables is described below.


  • $resourceUri – The resource URI of the resource we want to evaluate
  • $status – The status of the Azure operation we received from Event Grid. If the operation failed to make a change, for example, we don’t need to re-evaluate it.
  • $subject – The subject contains the resource type, which helps us narrow down the scope of our evaluation


Aside from dealing with inputs at the top, the script essentially has two parts to it: the tagging function, and the evaluation. In the evaluation, we scope down the input to make sure we only ever bother evaluating a resource if it’s one we care about. The evaluation itself is really just saying “hey, does this resource have more than one NIC? If so, tag the resource using the tagging function. If it doesn’t, remove the tag using the tagging function”. Easy.

The Schedule-Based Runbook

The evaluations (and the function) we have in the schedule-based Runbook are essentially the same as what we have in the event-based one. Why do we even have the schedule-based Runbook then? Well, imagine for a second that Azure Automation has fallen over for a few minutes, or someone publishes dud code, or one of many other things happens which means the automation account is temporarily unavailable – the fleeting event which may occur one time only as a resource is being created is essentially lost to the ether. Having the schedule-based runbook means we can come back every 24 hours (or whatever your organisation decides) and pick up anything which may have been missed.

The schedule-based runbook obviously does not have the ability to target individual resources, so instead it must perform an evaluation on all resources. The larger your Azure environment, the longer the processing time, and potentially the higher the cost. Be wary of this and make sensible decisions.

The schedule-based runbook PowerShell is pasted below.
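In place of the original, here is an abbreviated sketch of the schedule-based version: the same single-NIC evaluation, but walking every VM in the subscription. The “Compliant” tag name is again a placeholder of mine.

```powershell
# Authenticate using the automation account's Run As connection
$Conn = Get-AutomationConnection -Name "AzureRunAsConnection"
Add-AzureRmAccount -ServicePrincipal -TenantId $Conn.TenantId `
    -ApplicationId $Conn.ApplicationId -CertificateThumbprint $Conn.CertificateThumbprint

# Walk every VM in the subscription and evaluate it
foreach ($VM in Get-AzureRmVM) {
    $Compliant = (@($VM.NetworkProfile.NetworkInterfaces).Count -le 1)

    # Apply or update the compliance tag, preserving any existing tags
    $Resource = Get-AzureRmResource -ResourceId $VM.Id
    $Tags = $Resource.Tags
    if ($null -eq $Tags) { $Tags = @{} }
    $Tags["Compliant"] = if ($Compliant) { "True" } else { "False" }
    Set-AzureRmResource -ResourceId $VM.Id -Tag $Tags -Force
}
```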

Event Grid

Event Grid is the bit which is going to take events from our Azure subscription and POST them to our Azure Automation Webhook so we can perform our evaluation. Create your Event Grid Subscription with the “Event Grid Schema”, the “Subscription” topic type (using your target subscription), listening only for “success” event types. The final field we care about on the Event Subscription creation form is the Webhook – this is the one we created earlier in our Azure Automation Runbook, and now is the time to paste that value in.

Below is an example of the JSON we end up getting POSTed to our Webhook.
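Since the original sample isn’t included here, the following is an illustrative example following the standard Event Grid resource-write schema, with placeholder GUIDs and names. The fields the runbook cares about are data.resourceUri, data.status, and the top-level subject.

```json
[{
  "id": "00000000-0000-0000-0000-000000000000",
  "topic": "/subscriptions/00000000-0000-0000-0000-000000000000",
  "subject": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-rg/providers/Microsoft.Compute/virtualMachines/my-vm",
  "eventType": "Microsoft.Resources.ResourceWriteSuccess",
  "eventTime": "2019-02-14T01:23:45.678Z",
  "data": {
    "correlationId": "00000000-0000-0000-0000-000000000000",
    "resourceProvider": "Microsoft.Compute",
    "resourceUri": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/my-rg/providers/Microsoft.Compute/virtualMachines/my-vm",
    "operationName": "Microsoft.Compute/virtualMachines/write",
    "status": "Succeeded",
    "subscriptionId": "00000000-0000-0000-0000-000000000000",
    "tenantId": "00000000-0000-0000-0000-000000000000"
  },
  "dataVersion": "2",
  "metadataVersion": "1"
}]
```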

Azure Policy

And finally, we arrive at Azure Policy itself. So once again to remind you, all we are doing at this point is performing a compliance evaluation on a resource based solely on the tag applied to it, and accordingly, the policy itself is very simple. Because this is a policy based only on the tag, it means the only effect we can really use is “Audit” – we cannot deny creation of resources based on these evaluations.

The JSON for this policy is pasted below.
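As a stand-in for the original, here is a minimal sketch of what the policyRule portion of such a tag-based audit policy might look like. The tag name “Compliant” and its value are assumptions of mine; align them with whatever your tagging runbooks actually write.

```json
{
  "if": {
    "field": "tags['Compliant']",
    "notEquals": "True"
  },
  "then": {
    "effect": "audit"
  }
}
```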

And that’s it, folks – I hope these last two blog posts have given you enough ideas or artifacts to start building out this idea in your own environments, or building out something much bigger and better using Azure Functions in place of our Azure Automation examples!

If you want to have a chat about how Azure Policy might be useful for your organisation, by all means, please do reach out, as a business we’ve done a bunch of this stuff now, and I’m sure we can help you to plug whatever gaps you might have.


Plugging the Gaps in Azure Policy – Part One


Welcome to the first part of a two-part blog on Azure Policy. Multi-part blogs are not my usual style, but the nature of blogging whilst also being a full-time consultant is that you slip some words in when you find time, and I was starting to feel that if I wrote this in a single part, it would just never see the light of day. Part one of this blog deals with a high-level overview of what the problem is and how we solved it; part two will include the icky sticky granular detail, including some scripts which you can shamelessly plagiarise.

Azure Policy is a feature complete solution which performs granular analysis on all your Azure resources, allowing your IT department to take swift and decisive action on resources which attempt to skirt infrastructure policies you define. Right, the sales guys now have their quotable line, let’s get stuck in to how you’re going to deliver on that.

Azure Policy Overview

First, a quick overview of what Azure Policy actually is. Azure Policy is a service which allows you to create rules (policies) which take an action on an attempt to create or modify an Azure resource. For example, I might have a policy which says “only allow VM SKUs of Standard_D2s_v3”, with the effect of denying the creation of a VM if it’s anything other than that SKU. Now, if a user attempts to create a VM other than the sizing I specify, they get denied – same story if they attempt to modify an existing VM to use a different SKU. Deny is just one example of an “effect” we can take via Azure Policy; we can also use Audit, Append, AuditIfNotExists, DeployIfNotExists, and Disabled.

Taking the actions described above obviously requires that you evaluate the resource to take the action. We do this using some chunks of JSON with fairly basic operators to determine what action we take. The properties you plug into a policy you create via Azure Policy are not actually direct properties of the resource you are attempting to evaluate; rather, we have “aliases”, which map to those properties. So, for example, the alias for the image SKU we used as an example is “Microsoft.Compute/virtualMachines/imageSku”, which maps to the path “properties.storageProfile.imageReference.sku” on the actual resource. This leads me to…

The Gap

If your organisation has decided Azure Policy is the way forward (because of the snazzy dashboard you get for resource compliance, or because you’re going down the path of using baked-in Azure stuff, or whatever), you’re going to find fairly quickly that there is currently not a one-to-one mapping between the aliases on offer and the properties on a resource. Using a virtual machine as an example, we can use Azure Policy to take an effect on a resource depending on its SKU (lovely!), but until very recently we didn’t have the ability to say that if you do spin up a VM with that SKU, it should only ever have a single NIC attached.

The existing officially supported path to getting such aliases added to Policy is via the Azure Policy GitHub (oh, by the way, if you’re working with Policy and not frequenting that GitHub, you’re doing it wrong). The multiple-NICs example I used was requested as an alias by my colleague Ken on October 22nd 2018, and marked as “deployed” into the product on February 14th 2019. Perhaps that’s not a bad turnaround from request to implementation generally speaking, but it’s not quick enough when you’re working on a project which relies on that alias for a delivery deadline arriving months before February 14th 2019. A quick review of both the open and closed issues on the Azure Policy GitHub gives you a feel for the sporadic nature of issues being addressed, and in some cases, due to complexity or security, the inability to address the issues at all. That’s OK, we can fix this.

Plugging the Gap

Something we can use across all Azure resources in Policy is fields. One of the fields we can use is the tag on a resource. So, what we can do here is report compliance status to the Azure Policy dashboard based not on the actual compliance status of the resource, but on whether or not it has a certain tag applied to it – that is to say, a resource can be deemed compliant or non-compliant based on whether or not it has a tag of a certain value. Then, we can use something out of band to evaluate the resource’s compliance and apply the compliance tag. Pretty cunning, huh?

So I’m going to show you how we built out this idea for a customer. In this first part, you’re going to get the high-level view of how it hangs together, and in the second part I will share with you the actual scripts, policies, and other delicious little nuggets so you can build out a demo yourself should it be something you want to have a play with. Bear in mind the following things when using all this danger I am placing in your hands:

  • This was not built to scale, more as a POC, however;
    • This idea would be fine for handling a mid-sized Azure environment
  • This concept is now being built out using Azure Functions (as it should be)
  • Roll-your-own error handling and logging, the examples I will provide will contain none
  • Don’t rely on 100% event-based compliance evaluation (I’ll explain why in part 2)
  • I’m giving you just enough IP to be dangerous, be a good Consultant

Here’s a breakdown of how the solution hangs together. The example below will more or less translate to the future Functions based version, we’ll just scribble out a couple bits, add a couple bits in.

So, from the diagram above, here’s the high-level view of what’s going on:

  1. Event Grid forwards events to a webhook hanging off a PowerShell Runbook.
  2. The PowerShell Runbook executes a script which evaluates the resource forwarded in the webhook data, and applies, removes, or modifies a tag accordingly. Separately, a similar PowerShell runbook fires on a schedule. The schedule-based script contains the same evaluations as the event-driven one, but rather than evaluate an individual resource, it will evaluate all of them.
  3. Azure Policy evaluates resources for compliance, and reports on it. In our case, compliance is simply the presence of a tag of a particular value.

Now, that might already be enough for many of you guys to build out something like this on your own, which is great! If you are that person, you’re probably going to come up with a bunch of extra bits I wouldn’t have thought about, because you’re working from a (more-or-less) blank idea. For others, you’re going to want some gnarly config and scripts so you can plug that stuff into your own environment, tweak it up, and customise it to fit your own lab – for you guys, see you soon for part two!

Kloud has been building out a bunch of stuff recently in Azure Policy, using both complex native policies, and ideas such as the one I’ve detailed here. If your organisation is looking at Azure Policy and you think it might be a good fit for your business, by all means, reach out for a chat. We love talking about this stuff.

Understanding Password Sync and Write-back

For anyone who has worked with Office 365/Azure AD and AADConnect, you will of course be aware that we can now sync passwords both ways between Azure AD and our on-premises AD. This is obviously a very handy thing to do for myriad reasons, and an obvious suggestion for a business intending to utilise Office 365. The conversation with the security bod, however, might be a different kettle of fish. In this post, I aim to explain how the password sync and write-back features work, and hopefully arm you with enough information to have that chat with the security guys.

Password Sync to AAD

If you’ve been half-listening to any talks around password sync, the phrase ‘it’s not the password, it’s a hash of a hash’ is probably the line you walked away with, so let’s break down what that actually means. First up, a quick explanation of what it actually means to hash a value. Hashing a value is applying a cryptographic algorithm to a string to irreversibly convert that string into another string of a fixed length. This differs from encryption in that with encryption we intend to be able to retrieve the data we have stored, so we must hold a decryption key – with hashing, we are unable to derive the original string from the hashed string, and nor do we want to. If it makes no sense to you that we wouldn’t want to reverse a hashed value, it will by the end of this post.
So, in Active Directory when a user sets their password, the value stored is not actually the password itself, it’s an MD4 hash of the password once it’s been converted to Unicode Little Endian format. Using this process, my password “nicedog” ends up in Active Directory as 2993E081C67D79B9D8D3D620A7CD058E. So, this is the first part of the “hash of a hash”.
The second part of the “hash of a hash” is going to go buck wild with the hashing. Once the password has been securely transmitted to AADConnect (I won’t detail that process, because we end up with our original MD4 hash anyway), the MD4 hash is then:

  1. Converted to a 32-byte hexadecimal string
  2. Converted into binary with UTF-16 encoding
  3. Salted with 10 additional bytes
  4. Resultant string hashed and re-hashed 1000 times with HMAC-SHA256 via PBKDF2

So, as you can see, we’re actually hashing the hash enough to make Snoop Dogg blush. Now, we finally have the hash which is securely sent to Azure AD via an SSL channel. At this point, the hashed value we send is so far removed from the original hash that even the most pony-tailed, black-tee-shirt-and-sandal-wearing security guy would have to agree it’s probably pretty secure. Most importantly, if Azure AD were ever compromised and an attacker did obtain your users’ password hashes, they could not be used to attack any on-premises infrastructure with a “pass the hash” attack, which would be possible if AD’s ntds.dit was compromised and password hashes extracted.
So now Azure AD has these hashes but doesn’t know your passwords – how are users’ passwords validated? It’s just a matter of doing the whole thing again with the user-provided password when that user logs on. The password is converted, salted, hashed and rehashed 1000 times in exactly the same manner, meaning the final hash is exactly the same as the one stored in Azure AD – the two hashes are compared, and if we have a match, we’re in. Easy. Now let’s go the other way.

Password Write-back to AD

So how does it work going back the other way? Differently. Going back the other way, we can’t write a hash ourselves to Active Directory, so we need to get the actual password back from AAD to AD and essentially perform a password reset on-prem, allowing AD to then hash and store the password. There are a couple of things which happen when we install AADConnect and enable password write-back to allow us to do this:

  1. Symmetric keys are created by AAD and shared with AADConnect
  2. A tenant specific Service Bus is created, and a password for it shared with AADConnect

The scenario in which we are writing a password back from AAD to AD is obviously a password reset event. This is the only applicable scenario, because a password is only set directly in AAD if that user is “cloud only”, in which case there is of course no on-premises directory to write back to. Only synced users need password write-back, and only upon password reset. So, AAD gets the password back on-premises by doing the following:

  1. User’s submitted password is encrypted with the 2048-bit RSA Key generated when you setup write-back
  2. Some metadata is added to the package, and it is re-encrypted with AES-GCM
  3. Message sent to the Service Bus via an SSL/TLS channel
  4. On-premises AADConnect agent wakes up and authenticates to Service Bus
  5. On-premises agent decrypts package
  6. AADConnect traces cloudanchor attribute back to the connected AD account via the AADConnect sync engine
  7. Password reset is performed for that user against a Primary Domain Controller

So, that’s pretty much it. That’s how we’re able to securely pass passwords back and forth between our on-premises environment and AAD without Microsoft ever needing to store a user’s password. It’s worth noting, for that extra-paranoid security guy, that if he’s worried about any of the encryption keys or shared passwords being compromised, you can re-generate the lot simply by disabling and re-enabling password write-back.
Hopefully this post has gone some way towards helping you have this particular security conversation, no matter what year the security guy’s Metallica tour tee-shirt is from.

Applying Business Rules to Profile Photos Using Microsoft Cognitive Services

A customer I am working with at the moment is in the (very) early stages of discussion around the gathering and application of profile photos across their internal systems. In this particular case, the photos themselves do not yet exist. Sure, there are ID card photos of startled staff taken on day one of their employment, but people being people, they would rather not be forever digitally represented by their former selves – particularly not the version of themselves which had an ID photo taken in a poorly lit, unused meeting room 7 years ago, before they got that gym membership. There are many technical considerations when embarking on something like this: where will we store the data? What file formats do we need to support? What is the maximum size and resolution for photos across our systems? But before answering any of these questions, we should first think about how we are actually going to gather these photos, and once we have them, how we ensure they comply with whatever business rules we wish to apply to them. Not very long ago, the answer to this question would have been ‘hire a grad’ (sorry, grads) – but we live in the future now, and we have artificial intelligence to do our bidding, so let’s take a look at how we do just that.

The Rules

Let’s make up some rules which might be applicable to corporate profile photos.

  1. The photo should be of your face
  2. The photo should contain only one person
  3. The face should be framed in the photo similarly to a passport photo
  4. The photo should not contain any adult content

The APIs

Our rules can be satisfied using two of Microsoft’s Cognitive Services APIs, namely the Face API and the Computer Vision API. Both of these APIs have free tiers which more than satisfy the requirements for this little demo, and paid tiers which are actually very fairly priced. To sign up for the APIs, head over to and click new (A), Intelligence (B), then Cognitive Services API (C).


And then fill in the relevant details.


We are using both the Face API and the Computer Vision API in this example, so the above steps will be repeated for each API.

Once you have completed this process, you will find details of your new accounts in the portal under “Cognitive Services accounts”. This is going to give you the details you’ll need to interact with the APIs.


Now that we have an account setup and ready to go, we can start to play! Because I am an infrastructure guy rather than a dev, I will be using PowerShell for this demonstration. Let’s work through the rules we defined earlier.

Rule #1: The photo should be of your face

To check our image against this rule, we want a response from the Face API to simply confirm that the image is of a face. As I already own a face, I will use my own for this example. The image “C:\temp\DanThom_Profile.jpg” which is referenced in the code snippet is the same image used as my profile photo on this blog.
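A sketch of that call follows. The westus endpoint, the key name, and the octet-stream upload style match the Face API v1.0 as I recall it; adjust the region and key to wherever you created your account.

```powershell
# Key and endpoint from your Cognitive Services account in the portal (placeholders)
$FaceApiKey = "YourFaceApiKey"
$FaceApiUri = "https://westus.api.cognitive.microsoft.com/face/v1.0/detect"

# Send the raw image bytes to the Face API
$ImageBytes = [System.IO.File]::ReadAllBytes("C:\temp\DanThom_Profile.jpg")
$Faces = Invoke-RestMethod -Method Post -Uri $FaceApiUri `
    -Headers @{ "Ocp-Apim-Subscription-Key" = $FaceApiKey } `
    -ContentType "application/octet-stream" -Body $ImageBytes

# True if the API found at least one face in the photo
$FaceDetected = (($Faces | Measure-Object).Count -ge 1)
```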

Executing the above code will give you a simple true/false set against the variable $FaceDetected. This gives us some confidence that the uploaded photo is in fact a face – it doesn’t necessarily mean it’s my face, but I will talk about that a little later.

Rule #2: The photo should contain only one person

We’re going to reuse the same API and almost the same code to validate this rule. Feeding the API a crudely photoshopped version of my original photo with my face duplicated using the snippet below, the variable $MultipleFaces is set to true. Feeding in the original photo sets the variable as false.
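The call is the same detect request as before; only the final check changes. The two-face filename is a hypothetical placeholder.

```powershell
# Key and endpoint as before (placeholders)
$FaceApiKey = "YourFaceApiKey"
$FaceApiUri = "https://westus.api.cognitive.microsoft.com/face/v1.0/detect"

# This time, feed in the photoshopped image and check for more than one face
$ImageBytes = [System.IO.File]::ReadAllBytes("C:\temp\DanThom_TwoFaces.jpg")
$Faces = Invoke-RestMethod -Method Post -Uri $FaceApiUri `
    -Headers @{ "Ocp-Apim-Subscription-Key" = $FaceApiKey } `
    -ContentType "application/octet-stream" -Body $ImageBytes

$MultipleFaces = (($Faces | Measure-Object).Count -gt 1)
```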

Rule #3: The face should be framed in the photo similarly to a passport photo

For this rule, we will use a combination of the Computer Vision and the Face API. The Face API is going to give us some data about how many pixels are occupied by the face in the photo, and we’re simply using the Computer Vision API to get the dimensions of the photo. I appreciate there are many other ways you can retrieve this data without having to call out to an external API, but seeing as we’re playing with these API’s today, why not?

The following snippet of code will get the dimensions of the photo, get the width of the face rectangle (the width of the detected face), then work out the percentage of the width of the photo which is consumed by the face. My profile picture is a good example of good framing: the width of my face consumes 43.59375% of the width of the photo. Based on this, I’m going to say a ‘good’ photo ranges somewhere between 35% and 65%. The following code snippet will work out if the picture meets this criterion, and return a true/false for the variable $GoodFraming.
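A sketch of the two calls combined; the analyze endpoint’s metadata block supplies the photo dimensions, while the Face API supplies the faceRectangle. Endpoints and keys are placeholders as before.

```powershell
# Keys and endpoints from your Cognitive Services accounts (placeholders)
$FaceApiKey   = "YourFaceApiKey"
$FaceApiUri   = "https://westus.api.cognitive.microsoft.com/face/v1.0/detect"
$VisionApiKey = "YourVisionApiKey"
$VisionApiUri = "https://westus.api.cognitive.microsoft.com/vision/v1.0/analyze?visualFeatures=ImageType"

$ImageBytes = [System.IO.File]::ReadAllBytes("C:\temp\DanThom_Profile.jpg")

# Computer Vision gives us the photo dimensions in its metadata block
$Analysis = Invoke-RestMethod -Method Post -Uri $VisionApiUri `
    -Headers @{ "Ocp-Apim-Subscription-Key" = $VisionApiKey } `
    -ContentType "application/octet-stream" -Body $ImageBytes

# The Face API gives us the face rectangle
$Faces = Invoke-RestMethod -Method Post -Uri $FaceApiUri `
    -Headers @{ "Ocp-Apim-Subscription-Key" = $FaceApiKey } `
    -ContentType "application/octet-stream" -Body $ImageBytes

# Percentage of the photo's width occupied by the detected face
$FacePercent = ($Faces[0].faceRectangle.width / $Analysis.metadata.width) * 100
$GoodFraming = ($FacePercent -ge 35 -and $FacePercent -le 65)
```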

Rule #4: The photo should not contain any adult content

We are all decent human beings, so it seems like this shouldn’t be a concern, but the reality is if you work in a larger organisation, someone may choose to perform a ‘mic drop’ by updating their profile picture to something unsavoury in their last weeks of employment. Luckily the Computer Vision API also has adult content detection. The following code snippet will return a simple true/false against the variable $NSFW.
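A sketch of the adult-content check, using the analyze endpoint with visualFeatures=Adult; key and endpoint are placeholders.

```powershell
# Key and endpoint for the Computer Vision API (placeholders)
$VisionApiKey = "YourVisionApiKey"
$AdultApiUri  = "https://westus.api.cognitive.microsoft.com/vision/v1.0/analyze?visualFeatures=Adult"

$ImageBytes = [System.IO.File]::ReadAllBytes("C:\temp\DanThom_Profile.jpg")
$Analysis = Invoke-RestMethod -Method Post -Uri $AdultApiUri `
    -Headers @{ "Ocp-Apim-Subscription-Key" = $VisionApiKey } `
    -ContentType "application/octet-stream" -Body $ImageBytes

# isAdult/isRacy come back as booleans, adultScore/racyScore as numbers
$NSFW = ($Analysis.adult.isAdult -or $Analysis.adult.isRacy)
```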

Interestingly, the ‘visualFeatures=Adult’ query returns a true/false for ‘isAdultContent’ and for ‘isRacyContent’, as well as numerical results for ‘adultScore’ and ‘racyScore’. I was wondering what might be considered ‘racy’, so I fed the API the following image.


As it turns out, old mate Bill gets himself a racyScore of 0.2335. My profile picture gets 0.0132, and an actual racing car got 0.0087. Bill Gates is nearly twenty times as racy as I am, and off the charts compared to a racing car.

Other Cool Things

There are all sorts of other neat things these APIs can return which would be even more helpful in vetting corporate profile pictures. Returning the landmarks of the face, whether the person is wearing sunglasses, the individual’s gender, the photo’s similarity to other photos, or whether the person is a celebrity would all be helpful in a fully developed solution for vetting corporate profile photos.


Hopefully the above examples have provided a little insight into how the Microsoft Cognitive Services APIs might be useful for business applications. It amazes me how much affordable, readily available cognitive power is now at the fingertips of anyone with even minimal coding skills.

You can see from the examples above how easily you could scale something like this. You could offer users an application where they take their own profile picture and have it immediately reviewed by Microsoft Cognitive Services: either approved on the spot, or rejected, giving the user the option to submit the photo for manual approval (hooray! one of the grads got their job back!) or discard it and try again.

I have yet to see any of the Microsoft Cognitive Services implemented in any of the businesses I have been involved with, but I suspect in the coming years we will be seeing more and more of it, and that is certainly something I look forward to.

Managing SPO User Profiles with FIM/MIM and the Microsoft PowerShell Connector

Back in March, my colleague Darren Robinson published this post, which nicely explains how to use Søren Granfeldt’s FIM/MIM PowerShell MA to manage SharePoint Online profiles. While Darren’s post covers everything you need to connect to SPO and manage user profiles via FIM/MIM, some of your clients may prefer to use the Microsoft equivalent for reasons of perceived support and product quality. This post will cover what is required to get the Connector up and running.


To get this show on the road, you’re going to need the following

Setting up the Connector

First up, if you’re here because you just need to get an SPO connector up and running and get some attributes flowing, I’m going to make your life real easy: here is a full export of a working MA with a bunch of attributes configured in the scripts and ready to go. Just add your own credentials, tenant SPO admin URL, join rules and attribute flows and you’re done. You’re welcome. If, however, you have some time to fill in your day, feel free to follow along.

Create a new Connector in the Sync Engine. Provided you have correctly installed the PowerShell connector linked in the prerequisites, you will see “PowerShell (Microsoft)” as an available Connector type. Give your Connector a name and click next, where you will see the connectivity tab appear.

Connectivity Tab Settings

Plug in the following configuration items:

Password: yourSPOserviceaccountpassword
Common Module Script Name: FIMPowerShellConnectorModule.psm1

You will now see where you can start to paste some scripts. On this tab, we will provide two scripts: the common module script, which was written by Microsoft and contains functions used in our import and export scripts, and a schema script. The schema script contains the attributes I was interested in for now, which you can add to using the same formatting. My scripts are as follows:

Common Module Script

Schema Script
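If you’d rather build the schema script yourself than lift it from my export, it follows the ECMA2 object model used by the Microsoft PowerShell Connector — something along these lines (the non-anchor attributes below are illustrative; add the SPO profile properties you care about using the same pattern):

```powershell
# Build the connector schema the Sync Engine will consume
$Schema = [Microsoft.MetadirectoryServices.Schema]::Create()

# Define a "user" object type
$SchemaType = [Microsoft.MetadirectoryServices.SchemaType]::Create("user", $false)

# The anchor uniquely identifies each object - AccountName here is illustrative
$SchemaType.Attributes.Add(
    [Microsoft.MetadirectoryServices.SchemaAttribute]::CreateAnchorAttribute("AccountName", "String"))

# Single-valued attributes to flow - extend this list as required
$SchemaType.Attributes.Add(
    [Microsoft.MetadirectoryServices.SchemaAttribute]::CreateSingleValuedAttribute("Department", "String"))
$SchemaType.Attributes.Add(
    [Microsoft.MetadirectoryServices.SchemaAttribute]::CreateSingleValuedAttribute("Title", "String"))

$Schema.Types.Add($SchemaType)
$Schema
```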

Capabilities Tab Settings

On the capabilities tab, after much trial and error, I settled on the configuration shown in the screenshots below.


Global Parameters Tab Settings

Import Script

Export Script

Join Rules and Attribute Flows

I am not going to go into Join Rules or Attribute Flows in any detail here, as those things are specific to your organisation and its requirements. The process for creating Join Rules and Attribute Flows is exactly the same as for every other FIM MA you’ve ever worked with.


As with any PowerShell scripting or FIM work, it’s not just going to work as expected the first time you hit the go button. Particularly with this Connector, the default level of logging will just tell you something didn’t work and offer no real detail as to why. For this reason, during development you’ll want to crank up the logging. The following steps on enabling logging are shamelessly plagiarised from Technet.

Open the %ProgramFiles%\Microsoft Forefront Identity Manager\2010\Synchronization Service\bin\miiserver.exe.config file using a text editor and paste the following XML into the file on the line immediately following the <sources> tag
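The XML in question looks like this (adjust the initializeData path to wherever you want the log written):

```xml
<source name="ConnectorsLog" switchValue="Verbose">
  <listeners>
    <add initializeData="c:\logs\ConnectorsLog.log"
         type="System.Diagnostics.TextWriterTraceListener"
         name="ConnectorsLogListener"
         traceOutputOptions="DateTime" />
  </listeners>
</source>
```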

Create the directory c:\logs, grant the service account for the synchronization service Modify permissions to the c:\logs directory, then restart the synchronization service. By default, PowerShell errors and some other data from the MA will be logged here, but you can be as verbose as you like in your script by including cmdlets like Write-Debug.


Following the steps above, it should be relatively straightforward to spin up an SPO Connector and get some attributes flowing. Hopefully this post has saved you a bit of time and effort!



Sending SMS Through PowerShell with Telstra’s New API

The code detailed in this post won’t work anymore. If you’re looking for updated PowerShell to use with Telstra’s APIs, please check out this updated post. 

Recently, Telstra released their first public API, which in true telco fashion leverages an existing product in their stable: SMS. The service allows anyone with a Telstra account (get one here) to get an API key which will allow you to send up to 100 messages per day, 1000 per month to Australian mobiles. Obviously, this is going to be great for anyone wishing to use a free SMS service for labbing, testing, or sending your buddies anonymous cat facts.

I’m not so much a dev, so the first thing I wanted to do was to test this out using PowerShell. Using PowerShell, I get to look like I’m doing something super-important whilst I send my buddies cat facts. The following is the code I used to make this happen.

First, we want to get ourselves an access token, so we can auth to the service.

$app_key = "Th1SiSn0TreAllYmYAppK3ybUtTHanKsAnyW4y"
$app_secret = "n0rmYS3cr3t"
$auth_string = "https://api.telstra.com/v1/oauth/token?client_id=" + $app_key + "&client_secret=" + $app_secret + "&grant_type=client_credentials&scope=SMS"
$auth_values = Invoke-RestMethod $auth_string

Now that we have an auth token, we can use it to send, receive, and check the status of messages.

# Send SMS
$tel_number = "0488888888"
$token = $auth_values.access_token
$body = "On average, cats spend 2/3 of every day sleeping. That means a nine-year-old cat has been awake for only three years of its life"
$sent_message = Invoke-RestMethod "https://api.telstra.com/v1/sms/messages" -ContentType "application/json" -Headers @{"Authorization"="Bearer $token"} -Method Post -Body "{`"to`":`"$tel_number`", `"body`":`"$body`"}"

At this point, I receive an SMS to my phone, which I can reply to


The message can also be queried to check its delivery status, and check if the message has been replied to, as below:

# Get Message Status
$messageid = $sent_message.messageId
$message_status = Invoke-RestMethod "https://api.telstra.com/v1/sms/messages/$messageid" -Headers @{"Authorization"="Bearer $token"}
# Get Message Response
$message_response = Invoke-RestMethod "https://api.telstra.com/v1/sms/messages/$messageid/response" -Headers @{"Authorization"="Bearer $token"}

Executing the above code gives me the following


Now obviously, you can wrap all these up in some functions, pass in external parameters, strap into PowerShell workflows in FIM, incorporate into SSPR, and just about anything else you can think of (in your labs). There are some caveats to using the service, some obvious of course:

  • It’s SMS, so a 160 character limit applies
  • You can send only one message at a time
  • The service is not intended for large volumes of messages
  • 100 messages per day/1000 per month limit
  • The service is in beta
  • Telstra cannot guarantee delivery once the message is passed to another telco
  • Australian mobiles only
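As a starting point for the wrapping-into-functions idea above, here’s a minimal Send-SMS sketch. It assumes the same v1 endpoints as the snippets earlier in this post; as noted at the top, these have since been retired, so adjust accordingly:

```powershell
function Send-SMS {
    param(
        [Parameter(Mandatory)][string]$AppKey,
        [Parameter(Mandatory)][string]$AppSecret,
        [Parameter(Mandatory)][string]$To,
        [Parameter(Mandatory)][string]$Body
    )
    # Grab a fresh token for each send - fine for lab use
    $auth = Invoke-RestMethod "https://api.telstra.com/v1/oauth/token?client_id=$AppKey&client_secret=$AppSecret&grant_type=client_credentials&scope=SMS"
    $headers = @{ "Authorization" = "Bearer $($auth.access_token)" }

    # Send the message and return the API response (including messageId)
    Invoke-RestMethod "https://api.telstra.com/v1/sms/messages" -ContentType "application/json" `
        -Headers $headers -Method Post -Body (@{ to = $To; body = $Body } | ConvertTo-Json)
}

# Usage:
# Send-SMS -AppKey $app_key -AppSecret $app_secret -To "0488888888" -Body "Cats have 32 muscles in each ear"
```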

Initially in my testing, I found messages sat in the state of “SENT” and would not update to “DELIVERED”. After some general fist waving and mutterings about beta services, I rebooted my phone and the messages I had queued came through. Although I have had no issue with SMS delivery in the past, I’m happy to put this down to the handset bugging out. In all subsequent testing, the messages came through so quickly that my phone buzzed immediately after hitting enter on the command.

I hope the code snippets provided help you out with spinning this up in your labs, but please check the Telstra T’s and C’s before sending out some informative cat facts.


The FIM User Experience

A recent post by my colleague Jamie Skella “What UX Isn’t” started me thinking about how UX applies to FIM. Throughout my career as an Identity Management Consultant, I’ve seen projects reach a point in maturity where stakeholders are walked through the tasks an admin or user will perform in the portal, and the average eyebrow height in the room rises exponentially.

Those of us who have been working with Microsoft’s identity products for a while are used to seeing the glitz and glamour of the Sync Engine console, previously the only interface available with the product, so when the FIM Portal was introduced with FIM 2010, it gave us a “user friendly” interface to work with. Sure, it was a bit clunky here and there, but hey, we’ve got a nice user interface now! The problem, however, is that we’re not the users. The users are a completely separate group of people, who are not Identity Management Consultants and who do not find this a refreshing change.

In this post, I will cover some of the user experience pain points in the FIM Portal which I believe should be called out early in the consulting piece. The fact of the matter is, what may seem like a trivial user experience change to the casual observer, may be a significant piece of development work for you, your UX guys, and your developers. Calling these things out early will give you the opportunity to talk about scope, budgets, or simply get an agreement up-front about how it is.

The Lack of Formatting Flexibility in RCDCs

An RCDC is essentially a bunch of XML which tells the FIM Portal which controls, representing which attributes, to present. The FIM Portal takes all that information and presents it to the user in the only way it knows how: each item laid out in the XML renders itself as a single item in the UI.
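To give you a feel for it, a single text-box control in an RCDC looks something like this (the attribute binding shown is a typical illustrative example, not lifted from any particular out-of-the-box configuration):

```xml
<my:Control my:Name="DisplayName" my:TypeName="UocTextBox"
            my:Caption="Display name" my:Description="The user's display name.">
  <my:Properties>
    <my:Property my:Name="Required" my:Value="True"/>
    <my:Property my:Name="Text" my:Value="{Binding Source=object, Path=DisplayName, Mode=TwoWay}"/>
  </my:Properties>
</my:Control>
```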

The problem here is that there is not much flexibility in how the portal will render this item. Each control on the page appears on its own line, and the controls stack one after the other in the order you define them in the XML. In demoing the portal in the past and showing off these screens, I’ve had project stakeholders say things like “That’s fine, but just put those options in two columns” or “Great, you just need to indent the options below that first one to show they are related” or “Group all those options tightly together across the page”. Cue the shocked look when the answer is “Easier said than done”.


Nothing Happens Until You Hit Finish

Typically in FIM, we have forms (RCDCs) which we use to enter a bunch of information, then do something with it. We flow that information somewhere, we kick off a workflow based on the data, and we add or remove sync rules. If we didn’t want to do something useful with the data, it’s fair to say we don’t want it at all. The issue is that nothing happens with this data until we hit that finish button. The forms are essentially static. Yes, of course we can use auto-post-back to make the forms more dynamic, but how useful would it be if, when we are creating a user account, the form could query Active Directory and let us know that there happens to already be a Gordon Shumway in the directory, meaning the new user’s account name will end up as gshumway1? Perhaps someone has already created that exact account in AD directly, and we’re actually busy creating a duplicate? This is just one example where truly dynamic forms would be advantageous; I’m sure, based on your experience and your customers’ needs, you could think of dozens more.

Adding and Removing Users from Groups and Sets is Clunky

When adding and removing users from a set or a group, we have a whole stack of page real estate dedicated to this one task. Why? Because you need the box showing the current group membership, you need the box and corresponding buttons for adding users, and you need a box and corresponding buttons for removing users. If we forget for a minute that this is what we have become used to in FIM, we quickly realise that this is not pretty. Considering that adding and removing users is a task which would typically be assigned to IT Admins who are probably most familiar with performing this task in Active Directory Users and Computers, you can see how the new interface we are presenting may seem like a step backwards.


The Date Picker is not a Date Picker

For the longest time on the internet, we’ve known that if we need to enter a date into a website, we click on the date field and a date picker pops up. We can quickly select an appropriate date by evaluating, say, what day of the week the 20th of March happens to be in 2015. Default FIM behaviour does not afford us this opportunity; instead, we need to enter a date in a specified format. Once again, considering our audience here is likely to be either IT Admins or even end users, this is going to seem like a backwards step.


So What Can We Do?

There are many options for customising the portal to increase usability and to tighten up the interface. We can plug in community provided features which replace the calendar picker, we can play with the CSS behind the pages and change the feel of the portal with our own custom themes, and we can strip down or beef up the RCDC’s to include or exclude the parts we require. Ultimately, we should take a step back at the top of the engagement and ask the basic question: “Who is going to use this portal and what are they going to use it for?” and take a realistic approach by thinking like the end user.

If the requirement is for an admin to be able to manage user accounts and nothing more, is the FIM Portal really the best solution? How much effort would be required for a Developer and a UX guy to spin up a tailored solution to perform this task? How different might that time be, compared to the time taken for an Identity Management Consultant to hammer the FIM Portal into the required shape? We can still use the functionality of both the FIM Synchronisation Engine and the FIM Service to handle the workflows and data flow, so all we have to gain is a better user experience, and a happier customer, right?

Conversing with my colleagues on this topic, it seems one of the reasons clients shy away from complete customisation in this area is the perception that a custom solution will be less supported, or supported only by the vendor who installed it. How could this be true? If we are writing a custom front end to known web service endpoints, and supplying the source code and appropriate documentation to the client as part of the engagement, where are the concerns? Code is code is code.

My TL;DR (Too Long; Didn’t Read) line is this: start thinking about the FIM user experience now, and keep your client’s eyebrow height at an appropriate level.

Controlling UI Elements with Sets in FIM 2010

Out of the box, FIM 2010’s methodology for handling which UI elements are visible on the FIM homepage is limited to “are you an administrator” or “are you not”. This is governed by the Usage Keyword “BasicUI”. This guide will demonstrate how you can create additional Usage Keywords tied to sets, allowing granular control over which navigation bar and homepage elements are visible to a user.

Before we get into how to create a Usage Keyword, let’s understand what it actually is. A usage keyword is basically a set. The set targets resources where the “Usage Keyword” multivalued attribute is equal to a string you define. The best way to understand this is to have a look at the membership of the existing sets for the “BasicUI” Usage Keyword.

So all we have here is a bunch of resources which have the string “BasicUI” populated in the “Usage Keyword” multivalued attribute bound to them. As you can probably tell from the membership list, these resources are all links on the Home Page. And of course, where we have a grouping of resources, we can use an MPR to control access to them. This is essentially how Usage Keywords work.
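If you peek at the filter behind one of those sets, it boils down to a simple XPath query along these lines (this example targets the navigation bar resources carrying the keyword):

```
/NavigationBarConfiguration[UsageKeyword = 'BasicUI']
```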

Now that we have an understanding of the concept, let’s build it.

In this example I have a pre-existing criteria based set of users called “_SET: Kloud Users” which contains users that belong to the KLOUD Active Directory domain. I would like to grant these users access to the navigation bar resource “Security Groups (SGs)”.

The first step towards achieving this is to create the Usage Keyword in the context of the navigation bar resource.

  1. Create a new set. You can name this set whatever you like in-line with your naming conventions. I will call mine “_SET: NavBar Usage Keyword Kloud Users”
  2. Create membership criteria. Select navigation bar resource that matches the following condition “Usage Keyword contains Kloud Users”
  3. Click finish

You now have a usage keyword of “Kloud Users” which applies to the navigation bar resource. We will now create another couple of sets. We will repeat the above steps, replacing “navigation bar resources” with “home page resource” and “search scope”. If you follow along with me, create these sets and call them “_SET: Homepage Usage Keyword Kloud Users” and “_SET: Search Scope Usage Keyword Kloud Users”

Now that we have our Usage Keywords in place, we need to make them do something. We can of course achieve this using MPRs.

  1. Create a new request type MPR. Again, you can name this MPR whatever you like in-line with your naming conventions. I will call mine “_MPR: NavBar Read Usage Keyword Kloud Users”
  2. On the “Requestors and Operations” tab, select the set of users that you would like your newly created usage Keyword to apply to. In my example, I would like my “Kloud Users” keyword to relate to the “_SET: Kloud Users” set.
  3. Again on the “Requestors and Operations” tab, tick the boxes for “Read resource” and “Grants permission” then click next
  4. On the “Target Resources” tab, define the set we created earlier as the “Target Resource Definition Before Request” and select “All Attributes” then click finish.

So now our Usage Keyword for the navigation bar resource is ready to go. As we wish to apply this to the homepage resources and search scopes, we must repeat the MPR creation steps for each resource, replacing the “Target Resource Definition Before Request” with the relevant set. I now have three sets and three MPRs as follows:

_SET: NavBar Usage Keyword Kloud Users
_SET: Homepage Usage Keyword Kloud Users
_SET: Search Scope Usage Keyword Kloud Users

_MPR: NavBar Read Usage Keyword Kloud Users
_MPR: Homepage Read Usage Keyword Kloud Users
_MPR: Search Scope Read Usage Keyword Kloud Users

The final step is now to employ our newly defined usage keyword. As mentioned, my desire is to make the “Security Groups (SGs)” navigation bar item visible to all users which are part of the “_SET: Kloud Users” set.

  1. From the administration menu, select “Navigation Bar Resources” and open the “Security Groups (SGs)” item
  2. In the “Usage Keyword” box, enter your newly created usage keyword “Kloud Users”
  3. Click next, then finish
  4. Perform an iisreset

That’s all there is to it. All that remains now is to log in as a user who belongs to the set you’ve targeted, to ensure they can in fact see the element you’ve granted them access to read. The screenshot below shows what the navigation bar looks like to a member of the KLOUD domain versus a user not in the KLOUD domain.

You will find when editing search scopes and home page resources that they too have a field for “Usage Keyword”. If you have followed through with me, you will now be able to use your new usage keyword to control the visibility of these elements.
