Automate ADFS Farm Installation and Configuration

Originally posted on Nivlesh’s blog @


In this multi-part blog, I will be showing how to automatically install and configure a new ADFS Farm. We will accomplish this using Azure Resource Manager templates, Desired State Configuration scripts and Custom Script Extensions.


We will use Azure Resource Manager to create a virtual machine that will become our first ADFS Server. We will then use a desired state configuration script to join the virtual machine to our Active Directory domain and to install the ADFS role. Finally, we will use a Custom Script Extension to install our first ADFS Farm.

Install ADFS Role

We will be using the xActiveDirectory and xPendingReboot experimental DSC modules.

Download these from

After downloading, unzip the file and place the contents in the PowerShell modules directory located at $env:ProgramFiles\WindowsPowerShell\Modules (unless you have changed your Program Files folder, this will be located at C:\Program Files\WindowsPowerShell\Modules )

Open your Windows PowerShell ISE and let's create a DSC script that will join our virtual machine to the domain and also install the ADFS role.

Copy the following into a new Windows PowerShell ISE file and save it with a filename of your choice (I saved mine as InstallADFS.ps1)

In the above, we are declaring some mandatory parameters and some variables that will be used within the script

$MachineName is the hostname of the virtual machine that will become the first ADFS server

$DomainName is the name of the domain where the virtual machine will be joined

$AdminCreds contains the username and password for an account that has permissions to join the virtual machine to the domain

$RetryCount and $RetryIntervalSec hold values that will be used to check if the domain is available
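As a sketch of how the opening of the DSC configuration might look (the default retry values here are assumptions):

```powershell
Configuration InstallADFS
{
    param (
        [Parameter(Mandatory)]
        [String]$MachineName,

        [Parameter(Mandatory)]
        [String]$DomainName,

        [Parameter(Mandatory)]
        [System.Management.Automation.PSCredential]$AdminCreds,

        # Used while waiting for the Active Directory domain to become available
        [Int]$RetryCount = 20,
        [Int]$RetryIntervalSec = 30
    )

    # ... DSC resources are declared below ...
}
```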

We need to import the experimental DSC modules that we had downloaded. To do this, add the following lines to the DSC script

Import-DscResource -Module xActiveDirectory, xPendingReboot

Next, we need to convert the supplied $AdminCreds into a domain\username format. This is accomplished by the following lines (the converted value is held in $DomainCreds )
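The conversion is typically a one-liner like this (a sketch; names follow the text above):

```powershell
# Build a domain\username credential from the supplied $AdminCreds
[System.Management.Automation.PSCredential]$DomainCreds = New-Object System.Management.Automation.PSCredential ("${DomainName}\$($AdminCreds.UserName)", $AdminCreds.Password)
```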

Next, we need to tell DSC that the command needs to be run on the local computer. This is done by the following line (localhost refers to the local computer)

Node localhost

We need to tell the LocalConfigurationManager that it should reboot the server if needed, continue with the configuration after the reboot, and apply the settings only once. (DSC can apply a setting and then constantly monitor it to check that it has not been changed; if the setting is found to have changed, DSC can re-apply it. In our case we will not do this; we will apply the settings just once.)
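Inside the Node block, those three settings map to the LocalConfigurationManager resource like so (a sketch):

```powershell
LocalConfigurationManager
{
    RebootNodeIfNeeded = $true                    # reboot the server if needed
    ActionAfterReboot  = 'ContinueConfiguration'  # resume the configuration after reboot
    ConfigurationMode  = 'ApplyOnly'              # apply the settings once, do not monitor them
}
```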

Next, we need to check if the Active Directory domain is ready. For this, we will use the xWaitForADDomain function from the xActiveDirectory experimental DSC module.
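A sketch of the wait resource, using the parameters declared earlier (the resource instance name DscForestWait is an arbitrary choice):

```powershell
xWaitForADDomain DscForestWait
{
    DomainName           = $DomainName
    DomainUserCredential = $DomainCreds
    RetryCount           = $RetryCount
    RetryIntervalSec     = $RetryIntervalSec
}
```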

Once we know that the Active Directory domain is available, we can go ahead and join the virtual machine to the domain.

The JoinDomain resource depends on xWaitForADDomain. If xWaitForADDomain fails, JoinDomain will not run.
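One common way to perform the join is the xComputer resource; note that xComputer comes from the xComputerManagement module, which is not listed above, so treat this as an illustrative sketch rather than the exact resource used here:

```powershell
xComputer JoinDomain
{
    Name       = $MachineName
    DomainName = $DomainName
    Credential = $DomainCreds   # credential in domain\username format
    DependsOn  = '[xWaitForADDomain]DscForestWait'
}
```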

Once the virtual machine has been added to the domain, it needs to be restarted. We will use xPendingReboot function from the xPendingReboot experimental DSC module to accomplish this.
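A sketch, assuming the domain-join resource above was named JoinDomain:

```powershell
xPendingReboot Reboot1
{
    Name      = 'RebootServer'
    DependsOn = '[xComputer]JoinDomain'
}
```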

Next, we will install the ADFS role on the virtual machine
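The role itself can be installed with the built-in WindowsFeature resource (ADFS-Federation is the Windows feature name for the ADFS role):

```powershell
WindowsFeature installADFS
{
    Ensure = 'Present'
    Name   = 'ADFS-Federation'
}
```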

Our script has now successfully added the virtual machine to the domain and installed the ADFS role on it. Next, create a zip file with InstallADFS.ps1 and upload it to a location that Azure Resource Manager can access (I would recommend uploading to GitHub). Include the xActiveDirectory and xPendingReboot experimental DSC module directories in the zip file as well. Also add a folder called Certificates inside the zip file and put the ADFS certificate and the encrypted password files (discussed in the next section) inside the folder.

In the next section, we will configure the ADFS Farm.

The full InstallADFS.ps1 DSC script is pasted below

Create ADFS Farm

Once the ADFS role has been installed, we will use Custom Script Extensions (CSE) to create the ADFS farm.

One of the requirements for configuring ADFS is a signed certificate. I used a 90-day trial certificate from Comodo.

There is a trick that I am using to make my certificate available on the virtual machine. If you bootstrap a DSC script to your virtual machine in an Azure Resource Manager template, the script, along with all the non out-of-box DSC modules, has to be packaged into a zip file and uploaded to a location that ARM can access. ARM then downloads the zip file, unzips it, and places all the directories inside it into $env:ProgramFiles\WindowsPowerShell\Modules ( C:\Program Files\WindowsPowerShell\Modules ). ARM assumes the directories are PowerShell modules and puts them in the appropriate directory.

I am using this feature to sneak my certificate on to the virtual machine. I create a folder called Certificates inside the zip file containing the DSC script and put the certificate inside it. Also, I am not too fond of passing plain passwords from my ARM template to the CSE, so I created two files, one to hold the encrypted password for the domain administrator account and the other to contain the encrypted password of the adfs service account. These two files are named adminpass.key and adfspass.key and will be placed in the same Certificates folder within the zip file.

I used the following to generate the encrypted password files

AdminPlainTextPassword and ADFSPlainTextPassword are the plain text passwords that will be encrypted.

$key is used to convert the secure string into an encrypted standard string. Valid key lengths are 16, 24 and 32 bytes.

For this blog, we will use

$Key = (3,4,2,3,56,34,254,222,1,1,2,23,42,54,33,233,1,34,2,7,6,5,35,43)
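A sketch of generating the two files (the output paths are placeholders, and AdminPlainTextPassword / ADFSPlainTextPassword stand in for the real passwords):

```powershell
$Key = (3,4,2,3,56,34,254,222,1,1,2,23,42,54,33,233,1,34,2,7,6,5,35,43)

# Encrypt the domain administrator password and write it to adminpass.key
'AdminPlainTextPassword' | ConvertTo-SecureString -AsPlainText -Force |
    ConvertFrom-SecureString -Key $Key | Out-File 'C:\temp\adminpass.key'

# Encrypt the ADFS service account password and write it to adfspass.key
'ADFSPlainTextPassword' | ConvertTo-SecureString -AsPlainText -Force |
    ConvertFrom-SecureString -Key $Key | Out-File 'C:\temp\adfspass.key'
```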

Open Windows PowerShell ISE and paste the following (save the file with a name of your choice. I saved mine as ConfigureADFS.ps1)

param (

These are the parameters that will be passed to the CSE

$DomainName is the name of the Active Directory domain
$DomainAdminUsername is the username of the domain administrator account
$AdfsSvcUsername is the username of the ADFS service account

Next, we will define the value of the Key that was used to encrypt the password and the location where the certificate and the encrypted password files will be placed

$localpath = "C:\Program Files\WindowsPowerShell\Modules\Certificates\"
$Key = (3,4,2,3,56,34,254,222,1,1,2,23,42,54,33,233,1,34,2,7,6,5,35,43)

Now, we have to read the encrypted passwords from the adminpass.key and adfspass.key files and then convert them into a domain\username format
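A sketch of that step, using the $localpath and $Key values defined above:

```powershell
# Decrypt the stored passwords and rebuild domain\username credentials
$AdminPass    = Get-Content ($localpath + 'adminpass.key') | ConvertTo-SecureString -Key $Key
$DomainCreds  = New-Object System.Management.Automation.PSCredential ("$DomainName\$DomainAdminUsername", $AdminPass)

$AdfsPass     = Get-Content ($localpath + 'adfspass.key') | ConvertTo-SecureString -Key $Key
$AdfsSvcCreds = New-Object System.Management.Automation.PSCredential ("$DomainName\$AdfsSvcUsername", $AdfsPass)
```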

Next, we will import the certificate into the local computer certificate store. We will mark the certificate exportable and set its password to be the same as the domain administrator password.

In the above, after the certificate is imported, $cert is used to hold the certificate thumbprint
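A sketch of the import (the .pfx filename is a placeholder; $DomainCreds holds the domain administrator credential built earlier):

```powershell
# Import the signed certificate into the local machine store, marked exportable
# $cert.Thumbprint then gives the certificate thumbprint
$cert = Import-PfxCertificate -FilePath ($localpath + 'adfscert.pfx') `
    -CertStoreLocation 'Cert:\LocalMachine\My' -Exportable `
    -Password $DomainCreds.Password
```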

Next, we will configure the ADFS Farm

The ADFS Federation Service display name is set to “Active Directory Federation Service” and the Federation Service Name is set to
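The farm creation itself is done with the Install-AdfsFarm cmdlet; a sketch, where the federation service name (adfs.$DomainName) is a placeholder and $cert / $AdfsSvcCreds come from the steps above:

```powershell
Install-AdfsFarm `
    -CertificateThumbprint $cert.Thumbprint `
    -FederationServiceDisplayName 'Active Directory Federation Service' `
    -FederationServiceName "adfs.$DomainName" `
    -ServiceAccountCredential $AdfsSvcCreds `
    -OverwriteConfiguration
```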

Upload the CSE to a location that Azure Resource Manager can access (I uploaded my script to GitHub)

The full ConfigureADFS.ps1 CSE is shown below

Azure Resource Manager Template Bootstrapping

Now that the DSC and CSE scripts have been created, we need to add them in our ARM template, straight after the virtual machine is provisioned.

To add the DSC script, create a DSC extension and link it to the DSC Package that was created to install ADFS. Below is an example of what can be used
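For illustration only (the resource name, apiVersion, settings keys and variable names here are assumptions; adapt them to your template):

```json
{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "[concat(parameters('ADFS01VMName'),'/InstallADFS')]",
  "apiVersion": "2015-06-15",
  "location": "[resourceGroup().location]",
  "dependsOn": [
    "[resourceId('Microsoft.Compute/virtualMachines', parameters('ADFS01VMName'))]"
  ],
  "properties": {
    "publisher": "Microsoft.Powershell",
    "type": "DSC",
    "typeHandlerVersion": "2.19",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "ModulesUrl": "[variables('InstallADFSPackageUrl')]",
      "ConfigurationFunction": "InstallADFS.ps1\\InstallADFS",
      "Properties": {
        "MachineName": "[parameters('ADFS01VMName')]",
        "DomainName": "[parameters('DomainName')]",
        "AdminCreds": {
          "UserName": "[parameters('adminUsername')]",
          "Password": "PrivateSettingsRef:AdminPassword"
        }
      }
    },
    "protectedSettings": {
      "Items": { "AdminPassword": "[parameters('adminPassword')]" }
    }
  }
}
```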

The extension will run after the ADFS virtual machine has been successfully created (referred to as ADFS01VMName)

The MachineName, DomainName and domain administrator credentials are passed to the DSC extension.

Below are the variables that have been used in the json file for the DSC extension (I have listed my GitHub repository location)

Next, we have to create a Custom Script Extension to link to the CSE for configuring ADFS. Below is an example that can be used
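Again for illustration only (names, apiVersion and the command line are assumptions to adapt):

```json
{
  "type": "Microsoft.Compute/virtualMachines/extensions",
  "name": "[concat(parameters('ADFS01VMName'),'/ConfigureADFS')]",
  "apiVersion": "2015-06-15",
  "location": "[resourceGroup().location]",
  "dependsOn": [
    "[resourceId('Microsoft.Compute/virtualMachines', parameters('ADFS01VMName'))]",
    "[concat('Microsoft.Compute/virtualMachines/', parameters('ADFS01VMName'),'/extensions/InstallADFS')]"
  ],
  "properties": {
    "publisher": "Microsoft.Compute",
    "type": "CustomScriptExtension",
    "typeHandlerVersion": "1.8",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "fileUris": [ "[variables('ConfigureADFSScriptUrl')]" ]
    },
    "protectedSettings": {
      "commandToExecute": "[concat('powershell -ExecutionPolicy Unrestricted -File ConfigureADFS.ps1 ', parameters('DomainName'), ' ', parameters('adminUsername'), ' ', parameters('AdfsSvcUsername'))]"
    }
  }
}
```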

The CSE depends on the ADFS virtual machine having been successfully provisioned and on the DSC extension that installs the ADFS role having successfully completed.

The DomainName, Domain Administrator Username and the ADFS Service Username are passed to the CSE script

The following contains a list of the variables being used by the CSE (the example below shows my GitHub repository location)

"repoLocation": "",
"ConfigureADFSScriptUrl": "[concat(parameters('repoLocation'),'ConfigureADFS.ps1')]",

That’s it, folks! You now have an ARM template that can be used to automatically install the ADFS role and then configure a new ADFS Farm.

In my next blog, we will explore how to add another node to the ADFS Farm and we will also look at how we can automatically create a Web Application Proxy server for our ADFS Farm.

Creating Organizational Units, and Groups in AD with GUID

A recent client of Kloud wanted to be able to create new organizational units and groups automatically, with a unique ID (GUID) for each organizational unit. The groups created needed to share the GUID of the OU.

In this blog, I will demonstrate how you can achieve this through a simple PowerShell script, naturally.

Before you start however, you may have to run PowerShell (run as Administrator), and execute the following cmdlet:

Set-ExecutionPolicy RemoteSigned

This is to allow PowerShell scripts to run on the computer.

How to define a GUID?

To define the GUID in PowerShell, we use the [GUID] type accelerator: [guid]::NewGuid()

Defining the OU, and Groups.

First part of the script consists of the following parameters:

#Define GUID

$Guid = [guid]::NewGuid()

This is to define the GUID, in which a unique ID resembling this “0fda3c39-c5d8-424f-9bac-654e2cf045ed” will be added to the OU, and groups within that OU.

#Define OU Name

$OUName = (Read-Host 'Input OU name') + '-' + $Guid

Defining the name of the OU. This needs an administrator’s input.

#Define Managers Group Name

$MGroupPrefix = (Read-Host -Prompt 'Input Managers Group name')

if ($MGroupPrefix) {write-host 'The specified Managers Name will be used.' -foreground 'yellow'}

else {

$MGroupPrefix = 'Managers'

Write-Host 'Default Managers Group name will be used.' -foreground 'yellow'

}

$MGroupPrefix = $MGroupPrefix + '-' + $Guid


This is to define the name of the Managers group; it will be followed by a “-“ and the GUID.

If the Administrator selects a name, then, the selected name will be used.

#Define Supervisors Group Name

$SGroupPrefix = (Read-Host -Prompt 'Input Supervisors Group name')

if ($SGroupPrefix) {write-host 'The specified Supervisors Group name will be used.' -foreground 'yellow'}

else {

$SGroupPrefix = 'Supervisors'

Write-Host 'Default Supervisors name will be used.' -foreground 'yellow'

}

$SGroupPrefix = $SGroupPrefix + '-' + $Guid


This will define the second group name “Supervisors”, again, it will be followed by “-” and a GUID.

In this section, however, I didn’t specify a name, to demonstrate how the default name “Supervisors” + “-” + GUID comes into place, as defined in the else block. Simply press Enter.

Defining the OU Name, group name, and path

#Defining Main OU Name, OUs, and path

$OUPath = 'OU=Departments,DC=contoso,DC=lab'

$GroupsOUName = 'Groups'

$GroupOUPath = 'OU=' + $OUName + ',OU=Departments,DC=contoso,DC=lab'

$UsersOUName = 'Users'

$UsersOUPath = 'OU=' + $OUName + ',OU=Departments,DC=contoso,DC=lab'

The above parameters define the OU Path in which the main OU will be created, e.g. Finance.

It will also create two other OUs, “Groups” and “Users”, in the specified path.

Defining the Groups (that will be created in “Groups” OU), the group names and path.

#Defining Groups, groups name and path

$MGroupPath = 'OU=Groups, OU=' + $OUName + ',OU=Departments,DC=contoso,DC=lab'

$MGroupSamAccountName = 'Managers-' + $Guid

$MGroupDisplayName = 'Managers'

$SGroupPath = 'OU=Groups, OU=' + $OUName + ',OU=Departments,DC=contoso,DC=lab' 

$SGroupSamAccountName = 'Supervisors-' + $Guid

$SGroupDisplayName = 'Supervisors'

The above parameters define the groups’ path, SamAccountName and Display Name for the groups that will be created in the main OU, e.g. Finance.

Creating the Organizational units, and groups.

As you may have noticed, everything that we have defined above is just parameters. Nothing will be created without calling the cmdlets.

#Creating Tenant OU, Groups & Users OU, and Admins Groups

Import-Module ActiveDirectory

New-ADOrganizationalUnit -Name $OUName -Path $OUPath

New-ADOrganizationalUnit -Name $GroupsOUName -Path $GroupOUPath

New-ADOrganizationalUnit -Name $UsersOUName -Path $UsersOUPath

New-ADGroup -Name $MGroupPrefix -SamAccountName $MGroupSamAccountName -GroupCategory Security -GroupScope Global -DisplayName $MGroupDisplayName -Path $MGroupPath -Description "Members of this group are Managers of $OUName"

New-ADGroup -Name $SGroupPrefix -SamAccountName $SGroupSamAccountName -GroupCategory Security -GroupScope Global -DisplayName $SGroupDisplayName -Path $SGroupPath -Description "Members of this group are Supervisors of $OUName"

Once completed you will have the following OU/Department

And in the Groups OU, you will have two groups: Managers (as defined in the previous step) and Supervisors, the default name, as the Administrator did not specify a name for the group.

Notice how the GUID is shared across the Departments, and the Groups.

Putting it all together:

Note: The SYNOPSIS, DESCRIPTION, EXAMPLE, and NOTES are used for get-help.


<#
.SYNOPSIS
This is a PowerShell script to create Organizational Units (OUs) and groups.

.DESCRIPTION
The script will create:
1) An OU, defined by the administrator. The OU name is defined by the administrator, followed by a unique ID generated through a GUID. It will be created in a specific OU path.
2) Two OUs consisting of 'Groups' and 'Users'.
3) Two groups, 'Managers' and 'Supervisors'. These are the default names, followed by the GUID. An administrator can define group names as desired; the GUID will be added by default.

.NOTES
This PowerShell script should be run as administrator.
#>



#Define GUID
$Guid = [guid]::NewGuid()

#Define OU Name
$OUName  = (Read-Host 'Input OU name') + '-' + $Guid

#Define Managers Group Name
$MGroupPrefix = (Read-Host -Prompt 'Input Managers Group name')
if ($MGroupPrefix) {write-host 'The specified Managers Name will be used.' -foreground 'yellow'}
else {
$MGroupPrefix = 'Managers'
Write-Host 'Default Managers Group name will be used' -foreground 'yellow'
}
$MGroupPrefix = $MGroupPrefix + '-' + $Guid

#Define Supervisors Group Name
$SGroupPrefix = (Read-Host -Prompt 'Input Supervisors Group name')
if ($SGroupPrefix) {write-host 'The specified Supervisors Group name will be used.' -foreground 'yellow'}
else {
$SGroupPrefix = 'Supervisors'
Write-Host 'Default Supervisors Group name will be used' -foreground 'yellow'
}
$SGroupPrefix = $SGroupPrefix + '-' + $Guid


#Defining Main OU Name, OUs, and path
$OUPath  = 'OU=Departments,DC=contoso,DC=lab'
$GroupsOUName = 'Groups'
$GroupOUPath = 'OU=' + $OUName + ',OU=Departments,DC=contoso,DC=lab'
$UsersOUName = 'Users'
$UsersOUPath = 'OU=' + $OUName + ',OU=Departments,DC=contoso,DC=lab'

#Defining Groups, groups name and path
$MGroupPath = 'OU=Groups, OU=' + $OUName + ',OU=Departments,DC=contoso,DC=lab'
$MGroupSamAccountName = 'Managers-' + $Guid
$MGroupDisplayName = 'Managers'
$SGroupPath = 'OU=Groups, OU=' + $OUName + ',OU=Departments,DC=contoso,DC=lab'
$SGroupSamAccountName = 'Supervisors-' + $Guid
$SGroupDisplayName = 'Supervisors'

#Creating Tenant OU, Groups & Users OU, and Admins Groups
Import-Module ActiveDirectory

New-ADOrganizationalUnit -Name $OUName -Path $OUPath

New-ADOrganizationalUnit -Name $GroupsOUName -Path $GroupOUPath

New-ADOrganizationalUnit -Name $UsersOUName -Path $UsersOUPath

New-ADGroup -Name $MGroupPrefix -SamAccountName $MGroupSamAccountName -GroupCategory Security -GroupScope Global -DisplayName $MGroupDisplayName -Path $MGroupPath -Description "Members of this group are Managers of $OUName"

New-ADGroup -Name $SGroupPrefix -SamAccountName $SGroupSamAccountName -GroupCategory Security -GroupScope Global -DisplayName $SGroupDisplayName -Path $SGroupPath -Description "Members of this group are Supervisors of $OUName"

Active Directory – What are Linked Attributes?

A customer request to add some additional attributes to their Azure AD tenant via the Directory Extensions feature in the Azure AD Connect tool led me into further investigation. My last blog here set out the customer request, but what I didn’t detail in that blog was that one of the attributes they also wanted to extend into Azure AD was directReports, an attribute they had used in the past in their custom-built on-premises applications to display the list of staff the user was a manager for. This led me down a rabbit hole where it took a while to reach the end.

With my past experience using Microsoft Identity Manager (formerly Forefront Identity Manager), I knew directReports wasn’t a real attribute stored in Active Directory, but rather a calculated value shown by the Active Directory Users and Computers console. directReports is based on the values of the manager attribute that contain a reference to the user you are querying (phew, that was a mouthful). This is why directReports and other similar types of attributes, such as memberOf, are not selectable for Directory Extensions in the Azure AD Connect tool. I had never bothered to understand it further than that until the customer also asked for a list of these types of attributes so that they could tell their application developers they would need a different technique to determine these values in Azure AD. This is where the investigation started, which I would like to summarise here, as I found it very difficult to find this information in one place.

In short, these attributes in the Active Directory schema are Linked Attributes as detailed in this Microsoft MSDN article here:

Linked attributes are pairs of attributes in which the system calculates the values of one attribute (the back link) based on the values set on the other attribute (the forward link) throughout the forest. A back-link value on any object instance consists of the DNs of all the objects that have the object’s DN set in the corresponding forward link. For example, “Manager” and “Reports” are a pair of linked attributes, where Manager is the forward link and Reports is the back link. Now suppose Bill is Joe’s manager. If you store the DN of Bill’s user object in the “Manager” attribute of Joe’s user object, then the DN of Joe’s user object will show up in the “Reports” attribute of Bill’s user object.

I then found this article here, which further explained these forward and back links with respect to which are writable and which are read-only, the example below referring to the linked attributes member/memberOf:

Not going too deep into the technical details, there’s another thing we need to know when looking at group membership and forward- and backlinks: forward-links are writable and backlinks are read-only. This means that only forward-links changed and the corresponding backlinks are computed automatically. That also means that only forward-links are replicated between DCs whereas backlinks are maintained by the DCs after that.

The take-out from this is that the value in the forward link (the member attribute in this case) can be updated, but you cannot update the back link, memberOf. Back links are always calculated automatically by the system whenever an attribute that is a forward link is modified.

My final quest was to find the list of linked attributes without querying the Active Directory schema which then led me to this article here, which listed the common linked attributes:

  • altRecipient/altRecipientBL
  • dLMemRejectPerms/dLMemRejectPermsBL
  • dLMemSubmitPerms/dLMemSubmitPermsBL
  • msExchArchiveDatabaseLink/msExchArchiveDatabaseLinkBL
  • msExchDelegateListLink/msExchDelegateListBL
  • publicDelegates/publicDelegatesBL
  • member/memberOf
  • manager/directReports
  • owner/ownerBL

There is further, deeper technical information about linked attributes such as “distinguished name tags” (DNT) and what is replicated between DCs versus what is calculated locally on a DC, which you can read in your own leisure in the articles listed throughout this blog. But I hope the summary is enough information on how they work.
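As an aside, if you do want to enumerate linked attributes directly from the schema, forward and back links can be identified by the linkID attribute (forward links have an even linkID; the matching back link is linkID + 1). A sketch:

```powershell
# Requires the ActiveDirectory module (RSAT) and a connection to a DC
Import-Module ActiveDirectory

# List every schema attribute that participates in a linked pair
$schemaNC = (Get-ADRootDSE).schemaNamingContext
Get-ADObject -SearchBase $schemaNC -LDAPFilter '(linkID=*)' `
    -Properties lDAPDisplayName, linkID |
    Sort-Object { [int]$_.linkID } |
    Select-Object lDAPDisplayName, linkID
```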

Azure API Management Step by Step – Use Cases


In this second post about Azure API Management, let’s discuss use cases. Why “Use Cases”?

Use cases help to manage complexity, since each focuses on one specific usage aspect at a time. I am grouping and versioning use cases to facilitate your learning process and to help keep track of future changes. You are welcome to use these diagrams to demonstrate Azure API Management features.

API on-boarding is a key aspect of API governance and the first thing to be discussed. How can I publish my existing and future API back ends to API Management?

API description formats like the Swagger Specification (aka the Open API Initiative) are fundamental to properly implementing automation and DevOps in your APIM initiative. APIs can be imported using Swagger, created manually, or created as part of a custom automation/integration process.

Azure API Management administrators can group APIs by product, allowing a subscription workflow. Product visibility is linked with user groups, providing restricted access to APIs. You can manage your API policies as code through a dedicated Git source control repository available to your APIM instance. Secrets and constants used by policies are managed by a key/value (string) service called properties.


The Azure API Management platform provides a rich developer portal. Developers can create an account/profile, discover APIs and subscribe to products. API documentation, multiple-language source code samples, a console to try APIs, API subscription key management and analytics are the main features provided.


The management and operation of the platform play an important role in daily tasks. For enterprises, user groups and users (developers) can be fully integrated with Active Directory. Analytics dashboards and reports are available. Email notifications and templates are customizable. The APIM REST API and PowerShell commands are available for most platform features, including exporting analytics reports.


The security administration use cases group different configurations. Delegation allows custom development of portal sign-in, sign-up and product subscription. OAuth 2.0 and OpenID provider registrations are used by the developer portal console, when trying APIs, to generate the required tokens. Client certificate upload and management are done here or using automation. The developer portal identities configuration brings out-of-the-box integration with social providers. Git source control settings/management and APIM REST API tokens are available as well.


Administrators can customize the developer portal using built-in content management system functionality. Custom pages and modern JavaScript development are now allowed. The blogs feature allows out-of-the-box blog/post publish/unpublish functionality. Developer-submitted applications can be published/unpublished by administrators, to be displayed on the developer portal.


In summary, Azure API Management is a mature and live platform with a few new features under development, bringing strong integration with the Azure cloud. Click here for the roadmap.

In my next post, I will deep-dive into API on-boarding strategies.

Thanks for reading @jorgearteiro

Posts: 1) Introduction  2) Use Cases

Exploring Cloud Adoption

At Kloud we get incredible opportunities to partner with global organisations.

Listening to and observing one of our established clients has inspired me to write about the change programme around Office 365 and how we can expand the approach.

The change management programme, in terms of adoption, is based on a user’s journey through the Office 365 toolset: a step-by-step approach incorporating Exchange Online and SharePoint Online, building a workspace for each department which effectively becomes the reference point from which you work. In short, targeting adoption champions throughout particular business units and keeping them abreast of the Office 365 updates.

We have seen results through this approach but it’s here I want to drop a different thought into the equation.

It’s a natural tendency to develop a well-trodden path, but what happens when you have a disparate group of individuals who don’t take this route?

Taking people on a well thought out journey could almost take the excitement out of the trip. Why not foster an environment that will allow individuals to explore the Office 365 toolset?

What do I mean? Office 365 is continuously evolving its services. Just look at the roadmap and you will see what I mean.

Think about the last trip you took. Did you perhaps decide to stop the car and explore the countryside? You could well have stepped off the beaten track and explored the unknown finding that place that no one told you about.

Rigorous change control programmes that try to control how people within the organisation adopt the services could well stifle the opportunity that the Cloud is creating.

Here are some key thoughts when considering an explorative approach to adoption.

Pop-up Adoption

The pop-up store is a remarkable concept in today’s society. It is a fantastic way to introduce a new thought, concept or idea along the well-trodden route, with the ability to create a short-term experience that could prompt exploring. There is a fantastic opportunity to create these pop-up adoption initiatives throughout the organisation to inspire alternative ways of being productive.

Your posture on Change Management

In cloud computing, we find that cloud suppliers release a wave of new features designed to change and enhance the productivity of the consumer. The consumer loves all the additional functionality that comes with the solution, however much of the time does not understand the full extent of the changes. Herein lies the challenge: how does the conventional IT department communicate the change, train up end users and ultimately manage change? We get ready to run all our time-intensive checks so that people know exactly what is about to change, and when and how it will change. Challenging, to say the least; now ramp this up, or to be specific, multiply it by a factor of a hundred.

It’s time-consuming and nearly impossible to manage in the current cloud climate.

Here is a thought that will take you to the extreme opposite.

Enable everything. Yes, that’s right, let the full power of Cloud permeate through your organisation. The more time you spend trying to control what gets released as opposed to what doesn’t, the more time you waste which can be better used to create more value. Enable it and let your early adopters benefit, explore, they could even coach the late bloomers, then let the remaining people go on a journey.


Use what you’ve got to expose people to what they could adopt

Yammer is a great tool in the Office 365 toolset that will expose people to different opportunities for being productive. Exposing different toolsets to people encourages them to explore what they don’t know.

The mobile phone will help

People have started to explore technology. The mobile age has stimulated this approach. The slogan “There is an app for that” could well be the tag line that has encouraged end users to explore the tools that are available to them.

The Adoption Map

Whenever one is navigating areas unknown we tend to develop visuals that will help us make sense of the space. The Adoption Map could be a great tool to visually show your people what is actually out there. Instead of being a mere passenger on the journey, they could help plot out their own coordinates. When people start learning for themselves it’s potentially the most powerful recipe for success.

This approach could alter the role of change managers to become change enablers. Instead of controlling change you are learning and innovating from it.

Adoption simply means to choose. If we present people with as many services as possible from which to choose, we can potentially foster an environment that creates exploration: the ability to learn for themselves and ultimately to explore within the confines of a boundary that is ever expanding.

[Updated] Yammer group and user export via Yammer API to JSON, then converted to CSV

Originally posted on Lucian’s blog. Follow Lucian on Twitter @LucianFrango.

Update: awesome pro-tip

Big shout out to Scott Hoag (@ciphertxt on Twitter) for this pro-tip, which will save you having to read this blog post. Yes, you don’t know what you don’t know.

As part of a Yammer network merge (which I am writing a blog post about, still a work in progress), you would lose data, posts, files etc., as they can’t come across. You can, however, do an export of all that data to a (depending on how much there is to export) usually large .zip file. This is where Scott showed me the light. In that export, there are also two .csv files: the first contains all the user info, and the second all the group info. Knowing this, run that export process and you probably don’t need to read the rest of this blog post. #FacePalm.

HOWEVER, and that is a big however for a reason: the network export process does not export the members of groups in that groups.csv file. So if you want to export Yammer groups and all their members, the blog post below is one way of doing that process, just a longer way…

Yammer network merges are not pretty. I’m not taking a stab at you (Yammer developers and Microsoft Office 365 developers), but, I’m taking a stab.

There should be an option to allow at least group and group member data to be brought across when there is a network merge. Fair enough not bringing any data across as that can certainly be a headache with the vast amount of posts, photos, files and various content that consumes a Yammer network.

However, it would be considerably much less painful for customers if at least the groups and all their members could be merged. It would also make my life a little easier not having to do it.

Let me set the stage here and paint you a word picture. I’m no developer, putting that out there from the start. I am good at problem solving, though, and I’m a black belt at finding information online (surfing the interwebs). So, after some deliberation, I found the following, which might help with gathering group and user data to be used for Yammer network merges.
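The Yammer REST API exposes groups and group membership; a sketch of pulling them down with PowerShell (the access token and output path are placeholders, and the paging logic is simplified):

```powershell
# A developer/bearer token for your Yammer network (placeholder)
$token   = 'YOUR-YAMMER-ACCESS-TOKEN'
$headers = @{ Authorization = "Bearer $token" }

# Page through all groups in the network
$page   = 1
$groups = @()
do {
    $batch = Invoke-RestMethod -Uri "https://www.yammer.com/api/v1/groups.json?page=$page" -Headers $headers
    $groups += $batch
    $page++
} while ($batch.Count -gt 0)

# For each group, list its members and flatten to CSV
$groups | ForEach-Object {
    $groupId = $_.id
    (Invoke-RestMethod -Uri "https://www.yammer.com/api/v1/users/in_group/$groupId.json" -Headers $headers).users |
        Select-Object @{n='group';e={$groupId}}, id, full_name, email
} | Export-Csv 'C:\temp\yammer-group-members.csv' -NoTypeInformation
```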

Read More

Skype for Business Online: Cloud PBX

Skype for Business Online has been around for a number of years (previously known as Lync Online), offering Australian customers a subset of the on-premises Lync/Skype for Business capabilities. As Australian Office 365 services were originally hosted in Microsoft’s Hong Kong and Singapore datacentres, high latency meant that it wasn’t viable to offer the telephony and multi-party conferencing components that are available in the on-premises product.

Fast forward to the present day and Microsoft have established local datacentres in Sydney and Melbourne to host Office 365, as well as offering ExpressRoute services. With the latency issues significantly reduced, Microsoft now have the ability to offer more of the Skype for Business modalities to Office 365 subscribers.

As a result of Microsoft’s ‘cloud first’ strategy to development, there are many features and integrations in Office 365 that are not available in the equivalent on-premises products and I suspect some of these will never make it out of the cloud. Although there are some features and capabilities of Skype for Business that are not available in Skype for Business Online, the reverse is also true and we’re starting to see some features appear in Skype for Business Online that are unique to the platform, such as Skype Meeting Broadcast, Cloud PBX Hybrid Voice and Cloud PBX with PSTN Calling.

Skype Meeting Broadcast enables businesses to broadcast meetings on the internet to up to 10,000 people, who can attend in a browser on nearly any device, nearly anywhere in the world. It makes it easy to host large virtual meetings, similar to internal ‘Town Hall’ style meetings, reaching colleagues anywhere in the world with a single scheduled Skype meeting.

Cloud PBX with Hybrid Voice connectivity offers existing Skype for Business or Lync Server 2013 Enterprise Voice customers the best of both worlds. By leveraging the active on-premises PSTN solution, Enterprise Voice users can co-exist between Office 365 or on-premises homed registrars. In the absence of a Lync 2013 Server or Skype for Business Server, it is possible to use Microsoft’s Skype for Business Cloud Connector Edition – providing the minimum requirements for a Hybrid setup, paired with a pre-existing SIP Voice Gateway.

Cloud PBX with PSTN Calling lets users make and receive traditional phone calls directly in the Skype for Business client, with Microsoft providing the phone numbers, phone lines and telephony capabilities directly from Office 365. Microsoft have announced that Cloud PBX is in preview for US customers and will soon be available in Australia, with a configuration option for customers to port their existing phone number ranges.

The Cloud PBX will have the following features:

  • Using call answer/initiate (by name and number), you can answer inbound calls with a touch, and place outbound calls either by dialing the full phone number or clicking a name in Skype for Business or Outlook.
  • Call delegation and call on-behalf enables you to make or answer calls on behalf of a manager you support. Notifications make it clear to all participants when calls are being answered or made for someone else.
  • Call forwarding and simultaneous ring allow you to set up forwarding rules so your calls can go with you anywhere, and you can forward calls to colleagues or to voicemail.
  • Call history allows you to keep track of all your conversations in one place, whether those conversations are from IMs, phone calls, or impromptu and scheduled meetings. Conversations are recorded in your call history and stored in Exchange.
  • Use call hold/retrieve when multiple calls occur at the same time: when you answer the next inbound call or place an outbound call, your current call goes on hold automatically.
  • Call transfer (blind, consult, and mobile) transfers calls between your PC, IP phone, mobile device, or tablet.
  • With caller ID, calls from inside your company display a detailed caller ID that pulls information from your corporate directory, showing you a picture and job title instead of just a phone number.
  • Call waiting notifies you when a call comes in while you’re on a call or in a meeting, giving you a quiet notification in addition to a regular notification, so you can accept the call or route it to voicemail.
  • Camp-on allows you to tag people who are currently unavailable and get notified when their presence changes and they’re ready to take phone calls.
  • Clients for PC, Mac, and Mobile gives you calling features on devices from tablets and mobile phones to PCs and desktop IP phones.
  • Device switching enables you to play your call or meeting on another device.
  • Use distinctive ringing to play different ringtones for the different types of calls you get every day, so you quickly know who is calling you.
  • Do-not-disturb routing controls your inbound communications with presence, enabling you to block all incoming communication except from those you specifically indicate.
  • Enterprise calendar call routing allows you to use your Exchange calendar business hours to enable or disable call forwarding and simultaneous ringing in Skype for Business.
  • Integrated dial-pad allows you to dial by name or by number anywhere in the search bar and in the dial pad, speeding up the process of making outbound calls.
  • Music on hold plays music when you place a call on hold, so your callers know you’re still there and the call hasn’t accidentally dropped.
  • Qualified IP desk phones allows you to choose from a large variety of desk phones that are compatible with Skype for Business.
  • Skype and federated calling enables you to search for anyone in the Skype directory, then securely connect, communicate, and collaborate with them.
  • Team calling sends your calls to your team either immediately or after a set time period.
  • Using video call monitor, you can see the speaker’s name and video stream in a small floating window, so you’re always ready to respond to any questions.
  • Voicemail includes personalised greeting, message waiting indicator, and reply with call. You can have all of your voicemail deposited in your Exchange mailbox and made available through Skype for Business on your PC, mobile device, and IP phone.

Create a Replica Domain Controller using Desired State Configuration

Originally posted on Nivlesh’s blog @

Welcome back. In this blog we will continue with our new Active Directory Domain and use Desired State Configuration (DSC) to add a replica domain controller to it, for redundancy.

If you have not read the first part of this blog series, I would recommend doing that before continuing (even if you need a refresher). The first blog can be found at Create a new Active Directory Forest using Desired State Configuration

Whenever you create an Active Directory Domain, you should have, at a minimum, two domain controllers. This ensures that domain services are available even if one domain controller goes down.

To create a replica domain controller we will be using the xActiveDirectory and xPendingReboot experimental DSC modules. Download these from

After downloading the zip files, extract the contents and place them in the PowerShell modules directory located at $env:ProgramFiles\WindowsPowerShell\Modules (unless you have changed your Program Files location, this will be C:\Program Files\WindowsPowerShell\Modules ).

With the above done, open Windows PowerShell ISE and let's get started.

Paste the following into your PowerShell editor and save it using a filename of your choice (I have saved mine as CreateADReplicaDC.ps1 )
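The original post embeds the script as an image, so here is a minimal sketch of the opening of CreateADReplicaDC.ps1. The parameter names follow those described in this post; the default values for the retry settings are assumptions:

```powershell
configuration CreateADReplicaDC
{
    param
    (
        # Name of the existing Active Directory domain to join
        [Parameter(Mandatory)]
        [String]$DomainName,

        # Credentials with permission to promote a domain controller
        [Parameter(Mandatory)]
        [System.Management.Automation.PSCredential]$Admincreds,

        # Directory Services Restore Mode (safe mode) credentials
        [Parameter(Mandatory)]
        [System.Management.Automation.PSCredential]$SafemodeAdminCreds,

        # How many times, and how often, to check for domain availability
        [Int]$RetryCount = 20,
        [Int]$RetryIntervalSec = 30
    )

    # The DSC resources for the replica domain controller are declared here
}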

The above declares the parameters that need to be passed to the CreateADReplicaDC DSC configuration function ($DomainName, $Admincreds, $SafemodeAdminCreds). Some variables have also been declared ($RetryCount and $RetryIntervalSec); these will be used later within the configuration script.

Next, we have to import the experimental DSC modules that we have downloaded (unless the DSC modules are present out-of-the-box, they have to be imported so that they are available for use)
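As in the first post, the import is a single line placed at the top of the configuration body:

```powershell
Import-DscResource -Module xActiveDirectory, xPendingReboot
```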

The $Admincreds parameter needs to be converted into a domain\username format. This is accomplished by using the following (the result is held in a variable called $DomainCreds)
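A sketch of the conversion, using the common pattern from DSC-based domain join scripts (the variable name $DomainCreds matches this post):

```powershell
# Re-create the credential with the username in domain\username format
[System.Management.Automation.PSCredential]$DomainCreds = `
    New-Object System.Management.Automation.PSCredential( `
        "${DomainName}\$($Admincreds.UserName)", $Admincreds.Password)
```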

Next, we need to tell DSC that the command has to be run on the local computer. This is done by the following line (localhost refers to the local computer)

Node localhost

As mentioned in the first blog, the DSC instructions are passed to the LocalConfigurationManager, which carries out the commands. We need to tell the LocalConfigurationManager that it should reboot the server if required, continue with the configuration after a reboot, and apply the settings only once. (DSC can apply a setting and then continuously monitor it to check that it has not changed, re-applying it if it has. In our case we will not do this; we will apply the settings just once.)
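Inside the Node localhost block, those three behaviours map to the following LocalConfigurationManager settings (a sketch; these are the standard meta-configuration property names):

```powershell
LocalConfigurationManager
{
    RebootNodeIfNeeded = $true                   # reboot the server if required
    ActionAfterReboot  = 'ContinueConfiguration' # resume the configuration after a reboot
    ConfigurationMode  = 'ApplyOnly'             # apply the settings once, do not re-check
}
```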

Next, we will install the Remote Server Administration Tools
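A sketch of the resource block, assuming the resource name RSATTools (the Name value is the Windows Server feature name for the Remote Server Administration Tools):

```powershell
WindowsFeature RSATTools
{
    Ensure = 'Present'
    Name   = 'RSAT'
}
```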

Now, let's install the Active Directory Domain Services role.
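A sketch of the resource block, assuming the resource name ADDSInstall (AD-Domain-Services is the Windows Server feature name for the role):

```powershell
WindowsFeature ADDSInstall
{
    Ensure = 'Present'
    Name   = 'AD-Domain-Services'
}
```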

Before continuing on, we need to check if the Active Directory Domain that we are going to add the domain controller to is available. This is achieved by using the xWaitForADDomain function in the xActiveDirectory module.

Below is what we will use.
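A sketch of the xWaitForADDomain resource, using the resource name DscForestWait referred to later in this post. The DependsOn value assumes the AD DS role resource was named ADDSInstall:

```powershell
xWaitForADDomain DscForestWait
{
    DomainName           = $DomainName       # domain to wait for
    DomainUserCredential = $DomainCreds      # credentials used to query the domain
    RetryCount           = $RetryCount       # number of attempts before failing
    RetryIntervalSec     = $RetryIntervalSec # seconds between attempts
    DependsOn            = '[WindowsFeature]ADDSInstall'
}
```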

$DomainName is the name of the domain the domain controller will be joined to
$DomainCreds is the Administrator username and password for the domain (the username is in the format domain\username since we had converted this previously)

The DependsOn in the above code means that this section depends on the Active Directory Domain Services role to have been successfully installed.

So far, our script has successfully installed the Active Directory Domain Services role and has checked to ensure that the domain we will join the domain controller to is available.

So, let's proceed. We will now promote the virtual machine to a replica domain controller of our new Active Directory domain.
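A sketch of the promotion step using the xADDomainController resource. The resource name SecondDC and the paths under C:\ are assumptions; the DependsOn value matches the DscForestWait resource described above:

```powershell
xADDomainController SecondDC
{
    DomainName                    = $DomainName
    DomainAdministratorCredential = $DomainCreds
    SafemodeAdministratorPassword = $SafemodeAdminCreds
    DatabasePath                  = 'C:\NTDS'   # AD database location
    LogPath                       = 'C:\NTDS'   # AD log files location
    SysvolPath                    = 'C:\SYSVOL' # SYSVOL share location
    DependsOn                     = '[xWaitForADDomain]DscForestWait'
}
```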

The above will only run after DscForestWait successfully signals that the Active Directory domain is available (denoted by DependsOn). The SafeModeAdministratorPassword is set to the password that was provided in the parameters. The DatabasePath, LogPath and SysvolPath are all set to a location on C:\. This might not be suitable for everyone; if you are uncomfortable with this, I would suggest adding another data disk to your virtual machine and pointing the Database, Log and Sysvol paths to this new volume.

The only thing left now is to restart the server, so let's go ahead and do that.
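A sketch of the reboot step using the xPendingReboot resource (the resource and Name values are assumptions; the DependsOn value assumes the promotion resource was named SecondDC):

```powershell
xPendingReboot RebootAfterPromotion
{
    Name      = 'RebootAfterDCPromotion'
    DependsOn = '[xADDomainController]SecondDC' # only reboot once promotion has succeeded
}
```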

Keep in mind that the reboot will only happen if the domain controller was successfully added as a replica domain controller (this is due to the DependsOn).

That’s it. We now have a DSC configuration script that will add a replica domain controller to our Active Directory domain.

The virtual machine that will be promoted to a replica domain controller needs to have the first domain controller’s IP address added to its DNS settings, and it also needs network connectivity to the first domain controller. Both are required for the DSC configuration script to succeed.

The full DSC Configuration script is pasted below

If you would like to bootstrap the above into your Azure Resource Manager template, create a DSC extension and link it to the above DSC configuration script. (The script, together with the experimental DSC modules, has to be packaged into a zip file and uploaded to a public location which ARM can access. I used GitHub for this.)

One thing to note when bootstrapping is that after you create your first domain controller, you will need to inject its IP address into the DNS settings of the virtual machine that will become your replica domain controller. This is required so that the virtual machine can locate the Active Directory domain.

I accomplished this by using the following lines of code in my ARM template, right after the Active Directory Domain had been created.


  • DC01 is the name of the first domain controller that was created in the new Active Directory domain
  • DC01NicIPAddress is the IP address of DC01
  • DC02 is the name of the virtual machine that will be added to the domain as a replica domain controller
  • DC02NicName is the name of the network card on DC02
  • DC02NicIPAddress is the IP address of DC02
  • DC02SubnetRef is a reference to the subnet DC02 is in
  • nicTemplateUri is the URL of the ARM deployment template that will update the network interface settings for DC02 (it will inject the DNS settings). This file has to be uploaded to a location that ARM can access (for instance GitHub)

Below are the contents of the deployment template file that nicTemplateUri refers to

After the DNS settings have been updated, the following can be used to create a DSC extension to promote the virtual machine to a replica domain controller (it uses the DSC configuration script we had created earlier)

The variables used in the json file above are listed below (I have pointed repoLocation to my GitHub repository)

This concludes the DSC series for creating a new Active Directory domain and adding a replica domain controller to it.

Hope you all enjoyed this blog post.

Enterprise Cloud Take Up Accelerating Rapidly According to New Study By McKinsey

A pair of studies published a few days ago by global management consulting firm McKinsey & Company, entitled IT as a service: From build to consume, shows enterprise adoption of Infrastructure as a Service (IaaS) accelerating rapidly over the next two years, into 2018.

Of the two, one examined the ongoing migrations of 50 global businesses; the other interviewed a large number of CIOs, from small businesses up to Fortune 100 companies, on the progress of their transitions. The results speak for themselves.

1. Compute and storage is shifting massively to cloud service providers.


“The data reveals that a notable shift is under way for enterprise IT vendors, with on-premise shipped server instances and storage capacity facing compound annual growth rates of –5 percent and –3 percent, respectively, from 2015 to 2018.”

With on-premise storage and server sales growth going into negative territory, it’s clear the next couple of years will see the hyperscalers of this world consume an ever increasing share of global infrastructure hardware shipments.

2. Companies of all sizes are shifting to off-premise cloud services.


“A deeper look into cloud adoption by size of enterprise shows a significant shift coming in large enterprises (Exhibit 2). More large enterprises are likely to move workloads away from traditional and virtualized environments toward the cloud—at a rate and pace that is expected to be far quicker than in the past.”

The report also anticipates that the number of large enterprises hosting at least one workload on an IaaS platform will increase by 41% in the three-year period to 2018, while the figures for small and medium-sized businesses will grow by a somewhat less aggressive 12% and 10% respectively.

3. A fundamental shift is underway from a build to consume model for IT workloads.


“The survey showed an overall shift from build to consume, with off-premise environments expected to see considerable growth (Exhibit 1). In particular, enterprises plan to reduce the number of workloads housed in on-premise traditional and virtualized environments, while dedicated private cloud, virtual private cloud, and public infrastructure as a service (IaaS) are expected to see substantially higher rates of adoption.”

Another takeaway is that the share of traditional and virtualized on-premise workloads will shrink significantly, from 77% and 67% in 2015 to 43% and 57% respectively in 2018, while virtual private cloud and IaaS will grow from 34% and 25% in 2015 to 54% and 37% respectively in 2018.

Cloud adoption will have far-reaching effects

The report concludes “McKinsey’s global ITaaS Cloud and Enterprise Cloud Infrastructure surveys found that the shift to the cloud is accelerating, with large enterprises becoming a major driver of growth for cloud environments. This represents a departure from today, and we expect it to translate into greater headwinds for the industry value chain focused on on-premise environments; cloud-service providers, led by hyperscale players and the vendors supplying them, are likely to see significant growth.”

About McKinsey & Company

McKinsey & Company is a worldwide management consulting firm. It conducts qualitative and quantitative analysis in order to evaluate management decisions across the public and private sectors. Widely considered the most prestigious management consultancy, McKinsey’s clientele includes 80% of the world’s largest corporations, and an extensive list of governments and non-profit organizations.

Web site: McKinsey & Company
The full report: IT as a service: From build to consume

Azure AD Connect – Multi-valued Directory Extensions

I happened to be at a customer site working on an Azure project when I was asked to cast a quick eye over an issue they had been battling with. They had an Azure AD Connect server synchronising user and group objects between their corporate Active Directory and their Azure AD, used for Office 365 services and other Azure-based applications. Their intention was to synchronise some additional attributes from their Active Directory to Azure AD so that they could be used by some of their custom built Azure applications. These additional attributes were a combination of standard Active Directory attributes as well as some custom schema extended attributes.

They were following the guidance from the Microsoft article listed here. As mentioned in the article, ‘Directory extensions allows you to extend the schema in Azure AD with your own attributes from on-premises Active Directory‘. When running the Azure AD Connect installation wizard and trying to find the attributes in the dropdown list, some of their desired attributes were not listed as shown below. An example attribute they wanted to synchronise was postalAddress which was not in the list.

After browsing the dropdown list trying to determine why some of their desired attributes were missing, I noticed that multi-valued attributes were absent, such as the standard Active Directory description attribute, which I knew to be multi-valued.

I checked the schema in Active Directory and it was clear the postalAddress attribute was multi-valued.
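A quick way to confirm this, assuming the ActiveDirectory RSAT PowerShell module is available, is to query the attribute's schema object; isSingleValued = False means the attribute is multi-valued:

```powershell
# Look up the postalAddress attributeSchema object in the AD schema partition
Get-ADObject -SearchBase (Get-ADRootDSE).schemaNamingContext `
    -Filter "lDAPDisplayName -eq 'postalAddress'" `
    -Properties isSingleValued |
    Select-Object Name, isSingleValued
```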

The customer pointed me back to the Microsoft article which clearly stated that the valid attribute candidates included multi-valued strings and binary attributes. With the time I had remaining, I reviewed their Azure AD Connect implementation and tried a few techniques in the synchronisation service such as:

  • refreshing the schema of the on-premise Active Directory management agent
  • enabling the attribute in the properties of the on-premise Active Directory management agent, as by default it was not checked

I next checked the Azure AD Connect release notes (here) and quickly noticed the cause of the issue: the version of Connect they were using was a few releases old. Support for multi-valued attributes in Directory Extensions was added in a version released in April 2016, while the version the customer was running was from only a couple of months earlier.

After upgrading Azure AD Connect to the currently released version, the customer was able to successfully select these multi-valued attributes.

Microsoft are very good at keeping the release notes up to date as new versions of Azure AD Connect are released, currently every 1-2 months. The lesson learned is to check the release notes for the new features and bug fixes in each release, to determine whether you need to upgrade the tool.