[UPDATED] Azure AD Connect: SyncRuleEditor.exe and why is targetAddress missing

Originally posted on Lucian’s blog here @ clouduccino.com. Follow Lucian on Twitter @LucianFrango. Send Lucian an email.

Today it's back to AAD Connect. I want to talk about Office 365 migrations and how they can be tricky, with various options and scenarios around hybrid or non-hybrid deployments. On a recent project we were migrating a client from IBM Lotus Notes to Exchange Online in Office 365. The proposed solution was designed to avoid an on-premises Exchange Server hybrid and instead use Dell Software Migrator for a direct migration from on-premises to the cloud.

The client had never had Exchange Server on-premises before and was running a well-managed ADDS deployment spanning three sites across three continents. To meet the schema requirements for Exchange Online, Exchange Server 2013 was downloaded and the ADDS schema was extended with the Exchange Server 2013 schema. All simple, standard stuff, right?
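For reference, the schema extension itself is typically run from the Exchange Server 2013 installation media; a minimal sketch (the account and media requirements below follow standard Exchange guidance and are not specific to this project):

# Run from the root of the Exchange Server 2013 media, with an account that is a
# member of the Schema Admins and Enterprise Admins groups
Setup.exe /PrepareSchema /IAcceptExchangeServerLicenseTerms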

Read More

Office 365 SSO: Configuring multiple Office 365 tenants to use a single AD FS instance

Q: Can multiple Office 365 tenants use a single AD FS instance to provide SSO?

A: Yes


  • Office 365 tenant 1 is configured with the domain contoso.com
  • Office 365 tenant 2 is configured with the domain sub.contoso.com
  • Single Active Directory Forest with multiple UPNs configured (contoso.com and sub.contoso.com)
  • Single AD FS instance including an AD FS Proxy/Web Application Proxy published with the name sts.contoso.com
  • Two instances of Azure AD Connect configured with container filtering to ensure users are only synchronised to a single tenant

Configuring SSO

The Federation Trust for Tenant 1 is configured by establishing a Remote PowerShell session (with the Azure Active Directory Module loaded) and running the standard ‘Convert-MsolDomainToFederated’ cmdlet:

Convert-MsolDomainToFederated -DomainName contoso.com -SupportMultipleDomain

When it comes to configuring Tenant 2, things become a little more tricky. One of the features of the ‘Convert-MsolDomainToFederated’ cmdlet is that it performs the required configuration on Office 365 as well as the AD FS Farm. If you attempt to run this cmdlet against an AD FS Farm that has a Federation Trust established with a different tenant, it will fail and return an error. Therefore, we need to make use of the ‘Set-MsolDomainAuthentication’ cmdlet which only makes configuration changes to Office 365 and is usually used for establishing Federation Trusts with third party IdPs.

The first step is to export the token-signing certificate from the AD FS farm either via Windows Certificate Manager or via PowerShell:

$certRefs = Get-AdfsCertificate -CertificateType Token-Signing
$certBytes = $certRefs[0].Certificate.Export([System.Security.Cryptography.X509Certificates.X509ContentType]::Cert)
[System.IO.File]::WriteAllBytes("c:\temp\tokensigning.cer", $certBytes)

Next, establish a Remote PowerShell session with Tenant 2 and then run the following script to configure the trust:

$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("C:\temp\tokensigning.cer")
$certData = [system.convert]::tobase64string($cert.rawdata)
#command to enable SSO
Set-MsolDomainAuthentication -DomainName $dom -Authentication Federated -ActiveLogOnUri $ura -PassiveLogOnUri $url -MetadataExchangeUri $metadata -SigningCertificate $certData -IssuerUri $uri -LogOffUri $logouturl -PreferredAuthenticationProtocol WsFed
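The script above assumes the federation variables have already been populated. As a hedged example for this scenario (an AD FS farm published as sts.contoso.com federating sub.contoso.com), the values might look like the following; adjust them for your own farm, and note that the IssuerUri must be unique for each federated domain:

$dom = "sub.contoso.com"
$url = "https://sts.contoso.com/adfs/ls/"                                # passive logon
$ura = "https://sts.contoso.com/adfs/services/trust/2005/usernamemixed"  # active logon
$metadata = "https://sts.contoso.com/adfs/services/trust/mex"            # metadata exchange
$logouturl = "https://sts.contoso.com/adfs/ls/"                          # log off
$uri = "http://sub.contoso.com/adfs/services/trust/"                     # example unique IssuerUri for this domain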

Once complete, the configuration of both tenants can be validated using the 'Get-MsolDomainFederationSettings' cmdlet. The only differences when comparing the tenant configurations should be the 'FederationBrandName' and 'IssuerUri' values.
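For example, run the check against each domain from its respective tenant session:

Get-MsolDomainFederationSettings -DomainName contoso.com | Format-List
Get-MsolDomainFederationSettings -DomainName sub.contoso.com | Format-List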

Skype for Business Online to On-Premises Migration

Okay guys – you've been told "let's move everyone back from the cloud! We need Enterprise Voice for our users." This goes against most Microsoft sales material, as we should be looking towards the cloud.

If you are part of an organisation that was born in Skype for Business Online (SFBO) as part of your Office 365 subscription, it would make sense that you have never had on-premises Lync or SFB servers in your Active Directory domain. Very little configuration is needed in SFBO, and a busy administrator would have loved enabling the SFBO license SKU for each user and then wiping their hands of it. It's just that easy: enable and forget.

The main limitation of SFBO is the need for an IP-PBX and/or PSTN connectivity. The time may have come for your organisation to leverage your Microsoft agreement even further, look at your existing technology/application catalogue, and see that Skype for Business can fill the requirements of your ageing PBX. This trigger point is usually when the PBX asset has reached capacity and there is a cost trade-off:

  • Throw more money at the dusty old PBX box for extra expansion cards and possibly cabling to terminals
  • Spend the money to start anew in the world of VoIP, but look to an existing technology that can provide this functionality (and hopefully more) before looking elsewhere

Telephones have been around for a long time; it's nothing new. Picking up, making and transferring calls is all pretty standard stuff we have been doing for a few decades now. If I'm going to invest money in something, I should be asking for more! How do I make sure that my choice keeps my organisation relevant in the way it communicates for the next chapter?

Hello Skype for Business as PBX replacement.

That's enough of a ramble. You have come here to understand the process of moving users back from the cloud, because there is less documented about this procedure than about moving to the cloud.

Current Environment

  • On-premises Domain Services
  • DirSync/AADSync Server
  • Office 365 Tenant
    • Skype for Business Online SKU enabled
  • DNS for my domain.com.au
    • Lyncdiscover.domain.com CNAME webdir.online.lync.com
    • SIP.domain.com CNAME sipdir.online.lync.com
    • _sipfederationtls._tcp.domain.com SRV sipfed.online.lync.com
    • _sip._tls.domain.com SRV sipdir.online.lync.com


In this scenario we will look at the steps needed to specifically enable Hybrid and move users back to on-premises.

  1. Add on-premises infrastructure
  2. Connect Hybrid SFB with Office 365 and on-premises
  3. Move Enterprise Voice users back

Add On-premises Infrastructure

  • Add your Front End, Edge and Reverse Proxy Infrastructure
    1. Build the servers as per TechNet, but leave the SIP address DNS zones untouched so as not to affect internal and external clients just yet
    2. All discover records should still point to Microsoft (sipdir, webdir etc)
  • Configure your Edge and Reverse Proxy with public certificates
    1. Test the port authentication as best you can;
      1. Telnet to Edge Ports
      2. Test Reverse Proxy URLs
      3. Remote Connectivity Analyzer for Edge
  • Configure Edge for Federation
    1. Assign your Edge as the Federation Route in your Topology Builder
    2. Configure Edge Specific Configuration
Set-CSAccessEdgeConfiguration -AllowOutsideUsers 1 -AllowFederatedUsers 1 -UseDnsSrvRouting -EnablePartnerDiscovery $true
  • Recreate Allowed Federated Domains in On-premises
    1. If you run Get-CsAllowedDomain in Office 365 you may not get all the information specific to your tenant back.
      1. If you have federation with only allow/block lists, you may need to recreate these, as there is no nice way of piping the cmdlets from 'get' to 'new' (see the sketch after this list).
      2. Allowing open federation to accommodate all traffic is the simplest approach for the migration
  • Set the Global Remote Access and Federation User Policy to Allow
Get-CsExternalAccessPolicy -Identity Global | Set-CsExternalAccessPolicy -EnableFederationAccess $True -EnableOutsideAccess $True
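A hedged sketch of recreating an allow list on-premises: export the entries from the SFB Online session, then loop through them in the on-premises SFB Management Shell (the property and parameter names below are the standard ones, but verify them against your own Get-CsAllowedDomain output):

# In the Skype for Business Online session
Get-CsAllowedDomain | Select-Object Identity, ProxyFqdn, Comment |
    Export-Csv C:\temp\AllowedDomains.csv -NoTypeInformation

# In the on-premises SFB Management Shell
Import-Csv C:\temp\AllowedDomains.csv | ForEach-Object {
    $params = @{ Identity = $_.Identity }
    if ($_.ProxyFqdn) { $params.ProxyFqdn = $_.ProxyFqdn }
    if ($_.Comment)   { $params.Comment   = $_.Comment }
    New-CsAllowedDomain @params
}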


Connect Hybrid Skype for Business

  • Remove existing Lync/Skype for Business Hosting Rule
Get-CSHostingProvider -Identity <SFB/Lync Online> | Remove-CSHostingProvider
  • Recreate the Provider with Hybrid Specific Configuration
New-CSHostingProvider -Identity SFBOnline -ProxyFqdn "sipfed.online.lync.com" -Enabled $true -EnabledSharedAddressSpace $true -HostsOCSUsers $true -VerificationLevel UseSourceVerification -IsLocal $false -AutodiscoverUrl https://webdir.online.lync.com/Autodiscover/AutodiscoverService.svc/root
  • Update External/Public DNS Records
    1. Remember that only updating external DNS records means your internal users can function 'as-is' until you're happy with the progress
      1. Edge Names (SIP Access/Web Conference/ AV  FQDNs)
      2. External Web Services FQDN
      3. Dialin FQDN
      4. Meeting FQDN
      5. LyncDiscover FQDN
      6. SRV _sipfederationtls._tcp.domain.com
      7. SRV _sip._tls.domain.com
    2. Remote users that aren’t previously authenticated could have an issue logging in at the time of the change

Test Process

Join On-premises Pilot User with Online Account

This makes your on-premises deployment aware of active directory accounts that are currently cloud enabled.

  • Run the following cmdlet in SFB Management Shell connected to on-premises servers to test a user
Enable-CsUser -Identity <accountname> -SipAddress "sip:<sipaddress>" -HostingProviderProxyFqdn "sipfed.online.lync.com" -Verbose
  • Synchronise AADSync/DirSync
    1. Login to Directory Sync Server
    2. Run a Delta Import and Delta Sync on the Active Directory Connector (this can also be kicked off from PowerShell, as sketched after this list)
    3. You will see an update count that includes your object
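A hedged aside: the same delta run can be kicked off from the command line instead of the sync client UI; which option applies depends on the sync tool you are running, and the AADSync folder path shown is the default install location:

# DirSync
Start-OnlineCoexistenceSync

# Azure AD Sync (AADSync) – run from the installation folder, e.g.
& "C:\Program Files\Microsoft Azure AD Sync\Bin\DirectorySyncClientCmd.exe" delta

# Later Azure AD Connect builds
Start-ADSyncSyncCycle -PolicyType Delta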

Move a Pilot User back

This step will actually move the user from SFB Online back to your on-premises pool with their contact list intact. This is initiated from the on-premises server and will need authentication against the Office 365 tenant to perform the task.

  • Run the following cmdlet in Powershell connected to both on-premises and online sessions
Import-Module LyncOnlineConnector
$credential = Get-Credential
$session = New-CsOnlineSession -Credential $credential
Import-PSSession $session -AllowClobber
  • Get the Online Admin URL for your tenant
    1. Log into Office 365 Portal
    2. Check the URL presented in the address bar; it will contain admin0x, where x is a letter specific to your tenant
  • Move the User back
Move-CsUser -Identity <UPN> -Target <FE Pool Name> -Credential $credential -HostedMigrationOverrideURL https://admin0f.online.lync.com/HostedMigration/hostedmigrationservice.svc

Enable All Users

If the above pilot tests worked, we need to scale up our migration batches. We need to mass-produce the following cmdlet in the SFB Management Shell, connecting on-premises user accounts to the corresponding online accounts:

Enable-CsUser -Identity <accountname> -SipAddress "sip:<sipaddress>" -HostingProviderProxyFqdn "sipfed.online.lync.com" -Verbose

To do this practically, I used the UPN value, which I knew would resolve to the correct user values on-premises and in Office 365 because they are synced from the same source. I could also rely on the logic that the user's UPN is, in this case, the primary SMTP/mail value and therefore the matching SIP address for Skype for Business that I needed.

  • Get all the Office 365 users that are enabled for Skype for Business
Get-CSOnlineUser | ? {$_.SipAddress -notlike $Null} | Select SipAddress, DisplayName | Export-CSV -Path C:\temp\OnlineUsers.csv -NoTypeInformation
  1. This will give you a list of 'real' SFBO users that are licensed and are also registered SFB logins
  2. Review the list for deleted users that haven't been removed properly; their SIP address will include a GUID-style login, and these lines can be removed as we do not wish to migrate them.

Let's leverage this list of known online users and join them on-premises with the ForEach loop example below:

$Users = Import-Csv C:\temp\OnlineUsers.csv

ForEach ($User in $Users) {
    $SipAddress = $User.SipAddress
    $UPN = $SipAddress.Replace("sip:", "")
    $Enable = Enable-CsUser -Identity $UPN -SipAddress $SipAddress -HostingProviderProxyFqdn "sipfed.online.lync.com"
}
  • Update Azure Active Directory with the changes by running another AADSync/DirSync Delta Import & Delta Sync
  • Update Internal DNS to point all associated SFB records to on-premises Skype for Business Server(s)
    1. SRV _sipinternaltls._tcp.domain.com.au
    2. Lyncdiscoverinternal.domain.com.au
    3. SIP.domain.com.au
  • Add Additional A Records
    1. Meet.domain.com.au
    2. Dialin.domain.com.au
    3. Pool Name
    4. SFB Web Service URL Names
    5. Admin URL

Visual Indication of Success

Log into your on-premises SFB Admin Control Panel and run a blank user search to discover all users. Notice that the 'Homed' field should say 'SFBOnline'.

Move All Users

Leveraging the same list of users, run the move cmdlet like the example;

ForEach ($User in $Users) {
    $Displname = $User.DisplayName
    $SipAddress = $User.SipAddress
    $UPN = $SipAddress.Replace("sip:", "")
    $Move = Move-CsUser -Identity $UPN -Target <FE Pool Name> -Credential $credential -HostedMigrationOverrideURL https://admin0f.online.lync.com/HostedMigration/hostedmigrationservice.svc -Confirm:$false
    if ($Move -eq $False) {
        Write-Host "User $SipAddress didn't move!!"
    }
}


To get visual status while you move all the users, log into your Office 365 Skype for Business Administration Portal and view the details. Continually refresh the page to see the value for “users synced and homed online” go down as each user becomes enabled on-premises.





Log into your on-premises SFB Admin Control Panel and run a blank user search with an additional filter for Homed or Registrar Pool / is equal to / <registrar server name>.
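The same check can be made from the on-premises SFB Management Shell; a small hedged sketch (the pool name is a placeholder):

Get-CsUser -Filter { RegistrarPool -eq "sfbpool01.domain.com.au" } |
    Select-Object DisplayName, SipAddress, HostingProvider, RegistrarPool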

Client Experience

The client should be unaware of the changes being made in Office 365 and on-premises until you perform the Move-CsUser request for their account. During this period a redirect message will be sent to the client with a new registrar server FQDN, and an automated logout and login will happen. If the user doesn't have their client in the foreground of their desktop, this will happen silently in the background. The redirect in my move request had users logged out and back in within about 1-2 seconds.

Secure Azure Virtual Network Defense In Depth using Network Security Groups, User Defined Routes and Barracuda NG Firewall

Security Challenge on Azure

There are a few common security-related questions when we start planning a migration to Azure:

  • How can we restrict the ingress and egress traffic on Azure?
  • How can we route the traffic on Azure?
  • Can we have firewall kit, an Intrusion Prevention System (IPS), Network Access Control, Application Control and Anti-Malware on an Azure DMZ?

The intention of this blog post is to answer the above questions using the following Azure features, combined with a security virtual appliance available on the Azure Marketplace:

  • Azure Virtual Network (VNET)
  • Azure Network Security Groups (NSGs)
  • Azure Network Security Rule
  • Azure Forced Tunnelling
  • Azure Route Table
  • Azure IP Forwarding
  • Barracuda NG Firewall available on Azure Marketplace

One of the most common methods of attack is the Script Kiddie / Skiddie / Script Bunny / Script Kitty. Script Kiddie attacks have always been among the most frequent, and they still are. However, attacks have evolved into something more advanced, sophisticated and far more organised. The diagram below illustrates the evolution of attacks:

evolution of attacks


The main target of the attacks, from the lowest sophistication level to the most advanced, is our data. Data loss = financial loss. We work together and share the responsibility with our cloud provider to secure our cloud environment. This blog post will focus on the Azure environment.

Defense in Depth

Based on the SANS Institute of Information Security, defense in depth is the concept of protecting a computer network with layers of defensive mechanisms. There is a variety of defensive mechanisms and countermeasures to protect our Azure environment, because there are many attack scenarios and attack methods available.

In this post we will use a combination of Azure Network Security Groups to establish the Security Zones discussed in my previous blog, deploy a network firewall including an Intrusion Prevention System on our Azure network to add a further layer of security, and route the traffic through our security kit. In the Secure Azure Network blog we learned how to establish a simple Security Zone on our Azure VNET. The underlying concept behind the zone model is the increasing level of trust from the outside into the centre. On the outside is the Internet – zero trust – which is where the Script Kiddies and other attackers reside.

The diagram below illustrates the simple scenario we will implement on this post:


There are four main configurations we need to complete in order to establish the solution shown in the diagram above:

  • Azure VNET Configuration
  • Azure NSG and Security Rules
  • Azure User Defined Routes and IP Forwarding
  • Barracuda NG Firewall Configuration

In this post we will focus on the last two items. This tutorial link will assist readers in creating the Azure VNET, and my previous blog post will assist readers in establishing Security Zones using Azure NSGs.

Barracuda NG Firewall

The Barracuda NG Firewall fills the functional gaps between cloud infrastructure security and Defense-In-Depth strategy by providing protection where our application and data reside on Azure rather than solely where the connection terminates.

The Barracuda NG Firewall can intercept all Layer 2 through 7 traffic and apply policy-based controls, authentication, filtering and other capabilities. Just like the physical device, the Barracuda NG Firewall running on Azure has traffic management and bandwidth optimisation capabilities.

The main features:

  • PAYG – Pay as you go / BYOL – Bring your own license
  • ExpressRoute Support
  • Network Firewall
  • VPN
  • Application Control
  • IDS – IPS
  • Anti-Malware
  • Network Access Control Management
  • Advanced Threat Detection
  • Centralized Management

The above features are necessary to establish a virtual DMZ in Azure and implement our Defense-in-Depth and Security Zoning strategy.

Choosing the right size of Barracuda NG Firewall will determine the level of support and throughput to our Azure environment. Details of the datasheet can be found here.

I wrote the handy little script below to deploy the Barracuda NG Firewall Azure VM with two additional Ethernet interfaces:
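The embedded script itself isn't reproduced here, so the following is a minimal, hedged sketch using the classic (ASM) Azure PowerShell module of the time. The image lookup, VM/service/VNET/subnet names, credentials and IP addresses are all placeholders, and the VM size must be one that supports multiple NICs (at least an A4/ExtraLarge for three interfaces):

# Find the latest Barracuda NG Firewall image in the Marketplace (placeholder lookup)
$imageName = (Get-AzureVMImage |
    Where-Object { $_.Label -like "*Barracuda NG Firewall*" } |
    Sort-Object PublishedDate -Descending |
    Select-Object -First 1).ImageName

# Build the VM configuration with a primary NIC plus Ethernet 1 and Ethernet 2
$vm = New-AzureVMConfig -Name "AzureNGF01" -InstanceSize "ExtraLarge" -ImageName $imageName |
    Add-AzureProvisioningConfig -Linux -LinuxUser "azureadmin" -Password "P@ssw0rd!123" |
    Set-AzureSubnet -SubnetNames "FrontendSubnet" |
    Set-AzureStaticVNetIP -IPAddress "10.0.1.4" |
    Add-AzureNetworkInterfaceConfig -Name "Ethernet1" -SubnetName "MidSubnet" -StaticVNetIPAddress "10.0.2.4" |
    Add-AzureNetworkInterfaceConfig -Name "Ethernet2" -SubnetName "BackendSubnet" -StaticVNetIPAddress "10.0.3.4"

# Create the VM inside the existing VNET
New-AzureVM -ServiceName "AzureNGFService" -Location "Australia East" -VNetName "SecureVNET" -VMs $vm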

User Defined Routes in Azure

Azure allows us to redefine the routing in our VNET, which we will use to redirect traffic through our Barracuda NG Firewall. We will enable IP forwarding for the Barracuda NG Firewall virtual appliance and then create and configure the routing table for the backend networks so all traffic is routed through the Barracuda NG Firewall.

There are some notes on using the Barracuda NG Firewall on Azure:

  • User-defined routing at the time of writing cannot be used for two Barracuda NG Firewall units in a high availability cluster
  • After the Azure routing table has been applied, the VMs in the backend networks are only reachable via the NG Firewall. This also means that existing Endpoints allowing direct access no longer work

Step 1: Enable IP Forwarding for Barracuda NG Firewall VM

In order to forward traffic, we must enable IP forwarding on the primary network interface as well as the other network interfaces (Ethernet 1 and Ethernet 2) of the Barracuda NG Firewall VM.

Enable IP Forwarding:

Enable IP Forwarding on Ethernet 1 and Ethernet 2:
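The portal steps aren't shown here; a hedged equivalent with the classic (ASM) Azure PowerShell module looks roughly like the following (the service and VM names are placeholders, and the Set-AzureIPForwarding parameters should be verified against your module version):

# Enable IP forwarding on the primary network interface of the firewall VM
Get-AzureVM -ServiceName "AzureNGFService" -Name "AzureNGF01" | Set-AzureIPForwarding -Enable

# Enable IP forwarding on the additional network interfaces
Get-AzureVM -ServiceName "AzureNGFService" -Name "AzureNGF01" | Set-AzureIPForwarding -NetworkInterfaceName "Ethernet1" -Enable
Get-AzureVM -ServiceName "AzureNGFService" -Name "AzureNGF01" | Set-AzureIPForwarding -NetworkInterfaceName "Ethernet2" -Enable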

On the Azure networking side, our Azure Barracuda NG Firewall VM is now allowed to forward IP packets.

Step 2: Create Azure Routing Table

By creating a routing table in Azure, we will be able to redirect all Internet outbound connectivity from Mid and Backend subnets of the VNET to the Barracuda NG Firewall VM.

Firstly, create the Azure Routing Table:

Next, we need to add the Route to the Azure Routing Table:

As we can see, the next-hop IP address for the default route is the IP address of the default network interface of the Barracuda NG Firewall. The two extra network interfaces can be used for other routing.

Lastly, we will need to assign the Azure Routing Table we created to our Mid or Backend subnet.
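A hedged sketch of all three steps with the classic (ASM) Azure PowerShell cmdlets; the route table name, location, next-hop IP and VNET/subnet names are placeholders, and the Set-AzureSubnetRouteTable parameter may appear as -VNetName or -VirtualNetworkName depending on module version:

# Create the routing table
New-AzureRouteTable -Name "MidSubnetRouteTable" -Location "Australia East" -Label "Force traffic via NG Firewall"

# Add a default route with the firewall's primary NIC as the next hop
Get-AzureRouteTable -Name "MidSubnetRouteTable" |
    Set-AzureRoute -RouteName "DefaultToNGF" -AddressPrefix "0.0.0.0/0" -NextHopType VirtualAppliance -NextHopIpAddress "10.0.1.4"

# Assign the routing table to the Mid (or Backend) subnet
Set-AzureSubnetRouteTable -VNetName "SecureVNET" -SubnetName "MidSubnet" -RouteTableName "MidSubnetRouteTable"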

Step 3: Create Access Rules on the Barracuda NG Firewall

By default all outgoing traffic from the mid or backend is blocked by the NG Firewall. Create an access rule to allow access to the Internet.

Download Barracuda NG Admin to manage the Barracuda NG Firewall running on Azure and log in to the Barracuda NG Admin console:



Create a PASS access rule:

  • Source – Enter our mid or backend subnet
  • Service – Select Any
  • Destination – Select Internet
  • Connection – Select Dynamic SNAT
  • Click OK and place the access rule higher than other rules blocking the same type of traffic
  • Click Send Changes and Activate


Our VMs in the mid or backend subnet can now access the Internet via the Barracuda NG Firewall. RDP to a VM sitting on the Mid subnet and browse to Google.com:


Let’s have a quick look at Barracuda NG Admin Logs :)


And we are good to go using the same method to configure the rest and protect our Azure environment:

  • Backend traffic passes through our Barracuda NG Firewall before hitting the Mid subnet, and vice versa
  • Mid traffic passes through our Barracuda NG Firewall before hitting the Frontend subnet, and vice versa

I hope you’ve found this post useful – please leave any comments or questions below!

Read more from me on the Kloud Blog or on my own blog at www.wasita.net.




Programmatically interacting with Yammer via PowerShell – Part 2

In my last post I foolishly said that part 2 would be ‘coming in the next few days’. This of course didn’t happen, but I guess it’s better late than never!

In part 1, which is available here, I wrote about how it was possible to post to a Yammer group via a *.ps1 using a 'Yammer Verified Admin' account. While this worked a treat, it soon became apparent that this approach had limited productivity rewards. Instead, I wanted to create groups and add users to those groups, all while providing minimal inputs.

Firstly, there isn't a documented create-group JSON endpoint, but a quick hunt round the tinterweb with Google helped me uncover groups.json. This simply needs a name and whether it's open or closed: open = $false, closed = $true. So, building on my example from Part 1, the below code should create a new group…

$clientID = "fvIPx********GoqqV4A"
$clientsecret = "5bYh6vvDTomAJ********RmrX7RzKL0oc0MJobrwnc"
$Token = "AyD********NB65i2LidQ"
$Group = "Posting to Yammer Group"
$GroupType = $True
$CreateGroupUri = "https://www.yammer.com/api/v1/groups.json?name=$Group&private=$GroupType"

$Headers = @{
    "Accept" = "*/*"
    "Authorization" = "Bearer " + $Token
    "accept-encoding" = "gzip"
    "content-type" = "application/json"
}

Invoke-WebRequest -Method POST -Uri $CreateGroupUri -Header $Headers
You'll notice I've moved away from Invoke-RestMethod to Invoke-WebRequest. This is due to finding a bug where the script would hang and eventually time out, which is detailed in this link.

All going well, you should end up with a new group which has your ‘Yammer Verified Admin’ as the sole member ala…


Created Yammer Group

Great, but as I've just highlighted, there is only one person in that group, and that's the admin account we've been using. To add other Yammer-registered users to the group we need to impersonate them. This is only possible via a 'Yammer Verified Admin' account, for obvious chaos-avoiding reasons. So firstly you need to grab the token of the user…

$GetUsersUri = "https://www.yammer.com/api/v1/users.json"
$YammerUPN = "dave.young@daveswebsite.com"
$YammerUsers = (Invoke-WebRequest -Uri $GetUsersUri -Method Get -Headers $Headers).content | ConvertFrom-Json

foreach ($YammerUser in $YammerUsers) {
    if ($YammerUser.email -eq $YammerUPN) {
        $YammerUserId = $YammerUser.id
    }
}

$GetUserTokUri = "https://www.yammer.com/api/v1/oauth/tokens.json?user_id=$YammerUserId&consumer_key=$clientID"
$YammerUserDave = (Invoke-WebRequest -Uri $GetUserTokUri -Method Get -Headers $Headers).content | ConvertFrom-Json

To step you through the code: I've changed the URI to users.json, provided the UPN of the user that I want to impersonate, and I'm using the headers from the previously provided code. I grab all the users into the $YammerUsers variable and then do a foreach/if to obtain the id of the user. Now we've got that, we can use tokens.json to perform a GET request. This will bring back a lot of information about the user, but most importantly you'll get the token!

    user_id : 154**24726
    network_id : 20**148
    network_permalink : daveswebsite.com
    network_name : daveswebsite.com
    token : 18Lz3********Nu0JlvXYA
    secret : Wn9ab********kellNnQgvSfbGJjBfRMWZNICW0JTA
    view_members : True
    view_groups : True
    view_messages : True
    view_subscriptions : True
    modify_subscriptions : True
    modify_messages : True
    view_tags : True
    created_at : 2015/06/15 23:59:19 +0000
    authorized_at : 2015/06/15 23:59:19 +0000
    expires_at :

Storing this in the $UserToken variable allows you to append it to the Authorization header so you can impersonate/authenticate on behalf of the user. The code looks like…

$UserToken = $YammerUserDave.token
$YammerGroupId = "61***91"

$UserHeaders = @{
    "Accept" = "*/*"
    "Authorization" = "Bearer " + $UserToken
    "accept-encoding" = "gzip"
    "content-type" = "application/json"
}

$PostGroupUri = "https://www.yammer.com/api/v1/group_memberships.json?group_id=$YammerGroupId"
$AddYammerUser = Invoke-WebRequest -Uri $PostGroupUri -Method Post -Headers $UserHeaders

So using the group that we created earlier and the correct variables we then successfully add the user to the group…


Dave in the group

Something to be mindful of: when you pull the groups or the users, they are returned in pages of 50. I found a Do/While loop worked nicely to build up the variables so they could then be queried, like this…

If ($YammerGroups.Count -eq 50) {
    $GroupCycle = 1
    Do {
        $GetMoreGroupsUri = "https://www.yammer.com/api/v1/groups.json?page=$GroupCycle"
        $MoreYammerGroups = (Invoke-WebRequest -Uri $GetMoreGroupsUri -Method Get -Headers $AdminHeaders).content | ConvertFrom-Json
        $YammerGroups += $MoreYammerGroups
        $GroupCycle++
        $GroupCount = $YammerGroups.Count
    } While ($MoreYammerGroups.Count -gt 0)
}

Once you've got your head around this, the rest of the APIs/JSON endpoints on the REST API are really quite useful. My only gripe right now is that they are really missing a delete-group JSON endpoint – hopefully it'll be out soon!



Kloud Solutions named as Microsoft Australia Partner Awards finalist in four categories!

MELBOURNE, VICTORIA – 10 August, 2015 – Today, Kloud Solutions proudly announced it has been named a finalist in four categories in the 2015 Microsoft Australia Partner Awards (MAPA):

  • Cloud Productivity
  • Cloud Platform
  • Managed Service
  • Social Enterprise

Earlier this year, Kloud won Cloud Productivity Partner of the Year and was recognised as a finalist for Enterprise Mobility Suite Partner of the Year at Microsoft’s Worldwide Partner Conference in Orlando, Florida.

Kloud’s managing director Nicki Bowers is proud of the recognition, saying it is representative of the way customers entrust Kloud with their journey to the cloud.

“To be recognised in multiple categories is a huge honour and a testament to the strong relationship we enjoy with our customers. It’s proof that customer centricity is a key driver to innovation and this enables us to deliver the right technology to meet business needs across the board” she said.

The 19 categories of the Microsoft Australia Partner Awards programme recognise Microsoft Partners that have developed and delivered exceptional Microsoft-based solutions during the year.

Microsoft’s director of partner business and development, Phil Goldie, said this year partners had embraced the company’s Cloud platforms in unique ways, creating new applications and transforming line-of-business for clients.

“These award-finalist solutions all highlight the extraordinary power of our partnership and proof of customer confidence. It’s very apparent that when customers have a stronger connection with their trusted partner, they also demonstrate a stronger commitment to products like Azure and Office 365. Most importantly, together we’ve achieved the ability to delight our customers with product and solution offerings that better meet their needs,” he said.

The Microsoft Australia Partner Awards programme winners will be announced at the Microsoft Australia Partner Conference on August 31st 2015.

Microsoft Awards Kloud

Azure Applications Insights for Xamarin iOS

Azure Application Insights (AI) is a great instrumentation tool that can help you learn about how your application is doing during run-time. It is currently in Preview mode, so bear that in mind when developing production ready apps. It gives you the ability to log lots of different kinds of information like tracing, page views, custom events, metrics and more.

Azure AI supports multiple platforms, but unfortunately a Xamarin package has not been released yet. There is one library for Xamarin.Forms, but it uses the DependencyResolver. I have taken that and removed the dependency to make it compatible with Xamarin.iOS. You could do the same thing to use it for Xamarin.Android too if you like.

It's very simple to use; all you need is your instrumentation key, which you can get from your Azure portal. Follow the steps in the MSDN tutorial here to create the Azure AI instance and get your key. Once done, you can download and reference the repository that I have created on GitHub, which you can find here.

To start Azure AI on your Xamarin iOS app, you could do:




The implementation of the AzureAIManager is as follows:

	public static class AzureAIManager
		public static void Setup(string appKey = "your-azure-AI-instrumentation-key")

			var ai = new AI.Xamarin.iOS.ApplicationInsights ();
			ApplicationInsights.Init (ai);

			TelemetryManager.Init(new AI.Xamarin.iOS.TelemetryManager());


		public static void Start()

		public static void Configure(string userId = "" )

			if (string.IsNullOrEmpty(userId))

		public static void RenewSession()

I have not put this up as a NuGet package because I am sure Microsoft will release one very soon, so until that happens you can use these bindings to play around with Azure AI, even on your small projects.

Azure Active Directory Connect Export profile error: stopped-server-down.

Originally posted on Lucian’s blog over at clouduccino.com.

Follow Lucian on Twitter @LucianFrango.

A couple of weeks ago I deployed Azure AD Connect in production. It was a relatively smooth process. The wizard did most of the work, which was great. There were a few hiccups (blog post) along the way which, in most cases, are to be expected if the problems are not too serious.

Fast forward to my second install of the latest and greatest sync service for Azure AD and Office 365 cloud identities, and we have problem no. 2. This time, though, I can say that the process ran through a lot smoother. There were no real errors. Things were looking great and I was looking at my next task with some enthusiasm.

However, come 8.30ish this morning, while going over the AADConnect server once more for peace of mind, I noticed that the "Export" profile task that runs as the last task in the scheduled hourly run for AADConnect synchronisation (I've set it to 60 min) unfortunately had a nice little error for me:


Read More

Release Management Architecture in Visual Studio Online with vNext Environments

Release Management – The Background

The Release Management Service in Visual Studio Online (referred to as RM-VSO in the rest of this blog post) automates the process of deploying builds into target environments. It integrates nicely with the Team Build Service in VSO, and both services can work together to implement a continuous integration/continuous delivery (CI/CD) pipeline. The on-premises version of Release Management has been around for a couple of years; however, the Visual Studio Online version was only recently released, in November 2014, and is somewhat different from the on-premises version. (Check the latest on the official Microsoft Application Lifecycle Management blog.) This blog post focuses on the Release Management Service architecture with reference to Visual Studio Online and vNext environments.

Skilling up on RM-VSO and deploying a continuous integration/continuous delivery (CI/CD) pipeline is not too difficult. However, the absence of adequate information (at the time of writing this blog) on RM-VSO and its integration with vNext environments can make it daunting for someone new to RM-VSO. The Microsoft resources currently available on RM are mainly for the on-premises version, and one has to rely heavily on community blogs to understand and implement RM-VSO. There are indeed some good blog posts already published that provide step-by-step instructions (see the links in the reference section at the end); however, when I was looking for resources on RM-VSO I couldn't find anything that provided an architectural overview, the core components and their relationships, particularly with reference to vNext environments. I reverse-engineered the step-by-step guides to construct the RM-VSO architecture in my mind. It helps to know the core architectural concepts in order to design and implement a CI/CD solution.

Setting up an RM-VSO based release process can be summarised as:

  • Connect to the vNext environment and servers in Azure, the targets where the releases will be deployed on
  • Define ‘Release Path(s)’, ‘Stages’ and ‘Steps’ that will direct the release deployment workflows
  • Define 'Components' that will be deployed
  • Have a build definition defined in VSO that VSO will use to produce the builds
  • Create deployment script(s) and define variables (to be used in the deployment scripts)
  • Define a Release Template, associate it with a build definition and setup deployment flow and actions

However, as mentioned earlier, it is vital to understand the RM-VSO architecture and components for setting up the release processes. We will first look at the core components in RM-VSO and then at how they are related and tied together to define a release process.

Release Management – The Core Components

The following outlines the list of the key components that are required to be setup/configured to implement a release pipeline.

  • RM – Desktop Client: At present the Release Management Service in VSO doesn’t have a web based user interface (due to be released in Q4-2015), therefore the RM Desktop Client is required to perform all the configuration tasks. The desktop client is required to be connected with a VSO instance once after installation and then it will remember the credentials and connection settings.
  • Visual Studio Online (VSO): Hosts the Team Build and Release Management services, stores the configuration data and builds; manages the release workflows and controls the release deployment process.
  • Azure Subscription: RM needs to be configured with the details of the Azure subscription(s) where the deployment servers are located. More than one subscription can be added. RM needs the subscription name, ID, management certificate key and storage account name. This enables VSO to connect to the target deployment servers.
  • Pick List: The lists of the lookup items to be used with release paths
  • Stage Type: The stages are an important part of your delivery pipeline. For instance, you can have stages like DEV, SIT, UAT, Staging and Production. The release processes are designed around stages so you need to define them carefully. You can change them later for sure but it will require a lot of manual configuration.
  • Technology Type: Not applicable to RM – VSO
  • Users: The users in VSO. RM will list the users who have project administrator rights. You cannot use groups; they are not supported yet in RM-VSO.
  • Inventory: The inventory is used while defining the release template workflows 
  • Actions: Reusable workflow actions for release templates. At present RM-VSO has only four pre-defined actions; adding new custom actions is not yet supported.
  • Environments: You need to pay attention to this term, which appears as 'Environment', 'Azure Environment' and also as a vNext environment in the RM client and reference text/blogs. In the RM-VSO context they all refer to a set of Azure servers, without deployment agents, that are published under the same cloud service name within an Azure subscription (one that has already been added to RM-VSO). You should group your servers wisely under common cloud service names. One release path in RM can hook up to only one 'Environment', i.e. if a release path has two stages (Dev and SIT) then you need at least two cloud services wrapping up the DEV and SIT servers separately. You may create a unique cloud service for each of your servers, but that would mean linking just as many environments in RM. It would also mean your release paths won't be reusable, as effectively they would be tied to individual servers instead of groups of servers.
  • Server: An Azure server within a linked Azure environment where RM deploys the releases.

Before we move forward, see the following illustration of how RM-VSO components/configurations map to Azure objects:

Release Management Vnext

  • vNext Release Paths: A set of release stages, with each stage comprising approval and verification steps. Release paths are the core building blocks of a release template.
  • Component: A deployable component to be used in the release template workflow actions. It could be the build drop folder or some file in a specific location in VSO etc.
  • Build Definition: A release template needs to hookup with a build definition as the source of the build outputs. A build definition can also automatically fire up a release when a build is done.
  • Variables: Variables allow you to pass values through to the deployment scripts. This way you can configure the RM deployment processes for different deployment environments while using the same set of deployment scripts. There are four types of variables:
    • Global: Available everywhere, in all release templates
    • Server: Associated with a server and available when the server is used in a release template action
    • Component: Associated with a component and available when the component is used in a release template
    • Action: Associated with an action and available when the action is used in a release template. Note: since we cannot create custom actions in RM-VSO, these variables are only created within release templates and as such are not reusable.
  • Release Template: Releases in RM are based on release templates. A release template is the place where all the above components come together to define how a release should be deployed and progressed. A release template is composed of a release path (with stages and steps), a build definition, a control flow, actions, servers, a component, a deployment script and variables.

Release Template – The thing that knits your release process.

Now that we have an understanding of the key RM-VSO components, it is time to put them together. A release template is the place where all these components gel together. The following diagram depicts the Release Management components and their relationships.

The RM Release Process – A brief summary

The following summary of setting up the release management process can aid in understanding the RM architecture illustrated in the previous diagram:

  • Link Azure environments
  • Add servers from the Azure environments to RM
  • Define lookup ‘Stages’ such as ‘Dev’, ‘SIT’, ‘UAT’ etc
  • Create reusable ‘Release Paths’ with the stages and appropriate approval steps
  • Create Release Templates
    • A release template is bound to a single build definition. A build definition can trigger a release automatically.
    • A release template is based on a release path, that defines the stages and steps
    • A stage in the release path is customised with a control flow sequence and actions
    • A control flow sequence can have multiple actions
  • An action is configured with
    • An Azure environment and a server
    • Userid/pwd to connect to the server
    • Path to the PowerShell or DSC script that will deploy the release package (the script should be included in the build, this is a relative path within the drop folder).
    • A component representing a release package (or a part of it)
    • Custom variables
  • Once a release template is created a new release can be requested, however there must be at least one successful build already done for the build definition associated with the release template.

The following diagram helps visually in understanding a release template structure with reference to the other key components in RM.

Use this blog as a reference while skilling up on RM-VSO or designing a solution based on it.

(also published on my personal blog mycloudview.net)


  1. Manage your release (Microsoft Official)
  2. Release your app to vNext Environments (Microsoft Official)
  3. Continuous Delivery with VSO: Configuring Release Management (This one is really good)
  4. Create a Release Management pipeline for Professional Developers (and this one is probably the best)
  5. Start with Visual Studio Release Management vNext–VS RM for Dummies

Azure Preview Features website

I had stumbled upon this site before; however, on my long journey through the interwebs I must have forgotten or lost it. The site I'm referring to is the Azure Preview Features site, which isn't directly accessible through the main Azure site's top or bottom menus. So, as this is a lucky find, I thought I'd share.

(Note: if you Google "Azure preview", the site is the first result that comes up. Face palm?)


The Azure Feature Preview site is a list of current publicly accessible preview features and functionality. Moreover, Microsoft explain that the preview features in Azure are as follows:

Azure currently offers the following preview features, which are made available to you for evaluation purposes and subject to reduced or different service terms, as set forth in your service agreement and the preview supplemental terms. Azure may include preview, beta, or other pre-release features, services, software, or regions to obtain customer feedback (“Previews”). Previews are made available to you on the condition that you agree to these terms of use, which supplement your agreement governing use of Microsoft Azure.

Read More