ADSync Cmdlets – Part 1

I really enjoyed the later versions of DirSync, which included a native PowerShell module to execute sync engine tasks and show some global configuration settings. Now that we are looking at moving over to the new tool, AADSync, there is a new module installed, but with very little reference material available on the web at the time of writing. I’ve outlined the names of the cmdlets below; since ‘Get-Help’ doesn’t yet offer any descriptions or examples, I’ve included some in this post.



From browsing over these cmdlets we can see that there is much more functionality available than there was in the DirSync module equivalent. If we take nothing else away from this list, it’s that we can now not just run the engine but configure the tool itself.


Here are some examples of what we can achieve now that the ADSync module is available.

Example 1

Scenario: Create a custom rule to not sync users with X121Address=NoSync

#Get the AD Connector
$ADConnector = (Get-ADSyncConnector | ? {$_.Type -eq "AD"})
#Create the Scope Filter Object
$scopefilter = New-Object Microsoft.IdentityManagement.PowerShell.ObjectModel.ScopeCondition
$scopefilter.Attribute = "x121Address"
$scopefilter.ComparisonValue = "NoSync"
$scopefilter.ComparisonOperator = "EQUAL"
#Create the Attribute Flow
$AttributeFlowMappings = New-Object Microsoft.IdentityManagement.PowerShell.ObjectModel.AttributeFlowMapping
$AttributeFlowMappings.Source = "True"
$AttributeFlowMappings.Destination = "cloudFiltered"
$AttributeFlowMappings.FlowType = "constant"
$AttributeFlowMappings.ExecuteOnce = $False
$AttributeFlowMappings.ValueMergeType = "Update"
#Add the Scope Filter to a Scope Group
$scopefiltergroup = New-Object Microsoft.IdentityManagement.PowerShell.ObjectModel.ScopeConditionGroup
#Add the condition to the group so the rule is actually scoped
$scopefiltergroup.ScopeConditionList.Add($scopefilter)
#Create the Rule
$GUID = $ADConnector.Identifier.Guid
Add-ADSyncRule -Connector $GUID -Name "In from AD - User DoNotSyncFilter" -SourceObjectType user -TargetObjectType person -Direction inbound -AttributeFlowMappings $AttributeFlowMappings -LinkType Join -Precedence 1 -ScopeFilter $scopefiltergroup

Example 2

Scenario: Add Additional Attributes to be imported from Active Directory

Get-ADSyncConnector | ? {$_.Type -eq "AD"} | Add-ADSyncConnectorAttributeInclusion -AttributeTypes employeeID

Example 3

Scenario: Adjust the attribute flow of UPN so that AD Mail Attribute flows to UPN in Office 365

#Define the Flow Mapping 
$Mapping = New-object Microsoft.IdentityManagement.PowerShell.ObjectModel.AttributeFlowMapping
$Mapping.Source = "mail"
$Mapping.Destination = "userPrincipalName"
$Mapping.FlowType = "Direct"
$Mapping.ExecuteOnce = $false
$Mapping.Expression = $null    
$Mapping.ValueMergeType = "update"
#Get the AD Connector
$ADConnector = (Get-ADSyncConnector | ? {$_.Type -eq "AD"})
$GUID = $ADConnector.Identifier.Guid
#Create the Rule with higher precedence
Add-ADSyncRule -Connector $GUID -Direction Inbound -Name "In From AD - User UPN Flow" -SourceObjectType user -TargetObjectType person -AttributeFlowMappings $Mapping -Description "Map Mail to UPN in the Metaverse" -LinkType Join -Precedence 99

Here is the user synced to the Metaverse without the attribute flow transformation (note the UPN value).


We run the PowerShell and preview the results


Commit the change and check with a Metaverse search.


Let’s check Azure Active Directory. The UPN now shows the mail prefix.

Happy days!

Some of these cmdlets still seem to need a little TLC. I found that they didn’t always give the desired results even after committing the changes in the shell. We all love agile, so give it time and they should get fixed up; and there is always the GUI if you really have to.

Creating new Office documents for OneDrive for Business from another ASPX page

Recently one of my clients wanted to add a new control to their widget dashboard that would give them access to OneDrive for Business functionality. As part of the control they wanted the ability to create new Word, Excel and PowerPoint files. These files would then be edited using Office Web Apps.

Our control has been written as a single-page application deployed to the My Site Host of the Office 365 tenant. It is then accessed via an IFRAME on a non-ASPX page.

This blog will look at how to create a Word, Excel or PowerPoint file and start editing it using Office Web Applications.

To achieve this we need to do the following:

  1. Determine where the user’s OneDrive for Business site collection is on the tenant
  2. Create the new document within OneDrive
  3. Navigate to Office Web Apps to open the new document

Determine location of OneDrive for Business site collection

One of the problems we face is that we do not know which site collection our logged-in user has been allocated for OneDrive. We make use of the SP.UserProfiles.PeopleManager REST API to retrieve the PersonalUrl of the user, using jQuery to make the GET request.

function getPersonalUrl(success, failure) {
  // PersonalUrl is exposed by the SP.UserProfiles.PeopleManager REST API
  var url = _spPageContextInfo.siteAbsoluteUrl +
      "/_api/SP.UserProfiles.PeopleManager/GetMyProperties/PersonalUrl";
  $.ajax({
      type: 'GET',
      url: url,
      headers: {
          'accept': 'application/json;odata=verbose'
      },
      xhrFields: {
          withCredentials: true
      },
      success: function (data) {
          var personalUrl = data.d.PersonalUrl;
          success(personalUrl);
      },
      error: function () {
          failure();
      }
  });
}

Create the new document within OneDrive

Once we know where the user’s site collection is, we create a new file at the root of the Documents library. Using the PersonalUrl retrieved previously, we open a reference to the personal site’s client context and then call the list.createDocument method to create a blank document of that document type.

For reference, the mapping of docType values is:

  • 1 – Word
  • 2 – Excel
  • 3 – PowerPoint
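For convenience, the mapping above can be captured in a small lookup helper. This is a hypothetical convenience function; only the numeric codes themselves come from the API.

```javascript
// Hypothetical lookup for the docType codes passed to list.createDocument
var DOC_TYPES = {
    word: 1,        // blank .docx
    excel: 2,       // blank .xlsx
    powerpoint: 3   // blank .pptx
};

function getDocType(name) {
    return DOC_TYPES[name.toLowerCase()];
}
```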

We then retrieve the URL of the newly created file using the item.getWOPIFrameUrl method.

function createNewDefaultDocument(personalUrl, docType) {
  var clientContext = new SP.ClientContext(personalUrl);
  var web = clientContext.get_web();
  // OneDrive for Business stores files in the "Documents" library
  var list = web.get_lists().getByTitle("Documents");
  var folder = list.get_rootFolder();
  var item = list.createDocument(null, folder, docType);
  var wopiFrameActionEnum = SP.Utilities.SPWOPIFrameAction;
  var wopiUrl = item.getWOPIFrameUrl(wopiFrameActionEnum.edit);
  clientContext.executeQueryAsync(onSuccess, onFailure);
  function onSuccess() {
    var wopiUrlValue = wopiUrl.get_value();
    if (Boolean(wopiUrlValue)) {
      // Switch the WOPI action from edit to editnew for the newly created file
      wopiUrlValue = wopiUrlValue.replace("action=edit", "action=editnew");
      GoToPage(wopiUrlValue);  // Use core.js GoToPage method
    }
  }
  function onFailure(sender, args) {
    // Handle failure (args.get_message() contains the error detail)
  }
}

Navigate to Office Web Apps to open the new document

Once we retrieve the URL of the document (wopiUrlValue) we need to replace the action from edit to editnew.

Lastly we use the GoToPage method (core.js) to navigate to the modified URL. This opens the file in Office Web Apps in the same browser tab.

Logging with log4net and Azure Diagnostics on Web and Worker Roles

Once you start publishing content to Azure Cloud Services it becomes increasingly critical to have insights into what is going on with your Web or Worker Roles without the need to manually connect to the hosts and inspect local logs.

Logging locally to file is an option, but it presents a couple of challenges: there is limited local persistent disk space on an Azure Role, and local logging makes it hard to get an aggregated view of what’s happening across the multiple instances servicing a single Cloud Service.

In this post we’ll take a look at how we can use the in-built capabilities of log4net’s TraceAppender and the standard Azure Diagnostics.


If you’ve been hunting for details on how to log with log4net on Azure you will no doubt have come across a lot of posts online (particularly on Stack Overflow) that talk about how logging isn’t really reliable and that there appears to be issues using log4net.

The good news is that those posts are out-of-date and the vanilla setup works quite happily.

Just for good measure, the contents of this post were pulled together using the then-current releases of Visual Studio, the Azure SDK and log4net.

The Setup

You should add the log4net and Microsoft.WindowsAzure.Diagnostics references to the Web Application or other project you are planning to deploy to Azure. The solution should also have an Azure Cloud Service project added (this is the Azure deployment container for your actual solution).

You should open the primary configuration file for your solution (web.config or app.config) and add the following:

    <system.diagnostics>
      <trace>
        <listeners>
          <!-- Set Version= to match the Microsoft.WindowsAzure.Diagnostics assembly shipped with your SDK -->
          <add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=, Culture=neutral, PublicKeyToken=31bf3856ad364e35" name="AzureDiagnostics">
            <filter type="" />
          </add>
        </listeners>
      </trace>
    </system.diagnostics>

This attaches the Azure Diagnostics Trace Listener when your solution is deployed to an Azure Role Instance.

You will also need to define the log4net configuration (you may already have this in your project) and ensure that you add a TraceAppender as shown in the example below.

  <log4net>
    <appender name="TraceAppender" type="log4net.Appender.TraceAppender">
      <layout type="log4net.Layout.PatternLayout">
        <!-- can be any pattern you like -->
        <conversionPattern value="%logger - %message" />
      </layout>
    </appender>
    <!-- does not have to be at the root level -->
    <root>
      <level value="ALL" />
      <appender-ref ref="TraceAppender" />
    </root>
  </log4net>

So far, so good. Now for the key bit!

Tie it all together

The key thing to understand is the relationship between the Azure Diagnostics level settings and the log4net log level. If the levels are mismatched there is a chance you will not get all the logging information you expect to see.

Azure Diagnostics

Firstly we need to define the level we wish to capture using Azure Diagnostics. This can be done prior to deployment in Visual Studio, or it can be modified for an existing running Cloud Service using Server Explorer.

Prior to deployment (and useful for development debugging when using the local Azure emulator environment) you can use the properties of the Role in Visual Studio to set Diagnostics levels as shown below.

Instance Diagnostics Settings

The above example is shown for local development use – you can have the build framework swap out the logging storage account location when deploying to Azure in order to use a proper Azure Storage account.

The important things here are that Diagnostics is enabled and that the level is set appropriately. If you get either of these wrong you will see no messages logged to Azure Diagnostics from log4net. My recommendation is to deploy and run the solution for a while and see what volume of logs you capture.

You can update this setting post-deployment, so you have some freedom to tweak it without needing to redeploy your solution.


log4net

The only key with log4net is the level you set in your configuration. Unlike the Azure Diagnostics settings, changing this level does require a redeployment, so it’s worth experimenting to find an appropriate level for your Azure environment prior to doing a “final” deployment.

But where are my logs?

Once deployed in Azure you will find a table called ‘WADLogsTable’ created in the specified Storage Account, and your logging entries will begin showing up there. Individual Role Instances can be identified by the RoleInstance column in the table, as shown below.

Diagnostics Table View

Note that this is not a real-time logging setup – the Diagnostics Trace Listener has some smarts baked into it to batch up log entries and write them to the table on a periodic basis.
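The transfer cadence is controlled by the diagnostics configuration. As an illustration only (attribute values here are examples, not recommendations), a WadCfg fragment might schedule the transfer of buffered trace entries like this:

```xml
<DiagnosticMonitorConfiguration configurationChangePollInterval="PT1M" overallQuotaInMB="4096">
  <!-- Push buffered trace entries to WADLogsTable every minute -->
  <Logs bufferQuotaInMB="1024"
        scheduledTransferPeriod="PT1M"
        scheduledTransferLogLevelFilter="Verbose" />
</DiagnosticMonitorConfiguration>
```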

You can view your log entries using a variety of tools – Visual Studio Server Explorer or the multitude of free Azure Storage Explorers on the market such as Azure Storage Explorer.

You can store up to 500TB of data in an individual Storage Account (at time of writing) so you don’t need to worry about logging too much information – the challenge of course is then finding the key information you need. I’d recommend performing periodic maintenance on the table and purging older entries.

So there you have it – how you can use standard log4net and the Azure Diagnostics tooling to redirect your existing log4net logging to Azure Diagnostics. Hope this helps!

Do It Yourself Cloud Accelerator – Part III Scaling Out

There’s recently been some interest in accelerating Office 365 SharePoint Online traffic for organisations, and for good reason. All it takes is a CEO sending an email to All Staff with a link to a movie hosted in SharePoint Online to create some interest in better ways to serve content internally. There are commercial solutions to this problem, but they are, well… commercial (read: expensive). Now that the basic functionality has been proven using existing Windows Server components, what would it take to put it into production?

A quick refresh: in the last post I demonstrated a cloud routing layer that took control of the traffic and added compression and local caching, both at the router and at the end point, by utilising BranchCache to share content blocks between users. That’s a pretty powerful service and can provide a significantly better browsing experience for sites with restricted or challenged internet connections. The router can be deployed in a branch office, within the corporate LAN, or as an externally hosted service.


The various deployment options offer trade-offs between latency and bandwidth savings which are important to understand. The Cloud Accelerator actually provides four services:

  • SSL channel decryption, and re-encryption using a new certificate
  • Compression of uncompressed content
  • Generation of BranchCache content hashes
  • Caching and delivery of cached content

For the best price/performance, consider the following:

  • Breaking the SSL channel, compression and generating BranchCache hashes are all CPU intensive workloads, best done on cheap CPUs and on elastic hardware.
  • Delivering data from cache is best done as close to the end user as possible

With these considerations in mind we end up with an architecture where the Cloud Accelerator roles are split across tiers in some cases. First, an elastic tier runs all the hard CPU grunt work during business hours and scales back to near nothing when not required. Perfect for cloud. This, with BranchCache enabled on the client, is enough for a small office. In a medium-sized office, or where the workforce is transient (such as kiosk-based users), it makes sense to deploy a BranchCache server in the branch office to provide a permanent source of BranchCache content, enabling users to come and go from the office. In a large office it makes sense to deploy the whole Cloud Accelerator inline again, with the BranchCache server role deployed on it. This node will re-break the SSL channel and provide a local in-office cache for secure cacheable content; perfect for those times when the CEO sends out a movie link!

Scaling Out

The Cloud Accelerator I originally built operates on a single server in Azure Infrastructure as a Service (IaaS). The process of generating BranchCache hashes on the server as content passes through is CPU intensive and will require more processors to cope at scale. We could just deploy a bigger server, but it’s more economical (and a good opportunity) to use the auto-scaling features of the Azure IaaS offering. Creating a multi-server farm requires a couple of other configuration changes to ensure the nodes work together effectively.

Application Request Routing Farm

One thing that’s obvious when working with ARR is that it’s built to scale. In fact it runs underneath some very significant content delivery networks and is integral to services such as Azure Websites. Configuration settings in ARR reveal that it is designed to operate both as part of a horizontally scaled farm and as a tier in a multi-layered caching network. We’ll be using some of those features here.

First, the file-based request cache that ARR uses is stored on the local machine. Ideally, in a load-balanced farm, requests that have been cached on one server could be used by other servers in the farm. ARR supports this using a “Secondary” cache: the Primary cache is fast local disk, but all writes also go asynchronously to the Secondary cache, and if content is not found in the Primary then the Secondary is checked before going to the content source. To support this feature we need to attach a shared disk that can be used across all farm nodes. There are two ways to do this in Azure. One way is to attach a disk to one server and share it to the other servers in the farm via the SMB protocol. The problem with this strategy is that the server acting as the file server becomes a single point of failure for the farm, which requires some fancy configuration to get peers to cooperate by hunting out SMB shares and creating one if none can be found. That’s an awful lot of effort for something as simple as a file share. A much better option is to let Azure handle all that complexity by using the new Azure Files feature (still in preview at the time of writing).

 Azure Files

After signing up for Azure Files, new storage accounts created in a subscription carry an extra “file” endpoint (<account>.file.core.windows.net). Create a new storage account in an affinity group; the affinity group is guidance to the Azure deployment fabric to keep everything close together. We will use this later for our virtual machines to keep performance optimal.

New-AzureAffinityGroup -Name o365accel -Location "Southeast Asia"
New-AzureStorageAccount -AffinityGroup o365accel -StorageAccountName o365accel -Description "o365accel cache" -Label "o365accel"

Download the Azure Files bits from here, taking care to follow the instructions and unblock the PowerShell zip before extracting it, and then execute the following:

# import module and create a context for account and key
import-module .\AzureStorageFile.psd1
$ctx = New-AzureStorageContext <storageaccountname> <storageaccountkey>
# create a new share
$s = New-AzureStorageShare "Cache" -Context $ctx

This script tells Azure to create a new file share called “Cache” on my nominated storage account. Now we have a file share running over the storage account that can be accessed by any server in the farm, and they can freely and safely read and write files to it without fear of corruption between competing writers. To use the file share one further step is required: attaching it to Windows, and there is a bit of a trick here. Mapped-drive file shares are a per-user construct rather than a system one, so we need to map the drive and set the AppPool to run as that user.

  • Change the Identity of the DefaultAppPool to run as Network Service


  • Now set up a Scheduled Task to run as "Network Service" at system start-up to run the following command, which adds the credentials for accessing the file share

  • Where mapdrive.bat is something like:
    cmdkey /add:<storageaccountname>.file.core.windows.net /user:<storageaccountname> /pass:<storageaccountkey>
  • Now we just need to reconfigure ARR to use this new location as a secondary cache. Start IIS Manager and add a secondary drive like this:

BranchCache Key

A BranchCache Content Server operates very well as a single-server deployment, but when deployed in a farm the nodes need to agree on, and share, the key used to hash and encrypt the blocks sent to clients. If the servers were installed independently this would require exporting (Export-BCSecretKey) and importing (Import-BCSecretKey) the shared hashing key across the farm. However, in this case it’s not necessary, because we are going to make the keys the same by another method: cloning all the servers from a single server template.


In preparation for creating a farm of servers, the certificates used to encrypt the SSL traffic will need to be available for reinstallation into each server after the cloning process. The easiest way to do this is to get those certificates onto the server now, so we can install and delete them as server instances are created and deployed into the farm.

  • Create a folder C:\Certs and copy the certs into that folder


Azure supports cloning existing machines by “sysprep”ping and capturing a running machine that has been preconfigured.

  • Remote Desktop connect to the Azure virtual server and run C:\Windows\System32\Sysprep\sysprep.exe

This will shut down the machine and save the server in a generic state so that new replica servers can be created in its image. Now click on the server and choose Capture, which will take the snapshot and delete the virtual machine.

The image will appear in your very own Virtual Machine gallery, right alongside the other base machines and pre-built servers. Now create a new Cloud Service in the management console or via PowerShell, ensuring you choose the new affinity group for it to sit within. This keeps our servers and the file share deployed close together for performance.

New-AzureService -AffinityGroup o365accel -ServiceName o365accel

Now we can create a farm of O365 Accelerators which will scale up and down with the load on the service. This is a much better option than trying to guess the right-sized hardware appliance from a third-party vendor.

Now we could just go through this wizard a few times to create a farm of O365 Accelerator servers, but this repetitive task is better done with the Azure PowerShell cmdlets. The following script does just that: it creates n machine instances and configures each one by reinstalling the SSL certificates through a Remote PowerShell session.

After running this script multiple server instances will be deployed into the farm all serving content from a shared farm wide cache.

As a final configuration step, we use the Auto Scale feature in Azure to scale back the number of instances until required. The following setup uses the CPU utilisation on the servers to adjust the number of servers between an idle state of 1 and a maximum of 4, giving a very cost-effective cloud accelerator and caching service which automatically adjusts with usage. The configuration says: every 5 minutes, look at the CPU load across the farm; if it’s greater than 75% add another server, and if it’s less than 50% remove one.
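The policy described above boils down to a simple threshold rule. Azure’s Auto Scale engine evaluates this for you; the sketch below is only a model of the rule to make the behaviour concrete.

```javascript
// Model of the auto-scale rule: every evaluation interval, compare average
// farm CPU against the thresholds and adjust the instance count within bounds.
function scaleDecision(avgCpuPercent, instanceCount, minInstances, maxInstances) {
    if (avgCpuPercent > 75 && instanceCount < maxInstances) {
        return instanceCount + 1; // scale out
    }
    if (avgCpuPercent < 50 && instanceCount > minInstances) {
        return instanceCount - 1; // scale in
    }
    return instanceCount; // within band, no change
}
```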

Try doing that with a hardware appliance! Next we’ll load up the service and show how it operates under load and delivers a better end-user experience.

Publishing to Azure Event Hubs using a .NET Micro Framework Device

In previous posts, Kloudies Matt Davies and Olaf Loogman have shown how to connect Arduino-based devices to the Azure platform. Preferring the .NET Micro Framework (NETMF) platform myself, I thought it time to show how we can publish sensor data to Azure Event Hubs using a NETMF-connected device.

.NET Micro Framework

Like Arduino, the .NET Micro Framework is an open source platform that runs on small, microcontroller-based devices or “things” as we call them now in the world of the Internet-of-Things (IoT). However, unlike the Arduino platform, developers in the NETMF world use Visual Studio and C# to develop embedded applications, leveraging the rich developer experience that comes with working within the Visual Studio IDE. Using the .NET Gadgeteer toolkit, we take this experience to the next level with a model-driven development approach and graphical designers that abstract much of the low-level “engineering” aspects of embedded device development.

net gadgeteer i VS

Whether we are working with earlier NETMF versions or with the Gadgeteer toolkit, we still get the rich debugging experience and deployment features from Visual Studio which is the big differentiator of the NETMF platform.

FEZ Panda II

The device I have had for a number of years is the FEZ Panda II from GHI Electronics (FEZ stands for Freak’N’Easy).


Although not a member of the newer .NET Gadgeteer family, the FEZ Panda II still provides those in the maker community a solid device platform for DIY and commercial grade applications. The FEZ Panda II sports:

  • 72MHz 32-bit processor with 512KB of FLASH and 96KB of RAM (compared to the Arduino Yun’s 16MHz, 32KB of FLASH and 2KB of RAM)
  • Micro SD socket for up to 16GB of memory
  • Real Time Clock (RTC)
  • Over 60 digital inputs and outputs
  • TCP/IP HTTP support
  • Arduino shield compatibility

Note: The FEZ Panda II does not have built in support for TLS/SSL which is required to publish data to Azure Event Hubs. This is not a problem for the newer Gadgeteer boards such as the FEZ Raptor.

Our Scenario

iot cloud gateway

The scenario I will walk through in this post features our NETMF embedded device with a number of analogue sensors taking readings a couple of times per second and publishing the sensor data to an Azure Event Hub via a field gateway. A monitoring application acts as an Event Hub consumer to display sensor readings in near real-time.

  • Sensors – Thermometer (degrees Celsius) and Light (light intensity as a %, with zero being complete darkness)
  • Device – FEZ Panda II connected to the internet using the Wiznet W5100 ethernet controller.
  • Field Gateway – Simple IIS Application Request Routing rule in an Azure hosted Virtual Machine that routes the request as-is to a HTTPS Azure Event Hub endpoint.
  • Cloud Gateway – Azure Event Hub configured as follows:
    • 8 partitions
    • 1 day message retention
    • Monitoring Consumer Group
    • Publisher access policy for our connected device
    • Consumer access policy for our monitoring application
  • Monitoring Application – Silverlight (long live Ag!) application consuming events off the Monitoring Consumer Group.

Creating the Visual Studio Solution

To develop NETMF applications we must first install the .NET Micro Framework SDK and any vendor-specific SDKs. Using Visual Studio we create a NETMF project using the .NET Micro Framework Console Application template (or a Gadgeteer template if you are using the newer family of devices).


For the FEZ Panda II, I need to target NETMF version 4.1.


Additionally, I also need to add assembly references to the device manufacturer’s SDK libraries, GHI Electronics in my case.


Sensor Code

Working with sensors and other modules is fairly straightforward using NETMF and the GHI libraries. To initialise an instance of my sensor class I need to know:

  • The analogue pin on the device my sensor is connected to
  • The interval between sensor readings
  • The min/max values of the readings
public Thermometer(FEZ_Pin.AnalogIn pin, int interval, int min, int max)
{
    // Set reading parameters
    _interval = interval;
    _minScale = min;
    _maxScale = max;

    // Initialise thermometer sensor
    _sensor = new AnalogIn((AnalogIn.Pin)pin);
    _sensor.SetLinearScale(_minScale, _maxScale);

    // Set sensor id
    _sensorId = "An" + pin.ToString() + _sensorType;
}

I then use a background thread to periodically take sensor readings and raise an event, passing the sensor data as an event argument.

void SensorThreadStart()
{
    while (true)
    {
        // Take a sensor reading
        var temp = _sensor.Read();

        // Create sensor event data
        var eventData = new SensorEventData()
        {
            DeviceId = _deviceId,
            SensorData = new SensorData[]
            {
                new SensorData() { SensorId = _sensorId, SensorType = _sensorType, SensorValue = temp }
            }
        };

        // Raise sensor event
        if (SensorReadEvent != null)
            SensorReadEvent(eventData);

        // Pause until the next reading is due
        Thread.Sleep(_interval);
    }
}

A critical attribute of any “thing” in the world of IoT is being connected. When working with resource-constrained devices we quickly come to terms with having to perform many lower-level functions that we may not be accustomed to in our day-to-day development. Initialising your network stack may be one of them…

public static void InitNetworkStack()
{
    Debug.Print("Network settings...");
    try
    {
        // Enable ethernet (WIZnet W5100 controller; pin mapping depends on your shield wiring)

        // Enable DHCP
        Dhcp.EnableDhcp(new byte[] { 0x00, 0x5B, 0x1C, 0x51, 0xC6, 0xC7 }, "FEZA");
        Debug.Print("IP Address: " + new IPAddress(NetworkInterface.IPAddress).ToString());
        Debug.Print("Subnet Mask: " + new IPAddress(NetworkInterface.SubnetMask).ToString());
        Debug.Print("Default Gateway: " + new IPAddress(NetworkInterface.GatewayAddress).ToString());
        Debug.Print("DNS Server: " + new IPAddress(NetworkInterface.DnsServer).ToString());
    }
    catch (Exception ex)
    {
        Debug.Print("Network settings...Error: " + ex.ToString());
    }
}



Note the use of the Debug.Print statements. While in debug mode these are written to the Output window for easy troubleshooting and debugging.

Event Hub Client

As I write this, we don’t yet have an Azure SDK for NETMF (but we have been told it is on its way). Like most services in Azure, Event Hubs provides a REST-based API that I can consume using plain old HTTP web requests. To handle access control, I assigned a pre-generated SAS token to the device during deployment. This avoids the resource-constrained device having to generate a SAS token itself and use up precious memory doing so.

To construct our request to Event Hubs we need the following details:

  • Service Bus Namespace
  • Event Hub name
  • PartitionKey (I am using a device ID)
  • Authorisation token
public EventHubClient(string serviceNamespace, string eventhub, string deviceName, string accessSignature)
{
    // Assign event hub details
    _serviceNamespace = serviceNamespace;
    _hubName = eventhub;
    _deviceName = deviceName;
    _sas = accessSignature;

    // Generate the url to the event hub
    //_url = "https://" + _serviceNamespace + ".servicebus.windows.net/" + _hubName + "/Publishers/" + _deviceName;

    // Note: As the FEZ Panda (.NET MF 4.1) does not support SSL I need to send this to the field gateway over HTTP
    _url = "http://<fieldgateway>/" + _serviceNamespace + "/" + _hubName + "/" + _deviceName;
}

Note that I have switched my Event Hub client to use an intermediary field gateway URL, as the device does not support SSL and cannot post requests directly to the Azure Event Hub endpoint.

Finally, the actual payload is the sensor data, which I serialise into JSON format. Event Hubs is payload agnostic, so any stream of data may be sent through the hub: anything from sensor data to application logging, or perhaps observational data from medical devices, can be published to Azure Event Hubs.
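Based on the SensorEventData structure used on the device, the serialised payload looks something like the following. The values and sensor type names here are illustrative only; the field names come from the device code.

```javascript
// Example payload matching the SensorEventData/SensorData shape used above
var sensorEvent = {
    DeviceId: "FEZA",
    SensorData: [
        { SensorId: "An2Temp", SensorType: "Temp", SensorValue: 23.5 },
        { SensorId: "An3Light", SensorType: "Light", SensorValue: 67.0 }
    ]
};
var eventData = JSON.stringify(sensorEvent);
```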

public bool SendEvent(SensorEventData sensorData)
{
    var success = false;
    try
    {
        // Format the sensor data as json
        var eventData = sensorData.ToJson();

        Debug.Print("Sending event data: " + eventData);

        // Create an HTTP Web request.
        HttpWebRequest webReq = HttpWebRequest.Create(_url) as HttpWebRequest;

        // Add required headers
        webReq.Method = "POST";
        webReq.Headers.Add("Authorization", _sas);
        webReq.ContentType = "application/atom+xml;type=entry;charset=utf-8";
        webReq.ContentLength = eventData.Length;
        webReq.KeepAlive = true;

        // Write the event payload to the request stream
        using (var writer = new StreamWriter(webReq.GetRequestStream()))
        {
            writer.Write(eventData);
        }

        webReq.Timeout = 3000; // 3 secs
        using (var response = webReq.GetResponse() as HttpWebResponse)
        {
            Debug.Print("HttpWebResponse: " + response.StatusCode.ToString());

            // Check status code
            success = (response.StatusCode == HttpStatusCode.Created);
        }
    }
    catch (Exception ex)
    {
        Debug.Print("Error sending event: " + ex.ToString());
    }

    return success;
}

Wiring it all together is the job of our entry point, Main(). Here we initialise our network stack, sensors, LEDs and, of course, our Azure Event Hub client. We then wire up the sensor event handlers and off we go.

public static void Main()
{
    // Initialise device and sensors
    Init();

    // Setup Event Hub client
    client = new EventHubClient(serviceNamespace, hubName, deviceName, sas);

    Debug.Print("Device ready");

    // Start sensor monitoring; keep the main thread alive while the sensor threads run
    Thread.Sleep(Timeout.Infinite);
}

static void Init()
{
    // Enable ethernet
    InitNetworkStack();

    // Init LED
    led = new LED((Cpu.Pin)FEZ_Pin.Digital.Di5, false);

    // Init thermometer sensor
    thermo = new Thermometer(FEZ_Pin.AnalogIn.An2, 500, -22, 56);
    thermo.SensorReadEvent += SensorReadEvent;

    // Init light sensor
    light = new Light(FEZ_Pin.AnalogIn.An3, 500, 0, 100);
    light.SensorReadEvent += SensorReadEvent;

    // Flash once if all is good
}

static void SensorReadEvent(SensorEventData data)
{
    // Send event to Event Hubs
    if (!client.SendEvent(data))
    {
        // Flash three times if failed to send
    }
    else
    {
        // Flash once if all is good
    }
}
Note the use of the LED, connected to digital pin 5, to provide runtime feedback. We flash the LED once for every successful publish of an event and three times if we have a failure. It is this kind of low level controller interaction that makes NETMF development such a satisfying, albeit geeky pastime.

Field Gateway

As mentioned above, the FEZ Panda II does not support TLS/SSL. To overcome this, I posted sensor data to a “field gateway”: a simple IIS Application Request Routing (ARR) rule that performs the protocol translation from HTTP to HTTPS. The ARR rule only rewrites the URL and does not need to enrich or modify the request in any other way.
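As a sketch only, such a rewrite rule might look like the following IIS URL Rewrite fragment. The rule name, URL pattern and Service Bus namespace here are illustrative assumptions, not the actual production configuration:

```xml
<!-- Sketch of an IIS URL Rewrite rule forwarding plain HTTP posts on to the
     HTTPS Event Hubs endpoint. Host names and paths are illustrative only. -->
<rewrite>
  <rules>
    <rule name="EventHubFieldGateway" stopProcessing="true">
      <match url="^eventhub/(.*)" />
      <action type="Rewrite" url="https://mynamespace.servicebus.windows.net/{R:1}" />
    </rule>
  </rules>
</rewrite>
```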


Our Consumer

Azure Event Hubs provides Consumer Groups that subscribe to events published to the hub. Only one consumer can receive events from each partition at a time, so I have found it good practice to create at least two consumer groups: one for ad-hoc monitoring as required, while your downstream processing applications/services consume the primary consumer group. To this end, I developed a quick Silverlight application (yes, I know. Long live Ag!) to act as a monitoring consumer for the event hub.
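As a sketch of what a second, monitoring consumer looks like using the Service Bus SDK of the time (the connection string, hub name, group name "Monitoring" and partition "0" are assumptions for illustration, not the actual application code):

```csharp
// Sketch: read recent events from a dedicated monitoring consumer group.
var client = EventHubClient.CreateFromConnectionString(connectionString, "myeventhub");
var consumerGroup = client.GetConsumerGroup("Monitoring");

// Receive from a single partition, starting from now
var receiver = consumerGroup.CreateReceiver("0", DateTime.UtcNow);
foreach (var eventData in receiver.Receive(10))
{
    Console.WriteLine(Encoding.UTF8.GetString(eventData.GetBytes()));
}
```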

event hubs consumer


The .NET Micro Framework provides a great way for .NET developers to participate in the growing Internet-of-Things movement for a relatively small (< $100) outlay, while retaining the rich developer experience of familiar tools such as Visual Studio. Azure Event Hubs provides the platform for a cloud-based device gateway, allowing the ingestion of millions of events that downstream applications and services can consume for real-time analytics and ordered processing.

Fix: Azure Cloud Services Error: No deployments were found. Http Status Code: NotFound

If you find yourself having to move existing .Net solutions to Microsoft Azure, you may come across an initial deployment issue if you add a Cloud Service project type to your existing application and then publish it to a new Azure Cloud Service using Visual Studio. It’s not immediately obvious what the source of the issue is, so let’s take a look at how we can troubleshoot it.

Visual Studio provides you with the vanilla error message “Error: No deployments were found. Http Status Code: NotFound” which doesn’t provide much guidance on what the actual source is!

Visual Studio Deployment Error

The way to troubleshoot this is to package the solution you want to deploy and then manually deploy via the Azure Management Portal. Create a package by selecting the menu option shown below.

Package Option in Visual Studio

This option will produce a .cspkg file (essentially a zip file) and a .cscfg configuration file, which you provide in the Upload dialog in the Azure Management Portal, shown next.

Upload Package

After uploading and provisioning our new test Cloud Service we get this error back from the Portal.

Deployment Failed

When we click on the arrow we finally see the actual source of this failure!

Guest OS doesn't support this .Net Framework version.

The source of the issue (and the fix)

Cloud Services operate on Windows-based virtual machines that can run a range of “Guest OS” releases from Windows Server 2008 up to Windows Server 2012 R2. Each Guest OS release supports a different set of .Net framework versions as detailed on MSDN.

When you create a new Cloud Service project in Visual Studio it sets the default Guest OS to “2”, which based on the above MSDN article does not support .Net 4.5. The fix for this is relatively simple – you need to update the ServiceConfiguration (.cscfg) file so that the osFamily attribute has the correct value of “3” as per the below snippet.

<ServiceConfiguration serviceName="TestCloudService" xmlns="…" osFamily="3" osVersion="*" schemaVersion="2014-06.2.4">

Once you have done this you will be able to deploy your solution to Azure. Happy days!

Azure Active Directory Synchronization Services: How to Install, Backup & Restore with full SQL

Microsoft recently released the latest version of the Directory Synchronisation tool: Azure Active Directory Synchronisation Services (AADSync). The “one sync to rule them all” is likely going to be your first choice for synchronising identities to the Microsoft cloud.

Installing and configuring the tool is relatively straight forward for the majority of deployments and this process is well documented at the Microsoft Azure Documentation Centre. If your organisation has a large number of identities (100,000+), Microsoft recommends deploying the AADSync tool with a full installation of SQL. This process, including the backup & restoration of AADSync, is not so well documented and something I am going to cover in this post.

Installing AADSync with full SQL


Prerequisites

1. A Windows Server for AADSync. Windows Server 2008 through 2012 R2 is supported. PowerShell 3.0+ and .Net 4.5 are required.

2. A SQL Server. SQL Server 2008 onwards is supported.

3. An Office 365 account with Global Administrator permissions. You can set the password to never expire via PowerShell:
Set-MsolUser -UserPrincipalName <UserPrincipalName> -PasswordNeverExpires $true

4. An Active Directory user account to act as a service account. This doesn’t require any special permissions but you should set the password to never expire. In my demo this account is contoso\aadsync.

5. An Active Directory user account for the installation. This account should be a member of the Administrators group on the AADSync server & requires sysadmin privileges for the target SQL instance.

6. The AADSync Installation Media

Installing AADSync

1. Log on to the AADSync server with the installation account and launch the AADSync installation executable you downloaded earlier (MicrosoftAzureADConnectionTool.exe). Close the installation screen when it opens.

At this point, the installation process will have created a local folder in ‘C:\Program Files\Microsoft Azure AD Connection Tool’ containing the AADSync files, and a ‘DirectorySyncTool’ shortcut will have been created in the Start menu and on the desktop.

2. Open a command prompt (run as administrator) and execute the following command to install to a full SQL database:

DirectorySyncTool.exe /sqlserver localhost /sqlserverinstance <Instance Name> /serviceAccountDomain <Domain Name> /serviceAccountName <Service Account> /serviceAccountPassword <Password>

The /sqlserverinstance and /serviceAccount* parameters are all optional. If you don’t specify the SQL instance name, it will use the default instance. If you don’t specify the serviceAccount details, the installation will create a random service account to run the AADSync service. I highly recommend specifying the serviceAccount parameters so that you know the credentials; which will be required later for the backup & restore process.

In my demo, I decided to use the aadsync account and install to the default instance, so my installation command looks like:

“C:\Program Files\Microsoft Azure AD Connection Tool\DirectorySyncTool.exe” /sqlserver localhost /serviceAccountDomain contoso /serviceAccountName aadsync /serviceAccountPassword P@ssw0rd

3. At this point, the installation wizard will open and you can run through the configuration as normal; a process that is well documented here. Once the installation has completed a database called ‘ADSync’ will have been provisioned to the target SQL instance.

Backup the AADSync Service

There is no need to back up the AADSync server itself, only the database. You can use any standard backup process for your SQL database; however, the backup will be useless if you ever need to restore the database and attach a replacement AADSync server, unless you also have the encryption keys. To create a backup of the encryption keys:

1. Disable the ‘Azure AD Sync’ scheduled task and ensure all synchronisation jobs have completed.

2. Disable the ‘Microsoft Azure AD Sync’ service.

3. Start the ‘Synchronization Service Key Management’ tool from the Start menu. Make sure you ‘Run as administrator’.

4. Select ‘Export key set’ and click ‘Next’.

5. Enter the credentials of the AADSync service account (this is why we specified the credentials during the installation).

6. Specify a location to store the encryption key backup file and click ‘Next’ & ‘Finish’.

The wizard will have generated a .bin file in the location you specified. You need to store this in a location that will allow you to connect a replacement AADSync server to the database (i.e. not on the current AADSync server).
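For the database itself, any standard SQL backup will do. A minimal sketch, assuming the default ‘ADSync’ database name and a hypothetical backup path:

```sql
-- Full backup of the AADSync database (the disk path is illustrative only)
BACKUP DATABASE [ADSync]
TO DISK = N'D:\Backups\ADSync.bak'
WITH INIT, NAME = N'ADSync full backup';
```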

Restore the AADSync Service

1. Restore your AADSync SQL database from backup.

2. On your new AADSync server, install the AADSync service following the same steps you used for the initial install, specifying the remote SQL server and service account details. Setup will run through as usual and then display the following screen – note the ‘Unable to retrieve configuration settings..’ error:


The event viewer explains the issue in more detail:


At this point I logged off and on again to ensure my account had picked up the correct permissions (as a newly added member of the AADSyncAdmins security group).

3. Open a command prompt (run as administrator) and execute the following command to restore the encryption keys:

miisactivate.exe <encryption key file> <AADSync service account> <password>

In my lab, I executed: “C:\Program Files\Microsoft Azure AD Sync\Bin\miisactivate.exe” C:\Temp\AADSyncKey.bin contoso\aadsync *

4. A warning is displayed explaining that activating this server while the original is still running may cause data corruption. Our original server is no longer online so we can answer ‘Yes’.


5. Enter the password for the AADSync service account. Note that the * in step 3 will prompt you to enter the password.

6. You will receive a notification that the operation completed successfully. You will also see an event that shows the AADSync service is now up and running.



7. At this point you will need to re-enable the ‘Azure AD Sync’ scheduled task to resume normal operations.
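Re-enabling the task can also be scripted from an elevated command prompt. A sketch, using the task name as referenced above (verify the exact name in Task Scheduler for your build):

```shell
schtasks /Change /TN "Azure AD Sync" /ENABLE
```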

Fix: unable to delete an Azure Storage Container due to a lease.

A quick tip for anyone who gets stuck when trying to delete an Azure Blob Storage Container that appears to be empty but upon deletion generates the following helpful error message:

There is currently a lease on the container and no lease ID was specified in the request…

You’ve looked and there are no VHD or other objects appearing in this Container and you have found that you are unable to change the lease settings on the Container (yes, you can read them but you can’t update or remove them).

This is a little gotcha and is all down to Virtual Machine Images.

If you have created any Virtual Machine Images, their VHDs will be held in a Blob Container but will not show up as ordinary blobs. You will need to switch to the Virtual Machines > Images tab (shown below as I delete my old Images) in order to delete or otherwise update the Image so you can remove the Container.
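If you want to confirm which containers still hold a lease before hunting for the culprit, the Azure PowerShell module of the time can list lease status. A sketch, where the storage account name and key are placeholders:

```powershell
# List containers and their lease status (account name/key are placeholders)
$ctx = New-AzureStorageContext -StorageAccountName "myaccount" -StorageAccountKey "<key>"
Get-AzureStorageContainer -Context $ctx |
    Select-Object Name, @{ n = "LeaseStatus"; e = { $_.CloudBlobContainer.Properties.LeaseStatus } }
```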

Delete VM Images

Hope this helps you save a few minutes in your day!

Azure Table Storage little gem – InsertOrMerge

This blog describes the usage of the InsertOrMerge operation for Azure Table Storage.

Each entity in Table Storage is uniquely identified by its PartitionKey/RowKey combination. InsertOrMerge will insert the entity if it doesn’t exist and, if it does, merge the properties of the updated entity with the existing one. For more details, see the Azure Storage blog.

When compared with the existing table schema, not all properties need to be specified for this operation. InsertOrMerge only merges the specified properties; the rest are left untouched. As a result, this feature is very convenient – for instance, when aggregating dashboard data. There might be a background process that simultaneously assembles various pieces of dashboard data from multiple sources. This creates an obvious race condition, which InsertOrMerge solves in a straightforward way.

To demonstrate this, let’s define the following entity that we would like to store in Table Storage.

public class Dashboard : TableEntity
{
    public string AppleType { get; set; }
    public int ApplesCount { get; set; }
    public string OrangeType { get; set; }
    public int OrangesCount { get; set; }
}

Now, let’s define two other entities that contain only some of the existing properties of the Dashboard entity:

public class DashboardApple : TableEntity
{
    public string AppleType { get; set; }
    public int ApplesCount { get; set; }
}

public class DashboardOrange : TableEntity
{
    public string OrangeType { get; set; }
    public int OrangesCount { get; set; }
}

The following code saves the two entities defined above using the InsertOrMerge operation and then retrieves the full entity.

public async Task<Dashboard> Save(
    DashboardApple dashboardApple,
    DashboardOrange dashboardOrange)
{
    var storageAccount = ...
    var tableClient = storageAccount.CreateCloudTableClient();
    var table = tableClient.GetTableReference("Dashboard");

    var t1 = table.ExecuteAsync(TableOperation.InsertOrMerge(dashboardApple));
    var t2 = table.ExecuteAsync(TableOperation.InsertOrMerge(dashboardOrange));

    await Task.WhenAll(t1, t2);

    var tableResult = await table.ExecuteAsync(
        TableOperation.Retrieve<Dashboard>(
            dashboardApple.PartitionKey, dashboardApple.RowKey));

    return (Dashboard)tableResult.Result;
}

To test the above code, two entities with an identical PartitionKey/RowKey combination must be created:

var dashboardApple = new DashboardApple
{
    PartitionKey = "myPartition",
    RowKey = "myRow",
    AppleType = "GoldenDelicious",
    ApplesCount = 5
};

var dashboardOrange = new DashboardOrange
{
    PartitionKey = "myPartition",
    RowKey = "myRow",
    OrangeType = "Naval",
    OrangesCount = 10
};

var dashboard = await Save(dashboardApple, dashboardOrange);

This results in a single entity in the table with the following properties:

    PartitionKey = "myPartition",
    RowKey = "myRow",
    AppleType = "GoldenDelicious",
    ApplesCount = 5,
    OrangeType = "Naval",
    OrangesCount = 10

The data has been aggregated.

Extending Yammer SSO to Support Users Without an Email Address


Yammer Enterprise is offered through the Microsoft Office 365 Enterprise plan. Deployment of Yammer Single Sign-On (SSO) for Office 365 users with a valid primary email address is a relatively simple and well documented process.

One of our customers had a requirement for Yammer as a social platform; however, a large percentage of their workforce is not enabled for email services. The ‘SSO Implementation FAQ‘ published by Microsoft suggests that it is possible to configure SSO support for user accounts that do not have an email address associated with them, however there isn’t any supporting documentation to go with it.

The process outlined here assumes that Yammer SSO has already been enabled for users with a valid primary email address and that all user accounts have been configured with a publicly routable UserPrincipalName (UPN) suffix for logon. This blog post provides guidance for extending Yammer SSO to support users without an email address, which requires a custom claim configuration on ADFS and the Office 365 tenant.

ADFS Configuration

As in the image below, you should have an existing ‘Relying Party Trust’ configuration on ADFS if Yammer SSO is enabled for ordinary email enabled users.

Note: The ‘E-Mail Address’ at right side column for ‘Outgoing Claim Type‘ should be replaced with ‘SAML_SUBJECT’.

In order to extend support to users without a primary email address, the ‘samAccountName’ attribute will be used for the claim rule (you could also use the UserPrincipalName). Therefore the following four custom claim rules need to be created and configured on the ‘Issuance Transform Rules‘ tab under the ‘Relying Party Trusts‘ node of the ADFS management console.

1. Remove the existing rule for ‘E-Mail-Addresses‘ under ‘Issuance Transform Rules‘
2. Add the following custom rules in the order specified below to ensure the logic flows

Rule 1: Check for Email Address
– Click on Add Rules and select custom rule
– Insert the following text and save

@RuleName = "Check for Email"
c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY"]
=> add(store = "Active Directory", types = ("http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"), query = ";mail;{0}", param = c.Value);

Rule 2: Check for No Email Address
– Click on Add Rules and select custom rule
– Insert the following text and save

@RuleName = "No email"
NOT EXISTS([Type == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"])
=> add(Type = "http://emailCheck", Value = "NoEmail");

Rule 3: If No Email Address Exists Use samAccountName Attribute
– Click on Add Rules and select custom rule
– Insert the following text and save

@RuleName = "Send samAccountName for users without email"
c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY"]
&& [Type == "http://emailCheck", Value == "NoEmail"]
=> issue(store = "Active Directory", types = ("SAML_SUBJECT"), query = ";samAccountName;{0}", param = c.Value);

Rule 4: Use Primary Email Address if email address exists
– Click on Add Rules and select custom rule
– Insert the following text and save

@RuleTemplate = "LdapClaims"
@RuleName = "Send email to Yammer"
c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY"]
=> issue(store = "Active Directory", types = ("SAML_SUBJECT"), query = ";mail;{0}", param = c.Value);

The custom rules will be listed in the order they were created, as shown below:

Office 365 Tenant Configuration

You will need to raise a support request with Microsoft to set the ‘Allow Fake Email‘ option on the email domain being used for Yammer SSO. For all user accounts without a valid email address, the ‘Fake Email: true‘ flag will be set after authentication by ADFS, and the Microsoft Office 365 Support Engineer will be able to validate this for you.

Yammer Directory Synchronization Tool

Yammer DirSync is typically used for synchronising user account information between your Active Directory and Office 365 Yammer. Yammer DirSync does not officially support user accounts without a valid primary email address as stated in the Yammer Directory Synchronization FAQ:

As such, the recommended way to do this would be to manually synchronise your user list to Yammer by using a CSV. To automate the synchronisation for user accounts without an email address, custom coding through the Yammer REST API would be required.

As documented in the Yammer configuration guide, Yammer DirSync only requires two attributes, GUID and mail, to be set on user accounts for it to work. As a workaround, it would be possible to populate the mail attribute in Active Directory with the ‘fake’ email address for the user accounts you would like to synchronise; however, this may not be a suitable approach for every environment.
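As a sketch of that workaround, the stamping could be scripted with the Active Directory PowerShell module. The OU path and the address pattern here are illustrative assumptions only; test carefully before running against production accounts:

```powershell
# Sketch: populate mail with a 'fake' routable address for accounts that have none.
# SearchBase and the address suffix are placeholders for illustration.
Get-ADUser -Filter { mail -notlike "*" } -SearchBase "OU=Staff,DC=contoso,DC=com" |
    ForEach-Object { Set-ADUser $_ -EmailAddress ("{0}@contoso.com" -f $_.SamAccountName) }
```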