Command and control with Arduino, Windows Phone and Azure Mobile Services

In most of our posts on the topic of IoT to date we’ve focussed on how to send data emitted from sensors and devices to centralised platforms where we can further process and analyse this data. In this post we’re going to have a look at how we can reverse this model and control our ‘things’ remotely by utilising cloud services. I’m going to demonstrate how to remotely control a light emitting diode (LED) strip with a Windows Phone using Microsoft Azure Mobile Services.

To control the RGB LED strip I'm going to use an Arduino Uno, a breadboard and some MOSFETs (a type of transistor). The LED strip requires more power than the Arduino can supply, so I'm using a 9V battery as a power supply. This supply needs to be kept separate from the Arduino power circuit, which is why MOSFET transistors are used to switch the LEDs on and off.

The Arduino Uno will control the colour of the light by controlling three MOSFETs – one each for the red, blue and green LEDs. The limited programmability of the Arduino Uno means we can’t establish an Azure Service Bus relay connection, or use Azure Service Bus queues. Luckily Azure Mobile Services allow us to retrieve data via plain HTTP.

A Windows Phone app will control the colour of the lights by sending data to the mobile service. The Arduino Uno then retrieves this data from the service and sets the colour using a technique called 'pulse width modulation' on the red, green and blue LEDs. Pulse width modulation adjusts the apparent brightness of each LED by rapidly switching it on and off; by varying the duty cycle of the red, green and blue channels we can mix a wide range of colours.

For the purpose of this example we won't incorporate any authentication in our application, though you can easily enforce authentication for your Mobile Service with a Microsoft Account by following the Mobile Services authentication guides.

A diagram showing our overall implementation is shown below.

Command and Control diagram

Mobile service

We'll start by creating an Azure Mobile Service in the Azure portal; for the purpose of this demonstration we can use the service's free tier, which provides up to 20MB of data storage per subscription.

Navigate to the Azure portal and create a new service:

Creating a Mobile Service 1

Next, choose a name for your Mobile Service, your database tier and geographic location. We'll choose a JavaScript backend for simplicity in this instance.

Creating a Mobile Service 2

Creating a Mobile Service 3

In this example we’ll create a table ‘sensordata’ with the following permissions:

Mobile Service Permissions

These permissions allow us to insert records from our Windows Phone app with the application key, and let the Arduino Uno retrieve data without any security. We could make the insertion of new data secure by demanding authentication from our Windows Phone device without too much effort, but for the purpose of this demo we'll stick to this very basic form of protection.
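If you later decide to require authentication for inserts, the managed Mobile Services SDK makes this a small change on the client. The sketch below assumes a MobileServiceClient exposed as App.MobileService (the convention used by the portal quick-start code) and that the table's insert permission has been changed to 'Only Authenticated Users':

private async Task<MobileServiceUser> AuthenticateAsync()
{
	// Shows the Microsoft Account sign-in UI and returns the authenticated user.
	// Requires the Microsoft.WindowsAzure.MobileServices namespace.
	try
	{
		return await App.MobileService.LoginAsync(MobileServiceAuthenticationProvider.MicrosoftAccount);
	}
	catch (InvalidOperationException)
	{
		// Thrown when the user cancels the sign-in dialog.
		return null;
	}
}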

In the next section we’re going to create a Windows Phone application to send commands to our mobile service.

Windows Phone Application

To control the colour in a user friendly way we will use a colour picker control from the Windows Phone Toolkit, which can be installed as a NuGet package. This toolkit is not compatible with Windows Phone 8.1 yet, so we’ll create a Windows Phone Silverlight project and target the Windows Phone 8.0 platform as shown below.

Visual Studio Create Project 1

Visual Studio Create Project 2

Next, we’ll install the ‘Windows Phone Toolkit’ NuGet package as well as the mobile services NuGet package:

Install Windows Phone Toolkit Nuget

Install Mobile Services NuGet

For the purpose of this demo we won’t go through all the colour picker code in detail here. Excellent guidance on how to use the colour picker can be found at on the Microsoft Mobile Developer Wiki.

The code that sends the selected colour to our mobile service table is as follows.
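A minimal sketch of that code is shown below, assuming a SensorData class mapped to the 'sensordata' table and a MobileServiceClient (App.MobileService) initialised with the service URL and application key; the full listing is in the GitHub repository linked further down.

// Maps to the 'sensordata' table created earlier.
public class SensorData
{
	public string Id { get; set; }
	public string DeviceId { get; set; }
	public string SensorId { get; set; }
	public string EventType { get; set; }
	public string EventData { get; set; }
}

// Sends the selected colour to the mobile service as "R;G;B".
private async Task SendColourAsync(Color colour)
{
	var item = new SensorData
	{
		DeviceId = "PhoneEmulator",
		SensorId = "ColorPicker",
		EventType = "RGB",
		EventData = string.Format("{0};{1};{2}", colour.R, colour.G, colour.B)
	};

	await App.MobileService.GetTable<SensorData>().InsertAsync(item);
}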

The event data consists of colour data in the RGB model, separated by semicolons.

The complete working solution can be found in this GitHub repository. Make sure you point it to the right Azure Mobile Service and change the application key before you use it!

Run the application and pick a colour on the phone as shown below.

Phone ScreenShot

Now that we have a remote control that is sending out data to the Mobile Service it’s time to look at how we can use this information to control our LED strip.

Arduino sketch

In order to receive commands from the Windows Phone app we are going to use OData queries to retrieve the last inserted record from the Azure Mobile Service, which exposes table data via OData out of the box. We can easily get the last inserted record in JSON format via an HTTP GET request to a URL similar to the following:

https://myiotservice.azure-mobile.net/tables/sensordata?$top=1&$orderby=__createdAt%20desc

When we send an HTTP GET request to this URL, a response body like the following is returned:

[
  {
    "id":"A086CE3F-5FD3-45B6-A967-E0928E3C5A96",
    "DeviceId":"PhoneEmulator",
    "SensorId":"ColorPicker",
    "EventType":"RGB",
    "EventData":"0;0;255"
  }
]

Notice how the colour is set to blue in the RGB data.

The Arduino schematics for the solution:

Arduino Command Control Schematic

For illustrative purposes I’ve drawn a single LED. In reality I’m using a LED strip that needs more voltage than the Arduino can supply, hence the 9V battery is attached and MOSFET transistors are used. Don’t attach a 9V battery to a single LED or it will have a very short life…

The complete Arduino sketch:

When we run the sketch the JSON data will be retrieved, and the colour of the LED strip set to blue:

The Working Prototype!

In this article I’ve demonstrated how to control a low-end IoT device that does not have any HTTPS/TLS capabilities. This scenario is far from perfect, and ideally we would take additional security measures to prevent unauthorised access to our IoT devices and transport data. In a future article I will showcase how we can resolve these issues by using a much more powerful device than the Arduino Uno with an even smaller form factor: the Intel Edison. Stay tuned!

Xamarin Test Cloud – the new kid on the block?

Early last year I was working with one of our customers to find an optimal test solution for their upcoming mobile application. The idea was that it should be heavily automated, efficient and cost-effective. My first observation (and feeling) was that we had very little in the way of tool choices. That was unfortunate, but it was the reality.

Almost a year later, when looking back and reflecting on some of those findings, I can see things have changed immensely. There have been announcements from key industry players almost every month, and new promises are being made! But then again, is it sufficient?

While development of mobile applications has become much simpler, the same is not quite true when it comes to testing them.

I earlier wrote a post on ‘what you should consider’ when you look to adopt a test platform/approach. While it always seems ideal to go by those suggestions, it is not often possible, partly because there is no all-in-one toolset/solution that can tick all the boxes. In the constantly evolving mobile application testing market, there is no gold standard for choosing your test solution either – you have options ranging from feature-rich, flexible open source platforms to more organised and supported licensed platforms, each with their own pros and cons.

I recently discovered a test solution provided by Xamarin, namely Xamarin Test Cloud. While Xamarin is strong in the application development space, it is a relatively new entrant in the testing space. Regardless, I must say I am quite impressed with what they have so far! While it is still in its infancy, and it will take them some time to compete with the other key players, within the Xamarin ecosystem this might play out well.

I will spend some time in this blog on how you can quickly set yourself up to run some tests against Xamarin Test Cloud. I am consciously not going into the architecture and other details as you can find those references on their website.

What is Xamarin Test Cloud?

Xamarin Test Cloud is a web-based cloud service where you can run your tests against a wide range of physical mobile devices that are managed by Xamarin in their lab. As with other cloud-based service providers, the obvious advantage is that you do not have to worry about the device lab and you only pay for what you use. You can use Xamarin Test Cloud in two different ways:

  1. via the Xamarin.UITest NuGet package (Xamarin-provided API)
  2. via Calabash

How to run tests using Xamarin.UITest?

We will talk about the first option here for the sake of simplicity. Here are my high-level steps.

Step 1

The first thing you need to do is create a test project and download the Xamarin.UITest and NUnit NuGet packages.

As a side note, Xamarin tests use the NUnit framework and do not support MSTest yet. The focus on NUnit is because the tests can also be run on OS X/Linux boxes, which don’t support MSTest.

Test project setup

Step 2

You can now go ahead and write your tests – in this example, I have used an Android app (.apk) from the Selendroid project.

In order to write tests, you need to use the Xamarin.UITest API, which provides you with ways to interact with your AUT (the mobile application under test). Xamarin’s website has detailed documentation on the scripting process and how you can get started pretty quickly.

Here is how you can set up your application to run against the device or emulator. If you want to debug and run your tests against a local emulator, you can install one from Google or use another emulator available in the market.

Unit Test Setup Code Snippet
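In code, the setup shown in the screenshot looks roughly like the following. This is a sketch: the .apk file name matches the one used in the Test Cloud command later in this post and is assumed to sit alongside the test assembly when running locally.

using NUnit.Framework;
using Xamarin.UITest;

[TestFixture]
public class SelendroidTests
{
	IApp app;

	[SetUp]
	public void BeforeEachTest()
	{
		// When running locally, point Xamarin.UITest at the .apk file.
		// Test Cloud supplies the app itself, so the path is ignored there.
		app = ConfigureApp
			.Android
			.ApkFile("aut-test-app-0.12.0.apk")
			.StartApp();
	}

	// [Test] methods are added to this fixture (see the sample test below).
}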

A simple unit test looks like the one below. It taps on a pop-up button and then dismisses the pop-up. You could add assert statements to make it a complete test – something I have not done here!

Sample Unit Test
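A sketch of such a test, added to the fixture above, might look like this. The element id is hypothetical and would need to match the actual control in the Selendroid test app.

[Test]
public void ShowAndDismissPopup()
{
	// Tap the button that opens the pop-up ("showPopupWindowButton" is a placeholder id).
	app.Tap(c => c.Marked("showPopupWindowButton"));
	app.Screenshot("Pop-up displayed");

	// Dismiss the pop-up again.
	app.Back();
	app.Screenshot("Pop-up dismissed");

	// An Assert on app.Query(...) would turn this into a complete test.
}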

Running your test against a local emulator

Once you build the solution, you can run the tests using the standard NUnit interface. The only word of advice is to make sure you set the path to your .apk file properly. Ideally the .apk file should be in the same path as the test DLLs. Note that you do not need to worry about the path in the code when you run against Test Cloud.

NUnit Test Runner

The android emulator will load the application and the test framework will run its automation scripts.

Android Emulator

Running your tests against Xamarin Test Cloud

Now you can run the same tests against Xamarin Test Cloud – leveraging hundreds of devices! All you need to do is specify the devices you want your tests to run against.

You need to have a Xamarin subscription to do this (if you don’t have one you can request a trial subscription for 14 days if you want to try it out first).

From the Test Cloud interface, once you specify the devices and your test suite, it will provide you with a command that contains the subscription details (an API key), the path to your test DLLs and the path to your application file (.apk). You can run a test using a command line similar to the one below.

test-cloud.exe submit aut-test-app-0.12.0.apk apikeyprovidedbyxamarinxxxyyyyyzzzzzzz --devices devicekeygeneratedbytestcloud --series "master" --locale "en_US" --assembly-dir "C:\XamarinTest\XamarinTest\bin\Debug"

Note: this is run from the path where the Xamarin.UITest NuGet package was downloaded. If you follow the path, you will find ‘test-cloud.exe’ under ‘packages\Xamarin.UITest.[0.6.8]\tools’. This executable submits the test run and connects to Xamarin Test Cloud.

As an example, the same command can be run as below, where I have moved to the directory where test-cloud.exe resides and executed the command from there.

Test Cloud run screenshot

The command line will show you the progress of your test run. In this example, I am running my tests against three Android devices in Test Cloud.

Test run results in Test cloud

The Test Cloud infrastructure will track results for you and you can follow them through the Test Cloud web interface as shown below.

Test results in Test Cloud web interface

Once a test run is finished, the Test Cloud dashboard will indicate the completed status and you can drill down to see the results from each of the devices.

Dashboard and test results

Dashboard sample 1

Dashboard sample 2

Dashboard sample 3

Dashboard sample 4

Individual test runs:

You can click on the test runs and find the results for each of the devices. On the left-hand side you should be able to view all your tests and the individual steps:

Individual Test run sample

And when you focus on a single device, you see something like this:

Individual Test run sample

As you can see, this gives us a fair amount of detail about the test run!

The way I see it, Test Cloud gives us three distinct advantages:

  1. Diversity of devices: the ability to run our tests against many devices helps us to see the application behaviour on far more devices than we would traditionally have had access to. That to me is a huge win – being able to compare the behaviour.
  2. Script once: your tests do not need to change for devices – it is the same code!
  3. Identify the failure points: in mobile application testing, finding the “point of failure” is far more important than seeing what is working. In Test Cloud you instantly get feedback if certain steps do not work on any specific device.

At the same time, the wishlist of missing features is also long. I think Xamarin will have to continue to add new features to keep the offering relevant and distinct from the competition. Examples include easier integration with some of the popular build management systems (it can be done in a somewhat roundabout way in the current version), support for Windows Phone, and a truly cross-platform environment where you can run your Appium (or any other) tests. The power and reach would increase tenfold!

Mobile application testing has become a focal point for the industry, especially with the progress of the Appium project and its alignment with Selenium 3, and we can only hope it gets better every day. If you look back at the past two years, this space has already matured many times over and new features are being released almost every week. But so have the complexity and people’s expectations of applications.

The huge year-on-year growth of applications and increasing usage promises an exciting future for mobile applications. This is why it is key for our mobile applications to be well tested, because applications become truly ‘exciting’ when they deliver what you want and not when all you get is “Sorry… something went wrong”!

Kloud develops online learning portal for leading education organisation

Customer Overview

Catholic Education South Australia (CESA) is made up of the South Australian Commission for Catholic Schools (SACCS), Catholic Education Office of South Australia (CEO) and 103 Catholic schools across South Australia. The organisation comprises 6,000 staff who care for more than 48,000 students.

Business Situation

Catholic Education South Australia recently made the decision to offer the capabilities of Office 365 to its 103 schools across the state (including Exchange email, Lync, SharePoint and Office on Demand). As part of this offering, CESA sought to leverage Office 365 to provide each school with a portal for students and teachers to collaborate.

Solution

Kloud worked with CESA to ensure comprehensive understanding of the requirements and delivered a solution design document based on the needs of the organisation. Following acceptance of the design document, Kloud commenced configuration of the tenant (in particular SharePoint Online) in readiness for the deployment of the templates that were to be created. Kloud worked closely with the Learning and Technology Team to create conceptual designs for the following types of templates that would be used within each school portal:

  • School
  • Class
  • Community
  • Professional Learning.

From these designs Kloud developed prototypes in SharePoint and iteratively refined them with regular reviews from the Learning and Technologies Team. The final solution included a custom application which created the school sites in Office 365 and a remote provisioning application in Azure for self-service site creation. The latter provided teachers with a mechanism to create their own class, community and professional learning sites based on the predefined template which they could then fine-tune to suit their needs.

Benefits

The school portal empowers students and teachers to collaborate in a safe and well-monitored environment. They can now easily share documents, images and videos as well as create blogs or post questions in a single place.

Through the class sites, students will be able to spend their time collaborating with others in their class as well as teachers who will provide additional resources and oversight. The community sites allow students to join groups of interest, either social or academic, and is a great way for like-minded students to expand their learning or be more informed about what is happening. Likewise, the professional learning sites allow teachers to share ideas and resources about a subject or stream which will translate to better learning outcomes for students.

“CESA’s Learning and Technologies team worked with consultants from Kloud on the functionality and design of the Office 365 and SharePoint Online templates for schools. We were impressed by the communication and analytical skills used to meet our organisation’s needs and to inform its direction. High levels of expertise supported the project, as well as knowledge of the product, solid task prioritisation, budget management and timely reporting” – Karen Sloan, Learning and Technologies Senior Education Advisor, CESA.

Sharing Azure SSO Access Tokens Across Multiple Native Mobile Apps

This blog post is the fourth and final in the series that cover Azure AD SSO in native mobile applications.

  1. Authenticating iOS app users with Azure Active Directory
  2. How to Best handle AAD access tokens in native mobile apps
  3. Using Azure SSO tokens for Multiple AAD Resources From Native Mobile Apps
  4. Sharing Azure SSO Access Tokens Across Multiple Native Mobile Apps (this post).

Introduction

Most enterprises have more than one mobile app and it’s not unusual for these mobile apps to interact with some back-end services or APIs to fetch and update data. In the previous posts of this series we looked at how to manage access to APIs and share tokens as a way to enable a mobile app to interact with multiple AAD-secured resources.

This post will cover how to share access tokens to AAD resources across multiple mobile apps. This is very useful if the enterprise (or any mobile app vendor) wants to provide convenience for users by not asking them to authenticate in every mobile app they use.

Once a mobile user logs into Azure AD and gets a token we want to reuse the same token with other apps. This is suitable for some scenarios, but it might not be ideal for apps and resources that deal with sensitive data – the judgement is yours.

Sharing Tokens Across Multiple Mobile Apps

Moving on from previous posts, it is now time to enable our mobile apps to share the access token. In this scenario we are covering iOS devices (iOS 6 and above); however, other mobile platforms provide similar capabilities too. So what are the options for sharing data on iOS?

KeyChain Store

iOS offers developers a simple utility for storing and sharing keys and secrets: the keychain (accessed via the SecKeyChain API). It has been part of the iOS platform since before iOS 6, and in iOS 6 Apple integrated it with iCloud, making it easy to push saved passwords and keys to Apple iCloud and share them across multiple devices.

We could use iOS SecKeyChain to store the token (and the refreshToken) once the user logs in on any of the apps. When the user starts using any of the other apps, we check the SecKeyChain first before attempting to authenticate the user.

public async Task AsyncInit(UIViewController controller, ITokensRepository repository)
{
	_controller = controller;
	_repository = repository;
	_authContext = new AuthenticationContext(authority);
}

public async Task<string> RefreshTokensLocally()
{
	var refreshToken = _repository.GetKey(Constants.CacheKeys.RefreshToken, string.Empty);
	var authorizationParameters = new AuthorizationParameters(_controller);

	var result = "Refreshed an existing Token";
	bool hasARefreshToken = true;

	if (string.IsNullOrEmpty(refreshToken)) 
	{
		var localAuthResult = await _authContext.AcquireTokenAsync(
			resourceId1, clientId, 
                        new Uri (redirectUrl), 
                        authorizationParameters, 
                         UserIdentifier.AnyUser, null);

		refreshToken = localAuthResult.RefreshToken;
		_repository.SaveKey(Constants.CacheKeys.WebService1Token, localAuthResult.AccessToken, null);


		hasARefreshToken = false;
		result = "Acquired a new Token"; 
	} 

	var refreshAuthResult = await _authContext.AcquireTokenByRefreshTokenAsync(refreshToken, clientId, resourceId2);
	_repository.SaveKey(Constants.CacheKeys.WebService2Token, refreshAuthResult.AccessToken, null);

	if (hasARefreshToken) 
	{
		// this will only be called when we try refreshing the tokens (not when we are acquiring new tokens).
		refreshAuthResult = await _authContext.AcquireTokenByRefreshTokenAsync(refreshAuthResult.RefreshToken,  clientId,  resourceId1);
		_repository.SaveKey(Constants.CacheKeys.WebService1Token, refreshAuthResult.AccessToken, null);
	}

	_repository.SaveKey(Constants.CacheKeys.RefreshToken, refreshAuthResult.RefreshToken, null);

	return result;
}

Some of the above code will be familiar from previous posts. What has changed is that we now pass an ITokensRepository, which saves any tokens (and refresh tokens) once the user logs in, making them available to other mobile apps.

I have intentionally passed an interface (ITokensRepository) to allow for different implementations, in case you opt to use a different approach for sharing the tokens. The internal implementation of the concrete TokensRepository is something like this:

public interface ITokensRepository 
{
	bool SaveKey(string key, string val, string keyDescription);
	string GetKey(string key, string defaultValue);
	bool SaveKeys(Dictionary<string,string> secrets);
}

public class TokensRepository : ITokensRepository
{
	private const string _keyChainAccountName = "myService";

	public bool SaveKey(string key, string val, string keyDescription)
	{
		var setResult = KeychainHelpers.SetPasswordForUsername(key, val, _keyChainAccountName, SecAccessible.WhenUnlockedThisDeviceOnly, false );

		return setResult == SecStatusCode.Success;
	}

	public string GetKey(string key, string defaultValue)
	{
		return KeychainHelpers.GetPasswordForUsername(key, _keyChainAccountName, false) ?? defaultValue;
	}
		
	public bool SaveKeys(Dictionary<string,string> secrets)
	{
		var result = true;
		foreach (var key in secrets.Keys) 
		{
			result = result && SaveKey(key, secrets [key], string.Empty);
		}

		return result;
	}
}

iCloud

We could use Apple iCloud to push the access tokens to the cloud and share them with other apps. The approach would be similar to what we have done above, with the only difference being the way we store the keys: instead of storing them locally, we push them to Apple iCloud directly. As the SecKeyChain implementation above already supports pushing data to iCloud, I won’t go through the implementation details here and will simply note that the option is available to you.

Third Party Cloud Providers (e.g. Azure)

This is similar to the previous option, but offers more flexibility. It is a very good solution if we are already using Azure Mobile Services for our mobile app: we can create one more table and use it to store and share access tokens. The implementation could be similar to the following:

public async Task<string> RefreshTokensInAzureTable()
{
	var tokensListOnAzure = await tokensTable.ToListAsync();
	var tokenEntry = tokensListOnAzure.FirstOrDefault();
	var authorizationParameters = new AuthorizationParameters(_controller);

	var result = "Refreshed an existing Token";
	bool hasARefreshToken = true;

	if (tokenEntry == null) 
	{
		var localAuthResult = await _authContext.AcquireTokenAsync(resourceId1, clientId, new Uri (redirectUrl),  authorizationParameters, UserIdentifier.AnyUser, null);

		tokenEntry = new Tokens {
			WebApi1AccessToken = localAuthResult.AccessToken,
			RefreshToken = localAuthResult.RefreshToken,
			Email = localAuthResult.UserInfo.DisplayableId,
			ExpiresOn = localAuthResult.ExpiresOn
		};
		hasARefreshToken = false;
		result = "Acquired a new Token"; 
	} 
		
	var refreshAuthResult = await _authContext.AcquireTokenByRefreshTokenAsync(tokenEntry.RefreshToken, 
                                                                                    clientId, 
                                                                                    resourceId2);
	tokenEntry.WebApi2AccessToken = refreshAuthResult.AccessToken;
	tokenEntry.RefreshToken = refreshAuthResult.RefreshToken;
	tokenEntry.ExpiresOn = refreshAuthResult.ExpiresOn;

	if (hasARefreshToken) 
	{
		// this will only be called when we try refreshing the tokens (not when we are acquiring new tokens).
		refreshAuthResult = await _authContext.AcquireTokenByRefreshTokenAsync (refreshAuthResult.RefreshToken, clientId, resourceId1);
		tokenEntry.WebApi1AccessToken = refreshAuthResult.AccessToken;
		tokenEntry.RefreshToken = refreshAuthResult.RefreshToken;
		tokenEntry.ExpiresOn = refreshAuthResult.ExpiresOn;
	}

	if (hasARefreshToken)
		await tokensTable.UpdateAsync (tokenEntry);
	else
		await tokensTable.InsertAsync (tokenEntry);

	return result;
}

Words of Warning

Bearer Tokens

Developers need to understand bearer tokens when using Azure AD authentication. A bearer token means that anybody who holds the token (the bearer) can access and interact with your AAD resource. This offers great flexibility, but it can also be a security risk if a token is exposed. Keep this in mind when implementing any token-sharing mechanism.
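To make that risk concrete, a bearer token is simply attached to the request; the service validates the token, not who presents it. A minimal sketch with HttpClient (the API URL is a placeholder):

// Anyone holding 'accessToken' can call the AAD-protected API.
var client = new HttpClient();
client.DefaultRequestHeaders.Authorization =
	new AuthenticationHeaderValue("Bearer", accessToken);

var response = await client.GetAsync("https://my-protected-api.example.com/api/data");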

iOS SecKeyChain is “Secure”

iOS SecKeyChain is “secure”, right? No, not at all. Apple calls it secure, but on jailbroken devices you can read the key store like a normal file. I would therefore highly recommend encrypting access tokens and any keys you want to store before persisting them. The same goes for iCloud, Azure, or any of the other approaches we went through above.
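As an illustration of that recommendation, here is a minimal sketch that encrypts a token with AES before handing it to the repository. It is not production-grade crypto – the AES key itself still has to be protected (for example derived per user or provisioned from the server):

using System;
using System.IO;
using System.Security.Cryptography;
using System.Text;

public static string EncryptToken(string token, byte[] key)
{
	// 'key' must be a 128/192/256-bit secret that is itself stored safely.
	using (var aes = Aes.Create())
	{
		aes.Key = key;
		aes.GenerateIV();

		using (var encryptor = aes.CreateEncryptor())
		using (var ms = new MemoryStream())
		{
			// Prepend the IV so the token can be decrypted later.
			ms.Write(aes.IV, 0, aes.IV.Length);

			byte[] plain = Encoding.UTF8.GetBytes(token);
			byte[] cipher = encryptor.TransformFinalBlock(plain, 0, plain.Length);
			ms.Write(cipher, 0, cipher.Length);

			return Convert.ToBase64String(ms.ToArray());
		}
	}
}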

Apple AppStore Verification

If you intend to submit your app to the Apple AppStore, then you need to be extra careful about which approach you take to share data between your apps. For enterprises (locally deployed apps), you have control and you can make the call based on your use case. However, Apple has a history of rejecting apps (e.g. PastePane) for using some iOS APIs in an “unintended” manner.

I hope you found this series of posts useful. As usual, if there is something that is not clear or you need some help with similar projects you are undertaking, then get in touch and we will do our best to help. I have pushed the sample code from this post and the previous ones to GitHub, where it can be found here.

Has.

This blog post is the fourth and final in the series that cover Azure AD SSO in native mobile applications.

  1. Authenticating iOS app users with Azure Active Directory
  2. How to Best handle AAD access tokens in native mobile apps
  3. Using Azure SSO tokens for Multiple AAD Resources From Native Mobile Apps
  4. Sharing Azure SSO Access Tokens Across Multiple Native Mobile Apps (this post).

Get Started with Docker on Azure

Originally posted on siliconvalve:

The most important part of this whole post is that you need to know that the whale in the Docker logo is officially named “Moby Dock“. Once you know that you can probably bluff your way through at least an introductory session on Docker :).

It’s been hard to miss the increasing presence of Docker, particularly if you work in cloud technology. Each of the major cloud providers has raced to provide container services (Azure, AWS, GCE and IBM) and these platforms see benefits in the higher density hosting they can achieve with minimal changes to existing infrastructure.

In this post I’m going to look at first steps to getting Docker running in Azure. There are other posts about that will cover this but there are a few gotchas along the way that I will cover off here.

First You Need a Beard

Anyone…

View original 874 more words

Using a Proxy with Azure AD Sync Services

In this blog I am going to cover some tips and tricks for using Azure AD Sync Services with a proxy… including the specific URLs required for whitelisting, the proxy settings used during the installation, configuration and running of the tool, and a workaround for apps that do not support authenticating proxies.

URL Whitelisting

It is generally recommended to whitelist all the Office 365 URLs so that they bypass proxy infrastructure, as this provides the best performance and avoids issues with applications that are not compatible with authenticating proxies (OneDrive for Business client installations, Exchange Hybrid services, Azure AD Sync Services and so on). Although this is the easiest path to adoption and the least likely to encounter technical issues, it is not always possible. This is particularly true for security-conscious organisations, where whitelisting wildcard addresses may be undesirable.

If you want to be specific with the URLs required for Azure AD Sync Services, the following URLs must bypass proxy authentication:

  • adminwebservice.microsoftonline.com
  • login.microsoftonline.com

Proxy Settings

When you run through the DirectorySyncTool.exe wizard to install and configure Azure AD Sync Services, at the point where you first enter your Azure AD credentials the wizard will use the proxy settings defined for the currently logged-on Windows user. In this instance, make sure you’ve configured your proxy settings in Internet Options (inetcpl.cpl) for the user running the installation.

In step 8 (Configure), the installation wizard connects to and configures Azure Active Directory. This step of the wizard attempts an outbound HTTPS connection to login.microsoftonline.com using the proxy settings defined for the Azure AD Sync Services service account. This service account is either the one you specified during the installation (if you ran DirectorySyncTool.exe with the /serviceAccount parameter), or the one that was automatically created by the wizard.

I’ve previously written about my recommendations to specify a service account for the installation so that you know the credentials. In this case you can easily configure the proxy settings by launching inetcpl.cpl with the service account. For example:

runas /user:<domain>\<AADSync Service Account> "control.exe inetcpl.cpl"

Once the Azure AD Sync Services installation is complete, all synchronisation events are going to run under the context of the Azure AD Sync Services service account and will rely on the proxy settings defined in inetcpl.cpl.

AADSync with an authenticating Proxy

If for some reason you can’t bypass an authenticating proxy for AADSync, or you’re desperate to get AADSync up and running while you wait for the proxy admin to add the URLs to a whitelist (my scenario), CNTLM to the rescue! I used this recently to get Azure AD Sync Services working with an authenticating proxy and it’s as easy as:

  1. Download and install CNTLM on the AADSync server
  2. Configure the cntlm.ini with the proxy server and authentication details (you can save the account password or an NTLM hash, for those that are concerned about saving credentials in plain text)
  3. Start the CNTLM service
  4. Configure CNTLM as your proxy in Internet Settings (default is 127.0.0.1:3128)
  5. Install and Configure AADSync

AADSync – AD Service Account Delegated Permissions

When you configure Azure AD Sync (AADSync), you need to provide the credentials of an account that is used by AADSync’s AD DS Management Agent to connect to your on-premises Active Directory. In previous versions of DirSync this was achieved by running the configuration wizard as an ‘Enterprise Admin’, allowing the installer to create a service account and apply permissions to the directory on your behalf. The account could have any of the following permissions to your AD DS, depending on your choices for the purpose of the sync:

  1. Write Access to User, Contact, Groups Attributes – Hybrid Exchange Deployment
  2. Password Changes – Password Synchronisation
  3. Write Access to Passwords – Password Write-back (AAD Premium)

There has been a lot of talk lately about new tools to help you configure synchronisation of your directory with ‘this many clicks’. While these new tools do some great pre-requisite checks and wrap everything into a nice shiny wizard that guides you through the experience, they currently put the burden of creating this service account and applying AD DS permissions back on you. It is now your responsibility to raise a change with the Active Directory team, in which you will need to explain how you are going to splatter permissions all over their directory.

So we should reassure the Active Directory team that we can create a service account and apply least-privilege permissions on the directory for this account using the following script(s).

Apply all that are appropriate to your scenario:

Exchange Hybrid Deployment:

For rich co-existence between your on-premises Exchange infrastructure and Office 365 you must allow the service account to write back attributes to your on-premises environment.

Configure Hybrid Write-back:

###--------variables
$DN = "OU=Users,OU=Company,DC=mydomain,DC=com"
$Account = "mydomain\svc_aadsync"

###--------variables

<#
Switches used in cmds

http://technet.microsoft.com/en-us/library/cc771151.aspx

/I:S = Specifies the objects to which you are applying the permissions.'S' - The child objects only
/G = Grants the permissions that you specify to the user or group
WP = Write to a property Permission

#>

###---Update Attributes

#Object type: user
$cmd = "dsacls '$DN' /I:S /G '`"$Account`":WP;proxyAddresses;user'"
Invoke-Expression $cmd
$cmd = "dsacls '$DN' /I:S /G '`"$Account`":WP;msExchUCVoiceMailSettings;user'"
Invoke-Expression $cmd
$cmd = "dsacls '$DN' /I:S /G '`"$Account`":WP;msExchUserHoldPolicies;user'"
Invoke-Expression $cmd
$cmd = "dsacls '$DN' /I:S /G '`"$Account`":WP;msExchArchiveStatus;user'"
Invoke-Expression $cmd
$cmd = "dsacls '$DN' /I:S /G '`"$Account`":WP;msExchSafeSendersHash;user'"
Invoke-Expression $cmd
$cmd = "dsacls '$DN' /I:S /G '`"$Account`":WP;msExchBlockedSendersHash;user'"
Invoke-Expression $cmd
$cmd = "dsacls '$DN' /I:S /G '`"$Account`":WP;msExchSafeRecipientsHash;user'"
Invoke-Expression $cmd
#Object type: group
$cmd = "dsacls '$DN' /I:S /G '`"$Account`":WP;proxyAddresses;group'"
Invoke-Expression $cmd
#Object type: contact
$cmd = "dsacls '$DN' /I:S /G '`"$Account`":WP;proxyAddresses;contact'"
Invoke-Expression $cmd

 

Validate

Use DSACLS to validate your settings:

dsacls "\\DCHostname.mydomain.com\OU=Users,OU=Company,DC=mydomain,DC=com"

Your output should resemble:

Inherited to user
 Allow BUILTIN\Pre-Windows 2000 Compatible Access
                                       SPECIAL ACCESS for Group Membership   <Inherited from parent>
                                       READ PROPERTY
 Allow MYDOMAIN\svc_aadsync           SPECIAL ACCESS for msExchSafeSendersHash
                                       WRITE PROPERTY
 Allow MYDOMAIN\svc_aadsync           SPECIAL ACCESS for msExchArchiveStatus
                                       WRITE PROPERTY
 Allow MYDOMAIN\svc_aadsync           SPECIAL ACCESS for msExchUCVoiceMailSettings
                                       WRITE PROPERTY
 Allow MYDOMAIN\svc_aadsync           SPECIAL ACCESS for msExchBlockedSendersHash
                                       WRITE PROPERTY
 Allow MYDOMAIN\svc_aadsync           SPECIAL ACCESS for msExchSafeRecipientsHash
                                       WRITE PROPERTY
 Inherited to contact
 Allow MYDOMAIN\svc_aadsync           SPECIAL ACCESS for proxyAddresses
                                       WRITE PROPERTY
 Inherited to user
 Allow MYDOMAIN\svc_aadsync           SPECIAL ACCESS for proxyAddresses
                                       WRITE PROPERTY
 Inherited to group
 Allow MYDOMAIN\svc_aadsync           SPECIAL ACCESS for proxyAddresses
                                       WRITE PROPERTY

Password Synchronisation:

To grant the service account permission to read password hashes from your on-premises AD DS you must allow the special permissions ‘Replicating Directory Changes’ and ‘Replicating Directory Changes All’.

Configure Password Synchronisation:

###--------variables
$Account = "mydomain\svc_aadsync"

###--------variables

<#
Switches used in cmds

http://technet.microsoft.com/en-us/library/cc771151.aspx

/G = Grants the permissions that you specify to the user or group
CA = Control access
If you do not specify {ObjectType | Property} to define the specific extended right for control access, this permission applies to all meaningful control accesses on the object; otherwise, it applies only to the specific extended right for that object.
#>

$RootDSE = [ADSI]"LDAP://RootDSE"
$DefaultNamingContext = $RootDse.defaultNamingContext
$ConfigurationNamingContext = $RootDse.configurationNamingContext

###---Update Attributes

#Object type: user
$cmd = "dsacls '$DefaultNamingContext' /G '`"$Account`":CA;`"Replicating Directory Changes`";'"
Invoke-Expression $cmd
$cmd = "dsacls '$DefaultNamingContext' /G '`"$Account`":CA;`"Replicating Directory Changes All`";'"
Invoke-Expression $cmd

Validate

If the command completes successfully, the output will include:

Allow mydomain\svc_aadsync           Replicating Directory Changes

 

Password Write-back:

To grant the service account password write-back permission on the directory you must allow the ‘Reset Password’ and ‘Change Password’ extended rights.

Configure Password Write-back


###--------variables
$DN = "OU=Users,OU=Company,DC=mydomain,DC=com"
$Account = "mydomain\svc_aadsync"

###--------variables

<#
Switches used in cmds

http://technet.microsoft.com/en-us/library/cc771151.aspx

/I:S = Specifies the objects to which you are applying the permissions.'S' - The child objects only
/G = Grants the permissions that you specify to the user or group
CA = Control access
If you do not specify {ObjectType | Property} to define the specific extended right for control access, this permission applies to all meaningful control accesses on the object; otherwise, it applies only to the specific extended right for that object.
#>

###---Update Attributes

#Object type: user

$cmd = "dsacls '$DN' /I:S /G '`"$Account`":CA;`"Reset Password`";user'"
Invoke-Expression $cmd
$cmd = "dsacls '$DN' /I:S /G '`"$Account`":CA;`"Change Password`";user'"
Invoke-Expression $cmd

Validate

Run dsacls "\\DCHostname.mydomain.com\OU=Users,OU=Company,DC=mydomain,DC=com" once again to find your entry:

Allow mydomain\svc_aadsync           Reset Password

Check your AD MA credentials

  1. Open the ‘Synchronization Service’
  2. Choose ‘Connectors’
  3. Select the Connector with Type ‘Active Directory Domain Services’
  4. Right-Click ‘Properties’
  5. Configure Directory Partitions
  6. Select the radio button below

ADMA_SetCredentials

Add your credentials for the service account

ADMA_SetCredentials2

 

Deploying Office Pro Plus without admin rights

There are many ways to install Office Pro Plus to your client base. You can let the user install it from the web, push it out via SCCM or Intune or simply provide the user with an installation package. However, every now and then you come across some special requirements where security is tight and some options are not available for various reasons. In this post I show you how to deploy Office Pro Plus to client machines where users do not have administrative access.

Problem

In one of my recent projects, I was tasked with helping a client deploy a POC for Office Pro Plus via Click-to-Run. The company has a few thousand client machines and these are secured following best practice: users do not have admin rights on their PCs, PowerShell execution policy is set to restricted and UAC is enabled.

The company uses a non-Microsoft enterprise software distribution tool, however this is managed by a third party and due to time constraints Office Pro Plus had to be deployed with a different method. Because of the existing investment in that tool, other deployment tools like SCCM and Intune were also out of the question.

And to make it just a bit more challenging the client wanted to:

  • Enable the user to start the Office Pro Plus deployment
  • Have an easy and repeatable deployment process
  • Not require any administrator intervention for the deployment process

We decided to store the Office binaries on a DFS Share that is accessible by all users. An AD security group would be used to determine who should be able to install Office and adding a user to this security group should be the only step an Administrator needed to do to allow the user to install Office.

Solution

I am assuming that you already know how to download, customise and install Office Click-to-Run. After some testing and several trials we implemented the following procedure.

Needed components

  • A new security group to which the administrator can assign the POC users
  • A GPO which creates a link to a batch file on the DFS Share
  • A software installation service account that has administrator rights on the client PCs
  • Batch file A, which:
    • copies files to a temporary folder on the client machine
    • elevates a secondary batch file
  • Batch file B, which:
    • starts the deployment process
    • removes temporary data after the installation

A new security group and GPO were created. For the GPO I chose to create a Group Policy Preference that copies an existing link (pointing to batch file A) to the desktop of the user.

Group Policy Object:

3

A service account was created in AD

The appropriate rights were given to the account via Active Directory / Group Policy.

To be able to use the account within a script I needed to create a password hash with a secure key. This will later allow me to run the second batch file elevated as the service account. The password hash was created via the PowerShell commands below.

Creating Password Hash Key:

$ServiceaccountPassword = "Enter Password for Service Account here"
$SecurePassword = $ServiceaccountPassword | ConvertTo-SecureString -AsPlainText -Force
$key = (54,33,233,1,34,78,7,6,33,35,99,9,4,12,87,33,34,2,111,1,1,2,23,32)
$PasswordKey = ConvertFrom-SecureString $SecurePassword -Key $key

Batch File 1

After the user activates the link on their desktop a batch file will be executed. The first batch file will:

  • Inform the user what is about to happen
  • Copy the following files to the user’s machine:
    • the Office Pro Plus setup.exe from the Office Pro Plus deployment toolkit
    • the Office Pro Plus configuration file
    • the second batch file (described below)
  • Execute the second batch file as the service account (i.e. with admin rights)

Note: initially I wanted to use a PowerShell script instead of the batch file, however this would have presented a UAC prompt for elevation. By using PowerShell -command I was able to suppress the UAC prompt.

Batch File 1

echo off
echo /****************************************************
echo /* We are now installing Office Pro Plus onto your PC
echo /*
echo /* A Windows pop up will appear shortly,
echo /* Please select yes on the installation pop-up
echo /*
echo /* Please do not close this window
echo /****************************************************
copy \\DFSSHARE\OfficeProPlus\setup.exe c:\temp\setup.exe
copy \\DFSSHARE\OfficeProPlus\Install-Full_no_Lync.xml c:\temp\Install-Full_no_Lync.xml
copy \\DFSSHARE\OfficeProPlus\Install-Full_no_Lync.bat c:\temp\Install-Full_no_Lync.bat

powershell -command "$SecurePasswordKey = '$PasswordKey : … UltralongKey from the Step 3'; $key =(54,33,233,1,34,78,7,6,33,35,99,9,4,12,87,33,34,2,111,1,1,2,23,32); $SecurePassword = ConvertTo-SecureString -String $SecurePasswordKey -Key $key; $cred = new-object -typename System.Management.Automation.PSCredential -argumentlist 'domainname\serviceaccount', $SecurePassword; Start-Process -FilePath c:\temp\Install-Full_no_Lync.bat -Credential $cred"

Batch File 2

The second batch file, which runs as the service account, now executes the Office Pro Plus setup routine and deletes the temporary files after the deployment:

@echo off
echo /****************************************************
echo /*
echo /* A Windows pop up will appear shortly,
echo /* Please select yes on the installation pop-up
echo /*
echo /* Please do not close this window
echo /****************************************************
c:\temp\setup.exe /CONFIGURE c:\temp\Install-Full_no_Lync.xml
del c:\temp\setup.exe
del c:\temp\Install-Full_no_Lync.xml
del c:\temp\Install-Full_no_Lync.bat
del %userprofile%\desktop\Install-Office-2013.lnk

User’s setup experience

After the user has been added to the security group and the GPO has been refreshed, the user will find a new icon on their desktop:

After launching the file the user will see a command prompt

As well as a UAC prompt asking for permission to change local data (without the need to supply admin credentials).

Once the user clicks “Yes”, the deployment process will complete within 15-60 minutes depending on PC and network performance.

Michael


Using Azure SSO Tokens for Multiple AAD Resources From Native Mobile Apps

This blog post is the third in a series that cover Azure Active Directory Single Sign-On (SSO) authentication in native mobile applications.

  1. Authenticating iOS app users with Azure Active Directory
  2. How to Best handle AAD access tokens in native mobile apps
  3. Using Azure SSO tokens for Multiple AAD Resources From Native Mobile Apps (this post)
  4. Sharing Azure SSO access tokens across multiple native mobile apps.

Introduction

In an enterprise context it is highly likely there are multiple web services that your native mobile app needs to consume. I had exactly this scenario at one of my clients who asked if they could maintain the same SSO token in the background in the mobile app and use it for accessing multiple web services. I spent some time digging through the documentation and conducting some experiments to confirm some points and this post is to share my findings.

Cannot Share Azure AD Tokens for Multiple Resources

The first thing that comes to mind is to use the same access token for multiple Azure AD resources. Unfortunately this is not allowed. Azure AD issues a token for a certain resource (which is mapped to an Azure AD app). When we call AcquireToken, we need to provide a single resource ID, so the resulting token can only be used for the resource matching the supplied identifier.

There are ways where you could use the same token (as we will see later in this post), but it is not recommended as it complicates operations logging, authentication process tracing, etc. Therefore it is better to look at the other options provided by Azure and the ADAL library.

Use Refresh-Token to Acquire Tokens for Multiple Resources

The ADAL library supports acquiring multiple access tokens for multiple resources using a “refresh token”. This means once a user is authenticated, the ADAL’s authentication context is able to generate an access token to multiple resources without authenticating the user again. This is covered briefly by the MSDN documentation. A sample implementation to retrieve this token is shown below.

public async Task<string> RefreshTokens()
{
	var tokenEntry = await tokensRepository.GetTokens();
	var authorizationParameters = new AuthorizationParameters (_controller);

	var result = "Refreshed an existing Token";
	bool hasARefreshToken = true;

	if (tokenEntry == null) 
	{
		var localAuthResult = await _authContext.AcquireTokenAsync (
			resourceId1, 
                        clientId, 
                        new Uri (redirectUrl), 
                        authorizationParameters, 
                        UserIdentifier.AnyUser, 
                        null);

		tokenEntry = new Tokens {
			WebApi1AccessToken = localAuthResult.AccessToken,
			RefreshToken = localAuthResult.RefreshToken,
			Email = localAuthResult.UserInfo.DisplayableId,
			ExpiresOn = localAuthResult.ExpiresOn
		};
		hasARefreshToken = false;
		result = "Acquired a new Token"; 
	} 

	var refreshAuthResult = await _authContext.AcquireTokenByRefreshTokenAsync(
                                tokenEntry.RefreshToken, 
                                clientId, 
                                resourceId2);

	tokenEntry.WebApi2AccessToken = refreshAuthResult.AccessToken;
	tokenEntry.RefreshToken = refreshAuthResult.RefreshToken;
	tokenEntry.ExpiresOn = refreshAuthResult.ExpiresOn;

	if (hasARefreshToken) 
	{
		// this will only be called when we try refreshing the tokens (not when we are acquiring new tokens).
		refreshAuthResult = await _authContext.AcquireTokenByRefreshTokenAsync (
                                                     refreshAuthResult.RefreshToken, 
                                                     clientId, 
                                                     resourceId1);

		tokenEntry.WebApi1AccessToken = refreshAuthResult.AccessToken;
		tokenEntry.RefreshToken = refreshAuthResult.RefreshToken;
		tokenEntry.ExpiresOn = refreshAuthResult.ExpiresOn;
	}

	await tokensRepository.InsertOrUpdateAsync (tokenEntry);

	return result;
}

As you can see from above, we check if we have an access token from previous calls, and if we do, we refresh the access tokens for both web services. Notice how the _authContext.AcquireTokenByRefreshTokenAsync() method provides an overload that takes a resourceId. This enables us to get multiple access tokens for multiple resources without having to re-authenticate the user. The rest of the code is similar to what we have seen in the previous two posts.

ADAL Library Can Produce New Tokens For Other Resources

In the previous two posts we looked at ADAL and how it uses the TokenCache. Although ADAL does not yet support persistent caching of tokens on mobile apps, it still uses the TokenCache for in-memory caching. This enables ADAL to generate new access tokens if the AuthenticationContext still exists from previous authentication calls. Remember in the previous post we said it is recommended to keep a reference to the authentication context? Here it comes in handy, as it enables us to generate new access tokens for accessing multiple Azure AD resources.

var localAuthResult = await _authContext.AcquireTokenAsync (
                                   resourceId2, 
                                   clientId, 
                                   new Uri(redirectUrl),
                                   authorizationParameters,
                                   UserIdentifier.AnyUser, 
                                   null
                                 );

Calling AcquireToken() (even with no refresh token) would give us a new access token for the requested resource. This is due to ADAL checking if we have a refresh token in memory, which ADAL then uses to generate a new access token for the resource.

An alternative

The third alternative is the simplest (but not necessarily the best). In this option, we use the same access token to consume multiple Azure AD resources. To do this, we need to use the same Azure AD app ID when setting up the two APIs for authentication via Azure AD. This requires some understanding of how Azure AD authentication happens in our web apps.

If you refer to Taiseer Joudeh’s tutorial you will see that in our web app we need to tell the authentication framework what our Authority is and what the Audience (Azure AD app ID) is. If we set up both of our web APIs to use the same Audience, we link them to the same Azure AD application, which allows the same access token to be used with both web APIs.

// linking our web app authentication to an Azure AD application
private void ConfigureAuth(IAppBuilder app)
{
	app.UseWindowsAzureActiveDirectoryBearerAuthentication(
		new WindowsAzureActiveDirectoryBearerAuthenticationOptions
		{
			Audience = ConfigurationManager.AppSettings["Audience"],
			Tenant = ConfigurationManager.AppSettings["Tenant"]
		});
}
<appSettings>
    <add key="Tenant" value="hasaltaiargmail.onmicrosoft.com" />
    <add key="Audience" value="http://my-Azure-AD-Application-Id" />	
</appSettings>

As I said before, this is very simple and requires less code, but could cause complications in terms of security logging and maintenance. At the end of the day, it depends on your context and what you are trying to achieve.

Conclusion

We looked at how we could use Azure AD SSO with ADAL to access multiple resources from native mobile apps. As we saw, there are three main options, and the choice could be made based on the context of your app. I hope you find this useful and if you have any questions or you need help with some development that you are doing, then just get in touch.

This blog post is the third in a series that cover Azure Active Directory Single Sign On (SSO) Authentication in native mobile applications.

  1. Authenticating iOS app users with Azure Active Directory
  2. How to Best handle AAD access tokens in native mobile apps
  3. Using Azure SSO tokens for Multiple AAD Resources From Native Mobile Apps (this post)
  4. Sharing Azure SSO access tokens across multiple native mobile apps.

Microsoft Windows IoT and the Intel Galileo

You might have seen one of these headlines a while back: ‘Microsoft Windows now running on Intel Galileo development board’, ‘Microsoft giving away free Windows 8.1 for IoT developers’. Now before we all get too excited, let’s have a closer look beyond these headlines and see what we’re actually getting!

Intel Galileo

With a zillion devices being connected to the Internet by the year 2020 a lot of hardware manufacturers want to have a piece of this big pie, and Intel got into the game by releasing two different development boards / processors: the Intel Galileo and more recently the Intel Edison.

Intel Galileo

Intel Galileo

Intel Edison

Intel Edison

The Galileo is Intel’s first attempt to break into consumer prototyping, or the ‘maker scene’. The board comes in two flavours, Gen 1 and Gen 2 with the latter being a slightly upgraded model of the first release.

Like many other development platforms the board offers hardware and pin compatibility with a range of Arduino shields to catch the interest from a large number of existing DIY enthusiasts. The fundamental difference between boards like the Arduino Uno and the Intel Galileo is that Arduino devices run on a real-time microcontroller (mostly Atmel Atmega processors) whereas the Galileo runs on a System on Chip architecture (SoC). The SoC runs a standard multi-tasking operating system like Linux or Windows, which aren’t real-time.

Both Gen 1 and Gen 2 boards contain an Intel Quark 32-bit 400 MHz processor, which is compatible with the Intel Pentium processor instruction set. Furthermore, we have a full-sized mini PCI Express slot, a 100 Mb Ethernet port, a microSD slot and a USB port. The Galileo is a headless device, which means you can’t connect a monitor via VGA or HDMI, unlike the Raspberry Pi for example. The Galileo effectively offers Arduino compatibility through hardware pins and software simulation within the operating system.

The microSD card slot makes it easy to run different operating systems on the device as you can simply write an operating system image on an SD card, insert it into the slot and boot the Galileo. Although Intel offers the Yocto Poky Linux environment there are some great initiatives to support other operating systems. At Build 2014 Microsoft announced the ‘Windows Developer Program for IoT’. As part of this program Microsoft offers a custom Windows image that can run on Galileo boards (there’s no official name yet, but let’s call it Windows IoT for now).

Windows on Devices / Windows Developer Program for IoT

Great, so now we can run .NET Framework applications and, for example, utilise the .NET Azure SDK? Well, not really, yet… The Windows image is still at an Alpha release stage, only runs a small subset of the .NET CLR, and is not able to support larger .NET applications of any kind. Although a simple “Hello World” application will run flawlessly, applications will throw multiple exceptions as soon as functionality beyond System.Core.dll is called.

So how can we start building our things? You can write applications using the Wiring APIs in exactly the same way as you program your Arduino. Microsoft provides compatibility with the Arduino environment through a set of C++ libraries that are part of a new Visual Studio project type, available when you set up your development environment according to the instructions on http://ms-iot.github.io/content/.

We’ll start off by creating a new ‘Windows for IoT’ project in Visual Studio 2013:

New IoT VS Project

The project template will create a Visual C++ console application with a basic Arduino program that turns the built-in LED on and off in a loop:

Now let’s grab our breadboard and wire up some sensors. For the purpose of this demo I will use the built-in temperature sensor on the Galileo board. The objective will be to transmit the temperature to an Azure storage queue.

Since the Arduino Wiring API is implemented in C++ I decided to utilise some of the other Microsoft C++ libraries on offer: the Azure Storage Client Library for C++, which in turn uses the C++ REST SDK. They’re hosted on GitHub and CodePlex respectively and can both be installed as NuGet packages. I was able to deliver messages to a storage queue with the C++ library in a standard C++ Win32 console application, so I assumed this would work on the Galileo. Here’s the program listing of the ‘main.cpp’ file of the project:

The instructions mentioned earlier explain in detail how to setup your Galileo to run Windows, so I won’t repeat that here. We can deploy the Galileo console application to the development board from Visual Studio. This simply causes the compiled executable to be copied to the Galileo via a file share. Since it’s a headless device we can only connect to the Galileo via good old Telnet. Next, we launch the deployed application on the command line:

Windows IoT command line output

Although the console application is supposed to write output to the console, none of it is shown. I wonder if there are certain Win32 features missing in this Windows on Devices release, since no debug information is output to the console for most commands executed over Telnet. When I tried to debug the application from Visual Studio I was able to extract some further diagnostics:

IoT VS Debug Output

Perhaps this is due to a missing Visual Studio C++ runtime on the Galileo board. When I tried to perform an unattended installation of this runtime it did not seem to install at all, although the lack of command line output makes this guesswork.

Conclusion

Microsoft’s IoT offering is still in its very early days. That doesn’t only apply to the Windows IoT operating system, but also to Azure platform features like Event Hubs. Although this is an Alpha release of Windows IoT I can’t say I’m overly impressed. The Arduino compatibility is a great feature, but the lack of easy connectivity makes it just a ‘thing’ without the Internet. Although you can use the Arduino Ethernet/HTTP library, I would have liked to benefit from the available C++ libraries to securely connect to APIs over HTTPS, something which is impossible on the Arduino platform.

The Microsoft product documentation looks rather sloppy at times and is generally lacking, and I’m curious to see what the next release will bring. According to Microsoft’s FAQ they’re focussing on supporting the universal app model. The recent announcements around open-sourcing the .NET Framework will perhaps enable us to use some .NET Framework features in a Galileo Linux distribution in the not-too-distant future.

In a future blog post I will explore some other scenarios for the Intel Galileo using Intel’s IoT XDK and Node.js, and look at how to connect the Galileo board to some of the Microsoft Azure platform services.