AADSync – AD Service Account Delegated Permissions

When you configure Azure AD Sync (AADSync), you need to provide credentials for an account that AADSync’s AD DS Management Agent uses to connect to your on-premises Active Directory. In previous versions of DirSync this was achieved by running the configuration wizard as an ‘Enterprise Admin’, allowing the installer to create a service account and apply permissions to the directory on your behalf. Depending on the purpose of your sync, the account may need any of the following permissions in your AD DS:

  1. Write Access to User, Contact, Groups Attributes – Hybrid Exchange Deployment
  2. Password Changes – Password Synchronisation
  3. Write Access to Passwords – Password Write-back (AAD Premium)

There has been a lot of talk lately about new tools that help you configure synchronisation of your directory in ‘(this) many clicks’. While these new tools do some great pre-requisite checks and wrap everything into a nice shiny wizard that guides you through the experience, they currently put the burden of creating this service account and applying AD DS permissions back on you. It is now your responsibility to raise a change with the Active Directory team, in which you will need to explain how you are going to splatter permissions all over their directory.

So we should reassure the Active Directory team that we can create a service account and apply least-privilege permissions on the directory for this account using the following script(s).
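First, create the service account itself. A minimal sketch using the ActiveDirectory PowerShell module is below – the OU path is a placeholder, and the account name matches the examples that follow:

###--------create the service account
Import-Module ActiveDirectory

New-ADUser -Name "svc_aadsync" `
    -SamAccountName "svc_aadsync" `
    -Path "OU=Service Accounts,OU=Company,DC=mydomain,DC=com" `
    -AccountPassword (Read-Host -AsSecureString "Password for svc_aadsync") `
    -PasswordNeverExpires $true `
    -Enabled $true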

Apply all that are appropriate to your scenario:

Exchange Hybrid Deployment:

For rich co-existence between your on-premises Exchange infrastructure and Office 365, you must allow the service account to write back attributes to your on-premises environment.

Configure Hybrid Write-back:

###--------variables
$DN = "OU=Users,OU=Company,DC=mydomain,DC=com"
$Account = "mydomain\svc_aadsync"

###--------variables

<#
Switches used in cmds

http://technet.microsoft.com/en-us/library/cc771151.aspx

/I:S = Specifies the objects to which you are applying the permissions.'S' - The child objects only
/G = Grants the permissions that you specify to the user or group
WP = Write to a property Permission

#>

###---Update Attributes

#Object type: user
$userAttributes = 'proxyAddresses','msExchUCVoiceMailSettings','msExchUserHoldPolicies',
                  'msExchArchiveStatus','msExchSafeSendersHash','msExchBlockedSendersHash',
                  'msExchSafeRecipientsHash'

foreach ($attribute in $userAttributes)
{
    Invoke-Expression "dsacls '$DN' /I:S /G '`"$Account`":WP;$attribute;user'"
}

#Object type: group
Invoke-Expression "dsacls '$DN' /I:S /G '`"$Account`":WP;proxyAddresses;group'"

#Object type: contact
Invoke-Expression "dsacls '$DN' /I:S /G '`"$Account`":WP;proxyAddresses;contact'"

 

Validate

Use dsacls to validate your settings:
dsacls "\\DCHostname.mydomain.com\OU=Users,OU=Company,DC=mydomain,DC=com"

Your output should resemble:

Inherited to user
 Allow BUILTIN\Pre-Windows 2000 Compatible Access
                                       SPECIAL ACCESS for Group Membership   <Inherited from parent>
                                       READ PROPERTY
 Allow MYDOMAIN\svc_aadsync           SPECIAL ACCESS for msExchSafeSendersHash
                                       WRITE PROPERTY
 Allow MYDOMAIN\svc_aadsync           SPECIAL ACCESS for msExchArchiveStatus
                                       WRITE PROPERTY
 Allow MYDOMAIN\svc_aadsync           SPECIAL ACCESS for msExchUCVoiceMailSettings
                                       WRITE PROPERTY
 Allow MYDOMAIN\svc_aadsync           SPECIAL ACCESS for msExchBlockedSendersHash
                                       WRITE PROPERTY
 Allow MYDOMAIN\svc_aadsync           SPECIAL ACCESS for msExchSafeRecipientsHash
                                       WRITE PROPERTY
 Inherited to contact
 Allow MYDOMAIN\svc_aadsync           SPECIAL ACCESS for proxyAddresses
                                       WRITE PROPERTY
 Inherited to user
 Allow MYDOMAIN\svc_aadsync           SPECIAL ACCESS for proxyAddresses
                                       WRITE PROPERTY
 Inherited to group
 Allow MYDOMAIN\svc_aadsync           SPECIAL ACCESS for proxyAddresses
                                       WRITE PROPERTY

Password Synchronisation:

To grant the service account permission to read password hashes from your on-premises AD DS, you must allow the special permissions of Replicating Directory Changes and Replicating Directory Changes All.

Configure Password Synchronisation:

###--------variables
$DN = "OU=Users,OU=Company,DC=mydomain;,DC=com"
$Account = "mydomain\svc_aadsync"

###--------variables

<#
Switches used in cmds

http://technet.microsoft.com/en-us/library/cc771151.aspx

/G = Grants the permissions that you specify to the user or group
CA = Control access
If you do not specify {ObjectType | Property} to define the specific extended right for control access, this permission applies to all meaningful control accesses on the object; otherwise, it applies only to the specific extended right for that object.
#>

$RootDSE = [ADSI]"LDAP://RootDSE"
$DefaultNamingContext = $RootDse.defaultNamingContext
$ConfigurationNamingContext = $RootDse.configurationNamingContext

###---Update Attributes

#Grant the replication control access rights on the domain naming context
$cmd = "dsacls '$DefaultNamingContext' /G '`"$Account`":CA;`"Replicating Directory Changes`";'"
Invoke-Expression $cmd
$cmd = "dsacls '$DefaultNamingContext' /G '`"$Account`":CA;`"Replicating Directory Changes All`";'"
Invoke-Expression $cmd

Validate

If the commands completed successfully, the output will include:

Allow mydomain\svc_aadsync           Replicating Directory Changes

 

Password Write-back:

To grant the service account password write-back permission on the directory, you must allow the special permissions of the Reset Password and Change Password extended rights.

Configure Password Write-back


###--------variables
$DN = "OU=Users,OU=Company,DC=mydomain;,DC=com"
$Account = "mydomain\svc_aadsync"

###--------variables

<#
Switches used in cmds

http://technet.microsoft.com/en-us/library/cc771151.aspx

/I:S = Specifies the objects to which you are applying the permissions.'S' - The child objects only
/G = Grants the permissions that you specify to the user or group
CA = Control access
If you do not specify {ObjectType | Property} to define the specific extended right for control access, this permission applies to all meaningful control accesses on the object; otherwise, it applies only to the specific extended right for that object.
#>

###---Update Attributes

#Object type: user

$cmd = "dsacls '$DN' /I:S /G '`"$Account`":CA;`"Reset Password`";user'"
Invoke-Expression $cmd
$cmd = "dsacls '$DN' /I:S /G '`"$Account`":CA;`"Change Password`";user'"
Invoke-Expression $cmd

Validate

Run dsacls "\\DCHostname.mydomain.com\OU=Users,OU=Company,DC=mydomain,DC=com" once again to find your entry:

Allow mydomain\svc_aadsync           Reset Password

 Check your AD MA credentials

  1. Open the ‘Synchronization Service’
  2. Choose ‘Connectors’
  3. Select the Connector with Type ‘Active Directory Domain Services’
  4. Right-Click ‘Properties’
  5. Configure Directory Partitions
  6. Select the radio button below

ADMA_SetCredentials

Add your credentials for the service account

ADMA_SetCredentials2

 

Using Azure SSO Tokens for Multiple AAD Resources From Native Mobile Apps

This blog post is the third in a series that covers Azure Active Directory Single Sign-On (SSO) authentication in native mobile applications.

  1. Authenticating iOS app users with Azure Active Directory
  2. How to Best handle AAD access tokens in native mobile apps
  3. Using Azure SSO tokens for Multiple AAD Resources From Native Mobile Apps (this post)
  4. Sharing Azure SSO access tokens across multiple native mobile apps.

Introduction

In an enterprise context it is highly likely there are multiple web services that your native mobile app needs to consume. I had exactly this scenario at one of my clients, who asked if they could maintain the same SSO token in the background in the mobile app and use it for accessing multiple web services. I spent some time digging through the documentation and conducting experiments to confirm a few points, and this post shares my findings.

Cannot Share Azure AD Tokens for Multiple Resources

The first thing that comes to mind is to use the same access token for multiple Azure AD resources. Unfortunately this is not allowed: Azure AD issues a token for a certain resource (which is mapped to an Azure AD app). When we call AcquireToken, we need to provide a single resourceID, and the resulting token can only be used for the resource matching the supplied identifier.

There are ways where you could use the same token (as we will see later in this post), but it is not recommended as it complicates operations logging, authentication process tracing, etc. Therefore it is better to look at the other options provided by Azure and the ADAL library.

Use Refresh-Token to Acquire Tokens for Multiple Resources

The ADAL library supports acquiring multiple access tokens for multiple resources using a ‘refresh token’. This means that once a user is authenticated, ADAL’s authentication context can generate an access token for multiple resources without authenticating the user again. This is covered briefly by the MSDN documentation. A sample implementation to retrieve these tokens is shown below.

public async Task<string> RefreshTokens()
{
	var tokenEntry = await tokensRepository.GetTokens();
	var authorizationParameters = new AuthorizationParameters (_controller);

	var result = "Refreshed an existing Token";
	bool hasARefreshToken = true;

	if (tokenEntry == null) 
	{
		var localAuthResult = await _authContext.AcquireTokenAsync (
			resourceId1, 
                        clientId, 
                        new Uri (redirectUrl), 
                        authorizationParameters, 
                        UserIdentifier.AnyUser, 
                        null);

		tokenEntry = new Tokens {
			WebApi1AccessToken = localAuthResult.AccessToken,
			RefreshToken = localAuthResult.RefreshToken,
			Email = localAuthResult.UserInfo.DisplayableId,
			ExpiresOn = localAuthResult.ExpiresOn
		};
		hasARefreshToken = false;
		result = "Acquired a new Token"; 
	} 

	var refreshAuthResult = await _authContext.AcquireTokenByRefreshTokenAsync(
                                tokenEntry.RefreshToken, 
                                clientId, 
                                resourceId2);

	tokenEntry.WebApi2AccessToken = refreshAuthResult.AccessToken;
	tokenEntry.RefreshToken = refreshAuthResult.RefreshToken;
	tokenEntry.ExpiresOn = refreshAuthResult.ExpiresOn;

	if (hasARefreshToken) 
	{
		// this will only be called when we are refreshing the tokens (not when acquiring new tokens)
		refreshAuthResult = await _authContext.AcquireTokenByRefreshTokenAsync (
                                                     refreshAuthResult.RefreshToken, 
                                                     clientId, 
                                                     resourceId1);

		tokenEntry.WebApi1AccessToken = refreshAuthResult.AccessToken;
		tokenEntry.RefreshToken = refreshAuthResult.RefreshToken;
		tokenEntry.ExpiresOn = refreshAuthResult.ExpiresOn;
	}

	await tokensRepository.InsertOrUpdateAsync (tokenEntry);

	return result;
}

As you can see from above, we check if we have an access token from previous calls, and if we do, we refresh the access tokens for both web services. Notice how the _authContext.AcquireTokenByRefreshTokenAsync() method provides an overload that takes a resourceId. This enables us to get multiple access tokens for multiple resources without having to re-authenticate the user. The rest of the code is similar to what we have seen in the previous two posts.

ADAL Library Can Produce New Tokens For Other Resources

In the previous two posts we looked at ADAL and how it uses the TokenCache. Although ADAL does not yet support persistent caching of tokens on mobile apps, it still uses the TokenCache for in-memory caching. This enables ADAL to generate new access tokens if the AuthenticationContext still exists from previous authentication calls. Remember in the previous post we said it is recommended to keep a reference to the authentication context? Here it comes in handy, as it enables us to generate new access tokens for accessing multiple Azure AD resources.

var localAuthResult = await _authContext.AcquireTokenAsync (
                                   resourceId2, 
                                   clientId, 
                                   new Uri(redirectUrl),
                                   authorizationParameters,
                                   UserIdentifier.AnyUser, 
                                   null
                                 );

Calling AcquireToken() (even with no refresh token supplied) gives us a new access token for the requested resource. This is because ADAL checks for a refresh token in memory, which it then uses to generate a new access token for the resource.

An alternative

The third alternative is the simplest (but not necessarily the best): use the same access token to consume multiple Azure AD resources. To do this, we need to use the same Azure AD app ID when setting up the two APIs for authentication via Azure AD. This requires some understanding of how Azure AD authentication happens on our web apps.

If you refer to Taiseer Joudeh’s tutorial you will see that in our web app we need to tell the authentication framework what our Authority is and what the Audience (Azure AD app ID) is. If we set up both of our web APIs to use the same Audience, we link them both to the same Azure AD application, which allows the same access token to be used with both web APIs.

// linking our web app authentication to an Azure AD application
private void ConfigureAuth(IAppBuilder app)
{
	app.UseWindowsAzureActiveDirectoryBearerAuthentication(
		new WindowsAzureActiveDirectoryBearerAuthenticationOptions
		{
			Audience = ConfigurationManager.AppSettings["Audience"],
			Tenant = ConfigurationManager.AppSettings["Tenant"]
		});
}
<appSettings>
    <add key="Tenant" value="hasaltaiargmail.onmicrosoft.com" />
    <add key="Audience" value="http://my-Azure-AD-Application-Id" />	
</appSettings>

As I said before, this is very simple and requires less code, but could cause complications in terms of security logging and maintenance. At the end of the day, it depends on your context and what you are trying to achieve.

Conclusion

We looked at how we could use Azure AD SSO with ADAL to access multiple resources from native mobile apps. As we saw, there are three main options, and the choice could be made based on the context of your app. I hope you find this useful and if you have any questions or you need help with some development that you are doing, then just get in touch.

This blog post is the third in a series that covers Azure Active Directory Single Sign-On (SSO) authentication in native mobile applications.

  1. Authenticating iOS app users with Azure Active Directory
  2. How to Best handle AAD access tokens in native mobile apps
  3. Using Azure SSO tokens for Multiple AAD Resources From Native Mobile Apps (this post)
  4. Sharing Azure SSO access tokens across multiple native mobile apps.

Microsoft Windows IoT and the Intel Galileo

You might have seen one of these headlines a while back: ‘Microsoft Windows now running on Intel Galileo development board’, ‘Microsoft giving away free Windows 8.1 for IoT developers’. Now before we all get too excited, let’s have a closer look beyond these headlines and see what we’re actually getting!

Intel Galileo

With a zillion devices expected to be connected to the Internet by the year 2020, a lot of hardware manufacturers want a piece of this big pie, and Intel got into the game by releasing two different development boards/processors: the Intel Galileo and, more recently, the Intel Edison.

Intel Galileo

Intel Edison

The Galileo is Intel’s first attempt to break into consumer prototyping, or the ‘maker scene’. The board comes in two flavours, Gen 1 and Gen 2 with the latter being a slightly upgraded model of the first release.

Like many other development platforms the board offers hardware and pin compatibility with a range of Arduino shields, to catch the interest of the large number of existing DIY enthusiasts. The fundamental difference between boards like the Arduino Uno and the Intel Galileo is that Arduino devices run on a real-time microcontroller (mostly Atmel ATmega processors) whereas the Galileo runs on a System on Chip (SoC) architecture. The SoC runs a standard multi-tasking operating system like Linux or Windows, which isn’t real-time.

Both Gen 1 and Gen 2 boards contain an Intel Quark 32-bit 400 MHz processor, which is compatible with the Intel Pentium processor instruction set. Furthermore we have a full-sized mini PCI Express slot, a 100 Mb Ethernet port, a microSD slot and a USB port. The Galileo is a headless device, which means you can’t connect a monitor via VGA or HDMI, unlike the Raspberry Pi for example. The Galileo effectively offers Arduino compatibility through hardware pins, and software simulation within the operating system.

The microSD card slot makes it easy to run different operating systems on the device: you simply write an operating system image to an SD card, insert it into the slot and boot the Galileo. Although Intel offers the Yocto Poky Linux environment, there are some great initiatives to support other operating systems. At Build 2014 Microsoft announced the ‘Windows Developer Program for IoT’. As part of this program Microsoft offers a custom Windows image that can run on Galileo boards (there’s no official name yet, but let’s call it Windows IoT for now).

Windows on Devices / Windows Developer Program for IoT

Great, so now we can run .NET Framework applications and, for example, utilise the .NET Azure SDK? Well, not really, yet… The Windows image is still in Alpha release stage, only runs a small subset of the .NET CLR and is not able to support larger .NET applications of any kind. Although a simple “Hello World” application will run flawlessly, applications will throw multiple exceptions as soon as functionality beyond System.Core.dll is called.

So how can we start building our things? You can write applications using the Wiring APIs in exactly the same way as you would program your Arduino. Microsoft provides compatibility with the Arduino environment through a set of C++ libraries that are part of a new Visual Studio project type, available once you set up your development environment according to the instructions on http://ms-iot.github.io/content/.

We’ll start off by creating a new ‘Windows for IoT’ project in Visual Studio 2013:

New IoT VS Project

The project template will create a Visual C++ console application with a basic Arduino program that turns the built-in LED on and off in a loop.
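Reconstructed from the template (the exact generated code may vary between SDK releases), it looks roughly like this:

// Main.cpp - the entry point wraps the familiar Arduino setup()/loop() pair
#include "stdafx.h"
#include "arduino.h"

int _tmain(int argc, _TCHAR* argv[])
{
    return RunArduinoSketch();
}

int led = 13;  // the on-board LED pin

void setup()
{
    pinMode(led, OUTPUT);  // configure the LED pin as an output
}

void loop()
{
    digitalWrite(led, HIGH);  // turn the LED on
    delay(1000);              // wait for a second
    digitalWrite(led, LOW);   // turn the LED off
    delay(1000);
}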

Now let’s grab our breadboard and wire up some sensors. For the purpose of this demo I will use the built-in temperature sensor on the Galileo board. The objective will be to transmit the temperature to an Azure storage queue.

Since the Arduino Wiring API is implemented in C++, I decided to utilise some of the other Microsoft C++ libraries on offer: the Azure Storage Client Library for C++, which in turn uses the C++ REST SDK. They’re hosted on GitHub and CodePlex respectively and can both be installed as NuGet packages. I was able to deliver messages to a storage queue with the C++ library in a standard C++ Win32 console application, so I assumed this would work on the Galileo. Here’s a sketch of the ‘main.cpp’ file of the project (the storage connection string, queue name and sensor conversion are placeholders, not the original values):
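// main.cpp - read the temperature via the Wiring API and push each
// reading to an Azure storage queue
#include "stdafx.h"
#include "arduino.h"
#include <was/storage_account.h>
#include <was/queue.h>
#include <memory>
#include <string>

int _tmain(int argc, _TCHAR* argv[])
{
    return RunArduinoSketch();
}

// placeholder - use your own storage account name and key
const utility::string_t storageConnectionString =
    U("DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>");

std::unique_ptr<azure::storage::cloud_queue> queue;

void setup()
{
    // connect to the storage account and make sure the queue exists
    auto account = azure::storage::cloud_storage_account::parse(storageConnectionString);
    auto client = account.create_cloud_queue_client();
    queue = std::make_unique<azure::storage::cloud_queue>(
        client.get_queue_reference(U("temperatures")));
    queue->create_if_not_exists();
}

void loop()
{
    // read the analog pin the sensor is attached to and convert it to
    // degrees Celsius (this conversion assumes a TMP36-style sensor)
    int raw = analogRead(A0);
    double temperature = ((raw * 5.0 / 1023.0) - 0.5) * 100.0;

    // queue the reading as a simple text message
    azure::storage::cloud_queue_message message(
        utility::conversions::to_string_t(std::to_string(temperature)));
    queue->add_message(message);

    delay(60000);  // wait a minute before the next reading
}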

The instructions mentioned earlier explain in detail how to set up your Galileo to run Windows, so I won’t repeat that here. We can deploy the Galileo console application to the development board from Visual Studio; this simply causes the compiled executable to be copied to the Galileo via a file share. Since it’s a headless device we can only connect to the Galileo via good old Telnet. Next, we launch the deployed application on the command line:

Windows IoT command line output

Although the console application is supposed to write output to the console, none of it is shown. I wonder if there are certain Win32 features missing from this Windows on Devices release, since no debug information is output to the console for most commands executed over Telnet. When I tried to debug the application from Visual Studio I was able to extract some further diagnostics:

IoT VS Debug Output

Perhaps this is due to a missing Visual Studio C++ runtime on the Galileo board. When I tried to perform an unattended installation of this runtime it did not seem to install at all, although the lack of command line output makes this guesswork.

Conclusion

Microsoft’s IoT offering is still in its very early days. That applies not only to the Windows IoT operating system, but also to Azure platform features like Event Hubs. Although this is an Alpha release of Windows IoT, I can’t say I’m overly impressed. The Arduino compatibility is a great feature, but the lack of easy connectivity makes it just a ‘thing’ without Internet. Although you can use the Arduino Ethernet/HTTP library, I would have liked to benefit from the available C++ libraries to securely connect to APIs over HTTPS, something which is impossible on the Arduino platform.

The Microsoft product documentation looks rather sloppy at times and is generally lacking, and I’m curious to see what the next release will bring. According to Microsoft’s FAQ they’re focusing on supporting the universal app model. The recent announcements around open sourcing the .NET Framework will perhaps enable us to use some .NET Framework features in a Galileo Linux distribution in the not-too-distant future.

In a future blog post I will explore some other scenarios for the Intel Galileo using Intel’s IoT XDK and Node.js, and look at how to connect the Galileo board to some of the Microsoft Azure platform services.

Kloud places 3rd in CRN’s Fast50

CRN Fast50 2014 - badge WEB

Kloud grew by 135.09 percent to hit $19.2 million in the 2014 financial year to place in the top 10 in the CRN Fast50. The annual awards were held in Sydney at the Four Seasons on Thursday with both Kloud (3rd) and Chamonix IT (46th) making the list. The annual CRN Fast50 awards are the leading Australian IT industry awards honouring the 50 best resellers, systems integrators and managed service providers in the country.

Nicki Bowers spoke to CRN’s Tony Yoo about Kloud’s success and its substantial growth over the past 4 years in an article published on Friday:

Kloud is the definition of that increasingly popular channel label: “born in the cloud”. In its short existence (having celebrated its fourth birthday last month), the company has gone national, with offices opening in Melbourne, Sydney, and Adelaide. The next step is to go international.

“We’re now in three states and I think the only states we don’t support are Western Australia and the ACT. So we’re supporting Northern Territory, Queensland, Tasmania, NSW, Victoria and South Australia,” says managing director Nicki Bowers. “We recently opened in Manila [Philippines] and we’re trying to grow as well.”

Bowers reflected on the journey from the moment in 2010 when four people – Jamie Potter, Geoff Rohrsheim, Brendan Carius and Bowers – identified cloud computing as the start of a new era in enterprise IT.

“It started with four directors building up the business when we saw an opportunity in the market to help organisations move to the cloud,” says Bowers. “We’re all very complementary from different backgrounds. Myself, I come from a sales background at Microsoft. Brendan Carius, he’s a technical director of the business and he comes from an infrastructure background.

“Geoff Rohrsheim and Jamie Potter both ran Strategic Data Management, which they sold to DWS five years ago. Geoff’s background is more in applications – active Flash engineering – but they’ve run their own businesses for many, many years.”

Targeting the big end of town was a deliberate ploy from the start. “Traditionally, we focused at the top end. I have this theory to fish where there’s fish, and enterprise customers were the first ones to start looking at cloud. We’re now seeing more government and financial services customers moving towards cloud, so we’re now having more industry-focus around those areas. We don’t traditionally play in the small-to-medium business area.”

Kloud has since grabbed those opportunities with both hands. This year alone, the company surpassed a million seats deployed for Office 365 and bagged two Microsoft Australia Partner Awards.

“It’s been a blur, it really has,” Bowers tells CRN. “But through all this constant growing up, new and exciting developments and the new customers, we can’t forget that our first customers we had three or four years ago are still our customers today. We’re now supporting more than 150 enterprise customers. For me, every successful project is another great milestone to celebrate.”

Kloud attributes its placing the 2014 CRN Fast50 to the normalisation of cloud computing in the enterprise sector. “I think we’re seeing more mainstream adoption of cloud in enterprise customers. Originally, customers started migrating one or two workloads. Now we’re seeing customers looking at an entire data synch transformation – how do I move everything to the cloud? That’s really accelerated us as an organisation,” she says.

New offerings away from Kloud’s bread-and-butter of cloud migration also contributed to the company’s success this year.

“Our managed services is a whole new business that we’ve grown in the past 12 months. It’s gone from nothing to hero. Our security business is another one, looking at how we remove every single block from the customers to go to the cloud and providing that level of comfort through cloud security services. And Kloud Digital is giving us [user experience] capabilities in mobile development and web-based workloads,” says Bowers. “Twelve months ago we weren’t doing that.”

Although recognised as a prominent Microsoft partner, Azure isn’t the only focus. Kloud has also seen expansion of work with Amazon Web Services and Telstra in the past year.

Team Spirit

In terms of the team, Bowers says they “drive a very high-performing culture, but a very customer-centric culture within the organisation”. Company culture is very important, and workers proudly adopt the moniker of ‘Kloudies’.

“I see everyone in the entire business as leaders because customers look to us as leaders in cloud – leading them through their transformation as they move into or develop an application from the cloud.”

Read more: http://www.crn.com.au/Feature/398243,3-kloud-2014-crn-fast50.aspx#ixzz3KiiTndQC

How to Best Handle Azure AD Access Tokens in Native Mobile Apps

This blog post is the second in a series that covers Azure Active Directory Single Sign-On (SSO) authentication in native mobile applications.

  1. Authenticating iOS app users with Azure Active Directory
  2. How to Best handle AAD access tokens in native mobile apps (this post)
  3. Using Azure SSO access token for multiple AAD resources from native mobile apps
  4. Sharing Azure SSO access token across multiple native mobile apps.

In my previous post, I talked about authenticating mobile app users using Azure AD SSO. In this post, I will explore how to take this further to persist the access token to interact with Azure AD.

Let’s assume that we have an API and a mobile app that consumes it. In order to secure the interaction between our mobile app and the API, we can register both the app and API with Azure AD and let Azure handle the authentication for us. Let’s take a look.

Securing the Web App

To start with, we will implement an authentication mechanism in our API. We can use a vanilla Web API project from Visual Studio and implement Azure AD authentication on that. Our focus isn’t on this, so for a good reference see Taiseer Joudeh’s detailed tutorial.

Securing the Mobile App

In the previous post, I showed how we could use Azure AD with ADAL to authenticate users on native mobile apps. The remainder of this post will assume that you have followed the previous post and you have this part ready. If you are not sure, please revisit my previous post.

Setting up the Permissions in AAD

We have seen how to secure our apps with AAD; now we need to authorise the mobile app to access our Web API. To do this we first need to expose the web API’s permissions in Azure AD. Navigate to AAD/Applications/our-web-app, then click on Download Manifest. This gives us a copy of the configuration of this API in AAD (a simple JSON file). We need to modify the permissions section, adding the following to tell Azure AD that this web app can be accessed by other AAD apps.

AAD app manifest configuration

appPermissions": [
    {
      "claimValue": "user_impersonation",
      "description": "Allow the application full access to the service on behalf of the signed-in user",
      "directAccessGrantTypes": [],
      "displayName": "Have full access to the service",
      "impersonationAccessGrantTypes": [
        {
          "impersonated": "User",
          "impersonator": "Application"
        }
      ],
      "isDisabled": false,
      "origin": "Application",
      "permissionId": "place a NEW GUID here",
      "resourceScopeType": "Personal",
      "userConsentDescription": "Allow the application full access to the service on your behalf",
      "userConsentDisplayName": "Have full access to the service"
    }
  ]

We can update the file and then upload it to apply the new permission settings, enabling AAD to manage access to this API just like any other permission. You can read more about Azure AD impersonation and permission settings on MSDN. Note that you need to choose a NEW GUID for the permission id.

Now we need to configure our native mobile app in Azure AD to have access to our Web API. This is very simple and is shown in the screenshot below. In the list of permissions on the left, we now have more permissions that we can grant to the mobile app. Whatever name you gave your mobile app will appear there, along with the type of permissions that you have configured. In my case, I named it MobileServices1 and that is what appears there.

Azure AD app permission settings

Token Expiry and Caching

Setting the permissions and configuration above allows our mobile app to authenticate users and manage access to the API. This access is governed by the token that Azure AD issues when a user authenticates successfully. If the mobile app interacts with the API frequently, then we always need a valid token for our requests. The question is how to keep a valid access token in the native mobile app.

The answer depends on what you are trying to do: if you are implementing a highly secure mobile app you might want to always check with Azure and maybe ask the user to log in every time the token expires. AAD access tokens expire after one hour by default. This means the default behaviour would be to ask the user to log in every hour, which is OK for some mobile apps but certainly not the normal flow you see in many apps. So what should we do if we wanted to ask the user to log in only once, or only occasionally (say, once every 3 months)? We would then need to manage the access tokens and refresh them automatically.

ADAL comes with the TokenCache class, which is designed to manage caching of tokens so that consumers don’t need to go back to Azure AD every time the mobile app asks for a new token. Unfortunately for us, persistent caching of tokens is not yet supported in the release this post is based on (ADAL 3.0.11). This means ADAL will only cache the token in memory, so once the app restarts (or is backgrounded in iOS) you lose your access token. Therefore, we need to manage the token and refresh it on our own in the background.

There are many ways you could do this; a simple one is to always check token validity before accessing the API. If our token isn’t valid, we can check for the refresh token. Azure AD gives us a refresh token to use when our access token is about to expire: when we ask AAD for a new token and provide this refresh token, AAD will give us a new token without asking the user to re-authenticate.

By default, Azure AD refresh tokens are valid for 14 days, which means that as long as we refresh the actual token at least once in this period, we do not need to re-authenticate. Another security constraint that Azure AD imposes is that the access token can only be refreshed for a maximum period of 90 days (i.e. 90 days after the initial issuance of the access and refresh tokens, the end user will have to sign in again).

Alright, time to write some code. The code snippet below shows how you could structure your API calls from your mobile app. Notice that we always call either AcquireToken() or AcquireTokenByRefreshToken() before every call, to ensure that we always have a valid token before we send a request to the API. This could be optimised further by checking whether the access token is still valid and, if so, skipping the token-refresh call; I will leave this as an exercise for you to implement. In the next release of ADAL the TokenCache will hopefully be implemented, and then we will not need to do this.


public async Task<string> GetResultFromWebApi(string apiCallPath)
{
	var token = await AcquireOrRefreshToken ();
	using (var httpClient = new HttpClient())
	{
		httpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", token);
		HttpResponseMessage response = await httpClient.GetAsync(apiBaseAddress + apiCallPath);
		return await response.Content.ReadAsStringAsync();
	}
}

private async Task<string> AcquireOrRefreshToken()
{
	var refreshToken = _storage.Get<string> (Constants.CacheKeys.RefreshToken);
	AuthenticationResult authResult = null;

	if (string.IsNullOrEmpty (refreshToken)) 
	{
		authResult = await _authContext.AcquireTokenAsync (
			resourceId, clientId, new Uri (redirectUrl), new AuthorizationParameters (_controller), UserIdentifier.AnyUser, null);

	} 
        else 
        {
		authResult = await _authContext.AcquireTokenByRefreshTokenAsync (refreshToken, clientId);
	}

	// when calling refresh token, the UserInfo would be null
	if (authResult.UserInfo != null)
		_storage.Save<string> (Constants.CacheKeys.Email, authResult.UserInfo.DisplayableId);

	_storage.Save<string> (Constants.CacheKeys.Token, authResult.AccessToken);
	_storage.Save<string> (Constants.CacheKeys.ExpireOn, authResult.ExpiresOn.ToString("dd MMM HH:mm:ss"));
	_storage.Save<string> (Constants.CacheKeys.RefreshToken, authResult.RefreshToken);

	return authResult.AccessToken;
}

That’s it! Now your mobile app will keep interacting with the API using a valid token. And if you are concerned about what happens when the user account is disabled or the password is changed, then well done, you are following the topic properly. Azure AD will either try to re-authenticate the user (by showing the login screen) or return an error, so we need to add some error handling to our code to catch these exceptions and handle them properly in the mobile app.
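A minimal sketch of that handling, reusing the names from the snippet above (AdalException is the base exception type in ADAL .NET; how you surface the failure to the user is up to you):

try
{
	authResult = await _authContext.AcquireTokenByRefreshTokenAsync (refreshToken, clientId);
}
catch (AdalException)
{
	// the refresh token was rejected (expired, revoked, or the account
	// is disabled) - fall back to an interactive sign-in
	authResult = await _authContext.AcquireTokenAsync (
		resourceId, clientId, new Uri (redirectUrl),
		new AuthorizationParameters (_controller), UserIdentifier.AnyUser, null);
}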

I hope you find this blog post useful and I would love to hear from you if you have a question or comment. In the next blog post, we will look at how we could use the same token for accessing multiple resources registered in Azure AD.

This blog post is the second in a series that covers Azure Active Directory Single Sign-On (SSO) authentication in native mobile applications.

  1. Authenticating iOS app users with Azure Active Directory
  2. How to Best handle AAD access tokens in native mobile apps (this post)
  3. Using Azure SSO access token for multiple AAD resources from native mobile apps
  4. Sharing Azure SSO access token across multiple native mobile apps.

Implementing Azure Active Directory SSO (Single Sign-On) in Xamarin iOS apps

This blog post is the first in a series that covers Azure Active Directory Single Sign-On (SSO) authentication in native mobile applications.

  1. Authenticating iOS app users with Azure Active Directory (this post)
  2. How to Best handle AAD access tokens in native mobile apps
  3. Using Azure SSO access token for multiple AAD resources from native mobile apps
  4. Sharing Azure SSO access token across multiple native mobile apps.

Brief Start

Two weeks ago the Azure AD (AAD) team released the Active Directory Authentication Library (ADAL) to enable developers to implement SSO functionality leveraging AAD. This was great news for me, as I have a few clients who are keen on having this feature in their apps, and in this blog post I will share my experience in implementing Azure AD SSO in Xamarin.iOS apps.

Mobile Service vs AAD Authentication

First things first: if you are using Azure Mobile Services, then authentication can be handled for you by Azure Mobile Services itself. All you need to do is pass a reference to your RootViewController to the library and call LoginAsync() as follows:


// Initialize the Mobile Service client with your URL and key
client = new MobileServiceClient (applicationURL, applicationKey, this);
user = await client.LoginAsync (_controller, MobileServiceAuthenticationProvider.WindowsAzureActiveDirectory);

This will return a User object that you can use for future calls. This is slightly different from what we are going to talk about in this blog post. For further details on handling Azure Mobile Services authentication, you can check out this tutorial from the MSDN library.

Azure Active Directory Authentication

This blog post shows you how to authenticate users against Azure AD, which can be useful in many cases. You may have a mobile app and only want users in Active Directory (on-premises or Azure) to use it, or you might have an API or a website and want to share some functionality with your mobile users through a native mobile app. In both cases, you could use ADAL to let Azure AD handle the user authentication for you. This is quite handy for the following reasons:

  1. Less code development and maintenance for you as Azure handles it by itself
  2. Guaranteed functionality and fewer bugs, as it is a well structured/tested library from a trusted source (the Azure team)
  3. No need to further update your APIs when Azure API/SDKs are changed
  4. Extra features like token caching and token refresh operations.

Lack of Documentation

I have had a few issues with the documentation on Azure when trying to implement SSO on Xamarin.iOS – the Azure documentation refers to classes and methods that do not exist in ADAL. As another example, this tutorial seems to have been taken from the native iOS implementation of Azure SSO without being updated to match the release of ADAL. Anyway, enough complaining; for this reason we have this blog post. :)

Implementation

To implement SSO, I will assume that I have a native app and I want to authenticate users against my AAD before they can use this app. For that, I first need to register my native app in Azure AD so it shows as below.

Adding an App to Azure AD

  1. Click Add at the bottom of the screen to open the Add Application dialog
  2. Provide a name for your application registration so you can find it later. Click Next
  3. Provide a unique Redirect URI for your Application. Click the Finish (tick) button. Note that this must be a valid URI and is how Azure AD will identify your application during authentication requests.

Once we have created our application, let’s get the following details:

Authority
This represents the authority of your AAD, and it follows the format of https://login.windows.net/your-tenant-name.

ClientId
This is the unique client identifier of the native mobile app that we just created. See the screenshot below.

Azure AD app configurations

Redirect Uri
This is the unique redirect Id of the app that we just created as shown in the screenshot above.

Resource Id
This represents the resource URI of the app that we are trying to access. So if we are trying to access some functionality of a web API that is also registered with AAD, then this is the web app’s ID in Azure AD.

Once we have all the info above, all we need to do is write a few lines of code. First we should install the ADAL NuGet package in our solution. At the time of writing, the NuGet package version is 3.0.1102… and is in pre-release, which means you need to allow pre-release packages in your NuGet settings.
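For example, from the Package Manager Console in Visual Studio:

Install-Package Microsoft.IdentityModel.Clients.ActiveDirectory -Pre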

Time to write some code. The small snippet below shows how to authenticate the user and get a token.
For the sake of this blog post, we only need to authenticate the user to Azure; we will get the token and save it for future use. I have another post that talks about using this token – you can find it here.

        
const string authority = "https://login.windows.net/your-tenant-name";
const string resourceId = "your-resource-id";
const string clientId = "your-native-app-client-id-on-AAD";
const string redirectUrl = "your-awsome-app-redirect-url-as-on-AAD";

QSTodoService ()
{
    // this line is very important as it enables the ADAL library to do all 
    // IoC injection and other magic based on the platform that you are in. 
    AdalInitializer.Initialize ();
}

public async Task AsyncInit(UIViewController controller, MySimpleStorage storage)
{
    _storage = storage;
    _controller = controller;

    _authContext = new AuthenticationContext (authority);
}

public async Task<string> RefreshTokens()
{
    var refreshToken = _storage.Get<string> (Constants.CacheKeys.RefreshToken);
    AuthenticationResult authResult = null;
    var result = "Acquired a new Token"; 

    if (string.IsNullOrEmpty (refreshToken)) 
    {
       authResult = await _authContext.AcquireTokenAsync (
                    resourceId, clientId, new Uri (redirectUrl), new AuthorizationParameters (_controller), UserIdentifier.AnyUser, null);

    } else {
       authResult = await _authContext.AcquireTokenByRefreshTokenAsync (refreshToken, clientId);
       result = "Refreshed an existing Token";
    }

    if (authResult.UserInfo != null)
       _storage.Save<string> (Constants.CacheKeys.Email, authResult.UserInfo.DisplayableId);

    _storage.Save<string> (Constants.CacheKeys.Token, authResult.AccessToken);
    _storage.Save<string> (Constants.CacheKeys.RefreshToken, authResult.RefreshToken);

    return result;
}

As you can see, it is very simple. You could keep a reference to your AuthenticationContext in your app; in fact, it is recommended that you do so for later use, as the aggressive GC on MonoTouch might dispose of it quickly.

Note that I am storing the token and the refresh token as mentioned above, but you do not need to do that if you are only using the library to authenticate once. In the next blog post, I will show how you could manage these tokens for further interaction with another app that is also using AAD for authentication.

I hope you found this useful and I would love to hear from you if you have any feedback. I will try to upload the source code of this sample to GitHub and share the link.

This blog post is the first in a series that covers Azure Active Directory Single Sign-On (SSO) authentication in native mobile applications.

  1. Authenticating iOS app users with Azure Active Directory (this post)
  2. How to Best handle AAD access tokens in native mobile apps
  3. Using Azure SSO access token for multiple AAD resources from native mobile apps
  4. Sharing Azure SSO access token across multiple native mobile apps.

Installing WordPress in a Sub-Folder on Azure Websites

This blog post shows you how to install a WordPress website in a sub-folder of your Azure website. Now somebody might ask why you would need to do that, and that is a good question, so let me start with the reasons:

Why do it this way?

Assume that you have a website and you want to create a blog section. This is a very common practice; most companies nowadays have a blog section on their website (which replaces the old “news” page). To do that, we would need to either develop a blogging section of our website ourselves, or use a standard blogging engine, something like WordPress. Let’s say we agreed to use WordPress, as it is quick and easy and is becoming the de facto engine for blogging. So how do we install it?
Well, we could use a sub-domain. Say my website is hasaltaiar.com.au; I could create a sub-domain called blog.hasaltaiar.com.au and point this sub-domain at a WordPress website. This would work, and it is good. However, it is not the best option. Ask me why. Did you ask? Never mind, I will answer :). Google and other search engines split domain authority when requests come in via sub-domains. This means that to maintain a better ranking and higher domain authority, it is advised that you have your blog as a sub-module of your app rather than a sub-domain. And this is why we are talking about installing the WordPress blog in a sub-folder.

The Database

To install a WordPress website, we need a MySQL database. Azure gives you ONE free MySQL database, which you can create from the Azure Store (Marketplace); there is a good tutorial on how to do that here. If you have exhausted your quota and have already created a MySQL database before, you can either pay for a new database through the Azure Marketplace or get a free one outside of Azure. I had this issue when I was creating this blog, as we had used our ONE free MySQL database for another website, so I went to the ClearDB website and created a new free account with a free MySQL database. This is the same provider as for the MySQL databases in the Azure Marketplace, so you get a similar service, for free. One way or another, we will assume that you have a MySQL database for this website. Have the connection details for this database handy, as we will need them later for installation.

Changes to Azure Website Configuration

In order to be able to install a WordPress website, you need to make two small changes to your Azure website:
1. You need to enable IIS to run PHP. Azure websites support multiple languages out of the box; you just need to enable the languages you need. By default, a site is configured to run .NET 4.5, and you can enable any other languages you need, like Java, PHP, Python, etc. We need to enable PHP 5.4 or 5.5 as in the screenshot below.

Azure Website Supported Runtime configuration

2. We also need to ensure that our Azure website has a list (or at least one entry) of default document types. This is also part of the configuration of your Azure website, and it tells IIS which document to look for when a user navigates to any path/folder. You can have anything in this list (as long as the entries are valid documents) and in any order you want. The important entry for us in this case is index.php, which is the default page for WordPress websites.

Azure website default document list
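If you prefer to keep this setting in your deployment rather than in the portal, the same default document entry can be expressed in a web.config file in the site root. A minimal sketch (only the relevant section shown):

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.webServer>
    <defaultDocument>
      <files>
        <!-- ensure IIS serves the WordPress entry page -->
        <add value="index.php" />
      </files>
    </defaultDocument>
  </system.webServer>
</configuration>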

WordPress Install

You need to download the latest version of WordPress from wordpress.org; at the time of writing this blog post, the latest version is 4.0.1. Once you have the files downloaded, change the folder name to blog and upload it directly to your website, so that the folder sits under your Azure website at /your-website/blog. The files can sit under your wwwroot folder as in the screenshot below.

Azure Website Files hierarchy

When the file upload completes, we can navigate to your-website-root/blog/wp-admin. This starts the WordPress website. The WordPress engine will detect that this is the first time it has run, and it will prompt you with the installation wizard. The wizard is very simple: only a few steps to set the website title, URL, language, etc. The main step is adding the database details, which you can take from the database you created earlier; just copy the connection details in and you should be set. After adding the connection details, you will see a web page saying the installation is complete, and it will ask you to log in (with your newly created credentials) to customise your blog.

That’s it, simple and easy. The method shown above saves you from having to maintain two different websites and gives you the flexibility of having your own self-hosted WordPress site. I hope you find this useful and I would love to hear your thoughts and feedback.

MIM and Privileged Access Management

Recently Microsoft released the Microsoft Identity Manager 2015 (MIM) Community Technology Preview (CTP). Those expecting a major revision of the FIM product should brace themselves for disappointment: the MIM CTP is more like a service release of FIM. MIM CTP v4.3.1484.0 maintains the existing architecture of the FIM Portal (still integrated with SharePoint), FIM Service, and FIM Synchronisation Service, as well as the separate FIM Service and FIM Sync databases. Installation of the CTP is almost identical to FIM 2010 R2 SP1, including the same woes with SharePoint 2013 configuration. The MIM CTP package available from Microsoft Connect contains an excellent step-by-step guide to install and configure a lab to test out PAM, so I won’t repeat that here.

In brief, the CTP adds the following features to FIM.

1. Privileged Access Management

This feature integrates with new functionality in Windows Server 10 Technical Preview to apply expiration to membership in Active Directory groups.

2. Multi Factor Authentication

FIM Self-Service Password Reset can now use Azure Multi-Factor Authentication as an authentication gate.

3. Improvements to Certificate Management

Incorporation of a Modern UI App, integration with ADFS, and support for multi forest deployment.

After installation, the first thing that is evident is how much legacy FIM branding is maintained throughout the CTP product – yes, this is MIM, please ignore the F word!:

 Privileged Access Management

For this blog I’ll focus on Privileged Access Management (PAM), which looks to be the biggest addition to the FIM product in this CTP. PAM is actually a combination of new functionality within Windows Server 10 Technical Preview Active Directory Domain Services (ADDS) and MIM. MIM provides the interface for PAM role management (including PowerShell cmdlets), whilst Windows Server 10 ADDS adds the capability to apply a timeout to group membership changes made by MIM.

PAM requests are available from the MIM Portal – however for the CTP lab, PowerShell cmdlets are used for PAM requests. These cmdlets are executed under the context of the user wishing to elevate their rights. For the CTP lab provided, there is no approval applied to PAM requests, so users could continually elevate their rights via repeated PAM requests – however this would be trivial to address via an approval workflow on PAM request creation within the MIM Portal. At this stage there appears to be no way for an administrator to register a user for a PAM role – i.e. requests for PAM roles are made by the end user. The cmdlet usage is covered in detail by the PAM evaluation guide and sketched below.
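As an illustration, a request flow along the lines of the evaluation guide looks roughly like this – the module and cmdlet names are taken from the CTP lab and may change in later builds, and the role display name is a placeholder:

Import-Module MIMPAM

# Discover the PAM roles available to the current user and pick one...
$role = Get-PAMRoleForRequest | Where-Object { $_.DisplayName -eq "ShareAdmins" }

# ...then raise an elevation request for it
New-PAMRequest -Role $role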

The PAM solution outlined for the CTP consists of a separate PAM domain in its own forest, containing the MIM infrastructure. Also present in this domain are duplicates from the production domain of privileged groups and user accounts for staff requiring PAM rights elevation.

The not so secret sauce of PAM is the use of SID history between the duplicate PAM domain groups and privileged production groups. SID history enables user accounts in the PAM domain residing in the duplicate PAM domain groups to access resources within the production domain. For example, a share in production secured by group “ShareAdmins” can be administered by a PAM domain account with membership in the duplicate PAM domain group “PROD.ShareAdmins” containing the same SID (in the SIDHistory attribute) as the production “ShareAdmins” group. When the PAM user account authenticates and attempts access to the production resource, it presents membership in both the duplicate PAM domain group and the production group.

PAM controls access to production domain resources by controlling membership of the duplicate PAM domain groups. To elevate rights, users authenticate with their duplicate PAM domain account, to which MIM has granted temporary membership of the duplicated privileged groups within the PAM domain, and then access production domain resources.

What is new in PAM and Windows Server 10 is the ability to configure a timeout for group memberships. When PAM adds users to groups, a Time-to-Live (TTL) is also applied; when this TTL expires, the membership is removed. In the PAM solution, MIM controls the addition of users to PAM groups and the application of the TTL value, while Windows Server 10 ADDS performs the removal independently of MIM – MIM does not have to be functional for removal to occur.

TTL capability is enabled in Windows Server 10 ADDS via the following command line. For the technical preview a schema modification is also required – refer to the PAM evaluation guide in the MIM CTP distribution package:

Enable-ADOptionalFeature "Expiring Links Feature" -Scope ForestOrConfigurationSet -Target <PAM Domain FQDN>

During my testing of the PAM CTP, everything worked as expected: MIM added users to PAM-managed groups with a TTL, and Windows Server 10 ADDS duly removed those users from the PAM groups when the TTL expired. However, the fundamental issue with group membership retention in user access tokens remains, i.e. a user must re-authenticate for group changes to apply to their access token. So any elevated sessions a user has open essentially retain their elevated rights long after PAM has removed those rights. PAM does, however, assist with segregation of duties and auditing, and addresses issues where there is a proliferation of accounts with high levels of access.

All in all, the MIM CTP is a bit of a mixed bag. I am surprised to see the changes to Certificate Management prioritised above native integration of the Azure Active Directory (AAD) Management Agent and implementation of AAD password synchronisation functionality. The PAM implementation is quite heavy architecturally, e.g. an additional forest, a two-way forest trust, and disabling SID History filtering. It will be interesting to see how the product develops in future CTPs; however, with a MIM product release scheduled for the first half of 2015, I don’t anticipate much more deviation from the classic FIM architecture.

Getting Started with Office 365 Video

On Tuesday November 18 Microsoft started rolling out Office 365 Video to customers who have opted in to the First Release programme (if you haven’t, you will need to wait a little longer!)

Kloud has built video solutions on Office 365 in the past, so it’s great to see Microsoft deliver this as a native feature of SharePoint Online – and one that leverages the underlying Azure Media Services capabilities for video cross-encoding and dynamic packaging.

In this blog post we’ll take a quick tour of the new offering and show a simple usage scenario.

Basic Restrictions

In order to have access to Office 365 Video the following must be true for your Office 365 tenant:

  • SharePoint Online must be part of your subscription and users must have been granted access to it.
  • Users must have E1, E2, E3, E4, A2, A3 or A4 licenses.
  • There is no external sharing capability – you aren’t able to serve video to users who are not licensed as per the above.

There may be some change in the licenses required in future, but at launch these are the only ones supported.

Note that you don’t need to have an Azure subscription to make use of this Office 365 feature.

Getting Started

When Video is made available in your tenant it will show in either the App Launcher or Office 365 Ribbon.

Video on App Launcher

Video on Office 365 Ribbon

Like any well-managed intranet, it’s important to get the structure of your Channels right. At this stage there is no functionality to create sub-channels, so how you create your Channels will depend primarily on the target audience, as a Channel is a logical container that can be access controlled like any standard SharePoint item.

There are two default Channels out-of-the-box but let’s go ahead and create a new one for our own use.

Options when creating a Channel

Once completed we are dropped at the Channel landing page, where we can upload content or manage settings. I’m going to modify the Channel I just created and restrict who can manage its content by adding one of my Kloud colleagues to the Editors group (shown below).

Setting Permissions

Now we have our Channel configured, let’s add some content.

I click the Upload option on the Channel home page, select an appropriate video (I’ve chosen an MP4 created on my trusty Lumia 920) and drag and drop it onto the upload form. The file size limits match the standard SharePoint Online ones (hint: your files can be pretty large!).

When you see the page below, make sure you scroll down and set the video title and description (note: these are really important as they’ll be used by SharePoint Search and Delve to index the video).

Upload Process

Then you need to wait… the time to complete the cross-encoding depends on the length of the video you’ve uploaded.

Once it’s completed you can play the video back via the embedded player and, if you want, cross-post it to Yammer using the Yammer sidebar (assuming you have Yammer and an active session). You also get a preview in search results and can play the video right from the preview (see below).

Video Preview

These are very early days for Office 365 Video – expect to see much richer functionality over time based on end-user feedback.

The Office 365 Video team is listening to feedback and you can provide yours via their UserVoice site.

IoT – Solar & Azure

Ever since we got our solar system installed about two years ago, I’ve been keeping track of the total power generated by the system. Every month I would write down the totals and add them to my Excel spreadsheet. Although it’s not much work, it’s still manual work… yes, all of 2 minutes every month.

So when the whole “Internet of Things” discussion started at our office (see Matt’s blog “Azure Mobile Services and the Internet of Things”) I thought it would be a good opportunity to look at doing this using Azure – even if it was only to prove the IoT concept. The potential solution should:

  1. Use a device which connects to the solar inverter to read its data via RS232.
  2. This device needs to be powered by a battery as no power outlet is close to the inverter.
  3. Upload data to Azure without having to rely on a computer running 24/7 to do this.
  4. Use Azure to store and present this data.

Hardware

The device I built is based on the Arduino Uno and consists of the following components:

Arduino UNO R3
With a little bit of programming these devices are perfectly capable of retrieving data from various data sources; they are small in size, expandable with various libraries, add-on shields and break-out boards, and can be battery powered. Having the inverter on a side of the house with no power outlet close by made battery power a key requirement.
MAX3232 RS232 Serial to TTL Converter module
As the Arduino Uno doesn’t come with any serial connectors this module adds a DB9 connector to the board. Now the Arduino can be connected to the inverter using a null modem cable.
Adafruit CC3000 WiFi Shield with Onboard Ceramic Antenna
Some of the existing solutions that send inverter data to a website (e.g. PVOutput) or log it on a computer all rely on a machine running 24/7, which is one of the things I definitely wanted to avoid. I ended up getting this WiFi shield which, after soldering it on top of the Arduino board, turns the Arduino into a WiFi-enabled device that can send data to the internet directly. After adding the required libraries and credentials to my sketch, access to a wireless router is all it needs for basic internet access. Even though it sits quite a bit away from the wireless router, connectivity is no issue.
The Arduino Uno unit connected to the inverter.

Azure

To store and/or display any of the information the Arduino is collecting, an Azure subscription is required. For this project I signed up for a free trial. Once the subscription is sorted, the following Azure services have to be set up:

  • Cloud service – running the worker roles.
  • Storage account – hosting the table storage.
  • Service Bus – message queue for the Arduino.
  • Website – for displaying data in (near) real time.

Putting it all together

So how do all these different components fit together?

The Arduino connects to the inverter via a null-modem cable. Reading data from it is achieved by adding a MODBUS library to the Arduino sketch, which enables the Arduino to read (and write) data from MODBUS-enabled devices (MODBUS is an industrial comms standard).
The sketch runs every 30 minutes, and only after a successful connection to the inverter (which shuts down when there is not enough sunlight) will it bring up the wireless connection and send the data to the TCP listener worker role in Azure.
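
You don’t need the hardware to exercise the Azure side of this. A minimal PowerShell sketch – the endpoint name, port and payload format below are assumptions for illustration only – can stand in for the Arduino and push a sample reading at the TCP listener:

# Hypothetical endpoint of the TCP listener worker role
$endpoint = "mysolarservice.cloudapp.net"
$port = 10100

# Sample payload mimicking an inverter reading (format is illustrative)
$payload = "2014-11-18T12:30:00;EnergyToday=8.4;PowerNow=2150"

$client = New-Object System.Net.Sockets.TcpClient($endpoint, $port)
$stream = $client.GetStream()
$bytes = [System.Text.Encoding]::ASCII.GetBytes($payload)
$stream.Write($bytes, 0, $bytes.Length)   # send the reading
$stream.Close()
$client.Close()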

In Azure, a Service Bus message queue was created to hold all incoming data packets sent from the Arduino. A storage table was also created to permanently store the data received from the Arduino. The great thing with the storage table is that there is no need to create a table schema before using it; creating the “placeholder” is enough!
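
For reference, here is a hedged sketch of provisioning the queue and table with today’s Az PowerShell module (all resource names are placeholders; the original project used the classic portal):

# Resource group to hold everything
New-AzResourceGroup -Name "solar-rg" -Location "australiaeast"

# Service Bus namespace plus the queue the Arduino will feed
New-AzServiceBusNamespace -ResourceGroupName "solar-rg" -Name "solar-ns" -Location "australiaeast"
New-AzServiceBusQueue -ResourceGroupName "solar-rg" -NamespaceName "solar-ns" -Name "inverter-queue"

# Storage account and table – note no schema is defined up front
$sa = New-AzStorageAccount -ResourceGroupName "solar-rg" -Name "solarstore01" -Location "australiaeast" -SkuName Standard_LRS
New-AzStorageTable -Name "InverterData" -Context $sa.Context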

Using Visual Studio, two worker roles were created:

  • A TCP listener which “listens” for any device sending information to the specified endpoints. If a message from the Arduino is received it will write it onto the message queue.

Using Service Bus Explorer you can see the individual messages arriving in the message queue.

  • A data writer which checks the message queue for new messages. If a new message has arrived, the message will be read, its content stored in the storage table and the message deleted.

Finally, a simple ASP.NET MVC website is used to display data from the storage table in near real-time. The website shows how many kWh have been generated during the current day and how that day compares to previous days.
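
As a rough illustration of what the website reads, the same rows can be pulled back out of table storage with the AzTable PowerShell module (the table name and partition key scheme are assumptions carried over from the sketch above):

# Requires the Az.Storage and AzTable modules
$sa = Get-AzStorageAccount -ResourceGroupName "solar-rg" -Name "solarstore01"
$table = (Get-AzStorageTable -Name "InverterData" -Context $sa.Context).CloudTable

# Fetch the current day's readings (one partition per day, illustratively)
Get-AzTableRow -Table $table -PartitionKey (Get-Date -Format "yyyy-MM-dd")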

Energy Today – stats for the current day.

Website display.

Conclusion

This IoT project was a good opportunity to have a play with various Azure components, using multiple worker roles, message queues and the like. It probably sounds like overkill when just one device sends one message every 30 minutes, but a similar setup can be used in larger environments, such as factories where multiple devices send dozens of messages per minute.