Azure SQL Database – Dynamic Data Masking Walkthrough

Originally posted on siliconvalve:

Microsoft recently announced the public preview of the Dynamic Data Masking (DDM) feature for Azure SQL Database, which holds a lot of potential for on-the-fly data obfuscation that would traditionally have required either custom business logic or third-party systems.

In this post I am going to take the opportunity to walk through how we can set this up for an existing database. For the purpose of this post I am going to utilise the AdventureWorks sample database for Azure SQL Database which you can download from Codeplex.

Our Azure SQL Database Server

Firstly we need to understand our Azure SQL Database Server instance configuration. At time of writing any existing Azure SQL Database Server or one created without specifying a Version flag will use the “2.0” Azure SQL Database engine which does not support DDM.

If this is your scenario then you will need to upgrade your server to…


Mule ESB DEV/TEST environments in Microsoft Azure

Agility in delivery of IT services is what cloud computing is all about. Week in, week out, projects on-board and wind up, developers come and go. This places enormous stress on IT teams with limited resourcing and infrastructure capacity to provision developer and test environments. Leveraging public cloud for integration DEV/TEST environments is not without its challenges though. How do we develop our interfaces in the cloud yet retain connectivity to our on-premises line-of-business systems?

In this post I will demonstrate how we can use Microsoft Azure to run Mule ESB DEV/TEST environments using point-to-site VPNs for connectivity between on-premises DEV resources and our servers in the cloud.

MuleSoft P2S

Connectivity

A point-to-site VPN allows you to securely connect an on-premises server to your Azure Virtual Network (VNET). Point-to-site connections don’t require a VPN device. They use the Windows VPN client and must be started manually whenever the on-premises server (point) wishes to connect to the Azure VNET (site). Point-to-site connections use secure socket tunnelling protocol (SSTP) with certificate authentication. They provide a simple, secure connectivity solution without having to involve the networking boffins to stand up expensive hardware devices.

I will not cover the setup of the Azure point-to-site VPN in this post; there are a number of good articles already covering the process in detail, including this great MSDN article.

A summary of steps to create the Point-to-site VPN are as follows:

  1. Create an Azure Virtual Network (I named mine AUEastVNet and used address range 10.0.0.0/8)
  2. Configure the Point-to-site VPN client address range  (I used 172.16.0.0/24)
  3. Create a dynamic routing gateway
  4. Configure certificates (upload root cert to portal, install private key cert on on-premise servers)
  5. Download and install client package from the portal on on-premise servers

Once we have established the point-to-site VPN we can verify connectivity by running ipconfig /all and checking that we have been assigned an IP address from the range we configured on our VNET.

IP address assigned from P2S client address range

Testing our Mule ESB Flow using On-premises Resources

In our demo, we want to test the interface we developed in the cloud with on-premises systems just as we would if our DEV environment was located within our own organisation.

Mule ESB Flow

The flow above listens for HL7 messages using the TCP-based MLLP transport and processes them using two async pipelines. The first pipeline maps the HL7 message into an XML message for a LOB system to consume. The second writes a copy of the received message for auditing purposes.

MLLP connector showing host running in the cloud

The HL7 MLLP connector is configured to listen on port 50609 of the network interface used by the Azure VNET (10.0.1.4).

FILE connector showing on-premise network share location

The first FILE connector is configured to write the output of the XML transformation to a network share on our on-premises server (across the point-to-site VPN). Note the IP address used is the one assigned by the point-to-site VPN connection (from the client IP address range configured on our Azure VNET).

P2S client IP address range

To test our flow we launch an MLLP client application on our on-premises server and establish a connection across the point-to-site VPN to our Mule ESB flow running in the cloud. We then send an HL7 message for processing and verify we receive an HL7 ACK and that the transformed XML output message has also been written to the configured on-premises network share location.

Establishing the connection across the point-to-site VPN…

On-premises MLLP client showing connection to host running in the cloud

Sending the HL7 request and receiving an HL7 ACK response…

MLLP client showing successful response from Mule flow

Verifying the transformed xml message is written to the on-premises network share…

On-premises network share showing successful output of transformed message

Considerations

  • Connectivity – Point-to-site VPNs provide a relatively simple connectivity option that allows traffic between your Azure VNET (site) and your nominated on-premises servers (the point inside your private network). You may already be running workloads in Azure and have a site-to-site VPN or MPLS connection between the Azure VNET and your network, in which case you do not need to establish the point-to-site VPN connection. You can connect up to 128 on-premises servers to your Azure VNET using point-to-site VPNs.
  • DNS – To provide name resolution of servers in Azure to on-premises servers, or of on-premises servers to servers in Azure, you will need to configure your own DNS servers within the Azure VNET. The IP address of on-premises servers will likely change every time you establish the point-to-site VPN, as the IP address is assigned from the client address range configured on the Azure VNET.
  • Web Proxies – SSTP does not support the use of authenticated web proxies. If your organisation uses a web proxy that requires HTTP authentication then the VPN client will have issues establishing the connection. You may need the network boffins after all to bypass the web proxy for outbound connections to your Azure gateway IP address range.
  • Operating System Support – Point-to-site VPNs only support the use of the Windows VPN client on Windows 7/Windows Server 2008 R2 64-bit versions and above.

Conclusion

In this post I have demonstrated how we can use Microsoft Azure to run a Mule ESB DEV/TEST environment using point-to-site VPNs for simple connectivity between on-premises resources and servers in the cloud. Provisioning integration DEV/TEST environments on demand increases infrastructure agility, removes those long lead times whenever projects kick off or resources change, and enforces a greater level of standardisation across the team, all of which improves the development lifecycle, even for integration projects!

Sending SMS Through PowerShell with Telstra’s New API

Recently, Telstra released their first public API, which in true telco fashion leverages an existing product in their stable: SMS. The service allows anyone with a Telstra t.dev account (get one here) to get an API key which will allow you to send up to 100 messages per day, 1000 per month to Australian mobiles. Obviously, this is going to be great for anyone wishing to use a free SMS service for labbing, testing, or sending your buddies anonymous cat facts.

I’m not so much a dev, so the first thing I wanted to do was to test this out using PowerShell. Using PowerShell, I get to look like I’m doing something super-important whilst I send my buddies cat facts. The following is the code I used to make this happen.

First, we want to get ourselves an access token, so we can auth to the service.

$app_key = "Th1SiSn0TreAllYmYAppK3ybUtTHanKsAnyW4y"
$app_secret = "n0rmYS3cr3t"
$auth_string = "https://api.telstra.com/v1/oauth/token?client_id=" + $app_key + "&client_secret=" + $app_secret + "&grant_type=client_credentials&scope=SMS"
$auth_values = Invoke-RestMethod $auth_string

Now that we have an auth token, we can use it to send, receive, and check the status of messages.

# Send SMS
$tel_number = "0488888888"
$token = $auth_values.access_token
$body = "On average, cats spend 2/3 of every day sleeping. That means a nine-year-old cat has been awake for only three years of its life"
$sent_message = Invoke-RestMethod "https://api.telstra.com/v1/sms/messages" -ContentType "application/json" -Headers @{"Authorization"="Bearer $token"} -Method Post -Body "{`"to`":`"$tel_number`", `"body`":`"$body`"}"
$sent_message

At this point, I receive an SMS to my phone, which I can reply to.

telstraSMS_reply

The message can also be queried to check its delivery status and whether it has been replied to, as below:

# Get Message Status
$messageid = $sent_message.messageId
$message_status = Invoke-RestMethod "https://api.telstra.com/v1/sms/messages/$messageid" -Headers @{"Authorization"="Bearer $token"}
$message_status
# Get Message Response
$message_response = Invoke-RestMethod "https://api.telstra.com/v1/sms/messages/$messageid/response" -Headers @{"Authorization"="Bearer $token"}
$message_response

Executing the above code gives me the following:

telstraSMS_powershell

Now obviously, you can wrap all these up in some functions, pass in external parameters, strap them into PowerShell workflows in FIM, incorporate them into SSPR, and just about anything else you can think of (in your labs). There are some caveats to using the service, some obvious of course:

  • It’s SMS, so a 160 character limit applies
  • You can send only one message at a time
  • The service is not intended for large volumes of messages
  • 100 messages per day/1000 per month limit
  • The service is in beta
  • Telstra cannot guarantee delivery once the message is passed to another telco
  • Australian mobiles only

Initially in my testing, I found messages sat in the state of “SENT” and would not update to “DELIVERED”. After some general fist waving and mutterings about beta services, I rebooted my phone and the messages I had queued came through. Although I have had no issue with SMS delivery in the past, I’m happy to put this down to the handset bugging out. In all subsequent testing, the messages came through so quickly that my phone buzzed immediately after hitting enter on the command.

I hope the code snippets provided help you out with spinning this up in your labs, but please check the Telstra T’s and C’s before sending out some informative cat facts.

 

Command and control with Arduino, Windows Phone and Azure Mobile Services

In most of our posts on the topic of IoT to date we’ve focussed on how to send data emitted from sensors and devices to centralised platforms where we can further process and analyse this data. In this post we’re going to have a look at how we can reverse this model and control our ‘things’ remotely by utilising cloud services. I’m going to demonstrate how to remotely control a light emitting diode (LED) strip with a Windows Phone using Microsoft Azure Mobile Services.

To control the RGB LED strip I’m going to use an Arduino Uno, a breadboard and some MOSFETs (a type of transistor). The LED strip will require more power than the Arduino can supply, so I’m using a 9V battery as a power supply. This needs to be separated from the Arduino power circuit, which is why we’re using MOSFET transistors to switch the LEDs on and off.

The Arduino Uno will control the colour of the light by controlling three MOSFETs – one each for the red, blue and green LEDs. The limited programmability of the Arduino Uno means we can’t establish an Azure Service Bus relay connection, or use Azure Service Bus queues. Luckily Azure Mobile Services allow us to retrieve data via plain HTTP.

A Windows Phone App will control the colour of the lights by sending data to the mobile service. Subsequently the Arduino Uno can retrieve this data from the service to control the colour by using a technique called ‘pulse width modulation’ on the red, green and blue LEDs. Pulse width modulation allows us to adjust the brightness of the LEDs by quickly turning a particular LED colour on and off, thus artificially creating a unique colour spectrum.

For the purpose of this example we won’t incorporate any authentication in our application, though you can easily enforce authentication for your Mobile Service with a Microsoft Account by following these two guides:

A diagram showing our overall implementation is shown below.

Command and Control diagram

Mobile service

We will start by creating an Azure Mobile Service in the Azure portal, and for the purpose of this demonstration we can use the service’s free tier, which provides data storage up to 20MB per subscription.

Navigate to the Azure portal and create a new service:

Creating a Mobile Service 1

Next, choose a name for your Mobile Service, your database tier and geographic location. We’ll choose a JavaScript backend for simplicity in this instance.

Creating a Mobile Service 2

Creating a Mobile Service 3

In this example we’ll create a table ‘sensordata’ with the following permissions:

Mobile Service Permissions

These permissions allow us to insert records from our Windows Phone app with the application key, and have the Arduino Uno retrieve data without any security. We could make the insertion of new data secure by demanding authentication from our Windows Phone device without too much effort, but for the purpose of this demo we’ll stick to this very basic form of protection.

In the next section we’re going to create a Windows Phone application to send commands to our mobile service.

Windows Phone Application

To control the colour in a user friendly way we will use a colour picker control from the Windows Phone Toolkit, which can be installed as a NuGet package. This toolkit is not compatible with Windows Phone 8.1 yet, so we’ll create a Windows Phone Silverlight project and target the Windows Phone 8.0 platform as shown below.

Visual Studio Create Project 1

Visual Studio Create Project 2

Next, we’ll install the ‘Windows Phone Toolkit’ NuGet package as well as the mobile services NuGet package:

Install Windows Phone Toolkit Nuget

Install Mobile Services NuGet

For the purpose of this demo we won’t go through all the colour picker code in detail here. Excellent guidance on how to use the colour picker can be found on the Microsoft Mobile Developer Wiki.

The code that sends the selected colour to our mobile service table is as follows.

The event data consists of colour data in the RGB model, separated by semicolons.
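A minimal sketch of what that code can look like is shown below, assuming the Azure Mobile Services managed client SDK (Microsoft.WindowsAzure.MobileServices) and a simple SensorData class that mirrors the table columns; the application key, class and method names are illustrative only.

using System.Threading.Tasks;
using Microsoft.WindowsAzure.MobileServices;

public class SensorData
{
    public string Id { get; set; }
    public string DeviceId { get; set; }
    public string SensorId { get; set; }
    public string EventType { get; set; }
    public string EventData { get; set; }
}

public class ColourSender
{
    // Illustrative only: point this at your own mobile service URL and application key
    private readonly MobileServiceClient _client =
        new MobileServiceClient("https://myiotservice.azure-mobile.net/", "YOUR-APPLICATION-KEY");

    // Called when the user picks a colour; inserts one row into the 'sensordata' table
    public async Task SendColourAsync(byte red, byte green, byte blue)
    {
        var item = new SensorData
        {
            DeviceId = "PhoneEmulator",
            SensorId = "ColorPicker",
            EventType = "RGB",
            EventData = string.Format("{0};{1};{2}", red, green, blue) // e.g. "0;0;255"
        };

        await _client.GetTable<SensorData>().InsertAsync(item);
    }
}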

The complete working solution can be found in this GitHub repository. Make sure you point it to the right Azure Mobile Service and change the Application Key before you use it!

Run the application and pick a colour on the phone as shown below.

Phone ScreenShot

Now that we have a remote control that is sending out data to the Mobile Service it’s time to look at how we can use this information to control our LED strip.

Arduino sketch

In order to receive commands from the Windows Phone app we are going to use OData queries to retrieve the last inserted record from the Azure Mobile Service, which exposes table data via OData out of the box. We can easily get the last inserted record in JSON format via an HTTP GET request to a URL similar to the following:

https://myiotservice.azure-mobile.net/tables/sensordata?$top=1&$orderby=__createdAt%20desc

When we send an HTTP GET request, the following response body will be returned:

[
  {
    "id":"A086CE3F-5FD3-45B6-A967-E0928E3C5A96",
    "DeviceId":"PhoneEmulator",
    "SensorId":"ColorPicker",
    "EventType":"RGB",
    "EventData":"0;0;255"
  }
]

Notice how the colour is set to blue in the RGB data.

The Arduino schematics for the solution:

Arduino Command Control Schematic

For illustrative purposes I’ve drawn a single LED. In reality I’m using an LED strip that needs more voltage than the Arduino can supply, hence the 9V battery is attached and MOSFET transistors are used. Don’t attach a 9V battery to a single LED or it will have a very short life…

The complete Arduino sketch:

When we run the sketch the JSON data will be retrieved, and the colour of the LED strip set to blue:

The Working Prototype!

In this article I’ve demonstrated how to control a low-end IoT device that does not have any HTTPS/TLS capabilities. This scenario is far from perfect, and ideally we want to take different security measures to prevent unauthorised access to our IoT devices and transport data. In a future article I will showcase how we can resolve these issues by using a much more powerful device than the Arduino Uno with an even smaller form factor: the Intel Edison. Stay tuned!

Xamarin Test Cloud – the new kid on the block?

Early last year I was working for one of our customers to find an optimal test solution for their upcoming mobile application. The idea was that it should be heavily automated, efficient and cost-effective. The first observation (and the feeling) I had was that we had very little in the way of tool choices. That was unfortunate, but it was the reality.

Almost a year later, when looking back and reflecting on some of those findings, I can see things have changed immensely. There have been announcements coming from key industry players almost every month. New promises being made! But then again, is it sufficient?

While development of mobile applications has become much simpler, the same is not quite true when it comes to testing them.

I earlier wrote a post on ‘what you should consider’ when you look to adopt a test platform/approach. While it always seems ideal that you go by those suggestions, it is not often possible, partly because you do not have the all-in-one toolset/solution that can tick all the boxes. In the constantly evolving mobile application testing market, there is no gold standard for choosing your test solution either – you have options ranging from feature-rich, flexible open-source platforms to more organised and supported licensed platforms, each with their own pros and cons.

I recently discovered a test solution provided by Xamarin, namely Xamarin Test Cloud. While Xamarin is strong in the application development space, it is a relatively new entrant in the testing space. Regardless, I must say I am quite impressed with what they have so far! While it is still in its infancy and will take some time to compete with the other key players, it might play well within the Xamarin ecosystem.

I will spend some time in this blog on how you can quickly set yourself up to run some tests against the Xamarin Test Cloud. I am consciously not going into the architecture and other details as you can find many of those references on their website.

What is Xamarin Test Cloud?

Xamarin Test Cloud is a web-based cloud service where you can run your tests against a wide range of physical mobile devices that are managed by Xamarin in their lab. As with some other cloud-based service providers, the obvious advantage is that you do not have to worry about the device lab and you only pay for what you use. You can use Xamarin Test Cloud in two different ways:

  1. via the Xamarin.UITest NuGet package (Xamarin-provided API)
  2. via Calabash

How to run tests using Xamarin.UITest?

We will talk about the first option here for the sake of simplicity. Here are my high-level steps.

Step 1

The first thing that you need to do is create a Test project and download the Xamarin.UITest and NUnit NuGet packages.

As a side note, Xamarin tests use the NUnit framework and do not support MSTest yet. The focus on NUnit is because the tests can be run on OS X / Linux boxes, which don’t support MSTest.

Test project setup

Step 2

You can now go ahead and write your tests – in this example, I have used an Android app (.apk) from the Selendroid project.

In order to write tests, you need to use the Xamarin.UITest API that provides you with ways to interact with your AUT (mobile application). Xamarin’s website has detailed documentation on the scripting process and how you can get started writing tests pretty quickly.

Here is how you can set up your application to run against the device or emulator. If you want to debug and run your tests against the local emulator, you can install one from Google or one of the other emulators on the market.

Unit Test Setup Code Snippet
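For reference, a minimal version of that setup code might look something like the following, assuming the Xamarin.UITest API and an .apk that sits alongside the test DLLs (the filename below is illustrative):

using NUnit.Framework;
using Xamarin.UITest;
using Xamarin.UITest.Android;

[TestFixture]
public class Tests
{
    AndroidApp app;

    [SetUp]
    public void BeforeEachTest()
    {
        // When running locally the .apk path is resolved relative to the test DLLs;
        // when submitting to Test Cloud the path in code is not used
        app = ConfigureApp
            .Android
            .ApkFile("aut-test-app-0.12.0.apk")
            .StartApp();
    }
}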

A simple unit test looks like the one below. It taps on a pop-up button and then dismisses the pop-up. You could add assert statements to make it a complete test – something I have not done here!

Sample Unit Test
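In code, such a test might look like the sketch below (the element name is illustrative and belongs to the app under test; adjust it to your own ids):

[Test]
public void ShowAndDismissPopup()
{
    // Tap the button that opens the pop-up (element id is illustrative)
    app.Tap(c => c.Marked("showPopupWindowButton"));
    app.Screenshot("Pop-up displayed");

    // Dismiss the pop-up again
    app.Back();

    // An Assert or WaitForElement here would make this a complete test
}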

Running your test against a local emulator

Once you build the solution, you can run the tests using the standard NUnit interface. The only word of advice is to make sure to set the path of your .apk file properly. Ideally the .apk file should be in the same path as the test DLLs. Note that you do not need to worry about the path in the code when you run against the Test Cloud.

NUnit Test Runner

The Android emulator will load the application and the test framework will run its automation scripts.

Android Emulator

Running your tests against Xamarin Test Cloud

Now you can run the same tests against Xamarin Test Cloud – leveraging hundreds of devices! All you need to do is specify which devices you want your tests to run against.

You need to have a Xamarin subscription to do this (if you don’t have one you can request a trial subscription for 14 days if you want to try it out first).

From the Test Cloud interface once you specify the devices and your test suite, it will provide you with a command that contains the subscription details (an API key), the path to your test DLLs and source path to your application file (.apk). You can run a test using a command line similar to the one below.

test-cloud.exe submit aut-test-app-0.12.0.apk apikeyprovidedbyxamarinxxxyyyyyzzzzzzz --devices devicekeygeneratedbytestcloud --series "master" --locale "en_US" --assembly-dir "C:\XamarinTest\XamarinTest\bin\Debug"

Note: this is run from the path where the Xamarin.UITest NuGet package was downloaded. When you follow the path, you will see ‘test-cloud.exe’ residing under ‘packages\Xamarin.UITest.[0.6.8]\tools’. Running test-cloud.exe from this location executes the command above and connects to Xamarin Test Cloud.

As an example, the same command can be run as below, where I have moved to the directory where test-cloud.exe resides and executed it from there.

Test Cloud run screenshot

The command line will show you the progress of your test run. In this example, I am running my tests against three Android devices in Test Cloud.

Test run results in Test cloud

The Test Cloud infrastructure will track results for you and you can follow them through the Test Cloud web interface as shown below.

Test results in Test Cloud web interface

Once a test run is finished, the Test Cloud dashboard will indicate the completed status and you can drill down to see the results from each of the devices.

Dashboard and test results

Dashboard sample 1

Dashboard sample 2

Dashboard sample 3

Dashboard sample 4

Individual test runs:

You can click on the test runs and find the results for each of the devices. On your left side you should be able to view all your tests and the individual steps:

Individual Test run sample

And when you focus on a single device, you see something like this:

Individual Test run sample

As you can see, this gives us a fair amount of detail about the test run!

The way I see it, Test Cloud gives us three distinct advantages:

  1. Diversity of devices: the ability to run our tests against many devices helps us see the application behaviour on far more devices than we would traditionally have had access to. That to me is a huge win – to be able to compare the behaviour.
  2. Script once: your tests do not need to change for different devices – it is the same code!
  3. Identify the failure points: in mobile application testing, finding the “point of failure” is far more important than seeing what is working. In Test Cloud you instantly get feedback if certain steps do not work on a specific device.

At the same time, the wishlist of missing features is also long. I think Xamarin will have to continue to add new features to keep the offering relevant and distinct from the competition. Examples include easier integration with some of the popular build management systems (it can be done in a somewhat roundabout way in the current version), support for Windows Phone, and a truly cross-platform environment where you can run your Appium (or any other) tests. The power and reach would increase tenfold!

Mobile application testing has become a focal point for the industry, especially with the progress of the Appium project and its alignment with Selenium 3, and we can only hope it gets better every day. If you look back at the past two years, this space has already matured many times over and new features are being released almost every week. But so have the complexity of applications and people’s expectations of them.

The huge year-on-year growth of applications and increasing usage promises an exciting future for mobile applications. This is why it is key for our mobile applications to be well tested, because applications become truly ‘exciting’ when they deliver what you want and not when all you get is “Sorry… something went wrong”!

Kloud develops online learning portal for leading education organisation

Customer Overview

Catholic Education South Australia (CESA) is made up of the South Australian Commission for Catholic Schools (SACCS), Catholic Education Office of South Australia (CEO) and 103 Catholic schools across South Australia. The organisation comprises 6,000 staff who care for more than 48,000 students.

Business Situation

Catholic Education South Australia recently made the decision to offer the capabilities of Office 365 to its 103 schools across the state (including Exchange email, Lync, SharePoint and Office on Demand). As part of this offering, CESA sought to leverage Office 365 to provide each school with a portal for students and teachers to collaborate.

Solution

Kloud worked with CESA to ensure comprehensive understanding of the requirements and delivered a solution design document based on the needs of the organisation. Following acceptance of the design document, Kloud commenced configuration of the tenant (in particular SharePoint Online) in readiness for the deployment of the to-be-created templates. Kloud worked closely with the Learning and Technologies Team to create conceptual designs for the following types of templates that would be used within each school portal:

  • School
  • Class
  • Community
  • Professional Learning.

From these designs Kloud developed prototypes in SharePoint and iteratively refined them with regular reviews from the Learning and Technologies Team. The final solution included a custom application which created the school sites in Office 365 and a remote provisioning application in Azure for self-service site creation. The latter provided teachers with a mechanism to create their own class, community and professional learning sites based on the predefined template which they could then fine-tune to suit their needs.

Benefits

The school portal empowers students and teachers to collaborate in a safe and well-monitored environment. They can now easily share documents, images and videos as well as create blogs or post questions in a single place.

Through the class sites, students will be able to spend their time collaborating with others in their class as well as teachers who will provide additional resources and oversight. The community sites allow students to join groups of interest, either social or academic, and are a great way for like-minded students to expand their learning or be more informed about what is happening. Likewise, the professional learning sites allow teachers to share ideas and resources about a subject or stream which will translate to better learning outcomes for students.

“CESA’s Learning and Technologies team worked with consultants from Kloud on the functionality and design of the Office 365 and SharePoint Online templates for schools. We were impressed by the communication and analytical skills used to meet our organisation’s needs and to inform its direction. High levels of expertise supported the project, as well as knowledge of the product, solid task prioritisation, budget management and timely reporting” – Karen Sloan, Learning and Technologies Senior Education Advisor, CESA.

Sharing Azure SSO Access Tokens Across Multiple Native Mobile Apps

This blog post is the fourth and final in the series covering Azure AD SSO in native mobile applications.

  1. Authenticating iOS app users with Azure Active Directory
  2. How to Best handle AAD access tokens in native mobile apps
  3. Using Azure SSO tokens for Multiple AAD Resources From Native Mobile Apps
  4. Sharing Azure SSO Access Tokens Across Multiple Native Mobile Apps (this post).

Introduction

Most enterprises have more than one mobile app and it’s not unusual for these mobile apps to interact with some back-end services or APIs to fetch and update data. In the previous posts of this series we looked at how to manage access to APIs and share tokens as a way to enable a mobile app to interact with multiple AAD-secured resources.

This post will cover how to share access tokens to AAD resources across multiple mobile apps. This is very useful if the enterprise (or any mobile app vendor) wants to provide convenience for the users by not asking them to authenticate on every mobile app the user has.

Once a mobile user logs into Azure AD and gets a token we want to reuse the same token with other apps. This is suitable for some scenarios, but it might not be ideal for apps and resources that deal with sensitive data – the judgement is yours.

Sharing Tokens Across Multiple Mobile Apps

Moving on from previous posts, it is now time to enable our mobile apps to share the access token. In this scenario we are covering iOS devices (iOS 6 and above); however, other mobile platforms provide similar capabilities too. So what are the options for sharing data on iOS?

KeyChain Store

iOS offers developers a simple utility for storing and sharing keys called the SecKeyChain. The API has been part of the iOS platform since before iOS 6, but in iOS 6 Apple integrated this tool with iCloud, making it even easier to push any saved passwords and keys to Apple iCloud and then share them across multiple devices.

We could use iOS SecKeyChain to store the token (and the refreshToken) once the user logs in on any of the apps. When the user starts using any of the other apps, we check the SecKeyChain first before attempting to authenticate the user.

public async Task AsyncInit(UIViewController controller, ITokensRepository repository)
{
	_controller = controller;
	_repository = repository;
	_authContext = new AuthenticationContext(authority);
}

public async Task<string> RefreshTokensLocally()
{
	var refreshToken = _repository.GetKey(Constants.CacheKeys.RefreshToken, string.Empty);
	var authorizationParameters = new AuthorizationParameters(_controller);

	var result = "Refreshed an existing Token";
	bool hasARefreshToken = true;

	if (string.IsNullOrEmpty(refreshToken)) 
	{
		var localAuthResult = await _authContext.AcquireTokenAsync(
			resourceId1, clientId, new Uri(redirectUrl),
			authorizationParameters, UserIdentifier.AnyUser, null);

		refreshToken = localAuthResult.RefreshToken;
		_repository.SaveKey(Constants.CacheKeys.WebService1Token, localAuthResult.AccessToken, null);


		hasARefreshToken = false;
		result = "Acquired a new Token"; 
	} 

	var refreshAuthResult = await _authContext.AcquireTokenByRefreshTokenAsync(refreshToken, clientId, resourceId2);
	_repository.SaveKey(Constants.CacheKeys.WebService2Token, refreshAuthResult.AccessToken, null);

	if (hasARefreshToken) 
	{
		// this will only be called when we try refreshing the tokens (not when we are acquiring new tokens)
		refreshAuthResult = await _authContext.AcquireTokenByRefreshTokenAsync(refreshAuthResult.RefreshToken, clientId, resourceId1);
		_repository.SaveKey(Constants.CacheKeys.WebService1Token, refreshAuthResult.AccessToken, null);
	}

	_repository.SaveKey(Constants.CacheKeys.RefreshToken, refreshAuthResult.RefreshToken, null);

	return result;
}

Some of the above code will be familiar from previous posts, but what has changed is that we are now passing an ITokensRepository which saves any tokens (and refresh tokens) once the user logs in, to make them available to the other mobile apps.

I have intentionally passed an interface (ITokensRepository) to allow for different implementations, in case you opt to use a different approach for sharing the tokens. The internal implementation of the concrete TokensRepository is something like this:

public interface ITokensRepository 
{
	bool SaveKey(string key, string val, string keyDescription);
	string GetKey(string key, string defaultValue);
	bool SaveKeys(Dictionary<string,string> secrets);
}

public class TokensRepository : ITokensRepository
{
	private const string _keyChainAccountName = "myService";

	public bool SaveKey(string key, string val, string keyDescription)
	{
		var setResult = KeychainHelpers.SetPasswordForUsername(key, val, _keyChainAccountName, SecAccessible.WhenUnlockedThisDeviceOnly, false );

		return setResult == SecStatusCode.Success;
	}

	public string GetKey(string key, string defaultValue)
	{
		return KeychainHelpers.GetPasswordForUsername(key, _keyChainAccountName, false) ?? defaultValue;
	}
		
	public bool SaveKeys(Dictionary<string,string> secrets)
	{
		var result = true;
		foreach (var key in secrets.Keys) 
		{
			result = result && SaveKey(key, secrets [key], string.Empty);
		}

		return result;
	}
}

iCloud

We could use Apple iCloud to push the access tokens to the cloud and share them with other apps. The approach would be similar to what we have done above with the only difference being in the way we are storing these keys. Instead of storing them locally, we push them to Apple iCloud directly. As the SecKeyChain implementation above does support pushing data to iCloud, I won’t go through the implementation details here and simply note the option is available for you.

Third Party Cloud Providers (e.g. Azure)

Similar to the previous option, but offering more flexibility. This is a very good solution if we are already using Azure Mobile Services for our mobile app. We can create one more table and then use this table to store and share access tokens. The implementation could be similar to the following:

public async Task<string> RefreshTokensInAzureTable()
{
	var tokensListOnAzure = await tokensTable.ToListAsync();
	var tokenEntry = tokensListOnAzure.FirstOrDefault();
	var authorizationParameters = new AuthorizationParameters(_controller);

	var result = "Refreshed an existing Token";
	bool hasARefreshToken = true;

	if (tokenEntry == null) 
	{
		var localAuthResult = await _authContext.AcquireTokenAsync(resourceId1, clientId, new Uri (redirectUrl),  authorizationParameters, UserIdentifier.AnyUser, null);

		tokenEntry = new Tokens {
			WebApi1AccessToken = localAuthResult.AccessToken,
			RefreshToken = localAuthResult.RefreshToken,
			Email = localAuthResult.UserInfo.DisplayableId,
			ExpiresOn = localAuthResult.ExpiresOn
		};
		hasARefreshToken = false;
		result = "Acquired a new Token"; 
	} 
		
	var refreshAuthResult = await _authContext.AcquireTokenByRefreshTokenAsync(tokenEntry.RefreshToken, 
                                                                                    clientId, 
                                                                                    resourceId2);
	tokenEntry.WebApi2AccessToken = refreshAuthResult.AccessToken;
	tokenEntry.RefreshToken = refreshAuthResult.RefreshToken;
	tokenEntry.ExpiresOn = refreshAuthResult.ExpiresOn;

	if (hasARefreshToken) 
	{
		// this will only be called when we try refreshing the tokens (not when we are acquiring new tokens)
		refreshAuthResult = await _authContext.AcquireTokenByRefreshTokenAsync (refreshAuthResult.RefreshToken, clientId, resourceId1);
		tokenEntry.WebApi1AccessToken = refreshAuthResult.AccessToken;
		tokenEntry.RefreshToken = refreshAuthResult.RefreshToken;
		tokenEntry.ExpiresOn = refreshAuthResult.ExpiresOn;
	}

	if (hasARefreshToken)
		await tokensTable.UpdateAsync (tokenEntry);
	else
		await tokensTable.InsertAsync (tokenEntry);

	return result;
}

Words of Warning

Bearer Tokens

Developers need to understand bearer tokens when using Azure AD authentication. A bearer token means that anybody who holds the token (the bearer of the token) can access and interact with your AAD resource. This offers great flexibility, but it could also be a security risk if the token is somehow exposed. This needs to be thought of when implementing any token-sharing mechanism.
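To make the point concrete, the sketch below is all it takes to call an AAD-protected API once you hold a token; the API URL and method name are placeholders, and the service sees nothing but the Authorization header.

// Requires System.Net.Http, System.Net.Http.Headers and System.Threading.Tasks
public static async Task<string> CallProtectedApiAsync(string accessToken)
{
    using (var client = new HttpClient())
    {
        // Possession of the token is enough; the API never sees who originally obtained it
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        return await client.GetStringAsync("https://myprotectedapi.azurewebsites.net/api/orders");
    }
}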

iOS SecKeyChain is “Secure”

iOS SecKeyChain is “secure”, right? No, not at all. Apple calls it secure, but on jailbroken devices you can see the key store as a normal file. Thus, I would highly recommend encrypting these access tokens and any keys that you might want to store before persisting them. The same goes for iCloud, Azure, or any of the other approaches we went through above.
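As an illustration only, a token could be run through something like the following before it is handed to the repository. AES is just one example; managing where the key and IV themselves live is the hard part and is out of scope here.

// Requires System, System.Security.Cryptography and System.Text
static string EncryptToken(string token, byte[] key, byte[] iv)
{
    using (var aes = Aes.Create())
    {
        aes.Key = key;
        aes.IV = iv;
        using (var encryptor = aes.CreateEncryptor())
        {
            var plainBytes = Encoding.UTF8.GetBytes(token);
            var cipherBytes = encryptor.TransformFinalBlock(plainBytes, 0, plainBytes.Length);
            return Convert.ToBase64String(cipherBytes);
        }
    }
}

The corresponding decrypt would run just before the token is used, so that what sits in the keychain (or iCloud, or an Azure table) is never the raw token.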

Apple AppStore Verification

If you intend on submitting your app to the Apple AppStore, then you need to be extra careful with the approach you take to share data between your apps. For enterprises (locally deployed apps), you have the control and you make the call based on your use case. However, Apple has a history of rejecting apps (e.g. PastePane) for using some iOS APIs in “an unintended” manner.

I hope you found this series of posts useful and, as usual, if there is something that is not clear or you need some help with similar projects you are undertaking, then get in touch and we will do our best to help. I have pushed the sample code from this post and the previous ones to GitHub; it can be found here.

Has.


Get Started with Docker on Azure

Originally posted on siliconvalve:

The most important part of this whole post is that you need to know that the whale in the Docker logo is officially named “Moby Dock”. Once you know that you can probably bluff your way through at least an introductory session on Docker :).

It’s been hard to miss the increasing presence of Docker, particularly if you work in cloud technology. Each of the major cloud providers has raced to provide container services (Azure, AWS, GCE and IBM) and these platforms see benefits in the higher density hosting they can achieve with minimal changes to existing infrastructure.

In this post I’m going to look at first steps to getting Docker running in Azure. There are other posts out there that cover this, but there are a few gotchas along the way that I will cover off here.

First You Need a Beard

Anyone…


Using a Proxy with Azure AD Sync Services

In this blog I am going to cover some tips and tricks for using Azure AD Sync Services with a proxy… including the specific URLs required for whitelisting, the proxy settings used during the installation, configuration and running of the tool, and a workaround for apps that do not support authenticating proxies.

URL Whitelisting

It is generally recommended to whitelist all the Office 365 URLs to bypass proxy infrastructure as this provides the best performance and avoids issues with applications that are not compatible with authenticating proxies (OneDrive for Business client installations, Exchange Hybrid services, Azure AD Sync Services and so on…). Although this is the easiest path to adoption and least likely to encounter technical issues, it is not always possible. This is particularly true for security-conscious organisations, where whitelisting wildcard addresses may be undesirable.

If you want to be specific with the URLs required for Azure AD Sync Services, the following URLs must bypass proxy authentication:

  • adminwebservice.microsoftonline.com
  • login.microsoftonline.com

Proxy Settings

When you run through the DirectorySyncTool.exe wizard to install and configure Azure AD Sync Services, at the point where you first enter your Azure AD credentials the wizard will use the proxy settings defined for the currently logged-on Windows user. In this instance, make sure you’ve configured your proxy settings in Internet Options (inetcpl.cpl) for the user running the installation.

In step 8 (Configure), the installation wizard connects to and configures Azure Active Directory. This step of the wizard attempts an outbound HTTPS connection to login.microsoftonline.com using the proxy settings defined for the Azure AD Sync Services service account. This service account is either the one you specified during the installation (if you ran DirectorySyncTool.exe with the /serviceAccount parameter), or the one that was automatically created by the wizard.

I’ve previously written about my recommendations to specify a service account for the installation so that you know the credentials. In this case you can easily configure the proxy settings by launching inetcpl.cpl with the service account. For example:

runas /user:<domain>\<AADSync Service Account> "control.exe inetcpl.cpl"

Once the Azure AD Sync Services installation is complete, all synchronisation events are going to run under the context of the Azure AD Sync Services service account and will rely on the proxy settings defined in inetcpl.cpl.

AADSync with an authenticating Proxy

If for some reason you can’t bypass an authenticating proxy for AADSync, or you’re desperate to get AADSync up and running while you wait for the proxy admin to add the URLs to a whitelist (my scenario), CNTLM to the rescue! I used this recently to get Azure AD Sync Services working with an authenticating proxy and it’s as easy as:

  1. Download and install CNTLM on the AADSync server
  2. Configure the cntlm.ini with the proxy server and authentication details (you can save the account password or an NTLM hash, for those that are concerned about saving credentials in plain text)
  3. Start the CNTLM service
  4. Configure CNTLM as your proxy in Internet Settings (default is 127.0.0.1:3128)
  5. Install and Configure AADSync

AADSync – AD Service Account Delegated Permissions

When you configure Azure AD Sync (AADSync), you need to provide credentials of an account that is used by AADSync’s AD DS Management Agent to connect to your on-premises Active Directory. In previous versions of DirSync this was achieved by running the configuration wizard as an ‘Enterprise Admin’, thus allowing the installer to create a service account and apply permissions to the directory on your behalf. The account could have any of the following permissions to your AD DS, based on your choices for the purpose of the sync:

  1. Write Access to User, Contact, Groups Attributes – Hybrid Exchange Deployment
  2. Password Changes – Password Synchronisation
  3. Write Access to Passwords – Password Write-back (AAD Premium)

There has been a lot of talk lately about new tools to help you configure synchronisation of your directory with ‘(this) many clicks’. While these new tools do some great pre-req checks and wrap everything into a nice shiny wizard that helps guide you through the experience, they currently put the burden of creating this service account and applying AD DS permissions back on you. It is now your responsibility to raise a change with the Active Directory team, in which you will need to explain how you are going to splatter permissions all over their directory.

So we should reassure the Active Directory team that we can create a service account and apply LEAST permissions on the directory for this account using the following script(s).

Apply all that are appropriate to your scenario:

Exchange Hybrid Deployment:

For rich co-existence between your on-premises Exchange infrastructure and Office 365 you must allow the service account to write back attributes to your on-premises environment.

Configure Hybrid Write-back:

###--------variables
$DN = "OU=Users,OU=Company,DC=mydomain,DC=com"
$Account = "mydomain\svc_aadsync"

###--------variables

<#
Switches used in cmds

http://technet.microsoft.com/en-us/library/cc771151.aspx

/I:S = Specifies the objects to which you are applying the permissions.'S' - The child objects only
/G = Grants the permissions that you specify to the user or group
WP = Write to a property Permission

#>

###---Update Attributes

#Object type: user
$cmd = "dsacls '$DN' /I:S /G '`"$Account`":WP;proxyAddresses;user'"
Invoke-Expression $cmd
$cmd = "dsacls '$DN' /I:S /G '`"$Account`":WP;msExchUCVoiceMailSettings;user'"
Invoke-Expression $cmd
$cmd = "dsacls '$DN' /I:S /G '`"$Account`":WP;msExchUserHoldPolicies;user'"
Invoke-Expression $cmd
$cmd = "dsacls '$DN' /I:S /G '`"$Account`":WP;msExchArchiveStatus;user'"
Invoke-Expression $cmd
$cmd = "dsacls '$DN' /I:S /G '`"$Account`":WP;msExchSafeSendersHash;user'"
Invoke-Expression $cmd
$cmd = "dsacls '$DN' /I:S /G '`"$Account`":WP;msExchBlockedSendersHash;user'"
Invoke-Expression $cmd
$cmd = "dsacls '$DN' /I:S /G '`"$Account`":WP;msExchSafeRecipientsHash;user'"
Invoke-Expression $cmd
#Object type: group
$cmd = "dsacls '$DN' /I:S /G '`"$Account`":WP;proxyAddresses;group'"
Invoke-Expression $cmd
#Object type: contact
$cmd = "dsacls '$DN' /I:S /G '`"$Account`":WP;proxyAddresses;contact'"
Invoke-Expression $cmd

 

Validate

Use DSACLS to validate your settings
dsacls "\\DCHostname.mydomain.com\OU=Users,OU=Company,DC=mydomain,DC=com"

Your output should resemble:

Inherited to user
 Allow BUILTIN\Pre-Windows 2000 Compatible Access
                                       SPECIAL ACCESS for Group Membership   <Inherited from parent>
                                       READ PROPERTY
 Allow MYDOMAIN\svc_aadsync           SPECIAL ACCESS for msExchSafeSendersHash
                                       WRITE PROPERTY
 Allow MYDOMAIN\svc_aadsync           SPECIAL ACCESS for msExchArchiveStatus
                                       WRITE PROPERTY
 Allow MYDOMAIN\svc_aadsync           SPECIAL ACCESS for msExchUCVoiceMailSettings
                                       WRITE PROPERTY
 Allow MYDOMAIN\svc_aadsync           SPECIAL ACCESS for msExchBlockedSendersHash
                                       WRITE PROPERTY
 Allow MYDOMAIN\svc_aadsync           SPECIAL ACCESS for msExchSafeRecipientsHash
                                       WRITE PROPERTY
 Inherited to contact
 Allow MYDOMAIN\svc_aadsync           SPECIAL ACCESS for proxyAddresses
                                       WRITE PROPERTY
 Inherited to user
 Allow MYDOMAIN\svc_aadsync           SPECIAL ACCESS for proxyAddresses
                                       WRITE PROPERTY
 Inherited to group
 Allow MYDOMAIN\svc_aadsync           SPECIAL ACCESS for proxyAddresses
                                       WRITE PROPERTY

Password Synchronisation:

For granting the service account permissions to read password hashes from your on-premises AD DS you must allow the special permissions of Replicating Directory Changes and Replicating Directory Changes All.

Configure Password Synchronisation:

###--------variables
$Account = "mydomain\svc_aadsync"

###--------variables

<#
Switches used in cmds

http://technet.microsoft.com/en-us/library/cc771151.aspx

/G = Grants the permissions that you specify to the user or group
CA = Control access
If you do not specify {ObjectType | Property} to define the specific extended right for control access, this permission applies to all meaningful control accesses on the object; otherwise, it applies only to the specific extended right for that object.
#>

$RootDSE = [ADSI]"LDAP://RootDSE"
$DefaultNamingContext = $RootDse.defaultNamingContext
$ConfigurationNamingContext = $RootDse.configurationNamingContext

###---Update Attributes

#Object type: user
$cmd = "dsacls '$DefaultNamingContext' /G '`"$Account`":CA;`"Replicating Directory Changes`";'"
Invoke-Expression $cmd
$cmd = "dsacls '$DefaultNamingContext' /G '`"$Account`":CA;`"Replicating Directory Changes All`";'"
Invoke-Expression $cmd

Validate

If the commands completed successfully, the output will include:

Allow mydomain\svc_aadsync           Replicating Directory Changes

 

Password Write-back:

To grant the service account password write-back permission on the directory you must allow the special permissions of Reset Password & Change Password extended rights.

Configure Password Write-back


###--------variables
$DN = "OU=Users,OU=Company,DC=mydomain,DC=com"
$Account = "mydomain\svc_aadsync"

###--------variables

<#
Switches used in cmds

http://technet.microsoft.com/en-us/library/cc771151.aspx

/I:S = Specifies the objects to which you are applying the permissions.'S' - The child objects only
/G = Grants the permissions that you specify to the user or group
CA = Control access
If you do not specify {ObjectType | Property} to define the specific extended right for control access, this permission applies to all meaningful control accesses on the object; otherwise, it applies only to the specific extended right for that object.
#>

###---Update Attributes

#Object type: user

$cmd = "dsacls '$DN' /I:S /G '`"$Account`":CA;`"Reset Password`";user'"
Invoke-Expression $cmd
$cmd = "dsacls '$DN' /I:S /G '`"$Account`":CA;`"Change Password`";user'"
Invoke-Expression $cmd

Validate

Run dsacls "\\DCHostname.mydomain.com\OU=Users,OU=Company,DC=mydomain,DC=com" once again to find your entry:

Allow mydomain\svc_aadsync           Reset Password

Check your AD MA credentials

  1. Open the ‘Synchronization Service’
  2. Choose ‘Connectors’
  3. Select the Connector with Type ‘Active Directory Domain Services’
  4. Right-Click ‘Properties’
  5. Configure Directory Partitions
  6. Select the radio button below

ADMA_SetCredentials

Add your credentials for the service account

ADMA_SetCredentials2