RDS and Shared Computer Support for Office 365 Pro Plus

There is no denying that the workplace is moving towards a multi-device world.  The majority of information workers (IWs) now have an average of 3 – 4 devices per user.  This can include a PC, notebook, tablet, and phone.  The problem is that Office Professional is licensed per device.  This means that organizations planning to deploy Office Professional have to purchase additional copies of Office to run on these different devices.  For most organizations, this is prohibitively expensive. 

 

This is one of the reasons why Office 365 Pro Plus is such an attractive option for most organizations.  Adopting Office 365 Pro Plus means that you can allow BYOD within your organization and still keep your users productive on their own devices by providing a familiar Office experience.  This is a great value for IWs across a range of industries.

 

But not all of your employees are IWs.  Some organizations have the majority of their workforce in roles where every device is shared by multiple users.  This is very common in industries such as call centers, mining, retail, and logistics.  These positions are often characterized by shift work where a device is passed from user to user when shifts end.  These shared devices generally belong to the organization, not the individual.  But some of these devices still require a copy of Office to read and edit documents and emails.  Sometimes Office runs locally on the machine.  In some instances, Office runs on a Remote Desktop Server (RDS) and presents to the user on the shared device. 

 

Office 365 Pro Plus does not currently run properly on shared devices.  This is because activation is tied to the user’s account.  When multiple users attempt to access the same copy of Office 365 Pro Plus, activation fails.  This has made it impossible to use Office 365 Pro Plus in RDS and other shared environments.  It created challenges for many organizations because it required that they run a different version of Office depending on whether a device is dedicated to a single user or shared by multiple users.

 

The good news is that Microsoft has heard the feedback and has announced a solution.  Shared Computer Activation is a new feature that is due to release in H2 CY2014.  Shared Computer Activation will allow organizations to run Office 365 Pro Plus on RDS for Windows Server 2008 R2 and above.  It will also permit Office 365 Pro Plus to run on shared computers with multiple user profiles. 

 

Shared Computer Activation separates the installation of Office 365 Pro Plus from the activation process.  Using the Office Deployment Tool, Office 365 Pro Plus can be installed in Shared Computer Mode.  Running as a Shared Computer means that Office 365 Pro Plus activation lasts for the duration of a logon session.  When a user logs onto the machine or into an RDS session, activation will be based on the logged on user’s Office 365 Pro Plus license.  Activation will succeed only if the user is properly licensed to run Office 365 Pro Plus.  When a new user signs onto the same device, activation happens again using the new user’s credentials.  Running Office 365 Pro Plus on a Shared Computer does not count against a user’s 5 license limit.  This means that IWs can use shared computers without having to sacrifice one of their personal devices.
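To make this concrete, here is a minimal sketch of what a Shared Computer Mode install with the Office Deployment Tool could look like. The source path, product ID, language and display options are placeholders rather than a recommended build, so treat it as illustrative only.

# Sketch: install Office 365 Pro Plus in Shared Computer Mode with the
# Office Deployment Tool. Paths, product ID and languages are placeholders.
$config = @"
<Configuration>
  <Add SourcePath="\\fileserver\Office365" OfficeClientEdition="32">
    <Product ID="O365ProPlusRetail">
      <Language ID="en-us" />
    </Product>
  </Add>
  <Property Name="SharedComputerLicensing" Value="1" />
  <Display Level="None" AcceptEULA="True" />
</Configuration>
"@
Set-Content -Path 'C:\ODT\configuration.xml' -Value $config

# Download the Click-to-Run source once, then configure the RDS host / shared PC
& 'C:\ODT\setup.exe' /download 'C:\ODT\configuration.xml'
& 'C:\ODT\setup.exe' /configure 'C:\ODT\configuration.xml'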

 

If you are looking for assistance running Office 365 Pro Plus in environments with shared devices, please contact Kloud Solutions at the following URL:

http://www.kloud.com.au/#

What Is The Microsoft Enterprise Mobility Suite?

Microsoft released the Enterprise Mobility Suite (EMS) back in April 2014. This was a major announcement for Microsoft which has typically focused on traditional information workers (IWs) who sit at a desk for most of the day. The EMS is a license designed for a mobile worker who uses a range of different devices including a PC, tablet, and mobile phone. The EMS assumes that the mobile worker will take advantage of BYOD and choose to use a non-corporate device for accessing corporate data.

 

The EMS enables an organization to embrace mobility and BYOD by addressing the key areas of concern and risk for all organizations:

1) User Identity and Access

2) Device Management

3) Application Management

4) Data Protection

 

The EMS includes the following components and capabilities:

1) Azure Active Directory Premium – Cloud Identity Management
2) Windows Intune – Mobile Device Management (MDM)
3) Windows Intune – Mobile Application Management (MAM)
4) Azure Rights Management Services (RMS) – Email and Document Protection

 

Rather than purchasing piecemeal solutions, organizations can license EMS to address the challenges that come with a mobile workforce and BYOD. Instead of resisting change, IT departments can embrace new technologies, keep users happy and productive, and protect their organizations from security threats.

 

If you are looking for guidance on how to enable greater mobility in your workforce, please contact Kloud Solutions at the following URL:

http://www.kloud.com.au/#

End User Access To Spam Quarantine in Office 365

One of the features of Office 365 which gets very little attention is Exchange Online Protection (EOP). EOP is a Microsoft cloud service which protects Exchange Online in Office 365 from spam and viruses. EOP is a built-in capability of Office 365. There is no additional license required to use it.

Emails which EOP detects as spam are trapped in a quarantine area. Users are notified that an email has been quarantined by an automatically generated message from EOP. The user can then decide whether the email is truly spam or a false positive. If the user feels that the email is not spam, there is an option to release it from quarantine. Released emails are immediately delivered to the user’s inbox.

Microsoft has released a new feature for Office 365 called the spam quarantine page. This new page allows end users to view their emails which are currently in quarantine via a web-based interface using an Office 365 OrgID. From the spam quarantine page, users can choose to release an email from quarantine and have it delivered to their inbox. The console can be accessed via the following URL:

https://admin.protection.outlook.com/quarantine

There is an advanced search option in the spam quarantine page. This allows users to search for a specific email trapped in quarantine. The user can search using the following criteria (an administrator-side PowerShell equivalent is sketched after the list):

1) Message ID

2) Sender Email Address

3) Recipient Email Address

4) Subject

5) Received

6) Expires

7) Type
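For administrators, a rough PowerShell equivalent is sketched below. It assumes a remote PowerShell session to Exchange Online / EOP and uses the Get-QuarantineMessage and Release-QuarantineMessage cmdlets; the connection URI, recipient address and subject filter are placeholders.

# Sketch: query and release quarantined mail as an administrator.
$cred = Get-Credential
$session = New-PSSession -ConfigurationName Microsoft.Exchange `
    -ConnectionUri 'https://outlook.office365.com/powershell-liveid/' `
    -Credential $cred -Authentication Basic -AllowRedirection
Import-PSSession $session

# Quarantined messages for one recipient over the last seven days
$messages = Get-QuarantineMessage -RecipientAddress 'user@contoso.com' `
    -StartReceivedDate (Get-Date).AddDays(-7) -EndReceivedDate (Get-Date)

# Release a false positive back to all of its original recipients
$messages | Where-Object { $_.Subject -like '*Quarterly report*' } |
    Release-QuarantineMessage -ReleaseToAll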

If you are looking for guidance on how to migrate to and configure EOP to protect your Office 365 tenant, please contact Kloud Solutions at the following URL:

http://www.kloud.com.au/

Failure Upgrading DirSync with a Remote SQL Instance

I’ve just recently come across an issue when performing the upgrade procedure for the Microsoft Azure Directory Sync tool with a remote SQL database. The procedure seems simple enough at first glance and is documented here.

To break down the process, it is only a few simple steps:

1. Install the new DirSync:

Dirsync.exe /fullsql

2. Click next on the upgrade wizard until complete.

3. Run PowerShell:

Import-Module DirSync

4. Run the following PowerShell cmdlet to update the backend database:

Install-OnlineCoexistenceTool -UseSQLServer -SqlServer <ServerName> -Upgrade -Verbose -ServiceCredential (Get-Credential)

The Issue

This particular issue will occur during the upgrade procedure on the PowerShell step Install-OnlineCoexistenceTool with the following error –

VERBOSE: Running InstallOnlineCoexistenceTool in Upgrade mode.

Install-OnlineCoexistenceTool : The SQL Server Instance specified during an upgrade must match the previously

configured SQL Server Instance. Expected SQL parameter for upgrade were Server: Instance:

At line:1 char:1

+ Install-OnlineCoexistenceTool -UseSQLServer -SqlServer servername …

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ CategoryInfo : InvalidOperation: (Microsoft.Onlin…CoexistenceTool:InstallOnlineCoexistenceTool) [Inst

all-OnlineCoexistenceTool], DirectorySyncInstallException

+ FullyQualifiedErrorId : 201,Microsoft.Online.Coexistence.PS.Install.InstallOnlineCoexistenceTool

The first time I got this error, I assumed that I had provided incorrect syntax for the cmdlet and proceeded to try every variant possible. Nothing seemed to satisfy the shell so I started to look elsewhere. Next step along the process was to go look at the possible FIM configuration settings listed in the registry that I knew of –

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\FIMSynchronizationService\Parameters

I found the two values that I presumed the cmdlet was using for verification –

Server = <Server FQDN>

SQLInstance = <blank>

Based on these two values I went back to my shell and tried to enter the syntax exactly as I could see it. I thought that because my ‘SQLInstance’ value was empty, PowerShell might be struggling to process a null value in the cmdlet. To cut a long troubleshooting story short, it didn’t matter. I had stared at the cmdlet long enough and resigned myself to the fact that it wasn’t happy about values stored elsewhere, and I wasn’t going to find them any time soon.

Cause

There was an issue in previous versions of DirSync where the following two registry keys were not written when installed using the /FullSQL flag –

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSOLCoExistence\storeserver

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSOLCoExistence\SQLINSTANCE

DirSync attempts to read these keys when performing the in-place upgrade to verify the SQL Server and Instance name, and then the upgrade fails when it cannot find them.

Solution

1. Copy the value data (the server name) from:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\FIMSynchronizationService\Parameters\Server

2. Create a new string value:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSOLCoExistence\storeserver

3. Paste in the value data from step 1.

4. Copy the value data from:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\FIMSynchronizationService\Parameters\SQLInstance

5. Create a new string value:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\MSOLCoExistence\SQLINSTANCE

6. Paste in the value data from step 4 (if any).

Note: For me this value was blank as the default instance was being used, not a named instance.
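If you prefer to script the change rather than click through regedit, a minimal sketch of the same steps looks like this (run it elevated on the DirSync server; the value names are exactly those listed above):

# Sketch: copy the SQL server/instance values from the FIM parameters key
# into the MSOLCoExistence key that the upgrade cmdlet reads.
$fimParams = 'HKLM:\SYSTEM\CurrentControlSet\services\FIMSynchronizationService\Parameters'
$msolKey   = 'HKLM:\SOFTWARE\Microsoft\MSOLCoExistence'

$server   = (Get-ItemProperty -Path $fimParams).Server
$instance = (Get-ItemProperty -Path $fimParams).SQLInstance
if ($null -eq $instance) { $instance = '' }   # blank when the default instance is used

New-ItemProperty -Path $msolKey -Name 'storeserver' -Value $server   -PropertyType String -Force
New-ItemProperty -Path $msolKey -Name 'SQLINSTANCE' -Value $instance -PropertyType String -Force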

Re-run cmdlet –

Install-OnlineCoexistenceTool -UseSQLServer -SqlServer <ServerName> -Upgrade -Verbose -ServiceCredential (Get-Credential)

Expected Output –

PS C:\Windows\system32> Install-OnlineCoexistenceTool -UseSQLServer -SqlServer <SQL Server NAME> -Upgrade -Verbose -ServiceCredential (Get-Credential)

cmdlet Get-Credential at command pipeline position 1

Supply values for the following parameters:

Credential

VERBOSE: Running InstallOnlineCoexistenceTool in Upgrade mode.

VERBOSE: Skipping Microsoft SQL Server 2012 SP1 Express installation.

VERBOSE: Upgrading Microsoft Forefront Identity Manager

VERBOSE: AbandonKeys: C:\Program Files\Windows Azure Active Directory Sync\SYNCBUS\Synchronization Service\Bin\miiskmu.exe /q /a

VERBOSE: AbandonKeys: C:\Program Files\Windows Azure Active Directory Sync\SYNCBUS\Synchronization Service\Bin\miiskmu.exe /q /a ExitCode:0

VERBOSE: Uninstalling msiexec.exe /quiet /l miisUninstall.log /x {C9139DEA-F758-4177-8E0F-AA5B09628136}

REBOOT=ReallySuppress…

VERBOSE: Please wait while the Synchronization Engine is uninstalled.

VERBOSE: The Synchronization Engine was successfully removed. Msiexec /x returned 0.

VERBOSE: Installing msiexec.exe /quiet /l "C:\Program Files\Windows Azure Active Directory Sync\miissetup.log" /i "C:\Program Files\Windows Azure Active Directory Sync\SynchronizationService.msi" INSTALLDIR="C:\Program Files\Windows Azure Active Directory Sync\SYNCBUS" storeserver=<servername> serviceaccount=<credentials> servicedomain=<domain> groupadmins=FIMSyncAdmins groupoperators=FIMSyncOperators groupbrowse=FIMSyncBrowse groupaccountjoiners=FIMSyncJoiners grouppasswordset=FIMSyncPasswordSet servicepassword=<Hidden>

REBOOT=ReallySuppress…

VERBOSE: Please wait while the Synchronization Engine is installed.

VERBOSE: The installation of the Synchronization Engine was successful. Setup returned 0.

VERBOSE: The Synchronization Engine was installed successfully.

VERBOSE: Installing msiexec.exe /quiet /lv "C:\Program Files\Windows Azure Active Directory Sync\MSOIDCRLSetup.log" /i "C:\Program Files\Windows Azure Active Directory Sync\Msoidcli.msi" REBOOT=ReallySuppress…

VERBOSE: Please wait while the Microsoft Online Services Sign-in Assistant service is being installed.

VERBOSE: The Microsoft Online Services Sign-in Assistant service installation succeeded. Setup returned 0.

VERBOSE: The Microsoft Online Services Sign-in Assistant service was installed successfully.

VERBOSE: Installing msiexec.exe /quiet /lv "C:\Program Files\Windows Azure Active Directory Sync\dirsyncUpgrade.log" /i "C:\Program Files\Windows Azure Active Directory Sync\DirectorySync.msi" TARGETDIR="C:\Program Files\Windows Azure Active Directory Sync\" REBOOT=ReallySuppress…

VERBOSE: Please wait while the Directory Sync tool is installed.

VERBOSE: The Directory Synchronization tool install succeeded. Setup returned 0.

VERBOSE: The Directory Synchronization tool was installed successfully.

 

Once again, a big thank you to Yaran at Microsoft PSS for helping me resolve this issue.

I Keep Getting Prompted to Authenticate in SharePoint Online!

Every once in a while I come across a problem which is vexing and irritating and equally, an absolute joy when it is finally resolved.

The Scene

This particular issue had been on-going for some time with one of my customers with reports coming in from various people saying that they were being prompted to authenticate when opening an Office document in SharePoint Online. “I’ve already logged in so why do I keep getting these prompts?!” – It’s a fair question, especially since the organisation had implemented AD FS to reduce the frequency of authentication dialogue boxes. It seemed that Microsoft was not only being overprotective but overbearing as well. Here’s the dialogue I’m talking about by the way:

Figure 1 – Overprotective

Along with this issue, the customer had started to map network drives to SharePoint. Let’s face it, it’s easier to manage files in a folder structure than in the SPO web interface, and with the improvements to SharePoint over the last few years mapping network drives actually works, or at least it should work. While it was working some of the time, there were reports of access issues to these mapped drives, and a different dialogue box which, while uglier, on first glance appeared more promising from a technical perspective: a recommendation for a fix and a reference to a KB article!

Figure 2 – Ugly but useful

There are a plethora of articles, wikis, forum posts, etc out there which all talk about adding your SharePoint Online sites to your browser’s Trusted Sites list. The referenced KB article covers it off nicely, and I’ve added the link in case you’ve not come across it, here.

In my particular case the SharePoint Online domains were all happily existing in the IE Trusted Sites list (I checked about 30 times during my investigations).

I’ve mentioned that AD FS is in play; further, a redirection URL http://Intranet.KloudySky.com.au is used with a Smart Link in this particular environment to provide a more seamless login experience. Until very recently you were not able to customise the Office 365 portal page, https://login.microsoftonline.com. Check out this article for more info on how to brand the sign-in page with company branding.

While you can now brand the Sign-In page to your heart’s content (if you have an Azure subscription along with your O365 subscription), you’ll need to type in your UPN at least once (normally not an experience IT wants to establish for a new Intranet rollout), and so smart links are not yet dead.

There are a couple of good resources for creating SmartLinks, including one written by a Kloudy here and a good Office 365 community article here which discusses some of the components of the link.

The Solution

I’ve set the scene, and most of you have probably bypassed everything above and come straight to this point. For those of you who took the slow route of reading from the top: thanks for sticking around!

The problem I was seeing was due to a very simple issue really. You need to tick the check box which says “Keep me signed in”. It even says so in the ugly-but-helpful dialogue box above. The thing is, if you’re using AD FS and SmartLinks you don’t actually hit the Office 365 login page, and so you don’t get the opportunity to tick that box. We need to pass the “Keep me signed in” option to the AD FS server somehow, and the only way to do that is to encode it in the URL.

How do we do that? As it turns out there’s a way!

At the end of the SmartLink is the LoginOptions attribute.

LoginOptions=3 will pass the “Keep me signed in” option as off (not checked)

LoginOptions=1 will pass the “Keep me signed in” option as on (checked)

Don’t keep me signed in. Boo!!!
https://myadfs.Kloudysky.com.au/adfs/ls/?wa=wsignin1.0&wtrealm=urn:federation:MicrosoftOnline&wctx=%5BwctxValue%5D%26wreply%3Dhttps%253A%252F%252FKloudySky%252Esharepoint%252Ecom%252F%255Fforms%252Fdefault%252Easpx%26lc%3D1033%26id%3D123456%26%26LoginOptions%3D3

Keep me signed in. Yay!!!
https://myadfs.Kloudysky.com.au/adfs/ls/?wa=wsignin1.0&wtrealm=urn:federation:MicrosoftOnline&wctx=%5BwctxValue%5D%26wreply%3Dhttps%253A%252F%252FKloudySky%252Esharepoint%252Ecom%252F%255Fforms%252Fdefault%252Easpx%26lc%3D1033%26id%3D123456%26%26LoginOptions%3D1

I only made this change on the internal SmartLinks, by the way. I’ve left the SmartLinks published to the Internet with LoginOptions=3; we don’t really want people on untrusted PCs and devices to remain logged into Office 365 services.

 

I’d like to say a big thank you to Carolyn at Microsoft PSS for helping me resolve this issue.

 

 

Azure Active Directory Synchronization Tool: Password Sync as Backup for AD FS Federated Domains

Kloud has helped many Australian businesses leverage Microsoft cloud services such as Office 365, Intune and Microsoft Azure and most have implemented Active Directory Federation Services (AD FS) to provide a highly available Single Sign-On (SSO) user experience. In mid-2013, the Windows Azure Active Directory Synchronization Tool was updated to support password synchronisation with Azure Active Directory, which provided an alternative way to leverage on-premises authored identities with Microsoft’s cloud services.

Password synchronisation is a feature of the Azure Active Directory Sync Tool that synchronises the password hash from your on-premises Active Directory environment to Azure Active Directory. In this scenario users are able to log into Office 365 using the same password as they use in the on-premises environment, similarly to when using AD FS; however, unlike AD FS there is no automatic sign-in capability, so users will still be prompted to enter credentials on a domain-joined device.

For those that have already deployed AD FS or indeed those that are intending to implement AD FS in the future, one of the least publicised feature improvements in the May 2014 update to Office 365 is support for using the password sync feature as a temporary fall-back option for the primary AD FS service and federated authentication.

Another scenario now supported is the ability to have some domains configured for Password Sync while others within the same tenant are enabled for Federated Authentication with AD FS.

Mixing Password Sync and Federated Authentication

It’s quite a common scenario across many of the Office 365 implementations I’ve done for customers to have a primary brand and domain such as contoso.com, where the majority of users reside and which is configured for federated authentication with AD FS. Contoso also owns a subsidiary called Fabrikam, for which there is no requirement for federated authentication or single sign-on.

Previously this scenario would mean that users with a primary SMTP address of fabrikam.com would either have to maintain a separate password within the Office 365 tenant or have a sub-optimal login experience and be configured for sign-in with a UserPrincipalName in the @contoso.com format.

The recent changes to Office 365 allow for the mixed use of federated and password sync enabled domains.

Password Sync as a Temporary Fall-Back for Active Directory Federation Services

A number of smaller organisations I’ve worked with have elected to use a single instance of AD FS, taking advantage of the Single Sign-On capabilities but not including any high availability or site resilience. The Azure Active Directory Synchronization Tool is already a core component of the AD FS infrastructure so enabling Password Sync to provide a backup solution for the Single Sign-On service makes a lot of sense – and it’s free!

If you haven’t already (and you really, really should), deploy the most recent version of the Dirsync tool and enable the Password Sync option when prompted in the Configuration Wizard. A good TechNet article describing the Password Synchronization feature and how to implement it can be found here.

How to Temporarily “Switch” from Federated Authentication to Synchronised Password

The fall-back option is not automatic and requires manual configuration. Federated authentication can be changed to synchronised password authentication on a per-domain basis in the event of an outage to the AD FS infrastructure.

Detailed steps are as follows, with a quick verification sketch after the list:

  1. Run the Windows Azure Active Directory Module for Windows PowerShell as an Administrator
  2. Run the following commands from the primary AD FS server:
    1. $Cred = Get-Credential
      #Enter non-federated Office 365 administrator credentials when prompted
    2. Connect-MsolService -Credential $Cred
    3. Convert-MsolDomainToStandard -DomainName <federated domain name> -SkipUserConversion $true -PasswordFile C:\Temp\passwordfile.txt
  3. Once the outage is over, use the following command to convert the domain back to federated:
    1. Convert-MsolDomainToFederated -DomainName <federated domain name> -SupportMultipleDomains
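To confirm which state a domain is actually in before and after the switch, a quick check with the MSOnline module looks something like this (the domain name is a placeholder):

# Sketch: check the authentication mode of a domain in the tenant.
Connect-MsolService -Credential $Cred
Get-MsolDomain -DomainName contoso.com | Select-Object Name, Authentication
# 'Federated' = AD FS sign-in, 'Managed' = synchronised password sign-in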

It is recommended that you do not change UserPrincipalNames or ImmutableIds after converting your domain to the managed state for users that have been switched to use synchronised passwords.

It is worth noting that switching between Federated Authentication and Synchronised Password Authentication for sign-in to Office 365 is not instant and will likely interrupt service access. This may not be a factor in the initial activation (as it’s likely an outage scenario) however it is something to bear in mind when cutting services back to Federated Authentication.

Portable Inversion of Control (IoC) Container for Mobile Development

TDD in Mobile Development – Part 2

This post is the second in a series that talks about TDD in Mobile Development, the links below show the other parts of the series.

TDD in Mobile Development
1. Unit Testing of Platform-Specific Code in Mobile Development.
2. Portable IoC (Portable.TinyIoC) for Mobile Development
3. Cross-Platform Unit Testing – in progress.

In a previous post, we looked at writing unit and integration tests for platform-specific code in Android and iOS. I also promised to show you in the next post how to write cross-platform unit tests that assert your logic once for many platforms. However, in order to do that, you first need to consider how testable your code is.

Testability

In order to write good unit tests and have good test coverage around your code, you first need to consider code testability. There are many books and blogs that talk about testability; I find these posts good at demoing and communicating the basics of what is needed: 1 and 2.

Basically, your classes/libraries need to declare upfront what dependencies they have. It is not acceptable to have classes that just instantiate (using the new keyword or otherwise) other classes, or invoke static methods/fields from other entities. You might ask why, and I am glad you asked. That’s because such classes/libraries are not testable: you cannot mock/fake/change their internal behaviours. To make things worse, some of these external dependencies or static classes might be platform-dependent or might require a certain environment context to return certain output, which means such code would only be testable in a few narrow scenarios. Worse yet, using static methods gives you more headaches when it comes to thread safety, which is another topic by itself.
Therefore, your code should be clean enough to declare its dependencies upfront, so that we can test it in isolation from other dependencies. For that, the concept of Inversion of Control (IoC) was introduced. IoC containers enable developers to register dependencies at the start of the application in a central point, to give better visibility of dependencies and to allow for better testability.

There are many IoC containers out there, but very few of them are a good fit for mobile development. Also, many of these containers are geared towards large projects, with lots of features that are not relevant to mobile apps. I found this question and its answers on Stack Overflow, which talk about the popularity and suitability of the latest IoC containers. In our case, we have been happy using TinyIoC.

TinyIoC

TinyIoC is a great light container that allows you to register your entities for the whole app domain. It’s light enough to be included in most small projects, yet feature-rich enough to offer convenience and flexibility to developers. We have been using it for quite a while now, and my colleague Mark T. has previously blogged about it here. TinyIoC comes with mainly two files: the container itself and a small, lightweight messenger hub that can be used for communicating messages across different entities/libraries. The bad news for me was that TinyIoC is platform-specific, so I had to include a different library in Android and a different one in iOS. Plus, I could not take that part of my code to my Portable Class Libraries. So I started thinking about taking this to the next level.

Portable TinyIoC

I forked TinyIoC on GitHub and, simply enough, I got it to compile as a portable library (Profile 102) that can be used on the following platforms:
1. Xamarin.Android
2. Xamarin.iOS
3. Windows Phone 8.1+
4. .NET 4.0.3+
All I needed to do was separate the library into two parts: a TinyIoC.Core which targets .NET 4.0 (the old Reflection API), and a portable wrapper that targets Portable Profile 102. Now we have a Portable TinyIoC; you can find it on my GitHub account here. I am still working on making it a NuGet package or submitting a pull request, but so far I have it working in a stable condition and I have all unit tests passing.

Examples of Using TinyIoC on iOS

Like with most IoC containers, you need to register your dependencies (or set auto-discovery to on :) ), so at the start of the app we register like this:

public static class Bootstrap
{
	public static void BuckleUp (AppDelegate appDelegate)
	{
		// The generic type arguments below are illustrative placeholders for the
		// app's own types - substitute your own interfaces and implementations.
		TinyIoCContainer.Current.Register<ITinyMessengerHub, TinyMessengerHub> ();
		TinyIoCContainer.Current.Register<AppDelegate> (appDelegate);

		TinyIoCContainer.Current.Register<ICloudService> (TinyIoCContainer.Current.Resolve<CloudService> ());
		TinyIoCContainer.Current.Register<IDataRepository> (TinyIoCContainer.Current.Resolve<DataRepository> ());

		TinyIoCContainer.Current.Register<MainViewModel> (TinyIoCContainer.Current.Resolve<MainViewModel> ());
	}
}

As you can see, this Bootstrap class gets called from the main AppDelegate, passing it a reference to the app delegate, and it registers all the dependencies. Remember that you need to register your dependencies in order, otherwise you might end up with exceptions. The great thing about this is that not only can you mock pretty much everything and test however you want, you also do not need to instantiate every dependency yourself to get a certain entity. As an example, if your viewModel takes 3 parameters in its constructor, all of them other entities (cloud service, repository, etc.), you only need to call container.Resolve() and it will get you your entity with all its dependencies, bingo :)
Also, TinyIoC manages any disposable objects and disposes of them properly.

Examples of Using TinyIoC on Android

On Android, you would not notice much difference, except in the placement of the entry point (BuckleUp()), which in this case gets called from within the MainLauncher activity. Our Android bootstrap would look like this:

public static class Bootstrap
{
        public async static Task BuckleUp(IActivity activity)
        {
            // Generic type arguments are illustrative placeholders for the app's own types.
            TinyIoCContainer.Current.Register<ITinyMessengerHub, TinyMessengerHub>();
            TinyIoCContainer.Current.Register<IApplication>((IApplication)Android.App.Application.Context);
            TinyIoCContainer.Current.Register<IActivity>(activity);
            // more code is omitted
        }
}

Conclusions

In conclusion, I have shown how simple and elegant it is to use an IoC container. I prefer TinyIoC because it is very light, and now we have a portable version of it, so you have no excuse any more. Start looking at integrating TinyIoC.Portable into your next mobile project; I would love to hear your thoughts. In the next post, we will look at Cross-Platform Unit Testing.

TDD in Mobile Development
1. Unit Testing of Platform-Specific Code in Mobile Development.
2. Portable IoC (Portable.TinyIoC) for Mobile Development
3. Cross-Platform Unit Testing – in progress.

Do It Yourself Cloud Accelerator – Part II

In the last post I introduced the idea of breaking the secure transport layer between cloud provider and employee with the intention to better deliver those services to employees using company provided infrastructure.

In short, we deployed a server which re-presents the cloud secure URLs using a new trusted certificate. This enables us to do some interesting things, like provide centralised and shared caching across multiple users. The Application Request Routing (ARR) module is designed for delivering massively scalable content delivery networks to the Internet, which, when turned on its head, can be used to deliver cloud service content efficiently to internal employees. So that’s a great solution where we have cacheable content like images, JavaScript, CSS, etc. But can we do any better?

Yes we can, and it’s all possible because we now own the traffic and the servers delivering it. To test the theory I’ll be using a SharePoint Online home page, which by itself is 140K; the total page size with all resources uncached is a whopping 1046K.

Compression

Surprisingly, when you look at a Fiddler trace of a SharePoint Online page, the main page content coming from the SharePoint servers is not compressed (the static content, however, is) and it is also marked as not cacheable (since it can change with each request). That means we have a large page download occurring for every page, which is particularly expensive if (as many organisations do) you have the Intranet home page set as the browser’s default on opening.

Since we are using Windows Server IIS to host the Application Request Router we get to take a free ride on some of the other modules that have been built for IIS like, for instance, compression. There are two types of compression available in IIS, static compression which can be used to pre-calculate the compressed output of static files, or dynamic compression which will compress the output of dynamically generated pages on the fly. This is the compression module we need to compress the home page on the way through our router.

Install the Dynamic Compression component of the Web Server (IIS) role.

Configuring compression is simple: first make sure Dynamic Compression is enabled at the IIS server level and also at the Default Web Site level.
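If you’d rather script it than click through IIS Manager, a minimal sketch looks like this (applied at server level so the Default Web Site inherits it):

# Sketch: install and enable IIS dynamic compression on the ARR server.
Install-WindowsFeature Web-Dyn-Compression

Import-Module WebAdministration
Set-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' `
    -Filter 'system.webServer/urlCompression' `
    -Name 'doDynamicCompression' -Value $true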

By enabling dynamic compression we are allowing the Cloud Accelerator to step in between server and client and inject gzip encoding on anything that isn’t already compressed. On our example home page the effect is to reduce the download content size from a whopping 142K down to 34K

We’ve added compression to the uncompressed traffic coming from SharePoint Online which will help the experience for individuals on the end of low bandwidth links, but is there anything we can do to help the office workers?

BranchCache

BranchCache is a Windows Server role and Windows service that has been around since Server 2008 R2/Windows 7 and, despite being enormously powerful, has largely slipped under the radar. BranchCache is a hosted or peer-to-peer file block sharing technology, much like you might find behind torrent-style file sharing networks. Yup, that’s right: if you wanted to, you could build a huge file sharing network using out-of-the-box Windows technology! But it can be used for good too.

BranchCache operates deep under the covers of the Windows operating system when communicating using one of the BranchCache-enabled protocols: HTTP, SMB (file access), or BITS (Background Intelligent Transfer Service). When a user on a BranchCache-enabled device accesses files on a BranchCache-enabled file server, or accesses web content on a BranchCache-enabled web server, the hooks in the HTTP.SYS and SMB stacks kick in before transferring all the content from the server.

HTTP BranchCache

So how does it work with HTTP?

When a request is made from a BranchCache enabled client there is an extra header in the request Accept-Encoding: peerdist which signifies that this client not only accepts normal html responses but also accepts another form of response, content hashes.

If the server has the BranchCache feature enabled it may respond with Content-Encoding: peerdist along with a set of hashes instead of the actual content. Here’s what a BranchCache response looks like:


Note that if there was no BranchCache operating at the server a full response of 89510 bytes of javascript would have been returned by the server. Instead a response of just 308 bytes was returned which contains just a set of hashes. These hashes point to content that can then be requested from a local BranchCache or even broadcast out on the local subnet to see if any other BranchCache enabled clients or cache host servers have the actual content which corresponds to those hashes. If the content has been previously requested by one of the other BranchCache enabled clients in the office then the data is retrieved immediately, otherwise an additional request is made to the server (with MissingDataRequest=true) for the data. Note that this means some users will experience two requests and therefore slower response time until the distributed cache is primed with data.

It’s important at this point to understand the distinction between the BranchCache and the normal HTTP caching that operates under the browser. The browser cache will cache whole HTTP objects where possible as indicated by cache headers returned by the server. The BranchCache will operate regardless of HTTP cache-control headers and operates on a block level caching parts of files rather than whole files. That means you’ll get caching across multiple versions of files that have changed incrementally.

BranchCache Client Configuration

There are a number of ways to configure BranchCache on the client, including Group Policy and netsh commands; however, the easiest is to use PowerShell. Launch an elevated PowerShell command window and execute any of the BranchCache cmdlets:

  • Enable-BCLocal: Sets up this client as a standalone BranchCache client; that is it will look in its own local cache for content which matches the hashes indicated by the server.
  • Enable-BCDistributed: Sets up this client to broadcast out to the local network looking for other potential Distributed BranchCache clients.
  • Enable-BCHostedClient: Sets up this client to look at a particular static server nominated to host the BranchCache cache.
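For example, setting a client up for distributed mode and confirming it took effect is only a couple of commands (a sketch; run from an elevated PowerShell window):

# Sketch: enable distributed BranchCache on this client and check its state.
Enable-BCDistributed
Get-BCStatus | Select-Object BranchCacheIsEnabled, BranchCacheServiceStatus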

While you can use a local cache, the real benefits come from distributed and hosted mode, where the browsing actions of a single employee can benefit the whole office. For instance, if Employee A and Employee B are sitting in the same office and both browse to the same site, then most of the content for Employee B will be retrieved directly from Employee A’s laptop rather than re-downloaded from the server. That’s really powerful, particularly where there are bandwidth constraints in the office and common sites used by all employees. But it requires that the web server serving the content participates in the BranchCache protocol by installing the BranchCache feature.

HTTP BranchCache on the Server

One of the things you lose when moving to a cloud service (like SharePoint Online) from an on-premises server is the ability to install components and features on the server, like BranchCache. However, by routing requests via our Cloud Accelerator that capability is available again, simply by installing the Windows Server BranchCache feature.
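On the accelerator that is a one-liner plus a service start (a sketch; run elevated):

# Sketch: add the BranchCache feature to the Cloud Accelerator (ARR) server.
Install-WindowsFeature BranchCache
Start-Service PeerDistSvc   # the BranchCache service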

Installing the BranchCache feature on the Cloud Accelerator immediately turns the SharePoint Online service into a BranchCache-enabled service, so the size of the content body downloaded to the browser goes from this:

To this:

There are some restrictions and configuration settings to understand. First, you won’t normally see any peerdist hash responses for a content body size of less than 64KB. Also, you’ll need a latency of about 70ms between client and server before BranchCache bothers stepping in. You can actually change these parameters, but it’s not obvious from the public APIs. The settings are stored at this registry key (HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Services\PeerDistKM\Parameters) and will be picked up next time you start the BranchCache service on the server. Changing these parameters can have a big effect on performance and depends on the exact nature of the bandwidth or latency environment the clients are operating in. In the example above I changed MinContentLength from the default 64K (which would miss most of the content from SharePoint) to 4K. The effect of changing the minimum content size to 4K is quite dramatic on bandwidth, but it will penalise those on a high-latency link due to the multiple requests for many small pieces of data not already available in your cache peers.
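As a sketch, the MinContentLength change described above could be scripted as follows; treating the value as a DWORD expressed in bytes is my assumption, so test it in your own environment before relying on it:

# Sketch only: lower the minimum content size BranchCache will hash.
# Assumes a DWORD value in bytes (4096 = 4K) - verify before relying on it.
$key = 'HKLM:\SYSTEM\ControlSet001\Services\PeerDistKM\Parameters'
Set-ItemProperty -Path $key -Name 'MinContentLength' -Value 4096 -Type DWord
Restart-Service PeerDistSvc   # parameters are read when the service starts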

The following chart shows the effect of our Cloud Accelerator on the SharePoint Online home page for two employees in a single office. Employee A browses to the site first, then Employee B on another BranchCache enabled client browses to the same page.

Where:

  • Office 365: Out of the box raw service
  • Caching: With caching headers modified by our Cloud Accelerator
  • Compression: With compression added to dynamic content (like the home page)
  • BranchCache 64K: With BranchCache enabled for >64K data
  • BranchCache 4K: With BranchCache enabled for >4K data

So while adopting a cloud-based service is often a cost-effective solution for businesses, if the result negatively impacts users and the user experience then it’s unlikely to gain acceptance and may actually be avoided in preference for old on-premises habits like local file shares and USB drives. The Cloud Accelerator gives us back ownership of the traffic and the ability to implement powerful features that bring content closer to the users who need it.

And remember, this can be done for any web-delivered cloud service, not just the SharePoint Online example I showed here, and it can operate for users on the corporate LAN, on someone else’s corporate LAN, in a café or anywhere else two computers sit on the same network. This gets us one step closer to enabling “Cloud Nirvana”: an organisation unleashed from the traditional ring-fenced network, with cloud services delivered efficiently to any device, anytime, anywhere.


TDD for Mobile Development – Part 1


This post aims at exploring the best practices in terms of code quality and testability for mobile development.
It is part of a series that talks about Unit and Integration Testing in the Mobile space. In particular, I focus on Android and iOS.

For many developers, testing is an afterthought, a task that’s not well considered. But there’s heaps of research out there that shows how much you could save and how a test-first approach could improve your design. I am not going to go into the details of this; I would assume that you are a test-first kind of person since you are reading this, so let’s get started.

NUnitLite

In this post, I will show NUnitLite for Xamarin.Android and Xamarin.iOS. NUnitLite, as the name indicates, is a cut-down version of NUnit. There are versions (builds) for testing iOS apps and for Android. The iOS version comes out of the box when installing Xamarin, and it allows you to create a project from an NUnitLite (MonoTouch) project template.

This approach is good when you have platform-specific code that has to be placed in the platform-specific project or inside the app project. You can reference your MonoTouch or MonoDroid projects from the NUnitLite project and start your testing.

For Android, there are a few versions of NUnitLite; I have worked with this one.

Sometimes you are developing a component that needs to behave the same way on the two different platforms, but the internal implementation could be platform-specific. To test the platform-specific code, you put your code into your testing project as normal. You could also reference the same NUnitLite test file from both platforms, since the expected behaviour is the same on both. Some developers do not like having referenced files (me included), so you could create separate versions for the two platforms if you wish.

Sample of iOS platform-specific code

public class TestableController : UIViewController
{
    public TestableController ()
    {
    }
 
    public int GetTotal(int first, int second)
    {
        return first + second;
    }
 
}

Sample of Android platform-specific code

namespace Tdd.Mobile.Android
{
    public class TestableController : Fragment
    {
        public override void OnCreate (Bundle savedInstanceState)
        {
            base.OnCreate (savedInstanceState);
        }
 
        public int GetTotal(int first, int second)
        {
            return first + second;
        }
 
    }
}

Please note that I am not suggesting that you write your code this way or put your logic into the UIViewController or Activity classes. The only reason I am doing it this way is to show you how you could test anything inside these platform-specific classes. Ideally, you would put your logic into ViewModels or some other form of container that is injected into the controllers. Anyway, assuming that we have some platform-specific logic inside these classes, this is how I would test it:

[TestFixture]
public class TestableControllerTest
{
    [Test]
    public void GetTotalTest()
    {
        // arrange
        var controller = new TestableController ();

        // act
        var result = controller.GetTotal (2, 3);

        // assert
        Assert.AreEqual (5, result);
    }
}

The screenshot below shows the structure of my solution. I have also put the code on GitHub in case you are interested in playing with it. I would love to hear what you have to say; get in touch if you have any comments or questions.

Tdd Mobile Development Code Structure

In the next blog post, I will show how most of the code could be placed into testable libraries, and could be easily tested from your IDE (VS or Xamarin Studio), without the need to run an emulator/simulator.

Static DIP Request and VIP Reservation on Microsoft Azure

Firstly, what are the Azure VIP (virtual IP address) and DIP (internal IP address assigned by Azure DHCP) on Microsoft Azure? A Microsoft Azure VM has two known IP addresses:

  • VIP: Public IP address pointing to the Azure Cloud Service where the VM is deployed. Every Cloud Service has a VIP and every Cloud Service can have several VMs. A VIP assigned to a Cloud Service won’t be released until the last VM on that Cloud Service is Stopped (De-allocated) or Deleted.
  • DIP: Internal IP address assigned to the VM by Azure DHCP. The DIP won’t be released from the VM until the VM is Stopped (De-allocated) or Deleted. An OS-level restart/shut down won’t release the DIP.

The diagram below shows the VIP and DIP concept with two VMs deployed on an Azure Cloud Service:

Figure 1 Microsoft Azure VIP and DIP

Static DIP

DIPs are allocated randomly (first come, first served) from the subnet address pool on the VNET when VMs are deployed onto a VNET. Hence, re-deploying VMs in a different start-up order to the same VNET will result in different DIPs being assigned. Figure 1 above shows KloudVM01 with the 10.0.0.11 DIP and KloudVM02 with the 10.0.0.10 DIP. If both VMs are stopped (de-allocated), they will lose their VIP and DIP. If KloudVM01 is then started and a few minutes later KloudVM02 is started, KloudVM01 will likely have the 10.0.0.10 DIP instead of 10.0.0.11.

Requesting a DIP means the VM will attempt to request a specific static DIP; however, there is no guarantee. The request will fail if the DIP has already been assigned to another VM. The PowerShell script below can be run to set the DIP:

The PowerShell script below can be run out of the box and will prompt you for the Cloud Service name, VM name and DIP:
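As a rough sketch, a static DIP request with the classic Azure Service Management cmdlets looks something like this (the VNET, cloud service, VM name and address are placeholders):

# Sketch: check the address is free, then request it as a static DIP.
Test-AzureStaticVNetIP -VNetName 'KloudVNET' -IPAddress '10.0.0.11'

Get-AzureVM -ServiceName 'KloudService01' -Name 'KloudVM01' |
    Set-AzureStaticVNetIP -IPAddress '10.0.0.11' |
    Update-AzureVM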

Note: It is recommended to employ separate subnets for static IP address VMs and dynamic IP address workloads. It is easier to manage by segregating the IP address types; for example, Subnet 1 for all static IP address VMs and Subnet 2 for all dynamic IP address PaaS web/worker roles.

VIP Reservation

At the time of writing, VIP reservation for an existing Cloud Service and VIP reservation for Cloud Services that reside in a VNET associated with an affinity group are not supported. However, Microsoft has indicated this capability will come in the future. Use the following script to create a VIP reservation:

Use Get-AzureReservedIP to check all VIP reservations on the current Azure subscription. After the VIP reservation is executed successfully, the VIP can be used in a deployment. The following script is a sample of how to use a reserved VIP in an Azure VM deployment:

The result of the scripts above:

New-AzureVM
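For reference, a minimal sketch of the reservation workflow with the classic service management cmdlets is shown below; the names, location, image and credentials are placeholders, not the scripts referred to above.

# Sketch: reserve a VIP, list reservations, then deploy a VM using it.
# $imageName and $password are assumed to have been set earlier.
New-AzureReservedIP -ReservedIPName 'KloudReservedVIP' -Location 'Southeast Asia' -Label 'Kloud VIP'

Get-AzureReservedIP   # list reservations on the current subscription

New-AzureVMConfig -Name 'KloudVM01' -InstanceSize Small -ImageName $imageName |
    Add-AzureProvisioningConfig -Windows -AdminUsername 'kloudadmin' -Password $password |
    New-AzureVM -ServiceName 'KloudService01' -Location 'Southeast Asia' -ReservedIPName 'KloudReservedVIP'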

IPv4 addresses are a scarce resource, therefore Microsoft charges a nominal price for a VIP under a few circumstances. This link provides further information about VIP reservation pricing and billing.

Note: An Azure subscription has a soft limit of 5 VIP reservations. A support ticket can be raised to increase this soft limit.