Mobile platform increases productivity for integrated services company

Customer Overview
Spotless Group is an Australian owned, managed and operated provider of integrated facility management services. With operations across Australia and New Zealand, the Group’s 33,000 employees deliver millions of service hours a year across hundreds of specialist services to industry sectors including:

  • Health
  • Education
  • Leisure, Sport and Entertainment
  • Defence
  • Government
  • Resources
  • Business and Industry (AU) (NZ)
  • Laundries

Business Situation
Spotless service workers and supervisors (employees and sub-contractors) generally work at customer locations rather than offices. Spotless needs to supervise this work, capture work-related information, and both disseminate information to and receive information from these workers.

With a distributed workforce and interaction that was predominantly paper-based or verbal, the organisation was experiencing inefficiencies due to the number of manual processes involved.  Continuing with this manual mode of operation was causing time delays, inefficient use of support staff, and an ongoing lack of visibility and effective oversight of service workers and their activities.

In response to the business situation, Kloud proposed a secure mobile computing platform for Spotless’ customers, service managers, and service workers named “MyWork”.

Solution
MyWork is a cloud based mobile platform that comprises the following components:

  • Customer specific web portal for customers to raise, query, and track service requests, query asset registers, query maintenance schedules, etc. It was developed as a set of single-page applications (SPAs) in HTML, JavaScript, and CSS and is hosted on SharePoint Online.
  • Service team web portal for managing timesheets, jobs, audits, etc. It was developed as a set of single-page applications (SPAs) in HTML, JavaScript, and CSS and it is hosted on SharePoint Online.
  • Service team mobile phone apps for managing time sheets, jobs and audits. They were developed as Android and iOS apps using Xamarin’s cross-platform runtime and are distributed via AirWatch mobile device management solution.
  • Platform services that implement a range of HTTP-based services, including integration services with Spotless’ on-premises systems, for the customer web portals, the service team web portal, and the mobile phone apps. They were developed in ASP.NET Web API and are hosted in Microsoft Azure.

Benefits
The MyWork solution was designed with Microsoft’s cloud services, which leveraged Spotless’ existing investments in the Microsoft Azure Platform and SharePoint Online (Office 365).

MyWork supports Spotless' value that if a job is worth doing, it is worth doing well by:

  • Putting people first: Engaging customers, service managers, and service workers by providing systems they can access in the office, out of the office, and at remote work sites.
  • Rolling up our sleeves: The ability to assign service requests directly to service workers, turning jobs around in a more timely and efficient manner.
  • Leading not following: Providing leading-edge technology and mobile systems that connect the workforce more closely to its customers and service workers.
  • Making every dollar count: MyWork provides customers, staff and subcontractors a new way of doing business by moving to a more automated, online and mobile-focused platform.

"Kloud understood Spotless' requirement and delivered to this requirement on time and on budget. Their knowledge of the subject matter and technologies, combined with their track record of other mobility projects, resulted in a positive project outcome." – Peter Lotz, Chief Information Officer, Spotless Group

Purchasing Additional SharePoint Online Storage for Office 365

There are a number of different options for customers to purchase Office 365.  In the U.S.A. and the majority of markets, customers can purchase Office 365 directly from Microsoft via MOSP (Microsoft Online Subscription Program).  This is the most common way for small businesses to purchase Office 365.  Customers can purchase licenses using a credit card.  There is no minimum license quantity for MOSP.  Customers pay for Office 365 via an automatic monthly subscription.

In Australia, Telstra has a syndication agreement with Microsoft.  This means that customers who want to purchase Office 365 in Australia transact the purchase with Telstra.  This service is known as T-Suite.  Billing for T-Suite can be via a monthly credit card payment or the customer’s existing Telstra account.  After purchasing the licenses from Telstra, customers are provided with an Office 365 Org ID and password to access the new tenant. 

Another option for customers to purchase Office 365 is via a volume license (VL) agreement.  For large enterprises that require 250 licenses and above, customers can purchase via an Enterprise Agreement (EA) or Enterprise Subscription Agreement (EAS).  Smaller customers that require between 5 – 249 licenses can purchase Office 365 via an Open Agreement.  VL agreements require a commitment of 1 – 3 years, depending on the agreement.  VL agreements are billed annually.  Customers who are based in Australia and wish to buy Office 365 directly from Microsoft can do so with a VL agreement.

There are many differences between Office 365 purchases via MOSP vs. VL.  The differences include:

1) The prices of the licenses

2) The frequency of the payments

3) The length of commitment

4) The types of SKUs which are available

It is important to consider all of these factors before making a decision on the best way to purchase Office 365 for your organization. 

This blog will focus on one of the major differences between the Office 365 SKUs offered via MOSP vs. an Open agreement.

When customers purchase Office 365 and SharePoint Online, the tenant is provided with 10 GB of storage by default.  This storage can be used to provision a number of different SharePoint Online websites, including public and internal websites.  For each Office 365 and SharePoint Online user license purchased, the tenant is provided with an additional 500 MB of storage.  For example, a customer who purchases 10 E3 licenses will receive 10 GB + (10 users) * (500 MB) = 10 GB + 5 GB = 15 GB.  Please note that this pool of SharePoint Online storage is separate from the storage used by OneDrive for Business. Each user who uses OneDrive for Business is now given 1 TB of storage for personal files.

In some instances, customers may want to increase the amount of storage available for SharePoint Online.  Kloud Solutions works with many customers who would like to move their corporate file shares from an on-premises server to SharePoint Online.  The storage required for your file shares may exceed the default storage allocation in SharePoint Online.  Therefore, Microsoft has introduced the option for customers to purchase additional SharePoint storage on a per GB basis. 
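Before buying extra storage, it can help to check how much of the current allocation is actually in use. The following is a minimal sketch using the SharePoint Online Management Shell, assuming you are a tenant administrator; the admin URL is a placeholder for your own tenant.

    # Connect to the SharePoint Online admin endpoint (placeholder tenant name)
    Connect-SPOService -Url https://contoso-admin.sharepoint.com

    # StorageQuota and StorageQuotaAllocated are reported in megabytes
    Get-SPOTenant | Select-Object StorageQuota, StorageQuotaAllocated

    # Per-site-collection usage, largest first
    Get-SPOSite -Limit All | Sort-Object StorageUsageCurrent -Descending |
        Select-Object Url, StorageUsageCurrent, StorageQuota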

There are many different types of Office 365 plans that can be purchased.  You will first need to determine if your existing Office 365 subscription is eligible for additional storage.  SharePoint Online storage is available for the following subscriptions:

  • Office 365 Enterprise E1
  • Office 365 Enterprise E2
  • Office 365 Enterprise E3
  • Office 365 Enterprise E3 for Symphony
  • Office 365 Enterprise E4
  • Office 365 Midsize Business
  • Office Online with SharePoint Plan 1
  • Office Online with SharePoint Plan 2
  • SharePoint Online (Plan 1)
  • SharePoint Online (Plan 2)

SharePoint Online Storage for Small Business is available for the following subscriptions:

  • Office 365 (Plan P1)
  • Office 365 Small Business Premium
  • Office 365 Small Business

If your subscription is one of the eligible plans above, you can purchase additional SharePoint Online storage via MOSP or, for customers in Australia, via the T-Suite portal.

One of the key limitations to consider is that Microsoft does NOT offer the option to purchase additional SharePoint Online storage via an Open Agreement for small and medium businesses.  For instance, you can purchase 10 E3 licenses via an Open Agreement. This would provide 15 GB of SharePoint Online storage using the example above.  However, you would NOT be able to purchase additional GB of storage as the SKU is not available on the Open price list. 

You can mix Open and MOSP licensing in the same Office 365 tenant.  For example, you could buy 10 E3 licenses via an Open agreement and then apply them to a tenant using an Office 365 product key.  If you wanted to buy an additional 3 GB of storage, you could do so via a credit card in the same tenant.  However, SharePoint Online storage must be tied to another license; it cannot be purchased by itself.  So you would have to buy at least 1 additional E3 license via MOSP in order to add the additional 3 GB of storage.  This is something to consider when you are pricing an Office 365 solution.

For reasons of both simplicity and flexibility, Kloud Solutions recommends purchasing Office 365 via MOSP or T-Suite if you need additional SharePoint Online storage today, or if you think you may need it in the future.  Purchasing via MOSP or T-Suite allows you to keep your options open and plan for future storage growth.  Buying Office 365 via Open means that you are locked in to a certain storage allocation as determined by Microsoft.   There is no guarantee that Microsoft’s default storage allocation will meet your requirements. 

It is very likely that Microsoft will increase the default storage allocation for SharePoint Online in the future, as the cost of storage continues to fall steadily.  For example, Microsoft recently increased the amount of storage available in OneDrive from 25 GB to 1 TB.  Here is a blog post which references this change:

http://blog.kloud.com.au/2014/05/04/sharepoint-online-storage-improvements-in-office-365/

However, there have been no announcements from Microsoft to date indicating that they plan to increase the default storage for SharePoint Online beyond 10 GB per tenant plus 500 MB per user.  This blog will be updated if there are any relevant announcements.

If you have any questions about the different options for purchasing Office 365 from Microsoft or Telstra, please contact Kloud  Solutions using the following URL:

http://www.kloud.com.au/

Mobile Test-Driven Development Part (3) – Running your unit tests from your IDE

TDD in Mobile Development – Part 3
1. Unit Testing of Platform-Specific Code in Mobile Development.
2. Portable IoC (Portable.TinyIoC) for Mobile Development
3. Mobile Test-Driven Development – Running your unit tests from your IDE

This is the third post in my TDD for Mobile Development series. It shows how we can practise test-driven development for mobile by looking at options for running our tests from within the IDE and finding the right test runner for our development environment, without needing to launch an emulator or deploy to a device every time we want to run the tests.

In a previous post I showed how to use NUnitLite to write unit/integration tests on Android and iOS. This post shows how you could write your unit tests with the NUnit framework and run them from your IDE.

Problems with NUnitLite

NUnitLite does not have a test runner that can be used outside of the mobile OS. This holds true for both Android and iOS, which is why every time we need to run the tests we have to deploy to a real device or a simulator/emulator.
This may be acceptable, and even necessary, for some platform-specific logic. However, in most cases we do not have to test the code on the exact platform. Take the example that we had in the previous post:

    public int GetTotal(int first, int second)
    {
        return first + second;
    }

This code is just plain C# code that could be placed outside of the platform-specific code, shared across multiple platforms, and then tested conveniently using NUnit.

Portable Class Library (PCL)

This brings us to using Portable Class Libraries (PCLs). The beauty of using PCLs is not only in sharing code across multiple platforms; it also enables us to test our code using full frameworks like NUnit or Microsoft Test (although I would really stick with NUnit :) ).
Bear in mind that PCLs are still evolving, and new PCL-compatible packages are appearing every day.

Some developers might argue that it is troublesome to write your code in PCLs, since it adds restrictions and only allows you to use the subset of .NET that is supported on all configured platforms.

This could be true, but you can get around it in three ways:

1- Only support the platforms that you really need.
I normally use PCL profile 78 or 158. This gives me the two main platforms that I am working on, Android and iOS, plus later versions of Windows Phone (8.1) and Silverlight. You do not have to use a profile that tries to support older versions, and you will have fewer limitations by following this approach.

2- Make use of NuGet packages.
Installing NuGet packages is a great way of going PCL. Whenever I am trying to do something that is not supported in the .NET subset, I look in the NuGet store, and most of the time I find that somebody has already developed a package I can use directly. The other nice thing about NuGet is that it supports distributing libraries for multiple platforms. This means that sometimes you get a package that supports both Android and iOS; in that case you will find two separate folders under /lib (inside the NuGet package), one for each platform. In other cases, NuGet gives you a portable library where the folders under /lib have names like portable-net45+win81, which means the DLLs inside can be used and referenced from those profiles (platforms). This is great because you can just use the code without worrying about changing anything. Examples of such packages are:

a. SQLite.NET-PCL
b. PCLWebUtility
c. Microsoft.Bcl
d. Microsoft.Bcl.Build
e. Microsoft.Bcl.Async
f. Newtonsoft.Json

3. Abstract your platform-specific logic and use a platform-specific implementation.
Sometimes your logic has to have a platform-specific version; say you are doing something with animation or cryptography, where you need to use platform-specific libraries.
The best way to go about this is to have an abstraction that gets injected into the libraries/classes that depend on these platform-specific components. This means that your classes/libraries do not have any dependency on platform-specific code; they depend only on the abstraction. At run time, you can inject your platform-specific implementation via any IoC container, or even manually. I have a full post on IoC in cross-platform development here. It is also worth looking at the SQLite.NET-PCL implementation, as it follows exactly this approach.

MVVM

MVVM is a great approach for developing software because it ensures that your business logic is not coupled to any presentation layer or component.
There is even MVVMCross, which allows you to build apps in a cross-platform fashion. However, I prefer not to use MVVMCross because it adds much more complexity than I need, and if I ever had to develop or change something outside the framework, I would need to invest a lot in learning it and building workarounds. Therefore, I just stick with my ViewModels.
This means I take advantage of the MVVM pattern by having my ViewModels hold all my business logic and injecting these ViewModels into my controllers/presenters.
The ViewModels can also have other services, factories, and repositories injected into them (using an IoC container or manually), and that way our code is all cross-platform and very testable.

        public class CalculatorViewModel : ViewModelBase 
	{
		public int GetTotal(int first, int second)
		{
			return first + second;
		}
	}

        //iOS Controller
	public class CalculatorController : UIViewController
	{
		private readonly CalculatorViewModel _viewModel;

		public CalculatorController (CalculatorViewModel viewModel)
		{
			_viewModel = viewModel;
		}
	}

        //android Controller
        public class CalculatorController : Fragment
	{
		private readonly CalculatorViewModel _viewModel;

		public CalculatorController (CalculatorViewModel viewModel)
		{
			_viewModel = viewModel;
		}
	}

Writing Tests

As you can see from above, our logic now sits in the ViewModel and it is all testable regardless of the platform. This also makes it easy for us to use any test framework and test runner, including NUnit or Microsoft Test. It gets even better: we can have our test libraries target .NET 4.0 or 4.5, which means we can use all the goodness of .NET in writing our tests, including mocking libraries such as FakeItEasy and Rhino Mocks.

Running the Tests

Now that we have all this great setup, we can look at running our tests. Microsoft Test support comes out of the box, so there is no need to install anything extra. If you prefer using NUnit like me, you could install the latest version of NUnit (which includes the adapter and the runner). However, there is an even better way: you can just install the NUnit adapter (with runner) from the NuGet store. This makes the NUnit adapter and runner part of your solution, so you do not need to install the framework on every developer's machine or on your build server (as we will see in the continuous integration server setup later).
To start writing your tests, create a class library that targets .NET 4.0 or .NET 4.5, install the NUnit adapter NuGet package, and start writing your tests like below:

Tdd Mobile Common Tests Visual Studio

Running Mobile TDD Tests Visual Studio

Tdd Mobile Common Tests in Xamarin Studio

Conclusions

In conclusion, I have demonstrated in the last three posts (1, 2, and 3) how to apply test-driven development to mobile projects. I hope this motivates you to start looking at improving your code quality and employing some of the tactics we talked about here. If you have any comments or questions, I would love to hear them, so get in touch.

TDD in Mobile Development – Part 3
1. Unit Testing of Platform-Specific Code in Mobile Development.
2. Portable IoC (Portable.TinyIoC) for Mobile Development
3. Mobile Test-Driven Development – Running your unit tests from your IDE

PowerShell Detection Method for SCCM 2012 Application Compliance management

Microsoft System Center Configuration Manager (SCCM) 2012 has a very powerful Application Detection and Delivery model, separate from the existing ‘package and program delivery model’ of previous versions of SCCM & SMS.

The power of this new model is not having to 'daisy chain' packages and executables together to achieve a desired outcome.  Using SCCM's Detection Method model reduces the burden of managing a Windows client base by keeping its baseline configuration consistent across every client in the organisation.

I recently assisted a Kloud customer to configure a script-delivery application that used the Application delivery model and a 'Detection Method' to ensure files reached their local Windows 8 folder destinations successfully.  The script simply copies the files to where they need to go, and the Detection Method then determines whether that script succeeded. If SCCM does not detect the files in their correct destination locations, it runs the script again.

Benefits in using SCCM 2012 Application and Detection Method Delivery

Using this Application and Detection method provided Kloud’s customer with the following business benefits:

  • Increased reliability of delivering Office template files to a Windows 8 machine and therefore reduced TCO in delivering software to authorised workstations.  If the application files were corrupted or deleted during installation or post-installation (for example a user turning their workstation off during an install), then SCCM detects these files are missing and re-runs the installation
  • Upgrades are made easier, as it does not depend on any Windows 8 workstation having to run a previous installation or ‘package’.  The ‘Detection Method’ of the Application object determines if the correct file version is there (or not) and if necessary re-runs the script to deliver the files.  The ‘Detection Method’ also runs after every install, to guarantee that a client is 100% compliant with that application delivery.
  • Uses SCCM client agent behaviour including BITS, restart handling, use of the 'Software Center' application for user-initiated installs and Application package version handling – for example, if a single file is updated in the Application source and re-delivered to the Distribution Point, the SCCM client detects that a single file has changed and downloads only the changed file, saving bandwidth (and download charges) from the Distribution Point

Customer Technical Requirements

Kloud’s customer had the following technical requirements:

1. The customer wanted to use an SCCM Application and Detection Rule to distribute ten Office 2010 template files to Windows 8 workstations (managed with the SCCM client)

2. They wanted to be able to drop new Office 2010 template files at any stage into the SCCM source application folder, distribute the application, and have the SCCM clients download and install those new templates with minimal interference to end users.

3. They also wanted the minimum number of objects in SCCM to manage the application, and wanted the application to ‘self heal’ if a user deleted any of the template files.

4. All code had to be written in PowerShell for ease of support.

Limitations of Native Detection Methods

SCCM 2012 has a great native Detection Rules method for MSI files and file system executables (see native Detection Rule image below:).

NativeDetectionRules

However, we quickly discovered the limitations of this native Detection Rule model, particularly for the 'File System' setting type:

1. Environment variables for user accounts, such as %username% and %userprofile%, are not supported

2. File versioning can only work with Windows executables (i.e. .EXE files) and not with metadata embedded in files, for example Word documents.

SCCM comes with the ability to run PowerShell, VBScript or JScript as part of its Detection Model, and it is documented with VBScript examples at this location:

TechNet Link

Taking these examples, the critical table to follow to get the Detection Model working correctly (and to improve your understanding of how your script behaves in terms of exit code, STDOUT and STDERR) is the following, reproduced from the TechNet link above:

Script exit code | Data read from STDOUT | Data read from STDERR | Script result | Application detection state
0 | Empty | Empty | Success | Not installed
0 | Empty | Not empty | Failure | Unknown
0 | Not empty | Empty | Success | Installed
0 | Not empty | Not empty | Success | Installed
Non-zero value | Empty | Empty | Failure | Unknown
Non-zero value | Empty | Not empty | Failure | Unknown
Non-zero value | Not empty | Empty | Failure | Unknown
Non-zero value | Not empty | Not empty | Failure | Unknown

This table tells us that the key to achieving an application delivery 'success' or 'failure' using our PowerShell detection script boils down to landing on one of the two rows with an exit code of 0 and an empty STDERR (detection states 'Not installed' and 'Installed') – any other result (i.e. an 'Unknown' application detection state) will simply result in the application not being delivered to the client.

The critical part of any Detection Method script is to ensure an exit code of 0 is always returned, regardless of whether the application has installed successfully or failed. The next critical step is the PowerShell equivalent of populating the STDOUT stream. Other script authors may choose to test STDERR as well in their scripts, but I found it unnecessary and preferred to keep it simple.

After ensuring my script achieved an exit code of 0, I then concentrated on my script either populating STDOUT or not populating it – I essentially ignored STDERR completely and ensured my script ran error free. At all times, for example, I used 'Test-Path' to first check that a file or folder exists before attempting to read its metadata properties. If I didn't use 'Test-Path', the script would throw an error if a file or folder was not found, and the detection would end up in an 'Unknown' state.

I therefore concentrated solely on my script achieving those two rows of the table above.

Microsoft provides example VBScript code to populate STDOUT (and STDERR), which can be found in the TechNet link above – however, my method involves just piping a single PowerShell 'Write-Host' command if the detection script determines the application has been delivered successfully.  This satisfies populating STDOUT and therefore signals detection success.
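To make that contract concrete before the full customer script later in this post, here is a minimal sketch of a detection script that targets only those two rows. The file path is a hypothetical placeholder; the important behaviours are that the script never throws, always exits with code 0, and writes to STDOUT only when everything is detected.

    # Hypothetical files to detect - replace with your own paths
    $expectedFiles = @("C:\Program Files (x86)\Customer\Example Template.dotx")

    # Test-Path keeps the script error free when a file or folder is missing
    $missing = $expectedFiles | Where-Object { -not (Test-Path $_) }

    if (-not $missing)
    {
        # Non-empty STDOUT + exit code 0 = "Installed"
        Write-Host "all files accounted for!"
    }
    # Empty STDOUT + exit code 0 = "Not installed"; the script simply ends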

Limitations in using Scripts for Detection

There were two issues in getting a Detection Method working properly: an issue related to the way SCCM delivers files to the local client (specifically upgrades) and an issue with the way Office template files are used.

One of the issues is that Word and Excel typically change a template file (however small the change!) when either application loads, altering its 'Date Modified' timestamp, its length in bytes, or both. Therefore, a detection method that determines whether a file has been delivered successfully to the workstation should avoid relying on a file's length in bytes or its modified timestamp.

The other issue we found is that SCCM has a habit of changing the 'Date Modified' timestamp of all files it delivers when it detects an 'upgrade' of the source files for that application. It typically does not touch the timestamps when it delivers a brand new install to a client that has never received the software; however, if a single file in the application's source folder is changed, SCCM reuses the previous version of the application in its cache (C:\windows\ccmcache) and downloads only the file that has changed. This results in all files having their 'Date Modified' timestamp changed (except for the brand new file). Therefore, determining whether the application has been delivered successfully using 'Date Modified' timestamps is not recommended. The key to seeing this process in action is looking at the file properties in the C:\windows\ccmcache\<sccm code> folder for that application, particularly before and after a file is updated in the original source SCCM application folder.

Ultimately, for Kloud’s customer, we used a file’s Metadata to determine the file version and whether the application has been delivered successfully or not. In this example, we used the ‘Company’ metadata field of the Word and Excel template file (found under a file’s ‘Properties’):

Metadata1

I used this Scripting Guy TechNet blog as the basis for retrieving a file's metadata with a PowerShell function, and then used the information pulled from the file to determine a good attribute to scan for file version control.

One of the limitations I found was that this function (through no fault of its author: Ed Wilson!) does not return ‘Version number’, so we used the ‘Company’ field instead. If someone has worked out a different PowerShell method to retrieve that ‘Version number’ metadata attribute, then feel free to tell me in the comments section below!

The next step in getting this PowerShell script to work correctly is ensuring that only exit code 0 is returned when the script executes.  Any other exit code will break the delivery of the application to the client. After that, the script must only issue a 'Write-Host' if it detects that all expected files are installed – in this example, only if all 10 files in my 'Path' array are detected will a 'Write-Host' be sent to the SCCM client, telling SCCM the application has been successfully delivered. If I copied this PowerShell script to a machine, ran it, and it did not detect all the files, it would display nothing in the PowerShell window, which tells the SCCM client that the delivery has failed.  If the script ran locally and displayed a single 'Write-Host' of 'all files accounted for!', that tells me the detection is working.

The sample code for our Detection Method can be found below (all filenames and paths have been changed from my customer’s script for example purposes):


# Authors: Michael Pearn & Ed Wilson [MSFT]
Function Get-FileMetaData
{
  <#
   .Synopsis
    This function gets file metadata and returns it as a custom PS Object
 #Requires -Version 2.0
 #>
 Param([string[]]$folder)
 foreach($sFolder in $folder)
  {
   $a = 0
   $objShell = New-Object -ComObject Shell.Application
   $objFolder = $objShell.namespace($sFolder) 

   foreach ($File in $objFolder.items())
    {
     $FileMetaData = New-Object PSOBJECT
      for ($a ; $a  -le 266; $a++)
       {
         if($objFolder.getDetailsOf($File, $a))
           {
             $hash += @{$($objFolder.getDetailsOf($objFolder.items, $a))  =
                   $($objFolder.getDetailsOf($File, $a)) }
            $FileMetaData | Add-Member $hash
            $hash.clear()
           } #end if
       } #end for
     $a=0
     $FileMetaData
    } #end foreach $file
  } #end foreach $sfolder
} #end Get-FileMetaData

$TemplateVersions = "5.0.2"

$wordStandards = "C:\Program Files (x86)\Customer\Customer Word Standards"
$wordTemplates = "C:\Program Files (x86)\Microsoft Office\Templates"
$wordTheme = "C:\Program Files (x86)\Microsoft Office\Document Themes 14\Theme Colors"
$excelAddins = "C:\Program Files (x86)\Customer\Customer Excel Addins"
$xlRibbon = "C:\Program Files (x86)\Microsoft Office\Office14\ADDINS"
$PPTribbon = "C:\Program Files (x86)\Customer\PowerPoint Templates"
$PPTtemplates = "C:\Program Files (x86)\Microsoft Office\Templates\Customer"

$strFile1 = "Bridge Template.xlsm"
$strFile2 = "Excel Ribbon.xlam"
$strFile3 = "NormalEmail.dotm"
$strFile4 = "PPT Ribbon.ppam"
$strFile5 = "Client Pitch.potx"
$strFile6 = "Client Presentation.potx"
$strFile7 = "Client Report.potx"
$strFile8 = "Blank.potx"
$strFile9 = "Blocks.dotx"
$strFile10 = "Normal.dotm"

$Path = @()
$Collection = @()

$Path += "$excelAddins\$strfile1"
$Path += "$xlRibbon\$strfile2"
$Path += "$PPTribbon\$strfile3"
$Path += "$PPTtemplates\$strfile4"
$Path += "$PPTtemplates\$strfile5"
$Path += "$PPTtemplates\$strfile6"
$Path += "$wordStandards\$strfile7"
$Path += "$excelAddins\$strfile8"
$Path += "$xlRibbon\$strfile9"
$Path += "$PPTribbon\$strfile10"

if (Test-Path $wordStandards) {
$fileMD = Get-FileMetaData -folder $wordStandards
$collection += $fileMD | select path, company
}
if (Test-Path $wordTemplates) {
$fileMD = Get-FileMetaData -folder $wordTemplates
$collection += $fileMD | select path, company
}
if (Test-Path $wordTheme) {
$fileMD = Get-FileMetaData -folder $wordTheme
$collection += $fileMD | select path, company
}
if (Test-Path $excelAddins) {
$fileMD = Get-FileMetaData -folder $excelAddins
$collection += $fileMD | select path, company
}
if (Test-Path $xlRibbon) {
$fileMD = Get-FileMetaData -folder $xlRibbon
$collection += $fileMD | select path, company
}
if (Test-Path $PPTribbon) {
$fileMD = Get-FileMetaData -folder $PPTribbon
$collection += $fileMD | select path, company
}
if (Test-Path $PPTtemplates) {
$fileMD = Get-FileMetaData -folder $PPTtemplates
$collection += $fileMD | select path, company
}
$OKCounter = 0
for ($i=0; $i -lt $Path.length; $i++) {
     foreach ($obj in $collection) {
     If ($Path[$i] -eq $obj.path -and $obj.company -eq $TemplateVersions) {$OKCounter++}
     }
}
if ($OKCounter -eq $path.length) {
write-host "all files accounted for!"
}


I then pasted this code into the application's Detection Method, resulting in something similar to the following image:

DetectionModel

If the application has been delivered successfully (the script returns exit code 0 and pipes 'all files accounted for!' via Write-Host to STDOUT), then entries similar to the following should appear in the local SCCM client log C:\Windows\CCM\Logs\AppEnforce.log:


<![LOG[    Looking for exit code 0 in exit codes table...]LOG]!><time="12:29:13.852-600" date="08-08-2014" component="AppEnforce" context="" type="1" thread="2144" file="appexcnlib.cpp:505">
<![LOG[    Matched exit code 0 to a Success entry in exit codes table.]LOG]!><time="12:29:13.853-600" date="08-08-2014" component="AppEnforce" context="" type="1" thread="2144" file="appexcnlib.cpp:584">
<![LOG[    Performing detection of app deployment type User Install - Prod - Office 2010 Templates 5.0.2(ScopeId_92919E2B-F457-4BBD-82FF-0765C1E1E696/DeploymentType_0f69fa14-549d-4397-8a0b-004f0d0e85e7, revision 4) for user.]LOG]!><time="12:29:13.861-600" date="08-08-2014" component="AppEnforce" context="" type="1" thread="2144" file="appprovider.cpp:2079">
<![LOG[+++ Discovered application [AppDT Id: ScopeId_92919E2B-F457-4BBD-82FF-0765C1E1E696/DeploymentType_0f69fa14-549d-4397-8a0b-004f0d0e85e7, Revision: 4]]LOG]!><time="12:29:16.977-600" type="1" date="08-08-2014" file="scripthandler.cpp:491" thread="2144" context="" component="AppEnforce">
<![LOG[++++++ App enforcement completed (10 seconds) for App DT "User Install – Prod – Office 2010 Templates 5.0.2" [ScopeId_92919E2B-F457-4BBD-82FF-0765C1E1E696/DeploymentType_0f69fa14-549d-4397-8a0b-004f0d0e85e7], Revision: 4, User SID: S-1-5-21-1938088289-184369731-1547471778-5113] ++++++]LOG]!><time="12:29:16.977-600" date="08-08-2014" component="AppEnforce" context="" type="1" thread="2144" file="appprovider.cpp:2366">


We should also see a status of ‘Installed’ in the ‘Software Center’ application (part of the SCCM client):

SoftwareCenter

Hope this helps with using SCCM application and Detection Method scripting! Any questions, please comment on my post below and I’ll endeavour to get back to you.

How I Reduced the Worker Role Time from Over 5 Hours to Less Than 1 Hour

This post talks about my experience reducing the execution time of a Worker Role from over 5 hours to under 1 hour. This Worker Role calls some external APIs to get a list of items with their promotions and stores them locally – a typical batch update process that you would see in many apps. Our client was only interested in quick fixes that would help them reduce the time the Worker Role takes to run. We were not allowed to change the architecture or make big changes as they had a release deadline in a few weeks. So here is what I did.

Profiling

Before making any changes, I started by profiling the application to see where the bottlenecks were. Quite often you find people doing "optimisations and bug fixes" without pre-defined metrics. This is a big mistake, as you cannot improve what you do not measure. You need to quantify the issue first, then start applying your changes.

Database

As we looked at the statistics of how slowly the worker role was running, we came to understand that there was a problem in the way we were interacting with the database. The worker role deletes all items and then inserts them again. Do not ask me why delete and insert again; that's a question for the solution architect/developer of the worker role to answer. Data insertion happens in huge volumes (millions of records). To reduce the time these database transactions were taking, I made the following changes:

1. Disabling Entity Framework Changes Tracking
Entity Framework keeps track of all changes to any entity object in memory to facilitate inserting/updating/deleting records in the database. While this is a good thing when you are working with one or a few objects, it is a killer when you are dealing with millions of records at once. To avoid it, you just need to configure your EF context to disable change tracking:

	dbContext.Configuration.AutoDetectChangesEnabled = false;

2. Disabling Entity Framework Validation feature
Similar to the first change, we do not need to add extra overhead just for validating if we are certain of our data. So we switched off EF validation:

	dbContext.Configuration.ValidateOnSaveEnabled = false;

3. Individual Insert vs Bulk Insert
One of the things I found in the worker role is that it was inserting records one by one in a foreach statement. This can work fine for a few items, and you would not notice the difference, but with huge volumes it kills performance. I thought of building an extension for the EF context to insert data in bulk, but fortunately somebody has already done that. EF.BulkInsert is an extension for EF that allows you to insert in bulk; it is basically a group of extension methods on your EF context. It is very lightweight and it works great. The authors show on the project home page that bulk insert is more than 20 times faster than individual inserts. When using such an extension, make sure to configure its settings properly – things like BatchSize, Timeout, DataStreaming, etc.

4. Transactions
I see this quite often: developers surround their database code with a transaction. It might look like a good thing on paper, but you need to understand the implications. Such transactions slow down the whole process, add a huge load on the database server and the app server, and make rolling back or committing harder. Moreover, EF 6 and above already wraps your changes in a transaction when committing them, so your changes will either all be committed or all rolled back. There was therefore no need for the extra transaction scope, and I got rid of it.

Computing

Another bottleneck I found was in the way we were generating tags. These are just metadata about items and their groupings. It was taking a huge amount of time to loop through all items and create these tags, categories, groups, etc., all of which happened in memory. The change I made was very simple but substantial: I made it run in parallel, like this:

	Parallel.ForEach(products, (product) =>
	{
		// code for generating tags, categories, etc
	});

If you are trying this, make sure that your code is thread-safe. I had to fix a few issues here and there, as in my case the code was not thread-safe, but this was a small change. Also, if you are sharing a list among threads, consider using a thread-safe collection like ConcurrentDictionary or ConcurrentBag.

External Resources

The list of items and promotions was accessed from an external API. The worker role was fetching this list from 7 main API endpoints (7 different stores) before starting to process the data, and this was very slow. To speed it up, I fired multiple requests in parallel, similar to what I did when generating tags, as below:

	var endPoints = new[] {1, 2, 3, 4, 5, 6, 7};
	Parallel.ForEach(endPoints, (apiEndpoint) =>
	{
		// code for calling external API
	});

Also, we started caching locally some of the info that we were accessing from the API, which saved us a lot of time too.

Making these small changes took me less than a day, but it had a huge impact on the way the worker role runs. Having an app running for such a long time in the cloud can cost a lot of money, although my client did not mind that. What they really hated was that it failed so many times: a process that needs to run for 5 hours is far more error-prone – connections can drop, the database can time out, and so on. Plus, whenever developers made any changes, they had to wait a long time when testing the worker role locally or on the server.

In conclusion, these small changes benefited my client significantly and they were very satisfied. I hope you find these tips useful, and I would love to hear your thoughts. If you are struggling with a long-running process, then get in touch and we will happily help you out.

Highly Available SQL 2012 across Azure VNET (Part 2)

Part 1 can be found here.

In this Part 2 we will discuss:

  • Create DC on Azure and confirm VNET to VNET connectivity
  • SQL VMs
  • Configure WSFC and lastly configure AAG

DC and Connectivity VNET to VNET

First things first: we need VMs for the Domain Controllers (DC) and SQL Server 2012. I will use my script below to create a few VMs.
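The script embedded in the original post has not survived, so here is a minimal sketch of the kind of classic (Service Management) Azure PowerShell involved in standing up one of these VMs. The image selection, size, credentials and subnet are placeholders, and it assumes the cloud service already exists (add -Location to New-AzureVM if it does not).

    # Classic (ASM) Azure PowerShell - names, size and credentials are placeholders
    $image = (Get-AzureVMImage |
        Where-Object { $_.Label -like "Windows Server 2012 R2*" } |
        Select-Object -First 1).ImageName

    New-AzureVMConfig -Name "AZSEDC001" -InstanceSize "Small" -ImageName $image |
        Add-AzureProvisioningConfig -Windows -AdminUsername "kloudadmin" -Password "P@ssw0rd!" |
        Set-AzureSubnet -SubnetNames "Azure-Backend" |
        New-AzureVM -ServiceName "AZSEDC" -VNetName "SEVNET"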

I created two DCs, one on each VNET: AZSEDC001 and AZUSDC001, and registered both as DNS servers on Azure.
The next step is to allow ICMP in Windows Firewall (wf.msc), as we are going to test ping between the two servers.

mydc01

 

mydco2

Great. Now we have confirmed connectivity between both DCs, and therefore between SEVNET and USVNET.

SQL VMs 

I created two SQL VMs (AZSEDB001 and AZSEDB002) under one cloud service (AZSEDB) on the Azure-Backend subnet of SEVNET, and domain-joined both SQL Servers.

Configure WSFC

For this scenario, I created three extra accounts on AD:

1. kloud\kloudinstall – for the failover cluster and AG. Grant it Allow permissions for 'Read all properties' and 'Create Computer objects' in AD, and assign local admin permissions on the SQL Servers.

2. kloud\svcsql1 and kloud\svcsql2

Next, add the Failover Clustering feature on the SQL servers and install the hotfix for Windows Server 2012 cluster nodes: http://support.microsoft.com/kb/2803748

1. Create the WSFC Cluster:

wsfc1

2. Create a multi-node cluster on azsedb001 (add all the SQL VMs using the wizard – it is smart enough to detect the multi-subnet configuration) and do not choose to require support from Microsoft for this cluster.

wsfc2

3. Configure the quorum File Share Witness on another machine. I configured it on the SEVNET DC.
4. Change the cluster IP addresses (WSFC will pick up azsedb001's IP, 10.0.1.4) to unused IPs. I used 10.0.1.103 for SEVNET and 192.168.1.110 for USVNET.
5. Bring the cluster online:
wsfc3
You can test failover to USVNET by using the PowerShell command below:
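The command itself did not come through with this post, so here is a sketch of the kind of test you can run, assuming the FailoverClusters module and that AZUSDB001 is the cluster node in USVNET: move the core cluster group to the remote node, check ownership, then move it back.

    Import-Module FailoverClusters

    # Move the core cluster resources to the node in USVNET, then check ownership
    Move-ClusterGroup -Name "Cluster Group" -Node "AZUSDB001"
    Get-ClusterGroup -Name "Cluster Group"

    # Move it back to a SEVNET node when finished testing
    Move-ClusterGroup -Name "Cluster Group" -Node "AZSEDB001"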

Click here for more details regarding multi-subnet WSFC

Configure AAG

Prep:
1. Launch wf.msc to allow a firewall inbound rule on all SQL Servers for the program %ProgramFiles%\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Binn\sqlservr.exe.
2. Enable AlwaysOn Availability Groups on all SQL servers: launch SQL Server Configuration Manager > SQL Server Services > SQL Server (MSSQLSERVER) > tick Enable AlwaysOn Availability Groups > restart the service. (This can also be scripted; see the sketch after these prep steps.)

sql1

 

3. Launch SQL Server Management Studio. Add a new security login for NT AUTHORITY\SYSTEM, go to Securables and grant: Alter any availability group, Connect SQL and View server state. Also add the installer account with the sysadmin server role.

sql2

4. Change the SQL Server service account from NT Service\MSSQLSERVER. In this case: svcsql1 for AZSEDB001, svcsql2 for AZSEDB002 and svcsql3 for AZUSDB001.

sql3
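As an aside, prep step 2 can also be scripted. A minimal sketch using the SQLPS module, assuming the default MSSQLSERVER instance on each of the three replicas:

    Import-Module SQLPS -DisableNameChecking

    # Enable the AlwaysOn feature and restart the service on each replica
    foreach ($server in "AZSEDB001", "AZSEDB002", "AZUSDB001")
    {
        Enable-SqlAlwaysOn -ServerInstance $server -Force
    }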

 

AAG Steps:

1. Attach an extra disk to AZSEDB001, format the drive and create a folder named 'backup'. Share the backup folder as below:

sql4

 

2. On AZSEDB001, run SQL Server Management Studio and create a new database: kloud1.

3. Take a full backup of the database: right-click the kloud1 database > Tasks > Back Up. Remove the default destination and set it to \\azsedb001\backup\kloud1.bak

sql5

 

4. Take a transaction log backup, using the same destination file path.

5. Restore the full and transaction log backups on azsedb002 and azusdb001. In SQL Server Management Studio (SSMS), right-click Databases and select Restore Database. Click Device, add the backup media with the backup file location \\azsedb001\backup, and choose the backup file kloud1.bak

sql7

6. Click Options, select 'RESTORE WITH NORECOVERY' for the recovery state, and then click OK to restore the database. (These backup and restore steps can also be scripted; see the sketch below.)

sql8
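For reference, a minimal sketch of the backup and restore steps with the SQLPS cmdlets, using the same share and database names (the log backup is written to a separate .trn file here for clarity; run the restore part against azsedb002 and azusdb001):

    Import-Module SQLPS -DisableNameChecking

    # On AZSEDB001: full backup plus a transaction log backup to the share
    Backup-SqlDatabase  -ServerInstance "AZSEDB001" -Database "kloud1" -BackupFile "\\azsedb001\backup\kloud1.bak"
    Backup-SqlDatabase  -ServerInstance "AZSEDB001" -Database "kloud1" -BackupFile "\\azsedb001\backup\kloud1.trn" -BackupAction Log

    # On each secondary: restore both with NORECOVERY so it can join the AAG
    Restore-SqlDatabase -ServerInstance "AZSEDB002" -Database "kloud1" -BackupFile "\\azsedb001\backup\kloud1.bak" -NoRecovery
    Restore-SqlDatabase -ServerInstance "AZSEDB002" -Database "kloud1" -BackupFile "\\azsedb001\backup\kloud1.trn" -RestoreAction Log -NoRecovery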

7. Now for the fun stuff: run the AAG wizard. Right-click AlwaysOn High Availability in SSMS and follow the wizard.

sql9

8. In the AAG wizard's Specify Replicas step, configure the replicas as follows.

What we have here are two secondary replicas – one (AZSEDB002) in SEVNET and one (AZUSDB001) in USVNET. The detailed configuration:

sql10

Note: AZUSDB001 is configured with asynchronous data replication, since AZUSDB001 is hosted hundreds of miles away from the Southeast Asia data centre and latency would hurt performance.

9. On the Select Initial Data Synchronization page, select Join only and click Next, since we have already performed the backup and restore operations, which is the recommended best practice, especially for enterprise databases.

10. Follow the wizard through the remaining pages, ignore the listener configuration warning, and click Finish.

The AAG dashboard should now look like this:

sql11

More details can be found here.

Configure the Listener:

Next we will create the AAG listener:

1. Create load-balanced Azure VM endpoints for each Azure VM (a scripted sketch follows below)

2. Install KB2854082 for Windows Server 2008R2 and Windows Server 2012 cluster nodes.

3. Open the firewall to allow inbound traffic on the probe port (59999) specified in step 1.

4. Create the AG listener:
Open Failover Cluster Manager > Expand the AAG cluster name > Roles >
Right click the AG name and select Add Resource > Client Access Point

sql12
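For step 1, the load-balanced endpoints with a probe port can be created with the classic Azure PowerShell cmdlets. A sketch, assuming the SEVNET cloud service and VM names used earlier, listener port 1433 and probe port 59999:

    # Classic (ASM) cmdlets - add a load-balanced endpoint with a probe to each SQL VM behind the listener
    foreach ($vmName in "AZSEDB001", "AZSEDB002")
    {
        Get-AzureVM -ServiceName "AZSEDB" -Name $vmName |
            Add-AzureEndpoint -Name "SQLListener" -Protocol tcp -LocalPort 1433 -PublicPort 1433 `
                -LBSetName "SQLListenerLB" -ProbePort 59999 -ProbeProtocol tcp -ProbeIntervalInSeconds 10 `
                -DirectServerReturn $true |
            Update-AzureVM
    }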

 

 

Click the Resources tab, right-click the listener > Properties, and note the IP address resource name and the network name.

Get the cloud service VIPs for both SEVNET and USVNET, then run the script below on AZSEDB001:
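The script referenced here did not survive either; the usual approach Microsoft documents for AG listeners in Azure is to point the listener's IP resource at the cloud service VIP and probe port with Set-ClusterParameter. A sketch, where the network name, IP resource name and VIP are placeholders taken from the properties noted above:

    Import-Module FailoverClusters

    # Placeholders - use the values noted from the listener's Properties dialog
    $ClusterNetworkName = "Cluster Network 1"      # cluster network for the SEVNET subnet
    $IPResourceName     = "AAGListener_IP"         # the listener's IP address resource name
    $CloudServiceVIP    = "191.239.0.10"           # VIP of the AZSEDB cloud service

    Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple @{
        "Address"              = $CloudServiceVIP
        "ProbePort"            = "59999"
        "SubnetMask"           = "255.255.255.255"
        "Network"              = $ClusterNetworkName
        "OverrideAddressMatch" = 1
        "EnableDhcp"           = 0
    }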

Once completed:

sql15

Create a dependency on the listener name resource. Right click the Availability Group and click Properties:

sql16

Launch SSMS > Go to AlwaysOn High Availability > Availability Groups > AAG Listener Name > Properties and specify Port: 1433

And that’s it. We have Highly Available SQL 2012 AAG across Azure VNET

Follow this link for more details how to configure AlwaysOn in Azure.

 

 

Securing Emails Outside of Your Organization With Office 365 Message Encryption

For those of you who have been concerned about email security for a number of years, you may remember a solution from Microsoft called Exchange Hosted Encryption (EHE).  This was a cloud-based service which allowed organizations to encrypt emails according to certain defined rules.  For example, you could encrypt emails where the intended recipient was outside of your organization and certain keywords or regular expressions were detected, such as a credit card number.  This was a very useful service for protecting emails sent to ANY user, regardless of the relationship with that user's company.  There was no need to set up federation between the two organizations.  All certificates were stored and maintained in the cloud, which made it very simple to administer compared to an on-premises solution.

The problem with EHE was that it was a separate service.  It required a completely separate console to configure and administer.  Moreover, using EHE required an additional licensing cost for every user that needed to send encrypted email.  As a result, adoption of EHE was low except in industries where data security was paramount.  Some examples of industries where EHE was very popular include:

1) Financial services including banking and insurance

2) Healthcare

3) Lawyers

4) Contract management

Microsoft recently announced Office 365 Message Encryption as the next release of EHE.  There are a number of improvements in this release which make it far more appealing to deploy and utilize.  First, the service is based on Microsoft Azure Rights Management Services (RMS), and Office 365 integrates beautifully with Azure AD and Azure RMS.  This means that Office 365 Message Encryption is a built-in capability of Office 365.  Deployment and configuration of the service can be performed directly from the Exchange Online Admin Console.

The following plans include Office 365 Message Encryption:

1) Office 365 E3

2) Office 365 E4

3) Azure AD RMS

4) Enterprise Mobility Suite (Exchange Online not included)

Other Office 365 plans can add Message Encryption as an additional subscription SKU.  Running Exchange Online Protection (EOP) is a prerequisite to running Message Encryption.

The behavior of Office 365 Message Encryption is controlled by Exchange transport rules.  These rules are configured by an Exchange Online administrator and apply across the organization.  Here are some examples of popular transport rules:

1) Encrypt all emails sent from legal council to a user external to the organization

2) Encrypt all emails sent to a user external to the organization where the phrase "encrypt" appears in the subject line

3) Encrypt all emails sent to a user external to the organization where the body contains the number pattern XXXX-XXXX-XXXX-XXXX which resembles a credit card PAN.

When a user sends an email that matches one of these transport rules, the message is encrypted, converted into an HTML attachment, and then transmitted to the recipient.  When the message is received, the end user is given instructions on how to open the encrypted message.  The recipient does NOT require an email account that is trusted by the sender or federated with his organization.  The only requirement is that the email address of the recipient is configured as either a:

1) Microsoft Account

2) Microsoft Organization ID

If the email address of the recipient is NOT configured as one of the above account types, the recipient will be presented with instructions on how to set one up.  This is required before the encrypted message can be opened.

To improve the Office 365 Message Encryption experience for end users, I recommend that you set up at least two transport rules:

1) Transport rule for outbound email based on business rules for data protection

2) Transport rule to decrypt inbound email on delivery to save internal users the extra step
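A hedged sketch of what rules along these lines look like in Exchange Online PowerShell; the rule names and keyword are examples, and it assumes the ApplyOME and RemoveOME transport rule actions available with this release of Message Encryption:

    # Outbound: encrypt external mail when the sender flags it in the subject
    New-TransportRule -Name "Encrypt outbound on keyword" `
        -SentToScope NotInOrganization `
        -SubjectContainsWords "encrypt" `
        -ApplyOME $true

    # Inbound: decrypt on delivery so internal users skip the extra step
    New-TransportRule -Name "Decrypt inbound OME" `
        -SentToScope InOrganization `
        -RemoveOME $true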

Organizations using Office 365 Message Encryption can customize the experience for the end user.  They can add a corporate logo or standard disclaimer text to every encrypted email.  Customizing the experience requires the use of PowerShell, as there is no UI available for message customization in the current release.
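For reference, the branding is applied with the OME configuration cmdlets in Exchange Online PowerShell. A sketch, with the image path and text as placeholders:

    # Apply a logo and disclaimer to the encrypted message portal and notification mail
    Set-OMEConfiguration -Identity "OME Configuration" `
        -Image ([System.IO.File]::ReadAllBytes("C:\Branding\logo.png")) `
        -DisclaimerText "This message contains confidential information." `
        -PortalText "Contoso Secure Email Portal"

    # Review the current settings
    Get-OMEConfiguration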

If you need assistance securing your corporate email, please contact Kloud solutions at the following URL:

http://www.kloud.com.au/#

Unable to Administer Office 365 Using PowerShell with Multi-Factor Authentication

Back in February, Microsoft announced the release of multi-factor authentication.  This feature allows IT administrators to dramatically increase the security of Office 365 by requiring a second factor of authentication to access the service.  This feature is very simple to configure and use.  It is far simpler to configure multi-factor authentication for Office 365 than it is to enable an equivalent solution on premises.  To learn more about multi-factor authentication, I recommend the following blog post:

http://blog.kloud.com.au/2014/04/16/protect-your-identity-in-the-cloud-with-multi-factor-authentication/

 

There are some limitations of multi-factor authentication that are important to be aware of before turning on this feature.  One key limitation is that PowerShell commands cannot be run with an account that has multi-factor authentication enabled.  Here is why:

1) Authentication of a PowerShell session only accepts a user name and password.  There is no way to provide a second factor.

2) Application passwords cannot be used to authenticate a PowerShell session

All Office 365 administrators will need to run PowerShell commands at some point to administer the service.  Therefore, multiple admin accounts will be required for different administrative scenarios.
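To illustrate the limitation, this is roughly what connecting to Office 365 via PowerShell looked like at the time (the MSOnline module is used here as an example): the cmdlet only accepts a basic credential object, so there is nowhere to supply a second factor.

    Import-Module MSOnline

    # Get-Credential only captures a user name and password - no second factor
    $cred = Get-Credential -Message "Office 365 admin account (no MFA)"
    Connect-MsolService -Credential $cred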

 

Kloud Solutions recommends creating three separate Office 365 accounts for global admins who need to run PowerShell:

 

1) A standard user account to perform daily tasks such as checking email or accessing shared files.   This account will have an Office 365 license assigned.  Multi-factor authentication is not required for this account, but it is highly recommended.

2) A global admin account to perform administrative tasks.  This account should only be used when administrative access is required.  Because this account is privileged, I strongly recommend enabling multi-factor authentication to increase the level of security.

3) A global admin account to run PowerShell commands.  This account cannot be secured with multi-factor authentication, so I recommend leaving it disabled until it is needed.  This reduces the risk of compromise for an account that is not protected by a second authentication factor.

 

If you are looking for assistance with Office 365, PowerShell, or multi-factor authentication, please contact Kloud Solutions at the following URL:

http://blog.kloud.com.au/

Highly Available SQL 2012 across Azure VNET (Part 1: VNET Peering)

Just over a year ago, Microsoft announced support for SQL Server AlwaysOn Availability Groups (AAG) on Microsoft Azure IaaS. Last month, Microsoft announced support for SQL AAG between Azure regions. This is great news for high availability and disaster recovery scenarios with a great technology like SQL Server 2012. AAG was released in SQL 2012 and enhanced in SQL 2014; it detects anomalies which impact SQL availability. We will discuss how to do this in two blog posts:

  • Part 1: Design SQL 2012 AAG across Azure VNET and how to create Microsoft Azure VNET to VNET peering
  • Part 2: SQL, WSFC, configure quorum and voting, and configure AAG

Part 1: SQL 2012 AAG across Azure VNET

SQL 2012 AAG is designed to provide high availability for SQL databases, and Azure IaaS is a great place for this technology to live. There are a few benefits of using Azure IaaS for this scenario:

  • Security features from Azure as a cloud provider. The security whitepaper can be found here
  • Azure VM security extensions, which mean we can rest assured that when a VM is deployed it is protected from day 1. Details can be found here
  • Azure ILB to provide a load balancing solution
  • 99.90% SLA for VNET connectivity (VPN). This feature is backed by two "hidden" Azure VMs in an active-passive configuration
  • ExpressRoute (MPLS) for higher bandwidth requirements – we won't discuss or use this feature in these blog posts

The architecture components for this scenario: two VNETs in two different regions to avoid a single point of region failure. We will call these VNETs SEVNET (Southeast Asia region) and USVNET (US West region); they will be peered. A DC on each VNET provides AD and DNS services, and the first DC in the Southeast Asia region will be used as the File Share Witness. Three SQL Servers participate in the AAG: two at SEVNET in an Azure Availability Set and one at USVNET. The constraints for this scenario:

  • A Cloud Service cannot span Azure VNETs. For this scenario two Cloud Services will be used for the SQL VMs
  • An Azure Availability Set (AS) cannot span VNETs or Cloud Services. Two SQL Availability Sets will be deployed
  • Azure ILB cannot span VNETs. Only the primary and secondary SQL Servers will be load balanced on SEVNET

Diagram below illustrates the architecture:

SQLAAGacrossVNET

The diagram above shows SQL AAG configured across two Azure VNETs. This configuration gives resiliency against a full Azure region failure. The AAG will be configured with the primary replica and one secondary replica at SEVNET for automatic failover, and another secondary replica across the region at USVNET configured for manual failover and disaster recovery in case of a region failure at SEVNET. The listener will be configured on SEVNET and will route connections to the primary replica. This scenario also allows offloading read workloads from the primary replica to readable secondary replicas in the Azure region closer to the source of those read workloads (for example for reporting, BI or backup purposes).

Microsoft Azure VNET to VNET Peering

Now let's create the two VNETs in the Southeast Asia and US West regions. Below is the configuration:

  • SEVNET | Southeast Asia Region | Storage: sevnetstor | Address Space: 10.0.0.0/20 | DNS:  10.0.0.4
  • USVNET | US West Region | Storage: usvnetstor | Address Space: 192.168.0.0/20 | DNS: 10.0.0.4

We will use a Regional Virtual Network instead of an Affinity Group for this scenario, which will enable us to use ILB in the future. My colleague Scott Scovell wrote a blog about this a while ago. Create 2 storage accounts:

2 storage

Create the DNS server entry 10.0.0.4 – I registered it with the DC name AZSEDC001.

Create VNET Peering

We will use the Azure GUI to create the VNET-to-VNET peering. Create the first VNET in the Southeast Asia region: go to NEW > Network Services > Virtual Network > Custom Create > enter the VNET name SEVNET and select the region Southeast Asia > Next > select DNS Server AZSEDC001 > check Configure a site-to-site VPN > on Local Network choose Specify a new Local Network > Next

sevnet1

 

Enter the name of the local network as USVNET and specify its address space. For the VPN Device IP Address, just use a temporary one; we will replace it later with the actual public IP address of the gateway.

sevnet2

Next, we will configure the IP range of SEVNET. The important bit: click Add Gateway Subnet.

sevnet3

Next, we need to configure USVNET in the same way. Configure site-to-site connectivity with SEVNET as the local network, using its address space. Both VNETs will then look like this:

vnet4

Next, we will need to create Dynamic Routing VPN gateways for both VNETs. Static routing is not supported for this scenario.

vnet5
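If you prefer scripting this step, a minimal sketch using the Service Management cmdlets for the two VNETs above might look like this (gateway provisioning can take quite a while to complete):

# Create a dynamic routing VPN gateway for each VNET
New-AzureVNetGateway -VNetName "SEVNET" -GatewayType DynamicRouting
New-AzureVNetGateway -VNetName "USVNET" -GatewayType DynamicRouting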

Once both gateways have been created, get the gateway IP address for each VNET and replace the temporary VPN device IP addresses on the local networks with the actual gateway IP addresses we just obtained.

 

vnet6
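The gateway VIPs can also be read from PowerShell rather than copied from the portal; a quick sketch, assuming the gateway context object returned by Get-AzureVNetGateway exposes a VIPAddress property:

# Retrieve the public (VIP) address of each VNET gateway
(Get-AzureVNetGateway -VNetName "SEVNET").VIPAddress
(Get-AzureVNetGateway -VNetName "USVNET").VIPAddress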

The last step is to set the IPsec/IKE pre-shared keys for both VNETs. We will use Azure PowerShell for this configuration. First, we will get the pre-shared keys to be used in our PowerShell script.

vnet7

Please ensure you are on the right subscription. It is always a good habit to use the Select-AzureSubscription -Default cmdlet before executing an Azure PowerShell script.

vnet8
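The script behind this step boils down to selecting the right subscription and then setting the same shared key on both sides of the connection; a minimal sketch (the subscription name and key value are placeholders):

# Make sure the right subscription is selected (placeholder name)
Select-AzureSubscription -SubscriptionName "<your-subscription>" -Default

# Set the same IPsec/IKE pre-shared key on both sides of the connection (placeholder key)
Set-AzureVNetGatewayKey -VNetName "SEVNET" -LocalNetworkSiteName "USVNET" -SharedKey "<pre-shared-key>"
Set-AzureVNetGatewayKey -VNetName "USVNET" -LocalNetworkSiteName "SEVNET" -SharedKey "<pre-shared-key>"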

And that’s it! We should see the successful VNET-to-VNET peering:

vnet9
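The connection status can also be verified from PowerShell; a quick sketch, with each local network site expected to eventually report a connected state:

# Check the site-to-site connection state for each VNET
Get-AzureVNetConnection -VNetName "SEVNET"
Get-AzureVNetConnection -VNetName "USVNET"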

In Part 2 we will deep dive into how to configure the SQL AAG across both VNETs.

Azure Mobile Services and the Internet of Things

The IT industry is full of buzzwords, and “The Internet of Things” (IoT) is one that’s getting thrown about a lot lately. The IoT promises to connect billions of devices and sensors to the internet. How this data is stored, sorted, analysed and surfaced will determine how much value it delivers to your business. With this in mind I thought it was time to start playing around with some bits and pieces to see if I could create my very own IoT-connected array of sensors.

To get started I’ll need a micro-controller that I can attach some sensors to. Second, I’ll need some kind of web service and storage to accept and store my raw sensor data. I’m not a developer, so I’ve decided to keep things simple to start with. My design goal, however, is to make use of cloud services to accept and store my raw data. Azure Mobile Services seems like a good place to start.

I’ve chosen the following components for my IoT project:

  1. Arduino Yun – the Micro-controller board
  2. Temperature Sensor – to detect ambient temperature
  3. Light Sensor – to detect light levels
  4. Pressure Sensor – to detect barometric pressure
  5. Azure Mobile Services – to connect the Arduino Yun to the cloud
  6. NoSQL database – to store my raw data

Arduino Yun

From the Arduino website, the board’s capabilities are as follows: “The Arduino Yún is a microcontroller board based on the ATmega32u4 and the Atheros AR9331. The Atheros processor supports a Linux distribution based on OpenWrt named OpenWrt-Yun. The board has built-in Ethernet and WiFi support, a USB-A port, micro-SD card slot, 20 digital input/output pins (of which 7 can be used as PWM outputs and 12 as analog inputs), a 16 MHz crystal oscillator, a micro USB connection, an ICSP header, and 3 reset buttons.”

Further details on the Arduino Yun can be found at: http://arduino.cc/en/Main/ArduinoBoardYun?from=Products.ArduinoYUN

Schematics

The following schematic diagram illustrates the wiring arrangements between the Arduino Yun and the sensors. In this blog I’m not going to provide any specific detail in this area; instead we are going to focus on how the Arduino Yun can be programmed to send its sensor data to a database in Microsoft’s Azure cloud. (There are loads of other blogs that focus specifically on connecting sensors to Arduino boards; check out http://www.arduino.cc/)

[Schematic: Arduino Yun wired to the temperature, light and pressure sensors]

Azure Mobile Services

To make use of Azure Mobile Services you will need an Azure subscription. Microsoft offers a free one-month trial with $210 credit to spend on all Azure services. So what are you waiting for?
http://azure.microsoft.com/en-us/pricing/free-trial/

OK, back to Azure Mobile Services. Microsoft defines Azure Mobile Services as “a scalable and secure backend that can be used to power apps on any platform–iOS, Android, Windows or Mac. With Mobile Services, it’s easy to store app data in the cloud or on-premises, authenticate users, and send push notifications, as well as add your custom backend logic in C# or Node.js.” For my IoT project I’m just going to use Azure Mobile Services as a place to accept connections from the Arduino Yun and store the raw sensor data.

Create a Mobile Service

Creating the mobile service is pretty straightforward. Within the web management portal select New, Compute, Mobile Service, then Create.

 

Azure Mobile Services will prompt you for:

  • A URL – This is the end point address the Arduino Yun will use to connect to Azure Mobile Services
  • Database – A NoSQL database to store our raw sensor data
  • Region – The geographic region that will host the mobile service and database
  • Backend – the code family used in the back end. I’m using JavaScript

Next you’ll be asked to specify some database settings, including the server name. You can either choose an existing server (if you have one) or create a brand new one on the fly. Azure Mobile Services will prompt you for:

  • Name – That’s the name of your database
  • Server – I don’t have an existing one so I’m selecting “New SQL database Server”
  • Server Login Name – A login name for your new db
  • Server Login Password – The password

Now that we have a database it’s time to create a table to store our raw data. The Azure Management Portal provides an easy-to-use UI to create a new table. Go to the Service Management page, select the Data tab, and click the “+” sign at the bottom of the page. As we are creating a NoSQL table there is no need to specify a schema. Simply provide a table name and configure the permissions for the insert/update/delete/read operations.

Retrieval of the Application Key

Now it’s time to retrieve the Application Key. This will be used to authenticate the REST API calls when we post data to the table. To retrieve the application key go to the Dashboard page and select the “manage keys” button at the bottom of the page. Two keys will be displayed; copy the “application” one.

 

 

Create the Table

Within the Mobile Service Management page, select Data

Click Create

 

Once the table has been created it’s ready to accept values. The Arduino Yun can be programmed to send its sensor values to our new database via the Azure Mobile Services REST API: http://msdn.microsoft.com/en-us/library/jj710108.aspx

The Application key retrieved earlier will be used to authenticate the API calls.
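Before wiring up the Arduino, the insert endpoint can be exercised straight from PowerShell to confirm the table and application key are working; a minimal sketch using the service and table names that appear in the Arduino sketch below (the key is a placeholder):

# POST a test record to the Mobile Services table endpoint (application key is a placeholder)
$headers = @{ "X-ZUMO-APPLICATION" = "<your-application-key>" }
$body = '{ "LightLevel": 512 }'
Invoke-RestMethod -Uri "https://iotarduino.azure-mobile.net/tables/iotarduino_data" -Method Post -Headers $headers -Body $body -ContentType "application/json"

A successful call should return the inserted record as JSON, which mirrors the response the Arduino sketch reads back in read_response().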

 

The Arduino Yun Sketch

Here is the basic code inside the Arduino Yun sketch; it has the following core functions:

setup()

Every Arduino Yun sketch contains a setup function; this is where things like the serial port and the bridge are initialized.

loop()

Every Arduino Yun sketch contains a loop function; this is where the sensor values are read and the other functions are called.

send_request()

The send_request function is used to establish a connection with the Azure Mobile Services endpoint. An HTTP POST is formed, the application key header is added for authentication, and a JSON object is generated with our sensor value and placed in the body. Currently the sample code below sends a single sensor value (lightLevel) to Azure Mobile Services. This could easily be expanded to include the values from the full array of sensors connected to the Arduino Yun.

wait_response()

This function waits until response bytes are available to read on the connection, returning early if the connection has closed.

read_response()

This function reads the response bytes from Azure Mobile Services and outputs the HTTP status line to the serial console for debugging / troubleshooting purposes.

end_request()

This function closes the connection to the Azure Mobile Services endpoint once the response has been read.

 

/* Arduino Yun sketch that writes sensor data to Azure Mobile Services. */

// Include Arduino Yun libraries
#include <Bridge.h>
#include <YunClient.h>
#include <SPI.h>

// Azure Mobile Service address
const char *server = "iotarduino.azure-mobile.net";

// Azure Mobile Service table name
const char *table_name = "iotarduino_data";

// Azure Mobile Service Application Key
const char *ams_key = "HJRxXXXXXXXXXXXXXXXmuNWAfxXXX";

YunClient client;
char buffer[64];

/* Send an HTTP POST request to the Azure Mobile Services data API */
void send_request(int lightLevel)
{
  Serial.println("\nconnecting...");
  if (client.connect(server, 80)) {
    Serial.print("sending ");
    Serial.println(lightLevel);

    // POST URI
    sprintf(buffer, "POST /tables/%s HTTP/1.1", table_name);
    client.println(buffer);

    // Host header
    sprintf(buffer, "Host: %s", server);
    client.println(buffer);

    // Azure Mobile Services application key
    sprintf(buffer, "X-ZUMO-APPLICATION: %s", ams_key);
    client.println(buffer);

    // JSON content type
    client.println("Content-Type: application/json");

    // POST body
    sprintf(buffer, "{\"LightLevel\": %d}", lightLevel);

    // Content length
    client.print("Content-Length: ");
    client.println(strlen(buffer));

    // End of headers
    client.println();

    // Request body
    client.println(buffer);
  } else {
    Serial.println("connection failed");
  }
}

/* Wait for a response */
void wait_response()
{
  while (!client.available()) {
    if (!client.connected()) {
      return;
    }
  }
}

/* Read the response and output it to the serial monitor */
void read_response()
{
  bool print = true;

  while (client.available()) {
    char c = client.read();
    // Print only until the first newline (the HTTP status line)
    if (c == '\n')
      print = false;
    if (print)
      Serial.print(c);
  }
}

/* Terminate the connection */
void end_request()
{
  client.stop();
}

/* Arduino Yun setup */
void setup()
{
  Serial.begin(9600);
  Serial.println("Starting Bridge");
  Bridge.begin();
}

/* Arduino Yun loop */
void loop()
{
  int val = analogRead(A0);  // read the light sensor on analog pin A0
  send_request(val);
  wait_response();
  read_response();
  end_request();             // close the connection before the next reading
  delay(1000);
}

 

The Sensor Data in Azure

Once the sketch is uploaded to the Arduino Yun and executed, the sensor data can be viewed within the Azure Mobile Services dashboard.

 

The Arduino Yun serial monitor displays the debugging / troubleshooting output as the sketch executes.

 

Conclusion

This was just a bit of fun, and it is obviously not an enterprise-grade solution; however, I hope it goes some way towards illustrating the possibilities that are readily available to all of us. Things that once took many steps can be done today in far fewer, and access to powerful compute and storage is easier than ever.

The Arduino Yun is an open source electronic prototyping board that allows people like me, without any real developer skills, to mess around with electronics, explore ideas and interact with the outside world. There are loads of interesting Arduino code samples and project ideas with real-world use cases.

My goal here was to illustrate how easy it is to obtain raw sensor data from the outside world and store it in a place where I have loads of options for data processing. By placing my data in Azure I have the power and resources of the Microsoft Azure cloud platform literally at my fingertips. Enjoy the possibilities!