HoloLens – Spatial sound

The world of Mixed Reality and Augmented Reality is only half real without three-dimensional sound effects to support the virtual world. Microsoft addresses this challenge by leveraging the ability of its audio engine to generate Spatial Sound. Spatial Sound, as a feature, can simulate three-dimensional sounds in a virtual world based on direction, distance, and other environmental factors. Spatial Sound is based on the concept of sound localization, a popular topic in the field of sound engineering. Sound localization can be defined as the process of determining the source of a sound, the field of sound, the position of the listener, and the medium or environment of sound propagation. The following diagram illustrates a virtual world with Spatial Sound enabled:


The concepts of sound localization and Spatial Sound are not new to this world. Spatial music has been practiced since biblical times in the form of the antiphon. The modern form of spatial music was introduced in the early 1900s in Germany. Spatial Sound empowers a Mixed Reality application developer to place sound sources in a three-dimensional space around the user in a convincing manner. Objects in the virtual world can act as sources for these sounds, creating an immersive experience for the user.

Scenarios for Spatial Sound

In the world of Mixed Reality, Spatial Sound can be used to make many user scenarios realistic. Following are a few of them:

  • Anchoring – The ability to position a virtual object in a virtual world is critical for many Mixed Reality applications. In a scenario where the user turns away from the object and it disappears from the viewport, the only way to give the user a sense of the object’s existence in the scene is by propagating localized sounds from the object.
  • Guiding – Spatial Sound is very useful in scenarios where users need to be guided, drawing their attention to a specific object or space in the three-dimensional world.
  • Simulating physics – Sound plays an important role in emulating realistic physics in the world of Mixed Reality. For example, the impact of a glass sphere dropping behind the user in a three-dimensional world is best simulated by a localized shattering sound effect at the point of its collision with the floor.
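In Unity terms, the shattering example maps directly onto collision callbacks. The following is a minimal sketch (the class and clip names are illustrative; a persistent, spatialised AudioSource would give better HRTF results than the temporary source created by PlayClipAtPoint):

```csharp
using UnityEngine;

// Illustrative sketch: attach to the falling glass sphere. When it hits
// another collider, play the shatter clip at the exact point of impact.
public class ShatterOnImpact : MonoBehaviour
{
    public AudioClip shatterClip;   // assumed to be assigned in the inspector

    void OnCollisionEnter(Collision collision)
    {
        // The first contact point of the collision becomes the sound source.
        Vector3 impactPoint = collision.contacts[0].point;
        AudioSource.PlayClipAtPoint(shatterClip, impactPoint);
    }
}
```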

Implementing Spatial Sound in HoloLens applications

The audio engine in HoloLens uses a technology called HRTF (Head-Related Transfer Function) to simulate sounds coming from various directions and distances within a virtual world. Head-related transfer functions define the directivity patterns of the human ears, catering for the direction, elevation, distance, and frequency of a sound. Unity offers built-in support for the Microsoft HRTF extensions.

Configuring the Audio Source

The plug-in can be enabled from the audio manager in Unity (Edit->Project Settings->Audio) by selecting 'MS HRTF Spatializer' as the Spatializer Plugin.


Once the setting is enabled, you should be able to ‘Spatialize’ any audio source attached to a game object in Unity. To configure the audio source, you will need to perform the following steps:

  1. Select the audio source on the game object in the inspector panel
  2. Check the ‘Spatialize’ checkbox under the options for the audio source
  3. For best results, change the value of ‘Spatial Blend’ to 1

Your audio source is now configured to play Spatial Sound.
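The same two settings can also be applied from a script, which is convenient when audio sources are created at runtime. A minimal sketch (the class name is illustrative):

```csharp
using UnityEngine;

// Attach to a game object that has an AudioSource component.
// Mirrors the inspector steps above: enable spatialization and
// set Spatial Blend to fully 3D.
public class SpatialSoundSetup : MonoBehaviour
{
    void Start()
    {
        AudioSource source = GetComponent<AudioSource>();
        source.spatialize = true;      // the 'Spatialize' checkbox
        source.spatialBlend = 1.0f;    // 'Spatial Blend' slider set to 1 (fully 3D)
        source.Play();
    }
}
```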

Testing Spatial Sound

The best way to test Spatial Sound is by using the ‘Audio Emitter’ component which comes built-in with the HoloToolkit. Perform the following steps to configure the Audio Emitter:

  1. Add the Audio Emitter component from the inspector panel
  2. Configure the Update Interval, Max Objects, and Max Distance on the Audio Emitter component. The ‘Update Interval’ determines the frequency at which the Audio Emitter scans for environmental factors which influence the sound output. ‘Max Objects’ specifies the maximum number of influencing objects to be considered, and ‘Max Distance’ specifies the maximum radius for the scan.
  3. Leave the ‘Outer Sphere’ parameter empty for now. This parameter is used to demonstrate audio occlusion.
  4. Associate an audio file with your Audio Source and enable the loop option.
  5. Run the application and move around the object to experience Spatial Sound in action.

Measuring sound output

A common scenario is wanting to measure the sound output produced by an Audio Source in order to represent it visually. A good example is when you need to display an audio histogram within your application.

The following code can be used to measure the RMS output from an Audio Source, which can then be used to paint the histogram spectrum.
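A minimal sketch of such a measurement, using Unity's AudioSource.GetOutputData to sample the output buffer and computing the root mean square of the samples (class and property names are illustrative):

```csharp
using UnityEngine;

// Samples the playing AudioSource each frame and computes the RMS level
// of the most recent output buffer. The value can drive a histogram or meter.
public class OutputMeter : MonoBehaviour
{
    const int SampleCount = 1024;
    float[] samples = new float[SampleCount];

    public float Rms { get; private set; }

    void Update()
    {
        AudioSource source = GetComponent<AudioSource>();
        source.GetOutputData(samples, 0);   // channel 0

        float sumOfSquares = 0f;
        for (int i = 0; i < SampleCount; i++)
            sumOfSquares += samples[i] * samples[i];

        Rms = Mathf.Sqrt(sumOfSquares / SampleCount);
    }
}
```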

Unity also supports a direct function (AudioSource.GetSpectrumData) to retrieve spectrum data from the audio source.
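For a frequency-domain histogram, GetSpectrumData can be called in the same fashion; a brief sketch:

```csharp
using UnityEngine;

// Fills 'spectrum' with FFT magnitudes for channel 0. The array length
// must be a power of two between 64 and 8192.
public class SpectrumSampler : MonoBehaviour
{
    float[] spectrum = new float[256];

    void Update()
    {
        AudioSource source = GetComponent<AudioSource>();
        source.GetSpectrumData(spectrum, 0, FFTWindow.BlackmanHarris);
        // spectrum[i] now holds the magnitude of the i-th frequency bin.
    }
}
```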

To conclude, in this blog, we walked through the fundamentals of Spatial Sound. We then learned about implementing Spatial Sound in a HoloLens application and about how to measure sound output.

HoloLens – Understanding depth (Spatial Mapping)

Building smart applications which work in a three-dimensional space has many challenges. The one that tops the list is the challenge of understanding and mapping the surrounding 3D world. Applications usually depend on the device and platform capabilities to solve this problem. Augmented Reality and Mixed Reality devices ship with built-in technologies to measure the depth of the surrounding world.

Scenarios of interest

Mapping the world around a device is critical to enable powerful scenarios in this field. Following are a few such use cases:

  • Docking/Placement – What makes Mixed Reality different from Augmented Reality is its ability to enable interaction between virtual and physical objects. To make a Mixed Reality scenario realistic, it is critical for the application to understand the mapping of the environment the user is currently operating in. This helps the application place or dock holograms obeying the physical bounds of the environment. For example, if the application needs to place a chair in a room, it will need to position it on the floor with enough space to land its four legs.
  • Navigation – Objects in the holographic world should be constrained by the rules of the physical world to make the application look real. For example, a holographic puppy should be able to walk on the floor or jump on to the couch and not walk through the walls and furniture. To enable this, the application should be aware of the depth of each object around the user at any given point in time.
  • Physics – In the real world, the behaviour of an object in motion is highly influenced by factors like inertia, density, elasticity, and so on. To offer a similar experience with holograms in the world of Mixed Reality, the application needs to be aware of the environment. For example, dropping a ball from the roof onto the floor will have a different effect from dropping it on a couch.


Depth sensing is not a new problem in the world of computing. However, the rise of Mixed Reality and Augmented Reality enabled devices has brought it into the limelight. Different vendors address this challenge under different names. For example, Google calls it ‘Depth Perception’, Apple ARKit calls it a ‘Depth Map’, and Microsoft calls it ‘Spatial Mapping’. The underlying technologies used by these devices may be different, but the objective of discovering the depth of the environment around the device remains the same. Following are a few of the underlying technologies used by these devices to measure depth:

  • Structured Infrared light projector/scanner
  • RGB Depth cameras
  • Time-of-flight camera

Time of Flight

Time-of-flight technology is specifically of interest to us because of its popularity and the fact that it is used by devices like Microsoft Kinect and HoloLens to measure depth. The technology works on the reflective properties of objects. It uses the known speed of light to calculate distance by measuring the time taken for a photon to reflect back to the device sensors. The following diagram illustrates the measurement process:


Experiments show that time-of-flight technology works best in the range of approximately 0.5 to 5 metres. The depth camera in HoloLens works well between 0.85 and 3.01 metres.
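The distance arithmetic itself is straightforward: the sensor measures the round-trip time of a photon, and the one-way distance is half of that time multiplied by the speed of light. An illustrative calculation:

```csharp
using System;

class TimeOfFlight
{
    const double SpeedOfLight = 299792458.0; // metres per second

    // Round-trip time of the reflected photon -> one-way distance.
    static double DistanceMetres(double roundTripSeconds)
        => SpeedOfLight * roundTripSeconds / 2.0;

    static void Main()
    {
        // A 10-nanosecond round trip corresponds to roughly 1.5 metres.
        Console.WriteLine(DistanceMetres(10e-9)); // ~1.499
    }
}
```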

Spatial Mapping

Spatial Mapping is a feature shipped with Microsoft HoloLens which provides a representation of real-world surfaces around the device. This can be used by application developers to make applications environment-aware. The feature abstracts the hardware technology used to measure depth and exposes easily consumable APIs to integrate with.

Spatial Mapping in HoloLens

The best way to leverage Spatial Mapping capabilities in a HoloLens is to integrate it using Unity. To enable Spatial Mapping, you will first need to enable ‘SpatialPerception’ capability on your project. Unity offers two built-in components to support Spatial Mapping.

  • Spatial Mapping Renderer – This component is responsible for visually presenting the spatial map as a mesh to the application.
  • Spatial Mapping Collider – The collider is responsible for enabling interactions between the holograms and the spatial mesh.

These components can be added to an existing Unity project from the ‘Add Component’ menu (Add Component > AR > Spatial Mapping Collider/Spatial Mapping Renderer).
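The components can also be attached from code; a sketch assuming Unity 2017.2's UnityEngine.XR.WSA namespace (older versions expose the same types under UnityEngine.VR.WSA):

```csharp
using UnityEngine;
using UnityEngine.XR.WSA;   // UnityEngine.VR.WSA on older Unity versions

// Illustrative sketch: add both Spatial Mapping components at runtime,
// equivalent to Add Component > AR > Spatial Mapping Renderer/Collider.
public class SpatialMappingSetup : MonoBehaviour
{
    void Start()
    {
        var mappingRenderer = gameObject.AddComponent<SpatialMappingRenderer>();
        var mappingCollider = gameObject.AddComponent<SpatialMappingCollider>();

        // A lower level of detail keeps triangle density (and CPU cost) down.
        mappingRenderer.levelOfDetail = SpatialMappingBase.LODType.Low;
        mappingCollider.levelOfDetail = SpatialMappingBase.LODType.Low;
    }
}
```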

The following link talks about setting up Spatial Mapping capabilities in your Unity project in detail.


Tips and tricks

Following are a few tips and tricks to optimize Spatial Mapping in your applications.

  • Updating spatial maps – Running a spatial map starts with a trigger for collecting mapping data. This operation is very CPU intensive, draining battery life and starving other processes. To optimise update cycles, request collision data only when required. The APIs also let you query collision data for selective surfaces.
  • Configuring refresh intervals – It is important to choose an optimal refresh rate for your spatial maps to go light on the CPU. You can do this from Unity’s inspector window.


  • The density of spatial data – Spatial Mapping uses triangle meshes to represent the surfaces it maps. For an application which does not require high-resolution mapping, it is advisable to generate maps with lower triangle density to optimise CPU time and the turnaround time for the mapping process.
  • Understanding the implementation – It is useful to understand the implementation of Spatial Mapping to perform low-level optimizations specific to your applications. Understanding how ‘SurfaceObserver’ and ‘SurfaceData’ are implemented gives a good insight into how things work. You can refer to the Unity documentation to learn more about this.
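As a flavour of that low-level API, the following sketch drives a SurfaceObserver directly to be notified of surface changes within a bounded volume (the interval and volume values are illustrative; real code would go on to request meshes with RequestMeshAsync):

```csharp
using UnityEngine;
using UnityEngine.XR.WSA;   // UnityEngine.VR.WSA on older Unity versions

// Minimal sketch of driving SurfaceObserver directly. A production
// implementation would pool MeshFilters and bake colliders asynchronously.
public class SurfaceScanner : MonoBehaviour
{
    SurfaceObserver observer;
    public float updateInterval = 2.5f;   // seconds between scans
    float nextUpdate;

    void Start()
    {
        observer = new SurfaceObserver();
        // Only observe a 4 x 4 x 4 metre box around the origin.
        observer.SetVolumeAsAxisAlignedBox(Vector3.zero, Vector3.one * 4f);
    }

    void Update()
    {
        if (Time.time < nextUpdate) return;
        nextUpdate = Time.time + updateInterval;

        // Ask the observer which surfaces were added, updated or removed.
        observer.Update((SurfaceId id, SurfaceChange change, Bounds bounds, System.DateTime time) =>
        {
            Debug.Log(string.Format("Surface {0}: {1}", id.handle, change));
        });
    }

    void OnDestroy()
    {
        observer.Dispose();
    }
}
```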


  • Mixed Reality Toolkit example – A good place to start on Spatial Mapping is the example shared within the Mixed Reality Toolkit accessible through the following link.


To conclude, in this blog, we briefly discussed the depth mapping problem, solutions to it, and the technology behind them. We also dived deeper into the Spatial Mapping feature of HoloLens and how it can be used within a Unity project.


HoloLens – Mixed Reality Toolkit

Game programming is an entirely different paradigm for an enterprise application developer in terms of the tools, processes, and patterns used. But like any other development engagement, to kick-start the development phase and to reduce the learning curve, it is always helpful to have a set of pre-baked tools handy. In the world of HoloLens application development, Microsoft’s Mixed Reality Toolkit is your best companion.

The Mixed Reality Toolkit is an open source project driven by Microsoft. It is a collection of scripts and components which help developers over the initial hurdles they may face when developing applications for Microsoft HoloLens and Windows Mixed Reality headsets. The source code for this project is shared on GitHub.

Importing the source code for the toolkit

The Git repository for the Mixed Reality Toolkit can be accessed using the following link


Following are the steps to clone the code for the toolkit into your VSTS repository:

  1. Create a new project on the Visual Studio portal (.visualstudio.com).


  2. Give the project a name and choose the version control as ‘Git’


  3. On the landing page, expand the ‘Import a repository’ section and click on the Import button.


  4. Type in the URL for the Git repository (https://github.com/Microsoft/MixedRealityToolkit) and click Import.


  5. Once imported, the landing page for your repository will look like the image shown below


Exploring the source code

The best way to explore the source code for the Mixed Reality Toolkit is by trying out the example scenes. The example scenes are located under the path “Mixed Reality Toolkit/Assets/HoloToolkit-Examples” in the repository.


Following are the steps to open an example scene in Unity:

  1. Start Unity and open the root folder of the repository which you have cloned on your disk.


  2. In the project explorer, navigate to the HoloToolkit-Examples folder. You should see folders with all the example scenes under this path. For this exercise, let’s try out the slider example. Navigate to the “Assets/HoloToolkit-Examples/UX/Scenes” folder and double-click on the ‘SliderSamples’ scene.


  3. You should now see the slider in the game window. Click on the play button to test the scene.


Installing the toolkit on a new Unity project

The Mixed Reality Toolkit releases a Unity package which can be incorporated into your HoloLens Unity project. This package helps you configure your project and scenes with default settings for HoloLens. It also populates your scene with basic objects like the camera, input manager, cursor, etc. Following are the steps to download and install the Unity package:

  1. Download the unity package from the following URL



  2. Once downloaded, import the package into Unity


  3. Once imported correctly, you should see the ‘Mixed Reality Toolkit’ menu on your toolbar


  4. Use the Configure menu to set up the project and the scene settings.


The Mixed Reality Toolkit comprises a very useful collection of tools which lets you kick-start a HoloLens project with a template containing basic objects and settings. I will be covering continuous integration and continuous deployment strategies for HoloLens applications in my next blog post.

HoloLens – Continuous Integration

Continuous integration is best defined as the process of constantly merging development artifacts produced or modified by different members of a team into a central shared repository. This task of collating changes becomes more and more complex as the size of the team grows. Ensuring the stability of the central repository becomes a serious challenge in such cases.

A solution to this problem is to validate every merge with automated builds and automated testing. Modern code management platforms like Visual Studio Team Services (VSTS) offer built-in tools to perform these operations. Visual Studio Team Services is a hosted service offering from Microsoft which bundles a collection of DevOps services for application developers.

A Continuous Integration workflow is particularly important for HoloLens applications considering the agility of the development process. In a typical engagement, designers and developers work on parallel streams, sharing scripts and artifacts which constitute a scene. Having an automated process in place to validate every check-in to the central repository adds tremendous value to the quality of the application. In this blog, we will walk through the process of setting up a Continuous Integration workflow for a HoloLens application using VSTS build and release tools.

Build pipeline

A HoloLens application has multiple application layers. The development starts with creating the game world using Unity and then proceeds to wiring up backend scripts and services using Visual Studio. To build a HoloLens application package, we need to first build the front-end game world with the Unity compiler and then the back-end with the Visual Studio compiler. The following diagram illustrates the build pipeline:


In the following sections, we will walk through the process of setting up the infrastructure for building a HoloLens application package using VSTS.

Build agent setup

VSTS uses build agents to perform the task of compiling the application in the central repository. These build agents can either be Microsoft-hosted agents, which are available as a service in VSTS, or they can be custom-deployed services managed by you. HoloLens applications require custom build agents as they run custom build tasks for compiling the Unity application. Following are the steps for creating a build agent to run the tasks required for building a HoloLens application:

1. Provision a hosting environment for the build agent

The first step in this process is to provision a machine to run the build agent as a service. I’d recommend using an Azure Virtual Machine hosted within an Azure DevTest Lab for this purpose. The DevTest Lab comes with built-in features for managing start-up and shut-down schedules for the virtual machines, which are very effective in controlling the consumption cost. Following are the steps for setting up the host environment for the build agent in Azure:

  1. Login to the Azure portal and create a new instance of DevTest Labs.
  2. Add a virtual machine to the Lab.
  3. Pick an image with Visual Studio 2017 pre-installed.
  4. Choose hardware with a high number of CPUs and IOPS, as the agents are heavy on disk and compute. I’d advise a D8S_V3 machine for a team of approximately 15 developers.
  5. Select the PowerShell artifacts to be added to the virtual machine.
  6. Provision the virtual machine and remote desktop into it.

2. Create an authorization token

The build agent requires an authorized channel to communicate with the build server, which in our case is the VSTS service. Following are the steps to generate a token:

  1. On the VSTS portal, navigate to the security screen using the profile menu.
  2. Create a personal access token for the agent to authorize to the server. Ensure that you have selected ‘Agent pools (read, manage)’ in the authorized scope.
  3. Note the generated token. This will be used to configure the agent on the build host virtual machine.

3. Installing and configuring the agent

Once the token is generated, we are ready to configure the VSTS agent. Following are the steps:

  1. Remote desktop into the build host virtual machine on Azure.
  2. Open the VSTS portal in a browser and navigate to the ‘Agent Queue’ screen (https://.visualstudio.com/Utopia/_admin/_AgentQueue).
  3. Click on the ‘Download Agent’ button.
  4. Click on the ‘Download’ button to download the installer onto the disk of your VM. Choose the default download location.
  5. Configure the agent using PowerShell commands. Detailed instructions can be found at the below link:


  6. Once configured, the agent should appear in the agent list within the selected pool.

This completes the build environment setup. We can now configure a build definition for our HoloLens application.

Build definition

Creating the build definition involves queuing up a sequence of activities to be performed during a build. In our case, this includes the following steps:

  • Performing Unity build
  • Restoring NuGet packages
  • Performing Visual Studio build

Following are the steps to be performed:

  1. Login to the VSTS portal and navigate to the marketplace.
  2. Search for the ‘HoloLens Unity Build’ component and install it. Make sure that you are selecting the right VSTS project while installing the component.
  3. Navigate to Builds on the VSTS portal and click on the ‘New’ button under ‘Build Definitions’.
  4. Select an empty template.
  5. Add the following dev tasks:
    1. HoloLens Unity Build
    2. NuGet
    3. Visual Studio Build


  6. Select the Unity project folder to configure the build task.
  7. Configure the NuGet build task to restore the packages.
  8. Configure the Visual Studio build task by selecting the solution path, platform, and configuration.
  9. Navigate to the ‘Triggers’ tab and enable the build to be triggered for every check-in.

You should now see a build being fired for every merge into the repository. The whole build process for an empty HoloLens application can take anywhere between four and six minutes on average.

To summarise, in this blog, we learned about the build pipeline for a HoloLens application. We also explored the build agent set up and build definition required to enable continuous integration for a HoloLens application.

HoloLens – Using the Windows Device Portal

Windows Device Portal is a web-based tool first introduced by Microsoft in 2016. The main purpose of the tool is to facilitate application management, performance management, and advanced remote troubleshooting for Windows devices. The Device Portal is a lightweight web server built into the Windows device which can be enabled in developer mode. On a HoloLens, developer mode can be enabled from the settings application (Settings->Update->For Developers->Developer Mode).

Connecting the Device

Once developer mode on the device is enabled, the following steps must be performed to connect to the device portal:

  1. Connect the device to the same Wi-Fi network as the computer used to access the portal. Wi-Fi settings on the HoloLens can be reached through the settings application (Settings > Network & Internet > Wi-Fi).
  2. Note the IP address assigned to the HoloLens. This can be found on the ‘Advanced Options’ screen under the Wi-Fi settings (Settings > Network & Internet > Wi-Fi > Advanced Options).
  3. On the web browser of your computer, navigate to “https://<HoloLens IP Address>”
  4. As you don’t have a valid certificate installed, you will be prompted with a warning. Proceed to the website by ignoring the warning.


  5. For the first connection, you will be prompted with a credential reset screen. Your HoloLens should now black out with only a number displayed in the centre of your viewport. Note this number and enter it in the textbox highlighted below.


  6. Enter a ‘username’ and ‘password’ for the portal to use to access the device in the future.
  7. Click the ‘Pair’ button.
  8. You will now be prompted with a form to enter the username and password. Fill in this form and click on the ‘Login’ button.
  9. A successful login will take you to the home screen of the Windows Device Portal.


Portal features

The portal has a main menu (towards the left of the screen) and a header menu (which also serves as a status bar). The following diagram elaborates on the controls available in the header menu:


The main menu is divided into three sections:

  • Views – Groups the Home page, 3D view and the Mixed Reality Capture page which can be used to record the device perspective.
  • Performance – Groups the tools which are primarily used for troubleshooting applications installed on the device or the device itself (hardware and software)
  • System – Functionalities to retrieve and control device behaviour, ranging from managing the applications installed on the device to controlling the network settings.

A detailed description of the functionalities available under each menu item can be found on the following link:


Tips and tricks

The Windows Device Portal is an elaborate tool and some of its functions are tailor-made for advanced troubleshooting. Following are a few features of interest to get started with development:

  • Device name – The device name can be accessed from the home screen under the Views category. It is helpful in the long run to meaningfully name your HoloLens for easier identification over a network. You can change this from the Device Portal.


  • Sleep settings – HoloLens has a very limited battery life, ranging from 2 to 6 hours based on the applications running on it. This makes the sleep settings important. You can control them from the home screen of the Device Portal. Keep in mind that during development, you may usually leave the device plugged in (using a USB cable to your laptop/desktop). To avoid the pain of repeatedly restarting the device, it is recommended to set the ‘sleep time when plugged in’ to a larger number.


  • Mixed reality capture – This feature is very handy for capturing the device perspective for application demonstrations. You can capture the video through the PV (Picture/Video) camera, the audio from the microphones, the audio from the app, and the holograms projected by the app into a collated media stream. The captured media content is saved on the device and can be downloaded to your machine from the portal. However, the camera channel cannot be shared between two applications. So, if you have an application using the camera of the device, your recording will not work by default.
  • Capturing ETW traces – The performance tracking feature under the Performance category is very handy for collecting ETW traces for a specific period. The portal enables you to start and stop trace monitoring. The events captured during this period are logged into an ETL file which can be downloaded for further analysis.
  • Application management – Device portal provides a set of features to manage the applications installed on the HoloLens. These features include deploying, starting and stopping applications. This feature is very useful when you want to remotely initiate an application launch for a user during a demo.


  • Virtual keyboard – HoloLens is not engineered to efficiently capture input from a keyboard. Pointing at a virtual keyboard using HoloLens and making an ‘airtap’ for every key press can be very stressful. The Device Portal gives you an alternate way to input text into your application from the ‘Virtual Input’ screen.


Windows Device Portal APIs

The Windows Device Portal is built on top of a collection of public REST APIs. The API collection consists of a set of core APIs common to all Windows devices, and specialised sets of APIs for specific devices such as HoloLens and Xbox. The HoloLens Device Portal APIs are bundled under the following controllers:

  • App deployment – Deploying applications on HoloLens
  • Dump collection – Managing crash dumps from installed applications
  • ETW – Managing ETW events
  • Holographic OS – Managing the OS level ETW events and states of the OS services
  • Holographic Perception – Manage the perception client
  • Holographic Thermal – Retrieve thermal states
  • Perception Simulation Control – Manage perception simulation
  • Perception Simulation Playback – Manage perception playback
  • Perception Simulation Recording – Manage perception recording
  • Mixed Reality Capture – Manage A/V capture
  • Networking – Retrieve information about IP configuration
  • OS Information – Retrieve information about the machine
  • Performance data – Retrieve performance data of a process
  • Power – Retrieve power state
  • Remote Control – Remotely shutdown or restart the device
  • Task Manager – Start/Stop an installed application
  • Wi-Fi Management – Manage Wi-Fi connections
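As an illustration of this REST surface, the sketch below queries the OS Information controller for the machine name. It assumes the /api/os/machinename core endpoint, basic authentication with the credentials created during pairing, and a development-only bypass of the portal's self-signed certificate; the IP address is a placeholder:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

// Illustrative sketch: query the Device Portal for the device name.
class DevicePortalClient
{
    static async Task Main()
    {
        var handler = new HttpClientHandler
        {
            // Development-only: the portal serves a self-signed certificate.
            ServerCertificateCustomValidationCallback = (msg, cert, chain, errors) => true
        };
        using (var client = new HttpClient(handler))
        {
            // Credentials created while pairing the portal.
            var credentials = Convert.ToBase64String(Encoding.ASCII.GetBytes("username:password"));
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Basic", credentials);

            // Replace with the IP address noted from the Wi-Fi settings.
            var response = await client.GetStringAsync("https://192.168.0.10/api/os/machinename");
            Console.WriteLine(response); // JSON payload containing the device name
        }
    }
}
```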

The following link captures the documentation for HoloLens Device Portal APIs.


The library behind this tool is an open source project hosted on GitHub. Following is a link to the source repository


HoloLens – Setting up the Development environment

HoloLens is undoubtedly a powerful invention in the field of Mixed Reality. Like any other revolutionary invention, the success of a technology largely depends upon its usability and ease of adoption. This is what makes the software development kits (SDKs) and application programming interfaces (APIs) associated with a technology super critical. Microsoft has been very smart on this front when it comes to HoloLens. Rather than re-inventing the wheel, they have integrated the HoloLens development model with the existing popular gaming platform ‘Unity’ for modelling a Mixed Reality application frontend.

Unity is a cross-platform gaming engine first released in 2005. Microsoft’s collaboration with Unity was first announced in 2013, when Unity started supporting Windows 8, Windows Phone 8, and Xbox One development tools. Unity’s support for Microsoft HoloLens was announced in 2015; since then, it has been the Microsoft-recommended platform for developing game worlds for Mixed Reality applications.

With Unity in place for building the game world, the next piece of the puzzle was to choose the right platform for scripting the business logic and for deploying and debugging the application. For obvious reasons, such as flexibility in code editing and troubleshooting, Microsoft Visual Studio was chosen as the platform for authoring HoloLens applications. Visual Studio Tools for Unity is integrated into the Unity platform, which enables it to generate a Visual Studio solution for the game world. Scripts can be coded in C# in Visual Studio to contain the operational logic for the Mixed Reality application.

HoloLens Application Architecture

A typical Mixed Reality application has multiple tiers: the frontend game world, which is built using Unity; the application logic used to power the game objects, which is coded in C# using Visual Studio; and backend services which encapsulate the business logic. The following diagram illustrates the architecture of such an application.


In the above diagram, the box towards the right (delegated services) is an optional tier for a Mixed Reality application. However, many real-world scenarios in the space of Augmented and Mixed Reality are powered by strong backend systems such as machine learning and data analytics services.

Development Environment

Microsoft does not provide a separate SDK for HoloLens development. All you will require is Visual Studio and the Windows 10 SDK. The following section lists the ideal hardware requirements for a development machine.

Hardware requirements

  • Operating system – 64 Bit Windows 10 (Pro, Enterprise, or Education)
  • CPU – 64 Bit, 4 cores. GPU supporting Direct X 11.0 or later
  • RAM – 8 GB or more
  • Hypervisor – Support for Hardware-assisted virtualization

Once the hardware is sorted, you will need to install the following tools for HoloLens application development:

Software requirements

  • Visual Studio 2017 or Visual Studio 2015 (Update 3) – While installing Visual Studio, make sure that you have selected the Universal Windows Platform development workload and the Game Development with Unity workload. You may have to repair/upgrade Visual Studio if it is already installed on your machine without these workloads.
  • HoloLens Emulator – The HoloLens emulator can be used to test a Mixed Reality application without deploying it on the device. The latest version of the emulator can be downloaded from the Microsoft website for free. One thing to remember before you install the emulator is to enable the hypervisor on your machine. This may require changes to your system BIOS.
  • Unity – Unity is your front-end game world modelling tool. Unity 5.6 (2017.1) or above supports HoloLens application development. Configuring the Unity platform for a HoloLens application is covered in later sections of this blog.

Configuring Unity

It is advisable to take some basic training in Unity before jumping into HoloLens application development. Unity offers tons of free training material online. The following link should lead you to a good starting point.


Once you are comfortable with the basics of Unity, follow the steps listed in the URL below to configure a Unity project for a Microsoft Mixed reality application.


Following are a few things to note while performing the configuration:

  • Build settings – While configuring the build settings, it is advisable to enable the 'Unity C# Projects' checkbox to generate a Visual Studio solution which can be used for debugging your application.


  • Player settings – Apart from the publishing capabilities (Player settings -> Publish settings -> Capabilities) listed in the link above, your application may require specific capabilities, such as the ability to access the picture library or the Bluetooth channel. Make sure that the capabilities required by your application are selected before building the solution.


  • Project settings – Although the above-mentioned link recommends the quality to be set to 'Fastest', this might not be the appropriate setting for all applications. It may be good to start with the quality flag set to 'Fastest' and then update the flag based on your application's needs.


Once these settings are configured, you are ready to create your game world for the HoloLens application. The Unity project can then be built to generate a Visual Studio solution. The build operation will pop up a file dialog box to choose the folder where you want the Visual Studio solution to be created. It is recommended to create a new folder for the Visual Studio solution.

Working with Visual Studio

Irrespective of the frontend tool used to create the game world, a HoloLens application will require Visual Studio to deploy and debug the application. The application can be deployed on a HoloLens emulator for development purposes.

The link below details the steps for deploying and debugging an Augmented/Mixed reality application on HoloLens using Visual Studio.


Following are a few tips which may help you set up and deploy the application effectively:

  • Optimizing the deployment – Deploying the application over USB is multiple times faster than deploying it over Wi-Fi. Ensure that the architecture is set to 'x86' before you trigger the deployment.


  • Application settings – You will observe that the solution has a Universal Windows Platform project set as the start-up project. To change application properties such as the application name, description, visual assets, application capabilities, etc., you can update the package manifest.

Following are the steps:

  1. Click on the project properties.


  2. Click on Package Manifest.


  3. Use the 'Application', 'Visual Assets' and 'Capabilities' tabs to manage your application properties.
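These capability selections ultimately land in the Package.appxmanifest file of the Universal Windows project. As a purely illustrative sketch (the capabilities your application needs will differ), the 'Capabilities' section might look like this:

```xml
<!-- Illustrative Package.appxmanifest fragment; the capabilities shown
     here are examples, not requirements of every HoloLens application. -->
<Capabilities>
  <Capability Name="internetClient" />
  <uap:Capability Name="picturesLibrary" />
  <DeviceCapability Name="microphone" />
  <DeviceCapability Name="bluetooth" />
</Capabilities>
```

Editing the manifest XML directly and using the 'Capabilities' tab of the manifest designer are equivalent; the designer simply writes entries like these for you.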


The HoloLens application development toolset comes with another powerful tool called 'Windows Device Portal' to manage your HoloLens and the applications installed on it. More about this in my next blog.



HoloLens – understanding the device

HoloLens is without doubt the coolest product launched by Microsoft since Kinect. Before understanding the device, let's quickly familiarize ourselves with the domain of Mixed reality and how it differs from Virtual and Augmented reality.

VR, AR and MR

Virtual reality, the first of the lot, is the concept of creating a virtual world around the user. This means all that the user sees or hears is simulated. The concept of virtual reality is not new to us; a simpler form of it was achieved back in the 18th century using panoramic paintings and stereoscopic photo viewers. Probably the first implementation of a commercial virtual reality application was the "Link Trainer", a flight simulator invented in the year 1929.

The first head-mounted display for Virtual reality was invented in the year 1960 by Morton Heilig. This invention enabled the user to be mobile, thereby introducing possibilities of better integration with the surroundings. The result was an era of sensor-packed headsets which can track movement and sense depth, heat, geographical coordinates and so on.

These head-mounted devices then became capable of projecting 3D and 2D information on see-through screens. This concept of overlaying content on the real world was termed Augmented reality. The concept of Augmented reality was first introduced in the year 1992 by Boeing, where it was implemented to assist workers in assembling wire bundles.

The use cases around Augmented reality strengthened over the following years. When virtual objects were projected onto the real world, the possibility of these objects interacting with real-world objects started gaining focus. This brought about the concept called Mixed reality. Mixed reality can be portrayed as the successor of Augmented reality, where the virtual objects projected into the real world are anchored to, and interact with, real-world objects. HoloLens is one of the most powerful devices in the market today which can cater to Augmented and Mixed reality applications.

Birth of the HoloLens – Project Baraboo

After Windows Vista, repairing the Amazon forest, and Project Natal (popularly known as Kinect), Alex Kipman (Technical Fellow, Microsoft) decided to focus his time on a machine which could not only see what a person sees but also understand the environment and project things onto the person's line of vision. While building this device, Kipman was keen on preserving the peripheral vision of the user to ensure that he or she does not feel blindfolded. He used the knowledge around depth sensing and object recognition gained from his previous invention, Kinect.

The end product was a device with an array of cameras, microphones and other smart sensors, all feeding information to a specially crafted processing module which Microsoft calls the Holographic Processing Unit (HPU). The device is capable of mapping its surroundings and understanding the depth of the world in its field of vision. It can be controlled by gestures and by voice. The user's head acts as the pointing device, with a cursor which shines in the middle of the viewport. HoloLens is also a fully functional Windows 10 computer.

The Hardware

Following are the details of the sensors built into the HoloLens:


  • Inertial measurement unit (IMU) – The HoloLens IMU consists of an accelerometer, a gyroscope, and a magnetometer to help track the movements of the user.
  • Environment-sensing cameras – The device comes with four environment-sensing cameras used to recognize the orientation of the device and for spatial mapping.
  • Depth camera – The depth camera in this device is used for finger tracking and for spatial mapping.
  • HD video camera – A generic high-definition camera which can be used by applications to capture a video stream.
  • Microphones – The device is fitted with an array of four microphones to capture voice commands and sound from 360 degrees.
  • Ambient light sensor – A sensor used to capture the light intensity of the surrounding environment.

The HoloLens also comes with the following built-in processing units, memory, and storage:


  • Central Processing Unit – Intel Atom 1.04 GHz processor with 4 logical processors.
  • Holographic Processing Unit – HoloLens Graphics processor based on Intel 8086h architecture
  • High Speed memory – 2 GB RAM and 114 MB dedicated Video Memory
  • Storage – 64 GB flash memory.

HoloLens supports Wi-Fi (802.11ac) and Bluetooth (4.1 LE) communication channels. The headset also comes with a 3.5 mm audio jack and a multi-purpose Micro USB 2.0 port. The device has a battery life of nearly 3 hours when fully charged.

More about the software and development tools in my next blog.

Enterprise Application platform with Microservices – A Service Fabric perspective

An enterprise application platform can be defined as a suite of products and services that enables the development and management of enterprise applications. This platform should be responsible for abstracting complexities related to application development, such as the diversity of hosting environments, network connectivity, deployment workflows, etc. In the traditional world, applications are monolithic by nature. A monolithic application is composed of many components grouped into multiple tiers bundled together into a single deployable unit. Each tier here can be developed using a specific technology and will have the ability to scale independently. Monolithic applications usually persist data in one common data store.

MicroServices - 1

Although a monolithic architecture logically simplifies the application, it introduces many challenges as the number of applications in your enterprise increases. Following are a few issues with a monolithic design:

  • Scalability – The unit of scale is scoped to a tier. It is not possible to scale components bundled within an application tier without scaling the whole tier. This introduces massive resource wastage, resulting in increased operational expense.
  • Reuse and maintenance – The components within an application tier cannot be consumed outside the tier unless exposed as contracts. This forces development teams to replicate code which becomes very difficult to maintain.
  • Updates – As the whole application is one unit of deployment, updating a component will require updating the whole application which may cause downtime thereby affecting the availability of the application.
  • Low deployment density – The compute, storage and network requirements of an application, as a bundled deployable unit, may not match the infrastructure capabilities of the hosting machine (VM). This may lead to wastage or shortage of resources.
  • Decentralized management – Due to the redundancy of components across applications, supporting, monitoring and troubleshooting becomes expensive overheads.
  • Data store bottlenecks – With multiple components accessing the data store, it becomes a single point of failure. This forces the data store to be highly available.
  • Cloud unsuitable – The hardware dependency of this architecture to ensure availability doesn’t work well with cloud hosting platforms where designing for failure is a core principle.

A solution to this problem is an application platform based on the Microservices architecture. Microservices architecture is a software architecture pattern where applications are composed of small, independent services which can be aggregated using communication channels to achieve an end-to-end business use case. The services are decoupled from one another in terms of the execution space in which they operate. Each of these services can be scaled, tested and maintained separately.

MicroServices - 2

Microsoft Azure Service Fabric

Service Fabric is a distributed application platform that makes it easy to package, deploy, and manage scalable and reliable Microservices. Following are a few advantages of Service Fabric which make it an ideal platform for building a Microservices-based application platform:

  • Highly scalable – Every service can be scaled without affecting other services. Service Fabric supports scaling based on VM scale sets, which means that these services can be auto-scaled based on CPU consumption, memory usage, etc.
  • Updates – Services can be updated separately and different versions of a service can be co-deployed to support backward compatibility. Service Fabric also supports automatic rollback during updates to ensure consistency of an application deployment.
  • State redundancy – For stateful Microservices, the state can be stored alongside the compute for a service. If there are multiple instances of a service running, the state is replicated for every instance. Service Fabric takes care of replicating state changes across the stores.
  • Centralized management – The service can be centrally managed, monitored and diagnosed outside application boundaries.
  • High density deployment – Service Fabric supports high-density deployment on a virtual machine cluster while ensuring even utilization of resources and distribution of workload.
  • Automatic fault tolerance – The cluster manager of Service Fabric ensures failover and resource balancing in case of a hardware failure. This ensures that your services are cloud ready.
  • Heterogeneous hosting platforms – Service Fabric supports hosting your Microservices across Azure, AWS, on-premises datacenters, or any other datacenter. The cluster manager is capable of managing service deployments with instances spanning multiple datacenters at a time. Apart from Windows, Service Fabric also supports Linux as a host operating system for your Microservices.
  • Technology agnostic – Services can be written in any programming language and deployed as executables or hosted within containers. Service Fabric also supports a native Java SDK for Java developers.

Programming models

Service Fabric supports the following four programming models for developing your service:

  • Guest Container – Services packaged into Windows or Linux containers managed by Docker.
  • Guest executables – Services can be packaged as guest executables which are arbitrary executables, written in any language.
  • Reliable Services – An application development framework with fully supported application lifecycle management. Reliable services can be used to develop stateful as well as stateless services and supports transactions.
  • Reliable Actors – A virtual-actor-based development framework with built-in state management and communication management capabilities. The actor programming model is single-threaded and is ideal for hyper-scale-out scenarios (thousands of instances).

More about Service Fabric programming models can be found here

Service Type

A service type in Service Fabric consists of three components:

MicroServices - 3

The code package defines an entry point to the service; this can be an executable or a dynamic-link library. The config package specifies the service configuration for your service, and the data package holds static resources like images. Each package can be versioned independently, and Service Fabric supports upgrading each of these packages separately. Allocation of a service in a cluster, and reallocation of a service on failure, are responsibilities of the Service Fabric cluster manager. For stateful services, Service Fabric also takes care of replicating the state across multiple instances of a service.
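To make this concrete, here is a sketch of a service manifest declaring the three packages. All names and version numbers below are placeholders, not values from any real project:

```xml
<!-- Illustrative ServiceManifest.xml; names and versions are placeholders. -->
<ServiceManifest Name="MyServicePkg" Version="1.0.0"
                 xmlns="http://schemas.microsoft.com/2011/01/fabric">
  <ServiceTypes>
    <StatelessServiceType ServiceTypeName="MyServiceType" />
  </ServiceTypes>
  <!-- Code package: the entry point of the service -->
  <CodePackage Name="Code" Version="1.0.0">
    <EntryPoint>
      <ExeHost>
        <Program>MyService.exe</Program>
      </ExeHost>
    </EntryPoint>
  </CodePackage>
  <!-- Config and data packages are versioned independently of the code -->
  <ConfigPackage Name="Config" Version="1.0.0" />
  <DataPackage Name="Data" Version="1.0.0" />
</ServiceManifest>
```

Because each package carries its own version, a configuration-only change can be rolled out by bumping only the config package version.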

Application Type

A Service Fabric application type is composed of one or more service types.

MicroServices - 4

An application type is a declarative template for creating an application. Service Fabric uses application types for packaging, deploying and versioning Microservices.
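As an illustrative sketch, an application manifest imports one or more service manifests and can declare default services to create; the names and versions below are placeholders:

```xml
<!-- Illustrative ApplicationManifest.xml; names and versions are placeholders. -->
<ApplicationManifest ApplicationTypeName="MyAppType"
                     ApplicationTypeVersion="1.0.0"
                     xmlns="http://schemas.microsoft.com/2011/01/fabric">
  <!-- Each imported service manifest contributes one service type -->
  <ServiceManifestImport>
    <ServiceManifestRef ServiceManifestName="MyServicePkg"
                        ServiceManifestVersion="1.0.0" />
  </ServiceManifestImport>
  <DefaultServices>
    <Service Name="MyService">
      <StatelessService ServiceTypeName="MyServiceType" InstanceCount="3">
        <SingletonPartition />
      </StatelessService>
    </Service>
  </DefaultServices>
</ApplicationManifest>
```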

State stores – Reliable collections

Reliable Collections are a highly available, scalable and high-performance state store which can be used to store state alongside the compute of a Microservice. The replication of state, and its persistence on secondary storage, is taken care of by the Service Fabric framework. A noticeable difference between Reliable Collections and other high-availability state stores (such as caches, tables, queues, etc.) is that the state is kept locally in the service hosting instance while also being made highly available by means of replication. Reliable Collections also support transactions and are asynchronous by nature, while offering strong consistency.
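The real Reliable Collections API lives in the Service Fabric .NET SDK (for example, IReliableDictionary). To illustrate just the core idea of local state with transactional, replicated writes, here is a hypothetical Python model; it is a conceptual sketch, not Service Fabric code:

```python
class ReliableDictionarySketch:
    """Hypothetical model of a Reliable Collection: state is kept locally
    on the primary and replicated to secondaries on commit. Not the real
    Service Fabric API, which is C# (IReliableDictionary)."""

    def __init__(self, replica_count=2):
        self.primary = {}                                   # local state
        self.secondaries = [dict() for _ in range(replica_count)]

    class _Transaction:
        def __init__(self, store):
            self.store, self.staged = store, {}

        def set(self, key, value):
            self.staged[key] = value    # writes are staged until commit

        def __enter__(self):
            return self

        def __exit__(self, exc_type, exc, tb):
            if exc_type is None:        # commit: apply locally, replicate
                self.store.primary.update(self.staged)
                for replica in self.store.secondaries:
                    replica.update(self.staged)
            return False                # on error, staged writes are dropped

    def create_transaction(self):
        return self._Transaction(self)
```

The point of the sketch is the ordering: a write becomes visible only after the transaction commits, and a commit means the change exists both locally and on the replicas.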

More about reliable collection can be found here


Health monitoring

Service Fabric offers extensive health monitoring capabilities with built-in health status for clusters and services, and custom application health reporting. Services are continuously monitored for real-time alerting on problems in production. Performance monitoring overhead is reduced through rich built-in performance metrics for actors and services. Service Fabric analytics is capable of providing repair suggestions, thereby supporting preventive healing of services. Custom ETW logs can also be captured for guest executables to ensure centralized logging for all your services. Apart from supporting Microsoft tools such as Windows Azure Diagnostics, Operational Insights, Windows Event Viewer and the Visual Studio diagnostic events viewer, Service Fabric also supports easy integration with third-party tools like Kibana and Elasticsearch as monitoring dashboards.

Conclusion and considerations

Microsoft Service Fabric is a potent platform capable of hosting enterprise-grade Microservices. Following are a few considerations to be aware of while using Service Fabric as your Microservices hosting platform.

  • Choosing a programming model is critical for optimizing the management of Microservices hosted on Service Fabric. Reliable Services and Reliable Actors are more tightly integrated with the Service Fabric cluster manager than guest containers and guest executables.
  • The support for containers on Service Fabric is in an evolving stage. While Windows containers and Hyper-V containers are on the release roadmap, Service Fabric only supports Linux containers as of today.
  • The only two native SDKs supported by Service Fabric as of today are based on .NET and Java.


Cursor-Scoped Event Aggregation Pattern – for high performance aggregation queries

Often, stateful applications executing enterprise scenarios are not just interested in the current state of the system, but also in the events which caused the transition that resulted in the particular state. This requirement has exponentially increased the popularity of the Event Sourcing Pattern.

The Event Sourcing Pattern ensures that all changes/events that cause a transition in application state are stored as a sequence of events in an append-only store. These events can then be queried for tracking history, or even be used to reconstruct system state in case of an unexpected application crash.

The Event Sourcing design pattern is best suited to decoupling commands and queries to improve the efficiency of the system. An implicit dimension recorded in this pattern, along with the events of interest, is the 'time' when each event occurred. This post talks about a design pattern which enhances the Event Sourcing Pattern, making it capable of portraying views based on a time range.


Apart from the requirement of replaying events to recreate state, some systems also have requirements around aggregating information drawn from events which occurred during a specific timeframe.

Running a query against the event source table to retrieve this information can be an expensive operation. The event source table has a tendency to grow quickly if it is designated to capture events related to multiple entity types, in which case querying this table directly will not be performant.


Event Sourcing Pattern


A solution to this problem is to introduce a Cursor-Scoped Event Aggregation Pattern on top of the Event Sourcing Pattern. The cursor here is associated with a configurable time range which defines the scope of events the query is interested in. The pattern replicates the tuples of relevance (based on the cursor) into an in-memory data dictionary, based on the filtering criteria defined in the query. The aggregation function then collates the properties of interest within this dictionary to produce results for the query.
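The pattern described above can be sketched in a few lines of Python. The event shape (timestamp, entity id, amount), the class name, and the sum() aggregation are all illustrative choices, not part of the pattern itself:

```python
from collections import deque

class CursorScopedAggregator:
    """Sketch of the Cursor-Scoped Event Aggregation Pattern. Events are
    appended to an immutable store and mirrored into an in-memory queue;
    the cursor (a configurable time window) bounds which events the
    aggregation sees."""

    def __init__(self, window_seconds):
        self.window = window_seconds
        self.store = []            # append-only event source (never pruned)
        self.cursor = deque()      # in-memory tuples within the cursor range

    def append(self, timestamp, entity_id, amount):
        event = (timestamp, entity_id, amount)
        self.store.append(event)   # event sourcing: the store only grows
        self.cursor.append(event)  # replicate the tuple into the cursor view

    def aggregate(self, now, entity_id=None):
        # Evict tuples that have slid out of the cursor's time range,
        # then collate only the in-memory view -- never the full store.
        while self.cursor and self.cursor[0][0] < now - self.window:
            self.cursor.popleft()
        return sum(amount for ts, eid, amount in self.cursor
                   if entity_id is None or eid == entity_id)
```

Note that a compensation event can be appended like any other event (for example, with a negative amount) and is absorbed naturally by the aggregation.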

Cursor-Scoped Event Aggregation Pattern

The following flowchart captures the activities which are performed on the cursor data store when a new event is appended in the store or when a new query is fired against it.



Related Cloud Design Patterns

  • Event Sourcing Pattern – The Cursor-Scoped Event Aggregation Pattern assumes that Event Sourcing is already implemented by the system.
  • Index Table Pattern – Index table pattern is an alternative to improve performance around querying event sources.


Considerations

  • Eventual consistency – The Event Sourcing Pattern is usually implemented in scenarios which can tolerate eventual consistency. As we query the event source directly rather than the entities, there is a chance of inconsistency between the state of the entities and the result of the query.
  • Immutability – The event store is immutable; the only way to reverse a transaction is to introduce a compensation event. The aggregator should be capable of handling compensation events.

Usage Scenarios

  • High performance systems – Systems which require real-time responses to aggregation queries.

Inside Azure – Deployment workflow with Fabric Controller and Red Dog Front End

Abstracting the complexities around developing, deploying and maintaining software applications has diminished the importance of understanding the underlying architecture. While this may work well for today's aggressive delivery cycles, it also impacts the ability of engineers to build an efficient, optimal solution which aligns with the internal architecture of the hosting platform. Architects and engineers should be cognizant of the architecture of the hosting environment to better design a system. The same holds true for Microsoft Azure as a hosting provider.

This post is an attempt to throw light on the workflow around deploying workload on Microsoft Azure, the systems involved in this process and their high-level architecture.


To start with, let's look at the high-level architecture of an Azure datacenter. The following diagram illustrates the physical layout of an Azure Quantum 10 V2 datacenter.


Figure 1.0

Parking the physical layers for a later post, we shall focus on the last layer, termed 'Clusters', to understand the logical architecture of Microsoft Azure datacenters.

Logical Architecture

Clusters are logical groups of server racks. A typical cluster includes 20 server racks hosting approximately 1,000 servers. Clusters are also known as 'Stamps' internally within Microsoft. Every cluster is managed by a Fabric Controller. The Fabric Controller, often considered the brain of the entire Azure ecosystem, is deployed on dedicated resources within every cluster.


Figure 1.1

The Fabric Controller is a highly available, stateful, replicated application. Five machines in every Azure datacenter cluster are dedicated to the Fabric Controller deployment. One of the five servers acts as the primary and replicates its state to the other four secondary servers at regular intervals; this ensures high availability of the application. The Fabric Controller is a self-healing system: when a server goes down, one of the active servers takes charge as the primary and spins up another instance of the Fabric Controller.
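As a toy illustration of this replication-and-failover behaviour (not actual Fabric Controller code), consider the following Python model:

```python
class FabricControllerCluster:
    """Toy model of the five-server Fabric Controller deployment: a
    primary replicates state to four secondaries; when the primary fails,
    a secondary takes over and a replacement instance is spun up."""

    def __init__(self, size=5):
        self.replicas = [dict() for _ in range(size)]  # replicas[0] = primary

    def write(self, key, value):
        # The primary accepts the write and replicates it to all secondaries.
        for state in self.replicas:
            state[key] = value

    def fail_primary(self):
        # The primary is lost; a secondary is promoted (becomes replicas[0])
        # and a replacement instance is started from the replicated state.
        self.replicas.pop(0)
        self.replicas.append(dict(self.replicas[0]))
```

Because every secondary already holds the replicated state, no data is lost during the promotion, and the cluster is back at full strength immediately.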

The Fabric Controller is responsible for managing the network, the hardware and the provisioning of servers in an Azure datacenter. If we visualize the whole datacenter as a machine, the hardware maps to the datacenter itself, the kernel (of the operating system) to the Fabric Controller, and the processes running on the machine to the services hosted in the datacenter. The following image illustrates this perspective:


Figure 1.2

The Fabric Controller controls the allocation of resources (such as compute and storage), both for your deployed custom applications and for built-in Azure services. This provides a layer of abstraction to the consumer, thereby ensuring better security, scalability and reliability.

The Fabric Controller takes inputs from the following two systems:

  • Physical datacenter deployment manager – The physical layout of the machines on the racks, their IP addresses, router addresses, certificates for accessing the routers, etc., in XML format.
  • RDFE (discussed later in the post) – The deployment package and configuration.

The following flowchart captures the boot-up tasks performed by the Fabric Controller when it instantiates a cluster:


Figure 1.3


The following diagram illustrates a high-level view of the deployment process:


Figure 2.0

Before we get into the deployment workflow, it's important to be familiar with another software component in the Azure ecosystem which is primarily responsible for triggering a deployment. RDFE (Red Dog Front End), named after the pre-release code name for Microsoft Azure (Red Dog), is a highly available application, itself deployed on Azure, which feeds deployment instructions to the Fabric Controller. It is responsible for collecting the deployment artefacts, making copies of them, choosing the target cluster for deployment, and triggering the deployment process by sending instructions to the Fabric Controller. The following flowchart details the workflow handled by RDFE:


Figure 2.1

A fact to keep in mind is that the Azure portal is stateless, and so are the management APIs. During a deployment, the uploaded artefacts are passed on to RDFE, which stores them in a staging store (the RDFE store). RDFE chooses five clusters for every deployment. If a cluster fails to complete the deployment operation, RDFE picks another cluster from the remaining four and restarts the deployment operation.
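This retry-across-clusters behaviour can be sketched as follows; the function and exception names are hypothetical:

```python
class DeploymentError(Exception):
    pass

def deploy_with_cluster_failover(candidate_clusters, deploy):
    """RDFE-style retry sketch: attempt the deployment on each candidate
    cluster in turn until one succeeds. 'deploy' is a hypothetical callable
    that raises DeploymentError when a cluster cannot complete the rollout."""
    for cluster in candidate_clusters:
        try:
            return deploy(cluster)
        except DeploymentError:
            continue  # fall back to the next candidate cluster
    raise DeploymentError("all candidate clusters failed the deployment")
```

Because the artefacts are already staged in the RDFE store, restarting the rollout on another cluster does not require the user to re-upload anything.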

Once a cluster is picked, the deployment workflow is handed over to the Fabric Controller. The Fabric Controller performs the following tasks to provision the required hardware resources as per the application/service configuration.


Figure 2.2

Things to remember

Following are a few key learnings from my understanding of the Azure architecture:

  • Keep your workload small – As discussed in the deployment section (Figures 2.1, 2.2), the uploaded workload gets copied multiple times before it is deployed on a virtual machine. While deploying an application package, for better efficiency, it is recommended that the code is separated from the other dependent artefacts and only the code is packaged as the workload. The dependent artefacts can be uploaded separately into Azure storage or any other persistent store.
  • Avoid the load balancer whenever you can – Figure 2.2 illustrates the steps to update the Azure load balancer post deployment. By default, all traffic to a node (server) is routed through the load balancer. Every node (server) is associated with a virtual IP address when it is deployed. Using this address to communicate directly with the server removes one hop from the network route. However, this is not advisable under all circumstances; it may be well suited if the server is hosting a singleton workload.
  • Deploy the largest resource first – The choice of clusters to deploy to is made by RDFE in the early stages of deployment, as illustrated in Figure 2.1. Not all clusters have all machine configurations available, so if you want co-allocation of servers on a cluster, deploy the largest resource first. This forces RDFE to pick a more capable cluster to deploy the workload. All subsequent workloads will be deployed on the same cluster if they are placed under the same affinity group.
  • Create sysprepped master image VHDs with drivers persisted – Azure optimizes the ISO download operation (on a node) by referring to a pre-fetched cache, as shown in Figure 2.2. While using custom images for virtual machines, it is advisable to persist drivers so that Azure can utilize pre-fetched driver caching to optimize your deployment.