HoloLens – Spatial sound

The world of Mixed Reality and Augmented Reality is only half real without three-dimensional sound effects to support the virtual world. Microsoft addresses this challenge by leveraging the ability of its audio engine to generate Spatial Sound. Spatial Sound, as a feature, can simulate three-dimensional sounds in a virtual world based on direction, distance, and other environmental factors. Spatial Sound is based on the concept of sound localization, a popular topic in the field of sound engineering. Sound localization can be defined as the process of determining the source of a sound, the field of the sound, the position of the listener, and the medium or environment of sound propagation. The following diagram illustrates a virtual world with Spatial Sound enabled:
The concept of sound localization and Spatial Sound is not new to this world. Spatial music has been practiced since biblical times in the form of the antiphon. The modern form of spatial music was introduced in the early 1900s in Germany. Spatial Sound empowers a Mixed Reality application developer to place sound sources in a three-dimensional space around the user in a convincing manner. Objects in the virtual world can act as sources for these sounds, creating an immersive experience for the user.

Scenarios for Spatial Sound

In the world of Mixed Reality, Spatial Sound can be used to make many user scenarios realistic. Following are a few of them:

  • Anchoring – The ability to position a virtual object in a virtual world is critical for many Mixed Reality applications. In a scenario where the user turns their face away from the object and it disappears from the viewport, the only way to give the user a sense of the object's continued existence in the scene is by propagating localized sounds from the object.
  • Guiding – Spatial Sound is found to be very useful in scenarios where users' attention needs to be drawn to a specific object or space in the three-dimensional world.
  • Simulating physics – Sound plays an important role in emulating realistic physics in the world of Mixed Reality. For example, the impact of a glass sphere dropping behind the user in a three-dimensional world is best simulated by enabling localized shattering sound effects at the point of its collision with the floor.

Implementing Spatial Sound in HoloLens applications

The audio engine in HoloLens uses a technology called HRTF (Head-Related Transfer Function) to simulate sounds coming from various directions and distances within a virtual world. Head-Related Transfer Functions define the directivity patterns of the human ears, catering for the direction, elevation, distance, and frequency of a sound. Unity offers built-in support for the Microsoft HRTF extensions.

Configuring the Audio Source

The Microsoft HRTF spatializer plug-in can be enabled from the audio manager in Unity (Edit->Project Settings->Audio).
Once the setting is enabled, you should be able to ‘Spatialize’ any audio source attached to a game object in Unity. To configure the audio source, you will need to perform the following steps:

  1. Select the audio source on the game object in the Inspector panel.
  2. Check the 'Spatialize' checkbox under the options for the audio source.
  3. For best results, change the value of 'Spatial Blend' to 1.

Your audio source is now configured to play Spatial Sound.
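The same configuration can also be applied from a script. Below is a minimal sketch (the class name is my own; spatialize and spatialBlend are standard AudioSource properties), assuming the Microsoft HRTF spatializer has been selected as described above:

[code lang=csharp]
using UnityEngine;

// Minimal sketch: configures an AudioSource for Spatial Sound from code.
// Assumes the MS HRTF spatializer plug-in is enabled under Edit->Project Settings->Audio.
[RequireComponent(typeof(AudioSource))]
public class SpatialSoundSetup : MonoBehaviour
{
    void Awake()
    {
        AudioSource source = GetComponent<AudioSource>();
        source.spatialize = true;    // equivalent to ticking the 'Spatialize' checkbox
        source.spatialBlend = 1.0f;  // fully 3D, equivalent to setting 'Spatial Blend' to 1
    }
}
[/code]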

Testing Spatial Sound

The best way to test Spatial Sound is by using the 'Audio Emitter' component which comes built in with the HoloToolkit. Perform the following steps to configure the Audio Emitter:

  1. Add the Audio Emitter component from the Inspector panel.
  2. Configure the Update Interval, Max Objects, and Max Distance on the Audio Emitter component. The 'Update Interval' determines the frequency at which the Audio Emitter scans for environmental factors that influence the sound output. 'Max Objects' specifies the maximum number of influencing objects to be considered, and 'Max Distance' specifies the maximum radius for the scan.
  3. Leave the 'Outer Sphere' parameter empty for now. This parameter is used to demonstrate audio occlusion.
  4. Associate an audio file with your Audio Source and enable looping.
  5. Run the application and move around the object to experience Spatial Sound in action.

Measuring sound output

A common scenario is needing to measure the sound output produced by an Audio Source so that it can be represented visually, for example when you need to display an audio histogram within your application.
The following code can be used to measure the RMS output from an Audio Source, which can then be used to paint the histogram spectrum.
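This is a minimal sketch (the class name and the 256-sample window size are my assumptions); attach it to the game object carrying the Audio Source:

[code lang=csharp]
using UnityEngine;

// Minimal sketch: computes the RMS level of an AudioSource's recent output.
[RequireComponent(typeof(AudioSource))]
public class AudioRmsMeter : MonoBehaviour
{
    private const int SampleCount = 256;            // analysis window size (assumption)
    private readonly float[] samples = new float[SampleCount];
    private AudioSource audioSource;

    public float Rms { get; private set; }          // latest RMS value, 0..1

    void Awake()
    {
        audioSource = GetComponent<AudioSource>();
    }

    void Update()
    {
        // Copy the most recently played output samples of channel 0.
        audioSource.GetOutputData(samples, 0);

        float sumOfSquares = 0f;
        foreach (float s in samples)
        {
            sumOfSquares += s * s;
        }
        Rms = Mathf.Sqrt(sumOfSquares / SampleCount);
    }
}
[/code]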

Unity also supports a direct function, AudioSource.GetSpectrumData, to retrieve spectrum data from the audio source.
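A hedged usage sketch of that API (the 512-sample buffer, which must be a power of two, and the choice of window function are assumptions):

[code lang=csharp]
using UnityEngine;

// Minimal sketch: samples the frequency spectrum of an AudioSource each frame.
[RequireComponent(typeof(AudioSource))]
public class AudioSpectrumSampler : MonoBehaviour
{
    private readonly float[] spectrum = new float[512];  // length must be a power of two

    void Update()
    {
        GetComponent<AudioSource>().GetSpectrumData(spectrum, 0, FFTWindow.BlackmanHarris);
        // spectrum[i] now holds the magnitude of the i-th frequency bin.
    }
}
[/code]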
To conclude, in this blog we walked through the fundamentals of Spatial Sound, learned about implementing it in a HoloLens application, and saw how to measure sound output.

HoloLens – Understanding depth (Spatial Mapping)

Building smart applications which can work in a three-dimensional space has many challenges. Among these, the one that tops the list is the challenge of understanding and mapping the surrounding 3D world. Applications usually depend on device and platform capabilities to solve this problem. Augmented Reality and Mixed Reality devices ship with built-in technologies to measure the depth of the surrounding world.

Scenarios of interest

Mapping the world around a device is critical for enabling powerful scenarios in this field. Following are a few such use cases:

  • Docking/Placement – What makes Mixed Reality different from Augmented Reality is its ability to enable interaction between virtual and physical objects. To make a Mixed Reality scenario realistic, it is critical for the application to understand the mapping of the environment in which the user is currently operating. This will help the application place or dock holograms obeying the physical bounds of the environment. For example, if the application needs to place a chair in a room, it will need to position it on the floor with enough space to land its four legs.
  • Navigation – Objects in the holographic world should be constrained by the rules of the physical world to make the application look real. For example, a holographic puppy should be able to walk on the floor or jump onto the couch, and not walk through walls and furniture. To enable this, the application should be aware of the depth of each object around the user at any given point in time.
  • Physics – In the real world, the behaviour of an object in motion is highly influenced by factors like inertia, density, elasticity, and so on. To offer a similar experience with holograms in the world of Mixed Reality, the application will need to be aware of the environment. For example, dropping a ball from the roof onto the floor will have a different effect from dropping it onto a couch.

Technologies

Depth sensing is not a new problem in the world of computing. However, the rise of Mixed Reality and Augmented Reality enabled devices has brought it into the limelight. Different vendors address this challenge under different names: Google calls it 'Depth Perception', Apple ARKit calls it a 'Depth Map', and Microsoft calls it 'Spatial Mapping'. The underlying technologies used by these devices may differ, but the objective of discovering the depth of the environment around the device remains the same. Following are a few of the underlying technologies used by these devices to measure depth:

  • Structured Infrared light projector/scanner
  • RGB Depth cameras
  • Time-of-flight camera

Time of Flight

Time-of-flight technology is of specific interest to us because of its popularity and the fact that it is used by devices like Microsoft Kinect and HoloLens to measure depth. The technology works on the reflective properties of objects. It uses the known speed of light to calculate distance by measuring the time taken for a photon to reflect back to the device sensors. The following diagram illustrates the measurement process:
Experiments indicate that time-of-flight technology works best within a range of approximately 0.5 to 5 metres. The depth camera in HoloLens works well between 0.85 and 3.01 metres.
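As a quick illustration of the arithmetic involved, here is a sketch with an assumed round-trip time:

[code lang=csharp]
// Distance from a time-of-flight measurement: the photon travels to the
// surface and back, so distance is half of (speed of light x elapsed time).
class TimeOfFlightExample
{
    const double SpeedOfLight = 299792458.0;  // metres per second

    static double DistanceMetres(double roundTripSeconds)
    {
        return SpeedOfLight * roundTripSeconds / 2.0;
    }

    static void Main()
    {
        // An assumed round-trip time of 20 nanoseconds gives roughly 3 metres.
        System.Console.WriteLine(DistanceMetres(20e-9)); // ~2.998
    }
}
[/code]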

Spatial Mapping

Spatial Mapping is a feature shipped with Microsoft HoloLens which provides a representation of real-world surfaces around the device. Application developers can use it to make their applications environment-aware. The feature abstracts the hardware technology used to measure depth and exposes easily consumable APIs to integrate with.

Spatial Mapping in HoloLens

The best way to leverage Spatial Mapping capabilities on a HoloLens is to integrate them using Unity. To enable Spatial Mapping, you will first need to enable the 'SpatialPerception' capability on your project. Unity offers two built-in components to support Spatial Mapping.

  • Spatial Mapping Renderer – This component is responsible for visually presenting the spatial map as a mesh to the application.
  • Spatial Mapping Collider – The collider is responsible for enabling interactions between the holograms and the spatial mesh.

These components can be added to an existing Unity project from the 'Add Component' menu (Add Component > AR > Spatial Mapping Collider/Spatial Mapping Renderer).
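Alternatively, the same two components can be attached from a script. A minimal sketch, assuming Unity 2017-style namespaces (older versions expose these types under UnityEngine.VR.WSA):

[code lang=csharp]
using UnityEngine;
using UnityEngine.XR.WSA;   // UnityEngine.VR.WSA on older Unity releases

// Minimal sketch: adds the built-in Spatial Mapping components at runtime.
public class SpatialMappingSetup : MonoBehaviour
{
    void Start()
    {
        // Draws the spatial mesh so the user can see what has been scanned.
        gameObject.AddComponent<SpatialMappingRenderer>();

        // Generates colliders so holograms can interact with real surfaces.
        gameObject.AddComponent<SpatialMappingCollider>();
    }
}
[/code]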
The following link describes setting up Spatial Mapping capabilities in your Unity project in detail:
https://developer.microsoft.com/en-us/windows/mixed-reality/spatial_mapping_in_unity

Tips and tricks

Following are a few tips and tricks to optimise Spatial Mapping in your applications:

  • Updating spatial maps – Running a spatial map starts with a trigger for collecting mapping data. This operation is very CPU intensive, draining battery life and starving other processes. To optimise update cycles, request collision data only when required. The APIs also let you query collision data for selective surfaces.
  • Configuring refresh intervals – It is important to choose an optimal refresh rate for your spatial maps to go light on the CPU. You can do this from Unity's Inspector window.


  • The density of spatial data – Spatial Mapping uses triangle meshes to represent the surfaces it maps. For an application which does not require high-resolution mapping, it is advisable to generate maps with a lower triangle density to optimise CPU time and the turnaround time of the mapping process.
  • Understanding the implementation – It is useful to understand the implementation of Spatial Mapping to perform low-level optimizations specific to your applications. Understanding how 'SurfaceObserver' and 'SurfaceData' are implemented gives good insight into how things work. You can refer to the Unity documentation to learn more about this; a rough sketch based on these types follows the link below.

https://docs.unity3d.com/Manual/windowsholographic-sm-lowlevelapi.html
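The following is a rough sketch built on those low-level types, assuming Unity 2017-style namespaces; the observation volume, the 2.5-second refresh interval, and the triangle density of 300 per cubic metre are illustrative values only:

[code lang=csharp]
using System;
using UnityEngine;
using UnityEngine.XR.WSA;   // UnityEngine.VR.WSA on older Unity releases

// Rough sketch: observes surfaces in a box around the origin and bakes
// a mesh (and collider) for each surface that is added or updated.
public class LowLevelMappingSketch : MonoBehaviour
{
    private SurfaceObserver observer;

    void Start()
    {
        observer = new SurfaceObserver();
        observer.SetVolumeAsAxisAlignedBox(Vector3.zero, new Vector3(10f, 10f, 10f));

        // Scan for surface changes every 2.5 seconds (refresh interval assumption).
        InvokeRepeating("ScanForSurfaceChanges", 0f, 2.5f);
    }

    void ScanForSurfaceChanges()
    {
        observer.Update(OnSurfaceChanged);
    }

    void OnSurfaceChanged(SurfaceId id, SurfaceChange change, Bounds bounds, DateTime updateTime)
    {
        if (change == SurfaceChange.Added || change == SurfaceChange.Updated)
        {
            // Lower triangle density means cheaper CPU time and faster turnaround.
            var surface = new GameObject("Surface-" + id.handle);
            var data = new SurfaceData(
                id,
                surface.AddComponent<MeshFilter>(),
                surface.AddComponent<WorldAnchor>(),
                surface.AddComponent<MeshCollider>(),
                300f,    // triangles per cubic metre (assumption)
                true);   // bake a collider as well
            observer.RequestMeshAsync(data, OnDataReady);
        }
    }

    void OnDataReady(SurfaceData data, bool outputWritten, float elapsedBakeTimeSeconds)
    {
        // The MeshFilter and MeshCollider created above are now populated.
    }

    void OnDestroy()
    {
        observer.Dispose();
    }
}
[/code]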

  • Mixed Reality Toolkit example – A good place to start with Spatial Mapping is the example shared within the Mixed Reality Toolkit, accessible through the following link:

https://github.com/Microsoft/MixedRealityToolkit-Unity/tree/master/Assets/HoloToolkit-Examples/SpatialMapping

To conclude, in this blog we briefly discussed the depth mapping problem, solutions to it, and the technology behind them. We also dived deeper into the Spatial Mapping feature of HoloLens and how it can be used within a Unity project.

HoloLens – Mixed Reality Toolkit

Game programming is an entirely different paradigm for an enterprise application developer in terms of the tools, processes, and patterns used. But like any other development engagement, it always helps to have a set of pre-baked tools handy to kick-start the development phase and reduce the learning curve. In the world of HoloLens application development, Microsoft's Mixed Reality Toolkit is your best companion.
The Mixed Reality Toolkit is an open-source project driven by Microsoft. It is a collection of scripts and components which help developers with the initial hurdles they may face when developing applications for Microsoft HoloLens and Windows Mixed Reality headsets. The source code for this project is shared on GitHub.

Importing the source code for the toolkit

The Git repository for the Mixed Reality Toolkit can be accessed using the following link:
https://github.com/Microsoft/MixedRealityToolkit
Following are the steps to clone the code for the toolkit into your VSTS repository:

  1. Create a new project on the Visual Studio portal (.visualstudio.com).


  2. Give the project a name and choose 'Git' as the version control.


  3. On the landing page, pull down the 'Import a repository' section and click on the import button.


  4. Type in the URL for the Git repository (https://github.com/Microsoft/MixedRealityToolkit) and click import.


  5. Once imported, the landing page for your repository will look like the image shown below.


Exploring the source code

The best way to explore the source code for the Mixed Reality Toolkit is by trying out the example scenes. The example scenes are located under the path “Mixed Reality Toolkit/Assets/HoloToolkit-Examples” in the repository.
Following are the steps to open an example scene in Unity:

  1. Start Unity and open the root folder of the repository which you have cloned onto your disk.


  2. In the Project explorer, navigate to the HoloToolkit-Examples folder. You should see folders with all the example scenes under this path. For this exercise, let's try out a slider example. Navigate to the "Assets/HoloToolkit-Examples/UX/Scenes" folder and double-click on the 'SliderSamples' scene.


  3. You should now see the slider in the game window. Click on the play button to test the scene.


Installing the toolkit on a new Unity project

The Mixed Reality Toolkit releases a Unity package which can be incorporated into your HoloLens Unity project. This package helps you configure your project and scenes with default settings for HoloLens. It also populates your scene with basic objects like the camera, input manager, cursor, etc. Following are the steps to download and install the Unity package:

  1. Download the Unity package from the following URL:

https://github.com/Microsoft/MixedRealityToolkit-Unity/releases


  2. Once downloaded, import the package into Unity.


  3. Once imported correctly, you should see the 'Mixed Reality Toolkit' menu on your toolbar.


  4. Use the configure menu to set up the project and scene settings.


The Mixed Reality Toolkit comprises a very useful collection of tools which lets you kick-start a HoloLens project with a template containing basic objects and settings. I will be covering continuous integration and continuous deployment strategies for HoloLens applications in my next blog post.

HoloLens – Continuous Integration

Continuous integration is best defined as the process of constantly merging development artifacts produced or modified by different members of a team into a central shared repository. This task of collating changes becomes more and more complex as the size of the team grows. Ensuring the stability of the central repository becomes a serious challenge in such cases.
A solution to this problem is to validate every merge with automated builds and automated testing. Modern code management platforms like Visual Studio Team Services (VSTS) offer built-in tools to perform these operations. VSTS is a hosted service offering from Microsoft which bundles a collection of DevOps services for application developers.
A Continuous Integration workflow is particularly important for HoloLens applications considering the agility of the development process. In a typical engagement, designers and developers will work on parallel streams, sharing scripts and artifacts which constitute a scene. Having an automated process in place to validate every check-in to the central repository can add tremendous value to the quality of the application. In this blog, we will walk through the process of setting up a Continuous Integration workflow for a HoloLens application using VSTS build and release tools.

Build pipeline

A HoloLens application will have multiple application layers. Development starts with creating the game world using Unity and then proceeds to wiring up back-end scripts and services using Visual Studio. To build a HoloLens application package, we need to first build the front-end game world with the Unity compiler and then the back-end with the Visual Studio compiler. The following diagram illustrates the build pipeline:
In the following sections, we will walk through the process of setting up the infrastructure for building a HoloLens application package using VSTS.

Build agent setup

VSTS uses build agents to perform the task of compiling the application in the central repository. These build agents can either be Microsoft-hosted agents, which are available as a service in VSTS, or custom-deployed services managed by you. HoloLens applications require custom build agents, as they run custom build tasks for compiling the Unity application. Following are the steps for creating a build agent to run the tasks required for building a HoloLens application:

1. Provision the hosting environment for the build agent

The first step in this process is to provision a machine to run the build agent as a service. I'd recommend using an Azure Virtual Machine hosted within an Azure DevTest Lab for this purpose. The DevTest Lab comes with built-in features for managing start-up and shut-down schedules for the virtual machines, which are very effective in controlling consumption cost. Following are the steps for setting up the host environment for the build agent in Azure:

  1. Log in to the Azure portal and create a new instance of DevTest Lab.
  2. Add a Virtual Machine to the Lab.
  3. Pick an image with Visual Studio 2017 pre-installed.
  4. Choose hardware with a high number of CPUs and IOPS, as the agents are heavy on disk and compute. I'd advise a D8S_V3 machine for a team of approximately 15 developers.
  5. Select the PowerShell artifacts to be added to the Virtual Machine.
  6. Provision the Virtual Machine and remote desktop into it.

2. Create an authorization token

The build agent will require an authorized channel to communicate with the build server, which in our case is the VSTS service. Following are the steps to generate a token:

  1. On the VSTS portal, navigate to the security screen using the profile menu.
  2. Create a personal access token for the agent to authorize to the server. Ensure that you have selected 'Agent pools (read, manage)' in the authorized scope.
  3. Note the generated token. This will be used to configure the agent on the build host virtual machine.

3. Installing and configuring the agent

Once the token is generated, we are ready to configure the VSTS agent. Following are the steps:

  1. Remote desktop into the build host virtual machine on Azure.
  2. Open the VSTS portal in a browser and navigate to the 'Agent Queue' screen. (https://.visualstudio.com/Utopia/_admin/_AgentQueue)
  3. Click on the 'Download Agent' button.
  4. Click on the 'Download' button to download the installer onto the disk of your VM. Choose the default download location.
  5. Follow the steps listed on the previous screen to configure the agent using PowerShell commands. Detailed instructions can be found at the link below:

https://docs.microsoft.com/en-au/vsts/build-release/actions/agents/v2-windows

  6. Once configured, the agent should appear in the agent list within the selected pool.

This completes the build environment setup. We can now configure a build definition for our HoloLens application.

Build definition

Creating the build definition involves queuing up a sequence of activities to be performed during a build. In our case, this includes the following steps.

  • Performing Unity build
  • Restoring NuGet packages
  • Performing Visual Studio build

Following are the steps to be performed:

  1. Log in to the VSTS portal and navigate to the marketplace.
  2. Search for the 'HoloLens Unity Build' component and install it. Make sure that you select the right VSTS project while installing the component.
  3. Navigate to Builds on the VSTS portal and click on the 'New' button under 'Build Definitions'.
  4. Select an empty template.
  5. Add the following dev tasks:
    1. HoloLens Unity Build
    2. NuGet
    3. Visual Studio Build


  6. Select the Unity project folder to configure the build task.
  7. Configure the NuGet build task to restore the packages.
  8. Configure the Visual Studio build task by selecting the solution path, platform, and configuration.
  9. Navigate to the 'Triggers' tab and enable the build to be triggered for every check-in.

You should now see a build being fired for every merge into the repository. The whole build process for an empty HoloLens application can take anywhere between four and six minutes on average.
To summarise, in this blog we learned about the build pipeline for a HoloLens application. We also explored the build agent setup and the build definition required to enable continuous integration for a HoloLens application.

HoloLens – Using the Windows Device Portal

Windows Device Portal is a web-based tool first introduced by Microsoft in 2016. Its main purpose is to facilitate application management, performance monitoring, and advanced remote troubleshooting for Windows devices. The Device Portal is a lightweight web server built into the Windows device which can be enabled in developer mode. On a HoloLens, developer mode can be enabled from the settings application (Settings->Update->For Developers->Developer Mode).

Connecting the Device

Once developer mode is enabled on the device, perform the following steps to connect to the Device Portal:

  1. Connect the device to the same Wi-Fi network as the computer used to access the portal. Wi-Fi settings on the HoloLens can be reached through the settings application (Settings > Network & Internet > Wi-Fi).
  2. Note the IP address assigned to the HoloLens. This can be found on the 'Advanced Options' screen under the Wi-Fi settings (Settings > Network & Internet > Wi-Fi > Advanced Options).
  3. On the web browser of your computer, navigate to “https://<HoloLens IP Address>”
  4. As you don't have a valid certificate installed, you will be prompted with a warning. Ignore the warning and proceed to the website.


  5. On first connection, you will be prompted with a credential reset screen. Your HoloLens should now black out, with only a number displayed in the centre of your viewport. Note this number and enter it in the textbox highlighted below.


  6. You should also enter a 'username' and 'password' for the portal to access the device in future.
  7. Click the 'Pair' button.
  8. You will now be prompted with a form to enter the username and password. Fill in the form and click on the 'Login' button.
  9. A successful login will take you to the home screen of the Windows Device Portal.


Portal features

The portal has a main menu (towards the left of the screen) and a header menu (which also serves as a status bar). The following diagram elaborates on the controls available in the header menu.
The main menu is divided into three sections:

  • Views – Groups the Home page, 3D view and the Mixed Reality Capture page which can be used to record the device perspective.
  • Performance – Groups the tools which are primarily used for troubleshooting applications installed on the device, or the device itself (hardware and software).
  • System – Functionalities to retrieve and control device behaviour, ranging from managing the applications installed on the device to controlling the network settings.

A detailed description of the functionalities available under each menu item can be found on the following link:
https://developer.microsoft.com/en-us/windows/mixed-reality/using_the_windows_device_portal

Tips and tricks

The Windows Device Portal is an elaborate tool, and some of its functions are tailor-made for advanced troubleshooting. Following are a few features of interest to get started with development:

  • Device name – The device name can be accessed from the home screen under the Views category. It is helpful in the long run to meaningfully name your HoloLens for easier identification over a network. You can change the name from the Device Portal.


  • Sleep settings – HoloLens has a very limited battery life, ranging from 2 to 6 hours depending on the applications running on it. This makes the sleep settings important. You can control the sleep settings from the home screen of the Device Portal. Keep in mind that during development you may usually leave the device plugged in (using a USB cable to your laptop/desktop). To avoid the pain of waking the device repeatedly, it is recommended to set the 'sleep when plugged in' interval to a larger number.


  • Mixed Reality Capture – This feature is very handy for capturing the device perspective for application demonstrations. You can capture the video through the PV (Picture/Video) camera, the audio from the microphones, the audio from the app, and the holograms projected by the app in a collated media stream. The captured media content is saved on the device and can be downloaded to your machine from the portal. However, the camera channel cannot be shared between two applications, so if you have an application using the device's camera, your recording will not work by default.
  • Capturing ETW traces – The performance tracking feature under the Performance category is very handy for collecting ETW traces for a specific period. The portal enables you to start and stop trace monitoring. The ETW events captured during this period are logged into a file which can be downloaded for further analysis.
  • Application management – The Device Portal provides a set of features to manage the applications installed on the HoloLens. These features include deploying, starting, and stopping applications. This feature is very useful when you want to remotely initiate an application launch for a user during a demo.


  • Virtual keyboard – HoloLens is not engineered to efficiently capture input from a keyboard. Pointing at a virtual keyboard using HoloLens and making an 'air tap' for every key press can be very stressful. The Device Portal gives you an alternate way to input text into your application from the 'Virtual Input' screen.


Windows Device Portal APIs

Windows Device Portal is built on top of a collection of public REST APIs. The API collection consists of a set of core APIs common to all Windows devices, and specialised sets of APIs for specific devices such as HoloLens and Xbox. The HoloLens Device Portal APIs are bundled under the following controllers:

  • App deployment – Deploying applications on HoloLens
  • Dump collection – Managing crash dumps from installed applications
  • ETW – Managing ETW events
  • Holographic OS – Managing the OS level ETW events and states of the OS services
  • Holographic Perception – Manage the perception client
  • Holographic Thermal – Retrieve thermal states
  • Perception Simulation Control – Manage perception simulation
  • Perception Simulation Playback – Manage perception playback
  • Perception Simulation Recording – Manage perception recording
  • Mixed Reality Capture – Manage A/V capture
  • Networking – Retrieve information about IP configuration
  • OS Information – Retrieve information about the machine
  • Performance data – Retrieve performance data of a process
  • Power – Retrieve power state
  • Remote Control – Remotely shutdown or restart the device
  • Task Manager – Start/Stop an installed application
  • Wi-Fi Management – Manage Wi-Fi connections

The following link captures the documentation for the HoloLens Device Portal APIs:
https://developer.microsoft.com/en-us/windows/mixed-reality/device_portal_api_reference
The library behind this tool is an open-source project hosted on GitHub. Following is a link to the source repository:
https://github.com/Microsoft/WindowsDevicePortalWrapper
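As an illustration of consuming these APIs, the console sketch below calls the core OS information endpoint (/api/os/info) with basic authentication; the device IP address, the credentials, and the blanket certificate bypass are development-only assumptions:

[code lang=csharp]
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

// Rough sketch: queries the Device Portal's OS information endpoint.
class DevicePortalSample
{
    static async Task Main()
    {
        // The portal uses a self-signed certificate; trust it blindly here
        // for development only (never do this in production code).
        var handler = new HttpClientHandler
        {
            ServerCertificateCustomValidationCallback = (msg, cert, chain, errors) => true
        };

        using (var client = new HttpClient(handler))
        {
            // Basic authentication with the credentials registered during pairing.
            var credentials = Convert.ToBase64String(Encoding.ASCII.GetBytes("username:password"));
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Basic", credentials);

            // 192.168.1.6 is the device IP used elsewhere in this series (assumption).
            string json = await client.GetStringAsync("https://192.168.1.6/api/os/info");
            Console.WriteLine(json);
        }
    }
}
[/code]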

HoloLens – Setting up the Development environment

HoloLens is undoubtedly a powerful invention in the field of Mixed Reality. Like any other revolutionary invention, the success of a technology largely depends on its usability and its ease of adoption. This is what makes the software development kits (SDKs) and application programming interfaces (APIs) associated with a technology super critical. Microsoft has been very smart on this front when it comes to HoloLens. Rather than reinventing the wheel, they have integrated the HoloLens development model with the existing popular gaming platform Unity for modelling a Mixed Reality application front end.
Unity is a cross-platform gaming engine first released in 2005. Microsoft's collaboration with Unity was first announced in 2013, when Unity started supporting the Windows 8, Windows Phone 8, and Xbox One development tools. Unity's support for Microsoft HoloLens was announced in 2015; since then, it has been the Microsoft-recommended platform for developing game worlds for Mixed Reality applications.
With Unity in place for building the game world, the next piece of the puzzle was to choose the right platform for scripting the business logic and for deploying and debugging the application. For obvious reasons, such as flexibility in code editing and troubleshooting, Microsoft Visual Studio was chosen as the platform for authoring HoloLens applications. Visual Studio Tools for Unity is integrated into the Unity platform, which enables it to generate a Visual Studio solution for the game world. Scripts can be coded in C# in Visual Studio to contain the operational logic for the Mixed Reality application.

HoloLens Application Architecture

A typical Mixed Reality application has multiple tiers: the front-end game world, which is built using Unity; the application logic used to power the game objects, which is coded in C# in Visual Studio; and back-end services which encapsulate the business logic. The following diagram illustrates the architecture of such an application.
In the above diagram, the box towards the right (Delegated services) is an optional tier for a Mixed reality application. However, many real-world scenarios in the space of Augmented and Mixed reality are powered by strong backend systems such as machine learning and data analytics services.

Development Environment

Microsoft does not ship a separate SDK for HoloLens development. All you will require is Visual Studio and the Windows 10 SDK. The following section lists the ideal hardware requirements for a development machine.

Hardware requirements

  • Operating system – 64-bit Windows 10 (Pro, Enterprise, or Education)
  • CPU – 64-bit, 4 cores; GPU supporting DirectX 11.0 or later
  • RAM – 8 GB or more
  • Hypervisor – Support for hardware-assisted virtualization

Once the hardware is sorted, you will need to install the following tools for HoloLens application development:

Software requirements

  • Visual Studio 2017 or Visual Studio 2015 (Update 3) – While installing Visual Studio, make sure that you have selected the Universal Windows Platform development workload and the Game Development with Unity workload. You may have to repair/upgrade Visual Studio if it is already installed on your machine without these workloads.
  • HoloLens Emulator – The HoloLens emulator can be used to test a Mixed Reality application without deploying it to the device. The latest version of the emulator can be downloaded from the Microsoft website for free. One thing to remember before you install the emulator is to enable the hypervisor on your machine. This may require changes in your system BIOS.
  • Unity – Unity is your front-end game world modelling tool. Unity 5.6 (2017.1) or above supports HoloLens application development. Configuring the Unity platform for a HoloLens application is covered in later sections of this blog.

Configuring Unity

It is advisable to take basic training in Unity before jumping into HoloLens application development. Unity offers tons of free training materials online. The following link should lead you to a good starting point:
https://unity3d.com/learn/tutorials
Once you are comfortable with the basics of Unity, follow the steps listed at the URL below to configure a Unity project for a Microsoft Mixed Reality application:
https://developer.microsoft.com/en-us/windows/mixed-reality/unity_development_overview
Following are a few things to note while performing the configuration:

  • Build settings – While configuring the build settings, it is advisable to enable the ‘Unity C# project’ checkbox to generate a Visual Studio solution which can be used for debugging your application.


  • Player settings – Apart from the publishing capabilities (Player Settings->Publish Settings->Capabilities) listed in the link above, your application may require specific capabilities, such as the ability to access the picture library or the Bluetooth channel. Make sure that the capabilities required by your application are selected before building the solution.


  • Project settings – Although the above-mentioned link recommends setting the quality to 'Fastest', this might not be the appropriate setting for all applications. It may be good to start with the quality flag set to 'Fastest' and then update the flag if required, based on your application's needs.

Once these settings are configured, you are ready to create your game world for the HoloLens application. The Unity project can then be built to generate a Visual Studio solution. The build operation will pop up a file dialog box to choose a folder where you want the Visual Studio solution to be created. It is recommended to create a new folder for the Visual Studio solution.

Working with Visual Studio

Irrespective of the front-end tool used to create the game world, a HoloLens application will require Visual Studio to deploy and debug the application. The application can be deployed to a HoloLens emulator for development purposes.
The link below details the steps for deploying and debugging an Augmented/Mixed Reality application on HoloLens using Visual Studio:
https://developer.microsoft.com/en-us/windows/mixed-reality/using_visual_studio
Following are a few tips which may help you set up and deploy the application effectively:

  • Optimising the deployment – Deploying the application over USB is many times faster than deploying it over Wi-Fi. Ensure that the architecture is set to 'x86' before you start the deployment.


  • Application settings – You will observe that the solution has a Universal Windows Project set as your start-up project. To change application properties such as the application name, description, visual assets, and application capabilities, you can update the package manifest.

Following are the steps:

  1. Click on project properties


2. Click on Package Manifest


3. Use the ‘Application’, ‘Visual Assets’ and ‘Capabilities’ tabs to manage your application properties.

The HoloLens application development toolset comes with another powerful tool called the 'Windows Device Portal' to manage your HoloLens and the applications installed on it. More about this in my next blog.

HoloLens – Understanding the device

HoloLens is without doubt the next coolest product launched by Microsoft after Kinect. Before understanding the device, let's quickly familiarize ourselves with the domain of Mixed Reality and how it is different from Virtual and Augmented Reality.

VR, AR and MR

Virtual Reality, the first of the lot, is the concept of creating a virtual world around the user. This means that everything the user sees or hears is simulated. The concept of virtual reality is not new to us. A simpler form of virtual reality was achieved back in the 18th century using panoramic paintings and stereoscopic photo viewers. Probably the first implementation of a commercial virtual reality application was the 'Link Trainer', a flight simulator invented in 1929.
The first head-mounted headset for Virtual Reality was invented in 1960 by Morton Heilig. This invention enabled the user to be mobile, thereby introducing possibilities for better integration with the surroundings. The result was an era of sensor-packed headsets which can track movement and sense depth, heat, geographical coordinates, and so on.
These head-mounted devices then became capable of projecting 3D and 2D information onto see-through screens. This concept of overlaying content on the real world was termed Augmented Reality. The concept of Augmented Reality was first introduced in 1992 by Boeing, where it was implemented to assist workers in assembling wire bundles.
The use cases around Augmented Reality strengthened over the following years. When virtual objects were projected onto the real world, the possibility of these objects interacting with real-world objects started gaining focus. This brought in the concept called Mixed Reality. Mixed Reality can be portrayed as the successor of Augmented Reality, where the virtual objects projected into the real world are anchored to, and interact with, real-world objects. HoloLens is one of the most powerful devices in the market today which can cater to Augmented and Mixed Reality applications.

Birth of the HoloLens – Project Baraboo

After Windows Vista, repairing the Amazon forest, and Project Natal (popularly known as Kinect), Alex Kipman (Technical Fellow, Microsoft) decided to focus his time on a machine which could not only see what a person sees but also understand the environment and project things onto the person's line of vision. While building this device, Kipman was keen on preserving the peripheral vision of the user to ensure that he or she does not feel blindfolded. He used the knowledge around depth sensing and recognizing objects from his previous invention, Kinect.
The end product was a device with an array of cameras, microphones, and other smart sensors, all feeding information to a specially crafted processing module which Microsoft calls the Holographic Processing Unit (HPU). The device is capable of mapping its surroundings and understanding the depth of the world in its field of vision. It can be controlled by gestures and by voice. The user's head acts as the pointing device, with a cursor which shines in the middle of the viewport. The HoloLens is also a fully functional Windows 10 computer.

The Hardware

Following are the details of the sensors built into the HoloLens:

  • Inertial measurement unit (IMU) – The HoloLens IMU consists of an accelerometer, gyroscope, and a magnetometer to help track the movements of the user.
  • Environment sensing cameras – The device comes with four environment-sensing cameras used to recognize the orientation of the device and for spatial mapping.
  • Depth camera – The depth camera in this device is used for finger tracking and for spatial mapping.
  • HD video camera – A generic high-definition camera which can be used by applications to capture video streams.
  • Microphones – The device is fitted with an array of four microphones to capture voice commands and sound from 360 degrees.
  • Ambient light sensor – A sensor used to capture the light intensity of the surrounding environment.

The HoloLens also comes with the following two built-in processing units, along with memory and storage:

  • Central Processing Unit – Intel Atom 1.04 GHz processor with 4 logical processors.
  • Holographic Processing Unit – HoloLens Graphics processor based on Intel 8086h architecture
  • High-speed memory – 2 GB RAM and 114 MB dedicated video memory.
  • Storage – 64 GB flash memory.

HoloLens supports Wi-Fi (802.11ac) and Bluetooth (4.1 LE) communication channels. The headset also comes with a 3.5 mm audio jack and a Micro USB 2.0 multi-purpose port. The device has a battery life of nearly 3 hours when fully charged.
More about the software and development tools in my next blog.

Streaming HoloLens Video to Your Web Browser

I am lucky enough to have a HoloLens to play around with at my office. One point of interest around HoloLens is how to share what the user wearing the device is actually viewing. In this post, I am going to briefly describe how to set up video streaming from HoloLens to your web browser.

Activating Developer Mode and Device Portal

First of all, Developer Mode in HoloLens must be activated. This is the actual screen the HoloLens user can see. Go to Settings.

Select the Updates & Security menu.

There is a menu, For Developers, at the bottom-left corner. Air tap the menu.

Now, if the Developer Mode is not activated, turn it on.

Scroll down a bit and you will see the menu, Device Portal. This also needs to be activated. Once we complete this step, we will be able to access HoloLens from our web browser.

Confirming IP Address

Now we have HoloLens set up for web browser access. We need to identify which internal IP address HoloLens is currently using. Go to the Settings screen again and select the Network & Internet menu.

Air tap Advanced Options on the Wi-Fi screen.

Now, we can identify the IP address used by HoloLens. In this screen, the internal IP address is 192.168.1.6.

HoloLens Web Portal

We know the IP address for HoloLens: 192.168.1.6. Open a web browser and access the web portal. We will see a certificate error in the browser when accessing https://192.168.1.6, but don't panic. Just proceed for now.

Now, we see the web portal screen. We need to register a user for streaming. Go to the Security menu at the top of the browser. This will guide us to the user registration page.

If you click the Request PIN button, a PIN will pop up in the HoloLens user's view. Get the PIN and enter it in the web browser, along with a username and password.

Note: The username and password don't have to be the same as our Microsoft account or Office 365 account.

User registration completed.

This official HoloLens document will give us more details about setting up the Device Portal and user registration.

Streaming HoloLens Video to Web Browser

Let’s try streaming. HoloLens bundles Mixed Reality Capture (MRC) tools that enable streaming through web browsers.

Click the Mixed Reality Capture menu on the left-hand side, followed by clicking the Live Preview button in the middle of the screen. We'll be able to see a small live preview pane just underneath the button. Open a developer tool in your web browser and grab the live streaming URL. The URL might look like:

[code lang=text]
api/holographic/stream/live_high.mp4?holo=true&pv=true&mic=true&loopback=true
[/code]

By combining the username, password, IP address, and streaming URL, we can directly access the live stream without relying on the web portal. We've got the IP address 192.168.1.6 in this post, so the direct streaming URL might look like:

[code lang=text]
https://[USERNAME]:[PASSWORD]@192.168.1.6/api/holographic/stream/live_high.mp4?holo=true&pv=true&mic=true&loopback=true
[/code]

Enter the URL in the location bar of our web browser and we are now able to watch the live stream.

Broadcasting HoloLens over the Internet

So far, we have had a brief overview of how to stream HoloLens live video. It only works within a private network. However, if we share our browser through Skype, Hangouts, or OBS, we can easily broadcast it.

Keynote & Demo Video

Here is a keynote and demo video for HoloLens that I used in a meetup.
