Xamarin Forms – App Center: Show BuildID on iOS App

Introduction

App Center lets us connect supported Git repositories (i.e. GitHub, Bitbucket or VSTS) and builds the app for us either on every code commit or manually. It also sends notifications to registered users, can run tests on real devices, and runs any unit tests in the project.

iOS and Android

App Center can be set up to build each platform project (iOS and Android) individually.

Scenario:

We needed to show the App Center BuildID in the iOS app during the development phase, as shown below, mainly to make it easier to track bug reports against specific builds.

Screen Shot 2018-01-03 at 19.57.00

Show App Center Build ID in iOS app

Info.plist

Below is the build number in Info.plist that is read when using the traditional approach described below.

Screen Shot 2018-01-03 at 20.18.13

Traditional Approach:

This can be achieved by implementing it natively on each platform and consuming it via an interface in the PCL or .NET Standard project, as shown below.

Screen Shot 2018-01-03 at 20.21.56

Native Implementation:

Screen Shot 2018-01-03 at 20.25.39

The above code returns the app version and build number that are set in Info.plist.
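A minimal sketch of that native implementation and the shared interface might look like the following; the IAppVersionService and MyApp.iOS names are illustrative, and NSBundle.MainBundle.InfoDictionary is the standard Xamarin.iOS way to read Info.plist values.

// Shared (PCL / .NET Standard) project: the interface consumed via the Xamarin.Forms DependencyService
public interface IAppVersionService
{
    string GetVersion();      // CFBundleShortVersionString
    string GetBuildNumber();  // CFBundleVersion
}

// iOS project: native implementation registered with the DependencyService
using Foundation;
using Xamarin.Forms;

[assembly: Dependency(typeof(MyApp.iOS.AppVersionService))]
namespace MyApp.iOS
{
    public class AppVersionService : IAppVersionService
    {
        public string GetVersion() =>
            NSBundle.MainBundle.InfoDictionary["CFBundleShortVersionString"].ToString();

        public string GetBuildNumber() =>
            NSBundle.MainBundle.InfoDictionary["CFBundleVersion"].ToString();
    }
}

The shared code can then call DependencyService.Get<IAppVersionService>().GetBuildNumber() to display the value on screen.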

Get BuildID from App Center

This can be achieved using post-clone and pre-build build scripts, which can be set in the App Center build configuration as shown below.

Screen Shot 2018-01-03 at 20.34.21

Screen Shot 2018-01-03 at 20.36.42

Post Clone

Below is the post-clone script, which installs App Center's CLI; it must be named appcenter-post-clone.sh.

#!/usr/bin/env bash
npm install -g appcenter-cli

Pre-Build

Below is the pre-build command that sets Info.plist's build number (CFBundleVersion) to App Center's BuildID:

plutil -replace CFBundleVersion -string $APPCENTER_BUILD_ID $APPCENTER_SOURCE_DIRECTORY/path to ios project/Info.plist
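Wrapped into the required script file, this looks roughly like the following. App Center pre-build scripts must be named appcenter-pre-build.sh and sit alongside the project file selected in the build configuration; the MyApp.iOS path below is a placeholder for your own iOS project folder.

#!/usr/bin/env bash
# appcenter-pre-build.sh: stamp the App Center build ID into the iOS bundle version.
# Replace MyApp.iOS with the path to your own iOS project.
plutil -replace CFBundleVersion -string $APPCENTER_BUILD_ID "$APPCENTER_SOURCE_DIRECTORY/MyApp.iOS/Info.plist"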

APPCENTER_BUILD_ID is the App Center environment variable that contains the BuildID; this is the value written to CFBundleVersion.

The most recent build number, as shown below, replaces the existing value of CFBundleVersion.

Screen Shot 2018-01-03 at 21.20.35

After downloading the app from App Center, iOS should show the latest build number, as shown in the screenshot below.

build id

I hope you like the post.

 

Replacing the service desk with bots using Amazon Lex and Amazon Connect (Part 2)

Welcome back! Hopefully you had the chance to follow along in part 1 where we started creating our Lex chatbot. In part 2, we attempt to make the conversation more human-like and begin integrating data validation on our slots to ensure we’re getting the correct input.

Creating the Lambda initialisation and validation function

As data validation requires compute, we’ll need to start by creating an AWS Lambda function. Head over to the AWS console, then navigate to the AWS Lambda page. Once you’re there, select Create Function and choose to Author from Scratch then specify the following:

Name: ResetPWCheck

Runtime: Python 2.7 (it’s really a matter of preference)

Role: I use an existing Out of the Box role, “Lambda_basic_execution”, as I only need access to CloudWatch logs for debugging.

Once you’ve populated all the fields, go ahead and select Create Function. The script we’ll be using is provided (further down) in this blog, however before we go through the script in detail, there are two items worth mentioning.

Input Events and Response Formats

It’s well worth familiarising yourself with the Lambda Function Input Event and Response Format page in the Lex Developer Guide. Every time input is provided to Lex, it invokes the Lambda initialisation and validation function. For example, when I tell my chatbot “I need to reset my password”, the Lambda function is invoked and the following event is passed:
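That event looks roughly like the following; the intent and slot names match those created in Part 1, while the remaining values are illustrative.

{
  "messageVersion": "1.0",
  "invocationSource": "DialogCodeHook",
  "userId": "...",
  "sessionAttributes": {},
  "bot": { "name": "UserAdministration", "alias": "$LATEST", "version": "$LATEST" },
  "outputDialogMode": "Text",
  "currentIntent": {
    "name": "ResetPW",
    "slots": { "UserID": null, "DOB": null, "MonthStarted": null },
    "confirmationStatus": "None"
  },
  "inputTranscript": "I need to reset my password"
}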

Amazon Lex expects a response from the Lambda function in JSON format that provides it with the next dialog action.

Persisting Variables with Session Attributes

There are many ways to determine within your Lambda function where you’re up to in your chat dialog; my preferred method is to pass state information within the sessionAttributes object of the input event and response as a key/value pair. The sessionAttributes can persist between invocations of the Lambda function (every time input is provided to the chatbot), however you must remember to collect and pass the attributes between inputs and responses to ensure they persist.

Input Validation Code

With that out of the way, let’s begin looking at the code. A condensed version of the validation script I used is shown below; it assumes you’re using the same slot and intent names in your Lex bot that were used in Part 1.
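The sketch below captures the core of that logic in Python 2.7. The slot names (UserID, DOB, MonthStarted), the intent name (ResetPW) and the "Completed" session key follow this series; the helper names and prompt wording are illustrative rather than the verbatim original.

# Sketch of the Lex initialisation/validation code hook - not the verbatim original.

def elicit_slot(session_attributes, intent_name, slots, slot_to_elicit, message):
    # Ask Lex to (re)prompt the user for a specific slot.
    return {
        'sessionAttributes': session_attributes,
        'dialogAction': {
            'type': 'ElicitSlot',
            'intentName': intent_name,
            'slots': slots,
            'slotToElicit': slot_to_elicit,
            'message': {'contentType': 'PlainText', 'content': message}
        }
    }

def delegate(session_attributes, slots):
    # Hand control back to Lex to continue the configured dialog.
    return {
        'sessionAttributes': session_attributes,
        'dialogAction': {'type': 'Delegate', 'slots': slots}
    }

def chk_user_id(user_id):
    # The slot is empty if non-numeric input was rejected; a valid ID is 6 digits.
    return user_id is not None and len(user_id) == 6

def lambda_handler(event, context):
    session_attributes = event.get('sessionAttributes') or {}
    intent = event['currentIntent']
    slots = intent['slots']

    if 'Completed' not in session_attributes:
        # First invocation: welcome the caller and ask for their User ID.
        session_attributes['Completed'] = 'None'
        return elicit_slot(session_attributes, intent['name'], slots, 'UserID',
                           'Welcome to the service desk. To reset your password, '
                           'please tell me your six digit user ID.')

    if session_attributes['Completed'] == 'None':
        if not chk_user_id(slots.get('UserID')):
            slots['UserID'] = None
            return elicit_slot(session_attributes, intent['name'], slots, 'UserID',
                               'That does not appear to be a valid six digit user ID. '
                               'Please try again.')
        session_attributes['Completed'] = 'UserID'
        return delegate(session_attributes, slots)

    # DOB and MonthStarted are validated the same way before the intent is
    # confirmed back to the user, as described in the remainder of this post.
    return delegate(session_attributes, slots)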

Let’s break it down.

When the lambda function is first invoked, we check to see if any state is set in the sessionAttributes. If not, we can assume this is the first time the lambda function is invoked and as a result, provide a welcoming response while requesting the User’s ID. To ensure the user isn’t welcomed again, we set a session state so the Lambda function knows to move to User ID validation when next invoked. This is done by setting the “Completed” : “None” key/value pair in the response SessionAttributes.

Next, we check the User ID. You’ll notice the chkUserId function checks for two things: that the slot is populated and, if it is, the length of the value. Because the slot type is AMAZON.Number, any non-numeric characters that are entered will be rejected by the slot. If this occurs, the slot will be left empty, hence this is something we’re looking out for. We also want to ensure the User ID is 6 digits, otherwise it is considered invalid. If the input is correct, we set the session state key/value pair to indicate User ID validation is complete and allow the dialog to continue; otherwise we ask the user to re-enter their User ID.

Next, we check the Date of Birth. Because the slot type is strict regarding input, we don’t do much validation here. An utterance for this slot type generally maps to a complete date in the form YYYY-MM-DD. For validation purposes, we’re just looking for an empty slot. Like the User ID check, we set the session state and allow the dialog to continue if all looks good.

Finally, we check the last slot which is the Month Started. Assuming the input for the month started is correct, we then confirm the intent by reading all the slot values back to the user and asking if it’s correct. You’ll notice here that there’s a bit of logic to determine if the user is using voice or text to interact with Lex. If voice is used, we use Speech Synthesis Markup Language (SSML) to ensure the UserID value is read as digits, rather than as the full number.
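For example, when the outputDialogMode is Voice, the confirmation message can be returned with contentType "SSML" so the user ID is read digit by digit; a rough illustration (the ID shown is just an example):

<speak>
  Just to confirm, your user ID is
  <say-as interpret-as="digits">123456</say-as>.
  Is this correct?
</speak>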

If the user is happy with the slot values, the validation completes and Lex then moves to the next Lambda function to fulfil the intent (next blog). If the user isn’t happy with the slot values, the lambda function exits telling the user to call back and try again.

Okay, now that our Lambda function is finished, we need to enable it as a code hook for initialisation and validation. Head over to your Lex bot, select the “ResetPW” intent, then tick the box under Lambda initialisation and validation and select your Lambda function. A prompt will be given to provide permissions to allow your Lex bot to invoke the lambda function. Select OK.

Let’s hit Build on the chatbot, and test it out.

So, we’ve managed to make the conversation a bit more human-like and we can now detect invalid input. If you use the microphone to chat with your bot, you’ll notice the User ID value is read as digits. That’s it for this blog. In the next post, we’ll integrate Active Directory and actually get a password reset and sent via SNS to a mobile phone.

Provisioning Hybrid Exchange/Exchange Online Mailboxes with Microsoft Identity Manager

Introduction

Working for Kloud, all our projects involve cloud services, and all our customers have varying and unique requirements. Recently one of our customers embarked on their migration from on-premises Exchange to Exchange Online. Nothing really groundbreaking there; however, they had a number of unique requirements, including management of Litigation Hold, and that needed to be integrated with their existing Microsoft Identity Manager implementation (which currently provisions new users to their Exchange 2013 environment). They also required that management of the Exchange environment still be possible via the Exchange Management Console against a local Exchange server. This post details how I integrated the environments using MIM.

Overview

In order to integrate the Provisioning and Lifecycle management of Exchange Online Mailboxes in a Hybrid Exchange with Microsoft Identity Manager I created a custom PowerShell Management Agent simply because it was going to provide the flexibility I needed.

Provisioning is based on the following process:

  1. MIM Creates new user in Active Directory (no changes to existing MIM provisioning process)
  2. Azure Active Directory Connect synchronises the user to Azure Active Directory
  3. The Exchange Online MIM Management Agent sees the corresponding AAD account for the new user
  4. MIM Declarative Rules trigger the creation of a new Remote Mailbox for the AD/AAD user against the local Exchange 2013 On Premise Server. This allows the EMC to be used to manage mailboxes On Premise even though the mailbox resides in Office365/Exchange Online
  5. AADC/Exchange synchronises the information as part of the Hybrid Exchange topology
  6. MIM sees the EXO Mailbox configuration for the new user and enables Litigation Hold against the EXO Mailbox (if required)

The following diagram graphically depicts this process.

EXO IDM Provisioning Solution.png

Exchange Online PowerShell MA

As always I’m using my favourite PowerShell Management Agent, the Granfeldt PS MA, now available on GitHub here.

Schema Script

The Schema script configures the schema required for current and future EXO management requirements. The schema is based on a single object class, “MailUser”, but pulls the information from a combination of the Azure AD User and Exchange Online Mailbox object classes for an associated account. Azure AD User attributes are prefixed with ‘AAD’; attributes without the AAD prefix are EXO Mailbox attributes.
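With the Granfeldt PSMA, the schema script simply returns an object whose property names ("attribute|type") define the schema. A cut-down sketch along the lines described might look like this; the non-anchor attribute names are examples only.

$obj = New-Object -Type PSCustomObject
$obj | Add-Member -Type NoteProperty -Name "Anchor-Id|String" -Value "1"
$obj | Add-Member -Type NoteProperty -Name "objectClass|String" -Value "MailUser"
# Azure AD User attributes are prefixed with 'AAD'
$obj | Add-Member -Type NoteProperty -Name "AADUserPrincipalName|String" -Value "user@customer.com"
$obj | Add-Member -Type NoteProperty -Name "AADAccountEnabled|Boolean" -Value $true
# Attributes without the AAD prefix come from the Exchange Online mailbox
$obj | Add-Member -Type NoteProperty -Name "LitigationHoldEnabled|Boolean" -Value $true
$obj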

Import Script

The Import script connects to both Azure AD and Exchange Online to retrieve Azure AD User accounts and if present the associated mailbox for a user.

It retrieves all Member AAD User Accounts and puts them into a Hash Table. Connectivity to AAD is via the AzureADPreview PowerShell module. It retrieves all Mailboxes and puts them into a Hash Table. It then processes all the mailboxes first including the associated AAD User account (utilising a join via userPrincipalName).

After all the mailboxes have been processed, the remaining AAD accounts (those without mailboxes) are processed.
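The full script follows the PSMA import contract, but the retrieval and join logic described above can be sketched roughly as follows (credential variables and the EXO remote session are assumed to already exist):

Import-Module AzureADPreview
Connect-AzureAD -Credential $aadCredential

# All Member (non-guest) Azure AD users, keyed by UserPrincipalName
$aadUsers = @{}
Get-AzureADUser -All $true | Where-Object { $_.UserType -eq "Member" } | ForEach-Object { $aadUsers[$_.UserPrincipalName] = $_ }

# All Exchange Online mailboxes (via the EXO remote PowerShell session), keyed by UserPrincipalName
$mailboxes = @{}
Get-Mailbox -ResultSize Unlimited | ForEach-Object { $mailboxes[$_.UserPrincipalName] = $_ }

# Mailboxes are processed first, joined to their AAD user on UPN;
# the remaining AAD accounts (no mailbox) are processed afterwards.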

Export Script

The Export script performs the necessary integration against the on-premises Exchange Server 2013 for provisioning, and against Exchange Online for the rest of the management. Both utilise remote PowerShell. It also leverages the Lithnet MIIS Automation PowerShell module to query the Metaverse and validate current object statuses.
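The export script itself is environment specific, but the two key operations it performs look something like the following fragments; the identities and routing domain are placeholders.

# Provision the Remote Mailbox against the on-premises Exchange 2013 server
Enable-RemoteMailbox -Identity $samAccountName -RemoteRoutingAddress "$alias@customer.mail.onmicrosoft.com"

# Once the EXO mailbox is visible, enable Litigation Hold via Exchange Online remote PowerShell
Set-Mailbox -Identity $userPrincipalName -LitigationHoldEnabled $true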

Wiring it all up

The scripts above will allow you to integrate a FIM/MIM implementation with AAD/EXO for management of users’ EXO mailboxes. You’ll need connectivity from the MIM Sync Server to AAD/O365 in order to manage them. Everything else I wired up using a few Sets, Workflows, Sync Rules and MPRs.

 

Replacing the service desk with bots using Amazon Lex and Amazon Connect (Part 1)

“What! Is this guy for real? Does he really think he can replace the front line of IT with pre-canned conversations?” I must admit, it’s a bold statement. The IT Service Desk has been around for years and has been the foot in the door for most of us in the IT industry. It’s the face of IT operations and plays an important role in ensuring an organisation’s staff can perform to the best of their abilities. But what if we could take some of the repetitive tasks the service desk performs and automate them? Not only would we be saving on head count costs, we would be able to ensure correct policies and procedures are followed to uphold security and compliance. The aim of this blog is to provide a working example of the automation of one service desk scenario to show how easy and accessible the technology is, and how it can be applied to various use cases.
To make it easier to follow along, I’ve broken this blog up into a number of parts. Part 1 will focus on the high-level architecture for the scenario and begin creating the Lex chatbot.

Scenario

Arguably, the most common service desk request is the password reset. While this is a pretty simple issue for the service desk to resolve, many service desk staff seem to skip over, or not realise the importance of user verification. Both the simple resolution and the strict verification requirement make this a prime scenario to automate.

Architecture

So what does the architecture look like? The diagram below depicts the expected process flow. Let’s step through each item in the diagram.

 

Amazon Connect

The process begins when the user calls the service desk and requests to have their password reset. In our architecture, the service desk uses Amazon Connect which is a cloud based customer contact centre service, allowing you to create contact flows, manage agents, and track performance metrics. We’re also able to plug in an Amazon Lex chatbot to handle user requests and offload the call to a human if the chatbot is unable to understand the user’s intent.

Amazon Lex

After the user has stated their request to change their password, we need to begin the user verification process. Their intent is recognised by our Amazon Lex chatbot, which initiates the dialog for the user verification process to ensure they are who they really say they are.

AWS Lambda

After the user has provided verification information, AWS Lambda, an on-demand compute service, is used to validate the user’s input and verify it against internal records. To do this, Lambda interrogates Active Directory to validate the user.

Amazon SNS

Once the user has been validated, their password is reset to a random string in Active Directory and the password is messaged to the user’s phone using Amazon’s Simple Notification Service. This completes the interaction.

Building our Chatbot

Before we get into the details, it’s worth mentioning that the aim of this blog is to convey the technology capability. There are many ways of enhancing the solution or improving the validation of user input that I’ve skipped over, so while this isn’t a finished, production-ready product, it’s certainly a good foundation to begin building an automated call centre.

To begin, let’s start by building our chatbot in Amazon Lex. In the Amazon Console, navigate to Amazon Lex. You’ll notice it’s only available in Ireland and US East. As Amazon Connect and my Active Directory environment are also in US East, that’s the region I’ve chosen.

Go ahead and select Create Bot, then choose to create your own Custom Bot. I’ve named mine “UserAdministration”. Choose an Output voice and set the session timeout to 5 minutes. An IAM Role will automatically be created on your behalf to allow your bot to use Amazon Polly for speech. For COPPA, select No, then select Create.

Once the bot has been created, we need to identify the user action expected to be performed, which is known as an intent. A bot can have multiple intents, but for our scenario we’re only creating one: the password reset intent. Go ahead and select Create Intent, then in the Add Intent window, select Create new intent. My intent name is “ResetPW”. Select Add, which should add the intent to your bot. We now need to specify some sample utterances, which are phrases the user can use to trigger the Reset Password intent. There are quite a few that could be listed here, but I’m going to limit mine to the following:

  • I forgot my password
  • I need to reset my password
  • Can you please reset my password

The next section is the configuration for the Lambda validation function. Let’s skip past this for the time being and move onto the slots. Slots are used to collect information from the user. In our case, we need to collect verification information to ensure the user is who they say they are. The verification information collected is going to vary between environments. I’m looking to collect the following to verify against Active Directory:

  • User ID – In my case, this is a 6-digit employee number that is also the sAMAccountName in Active Directory
  • User’s birthday – This is a custom attribute in my Active Directory
  • Month started – This is a custom attribute in my Active Directory

In addition to this, it’s also worth collecting and verifying the user’s mobile number. This can be done by passing the caller ID information from Amazon Connect, however we’ll skip this, as the bulk of our testing will be text chat and we need to ensure we have a consistent experience.

To define a slot, we need to specify three items:

  • Name of the slot – Think of this as the variable name.
  • Slot type – The data type expected. This is used to train the machine learning model to recognise the value for the slot.
  • Prompt – How the user is prompted to provide the value sought.

Many slot types are provided by Amazon, two of which have been used in this scenario. For “MonthStarted”, I’ve decided to create my own custom slot type, as the built-in “AMAZON.Month” slot type wasn’t strictly enforcing recognisable months. To create your own slot type, press the plus symbol on the left-hand side of the page next to Slot Types, then provide a name and description for your slot type. Select Restrict to Slot values and Synonyms, then enter each month and its abbreviation. Once completed, click Add slot to intent.

Once the custom slot type has been configured, it’s time to set up the slots for the intent. The screenshot below shows the slots that have been configured and the expected order to prompt the user.

The last step (for this blog post) is to have the bot verify that the information collected is correct. Tick the Confirmation Prompt box and in the Confirm text box provided, enter the following:

Just to confirm, your user ID is {UserID}, your Date of Birth is {DOB} and the month you started is {MonthStarted}. Is this correct?

For the Cancel text box, enter the following:

Sorry about that. Please call back and try again.

Be sure to leave Fulfillment set to Return parameters to client and hit Save Intent.

Great! We’ve built the bare basics of our bot. It doesn’t really do much yet, but let’s take it for a spin anyway and get a feel for what to expect. In the top right-hand corner, there’s a build button. Go ahead and click the button. This takes some time, as building a bot triggers machine learning and creates the models for your bot. Once completed, the bot should be available to text or voice chat on the right side of the page. As you move through the prompts, you can see at the bottom the slots getting populated with the expected format. i.e. 14th April 1983 is converted to 1983-04-14.

So at the moment, our bot doesn’t do much but collect the information we need. Admittedly, the conversation is a bit robotic as well. In the next few blogs, we’ll give the bot a bit more of a personality, we’ll do some input validation, and we’ll begin to integrate with Active Directory. Once we’ve got our bot working as expected, we’ll bolt on Amazon Connect to allow users to dial in and converse with our new bot.

How far to take response group

I have been working on a Skype for Business (SfB) Enterprise Voice implementation project recently. The client was very keen to use the native Response Group Service to create a corporate IVR for their receptions. The requirements ended up needing 4 workflows, 19 queues and 2 groups, going well beyond the simple 2-level, 4-option IVR case. The whole implementation couldn’t be completed through the GUI; instead, Lync PowerShell was the only way to meet the requirement.

I drew the reception IVR workflow below:

RGS

The root-level menu has 7 options, with option 9 looping back to the start, and each sub-menu has up to 8 options, to help the receptionists reduce their workload.

I like to start with the GUI to quickly set up the IVR framework with the first 4 options, and then use scripts to extend the options and manage the IVR framework. Taking the “Reception Main Menu” as an example, I used the scripts below to add Option 5, Option 6 and Option 9.

##Create Option 5

$Workflow=get-csrgsworkflow -name "Reception Main Menu"

$queue = Get-CsRgsQueue -name "Press5sub Queue[R]"

$Question = $workflow.DefaultAction.Question

$Action5 = New-CsRgsCallAction -Action TransferToQueue -QueueID $queue.Identity

$Answer5 = New-CsRgsAnswer -Action $Action5 -DtmfResponse 5 -VoiceResponseList "Option5"

$Question.AnswerList.Add($Answer5)

Set-CsRgsWorkflow -Instance $workflow

##Create Option 6

$Workflow=get-csrgsworkflow -name "Reception Main Menu"

$queue = Get-CsRgsQueue -name "Press6sub Queue[R]"

$Question = $workflow.DefaultAction.Question

$Action6 = New-CsRgsCallAction -Action TransferToQueue -QueueID $queue.Identity

$Answer6 = New-CsRgsAnswer -Action $Action6 -DtmfResponse 6 -VoiceResponseList "Option6"

$Question.AnswerList.Add($Answer6)

Set-CsRgsWorkflow -Instance $workflow

##Create Option 9

$Workflow=get-csrgsworkflow -name "Reception Main Menu"

$queue = Get-CsRgsQueue -name "Press9sub Queue[R]"

$Question = $workflow.DefaultAction.Question

$Action9 = New-CsRgsCallAction -Action TransferToQueue -QueueID $queue.Identity

$Answer9 = New-CsRgsAnswer -Action $Action9 -DtmfResponse 9 -VoiceResponseList "Option9"

$Question.AnswerList.Add($Answer9)

Set-CsRgsWorkflow -Instance $workflow

To manage the business hours of IVR workflows, I used the below scripts to reset/update the business hours:

##Business Hours update

$weekday = New-CsRgsTimeRange -Name "Weekday Hours" -OpenTime 08:30:00 -CloseTime 17:30:00

$x = Get-CsRgsHoursOfBusiness -Identity "service:ApplicationServer:nmlpoolaus01.company.com.au" -Name "Reception Main Menu_434d7c29-9893-4946-afcf-3bb9ac7aad8a"

$x.MondayHours1 = $weekday

$x.TuesdayHours1 = $weekday

$x.WednesdayHours1 = $weekday

$x.ThursdayHours1 = $weekday

$x.FridayHours1 = $weekday

Set-CsRgsHoursOfBusiness -Instance $x

$x

To manage the greeting/announcement of IVR workflows, I used the below scripts to reset/update the IVR greeting:

##Greeting/announcement update

$workflow = Get-CsRgsWorkflow -Name "Reception Main Menu"

$audioFile = Import-CsRgsAudioFile -Identity "service:ApplicationServer:nmlpoolaus01.company.com.au" -FileName "Greeting reception.wma" -Content (Get-Content "C:\temp\Greeting Reception.wma" -Encoding byte -ReadCount 0)

$prompt = New-CsRgsPrompt -AudioFilePrompt $audioFile -TextSpeechPrompt ""

$workflow.DefaultAction.Question.Prompt = $prompt

$workflow.DefaultAction.Question

Set-CsRgsWorkflow $workflow

The native Lync Response Group Service is a basic IVR platform that covers most simple cases, and it can even go as far as multi-level, multi-option IVRs with text-to-speech and speech recognition (interactive workflows). That’s not too shabby at all!

Hopefully my scripts can help you to extend your Lync IVR RGS workflow. 😊

Use Azure Health to track active incidents in your Subscriptions

siliconvalve

Yesterday afternoon while doing some work I ran into an issue in Azure. Initially I thought this issue was due to a bug in my (new) code and went to my usual debugging helper Application Insights to review what was going on.

The graphs below are a little old, but you can see a clear spike on the left, which is where we started seeing issues and which gave me a clue that something was not right!

App Insights views

Initially I thought this was a compute issue as the graphs are for a VM-hosted REST API (left) and a Functions-based backend (right).

At this point there was no service status indicating an issue so I dug a little deeper and reviewed the detailed Exception information from Application Insights and realised that the source of the problem was the underlying Service Bus and Event Hub features that we use to glue…


Visual Studio Team Services (VSTS) Continuous Integration and Continuous Deployment

I have been working on an Azure PaaS project recently and tried to leverage the VSTS DevOps CI/CD features to automate the build and deployment process. Thanks to my colleague Sean Perera, who helped me and provided a deep dive on the VSTS CI/CD process.

I am writing this blog to share the whole workflow:

  1. Create a new project in VSTS and create a Dev branch based on the master branch

1

  2. Establish the connection from local Visual Studio to the VSTS project

2

  3. Push the web app code to the VSTS dev branch

3

3.1

  4. Set up the endpoint connections between VSTS and Azure:
  • Log in to the Azure tenant and create a new app registration for VSTS.

4.1

  • Generate a service principal key and keep it safe

4.2

  • In the VSTS portal, go to Settings -> Services -> New Service Endpoint; the Service Principal Client ID is the Azure application ID, and the Service Principal Key is the Azure service principal key generated above.

4.3

  • Click “Verify connection” to make sure it passes the connection test.
  5. Create a build definition:
  • Define the build tasks: select the repo source, the Azure subscription, the destination to push to, and all the app settings and parameter definitions.

5.1

  • Go to Triggers and enable the CI settings:

5.2

  6. Create a new release definition:
  • Define the release pipeline: specify the source build and the target environment; in my case, I am using VSTS to push code to an Azure PaaS environment.

6.1

  • Enable the Continuous Deployment settings

6.2

  • Define the release tasks: in my case I am using the pre-built “Deploy Azure App Service” task and also a swap from the staging slot to production.

6.3

6.4

  7. Automated build and release process

Once I make a change to my project code in my local Visual Studio environment and commit and push it to the VSTS dev branch, VSTS automatically starts the build and release process, completes the release, and pushes to the Azure web app environment.

7.1

7.2

  8. Done. I tested my code in the dev and prod environments and it looks good. The VSTS DevOps features speed up the whole deployment process.

 

HoloLens – Spatial sound

The world of Mixed Reality and Augmented Reality is only half real without three-dimensional sound effects to support the virtual world. Microsoft addresses this challenge by leveraging the ability of its audio engine to generate Spatial Sound. Spatial Sound, as a feature, can simulate three-dimensional sounds in a virtual world based on direction, distance, and other environmental factors. Spatial Sound is based on the concept of sound localization, a popular topic in the field of sound engineering. Sound localization can be defined as the process of determining the source of a sound, the field of sound, the position of the listener, and the medium or environment of sound propagation. The following diagram illustrates a virtual world with Spatial Sound enabled:

1

The concept of sound localization and Spatial Sound is not new. Spatial music has been practised since biblical times in the form of the antiphon, and the modern form of spatial music was introduced in Germany in the early 1900s. Spatial Sound empowers a Mixed Reality application developer to place sound sources in a three-dimensional space around the user in a convincing manner. Objects in the virtual world can act as sources for these sounds, creating an immersive experience for the user.

Scenarios for Spatial Sound

In the world of Mixed Reality, Spatial Sound can be used to make many user scenarios realistic. Following are a few of them.

  • Anchoring – The ability to position a virtual object in the virtual world is critical for many Mixed Reality applications. In a scenario where the user turns their face away from an object and it disappears from the viewport, the only way to give the user a sense of the object’s continued existence in the scene is to propagate localized sounds from the object.
  • Guiding – Spatial Sound is very useful in scenarios where the user’s attention needs to be drawn to a specific object or space in the three-dimensional world.
  • Simulating physics – Sound plays an important role in emulating realistic physics in the world of Mixed Reality. For example, the impact of a glass sphere dropping behind the user in a three-dimensional world is best simulated by playing a localized shattering sound at the point of its collision with the floor.

Implementing Spatial Sound in HoloLens applications

The audio engine in HoloLens uses a technology called HRTF (Head Related Transfer Function) to simulate sounds coming from various directions and distances within a virtual world. Head Related Transfer Functions describe the directivity patterns of the human ears, catering for direction, elevation, distance, and frequency of sound. Unity offers built-in support for the Microsoft HRTF spatializer extension.

Configuring the Audio Source

The plug-in can be enabled from the audio manager in Unity (Edit->Project Settings->Audio).

2

Once the setting is enabled, you should be able to ‘Spatialize’ any audio source attached to a game object in Unity. To configure the audio source, you will need to perform the following steps:

  1. Select the audio source on the game object in the inspector panel
  2. Check the ‘Spatialize’ checkbox under the options for the audio source
  3. For best results, change the value of ‘Spatial Blend’ to 1 (the snippet below shows the equivalent in code)
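If you prefer to drive this from code, the same two settings map to standard Unity AudioSource properties; a small sketch:

using UnityEngine;

public class SpatializeAudioSource : MonoBehaviour
{
    void Start()
    {
        AudioSource source = GetComponent<AudioSource>();
        source.spatialize = true;     // equivalent to ticking the 'Spatialize' checkbox
        source.spatialBlend = 1.0f;   // fully 3D, equivalent to setting 'Spatial Blend' to 1
    }
}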

Your audio source is now configured to play Spatial Sound.

Testing Spatial Sound

The best way to test Spatial Sound is by using the ‘Audio Emitter’ component, which comes built in with the HoloToolkit. Perform the following steps to configure the Audio Emitter.

  1. Add the Audio Emitter component from the inspector panel
  2. Configure the Update Interval, Max Objects, and Max Distance on the Audio Emitter component. The ‘Update Interval’ determines how frequently the Audio Emitter scans for environmental factors which influence the sound output, ‘Max Objects’ specifies the maximum number of influencing objects to be considered, and ‘Max Distance’ specifies the maximum radius for the scan.
  3. Leave the ‘Outer Sphere’ parameter empty for now. This parameter is used to demonstrate audio occlusion.
  4. Associate an audio file with your Audio Source and enable looping.
  5. Run the application and move around the object to experience the Spatial Sound in action.

Measuring sound output

It is often the case that you want to measure the sound output produced by an Audio Source in order to represent it visually; a good example is when you need to display an audio histogram within your application.

The following code can be used to measure the RMS output of an Audio Source, which can then be used to paint the histogram spectrum.
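A minimal Unity sketch of such a meter is shown below; AudioSource.GetOutputData is the standard API for sampling the output buffer, and the component and field names are illustrative.

using UnityEngine;

public class AudioOutputMeter : MonoBehaviour
{
    public AudioSource audioSource;           // the spatialised audio source to measure
    private const int SampleCount = 1024;
    private readonly float[] samples = new float[SampleCount];

    void Update()
    {
        // Copy the most recent output samples (channel 0) into the buffer.
        audioSource.GetOutputData(samples, 0);

        float sumOfSquares = 0f;
        for (int i = 0; i < SampleCount; i++)
        {
            sumOfSquares += samples[i] * samples[i];
        }

        float rms = Mathf.Sqrt(sumOfSquares / SampleCount);
        // 'rms' can now drive a histogram or other visualisation.
    }
}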

Unity also provides a direct function, AudioSource.GetSpectrumData, to retrieve spectrum data from the audio source.

To conclude, in this blog we walked through the fundamentals of Spatial Sound, then looked at implementing Spatial Sound in a HoloLens application and at how to measure the sound output.

HoloLens – Understanding depth (Spatial Mapping)

Building smart applications which can work in a three-dimensional space has many challenges. Amongst these, the one that tops the list is the challenge of understanding and mapping the surrounding 3D world. Applications usually depend on the device and platform capabilities to resolve this problem. Augmented Reality and Mixed Reality devices ship with built-in technologies to measure the depth of the surrounding world.

Scenarios of interest

Mapping the world around a device is critical to enable powerful scenarios in this field. Following are a few such use cases:

  • Docking/Placement – What makes Mixed Reality different from Augmented Reality is its ability to enable interaction between virtual and physical objects. To make a Mixed Reality scenario realistic, it is critical for the application to understand the mapping of the environment where the user is currently operating from. This will help the application place or dock the holograms obeying the physical bounds of the environment. For example, if the application needs to place a chair in a room, it will need to position it on the floor with enough space to land its four legs.
  • Navigation – Objects in the holographic world should be constrained by the rules of the physical world to make the application look real. For example, a holographic puppy should be able to walk on the floor or jump on to the couch and not walk through the walls and furniture. To enable this, the application should be aware of the depth of each object around the user at any given point in time.
  • Physics – In the real world, the behaviour of an object in motion is highly influenced by factors like inertia, density, elasticity, and so on. To deliver a similar experience with holograms in the world of Mixed Reality, the application needs to be aware of the environment. For example, dropping a ball from the roof onto the floor will have a different effect from dropping it on a couch.

Technologies

Depth sensing is not a new problem in the world of computing. However, the rise of Mixed Reality and Augmented Reality devices has brought it into the limelight. Different vendors address this challenge under different names: for example, Google calls it ‘Depth Perception’, Apple’s ARKit calls it a ‘Depth Map’, and Microsoft calls it ‘Spatial Mapping’. The underlying technologies used by these devices may differ, but the objective of discovering the depth of the environment around the device remains the same. Following are a few of the underlying technologies used by these devices to measure depth:

  • Structured Infrared light projector/scanner
  • RGB Depth cameras
  • Time-of-flight camera

Time of Flight

Time-of-flight technology is of specific interest to us because of its popularity and the fact that it is used by devices like Microsoft Kinect and HoloLens to measure depth. The technology relies on the reflective properties of objects: it uses the known speed of light to calculate distance by measuring the time taken for a photon to reflect back to the device’s sensors. The following diagram illustrates the measurement process:

TFF

Experiments indicate that time-of-flight technology works best in a range of approximately 0.5 to 5 metres. The depth camera in HoloLens works well between 0.85 and 3.01 metres.

Spatial Mapping

Spatial Mapping is a feature shipped with Microsoft HoloLens which provides a representation of the real-world surfaces around the device. This can be used by application developers to make their applications environment-aware. The feature abstracts the hardware technology used to measure depth and exposes easily consumable APIs to integrate with.

Spatial Mapping in HoloLens

The best way to leverage Spatial Mapping capabilities in a HoloLens is to integrate it using Unity. To enable Spatial Mapping, you will first need to enable ‘SpatialPerception’ capability on your project. Unity offers two built-in components to support Spatial Mapping.

  • Spatial Mapping Renderer – This component is responsible for visually presenting the spatial map as a mesh to the application.
  • Spatial Mapping Collider – The collider is responsible for enabling interactions between the holograms and the spatial mesh.

These components can be added to an existing Unity project from the ‘Add Component’ menu (Add Component > AR > Spatial Mapping Collider/Spatial Mapping Renderer).
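The same components can also be attached from a script if you prefer; a small sketch, assuming the UnityEngine.XR.WSA namespace used by Unity 2017.2 and later (earlier versions expose these components under UnityEngine.VR.WSA):

using UnityEngine;
using UnityEngine.XR.WSA;

public class SpatialMappingSetup : MonoBehaviour
{
    void Start()
    {
        // Visualises the spatial mesh as it is scanned.
        gameObject.AddComponent<SpatialMappingRenderer>();

        // Adds mesh colliders so holograms can interact with (and raycast against) real surfaces.
        gameObject.AddComponent<SpatialMappingCollider>();
    }
}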

The following link talks about setting up Spatial Mapping capabilities in your Unity project in detail.

https://developer.microsoft.com/en-us/windows/mixed-reality/spatial_mapping_in_unity

Tips and tricks

Following are a few tips and tricks to optimize Spatial Mapping in your applications.

  • Updating spatial maps – Generating a spatial map starts with a trigger for collecting mapping data. This operation is very CPU intensive, which costs battery life and starves other processes. To optimise update cycles, request collision data only when required. The APIs also let you query collision data for selective surfaces.
  • Configuring refresh intervals – It is important to choose an optimal refresh rate for your spatial maps to go light on the CPU. You can do this from Unity’s inspector window.


  • Density of spatial data – Spatial Mapping uses triangle meshes to represent the surfaces it maps. For an application which does not require high-resolution mapping, it is advisable to generate maps with a lower triangle density to optimise CPU time and the turnaround time of the mapping process.
  • Understanding the implementation – It is useful to understand the implementation of Spatial Mapping to perform low-level optimizations specific to your application. Understanding how ‘SurfaceObserver’ and ‘SurfaceData’ are implemented gives a good insight into how things work. You can refer to the Unity documentation to learn more about this.

https://docs.unity3d.com/Manual/windowsholographic-sm-lowlevelapi.html

  • Mixed Reality Toolkit example – A good place to start on Spatial Mapping is the example shared within the Mixed Reality Toolkit accessible through the following link.

https://github.com/Microsoft/MixedRealityToolkit-Unity/tree/master/Assets/HoloToolkit-Examples/SpatialMapping

To conclude, in this blog we briefly discussed the depth-mapping problem, solutions to it, and the technology behind them. We also dived deeper into the Spatial Mapping feature of HoloLens and how it can be used in a Unity project.

 

Xamarin Forms: Microsoft.EntityFrameworkCore.Sqlite issue with Physical devices

Introduction

Building Xamarin Forms apps using .NET Standard 2.0 is still fairly new to the industry; we are only just starting to learn how differently Xamarin needs to be configured to get it working compared to PCL-based projects.

I was building a Xamarin Forms app using Microsoft’s Entity Framework Core SQLite provider to store the app’s data. Entity Framework Core with SQLite is an obvious choice when building an app on .NET Standard 2.0.

Simulator

The app works on pretty much all simulators without any issue; all read/write operations work well.

Issue  – Physical Device

The app crashes on a physical device when trying to read or write data from the SQLite database.

Error

System.TypeInitializationException: The type initializer for 'Microsoft.EntityFrameworkCore.EntityFrameworkQueryableExtensions' threw an exception. ---> System.InvalidOperationException: Sequence contains no matching element

Resolution

Change the iOS linker behaviour to “Don’t Link” (in the iOS project’s build options).