Exchange Online & Splunk – Automating the solution

NOTES FROM THE FIELD:

I have recently been consulting on what I think is a pretty cool engagement: integrating Office 365 mailbox data into the Splunk reporting platform.

I initially thought about using a .csv export methodology; however, through trial and error (more error than trial, if I'm being honest), and after realising that this method still required some manual interaction, I decided to find a fully automated solution.

The final solution comprises the following components:

  • Splunk HTTP event collector
    • Splunk hostname
    • Token from HTTP event collector config page
  • Azure automation account
    • Azure Run As Account
    • Azure Runbook
    • Exchange Online credentials (registered to the Azure Automation account)

I'm not going to run through the creation of the automation account or the required credentials, as these had already been created. There is a great guide to configuring the solution I used for this customer at https://www.splunk.com/blog/2017/10/05/splunking-microsoft-cloud-data-part-3.html

The PowerShell script we are using achieves the following:

  • Connect to Azure and Exchange Online – Azure run as account authentication
  • Configure variables for connection to Splunk HTTP event collector
  • Collect mailbox data from the Exchange Online environment
  • Split the mailbox data into parts for faster processing
  • Specify SSL/TLS protocol settings for self-signed cert in test environment
  • Create a JSON object to be posted to the Splunk environment
  • HTTP POST the data directly to Splunk

The Code:

#Clear Existing PS Sessions
Get-PSSession | Remove-PSSession | Out-Null

#Create Split Function for the mailbox array
function Split-Array {
    param($inArray, [int]$parts, [int]$size)
    if ($parts) {
        $PartSize = [Math]::Ceiling($inArray.Count / $parts)
    }
    if ($size) {
        $PartSize = $size
        $parts = [Math]::Ceiling($inArray.Count / $size)
    }
    $outArray = New-Object 'System.Collections.Generic.List[psobject]'
    for ($i = 1; $i -le $parts; $i++) {
        $start = (($i - 1) * $PartSize)
        $end = (($i) * $PartSize) - 1
        if ($end -ge $inArray.Count) { $end = $inArray.Count - 1 }
        $outArray.Add(@($inArray[$start..$end]))
    }
    return ,$outArray
}

function Connect-ExchangeOnline {
    param($Creds)
    #Connect to Exchange Online
    $Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://outlook.office365.com/powershell-liveid/ -Credential $Creds -Authentication Basic -AllowRedirection
    $Commands = @("Add-MailboxPermission", "Add-RecipientPermission", "Remove-RecipientPermission", "Remove-MailboxPermission", "Get-MailboxPermission", "Get-User", "Get-DistributionGroupMember", "Get-DistributionGroup", "Get-Mailbox")
    Import-PSSession -Session $Session -DisableNameChecking:$true -AllowClobber:$true -CommandName $Commands | Out-Null
}

#Create Variables
$SplunkHost = "Your Splunk hostname or IP Address"
$SplunkEventCollectorPort = "8088"
$SplunkEventCollectorToken = "Splunk Token from HTTP Event Collector"
$servicePrincipalConnection = Get-AutomationConnection -Name 'AzureRunAsConnection'
$credentials = Get-AutomationPSCredential -Name 'Exchange Online'

#Connect to Azure using the Run As account
Add-AzureRMAccount -ServicePrincipal -Tenant $servicePrincipalConnection.TenantID -ApplicationId $servicePrincipalConnection.ApplicationID -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint

#Connect to Exchange Online
Connect-ExchangeOnline -Creds $credentials

#Collect the mailbox data from Exchange Online
$mailboxes = Get-Mailbox -ResultSize Unlimited | Select-Object -Property DisplayName, PrimarySMTPAddress, IsMailboxEnabled, ForwardingSmtpAddress, GrantSendOnBehalfTo, ProhibitSendReceiveQuota, AddressBookPolicy

#Get Current Date & Time
$time = Get-Date -Format s

#Convert Timezone to Australia/Brisbane
$bnetime = [System.TimeZoneInfo]::ConvertTimeBySystemTimeZoneId($time, [System.TimeZoneInfo]::Local.Id, 'E. Australia Standard Time')

#Add a Time Column to the Output
$mailboxes = $mailboxes | Select-Object @{Expression = {$bnetime}; Name = 'Time'}, DisplayName, PrimarySMTPAddress, IsMailboxEnabled, ForwardingSmtpAddress, GrantSendOnBehalfTo, ProhibitSendReceiveQuota, AddressBookPolicy

#Split the mailbox data into parts for faster processing
$recipients = Split-Array -inArray $mailboxes -parts 5

#Create JSON objects and HTTP POST to the Splunk HTTP Event Collector
foreach ($recipient in $recipients) {
    foreach ($r in $recipient) {
        #Allow the required SSL/TLS protocols and bypass certificate validation for the self-signed certificate used in testing
        $AllProtocols = [System.Net.SecurityProtocolType]'Ssl3,Tls,Tls11,Tls12'
        [System.Net.ServicePointManager]::SecurityProtocol = $AllProtocols
        [System.Net.ServicePointManager]::ServerCertificateValidationCallback = {$true}
        #Build the JSON string to post to Splunk
        $StringToPost = "{ `"Time`": `"$($r.Time)`", `"DisplayName`": `"$($r.DisplayName)`", `"PrimarySMTPAddress`": `"$($r.PrimarySmtpAddress)`", `"IsMailboxEnabled`": `"$($r.IsMailboxEnabled)`", `"ForwardingSmtpAddress`": `"$($r.ForwardingSmtpAddress)`", `"GrantSendOnBehalfTo`": `"$($r.GrantSendOnBehalfTo)`", `"ProhibitSendReceiveQuota`": `"$($r.ProhibitSendReceiveQuota)`", `"AddressBookPolicy`": `"$($r.AddressBookPolicy)`" }"
        $uri = "https://" + $SplunkHost + ":" + $SplunkEventCollectorPort + "/services/collector/raw"
        $header = @{"Authorization" = "Splunk " + $SplunkEventCollectorToken}
        #Post to the Splunk HTTP Event Collector
        Invoke-RestMethod -Method Post -Uri $uri -Body $StringToPost -Headers $header
    }
}

#Clean up PS sessions
Get-PSSession | Remove-PSSession | Out-Null

 

The final output that can be seen in Splunk looks like the following:

11/13/17
12:28:22.000 PM
{
AddressBookPolicy:
DisplayName: Shane Fisher
ForwardingSmtpAddress:
GrantSendOnBehalfTo:
IsMailboxEnabled: True
PrimarySMTPAddress: shane.fisher@xxxxxxxx.com.au
ProhibitSendReceiveQuota: 50 GB (53,687,091,200 bytes)
Time: 11/13/2017 12:28:22
}

I hope this helps some of you out there.

Cheers,

Shane.

 

 

 

Resolving an "unable to access app" issue with an app published via Barracuda WAF over Azure Express Route

Recently, a customer reported that they couldn't access any UAT apps from their Melbourne office, although access worked fine from their other offices. When they tried to access the UAT app domains, they received the following error: "The request service is temporarily unavailable. It is either overloaded or under maintenance. Please try later."

WAF error

Because of the UAT environment's IP restrictions on the WAF, it was normal for me to get this error: our Kloud office's public IPs are not in the WAF whitelist. The error confirmed that my web traffic was hitting the WAFs. Pinging the URL hostname returned the correct IP, ruling out DNS problems and confirming that traffic was reaching the correct WAF farm (the customer has a couple of other WAF farms in other countries). So the troubleshooting could focus on the AU WAFs.

I pulled out all the WAF access logs, planning to go through them to verify whether the web traffic was hitting the AU WAFs or going somewhere else. I did a log search based on the public IPs provided by the customer; no results were returned for the last 7 days.

Search Result 1

Interesting. Did that mean no traffic from the Melbourne office was coming in? I did another search based on my own public IP, and it returned a couple of access logs related to my testing: correct time, correct client IP, correct WAF domain hostname, method GET, and status 503, which is expected because my office IP is restricted.

Search Result 2

Since the customer mentioned that all other offices had no problem accessing the UAT app environment, I asked them to provide one public IP from another office. We tested again and verified that people in the Indian office could successfully open the web app, and I could see their web traffic appear in the WAF logs as well. I believed that when Melbourne staff tried to browse the web app, the traffic should go to the same WAF farm, because the DNS hostname resolved to the same IP whether queried from Melbourne or from India.

So what exactly was happening, and what was the root cause? :/

In order to capture another good example, I noted down the time and asked the customer to browse the website again. This time I searched the access logs based on the time instead of the Melbourne public IPs, and got a couple of results back with some unknown IPs.

Search result 3

I googled the unknown IPs, and it turned out they were Microsoft Australian data centre IPs. At that point I suspected a routing or NAT issue in the customer's network. I contacted the customer and provided the unknown IPs; they did some investigation and advised that those IPs were the public IPs of their Azure Express Route interfaces. It all made sense now: because the customer hadn't whitelisted their new Azure public IPs, web traffic arriving from those unknown source IPs (the Azure public IPs) was not recognised by the WAF and was being blocked, just like mine. Once I added the new Azure IPs to the app's whitelist, all the access issues were resolved.

Resolving “User not found” issue while assigning permissions using SharePoint CSOM

I was recently working on a SharePoint Online project where we were trying to automate library creation and apply the required permissions on those libraries. We hit an issue while modifying permissions on SharePoint libraries with CSOM code when the Created By user had left the company.

In this post I will outline the cause and the resolution as there was no online reference for resolving this error.

Issue: The CSOM code was throwing a "User not found" error even when creating a User object via the web.EnsureUser() method.

Cause: The User object returned by the web.EnsureUser() method was empty (but not null) and hence couldn't be used when adding role assignments after breaking permission inheritance.

Resolution: The resolution was to explicitly load the user object, catch the exception thrown during the load, and set a flag to false that could later be checked to prevent the add method from erroring out. Yes, this is a roundabout way of overcoming the issue, but it works. Hopefully it will save you some hours.

Below is the code that could be used to do that.
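The snippet itself isn't reproduced in this post, so the following is a minimal sketch of the pattern written against the CSOM API from PowerShell rather than C# (the assembly paths, site URL, list name, item ID, role and account names are all placeholders, not the original code):

# Load the SharePoint Online CSOM assemblies (paths assume the SharePoint Online Client Components SDK)
Add-Type -Path 'C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\16\ISAPI\Microsoft.SharePoint.Client.dll'
Add-Type -Path 'C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\16\ISAPI\Microsoft.SharePoint.Client.Runtime.dll'

$securePassword = Read-Host -Prompt 'Password' -AsSecureString
$ctx = New-Object Microsoft.SharePoint.Client.ClientContext('https://tenant.sharepoint.com/sites/demo')
$ctx.Credentials = New-Object Microsoft.SharePoint.Client.SharePointOnlineCredentials('admin@tenant.com', $securePassword)

# EnsureUser returns an object even for a departed user, so load it explicitly and catch the failure
$userFound = $true
$author = $ctx.Web.EnsureUser('departed.user@tenant.com')
$ctx.Load($author)
try {
    $ctx.ExecuteQuery()
}
catch {
    # "User cannot be found" surfaces here for accounts that have left the company
    $userFound = $false
}

$list = $ctx.Web.Lists.GetByTitle('Documents')
$item = $list.GetItemById(1)
$item.BreakRoleInheritance($true, $false)

if ($userFound) {
    # Only add the role assignment when the user object loaded successfully
    $roleDef = $ctx.Web.RoleDefinitions.GetByType([Microsoft.SharePoint.Client.RoleType]::Contributor)
    $bindings = New-Object Microsoft.SharePoint.Client.RoleDefinitionBindingCollection($ctx)
    $bindings.Add($roleDef)
    $item.RoleAssignments.Add($author, $bindings) | Out-Null
}
$ctx.ExecuteQuery()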

How I prepared for the 70-533 Azure Exam

I have been an IT professional for over 7 years now. During this time, I have seen and experienced many significant changes in the IT infrastructure field. I started as a network engineer at a software company and then moved to an MSP as an infrastructure engineer, looking after servers, firewalls, networks, application deployments, etc. for medium and large finance institutions, before I joined Kloud Solutions and started to evolve by learning Microsoft Azure. Cloud technology is clearly the most significant shift the IT industry is experiencing today. As a Microsoft guy, it made sense for me to start using Azure as a platform to provide solutions to our customers.

Back in August, I had a chat with Gary Needham, one of the Azure guys at work, and figured it was time to develop my Azure skills via exam 70-533: Implementing Microsoft Azure Infrastructure Solutions. As an ops guy with strong hands-on experience, I learn best by doing labs one by one. With a full-time job I had limited time to study, so I wanted to make sure that whatever labs I did were aligned with the exam objectives. I wrote down my goal: obtain the Azure certification by the end of October. I had some prior experience with Azure through Azure Express Route, Azure Backup and Azure Site Recovery, so I was confident I could achieve it within three months.

After quickly getting my subscription ready, I went into the Azure ARM portal and started deploying resource groups, storage accounts, virtual networks, subnets and so on. I was getting really excited about the Azure technology. There was lots of fun, and this enjoyment turned the journey into an outstanding learning experience.

The Azure GitHub quick-start templates are a fantastic place to learn Azure templates. In the modern IT world we standardise infrastructure deployment using JSON templates (infrastructure as code), which brings enormous benefits. For me this came somewhat naturally, since I actually started in IT as a coder at school. It cost me some trial and error, but I eventually figured out how to use Azure templates to deploy Azure ARM resources. It took me a couple of weeks of lab sessions to deploy Azure resources via template with either PowerShell or the Azure CLI, and it felt awesome.
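As a hedged illustration of the PowerShell route (using the AzureRM cmdlets that were current for this exam; the resource group name, location and template file paths below are placeholders):

# Sign in and create a resource group to deploy into
Login-AzureRmAccount
New-AzureRmResourceGroup -Name 'rg-lab-01' -Location 'Australia Southeast'

# Deploy a quick-start template plus its parameter file into the resource group
New-AzureRmResourceGroupDeployment -Name 'lab-deployment' `
    -ResourceGroupName 'rg-lab-01' `
    -TemplateFile 'C:\Templates\azuredeploy.json' `
    -TemplateParameterFile 'C:\Templates\azuredeploy.parameters.json' `
    -Verbose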

Over the next few weeks, I continued to expand my lab environment bit by bit. I like to use the Microsoft online documentation: it is kept up to date and provides best practices and recommendations for deployments and configurations. DSC is a good example; Microsoft provides very good information about DSC templates, how to build them, how to use them, and how to manage version control in a large DSC environment.
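For instance, a minimal DSC configuration compiled and applied locally can look like this (a generic sketch, not tied to any particular exam objective; the configuration name and output path are placeholders):

# A minimal DSC configuration that ensures IIS is installed on the local node
Configuration WebServerConfig {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'localhost' {
        WindowsFeature IIS {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }
    }
}

# Compile the configuration to a MOF file and push it to the node
WebServerConfig -OutputPath 'C:\DSC\WebServerConfig'
Start-DscConfiguration -Path 'C:\DSC\WebServerConfig' -Wait -Verbose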

As I went through the exam objectives, I tried to link the labs I had done with each objective in my mind. This reminded me which objectives I needed to go back and review, and it eventually paid off in the exam because it helped me consolidate all the knowledge I had quickly learnt.

The online course I used was Udemy's 70-533 videos. I used the course to test myself from different angles and see whether I had mastered all the knowledge required for the exam.

The practice exam I used was the Microsoft official practice exam for 70-533. I believe my exam prep is a bit more in-depth than most: I didn't try to memorise the exam questions and focus on the answers, as I never rely on memorisation. For the questions I got wrong, I went back to the lab and the documentation and tried to understand the whole workflow around that particular exam subject. I am also a heavy googler and I like to google the extended questions to learn more.

To prepare for the exam, I went through the entire collection of practice questions twice to see whether I was ready. On the first pass I had lots of uncertain answers, and it gave me some objective feedback, such as the limitations of Azure compared with on-premises: storage limitations, replication limitations, geo-location limitations, the number of storage accounts per subscription, and storage performance limits. All of this knowledge is tested when designing Azure solutions. I followed the explanation links and researched every question I answered incorrectly. At the end of the second pass I still had 9 questions wrong, so I noted them down and studied hard on those. Once I felt comfortable with all the questions, I booked my exam for a Wednesday morning.

On exam day, I got up at 6:30 AM, had a decent breakfast and took the train to the test centre. It took me about an hour to finish all the questions, and I passed exam 70-533: Implementing Microsoft Azure Infrastructure Solutions.

 

Trust the process:

Read the Microsoft Azure online documentation, then practise and implement what you learn in the labs.

 

 

Hopefully this blog helps people who are planning to take an Azure exam.

Azure Functions Cold Start Workaround

Intro

I love Azure Functions. So much power for so little effort or cost. The only downside is that the consumption model that keeps the cost so dirt-cheap means that unless you are using your Function constantly (in which case, you might be better off with the non-consumption options anyway), you will often be hit with a long delay as your Function wakes up from hibernation.

So very cold…

This isn't a big deal if you are dealing with a fire-and-forget queue trigger scenario, but if you have a web app that is calling the HTTP trigger and you need to wait for the Function to do its job before responding with a 200 OK… that's a long wait (well over 15 seconds in my experience with a PowerShell function that loads a bunch of modules).

Now, the blunt way to mitigate this (as suggested by some in GitHub issues on the subject) is to set up a timer function in the same Function App to run every 5 minutes to keep things warm. To me this seems like a wasteful and potentially expensive approach. For my use case, there was a better way that would work.

The Classic CRUD Use-case

Here’s my use case: I’m building some custom SharePoint forms for a customer and using my preferred JS framework, good old Angular 1.x. Don’t believe the hype around the newer frameworks, ng1 still gets the job done with no performance problems at the scale I’m dealing with in SharePoint. It also comes with a very strong ecosystem of libraries to support doing amazing things in the browser. But that’s a topic for another blog.

Anyway, the only thing I couldn't do effectively on the client side was break permissions on the list item created using the form and secure it to the creator and some other users (e.g. their manager). You need elevated permissions for that. I called on the awesome power of the PnP PowerShell library (specifically the Set-PnPListItemPermission cmdlet) to do this and wrapped it in a PowerShell Azure Function:
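The function itself isn't embedded in this post, so here is a minimal sketch of what such a wrapper can look like, assuming the v1 experimental PowerShell runtime with $req/$res file bindings (the app-setting names, request fields and role name are assumptions, not the actual code):

# run.ps1 - HTTP-triggered PowerShell Function (Functions v1 experimental runtime)
# PnP PowerShell dlls sit in the Function's 'modules' folder, so its cmdlets are available.

# Read and parse the incoming request body
$requestBody = Get-Content $req -Raw | ConvertFrom-Json

# Build credentials from app settings (assumed setting names)
$securePassword = ConvertTo-SecureString $env:SPO_Password -AsPlainText -Force
$spCredentials  = New-Object System.Management.Automation.PSCredential ($env:SPO_User, $securePassword)

# Connect to the site and lock the new item down to the creator (and any other nominated users)
Connect-PnPOnline -Url $requestBody.siteUrl -Credentials $spCredentials
Set-PnPListItemPermission -List $requestBody.listName -Identity $requestBody.itemId `
    -User $requestBody.creatorLogin -AddRole 'Contribute' -ClearExisting

# Respond to the caller
Out-File -Encoding Ascii -FilePath $res -InputObject 'Permissions updated'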

Pretty simple. Nice and clean – gets the job done. My Angular service calls this right after saving the item based on the user's form input. If the Azure Function is starting from cold, that adds an extra 20 seconds to the save operation. Unacceptable, and avoidable.

A more nuanced warmup for those who can…

This seemed like such an obvious solution once I hit on it – but I hadn't thought of it before. When the form is first opened, it's pretty reasonable to assume that (unless the form is a monster) the user should be hitting that submit/save button within 5 minutes. So that's when we hit our Function with a modified HTTP payload of 'WARMUP'.

Just a simple ‘If’ statement to bypass the function if the ‘WARMUP’ payload is detected. The Function immediately responds with a 200. We ignore that – this is effectively fire-and-forget. Yes, this would be even simpler if we had a separate warmup Function that did absolutely nothing except warm up the Functions App, but I like that this ensures that my dependent PnP dlls (in the ‘modules’ folder of my Function) have been fired up on the current instance before I hit the function for real. Maybe it makes no difference. Don’t really care – this is simple enough anyway.
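Adjusting the sketch above, the bypass might sit at the very top of run.ps1, before the body is parsed as JSON (again a hedged sketch rather than the actual implementation):

# Warmup bypass: if the payload is the literal string 'WARMUP', return immediately.
# By this point the runtime has already loaded the PnP modules on this instance.
$raw = Get-Content $req -Raw
if ($raw -match 'WARMUP') {
    Out-File -Encoding Ascii -FilePath $res -InputObject 'Warmed up'
    return
}
$requestBody = $raw | ConvertFrom-Json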

Here’s the Angular code that calls the Function (both as a warmup and for real):

Anyway, nothing revolutionary here, I know. But I hadn’t come across this approach before, so I thought it was worth writing up as it suits this standard CRUD forms over data scenario so nicely.

Till next time!

HoloLens – Using the Windows Device Portal

Windows Device Portal is a web-based tool first introduced by Microsoft in 2016. Its main purpose is to facilitate application management, performance management and advanced remote troubleshooting for Windows devices. The Device Portal is a lightweight web server built into the Windows device which can be enabled in developer mode. On a HoloLens, developer mode can be enabled from the settings application (Settings -> Update -> For Developers -> Developer Mode).

Connecting the Device

Once the developer mode on the device is enabled, the following steps must be performed to connect to the device portal.

  1. Connect the device to the same Wi-Fi network as the computer used to access the portal. Wi-Fi settings on the HoloLens can be reached through the settings application (Settings > Network & Internet > Wi-Fi).
  2. Note the IP address assigned to the HoloLens. This can be found in the 'Advanced Options' screen under the Wi-Fi settings (Settings > Network & Internet > Wi-Fi > Advanced Options).
  3. On the web browser of your computer, navigate to “https://<HoloLens IP Address>”
  4. As you don’t have a valid certificate installed, you will be prompted with a warning. Proceed to the website by ignoring the warning.

Holo WDP1

  5. For the first connection, you will be prompted with a credential reset screen. Your HoloLens should now black out, with only a number displayed in the centre of your viewport. Note this number and enter it in the textbox highlighted below.

Holo WDP2

  6. Enter a 'username' and 'password' for the portal to use when accessing the device in future.
  7. Click the 'Pair' button.
  8. You will now be prompted with a form to enter the username and password. Fill in this form and click the 'Login' button.

Holo WDP3

  9. A successful login will take you to the home screen of the Windows Device Portal.

Holo WDP4

Portal features

The portal has a main menu (towards the left of the screen) and a header menu (which also serves as a status bar). The following diagram elaborates on the controls available in the header menu.

Holo WDP5

The main menu is divided into three sections

  • Views – Groups the Home page, 3D view and the Mixed Reality Capture page which can be used to record the device perspective.
  • Performance – Groups the tools which are primarily used for troubleshooting applications installed on the device or the device itself (hardware and software)
  • System – functionality to retrieve and control device behaviour, ranging from managing the applications installed on the device to controlling the network settings.

A detailed description of the functionalities available under each menu item can be found on the following link:

https://developer.microsoft.com/en-us/windows/mixed-reality/using_the_windows_device_portal

Tips and tricks

The Windows Device Portal is an elaborate tool and some of its functions are tailor-made for advanced troubleshooting. The following are a few features of interest when getting started with development.

  • Device name – the device name can be accessed from the home page under the Views category. It is helpful in the long run to give your HoloLens a meaningful name for easier identification on a network, and you can change it from the Device Portal.

Holo WDP6

  • Sleep setting – HoloLens has a very limited battery life, which ranges from 2 to 6 hours depending on the applications running on it. This makes the sleep settings important. You can control them from the home screen of the Device Portal. Keep in mind that during development you will usually leave the device plugged in (using a USB cable to your laptop/desktop). To avoid the pain of waking the device repeatedly, it is recommended to set the sleep time when plugged in to a larger value.

Holo WDP7

  • Mixed reality capture – This feature is very handy for capturing the device perspective for application demonstrations. You can capture the video through the PV (Picture/Video) camera, the audio from the microphones, the audio from the app, and the holograms projected by the app, combined into a collated media stream. The captured media content is saved on the device and can be downloaded to your machine from the portal. However, the camera channel cannot be shared between two applications, so if you have an application using the device's camera, your recording will not work by default.
  • Capturing ETW traces – The performance tracking feature under the Performance category is very handy for collecting ETW traces for a specific period. The portal enables you to start and stop the trace monitoring. The ETW events captured during this period are logged to a file which can be downloaded for further analysis.
  • Application management – Device portal provides a set of features to manage the applications installed on the HoloLens. These features include deploying, starting and stopping applications. This feature is very useful when you want to remotely initiate an application launch for a user during a demo.

Holo WDP8

  • Virtual Keyboard – HoloLens is not engineered to efficiently capture inputs from the keyboard. Pointing at a virtual keyboard using HoloLens and making an ‘airtap’ for every key press can be very stressful. Device portal gives you an alternate way to input text into your application from the ‘virtual input’ screen.

Holo WDP9

Windows Device Portal APIs

Windows Device Portal is built on top of a collection of public REST APIs. The collection consists of a set of core APIs common to all Windows devices and specialised sets of APIs for specific devices such as HoloLens and Xbox. The HoloLens Device Portal APIs are bundled under the following controllers:

  • App deployment – Deploying applications on HoloLens
  • Dump collection – Managing crash dumps from installed applications
  • ETW – Managing ETW events
  • Holographic OS – Managing the OS level ETW events and states of the OS services
  • Holographic Perception – Manage the perception client
  • Holographic Thermal – Retrieve thermal states
  • Perception Simulation Control – Manage perception simulation
  • Perception Simulation Playback – Manage perception playback
  • Perception Simulation Recording – Manage perception recording
  • Mixed Reality Capture – Manage A/V capture
  • Networking – Retrieve information about IP configuration
  • OS Information – Retrieve information about the machine
  • Performance data – Retrieve performance data of a process
  • Power – Retrieve power state
  • Remote Control – Remotely shutdown or restart the device
  • Task Manager – Start/Stop an installed application
  • Wi-Fi Management – Manage Wi-Fi connections

The following link captures the documentation for HoloLens Device Portal APIs.

https://developer.microsoft.com/en-us/windows/mixed-reality/device_portal_api_reference.
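As a hedged example of calling one of the core OS Information endpoints from PowerShell (the IP address and credentials are placeholders; certificate validation is bypassed only because the device presents a self-signed certificate, as discussed earlier):

# Ignore the self-signed certificate presented by the HoloLens (test scenarios only)
[System.Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }

# Credentials are the username/password set when pairing with the Device Portal
$portalCredentials = Get-Credential

# Query the device name via the core OS Information API
Invoke-RestMethod -Uri 'https://<HoloLens IP Address>/api/os/machinename' -Credential $portalCredentials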

The library behind this tool is an open source project hosted on GitHub. Following is a link to the source repository

https://github.com/Microsoft/WindowsDevicePortalWrapper

HoloLens – Setting up the Development environment

HoloLens is undoubtedly a powerful invention in the field of Mixed reality. As with any revolutionary invention, the success of a technology largely depends on its usability and ease of adoption. This is what makes the software development kits (SDKs) and application programming interfaces (APIs) associated with a technology so critical. Microsoft has been very smart on this front with HoloLens: rather than re-inventing the wheel, they have integrated the HoloLens development model with the existing, popular gaming platform Unity for modelling a Mixed reality application frontend.

Unity is a cross-platform gaming engine which was first released in 2005. Microsoft's collaboration with Unity was first announced in 2013, when it started supporting the Windows 8, Windows Phone 8 and Xbox One development tools. Unity's support for Microsoft HoloLens was announced in 2015, and since then it has been the Microsoft-recommended platform for developing game worlds for Mixed reality applications.

With Unity in place for building the game world, the next piece of the puzzle was to choose the right platform for scripting the business logic and for deploying and debugging the application. For obvious reasons, such as flexibility in code editing and troubleshooting, Microsoft Visual Studio was chosen as the platform for authoring HoloLens applications. Visual Studio Tools for Unity is integrated into the Unity platform, which enables it to generate a Visual Studio solution for the game world. Scripts can then be coded in C# in Visual Studio to contain the operational logic for the Mixed reality application.

HoloLens Application Architecture

A typical Mixed reality application has multiple tiers: the frontend game world, built using Unity; the application logic that powers the game objects, coded in C# in Visual Studio; and backend services that encapsulate the business logic. The following diagram illustrates the architecture of such an application.

hldev1

In the above diagram, the box towards the right (Delegated services) is an optional tier for a Mixed reality application. However, many real-world scenarios in the space of Augmented and Mixed reality are powered by strong backend systems such as machine learning and data analytics services.

Development Environment

Microsoft does not use a separate SDK for HoloLens development. All you will require is Visual Studio and the Windows 10 SDK. The following section lists the ideal hardware requirements for a development machine.

Hardware requirements

  • Operating system – 64 Bit Windows 10 (Pro, Enterprise, or Education)
  • CPU – 64-bit, 4 cores; GPU supporting DirectX 11.0 or later
  • RAM – 8 GB or more
  • Hypervisor – Support for Hardware-assisted virtualization

Once the hardware is sorted, you will need to install the following tools for HoloLens application development:

Software requirements

  • Visual Studio 2017 or Visual Studio 2015 (Update 3) – While installing Visual Studio, make sure that you have selected the Universal Windows Platform development workload and the Game Development with Unity workload. You may have to repair/modify Visual Studio if it is already installed on your machine without these workloads.
  • HoloLens Emulator – The HoloLens emulator can be used to test a Mixed reality application without deploying it to the device. The latest version of the emulator can be downloaded from the Microsoft website for free. One thing to remember before you install the emulator is to enable the hypervisor on your machine; this may require changes in your system BIOS.
  • Unity – Unity is your front-end game world modelling tool. Unity 5.6 (2017.1) or above supports HoloLens application development. Configuring the Unity platform for a HoloLens application is covered in later sections of this blog.

Configuring Unity

It is advisable to take some basic training in Unity before jumping into HoloLens application development. Unity offers tons of free training materials online. The following link should lead you to a good starting point:

https://unity3d.com/learn/tutorials

Once you are comfortable with the basics of Unity, follow the steps listed at the URL below to configure a Unity project for a Microsoft Mixed reality application.

https://developer.microsoft.com/en-us/windows/mixed-reality/unity_development_overview

The following are a few things to note while performing the configuration:

  • Build settings – While configuring the build settings, it is advisable to enable the ‘Unity C# project’ checkbox to generate a Visual Studio solution which can be used for debugging your application.

hldev2

  • Player settings – apart from the publishing capabilities (Player settings -> Publish settings -> Capabilities) listed in the link above, your application may require specific capabilities, such as the ability to access the picture library or the Bluetooth channel. Make sure that the capabilities required by your application are selected before building the solution.

hldev3

  • Project settings – Although the above-mentioned link recommends the quality to be set to fastest, this might not be the appropriate setting for all applications. It may be good to start with the quality flag set to ‘Fastest’ and then update the flag if required based on your application needs.

hldev4

Once these settings are configured, you are good to create your game world for the HoloLens application. The Unity project can then be built to generate a Visual Studio solution. The build operation will pop up a file dialog box to choose the folder where you want the Visual Studio solution to be created; it is recommended to create a new folder for it.

Working with Visual Studio

Irrespective of the frontend tool used to create the game world, a HoloLens application will require Visual Studio to deploy and debug the application. The application can be deployed to the HoloLens emulator for development purposes.

The below link details the steps for deploying and debugging an Augmented/Mixed reality application on HoloLens using Visual Studio.

https://developer.microsoft.com/en-us/windows/mixed-reality/using_visual_studio

The following are a few tips that may help you set up and deploy the application effectively:

  • Optimizing the deployment – Deploying the application over USB is multiple times faster than deploying it over Wi-Fi. Ensure that the architecture is set to 'x86' before you start the deployment.

hldev5

  • Application settings – You will observe that the solution has a Universal Windows Project set as your start-up project. To change the application properties such as application name, description, visual assets, application capabilities, etc., you can update the package manifest.

The following are the steps:

  1. Click on project properties

hldev6

2. Click on Package Manifest

hldev7

3. Use the ‘Application’, ‘Visual Assets’ and ‘Capabilities’ tabs to manage your application properties.

hldev8

The HoloLens application development toolset comes with another powerful tool, the Windows Device Portal, for managing your HoloLens and the applications installed on it. More about this in my next blog.

 

 

Polycom VVX 310 – Unable to do blind transfer internally

I've been working with a Skype for Business (SfB) customer recently, where I met a unique issue, and I would like to share the experience of how I solved the problem.

Issue Description:

The customer's Polycom handsets were unable to transfer external calls to a particular internal 4-digit number range, xxxx. All the agent phones are VVX 310s, and agents sign in via extension and PIN. When the call transfer failed, callers heard the placid recorded female voice: "We're sorry, your call cannot be completed as dialled. Please check the number and dial again." Interestingly, the failures only happened on blind transfers, while supervised transfers worked perfectly. The Polycom handsets could successfully make and receive direct calls. This didn't make much sense to me.

Investigations:

Firstly, I went through all the SfB dial plans and gateway routing and transformation rules. The number range was correctly configured, and nothing was different from the other ranges.

I upgraded the firmware on one of the Polycom handsets to the latest SfB-enabled version, V5.5. It didn't make any improvement; the result was still the same.

I thought the Digitmap settings in the configuration file might be causing the issue, so I logged into the web interface (Settings -> SIP -> Digitmap), removed the regex in the Digitmap field, and rebooted the phone. Still no luck when doing a blind transfer to the internal number range xxxx; it failed again. :/

An interesting thing happened when I tested with the SfB client: I logged in as two users within the number range xxxx and did the same blind transfer, and it worked! When I transferred from the SfB client to the Polycom handset, it also worked. But it stopped working when I did the transfer from the Polycom handset.

Since I could hear the Telco's recorded message, I thought it would be good to run a trace from the Sonus end first to see why the transfer failed. From the live trace, I could see the INVITE number was not what I expected: something went wrong during number normalisation, and the extension was given the wrong prefix. Where did the wrong prefix come from?

I logged into the SfB control panel to re-check the voice routing. It showed nothing wrong with the user dial plan or the user normalisation rules, yet the control panel testing tool gave a different prefix result compared with the prefix in the live trace. Where could it possibly be going wrong? Ext xxx1 is mapped to SfB user 1; when I logged into my SfB client as user 1, everything worked, but when I logged into the Polycom phone as Ext xxx1, the blind transfer to the problematic number range xxxx failed.

All of a sudden, I noticed the global dial plan had a strange prefix configured that matched the prefix (+613868) appearing in the live trace. So I believed that, for some reason, the Polycom handsets use the global dial plan when doing a blind transfer; this may be a bug. The handsets use the global dial plan during a blind transfer while the SfB client uses the user profile dial plan, which explained the behaviour difference between the handsets and the desktop clients.

Solution Summary:

After I created a new entry for the number range xxxx in the global dial plan and rebooted the Polycom phone, blind transfers started working again. The results all looked correct, and I verified the issue was resolved.

 

 

Hopefully this helps someone else who has a similar issue.

 

 

Be SOLID: Uncle Bob

We have discussed STUPID issues in programming. Shared modules and tight coupling lead to dependency issues in design; the SOLID principles address those dependency issues in OOP.

The SOLID acronym was popularised by Robert C. Martin as a set of generic, common-sense design principles for OOP. They mainly address dependencies and tight coupling. We will discuss the SOLID principles one by one and try to relate each of them to the underlying problems and how they solve them.

          S – Single Responsibility Principle – SRP

“There should not be more than one reason for something to exist.”

 img.png

As the name suggests, a module/class etc. should not have more than one responsibility in a system. The more a piece of code is doing, or trying to do, the more fragile, rigid, and difficult to (re)use it gets. Have a look at the code below:

 

class EmployeeService
{
    // constructor(s)

    Add(Employee emp)
    {
        // .....
        using (var db = new <Some Database class/ service>()) // or some SINGLETON or factory call, Database.Get()
        {
            try
            {
                db.Insert(emp);
                db.Commit();
                // more code
            }
            catch (...)
            {
                db.Rollback();
            }
        }
        // ....
    }
}

 

All looks good, yes? There are genuine issues with this code. The EmployeeService has too many responsibilities: database handling should not be one of this class's concerns. Because of the baked-in database handling details, the EmployeeService has become rigid and harder to reuse or extend (for multiple databases, for example). It's like a Swiss Army knife; it looks convenient, but it is rigid and inextensible.

Let’s KISS (keep it simple, stupid) it a bit.

 

// ...
Database db = null;

public EmployeeService ()
{
    // ...
    db = Database.Get(); // or a constructor etc.
    // ..
}

Add(Employee emp)
{
    // ...
    db.Add<Employee>(emp);
    // ..
}

 

We have removed the database handling details from the EmployeeService class. This makes the code a bit cleaner and more maintainable. It also ensures that everything is doing its job and its job only. Now the class cares less about how the database is handled and more about the Employee, its true purpose.

Also note that SRP does not mean a structure/class will only have a single function/property etc. It means a piece of code should only have one responsibility related to the business: an entity service should only be concerned with handling entities and not anything else, like database handling, logging, or handling sub-entities directly (such as saving an employee's address explicitly).

SRP might increase the total number of classes in a module, but it also increases their simplicity and reusability. This means that in the long run the codebase remains flexible and adaptive to change. Singletons are often regarded as the opposite of SRP because they quickly become God objects doing too many things (the Swiss Army knife again) and introducing too many hidden dependencies into a system.

           O – Open Close Principle – OCP

“Once done, don’t change it, extend it”

motorcycle-sidecar

A class in a system must not be open to any changes except bug fixes. That means we should not modify a class to add new features/functionality to it. At first this does not sound practical, because every class evolves with the business it represents. The OCP says that to add new features, classes must be extended (open) instead of modified (closed). This makes abstractions part of a business need for adding new features to classes, instead of just a nice-to-have.

Developing our classes in the form of abstractions (interfaces/abstract classes) provides the flexibility of multiple implementations and greater reusability. It also ensures that once a piece of code is tested, it does not have to go through another cycle of code changes and retesting for new features. Have a look at the EmployeeService class above.

 

class EmployeeService
{
    void Add(Employee emp)
    {
        // ..
        db.Add<Employee>(emp);
        // ...
    }
    // ...
}

 

Now suppose there is a new requirement: an email must be sent to the Finance department if the newly added employee is, say, a contractor. We will have to make changes to this class. Let's redo the service for the new feature.

 

void Add(Employee emp)
{
    // ..
    db.Add<Employee>(emp);

    if (emp.Type == EmployeeType.Contractor)
    {
        // ... send email to finance
    }
    // ...
}
// ...

 

The above, though it seems straightforward and a lot easier, is a code smell. It introduces rigidity and hardwired conditions into a class, and demands retesting of all existing use cases related to EmployeeService on top of the new ones. It also makes the code cluttered and harder to manage and reuse as the requirements evolve over time. Instead, we could be closed to modification and open to extension.

 

interface IEmployeeService
{
    void Add(Employee employee);
    // ...
}

 

And then:

 

class EmployeeService : IEmployeeService
{
    void Add(Employee employee)
    {
        // .. add the employee
    }
}

class ContractorService : IEmployeeService
{
    void Add(Employee employee)
    {
        // add the employee
        // send email to finance
    }
}

 

Of course, instead of the interface we could have an abstract employee service class with a virtual Add method containing the add-the-employee functionality; that would be DRY.

Now, instead of a single EmployeeService class, we have separate classes that are extensions of the IEmployeeService abstraction. This way we can keep adding new features to the service without needing to retest the existing ones. This also removes the unnecessary clutter and rigidity from the code and makes it more reusable.

          L – Liskov Substitution Principle – LSP

“If your duck needs batteries, it’s not a duck”

So Liskov worded the principle as:

               If for each object obj1 of type S, there is an object obj2 of type T, such that for all programs P defined in terms of T, the behaviour of P is unchanged when obj1 is substituted for obj2 then S is a subtype of T

Sounds too complex? I know. Let us say that in English instead.

If we have a piece of code using an object of class Parent, then the code should not have any issues if we replace the Parent object with an object of its Child, where Child inherits from Parent.

likso1.jpg

Take the same employee service code and try to add a new feature to it: get leaves for an employee.

 

interface IEmployeeService
{
    void Add(Employee employee);
    int GetLeaves(int employeeId);
    // ...
}

class EmployeeService : IEmployeeService
{
    void Add(Employee employee)
    {
        // .. add the employee
    }

    int GetLeaves(int employeeId)
    {
        // Calculate and return leaves
    }
}

class ContractorService : IEmployeeService
{
    void Add(Employee employee)
    {
        // add the employee
        // send email to finance
    }

    int GetLeaves(int employeeId)
    {
        // throw some exception
    }
}

 

Since the ContractorService has no business need to calculate leaves, its GetLeaves method just throws a meaningful exception. Makes sense, right? Now let's see the client code using these classes, with IEmployeeService as the Parent and EmployeeService and ContractorService as its children.

 

IEmployeeService employeeService = new EmployeeService();
IEmployeeService contractorService = new ContractorService();

employeeService.GetLeaves(<id>);
contractorService.GetLeaves(<id2>);

 

The second GetLeaves call will throw an exception at RUNTIME. At this level, it does not mean much. So what? Just don't invoke GetLeaves if it's a ContractorService. OK, let's modify the client code a little to highlight the problem even more.

 

List<IEmployeeService> employeeServices = new List<IEmployeeService>();
employeeServices.Add(new EmployeeService());
employeeServices.Add(new ContractorService());
CalculateMonthlySalary(employeeServices);
// ..

void CalculateMonthlySalary(IEnumerable<IEmployeeService> employeeServices)
{
    foreach (IEmployeeService eService in employeeServices)
    {
        int leaves = eService.GetLeaves(<id>); // this will break on the second iteration
        // ... bla bla
    }
}

 

The above code will break the moment it tries to invoke GetLeaves in that loop the second time. The CalculateMonthlySalary knows nothing about ContractorService and only understands IEmployeeService, as it should. But its behaviour changes (it breaks) when a child of IEmployeeService (ContractorService) is used, at runtime. Let’s solve this:

 

interface IEmployeeService
{
    void Add(Employee employee);
    // ...
}

interface ILeaveService
{
    int GetLeaves(int employeeId);
    // ...
}

class EmployeeService : IEmployeeService, ILeaveService
{
    void Add(Employee employee)
    {
        // .. add the employee
    }

    int GetLeaves(int employeeId)
    {
        // Calculate and return leaves
    }
}

class ContractorService : IEmployeeService
{
    void Add(Employee employee)
    {
        // add the employee
        // send email to finance
    }
}

 

Now the client code to calculate leaves will be

 

void CalculateMonthlySalary(IEnumerable<ILeaveService> leaveServices)

 

and voilà, the code is as smooth as it gets. The moment we try to do:

 

List<ILeaveService> leaveServices = new List<ILeaveService>();
leaveServices.Add(new EmployeeService());
leaveServices.Add(new ContractorService()); // Compile-time error
CalculateMonthlySalary(leaveServices);

 

it will give us a compile-time error, because the method CalculateMonthlySalary now expects an IEnumerable of ILeaveService to calculate employees' leaves; we have a List of ILeaveService, but ContractorService does not implement ILeaveService. A version that compiles only adds ILeaveService implementations to the list:

 

List<ILeaveService> leaveServices = new List<ILeaveService>();
leaveServices.Add(new EmployeeService());
leaveServices.Add(new EmployeeService());
CalculateMonthlySalary(leaveServices);

 

LSP helps fine-grain the business requirements and operational boundaries of the code. It also helps identify the responsibilities of a piece of code and the kind of resources it needs to do its job. This reinforces SRP, enhances decoupling and reduces useless dependencies (CalculateMonthlySalary does not care about the whole IEmployeeService anymore, and only depends upon ILeaveService).

Breaking down responsibilities can sometimes be hard with complex business requirements, and LSP also tends to increase the number of isolated code units (classes, interfaces etc.). But its value becomes apparent in simple, carefully designed structures where tight coupling and duplication are avoided.

          I – Interface Segregation Principle – ISP

"Don't give me something I don't need"

 oop-principles

In LSP we saw that the method CalculateMonthlySalary had no use for the complete IEmployeeService; it only needed a subset of it, the GetLeaves method. This is, in its basic form, the ISP. It asks us to identify the resources a piece of code needs to do its job and then provide only those resources, nothing more. ISP finds the real dependencies in code and eliminates unwanted ones. This greatly helps decouple code, helps in recognising code dependencies, and ensures code isolation and security (CalculateMonthlySalary no longer has any access to the Add method).

ISP advocates module customization based on OCP; identify requirements and isolate code by creating smaller abstractions, instead of modifications. Also, when we fine-grain pieces of code using ISP, the individual components become smaller. This increases their testability, manageability and reusability. Have a look:

 

class Employee
{
    // ...
    string Id;
    string Name;
    string Address;
    string Email;
    // ...
}

void SendEmail(Employee employee)
{
    // .. uses Name and Email properties only
}

 

The above is a violation of ISP. The method SendEmail has no use for the whole Employee class; it only uses a name and an email address to send out emails, yet it depends on the Employee class definition. This introduces an unnecessary dependency into the system, even if it seems small at first. The SendEmail method can now only be used for Employees and nothing else: no reusability. It also has access to all the other features of Employee without needing them: poor security and isolation. Let's rewrite it.

 

void SendEmail(string name, string email)
{
    // .. sends email of whatever
}

 

Now the method does not care about any changes in the Employee class: the dependency has been identified and isolated. It can be reused and tested with anything, instead of just Employee. In short, don't be misled by the word Interface in ISP; the principle applies everywhere, not just to interfaces.

          D – Dependency Inversion Principle – DIP

To exist, I did not depend upon my sister, and my sister not upon me. We both depended upon our parents

Remember the example we discussed in SRP, where we introduced Database class into the EmployeeService.

 

class EmployeeService : IEmployeeService, ILeaveService
{
    Database db = null;

    public EmployeeService ()
    {
        // ...
        db = Database.Get(); // or a constructor etc.; the EmployeeService is dependent upon Database
        // ..
    }

    Add(Employee emp)
    {
        // ...
        db.Add<Employee>(emp);
        // ..
    }
}

 

The DIP dictates that:

               No high-level modules (EmployeeService) should be dependent upon any low-level modules (Database), instead both should depend upon abstractions. And abstractions should not depend upon details, details should depend upon abstractions.

 dip

The EmployeeService here is a high-level module that is using, and dependent upon, a low-level module, Database. This introduces a hidden dependency on the Database class and increases the coupling between EmployeeService and Database. The client code using EmployeeService must now have access to the Database class definition, even though it is not exposed to it and does not apparently know that the Database class/service/factory/interface exists.

Also note that it does not matter whether we use a singleton, a factory, or a constructor to get a Database instance. Inverting a dependency does not mean replacing its constructor with a service/factory/singleton call, because then the dependency is just transformed into another class/interface while remaining hidden. One change in Database.Get, for example, could have unforeseen implications for the client code using EmployeeService, without it knowing. This makes the code rigid and tightly coupled to details, difficult to test and almost impossible to reuse.

Let’s change it a bit.

 

class EmployeeService : IEmployeeService, ILeaveService
{
    Database db = null;

    public EmployeeService (Database database)
    {
        // ...
        db = database;
        // ..
    }

    Add(Employee emp)
    {
        // ...
        db.Add<Employee>(emp);
        // ..
    }
    // ...
}

 

We have moved the acquisition of the Database to a constructor argument (the db variable remains class-level). Now EmployeeService is not dependent upon the details of Database instantiation. This solves one problem, but EmployeeService is still dependent upon a low-level module (Database). Let's change that:

 

IDatabase db = null;

public EmployeeService (IDatabase database)
{
    // ...
    db = database;
    // ..
}

 

We have replaced Database with an abstraction (IDatabase). EmployeeService no longer depends on, nor cares about, any details of Database; it only cares about the abstraction IDatabase. The Database class will implement the IDatabase abstraction (details depend upon abstractions).

Now the actual database implementation can be replaced at any time (for testing, with a mock, or with any other database implementation) as per the requirements, and the service will not be affected.

We have covered, in some detail, the SOLID design principles, with a few examples to understand the underlying problems and how they can be solved using these principles. As can be seen, most of the time a SOLID principle is just common sense: not making STUPID mistakes in designs and giving the business requirements some thought.

 

HoloLens – understanding the device

HoloLens is without doubt the coolest product launched by Microsoft since Kinect. Before looking at the device, let's quickly familiarise ourselves with the domain of Mixed reality and how it differs from Virtual and Augmented reality.

VR, AR and MR

Virtual reality, the first of the lot, is the concept of creating a virtual world around the user: everything the user sees or hears is simulated. The concept is not new; a simpler form of virtual reality was achieved back in the 18th century using panoramic paintings and stereoscopic photo viewers. Probably the first commercial virtual reality application was the "Link Trainer", a flight simulator invented in 1929.

The first head-mounted headset for Virtual reality was invented in 1960 by Morton Heilig. This invention enabled the user to be mobile, introducing possibilities of better integration with their surroundings. The result was an era of sensor-packed headsets which can track movement and sense depth, heat, geographical coordinates and so on.

These head-mounted devices then became capable of projecting 3D and 2D information onto see-through screens. This concept of overlaying content on the real world was termed Augmented reality, first introduced in 1992 at Boeing, where it was implemented to assist workers assembling wire bundles.

The use cases for Augmented reality strengthened over the following years. Once virtual objects were being projected onto the real world, the possibility of these objects interacting with real-world objects started gaining focus. This brought about the concept called Mixed reality. Mixed reality can be portrayed as the successor of Augmented reality, where the virtual objects projected into the real world are anchored to, and interact with, real-world objects. HoloLens is one of the most powerful devices on the market today that can cater to Augmented and Mixed reality applications.

Birth of the HoloLens – Project Baraboo

After Windows Vista, repairing the Amazon forest and Project Natal (popularly known as Kinect), Alex Kipman (Technical Fellow, Microsoft) decided to focus his time on a machine which could not only see what a person sees but also understand the environment and project things onto the person's line of vision. While building this device, Kipman was keen on preserving the peripheral vision of the user to ensure that he or she does not feel blindfolded. He used the knowledge around depth sensing and object recognition from his previous invention, Kinect.

The end product was a device with an array of cameras, microphones and other smart sensors, all feeding information to a specially crafted processing module which Microsoft calls the Holographic Processing Unit (HPU). The device is capable of mapping its surroundings and understanding the depth of the world in its field of vision. It can be controlled by gestures and by voice. The user's head acts as the pointing device, with a cursor that shines in the middle of the viewport. The HoloLens is also a fully functional Windows 10 computer.

The Hardware

The following are the details of the sensors built into the HoloLens:

HoloSense

  • Inertial measurement unit (IMU) – The HoloLens IMU consists of an accelerometer, gyroscope, and a magnetometer to help track the movements of the user.
  • Environment sensing cameras – The device comes with 4 environment-sensing cameras used to recognise the orientation of the device and for spatial mapping
  • Depth Camera – The depth camera in this device is used for finger tracking and for spatial mapping
  • HD Video camera – A generic high definition camera which can be used by applications to capture video stream
  • Microphones – The device is fitted with an array of 4 microphones to capture voice commands and sound from 360 degrees
  • Ambient light sensor – A sensor used to capture the light intensity from the surrounding environment

The HoloLens also comes with the following built-in processing units, memory and storage:

HoloChip

  • Central Processing Unit – Intel Atom 1.04 GHz processor with 4 logical processors.
  • Holographic Processing Unit – HoloLens Graphics processor based on Intel 8086h architecture
  • High Speed memory – 2 GB RAM and 114 MB dedicated Video Memory
  • Storage – 64 GB flash memory.

HoloLens supports Wi-Fi (802.11ac) and Bluetooth (4.1 LE) communication channels. The headset also comes with a 3.5 mm audio jack and a Micro USB 2.0 multi-purpose port. The device has a battery life of nearly 3 hours when fully charged.

More about the software and development tools in my next blog.