HoloLens – Setting up the Development environment

HoloLens is undoubtedly a powerful invention in the field of Mixed reality. Like any other revolutionary invention, the success of a technology largely depends upon its usability and ease of adoption. This is what makes the software development kits (SDKs) and application programming interfaces (APIs) associated with a technology so critical. Microsoft has been very smart on this front when it comes to HoloLens. Rather than re-inventing the wheel, it has integrated the HoloLens development model with the existing, popular game engine ‘Unity’ for modelling the frontend of a Mixed reality application.

Unity is a cross-platform game engine first released in 2005. Microsoft’s collaboration with Unity was first announced in 2013, when Unity started supporting the Windows 8, Windows Phone 8 and Xbox One development tools. Unity’s support for Microsoft HoloLens was announced in 2015, and since then it has been the Microsoft-recommended platform for developing game worlds for Mixed reality applications.

With Unity in place for building the game world, the next piece of the puzzle was to choose the right platform for scripting the business logic, and for deploying and debugging the application. For obvious reasons, such as flexibility in code editing and troubleshooting, Microsoft Visual Studio was chosen as the platform for authoring HoloLens applications. Visual Studio Tools for Unity is integrated into the Unity platform, enabling it to generate a Visual Studio solution for the game world. Scripts containing the operational logic for the Mixed reality application can then be coded in C# in Visual Studio.
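To give a flavour of what such a script looks like, below is a minimal sketch of a Unity C# script; the class name and rotation speed are illustrative only, and the script simply spins whatever game object it is attached to.

using UnityEngine;

// Hypothetical example: spins the game object this script is attached to.
public class Rotator : MonoBehaviour
{
    // Degrees per second – purely illustrative.
    public float speed = 30f;

    void Update()
    {
        // Called by Unity once per frame.
        transform.Rotate(Vector3.up, speed * Time.deltaTime);
    }
}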

HoloLens Application Architecture

A typical Mixed reality application has multiple tiers: the frontend game world, which is built using Unity; the application logic that powers the game objects, which is coded in C# using Visual Studio; and backend services that encapsulate the business logic. The following diagram illustrates the architecture of such an application.

[Image: HoloLens application architecture diagram]

In the above diagram, the box towards the right (Delegated services) is an optional tier for a Mixed reality application. However, many real-world scenarios in the space of Augmented and Mixed reality are powered by strong backend systems such as machine learning and data analytics services.
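As a rough illustration of how the Unity frontend might call such a delegated service, here is a minimal sketch using Unity’s UnityWebRequest inside a coroutine. The endpoint URL is a placeholder, and the exact API names (for example SendWebRequest versus the older Send) vary between Unity versions, so treat this as a sketch rather than production code.

using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

// Hypothetical example of the game world calling a backend (delegated) service.
public class InsightsClient : MonoBehaviour
{
    // Placeholder endpoint – replace with your own service URL.
    private const string ServiceUrl = "https://example.com/api/insights";

    private IEnumerator GetInsights()
    {
        using (UnityWebRequest request = UnityWebRequest.Get(ServiceUrl))
        {
            // Wait for the response without blocking the frame loop.
            yield return request.SendWebRequest();

            if (request.isNetworkError || request.isHttpError)
            {
                Debug.LogError(request.error);
            }
            else
            {
                // Hand the payload to whatever game object needs it.
                Debug.Log(request.downloadHandler.text);
            }
        }
    }
}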

Development Environment

Microsoft does not use a separate SDK for HoloLens development. All you will require are Visual Studio and the Windows 10 SDK. The following section lists the ideal hardware requirements for a development machine.

Hardware requirements

  • Operating system – 64 Bit Windows 10 (Pro, Enterprise, or Education)
  • CPU – 64 Bit, 4 cores. GPU supporting DirectX 11.0 or later
  • RAM – 8 GB or more
  • Hypervisor – Support for Hardware-assisted virtualization

Once the hardware is sorted, you will need to install the following tools for HoloLens application development:

Software requirements

  • Visual Studio 2017 or Visual Studio 2015 (Update 3) – While installing Visual Studio, make sure that you have selected the Universal Windows Platform development workload and the Game Development with Unity workload. You may have to repair/modify Visual Studio if it is already installed on your machine without these workloads.
  • HoloLens Emulator – The HoloLens emulator can be used to test a Mixed reality application without deploying it to the device. The latest version of the emulator can be downloaded from the Microsoft web site for free. One thing to remember before you install the emulator is to enable the hypervisor on your machine; this may require changes in your system BIOS.
  • Unity – Unity is your front-end game world modelling tool. Unity 5.6 or later (including Unity 2017.1) supports HoloLens application development. Configuring the Unity platform for a HoloLens application is covered in later sections of this blog.

Configuring Unity

It is advisable to complete some basic Unity training before jumping into HoloLens application development. Unity offers tons of free training materials online. The following link should lead you to a good starting point:

https://unity3d.com/learn/tutorials

Once you are comfortable with the basics of Unity, follow the steps listed at the URL below to configure a Unity project for a Microsoft Mixed reality application.

https://developer.microsoft.com/en-us/windows/mixed-reality/unity_development_overview

Following are a few things to note while performing the configuration:

  • Build settings – While configuring the build settings, it is advisable to enable the ‘Unity C# Projects’ checkbox to generate a Visual Studio solution which can be used for debugging your application.

[Image: Unity build settings]

  • Player settings – Apart from the publishing capabilities (Player settings -> Publish settings -> Capabilities) listed in the link above, your application may require specific capabilities such as access to the picture library or the Bluetooth channel. Make sure that the capabilities required by your application are selected before building the solution.

[Image: Unity player settings – capabilities]

  • Project settings – Although the above-mentioned link recommends setting the quality to ‘Fastest’, this might not be appropriate for every application. It is a good idea to start with the quality flag set to ‘Fastest’ and then adjust it later based on your application’s needs.

[Image: Unity quality settings]

Once these settings are configured, you are good to create your game world for the HoloLens application. The Unity project can then be built to generate a Visual Studio solution. The build operation will pop up a file dialog box asking you to choose a folder where you want the Visual Studio solution to be created. It is recommended to create a new folder for the Visual Studio solution.

Working with Visual Studio

Irrespective of the frontend tool used to create the game world, a HoloLens application will require Visual Studio to deploy and debug the application. The application can be deployed to a HoloLens emulator for development purposes.

The below link details the steps for deploying and debugging an Augmented/Mixed reality application on HoloLens using Visual Studio.

https://developer.microsoft.com/en-us/windows/mixed-reality/using_visual_studio

Following are a few tips which may help you set up and deploy the application effectively:

  • Optimizing the deployment – Deploying the application over USB is many times faster than deploying it over Wi-Fi. Ensure that the architecture is set to ‘x86’ before you start the deployment.

[Image: Visual Studio deployment configuration]

  • Application settings – You will observe that the solution has a Universal Windows Project set as your start-up project. To change the application properties such as application name, description, visual assets, application capabilities, etc., you can update the package manifest.

Following are the steps:

  1. Click on project properties

[Image: Project properties]

2. Click on Package Manifest

[Image: Package manifest]

3. Use the ‘Application’, ‘Visual Assets’ and ‘Capabilities’ tabs to manage your application properties.

[Image: Application, Visual Assets and Capabilities tabs]

The HoloLens application development toolset comes with another powerful tool called ‘Windows Device Portal’ to manage your HoloLens and the applications installed on it. More about this in my next blog.

 

 

How to Pass Parameters into a PowerApp

I was recently trying to replicate some functionality I’m used to with InfoPath, but this time using a PowerApp. I was looking for a way to get parameters from outside into a PowerApp; surprisingly, this is not well documented, but it is actually quite easy to do. I’ll walk you through the process below.

Note, this isn’t about passing parameters between screens in a PowerApp; that’s well covered in several blog posts and forums.

First off, start by creating an app. Below I’ve created my “ColinTestApp” App. Next, go into the “Information” pane on the far right (highlighted by the arrow).

In there you’ll find the “Web link”. This is what you’ll use to pass variables into your app via a web URL. Note the App GUID is in the URL – this is the unique identifier for your app.

Your Web link will look something like the below (where the GUID of your App replaces [APP GUID]).

https://web.powerapps.com/apps/[APP GUID]

To pass parameters to the App, you simply add URL parameters like this below.

https://web.powerapps.com/apps/[APP GUID]?ID=123&Ball=round&River=water

In this case, we’re passing 3 named parameters: “ID”, “Ball”, and “River” (remember, in a URL the query string starts with a question mark (?), and each additional “name=value” pair is appended with an ampersand (&)).

To use these in your PowerApp, you simply refer to them by their parameter name using the “Param” method, like below.

Param("[PARAM NAME]")

For example, in my case, I used it as a start variable on my initial screen. This allowed me to feed the PowerApp a dynamic value (the ID) of the item I wanted to load at start-up. In this scenario, the data was being fed from a SharePoint list, and at load time I was able to load the specific SharePoint list item I wanted into my start screen.

First(Filter(MySharePointListDataSource,ID=Param("ID")))

The above formula basically filters the data source (a SharePoint list) on its “ID” column using the ID parameter, and then from that set I select the first item, just for good measure (even though there should never be more than one matching ID for any given SharePoint list item).

In the above screen, just make sure you have “DetailForm1” selected (which will allow you to set the DATA pane and set your data source for the form), and then be sure that 1 & 2 are the same value (i.e. if your data source (SharePoint list) is “Products”, be sure that Products is in both #1 and #2 above).

Note, I’d previously set up a DataSource that draws from my SharePoint list.

So in my scenario “TestList” would be in both #1 and #2 above.

Hopefully this helps some of you out in PowerApp land!

Restoring deleted OneDrive sites in Office365

A customer asked whether it was possible to restore a OneDrive site that had been deleted when the user’s account was marked for deletion in AD. After a bit of research, I was able to restore the site and retrieve the files (luckily it had been deleted less than 30 days ago).


Polycom VVX 310 – Unable to do blind transfer internally

I’ve been working with a Skype for Business (SfB) customer recently. I ran into some unique issues and would like to share the experience of how I solved the problem.

Issue Description:

The customer was experiencing Polycom handsets being unable to transfer external calls to a particular internal 4-digit number range, xxxx. All the agent phones are VVX 310s and agents sign in via extension & PIN. When the call transfer failed, callers heard the placid recorded female voice: “We’re sorry, your call cannot be completed as dialled. Please check the number and dial again.” The interesting thing is that the failure only occurred with blind transfers, while supervised transfers worked perfectly. The Polycom handsets could successfully make and receive direct calls, which made the behaviour all the more puzzling.

Investigations:

Firstly, I went through all the SfB dial plans and gateway routing and transformation rules. The number range was correctly configured and nothing was different from the other ranges.

I upgraded the firmware to the latest V5.5 SfB-enabled version on one of the Polycom handsets. It didn’t make any difference; the result was still the same.

I suspected the Digitmap settings in the configuration file might be causing the issue, so I logged into the web interface -> Settings -> SIP -> Digitmap, removed the regex in the Digitmap field, and rebooted the phone. Still no luck – blind transfers to the internal number range xxxx failed again. :/

An interesting thing happened when I tested with the SfB client: logging in as two users within the number range xxxx and doing the same blind transfer worked! Transferring from the SfB client to the Polycom handset also worked. But it stopped working when I did the transfer from the Polycom handset.

Since I could hear the Telco’s recorded message, I thought it would be good to run a trace from the Sonus end first to see why the transfer failed. From the live trace, I could see that the invite number was not what I expected. Something went wrong when the number normalization happened: the extension was given the wrong prefix. Where did the wrong prefix come from?

I logged into the SfB Control Panel to re-check the voice routing. It showed nothing wrong with the user dial plan and user normalization rules. However, the Control Panel testing tool gave a different prefix result compared with the prefix in the live trace. Where could this possibly go wrong? Ext xxx1 is mapped to SfB user 1; when I log in to my SfB client as user 1 everything works, but when I log in to the Polycom phone as Ext xxx1, the blind transfer fails when I transfer to the problematic number range xxxx.

All of a sudden, I noticed the global dial plan had a strange prefix configured which matched the prefix (+613868) appearing in the live trace. So I believed that, for some reason, the Polycom handsets use the global dial plan when performing a blind transfer – this may be a bug. The handsets use the global dial plan during a blind transfer while the SfB client uses the user profile dial plan, which explains the behaviour difference between the handsets and the desktop clients.

Solution Summary:

I created a new entry for the number range xxxx in the global dial plan, rebooted the Polycom phones, and blind transfers started working again. The results looked correct and the issue was verified as resolved.

 

 

Hopefully this helps someone else who has similar issues.

 

 

Be SOLID: Uncle Bob

We have discussed STUPID issues in programming. Shared modules and tight coupling lead to dependency issues in design. The SOLID principles address those dependency issues in OOP.

The SOLID acronym was popularized by Robert Martin as a set of generic, common-sense design principles for OOP. They mainly address dependencies and tight coupling. We will discuss the SOLID principles one by one and try to relate each of them to the underlying problems and how they try to solve them.

          S – Single Responsibility Principle – SRP

“There should not be more than one reason for something to exist.”


As the name suggests, a module, class, etc. should not have more than one responsibility in a system. The more a piece of code is doing, or trying to do, the more fragile, rigid, and difficult to (re)use it gets. Have a look at the code below:

 

class EmployeeService
{
    // constructor(s)

    public void Add(Employee emp)
    {
        // .....
        using (var db = new SomeDatabaseService()) // placeholder for your database class/service, or some SINGLETON or factory call, e.g. Database.Get()
        {
            try
            {
                db.Insert(emp);
                db.Commit();
                // more code
            }
            catch (Exception)
            {
                db.Rollback();
            }
        }
        // ....
    }
}

 

All looks good, yes? Actually, there are genuine issues with this code. The EmployeeService has too many responsibilities; database handling should not be one of them. Because the database handling details are baked in, the EmployeeService has become rigid and harder to reuse or extend – for multiple databases, for example. It’s like a Swiss Army knife; it looks convenient but is very rigid and inextensible.

Let’s KISS (keep it simple, stupid) it a bit.

 

// ...
Database db = null;

public EmployeeService()
{
    // ...
    db = Database.Get(); // or a constructor etc.
    // ..
}

public void Add(Employee emp)
{
    // ...
    db.Add<Employee>(emp);
    // ..
}

 

We have removed the database handling details from the EmployeeService class. This makes the code a bit cleaner and more maintainable. It also ensures that everything is doing its job and its job only. Now the class cares less about how the database is handled and more about Employee, its true purpose.

Also note that SRP does not mean a structure/class will only have a single function/property etc. It means a piece of code should only have one responsibility related to the business: an entity service should only be concerned with handling entities and nothing else like database handling, logging, or handling sub-entities directly (like saving an employee’s address explicitly).

SRP might increase the total number of classes in a module, but it also increases their simplicity and reusability. This means that in the long run the codebase remains flexible and adaptive to changes. Singletons are often regarded as the opposite of SRP because they quickly become God objects doing too many things (the Swiss Army knife) and introducing too many hidden dependencies into a system.

           O – Open Close Principle – OCP

“Once done, don’t change it, extend it”


A class in a system must not be open to any changes, except bug fixing. That means we should not introduce changes to a class to add new features/functionality to it. Now this does not sound practical, because every class will evolve along with the business it represents. The OCP says that to add new features, classes must be extended (open) instead of modified (closed). This introduces abstractions as a part of the business need to add new features to classes, instead of just a fancy nice-to-have.

Developing our classes in the form of abstractions (interfaces/abstract classes) provides multiple-implementation flexibility and greater reusability. It also ensures that once a piece of code is tested, it does not go through another cycle of code changes and retesting for new features. Have a look at the EmployeeService class above.

 

class EmployeeService
{
    void Add(Employee emp)
    {
        // ..
        db.Add<Employee>(emp);
        // ...
    }
    // ...
}

 

Now suppose there is a new requirement that an email should be sent to the Finance department if the newly added employee is a contractor. We would have to make changes to this class. Let’s redo the service for the new feature.

 

void Add(Employee emp)
{
    // ..
    db.Add<Employee>(emp);

    if (emp.Type == EmployeeType.Contractor)
    {
        // ... send email to finance
    }
    // ...
}
// ...

 

The above, though it seems straightforward and a lot easier, is a code smell. It introduces rigid code and hardwired conditioning into a class, which would demand retesting all existing use cases related to EmployeeService on top of the new ones. It also makes the code cluttered and harder to manage and reuse as the requirements evolve with time. Instead, what we could do is be closed to modifications and open to extensions.

 

interface IEmployeeService
{
    void Add(Employee employee);
    // ...
}

 

And then:

 

class EmployeeService : IEmployeeService
{
    void Add(Employee employee)
    {
        // .. add the employee
    }
}

class ContractorService : IEmployeeService
{
    void Add(Employee employee)
    {
        // add the employee
        // send email to finance
    }
}

 

Of course, we could have an abstract employee service class instead of the interface, with a virtual Add method containing the common add-the-employee functionality; that would be DRY.
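A minimal sketch of that alternative is below; the base class name is illustrative and the method bodies are only placeholders.

abstract class EmployeeServiceBase
{
    public virtual void Add(Employee employee)
    {
        // .. the common add-the-employee functionality lives here once (DRY)
    }
}

class ContractorService : EmployeeServiceBase
{
    public override void Add(Employee employee)
    {
        base.Add(employee); // reuse the common logic
        // send email to finance
    }
}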

Now instead of a single EmployeeService class we have separate classes that are extensions of the EmployeeService abstraction. This way we can keep adding new features to the service without needing to retest any existing ones. It also removes unnecessary clutter and rigidness from the code and makes it more reusable.

          L – Liskov Substitution Principle – LSP

“If your duck needs batteries, it’s not a duck”

So Liskov worded the principle as:

               If for each object obj1 of type S, there is an object obj2 of type T, such that for all programs P defined in terms of T, the behaviour of P is unchanged when obj1 is substituted for obj2 then S is a subtype of T

Sounds too complex? I know. Let us say that in English instead.

If we have a piece of code using an object of class Parent, then the code should not have any issues, if we replace Parent object with an object of its Child, where Child inherits Parent.


Take the same employee service code and try to add a new feature to it: getting the leaves for an employee.

 

interface IEmployeeService
{
    void Add(Employee employee);
    int GetLeaves(int employeeId);
    // ...
}

class EmployeeService : IEmployeeService
{
    void Add(Employee employee)
    {
        // .. add the employee
    }

    int GetLeaves(int employeeId)
    {
        // calculate and return leaves
    }
}

class ContractorService : IEmployeeService
{
    void Add(Employee employee)
    {
        // add the employee
        // send email to finance
    }

    int GetLeaves(int employeeId)
    {
        // throw some exception
    }
}

 

Since the ContractorService does not have any business need to calculate leaves, the GetLeaves method just throws a meaningful exception. Makes sense, right? Now let’s see the client code using these classes, with IEmployeeService as the parent and EmployeeService and ContractorService as its children.

 

IEmployeeService employeeService = new EmployeeService();
IEmployeeService contractorService = new ContractorService();

employeeService.GetLeaves(<id>);
contractorService.GetLeaves(<id2>);

 

The second GetLeaves call will throw an exception at RUNTIME. At this level, it does not mean much – so what? Just don’t invoke GetLeaves if it’s a ContractorService. Ok, let’s modify the client code a little to highlight the problem even more.

 

List<IEmployeeService> employeeServices = new List<IEmployeeService>();
employeeServices.Add(new EmployeeService());
employeeServices.Add(new ContractorService());

CalculateMonthlySalary(employeeServices);
// ..

void CalculateMonthlySalary(IEnumerable<IEmployeeService> employeeServices)
{
    foreach (IEmployeeService eService in employeeServices)
    {
        int leaves = eService.GetLeaves(<id>); // this will break on the second iteration
        // ... bla bla
    }
}

 

The above code will break the moment it tries to invoke GetLeaves in that loop the second time. The CalculateMonthlySalary knows nothing about ContractorService and only understands IEmployeeService, as it should. But its behaviour changes (it breaks) when a child of IEmployeeService (ContractorService) is used, at runtime. Let’s solve this:

 

interface IEmployeeService
{
    void Add(Employee employee);
    // ...
}

interface ILeaveService
{
    int GetLeaves(int employeeId);
    // ...
}

class EmployeeService : IEmployeeService, ILeaveService
{
    void Add(Employee employee)
    {
        // .. add the employee
    }

    int GetLeaves(int employeeId)
    {
        // calculate and return leaves
    }
}

class ContractorService : IEmployeeService
{
    void Add(Employee employee)
    {
        // add the employee
        // send email to finance
    }
}

 

Now the client code to calculate leaves will be

 

void CalculateMonthlySalary(IEnumerable<ILeaveService> leaveServices)

 

and voilà, the code is as smooth as it gets. The moment we try to do:

 

List<ILeaveService> leaveServices = new List<ILeaveService>();
leaveServices.Add(new EmployeeService());
leaveServices.Add(new ContractorService()); // compile-time error

CalculateMonthlySalary(leaveServices);

 

It will give us a compile-time error, because the method CalculateMonthlySalary is now expecting an IEnumerable of ILeaveService to calculate the leaves of employees, and ContractorService does not implement ILeaveService. The corrected client code would be:

 

List<ILeaveService> leaveServices = new List<ILeaveService>();
leaveServices.Add(new EmployeeService());
leaveServices.Add(new EmployeeService());

CalculateMonthlySalary(leaveServices);

 

LSP helps fine-grain the business requirements and operational boundaries of the code. It also helps identify the responsibilities of a piece of code and the kind of resources it needs to do its job. This reinforces SRP, enhances decoupling and reduces useless dependencies (CalculateMonthlySalary does not care about the whole IEmployeeService anymore, and only depends upon ILeaveService).

Breaking down responsibilities can sometimes be a bit hard with complex business requirements, and LSP also tends to increase the number of isolated code units (classes, interfaces etc.). But its benefit becomes apparent in simple and carefully designed structures where tight coupling and duplication are avoided.

          I – Interface Segregation Principle – ISP

“Don’t give me something I don’t need”


In LSP, we saw that the method CalculateMonthlySalary had no use for the complete IEmployeeService; it only needed a subset of it, the GetLeaves method. This is, in its basic form, the ISP. It asks us to identify the resources needed by a piece of code to do its job and then provide only those resources, nothing more. ISP finds the real dependencies in code and eliminates unwanted ones. This helps decouple code greatly, helps in recognising code dependencies, and ensures code isolation and security (CalculateMonthlySalary does not have any access to the Add method anymore).

ISP advocates module customization based on OCP; identify requirements and isolate code by creating smaller abstractions, instead of modifications. Also, when we fine-grain pieces of code using ISP, the individual components become smaller. This increases their testability, manageability and reusability. Have a look:

 

class Employee
{
    // ...
    string Id;
    string Name;
    string Address;
    string Email;
    // ...
}

void SendEmail(Employee employee)
{
    // .. uses the Name and Email properties only
}

 

The above is a violation of ISP. The method SendEmail has no use for the whole Employee class; it only uses a name and an email to send out emails but is dependent on the Employee class definition. This introduces unnecessary dependencies into the system, though it seems small at the start. Now the SendEmail method can only be used for Employees and nothing else: no reusability. Also, it has access to all the other features of Employee without any need for them, harming security and isolation. Let’s rewrite it.

 

void SendEmail(string name, string email)
{
    // .. sends email of whatever
}

 

Now the method does not care about any changes in the Employee class; dependency identified and isolated. It can be reused and is testable with anything, instead of just Employee. In short, don’t be misguided by the word Interface in ISP. It has its usage everywhere.

          D – Dependency Inversion Principle – DIP

To exist, I did not depend upon my sister, and my sister not upon me. We both depended upon our parents

Remember the example we discussed in SRP, where we introduced the Database class into the EmployeeService:

 

class EmployeeService : IEmployeeService, ILeaveService
{
    Database db = null;

    public EmployeeService()
    {
        // ...
        db = Database.Get(); // or a constructor etc. – the EmployeeService is dependent upon Database
        // ..
    }

    void Add(Employee emp)
    {
        // ...
        db.Add<Employee>(emp);
        // ..
    }
}

 

The DIP dictates that:

               No high-level modules (EmployeeService) should be dependent upon any low-level modules (Database), instead both should depend upon abstractions. And abstractions should not depend upon details, details should depend upon abstractions.


The EmployeeService here is a high-level module that is using, and dependent upon, a low-level module, Database. This introduces a hidden dependency on the Database class and increases the coupling between EmployeeService and the Database. The client code using EmployeeService now must have access to the Database class definition, even though it’s not exposed to it and does not know, apparently, that the Database class/service/factory/interface even exists.

Also note that it does not matter whether we use a singleton, a factory, or a constructor to get a Database instance. Inverting a dependency does not mean replacing its constructor with a service/factory/singleton call, because then the dependency is just transformed into another class/interface while remaining hidden. One change in Database.Get, for example, could have unforeseen implications on the client code using the EmployeeService without its knowledge. This makes the code rigid and tightly coupled to details, difficult to test and almost impossible to reuse.

Let’s change it a bit.

 

class EmployeeService : IEmployeeService, ILeaveService
{
    Database db = null;

    public EmployeeService(Database database)
    {
        // ...
        db = database;
        // ..
    }

    void Add(Employee emp)
    {
        // ...
        db.Add<Employee>(emp);
        // ..
    }
    // ...
}

 

We have moved the acquisition of the Database to an argument of the constructor (since the scope of the db variable is class level). Now EmployeeService is not dependent upon the details of Database instantiation. This solves one problem, but the EmployeeService is still dependent upon a low-level module (Database). Let’s change that:

 

IDatabase db = null;

public EmployeeService(IDatabase database)
{
    // ...
    db = database;
    // ..
}

 

We have replaced the Database with an abstraction (IDatabase). EmployeeService does not depend upon, nor care about, any details of Database anymore; it only cares about the abstraction IDatabase. The Database class will implement the IDatabase abstraction (details depend upon abstractions).
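A minimal sketch of what that abstraction and its implementation could look like is below; the member names are illustrative only.

interface IDatabase
{
    void Add<T>(T entity);
    // ...
}

class Database : IDatabase
{
    public void Add<T>(T entity)
    {
        // .. the actual persistence details live here, behind the abstraction
    }
}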

Now the actual database implementation can be replaced anytime (testing) with a mock or with any other database details, as per the requirements and the Service will not be affected.

We have covered, in some detail, the SOLID design principles, with a few examples to understand the underlying problems and how they can be solved using these principles. As can be seen, most of the time a SOLID principle is just common sense: not making STUPID mistakes in designs and giving the business requirements some thought.

 

Adding Bot to Microsoft Teams

If you have been following my previous blog posts about Bots and integrating LUIS with them, you are almost done building bots and have already had some fun with them. Now it’s time to bring them to life and let internal or external users interact with the Bot via a front-end channel they can access. If you haven’t read my previous posts on the subject yet, please give them a read at Creating a Bot and Creating a LUIS app before reading further.

In this blog post, we will be integrating our previously created intelligent Bots into the Microsoft Teams channel. Following the step-by-step process below, you can add your bot to an MS Teams channel.

Bringing Bot to Life

  1. As a first step, you need to create a package as outlined here and build a manifest as per the schema listed here. This will include your Bot logos and a manifest file as shown below (a simplified, illustrative manifest fragment is also included at the end of this post).

  2. Once the manifest file is created, you need to zip it along with the logos, as shown above, to make a package (*.zip).
  3. Open the Microsoft Teams interface, select the particular team you want to add the Bot to and go to the Manage team section as highlighted below.

  4. Click on the Bots tab, then select Sideload a bot as highlighted and upload your previously created zip file.

  5. Once successful, it will show the bot that you have just added to your selected team as shown below.

  6. If everything went well, your Bot is now ready and available in the team’s conversation window to interact with. To address the Bot, you need to start your message with @BotName to direct it to the Bot, as shown below.

  7. Based on the configuration you have done as part of the manifest file, your command list will be available against your Bot name.

  8. Now you can ask your Bot the questions you have trained your LUIS app with and it will respond as programmed.

  9. You just need to ensure your Bot is programmed to respond to the possible questions your end users may ask.

  10. You can program a bot to acknowledge the user first and then respond in detail to the user’s question. If the response contains multiple records, you can represent it using cards as shown below.

  11. Or if a response requires some additional actions, you can have a link or a button to launch a URL directly from your team conversation.

  12. Besides adding a Bot to teams, you can add tabs to a team as well, which can show any SPA (single page application) or even a dashboard built as per your needs. Below is just an example of what can be achieved using tabs inside MS Teams.

As MS Teams evolves as group chat software, it can be leveraged to build useful integrations that act as a front face to many of an organisation’s needs, with Bots being one example.
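For illustration, a simplified manifest fragment is below. The required fields depend on the manifest schema version linked in step 1, the GUIDs are placeholders, and several mandatory properties (developer details, package name, accent colour and so on) are omitted, so treat it only as a rough sketch rather than a complete manifest.

{
  "manifestVersion": "1.0",
  "version": "1.0.0",
  "id": "00000000-0000-0000-0000-000000000000",
  "name": { "short": "MyBot" },
  "description": {
    "short": "Sample LUIS-backed bot",
    "full": "Sample bot that answers questions using a LUIS app"
  },
  "icons": { "outline": "outline.png", "color": "color.png" },
  "bots": [
    {
      "botId": "00000000-0000-0000-0000-000000000000",
      "scopes": [ "team" ]
    }
  ]
}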

HoloLens – understanding the device

HoloLens is without doubt the coolest product launched by Microsoft since Kinect. Before understanding the device, let’s quickly familiarize ourselves with the domain of Mixed reality and how it is different from Virtual and Augmented reality.

VR, AR and MR

Virtual reality, the first of the lot, is the concept of creating a virtual world around the user. This means everything the user sees or hears is simulated. The concept of virtual reality is not new to us: a simpler form of virtual reality was achieved back in the 18th century using panoramic paintings and stereoscopic photo viewers. Probably the first implementation of a commercial virtual reality application was the “Link Trainer”, a flight simulator invented in 1929.

The first head-mounted display for Virtual reality was invented in 1960 by Morton Heilig. This invention enabled the user to be mobile, thereby introducing possibilities of better integration with their surroundings. The result was an era of sensor-packed headsets which could track movement and sense depth, heat, geographical coordinates and so on.

These head-mounted devices then became capable of projecting 3D and 2D information on see-through screens. This concept of overlaying content on the real world was termed Augmented reality. The concept of Augmented reality was first introduced in 1992 at Boeing, where it was implemented to assist workers in assembling wire bundles.

The use cases around Augmented reality started strengthening over the following years. When virtual objects were projected on the real world, the possibilities of these objects interacting with the real-world objects started gaining focus. This brought in the invention of the concept called Mixed reality. Mixed reality can be portrayed as the successor of Augmented reality where the virtual objects projected in the real world are anchored to and interact with the real-world objects. HoloLens is one of the most powerful devices in the market today which can cater to Augmented and Mixed reality applications.

Birth of the HoloLens – Project Baraboo

After Windows Vista, repairing the Amazon forest, and Project Natal (popularly known as Kinect), Alex Kipman (Technical Fellow, Microsoft) decided to focus his time on a machine which could not only see what a person sees but also understand the environment and project things into the person’s line of vision. While building this device, Kipman was keen on preserving the peripheral vision of the user to ensure that he or she does not feel blindfolded. He used the knowledge around depth sensing and recognizing objects gained from his previous invention, Kinect.

The end product was a device with an array of cameras, microphones and other smart sensors, all feeding information to a specially crafted processing module which Microsoft calls the Holographic Processing Unit (HPU). The device is capable of mapping its surroundings and understanding the depth of the world in its field of vision. It can be controlled by gestures and by voice. The user’s head acts as the pointing device, with a cursor that shines in the middle of the viewport. The HoloLens is also a fully functional Windows 10 computer.

The Hardware

Following are the details of the sensors built into the HoloLens:

[Image: HoloLens sensors]

  • Inertial measurement unit (IMU) – The HoloLens IMU consists of an accelerometer, gyroscope, and magnetometer to help track the movements of the user.
  • Environment sensing cameras – The device comes with 4 environment-sensing cameras used to recognize the orientation of the device and for spatial mapping.
  • Depth camera – The depth camera in this device is used for finger tracking and for spatial mapping.
  • HD video camera – A generic high-definition camera which can be used by applications to capture a video stream.
  • Microphones – The device is fitted with an array of 4 microphones to capture voice commands and sound from 360 degrees.
  • Ambient light sensor – A sensor used to capture the light intensity of the surrounding environment.

The HoloLens also comes with the following built-in processing units, memory, and storage:

[Image: HoloLens processing units]

  • Central Processing Unit – Intel Atom 1.04 GHz processor with 4 logical processors.
  • Holographic Processing Unit – HoloLens Graphics processor based on Intel 8086h architecture
  • High Speed memory – 2 GB RAM and 114 MB dedicated Video Memory
  • Storage – 64 GB flash memory.

HoloLens supports Wi-Fi (802.11ac) and Bluetooth (4.1 LE) communication channels. The headset also comes with a 3.5 mm audio jack and a Micro USB 2.0 multi-purpose port, and the device has a battery life of nearly 3 hours when fully charged.

More about the software and development tools in my next blog.

Deploying Cloud-only mailboxes in Office 365 using On-Premises Directory objects

First published at https://nivleshc.wordpress.com

In this blog, I will show you how to create Cloud-only mailboxes in Exchange Online (Exchange Online is the messaging part of Office 365) that are bound to objects synchronised from your on-premises Active Directory. The Cloud-only approach is different to the Hybrid approach because you do not need an Exchange server deployed in your on-premises environment.

There are a few reasons why you would want to link your Cloud-only mailboxes to your on-premises Active Directory. The most important reason is to ensure you don’t have multiple identities for the same user. Another reason is to provide the notion of single sign-on. This can be established by using the password synchronisation feature of Azure AD Connect (this will be discussed a bit later).

Ok, let’s get started.

The diagram below shows what we will be doing. In a nutshell, we will replicate our on-premises Active Directory objects to Azure AD (these will be filtered so that only required objects are synchronised to Azure AD) using Azure AD Connect Server. Once in Azure AD, we will appropriately license the objects using the Office 365 Admin portal (any license bundle that contains the Exchange Online Plan 2 license is fine. Even Exchange Online Plan 2 by itself is sufficient).

[Image: On-premises AD objects synchronised to Azure AD]

Prepare your Office 365 Tenant

Once you have obtained your Office 365 tenant, add and verify the domain you will be using for your email addresses (for instance, if your email address will be tom.jones@contoso.com, then add contoso.com in Office 365 Admin Center under Setup\Domains). You will be provided with a TXT entry that you will need to use to create a DNS entry under the domain, to prove ownership.

Once you have successfully verified the domain ownership, you will be provided with the MX entry value for the domain. This must be used to create an MX entry DNS record for the domain so that all emails are delivered to Office 365.

Prepare your on-premises Active Directory

You must check to ensure your on-premises Active Directory does not contain any objects that are incompatible with Azure AD. To perform this check, run idFix in your environment.

Note of advice – idFix, by default, runs across all your Active Directory objects. You do not have to fix objects that you won't be synchronising to Azure AD.

It is highly recommended that your on-premises Active Directory user objects have their userPrincipalName (UPN) matched to their primary email address. This will remove any confusion that users might face when accessing the Office 365 services via a web browser, as Office 365 login pages refer to the username as “email address”.

Next, ensure that the E-mail field for all users in Active Directory contains the UPN for the user.

[Image: Active Directory user properties showing the E-mail field]

Deploy and Configure Azure AD Connect Server

Ensure all the prerequisites have been met, as outlined at https://docs.microsoft.com/en-us/azure/active-directory/connect/active-directory-aadconnect-prerequisites

Next, follow the article at https://docs.microsoft.com/en-us/azure/active-directory/connect/active-directory-aadconnect-select-installation to deploy and configure your Azure AD Connect (AADC) Server.

During the configuration of AADC, you will be asked to specify which on-premise Active Directory objects should be synchronised to Azure AD. Instead of synchronising all your on-premise Active Directory objects, choose the Organisational Unit that contains all the users, groups and contacts you want to synchronise to Azure AD.

Choose the Password Synchronisation option while installing the AADC server. This will synchronise your on-premise password hashes to Azure AD, enabling users to use their on-premises credentials to access Office 365 services

At this stage, your AADC server would have already done an initial run, which would have created objects in Azure AD. These are visible using the Office 365 Admin Center.

After the initial sync, AADC runs an automatic synchronisation every 30 minutes to Azure AD

Provision Mailboxes

Now that everything has been done, open Office 365 Admin Center. Under Users\Active Users you will see all the on-premise users that have been synchronised.

Click on each of the users and then in the next screen click Edit beside Product licenses and select the location of the user and also the combination of license options you want to assign the user. Ensure you select at least Exchange Online (Plan 2) as this is needed to provision a user mailbox. Click on Save.

As soon as you assign the Exchange Online (Plan 2) license, the mailbox provisioning starts. This shouldn’t take more than 10 minutes to finish. You can check the progress by clicking the user in Office 365 Admin Center and then Mail Settings at the bottom of the screen. Once the mailbox has been successfully provisioned, the We are preparing a mailbox for this user message will disappear and instead details about the user mailbox will be shown.

Once the mailbox has been provisioned, open the Exchange Admin Center and then click on recipients from the left menu. In the right hand side screen, click mailboxes. This will show you details about the mailboxes that have been provisioned so far. The newly created user mailbox should be listed there as well.

That’s it folks! You have successfully created an Exchange Online mailbox that is attached to your on-premises Active Directory user object.

Any changes to the Office 365 object (display name etc) will have to be done via the on-premises Active Directory. These changes will be synchronised to Azure AD every 30 minutes and will be reflected in the Exchange Online mailbox

If you try to modify any of the attributes via the Office 365 or Exchange Online Admin Center, you will receive the following error

The action '<cmdlet>', '<property>', can't be performed on the object '<name>' because the object is being synchronized from your on-premises organisation.

Some Additional Information

Please note that the following is not supported by Microsoft.

There are times when you need to have additional email aliases attached to a user mailbox. To do this, follow the steps below:

  1. Open Active Directory Users and Computers in your on-premises Active Directory
  2. In the top menu, click View and then select Advanced Features
  3. Navigate to the user object that you want to add additional email aliases to and double click to open its settings
  4. From the tabs click on Attribute Editor
  5. Under Attributes locate proxyAddresses and click on Edit (or double click) to open it
  6. In the values field, first enter the current email address, prefixed with SMTP: (ensure the SMTP is in upper case).
  7. Then add all the email aliases that you want to assign to this user. Ensure each email alias is prefixed with a lower case smtp:. The email domain for the aliases has to be a domain that is already added and verified in Office 365 Admin Center.
  8. If you need to change the reply-to (primary SMTP) address for the user, then remove the value that currently has the upper case SMTP: assigned to it and re-add it, prefixed with a lower case smtp:. Then remove the alias that you want to assign as the reply-to (primary SMTP) and re-add it, prefixed with an upper case SMTP: (see the example values after this list).
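As a hypothetical example (reusing the contoso.com domain from earlier), the proxyAddresses values for a user whose reply-to address is tom.jones@contoso.com and who has one extra, made-up alias might look like this:

SMTP:tom.jones@contoso.com
smtp:tjones@contoso.com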

[Image: proxyAddresses attribute editor]

I hope this blog helps out those who might be wanting to use the Cloud-only approach instead of the Hybrid deployment approach to Office 365.

Have a great day 😉

Security assessment for Australia’s leading professional services firm

Customer Overview

Professional services firm with global reach and deep expertise in audit and assurance, tax and advisory with a large presence in Australia.

Business Situation

A leading professional services firm was assessing new technology to drive innovative solutions and offerings as part of their digital transformation program. Having recently adopted public cloud, the organisation was looking to increase the use of public cloud to assist in delivering solutions while also realising the benefits from a cost savings and agility perspective.

With security a key consideration for the design and implementation of any cloud-based platform, the company was seeking to ensure standards for establishing necessary controls. The firm sought an independent assessment of the public cloud platform to determine its viability and to understand its security posture in order to take remedial measures to improve it.

Solution

To address the company’s concerns and requirements, Kloud recommended and developed a cloud security governance and control framework which helped to define:

  • Organisational direction on adoption and consumption of cloud services (e.g. SaaS, PaaS & IaaS) and the preferred cloud providers
  • A governance model for request, approval, implementation and maintenance of cloud based services
  • A comprehensive set of security controls aligning to the organisation’s security policies & standards and industry standards such as ISO27001 and CSA’s CCM

Kloud also developed a reference security architecture to define the architectural and security components requiring implementation to enable sufficient security controls outlined in the initial framework.

Kloud conducted a comprehensive security assessment of the overall architecture and the platform, comprising the cloud services deployed. The assessment also covered non-technical aspects of the solution, including:

  • User provisioning
  • Access management including privileged access and user access revalidation
  • Logging and auditing
  • Incident response
  • Data handling
  • Software development practices

Gaps and areas of non-compliance within the security controls framework were documented and rated based on the risk they posed to the organisation. Mitigation controls were defined and prioritised based on the risk rating, and an implementation roadmap was defined and presented to the business.

Kloud helped the company identify the security posture of the platform and provided recommendations on improving its overall security strategy.

Benefits

  • Overall security state of the platform and risk position
  • Immediate areas of focus
  • Improved compliance
  • Higher level of confidence in the platform and the ability to demonstrate and sell services to their clients

Don’t be STUPID: Design Principles 101

When we talk about OOP, we think of classes and classes and so on. But mostly we don’t care about being lazy. I love it when programmers are lazy; it means they will write less code while doing more work (being lazy the right way). More often, people write too much code, and that’s just STUPID. Now, now, don’t get all worked up – it’s the reality of most of the code we see, and it’s important to not be STUPID before being SOLID. Let me explain.

STUPID is a set of principles to avoid while writing code. Most of the time these are obvious things, but we let them pass anyway. They almost always have an adverse impact on testability, quality, extensibility, maintainability, and reusability.


STUPID is:

  • Singleton
  • Tight Coupling
  • Untestability
  • Premature Optimization
  • Indescriptive Naming
  • Duplication

          Singleton

Ok, STOP using singletons everywhere. It’s not a magic pattern. Most of the time you see a singleton used as a global state holder, and it’s just sad. This creates a lot of problems in testing, since the instantiation cannot be controlled and singletons, by nature, can retain state (often hidden) across invocations. They also hide dependencies. Consider the following:

class Account
{
    private DebtCalculator calculator = DebtCalculator.GetInstance(); // getting a singleton
}

Against this:

class Account
{
    private IDebtCalculator calculator;

    public Account(IDebtCalculator c)
    {
        this.calculator = c;
    }
}

Another, and perhaps more widely used, example would be a database class that provides an instance of Database.

Database.GetInstance();

This looks easy and maintainable, yes? In reality, it’s just stupid code. It tightly couples the client code with the Database class. Now if we want to get another instance, or a connection to another database, it will be a big mess. Testing this is also always a problem. Adding new features (extensions) to the Database class might create issues as well, especially since any extension will be accessible to all the client code, regardless of need. This creates unnecessary, cluttered code and introduces security concerns.

Need I say more? Of course, I am not saying the singleton is an anti-pattern on its own. Rather, most of the time it’s unnecessary to even have a singleton, and the underlying problem can be solved much more easily. So, use it wisely.

          Tight Coupling


Coupling is the measure of changes required in other modules to make a change in one module. The greater it is, the more coupled your code is. Tightly coupled modules are difficult to use, expensive to change and hard to test. They also introduce cluttered code everywhere. In essence, tight coupling could be regarded as a generalisation of the Singleton problem (global state). The moment we start using:

<some class>.<some method/ property> or

<some singleton instance>.<method/ property>

We tightly couple the client code with that class. This makes extending that class difficult, and testing the client code harder.

class Account
{
    private Logger log;

    public Account()
    {
        log = new Logger();
        log.Write("Account created");
    }
}

v/s

private ILog log;

public Account(ILog l)
{
    log = l; // or any logger factory
    // now Account is not dependent upon the Logger class
}

Loose coupling makes testing and feature extensions a breeze.
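Here is a small sketch of why that is; the FakeLog class and the Write method on ILog are assumptions made for illustration, not part of any particular logging or test library.

// A no-op test double that satisfies the ILog dependency (ILog.Write is assumed for illustration).
class FakeLog : ILog
{
    public void Write(string message)
    {
        // record the message, or simply ignore it in a test
    }
}

// In a test, the Account no longer needs a real Logger:
void Account_Can_Be_Created_Without_A_Real_Logger()
{
    var account = new Account(new FakeLog());
    // assert whatever behaviour you care about
}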

          Untestability

In theory, testing a piece of code should be the easiest of things. In reality, tight coupling, along with hidden dependencies, makes it a lot harder and more expensive to test modules. The result is bug-prone modules. Whenever we don’t unit test our code because “there is no time”, the real reason is that our code is too hard to test.

          Premature Optimization


Two rules:

  • Don’t do it
  • Don’t do it yet

A piece of code can be optimized to work as fast as possible only to later find out that it does not do what it’s supposed to. So:

  • Make it work
  • Make it right
  • Make it fast

Of course, this does not mean making basic design mistakes or writing bad code should be allowed. When you follow standard design guidelines and maintain test cases, you can optimize mercilessly as much as you want afterwards. Don’t waste your time finding and applying stupid micro-optimization techniques that, in production, do not achieve much.

Replacing foreach loops with for, unnecessarily replacing strings with StringBuilders, string.Format vs Replace vs bla bla: all those premature optimizations are just stupid, unless of course saving microseconds and tiny bits of memory is a requirement in your code. Most of the time, these things are rarely the bottlenecks. It’s design flaws, wrong joins in queries, untested code etc. that create problems.

These crazy “don’t use <some stupid stuff>” micro-optimizations without need just distract you from designing and writing testable and maintainable code.

          Indescriptive Naming

The most obvious of the lot. Name your stuff right, people – classes, properties, methods, namespaces, bla bla. Don’t write code that only you can understand; write it for others to read and understand. Don’t use abbr. (pun intended). And follow the industry standards, not your own.


          Duplication

DRY (don’t repeat yourself) is key to writing good reusable, testable and maintainable code. Most importantly, DRY helps avoid runtime issues that slip through testing undetected. I saw a piece of code where resources were accessed via relative paths converted to absolute ones everywhere. It went through testing but broke in staging because, in a few places, it was still using the paths without conversion to absolute. The conversion was duplicated everywhere since it was just a single line. The solution: remove all the conversions and configure it ONCE, however you like, and use it wherever you need. Now that configuration can be controlled, injected, reused, and tested – a minimal sketch of the idea follows below.
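The class and member names in this sketch are illustrative. The base path is configured once, and every consumer asks the resolver instead of repeating the conversion inline.

// Configured ONCE (at start-up or via dependency injection) and reused everywhere.
class PathResolver
{
    private readonly string basePath;

    public PathResolver(string basePath)
    {
        this.basePath = basePath;
    }

    public string Resolve(string relativePath)
    {
        // The single place where relative paths become absolute.
        return System.IO.Path.Combine(basePath, relativePath);
    }
}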


Programmers are lazy, and not the good kind of lazy, most of the time. Introducing any tight coupling opens the door to copy-pasting the same thing throughout the code, because it now cannot be reused, and it’s just easy with Ctrl+C and Ctrl+V – who cares if it’s a horrible code smell that is neither testable nor maintainable. The worst thing about duplicate code: removing or fixing it is expensive, with no payback in performance or anything else.

In short, these are the ways of STUPID. Now we will move on to SOLID, and it will be easier to understand and relate those principles with these basic design and code issues in mind.