HoloLens – Setting up the Development environment

HoloLens is undoubtedly a powerful invention in the field of Mixed reality. Like any other revolutionary invention, the success of a technology largely depends upon its usability and ease of adoption. This is what makes the software development kits (SDKs) and application programming interfaces (APIs) associated with a technology so critical. Microsoft has been very smart on this front when it comes to HoloLens. Rather than re-inventing the wheel, they have integrated the HoloLens development model with the existing, popular gaming platform Unity for modelling the frontend of a Mixed reality application.

Unity is a cross-platform game engine which was first released in 2005. Microsoft’s collaboration with Unity was first announced in 2013, when Unity started supporting the Windows 8, Windows Phone 8 and Xbox One development tools. Unity’s support for Microsoft HoloLens was announced in 2015, and since then it has been the Microsoft-recommended platform for developing game worlds for Mixed reality applications.

With Unity in place for building the game world, the next piece of the puzzle was to choose the right platform for scripting the business logic and for deploying and debugging the application. For obvious reasons, such as flexibility in code editing and troubleshooting, Microsoft Visual Studio was chosen as the platform for authoring HoloLens applications. Visual Studio Tools for Unity is integrated into the Unity platform, which enables it to generate a Visual Studio solution for the game world. Scripts containing the operational logic for the Mixed reality application can then be coded in C# in Visual Studio.

HoloLens Application Architecture

A typical Mixed reality application has multiple tiers: the frontend game world, which is built using Unity; the application logic that powers the game objects, which is coded in C# using Visual Studio; and backend services that encapsulate the business logic. The following diagram illustrates the architecture of such an application.

[Diagram: HoloLens application architecture]

In the above diagram, the box towards the right (Delegated services) is an optional tier for a Mixed reality application. However, many real-world scenarios in the space of Augmented and Mixed reality are powered by strong backend systems such as machine learning and data analytics services.

Development Environment

Microsoft does not provide a separate SDK for HoloLens development. All you will require is Visual Studio and the Windows 10 SDK. The following section lists the ideal hardware requirements for a development machine.

Hardware requirements

  • Operating system – 64 Bit Windows 10 (Pro, Enterprise, or Education)
  • CPU – 64-bit, 4 cores; GPU supporting DirectX 11.0 or later
  • RAM – 8 GB or more
  • Hypervisor – Support for Hardware-assisted virtualization

Once the hardware is sorted, you will need to install the following tools for HoloLens application development:

Software requirements

  • Visual Studio 2017 or Visual Studio 2015 (Update 3) – While installing Visual Studio, make sure that you have selected the Universal Windows Platform development workload and the Game Development with Unity workload. You may have to repair or upgrade Visual Studio if it is already installed on your machine without these workloads.
  • HoloLens Emulator – The HoloLens emulator can be used to test a Mixed reality application without deploying it to the device. The latest version of the emulator can be downloaded from the Microsoft website for free. One thing to remember before you install the emulator is to enable the hypervisor on your machine; this may require changes to your system BIOS.
  • Unity – Unity is your frontend game world modelling tool. Unity 5.6 (2017.1) or above supports HoloLens application development. Configuring the Unity platform for a HoloLens application is covered in later sections of this blog.

Configuring Unity

It is advisable to take some basic training in Unity before jumping into HoloLens application development. Unity offers tons of free training material online. The following link should lead you to a good starting point:

https://unity3d.com/learn/tutorials

Once you are comfortable with the basics of Unity, follow the steps listed at the URL below to configure a Unity project for a Microsoft Mixed reality application.

https://developer.microsoft.com/en-us/windows/mixed-reality/unity_development_overview

The following are a few things to note while performing the configuration:

  • Build settings – While configuring the build settings, it is advisable to enable the ‘Unity C# project’ checkbox to generate a Visual Studio solution which can be used for debugging your application.

[Screenshot: Unity build settings]

  • Player settings – Apart from the publishing capabilities (Player settings -> Publish settings -> Capabilities) listed in the link above, your application may require specific capabilities such as access to the picture library or the Bluetooth channel. Make sure that the capabilities required by your application are selected before building the solution.

[Screenshot: Unity player settings – capabilities]

  • Project settings – Although the above-mentioned link recommends setting the quality to ‘Fastest’, this might not be the appropriate setting for all applications. It may be good to start with the quality flag set to ‘Fastest’ and then update it later based on your application’s needs.

[Screenshot: Unity quality settings]

Once these settings are configured, you are good to create your game world for the HoloLens application. The Unity project can then be built to generate a Visual Studio solution. The build operation will pop up a file dialog box to choose the folder where you want the Visual Studio solution to be created. It is recommended to create a new folder for the Visual Studio solution.

Working with Visual Studio

Irrespective of the frontend tool used to create the game world, a HoloLens application will require Visual Studio to deploy and debug the application. The application can be deployed to a HoloLens emulator for development purposes.

The link below details the steps for deploying and debugging an Augmented/Mixed reality application on HoloLens using Visual Studio.

https://developer.microsoft.com/en-us/windows/mixed-reality/using_visual_studio

The following are a few tips which may help you set up and deploy the application effectively:

  • Optimizing the deployment – Deploying the application over USB is multiple times faster than deploying it over Wi-Fi. Ensure that the architecture is set to ‘x86’ before you fire off the deployment.

[Screenshot: Visual Studio solution configuration and target architecture]

  • Application settings – You will observe that the solution has a Universal Windows Platform project set as the start-up project. To change application properties such as the application name, description, visual assets, application capabilities, etc., you can update the package manifest.

Following are the steps:

  1. Click on project properties

[Screenshot: project properties]

2. Click on Package Manifest

[Screenshot: Package Manifest button]

3. Use the ‘Application’, ‘Visual Assets’ and ‘Capabilities’ tabs to manage your application properties.

[Screenshot: Application, Visual Assets and Capabilities tabs]

The HoloLens application development toolset comes with another powerful tool called the ‘Windows Device Portal’ to manage your HoloLens and the applications installed on it. More about this in my next blog.


HoloLens – understanding the device

HoloLens is without doubt the coolest product launched by Microsoft since Kinect. Before understanding the device, let’s quickly familiarise ourselves with the domain of Mixed reality and how it is different from Virtual and Augmented reality.

VR, AR and MR

Virtual reality, the first of the lot, is the concept of creating a virtual world around the user. This means that everything the user sees or hears is simulated. The concept of virtual reality is not new to us: a simpler form of virtual reality was achieved back in the 18th century using panoramic paintings, and later with stereoscopic photo viewers. Probably the first implementation of a commercial virtual reality application was the “Link Trainer”, a flight simulator invented in 1929.

The first head-mounted Virtual reality headset was invented in 1960 by Morton Heilig. This invention enabled the user to be mobile, thereby introducing possibilities of better integration with the surroundings. The result was an era of sensor-packed headsets which can track movement and sense depth, heat, geographical coordinates and so on.

These head-mounted devices then became capable of projecting 3D and 2D information on see-through screens. This concept of overlaying content on the real world was termed Augmented reality. The term Augmented Reality was first introduced in 1992 at Boeing, where it was implemented to assist workers in assembling wire bundles.

The use cases around Augmented reality strengthened over the following years. When virtual objects were projected onto the real world, the possibility of these objects interacting with real-world objects started gaining focus. This brought about the concept called Mixed reality. Mixed reality can be portrayed as the successor of Augmented reality, where the virtual objects projected into the real world are anchored to, and interact with, real-world objects. HoloLens is one of the most powerful devices in the market today that can cater to Augmented and Mixed reality applications.

Birth of the HoloLens – Project Baraboo

After Windows Vista, reforesting parts of the Amazon and Project Natal (popularly known as Kinect), Alex Kipman (Technical Fellow, Microsoft) decided to focus his time on a machine which could not only see what a person sees but also understand the environment and project things into the person’s line of sight. While building this device, Kipman was keen on preserving the peripheral vision of the user to ensure that he or she does not feel blindfolded. He used the knowledge around depth sensing and object recognition gained from his previous invention, Kinect.

The end product was a device with an array of cameras, microphones and other smart sensors, all feeding information to a specially crafted processing module which Microsoft calls the Holographic Processing Unit (HPU). The device is capable of mapping its surroundings and understanding the depth of the world in its field of vision. It can be controlled by gestures and by voice. The user’s head acts as the pointing device, with a cursor that shines in the middle of the viewport. The HoloLens is also a fully functional Windows 10 computer.

The Hardware

Following are the details of the sensors built into the HoloLens:

[Diagram: HoloLens sensors]

  • Inertial measurement unit (IMU) – The HoloLens IMU consists of an accelerometer, a gyroscope and a magnetometer to help track the movements of the user.
  • Environment-sensing cameras – The device comes with four environment-sensing cameras used to recognise the orientation of the device and for spatial mapping.
  • Depth camera – The depth camera is used for finger tracking and for spatial mapping.
  • HD video camera – A generic high-definition camera which can be used by applications to capture a video stream.
  • Microphones – The device is fitted with an array of four microphones to capture voice commands and sound from 360 degrees.
  • Ambient light sensor – A sensor used to capture the light intensity of the surrounding environment.

The HoloLens also comes with the following two built-in processing units, plus memory and storage:

[Diagram: HoloLens processing units, memory and storage]

  • Central Processing Unit – Intel Atom 1.04 GHz processor with 4 logical processors.
  • Holographic Processing Unit – HoloLens Graphics processor based on Intel 8086h architecture
  • High Speed memory – 2 GB RAM and 114 MB dedicated Video Memory
  • Storage – 64 GB flash memory.

HoloLens supports Wi-Fi (802.11ac) and Bluetooth (4.1 LE) communication channels. The headset also comes with a 3.5 mm audio jack and a Micro USB 2.0 multi-purpose port, and the device has a battery life of nearly 3 hours on a full charge.

More about the software and development tools in my next blog.

Active Directory – What are Linked Attributes?

A customer request to add some additional attributes to their Azure AD tenant via the Directory Extensions feature in the Azure AD Connect tool led me into further investigation. My last blog here set out the customer request, but what I didn’t detail in that blog was that one of the attributes they also wanted to extend into Azure AD was directReports, an attribute they had used in the past for their custom-built on-premises applications to display the list of staff the user was a manager for. This led me down a rabbit hole where it took a while to reach the end.

With my past experience in using Microsoft Identity Manager (formerly Forefront Identity Manager), I knew directReports wasn’t a real attribute stored in Active Directory, but rather a calculated value shown in the Active Directory Users and Computers console. The directReports value was calculated from the manager attribute of other users that contained a reference to the user you were querying (phew, that was a mouthful). This is why directReports and other similar types of attributes, such as memberOf, were not selectable for Directory Extensions in the Azure AD Connect tool. I had never bothered to understand it further than that until the customer also asked for a list of these types of attributes, so that they could tell their application developers they would need a different technique to determine these values in Azure AD. This is where the investigation started, which I would like to summarise as I found it very difficult to find this information in one place.

In short, these attributes in the Active Directory schema are Linked Attributes as detailed in this Microsoft MSDN article here:

Linked attributes are pairs of attributes in which the system calculates the values of one attribute (the back link) based on the values set on the other attribute (the forward link) throughout the forest. A back-link value on any object instance consists of the DNs of all the objects that have the object’s DN set in the corresponding forward link. For example, “Manager” and “Reports” are a pair of linked attributes, where Manager is the forward link and Reports is the back link. Now suppose Bill is Joe’s manager. If you store the DN of Bill’s user object in the “Manager” attribute of Joe’s user object, then the DN of Joe’s user object will show up in the “Reports” attribute of Bill’s user object.

I then found this article here which further explained these forward and back links with respect to which are writable and which are read-only, the example below referring to the linked attributes member/memberOf:

Not going too deep into the technical details, there’s another thing we need to know when looking at group membership and forward- and backlinks: forward-links are writable and backlinks are read-only. This means that only forward-links can be changed, and the corresponding backlinks are computed automatically. That also means that only forward-links are replicated between DCs, whereas backlinks are maintained by the DCs after that.

The takeaway from this is that the value of the forward link can be updated – the member attribute in this case – but you cannot update the back link, memberOf. Back links are always calculated automatically by the system whenever an attribute that is a forward link is modified.

My final quest was to find the list of linked attributes without querying the Active Directory schema which then led me to this article here, which listed the common linked attributes:

  • altRecipient/altRecipientBL
  • dLMemRejectPerms/dLMemRejectPermsBL
  • dLMemSubmitPerms/dLMemSubmitPermsBL
  • msExchArchiveDatabaseLink/msExchArchiveDatabaseLinkBL
  • msExchDelegateListLink/msExchDelegateListBL
  • publicDelegates/publicDelegatesBL
  • member/memberOf
  • manager/directReports
  • owner/ownerBL

There is further, deeper technical information about linked attributes, such as “distinguished name tags” (DNTs) and what is replicated between DCs versus what is calculated locally on a DC, which you can read at your own leisure in the articles listed throughout this blog. But I hope this summary is enough information on how they work.
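If you do want to enumerate the linked attribute pairs in your own forest, they can be pulled from the schema, since every linked attribute carries a linkID value (even numbers are forward links, odd numbers are the matching back links, e.g. member is 2 and memberOf is 3). A minimal PowerShell sketch, assuming the RSAT ActiveDirectory module is available:

Import-Module ActiveDirectory

# Query the schema partition for every attribute that has a linkID set
$schemaNC = (Get-ADRootDSE).schemaNamingContext
Get-ADObject -SearchBase $schemaNC -LDAPFilter "(linkID=*)" `
             -Properties lDAPDisplayName, linkID |
    Sort-Object linkID |
    Select-Object lDAPDisplayName, linkID,
        @{Name = "LinkType"; Expression = { if ($_.linkID % 2 -eq 0) { "Forward" } else { "Back" } }}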

Using Microsoft Azure Table Service REST API to collect data samples

Sometimes we need a simple solution for collecting data from multiple sources. The sources of data can be IoT devices or systems working on different platforms and in different places. Traditionally, integrators start thinking about implementing a custom centralised REST API with some database repository. Such a solution can take days to implement and test, it is very expensive, and it requires hosting, maintenance and support. However, in many cases it is not needed at all. This post introduces the idea that the out-of-the-box Azure Tables REST API is good enough to start your data collection, research and analysis in no time. Moreover, the suggested solution offers a very convenient REST API that supports JSON objects and a very flexible NoSQL format. Furthermore, you do not need to write lots of code or hire programmers. Anybody who understands how to work with a REST API, create headers and put JSON in the web request body can immediately start working on a project and sending data to very cheap Azure Table storage. Additional benefits of using Azure Tables are native support in Microsoft Azure Machine Learning, and the fact that other statistical packages also allow you to download data from Azure Tables.

Microsoft provides Azure Tables SDKs for various languages and platforms. By all means use these SDKs; your life will be much easier. However, some systems don’t have this luxury and require developers to work with the Azure Tables REST API directly. The only requirement for your system is that it should be able to execute web requests and let you work with the headers of those requests. Most systems satisfy this requirement. In this post, I explain how to form the web requests and work with the latest Azure Table REST API from your code. I’ve also created reference code to support my findings. It is written in C#, but the technique can be replicated in other languages and platforms.

The full source code is hosted on GitHub here:

https://github.com/dimkdimk/AzureTablesRestDemo

Prerequisites

You will need an Azure subscription. Create a new storage account there and create a test table named “mytable” in it. Below is the table I created in Visual Studio 2015.

[Screenshot: the test table “mytable” viewed in Visual Studio 2015]

I’ve created a helper class that has two main methods: RequestResource and InsertEntity.

The full source code of this class is here:

Testing the class is easy. I’ve created a console application, prepared a few data samples and called our Azure Tables helper methods. The source code of this program is below.

The hardest part of calling the Azure Tables Web API is creating the encrypted signature string to form the Authorization header. It can be a bit tricky and not very clear for beginners. Please take a look at the detailed documentation that describes how to sign a string for the various Azure Storage services: https://msdn.microsoft.com/en-au/library/dd179428.aspx

To help you with the authorisation code, I’ve written an example of how it can be done in the class AzuretableHelper. Please take a look at the code that creates the strAuthorization variable. First, you will need to form a string that contains your canonical resource name and the current time in a specific format, with newline characters in between. Then, this string has to be signed using the HMAC-SHA256 algorithm. The key for this signature is the SharedKey that you can obtain from your Azure Storage account as shown here:

[Screenshot: Azure Storage account access keys]

The Authorization string has to be re-created for each request you execute. Otherwise, the Azure API will reject your requests with an Unauthorised error message in the response.
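For readers who would rather see the whole flow end to end, below is a minimal sketch of the same technique in PowerShell rather than C#, using the simpler SharedKeyLite scheme for the Table service. The storage account name, access key and table name are placeholders, and the entity properties are made up for illustration.

# Sketch only: insert one entity into an Azure Table using SharedKeyLite authorisation.
$storageAccount = "mystorageaccount"                  # placeholder
$accessKey      = "<base64 account key from the portal>"
$table          = "mytable"

$date         = [DateTime]::UtcNow.ToString("R")      # RFC1123 date, e.g. "Mon, 01 Jan 2018 00:00:00 GMT"
$resource     = "/$storageAccount/$table"             # canonicalised resource for the Table service
$stringToSign = "$date`n$resource"                    # SharedKeyLite string-to-sign for tables

$hmac = New-Object System.Security.Cryptography.HMACSHA256
$hmac.Key  = [Convert]::FromBase64String($accessKey)
$signature = [Convert]::ToBase64String($hmac.ComputeHash([Text.Encoding]::UTF8.GetBytes($stringToSign)))

$headers = @{
    "x-ms-date"     = $date
    "x-ms-version"  = "2015-12-11"
    "Accept"        = "application/json;odata=nometadata"
    "Authorization" = "SharedKeyLite ${storageAccount}:$signature"
}

# PartitionKey and RowKey are mandatory; everything else is your own data.
$entity = @{ PartitionKey = "sensor01"; RowKey = [Guid]::NewGuid().ToString(); Temperature = 21.5 } | ConvertTo-Json

Invoke-RestMethod -Method Post -Uri "https://$storageAccount.table.core.windows.net/$table" `
                  -Headers $headers -Body $entity -ContentType "application/json"

The C# reference code follows the same pattern: build the string-to-sign, HMAC it with the decoded account key, and attach the result to the Authorization header of every request.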

The meaning of the other request headers is straightforward. You specify the version, format and other attributes. To see the full list of methods you can perform on Azure Tables, and to read about all the attributes and headers, refer to the MSDN documentation:

https://msdn.microsoft.com/library/dd179423.aspx

Once you’ve mastered this “geeky” method of using the Azure Tables Web API, you can send your data straight to your Azure Tables without any intermediate Web API facilities. You can also read Azure Tables data to receive configuration parameters or input commands from other applications. A similar approach can be applied to the other Azure Storage APIs, for example Azure Blob storage or Azure Queue storage.


Moving SharePoint Online workflow task metadata into the data warehouse using Nintex Flows and custom Web API

This post suggests the idea of automatically copying SharePoint Online (SPO) workflow task metadata into an external data warehouse. In this scenario, workflow tasks become the subject of another workflow that automatically copies each task’s data into the external database, using a custom Web API endpoint as the interface to that database. Commonly, the requirement to move workflow task data elsewhere arises from limitations of SPO. In particular, SPO throttles requests for access to workflow data, making it virtually impossible to create a meaningful workflow reporting system with large numbers of workflow tasks. The easiest approach to solving the problem is to use a Nintex workflow to “listen” for changes to the workflow tasks, then request the task data via the SPO REST API and, finally, send the data to the external data warehouse’s Web API endpoint.

Some SPO solutions require the creation of a reporting system that includes workflow task metadata – for example, a report about documents with the statuses of the workflows linked to those documents. Using a conventional approach (e.g. the SPO REST API) to obtain the data is unfeasible as SPO throttles requests for workflow data. In fact, the throttling is so tight that generating reports with more than a hundred records is unrealistic. In addition, many companies would like to create Business Intelligence (BI) systems analysing workflow task data. Having a data warehouse with all the workflow task metadata assists in this job very well.

To implement the solution a few prerequisites must be met. You must know the basics of Nintex workflow creation and be able to create a backend solution with the database of your choice and a custom Web API endpoint that allows you to write the data model to that database. In this post we have used Visual Studio 2015 and created an ordinary REST Web API 2.0 project with an Azure SQL Database.

The solution involves the following steps:

  1. Get a sample of your workflow task metadata and create your data model.
  2. Create a Web API capable of writing the data model to the database.
  3. Expose one POST endpoint method of the REST Web API that accepts a JSON model of the workflow task metadata.
  4. Create a Nintex workflow in the SPO list storing your workflow tasks.
  5. Design the Nintex workflow: call the SPO REST API to get the JSON metadata and pass this JSON object to your Web API hook.

Below is a detailed description of each step.

We are looking to export the metadata of a workflow task. First, find the SPO list that holds all your workflow tasks and navigate there. You will need the name of the list to be able to start calling the SPO REST API. It is better to use a REST tool to perform the Web API requests; many people use Fiddler or Postman (a Chrome extension) for this job. Request the SPO REST API to get a sample of the JSON data that you want to put into your database. The request will look similar to this example:

[Screenshot: sample SPO REST API request for a workflow task]

The key element in this request is getbytitle(“list name”), where “list name” is the SPO list name of your workflow tasks. Please remember to add the header “Accept” with the value “application/json”; it tells SPO to return JSON instead of HTML. As a result, you will get one JSON object that contains the JSON metadata of Task 1. This JSON object is an example of the data that you will need to put into your database. Not all fields are required in the data warehouse, so we create a data model containing only the fields of our choice. For example, it can look like this one in C#, with all properties based on the model returned earlier:

The next step is to create a Web API that exposes a single method accepting our model as a parameter from the body of the request. You can choose any REST Web API design. We created a simple Web API 2.0 in Visual Studio 2015 using the general wizard for an MVC, Web API 2.0 project. Then we added an empty controller and filled it with code that works with the Entity Framework to write the data model to the database. We also created a code-first EF database context that works with just the one entity described above.

The code of the controller:

The code of the database context for Entity Framework

Once you have created the Web API, you should be able to call the Web API method like this:
https://yoursite.azurewebsites.net/api/EmployeeFileReviewTasksWebHook

You will need to put your model data in the request body as a JSON object. Also, don’t forget to include the proper headers for your authentication and the header “Accept” with “application/json”, and set the type of the request to POST. Once you’ve tested the method, you can move on to the next steps. For example, below is how we tested it in our project.

[Screenshot: testing the Web API method with a REST client]
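If you prefer to script such a smoke test, a rough PowerShell equivalent of the request above could look like the sketch below. The endpoint URL comes from the example above; the property names in the body are purely illustrative and would need to match your own data model.

# Illustrative only: POST a sample task model to the custom Web API endpoint.
$uri  = "https://yoursite.azurewebsites.net/api/EmployeeFileReviewTasksWebHook"
$body = @{
    Id              = 1
    Title           = "Task 1"
    Status          = "Completed"
    PercentComplete = 1.0
} | ConvertTo-Json

# Add your own authentication headers as required by your Web API.
Invoke-RestMethod -Method Post -Uri $uri -Body $body -ContentType "application/json" `
                  -Headers @{ Accept = "application/json" }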

Next, we will create a new Nintex workflow in the SPO list containing our workflow tasks. It is all straightforward: click Nintex Workflows, then create a new workflow and start designing it.

[Screenshots: creating a new Nintex workflow]

Once you’ve created a new workflow, click on the Workflow Settings button. In the displayed form, set the parameters as shown on the screenshot below. We set “Start when items are created” and “Start when items are modified”. In this scenario, any modification of our workflow task will start this workflow automatically. This also includes cases where the workflow task has been modified by other workflows.

[Screenshot: Nintex workflow settings]

Create five steps in this workflow as shown on the following screenshots, labelled 1 to 5. Please keep in mind that blocks 3 and 5 are there only to assist in debugging and are not required in production use.

[Screenshot: the five workflow steps]

Step 1. Create a Dictionary variable that contains the SPO REST API request headers. You can add any required headers, including authentication headers. It is essential here to include the Accept header with “application/json” in it to tell SPO that we want JSON in the responses. We set the output variable to SPListRequestHeaders so we can use it later.

[Screenshot: Step 1 – building the request headers dictionary]

Step 2. Call HTTP Web Service. We call the SPO REST API here. It is important to make sure that the getbytitle parameter is correctly set to your Workflow Tasks list, as discussed before. The list of fields that we want returned is defined in the “$select=…” parameter of the OData request; we need only the fields that are included in our data model. The other settings are straightforward: we supply our request headers created in Step 1 and create two more variables for the response. SPListResponseContent will hold the resulting JSON object that we are going to need at Step 4.

[Screenshot: Step 2 – Call HTTP Web Service action]

Step 3 is optional. We added it to debug our workflow. It sends an email with the contents of the JSON response from the previous step, showing us what was returned by the SPO REST API.

[Screenshot: Step 3 – debug email action]

Step 4. Here we call our custom API endpoint, passing the JSON object model that we got from the SPO REST API. We supply the full URL of our Web Hook, set the method to POST, and in the request body we inject SPListResponseContent from Step 2. We also capture the response code to display later in the workflow history.

[Screenshot: Step 4 – calling the custom Web API endpoint]

Step 5 is also optional. It writes a log message with the response code that we received from our API endpoint.

[Screenshot: Step 5 – log message action]

Once all five steps are completed, we publish this Nintex workflow. Now we are ready for testing.

To test the system, open the list of workflow tasks, click on any task, modify any of its properties and save the task. This will initiate our workflow automatically. You can monitor the workflow execution in the workflow history. Once the workflow has completed, you should see messages as displayed below. Notice that our workflow has also written the Web API response code at the end.

[Screenshot: workflow history messages]

To make sure that everything went well, open your database and check the records written by your Web API. After every workflow task modification you will see the corresponding changes in the database. For example:

[Screenshot: database records written by the Web API]


In this post we have shown that automatic copying of workflow task metadata into your data warehouse can be done with a simple Nintex workflow setup and only two REST Web API requests. The solution is quite flexible, as you can select the required properties from the SPO list and export them into the data warehouse. We can easily add more tables if there is more than one workflow tasks list. This solution enables the creation of a powerful reporting system using the data warehouse and also allows you to employ the BI data analytics tool of your choice.

Creating Microsoft Identity Manager (MIM) Run Profiles using PowerShell (post MIM rollup build 4.3.2124.0)

A new hotfix rollup was released on the 11th of March for Microsoft Identity Manager that contains a number of fixes and some new functionality.

One new feature, according to the release notes, is a new cmdlet: Add-MIISADMARunProfileStep.

This cmdlet allows the creation of MIM Synchronisation Management Agent Run Profiles using PowerShell.

From the MS Documentation

Add-MIISADMARunProfileStep -MAName 'AD_MA' -Partition 'DC=CONTOSO,DC=COM' -StepType 'FI' -ProfileName 'ADMA_FULLIMPORT'

Possible values of the StepType parameter (short form or long one can be used):
“FI”, “FULL IMPORT”
“FS”, “FULL SYNCHRONIZATION”
“FIFS”, “FULL IMPORT AND FULL SYNCHRONIZATION”
“FIDS”, “FULL IMPORT AND DELTA SYNCHRONIZATION”
“DI”, “DELTA IMPORT”
“DS”, “DELTA SYNCHRONIZATION”
“DIDS”, “DELTA IMPORT AND DELTA SYNCHRONIZATION”
“EXP”,”EXPORT”

The neat feature of this cmdlet is that it will create the Run Profile if it doesn’t exist.

Cool, so let’s have a play with it. I installed the rollup on the Synchronisation Server in my dev environment. I then went looking for the PS module, and it couldn’t be seen.

That’s a bit weird. Re-read the release notes. New cmdlet. Ah, maybe the new cmdlet has been added to the old PSSnapIn?

Yes, there it is. Add-PSSnapin MIIS.MA.Config gives us access to the new cmdlet.

Creating Run Profiles using PowerShell.

Here is a Management Agent from my Dev environment with no Run Profiles configured.

Using ISE I used the new cmdlet to create a Run Profile to run a Stage (Full Import) on the ADMA. Note: You need to put parentheses around the Partition name.

And there you go. The Run Profile didn’t exist so it got created.

A nice enhancement would be the ability to create multi-step Run Profiles. Being able to script that would really save some time. In the meantime, creating the standard set of single-step profiles can at least be scripted, as sketched below.
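The sketch simply loops over the cmdlet to create one single-step Run Profile of each type. The MA name, partition DN and profile names are placeholders based on the documentation example above, not from a real environment.

# Sketch: create the standard set of single-step Run Profiles for one Management Agent.
Add-PSSnapin MIIS.MA.Config

$ma        = 'AD_MA'                 # placeholder MA name
$partition = 'DC=CONTOSO,DC=COM'     # placeholder partition DN

$profiles = @{
    'ADMA_FULLIMPORT'  = 'FI'
    'ADMA_FULLSYNC'    = 'FS'
    'ADMA_DELTAIMPORT' = 'DI'
    'ADMA_DELTASYNC'   = 'DS'
    'ADMA_EXPORT'      = 'EXP'
}

foreach ($name in $profiles.Keys) {
    # The cmdlet creates the Run Profile if it doesn't already exist
    Add-MIISADMARunProfileStep -MAName $ma -Partition $partition `
                               -StepType $profiles[$name] -ProfileName $name
}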

Follow Darren on Twitter @darrenjrobinson

Provision Users for Exchange with FIM/MIM 2016 using the Granfeldt PowerShell MA, avoiding the AD MA (no-start-ma) error

Forefront / Microsoft Identity Manager provides Exchange mailbox provisioning out of the box on the Active Directory Management Agent. I’ve used it in many, many implementations over the years. However, in my first MIM 2016 implementation in late 2015 I ran into issues with something I’d done successfully many times before.

I was getting “no-start-ma” on the AD MA on export to AD – the point at which the MA sets up its connection to the Exchange environment. After some searching I found Thomas’s blog detailing the problem and a solution: in short, update the MIM Sync Server to .NET 4.6. For me this was no joy. However, when MS released the first rollup update for MIM in December, everything fired up and worked as normal.

Step forward a month: as I was finalising development of the MIM solution I was building for my customer, my “no-start-ma” error was back when I re-enabled mailbox provisioning. Deselect the Exchange Provisioning option on the AD MA and all is good; re-enable it and it fails. With one week of dev left and mailbox provisioning still needed, it was time for a workaround whilst I lodged a Premier Support ticket.

So how could I get mailbox provisioning working reliably and quickly? I was already using Søren Granfeldt’s PowerShell MA for managing users’ Terminal Services configuration, home directories and Lync/Skype for Business. What’s one more? Look out for blog posts on using the PS MA to perform those other functions, which I’ll be posting in the coming weeks.

Using the Granfeldt PowerShell Management Agent to Provision Exchange Mailboxes

In this blog post I’ll document how you can enable mailbox provisioning in Exchange utilising Søren Granfeldt’s extremely versatile PowerShell Management Agent. I’ll show you how to do the minimum of enabling a user with a mailbox. Once you understand how this is done, you can easily extend the functionality for lifecycle management (e.g. changing account settings for POP/IMAP/ActiveSync, and de-provisioning).

My Exchange PS MA is used in conjunction with an Active Directory MA and Declarative Provisioning Rules in the MIM Portal. Essentially all the AD MA does when you enable Exchange Provisioning (when it works) is call the ‘update-recipient’ cmdlet to finish off the mailbox provisioning. My Exchange PS MA does the same thing.

Overview

There are three attributes you need to supply values for in order to provision a user with a mailbox (on top of having an Active Directory account, of course):

  • mailNickName
  • homeMDB
  • homeExchangeServerName

The latter two I’m flowing the appropriate values for using my Active Directory MA. I’m setting those attributes on the AD MA, as I’m provisioning the AD account on that MA, which lets me set those two attributes as initial flow only. I’m doing that because, over time, it is highly likely that those attribute values will change with normal business-as-usual messaging admin tasks, and I don’t want my Exchange MA stomping all over them.

Getting Started with the Granfeldt PowerShell Management Agent

First up, you can get it from here. Søren’s documentation is pretty good but does assume you have a working knowledge of FIM/MIM, and this blog post is no different. Configuration tasks like adding additional attributes to the User object class in the MIM Portal, updating MPRs, flow rules, Workflows, Sets etc. are assumed knowledge and, if not, are easily Bing’able for you to work out.

Three items I had to work out that I’ll save you the pain of are:

  • You must have a Password.ps1 file. Even though we’re not doing password management on this MA, the PS MA configuration requires a file for this field. The .ps1 doesn’t need to have any logic/script inside it; it just needs to be present.
  • The credentials you give the MA to run the scripts as need to be in the format of just ‘accountname’, NOT ‘domain\accountname’. I’m using the service account that I’ve used for the Active Directory MA. The target system is the same directory service and the account has the permissions required (you’ll need to add the management agent account to the appropriate Exchange role group for user management).
  • The path to the scripts in the PS MA config must not contain spaces and must be in old-skool 8.3 format. I’ve chosen to store my scripts in an appropriately named subdirectory under the MIM Extensions directory. Tip: from a command shell use dir /x to get the 8.3 directory format name. Mine looks like C:\PROGRA~1\MICROS~4\2010\SYNCHR~1\EXTENS~2\Exchange

Schema Script (schema.ps1)

As I’m using the OOTB (out of the box) Active Directory MA to provision the AD account and only showing mailbox provisioning, the schema only consists of the attributes needed to know the state of the user with respect to enablement and the attributes associated with enabling and confirming a user for a mailbox.

https://gist.github.com/darrenjrobinson/ae46cdfccb825dce69b3

Password Script (password.ps1)

Empty as described above.

Import Script (Import.ps1)

Import values for attributes defined in the schema.

Export Script (Export.ps1)

The business part of the MA. Take the mailNickname attribute value flowed from FIM (the other required attributes are populated via the AD MA) and call update-recipient to provision the mailbox.
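The full Export.ps1 is in the gist referenced above, but the Exchange portion it performs boils down to a remote session and a single call to Update-Recipient, along the lines of the sketch below. The Exchange server URI and the user DN are placeholders.

# Sketch of the core Exchange call only - not the complete Granfeldt PS MA Export.ps1.
$session = New-PSSession -ConfigurationName Microsoft.Exchange `
                         -ConnectionUri "http://exchange01.contoso.com/PowerShell/" `
                         -Authentication Kerberos
Import-PSSession $session -CommandName Update-Recipient -AllowClobber | Out-Null

# mailNickname, homeMDB and the home server attribute are already set on the AD object;
# Update-Recipient completes the mailbox provisioning for that user.
Update-Recipient -Identity "CN=Jane Citizen,OU=Staff,DC=contoso,DC=com"

Remove-PSSession $session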

Wiring it all together

In order to wire the functionality all together there are the usual number of configuration steps to be completed. Below I’ve shown a number of the key points associated with making it all work.

Basically: create the PS MA, import attributes from the PS MA, add any additional attributes to the Portal schema, update the Portal filter to allow Administrators to use the attribute, update the Synchronisation MPR to allow the Sync Engine to flow in the new attribute, create the Set used for the transition, create your Synchronisation Rule, create your Mailbox Workflow, create your Mailbox MPR, create your MA Run Profiles and let it loose.

Management Agent Configuration

As per the tips above, the format for the script paths must be without spaces etc. I’m using 8.3 format and I’m using the same service account as my AD MA.

The password script must be specified, but as we’re not doing password management it’s empty, as detailed above.

If your schema.ps1 file is formatted correctly you can select your attributes.

My join rule is simple: AccountName (which, as you’ll see in the Import.ps1, is aligned with sAMAccountName) to AccountName in the MetaVerse.

My import flows are a combination of logic used for other parts of my solution, and a Boolean flag and Mailbox GUID to determine whether the user has a mailbox or not (used for my Transition Set and my Export script).

Below is my rules extension that sets a Boolean value in the MV, which is then flowed to the MIM Portal and used in my Transition Set to trigger my Synchronisation Rule.

Synchronisation Rules

My Exchange outbound Sync Rule isn’t complex. All it is doing is syncing out the mailNickname attribute and applying the rule based on an MPR, Set and Workflow.

For this implementation my outbound attribute flow for mailnickName is a simple firstname.lastname format.

Set

I have a Set that I use as a ‘transition set’ to trigger provisioning. My Set looks to see if the user account exists in AD (I flow the AD DN into an attribute in the Portal) and checks the mailbox status (set by the advanced flow rule shown above). I also have (not shown in the screenshot) a Boolean attribute in the MIM Portal that is set based on an advanced flow rule on the AD MA, which has some logic to determine whether the employment date, as sourced from my HR Management Agent, is current and the user should be active or not.

Workflow

An action-based workflow that will trigger the Synchronisation Rule for Exchange mailbox creation.

MPR

Finally my MPR for provisioning mailboxes is based on the transition set,

and my Mailbox Workflow.

Summary

Using the Granfeldt PowerShell MA I was able to quickly abstract mailbox provisioning away from the AD Management Agent and perform the functionality on its own MA.


Follow Darren on Twitter @darrenjrobinson

Azure ExpressRoute in Australia via Equinix Cloud Exchange

Microsoft Azure ExpressRoute provides dedicated, private circuits between your WAN or datacentre and the private networks you build in the Microsoft Azure public cloud. There are two types of ExpressRoute connections – Network Service Provider (NSP) based and Exchange Provider (IXP) based – with each allowing us to extend our infrastructure by providing connectivity that is:

  • Private: the circuit is isolated using industry-standard VLANs – the traffic never traverses the public Internet when connecting to Azure VNETs and, when using the public peer, even Azure services with public endpoints such as Storage and Azure SQL Database.
  • Reliable: Microsoft’s portion of ExpressRoute is covered by an SLA of 99.9%. Equinix Cloud Exchange (ECX) provides an SLA of 99.999% when redundancy is configured using an active – active router configuration.
  • High speed: speeds differ between NSP and IXP connections but range from 10 Mbps up to 10 Gbps. ECX provides three choices of virtual circuit speeds in Australia: 200 Mbps, 500 Mbps and 1 Gbps.

Microsoft provides a handy comparison table of the different types of Azure connectivity in this blog post.

ExpressRoute with Equinix Cloud Exchange

Equinix Cloud Exchange is a Layer 2 networking service providing connectivity to multiple Cloud Service Providers which includes Microsoft Azure. ECX’s main features are:

  • On demand (once you’re signed up)
  • One physical port supports many Virtual Circuits (VCs)
  • Available globally
  • Supports 1 Gbps and 10 Gbps fibre-based Ethernet ports; Azure supports virtual circuits of 200 Mbps, 500 Mbps and 1 Gbps
  • Orchestration using an API for automated provisioning, which provides almost instant provisioning of a virtual circuit.

We can share an ECX physical port so that we can connect to both Azure ExpressRoute and AWS DirectConnect. This is supported as long as we use the same tagging mechanism based on either 802.1Q (Dot1Q) or 802.1ad (QinQ). Microsoft Azure uses 802.1ad on the Sell side (Z-side) to connect to ECX.

ECX pre-requisites for Azure ExpressRoute

The pre-requisites for connecting to Azure, regardless of the tagging mechanism, are:

  • Two Physical ports on two separate ECX chassis for redundancy.
  • A primary and secondary virtual circuit per Azure peer (public or private).

Buy-side (A-side) Dot1Q and Azure ExpressRoute

The following diagram illustrates the network setup required for ExpressRoute using Dot1Q ports on ECX:

[Diagram: Dot1Q setup]

Tags on the Primary and Secondary virtual circuits are the same when the A-side is Dot1Q. When provisioning virtual circuits using Dot1Q on the A-Side use one VLAN tag per circuit request. This VLAN tag should be the same VLAN tag used when setting up the Private or Public BGP sessions on Azure using Azure PowerShell.

There are a few things that need to be noted when using Dot1Q in this context:

  1. The same Service Key can be used to order separate VCs for private or public peerings on ECX.
  2. Order a dedicated Azure circuit using the Azure PowerShell cmdlet (shown below, with a sketch after this list) and obtain the Service Key, then use this to raise virtual circuit requests with Equinix: https://gist.github.com/andreaswasita/77329a14e403d106c8a6

    Get-AzureDedicatedCircuit returns the following output. [Screenshot: Get-AzureDedicatedCircuit output]

    As we can see the status of ServiceProviderProvisioningState is NotProvisioned.

    Note: ensure the physical ports have been provisioned at Equinix before we use this Cmdlet. Microsoft will start charging as soon as we create the ExpressRoute circuit even if we don’t connect it to the service provider.

  3. Two physical ports need to be provisioned for redundancy on ECX – you will get the notification from Equinix NOC engineers once the physical ports have been provisioned.
  4. Submit one virtual circuit request for each of the private and public peers on the ECX Portal. Each request needs a separate VLAN ID along with the Service Key. Go to the ECX Portal and submit one request for private peering (2 VCs – Primary and Secondary) and one request for public peering (2 VCs – Primary and Secondary). Once the ECX VCs have been provisioned, check the Azure circuit status, which will now show Provisioned. [Screenshot: Azure circuit status showing Provisioned]
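For reference, a rough sketch of ordering and checking the circuit with the classic (ASM) ExpressRoute cmdlets is below. The module path, circuit name, bandwidth and location are assumptions for illustration – check Get-AzureDedicatedCircuitServiceProvider for the exact provider name and supported values in your subscription.

# Sketch only: order an Equinix-based ExpressRoute circuit and check its provisioning state.
Import-Module 'C:\Program Files (x86)\Microsoft SDKs\Azure\PowerShell\ServiceManagement\Azure\ExpressRoute\ExpressRoute.psd1'

$circuit = New-AzureDedicatedCircuit -CircuitName "SYD-ExpressRoute" `
                                     -ServiceProviderName "Equinix" `
                                     -Bandwidth 1000 `
                                     -Location "Sydney"
$circuit.ServiceKey        # quote this Service Key when raising the ECX virtual circuit requests

# ServiceProviderProvisioningState stays NotProvisioned until the ECX VCs are in place
Get-AzureDedicatedCircuit -ServiceKey $circuit.ServiceKey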

Next we need to configure BGP for exchanging routes between our on-premises network and Azure, but we will come back to this after a quick look at using QinQ with Azure ExpressRoute.

Buy-side (A-side) QinQ Azure ExpressRoute

The following diagram illustrates the network setup required for ExpressRoute using QinQ ports on ECX:

[Diagram: QinQ setup]

C-TAGs identify private or public peering traffic on Azure, and the primary and secondary virtual circuits are set up across separate ECX chassis identified by unique S-TAGs. The A-side buyer (us) can choose to use either the same or different VLAN IDs to identify the primary and secondary VCs. The same pair of primary and secondary VCs can be used for both private and public peering towards Azure; the inner tags identify whether the session is private or public.

The process for provisioning a QinQ connection is the same as Dot1Q apart from the following change:

  1. Submit only one request on the ECX Portal for both private and public peers. The same pair of primary and secondary virtual circuits can be used for both private and public peering in this setup.

Configuring BGP

ExpressRoute uses BGP for routing, and you require four /30 subnets: primary and secondary routes for both private and public peering. The IP prefixes used for BGP cannot overlap with IP prefixes in either your on-premises or cloud environments. Example routing subnets and VLAN IDs:

  • Primary Private: 192.168.1.0/30 (VLAN 100)
  • Secondary Private: 192.168.2.0/30 (VLAN 100)
  • Primary Public: 192.168.1.4/30 (VLAN 101)
  • Secondary Public: 192.168.2.4/30 (VLAN 101)

The first available IP address of each subnet will be assigned to the local router and the second will be automatically assigned to the router on the Azure side.

To configure the BGP sessions for both private and public peering on Azure, use the Azure PowerShell cmdlets as shown below.

Private peer:

Public peer:
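The original snippets are not reproduced here, but a hedged sketch of both calls using the classic New-AzureBGPPeering cmdlet might look like the following, reusing the example subnets and VLAN IDs from the list above (the service key and ASN are placeholders):

# Sketch only: create the private and public BGP peerings for an existing circuit.
$serviceKey = "<service key returned by New-AzureDedicatedCircuit>"

# Private peering (VLAN 100)
New-AzureBGPPeering -AccessType Private -ServiceKey $serviceKey `
                    -PrimaryPeerSubnet "192.168.1.0/30" -SecondaryPeerSubnet "192.168.2.0/30" `
                    -VlanId 100 -PeerAsn 65001

# Public peering (VLAN 101)
New-AzureBGPPeering -AccessType Public -ServiceKey $serviceKey `
                    -PrimaryPeerSubnet "192.168.1.4/30" -SecondaryPeerSubnet "192.168.2.4/30" `
                    -VlanId 101 -PeerAsn 65001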

Once we have configured the above, we will need to configure the BGP sessions on our on-premises routers and ensure any firewall rules are modified so that traffic can be routed correctly.

I hope you’ve found this post useful – please leave any comments or questions below!

Read more from me on the Kloud Blog or on my own blog at www.wasita.net.

Secure Azure Virtual Network and create DMZ on Azure VNET using Network Security Groups (NSG)

At TechEd Europe 2014, Microsoft announced the General Availability of Network Security Groups (NSGs), which add a security feature to Azure’s Virtual Networking capability. Network Security Groups provide access control on Azure Virtual Networks, a feature that is very compelling from a security point of view and one that Enterprise customers have been waiting for.

What are Network Security Groups and how to use them?

Network Security Groups allow us to control traffic (ingress and egress) on our Azure VNET using rules we define, and provide segmentation within a VNET by applying Network Security Groups to our subnets, as well as access control on individual VMs.

What’s the difference between Network Security Groups and Azure endpoint-based ACLs? Azure endpoint-based ACLs work only on a VM’s public port endpoints, whereas NSGs work on one or more VMs and control all ingress and egress traffic on the VM. In addition, an NSG can be associated with a subnet and with all VMs in that subnet.

NSG Features and Constraints

NSG features and constraints are as follows:

  • 100 NSGs per Azure subscription
  • One VM / subnet can only be associated with one NSG
  • One NSG can contain up to 200 rules
  • A rule has the following characteristics:
    • Name
    • Type: Inbound/Outbound
    • Priority: integer between 100 and 4096
    • Source IP Address: CIDR of the source IP range
    • Source Port Range: range between 0 and 65000
    • Destination IP Range: CIDR of the destination IP range
    • Destination Port Range: integer or range between 0 and 65000
    • Protocol: TCP, UDP or * for both
    • Access: Allow/Deny
  • Rules are processed in priority order; a rule with a lower priority number is processed before rules with higher priority numbers.

Security Zone Model

Designing isolated security zones within an Enterprise network is an effective strategy for reducing many types of risk, and this applies in Azure too. We need to work together with Microsoft, as our cloud vendor, to secure our Azure environment, and our on-premises knowledge of creating security zone models can be applied to it.

As a demonstration I will pick the simplest security zone model and apply it to my test Azure environment just to get an idea of how NSGs work. I will create three layers of security zones for my test Azure environment. This simple security zone model is for demo purposes only and might not be suitable for your Enterprise environment.

  • Internet = Attack Vectors / Un-trusted
  • Front-End = DMZ
  • App / Mid-Tier = Trusted Zone
  • DB / Back-end = Restricted Zone

Based on the security zone model above, I created my test Azure VNET:

Azure VNET: SEVNET
Address Space: 10.0.0.0/20
Subnets:
Azure-DMZ – 10.0.2.0/25
Azure-App – 10.0.0.0/25
Azure-DB – 10.0.1.0/25

Multi-site connectivity to: EUVNET (172.16.0.0/16) and USVNET (192.168.0.0/20).

The diagram below illustrates the above scenario:

[Diagram: security zone model for SEVNET, EUVNET and USVNET]

Lock ‘Em Down

After we have decided on our simple security zone model, it’s time to lock the zones down and secure them.

The diagram below illustrates how the traffic flow will be configured:

[Diagram: traffic flow between security zones]

At a high level the traffic flows are as follows:

  • Allow Internet ingress and egress on DMZ
  • Allow DMZ – App ingress and egress
  • Allow App – DB ingress and egress
  • Deny DMZ-DB ingress and egress
  • Deny App-Internet ingress and egress
  • Deny DB-Internet ingress and egress
  • Deny EUVNET-DB on SEVNET ingress and egress
  • Deny USVNET-DB on SEVNET ingress and egress

The section below shows examples of what the Azure NSG rules tables will look like.

NSG Rules Table

Azure DMZ NSG Rules Table

Name | Source IP | Source Port | Destination IP | Destination Port | Protocol | Type | Action | Priority
RDPInternet-DMZ | * | 63389 | 10.0.2.0/25 | 63389 | TCP | Inbound | Allow | 347
Internet-DMZSSL | * | 443 | 10.0.2.0/25 | 443 | TCP | Inbound | Allow | 348
Internet-DMZDRS | * | 49443 | 10.0.2.0/25 | 49443 | TCP | Inbound | Allow | 349
USVNET-DMZ | 192.168.0.0/20 | * | 10.0.2.0/25 | * | * | Inbound | Allow | 400
EUVNET-DMZ | 172.16.0.0/16 | * | 10.0.2.0/25 | * | * | Inbound | Allow | 401
DMZ-App | 10.0.2.0/25 | * | 10.0.0.0/25 | * | * | Outbound | Allow | 500
DMZ-DB | 10.0.2.0/25 | * | 10.0.1.0/25 | * | * | Outbound | Deny | 600
Allow VNET Inbound | Virtual_Network | * | Virtual_Network | * | * | Inbound | Allow | 65000
Allow Azure Internal Load Balancer Inbound | Azure_LoadBalancer | * | * | * | * | Inbound | Allow | 65001
Deny All Inbound | * | * | * | * | * | Inbound | Deny | 65500
Allow VNET Outbound | Virtual_Network | * | Virtual_Network | * | * | Outbound | Allow | 65000
Allow Internet Outbound | * | * | INTERNET | * | * | Outbound | Allow | 65001
Deny All Outbound | * | * | * | * | * | Outbound | Deny | 65500

Azure App NSG Rules Table

Name | Source IP | Source Port | Destination IP | Destination Port | Protocol | Type | Action | Priority
DMZ-App | 10.0.2.0/25 | * | 10.0.0.0/25 | * | * | Inbound | Allow | 348
USVNET-App | 192.168.0.0/20 | * | 10.0.0.0/25 | * | * | Inbound | Allow | 400
EUVNET-App | 172.16.0.0/16 | * | 10.0.0.0/25 | * | * | Inbound | Allow | 401
App-DMZ | 10.0.0.0/25 | * | 10.0.2.0/25 | * | * | Outbound | Allow | 500
App-DB | 10.0.0.0/25 | * | 10.0.1.0/25 | * | * | Outbound | Allow | 600
App-Internet | 10.0.0.0/25 | * | INTERNET | * | * | Outbound | Deny | 601
Allow VNET Inbound | Virtual_Network | * | Virtual_Network | * | * | Inbound | Allow | 65000
Allow Azure Internal Load Balancer Inbound | Azure_LoadBalancer | * | * | * | * | Inbound | Allow | 65001
Deny All Inbound | * | * | * | * | * | Inbound | Deny | 65500
Allow VNET Outbound | Virtual_Network | * | Virtual_Network | * | * | Outbound | Allow | 65000
Allow Internet Outbound | * | * | INTERNET | * | * | Outbound | Allow | 65001
Deny All Outbound | * | * | * | * | * | Outbound | Deny | 65500

Azure DB NSG Rules Table

Name | Source IP | Source Port | Destination IP | Destination Port | Protocol | Type | Action | Priority
App-DB | 10.0.0.0/25 | * | 10.0.1.0/25 | * | * | Inbound | Allow | 348
USVNET-App | 192.168.0.0/20 | * | 10.0.1.0/25 | * | * | Inbound | Deny | 400
EUVNET-App | 172.16.0.0/16 | * | 10.0.1.0/25 | * | * | Inbound | Deny | 401
DB-DMZ | 10.0.1.0/25 | * | 10.0.2.0/25 | * | * | Outbound | Deny | 500
DB-App | 10.0.1.0/25 | * | 10.0.0.0/25 | * | * | Outbound | Allow | 600
DB-Internet | 10.0.0.0/25 | * | INTERNET | * | * | Outbound | Deny | 601
Allow VNET Inbound | Virtual_Network | * | Virtual_Network | * | * | Inbound | Allow | 65000
Allow Azure Internal Load Balancer Inbound | Azure_LoadBalancer | * | * | * | * | Inbound | Allow | 65001
Deny All Inbound | * | * | * | * | * | Inbound | Deny | 65500
Allow VNET Outbound | Virtual_Network | * | Virtual_Network | * | * | Outbound | Allow | 65000
Allow Internet Outbound | * | * | INTERNET | * | * | Outbound | Allow | 65001
Deny All Outbound | * | * | * | * | * | Outbound | Deny | 65500

The tables above give us an idea of how to plan our Azure NSGs in order to establish our security zones.

Get Started using NSGs!

At the time this post was written, NSGs were exposed only through PowerShell and the REST API. To use PowerShell, we need version 0.8.10 of the Azure PowerShell module.

The cmdlets are as follows:

  • Get-AzureNetworkSecurityGroup
  • Get-AzureNetworkSecurityGroupConfig
  • Get-AzureNetworkSecurityGroupForSubnet
  • New-AzureNetworkSecurityGroup
  • Remove-AzureNetworkSecurityGroup
  • Remove-AzureNetworkSecurityGroupConfig
  • Remove-AzureNetworkSecurityGroupFromSubnet
  • Remove-AzureNetworkSecurityRule
  • Set-AzureNetworkSecurityGroupConfig
  • Set-AzureNetworkSecurityGroupToSubnet
  • Set-AzureNetworkSecurityRule

Here are some examples:
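The original script isn’t reproduced here, but a rough sketch using the classic (ASM) cmdlets listed above might look like the following. The NSG name, region and rules are illustrative and based on the DMZ table; source ports are left as * for simplicity.

# Sketch only: create a DMZ NSG, add two rules from the DMZ table and bind it to the DMZ subnet.
New-AzureNetworkSecurityGroup -Name "Azure-DMZ-NSG" -Location "Southeast Asia" -Label "DMZ NSG for SEVNET"

# Allow HTTPS from the Internet into the DMZ subnet (Internet-DMZSSL)
Get-AzureNetworkSecurityGroup -Name "Azure-DMZ-NSG" |
    Set-AzureNetworkSecurityRule -Name "Internet-DMZSSL" -Type Inbound -Priority 348 -Action Allow `
        -SourceAddressPrefix "INTERNET" -SourcePortRange "*" `
        -DestinationAddressPrefix "10.0.2.0/25" -DestinationPortRange "443" -Protocol TCP

# Deny traffic from the DMZ to the DB subnet (DMZ-DB)
Get-AzureNetworkSecurityGroup -Name "Azure-DMZ-NSG" |
    Set-AzureNetworkSecurityRule -Name "DMZ-DB" -Type Outbound -Priority 600 -Action Deny `
        -SourceAddressPrefix "10.0.2.0/25" -SourcePortRange "*" `
        -DestinationAddressPrefix "10.0.1.0/25" -DestinationPortRange "*" -Protocol "*"

# Associate the NSG with the DMZ subnet of SEVNET
Set-AzureNetworkSecurityGroupToSubnet -Name "Azure-DMZ-NSG" -VirtualNetworkName "SEVNET" -SubnetName "Azure-DMZ"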

Personally, I will be recommending Azure NSGs for every Azure production deployment I perform in future!

Leave a comment below or contact us if you have any questions regarding Azure especially in more complex Enterprise scenarios.

http://www.wasita.net

Azure VM Security using Azure VM Security Extensions, ConfigMgr and SCM Part 2

This post is part of a series; Part 1 can be found here. As I mentioned in the previous post, this post wraps up my session at TechEd Sydney 2014: DCI315 Azure VM Security and Compliance Management with Configuration Manager and SCM.

Let’s jump to our next focus:

Patch Azure VM

ConfigMgr has long been famous for its patch management capability. Three points on how the patch management lifecycle runs with ConfigMgr 2012 R2 for our Azure VMs:

  • Scan and Measure
  • Remediate Non-Compliant – patch the non-compliant machines
  • Reporting

Patching is straightforward and utilises ADRs (Automatic Deployment Rules) to schedule updates/patches. The next topic is the interesting one, because many of us don’t realise that the next tool is available from Microsoft for FREE.

Harden Azure VM

We will harden our Azure VMs using ConfigMgr 2012 R2 and SCM. What is SCM? SCM is Security Compliance Manager, a Solution Accelerator provided by Microsoft for creating server hardening baselines which we can use to harden our servers, both on-premises and in Azure. The question is: why use both ConfigMgr and SCM? I will borrow a slide from my TechEd 2014 Sydney deck.

Why ConfigMgr and SCM

  • First, tailored baselines. You can leverage the baselines developed by Microsoft, CIS and NIST in SCM and tailor them to reflect your business requirements.
  • Server roles. Why have the baselines been developed per server role? Simple answer: to reduce the attack surface by allowing only the specific functionality, services and permissions each server role needs.
  • The bottom three items from the illustration above are ConfigMgr 2012 R2-specific functions. The third reason is monitoring compliance: ConfigMgr 2012 R2 has a feature called Compliance Settings which allows us to monitor our baselines for any deviation.
  • The reporting function in ConfigMgr 2012 R2, leveraging SQL Server Reporting Services (SSRS), gives us visibility and reporting capabilities (around 469 built-in reports).
  • Auto-remediation is one of the “hero” features of ConfigMgr 2012 R2, allowing ConfigMgr to auto-remediate when our Azure servers are not compliant. It is like a self-healing capability.

So the next question is: how do we use ConfigMgr and SCM together? The idea behind building your own compliance management using both technologies is to leverage Group Policy.

  • Use GPOs to push out our tailored server hardening baselines. You can export SCM baselines as a GPO backup and import the settings into your GPOs (see the sketch after this list).
  • Export the tailored SCM baselines to *.CAB and import them into ConfigMgr 2012 R2. Use ConfigMgr Compliance Settings to monitor them.
  • Use ConfigMgr Reporting Services to report and provide visibility.
  • Use the auto-remediation feature if necessary, or fix non-compliant deviations manually.
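For the first step, the GPO backup exported from SCM can be imported into a domain GPO with the GroupPolicy PowerShell module. A minimal sketch, where the backup name, path and target GPO name are placeholders:

# Sketch only: import an SCM-exported GPO backup into a domain GPO.
Import-Module GroupPolicy

Import-GPO -BackupGpoName "WS2012 R2 Member Server Baseline" `
           -Path "C:\SCM\GPOBackups" `
           -TargetName "Azure Member Server Hardening" `
           -CreateIfNeeded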

The diagram below illustrates how compliance management works using ConfigMgr 2012 R2 and SCM.

[Diagram: compliance management with ConfigMgr 2012 R2 and SCM]

Key Takeaways

If I can summarize my key takeaways from my session:

  • PPH (Protect, Patch and Harden) = ConfigMgr 2012 R2 + SCM
  • Use SCM! SCM is FREE
  • Microsoft + You = Secure Cloud

There is no silver bullet for securing our environment; therefore a proactive approach is required to secure both our on-premises and Azure environments.

Remember: our security perimeter is only as strong as our weakest link.