Bots: An Understanding of Time

Some modern applications must understand time, because the messages they receive contain time-sensitive information. Consider a modern Service Desk solution that may have to retrieve tickets based on a date range (the span between two dates) or a duration of time.

In this blog post, I’ll explain how bots can interpret date ranges and durations, so they can respond to natural language queries provided by users, either via keyboard or microphone.

First, let’s consider the building blocks of a bot, as depicted in the following view:

The client runs an application that sends messages to a messaging endpoint in the cloud. The connection between the client and the endpoint is called a channel. The message is basically something typed or spoken by the user.

Now, the bot must handle the message and provide a response. The challenge here is interpreting what the user said or typed. This is where cognitive services come in.

A cognitive service is trained to take a message from the user and resolve it into an intent. The intent determines which function the bot will execute, and the resulting response to the user.

To build time/date intelligence into a bot, the cognitive service must be configured to recognise date/time sensitive information in messages, and the bot itself must be able to convert this information into data it can use to query data sources.

Step 1: The Cognitive Service

In this example, I’ll be using the LUIS cognitive service. Because my bot resides in an Australia-based Azure tenant, I’ll be using the corresponding regional endpoint. I’ve created an app called Service Desk App.

Next, I need to build some Intents and Entities and train LUIS.

  • An Entity is a thing or phrase (or set of things or phrases) that may occur in an utterance. I want LUIS (and subsequently the bot) to identify such entities in messages provided to it.

The good news is that LUIS has a prebuilt entity called datetimeV2 so let’s add that to our Service Desk App. You may also want to add additional entities, for example: a list of applications managed by your service desk (and their synonyms), or perhaps resolver groups.

Next, we’ll need an Intent so that LUIS can have the bot execute the correct function (i.e. provide a response appropriate to the message). Let’s create an Intent called List.Tickets.

  • An Intent, or intention, represents something the user wants to do (in this case, retrieve tickets from the service desk). A bot may be designed to handle more than one Intent. Each Intent is mapped to a function/method the bot executes.

I’ll need to provide some example utterances that LUIS can associate with the List.Tickets intent. These utterances must contain key words or phrases that LUIS can recognise as entities. I’ll use two examples:

  • “Show me tickets lodged for Skype in the last 10 weeks”
  • “List tickets raised for SharePoint after July this year”

Now, assuming I’ve created a list-based entity called Application (so LUIS knows that Skype and SharePoint are Applications), LUIS will recognise these terms as entities in the utterances I’ve provided:

Now I can train LUIS and test some additional utterances. As a general rule, the more utterances you provide, the smarter LUIS gets when resolving a message provided by a user to an intent. Here’s an example:

Here, I’ve provided a message that is a variation of utterances provided to LUIS, but it is enough for LUIS to resolve it to the List.Tickets intent. 0.84 is a measure of certainty – not a percentage, and it’s weighted against all other intents. You can see from the example that LUIS has correctly identified the Application (“skype”), and the measure of time (“last week”).

Finally, I publish the Service Desk App. It’s now ready to receive messages relayed from the bot.

Step 2: The Bot

Now, it’s possible to create a bot from the Azure Portal, which will automate many of the steps for you. During this process, you can use the Language Understanding template to create a bot with a built-in LUISRecognizer, so the code will be generated for you.

  • A Recognizer is a component (class) of the bot that is responsible for determining intent. The LUISRecognizer does this by relaying the message to the LUIS cognitive service.

Let’s take a look at the bot’s handler for the List.Tickets intent. I’m using Node.js here.

The function that handles the List.Tickets intent uses the EntityRecognizer class and findEntity method to extract entities identified by LUIS and returned in the payload (results).

It passes these values to a function called getData. In this example, I’m going to have my bot call a (fictional) remote service that supports the Open Data (OData) Protocol, allowing me to query data using the query string. Here’s the code:

(Note: I am using the sync-request package to call the REST service synchronously.)
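As a sketch, the query URL that getData builds might look like the following; the service URL and the CreatedDate/Application field names are hypothetical, since the real service isn’t shown:

```javascript
// Sketch of the query-building half of getData. SERVICE_URL and the
// CreatedDate/Application field names are hypothetical placeholders.
const SERVICE_URL = 'https://example.org/odata/Tickets';

function buildQueryUrl(application, fromDate, toDate) {
  const filters = [];
  if (application) {
    filters.push("Application eq '" + application + "'");
  }
  if (fromDate) {
    filters.push("CreatedDate gt datetime'" + fromDate.toISOString() + "'");
  }
  if (toDate) {
    filters.push("CreatedDate lt datetime'" + toDate.toISOString() + "'");
  }
  return SERVICE_URL + '?$filter=' + filters.join(' and ');
}

// The bot would then fetch this URL, e.g. with the sync-request package:
//   const res = request('GET', buildQueryUrl('Skype', fromDate, toDate));
```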

Step 3: Chrono

So let’s assume we’ve sent the following message to the bot:

  • “List tickets raised for SharePoint  after July this year”

It’s possible to query an OData data source for date based information using syntax as follows:

  • $filter=CreatedDate gt datetime'2018-03-08T12:00:00' and CreatedDate lt datetime'2018-07-08T12:00:00'

So we need to be able to convert ‘after July this year’ to something we can use in an OData query string.

Enter chrono-node and dateformat – neat packages that can extract date information from natural language statements and convert the resulting date into ISO UTC format respectively. Let’s put them both to use in this example:
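A minimal sketch, assuming the chrono-node and dateformat npm packages are installed (the phrase would really come from the LUIS entity; it is hard-coded here for illustration):

```javascript
// Sketch only: assumes the chrono-node and dateformat packages are installed.
const chrono = require('chrono-node');
const dateformat = require('dateformat');

// In the bot, this phrase comes from the datetimeV2 entity LUIS identified.
const phrase = 'after July this year';

// chrono-node extracts a concrete date from the natural language phrase...
const parsed = chrono.parseDate(phrase); // a Date in July of the current year

// ...and dateformat converts it into ISO UTC format for the OData query.
const isoDate = dateformat(parsed, 'isoUtcDateTime');
console.log(isoDate); // e.g. something like 2018-07-08T12:00:00Z
```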

It’s important to note that chrono-node will ignore some information provided by LUIS (in this case the word ‘after’, but also ‘last’ and ‘before’), so we need a function to process additional information to create the appropriate filter for the OData query:
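Here’s a sketch of such a function. It assumes the modifier word has already been extracted from the utterance and the date resolved to ISO UTC format; the CreatedDate field name is hypothetical:

```javascript
// Builds the $filter clause for the OData query, using the modifier word
// ('after', 'before', 'last') that chrono-node ignores. The CreatedDate
// field name is hypothetical.
function buildDateFilter(modifier, isoDate) {
  switch (modifier) {
    case 'after':
      // everything created after the resolved date
      return "CreatedDate gt datetime'" + isoDate + "'";
    case 'before':
      // everything created before the resolved date
      return "CreatedDate lt datetime'" + isoDate + "'";
    case 'last':
      // 'last 10 weeks' resolves to a past date, so take everything since then
      return "CreatedDate gt datetime'" + isoDate + "'";
    default:
      // no modifier: match the resolved date itself (kept simple for the sketch)
      return "CreatedDate eq datetime'" + isoDate + "'";
  }
}
```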

Handling time-sensitive information is crucial when building modern applications designed to handle natural language queries. After all, wouldn’t it be great to ask for information using your voice, Cortana, and your mobile device when on the move? For now, these modern apps will be dependent on data in older systems with APIs that require dates or date ranges in a particular format.

The beauty of languages like Node.js and the npm package manager is that building these applications becomes an exercise in assembling building blocks as opposed to writing functionality from scratch.

Getting Started with Adaptive Cards and the Bot Framework

This article will provide an introduction to working with AdaptiveCards and the Bot Framework. AdaptiveCards provide bot developers with an option to create their own card templates to suit a variety of different scenarios. I’ll also show you a couple of tricks with Node.js that will help you design smart.

Before I run through the example, I want to point you to some great resources that will help you build and test your own AdaptiveCards:

  • The schema explorer provides a breakdown of the constructs you can use to build your AdaptiveCards. Note that there are limitations to the schema, so don’t expect to do all the things you can do with regular mark-up.
  • The schema visualizer is a great tool to enable you (and your stakeholders) to give the cards a test drive.

There are many great examples online (start with GitHub), so you can go wild with your own designs.

In this example, we’re going to use an AdaptiveCard to display an ‘About’ card for our bot. Schemas for AdaptiveCards are JSON payloads. Here’s the schema for the card:
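A representative sketch of such a schema (the %placeholder% names here are illustrative, not the exact ones from the original card):

```json
{
  "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
  "type": "AdaptiveCard",
  "version": "1.0",
  "body": [
    { "type": "TextBlock", "text": "%botName%", "size": "large", "weight": "bolder" },
    { "type": "TextBlock", "text": "Version %version%", "isSubtle": true },
    { "type": "TextBlock", "text": "%description%", "wrap": true }
  ],
  "actions": [
    { "type": "Action.Submit", "title": "Submit Feedback" }
  ]
}
```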

This generates the following card (go play in the visualizer):


We’ve got lots of %placeholders% for information the bot will insert at runtime. This information could be sourced, for example, from a configuration file collocated with the bot, or from a service the bot has to invoke.

Next, we need to define the components that will play a role in populating our About card. My examples here will use node.js. The following simple view outlines what we need to create in our Visual Studio Code workspace:


The about.json file contains the schema for the AdaptiveCard (which is the code in the script block above). I like to create a folder called ‘cards’ in my workspace and store the schemas for each AdaptiveCard there.

The Source Data

I’m going to use dotenv to store the values we need to plug into our AdaptiveCard at runtime. It’s basically a config file (.env) that sits with your bot. Here we declare the values we want inserted into the AdaptiveCard at runtime:
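A sketch of the .env file (the variable names are illustrative):

```
# .env – values inserted into the About card at runtime
BOT_NAME=Service Desk Bot
BOT_VERSION=1.0.0
BOT_DESCRIPTION=I can help you list and manage your service desk tickets.
```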

This is fine for the example here but in reality you’ll probably be hitting remote services for records and parsing returned JSON payloads, rendering carousels of cards.

The Class

about.js is the object representation of the card. It provides attributes for each item of source data and a method to generate a card schema for our bot. Here we go:

The constructor simply offloads incoming arguments to class properties. The toCard() method reads the about.json schema and recursively does a find/replace job on the class properties. A card is created and the updated schema is assigned to the card’s content property. The contentType attribute in the JSON payload tells a handling function that the schema represents an AdaptiveCard.
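A sketch of what about.js might look like. To keep the example self-contained, the schema is inlined (and trimmed); in the bot it would be read from cards/about.json with fs.readFileSync:

```javascript
// Sketch of about.js. The schema is inlined (trimmed) for the example;
// in the bot it would be loaded from cards/about.json.
const template = {
  type: 'AdaptiveCard',
  version: '1.0',
  body: [
    { type: 'TextBlock', text: '%botName%' },
    { type: 'TextBlock', text: 'Version %version%' }
  ]
};

class About {
  constructor(botName, version) {
    // offload incoming arguments to class properties
    this.botName = botName;
    this.version = version;
  }

  toCard() {
    // find/replace each %property% placeholder in the serialised schema
    let schema = JSON.stringify(template);
    for (const key of Object.keys(this)) {
      schema = schema.split('%' + key + '%').join(this[key]);
    }
    // contentType tells a handling function this is an AdaptiveCard
    return {
      contentType: 'application/vnd.microsoft.card.adaptive',
      content: JSON.parse(schema)
    };
  }
}
```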

The Bot

In our bot we have a series of event handlers that trigger based on input from the user via the communication app, or from a cognitive service, which distils input from the user into an intent.

For this example, let’s assume that we have an intent called Show.Help. Utterances from the user such as ‘tell me about yourself’ or quite simply ‘help’ might resolve to this intent.

So we need to add a handler (function) in app.js that responds to the Show.Help intent (this is called a triggerAction). The handler deals with the dialog (interaction) between the user and the bot so we need it to both generate the About card and handle any interactions the card supports (such as clicking the Submit Feedback button on the card).

Note that the dialog between user and bot ends when the endDialog function is called, or when the conditions of the cancelAction are met.

Here’s the code for the handler:
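Here’s a sketch of such a handler, using the Bot Builder v3 SDK for Node.js (the builder and bot variables, and the About class, are assumed to come from the surrounding app.js):

```javascript
// Sketch of the Show.Help handler in app.js (Bot Builder v3 SDK).
// builder, bot and About are assumed to be defined elsewhere in the app.
bot.dialog('ShowHelp', (session, args) => {
  if (session.message && session.message.value) {
    // A dialog is in session: the message is a card interaction
    // (e.g. the Submit Feedback button), so handle the submitted value.
    session.endDialog('Thanks for your feedback!');
  } else {
    // New dialog: build the About card and send it back on the channel.
    const about = new About(process.env.BOT_NAME, process.env.BOT_VERSION);
    const msg = new builder.Message(session).addAttachment(about.toCard());
    session.send(msg);
  }
}).triggerAction({ matches: 'Show.Help' });
```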

The function starts with a check to see if a dialog is in session (i.e. a message was received). If not (the else condition), it’s a new dialog.

We instantiate an instance of the About class and use the toCard() method to generate a card to add to the message the bot sends back to the channel. So you end up with this:


And there you have it. There are many AdaptiveCard examples online but I couldn’t find any for Node.js that covered the manipulation of cards at runtime. Now, go forth and build fantastic user experiences for your customers!

Your Modern Collaboration Landscape

There are many ways people collaborate within your organisation. You may or may not enjoy the fruits of that collaboration. Does your current collaboration landscape cater for the wide variety of groups that form (organically or inorganically) to build relationships and develop your business?

Moving to the cloud is a catalyst for re-evaluating your collaboration solutions and their value. Platforms like Office 365 are underpinned by search/discovery tools that can traverse and help draw insight from the output of collaboration, including conversations and connections between people and information. Modern applications open up new opportunities to build working groups that include people from outside your organisation with whom you can freely and securely share content.

I’ve been in many discussions with customers on how enabling technologies play a role in the modern collaborative landscape. Part of this discussion is about identifying the various group archetypes and how characteristics can align or differ. I’ve developed a view that forms these groups into three ‘tiers’, as follows:


Organisations should consider a solution for each tier, because there are requirements in each tier that are distinct. The challenge for an organisation (as part of a wider Digital Workplace strategy) is to:

  • Understand how existing and prospective solutions will meet collaboration requirements in each tier, and communicate that understanding.
  • Develop a platform where information managed in each tier can be shared with other tiers.

Let’s go into the three tiers in more detail.

Tier One (Intranet)

Most organisations I work with have an established Tier One business solution, like a corporate intranet. These are the first to mature. They are logically represented as a hierarchy of containers (i.e. sites), with a mix of implicit and explicit access control (and associated auditing difficulties). The principal use is to store documents and host authored web content (such as news). Tier One systems are usually dependent on solutions in other tiers to facilitate (and retain) group conversations or discussions.

  • Working groups are hierarchical and long term, based on a need to model the relationships between groups in an organisation (e.g. Payroll sits under Finance, Auditing sits under Payroll).
  • Activity here is closed and formal. Contribution is restricted to smaller groups.
  • Information is one-way and top down. Content is authored and published for group-wide or organisation-wide consumption.
  • To get things done, users will be more dependent on a Service Desk (for example: managing access control, provisioning new containers), at the cost of agility.
  • Groups are established here to work towards a business outcome or goal (deliver a project, achieve our organisation’s objectives for 2019).

Tier Three (Social Network):

Tier Three business solutions represent your organisation’s social network. Maturity here ranges from “We launched [insert platform here] and no-one is using it” to “We’ve seen an explosion in adoption and it’s Giphy city”. They are usually dependent on solutions in other tiers to provide capabilities such as web content/document management (case in point: O365 Groups and Yammer).

  • Tier Three groups here are flattened, and cannot by design model a hierarchy. They tend to be long term, and prone to stagnation.
  • Groups represent communities, capabilities and similar interest groups, all of which are of value to your organisation. At this point you say: “I understand how the ‘Welcome to Yammer’ group is valuable, but what about the ‘Love Island Therapy’ group?”. At this point I say: “Here you have a collection of individuals who are proactively using and adopting your platform”.
  • Unlike in the other tiers, groups here tend to have no business outcome, although they’ll have objectives to gain momentum and visibility.
  • Collaboration here is open (public) and informal, down to the #topics people discuss and the language that is used.
  • A good Tier Three solution will be fully self-service, subject to a pre-defined usage policy. There should be no restrictions beyond group-level moderation in terms of who can contribute. If it’s public or green it’s fair game!
  • Tier Three groups have the biggest membership, and can support thousands of members.

Tier Two (Workspaces)

Tier Two comes last, because in my experience it’s the capability that is the least developed in organisations I work with and the last to mature.

A Tier Two business solution delivers a collaborative area for teams such as working groups, committees and project teams. They will provide a combination of features inherent in Tier One and Tier Three solutions. For example, the chat/discussion capabilities of a Tier Three solution and the content management capabilities of a Tier One solution.

  • Tier Two groups here are flattened, and cannot by design model a hierarchy. They tend to be short term, in place to support a timeboxed initiative or activity.
  • Groups represent working groups, committees and project teams, with a need to create content and converse. These groups are coalitions, including representation from different organisational groups that need to come together to deliver an outcome.
  • Groups work towards a business outcome, for example: develop a business case, deliver a document.
  • Collaboration here tends to be closed (restricted to a small group) and semi-formal, but it is possible for such groups to be either closed and formal or open and informal.
  • A good Tier Two solution will be fully self-service, subject to a pre-defined usage policy. There should be no restrictions beyond group-level moderation in terms of who can contribute.
  • Groups represent a small number of individuals, and do not grow to the size of departmental (Tier One) groups or social (Tier Three) groups.

The three-tier view identifies the different ways collaboration happens within your organisation. It is solution agnostic: you can advocate any technology in any tier if it meets the requirement. The view helps evaluate the diverse needs of your organisation, and determine how effective your current solutions are at meeting requirements for collaboration and information working.



Agile Teams to High Performing Machines

Agile teams are often under scrutiny as they find their feet and as their sponsors and stakeholders realign expectations. Teams can struggle due to many reasons. I won’t list them here, you’ll find many root causes online and may have a few yourself.

Accordingly, this article is for Scrum Masters, Delivery Managers or Project Managers who may work to help turn struggling teams into high performing machines. The key to success here is measures, measures and measures. 

I have a technique I use to performance manage agile teams involving specific Key Performance Indicators (KPIs). To date it’s worked rather well. My overall approach is as follows:

  • Present the KPIs to the team and rationalise them. Ensure you have the team buy-in.
  • Have the team initially set targets against each KPI. It’s OK to be conservative. Goals should be achievable in the current state and subsequently improved upon.
  • Each sprint, issue a mid-sprint report detailing how the team is tracking against KPIs. Use On Target and Warning statuses to indicate where the team has to up its game.
  • Provide a KPI de-brief as part of the retrospective. Provide insight into why any KPIs were not satisfied.
  • Work with the team on setting the KPIs for the next sprint at the retrospective.

I use a total of five KPIs, as follows:

  • Total team hours worked (logged) in scrum
  • Total [business] value delivered vs projected
  • Estimation variance (accuracy in estimation)
  • Scope vs Baseline (effectiveness in managing workload/scope)
  • Micro-Velocity (business value the team can generate in one hour)

I’ve provided an Agile Team Performance Tracker for you to use that tracks some of the data required to use these measures. Here’s an example dashboard you can build using the tracker (click to enlarge):


In this article, I’d like to cover some of these measures in detail, including how tracking these measures can start to affect positive change in team performance. These measures have served me well and help to provide clarity to those involved.

Estimation Variance

Estimation variance is a measure I use to track estimation efficiency over time. It relies on the team providing hours-based estimates for work items, but is attributable to your points-based estimates. As a team matures and gets used to estimation, I expect the time invested to more accurately reflect what was estimated.

I define this KPI as a +/-X% value.

So for example, if the current Estimation Variance is +/-20%, it means the target for team hourly estimates, on average for work items in this sprint, should be tracking no more than 20% above or below logged hours for those work items. I calculate the estimation variance as follows:

[estimation variance] = ( ([estimated time] - [actual time]) / [estimated time] ) x 100
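As a quick sketch (JavaScript again, to match the earlier bot examples), the per-item calculation takes the difference between estimated and actual hours as a fraction of the estimate:

```javascript
// Estimation variance for a single work item, as a signed percentage.
// A negative result means the team under-estimated (actual exceeded the
// estimate); a positive result means it over-estimated.
function estimationVariance(estimatedHours, actualHours) {
  return ((estimatedHours - actualHours) / estimatedHours) * 100;
}
```

For example, an item estimated at 10 hours that actually took 12 yields -20%, i.e. under-estimation.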

If the value is less than the negative threshold, it means the team is under-estimating. If the value is more than the positive threshold, it means the team is over-estimating. Either way, if you’re outside the threshold, it’s bad news.

“But why is over-estimating an issue?” you may ask. “An estimate is just an estimate. The team can simply move more items from the backlog into the sprint.” Remember that estimates are used as a baseline for future estimates and planning activities. A lack of discipline in this area may impede release dates for your epics.

You can use this measure against each of the points tiers your team uses. For example:


In this example, the team is under-estimating bigger ticket items (5’s and 8’s), so targeted efforts can be made in the next estimation session to bring this within the target threshold. Overall though, the team is tracking pretty well – the overall variance of -4.30% could well be within the target KPI for this sprint.

Scope vs Baseline

Scope vs Baseline is a measure used to assess the team’s effectiveness at managing scope. Let’s consider the following 9-day sprint burndown:


The blue line represents the baseline: the projected burn-down based on the scope locked in at the start of the sprint. The orange line represents the scope – the total scope yet to be delivered on each day of the sprint.

Obviously, a strong team tracks against or below the baseline, and will take on additional scope to stay aligned to the baseline without falling too far below it. Teams that overcommit/under-deliver will ‘flatline’ (not burn down) and track above the baseline, and even worse may increase scope when tracking above the baseline.

The Scope vs Baseline measure is tracked daily, with the KPI calculated as the average across all days in the sprint.

I define this KPI as a +/-X% value.

So for example, if the current Scope vs Baseline is +/-10%, it means the actual should not track on average more than 10% above or below the baseline. I calculate the Scope vs Baseline as follows:

[scope vs baseline] = ( ([actual] / [projected]) x 100 ) - 100
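As a sketch, the daily calculation and the sprint-level average look like this:

```javascript
// Daily scope-vs-baseline variance: positive when the team tracks above
// the baseline, negative when below it.
function dailyVariance(actual, projected) {
  return (actual / projected) * 100 - 100;
}

// The sprint KPI is the average of the daily variances.
function scopeVsBaselineKpi(days) {
  // days: array of { actual, projected }, one entry per sprint day
  const total = days.reduce((sum, d) => sum + dailyVariance(d.actual, d.projected), 0);
  return total / days.length;
}
```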

Here’s an example based on the burndown chart above:


The variance column stores the value for the daily calculation. The result is the Scope vs Baseline KPI (+4.89%). We see the value ramp up into the positive towards the end of the sprint, representing our team’s challenge closing out its last work items. We also see the team tracking at -60% below the baseline on day 5, which subsequently triggers a scope increase to track against the baseline – a behaviour indicative of a good performing team.


Micro-Velocity

Velocity is the most well-known measure. If it goes up, the team is well oiled and delivering business value. If it goes down, it’s the by-product of team attrition, communication breakdowns or other distractions.

Velocity is a relative measure, so whether it’s good or bad depends on the team and the measures taken in past sprints.

What I do is create a variation on the velocity measure that is defined as follows:

[micro velocity] = SUM([points done]) / SUM([hours worked])

I use a daily calculation of micro-velocity (compared against past iterations) to determine the impact that team attrition and on-boarding new members will have within a single sprint.

In conclusion, using some measures as KPIs on top of (but dependent on) the reports provided by the likes of Jira and Visual Studio Online can really help a team to make informed decisions on how to continuously improve. Hopefully some of these measures may be useful to you.

5 Tips: Designing Better Bots

Around about now many of you will be in discussions internally or with your partners on chatbots and their applications.

The design process for any bot distils a business process and associated outcome into a dialog. This is a series of interactions between the user and the bot where information is exchanged. The bot must deliver that outcome expediently, seeking clarifications where necessary.

I’ve been involved in many workshops with customers to elicit and evaluate business processes that could be improved through the use of bots. I like to advocate a low risk, cost effective and expedient proof of concept, prior to a commitment to full scale development of a solution. Show, rather than tell, if you will.

With that in mind, I present to you my list of five house rules or principles to consider when deciding if a bot can help improve a business process:

1. A bot can’t access information that is not available to your organisation

Many bots start out life as a proof of concept, or an experiment. Time and resources will be limited at first. You want to prove the concept expediently and with agility. You’ll want to avoid blowing the scope in order to stand up new data sources or staging areas for data.

As you elaborate on the requirements, ask yourself where the data is coming from and how it is currently aggregated or modified in order to satisfy the use case. Your initial prototype may well be constrained by the data sources currently in place within your organisation (and accessibility to those data sources).

Ask the questions “Where is this information at rest?”, “How do you access it?”, “Is it manually modified?”.

2. Don’t ask the user for information they don’t know or have to go and look up

Think carefully – does the bot really need to seek clarification? Let’s consider the following example:


In practice, you’re forcing the user to sign in here to some system or dig around their inbox and copy/paste a unique identifier. I’ve yet to meet anyone who has the capacity to memorise things like their service ticket reference numbers. You can design smarter. For example:

  1. Assume the user is looking for the last record they created (you can use your existing data to determine if this is likely)
  2. Show them their records. Get them to pick one.
  3. Use the dialog flow to retain the context of a specific record

By all means, have the bot accommodate scenarios where the user does provide a reference number. Remember, your goal is to reduce time to the business outcome and eliminate menial activity. (Trust me. Looking up stuff in one system to use in another system is menial activity.)

3. Let’s be clear – generic internet keyword searches are an exception


When Siri says ‘here’s what I found on the Internet’, it’s catching an exception; a fall-back option because it’s not been able to field your query. It’s far better than ‘sorry, I can’t help you’. A generic internet/intranet keyword search should never be your primary use case. Search and discovery activity is key to a bot’s value proposition, but these functions should be underpinned by a service fabric that targets (and aggregates) specific organisational data sources. You need to search the internet? Please, go use Bing or Google.

4. Small result sets please

As soon as a chat-bot has to render more than 5 records in one response, I consider this an auto-fail.

Challenge any assertion that a user would want to see a list of more than 5 results, and de-couple the need to summarise data from the need to access individual records. Your bot needs to respond quickly, so avoid expensive queries for data and large resulting data sets that need to be cached somewhere. For example:


In this example, the bot provides a summary with enough information to give the user an option to take further action (do you want to escalate this?). It also informs the most appropriate criteria for the next user-driven search/discovery action (reports that are waiting on me).

Result sets may return tens, hundreds or thousands of records, but the user inevitably narrows this down to one, so the question is “how do you get from X results down to 1?”.

Work with the design principle that the bot should apply a set of criteria that returns the ‘X most likely’. Use default criteria based on the most common filtering scenario, but allow the user to re-define that criteria. For example:



5. Don’t remove the people element

Remember a bot should eliminate the menial work a person does, not substitute the person. If you’re thinking of designing a bot to substitute or impersonate a person, think again.


No one wants to do menial work, and I’d hedge my bets that there is no one in your organisation whose workload is 100% menial. Those that do menial work would much rather re-focus efforts on more productive and rewarding endeavours.

Adding a few Easter Eggs to your bot (i.e. human-like responses) is a nice-to-have. Discovery and resulting word of mouth can assist with adoption.

Consider whether your process involves the need to connect with a person (either as a primary use case, or an edge case). If this is the case, ensure you can expedite the process. Don’t simply serve up a contact number. Instead, consider how to connect the user directly (via tools like Skype, if they are ‘online’). Alternatively, allow the user to request a call-back.


Remember, there are requirements bots and agents cannot satisfy. Define some core principles or house rules with your prospective product owners and test their ideas against them. It can lead you to high value business solutions.

Selling Adoption Services

It’s no secret that Partners have challenges selling change & adoption services. I’m talking specifically about Partners who will provide offerings to complement other professional services they provide, like technical delivery. Proposals are subject to scrutiny and in the absence of a value proposition, change & adoption will be one of the first things to go in a re-scoping exercise.

“It’s OK. We do change & adoption ourselves”

Organisations like Kloud deliver a range of professional services central to our mission to ‘move organisations to the cloud’. In order to succeed, we call upon other competencies to support technical delivery, such as project management, delivery management, and change & adoption.

Your investment in change management equates directly to the impact you think a change is going to have. You take into consideration the impact to people in your organisation, the cost of delivering activities to support them, and the logistics required to deliver them.

“There are known knowns; there are things we know we know…. But there are also unknown unknowns – the [things] we don’t know we don’t know”

Sometimes the impact of change to your organisation may not be clear because the functions of the technology you are introducing aren’t fully understood. In-house change teams can struggle to apply in practice what they’ve been taught.

In my experience a successful change team is a coalition of people who can engage the business, people who understand the technology, and people who understand the methodology. The objective for Partners therefore, is to present a compelling offering to their customers which brings together the customer’s knowledge of and ability to engage with their business, and the Partner’s experience of delivering change management activities specifically to the (cloud) services it specialises in.

The Return on Investment

The value proposition for an adoption service is crucial. Prosci have led the charge here. They understand that demonstrating a return on investment is key to prospective customers adopting their change management framework. The assertion is that success (based on measures you define) will depend on a level of investment in managing organisational change (whatever that looks like; small to large). There are many attempts to distil this into a basic equation (e.g. CHANGE + CHANGE MANAGEMENT = SUCCESS!)

Prosci constitutes a valuable and powerful set of tools and techniques, hardened by years of investment and research. I see it as a box of tools that can be selectively applied. Partners can provide compelling offerings by taking the foundation of these methodologies and developing compelling, cost effective solutions to support their products and services. We see evidence of this through the evolution of change and adoption services Microsoft are providing through FastTrack, available to Partners to support adoption of O365 workloads.

When forecasting ROI, Partners can refer to their own case studies to demonstrate ROI and draw parallels between engagements (“the conditions are similar, therefore the outcome will be similar”). Those that are licensed to do so can call upon the Prosci ‘library of statistics’ to support their value proposition.

Partners like Kloud who specialise in the enablement of cloud services can lean on the capabilities of modern cloud technology to support their campaigns. O365 in particular (MyAnalytics, the Power BI Adoption Content Pack, Yammer) can serve up many measures and insights that historically would have been difficult to elicit. In short: technology is helping to strengthen the change management value proposition.

The funny thing is, Partners need a change management framework to successfully embed a change management competency within a customer, and the same is required to sell the value proposition within their own organisation (to sales representatives, account managers and engagement leads). Consider this a good test of your offering.

Moving forwards, now is a great time for Partners to be discussing change management with their customers. What’s their level of maturity? How successful have initiatives been? Is the ROI clear and is it retrospectively validated? What is their current view of Change Management? What are they looking to Partners to provide?

Connecting Surface Hubs to Microsoft Operations Management Suite

At Kloud, we have Surface Hubs in our Melbourne, Adelaide and Sydney regional offices. It makes sense to monitor them using a central service, into which monitoring and diagnostic information can be aggregated.

This blog post covers the process of connecting your Surface Hub devices to Microsoft Operations Management Suite (OMS).

Before you start:

  • You need a Surface Hub (yes, obvious I know) set up and connected to your network. It’s a good idea to set up the device account before you connect it to OMS.
  • You’ll need to have registered an OMS workspace. Select a pricing scheme (there is a free option) suitable for you. (New to OMS?)
  • You’ll need read access to your Azure Subscription (else you won’t be able to add solutions to your workspace).
  • Ensure your Surface Hub device (network) names can differentiate the devices if your enterprise has more than one Surface Hub. The hubs will be listed by device name (not device account) in OMS dashboards. (You can change the device name via Settings > Device management > About.)
  • You’ll need the administrator credentials for your Surface Hub so that you can connect it (it will ask for these when you launch the Settings app).

Step 1: Add the Surface Hub Solution to your OMS Workspace

Sign in to OMS at <workspace>. Open the solution gallery and look for the Surface Hub Solution:

Surface Hub Solution

You can add Solutions to your Workspace provided you have read access to your Azure subscription. Adding this one puts a new tile on your Workspace’s Home Dashboard, but you won’t be able to interact with it until you’ve connected at least one Surface Hub, so you’ll see this:

Surface Hub (No Connected Devices)

You’ll need the workspace ID and workspace key. Get these from Operations Management Suite via Settings > Connected Services > Windows Servers (you can use either the primary key or the secondary key as your workspace key). I recommend you make this information accessible from the Surface Hub, as you’ll need to copy/paste it. Use OneNote, a text file on your OneDrive, or email it to yourself; whatever works.

OMS - Windows Devices

Note that this view shows the number of connected Windows Computers. Surface Hubs are Windows Computers, so adding a new one will increase the count by one.
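Under the hood, the workspace ID and key are what the agent uses to authenticate to your workspace. As a rough illustration of how this pairing works, the Log Analytics HTTP Data Collector API signs each request with an HMAC-SHA256 signature derived from the workspace key. A minimal Python sketch (placeholder values only, not real credentials):

```python
import base64
import hashlib
import hmac

def build_signature(workspace_id: str, workspace_key: str,
                    content_length: int, date: str) -> str:
    """Build the Authorization header used by the Log Analytics HTTP
    Data Collector API. The workspace key (primary or secondary) is a
    base64-encoded secret used to HMAC-SHA256 sign the request."""
    string_to_sign = (f"POST\n{content_length}\napplication/json\n"
                      f"x-ms-date:{date}\n/api/logs")
    decoded_key = base64.b64decode(workspace_key)
    signature = base64.b64encode(
        hmac.new(decoded_key, string_to_sign.encode("utf-8"),
                 hashlib.sha256).digest()).decode("utf-8")
    return f"SharedKey {workspace_id}:{signature}"

# Placeholder workspace ID and key, for illustration only.
header = build_signature("00000000-0000-0000-0000-000000000000",
                         base64.b64encode(b"fake-key").decode(),
                         128, "Mon, 01 Jan 2024 00:00:00 GMT")
```

The Surface Hub’s agent handles all of this for you; the sketch just shows why the key must be kept secret and why either the primary or secondary key will work interchangeably.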

Step 2: Connect your Surface Hub

Now, fire up your Surface Hub and open the Settings app. You’ll need the administrator account to access the Settings app on a Surface Hub.

From the Settings app, browse to Device management, then tap Configure OMS Settings.

Hub - Device Management

Enter your workspace ID and workspace (primary or secondary) key. Tapping Apply will kick off the enrolment process, which provisions an agent on the Surface Hub that talks to your OMS service.

Connect to OMS

Step 3: Back to Operations Management Suite

It may take some time for each Surface Hub to register with OMS. The tile says ‘up to 24 hours’, but for us it happened within minutes. You’ll know the device has enrolled because the tile on your home page will update:

Tile - Active Devices

Next, you can fine tune the information an installed agent will send to OMS. These agents are installed automatically on Surface Hubs when you connect them to OMS.

From Home > Settings > Data, you can select items from the Windows Event Logs you want sent from devices to OMS. The example below will push logged entries with a level of Error in both Application and System logs.
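That collection rule can be sketched as a simple filter: keep only entries at the Error level from the Application and System logs. The record format below is hypothetical, purely to illustrate the combination of log names and levels:

```python
# Logs and levels selected in OMS (Home > Settings > Data) in the
# example above; everything else is dropped by the agent.
COLLECTED_LOGS = {"Application", "System"}
COLLECTED_LEVELS = {"Error"}

def should_send(record: dict) -> bool:
    """True if an event record matches the collection rules above."""
    return (record.get("log") in COLLECTED_LOGS
            and record.get("level") in COLLECTED_LEVELS)

events = [
    {"log": "Application", "level": "Error", "message": "App crashed"},
    {"log": "System", "level": "Warning", "message": "Disk slow"},
    {"log": "Security", "level": "Error", "message": "Audit failure"},
]
sent = [e for e in events if should_send(e)]  # only the first record matches
```

Note the rule is an AND of log name and level: a Warning in System, or an Error in a log you haven’t selected, won’t be forwarded.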

Things to note:

  • If you change the device name after the Surface Hub has been enrolled, OMS will treat the renamed device as a new device, since it differentiates devices by name. It’s a good idea to set a proper device name (and settle on a naming convention for all your devices) before you connect one.
  • OMS will not collect historical diagnostic logs from these devices. It will start collecting new information as soon as the local agent configuration is updated (i.e. when you tell OMS to start collecting additional information from the device).
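If you want to enforce a naming convention before enrolment, a tiny validator can help catch stragglers. The SHUB-<SITE>-<NN> pattern below is a hypothetical example (the site codes happen to match our three offices), not an official scheme:

```python
import re

# Hypothetical convention: SHUB-<SITE>-<NN>, e.g. SHUB-MEL-01 for the
# first hub in Melbourne. Adjust the site codes to suit your offices.
NAME_PATTERN = re.compile(r"^SHUB-(MEL|SYD|ADL)-\d{2}$")

def is_valid_hub_name(name: str) -> bool:
    """Check a proposed device name against the naming convention."""
    return bool(NAME_PATTERN.match(name))
```

Running each proposed name through a check like this before you connect the device saves you from renames later, which (as noted above) OMS would treat as brand-new devices.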