Turning Agile Teams into High-Performing Machines

Agile teams are often under scrutiny as they find their feet and as their sponsors and stakeholders realign expectations. Teams can struggle for many reasons; I won’t list them here, as you’ll find plenty of root causes online and may have a few of your own.

Accordingly, this article is for Scrum Masters, Delivery Managers and Project Managers who work to turn struggling teams into high-performing machines. The key to success here is measures, measures and measures.

I have a technique I use to performance-manage agile teams involving specific Key Performance Indicators (KPIs). To date it’s worked rather well. My overall approach is as follows:

  • Present the KPIs to the team and rationalise them. Ensure you have the team’s buy-in.
  • Have the team set initial targets against each KPI. It’s OK to be conservative. Goals should be achievable in the current state and subsequently improved upon.
  • Each sprint, issue a mid-sprint report detailing how the team is tracking against the KPIs. Use On Target and Warning statuses to indicate where the team has to up its game.
  • Provide a KPI de-brief as part of the retrospective, with insight into why any KPIs were not satisfied.
  • Work with the team at the retrospective to set the KPIs for the next sprint.

I use a total of five KPIs, as follows:

  • Total team hours worked (logged) in scrum
  • Total [business] value delivered vs projected
  • Estimation variance (accuracy in estimation)
  • Scope vs Baseline (effectiveness in managing workload/scope)
  • Micro-Velocity (business value the team can generate in one hour)

I’ve provided an Agile Team Performance Tracker for you to use that tracks some of the data required for these measures. Here’s an example dashboard you can build using the tracker:

[Image: example KPI dashboard built with the Agile Team Performance Tracker]

In this article, I’d like to cover some of these measures in detail, including how tracking them can start to effect positive change in team performance. These measures have served me well and help to provide clarity to those involved.

Estimation Variance

Estimation variance is a measure I use to track estimation accuracy over time. It relies on the team providing hours-based estimates for work items, but is attributable to your points-based estimates. As a team matures and gets used to estimation, I expect the time invested to more accurately reflect what was estimated.

I define this KPI as a +/-X% value.

So for example, if the current Estimation Variance is +/-20%, the target is that team hourly estimates, on average for work items in this sprint, should track no more than 20% above or below logged hours for those work items. I calculate the estimation variance as follows:

[estimation variance] = ( ([estimated time] - [actual time]) / [estimated time] ) x 100

If the value is less than the negative threshold, it means the team is under-estimating. If the value is more than the positive threshold, it means the team is over-estimating. Either way, if you’re outside the threshold, it’s bad news.
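
To make the calculation concrete, here’s a minimal Python sketch; the work items, hours and threshold are illustrative rather than real sprint data:

def estimation_variance(estimated_hours: float, actual_hours: float) -> float:
    """Percentage variance: negative = under-estimated, positive = over-estimated."""
    return (estimated_hours - actual_hours) / estimated_hours * 100

# Hypothetical sprint data: (work item, estimated hours, logged hours)
items = [
    ("ITEM-101", 8.0, 9.5),
    ("ITEM-102", 4.0, 3.5),
    ("ITEM-103", 16.0, 19.0),
]

THRESHOLD = 20.0  # the +/-20% KPI from the example above

for name, est, actual in items:
    variance = estimation_variance(est, actual)
    status = "On Target" if abs(variance) <= THRESHOLD else "Warning"
    print(f"{name}: {variance:+.2f}% ({status})")

# Overall sprint variance, calculated from the summed hours
total_est = sum(e for _, e, _ in items)
total_actual = sum(a for _, _, a in items)
print(f"Overall: {estimation_variance(total_est, total_actual):+.2f}%")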

“But why is over-estimating an issue?” you may ask. “An estimate is just an estimate. The team can simply move more items from the backlog into the sprint.” Remember that estimates are used as a baseline for future estimates and planning activities. A lack of discipline in this area may impede release dates for your epics.

You can use this measure against each of the points tiers your team uses. For example:

[Image: estimation variance broken down by points tier]

In this example, the team is under-estimating bigger ticket items (5s and 8s), so targeted efforts can be made in the next estimation session to bring these within the target threshold. Overall, though, the team is tracking pretty well – the overall variance of -4.30% could well be within the target KPI for this sprint.
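
If you track (points, estimated hours, actual hours) per work item, the per-tier breakdown can be reproduced with a few lines of Python; the records below are made up for illustration:

from collections import defaultdict

# Hypothetical records for one sprint: (points tier, estimated hours, actual hours)
records = [
    (2, 4.0, 3.8), (2, 3.0, 3.2),
    (5, 10.0, 12.5), (5, 12.0, 14.0),
    (8, 20.0, 26.0),
]

totals = defaultdict(lambda: [0.0, 0.0])  # tier -> [estimated, actual]
for points, est, actual in records:
    totals[points][0] += est
    totals[points][1] += actual

for tier in sorted(totals):
    est, actual = totals[tier]
    print(f"{tier}-point items: {(est - actual) / est * 100:+.2f}%")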

Scope vs Baseline

Scope vs Baseline is a measure used to assess the team’s effectiveness at managing scope. Let’s consider the following 9-day sprint burndown:

[Image: 9-day sprint burndown chart with baseline and scope lines]

The blue line represents the baseline: the projected burn-down based on the scope locked in at the start of the sprint. The orange line represents the scope: the total scope yet to be delivered on each day of the sprint.

Obviously, a strong team tracks against or below the baseline, and will take on additional scope to stay aligned to the baseline without falling too far below it. Teams that overcommit or underdeliver will ‘flatline’ (not burn down) and track above the baseline, and worse still may increase scope while tracking above the baseline.

The Scope vs Baseline measure is tracked daily, with the KPI calculated as an average across all days in the sprint.

I define this KPI as a +/-X% value.

So for example, if the current Scope vs Baseline is +/-10%, it means the actual should not track on average more than 10% above or below the baseline. I calculate the Scope vs Baseline as follows:

[scope vs baseline] = ( [actual] / [projected] ) x 100 - 100

Here’s an example based on the burndown chart above:

[Image: table of daily Scope vs Baseline calculations]

The variance column stores the value of the daily calculation. The average of these values is the Scope vs Baseline KPI (+4.89%). We see the value ramp up into the positive towards the end of the sprint, representing our team’s challenge closing out its last work items. We also see the team tracking at -60% below the baseline on day 5, which subsequently triggers a scope increase to bring tracking back in line with the baseline – behaviour indicative of a well-performing team.
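
The same calculation in a short Python sketch; the burndown figures are illustrative, chosen so that day 5 lands at -60% as in the example above:

# Projected (baseline) and actual scope remaining for a 9-day sprint
baseline = [90, 80, 70, 60, 50, 40, 30, 20, 10]
actual   = [90, 85, 72, 55, 20, 38, 33, 24, 12]

# Daily variance: ( [actual] / [projected] ) x 100 - 100
daily = [(a / p) * 100 - 100 for a, p in zip(actual, baseline)]

for day, variance in enumerate(daily, start=1):
    print(f"Day {day}: {variance:+.2f}%")

# The KPI is the average across all days in the sprint
print(f"Scope vs Baseline KPI: {sum(daily) / len(daily):+.2f}%")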

Micro-Velocity

Velocity is the most well-known measure. If it goes up, the team is well oiled and delivering business value. If it goes down, it’s the by-product of team attrition, communication breakdowns or other distractions.

Velocity is a relative measure, so whether it’s good or bad depends on the team and the measures taken in past sprints.

What I do is create a variation on the velocity measure that is defined as follows:

[micro velocity] = SUM([points done]) / SUM([hours worked])

I use a daily calculation of micro-velocity (compared against past iterations) to determine the impact team attrition and on-boarding new team members will have within a single sprint.
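
As a minimal sketch with made-up figures, the daily and sprint-level calculations look like this:

def micro_velocity(points_done: float, hours_worked: float) -> float:
    """Business value (points) generated per hour worked."""
    return points_done / hours_worked

# Hypothetical daily figures: (points completed, team hours logged)
days = [(5, 30.0), (8, 32.0), (3, 28.0), (6, 30.0)]

for day, (points, hours) in enumerate(days, start=1):
    print(f"Day {day}: {micro_velocity(points, hours):.3f} points/hour")

# Sprint-level micro-velocity, comparable against past iterations
total_points = sum(p for p, _ in days)
total_hours = sum(h for _, h in days)
print(f"Sprint: {micro_velocity(total_points, total_hours):.3f} points/hour")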


In conclusion, using these measures as KPIs on top of (but dependent on) the reports provided by the likes of Jira and Visual Studio Online can really help a team to make informed decisions about how to continuously improve. Hopefully some of these measures will be useful to you.

5 Tips: Designing Better Bots

Around about now many of you will be in discussions internally or with your partners on chatbots and their applications.

The design process for any bot distils a business process and associated outcome into a dialog. This is a series of interactions between the user and the bot where information is exchanged. The bot must deliver that outcome expediently, seeking clarifications where necessary.

I’ve been involved in many workshops with customers to elicit and evaluate business processes that could be improved through the use of bots. I like to advocate a low risk, cost effective and expedient proof of concept, prior to a commitment to full scale development of a solution. Show, rather than tell, if you will.

With that in mind, I present to you my list of five house rules or principles to consider when deciding if a bot can help improve a business process:

1. A bot can’t access information that is not available to your organisation

Many bots start out life as a proof of concept, or an experiment. Time and resources will be limited at first. You want to prove the concept expediently and with agility. You’ll want to avoid blowing the scope in order to stand up new data sources or staging areas for data.

As you elaborate on the requirements, ask yourself where the data is coming from and how it is currently aggregated or modified in order to satisfy the use case. Your initial prototype may well be constrained by the data sources currently in place within your organisation (and the accessibility of those data sources).

Ask the questions: “Where is this information at rest?”, “How do you access it?”, “Is it manually modified?”.

2. Don’t ask for information the user doesn’t know or has to go and look up

Think carefully – does the bot really need to seek clarification? Let’s consider the following example:

[Image: example bot dialog asking the user for a reference number]

In practice, you’re forcing the user to sign in to some system or dig around their inbox and copy/paste a unique identifier. I’ve yet to meet anyone who has the capacity to memorise things like their service ticket reference numbers. You can design smarter. For example:

  1. Assume the user is looking for the last record they created (you can use your existing data to determine if this is likely)
  2. Show them their records. Get them to pick one.
  3. Use the dialog flow to retain the context of a specific record.

By all means, have the bot accommodate scenarios where the user does provide a reference number. Remember, your goal is to reduce time to the business outcome and eliminate menial activity. (Trust me. Looking up stuff in one system to use in another system is menial activity.)
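
To illustrate, here’s a Python sketch of that lookup logic. Ticket and get_tickets_for() are hypothetical stand-ins for your own data access layer, and no particular bot framework is assumed:

from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Ticket:
    reference: str
    summary: str
    created: datetime

def get_tickets_for(user_id: str) -> list:
    """Hypothetical call into your service desk system."""
    raise NotImplementedError

def resolve_ticket(user_id: str, reference: Optional[str] = None):
    """Pick the ticket the dialog should act on, without forcing the user
    to go and look a reference number up."""
    tickets = get_tickets_for(user_id)
    if reference:  # honour a reference if the user volunteered one
        for ticket in tickets:
            if ticket.reference == reference:
                return ticket, None
    if len(tickets) == 1:  # a single candidate: assume it's the one they mean
        return tickets[0], None
    # Otherwise show the most recent records and ask the user to pick one;
    # your own usage data may justify defaulting to the last record created
    recent = sorted(tickets, key=lambda t: t.created, reverse=True)[:5]
    prompt = "Which of these did you mean?\n" + "\n".join(
        f"{i + 1}. {t.reference}: {t.summary}" for i, t in enumerate(recent))
    return None, prompt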

3. Let’s be clear – generic internet keyword searches are an exception

[Image: Siri falling back to a generic internet search]

When Siri says ‘here’s what I found on the Internet’, it’s catching an exception: a fall-back option because it’s not been able to field your query. It’s far better than ‘sorry, I can’t help you’. A generic internet/intranet keyword search should never be your primary use case. Search and discovery activity is key to a bot’s value proposition, but these functions should be underpinned by a service fabric that targets (and aggregates) specific organisational data sources. You need to search the internet? Please, go use Bing or Google.

4. Small result sets please

As soon as a chatbot has to render more than 5 records in one response, I consider it an auto-fail.

Challenge any assertion that a user would want to see a list of more than 5 results, and de-couple the need to summarise data from the need to access individual records. Your bot needs to respond quickly, so avoid expensive queries for data and large resulting data sets that need to be cached somewhere. For example:

[Image: example bot dialog returning a summary with a follow-up action]

In this example, the bot provides a summary with enough information to give the user an option to take further action (do you want to escalate this?). It also informs the most appropriate criteria for the next user-driven search/discovery action (reports that are waiting on me).

Result sets may return tens, hundreds or thousands of records, but the user inevitably narrows this down to one, so the question is “how do you get from X results down to 1?”.

Work with the design principle that the bot should apply a set of criteria that returns the ‘X most likely’. Use default criteria based on the most common filtering scenario, but allow the user to re-define that criteria. For example:

[Image: example bot dialog applying default search criteria]
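
Here’s a small Python sketch of that principle; find_reports() is a hypothetical query function, and the default criterion reflects the ‘waiting on me’ scenario from the earlier example:

DEFAULT_CRITERIA = {"status": "waiting-on-me"}  # the most common filtering scenario
MAX_RESULTS = 5  # rendering more than 5 records in one response is an auto-fail

def find_reports(criteria: dict) -> list:
    """Hypothetical call into your reporting data source."""
    raise NotImplementedError

def most_likely_reports(user_criteria=None) -> list:
    criteria = dict(DEFAULT_CRITERIA)
    if user_criteria:
        criteria.update(user_criteria)  # the user re-defined the criteria
    return find_reports(criteria)[:MAX_RESULTS]  # the 'X most likely'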


5. Don’t remove the people element

Remember a bot should eliminate the menial work a person does, not substitute the person. If you’re thinking of designing a bot to substitute or impersonate a person, think again.

[Image: example bot dialog attempting to stand in for a person]

No one wants to do menial work, and I’d hedge my bets that there is no one in your organisation whose workload is 100% menial. Those that do menial work would much rather re-focus their efforts on more productive and rewarding endeavours.

Adding a few Easter Eggs to your bot (i.e. human-like responses) is a nice to have. Discovery and resulting word of mouth can assist with adoption.

Consider whether your process involves the need to connect with a person (either as a primary use case or an edge case). If so, ensure you can expedite the process. Don’t simply serve up a contact number. Instead, consider how to connect the user directly (via tools like Skype, if they are ‘online’). Alternatively, allow the user to request a call-back.

[Image: example bot dialog offering to connect the user with a person]


Remember, there are requirements bots and agents cannot satisfy. Define some core principles or house rules with your prospective product owners and test their ideas against them. It can lead you to high value business solutions.

Selling Adoption Services

It’s no secret that Partners have challenges selling change & adoption services. I’m talking specifically about Partners who provide these offerings to complement the other professional services they deliver, like technical delivery. Proposals are subject to scrutiny, and in the absence of a value proposition, change & adoption will be one of the first things to go in a re-scoping exercise.

“It’s OK. We do change & adoption ourselves”

Organisations like Kloud deliver a range of professional services centric to our mission to ‘move organisations to the cloud’. In order to succeed, we call upon other competencies to support technical delivery such as project management, delivery management, and change & adoption.

Your investment in change management equates directly to the impact you think a change is going to have. You take into consideration the impact to people in your organisation, the cost of delivering activities to support them, and the logistics required to deliver them.

“There are known knowns; there are things we know we know…. But there are also unknown unknowns – the [things] we don’t know we don’t know”

Sometimes the impact of a change to your organisation may not be clear because the functions of the technology you are introducing aren’t fully understood. In-house change teams can struggle to apply in practice what they’ve been taught.

In my experience a successful change team is a coalition of people who can engage the business, people who understand the technology, and people who understand the methodology. The objective for Partners therefore, is to present a compelling offering to their customers which brings together the customer’s knowledge of and ability to engage with their business, and the Partner’s experience of delivering change management activities specifically to the (cloud) services it specialises in.

The Return on Investment

The value proposition for an adoption service is crucial. Prosci have led the charge here: they understand that demonstrating a return on investment is key to prospective customers adopting their change management framework. The assertion is that success (based on measures you define) depends on a level of investment in managing organisational change (whatever that looks like; small to large). There have been many attempts to distil this into a basic equation (e.g. CHANGE + CHANGE MANAGEMENT = SUCCESS!).

Prosci constitutes a valuable and powerful set of tools and techniques, hardened by years of investment and research. I see it as a box of tools that can be selectively applied. Partners can provide compelling offerings by taking the foundation of these methodologies and developing compelling, cost effective solutions to support their products and services. We see evidence of this through the evolution of change and adoption services Microsoft are providing through FastTrack, available to Partners to support adoption of O365 workloads.

When forecasting ROI, Partners can refer to their own case studies to demonstrate ROI and draw parallels between engagements (“the conditions are similar, therefore the outcome will be similar”). Those that are licensed to do so can call upon the Prosci ‘library of statistics’ to support their value proposition.

Partners like Kloud who specialise in the enablement of cloud services can lean on the capabilities of modern cloud technology to support their campaigns. O365 in particular (My Analytics, Power BI Adoption Content Pack, Yammer) can serve up many measures and insights that historically would have been difficult to elicit. In short: technology is helping to strengthen the change management value proposition.

The funny thing is, Partners need a change management framework to successfully embed a change management competency within a customer, and the same is required to sell the value proposition within their own organisation (to sales representatives, account managers and engagement leads). Consider this a good test of your offering.

Moving forwards, now is a great time for Partners to be discussing change management with their customers. What’s their level of maturity? How successful have initiatives been? Is the ROI clear and is it retrospectively validated? What is their current view of Change Management? What are they looking to Partners to provide?

Connecting Surface Hubs to Microsoft Operations Management Suite

At Kloud, we have Surface Hubs in our Melbourne, Adelaide and Sydney regional offices. It makes sense to monitor them using a central service to which monitoring and diagnostic information can be aggregated.

This blog post covers the process of connecting your Surface Hub devices to Microsoft Operations Management Suite (OMS).

Before you start:

  • You need a Surface Hub (yes, obvious I know) set up and connected to your network. It’s a good idea to set up the device account before you connect it to OMS.
  • You’ll need to have registered an OMS workspace. Select a pricing scheme (there is a free option) suitable for you. (New to OMS?)
  • You’ll need read access to your Azure Subscription (else you won’t be able to add solutions to your workspace).
  • Ensure your Surface Hub device (network) names can be used to differentiate the devices if your enterprise has more than one Surface Hub. The hubs will be listed by device name (not device account) in OMS dashboards. You can change the device name via Settings > Device management > About.
  • You’ll need the administrator credential for your Surface Hub so that you can connect it (you’ll be asked for this when you launch the Settings app).

Step 1: Add the Surface Hub Solution to your OMS Workspace

Sign-in to OMS at <workspace>.portal.mms.microsoft.com. Open the solution gallery. Look for the Surface Hub Solution:

[Image: Surface Hub solution in the OMS solution gallery]

You can add Solutions to your Workspace providing you have read-access to your Azure subscription. It’s going to add a new tile to your Workspace’s Home Dashboard, but you won’t be able to interact with it until you’ve connected at least one Surface Hub, so you’ll see this:

[Image: Surface Hub tile showing no connected devices]

You’ll need the workspace ID and workspace key. Get these from Operations Management Suite via Settings > Connected Services > Windows Servers (you can use either the primary key or the secondary key as your workspace key). I recommend you make this information accessible from the Surface Hub, as you’ll need to copy/paste it. Use OneNote, a text file on your OneDrive, or email it to yourself – whatever works.

[Image: OMS settings showing connected Windows computers]

Note this view denotes the number of connected Windows Computers. Surface Hubs are Windows Computers, so adding a new one will increase the number of connected Windows Computers by 1.

Step 2: Connect your Surface Hub

Now, fire up your Surface Hub and open the Settings app. You’ll need the Administrative account to access the Settings app on a Surface Hub.

From the Settings app, browse to Device management, then tap Configure OMS Settings.

[Image: Device management screen in the Surface Hub Settings app]

Enter your workspace ID and workspace (primary or secondary) key. Tapping Apply will kick off the enrolment process, provisioning an agent on the Surface Hub which will talk to your OMS service.

[Image: Configure OMS Settings dialog on the Surface Hub]

Step 3: Back to Operations Management Suite

It may take some time for each Surface Hub to register with OMS. The tile says ‘up to 24 hours’, but for us it happened within minutes. You’ll know the device has enrolled because the tile on your home page will update:

[Image: Surface Hub tile showing active devices]

Next, you can fine tune the information an installed agent will send to OMS. These agents are installed automatically on Surface Hubs when you connect them to OMS.

From Home > Settings > Data, you can select items from the Windows Event Logs you want sent from devices to OMS. The example below will push logged entries with a level of Error in both Application and System logs.

Things to note:

  • If you change the device name after the Surface Hub has been enrolled, OMS will treat the renamed device as a new device, since it differentiates by device name. It’s a good idea to set a proper device name (and settle on a naming convention for all your devices) before you connect one.
  • OMS will not collate historical diagnostic logs from these devices. It will start collecting new information as soon as the local agent configuration is updated (i.e. you tell OMS to start collecting additional information from the device).