Standard Operational Checks for IT Service Management Processes – Once Implemented

Why Are Operational Process Checks Required in Service Management?

  • To sustain and measure how effectively we are executing our processes, we need operational checks in place once the processes have been implemented.
  • These checks allow us to identify inefficiencies and subsequently improve the current Service Management processes.
  • They also provide input into the Continual Service Improvement (CSI) programme.


Operational Checks for Service Management Processes – Once Implemented

[Image: Operational checks for Service Management processes]

Standard Guidelines

  • The operational process checks will be managed by IT Service Management.
  • Through process governance meetings, agreement can be established on who will update a section of a process or the whole document.
  • IT Service Management will need to be involved in all process governance meetings.
  • IT Service Management will conduct an internal audit on all Service Management processes at least once annually.

Review and Measure at Regular Intervals

It is very important that IT Service Management reviews all processes and their measurements at least once a year, making appropriate changes to meet the target IT and business goals.



Proactive Problem Management – Benefits and Considerations

IT Service Management – Proactive Problem Management

The goal of Proactive Problem Management is to prevent Incidents by identifying weaknesses in the IT infrastructure and applications before issues occur.



The benefits of Proactive Problem Management include:

  • Greater system stability – This leads to increased user satisfaction.
  • Increased user productivity – This adds to a sizable productivity gain across the enterprise.
  • Positive customer feedback – When we proactively approach users who have been experiencing issues and offer to fix their problems the feedback will be positive.
  • Improved security – When we reduce security incidents, this leads to increased enterprise security.
  • Improved software/product quality – The data we collect will be used to improve quality.
  • Reduced problem volume – Lowering the ratio of immediate (reactive) support effort to planned support effort across the overall Problem Management process.


  • Proactive Problem Management can be made easier by the use of a Network Monitoring System.
  • Proactive Problem Management is also involved with getting information out to your customers to allow them to solve issues without the need to log an Incident with the Service Desk.
    • This would be achieved by the establishment of a searchable Knowledgebase of resolved Incidents, available to your customers over the intranet or internet, or the provision of a usable Frequently Asked Questions page that is easily accessible from the home page of the intranet, or emailed regularly.
  • Many organisations are performing Reactive Problem Management; very few are successfully undertaking the proactive part of the process, simply because of the difficulties involved in implementation, such as:
    • Linking Proactive Problem Management to business value
    • The cost of Proactive vs. Reactive Problem Management
    • The establishment of other ITIL processes such as Configuration Management, Availability Management and Capacity Management


Proactive Problem Management – FAQ

Q – At what stage of our ITIL process implementation should we look at implementing Proactive Problem Management?

  • A – Proactive Problem Management cannot be contemplated until you have Configuration Management, Availability Management and Capacity Management well established, as the outputs of these processes will give you the information required to pinpoint weaknesses in the IT infrastructure that may cause future Incidents.

Q – How can we performance measure and manage?

  • A – Moving from reactive to proactive maintenance management requires time, money, human resources, as well as initial and continued support from management. Before improving a process, it is necessary to define the improvement. That definition will lead to the identification of a measurement, or metric. Instead of intuitive expectations of benefits, tangible and objective performance facts are needed. Therefore, the selection of appropriate metrics is an essential starting point for process improvement.


Proactive Problem Management – High Level Process Diagram



Implementing Proactive Problem Management will require an agreed, uniform approach, especially when multiple managed service providers (MSPs) are involved with an organisation. Hope you found this useful.

Cosmos DB Server-Side Programming with TypeScript – Part 5: Unit Testing

Over the last four parts of this series, we’ve discussed how we can write server-side code for Cosmos DB, and the types of situations where it makes sense to do so. If you’re building a small sample application, you now have enough knowledge to go and build out UDFs, stored procedures, and triggers. But if you’re writing production-grade applications, there are two other major topics that need discussion: how to unit test your server-side code, and how to build and deploy it to Cosmos DB in an automated and predictable manner. In this part, we’ll discuss testing. In the next part, we’ll discuss build and deployment.

This post is part of a series:

  • Part 1 gives an overview of the server side programmability model, the reasons why you might want to consider server-side code in Cosmos DB, and some key things to watch out for.
  • Part 2 deals with user-defined functions, the simplest type of server-side programming, which allow for adding simple computation to queries.
  • Part 3 talks about stored procedures. These provide a lot of powerful features for creating, modifying, deleting, and querying across documents – including in a transactional way.
  • Part 4 introduces triggers. Triggers come in two types – pre-triggers and post-triggers – and allow for behaviour like validating and modifying documents as they are inserted or updated, and creating secondary effects as a result of changes to documents in a collection.
  • Part 5 (this post) discusses unit testing your server-side scripts. Unit testing is a key part of building a production-grade application, and even though some of your code runs inside Cosmos DB, your business logic can still be tested.
  • Finally, part 6 explains how server-side scripts can be built and deployed into a Cosmos DB collection within an automated build and release pipeline, using Microsoft Visual Studio Team Services (VSTS).

Unit Testing Cosmos DB Server-Side Code

Testing JavaScript code can be complex, and there are many different ways to do it and different tools that can be used. In this post I will outline one possible approach for unit testing. There are other ways that we could also test our Cosmos DB server-side code, and your situation may be a bit different to the one I describe here. Some developers and teams place different priorities on some of the aspects of testing, so this isn’t a ‘one size fits all’ approach. In this post, the testing approach we will build out allows for:

  • Mocks: mocking allows us to pass in mocked versions of our dependencies so that we can test how our code behaves independently of a working external system. In the case of Cosmos DB, this is very important: the getContext() method, which we’ve looked at throughout this series, provides us with access to objects that represent the request, response, and collection. Our code needs to be tested without actually running inside Cosmos DB, so we mock out the objects it sends us.
  • Spies: spies are often a special type of mock. They allow us to inspect the calls that have been made to the object to ensure that we are triggering the methods and side-effects that we expect.
  • Type safety: as in the rest of this series, it’s important to use strongly typed objects where possible so that we get the full benefit of the TypeScript compiler’s type system.
  • Working within the allowed subset of JavaScript: although Cosmos DB server-side code is built using the JavaScript language, it doesn’t provide all of the features of JavaScript. This is particularly important when testing our code, because many test libraries make assumptions about how the code will be run and the level of JavaScript support that will be available. We need to work within the subset of JavaScript that Cosmos DB supports.

I will assume some familiarity with these concepts, but even if they’re new to you, you should be able to follow along. Also, please note that this series only deals with unit testing. Integration testing your server-side code is another topic, although it should be relatively straightforward to write integration tests against a Cosmos DB server-side script.

Challenges of Testing Cosmos DB Server-Side Code

Cosmos DB ultimately executes JavaScript code, and so we will use JavaScript testing frameworks to write and run our unit tests. Many of the popular JavaScript and TypeScript testing frameworks and helpers are designed specifically for developers who write browser-based JavaScript or Node.js applications. Cosmos DB has some properties that can make these frameworks difficult to work with.

Specifically, Cosmos DB doesn’t support modules. Modules in JavaScript allow individual JavaScript files to expose a public interface to other blocks of code in different files. When I was preparing for this blog post I spent a lot of time trying to figure out a way to handle the myriad testing and mocking frameworks that assume modules can be used in our code. Ultimately I came to the conclusion that it doesn’t really matter if we use modules inside our TypeScript files, as long as the module code doesn’t make it into our release JavaScript files. This means that we’ll have to build our code twice – once for testing (which includes the module information we need), and again for release (which doesn’t include modules). This isn’t uncommon – many development environments have separate ‘Debug’ and ‘Release’ build configurations, for example – and we can use some tricks to achieve our goals while still getting the benefit of a good design-time experience.

Defining Our Tests

We’ll be working with the stored procedure that we built out in part 3 of this series. The same concepts could be applied to unit testing triggers, and also to user-defined functions (UDFs) – and UDFs are generally easier to test as they don’t have any context variables to mock out.

Looking back at the stored procedure, its purpose is to return the list of customers who have ordered any of a specified list of product IDs, grouped by product ID. An initial set of test cases might be as follows:

  1. If the productIds parameter is empty, the method should return an empty array.
  2. If the productIds parameter contains one item, it should execute a query against the collection containing the item’s identifier as a parameter.
  3. If the productIds parameter contains one item, the method should return a single CustomersGroupedByProduct object in the output array, which should contain the productId that was passed in, and whatever customerIds the mocked collection query returned.
  4. If the method is called with a valid productIds array, and the queryDocuments method on the collection returns false, an error should be returned by the function.

You might have others you want to focus on, and you may want to split some of these out – but for now we’ll work with these so we can see how things work. Also, in this post I’ll assume that you’ve got a copy of the stored procedure from part 3 ready to go – if you haven’t, you can download it from the GitHub repository for that part.
If you want to see the finished version of the whole project, including the tests, you can access it on GitHub here.

Setting up TypeScript Configurations

The first change we’ll need to make is to change our TypeScript configuration around a bit. Currently we only have one tsconfig.json file that we use to build. Now we’ll need to add a second file. The two files will be used for different situations:

  • tsconfig.json will be the one we use for local builds, and for running unit tests.
  • A second file (here called tsconfig.build.json) will be the one we use for creating release builds.

First, open up the tsconfig.json file that we already have in the repository. We need to change it to the following:
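The updated file isn’t reproduced here; a plausible reconstruction, consistent with the key changes listed below, would be (compiler options other than module and outDir are assumptions):

```json
{
  "compilerOptions": {
    "target": "es5",
    "strict": true,
    "module": "commonjs",
    "outDir": "output/test"
  },
  "include": [
    "src/**/*.ts",
    "spec/**/*.ts"
  ]
}
```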

The key changes we’re making are:

  • We’re now including files from the spec folder in our build. This folder will contain the tests that we’ll be writing shortly.
  • We’ve added the line "module": "commonjs". This tells TypeScript that we want to compile our code with module support. Again, this tsconfig.json will only be used when we run our builds locally or for running tests, so we’ll later make sure that the module-related code doesn’t make its way into our release builds.
  • We’ve changed from using outFile to outDir, and set the output directory to output/test. When we use modules like we’re doing here, we can’t use the outFile setting to combine our files together, but this won’t matter for our local builds and for testing. We also put the output files into a test subfolder of the output folder so that we keep things organised.

Now we need to create a file with the following contents:
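The contents aren’t reproduced here either; a reconstruction of the release configuration (file name assumed to be tsconfig.build.json), with the outFile path matching the release output discussed later, might be:

```json
{
  "compilerOptions": {
    "target": "es5",
    "strict": true,
    "module": "none",
    "outFile": "output/build/sp-getGroupedOrders.js"
  },
  "include": [
    "src/**/*.ready.ts"
  ]
}
```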

This looks more like the original tsconfig.json file we had, but there are a few minor differences:

  • The include element now looks for files matching the pattern *.ready.ts. We’ll look at what this means later.
  • The module setting is explicitly set to none. As we’ll see later, this isn’t sufficient to get everything we need, but it’s good to be explicit here for clarity.
  • The outFile setting – which we can use here because module is set to none – is going to emit a JavaScript file within the build subfolder of the output folder.

Now let’s add the testing framework.

Adding a Testing Framework

In this post we’ll use Jasmine, a testing framework for JavaScript. We can import it using NPM. Open up the package.json file and replace it with this:
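The file isn’t shown in this excerpt; a reconstruction with the additions described below might look like this (package versions are purely illustrative):

```json
{
  "name": "cosmosdb-server-side-scripts",
  "version": "1.0.0",
  "scripts": {
    "test": "tsc && jasmine --config=jasmine.json"
  },
  "devDependencies": {
    "jasmine": "^3.1.0",
    "@types/jasmine": "^2.8.6",
    "moq.ts": "^2.5.0",
    "typescript": "^2.8.1"
  }
}
```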

There are a few changes to our previous version:

  • We’ve now imported the jasmine module, as well as the Jasmine type definitions, into our project; and we’ve imported moq.ts, a mocking library, which we’ll discuss below.
  • We’ve also added a new test script, which will run a build and then execute Jasmine, passing in a configuration file that we will create shortly.

Run npm install from a command line/terminal to restore the packages, and then create a new file named jasmine.json with the following contents:
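The contents aren’t shown here; a minimal configuration would look something like the following. The spec_dir must point at the compiled test output (which, given the outDir above, would land under output/test):

```json
{
  "spec_dir": "output/test/spec",
  "spec_files": [
    "**/*[sS]pec.js"
  ],
  "stopSpecOnExpectationFailure": false,
  "random": false
}
```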

We’ll understand a little more about this file as we go on, but for now, we just need to understand that this file defines the Jasmine specification files that we’ll be testing against. Now let’s add our Jasmine test specification so we can see this in action.

Starting Our Test Specification

Let’s start by writing a simple test. Create a folder named spec, and within it, create a file named getGroupedOrdersImpl.spec.ts. Add the following code to it:
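The listing isn’t included here; based on the description below, a reconstruction would be as follows (the exact signature of getGroupedOrdersImpl comes from part 3, so treat the parameters as assumptions):

```typescript
// spec/getGroupedOrdersImpl.spec.ts – a reconstruction; the real signature of
// getGroupedOrdersImpl (from part 3) may differ slightly.
import { getGroupedOrdersImpl } from "../src/getGroupedOrders";

describe("getGroupedOrdersImpl", () => {
  it("should return an empty array", () => {
    // An empty productIds array, and null standing in for the collection.
    const result = getGroupedOrdersImpl([], null);
    expect(result).toEqual([]);
  });
});
```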

This code does the following:

  • It sets up a new Jasmine spec named getGroupedOrdersImpl. For clarity, this is the name of the method we’re testing, but it doesn’t need to match – you could name the spec whatever you want.
  • Within that spec, we have a test case named should return an empty array.
  • That test executes the getGroupedOrdersImpl function, passing in an empty array, and a null object to represent the Collection.
  • Then the test confirms that the result of that function call is an empty array.

This is a fairly simple test – we’ll see a slightly more complex one in a moment. For now, though, let’s get this running.

There’s one step we need to do before we can execute our test. If we tried to run it now, Jasmine would complain that it can’t find the getGroupedOrdersImpl method. This is because of the way that JavaScript modules work. Our code needs to export its externally accessible methods so that the Jasmine test can see them. Normally, exporting a module from a Cosmos DB JavaScript file will mean that Cosmos DB doesn’t accept the file anymore – we’ll see a solution to that shortly.

Open up the src/getGroupedOrders.ts file, and add the following at the very bottom of the file:
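The snippet isn’t shown here; from the surrounding description, it is simply the export statement:

```typescript
// At the very bottom of src/getGroupedOrders.ts:
export { getGroupedOrdersImpl };
```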

The export statement sets up the necessary TypeScript compilation instruction to allow our Jasmine test spec to reach this method.

Now let’s run our test. Execute npm run test, which will compile our stored procedure (including the export), compile the test file, and then execute Jasmine. You should see that Jasmine executes the test and shows 1 spec, 0 failures, indicating that our test successfully ran and passed. Now let’s add some more sophisticated tests.

Adding Tests with Mocks and Spies

When we’re testing code that interacts with external services, we often will want to use mock objects to represent those external dependencies. Most mocking frameworks allow us to specify the behaviour of those mocks, so we can simulate various conditions and types of responses from the external system. Additionally, we can use spies to observe how our code calls the external system.

Jasmine provides a built-in mocking framework, including spy support. However, the Jasmine mocks don’t support TypeScript types, and so we lose the benefit of type safety. In my opinion this is an important downside, and so instead we will use the moq.ts mocking framework. You’ll see we have already installed it in the package.json.

Since we’ve already got it available to us, we need to add this line to the top of our spec/getGroupedOrdersImpl.spec.ts file:
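The import line isn’t shown in this excerpt; assuming moq.ts’s usual exports, it would be something like:

```typescript
import { Mock, It, Times } from "moq.ts";
```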

This tells TypeScript to import the relevant mocking types from the moq.ts module. Now we can use the mocks in our tests.

Let’s set up another test, in the same file, as follows:
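The listing isn’t reproduced here. A reconstruction matching the description below might be as follows – the moq.ts setup/callback details vary between versions, so treat the exact calls as assumptions to check against the moq.ts documentation:

```typescript
it("should query the collection when a product ID is provided", () => {
  // Mock of the ICollection interface (type defined in part 3).
  const mockCollection = new Mock<ICollection>();

  // Hard-coded self-link for the mocked collection.
  mockCollection.setup(c => c.getSelfLink()).returns("dbs/mock/colls/mock");

  // queryDocuments(link, query, options, callback): invoke the callback with a
  // single empty-string document, and return true to indicate the query was accepted.
  mockCollection
    .setup(c => c.queryDocuments(It.IsAny(), It.IsAny(), It.IsAny(), It.IsAny()))
    .callback(({ args }) => {
      const queryCallback = args[3];
      queryCallback(undefined, [""], undefined);
      return true;
    });

  // mockCollection.object() converts the mock into an ICollection instance.
  getGroupedOrdersImpl(["product-1"], mockCollection.object());

  // Spy: verify that queryDocuments was executed exactly once.
  mockCollection.verify(
    c => c.queryDocuments(It.IsAny(), It.IsAny(), It.IsAny(), It.IsAny()),
    Times.Once());
});
```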

This test does a little more than the last one:

  • It sets up a mock of the ICollection interface.
  • This mock will send back a hard-coded string (self-link) when the getSelfLink() method is called.
  • It also provides mock behaviour for the queryDocuments method. When the method is called, it invokes the callback function, passing back a list of documents with a single empty string, and then returns true to indicate that the query was accepted.
  • The mock.object() method is used to convert the mock into an instance that can be provided to the getGroupedOrdersImpl function, which then uses that in place of the real Cosmos DB collection. This means we can test out how our code will behave, and we can emulate the behaviour of Cosmos DB as we wish.
  • Finally, we call mock.verify to ensure that the getGroupedOrdersImpl function executed the queryDocuments method on the mock collection exactly once.

You can run npm run test again now, and verify that it shows 2 specs, 0 failures, indicating that our new test has successfully passed.

Now let’s fill out the rest of the spec file – here’s the complete file with all of our test cases included:

You can execute the tests again by calling npm run test. Try tweaking the tests so that they fail, then re-run them and see what happens.

Building and Running

All of the work we’ve just done means that we can run our tests. However, if we try to build our code to submit to Cosmos DB, it won’t work anymore. This is because the export statement we added to make our tests work will emit code that Cosmos DB’s JavaScript engine doesn’t understand.

We can remove this code at build time by using a preprocessor. This will remove the export statement – or anything else we want to take out – from the TypeScript file. The resulting cleaned file is the one that then gets sent to the TypeScript compiler, and it emits a Cosmos DB-friendly JavaScript file.

To achieve this, we need to chain together a few pieces. First, let’s open up the src/getGroupedOrders.ts file. Replace the line that says export { getGroupedOrdersImpl } with this section:
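The replacement section isn’t shown in this excerpt. The idea is to wrap the export in conditional preprocessor comments, roughly like this (the exact directive syntax should be checked against the jspreproc documentation, and TESTING is a symbol name I’ve made up):

```typescript
// The export is kept when the TESTING symbol is defined (test builds) and
// stripped by the preprocessor for release builds, where it isn't defined.
//#if TESTING
export { getGroupedOrdersImpl };
//#endif
```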

The extra lines we’ve added are preprocessor directives. TypeScript itself doesn’t understand these directives, so we need an NPM package to process them. The one I’ve used here is jspreproc. It looks through the file, handles the directives it finds in specially formatted comments, and then emits the resulting cleaned file. Unfortunately, the preprocessor only works on a single file at a time. This is OK for our situation, as we have all of our stored procedure code in one file, but that might not be true in every situation. Therefore, I have also used the foreach-cli NPM package to search for all of the *.ts files within our src folder and process each of them. It saves the cleaned files with a .ready.ts extension, which our release tsconfig file’s include pattern refers to.

Open the package.json file and replace it with the following contents:
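The final package.json isn’t reproduced here; a sketch of what it might contain, with a build script chaining foreach-cli and jspreproc ahead of the TypeScript compiler, is below (flags, placeholders, file names, and versions are assumptions to verify against each tool’s documentation):

```json
{
  "name": "cosmosdb-server-side-scripts",
  "version": "1.0.0",
  "scripts": {
    "test": "tsc && jasmine --config=jasmine.json",
    "build": "foreach -g \"src/**/*.ts\" -x \"jspp #{path} > #{dir}/#{name}.ready.ts\" && tsc -p tsconfig.build.json"
  },
  "devDependencies": {
    "jasmine": "^3.1.0",
    "@types/jasmine": "^2.8.6",
    "moq.ts": "^2.5.0",
    "foreach-cli": "^1.8.1",
    "jspreproc": "^0.2.7",
    "typescript": "^2.8.1"
  }
}
```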

Now we can run npm install to install all of the packages we’re using. You can then run npm run test to run the Jasmine tests, and npm run build to build the releasable JavaScript file. This is emitted into the output/build/sp-getGroupedOrders.js file, and if you inspect that file, you’ll see it doesn’t have any trace of module exports. It looks just like it did back in part 3, which means we can send it to Cosmos DB without any trouble.


In this post, we’ve built out the necessary infrastructure to test our Cosmos DB server-side code. We’ve used Jasmine to run our tests, and moq.ts to mock out the Cosmos DB server objects in a type-safe manner. We also adjusted our build script so that we can compile a clean copy of our stored procedure (or trigger, or UDF) while keeping the necessary export statements to enable our tests to work. In the final post of this series, we’ll look at how we can automate the build and deployment of our server-side code using VSTS, and integrate it into a continuous integration and continuous deployment pipeline.

Key Takeaways

  • It’s important to test Cosmos DB server-side code. Stored procedures, triggers, and UDFs contain business logic and should be treated as a fully fledged part of our application code, with the same quality criteria we would apply to other types of source code.
  • Because Cosmos DB server-side code is written in JavaScript, it is testable using JavaScript and TypeScript testing frameworks and libraries. However, the lack of support for modules means that we have to be careful in how we use these since they may emit release code that Cosmos DB won’t accept.
  • We can use Jasmine for testing. Jasmine also has a mocking framework, but it is not strongly typed.
  • We can get strong typing using a TypeScript mocking library like moq.ts.
  • By structuring our code correctly – using a single entry-point function, which calls out to getContext() and then sends the necessary objects into a function that implements our actual logic – we can easily mock and spy on our calls to the Cosmos DB server-side libraries.
  • We need to export the functions we are testing using the export statement. This makes them available to the Jasmine test spec.
  • However, these export statements need to be removed before we can compile our release version. We can use a preprocessor to remove those statements.
  • You can view the code for this post on GitHub.

Tools for Testing Webhooks

In a microservices environment, APIs are the main communication method between services. Most of the time each API simply sends a request and waits for its response, but there are definitely cases that need a longer period to complete, and some even stop processing at a certain stage until they get a signal to continue. These are not uncommon in the API world. However, implementing them can be tricky because most APIs use HTTP, and web applications have timeout constraints. That timeout limitation is critical when trying to complete a long-running process within a given period of time. Fortunately, there are established solutions (or patterns) to overcome these challenges – asynchronous patterns and/or webhook patterns. As the two are quite similar, they are often used interchangeably or together. While those approaches sort out the long-running process issue, they are hard to test or debug without some external tools. In this post, we are going to look at a couple of useful tools for debugging and testing when we develop REST API applications, especially webhook APIs.

Disclaimer: those services introduced in this post do not have any sort of relationship to us, Kloud.


RequestBin

RequestBin is an online webhook request sneaking tool. It has a very simple user interface, so developers can hop into the service straight away. If we want to check webhook request data, follow the steps below:

  1. Click the Create a RequestBin button at the first page.

  2. A temporary bin URL is generated. Grab it.

  3. Use this bin URL as a webhook callback URL. Here’s the screenshot using Postman sending a request to the bin URL.

  4. The request data sent from Postman is captured like below. Append the querystring parameter ?inspect to the bin URL so that we can inspect the request data in the browser. Can we find the same request data that was sent?

This service brings a few merits. Firstly, we can simply use the bin URL when we register a webhook; the webhook request body is captured at the bin URL. Secondly, we don’t have to develop a test application to analyse the webhook payload, because the data has already been captured by the bin URL. Thirdly, we save the time and resources that developing such test applications would cost. And finally, it’s free!
Of course, there are demerits. Each bin URL is only available for a limited period of time. According to the service website, the bin URL is only valid for the first 48 hours after it’s generated; in fact, the lifetime varies from 5 minutes to 48 hours, and if we close the web browser and open it again, the bin URL is no longer valid. Therefore, this is only for testing webhooks that have a short lifecycle. In addition, RequestBin isn’t a good fit for local debugging. There will be cases where the webhook receives data and then processes it; RequestBin only shows how the webhook request data is captured, nothing further.
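When passive capture isn’t enough – say you want to post-process the payload rather than just look at it – a few lines of Node.js can act as a throwaway local bin. This is only a sketch (the port, path, and sample payload are arbitrary):

```typescript
import * as http from "http";

// Formats a captured request into a one-line summary.
function describeRequest(method: string, url: string, body: string): string {
  return `${method} ${url} -> ${body}`;
}

// A throwaway local "bin": logs whatever a webhook sends, then responds 200.
const server = http.createServer((req, res) => {
  let body = "";
  req.on("data", (chunk) => (body += chunk));
  req.on("end", () => {
    console.log(describeRequest(req.method ?? "GET", req.url ?? "/", body));
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.end("captured");
  });
});

// Demo: listen on an ephemeral port, send ourselves a sample webhook call,
// then shut the server down so the process can exit.
server.listen(0, () => {
  const { port } = server.address() as { port: number };
  const request = http.request({ port, method: "POST", path: "/webhook" }, (res) => {
    res.resume();
    res.on("end", () => server.close());
  });
  request.end(JSON.stringify({ event: "order.created" }));
});
```

Pointing a webhook (or ngrok, below) at this server gives you full control over what happens to the captured payload.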
If RequestBin is not our solution, what else can we use? How can we debug or test the webhook functions?


ngrok

Tunneling services take traffic from the Internet to a local development environment. ngrok is one of those tunneling services. It has both a free version and a paid one, but the free version is enough for our purpose.

As it is cross-platform, download the binary suitable for your OS. For Windows, there is only one binary, ngrok.exe. Copy this to the C:\ngrok folder (or wherever preferred) and enter the command below:
[code lang=text]
ngrok http 7071 -host-header=localhost
[/code]

  • http: This specifies the protocol to watch incoming traffic.
  • 7071: This is the port number. Default is 80. If we debug Azure Functions, set this port number to 7071.
  • -host-header=localhost: Without this option, ngrok captures the traffic but it can’t reach the local debugging environment.

After typing the command above, we can see the screen like below:

As we can see, external traffic hitting the endpoint ngrok generated can reach our locally running Azure Functions app. Let’s run Azure Functions in local debugging mode.

Azure Functions is now up and running. Run Postman and send a request to the endpoint that ngrok has generated.

We can now confirm that the code stops at the break point within Visual Studio.

ngrok also provides a replay feature. If we access http://localhost:4040, we can see how ngrok has captured all requests since it started.

The free version of ngrok keeps changing the endpoint every time it runs. Run history is also blown away. But this wouldn’t matter for our purpose.
So far, we have briefly looked at a couple of tools to sneak webhook traffic for debugging purposes. If we utilise these tools well, we can much more easily check how API request calls are made. They are definitely worth noting.

Testing Azure Functions in Emulated Environment with ScriptCs

In the previous post, Azure Functions Deployment Strategies, we briefly looked at several ways to deploy Azure Functions code. In this post, I’m going to walk through how we can test Azure Functions code on a developer’s local machine.

Mocking and Asserting on ScriptCs

We need to know how to run test code scripts using ScriptCs. We’re not going into too much detail about how to install ScriptCs here; instead, we assume that we have ScriptCs installed on our local machine. In order to run test code, mocking and asserting are crucial. There are ScriptPack NuGet packages for mocking and asserting, called ScriptCs.Moq and ScriptCs.FluentAssertions. If we use those packages, we can easily set up unit testing code with a similar development experience. Let’s have a look.

First of all, we need to install NuGet packages. Note that ScriptCs doesn’t support NuGet API version 3 at the time of this writing, but it will sooner or later, based on the roadmap. In other words, we should stay on NuGet API version 2. This is possible by creating a scriptcs_nuget.config file in our script folder like:
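The file contents aren’t shown here; a minimal NuGet configuration pinning the v2 API endpoint would look something like:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- Drop any inherited sources, then pin the NuGet v2 API endpoint. -->
    <clear />
    <add key="nuget.org v2" value="https://www.nuget.org/api/v2/" />
  </packageSources>
</configuration>
```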

Then run the command below in a Command Prompt console to install ScriptCs.Moq.

Now, create a run.csx file like:

Run the following command and we’ll be able to see the mocked result as expected.

This time, install ScriptCs.FluentAssertions for assertions.

This is the script for it:

We expected world but the actual value was hello, so we see a message like:

Now, we know how to run test codes on our local machine using ScriptCs. Let’s move on.

Emulating Azure Functions on ScriptCs

There is no official tooling yet for running Azure Functions on a developer’s local machine. However, there is a script pack called ScriptCs.AzureFunctions that runs on ScriptCs. With this, we can easily emulate the Azure Functions environment on our local dev machine. Therefore, as long as we have working Azure Functions code, ScriptCs.AzureFunctions wraps the code and runs it within the emulated local environment.

Azure Functions Code with Service Locator Pattern

This is a sample Azure Functions script, run.csx.

When you look at the code above, you might notice that we use a Service Locator pattern. This is very important for managing dependencies in Azure Functions: without a service locator, we can’t properly test Azure Functions code. In addition, Azure Functions only supports static methods, so the service locator instance is instantiated through a static accessor.

Here are the remaining bits of the poor man’s service locator pattern.

Emulation Code for Azure Functions

Now, we need to write a test code, test.csx. Let’s have a look. This is relatively simple.

  1. Import run.csx by using the #load directive.
  2. Arrange parameters, variables and mocked objects.
  3. Run the Azure Functions code.
  4. Check the expected result.

If we use ScriptCs.FluentAssertions package here, the last assert part would be much nicer.

Now run the test code:

Now, we have an expected result in the emulated environment.

So far, we’ve briefly walked through how we can emulate Azure Functions in our development environment. For testing, the Service Locator pattern should be considered. With ScriptCs.AzureFunctions, testing can easily be performed in the emulated environment. Of course this is not a perfect solution, but it works, so it will be useful until the official tooling is released. If our build server supports a ScriptCs environment, this emulation can easily be integrated so that we can perform testing within the build and test pipeline.

Testing your mobile app – plan ahead before it’s too late

More than ever before, organisations are now preparing themselves to engage with their customers not only to attract business but also to get to know them better. This proliferation of information makes it possible to personalise user experience for a targeted audience. Building your own Mobile App is a great medium to achieve that. But be cautious – a poorly rated app can ruin everything!

The inspiration for this blog is a piece of work we did for one of our customers a few months back, where we tried to find an optimal way of testing mobile apps being built for their Australian customers.

When you look at the ecosystem of mobile app testing, it can be daunting and overwhelming – especially because of the immense risk that a poorly tested application can negate your brand reputation in no time. That's why it is all the more important to set your strategy right before jumping into 'some sort' of testing.

There are four dimensions of testing that we looked at –
four dimensions of testing

The challenges are countless. Unlike any other technology space, the mobile industry is going through a rapid transformation, changing in as little as every few months. If your testing has to keep pace with that market, you will probably experience what we call the 'minimal set of challenges'. So here is our minimal set of challenges –

  1. Diversity
    1. Diverse platforms
    2. Innumerable devices
  2. Fragmentation
    1. A huge fragmentation in OS/hardware/firmware – especially in Android platform
  3. Dynamic
    1. Frequent OS upgrade
    2. Shorter App Release cycles
  4. Tools
    1. Relatively less matured test tools
    2. Predicting & simulating end user behaviour

And when you have to work through these challenges, you have to have certain principles in mind that will drive your testing effort. The two most prominent among them are –

  1. Target the maximum with the optimum effort: Target your audience. Think about how your tests can impact the majority of your app users. A simple example can be to analyse your usage data from some other sources, look into the demography of these users – how they use it, how they interact and how they consume resources. Build your tests around those priority areas. This can be the ‘cool’ features in your app, a set of devices, one/two specific platforms, anything.
  2. A proper support system to help your testing: To test an app successfully, you need to have adequate support from the tools, infrastructure and resources. Make sure that you have them in place, allowing you to maximise your automation capability, thus reducing your regression time and manual errors.

So, planning is important, right? Here are my top five "to-dos" when planning your tests –

1. What are we testing?

This seems a reasonable question, but it is probably the most poorly answered one. If you do not know what you are trying to test, that's a perfect recipe for failure. Below is a simple illustration of the tests you may like to identify in a heat map.

testing heat map

2. Prioritise your tests.

There are innumerable ways you can design and write your tests, but unfortunately projects work within the sphere of the triple constraint! So prioritisation is the mantra. You need to understand and decide which tests make the most impact and give you the biggest win in the market.

test prioritisation image


3. Determine how you will handle different tests.

It is of paramount importance that you think about what goes into each of the tests, how will you test them, what tools will be required, test acceptability criteria and so on.

The table below illustrates the idea, aiming to capture all possible factors that may impact your test outcomes.

test factors


4. Get your test infrastructure in place.

If you want to test the things you care about, make sure the enablers are in place to help you do just that. There are a few bits and pieces you will need to organise to get going –

  • Mobile Test Tools
  • Test management Tools
  • Test Lab/execution set up
  • Mobile device lab (on-prem / cloud)
  • External dependencies

This is a sample illustration of how you can integrate a few industry-standard tools. All you are doing here is finding a way for the different components in your testing infrastructure to gel with each other, making your life a lot easier by flowing information seamlessly from one to another. Your test scripts can be created, organised and executed seamlessly from this kind of setup.

test integration diagram

5. Pick the right tools.

This does overlap with the previous point, and you may be wondering which to do first. While deciding on the different infrastructure components and understanding how they interact is a critical task, it heavily depends on finding the right fit. There are a number of tooling options in the market for testing mobile apps, each offering its own unique features. You will have to find out what matters most to you and your organisation.

I prefer to use selection criteria when picking a tool. For mobile app testing, it can even be beneficial to set the criteria at two levels, moving on to the second level once you think the first-level criteria are satisfactorily met.

The diagram below gives you a good feel for the most important things to consider while selecting the right fit.

test tool criteria

When you plan well with the right kind of dependencies, you become more confident about the things you would like to achieve. Of course, there will be surprises – but they are less 'uncertain' and more 'predictable'.

One more thing before I sign off (there is always a 'one more thing'!) – there are countless ways that things can go wrong in the mobile world, but it is also the biggest and strongest opportunity to connect with your customers in a 'connected' world. So leverage it in the most effective way and get the maximum out of it.

If you are not sure how to embark on that journey, feel free to reach out to us – together we can make it happen!

TDD for Mobile Development – Part 1

This post is part of the series:
1. Unit Testing of Platform-Specific Code in Mobile Development.
2. Portable IoC (Portable.TinyIoC) for Mobile Development
3. Mobile Test-Driven Development – Running your unit tests from your IDE


This post aims at exploring the best practices in terms of code quality and testability for mobile development.
It is part of a series about unit and integration testing in the mobile space. In particular, I focus on Android and iOS.

For many developers, testing is an afterthought, a task that's not well considered. But there's heaps of research out there showing how much you could save and how a test-first approach could improve your design. I am not going to go into the details of this; I would assume you are a test-first kind of person since you are reading this, so let's get started.


In this post, I will show NUnitLite for Xamarin.Android and Xamarin.iOS. NUnitLite, as the name indicates, is a cut-down version of NUnit, and there are builds for testing both iOS and Android apps. The iOS build comes out of the box when you install Xamarin, which lets you create a project from the NUnitLite (MonoTouch) project template.

This approach is good when you have platform-specific code that has to live in the platform-specific (app) project. You can reference your MonoTouch or MonoDroid projects from the NUnitLite project and start your testing.

For Android, there are a few versions of NUnitLite; I have worked with this one.

Sometimes you are developing a component that needs to behave the same way on the two platforms, but whose internal implementation is platform-specific. To test it, you put your test code into each platform's testing project as normal. Since the expected behaviour is the same on both platforms, you can also reference the same NUnitLite test file from both test projects. Some developers (me included) do not like referenced files, so you could create separate versions for the two platforms if you wish.
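As an illustrative sketch (the paths and project names below are assumptions, not from the original solution), referencing a shared test file from a platform test project's .csproj looks like this:

```xml
<!-- Illustrative: link one shared NUnitLite test file into this
     platform's test project. Path and file names are assumptions. -->
<ItemGroup>
  <Compile Include="..\Tdd.Mobile.iOS.Tests\TestableControllerTest.cs">
    <Link>TestableControllerTest.cs</Link>
  </Compile>
</ItemGroup>
```

In the IDE this is the "Add Existing Item → Add a link" option: the file lives in one place on disk but compiles into both platform test projects.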

Sample of iOS platform-specific code

public class TestableController : UIViewController
{
    public TestableController ()
    {
    }

    public int GetTotal (int first, int second)
    {
        return first + second;
    }
}

Sample of Android platform-specific code

namespace Tdd.Mobile.Android
{
    public class TestableController : Fragment
    {
        public override void OnCreate (Bundle savedInstanceState)
        {
            base.OnCreate (savedInstanceState);
        }

        public int GetTotal (int first, int second)
        {
            return first + second;
        }
    }
}

Please note that I am not suggesting you write your code this way or put your logic into the UIViewController or Activity classes. The only reason I am doing it this way is to show how you can test anything inside these platform-specific classes. Ideally, you would put your logic into ViewModels or another form of container that is injected into the controllers. Anyway, assuming we have some platform-specific logic inside these classes, this is how I would test it:

[TestFixture]
public class TestableControllerTest
{
    [Test]
    public void GetTotalTest ()
    {
        // arrange
        var controller = new TestableController ();

        // act
        var result = controller.GetTotal (2, 3);

        // assert
        Assert.AreEqual (5, result);
    }
}

The screenshot below shows the structure of my solution. I have also put the code on GitHub in case you are interested in playing with the code. I would love to hear what you have to say, get in touch if you have any comments or questions.

Tdd Mobile Development Code Structure


In the next blog post, I will show how most of the code can be placed into testable libraries and easily tested from your IDE (VS or Xamarin Studio), without the need to run an emulator/simulator.


Effective Testing: Demystifying improvement and efficiency

If you have recently finished a system implementation or a large-scale transformation, you will be familiar with the phrase 'test process efficiency' and may have wondered what it refers to. Unlike in the old days, merely delivering a test function no longer satisfies the need for quality management. Increasingly, organisations are looking for improved and optimised processes rather than being bound by the red lines of a traditional testing regime.

There are a few key pillars that drive test efficiency in any organisation; these are often called the 'levers of the testing function' in a software implementation lifecycle. They have an immense impact on the final outcome and drive the quality of the delivered product or service. There are probably as many levers as there are solutions, but at the core a few fundamental principles sit at the top and drive the rest. I prefer to see it in the following way:

Is it the right time to start?

Early entry: If you plan to make a large, significant contribution to the end result, get involved early. Quality should not be 'reactive' in nature; it has to be built in from the very beginning of your software development.

How enough is ‘enough’?

Risk assessment and prioritisation: Even for a minor project there are near-infinite combinations to test and an enormous amount of data and logic to verify, and few organisations have the time or money to cover it all. You will have to strike a balance between your goals and your risk appetite. A proper risk assessment allows you to focus on just the right set of test conditions instead of spending effort where the returns are not justified.

When do we know it’s ‘done’?

Acceptance criteria: This is often the most ignored part of any testing function, but one of the most important. When you are playing with an infinite set, trying to prioritise and select the optimum subset, it can prove costly if you don't know where to stop. A set of predefined criteria aligned with your risk appetite and quality goals will prove very useful.

Control minus the bureaucracy

Governance: Most mature testing functions have some governance mechanism built in, but it is often incomplete. It is important to understand that a few dimensions make the whole governance mechanism more foolproof and sound:

a. Team and reporting structure

b. Controls and escalation path

c. Check points including entry/exit gates

Independence vs Collaboration

Cross-team collaboration: The success of any testing function relies heavily on the team environment. Unfortunately, testing has often been viewed as an 'independent function' and suffers heavily from a lack of information and coordination, resulting in a whole lot of duplication, rework and inefficiency. Some of the tangible outcomes of a close and collaborative team effort are visible in:

a. Increased flow of information

b. A solid and sound release, including build management – this is where things get tricky, as you start to discover multiple touch points

c. Defect resolution including allocation and triage

This approach does not come without a word of caution: a cooperative and collaborative environment is favourable only as long as it does not destabilise the objectivity and integrity of the testing.

Once you have the right levers to drive a testing function, you can increase the efficiency of one or multiple processes across the board. The next big question is where exactly they can be applied. How can you ensure these efficiency factors become 'tangible', and importantly, how will you measure them? This is a big enough discussion to warrant several posts and is often a matter of great debate; I am going to discuss it in detail in a later post. In a nutshell, this efficiency can be demonstrated and measured across all the 'test processes' involved in the various stages of a test cycle.

So what ‘process’ are we talking about?

To understand this better, let's explore the steps in a typical testing regime. Any large-scale testing program will consist of the following test stages:

A. Initiation and conceptualisation
B. Test strategy and planning
C. Test design
D. Test execution and implementation
E. Test analysis and reporting
F. Test closure and wrap up

Each of these stages carries a special significance and involves a number of test processes; e.g. the 'test design' stage involves processes like 'requirement analysis and prioritisation', 'building test specifications', 'creating test scenarios' and 'building test cases and data requirements'. Each of these can be analysed, managed objectively, measured and controlled, all of which helps to improve efficiency, which in turn lifts overall productivity.

A classic example is a team following an agile delivery approach, where all of these test processes are measured across each sprint. As you move from one sprint to the next, you continue to observe the processes and collect relevant metrics throughout the lifecycle of the project. A quick analysis of the data will tell you where you need to focus to improve your delivery.

To conclude, it is important to understand and reflect on your current process; this is the next big step in your testing regime once you have set up a testing function. Improving a test process not only lifts your team's performance and motivation, it also continually reduces costs for your client as the overall process improves.

So, time to go back to the table and ask yourself the fundamental question – is my testing efficient?

Demonstrating Cross Platform testing with Browserstack – A beginner’s guide

This is a follow-on from my previous post on cross-platform testing. I hope you enjoyed wandering through the ways you could potentially plan your platform testing. I thought I would take some time to explain some of the concepts of cross-platform testing with examples from Browserstack.

Browserstack is a standard tool these days for testing your web application across multiple platforms.

The current Browserstack product family offers the following three broad service lines (the website offers a good amount of detail on the different products and offerings):

a. Browserstack Live: This allows users to interact in real time with the Browserstack environment.

b. Browserstack Automate: With BS Automate, users can run Selenium-based automated scripts on BS VMs remotely.

c. Browserstack Screenshots: This is more relevant for static content, where users can generate screenshots across multiple OS/browser/device combinations in one go.

Let’s take each one of them and talk in a bit more detail.

A. Browserstack Live: A typical Browserstack Live session is about real-time interaction with the VMs in the Browserstack cloud. Depending on the user's choice, Browserstack will open a connection to one of the VMs and initiate a session with the chosen browser/OS.

The process is relatively simple: you select your OS and then your preferred browser or device. A sample screen looks like the one below.

BSFig 1

Fig: Interactive testing with BS

This will spawn a session on the nearest available VM and open your website with the preferred device/OS/browser combo.

There is another great feature we should all be aware of – the ability to use Browserstack against your local environment. What that means is you can use Browserstack for your local dev environment or for static files on your PC. Browserstack provides an extension for your Chrome or Firefox browser that creates a tunnel between the Browserstack VM and your local environment. All you need to do is use that extension and establish a connection to your local dev environment.

BSFig 2

Fig: connecting to your local environment

B. Browserstack Automate: Another great feature of the tool is the ability to create automated test cases that run against the Browserstack VMs remotely.

A BS subscription provides a user-specific key through which you establish a secure connection to the BS VMs and run tests there. All you need to do is write your automated code (Selenium is supported) and run it on your preferred OS/browser combo.

A sample code snippet will look like this.

using System;
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Remote;

namespace browserStackAutomate
{
    [TestFixture]
    public class BrowserstackTestWeb
    {
        private IWebDriver driver;
        private string baseURL;

        [SetUp]
        public void Init()
        {
            DesiredCapabilities capability = DesiredCapabilities.Firefox();
            capability.SetCapability("browser", "firefox");
            capability.SetCapability("browser_version", "26.0");
            capability.SetCapability("os", "Windows");
            capability.SetCapability("os_version", "7");
            capability.SetCapability("browserstack.user", "userxyz");
            capability.SetCapability("browserstack.key", "samplekey");
            capability.SetCapability("acceptSslCerts", "true");

            driver = new RemoteWebDriver(new Uri(""), capability);
            baseURL = "";
        }

        [Test]
        public void LoginToAEOTest()
        {
            driver.Navigate().GoToUrl(baseURL + "/home");
            StringAssert.Contains("Login", driver.Title);
            // Body of the actions – test to be run against the BS VM
        }

        [TearDown]
        public void Cleanup()
        {
            // Close the remote session
            driver.Quit();
        }
    }
}

You can use a standard framework like NUnit to run the test cases remotely.

A typical remote execution will look like the one below, where you can view the execution logs in real time through the Browserstack console.

BSFig 3

Fig: Execution on a desktop browser


Fig: Execution in a Mobile device

C. Browserstack Screenshots: Browserstack can also capture a series of screenshots of your website pages in your preferred OS/browser/device combos. You specify the different combinations you need screenshots for, and it will collect them for you.

An integration component for developers: An interesting part of using Browserstack is that you can use it right from the time you write your code. Browserstack offers a Visual Studio integration that allows you to test your pages against different browsers from within your VS console.

BSFig 5

One thing you need to ensure is that you have Microsoft ASP.NET and Web Tools 2012.2 installed for Visual Studio 2012. This registers BrowserStack in the list of browsers you can test against. On clicking BrowserStack, you will see a list of supported browsers and OSs, and can then use the BrowserStack cloud infrastructure to view your website. If you've already installed the BrowserStack extension, it should show up in the 'Browse With' menu after installing ASP.NET and Web Tools 2012.2.

To conclude, there are many tools in the market that will let you achieve a similar outcome. This is a demonstration of just one of them – one you might end up using to make your website better!

I am sure that should be enough to get you going. Until then, happy testing!

If you have any questions relating to your website compatibility, please reach out to us.

Cross-Platform Testing: Myths and Mysteries

Cross-Platform testing (aka Platform testing) is often a confusing term and it means different things to different people. This post aims to bring together some common concepts in this area. Feel free to add your thoughts.

What is cross-platform testing, really?

Cross-platform testing is a form of specialised testing where you verify the suitability of your solution to work across various platforms. A platform can be pretty much anything, including the OS, browser or device needed to run your solution.

It can be achieved in many ways. The two most used categories are –

  1. Cross-browser: simulating the application across a number of browsers
  2. Cross-OS: simulating the application across operating systems

Both can be performed across a range of devices, including desktops, PCs and mobile devices. An ideal solution combines the two, giving you the ability to test against multiple OS/browser combinations and an edge in covering an exhaustive list of configurations.

Why is it so important?

You put so much effort and emotion into building your solution, be it a website or an app. It is important that it resonates equally with the end users who will use the system in the long run; if they love it, there is nothing more satisfying. In an era of cloud computing and cloud-based services, most solutions are public facing, which means they are within easy reach of everyday users, who can access them (websites, apps) in limitless, diverse ways. Why not test your system across some of these ways and make it work? At least the majority of your users will be happy!

Again, you can't possibly test every single combination a user might employ to access your application, but a handful of them will cover a large chunk of your user base (the 80-20 Pareto principle applies!).

How do you approach it?

Testing applications for cross-platform capability can be immensely complicated if not approached properly. Moreover, a lack of agreed principles may lead to miscommunication and a bad reputation for your website or product. So how exactly should you approach cross-platform testing?

1. Know your audience: Probably the most important principle of testing. If you have good knowledge of who is going to use your application, perhaps backed by a bit of number crunching on past statistics, then you are in control! This will drive the next set of decisions on which devices/platforms your application should be available on and work seamlessly.

2. Decide the platforms you will support: Once you know your user base, you will have to make an informed decision on how to implement your solution to target your maximum user base.

3. Know how much you can test: The fallacy of testing is 'endless possibilities': you could test against every possible combination, but your time (and money) will limit your aspirations. So why not invest the time in the right places? Figure out which platforms to test to cover your maximum risk (and hence the majority of users!). A multi-dimensional matrix along with a heat map is handy for determining what to target first and what can wait. The yellow cells below are your 'possible test conditions' and the red ones are the 'high-risk test conditions' that you should cover as part of your testing.


4. Think about automation: At the end of the day, testing your solution across multiple platforms/devices is an extremely boring, highly repetitive piece of work. You might or might not see any differences, and by the time you are about to find something fishy, your morale will be low! So what? Automation is there to help you out. There are quite a few automated test tools in the market that can access your applications for you and probably save you an enormous amount of time.

A word of caution – many of these automated tools can only get you to a certain point (e.g. capturing screenshots across a series of platforms); a certain level of manual intervention is still needed to ensure the usability and accessibility of your applications. Fortunately, that part is not so boring!

5. The virtual world costs less than the physical: Platform testing can be extremely expensive when you consider the time and money needed to procure devices with multiple browsers, OSs and other dependencies. Fortunately, virtualisation is the saviour: you can create your own VMs with different platform combinations to test your application. Alternatively, there are quite a few companies in the market that will let you use their VMs to test your apps.

What are some testing good practices?

  1. First things first: defining a clear boundary for platform testing is always important, as it takes unnecessary confusion out of the game.
  2. Create a test matrix to test against. A matrix always comes in handy when defining the boundary, designing test scenarios or communicating with end users.
  3. Identify elements in the matrix that are at high risk of failure; a heat map can be a good way to define those high-risk elements and prioritise your tests, especially in a time- and resource-constrained environment.
  4. Use the right tools: there is no denying that the right tools make the whole effort much more efficient and exciting at the same time. They will not only save time and effort but also help a great deal in arranging and procuring resources.

What should be my Tool Strategy?

Selecting the right-fit tool for platform testing is no different from any other test tool selection process. These are some of the things that might help you –

  1. Have a quick scan of the market to see what your options are
  2. Look through the leading forums where people share their experiences with certain tools and their usage
  3. Shortlist the top 3-4 tools that make the most impact
  4. If possible, define a list of criteria against which the tools should be evaluated
  5. Prepare a quick comparison chart to see the relative position of each tool; you might use some elementary scoring against the criteria
  6. Finally, do a proof-of-concept with the top two tools to make sure you are comfortable with the decision.

And it is easy. You can get started whenever you like.

What are some of my options?

These days you have quite a few options to choose from (quite a different situation from a few years back, when you were lucky to have one workable solution). Here are some options worth looking at (not necessarily in any order of preference):

  1. Browserstack
  2. Testize
  3. TestPlant cross-browser Testing
  4. Browsershots

So what is holding you back? Go and enjoy your testing!

If you have any questions about testing your applications across multiple platforms, reach out to us.
