If you have recently finished a system implementation or a large-scale transformation, you will be familiar with the phrase ‘test process efficiency’ and may have wondered what it actually refers to. Unlike in the past, simply delivering a test function no longer meets the need for quality management. Increasingly, organisations are looking for improved and optimised processes rather than being bound by the red lines of a traditional testing regime.

A few key pillars drive test efficiency in any organisation; in a software implementation lifecycle these are often called the ‘levers of the testing function’. They have an immense impact on the final outcome and drive the quality of the delivered product or service. There are probably as many levers as there are solutions, but at the core a few fundamental principles sit at the top and drive the rest. I prefer to see it in the following way:

Is it the right time to start?

Early entry: If you plan to make a significant contribution to the end result, get involved early. Quality should not be ‘reactive’ in nature; it has to be embedded from the very beginning of your software development.

How much is ‘enough’?

Risk assessment and prioritisation: Even for a minor project there are, in principle, infinite combinations to test and an enormous amount of data and logic to verify, and few organisations have the time or money to cover them all. You will have to strike a balance between your quality goals and your risk appetite. A proper risk assessment lets you focus on just the right set of test conditions, rather than spending effort where the returns do not justify it.
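The idea above can be sketched in a few lines of code. This is a minimal, illustrative example only: the condition names, the 1–5 likelihood/impact scales and the threshold value are all hypothetical, standing in for whatever scoring model your organisation agrees on.

```python
# Minimal sketch of risk-based test prioritisation (illustrative only).
# Each candidate test condition is scored as likelihood x impact, and we
# keep only those at or above a threshold set by the risk appetite.

def prioritise(conditions, risk_appetite):
    """Return condition names whose risk score meets the threshold,
    highest risk first. Each condition is a dict with hypothetical
    keys: name, likelihood (1-5), impact (1-5)."""
    scored = [(c["likelihood"] * c["impact"], c["name"]) for c in conditions]
    return [name for score, name in sorted(scored, reverse=True)
            if score >= risk_appetite]

conditions = [
    {"name": "payment rounding logic", "likelihood": 4, "impact": 5},
    {"name": "rarely used report filter", "likelihood": 1, "impact": 2},
    {"name": "login under peak load", "likelihood": 3, "impact": 4},
]
print(prioritise(conditions, risk_appetite=10))
# -> ['payment rounding logic', 'login under peak load']
```

The point is not the arithmetic but the discipline: once risk is scored explicitly, the cut-off line between ‘test it’ and ‘accept the risk’ becomes a visible, debatable decision instead of an accident of time pressure.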

When do we know it’s ‘done’?

Acceptance criteria: This is often the most ignored part of any testing function, yet one of the most important. When you are working with an infinite set and trying to prioritise down to an optimum subset, not knowing where to stop can prove costly. A set of predefined criteria, aligned with the risk appetite and quality goals, will prove very useful.
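One way to make ‘done’ unambiguous is to express the exit criteria as data and check them mechanically. The thresholds and metric names below are hypothetical examples; the real values come out of the agreed risk appetite and quality goals.

```python
# Illustrative sketch: predefined exit criteria as data, checked mechanically.
# Thresholds are hypothetical placeholders, not recommended values.

EXIT_CRITERIA = {
    "min_pass_rate": 0.95,          # fraction of executed tests passing
    "max_open_sev1_defects": 0,     # no open severity-1 defects allowed
    "min_requirement_coverage": 0.90,
}

def is_done(results):
    """results: dict with pass_rate, open_sev1_defects, requirement_coverage."""
    return (results["pass_rate"] >= EXIT_CRITERIA["min_pass_rate"]
            and results["open_sev1_defects"] <= EXIT_CRITERIA["max_open_sev1_defects"]
            and results["requirement_coverage"] >= EXIT_CRITERIA["min_requirement_coverage"])

print(is_done({"pass_rate": 0.97, "open_sev1_defects": 0,
               "requirement_coverage": 0.92}))
# -> True
```

Because the criteria are written down before execution starts, the ‘are we done?’ conversation becomes a comparison against agreed numbers rather than a negotiation under deadline pressure.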

Control minus the bureaucracy

Governance: Most mature testing functions have some governance mechanism built in, but it is often incomplete. It is important to understand that a few dimensions make the whole governance mechanism more sound and foolproof:

a. Team and reporting structure

b. Controls and escalation path

c. Check points including entry/exit gates

Independence vs Collaboration

Cross-team collaboration: The success of any testing function relies heavily on the team environment. Unfortunately, testing has often been viewed as an ‘independent function’ and suffers from a lack of information and coordination, resulting in a great deal of duplication, rework and inefficiency. Some of the tangible outcomes of a close and collaborative team effort are visible in:

a. Increased flow of information

b. A solid and sound release, including build management – this is where things get tricky as you start to discover multiple touch points

c. Defect resolution including allocation and triage

This approach does not come without a word of caution: a cooperative and collaborative environment is favourable only as long as it does not destabilise the objectivity and integrity of the testing.

Once you have the right levers to drive a testing function, you can increase the efficiency of one or more processes across the board. The next big questions are: where exactly can these levers be applied? How can you make these efficiency factors ‘tangible’ and, importantly, how will you measure them? That is a big enough discussion to warrant several posts, and often a matter of heated debate; I will cover it in detail in a later post. In a nutshell, this efficiency can be demonstrated and measured across all the ‘test processes’ involved in the various stages of a test cycle.

So what ‘process’ are we talking about?

To understand this better, let’s explore the steps in a typical testing regime. Most large-scale testing programmes consist of the following test stages:

A. Initiation and conceptualisation
B. Test strategy and planning
C. Test design
D. Test execution and implementation
E. Test analysis and reporting
F. Test closure and wrap up

Each of these stages carries a special significance and involves a number of test processes; for example, the ‘test design’ stage involves processes like ‘requirement analysis and prioritisation’, ‘building test specification’, ‘creating test scenarios’ and ‘building test cases and data requirements’. Each of these can be analysed, managed objectively, measured and controlled, all of which helps to improve efficiency and, in turn, overall productivity.
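The stage-to-process decomposition can be captured as a simple structure, which is what makes each process individually trackable. Only the ‘test design’ processes are named in the text above; the data structure itself, and the idea of the other stages listing their own processes, is illustrative.

```python
# Sketch: stages and their constituent test processes represented as data,
# so each process can be named, tracked and measured on its own.
# Only the 'test design' entries come from the text; the rest is implied.

TEST_STAGES = {
    "test design": [
        "requirement analysis and prioritisation",
        "building test specification",
        "creating test scenarios",
        "building test cases and data requirements",
    ],
    # The other stages (strategy and planning, execution, analysis and
    # reporting, closure) would list their own processes here.
}

for stage, processes in TEST_STAGES.items():
    print(f"{stage}: {len(processes)} measurable processes")
```

Once the processes are enumerated explicitly like this, attaching a metric to each one is a small step rather than a research project.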

A classic example is a team following an agile delivery approach, where these test processes are measured across each sprint. As you move from one sprint to the next, you continue to observe the processes and collect relevant metrics over the life of the project. A quick analysis of the data will tell you where to focus to improve your delivery.
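The sprint-by-sprint analysis above can be as simple as tracking one metric over time. The sprint data and the choice of defect density as the metric are hypothetical; any per-process metric your team collects would slot in the same way.

```python
# Sketch: comparing a test-process metric across sprints to spot trends.
# Data and the chosen metric (defect density) are hypothetical examples.

sprints = [
    {"sprint": 1, "tests_executed": 120, "defects_found": 30},
    {"sprint": 2, "tests_executed": 140, "defects_found": 21},
    {"sprint": 3, "tests_executed": 150, "defects_found": 9},
]

for s in sprints:
    density = s["defects_found"] / s["tests_executed"]
    print(f"sprint {s['sprint']}: defect density {density:.2f}")
```

A falling defect density across sprints suggests the earlier levers (early entry, risk-based selection) are paying off; a flat or rising line tells you which sprint, and which process within it, deserves a closer look.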

To conclude, once you have set up a testing function, the next big step in your testing regime is to understand and reflect on your current process. Improving a test process not only lifts your team’s performance and motivation; it also keeps reducing costs for your client as the overall process improves.

So, time to go back to the table and ask yourself the fundamental question – is my testing efficient?
