
Improving Software Testability



Many things go into the release of high quality software. The software needs to be well conceived, well designed, well coded, and well tested. If any of these criteria is ignored, the production of a high quality software product will occur only by sheer luck, and probably not at all. However, most development efforts strive to do more than ship high quality software - they strive to ship high quality software on time. So, this article focuses on how to efficiently achieve a well-tested software product.

There are many steps that the development team, the product manager, and testing team can take to improve the testability of the software, thereby making the best use of testing resources.

As an outsourced QA and test support services firm, ST Labs partners with developers of market-driven and corporate software to provide testing, consulting, and training services. As such, we typically don't test mission-critical or life-critical software. This article focuses on how to improve the testability of software that needs to be of a high quality, but also needs to be cost-effective to produce -- in other words, consumer or corporate software.

What is Software Testability?
Software testability can be defined in many different ways. ST Labs, in its training courses, has defined software testability simply as "a judgement of how effectively the software can be tested." This is a very broad definition; it looks at all aspects of a program that might make it more difficult to test. These can range from incomplete or ambiguous specifications to unstable code.

Other common definitions of testability are more formal. The IEEE Standard Glossary of Software Engineering Terminology (1990) defines software testability as "(1) the degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met, and (2) the degree to which a requirement is stated in terms that permit establishment of test criteria and performance of tests to determine whether those criteria have been met." This definition addresses the requirements issue and assumes that test criteria have been, or will be, formally established. This is not necessarily the case for all development projects. Many companies work with a less formal model, so they need a less formal definition of software testability.

Miller and Voas, in a number of articles, have used the following definition of software testability [i]: "the probability that a piece of software will fail on its next execution (with a particular assumed input distribution) during testing if the software includes a fault." This definition is useful for establishing a mathematical measurement of the testability of software. This is most often done for software that must be highly reliable - avionics software, military software, medical software and the like -- software upon which people's lives may depend. This type of software is not the focus of this article, so we will use our simple definition of testability: a judgement of how effectively the software can be tested.

What factors make software more or less testable?
Incomplete, out-of-date, ambiguous or contradictory product references lower testability. By product references, we mean any document whose purpose is to describe what the product should look like, how it should function, who its intended users are and how these users are expected to use it. These are typically requirements documents, design specifications, marketing specifications and so on. These documents serve not only to help developers create the software, but also to help testers form valid expectations about its behavior. It is difficult for testers to identify problems with the software if they do not have access to detailed information from which to develop testing criteria.

Along the same lines, any additional information that developers can provide testers about how the software has been coded, such as limits, boundaries, error handling routines and algorithms, will assist testers in developing effective tests.

Code that hides faults is difficult to test. According to the fault/failure model first introduced by Morell [ii], a software failure occurs only when the following three conditions occur in this order:

An input causes a fault to be executed.
The fault causes a data state error.
The data state error propagates to an output state (becomes visible to a user in the form of a failure).

Software errors become very difficult to isolate if the input that causes the fault is not followed almost immediately by an observable faulty output state. If data state errors remain undetected through several intermediate transactions, when the fault finally causes a failure, it can be almost impossible to pinpoint the cause. This type of problem can easily go undetected during testing, to surface later, and perhaps with some frequency for certain classes of users.
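As a hypothetical sketch of this propagation problem (the account scenario and all names here are invented for illustration, not from the article), consider a fault that corrupts internal state silently; the failure only becomes observable many transactions later, far from the faulty code:

```python
# Hypothetical sketch: a fault whose data state error stays hidden
# through many intermediate transactions before surfacing.
class Account:
    def __init__(self):
        self.balance_cents = 0

    def deposit(self, dollars):
        # FAULT: int() truncates; 0.29 * 100 is 28.999... in floating
        # point, so each deposit of $0.29 silently records only 28 cents.
        self.balance_cents += int(dollars * 100)

    def statement(self):
        # The corrupted balance becomes visible only here, possibly
        # long after the faulty deposit executed.
        return self.balance_cents / 100


acct = Account()
for _ in range(10):
    acct.deposit(0.29)          # each call loses a cent, invisibly

print(acct.statement())         # 2.8 instead of the expected 2.9
```

By the time the statement looks wrong, nothing points back at `deposit` as the culprit, which is exactly why such faults are so hard to isolate.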

Sometimes a data state error can exist, but the output state will appear normal. This occurs if two different input variables can produce the same result. Consider the following simple example.

A developer is tasked with writing an application that implements the following business rule: if the respondent is 18 or older AND plays golf, send the Golf Brochure. However, the developer hasn't had enough coffee (or maybe has had too much coffee!) the morning he writes this code, and instead of implementing the requirement "18 or older" he implements "older than 18." A tester who knows the business rule might run the following tests, with these results:

Age of Respondent   Plays Golf   Send Brochure   Fault?   Fault Detectable?
17 (evaluates N)    Y            N               N        --
17 (evaluates N)    N            N               N        --
18 (evaluates N)    Y            N               Y        Y
18 (evaluates N)    N            N               Y        N

(The value in parentheses shows how the faulty algorithm evaluates the age criterion; Send Brochure is the visible output.)

In this case, if the tester ran all four tests, he would discover the problem. However, if the situation were more complex and the tester were pressed for time, he might decide to skip one or two of the tests. Perhaps he would simplify by making sure to test at least once for 17 and once for 18, and once for golf and once for no golf, without taking into account the interaction between these variables. He could also decide to use 19 or some other age instead of 18, expecting "18 and up" to be a single equivalence class, in which case he wouldn't detect the fault at all. The way around this problem is to make the result of the evaluation of the age criterion visible: the tester should be able to see directly that "18" has been falsely interpreted as a "No."

Boolean functions are particularly susceptible to this type of problem.
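The brochure rule above can be sketched in Python (a hypothetical illustration; the `trace` parameter is an invented mechanism for making the intermediate evaluation visible, as the article recommends):

```python
# Hypothetical sketch of the brochure rule, with the intermediate
# evaluation of the age criterion made visible to testers.
def send_golf_brochure(age, plays_golf, trace=None):
    meets_age = age > 18        # FAULT: the rule requires age >= 18
    if trace is not None:
        # Expose the data state so a tester can see it directly.
        trace.append(("age_criterion", age, meets_age))
    return meets_age and plays_golf


trace = []
send_golf_brochure(18, True, trace)    # False: brochure wrongly withheld
send_golf_brochure(18, False, trace)   # False: output looks correct,
                                       # masking the fault
print(trace)   # both entries show age 18 evaluated as False
```

With only the Boolean output, the second call reveals nothing; the trace exposes the data state error even when the visible output happens to be right.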

Lack of proper tools and training make testing less effective. Although lack of proper tools and training does not make the code itself less testable, it does lessen the effectiveness of the testing effort. Tools can be very useful for reducing the cost of developing appropriate inputs and can increase the visibility of outputs. Automation tools can also be used for random or heuristic black box testing. Trained testers are needed to make effective use of these tools. The better testers understand the development environment and the more they know about how a product has been coded, the more efficient they will be at finding and analyzing bugs.

Strategies for improving testability of software:
What can developers do? Statistics have shown that as much as 30% of new code contains faults. Given this reality, adequately testable code is imperative for the creation of a high quality product. There are several things developers can do to make their code more testable.

Simpler design leads to software that is easier to test. This includes several aspects of the design. Simplicity of functionality means the feature set of the software should be the minimum necessary to meet the requirements - no unspecified embellishments. Structural simplicity requires an architecture that is clean and logical; modularity will help limit the propagation of faults. Simpler code is readable and structured, as opposed to "spaghetti code," which is difficult to follow and difficult to test. The development team should have agreed-upon coding standards [iii].
Inserting self-tests (assertions) into the code makes faults readily observable. With an assertion, the program will fail soon after the execution of a fault, rather than continuing to operate apparently correctly until the data state error finally triggers a failure. The liberal use of assertions makes problems much easier to spot, as well as easier to fix: the developer can see at a glance where the fault lies that caused the assertion failure.
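A minimal sketch of this idea (the discount function and its numbers are hypothetical, not from the article): the assertion fires at the moment the bad value is created, not in some distant report that merely prints a strange total.

```python
# Hypothetical sketch: an assertion makes a fault fail fast, at the
# point where the data state error is created rather than much later.
def apply_discount(price, percent):
    discounted = price * (1 - percent / 100)
    # Self-test: a discounted price can never be negative. Failing here
    # pinpoints the fault instead of letting the bad value propagate.
    assert discounted >= 0, f"negative price {discounted!r} from ({price}, {percent})"
    return discounted


print(apply_discount(100, 25))   # 75.0
# apply_discount(100, 150) would raise AssertionError immediately,
# identifying the exact call that produced the invalid data state.
```

Note that Python strips `assert` statements when run with `-O`, which mirrors the common practice of enabling self-tests in debug builds and compiling them out of release builds.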
Anything the developer can do to make output more visible will increase the testability of the software. One obvious tactic is to make the source code available to testers. The advantages of this are discussed in an upcoming section on working in a development environment. With minimal coding, developers can also greatly enhance the testability of the software. Developers can add special switches that testers can activate to force the software to behave in particular ways. They can create applets that do things like monitor memory usage. They can make logging tools so testers can discover exactly what actions the software has performed. Which of these tools are most appropriate will depend on your product. Usually, the additional time needed to develop these aids is more than made up for in saved testing and debugging time.
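One way such a tester-facing switch might look (a hypothetical sketch; the `MYAPP_TEST_LOG` variable and function names are invented): an environment variable that, when set, makes the software log exactly what actions it has performed.

```python
import logging
import os

# Hypothetical sketch: a test switch testers can flip to make the
# software log its internal actions, increasing output visibility.
log = logging.getLogger("myapp")
if os.environ.get("MYAPP_TEST_LOG") == "1":
    logging.basicConfig(level=logging.DEBUG)


def save_record(record, store):
    log.debug("saving record %r", record)          # visible in test mode only
    store.append(record)
    log.debug("store now holds %d records", len(store))


store = []
save_record({"id": 1}, store)
```

In a normal run the switch costs almost nothing; with it enabled, testers get a complete trail of the program's actions without attaching a debugger.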
Careful, complete code comments and other technical documentation benefit not only testers but developers as well. Code reviews and white box testing are much more efficient when the code is clearly commented and all pertinent information regarding the program has been documented.
Code to facilitate automation and internationalization. When possible, use standard controls. Developing automated test scripts for an application that does not use standard controls is a painstaking process that is seldom worth the time and effort. Plan to stabilize the user interface and feature set early. This will avoid time-consuming and expensive rework of the automated tests and will free automators for other important tasks that need to be done at the end of the project cycle. Consider prototyping the UI in VB or some other appropriate tool. It may save you a great deal of time at the end of the project if you spend a few weeks at the beginning of the project finalizing the UI and developing a prototype. Remember that test automation is a development activity. A prototype allows the automation development effort to take place in parallel with the product development effort, significantly improving the effectiveness of testing and the chances for releasing the software on schedule. Lastly, liberal use of string tables will greatly aid both automation and localization efforts.
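The string-table idea can be sketched as follows (a hypothetical illustration; the `IDS_` identifiers and locales are invented): UI text is looked up through stable identifiers, so automation scripts key off IDs rather than hard-coded English strings, and translators can swap in a new table without touching code.

```python
# Hypothetical sketch: a string table keyed by stable identifiers,
# serving both localization and test automation.
STRINGS = {
    "en": {"IDS_OK": "OK", "IDS_CANCEL": "Cancel"},
    "de": {"IDS_OK": "OK", "IDS_CANCEL": "Abbrechen"},
}


def load_string(string_id, locale="en"):
    # Automation scripts reference IDS_CANCEL, not the literal text,
    # so the same script works against any localized build.
    return STRINGS[locale][string_id]


print(load_string("IDS_CANCEL", "de"))   # Abbrechen
```

An automated test that clicks "the IDS_CANCEL button" keeps working when the product ships in German, whereas one that searches for the literal string "Cancel" breaks.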
Communicate frequently with the testing team. Testers need to know about all changes to the specification as well as any coding problems, process problems or other snags. Be sure to let the test team know which areas of the product are ready to test and which aren't. This can save both testers and developers much time and frustration. Developers should make themselves available to the testers to answer questions about the code. Sometimes one or two cheerfully answered questions can save the testers days of trial and error testing. If appropriate, copy the testers on development group e-mails and let them feel they are a part of the development team. Bonds formed early in the product life cycle can go a long way towards easing tensions that invariably crop up during the stressful days just prior to release of the software.

What can product managers do? The project or product manager is usually responsible for coordinating the efforts of a number of groups to ensure a successful product release. There are several things someone in this key role can do to increase the testability of the software.

First, the product manager can make sure the product references are clear, complete and up-to-date. One of the most frequently heard complaints from testers is that the specifications and/or requirements documents are incomplete and out-of-date. Development efforts tend to be very dynamic. Make sure an appropriate person is assigned to the task of updating the documentation, otherwise even excellent documentation can become obsolete a few weeks after coding commences.
If automated testing is planned, insist on early agreement on the feature set and UI. Automation efforts cannot be effectively undertaken in an environment where the UI is volatile.
Strongly encourage interaction between the developers and the testing team. Anything you can do to foster a spirit of cooperation will increase the chance of a successful, on-time release of the software. Establish formal lines of communication between the two groups early in the project. Forward any e-mail you receive from one group to the other, if appropriate. Create opportunities for the two groups to interact in an informal setting. Off-site outings (bowling, golf, movies, etc.) will give developers and testers a chance to get to know each other and become comfortable interacting. Although these tactics may not directly create more testable software, they will allow each team to better understand the other team's problems and challenges. This in itself should encourage the creation of more testable software.

What can testers do? Testers can do many things to make their own jobs easier.

Testers should learn as much as they can about the design of the product they are testing, and the technologies involved. This will help them make informed decisions about what and how to test. Without in-depth knowledge of both the product and the technologies involved, testers could waste a lot of valuable testing time on tasks that are unnecessary or performed incorrectly.
Insist on testing in a debug environment. The testing environment should be set up to mirror the development environment. This will enable testers and developers to work more closely together when tracking down and eliminating bugs. If testers have programming skills, they will need a development environment to perform white box testing. Although periodic testing in a non-debug environment is essential, the huge advantage that the debug environment provides in making output more visible means that most testing should take place with debug code activated.
There are many tools available to assist testers. Take advantage of these tools. There are tools for code coverage, tools for automation, tools for stress testing, tools for load and volume testing, tools to help you generate valid data sets and tools which allow you to manage the test processes. These tools, when properly used, can make your product much more testable.
Write clear bug reports. Well-written bug reports enable the developers to do their job more quickly and with less frustration (and they will have more respect for the tester). Others may also need to reproduce your bug: your boss, other testers, and the product manager are just a few who may have occasion to look at and try to understand your reports. Set up team bug report standards and make sure everyone is aware of them.

Creating testable software is not just the responsibility of one particular person or team. Everyone who is helping produce the software has a role in improving testing effectiveness. The more testable your software is, the better chance you have of releasing a high quality product on time.


[i] K. Miller and J. Voas, "Software Testability: The New Verification," IEEE Software, Vol. 12, No. 3, May 1995.
[ii] Larry Joe Morell, "A Theory of Error-Based Testing," Technical Report TR-1395, University of Maryland, Department of Computer Science, April 1984.
[iii] James Bach, "Principles of Software Testability," ST Labs internal publication, 1994.