Download the TPI Evaluation toolkit

The following toolkits can be downloaded for free:

· Interim Maturity Evaluation based on Capability Maturity Model V1.1

· Interim Maturity Evaluation based on Capability Maturity Model Integrated for Systems Engineering and Software Engineering V1.1

· Interim Maturity Evaluation based on Capability Maturity Model Integrated for Systems Engineering, Software Engineering, Integrated Product and Process Development, and Supplier Sourcing, V1.1

· Test Process Improvement Evaluation based on the Test Process Improvement Model (TPI)®, IQUIP, The Netherlands

· Interim Maturity Evaluation based on People Capability Maturity Model V2.0

· Size estimation using paired comparisons: a tool that helps to make better software size estimates in Lines Of Code.
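The paired-comparison approach is easy to demonstrate with a small worked example. The Python sketch below is not the downloadable tool itself; the module names, judgment values, and reference size are all invented for illustration. It derives a relative weight for each module as the geometric mean of its row of pairwise size-ratio judgments, then anchors the scale with one module of known size:

```python
# Minimal sketch of size estimation via paired comparisons.
# All module names and numbers here are illustrative assumptions.

def estimate_sizes(modules, ratio, reference, reference_loc):
    """Estimate LOC for each module from pairwise size-ratio judgments.

    ratio[(a, b)] holds the estimator's judgment of size(a) / size(b).
    Each module's relative weight is the geometric mean of its row of
    the judgment matrix; weights are then scaled so that `reference`
    matches its known size in lines of code.
    """
    weights = {}
    for a in modules:
        product = 1.0
        for b in modules:
            if a == b:
                r = 1.0
            elif (a, b) in ratio:
                r = ratio[(a, b)]
            else:
                r = 1.0 / ratio[(b, a)]  # fall back to the reciprocal judgment
            product *= r
        weights[a] = product ** (1.0 / len(modules))
    scale = reference_loc / weights[reference]
    return {m: round(w * scale) for m, w in weights.items()}

# "parser" is a finished module whose known size (2,000 LOC) anchors the scale.
mods = ["parser", "ui", "report"]
judgments = {("ui", "parser"): 1.5, ("report", "parser"): 0.5,
             ("ui", "report"): 3.0}
print(estimate_sizes(mods, judgments, "parser", 2000))
# -> {'parser': 2000, 'ui': 3000, 'report': 1000}
```

Estimating relative sizes ("is A bigger than B, and by how much?") is usually easier for people than estimating absolute LOC, which is the idea the toolkit's technique builds on.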

Bugzilla and Active Directory

Test Process Improvement (TPI)

The Test Process Improvement (TPI) model has been developed based on the practi- ... 6.1 General description of the TPI model. A test process improvement ...

Richard Murillo

Workflow Download.com - Time Sheets Software Download Directory

Testing Maturity Model - Google Search

The TPI model supports the improvement of test processes, and offers insight into the "maturity" of the test processes within your organization. ...

A Maturity Model for Automated Software Testing (MDDI archive, Dec 94)

Developing a Testing Maturity Model, Part II

Developing a Testing Maturity Model: Part I - Aug 1996

Beizer's Phases in a Tester's Mental Life

Phase 0 = There's no difference between testing and debugging. Other than in support of debugging, testing has no purpose.
Phase 1 = The purpose of testing is to show that the software works.
Phase 2 = The purpose of testing is to show that the software doesn't work.
Phase 3 = The purpose of testing is not to prove anything, but to reduce the perceived risk of not working to an acceptable value.
Phase 4 = Testing is not an act. It is a mental discipline that results in low-risk software without much testing effort.

Test Development FAQ

List of Guidelines and Good Practices:

* Guideline 1: Plan & commit early.
o Good Practice 1: Decide as soon as possible — will the Working Group build test materials or acquire them?
o Good Practice 2: Think about and enumerate the quality-related deliverables that might help the Working Group through the Recommendation track.
o Good Practice 3: Synchronize quality-related deliverables and their development milestones with specification milestones.
o Good Practice 4: Consider whether the Working Group should bind any quality criteria to Rec-track advancement.
o Good Practice 5: Put some thought into how to staff the Working Group's test and other quality assurance plans.
* Guideline 2: Document QA processes.
o Good Practice 6: Put all of the Working Group's important test and other quality-related information in one place in a QA Process Document.
o Good Practice 7: Identify a Working Group point-of-contact for test materials or other quality-related business.
o Good Practice 8: Specify an archived email list to use for quality-related communications.
o Good Practice 9: Identify Web page(s) for test suites, announcements, and other quality-related topics.
* Guideline 3: Resolve legal & license issues.
o Good Practice 10: As early as possible, get agreement about acceptable license terms for submission of test materials.
o Good Practice 11: As soon as the nature of the Working Group's test materials becomes clear, get agreement about license terms for their publication.
o Good Practice 12: Decide policy about having brands, logos, or conformance icons associated with the Working Group's test materials.
* Guideline 4: Consider acquiring test materials.
o Good Practice 13: Do a quality assessment of proposed test materials before going any further.

The FOCUS-PDCA Methodology

An extension of the Plan-Do-Check-Act (PDCA) cycle, sometimes called the Deming or Shewhart cycle. From the Hospital Corporation of America.

Plan-Do-Check-Act: A Problem-Solving Process

iSixSigma Featured Link

QA Focus Papers

The Immaturity of CMM

by James Bach
james@satisfice.com

(Formerly of Borland International)

This article was originally published in the September ‘94 issue of American Programmer.

The Software Engineering Institute's (SEI) Capability Maturity Model (CMM) gets a lot of publicity. Given that the institute is funded by the US Department of Defense to the tune of tens of millions of dollars each year [1], this should come as no surprise— the folks at the SEI are the official process mavens of the military, and have the resources to spread the word about what they do. But, given also that the CMM is a broad, and increasingly deep, set of assertions as to what constitutes good software development practice, it's reasonable to ask where those assertions come from, and whether they are in fact complete and correct.

My thesis, in this essay, is that the CMM is a particular mythology of software process evolution that cannot legitimately claim to be a natural or essential representation of software processes.

The CMM is at best a consensus among a particular group of software engineering theorists and practitioners concerning a collection of effective practices grouped according to a simple model of organizational evolution. As such, it is potentially valuable for those companies that completely lack software savvy, or for those who have a lot of it and thus can avoid its pitfalls.

At worst, the CMM is a whitewash that obscures the true dynamics of software engineering and suppresses alternative models. If an organization follows it for its own sake, rather than simply as a requirement mandated by a particular government contract, it may very well lead to the collapse of that company's competitive potential. For these reasons, the CMM is unpopular among many of the highly competitive and innovative companies producing commercial shrink-wrap software.

A short description of the CMM

The CMM [7] was conceived by Watts Humphrey, who based it on the earlier work of Phil Crosby. Active development of the model by the SEI began in 1986.

It consists of a group of "key practices", neither new nor unique to CMM, which are divided into five levels representing the stages that organizations should go through on the way to becoming "mature". The SEI has defined a rigorous process assessment method to appraise how well an organization satisfies the goals associated with each level. The assessment is supposed to be led by an authorized lead assessor.

The maturity levels are:

1. Initial (chaotic, ad hoc, heroic)

2. Repeatable (project management, process discipline)

3. Defined (institutionalized)

4. Managed (quantified)

5. Optimizing (process improvement)

One way companies are supposed to use the model is first to assess their maturity level and then form a specific plan to get to the next level. Skipping levels is not allowed.

The CMM was originally meant as a tool to evaluate the ability of government contractors to perform a contracted software project. It may be suited for that purpose; I don't know. My concern is that it is also touted as a general model for software process improvement. In that application, the CMM has serious weaknesses.

Shrink-wrap companies, which have also been called commercial off-the-shelf firms or software package firms, include Borland, Claris, Apple, Symantec, Microsoft, and Lotus, among others. Many such companies rarely if ever manage their requirements documents as formally as the CMM describes. This is a requirement to achieve level 2, and so all of these companies would probably fall into level 1 of the model.



Criticism of the CMM

A comprehensive survey of criticism of the CMM is outside the scope of this article. However, Capers Jones and Gerald Weinberg are two noteworthy critics.

In his book Assessment & Control of Software Risks [6], Jones discusses his own model, Software Productivity Research (SPR), which was developed independently from CMM at around the same time and competes with it today. Jones devotes a chapter to outlining the weaknesses of the CMM. SPR accounts for many factors that the CMM currently ignores, such as those contributing to the productivity of individual engineers.

In the two volumes of his Quality Software Management series [12,13], Weinberg takes issue with the very concept of maturity as applied to software processes, and instead suggests a paradigm based on patterns of behavior. Weinberg models software processes as interactions between humans, rather than between formal constructs. His approach suggests an evolution of "problem-solving leadership" rather than canned processes.

General problems with CMM

I don't have the space to expand fully on all the problems I see in the CMM. Here are the biggest ones from my point of view as a process specialist in the shrink-wrap world:

· The CMM has no formal theoretical basis. It's based on the experience of "very knowledgeable people". Hence, the de facto underlying theory seems to be that experts know what they're doing. According to such a principle, any other model based on experiences of other knowledgeable people has equal veracity.

· The CMM has only vague empirical support. That is, the empirical support for CMM could also be construed to support other models. The model is based mainly on experience of large government contractors, and Watts Humphrey's own experience in the mainframe world. It does not account for the success of shrink-wrap companies, and levels 1, 4, and 5 are not well represented in the data: the first because it is misrepresented, the latter two because there are so few organizations at those levels. The SEI’s Mark Paulk can cite numerous experience reports supporting CMM, and he tells me that a formal validation study is underway. That's all well and good, but the anecdotal reports I've seen and heard regarding success using the CMM could be interpreted as evidence for the success of people working together to achieve anything. In other words, without a comparison of alternative process models under controlled conditions, the empirical case can never be closed. On the contrary, the case is kept wide open by ongoing counterexamples in the form of successful level 1 organizations, and by the curious lack of data regarding failures of the CMM (which may be due to natural reluctance on the part of companies to dwell on their mistakes, or of the SEI to record them).

· The CMM reveres process, but ignores people. This is readily apparent to anyone who is familiar with the work of Gerald Weinberg, for whom the problems of human interaction define engineering. By contrast, both Humphrey and CMM mention people in passing [5], but both also decry them as unreliable and assume that defined processes can somehow render individual excellence less important. The idea that process makes up for mediocrity is a pillar of the CMM, wherein humans are apparently subordinated to defined processes. But, where is the justification for this? To render excellence less important, the problem-solving tasks would somehow have to be embodied in the process itself. I've never seen such a process, but if one exists, it would have to be quite complex. Imagine a process definition for playing a repeatably good chess game. Such a process exists, but is useful only to computers; a process useful to humans has neither been documented nor taught as a series of unambiguous steps. Aren't software problems at least as complex as chess problems?

· The CMM reveres institutionalization of process for its own sake. Since the CMM is principally concerned with an organization's ability to commit, such a bias is understandable. But, an organization's ability to commit is merely an expression of a project team's ability to execute. Even if necessary processes are not institutionalized formally, they may very well be in place, informally, by virtue of the skill of the team members. Institutionalization guarantees nothing, and efforts to institutionalize often lead to a bifurcation between an oversimplified public process and a rich private process that must be practiced undercover. Even if institutionalization is useful, why not instead institutionalize a system for identifying and keeping key contributors in the organization, and leave processes up to them?

· The CMM contains very little information on process dynamics. This makes it confusing to discuss the relationship between practices and levels with a CMM proponent, because of all the hidden assumptions. For instance, why isn’t training at level 1 instead? Training is especially important at level 1, where it may take the form of mentoring or of generic training in any of the skills of software engineering. The answer seems to be that nothing is placed at level 1, because level 1 is defined merely as not being at level 2. The hidden assumption here is that who we are, what problems we face, and what we’re already doing doesn’t matter: just get to level 2. In other words, the CMM doesn’t perceive or adapt to the conditions of the client organization. Therefore, training or any other informal practice at level 1, no matter how effective it is, could be squashed accidentally by a blind and static CMM. Another example: Why is defect prevention a level 5 practice? We use project post mortems at Borland to analyze and improve our processes -- isn't that a form of defect prevention? There are many such examples I could cite, based on a reading of the CMM 1.1 document (although I did not review the voluminous Key Practices document) and the appendix of Humphrey's Managing the Software Process [5]. Basically, most and perhaps all of the key practices could be performed usefully at level 1, depending on the particular dynamics of the particular organization. Instead of actually modeling those process dynamics, the way Weinberg does in his work, the CMM merely stratifies them.

· The CMM encourages displacement of goals from the true mission of improving process to the artificial mission of achieving a higher maturity level. I call this "level envy", and it generally has the effect of blinding an organization to the most effective use of its resources. The SEI itself recognizes this as a problem and has taken some steps to correct it. The problem is built into the very structure of the model, however, and will be very hard to exorcise.

Feet of clay: The CMM's fundamental misunderstanding of level 1 Organizations

The world of technology thrives best when individuals are left alone to be different, creative, and disobedient. -- Don Valentine, Silicon Valley Venture Capitalist [8]

Apart from the concerns mentioned above, the most powerful argument against the CMM as an effective prescription for software processes is the many successful companies that, according to the CMM, should not exist. This point is most easily made against the backdrop of the Silicon Valley.

Tom Peters’s Thriving on Chaos [9] amounts to a manifesto for Silicon Valley. It places innovation, non-linearity, and ongoing revolution at the center of its world view. Here in the Valley, innovation reigns supreme, and it is from the vantage point of the innovator that the CMM seems most lost. Personal experience at Apple and Borland, and contact with many others in the decade I've spent here, support this view.

Proponents of the CMM commonly mistake its critics for being anti-process, and some of us are. But a lot of us, including me, are process specialists. We believe in the kinds of processes that support innovation. Our emphasis is on systematic problem-solving leadership to enable innovation, rather than mere process control to enable cookie-cutter solutions.

Innovation per se does not appear in the CMM at all, and it is only suggested by level 5. This is shocking, in that the most innovative firms in the software industry (e.g., General Magic, a pioneer in personal digital communication technology) operate at level 1, according to the model. This includes Microsoft, too, and certainly Borland [2]. Yet, in terms of the CMM, these companies are considered no different than any failed startup or paralyzed steel company. By contrast, companies like IBM, which by all accounts has made a real mess of the Federal Aviation Administration’s Advanced Automation Project, score high in terms of maturity (according to a member of a government audit team with whom I spoke).

Now, the SEI argues that innovation is outside of its scope, and that the CMM merely establishes a framework within which innovation may more freely occur. According to the literature of innovation, however, nothing could be further from the truth. Preoccupied with predictability, the CMM is profoundly ignorant of the dynamics of innovation.

Such dynamics are documented in Thriving on Chaos, Reengineering the Corporation [4], and The Fifth Discipline [10], three well known books on business innovation. Where innovators advise companies to get flexible, the CMM advises them to get predictable. Where the innovators suggest pushing authority down in the organization, the CMM pushes it upward. Where the innovators recommend constant constructive innovation, the CMM mistakes it for chaos at level 1. Where the innovators depend on a trail of learning experiences, the CMM depends on a trail of paper.

Nowhere is the schism between these opposing world-views more apparent than on the matter of heroism. The SEI regards heroism as an unsustainable sacrifice on the part of particular individuals who have special gifts. It considers heroism the sole reason that level 1 companies succeed, when they succeed at all.

The heroism more commonly practiced in successful level 1 companies is something much less mystical. Our heroism means taking initiative to solve ambiguous problems. This does not mean burning people up and tossing them out, as the SEI claims. Heroism is a definable and teachable set of behaviors that enhance and honor creativity (as a unit of United Technologies Microelectronics Center has shown [3]). It is communication, and mutual respect. It means the selective deployment of processes, not according to management mandate, but according to the skills of the team.

Personal mastery is at the center of heroism, yet it too has no place in the CMM, except through the institution of a formal training program. Peter Senge [10] has this to say about mastery:

"There are obvious reasons why companies resist encouraging personal mastery. It is 'soft', based in part on unquantifiable concepts such as intuition and personal vision. No one will ever be able to measure to three decimal places how much personal mastery contributes to productivity and the bottom line. In a materialistic culture such as ours, it is difficult even to discuss some of the premises of personal mastery. 'Why do people even need to talk about this stuff?' someone may ask. 'Isn't it obvious? Don't we already know it?'"

This is, I believe, the heart of the problem, and the reason why CMM is dangerous to any company founded upon innovation. Because the CMM is distrustful of personal contributions, ignorant of the conditions needed to nurture non-linear ideas, and content to bury them beneath a constraining superstructure, achieving level 2 on the CMM scale may very well stamp out the only flame that lit the company to begin with.

I don't doubt that such companies become more predictable, in the way that life becomes predictable if we resolve never to leave our beds. I do doubt that such companies can succeed for long in a dynamic world if they work in their pajamas.

An alternative to CMM

If not the maturity model, then by what framework can we guide genuine process improvement?

Alternative frameworks can be found in generic form in Thriving on Chaos, which contains 45 "prescriptions", or The Fifth Discipline, which presents--not surprisingly--five disciplines. The prescriptions of Thriving on Chaos are embodied in an organizational tool called The Excellence Audit, and The Fifth Discipline Fieldbook [11], which provides additional guidance in creating learning organizations, is now available.

An advantage of these models is that they provide direction, without mandating a particular shape to the organization. They actually provide guidance in creating organizational change.

Specific to software engineering, I'm working on a process model at Borland that consists of a seven-dimensional framework for analyzing problems and identifying necessary processes. These dimensions are: business factors, market factors, project deliverables, four primary processes (commitment, planning, implementation, convergence), teams, project infrastructure, and milestones. The framework connects to a set of scalable "process cycles". The process cycles are repeatable step-by-step recipes for performing certain common tasks.

The framework is essentially a situational repository of heuristics for conducting successful projects. It is meant to be a quick reference to aid experienced practitioners in deciding the best course of action.

The key to this model is that the process cycles are subordinated to the heuristic framework. The whole thing is an aid to judgment, not a prescription for institutional formalisms. The structure of the framework, as a set of two-dimensional grids, assists in process tailoring and asking "what if...?"
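To make that structure a bit more concrete, here is a speculative sketch of how such a situational repository of heuristics might be represented in code. Only the seven dimension names come from the text; the situations, advice, and example cycle are invented placeholders, and this is in no way Bach's actual Borland tool:

```python
# Speculative sketch of a situational repository of heuristics.
# Dimension names come from the article; everything else is invented.

DIMENSIONS = [
    "business factors", "market factors", "project deliverables",
    "primary processes",  # commitment, planning, implementation, convergence
    "teams", "project infrastructure", "milestones",
]

# A "process cycle" is a repeatable, step-by-step recipe for a common task.
PROCESS_CYCLES = {
    "change triage": [  # hypothetical example cycle
        "reproduce or restate the request",
        "assess impact against current commitments",
        "accept, defer, or reject, and record the decision",
    ],
}

# Heuristics are indexed by (dimension, situation) -- the two-dimensional
# grids mentioned above -- and point at a process cycle when one applies.
HEURISTICS = {
    ("project deliverables", "late change requested"): {
        "advice": "re-examine convergence risk before committing",
        "cycle": "change triage",
    },
}

def consult(dimension: str, situation: str):
    """An aid to judgment, not a mandate: return advice and recipe, if any."""
    entry = HEURISTICS.get((dimension, situation))
    if entry is None:
        return "no recorded heuristic; rely on the team's judgment", []
    return entry["advice"], PROCESS_CYCLES.get(entry["cycle"], [])

advice, steps = consult("project deliverables", "late change requested")
print(advice, steps)
```

The point of the shape is exactly what the paragraph above says: the cycles hang off the heuristics, so a team consults the repository and then decides, rather than being handed a mandated procedure.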

In terms of this model, maturity means recognizing problems (through the analysis of experience and use of metrics) and solving them (through selective definition and deployment of formal and informal processes), and that means developing judgment and cooperation within teams. Unlike the CMM, there is no a priori declaration either of the problems, or the solutions. That determination remains firmly in the hands of the team.

The disadvantage of this alternative model is that it's more complex, and therefore less marketable. There are no easy answers, and our progress cannot be plotted on the fingers of one hand. But we must resist the temptation to turn away from the unmeasurable and sometimes ineffable reality of software innovation.

After all, that would be immature.



Postscript 02/99

In the five years since I wrote this article, neither the CMM situation, nor my assessment of it, has changed much. The defense industry continues to support the CMM. Some commercial IT organizations follow it, many others don’t. Software companies pursuing the great technological goldrush of our time, the Internet, are ignoring it in droves. Studies alleging that the CMM is valuable don’t consider alternatives, and leave out critical data that would allow a full analysis of what’s going on in companies that claim to have moved up in CMM levels and to have benefited for that reason.

One thing about my opinion has shifted. I’ve become more comfortable with the distinction between the CMM philosophy, and the CMM issue list. As a list of issues worth addressing in the course of software process improvement, the CMM is useful and benign. I would argue that it’s incomplete and confusing in places, but that’s no big deal. The problem begins when the CMM is adopted as a philosophy for good software engineering.

Still, it has become a lot clearer to me why the CMM philosophy is so much more popular than it deserves to be. It gives hope, and an illusion of control, to management. Faced with the depressing reality that software development success is contingent upon so many subtle and dynamic factors and judgments, the CMM provides a step-by-step plan to do something unsubtle and create something solid. The sad part is that this step-by-step plan usually becomes a substitute for genuine education in engineering management, and genuine process improvement.

Over the last few years, I’ve been through Jerry Weinberg’s classes on management and change artistry: Problem Solving Leadership, and the Change Shop. I’ve become a part of his Software Engineering Management Development Group program, and the SHAPE forum. Information about all of these are available at http://www.geraldmweinberg.com. In my view, Jerry’s work continues to offer an excellent alternative to the whole paradigm of the CMM: managers must first learn to see, hear, and think about human systems before they can hope to control them. Software projects are human systems—deal with it.

One last plug. Add to your reading list The Logic of Failure, by Dietrich Dorner. Dorner analyzes how people cope with managing complex systems. Without mentioning software development or capability maturity, it’s as eloquent an argument against CMM philosophy as you’ll find.

References

1. Berti, Pat, "Four Pennsylvania schools await defense cuts.", Pittsburgh Business Times, Jan 22, 1990 v9 n24

2. Coplien, James, "Borland Software Craftsmanship: a New Look at Process, Quality and Productivity", Proceedings of the 5th Borland International Conference, 1994

3. Couger, J. Daniel; McIntyre, Scott C.; Higgins, Lexis F.; Snow, Terry A., "Using a bottom-up approach to creativity improvement in IS development.", Journal of Systems Management, Sept 1991 v42 n9 p23(6)

4. Hammer, Michael; Champy, James, Reengineering the Corporation, HarperCollins, 1993

5. Humphrey, Watts, Managing the Software Process, ch. 2, Addison-Wesley, 1989

6. Jones, Capers, Assessment & Control of Software Risks, Prentice-Hall, 1994

7. Paulk, Mark, et al., Capability Maturity Model 1.1, CMU/SEI-93-TR-24

8. Peters, Tom, The Tom Peters Seminar: Crazy Times Call for Crazy Organizations, Random House, 1994

9. Peters, Tom, Thriving on Chaos: Handbook for a Management Revolution, HarperCollins, 1987

10. Senge, Peter, The Fifth Discipline, Doubleday, 1990

11. Senge, Peter, The Fifth Discipline Fieldbook, Doubleday, 1994

12. Weinberg, Gerald M., Quality Software Management, v. 1: Systems Thinking, Dorset House, 1991

13. Weinberg, Gerald M., Quality Software Management, v. 2: First-Order Measurement, Dorset House, 1993

Bug Entry Good Practices

Bug Tracking and Project Management Software - BUGtrack - Bug Entry Good Practices

What is a good test case?

By Cem Kaner

This is a good summary for people new to software testing. I'm a strong believer in using multiple "test styles" and test activities as part of an overall testing strategy. This paper breaks black box testing into Function, Domain, Specification, Risk-based, Stress, Regression, User, Scenario, State-model based, High volume automated, and Exploratory testing. Internally we may use different terms, but hit most of these categories in some form. For example, in addition to functional testing, which is probably the dominant style, we also do stress, capacity, security, specification (feature specs as well as others, such as logo requirements and Accessibility/Section 508), scenario, and exploratory testing. We do “User” testing through both dogfooding our product as well as through betas and early adopter programs. My team has a few pilot projects with State-model based testing, but it’s limited right at the moment. We have recently done a lot more high volume automated testing on our IDE features, where we take arbitrary code and apply generic tests to it. This has been pretty successful. We’ve taken our compiler test suite of 20,000+ language tests and run it through this engine and have found a number of bugs we didn’t find with traditional methods.
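Of the styles listed above, high volume automated testing is the most mechanical, so a sketch may help readers new to it. The Python fragment below is purely illustrative (the function under test, the input generator, and both oracles are invented for this example, not taken from Kaner's paper or from any product): it pushes thousands of arbitrary inputs through one routine and applies generic checks that should hold for every input, which is how a single engine can drive an existing corpus of test inputs.

```python
# Illustrative sketch of high volume automated testing: generate many
# arbitrary inputs and apply generic oracles that should hold for all of them.
import random
import string

def function_under_test(text: str) -> str:
    # Stand-in for a real routine, e.g. a whitespace normalizer in an IDE.
    return " ".join(text.split())

def random_input(max_len: int = 100) -> str:
    alphabet = string.ascii_letters + string.digits + " \t\n"
    length = random.randrange(max_len)
    return "".join(random.choice(alphabet) for _ in range(length))

def run_high_volume(iterations: int = 10_000) -> None:
    for i in range(iterations):
        data = random_input()
        try:
            out = function_under_test(data)
        except Exception as exc:
            # Generic oracle 1: no input should crash the routine.
            print(f"iteration {i}: crash on {data!r}: {exc}")
            continue
        # Generic oracle 2: the routine should be idempotent.
        if function_under_test(out) != out:
            print(f"iteration {i}: not idempotent for {data!r}")

if __name__ == "__main__":
    run_high_volume()
```

The oracles are deliberately generic (no crash, idempotence) so the same harness can run over arbitrary inputs without anyone hand-writing expected outputs for each case.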

Managing a Software Test Team

by James Bach
Copyright 1997, Satisfice, Inc.

Whether you're an individual tester assigned to find bugs for a team of developers, or the manager of a testing department with 75 testers, you have an uphill job. When testing, you can't be sure you will catch all the problems, or even all of the important ones. You can criticize the product, but you can't directly improve it. Your work results in little that's tangible, so people assume there isn't much to it. If you're new to test leadership, let me assure you that these problems are normal and manageable. There is hope. In this article, I offer a set of principles and guidelines for being a successful test lead. These principles come from my own experience as a test manager and consultant, and from many mentors and colleagues who helped me learn the craft.

The environment of testing

No tester is an island. Beyond the technological issues inherent in testing, and the details of your test strategies, the problems you face in testing depend largely on how your project environment is structured. In my experience, most test teams operate in projects that provide little support for the needs of a good testing process. There's nothing sinister about that. It's just that few people know much about testers, testing, or the dynamics of software quality. What we don't understand, we generally don't support. Your job as a test team lead is partly to make the test process visible, along with its benefits and consequences, so that the environment will support it better.

There isn't room, here, to detail all of the aspects of the typical project environment for testing, but here are a few points to keep in mind:

* No matter what management says about product quality, they usually don't behave as if it has to be especially excellent. The fact is, very few software products really have to be especially excellent. Software quality is rarely a selling point; it's only that the noticeable lack of quality in your product can be a selling point for your competitors.
* No matter what you do, you'll never have enough staff to do what you'd like to do.
* The requirements and specifications for the product, most of which will never be written down, will be vague, or outdated whenever you do see them.

The mission and tasks of testing

Everybody shares the overall mission of the project-- something like "ship a great product" or "please the customer." But each functional team also needs a more specific mission that contributes to the whole. In well-run projects, the mission of the test team is not merely to perform testing, but to help minimize the risk of product failure. Testers look for manifest problems in the product, potential problems, and the absence of problems. They explore, assess, track, and report product quality, so that others in the project can make informed decisions about product development. It's important to recognize that testers are not out to "break the code." We are not out to embarrass or complain, just to inform. We are human meters of product quality.

The heart of testing is examining a product system, evaluating it against a value system, and discovering if the product satisfies those values. To do this completely is impossible. Trust me on this. No testing process, practical or theoretical, can guarantee that it will reveal all the problems in a product.
However, it may be quite feasible to do a good enough job of testing, if the goal of testing is not to find all the problems-- only the critical ones; the ones that will otherwise be found in the field and lead to painful consequences for the company. The thing is, it can be difficult to tell the difference between a good job of testing and a poor one. Since many problems that are likely to be found in the field are just plain easy to find, even a sloppy test process can be accidentally sufficient for some projects. And if the product doesn't have many critical problems to begin with, even a crippled test process may seem successful. Systematic good enough testing, on the other hand, is more than just a process of finding problems. It also provides a reasonable confidence on which to base decisions about the product.

A note about quality assurance: the role of quality assurance is a superset of testing. Its mission is to help minimize the risk of project failure. QA people try to understand the dynamics of project failure (which includes product failure as an aspect) and help the team prevent, detect, and correct the problems. Often test teams are called QA teams, sometimes in the enlightened belief that testers should evolve into the broader world of QA, and sometimes just because it sounds cooler.

If you take the mission of reducing risk seriously, then you need a test process that provides a balanced perspective on the quality of the product, and a reasonable confidence of revealing critical problems that may be lurking there. This requires effective collaboration with other members of the team, as well as users and other stakeholders who may not be on the project team. It must be coordinated in lock step with all of the other activities of the product development lifecycle. Testers themselves must have the right skills and knowledge to design and perform the test process. Finally, you as a test lead have to support all of these tasks. That makes six major tasks of testing. There isn't room here to expand on all of them, but here's a little more detail on each one.

1. Monitor product quality.
   1.1 Quality Planning: explore and evolve a clear idea of the requirements for your product.
   1.2 Quality Assessment: compare your observations of the product with the quality standard.
   1.3 Quality Tracking: be alert to changes in quality over the course of the project.
   1.4 Quality Reporting: keep the team informed about your findings.

Quality monitoring, overall, is the core service of the test team, and supports their mission of helping to minimize the risk of product failure. The remaining testing tasks support this service.

2. Perform an appropriate test process.
   2.1 Test Planning: decide what test techniques to use, and how they will be applied.
   2.2 Product Analysis: understand what the product is and how it works.
   2.3 Product Coverage: decide which parts of the product to test, and assure that they are covered.
   2.4 Test Design: design tests that fulfill the test plan.
   2.5 Test Execution: execute tests that fulfill the test designs and test plan.

A test process is appropriate if it provides an acceptable level of confidence in the quality assessment. The less sure you need to be about your quality assessment, the less extensive your test process should be. Don't insist on extensive testing for low risk components. But, if you need to ship a high quality product, then you need to work hard at the tasks above.

3. Coordinate testing schedule with all other activities in the project.
Make sure you know what's going on in the project. Many other processes beyond the test team can impact the test process. Make sure you are included in planning and status meetings. When testing is on the critical path, management is strongly tempted to cut it short. So, don't stand directly in front of the project freight train, if you can help it. Start testing the moment that a barely testable product is available, build up a backlog of bug reports, then try to keep development working up to the last minute, fixing bugs, while insisting on careful change control to help assure that no reckless modifications are made to the code base. If you can safely perform incremental, spot regression testing on changes, rather than multi-week cycles of general testing, then the developers stay on the critical path throughout most of the project. By the way, if your project has a sloppy release process, all your work in testing could go out the window at the last minute, due to one ill-advised bug fix that introduces a slew of new problems.

4. Collaborate with the project team and project stakeholders.

Since testers are merchants of information, their success lies in large part with the quality of their information pipelines to and from the other functions of product development, as well as to stakeholders who aren't even on the project team. If you can establish a connection to actual or potential users of the product, they can help you improve your notion of acceptable quality. Developers give more credence to tests that involve realistic situations, so collecting data from users for use in test design is a good idea. Technical support people often hear from users who have concerns about quality, so they are useful to pal around with, too. They can also help you understand what kind of problems are likely to generate costly support calls.

Virtually all products have some kind of online help or user manual. Whoever writes them has to examine the product. When they find problems, they should bring them to you, or at least copy you on them. You can help them understand the product better, and they can help you spot patterns of problems. Also, a user manual is an invaluable tool to use as a test plan. Whenever the technical writers ask you to review a manual or help file, I'd suggest dropping everything else for a week and doing nothing but testing against the writing. You are guaranteed to learn more about the product and find important bugs to boot.

The marketing team produces fact sheets and ads, and those should be tested. Besides, any claim that goes into an advertisement gives you more leverage to argue for a high quality standard. Marketing people can help you understand which parts of the product they believe are most critical for impressing customers in the field, which helps justify the resources to test them.

Your relationship to management dictates how well you will be allowed to do your job. Do what you can to make management understand and value your test processes, and especially your mission. The key thing with management is to track historical data about customers experiencing critical problems, then show them how a better process might prevent future mishaps. People learn best from their own failures.

Your most important relationship is with the developers. This is the one that makes your job or breaks it.

What Testers need from Developers

Information:
* What the product is and how it works.
* How the product is intended to be used.
* What parts of the product are at greater risk of failure.
* The schedule for remaining development work.
* Status and details of any changes or additions to the product.
* Status of reported problems.

Services:
* Provide timely information to the testers.
* Respond quickly to reported problems.
* Stay synchronized with the project schedule.
* Collaborate to set an appropriate standard of quality.
* Involve testers in all decisions that may impact them.
* Seek to understand the test process.

What Developers need from Testers

Information:
* Problems found.
* What will be tested.
* Schedule for testing.
* Status of the current test cycle.
* What information testers need to work effectively.

Services:
* Provide timely information to Development.
* Seek to understand the general process of creating software.
* Seek to understand the product and its underlying technologies.
* Respond quickly to new builds.
* Stay synchronized with the schedule, and don't delay the project.
* Collaborate to set an appropriate standard of quality.

5. Study and learn.
   5.1 Technology Study: learn about component technologies used in your product.
   5.2 Market Study: learn how your users think, and who your competition is.
   5.3 Testing Study: strive to know your job well enough to teach it to others.
   5.4 Experiential Learning: preserve notes, collect metrics, and review problems found in the field.

The world of technology changes so fast that it takes constant learning to get good at testing it. Good testing is so hard that we need to be constantly looking out for ways to do it better.

6. Lead the test team.
   6.1 Resource Estimation: figure out what you need to do the job.
   6.2 Resource Acquisition & Allocation: get the resources you need and distribute them wisely.
   6.3 Tester Assessment & Improvement: assure that the testers are performing up to par.
   6.4 Pressure Management: be a buffer between testers, and the pressure to ship.
   6.5 Relationship Management: establish and monitor relationships to other roles on the project.
   6.6 Information Management: assure that information is gathered and distributed to the right people.
   6.7 Process Management: assure that the test process is sound, synchronized, and visible.
   6.8 Quality Management: work with development to maintain a reasonable standard of quality.

Testers who report directly to the development team, without an intervening test manager, are at a disadvantage, because these important leadership activities are shortchanged. A good test manager makes good testing possible by monitoring all the rest of the activities, solving problems with them, and speaking with one voice about them to management. It's hard to pull a test team up out of accidentally sufficient testing and toward a test process that's more professional. Few people are prepared for this challenge when it's first thrust upon them. I certainly wasn't prepared when I found myself in the job. The good news is that you can readily become better at it than most, just by striving to be good enough.

What are developers?

For the purposes of this article, I'm calling anyone who actually creates any aspect of the product a developer; whoever it is who creates quality and is the object of attention by those of us who assess quality.

What is their mission?

Naturally, developers want to ship a great product, or at least a good enough one. But rather than thinking in terms of minimizing the risk of failure, as testers do, developers are committed to creating the possibility of success. Their mission positions them at the vanguard of the project, while testing defends their flank.
There's a natural tension between these roles, because achieving one mission can potentially negate the other. Any possibility of success creates some risk of failure. The key is for each role to do its job while being mindful of the other's mission.

What is their focus?

Developers, by and large, are focused on the details of technology, rather than the more abstract concerns of process or quality dynamics. The reason for this is simple: you can't create software without skill in manipulating technology and tools, and such skill requires a lot of time and effort to maintain. To produce software of consistently high quality, a lot of other kinds of skill are required, but consistently high quality is more often a desire than a need. Most developers believe that their knowledge of technology is the best key to survival and success in their field, and they are right. Developers are usually not focused on testers and what they do, until the latter stages of the project when development work is more or less defined by the outcome of testing. Testers, on the other hand, are always focused on the work of developers. That mismatch is another source of tension.

What problems crop up between testers and developers?

The differing focus of either role can create a situation where each role looks amateurish to the other. Developers, who know that high quality becomes possible only by mastering the details of technology, worry that testers have too little technology skill. Testers, who know that high quality becomes achievable only by taking a broad point of view, worry that developers are too involved in bits and bytes to care about the users. Each role creates work for the other, and people performing either role often have little understanding of what the other guy needs.

How do you achieve a good relationship with them?

I've found that the one key to relating to developers is to refer to your common mission of shipping a great product. Express the attitude that testing is here to allow development to focus on possibilities without falling prey to risks. You're going to provide information to them about risks. If you can't agree on the risks, then perhaps you can agree to watch closely what happens when the product is released, and learn from that experience. Another key is to be clear about what each of you needs from the other. Don't assume that your needs should be obvious to them. Finally, master as much of the technology as you can.

Life as a New Test Manager

© 1999 Johanna Rothman

These notes represent the opinions of the BOF attendees. Where I couldn't resist, I added my comments.

We had about 50-60 people attending the BOF. Demographics were:

Currently a manager: most
1 year or less management experience: 10
1-5 years of experience: 12
> 5 years experience: a few

We discussed what we would talk about:

Subject and interest level:

* Power and influence in the organization: higher
* What do I do: moderate
* How much testing do I do?: lower
* Balance freedom vs. guidance and coaching of staff: higher
* Make people think it was their idea: moderate
* Tracking tasks and juggling: moderate
* Get technical people to test vs. develop, including keeping people in test and keeping people in the company: lower
* Recruiting: moderate
* Motivate people who need to do better: higher
* Performance assessment: moderate
* Justifying and determining size of testing effort: lower

Influence

* Interpersonal skills required!
* Negotiation
* Get a track record
o Start small
o Get data
* Organizational agreement
o What is test's role?
o Be part of the lifecycle
o But how to get management's attention?
+ Data
+ Terse text (JR note: I think this is where I said senior managers prefer to read bullets and graphs to prose)
* Need to understand the project cube of time-to-market, cost-to-market, functionality, defects (JR adds the people and the work environment)
* One must be flexible
* Do quick project retrospectives
* Disasters catch attention
o "Significant emotional event"
* Demonstrate influence on the bottom line
* Logic, Pilot, Challenge ("I dare you"), Shame and Guilt ("xxx does this")
* Continuous communication upwards
o Be in their face
o Continuous mindshare
o Before the test plan
* "Hiring us was your idea"
* "Why? What risks do you see?"
* Talk about risk of features and cost of testing
* We asked about attendees' peers:
o More experience than you (some, small number)
o Do they have an effect on your influence (mixed results)

We talked about influence issues for about 45 minutes, and started talking about trust. If you're not trusted in the organization, how can you get things done?
Trust

* Hard due to the nature of the job
o Find errors in development work
o See Brian Marick's paper, "Working Effectively With Developers," about one way to work with developers
* Establishing credibility vs. being some kind of expert
* Write good bug reports. Make them complete
* Defer to developer expertise (but ask questions)
* Remind development: "I'm on your side"
* Target major issues, not minor (in development experience)
* Know end user - specialty
* Don't feel moral superiority over development. Sometimes the problem is us.
* Help them specify
* Feed them
* Pair new tester with developer to help with developer testing and learn something about developer's code
* Your staff reviews you
o Skip-level reviews (your manager talks to your reports)
o Managers are judged by the people in the team

Freedom to do the job vs. coaching

* Don't wait too long to look
* Hiring people hurts overall performance
* Dealing with passive/aggressive
o Challenge them, make them step up, set expectations
o Listen to them
* Structure communications
o Give freedom within bounds
* Be clear about goals. All else is situational.
* Status report contents
o Planned work done
o Planned work not done
o Unplanned work done
o Planned work going forward
* Look at strength of whole department
o Buddy systems work
o Brian's pointer on the web to a live example of programming pairs: http://www.armaties.com/Practices/PracPairs.html (this link seems to be broken, 7/29/02 -- JR)
* Meet with your staff 5-10 minutes every morning, on their schedule
* 1/2 hour one-on-one weekly
o How much should you prepare in advance? (JR's opinion: Your staff should know what to expect, but leave time for them to bring up issues)
o Encourage your staff to know you
o They own their careers; you're there to mentor them and remove obstacles
* Weekly report (learn how to estimate)
o Objectives
o Planned date of completion vs. actuals. Learn from these as they become actuals.
* Sometimes it's the little things that count
o Get staplers
o Buy lunch
* Find opportunities for your staff to do what they don't know they'll enjoy

Dealing with Shortfalls (of resources)

* How do you quantify your work?
* Make visible work that will be done and won't be done
o Give a choice
* Convincing management there is a shortfall:
o Work out what last project's decisions cost (in engineer years converted to dollars). Now what do you want to do?
o What does customer support cost you? Where are the problems? Now what do you want to do?
* Recruit developers to expensively do what testers are cheaper at. Now what do you want to do?

When you have too many direct reports

* Break into teams of 4 or less (JR says 6 or less)
* Learn from general management literature
o Jon Hagar's list
+ Competitive Advantage, Michael Porter - pretty dry but a good approach to evaluating competition, markets, etc., in order to create a competitive advantage; I think the concepts can be applied to an organization as well as a company
+ Danger in the Comfort Zone, Judith Bardwick - changing demographics and attitudes that affect productivity
+ Lean Thinking, Womack and Jones - Getting big companies to the right size
+ Getting to Yes, Fisher and Ury - opening negotiation options rather than closing them down, applies to more than just contract negotiations
+ Crossing the Chasm, Moore - marketing and selling technology products to mainstream
+ Type Talk, Kroeger and Thuesen - understanding differences in people and how they can affect the work environment - applies to understanding self and how you work/are perceived by others as well as understanding those you work with/for/over
+ Managing for results, Drucker - good intro book to management issues (he has other books)
+ Passion for excellence, Peters and Austin - a little dated given how American industry has transformed itself, but still applicable
+ The New Rational Manager - Kepner and Tregoe - great book on risk management and problem solving
+ Good Authors:
# Jack Stack - teach everyone from the stockroom employee up how to read financials and to understand how what they did contributed to the bottom line.
# Deming - I think everyone should read his "rules" list
o Anything from Peter Drucker, especially The Effective Executive and Managing for Results
o Watts Humphrey, Managing Technical People
o Steve McConnell, Rapid Development
o Steve Maguire, Debugging the Development Process
o Ken Blanchard, The One Minute Manager

Taking over a group

* Sit back
* Spend a lot of time listening
* Remove obstacles
* Ask what can change, help it happen
* Give credit to people

Who's Your Successor?

* If you don't have someone to succeed you, you're stuck



Johanna Rothman observes and consults on managing high technology product development, looking for the leverage points that will make her clients more successful. You can reach her at jr@jrothman.com or by visiting www.jrothman.com.

Measuring Test Effectiveness

Testing Interview Questions and Jobs

Testing & The Role of a Test Lead / Manager

TestCrew: Bughunters’ Journal

StickyMinds.com : Article info : Improving the Test Process

"Excel-erating" Test Status Reporting

StickyMinds.com : Article info : "Excel-erating" Test Status Reporting

StickyMinds.com : Article info : Four Keys to Better Test Management

StickyMinds.com : Article info : Twelve Tips for Realistic Scheduling

StickyMinds.com : Article info : Categorizing Defects by Eliminating "Severity" and "Priority"

Razor's Blog : Retrieve list of IIS virtual directories using C#

Roadmap for Agile Testing 2004

PCQuest : Server Side : IIS 6’s New Metabase

Server Automation Tools - MetaBase 101. A simple tutorial on the IIS Metabase.

IIS Metabase and programmatic administration in C#

Macromedia Flash - Scripting with Flash 5: Calling methods from JavaScript

Macromedia Flash - Scripting with Flash 5: Flash Methods (testing Flash objects)

Extending Flash MX 2004 Series: An Introduction to the JSAPI

AdventNet QEngine - Product Comparison Document

AdventNet QEngine - WebTest Product Comparison

IBM Rational Tester Price

IBM Passport Advantage Express

web_testing_checklist [SoftwareTestingWiki]

role_of_testers_in_XP.pdf

RoundTables : Making the Transition to Management

StickyMinds.com : Better Software magazine : The Influential Test Manager

StickyMinds.com : Better Software magazine : Perspectives from a Test Manager

ASP.NET.4GuysFromRolla.com: HttpContext.Items - a Per-Request Cache Store

TechStuff: Unit Testing Stored Procedures in MS SQL Server

Unit testing with HttpContext

Unit Testing in .NET - NUnit - Automating Unit Tests - Testing a Database

Visual SourceSafe web interface

Component Workshop Ltd

Unit Testing, why test iteration

ComputerZen.com - Scott Hanselman - NUnit Unit Testing of ASP.NET Pages, Base Classes, Controls and other widgetry using Cassini (ASP.NET Web Matrix/Visual Studio Web Developer)

Internet Macros, Web Automation, Web Application Testing, Windows scripting and Windows Macro Automation utilities - automate your PC with Windows scripting for Win9x/NT/ME/2000/XP.

Ramadan - What is it?

Ramadan is one of the most important and holy months in the Islamic calendar. It is a time of fasting, prayer, and spiritual reflection fo...