Thursday, November 30, 2006

Cache Stored Procedure SqlParameter Objects

Cache Stored Procedure SqlParameter Objects

Applies to

  • ADO.NET 1.1

What to Do

Applications often must run the same SQL commands multiple times. Cache the stored procedure SqlParameter objects so that they can be reused later.

Why

Caching the SqlParameter objects avoids recreating them each time the stored procedure needs to be called, thus improving the performance of the application.

When

Follow this guideline whenever you have code that repeatedly calls the same stored procedure.

How

A good approach is to cache parameter arrays in a Hashtable object. Each parameter array contains the parameters that are required by a particular stored procedure that is used by a particular connection. The following code fragment shows this approach.

// Assumes: using System.Collections; using System.Data.SqlClient;
private static readonly Hashtable paramCache = Hashtable.Synchronized(new Hashtable());

public static void CacheParameterSet(string connectionString,
    string commandText,
    params SqlParameter[] commandParameters)
{
    if (connectionString == null || connectionString.Length == 0)
        throw new ArgumentNullException("connectionString");
    if (commandText == null || commandText.Length == 0)
        throw new ArgumentNullException("commandText");

    string hashKey = connectionString + ":" + commandText;
    paramCache[hashKey] = commandParameters;
}

The following code fragment shows the equivalent parameter retrieval function:

public static SqlParameter[] GetCachedParameterSet(string connectionString, string commandText)
{
    if (connectionString == null || connectionString.Length == 0)
        throw new ArgumentNullException("connectionString");
    if (commandText == null || commandText.Length == 0)
        throw new ArgumentNullException("commandText");

    string hashKey = connectionString + ":" + commandText;

    SqlParameter[] cachedParameters = paramCache[hashKey] as SqlParameter[];
    if (cachedParameters == null)
    {
        return null;
    }

    return CloneParameters(cachedParameters);
}

When parameters are retrieved from the cache, a cloned copy is created so that the client application can change parameter values, without affecting the cached parameters. The CloneParameters method is shown in the following code fragment.

private static SqlParameter[] CloneParameters(SqlParameter[] originalParameters)
{
    SqlParameter[] clonedParameters = new SqlParameter[originalParameters.Length];

    for (int i = 0, j = originalParameters.Length; i < j; i++)
    {
        clonedParameters[i] =
            (SqlParameter)((ICloneable)originalParameters[i]).Clone();
    }
    return clonedParameters;
}
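A quick usage sketch of the methods above. This is hypothetical: the class name SqlHelperParameterCache, the stored procedure name, and the parameter name are invented for illustration.

```csharp
// Try the cache first; on a miss, build the parameter set once, cache it,
// then fetch a clone so the cached originals are never mutated.
SqlParameter[] parameters = SqlHelperParameterCache.GetCachedParameterSet(
    connectionString, "GetOrdersByCustomer");

if (parameters == null)
{
    SqlHelperParameterCache.CacheParameterSet(
        connectionString, "GetOrdersByCustomer",
        new SqlParameter[] { new SqlParameter("@CustomerID", SqlDbType.Int) });

    parameters = SqlHelperParameterCache.GetCachedParameterSet(
        connectionString, "GetOrdersByCustomer");
}

// Safe to set values: GetCachedParameterSet always returns a cloned copy.
parameters[0].Value = customerId;
```

Only the parameter construction is saved on subsequent calls; each caller still gets its own clone to fill with values.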

Right out of GuidanceExplorer.

Saturday, November 04, 2006

.NET Memory usage - A restaurant analogy

Great explanation about .NET memory usage by Tess. Worth reading!

Friday, October 27, 2006

Should I use an abstract class or an interface? at C# Online.NET (CSharp-Online.NET)

Visual C# Best Practices

* Use abstract classes and interfaces in combination to optimize your design trade-offs.


Use an abstract class

* When creating a class library that will be widely distributed or reused, especially by clients, use an abstract class in preference to an interface because it simplifies versioning. This is the practice used by the Microsoft team that developed the Base Class Library. (COM was designed around interfaces.)

* Use an abstract class to define a common base class for a family of types.

* Use an abstract class to provide default behavior.

* Subclass only a base class in a hierarchy to which the class logically belongs.


Use an interface

* When creating a standalone project that can be changed at will, use an interface in preference to an abstract class because it offers more design flexibility.

* Use interfaces to introduce polymorphic behavior without subclassing and to model multiple inheritance—allowing a specific type to support numerous behaviors.

* Use an interface to design a polymorphic hierarchy for value types.

* Use an interface when an immutable contract is really intended.

* A well-designed interface does only one thing—not a potpourri of functionality.
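A minimal sketch of the "combination" advice above; the type names IShape, Shape, and Circle are invented for illustration. The interface carries the immutable contract, while the abstract class supplies the common base and default behavior.

```csharp
// The interface is the immutable contract; it does one thing.
public interface IShape
{
    double Area();
}

// The abstract class defines the common base for the family
// and provides default behavior.
public abstract class Shape : IShape
{
    public abstract double Area();

    // Default behavior inherited by all subclasses.
    public virtual string Describe()
    {
        return "Shape with area " + Area();
    }
}

public class Circle : Shape
{
    private readonly double radius;
    public Circle(double radius) { this.radius = radius; }
    public override double Area() { return Math.PI * radius * radius; }
}
```

Clients that only need polymorphism can depend on IShape, while subclasses get Describe() for free; new virtual members can later be added to Shape without breaking implementers (the versioning benefit noted above).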

Know Thy Code: Simplify Data Layer Unit Testing using Enterprise Services -- MSDN Magazine, June 2005

Know Thy Code: Simplify Data Layer Unit Testing using Enterprise Services -- MSDN Magazine, June 2005

MbUnit vs. NUnit Vs. Team System Unit Testing - Choosing a unit test framework

ISerializable - Roy Osherove's Blog : MbUnit vs. NUnit Vs. Team System Unit Testing - Choosing a unit test framework

Wednesday, September 27, 2006

Scripting with Windows PowerShell

How lame. Just about all links to PowerShell's new version RC2 are dead links. When will MS fix them!?!?! 

Link to Scripting with Windows PowerShell

LearnSqlServer.com - SQL Server Training and Tutorials ON VIDEO, SQL Server 2005 Video Training, Free SQL Server 2005 Videos

 

Link to LearnSqlServer.com - SQL Server Training and Tutorials ON VIDEO, SQL Server 2005 Video Training, Free SQL Server 2005 Videos

InfoQ: Book Excerpt: Implementing Lean Software Development: From Concept to Cash

 

The 7 Principles of Lean Software Development:
  1. Eliminate Waste
    The three biggest wastes in software development are: Extra Features, Churn, Crossing Organizational Boundaries.
  2. Build Quality In
    If you routinely find defects in your verification process, your process is defective.
  3. Create Knowledge
    Planning is useful. Learning is essential. Predictable performance is driven by feedback.
  4. Defer Commitment
    Abolish the idea that it is a good idea to start development with a complete specification.
  5. Deliver Fast
    Lists and queues are buffers between organizations that simply slow things down.
  6. Respect People
    Engaged, thinking people provide the most sustainable competitive advantage.
  7. Optimize the Whole
    Brilliant products emerge from a unique combination of opportunity and technology.

Source: InfoQ: Book Excerpt: Implementing Lean Software Development: From Concept to Cash

InfoQ: Testing Ajax Applications with Selenium

 

Testing Ajax Applications with Selenium

Source: InfoQ: Testing Ajax Applications with Selenium

InfoQ: Why Would a .NET Programmer Learn Ruby on Rails?

Good article, worth reading! 

Link to InfoQ: Why Would a .NET Programmer Learn Ruby on Rails?

Wednesday, September 13, 2006

Remote Debugging with WinDbg

Taken From:

Application debugging in a production environment
Author : Hans De Smaele
Published : September 12, 2004

 
Personally, I like WinDbg the most for remote debugging. Not only is it the most powerful debugger, it also offers the most ways to connect computers to each other. In some scenarios, you can connect up to five computers to debug a station!

Note: Check the WinDbg documentation for more information about how to set up remote debugging.

An easy way to debug user applications (not kernel problems) with WinDbg is by installing WinDbg on the debugging station and dbgsrv (the process server) on the remote computer.

Run the following command on the remote computer:

dbgsrv -t tcp:port=9999

And on the debugging computer, where WinDbg is located, run:

windbg -premote tcp:server=<remote station name>,port=9999 -p <PID>

And there you go!

Thursday, May 25, 2006

code4ward : Royal TS 1.3.1

code4ward : Royal TS 1.3.1

Coding Horror: VNC vs. Remote Desktop

Coding Horror: VNC vs. Remote Desktop

List of the keyboard shortcuts that are available in Windows XP

List of the keyboard shortcuts that are available in Windows XP

Scott Forsyth's WebLog : Managing Terminal Services Sessions Remotely

Scott Forsyth's WebLog : Managing Terminal Services Sessions Remotely

K. Scott Allen : Remote Desktop Hacks

K. Scott Allen : Remote Desktop Hacks: "re: Remote Desktop Hacks
If you download the server 2003 admin pack (free), you can create an MMC snapin for remote desktops. In a multi server environment it is invaluable. Basically looks like windows explorer, your list or remote desktops on the left, and you can just toggle between them.

Also it is configurable per connection whether you want to use a console connection or not.

http://www.microsoft.com/downloads/details.aspx?FamilyID=C16AE515-C8F4-47EF-A1E4-A8DCBACFF8E3&displaylang=en"

Remote Desktop Hacks

K. Scott Allen : Remote Desktop Hacks

Remote Networking Development

Remote Networking Development

The Five Essential Phone Screen Questions

The Five Essential Phone Screen Questions

Ayende @ Blog - SQL Challange, Date Merging

Ayende @ Blog - SQL Challange, Date Merging

Peter Bromberg's UnBlog: Is your development process "Broken"?

Peter Bromberg's UnBlog: Is your development process "Broken"?

Session State Uses a Reader-Writer Lock

K. Scott Allen : Session State Uses a Reader-Writer Lock

Example VSS Framework for Source Code Management

Example VSS Framework for Source Code Management

Technology: Integrating Selenium With Build Script

Technology: Integrating Selenium With Build Script

SEFFS: To Flash Or Not To Flash

Claus Wahlers ★ w3blog ★ SEFFS: To Flash Or Not To Flash

How and when to use sIFR »UsableType » UsableType: Web Typography Guide

How and when to use sIFR »UsableType » UsableType: Web Typography Guide

Wednesday, May 10, 2006

Search Engine Optimization Tips

Search Engine Optimization Tips

TestNG - testing framework

TestNG is a testing framework inspired by JUnit and NUnit, but it introduces new functionality that makes it more powerful and easier to use, such as:

* JDK 5 Annotations (JDK 1.4 is also supported with JavaDoc annotations).
* Flexible test configuration.
* Support for data-driven testing (with @DataProvider).
* Support for parameters.
* Allows distribution of tests on slave machines.
* Powerful execution model (no more TestSuite).
* Supported by a variety of tools and plug-ins (Eclipse, IDEA, Maven, etc...).
* Embeds BeanShell for further flexibility.
* Default JDK functions for runtime and logging (no dependencies).
* Dependent methods for application server testing.

Sahi - Web Automation Testing Tool

Sahi - Scripting

Saturday, May 06, 2006

How to save Google Video as AVI - EXTREME Overclocking Forums

How to save Google Video as AVI - EXTREME Overclocking Forums: "How to save Google Video as AVI

This article is the result of me being asked countless times how to change the .mp4 from http://video.google.com to .AVI, when you can simply download them as .AVI to begin with.


Step 1 Navigate to the Google Video that you want to download as if you were simply watching it normally.

Step 2 Click the Download button on the right side of the screen. It will ask if you want to download the Google Video Player. Click Cancel/No!.

Step 3 Click Manually download the video on the right side of the screen, under the Download button. Save this GVP file and open it with Notepad.

Step 4 If it's not already on, turn on Word Wrap in Notepad by clicking the Format menu, then Word Wrap.

Step 5 In Notepad, select and copy all the text in between url: and docid:

Step 6 Paste this address into your internet browser or download manager, and let the download begin!

You can delete the GVP file you downloaded; it's not needed anymore.

That's all folks! AVI straight from Google Video.

Thanks to Fatsobob!

EDIT: Please remember, some (very rarely) Google videos are set to no download mode, which means there's no way to download them as far as I can tell. You can tell this type because it simply won't have a 'Download' button on the right. If you find a way to save them, let me know.
"

Improve Time Management: Overcome Procrastination

Google Operating System: Improve Time Management: Overcome Procrastination

Google Operating System

Google Operating System

Google techtalks - Google Video

Google techtalks - Google Video

Thursday, May 04, 2006

Testing & The Role of a Test Lead / Manager

Testing & The Role of a Test Lead / Manager

The Bug Life Cycle

The Bug Life Cycle

A 3-Step Success Strategy For Leading Change

A 3-Step Success Strategy For Leading Change
By Barbara Brown, PhD

As a leader, you know that change is an ongoing process, not a one-time event. The trick is to implement change in a way that does not destroy your organization, disrupt your service, and demoralize your staff. These three steps of the change process will help you do that.

Step 1: Analyze The Change

Before you begin any change process, think about what you want to continue doing, what you want to stop doing, and what you want to start doing. Consider the following strategies:

1. Givens: These are aspects of the change you cannot control. This change must happen, regardless of what you want, what you say, or what you do. You cannot control what will happen, when it will happen, how it will happen, where it will happen, or whom it will happen to.

2. Negotiables: These are aspects of the change you can influence. This change may or may not be necessary. It could be modified or adjusted in some way. You may be able to control what could happen, when it will happen, how it will happen, where it will happen, or whom it will happen to.

3. Controllables: These are aspects of the change you can fully control. You have complete power in this instance. You can control what will happen, when it will happen, where it will happen, or whom it will happen to.


Step 2: Prepare To Implement The Change

You increase commitment and minimize anger when you make your staff partners rather than bystanders in the change process. You also have to ponder the many “what if’s” of the change process. Use these strategies:

1. Notify your staff. Let everyone know ahead of time what will happen.

2. Involve your staff. Discuss the different aspects of the change. Ask for and use their suggestions.

3. Provide appropriate and timely training. Find out what your staff needs to know “before” the change occurs. Make sure they know what they need to know.

4. Make contingency plans. Consider what might go wrong with staff performance or other critical issues. Think about the ripple effects for each possibility. Then identify ways to minimize the negative impact of those occurrences. Also consider what might go better than expected. If something happens ahead of schedule, how might that impact everything else you are trying to do?


Step 3: Implement The Change

If you want a smooth transition, you have to create an environment where your staff will see the change for its positives rather than for its negatives. Prepare to help them transition through the following phases: denial, resistance, and commitment. Of course, the goal is commitment. Use these strategies to reach that goal:

1. Give your staff more feedback than usual during the change process.

2. Allow for resistance. Help your staff let go of the “old way of doing things.”

3. Talk with your staff regularly to monitor the change process.

4. Create new and different communication channels so your staff can give you feedback about the change.

5. Reward and acknowledge the staff members who work hard to make the change work.

6. Establish symbols of the change such as new headings on newsletters, new logos, special slogans, or recognition events.

9 Smart Things Leaders Do To Keep A “Priority” Focus

9 Smart Things Leaders Do To Keep A “Priority” Focus: "Phoenix Checklist Questions-The Problem

1. Why is it necessary to solve the problem?
2. What benefits will you receive by solving the problem?
3. What is the unknown?
4. What is it you don’t yet understand?
5. What is the information you have?
6. What isn’t the problem?
7. Is the information sufficient? Or is it insufficient? Or redundant? Or contradictory?
8. Should you draw a diagram of the problem? A figure?
9. Where are the boundaries of the problem?
10. Can you separate the various parts of the problem? Can you write them down? What are the relationships of the parts of the problem?
11. What are the constants (things that can’t be changed) of the problem?
12. Have you seen the problem before?
13. Have you seen this problem in a slightly different form?
14. Do you know a related problem?
15. Can you think of a familiar problem having the same or a similar unknown?
16. Suppose you find a problem related to yours that has already been solved. Can you use it? Can you use its method?


Phoenix Checklist Questions-The Plan

1. Can you solve the whole problem? Part of the problem?
2. What would you like the resolution to be? Can you picture it?
3. How much of the unknown can you determine?
4. Can you derive something useful from the information you have?
5. Have you used all the information?
6. Have you taken into account all essential notions in the problem?
7. Can you separate the steps in the problem-solving process? Can you determine the correctness of each step?
8. What creative thinking techniques can you use to generate ideas? How many different techniques?
9. Can you see the result? How many different kinds of results can you see?
10. How many different ways have you tried to solve the problem?
11. What have others done?
12. Can you intuitively create a solution? Can you check the results?
13. What should be done? How should it be done?
14. Where should it be done?
"

Monday, March 13, 2006

Browser Agent Stats

Agent Stats

Market share for browsers, operating systems and search engines

Market share for browsers, operating systems and search engines

Browser News: Statistics - find the browser stats you want to know

Browser News: Statistics - find the browser stats you want to know

Current TV // Google Current // Darth Tater

Current TV // Google Current // Darth Tater

Digital Web Magazine - Designing for the Web

Digital Web Magazine - Designing for the Web

Digital Web Magazine - SEO and Your Web Site

Digital Web Magazine - SEO and Your Web Site

Digital Web Magazine

Digital Web Magazine

Browser Statistics

Browser Statistics

CSV Converter

CSV Converter

Converting documents to mediawiki markup - OpenWetWare

Converting documents to mediawiki markup - OpenWetWare

Web handbook - Building in universal accessibility + checklist

Web handbook - Building in universal accessibility + checklist

OpenCollective -- The Requirements Management Wiki - The Code Project - ASP.NET

OpenCollective -- The Requirements Management Wiki - The Code Project - ASP.NET

Guest PC - Virtual x86 Computer for Your Mac

Guest PC - Virtual x86 Computer for Your Mac

Article info : Four Keys to Better Test Management

Article info : Four Keys to Better Test Management

Better Software magazine : The Quality Barometer

Better Software magazine : The Quality Barometer

Wednesday, March 08, 2006

The Software Quality Page - A resource for information on software quality, testing, test planning, inspections, metrics, and process improvement.

The Software Quality Page - A resource for information on software quality, testing, test planning, inspections, metrics, and process improvement.

Automation Junkies / Resources / Software Test Automation Articles

Automation Junkies / Resources / Software Test Automation Articles

Seven Steps to Test Automation Success

Seven Steps to Test Automation Success

Lessons Learned in Test Management

Bret Pettichord's Publications

Testers Should Embrace Agile Programming

Bret Pettichord's Publications: "Testers Should Embrace Agile Programming"

Where Are the Testers in XP?

Column info : Where Are the Testers in XP?

"Design for Testability, Agile Testing, and Testing Processes"

Bret Pettichord's Publications: "Design for Testability, Agile Testing, and Testing Processes"

AYE Conference - Articles about software and IT Development

AYE Conference - Articles about software and IT Development

The Art of Interviewing and Selecting the Best Testers (PDF)

The Art of Interviewing and Selecting the Best Testers (PDF)

Do We Need Specialized Test Automation Tools?

Quality Tree Software, Inc. - Do We Need Specialized Test Automation Tools?

AT&T: DSL Internet Service, Standard plan information, features, pricing, and more

AT&T: DSL Internet Service, Standard plan information, features, pricing, and more

Tuesday, February 28, 2006

download the TPI Evaluation toolkit

The following toolkits can be downloaded for free:

· Interim Maturity Evaluation based on Capability Maturity Model V 1.1

· Interim Maturity Evaluation based on Capability Maturity Model Integrated for Systems Engineering and Software Engineering V1.1

· Interim Maturity Evaluation based on Capability Maturity Model Integrated for Systems Engineering, Software Engineering, Integrated Product and Process Development, and Supplier Sourcing, V1.1

· Test Process Improvement Evaluation based on the Test Process Improvement Model (TPI)® IQUIP, The Netherlands

· Interim Maturity Evaluation based on People Capability Maturity Model V 2.0

· Size estimation using paired comparisons: a tool that helps to make better software size estimates in Lines Of Code.

Bugzilla and ActiveDirectory

Test process improvement TPI

The Test Process Improvement (TPI) model has been developed based on the practi- ... 6.1 General description of the TPI model. A test process improvement ...

Richard Murillo

Richard Murillo

Workflow Download.com

Workflow Download.com - Time Sheets Software Download Directory

Sunday, February 26, 2006

Testing Maturity Model - Google Search

The TPI model supports the improvement of test processes, and offers insight into the "maturity" of the test processes within your organization. ...

Testing Maturity Model - Google Search

A Maturity Model for Automated Software Testing (MDDI archive, Dec 94)

A Maturity Model for Automated Software Testing (MDDI archive, Dec 94)

Developing a Testing Maturity Model, Part II

Developing a Testing Maturity Model, Part II

Developing a Testing Maturity Model: Part I - Aug 1996

Developing a Testing Maturity Model: Part I - Aug 1996

Beizer's Phases in a Tester's Mental Life

Beizer's Phases in a Tester's Mental Life: "Beizer's Phases in a Tester's Mental Life
Phase 0 = There's no difference between testing and debugging. Other than in support of debugging, testing has no purpose.
Phase 1 = The purpose of testing is to show that the software works.
Phase 2 = The purpose of testing is to show that the software doesn't work.
Phase 3 = The purpose of testing is not to prove anything, but to reduce the perceived risk of not working to an acceptable value.
Phase 4 = Testing is not an act. It is a mental discipline that results in low-risk software without much testing effort."

Test Development FAQ

Test Development FAQ

List of Guidelines and Good Practices

List of Guidelines and Good Practices:

* Guideline 1: Plan & commit early.
o Good Practice 1: Decide as soon as possible — will the Working Group build test materials or acquire them?
o Good Practice 2: Think about and enumerate the quality-related deliverables that might help the Working Group through the Recommendation track.
o Good Practice 3: Synchronize quality-related deliverables and their development milestones with specification milestones.
o Good Practice 4: Consider whether the Working Group should bind any quality criteria to Rec-track advancement.
o Good Practice 5: Put some thought into how to staff the Working Group's test and other quality assurance plans.
* Guideline 2: Document QA processes.
o Good Practice 6: Put all of the Working Group's important test and other quality-related information in one place in a QA Process Document.
o Good Practice 7: Identify a Working Group point-of-contact for test materials or other quality-related business.
o Good Practice 8: Specify an archived email list to use for quality-related communications.
o Good Practice 9: Identify Web page(s) for test suites, announcements, and other quality-related topics.
* Guideline 3: Resolve legal & license issues.
o Good Practice 10: As early as possible, get agreement about acceptable license terms for submission of test materials.
o Good Practice 11: As soon as the nature of the Working Group's test materials becomes clear, get agreement about license terms for their publication.
o Good Practice 12: Decide policy about having brands, logos, or conformance icons associated with the Working Group's test materials.
* Guideline 4: Consider acquiring test materials.
o Good Practice 13: Do a quality assessment of proposed test materials before going any further.

The FOCUS-PDCA Methodology

An extension of the Plan, Do, Check, Act (PDCA) cycle sometimes called the Deming or Shewhart cycle. From Hospital Corporation of America.

Plan-Do-Check-Act A Problem Solving Process

iSixSigma Featured Link

QA Focus Papers

QA Focus Papers

Saturday, February 25, 2006

The Immaturity of CMM

The Immaturity of CMM

by James Bach
james@satisfice.com

(Formerly of Borland International)

This article was originally published in the September ‘94 issue of American Programmer.

The Software Engineering Institute's (SEI) Capability Maturity Model (CMM) gets a lot of publicity. Given that the institute is funded by the US Department of Defense to the tune of tens of millions of dollars each year [1], this should come as no surprise— the folks at the SEI are the official process mavens of the military, and have the resources to spread the word about what they do. But, given also that the CMM is a broad, and increasingly deep, set of assertions as to what constitutes good software development practice, it's reasonable to ask where those assertions come from, and whether they are in fact complete and correct.

My thesis, in this essay, is that the CMM is a particular mythology of software process evolution that cannot legitimately claim to be a natural or essential representation of software processes.

The CMM is at best a consensus among a particular group of software engineering theorists and practitioners concerning a collection of effective practices grouped according to a simple model of organizational evolution. As such, it is potentially valuable for those companies that completely lack software savvy, or for those who have a lot of it and thus can avoid its pitfalls.

At worst, the CMM is a whitewash that obscures the true dynamics of software engineering and suppresses alternative models. If an organization follows it for its own sake, rather than simply as a requirement mandated by a particular government contract, it may very well lead to the collapse of that company's competitive potential. For these reasons, the CMM is unpopular among many of the highly competitive and innovative companies producing commercial shrink-wrap software.

A short description of the CMM

The CMM [7] was conceived by Watts Humphrey, who based it on the earlier work of Phil Crosby. Active development of the model by the SEI began in 1986.

It consists of a group of "key practices", neither new nor unique to CMM, which are divided into five levels representing the stages that organizations should go through on the way to becoming "mature". The SEI has defined a rigorous process assessment method to appraise how well an organization satisfies the goals associated with each level. The assessment is supposed to be led by an authorized lead assessor.

The maturity levels are:

1. Initial (chaotic, ad hoc, heroic)

2. Repeatable (project management, process discipline)

3. Defined (institutionalized)

4. Managed (quantified)

5. Optimizing (process improvement)

One way companies are supposed to use the model is first to assess their maturity level and then form a specific plan to get to the next level. Skipping levels is not allowed.

The CMM was originally meant as a tool to evaluate the ability of government contractors to perform a contracted software project. It may be suited for that purpose; I don't know. My concern is that it is also touted as a general model for software process improvement. In that application, the CMM has serious weaknesses.

Shrink-wrap companies, which have also been called commercial off-the-shelf firms or software package firms, include Borland, Claris, Apple, Symantec, Microsoft, and Lotus, among others. Many such companies rarely if ever manage their requirements documents as formally as the CMM describes. This is a requirement to achieve level 2, and so all of these companies would probably fall into level 1 of the model.



Criticism of the CMM

A comprehensive survey of criticism of the CMM is outside the scope of this article. However, Capers Jones and Gerald Weinberg are two noteworthy critics.

In his book Assessment & Control of Software Risks [6], Jones discusses his own model, Software Productivity Research (SPR), which was developed independently from CMM at around the same time and competes with it today. Jones devotes a chapter to outlining the weaknesses of the CMM. SPR accounts for many factors that the CMM currently ignores, such as those contributing to the productivity of individual engineers.

In the two volumes of his Quality Software Management series [12,13], Weinberg takes issue with the very concept of maturity as applied to software processes, and instead suggests a paradigm based on patterns of behavior. Weinberg models software processes as interactions between humans, rather than between formal constructs. His approach suggests an evolution of "problem-solving leadership" rather than canned processes.

General problems with CMM

I don't have the space to expand fully on all the problems I see in the CMM. Here are the biggest ones from my point of view as a process specialist in the shrink-wrap world:

· The CMM has no formal theoretical basis. It's based on the experience of "very knowledgeable people". Hence, the de facto underlying theory seems to be that experts know what they're doing. According to such a principle, any other model based on experiences of other knowledgeable people has equal veracity.

· The CMM has only vague empirical support. That is, the empirical support for CMM could also be construed to support other models. The model is based mainly on experience of large government contractors, and Watts Humphrey's own experience in the mainframe world. It does not account for the success of shrink-wrap companies, and levels 1, 4, and 5 are not well represented in the data: the first because it is misrepresented, the latter two because there are so few organizations at those levels. The SEI's Mark Paulk can cite numerous experience reports supporting CMM, and he tells me that a formal validation study is underway. That's all well and good, but the anecdotal reports I've seen and heard regarding success using the CMM could be interpreted as evidence for the success of people working together to achieve anything. In other words, without a comparison of alternative process models under controlled conditions, the empirical case can never be closed. On the contrary, the case is kept wide open by ongoing counterexamples in the form of successful level 1 organizations, and by the curious lack of data regarding failures of the CMM (which may be due to natural reluctance on the part of companies to dwell on their mistakes, or of the SEI to record them).

· The CMM reveres process, but ignores people. This is readily apparent to anyone who is familiar with the work of Gerald Weinberg, for whom the problems of human interaction define engineering. By contrast, both Humphrey and CMM mention people in passing [5], but both also decry them as unreliable and assume that defined processes can somehow render individual excellence less important. The idea that process makes up for mediocrity is a pillar of the CMM, wherein humans are apparently subordinated to defined processes. But, where is the justification for this? To render excellence less important the problem solving tasks would somehow have to be embodied in the process itself. I've never seen such a process, but if one exists, it would have to be quite complex. Imagine a process definition for playing a repeatably good chess game. Such a process exists, but is useful only to computers; a process useful to humans has neither been documented nor taught as a series of unambiguous steps. Aren't software problems at least as complex as chess problems?

· The CMM reveres institutionalization of process for its own sake. Since the CMM is principally concerned with an organization's ability to commit, such a bias is understandable. But, an organization's ability to commit is merely an expression of a project team's ability to execute. Even if necessary processes are not institutionalized formally, they may very well be in place, informally, by virtue of the skill of the team members. Institutionalization guarantees nothing, and efforts to institutionalize often lead to a bifurcation between an oversimplified public process and a rich private process that must be practiced undercover. Even if institutionalization is useful, why not instead institutionalize a system for identifying and keeping key contributors in the organization, and leave processes up to them?

· The CMM contains very little information on process dynamics. This makes it confusing to discuss the relationship between practices and levels with a CMM proponent, because of all the hidden assumptions. For instance, why isn't training at level 1 instead? Training is especially important at level 1, where it may take the form of mentoring or of generic training in any of the skills of software engineering. The answer seems to be that nothing is placed at level 1, because level 1 is defined merely as not being at level 2. The hidden assumption here is that who we are, what problems we face, and what we're already doing doesn't matter: just get to level 2. In other words, the CMM doesn't perceive or adapt to the conditions of the client organization. Therefore training, or any other informal practice at level 1, no matter how effective it is, could be squashed accidentally by a blind and static CMM. Another example: why is defect prevention a level 5 practice? We use project postmortems at Borland to analyze and improve our processes -- isn't that a form of defect prevention? There are many such examples I could cite, based on a reading of the CMM 1.1 document [7] (although I did not review the voluminous Key Practices document) and the appendix of Humphrey's Managing the Software Process [5]. Basically, most and perhaps all of the key practices could be performed usefully at level 1, depending on the particular dynamics of the particular organization. Instead of actually modeling those process dynamics, the way Weinberg does in his work, the CMM merely stratifies them.

· The CMM encourages displacement of goals from the true mission of improving process to the artificial mission of achieving a higher maturity level. I call this "level envy", and it generally has the effect of blinding an organization to the most effective use of its resources. The SEI itself recognizes this as a problem and has taken some steps to correct it. The problem is built into the very structure of the model, however, and will be very hard to exorcise.

Feet of clay: The CMM's fundamental misunderstanding of level 1 organizations

The world of technology thrives best when individuals are left alone to be different, creative, and disobedient. -- Don Valentine, Silicon Valley Venture Capitalist [8]

Apart from the concerns mentioned above, the most powerful argument against the CMM as an effective prescription for software processes is the existence of the many successful companies that, according to the CMM, should not exist. This point is most easily made against the backdrop of Silicon Valley.

Tom Peters's Thriving on Chaos [9] amounts to a manifesto for Silicon Valley. It places innovation, non-linearity, and ongoing revolution at the center of its world view. Here in the Valley, innovation reigns supreme, and it is from the vantage point of the innovator that the CMM seems most lost. Personal experience at Apple and Borland, and contact with many others in the decade I've spent here, support this view.

Proponents of the CMM commonly mistake its critics as being anti-process, and some of us are. But a lot of us, including me, are process specialists. We believe in the kinds of processes that support innovation. Our emphasis is on systematic problem-solving leadership to enable innovation, rather than mere process control to enable cookie-cutter solutions.

Innovation per se does not appear in the CMM at all, and it is only suggested by level 5. This is shocking, in that the most innovative firms in the software industry (e.g., General Magic, a pioneer in personal digital communication technology) operate at level 1, according to the model. This includes Microsoft, too, and certainly Borland [2]. Yet, in terms of the CMM, these companies are considered no different from any failed startup or paralyzed steel company. By contrast, companies like IBM, which by all accounts has made a real mess of the Federal Aviation Administration's Advanced Automation Project, score high in terms of maturity (according to a member of a government audit team with whom I spoke).

Now, the SEI argues that innovation is outside of its scope, and that the CMM merely establishes a framework within which innovation may more freely occur. According to the literature of innovation, however, nothing could be further from the truth. Preoccupied with predictability, the CMM is profoundly ignorant of the dynamics of innovation.

Such dynamics are documented in Thriving on Chaos, Reengineering the Corporation [4], and The Fifth Discipline [10], three well known books on business innovation. Where innovators advise companies to get flexible, the CMM advises them to get predictable. Where the innovators suggest pushing authority down in the organization, the CMM pushes it upward. Where the innovators recommend constant constructive innovation, the CMM mistakes it for chaos at level 1. Where the innovators depend on a trail of learning experiences, the CMM depends on a trail of paper.

Nowhere is the schism between these opposing world-views more apparent than on the matter of heroism. The SEI regards heroism as an unsustainable sacrifice on the part of particular individuals who have special gifts. It considers heroism the sole reason that level 1 companies succeed, when they succeed at all.

The heroism more commonly practiced in successful level 1 companies is something much less mystical. Our heroism means taking initiative to solve ambiguous problems. It does not mean burning people up and tossing them out, as the SEI claims. Heroism is a definable and teachable set of behaviors that enhance and honor creativity (as a unit of United Technologies Microelectronics Center has shown [3]). It means communication and mutual respect. It means the selective deployment of processes, not according to management mandate, but according to the skills of the team.

Personal mastery is at the center of heroism, yet it too has no place in the CMM, except through the institution of a formal training program. Peter Senge [10] has this to say about mastery:

"There are obvious reasons why companies resist encouraging personal mastery. It is 'soft', based in part on unquantifiable concepts such as intuition and personal vision. No one will ever be able to measure to three decimal places how much personal mastery contributes to productivity and the bottom line. In a materialistic culture such as ours, it is difficult even to discuss some of the premises of personal mastery. 'Why do people even need to talk about this stuff?' someone may ask. 'Isn't it obvious? Don't we already know it?'"

This is, I believe, the heart of the problem, and the reason why CMM is dangerous to any company founded upon innovation. Because the CMM is distrustful of personal contributions, ignorant of the conditions needed to nurture non-linear ideas, and content to bury them beneath a constraining superstructure, achieving level 2 on the CMM scale may very well stamp out the only flame that lit the company to begin with.

I don't doubt that such companies become more predictable, in the way that life becomes predictable if we resolve never to leave our beds. I do doubt that such companies can succeed for long in a dynamic world if they work in their pajamas.

An alternative to CMM

If not the maturity model, then by what framework can we guide genuine process improvement?

Alternative frameworks can be found in generic form in Thriving on Chaos, which contains 45 "prescriptions", or The Fifth Discipline, which presents--not surprisingly--five disciplines. The prescriptions of Thriving on Chaos are embodied in an organizational tool called The Excellence Audit, and The Fifth Discipline Fieldbook [11], which provides additional guidance in creating learning organizations, is now available.

An advantage of these models is that they provide direction, without mandating a particular shape to the organization. They actually provide guidance in creating organizational change.

Specific to software engineering, I'm working on a process model at Borland that consists of a seven-dimensional framework for analyzing problems and identifying necessary processes. These dimensions are: business factors, market factors, project deliverables, four primary processes (commitment, planning, implementation, convergence), teams, project infrastructure, and milestones. The framework connects to a set of scalable "process cycles". The process cycles are repeatable, step-by-step recipes for performing certain common tasks.

The framework is essentially a situational repository of heuristics for conducting successful projects. It is meant to be a quick reference to aid experienced practitioners in deciding the best course of action.

The key to this model is that the process cycles are subordinated to the heuristic framework. The whole thing is an aid to judgment, not a prescription for institutional formalisms. The structure of the framework, as a set of two-dimensional grids, assists in process tailoring and asking "what if...?"

In terms of this model, maturity means recognizing problems (through the analysis of experience and use of metrics) and solving them (through selective definition and deployment of formal and informal processes), and that means developing judgment and cooperation within teams. Unlike the CMM, there is no a priori declaration either of the problems, or the solutions. That determination remains firmly in the hands of the team.

The disadvantage of this alternative model is that it's more complex, and therefore less marketable. There are no easy answers, and our progress cannot be plotted on the fingers of one hand. But we must resist the temptation to turn away from the unmeasurable and sometimes ineffable reality of software innovation.

After all, that would be immature.



Postscript 02/99

In the five years since I wrote this article, neither the CMM situation, nor my assessment of it, has changed much. The defense industry continues to support the CMM. Some commercial IT organizations follow it, many others don’t. Software companies pursuing the great technological goldrush of our time, the Internet, are ignoring it in droves. Studies alleging that the CMM is valuable don’t consider alternatives, and leave out critical data that would allow a full analysis of what’s going on in companies that claim to have moved up in CMM levels and to have benefited for that reason.

One thing about my opinion has shifted. I’ve become more comfortable with the distinction between the CMM philosophy, and the CMM issue list. As a list of issues worth addressing in the course of software process improvement, the CMM is useful and benign. I would argue that it’s incomplete and confusing in places, but that’s no big deal. The problem begins when the CMM is adopted as a philosophy for good software engineering.

Still, it has become a lot clearer to me why the CMM philosophy is so much more popular than it deserves to be. It gives hope, and an illusion of control, to management. Faced with the depressing reality that software development success is contingent upon so many subtle and dynamic factors and judgments, the CMM provides a step-by-step plan to do something unsubtle and create something solid. The sad part is that this step-by-step plan usually becomes a substitute for genuine education in engineering management, and genuine process improvement.

Over the last few years, I’ve been through Jerry Weinberg’s classes on management and change artistry: Problem Solving Leadership, and the Change Shop. I’ve become a part of his Software Engineering Management Development Group program, and the SHAPE forum. Information about all of these are available at http://www.geraldmweinberg.com. In my view, Jerry’s work continues to offer an excellent alternative to the whole paradigm of the CMM: managers must first learn to see, hear, and think about human systems before they can hope to control them. Software projects are human systems—deal with it.

One last plug. Add to your reading list The Logic of Failure, by Dietrich Dorner. Dorner analyzes how people cope with managing complex systems. Without mentioning software development or capability maturity, it’s as eloquent an argument against CMM philosophy as you’ll find.

References

1. Berti, Pat, "Four Pennsylvania schools await defense cuts.", Pittsburgh Business Times, Jan 22, 1990 v9 n24

2. Coplien, James, "Borland Software Craftsmanship: a New Look at Process, Quality and Productivity", Proceedings of the 5th Borland International Conference, 1994

3. Couger, J. Daniel; McIntyre, Scott C.; Higgins, Lexis F.; Snow, Terry A., "Using a bottom-up approach to creativity improvement in IS development.", Journal of Systems Management, Sept 1991 v42 n9 p23(6)

4. Hammer, Michael; Champy, James, Reengineering the Corporation, HarperCollins, 1993

5. Humphrey, Watts, Managing the Software Process, ch. 2, Addison-Wesley, 1989

6. Jones, Capers, Assessment & Control of Software Risks, Prentice-Hall, 1994

7. Paulk, Mark, et al, Capability Maturity Model 1.1 (CMU/SEI-93-TR-24)

8. Peters, Tom, The Tom Peters Seminar: Crazy Times Call for Crazy Organizations, Random House, 1994

9. Peters, Tom, Thriving on Chaos: Handbook for a Management Revolution, HarperCollins, 1987

10. Senge, Peter, The Fifth Discipline, Doubleday, 1990

11. Senge, Peter, The Fifth Discipline Fieldbook, Doubleday, 1994

12. Weinberg, Gerald M., Quality Software Management, v. 1 Systems Thinking, Dorset House, 1991

13. Weinberg, Gerald M., Quality Software Management, v. 2 First-order measurement, Dorset House, 1993

Bug Entry Good Practices


What is a good test case?

By Cem Kaner

This is a good summary for people new to software testing. I'm a strong believer in using multiple "test styles" and test activities as part of an overall testing strategy. This paper breaks black box testing into Function, Domain, Specification, Risk-based, Stress, Regression, User, Scenario, State-model based, High volume automated, and Exploratory testing. Internally we may use different terms, but we hit most of these categories in some form. For example, in addition to functional testing, which is probably the dominant style, we also do stress, capacity, security, specification (feature specs as well as other requirements such as logo and Accessibility/Section 508), scenario, and exploratory testing. We do "User" testing both by dogfooding our product and through betas and early adopter programs. My team has a few pilot projects with State-model based testing, but it's limited right at the moment. We have recently done a lot more high volume automated testing on our IDE features, where we take arbitrary code and apply generic tests to it. This has been pretty successful. We've taken our compiler test suite of 20,000+ language tests and run it through this engine, and have found a number of bugs we didn't find with traditional methods.
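The "high volume automated" style described above -- applying a small set of generic checks to a large corpus of inputs, rather than hand-writing an expected result for each case -- can be sketched in a few lines. This is only an illustrative sketch, not the engine the paragraph describes: the `is_sorted` and `same_elements` invariants, the use of a sorting routine as the system under test, and the random input generator are all my own assumptions, chosen to keep the example self-contained.

```python
import random

def is_sorted(xs):
    # Generic invariant 1: output is in non-decreasing order.
    return all(a <= b for a, b in zip(xs, xs[1:]))

def same_elements(xs, ys):
    # Generic invariant 2: output is a permutation of the input.
    return sorted(xs) == sorted(ys)

def high_volume_test(function_under_test, cases=20000, seed=42):
    """Run one function against many generated inputs, checking
    generic invariants instead of hand-written expected values."""
    rng = random.Random(seed)  # fixed seed keeps failures reproducible
    failures = []
    for i in range(cases):
        data = [rng.randint(-1000, 1000) for _ in range(rng.randint(0, 50))]
        result = function_under_test(data)
        if not (is_sorted(result) and same_elements(data, result)):
            failures.append((i, data, result))
    return failures

# A correct implementation survives the whole corpus...
assert high_volume_test(sorted) == []

# ...while a buggy one (silently drops duplicates) is caught quickly.
def buggy_sort(xs):
    return sorted(set(xs))

assert len(high_volume_test(buggy_sort, cases=200)) > 0
```

The payoff is the same one the paragraph reports: once the generic checks exist, every new input corpus (here, randomly generated lists; in the article, 20,000+ existing language tests) becomes a source of fresh test cases at essentially no authoring cost.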

Managing a Software Test Team

by James Bach
Copyright 1997, Satisfice, Inc.

Whether you're an individual tester assigned to find bugs for a team of developers, or the manager of a testing department with 75 testers, you have an uphill job. When testing, you can't be sure you will catch all the problems, or even all of the important ones. You can criticize the product, but you can't directly improve it. Your work results in little that's tangible, so people assume there isn't much to it. If you're new to test leadership, let me assure you that these problems are normal and manageable. There is hope. In this article, I offer a set of principles and guidelines for being a successful test lead. These principles come from my own experience as a test manager and consultant, and from many mentors and colleagues who helped me learn the craft.

The environment of testing

No tester is an island. Beyond the technological issues inherent in testing, and the details of your test strategies, the problems you face in testing depend largely on how your project environment is structured. In my experience, most test teams operate in projects that provide little support for the needs of a good testing process. There's nothing sinister about that. It's just that few people know much about testers, testing, or the dynamics of software quality. What we don't understand, we generally don't support. Your job as a test team lead is partly to make the test process visible, along with its benefits and consequences, so that the environment will support it better. There isn't room here to detail all of the aspects of the typical project environment for testing, but here are a few points to keep in mind. No matter what management says about product quality, they usually don't behave as if it has to be especially excellent. The fact is, very few software products really have to be especially excellent. Software quality is rarely a selling point; it's just that a noticeable lack of quality in your product can be a selling point for your competitors.
No matter what you do, you'll never have enough staff to do what you'd like to do. The requirements and specifications for the product, most of which will never be written down, will be vague or outdated whenever you do see them.

The mission and tasks of testing

Everybody shares the overall mission of the project -- something like "ship a great product" or "please the customer." But each functional team also needs a more specific mission that contributes to the whole. In well-run projects, the mission of the test team is not merely to perform testing, but to help minimize the risk of product failure. Testers look for manifest problems in the product, potential problems, and the absence of problems. They explore, assess, track, and report product quality, so that others in the project can make informed decisions about product development. It's important to recognize that testers are not out to "break the code." We are not out to embarrass or complain, just to inform. We are human meters of product quality. The heart of testing is examining a product system, evaluating it against a value system, and discovering whether the product satisfies those values. To do this completely is impossible. Trust me on this. No testing process, practical or theoretical, can guarantee that it will reveal all the problems in a product. However, it may be quite feasible to do a good enough job of testing, if the goal of testing is not to find all the problems -- only the critical ones; the ones that will otherwise be found in the field and lead to painful consequences for the company. The thing is, it can be difficult to tell the difference between a good job of testing and a poor one. Since many problems that are likely to be found in the field are just plain easy to find, even a sloppy test process can be accidentally sufficient for some projects. And if the product doesn't have many critical problems to begin with, even a crippled test process may seem successful.
Systematic good enough testing, on the other hand, is more than just a process of finding problems. It also provides a reasonable confidence on which to base decisions about the product.

A note about quality assurance: the role of quality assurance is a superset of testing. Its mission is to help minimize the risk of project failure. QA people try to understand the dynamics of project failure (which includes product failure as an aspect) and help the team prevent, detect, and correct the problems. Often test teams are called QA teams, sometimes in the enlightened belief that testers should evolve into the broader world of QA, and sometimes just because it sounds cooler.

If you take the mission of reducing risk seriously, then you need a test process that provides a balanced perspective on the quality of the product, and a reasonable confidence of revealing critical problems that may be lurking there. This requires effective collaboration with other members of the team, as well as users and other stakeholders who may not be on the project team. It must be coordinated in lock step with all of the other activities of the product development lifecycle. Testers themselves must have the right skills and knowledge to design and perform the test process. Finally, you as a test lead have to support all of these tasks. That makes six major tasks of testing. There isn't room here to expand on all of them, but here's a little more detail on each one.

1. Monitor product quality.
1.1 Quality Planning: explore and evolve a clear idea of the requirements for your product.
1.2 Quality Assessment: compare your observations of the product with the quality standard.
1.3 Quality Tracking: be alert to changes in quality over the course of the project.
1.4 Quality Reporting: keep the team informed about your findings.

Quality monitoring, overall, is the core service of the test team, and supports their mission of helping to minimize the risk of product failure.
The remaining testing tasks support this service.

2. Perform an appropriate test process.
2.1 Test Planning: decide what test techniques to use, and how they will be applied.
2.2 Product Analysis: understand what the product is and how it works.
2.3 Product Coverage: decide which parts of the product to test, and assure that they are covered.
2.4 Test Design: design tests that fulfill the test plan.
2.5 Test Execution: execute tests that fulfill the test designs and test plan.

A test process is appropriate if it provides an acceptable level of confidence in the quality assessment. The less sure you need to be about your quality assessment, the less extensive your test process needs to be. Don't insist on extensive testing for low-risk components. But if you need to ship a high quality product, then you need to work hard at the tasks above.

3. Coordinate the testing schedule with all other activities in the project.

Make sure you know what's going on in the project. Many other processes beyond the test team can impact the test process. Make sure you are included in planning and status meetings. When testing is on the critical path, management is strongly tempted to cut it short. So don't stand directly in front of the project freight train, if you can help it. Start testing the moment that a barely testable product is available, build up a backlog of bug reports, then try to keep development working up to the last minute, fixing bugs, while insisting on careful change control to help assure that no reckless modifications are made to the code base. If you can safely perform incremental, spot regression testing on changes, rather than multi-week cycles of general testing, then the developers stay on the critical path throughout most of the project. By the way, if your project has a sloppy release process, all your work in testing could go out the window at the last minute, due to one ill-advised bug fix that introduces a slew of new problems.

4. Collaborate with the project team and project stakeholders.

Since testers are merchants of information, their success lies in large part with the quality of their information pipelines to and from the other functions of product development, as well as to stakeholders who aren't even on the project team. If you can establish a connection to actual or potential users of the product, they can help you improve your notion of acceptable quality. Developers give more credence to tests that involve realistic situations, so collecting data from users for use in test design is a good idea. Technical support people often hear from users who have concerns about quality, so they are useful to pal around with, too. They can also help you understand what kinds of problems are likely to generate costly support calls. Virtually all products have some kind of online help or user manual. Whoever writes them has to examine the product. When they find problems, they should bring them to you, or at least copy you on them. You can help them understand the product better, and they can help you spot patterns of problems. Also, a user manual is an invaluable tool to use as a test plan. Whenever the technical writers ask you to review a manual or help file, I'd suggest dropping everything else for a week and doing nothing but test against the writing. You are guaranteed to learn more about the product and find important bugs to boot. The marketing team produces fact sheets and ads, and those should be tested. Besides, any claim that goes into an advertisement gives you more leverage to argue for a high quality standard. Marketing people can help you understand which parts of the product they believe are most critical for impressing customers in the field, which helps justify the resources to test them. Your relationship to management dictates how well you will be allowed to do your job. Do what you can to make management understand and value your test processes, and especially your mission.
The key thing with management is to track historical data about customers experiencing critical problems, then show them how a better process might prevent future mishaps. People learn best from their own failures. Your most important relationship is with the developers. This is the one that makes your job or breaks it.

What testers need from developers

Information:
What the product is and how it works.
How the product is intended to be used.
What parts of the product are at greater risk of failure.
The schedule for remaining development work.
Status and details of any changes or additions to the product.
Status of reported problems.

Services:
Provide timely information to the testers.
Respond quickly to reported problems.
Stay synchronized with the project schedule.
Collaborate to set an appropriate standard of quality.
Involve testers in all decisions that may impact them.
Seek to understand the test process.

What developers need from testers

Information:
Problems found.
What will be tested.
Schedule for testing.
Status of the current test cycle.
What information testers need to work effectively.

Services:
Provide timely information to Development.
Seek to understand the general process of creating software.
Seek to understand the product and its underlying technologies.
Respond quickly to new builds.
Stay synchronized with the schedule, and don't delay the project.
Collaborate to set an appropriate standard of quality.

5. Study and learn.
5.1 Technology Study: learn about component technologies used in your product.
5.2 Market Study: learn how your users think, and who your competition is.
5.3 Testing Study: strive to know your job well enough to teach it to others.
5.4 Experiential Learning: preserve notes, collect metrics, and review problems found in the field.

The world of technology changes so fast that it takes constant learning to get good at testing it. Good testing is so hard that we need to be constantly looking out for ways to do it better.

6. Lead the test team.
6.1 Resource Estimation: figure out what you need to do the job.
6.2 Resource Acquisition & Allocation: get the resources you need and distribute them wisely.
6.3 Tester Assessment & Improvement: assure that the testers are performing up to par.
6.4 Pressure Management: be a buffer between the testers and the pressure to ship.
6.5 Relationship Management: establish and monitor relationships to other roles on the project.
6.6 Information Management: assure that information is gathered and distributed to the right people.
6.7 Process Management: assure that the test process is sound, synchronized, and visible.
6.8 Quality Management: work with development to maintain a reasonable standard of quality.

Testers who report directly to the development team, without an intervening test manager, are at a disadvantage, because these important leadership activities are shortchanged. A good test manager makes good testing possible by monitoring all the rest of the activities, solving problems with them, and speaking with one voice about them to management. It's hard to pull a test team up out of accidentally sufficient testing and toward a test process that's more professional. Few people are prepared for this challenge when it's first thrust upon them. I certainly wasn't prepared when I found myself in the job. The good news is that you can readily become better at it than most, just by striving to be good enough.

What are developers?

For the purposes of this article, I'm calling anyone who actually creates any aspect of the product a developer; whoever it is who creates quality and is the object of attention by those of us who assess quality.

What is their mission?

Naturally, developers want to ship a great product, or at least a good enough one. But rather than thinking in terms of minimizing the risk of failure, as testers do, developers are committed to creating the possibility of success.
Their mission positions them at the vanguard of the project, while testing defends their flank. There's a natural tension between these roles, because achieving one mission can potentially negate the other. Any possibility of success creates some risk of failure. The key is for each role to do its job while being mindful of the other's mission.

What is their focus?

Developers, by and large, are focused on the details of technology, rather than the more abstract concerns of process or quality dynamics. The reason for this is simple: you can't create software without skill in manipulating technology and tools, and such skill requires a lot of time and effort to maintain. To produce software of consistently high quality, a lot of other kinds of skill are required, but consistently high quality is more often a desire than a need. Most developers believe that their knowledge of technology is the best key to survival and success in their field, and they are right. Developers are usually not focused on testers and what they do, until the latter stages of the project, when development work is more or less defined by the outcome of testing. Testers, on the other hand, are always focused on the work of developers. That mismatch is another source of tension.

What problems crop up between testers and developers?

The differing focus of each role can create a situation where each looks amateurish to the other. Developers, who know that high quality becomes possible only by mastering the details of technology, worry that testers have too little technology skill. Testers, who know that high quality becomes achievable only by taking a broad point of view, worry that developers are too involved in bits and bytes to care about the users. Each role creates work for the other, and people performing either role often have little understanding of what the other guy needs.

How do you achieve a good relationship with them?
I've found that the one key to relating to developers is to refer to your common mission of shipping a great product. Express the attitude that testing is here to allow development to focus on possibilities without falling prey to risks. You're going to provide information to them about risks. If you can't agree on the risks, then perhaps you can agree to watch closely what happens when the product is released, and learn from that experience. Another key is to be clear about what each of you needs from the other. Don't assume that your needs should be obvious to them. Finally, master as much of the technology as you can.

Johanna Rothman: Life as a New Test Manager

© 1999 Johanna Rothman

These notes represent the opinions of the BOF attendees. Where I couldn't resist, I added my comments.

We had about 50-60 people attending the BOF. Demographics were:

Currently a manager: most
1 year or less management experience: 10
1-5 years of experience: 12
> 5 years experience: a few

We discussed what we would talk about:

Subject (interest level):

* Power and influence in the organization (higher)
* What do I do? (moderate)
* How much testing do I do? (lower)
* Balancing freedom vs. guidance and coaching of staff (higher)
* Making people think it was their idea (moderate)
* Tracking tasks and juggling (moderate)
* Getting technical people to test vs. develop, including keeping people in test and keeping people in the company (lower)
* Recruiting (moderate)
* Motivating people who need to do better (higher)
* Performance assessment (moderate)
* Justifying and determining the size of the testing effort (lower)
Influence

* Interpersonal skills required!
* Negotiation
* Get a track record
o Start small
o Get data
* Organizational agreement
o What is test's role?
o Be part of the lifecycle
o But how to get management's attention?
+ Data
+ Terse text (JR note: I think this is where I said senior managers prefer to read bullets and graphs to prose)
* Need to understand the project cube of time-to-market, cost-to-market, functionality, defects (JR adds the people and the work environment)
* One must be flexible
* Do quick project retrospectives
* Disasters catch attention
o "Significant emotional event"
* Demonstrate influence on the bottom line
* Logic, Pilot, Challenge ("I dare you"), Shame and Guilt ("xxx does this")
* Continuous communication upwards
o Be in their face
o Continuous mindshare
o Before the test plan
* "Hiring us was your idea"
* "Why? What risks do you see?"
* Talk about risk of features and cost of testing
* We asked about attendees' peers:
o More experience than you (some, small number)
o Do they have an effect on your influence (mixed results)

We talked about influence issues for about 45 minutes, and started talking about trust. If you're not trusted in the organization, how can you get things done?
Trust

* Hard due to the nature of the job
o Find errors in development work
o See Brian Marick's paper, "Working Effectively With Developers" about one way to work with developers
* Establishing credibility vs. being some kind of expert
* Write good bug reports. Make them complete
* Defer to developer expertise (but ask questions)
* Remind development: "I'm on your side"
* Target major issues, not minor ones (in development's experience)
* Know end user - specialty
* Don't feel moral superiority over development. Sometimes the problem is us.
* Help them specify
* Feed them
* Pair new tester with developer to help with developer testing and learn something about developer's code
* Your staff reviews you
o Skip level reviews (your manager talks to your reports)
o Managers are judged by the people in the team

Freedom to do the job vs. coaching

* Don't wait too long to look
* Hiring people hurts overall performance
* Dealing with passive/aggressive
o Challenge them, make them step up, set expectations
o Listen to them
* Structure communications
o Give freedom within bounds
* Be clear about goals. All else is situational.
* Status report contents
o Planned work done
o Planned work not done
o Unplanned work done
o Planned work going forward
* Look at strength of whole department
o Buddy systems work
o Brian's pointer on the web to a live example of programming pairs: http://www.armaties.com/Practices/PracPairs.html (this link seems to be broken, 7/29/02 -- JR)
* Meet with your staff 5-10 minutes every morning, on their schedule
* 1/2 hour one-on-one weekly
o How much should you prepare in advance? (JR's opinion: Your staff should know what to expect, but leave time for them to bring up issues)
o Encourage your staff to know you
o They own their careers; you're there to mentor them and remove obstacles
* Weekly report (learn how to estimate)
o Objectives
o Planned date of completion vs. actuals. Learn from these as they become actuals.
* Sometimes it's the little things that count
o Get staplers
o Buy lunch
* Find opportunities for your staff to do what they don't know they'll enjoy

Dealing with Shortfalls (of resources)

* How do you quantify your work?
* Make visible work that will be done and won't be done
o Give a choice
* Convincing management there is a shortfall:
o Work out what last project's decisions cost (in engineer years converted to dollars). Now what do you want to do?
o What does customer support cost you? Where are the problems? Now what do you want to do?
* Point out when developers are recruited to do, expensively, what testers can do more cheaply. Now what do you want to do?

When you have too many direct reports

* Break into teams of 4 or less (JR says 6 or less)
* Learn from general management literature
o Jon Hagar's list
+ Competitive Advantage, Michael Porter - pretty dry but a good approach to evaluating competition, markets, etc., in order to create a competitive advantage; I think the concepts can be applied to an organization as well as a company
+ Danger in the Comfort Zone, Judith Bardwick - changing demographics and attitudes that affect productivity
+ Lean Thinking, Womack and Jones - Getting big companies to the right size
+ Getting to Yes, Fisher and Ury - opening negotiation options rather than closing them down, applies to more than just contract negotiations
+ Crossing the Chasm, Moore - marketing and selling technology products to mainstream
+ Type Talk, Kroeger and Thuesen - understanding differences in people and how they can affect the work environment - applies to understanding self and how you work/are perceived by others as well as understanding those you work with/for/over
+ Managing for Results, Drucker - good intro book to management issues (he has other books)
+ A Passion for Excellence, Peters and Austin - a little dated given how American industry has transformed itself, but still applicable
+ The New Rational Manager - Kepner and Tregoe - great book on risk management and problem solving
+ Good Authors:
# Jack Stack - teach everyone from the stockroom employee up how to read financials and to understand how what they did contributed to the bottom line.
# Deming - I think everyone should read his "rules" list
o Anything from Peter Drucker, especially The Effective Executive and Managing for Results
o Watts Humphrey, Managing Technical People
o Steve McConnell, Rapid Development
o Steve Maguire, Debugging the Development Process
o Ken Blanchard, The One Minute Manager

Taking over a group

* Sit back
* Spend a lot of time listening
* Remove obstacles
* Ask what can change, help it happen
* Give credit to people

Who's Your Successor?

* If you don't have someone to succeed you, you're stuck



Johanna Rothman observes and consults on managing high technology product development, looking for the leverage points that will make her clients more successful. You can reach her at jr@jrothman.com or by visiting www.jrothman.com.

Measuring Test Effectiveness

Testing Interview Questions and Jobs

Testing & The Role of a Test Lead / Manager

Thursday, February 23, 2006

TestCrew: Bughunters’ Journal

Improving the Test Process

"Excel-erating" Test Status Reporting

Four Keys to Better Test Management

Twelve Tips for Realistic Scheduling

Categorizing Defects by Eliminating "Severity" and "Priority"

Razor's Blog : Retrieve list of IIS virtual directories using C#

Roadmap for Agile Testing 2004

PCQuest : Server Side : IIS 6’s New Metabase

Server Automation Tools - MetaBase 101. A simple tutorial on the IIS Metabase.

IIS Metabase and programmatic administration in C#

Tuesday, February 21, 2006

Scripting with Flash 5: Calling methods from JavaScript

Testing Flash objects: Scripting with Flash 5: Flash Methods

Extending Flash MX 2004 Series: An Introduction to the JSAPI

AdventNet QEngine - Product Comparison Document

AdventNet QEngine - WebTest Product Comparison

IBM Rational Tester Price

IBM Passport Advantage Express

web_testing_checklist [SoftwareTestingWiki]

role_of_testers_in_XP.pdf

Making the Transition to Management

The Influential Test Manager

Perspectives from a Test Manager