Tuesday, May 31, 2005

SOAP vs Flash Remoting Benchmark


Google for h4ck3r5



Gizoogle - Fo all you beotches who wanna find shiznit

VS.NET and VSS: Adding a solution to VSS


AppSettings In web.config


ASP.NET Articles - Stay up to date with the latest tips, tricks, and techniques in these articles.

DuxQA™ Enterprise - Overview

DuxQA™ Enterprise - Overview: "DuxQA™ Enterprise is currently FREE! Only services such as consultancy, training and maintenance are charged to clients. It is a flexible test planning and management tool, suitable for all types of functional testing, which provides transparency and accountability to professional testing teams. Its powerful quality assurance features also make it ideal for software certification."

Modem AT Commands


QA Traq Test Case Management System GPL License


GenieTCMS: Test Case Management System

What is GenieTCMS?

Our Test Case Management System (TCMS) is a tool that helps a QA department work more efficiently. Depending on the level of integration with your test environment, it can save well over 50% of the time your group needs to complete a test.
TCMS targets two major areas where employee time can be used more efficiently. First is the preparation stage, when a QA engineer has to determine which set of test cases to run for a given product / version / build / release / etc., locate all the scripts, and locate all additional notes and memos. Second is the after-test stage, when the QA engineer has to analyze the results, compare them to what's expected, and document any errors. This second phase can greatly improve productivity when your test scripts generate "feedback" that is loaded back into TCMS.

Saturday, May 28, 2005


What is IETest?

IETest is a .NET library for testing web sites through Microsoft Internet Explorer. It enables you to automate Internet Explorer to the point where you can perform automated testing without any human attention required.


testdriven.com: Your test-driven development community - Web Links

Friday, May 27, 2005

IEBlog : Netscape 8 and Internet Explorer's XML Rendering


Web Testing with Ruby


Learning Ruby


How to Report Bugs Effectively

How to Report Bugs Effectively: "How to Report Bugs Effectively

by Simon Tatham, professional and free-software programmer


Anybody who has written software for public use will probably have received at least one bad bug report. Reports that say nothing ('It doesn't work!'); reports that make no sense; reports that don't give enough information; reports that give wrong information. Reports of problems that turn out to be user error; reports of problems that turn out to be the fault of somebody else's program; reports of problems that turn out to be network failures.

There's a reason why technical support is seen as a horrible job to be in, and that reason is bad bug reports. However, not all bug reports are unpleasant: I maintain free software, when I'm not earning my living, and sometimes I receive wonderfully clear, helpful, informative bug reports.

In this essay I'll try to state clearly what makes a good bug report. Ideally I would like everybody in the world to read this essay before reporting any bugs to anybody. Certainly I would like everybody who reports bugs to me to have read it.

In a nutshell, the aim of a bug report is to enable the programmer to see the program failing in front of them. You can either show them in person, or give them careful and detailed instructions on how to make it fail. If they can make it fail, they will try to gather extra information until they know the cause. If they can't make it fail, they will have to ask you to gather that information for them.

In bug reports, try to make very clear what are actual facts ('I was at the computer and this happened') and what are speculations ('I think the problem might be this'). Leave out speculations if you want to, but don't leave out facts.

When you report a bug, you are doing so because you want the bug fixed. There is no point in swearing at the programmer or being d"

Mart Muller's Sharepoint Weblog - Tam Tam WikiSharePoint beta 1 for WSS and SPS!


GENIESYS.NET -> GenieTCMS™ -> Demo


Thursday, May 26, 2005

Gauging Software Readiness with Defect Tracking

IEEE Software Best Practices - May-June 1997

How to Misuse Code Coverage

What is code coverage?
There are many different coverage measures. A simple one is to record which lines of code were executed. If a line has never been executed, it's a safe bet you didn't catch any bugs lurking in it. This type of coverage is usually called “statement coverage”.
In this paper, I'll use a somewhat more powerful type. Many commercial tools measure some variant of this coverage; few measure anything more powerful. Different people use different names for this type of coverage: "segment coverage", "multiple condition coverage", "branch coverage" (arguably incorrectly), and so on. I'll use "multiple condition coverage". It measures whether each logical condition in the code has evaluated both true and false (in some execution of the program, not necessarily in the same one).

Data Flow Analysis Techniques for Test Data Selection

This paper examines a family of program test data selection criteria derived from data flow analysis techniques similar to those used in compiler optimization. It is argued that currently used path selection criteria, which examine only the control flow of a program, are inadequate. Our procedure associates with each point in a program at which a variable is defined, those points at which the value is used. Several related path criteria, which differ in the number of these associations needed to adequately test the program, are defined and compared.

Program testing is the most commonly used method for demonstrating that a program actually accomplishes its intended purpose. The testing procedure consists of selecting elements from the program's input domain, executing the program on these test cases, and comparing the actual output with the expected output (in this discussion, we assume the existence of an "oracle", that is, some method to correctly determine the expected output). While exhaustive testing of all possible input values would provide the most complete picture of a program's performance, the size of the input domain is usually too large for this to be feasible. Instead, the usual procedure is to select a relatively small subset of the input domain which is, in some sense, representative of the entire input domain. An evaluation of the performance of the program on this test data is then used to predict its performance in general. Ideally, the test data should be chosen so that executing the program on this set will uncover all errors, thus guaranteeing that any program which produces correct results for the test data will produce correct results for any data in the input domain. However, discovering such a perfect set of test data is a difficult, if not impossible task [1,2]. In practice, test data is selected to give the tester a feeling of confidence that most errors will be discovered, without actually guaranteeing that the tested and debugged program is correct. This feeling of confidence is generally based upon the tester's having chosen the test data according to some criterion; the degree of confidence depends on the tester's perception of how directly the criterion approximates correctness. Thus, if a tester has a "good" test data criterion, the problem of test data selection is reduced to finding data that meet the criterion.
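The def-use association the paper describes can be sketched in a few lines of Python. The program representation and helper below are invented for illustration: each entry records a line number, the variables it defines, and the variables it uses.

```python
# Sketch of def-use pairs: for each point where a variable is defined,
# find the later points where that value is used, stopping when the
# variable is redefined (the value is "killed").

program = [
    (1, {"x"}, set()),        # x = input()
    (2, {"y"}, {"x"}),        # y = x * 2
    (3, {"x"}, {"x"}),        # x = x + 1   (uses old x, then redefines x)
    (4, set(), {"x", "y"}),   # print(x, y)
]

def du_pairs(prog):
    pairs = []
    for i, (dline, defs, _) in enumerate(prog):
        for v in defs:
            for uline, udefs, uses in prog[i + 1:]:
                if v in uses:
                    pairs.append((v, dline, uline))
                if v in udefs:
                    break  # value killed by a new definition
    return pairs

print(sorted(du_pairs(program)))
# [('x', 1, 2), ('x', 1, 3), ('x', 3, 4), ('y', 2, 4)]
```

A data-flow criterion then asks that the selected test paths exercise some or all of these associations, not merely the branches.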

Andy Oakley : 'The Compressed (zipped) Folder is invalid or corrupted'

Andy Oakley : 'The Compressed (zipped) Folder is invalid or corrupted': "'The Compressed (zipped) Folder is invalid or corrupted'

Some people have reported problems downloading releases from the Workspaces application; either they find a 0 byte file, or are presented with the message 'The Compressed (zipped) Folder is invalid or corrupted'. Either way, not fun.

From the list of late-breaking issues:
On downloading a release, the error message 'The Compressed (zipped) Folder is invalid or corrupted.' is shown
This corresponds to a known problem in the way Internet Explorer downloads compressed files. For more information, see KB 308090. As suggested in that article, this problem can be worked around by first saving the compressed folder to disk, and opening the release from the saved location.

So, the KB has background and the workaround is to save to disk. What if you never have that option? You may have (inadvertently) configured your machine to always open ZIP files, without asking you first. To fix this, open up any Windows Explorer window, Tools/Folder Options, then on the File Types tab find ZIP files and click Advanced. Finally, make sure the 'Confirm open after download' box is checked.

All fixed."

SourceGear: Dragnet


reflection emit and generics.

Managed CodeGen :

If there is anyone who knows how software really works, it is the testers... and Yiru Tang is no exception. Check out Yiru's first blog and learn about reflection emit and generics.

What is managed code?

Brad Abrams : What is managed code?: "What is managed code?

Managed code is code that has its execution managed by the .NET Framework Common Language Runtime. It refers to a contract of cooperation between natively executing code and the runtime. This contract specifies that at any point of execution, the runtime may stop an executing CPU and retrieve information specific to the current CPU instruction address. Information that must be query-able generally pertains to runtime state, such as register or stack memory contents.

The necessary information is encoded in an Intermediate Language (IL) and associated metadata, or symbolic information that describes all of the entry points and the constructs exposed in the IL (e.g., methods, properties) and their characteristics. The Common Language Infrastructure (CLI) Standard (of which the CLR is the primary commercial implementation) describes how the information is to be encoded, and programming languages that target the runtime emit the correct encoding. All a developer has to know is that any of the languages that target the runtime produce managed code emitted as PE files that contain IL and metadata. And there are many such languages to choose from, since there are nearly 20 different languages provided by third parties – everything from COBOL to Camel – in addition to C#, J#, VB .Net, Jscript .Net, and C++ from Microsoft.

Before the code is run, the IL is compiled into native executable code. And, since this compilation happens by the managed execution environment (or, more correctly, by a runtime-aware compiler that knows how to target the managed execution environment), the managed execution environment can make guarantees about what the code is going to do. It can insert traps and appropriate garbage collection hooks, exception handling, type safety, array bounds and index checking, and so forth. For example, such a compiler makes sure to lay out stack frames and everything just right so that the garb"

Tough ASP.NET interview questions < TechInterviews.com

Tough ASP.NET interview questions < TechInterviews.com: "Q24 - System.Array.CopyTo() - Copies the elements of one Array to another existing Array (note: this is a shallow copy when the elements are reference types)

Q26 - Boxing is an implicit conversion of a value type to the type object
int i = 123; // A value type
object box = i; // Boxing
Unboxing is an explicit conversion from the type object to a value type
int i = 123; // A value type
object box = i; // Boxing
int j = (int)box; // Unboxing

Q27 - String is Reference Type.
Value type - bool, byte, char, decimal, double, enum, float, int, long, sbyte, short, struct, uint, ulong, ushort
Value types are stored in the Stack
Reference type - class, delegate, interface, object, string
Reference types are stored in the Heap

Comment by Harish — 5/11/2005 @ 9:07 am"

Microsoft Recruits for .Net Framework Compatibility Testing

Microsoft Recruits for .Net Framework Compatibility Testing: "Microsoft white paper: 'When the application is started up on the .Net Framework 1.0, 1.1 or 2.0, the CLR [Common Language Runtime] looks at the .Net Framework version recorded in the application and tries to run the application on the version of the .Net Framework that the application was compiled with. If that version is not installed on the machine, the CLR will attempt to start the application on the latest .Net Framework and CLR, for example, an application compiled for .Net Framework 1.0 running on a machine with only .Net Framework 1.1 will be rolled forward to run on the .Net Framework 1.1. Likewise, an application compiled for .Net Framework 1.1 running on a machine with only the .Net Framework 2.0 will be rolled forward to run on the .Net Framework 2.0.'"
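For reference, the version an application binds to can also be stated explicitly in its configuration file rather than left to the roll-forward behavior described above. A minimal sketch (the version strings shown are the standard 1.0/1.1 build numbers; listing order expresses preference):

```xml
<!-- app.config: explicitly list the runtime versions the application
     supports, in order of preference. -->
<configuration>
  <startup>
    <supportedRuntime version="v1.1.4322" />
    <supportedRuntime version="v1.0.3705" />
  </startup>
</configuration>
```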

Interactive DHTML art-demos - Gerard Ferrandez


Microsoft Virtual PC 2004 Evaluation Guide


Virtual PC 2004 Shortcuts

this is from Virtual PC 2004 Help
**Using the keyboard and mouse in a virtual machine**

Key combination Description
Host key+L Restores Virtual PC Console from a minimized state. Moves Virtual PC Console to the foreground.
Host key+I Installs Virtual Machine Additions.
Host key+ENTER Toggles a virtual machine between full-screen mode and window mode.
Host key+DELETE Sends CTRL+ALT+DELETE to the virtual machine operating system.
Host key+P Pauses or resumes the virtual machine, depending upon its current state.
Host key+R Resets the virtual machine.
Host key+F4 Closes the virtual machine.
Host key+C Copies the selected items.
Host key+V Pastes a copied item.
Host key+A Selects all.
Host key+E Opens the virtual machine settings.
Host key+DOWN ARROW Minimizes the virtual machine.
Host key+LEFT ARROW Switches to the previous virtual machine when running multiple virtual machines, unless you are using full-screen mode.
Host key+RIGHT ARROW Switches to the next virtual machine when running multiple virtual machines, unless you are using full-screen mode.

Wednesday, May 25, 2005

From Email on Path Testing

"Paths" are a phenomenon of "computational models", in which (roughly speaking) real-world "objects" are represented by "nodes" (vertices) and the relationships between them by "links" (edges, arcs). A set of nodes and links constituting a computational model is a "graph". If the links have a preferred direction of traversal (from a "source node" to a "target node"), they are "directed links" and you have a "directed graph".

A finite state machine (as you asked about) is usually a "partially directed graph" (most transitions may be bi-directional, but some transitions may be irreversible) in which the nodes represent component or system states, and the links represent transitions between them. Control flow graphs and cause-effect graphs are always directed graphs; in the first, the nodes represent points at which flow of control is exercised other than by fall-through, and the links represent the flows of control; in the second, the nodes represent "causes" (stimuli) or "effects" (responses) and logical operations between them, while the links represent the logical relationships that enter into, and result from, the logical operations.

In a computational model, a "path" is a sequence of nodes joined by links. Some (many) paths may be "unachievable" (infeasible) because they would depend on forbidden combinations of conditions. Testing all "achievable" paths is known as "path coverage", with the implication (if unqualified) that "100% path coverage" ("all-paths coverage") is meant. When you ask about "traversing all the paths", Saravana, you presumably mean "achieving all-paths coverage", which is often infeasible in testing non-trivial components because the number of achievable paths is often more than astronomical -- particularly when loops are involved. (Of course, OO methods often fall into the "trivial" class.)

Apart from its basic definition given above, "path" can have different meanings for different types of model, such as finite state machines, flowgraphs, cause-effect graphs, etc. In a finite state machine, a "path" is known as an "n-switch", meaning a sequence of transitions between "n+2" states. E.g., given three states, "A", "B", and "C", and no forbidden transitions: the sequences "A->B", "B->A", "B->C", "C->B", "C->A", and "A->C" would achieve "0-switch coverage"; triplet sequences such as "A->B->C", "B->C->A" (etc.) would achieve "1-switch coverage"; quadruplet sequences such as "A->B->C->B", "A->B->C->A" (etc.) would achieve "2-switch coverage"; and so on. ("Computation of the sizes of covering sets for 1-switch and 2-switch coverage in this example model will be left as an exercise for the reader.") It will often be true that a complete set of switches (all-paths coverage) will be infinite in size -- for example, just a single member of such a set might be "A -> B -> A -> B ..." ad infinitum, each transition utilising a different set of start and end data.
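The switch sequences for the three-state example can be enumerated mechanically. A small sketch (assuming, as in the example, three states and no forbidden transitions):

```python
# Enumerate the sequences needed for 0-switch and 1-switch coverage
# of a three-state machine with all transitions allowed.
from itertools import product

states = ["A", "B", "C"]
transitions = [(a, b) for a, b in product(states, states) if a != b]

# 0-switch coverage: every single transition (sequences of 2 states).
zero_switch = transitions

# 1-switch coverage: every chained pair of transitions (3 states).
one_switch = [(a, b, c) for (a, b) in transitions
              for (b2, c) in transitions if b2 == b]

print(len(zero_switch), len(one_switch))  # 6 12
```

So the full 1-switch set here has 12 sequences (each of the 6 transitions can be extended by the 2 transitions leaving its end state); the 2-switch set grows by another factor of 2, which is the combinatorial growth the text warns about.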

In a control flowgraph (as might be developed from, say, a use case or a set of sequence diagrams), paths are uni-directional, in the sense that a link such as "A->B" cannot be traversed in reverse. (A triad such as "B->A" in the same graph would represent a link different from "A->B", the two triads together representing a loop.) Generally, only "entry-exit paths" are of interest here (starting at an "entry node" of a process and ending at an "exit node"), since processes normally can't start or end in the middle, except by the insertion of additional entry and/or exit nodes. All-paths coverage won't require an infinite set of paths unless the graph includes at least one error-free uncounted loop, but even relatively simple graphs may require quite high numbers of paths to achieve coverage. In principle (again), the number of paths needed to achieve 100% path coverage of a non-looping graph doubles with each binary decision node in the graph ("2^n"), though nested decisions and unachievable combinations both work to reduce the number.

You (Saravana) worry that, "When u developed the system, and give it to client, the system may have traversed all the paths, suppose we have not covered a particular path in FSM, then this path is explored in future, the system may fail. So there is a need to traverse all paths" -- but, as I've indicated above, this may well be impracticable if not impossible, especially for state transition graphs. This is part of the risk-based nature of all testing. The trick is to find a set of paths that provides *adequate* coverage without being *exhaustive*.

I know of no simple way to identify an "adequate set" for finite state models (which doesn't mean there isn't one -- I'm not well-read on the subject). For control flowgraphs, the commonly accepted criterion for adequacy is the "basis set", which covers 100% of the "basis paths" (a.k.a "independent circuits"). A basis path is an achievable entry-exit path such that, during selection of "this" path, only one decision node in "the prior" path is switched to an alternate link (i.e., only one new node sequence, not previously covered, is introduced). Also, if the path traverses a loop, there is only one iteration of the loop, unless forced by a minimum iterations criterion.

Basis paths have the property that, when you have a complete set (a "basis set"), you can generate *all achievable paths* (i.e., 100% path coverage) by simple addition and subtraction of path elements in the existing set -- hence the name, "basis path set" (a basis for "all paths" coverage, if you want to try for it ...).

A basis set generally contains more paths than a branch-covering set (which exercises all explicit branches), which in turn is generally larger than a statement-covering set (which excludes "null" branches such as an empty "else"). From the testing point of view, it has many advantages -- not just that it exercises more paths than branch coverage, but that:

* It's very systematic (so there's less chance of "missing" cases);
* Test data preparation may be simplified (you may need to change only two variables per test data set in order to exercise or "force" successive paths);
* Debugging may be greatly simplified (the bug is either in or closely related to the "new path segment" exercised for the first time in the test case that found the bug).

With regard to the last point, the operating ideal is, "not more than one bug per test path; not more than one test path per bug". Glen Myers called this, "high-yield testing".

The basic method for generating a basis set is reasonably simple:

0. Create a flowgraph of the algorithm you wish to cover.
1. Identify and record the simplest, most obvious entry-exit path.
2. In the most recently recorded path, identify the first decision node which, when switched to an alternate branch, will lead to a smallest amount of change from the most recently recorded path; trace and record the new path.
3. Repeat (2) until all nodes and links are covered.

Note that, at step (1), some authorities prefer to record the simplest "happy-day scenario" path, which is often longer than the shortest achievable path (often an error case). However, this may make it harder to prepare the initial test data set, and (because of the larger scope of the test case) to isolate the location of any bug(s) it catches.

As a general example, consider the following algorithm:

0. begin
1. if order specifies air shipment
2. if Weight <= 2 kg
3. then Rate = 6 units
4. else if [Weight > 2 kg but] Weight <= 20 kg
5. then Rate = Weight x 3 units
6. else
7. Excess = Weight – 20; Rate = 60 + (Excess x 4 units)
8. endif
9. endif
10. if Destination = Brazil or Eire or China
11. then Rate = Rate + 5%
12. endif
13. endif
14. end
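For concreteness, here is a straight Python transliteration of the numbered algorithm (the function name and input encoding are my own, and the `elif` collapses the nested else/if at nodes 4-6; node numbers are noted in comments):

```python
# Shipping-rate algorithm from the flowgraph example, one test input
# per basis-path scenario.

def shipping_rate(air, weight, destination):
    rate = 0
    if air:                                             # node 1
        if weight <= 2:                                 # node 2
            rate = 6                                    # node 3
        elif weight <= 20:                              # node 4
            rate = weight * 3                           # node 5
        else:
            excess = weight - 20                        # node 7
            rate = 60 + excess * 4
        if destination in ("Brazil", "Eire", "China"):  # node 10
            rate = rate * 1.05                          # node 11
    return rate

print(shipping_rate(False, 10, "USA"))     # Path 1 -> 0
print(shipping_rate(True, 1, "USA"))       # Path 2 -> 6
print(shipping_rate(True, 1, "Eire"))      # Path 3 (6 plus 5%)
print(shipping_rate(True, 10, "China"))    # Path 4 -> 31.5
print(shipping_rate(True, 25, "Brazil"))   # Path 5 -> 84.0
```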

Using line numbers as node numbers, the simplest and most obvious entry-exit path traverses 0-1-13-14 ("order doesn't specify air shipment"). An alternative "happy-day" path would traverse 0-1-2-3-9-10-12-13-14: "air shipment for <= 2 kg, not going via Brazil, Eire, or China".

In Path (1) (0-1-13-14), there is only one decision node, node (1), so for path (2) we select the *simplest* consequence of switching it (from "false" to "true"). This leads to the subpath "0-1-2-3" rather than "0-1-2-4", since selecting node (3) represents a "simpler" choice than the consequences of selecting node (4) (a single action, at node 3, rather than a nested decision followed by a choice of actions, at node 4). From node (3), we continue with the sequence, "9-10-12-13-14". We now have two paths as follows, with the differences from the first to the second (the newly-traversed nodes) set off in brackets:

Path 1: 0-1-13-14
Path 2: 0-1-[2-3-9-10-12]-13-14

If execution of Path 2 reveals a bug, it will be either in or closely related to the bracketed sequence.

I'll spare you description of how the remaining paths are generated, and simply present the full basis test set (one of several candidate sets), marking out the new path segment (subpath) in each path:

Path 1: 0-1-13-14
Path 2: 0-1-[2-3-9-10-12]-13-14
Path 3: 0-1-2-3-9-10-[11]-12-13-14
Path 4: 0-1-2-[4-5-8]-9-10-11-12-13-14
Path 5: 0-1-2-4-[6-7]-8-9-10-11-12-13-14

The paths correspond to the following scenarios:

Path 1: not an air shipment
Path 2: air shipment, weight <= 2 kg, destination not Brazil / Eire / China
Path 3: air shipment, weight <= 2 kg, destination = Brazil / Eire / China
Path 4: air shipment, 2 kg < weight <= 20 kg, destination = Brazil / Eire / China
Path 5: air shipment, weight > 20kg, destination = Brazil / Eire / China

We can verify coverage by simple inspection (is each of the 14 nodes present at least once?) or by constructing a coverage verification table as we go (nodes horizontally, paths vertically; for each path, mark in that row the nodes it traverses; when there is at least one tick for each node, you have node *and* link coverage).
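That verification by inspection can also be done mechanically. A small sketch, checking node coverage only (not link coverage) for the five paths listed above:

```python
# Verify that the five basis paths together touch every numbered node
# (0 through 14) of the example flowgraph.

paths = [
    [0, 1, 13, 14],                                      # Path 1
    [0, 1, 2, 3, 9, 10, 12, 13, 14],                     # Path 2
    [0, 1, 2, 3, 9, 10, 11, 12, 13, 14],                 # Path 3
    [0, 1, 2, 4, 5, 8, 9, 10, 11, 12, 13, 14],           # Path 4
    [0, 1, 2, 4, 6, 7, 8, 9, 10, 11, 12, 13, 14],        # Path 5
]

covered_nodes = set().union(*map(set, paths))
all_nodes = set(range(15))          # numbered nodes 0..14
print(all_nodes - covered_nodes)    # set() -> every node is covered
```

Extending this to link coverage is a matter of collecting the consecutive node pairs from each path and comparing them against the graph's edge list.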

The maximum size of a basis set is easily computed for any algorithm as L - N + E + X, where "L" = links, "N" = nodes, "E" = "entry nodes", and "X" = "exit nodes". Entry- and exit-nodes get counted twice. This computation works for any number of exit and entry nodes; if the algorithm has only a single entry and a single exit, the more usual form (L - N + 2) is adequate. In either case, the resulting number is the "cyclomatic complexity" of the algorithm with the sigillum, "V(G)". (Usually I drop the (G), which simply means "the V for *this* graph".)

Applying the first formula to our algorithm above, we have 17 links joining 14 nodes, of which 1 is an entry node and 1 an exit node, so for this algorithm, V = 17 - 14 + 1 + 1 = 5 (the number of paths I actually selected).
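The arithmetic is trivial to encode; a sketch of the general formula applied to the example's numbers:

```python
# Cyclomatic number V = L - N + E + X, where L = links, N = nodes,
# E = entry nodes, X = exit nodes. With E = X = 1 this reduces to
# the usual L - N + 2.

def cyclomatic(links, nodes, entries=1, exits=1):
    return links - nodes + entries + exits

print(cyclomatic(17, 14))  # 5, matching the five basis paths selected
```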

Step (3) of the method outlined above might be rewritten as "Repeat (2) until you have 'V' paths", except that the *achievable* size of the basis test set may be smaller than V because of "predicate correlations" and "predicate dependencies". Predicates are correlated when the truth value of one condition is "forced" by what has happened at a prior condition. A "dependent predicate" is a truth-value forced by some prior processing. Either of these conditions may make a given branch sequence unachievable for all cases in practice, even though it might look feasible on paper, and such unachievable (sub)paths may reduce the achievable size of the basis test set. Often you can't immediately tell from the content of a branch condition which if any previous decisions or computations it's correlated to (e.g., where a boolean or other resultant value in a predicate was computed earlier in the algorithm), and it may be necessary to "interpret" predicates (trace them back through the algorithm) in order to identify the entry variables they depend on. This can be, shall we say, an unexciting activity to engage in, but I never promised excitement.

Saravana, you seemed most concerned with finite state machines, and I was unable to provide you with a full answer there; I hope someone else will be able to. In fact, my answer regarding control flowgraphs is incomplete too, since I haven't considered loops except in passing, but if you have to deal with process (algorithm) models, what I've described will give you a start. (Basic rules for loops: you must cover each subpath *within* the loop; try to test each nested subpath with only a single loop iteration, otherwise observe all minimum iteration criteria; you must test any minimum iteration limits; you must test any maximum iteration limits; apply boundary test concepts to testing limits. We further have to consider nested loops and concatenated loops. And there's also the fact that the condition at node (10) is a compound condition ... Beizer, "Software Testing Techniques", is the principal authority for this stuff.)

As for path testing with cause-effect graphs -- see Nursimulu's and Probert's paper at http://www.cs.ubc.ca/local/reading/proceedings/cascon95/htm/english/abs/nursimul.htm . The paper also (indeed primarily) discusses the use of such models in requirements verification.

These notes are preliminary to an article I am preparing for the British Computer Society's "The Tester" journal, so I'll be doubly interested in any feedback.

Regards to all,

-- Don

Interest in CS as a Major Drops Among Incoming Freshmen

Computing Research News

The Computer Journal, Volume 46, Issue 3, May 2003: pp. 307-318




steve.clarke.weblog: SharePointFav 1.0


Integrating a Legacy Web Application in SharePoint


SharePoint Blogs


Tuesday, May 24, 2005

Professional Tester Magazine- Articles

Professional Tester Magazine

IEEE 829 Documentation and how it fits in with testing


What is a Windows Service and how does its lifecycle differ from a "standard" EXE?

13. What is a Windows Service and how does its lifecycle differ from a “standard” EXE?

Windows Service applications are long-running applications that are ideal for use in server environments. They do not have a user interface or produce any visual output; they are instead used by other programs or by the system to perform operations. Any user messages are typically written to the Windows Event Log. Services can be started automatically when the computer boots. This makes services ideal for use on a server, or whenever you need long-running functionality that does not interfere with other users working on the same computer. They do not require a logged-in user in order to execute and can run under the context of any user, including the system. Windows Services are controlled through the Service Control Manager, where they can be stopped, paused, and started as needed.

How does the lifecycle of Windows services differ from Standard EXE?

The lifecycle of a Windows service is managed by the “Service Control Manager”, which is responsible for starting and stopping the service, and the application has no user interface and produces no visual output. A “standard” executable, by contrast, doesn't require the Service Control Manager and is directly tied to its visual output.

Automated Testing Basics

Automated Testing Basics

A lot of people have asked Greg and me about automated testing. This is an important subject for me since I feel like it's the most important part of my job. I'm a smart guy and I know a lot about software development, so it's clearly not the best use of my time to click on the same button looking for the same dialog box every single day. Part of smart testing is delegating those kinds of tasks away so I can spend my time on harder problems. And computers are a great place to delegate repetitive work.

That's really what automated testing is about. I try to get computers to do my job for me. One of the ways I describe my goal as a tester is to put myself out of a job - meaning that I automate my entire job. This is, of course, unachievable so I don't worry about losing my job. But it's a good vision. My short term goal is always to automate the parts of my job I find most annoying with the selfish idea of not having to do annoying stuff any more.

With people new to automated testing, that's always how I frame it. Start small: pick an easy menial task that you have to do all the time. Then figure out how to have a computer do it for you. This has a great effect on your work, since after you get rid of the first one, that will free up more of your time to automate more and more annoying, repetitive tasks. Now with all this time you can go and focus on testing more interesting parts of your software.

That last paragraph makes it sound like writing automated tests is easy, when in fact it's typically quite hard. There are some fundamentally hard problems in this space, and those are what I'm going to talk about most here. Lots of test tools try to help out with these problems in different ways, and I highly recommend finding a tool that works for you. Unfortunately all the tools I use here at Microsoft are custom-built internal tools (it's one of the advantages of a huge development team: we have people whose full-time jobs are developing test tools for us), so I can't really recommend specific tools. But I will talk about the two major parts of an automated test, the major problems involved, and some of the general ways to solve those problems. Hopefully it will be valuable as a way to better understand automated testing and as a way to help choose your own test tools. As a side note, implementing automated tests for a text-based or API-based system is really pretty easy, so I'm going to focus on a full UI application - which is where the interesting issues are.

Automated tests can be broken into two big pieces: driving your program and validating the results.

Driving Your Program

This concept is pretty basic: if you want to test that pushing the spell check button starts a spell check session, your test has to have some way of pushing the spell check button. But execution can be much trickier.

As far as I know there are really just three ways to do this. You can override the system and programmatically move the mouse to a set of screen coordinates, then send a click event. You can directly call the internal API that the button click event handler calls (or an external API if your software provides it, we use these a lot in Office testing.) Or you can build a crazy system that hooks into the application and does it through some kind of voodoo magic.

I'll admit that the systems I use are the last one. I don't understand exactly how they work, hence the vague description. They're the best way to do it from a testing standpoint, but hard to build. The other two options have serious drawbacks.

Calling into the API is good because it's easy. Calling an API function from your test code is a piece of cake, just add in a function call. But then you aren't actually testing the UI of your application. Sure, you can call the API for functionality testing, then every now and then click the button manually to be sure the right dialog opens. Rationally this really should work great, but a lot of testing exists outside the rational space. There might be lots of bugs that happen when the user goes through the button instead of directly calling the API (don't ask me, more voodoo magic, but I have lots of real life experience that backs me up here.) And here's the critical part - the vast majority of your users will use your software through the UI, not the API. So those bugs you miss by just going through the API will be high exposure bugs. These won't happen all the time, but they're the kind of things you really don't want to miss, especially if you were counting on your automation to be testing that part of the program.

I don't want to discount working through the API. I do it a lot, since it's an easy way to exercise the functionality of the program. Just don't get dependent on it. Remember that if your automation works only this way, you're getting no testing coverage on your UI, and you'll have to do that by hand.

Simulating the mouse is good because it exercises the UI the whole time, but it has its own set of problems. The real issue here is reliability. You have to know the coordinates you're trying to click beforehand. This is doable, but lots of things can make those coordinates change at runtime. Is the window maximized? What's the screen resolution? Is the start menu on the bottom or the left side of the screen? Did the last guy rearrange the toolbars? These are all things that will change the absolute location of your UI. And I'm not even getting into trying to predict where a dialog window will pop up (it's typically where it was when it last closed, and who knows what the last guy was doing).

The good news is there are tricks around a lot of these issues. The first key is to always run at the same screen resolution on all your automated test systems (note: there are bugs you could be missing here, but we won't worry about that now - those are beyond the scope of your automation anyway). I also like to have my first automated test action be maximizing the program (hint: sending the key sequence alt-space, x to the program will do this). This takes care of most of the big issues, but small things can still come up.

The really sophisticated way to handle this is to use relative positioning. If your developers are nice, they can build in some test hooks so you can ask the application where it is (or get an HWND and ask the OS yourself). This even works for child windows: you can ask a toolbar where it is, and the dialog problem I mentioned is easily solved this way. If you know that the 'file -> new' button is always at (25, 100) inside the main toolbar, it doesn't matter if the application is maximized or if the last user moved all the toolbars around. Just ask the main toolbar where it is, tack on (25, 100), and click there.
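The relative-positioning idea can be sketched in a few lines. Everything here is a stand-in: `get_window_rect` and `click` are hypothetical test hooks (in real life they would wrap the HWND query and the mouse-event injection), and the toolbar coordinates are made up.

```python
# Minimal sketch of relative positioning. `get_window_rect` and `click`
# are hypothetical hooks, not a real API: the first returns the screen
# position of a named window, the second sends a mouse click.

def click_relative(get_window_rect, click, window_name, offset_x, offset_y):
    """Click at a fixed offset inside a window, wherever it is on screen."""
    win_x, win_y = get_window_rect(window_name)
    click(win_x + offset_x, win_y + offset_y)

# The 'file -> new' button lives at (25, 100) inside the main toolbar,
# so we ask the toolbar where it is and tack the offset on.
clicks = []
click_relative(
    get_window_rect=lambda name: (640, 120),  # pretend the toolbar is here
    click=lambda x, y: clicks.append((x, y)),
    window_name="MainToolbar",
    offset_x=25,
    offset_y=100,
)
```

The point is that the test only hard-codes the offset inside the toolbar; the toolbar's absolute position is looked up at runtime, so moved toolbars or unmaximized windows don't break the click.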

So this has an advantage over just exercising the APIs, since you're using the UI too, but it has a disadvantage as well - it's a lot of work.

No one method will be right for all the cases you're trying to automate. Think hard about the goals of each automation case and choose the right way to drive your application to meet those goals (to back this up, I've written many specs for test case automation.)

Results Verification

So you've figured out the right way to drive your program, and you have this great test case. But after you've told your program to do stuff, you need a way to know whether it did the right thing. This is the verification step in your automation, and every automated script needs it.

You have three options. You can fake it, do it yourself, or use some kind of visual comparison tool.

Faking verification is great. I don't mean that you just make up the answers; I'm talking about making some assumptions about the functionality of your program and the specific functionality this automated test is checking (once again, having a well-defined scope is critical). For example, when I was writing automation for the spelling engine in Visio, I wrote a test that typed some misspelled text into a shape: “teh ”. This should get autocorrected to “the ”. But it's hard to programmatically check that “the ” was correctly rendered to the screen. Instead, I went and asked the shape for the text inside it and just did a string compare with my expected result.
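A minimal sketch of that string-compare verification, with a plain dictionary standing in for Visio's object model (`get_shape_text` is a hypothetical hook, not a real Visio API):

```python
# "Fake it" verification: compare the text stored in the shape against
# the expectation, instead of checking what was rendered on screen.

def verify_autocorrect(get_shape_text, shape_id, expected):
    """Ask the shape for its text and string-compare with the expectation."""
    return get_shape_text(shape_id) == expected

# Pretend the engine already autocorrected "teh " to "the " in shape 1.
shape_texts = {1: "the "}
passed = verify_autocorrect(shape_texts.get, 1, "the ")
```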

There are lots of bugs this scenario can (and did) miss. The big one: when the spelling engine auto-corrects the text, it has to tell someone it changed so the new text gets redrawn. Every now and then we'd have a recurring bug where this didn't happen, and my test didn't catch it. But I didn't define my test scope to catch that; I even explicitly called out in the test description that it wouldn't catch that issue. This way I still get automated coverage on some things, but don't get complacent thinking that just because I have automation I don't have to manually check that area. The key to “faking it” is to limit the functionality you're testing and make explicit assumptions that other things will work right. Then you know you aren't testing those other things and they still need manual coverage.

There are other ways to fake verification too. One of my favorites is a test case that says “if the program doesn't crash while executing this case, it's a pass.” Don't laugh - this can be really valuable. I use it mostly in stress scenarios. I do something really atrociously bad to the program, like adding a couple hundred thousand shapes to a Visio diagram or setting the font size to 18000 pt. I don't really have an expectation of what working right means at that point, and it would be hard to check; I just want to make sure things don't blow up. Once again, the key to this kind of fake-it verification is having very narrow parameters on what the test is really testing, and understanding what isn't being tested.
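The "no crash means pass" idea is trivial to express in code. This is only an illustration - the stress action here just builds a big in-memory list, standing in for something like adding two hundred thousand shapes to a diagram:

```python
# "If it doesn't crash, it's a pass": run the stress action and treat
# any raised exception as a failure.

def stress_test_passes(action):
    """Return True if the action completes without raising."""
    try:
        action()
        return True
    except Exception:
        return False

def add_many_shapes():
    # Stand-in stress load: two hundred thousand "shapes" at 18000 pt.
    doc = []
    for i in range(200_000):
        doc.append({"id": i, "font_pt": 18000})
    return doc

survived = stress_test_passes(add_many_shapes)
```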

The second method is to just do the verification yourself. This is basically the easy way out (cheating, even) of the automation world, but it's sometimes the most cost-effective way to go. An example of this is a big suite of tests that drives the program, and when it gets to a state where it wants to see if the right thing happened, instead of doing a verification it just grabs a bitmap of the screen and saves it off somewhere. Then you come along and click through all the bitmaps, quickly checking whether they all look right. This has real advantages over manual testing, especially if you're working with test cases that have complicated setups (lots of things to manipulate and settings to change). You basically save all the time you would have spent clicking buttons and instead just make sure the end results are correct. Most importantly, you don't have to implement a way to programmatically compare two pictures in a meaningful way (more on this later).
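A sketch of that capture-for-review loop, with hypothetical `capture_screen` and `save` hooks standing in for a real screenshot API and disk writes:

```python
# Drive each case, then save a screenshot for a human to flip through
# later, instead of verifying programmatically.

def run_and_capture(cases, capture_screen, save):
    """Run each named test case and save one screenshot per case."""
    for name, drive in cases.items():
        drive()
        save(name + ".bmp", capture_screen())

saved = {}
run_and_capture(
    cases={"bold_text": lambda: None, "page_setup": lambda: None},
    capture_screen=lambda: b"bitmap bytes",  # stand-in for a real capture
    save=saved.__setitem__,
)
```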

You pay for those cost savings in other places, though. You don't get automated logs that can track which tests pass and fail over time. You don't get your automation to a point where you don't have to think about it and just get notified when something goes wrong. And you still have to spend a lot of time flipping through images (which, as you may guess, is exceptionally boring).

Overall I only recommend this if you're working in a space where doing programmatic visual comparison is really hard (like some advanced rendering scenarios, or times when your program's output is non-deterministic), or if you're not building a lot of automation and the cost of finding/buying/learning a visual comparison tool is greater than the cost of you sitting there looking at bitmaps for the full product cycle and beyond.

Using a visual comparison tool is clearly the best, but also the hardest. Here your automated test gets to a place where it wants to check the state, looks at the screen, and compares that image to a master it has stored away somewhere. This process suffers from a lot of the problems the mouse pointer option has. Lots of things can change the way your program gets rendered on screen, and visual comparison tools are notoriously picky about any small change at all. Did that toolbar get moved by the last user, just a couple of pixels? Too bad, the comparison fails.

Fortunately, advanced comparison tools (or smart test scripts) can dodge a lot of these issues. The methods are similar to the smart methods of driving an application. For example, you could grab a picture of only the toolbar you care about and compare those two pictures - then it doesn't matter where it moved to, as long as it rendered correctly. Another approach is to compare only the canvas of your application (where the content goes: the drawing page in Visio, the writing page in Word, etc.). That way you don't care about the toolbars, the start menu, or other kinds of UI (except that they can change the size of your canvas - be careful about that).
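Cropping before comparing can be shown with a toy example, where an "image" is just a list of pixel rows. Real tools work on bitmaps with smarter tolerances, but the idea is the same:

```python
# Region-based visual comparison in miniature: crop to the region you
# care about, then compare with a small tolerance for stray pixels.

def crop(img, x, y, w, h):
    """Cut out just the region under test (e.g. the drawing canvas)."""
    return [row[x:x + w] for row in img[y:y + h]]

def images_match(img_a, img_b, max_diff_pixels=0):
    """True if the images differ in at most max_diff_pixels pixels."""
    if len(img_a) != len(img_b):
        return False
    diffs = sum(
        1
        for row_a, row_b in zip(img_a, img_b)
        for pa, pb in zip(row_a, row_b)
        if pa != pb
    )
    return diffs <= max_diff_pixels

screen = [[0] * 8 for _ in range(8)]
screen[2][3] = 1                  # one pixel of content on the "canvas"
master = [[0, 1], [0, 0]]         # stored master of the 2x2 canvas region
match = images_match(crop(screen, 2, 2, 2, 2), master)
```

Because only the cropped canvas is compared, a toolbar moving elsewhere on the screen cannot fail this check.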

This is clearly a complicated issue, and it's the most problematic part of automated testing for standard user applications. Fortunately, test tool suites are working hard on these problems, trying to come up with reasonable solutions. I know the internal Microsoft tools I use are getting much better at this; unfortunately, I don't know what outside industry tools are doing in this space.

Other Thoughts

Of course, just figuring out how to drive your program and check the results doesn't get you an automated test suite. There is still a lot of work in designing those tests. For me, this is the hardest part of automation. Deciding what the tests should do is much more difficult than figuring out how to do it.

I guess the easiest way to say this is: writing automation is a valuable skill, but writing really good, reusable, and effective automation will make you a super star. I'm still working on that second part.

But I promise as I get better at it I'll write about it here. Until then, I hope this helps, and if you have any questions I'd love to hear them.

Chris Dickens
Microsoft Office Tes

Remoting FAQ's

MumbaiUserGroup: "What distributed process frameworks outside .NET do you know?
Distributed Computing Environment/Remote Procedure Calls (DCE/RPC), Microsoft Distributed Component Object Model (DCOM), Common Object Request Broker Architecture (CORBA), and Java Remote Method Invocation (RMI).

What are possible implementations of distributed applications in .NET?
.NET Remoting and ASP.NET Web Services. If we talk about the Framework Class Library, noteworthy classes are in System.Runtime.Remoting and System.Web.Services.

When would you use .NET Remoting and when Web services?
Use remoting for more efficient exchange of information when you control both ends of the application. Use Web services for open-protocol-based information exchange when you are just a client or a server with the other end belonging to someone else.

What's a proxy of the server object in .NET Remoting?
It's a fake copy of the server object that resides on the client side and behaves as if it were the server. It handles the communication between the real server object and the client object. This process is also known as marshaling.

What are remotable objects in .NET Remoting?
Remotable objects are the objects that can be marshaled across the application domains. You can marshal by value, where a deep copy of the object is created and then passed to the receiver. You can also marshal by reference, where just a reference to an existing object is passed.

What are channels in .NET Remoting?
Channels represent the objects that transfer serialized objects from one application domain to another, from one computer to another, and from one process to another on the same box. A channel must exist before an object can be transferred.

What security measures exist for .NET Remoting in System.Runtime.Remoting?
None. Security should be taken care of at the application level. Cryptography and other security techniques can be applied at application or server level.


Microsoft Interview Questions: "
Interviewing at Microsoft

Over the years I've been collecting interview questions from Microsoft. I guess I started this hobby with the intent of working there some day, although I still have never interviewed there myself. However, I thought I'd give all of those young Microserf wanna-bes a leg up and publish my collection so far. I've actually known people to study for weeks for a Microsoft interview. Instead, kids this age should be out having a life. If you're one of those -- go outside! Catch some rays and chase that greenish monitor glow from your face!

If you've actually interviewed at Microsoft, please feel free to contribute your wacky Microsoft interview stories.
Scott Hanselman's 'Great .NET Developer' Questions

Tue, 2/22/05 12:30pm

Scott Hanselman has posted a set of questions that he thinks 'great' .NET developers should be able to answer in an interview. He even splits it up into various categories, including:

* Everyone who writes code
* Mid-Level .NET Developer
* Senior Developers/Architects
* C# Component Developers
* ASP.NET (UI) Developers
* Developers using XML

Am I the only one that skipped ahead to 'Senior Developers/Architects' to see if I could cut Scott's mustard?
Jason Olson's Microsoft Interview Advice

Fri, 1/21/05 8:16pm

Jason Olson recently interviewed for an SDE/T position (Software Development Engineer in Test) at Microsoft and although he didn't get it, he provides the following words of advice for folks about to interview for the first time:

* Just Do It
* Remember, no matter how well you might know your interviewer, it is important not to forget that it is still an interview
* Pseudocode! Pseudocode! Pseudocode!
* But, as long as you verbalize what you're thinking you should be in pretty good shape
* Bring an energy bar or something to snack on between breaks in order to keep your energy level up
* [B"

Software Test Engineering @ Microsoft :

Software Test Engineering @ Microsoft :

Monday, May 23, 2005

Remote Desktop Connection Web Connection Software Download

Windows XP: Remote Desktop Connection Web Connection Software Download


Avignon: "What is Avignon?
Avignon is an acceptance testing framework developed in-house at NOLA Computer Services. For programming teams that use the eXtreme Programming (XP) methodology, Avignon lets customers express acceptance tests in a non-ambiguous manner before development starts.

How does it work?
Avignon uses XML so that anyone can define their own language for expressing acceptance tests. Each XML tag has an associated Java class that performs the actions required for that part of the test.

What does it include?
Avignon consists of a JUnit test case that executes test scripts and a small set of prebuilt tag handlers. Out of the box, Avignon can manipulate databases and interact with Web pages through either an HttpUnit browser interface or through its own IE browser integration.

What else do you need?
To run Avignon you will also need a copy of:
# Java JRE version 1.3 or greater
# JUnit
# HTTPUnit
# JAXP (for pre-1.4 Java)
# Xalan (for pre-1.4 Java)

How is Avignon distributed?
Avignon is distributed under the terms of the GNU General Public License.
www.gnu.org/licenses/gpl.html "

Jiffie - Java InterFace For Internet Explorer

Jiffie - Java InterFace For Internet Explorer

Build a Configurable Web-Based Bug Management Tool Using ADO.NET, XML, and XSL


Brian Marick's Blogs

Exploration Through Example

Wiki: ScriptingForTesters

Wiki: ScriptingForTesters

weinberg shape Forum quality software engineering management

weinberg shape Forum quality software engineering management

James Bach's Blog

James Bach's Blog

Base64 Encoder and Decoder

Base64 Encoder and Decoder

XSS (Cross Site Scripting) Cheatsheet: Esp: for filter evasion - by RSnake

XSS (Cross Site Scripting) Cheatsheet: Esp: for filter evasion - by RSnake

Sunday, May 22, 2005

Johanna Rothman: No More Second Class Testers!

Johanna Rothman: No More Second Class Testers!

Methods & Tools Archives

Methods & Tools Archives

Brian Marick - Writings

Brian Marick - Writings

Software Testing Institute - Plan on Testing Success

Software Testing Institute - Plan on Testing Success: "A good test plan is the cornerstone of a successful testing implementation. While every testing effort may be unique, most test plans include a common content framework. This article presents the components that make up this framework, and serves as a guide to writing your own test plan.


This section establishes the scope and purpose of the test plan. This is where to describe the fundamental aspects of the testing effort.

* Purpose - Describe why the test plan was developed and what its objectives are. This may include documenting test requirements, defining testing strategies, identifying resources, and estimating schedules and project deliverables.

* Background - Explain any events that caused the test plan to be developed. This can include implementing improved processes or adding new environments or functionality.

* Technical Architecture - Diagram the components that make up the system under test. Include data storage and transfer connections, and describe the purpose each component serves, including how it is updated. Document the layers, such as presentation/interface, database, report writer, etc. A higher-level diagram showing how the system under test fits into a larger automation picture can also be included if available.

* Specifications - List all required hardware and software, including vendors and versions.

* Scope - Briefly describe the resources the plan requires, areas of responsibility, stages, and potential risks.

* Project Information - Identify all the information that is available in relation to this project. User documentation, project plans, product specifications, training materials, and executive overview materials are examples of project information.


This section of the test plan lists all requirements to be tested. Any requirement not listed is outside of the scope of the test plan. (The day you’re h"

Software Testing Institute - Software Tester Salaries

Software Testing Institute - Software Tester Salaries

Bret Pettichord's Software Testing Hotlist

Bret Pettichord's Software Testing Hotlist

Improving Software Testability

Improving Software Testability


Many things go into the release of high-quality software. The software needs to be well conceived, well designed, well coded, and well tested. If any of these criteria are ignored, the production of a high-quality software product will occur only by sheer luck, and probably not at all. However, most development efforts strive to do more than ship high-quality software - they strive to ship high-quality software on time. So this article focuses on how to efficiently achieve a well-tested software product.

There are many steps that the development team, the product manager, and testing team can take to improve the testability of the software, thereby making the best use of testing resources.

As an outsourced QA and test support services firm, ST Labs partners with developers of market-driven and corporate software to provide testing, consulting, and training services. As such, we typically don't test mission-critical or life-critical software. This article focuses on how to improve the testability of software that needs to be of a high quality, but also needs to be cost-effective to produce -- in other words, consumer or corporate software.

What is Software Testability?
Software testability can be defined in many different ways. ST Labs, in its training courses, has defined software testability simply as "a judgement of how effectively the software can be tested." This is a very broad definition; it looks at all aspects of a program that might make it more difficult to test. These can range from incomplete or ambiguous specifications to unstable code.

Other common definitions of testability are more formal. The IEEE Standard Glossary of Software Engineering Terminology (1990) defines software testability as "(1) the degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met, and (2) the degree to which a requirement is stated in terms that permit establishment of test criteria and performance of tests to determine whether those criteria have been met." This definition addresses the requirements issue and assumes that test criteria have been, or will be, formally established. This is not necessarily the case for all development projects. Many companies work with a less formal model, so they need a less formal definition of software testability.

Miller and Voas, in a number of articles, have used the following definition of software testability: "the probability that a piece of software will fail on its next execution (with a particular assumed input distribution) during testing if the software includes a fault." This definition is useful for establishing a mathematical measurement of the testability of software. This is most often done for software that must be highly reliable - avionics software, military software, medical software and the like -- software upon which people's lives may depend. This type of software is not the focus of this article, so we will use our simple definition of testability: a judgement of how effectively the software can be tested.

What factors make software more or less testable?
Incomplete, out-of-date, ambiguous or contradictory product references lower testability. By product references, we mean any document whose purpose is to describe what the product should look like, how it should function, who its intended users are and how these users are expected to use it. These are typically requirements documents, design specifications, marketing specifications and so on. These documents serve not only to help developers create the software, but they help testers form valid expectations about the behavior of the software. It is difficult for testers to identify problems with the software if they do not have access to detailed information by which to develop testing criteria.

Along the same lines, any additional information that developers can provide testers about how the software has been coded, such as limits, boundaries, error handling routines and algorithms, will assist testers in developing effective tests.

Code that hides faults is difficult to test. According to the fault/failure model first introduced by Morell, a software failure occurs only when the following three conditions occur in this order:

1. An input causes a fault to be executed.
2. The fault causes a data state error.
3. The data state error propagates to an output state (becomes visible to a user in the form of a failure).

Software errors become very difficult to isolate if the input that causes the fault is not followed almost immediately by an observable faulty output state. If data state errors remain undetected through several intermediate transactions, then when the fault finally causes a failure, it can be almost impossible to pinpoint the cause. This type of problem can easily go undetected during testing, only to surface later, perhaps with some frequency for certain classes of users.

Sometimes a data state error can exist, but the output state will appear normal. This occurs if two different input variables can produce the same result. Consider the following simple example.

A developer is tasked with writing an application that contains the following business rule: if the respondent is 18 or older AND they play golf, then send them the Golf Brochure. However, the developer hasn't had enough coffee (or maybe has had too much coffee!) the morning he writes this code, and instead of making the requirement "18 or older" he makes the requirement "older than 18." A tester who knows the business rules may try the following tests, with these results:

Age of Respondent      Plays Golf   Send Brochure      Fault?   Fault Detectable?
(algorithm's age       (input)      (visible output)
evaluation in parens)
17 (N)                 Y            N                  N        --
17 (N)                 N            N                  N        --
18 (N)                 Y            N                  Y        Y
18 (N)                 N            N                  Y        N

In this case, if the tester did all four tests, he would discover the problem. However, if the situation were more complex and the tester were pressed for time, he might decide to skip one or two of the tests. Perhaps he would simplify by making sure to test at least once for 17 and once for 18, and once for golf and once for no golf, not taking into account the interaction between these variables. He could also decide to use 19 or some other age instead of 18, expecting "18 and up" to be a single equivalence class, in which case he wouldn't detect the fault at all. The way to get around this problem is to make the result of the evaluation of the age criterion visible: the tester should be able to see directly that "18" has been falsely interpreted as a "No."
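The example's business rule is easy to reproduce. The buggy and correct versions below differ only in `>` versus `>=`, and only the (18, golfer) input makes the fault visible at the output:

```python
# The business rule from the example, with the off-by-one fault and the
# correct version side by side.

def send_brochure_buggy(age, plays_golf):
    # The fault: "older than 18" where the spec said "18 or older".
    return age > 18 and plays_golf

def send_brochure_correct(age, plays_golf):
    return age >= 18 and plays_golf

# Only the (18, golfer) row makes the fault visible: the buggy version
# withholds the brochure where the correct one sends it. The
# (18, non-golfer) row misevaluates the age too, but the output matches,
# so the fault stays hidden.
visible_failure = send_brochure_correct(18, True) != send_brochure_buggy(18, True)
hidden_fault = send_brochure_correct(18, False) == send_brochure_buggy(18, False)
```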

Boolean functions are particularly susceptible to this type of problem.

Lack of proper tools and training make testing less effective. Although lack of proper tools and training does not make the code itself less testable, it does lessen the effectiveness of the testing effort. Tools can be very useful for reducing the cost of developing appropriate inputs and can increase the visibility of outputs. Automation tools can also be used for random or heuristic black box testing. Trained testers are needed to make effective use of these tools. The better testers understand the development environment and the more they know about how a product has been coded, the more efficient they will be at finding and analyzing bugs.

Strategies for improving testability of software:
What can developers do? Statistics have shown that as much as 30% of new code contains faults. Working with this reality means that adequately testable code becomes imperative for the creation of a high quality product. There are some things that developers can do to make their code more testable.

Simpler design leads to software that is easier to test. This includes several aspects of the design. Simplicity of functionality means the feature set of the software should be the minimum necessary to meet the requirements - no unspecified embellishments. Structural simplicity requires an architecture that is clean and logical; modularity will help limit the propagation of faults. Simpler code is readable and structured, as opposed to "spaghetti code," which is difficult to follow and difficult to test. The development team should have agreed-upon standards.
Inserting self-tests (assertions) into the code makes faults readily observable. With an assertion, the program will fail soon after the execution of a fault, rather than continuing to operate correctly until the data state error finally triggers a failure. The liberal use of assertions makes problems much easier to spot, as well as making them easier to fix. The developer can see at a glance where the fault lies that caused the assertion failure.
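As a small illustration (not from the article), an assertion on an intermediate value makes the program fail fast at the fault instead of letting a bad data state propagate to some later output:

```python
# A self-test (assertion) that surfaces a data state error at its source.

def apply_discount(price, percent):
    """Return the price after a percentage discount."""
    discounted = price * (1 - percent / 100)
    # Self-test: a discount can never make the price negative or larger.
    # If a caller passes a bad percentage, we fail here, not three
    # transactions later in some report.
    assert 0 <= discounted <= price, f"bad discounted price: {discounted}"
    return discounted
```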
Anything the developer can do to make output more visible will increase the testability of the software. One obvious tactic is to make the source code available to testers. The advantages of this are discussed in an upcoming section on working in a development environment. With minimal coding, developers can also greatly enhance the testability of the software. Developers can add special switches that testers can activate to force the software to behave in particular ways. They can create applets that do things like monitor memory usage. They can make logging tools so testers can discover exactly what actions the software has performed. Which of these tools are most appropriate will depend on your product. Usually, the additional time needed to develop these aids is more than made up for in saved testing and debugging time.
Careful, complete code comments and other technical documentation can facilitate not only testers, but developers as well. Code reviews and white box testing are much more efficient when the code is clearly commented and all pertinent information regarding the program has been documented.
Code to facilitate automation and internationalization. When possible, use standard controls. Developing automated test scripts for an application that does not use standard controls is a painstaking process that is seldom worth the time and effort. Plan to stabilize the user interface and feature set early. This will avoid time-consuming and expensive rework of the automated tests and will free automators for other important tasks that need to be done at the end of the project cycle. Consider prototyping the UI in VB or some other appropriate tool. It may save you a great deal of time at the end of the project if you spend a few weeks at the beginning of the project finalizing the UI and developing a prototype. Remember that test automation is a development activity. A prototype allows the automation development effort to take place in parallel with the product development effort, significantly improving the effectiveness of testing and the chances for releasing the software on schedule. Lastly, liberal use of string tables will greatly aid both automation and localization efforts.
Communicate frequently with the testing team. Testers need to know about all changes to the specification as well as any coding problems, process problems or other snags. Be sure to let the test team know which areas of the product are ready to test and which aren't. This can save both testers and developers much time and frustration. Developers should make themselves available to the testers to answer questions about the code. Sometimes one or two cheerfully answered questions can save the testers days of trial and error testing. If appropriate, copy the testers on development group e-mails and let them feel they are a part of the development team. Bonds formed early in the product life cycle can go a long way towards easing tensions that invariably crop up during the stressful days just prior to release of the software.

What can product managers do? The project or product manager is usually responsible for coordinating the efforts of a number of groups to ensure a successful product release. There are several things someone in this key role can do to increase the testability of the software.

First, the product manager can make sure the product references are clear, complete and up-to-date. One of the most frequently heard complaints from testers is that the specifications and/or requirements documents are incomplete and out-of-date. Development efforts tend to be very dynamic. Make sure an appropriate person is assigned to the task of updating the documentation, otherwise even excellent documentation can become obsolete a few weeks after coding commences.
If automated testing is planned, insist on early agreement on the feature set and UI. Automation efforts cannot be effectively undertaken in an environment where the UI is volatile.
Strongly encourage interaction between the developers and the testing team. Anything you can do to foster a spirit of cooperation will increase the chance of a successful, on-time release of the software. Establish formal lines of communication between the two groups early in the project. Forward any e-mail you receive from one group to the other, if appropriate. Create opportunities for the two groups to interact in an informal setting. Off-site outings (bowling, golf, movies, etc.) will give developers and testers a chance to get to know each other and become comfortable interacting. Although these tactics may not directly create more testable software, they will allow each team to better understand the other team's problems and challenges. This in itself should encourage the creation of more testable software.

What can testers do? Testers can do many things to make their own jobs easier.

Testers should learn as much as they can about the design of the product they are testing, and the technologies involved. This will help them make informed decisions about what and how to test. Without in-depth knowledge of both the product and the technologies involved, testers could waste a lot of valuable testing time on tasks that are unnecessary or performed incorrectly.
Insist on testing in a debug environment. The testing environment should be set up to mirror the development environment. This will enable testers and developers to work more closely together when tracking down and eliminating bugs. If testers have programming skills, they will need a development environment to perform white box testing. Although periodic testing in a non-debug environment is essential, the huge advantage that the debug environment provides in making output more visible means that most testing should take place with debug code activated.
There are many tools available to assist testers. Take advantage of these tools. There are tools for code coverage, tools for automation, tools for stress testing, tools for load and volume testing, tools to help you generate valid data sets and tools which allow you to manage the test processes. These tools, when properly used, can make your product much more testable.
Write clear bugs. Well-written bugs enable the developers to do their job more quickly with less frustration (and they will have more respect for the tester). Others may also need to reproduce your bug: your boss, other testers, and the product manager are just a few who may have occasion to look at and try to understand your bugs. Set up team bug report standards and make sure everyone is aware of them.

Creating testable software is not just the responsibility of one particular person or team. Everyone who is helping produce the software has a role in improving testing effectiveness. The more testable your software is, the better chance you have of releasing a high quality product on time.


i. K. Miller and J. Voas, "Software Testability: The New Verification," IEEE Software, Vol. 12, No. 3, May 1995.
ii. Larry Joe Morell, "A Theory of Error-based Testing," Technical Report TR-1395, University of Maryland, Department of Computer Science, April 1984.
iii. James Bach, "Principles of Software Testability," ST Labs internal publication, 1994.

Achieving Quality By Design | VeriTest Testers' Network

VeriTest Testers' Network

Not All .NET Enabled ERP Solutions Are Created Equal
By: Joey Benadetti, President, SYSPRO USA, SYSPRO USA

Microsoft's .NET strategy confuses many users and vendors. But because of Microsoft's massive marketing campaign on .NET's benefits, many vendors are calling their solutions “.NET enabled” even though their products fall short of fulfilling Microsoft's .NET parameters. An examination of .NET's underlying technology reveals the fallacy of many vendors' claims.

Software Magazine - Building Secure Software: Can We Get There From Here?

Building Secure Software:
Can We Get There From Here?
by Jim Reavis

We are starting to awaken from our dreamlike lust for nifty new software and are finding that the dark side of this software we built is pretty scary stuff.

In one of my favorite movies, “The Matrix,” our protagonist Neo is given the option of taking the blue pill and returning to slumber in the machine-simulated world of the Matrix, or taking the red pill and waking up to see the horribly scarred world as it really is. As an information security professional, this scene resonates with me because I feel that only now are my good friends on the software development side of the house starting to take the red pill en masse. You see, in the roaring 1990s, when it came to software it was all about features, time to market and getting online ASAP. We are starting to awaken from our dreamlike lust for nifty new software and are finding that the dark side of this software we built is pretty scary stuff.

As far as I can tell, we live in a world with two types of software — the code with vulnerabilities we know about and the code with vulnerabilities we don’t know about. Time and popularity seem to be the two factors that draw the glitches out of all software, but the fact that it will be buggy is as certain as the rain in Seattle. From a Chief Security Officer’s perspective, this has been a vexing problem and one over which the CSO has had little control. The CSO typically has no choice but to deal with the software after it is baked, whether it is developed internally, bought off the shelf or custom-developed offshore. A little security testing during pre-production integration perhaps, but basically we security professionals are mitigating risks within living, breathing production systems.

In San Francisco in February, I was lucky enough to be the master of ceremonies and moderator for the Secure Software Forum, an event organized by SPI Dynamics and sponsored by Microsoft. The forum was actually mentioned by Bill Gates at his RSA keynote speech, which might account for the overflow crowd. The premise of this forum was that maybe software vulnerabilities are not something we have to accept and in fact maybe there are some strategies to turn the tide. So what, you say? Aren’t I just recycling talk about Trustworthy Computing and Microsoft sending programmers to security school for a month? Is Microsoft any better for the effort?

The biggest problem we have in information security is a lack of credible metrics, but I do believe Microsoft has made some serious strides. Their development environment is much more robust than in previous years. A big problem they deal with is the consequences of architectural decisions made years ago, before Trustworthy Computing was a glimmer in anyone’s eye. But what we learned at the Secure Software Forum is that a singular focus on Microsoft and their security vulnerabilities is to ignore the millions of other vulnerabilities that are out there. To quote a CSO I know, “We have over 300 buggy applications that we need to document controls for Sarbanes-Oxley. I would love to blame Microsoft, but we wrote every one of those applications in house.” And that was a key finding — this problem was created by all of us and it belongs to all of us.

For the forum, SPI Dynamics recast the acronym ASAP, coining the term Application Security Assurance Program. They are basically describing the need to integrate security throughout the software development life cycle and throughout the production lifespan of the application. This is the direction we need to stampede toward and change the face of software development. Most of my CSO friends are not there yet. As one of our panelists, Theresa Lanowitz of Gartner related to the audience, only about 5% of organizations have done a superior job of integrating security into the software development life cycle. That is appalling, especially when you consider the benefits those organizations could derive. Some studies have suggested that software bugs that are remediated during the production phase are 60 times more expensive than if caught and fixed during the design phase. The ratio isn’t quite as exciting during the coding or quality assurance testing phase, but in all cases is much cheaper than fixing released software.

The CSO needs to be involved in software development from the beginning — in fact before the beginning. When a new business initiative that requires new software is dreamed up, the CSO needs to do a risk assessment. He needs to see to it that security checks are built into all logical milestones of the software development process. In some cases the CSO needs to be an advocate for revamping the process or providing new tools and automation to help assure a better quality product.

Not Happening Yet

By and large, though, this isn’t happening yet. Why? CSOs, security teams and application developers are different creatures that historically have few points of intersection. Developers like to get whacked on Red Bull and fly down a mountain at 100 miles an hour. The CSOs sit at home quietly behind deadbolt-locked doors and have no friends. Seriously, the cultural difference is there, but the business imperative is also at odds. For developers who are rewarded for getting a product out the door, time to market is crucial, and process junkies who slow things down are the enemy.

CSOs are slowly getting the leverage they need to fight this battle. Regulations such as Sarbanes-Oxley are giving their mission some board-level backing. Developers are getting hungrier for technical education related to security. But we also need to look at the tools we equip our developers with, and do a lot more automation to eliminate repeated mistakes. Jon Callas, CTO of PGP Corporation, who was reacting to comments at the forum that seemed to indicate that C++ was going to be with us forever, said, “Sure C++ is going to be with us, but it doesn’t need to be everywhere.” The CTO of SPI Dynamics, Caleb Sima, who has developed a class library technology called SecureObjects, noted: “If we just did input validation in our software, we could solve 70% of our problems. When a field is supposed to be for phone numbers, obviously only a phone number should be entered.”
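Sima's phone-number point can be made concrete with a few lines of validation code. This is a minimal sketch, not SecureObjects itself; the regular expression (North American phone formats) and function name are assumptions for illustration.

```python
import re

# Constrain a field to the shape it is supposed to hold before the value
# ever reaches application logic. Pattern covers common North American
# phone formats such as 555-867-5309 and (555) 867-5309.
PHONE_RE = re.compile(r"^\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}$")

def validate_phone(value: str) -> bool:
    """Accept only strings shaped like a phone number."""
    return bool(PHONE_RE.match(value))

assert validate_phone("555-867-5309")
assert validate_phone("(555) 867-5309")
# An injection attempt is rejected at the boundary, never reaching the app:
assert not validate_phone("'; DROP TABLE users; --")
```

Applying this kind of whitelist check to every input field is the "70% of our problems" idea: most injection-style attacks rely on a field accepting data it was never meant to hold.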

I don’t know if we will ever see the actual end of this movie, but I have hope that the good guys can gain some ground. If events like the Secure Software Forum become more commonplace, and more importantly, developers and security practitioners start practicing what they hear, we will see software that doesn’t scare us all.

Jim Reavis is president of Reavis Consulting Group and editor of the CSOInformer newsletter. He focuses on security issues of top concern to IT professionals. He is an advisor to the International Board of ISSA, the world’s largest organization of information security professionals. Learn more at: www.reavis.org.

Software Magazine - Software Security Code Review: Getting it Right Before You Release

Software Security Code Review:
Getting it Right Before You Release
by Mark Curphey and David Raphael

Code reviews help in two ways: development teams determine how hackers might break their code, and they learn ways to build more robust applications

In the world of security there tend to be two camps: those who break things and those who build things. It is often said that people who are good at building things are not good at breaking them, and vice versa. While this may or may not be true, it is certainly true that if you understand how people break things, you can usually figure out how to build things better. Software security code reviews serve two functions: they allow development teams to determine how malicious hackers might break their code, and they help development teams learn ways to build more robust software.

The majority of security testing has been and continues to be what is referred to as black box testing. The name describes a type of testing where the test team does not know how the system works on the inside. This black box approach to testing is the same technique as that of a hacker; the tester must deduce how the system functions before deciding how to attack it. In reality this process is heavily driven by guesswork. While today the majority of people use this approach, there is a better way.

The system's source code is its DNA, describing exactly how the software works. The advantages that this DNA provides in diagnosing software issues should be obvious. Even hackers now favor the advantages code analysis affords, typically decompiling security patches released by vendors to determine the original weakness in the system that the patch fixes.

In this article we describe the approach to software security code reviews we have developed at Foundstone, working with hundreds of clients and numerous types of systems. Our approach has two main phases and should apply to most development processes and most technologies with little customization needed.

The two key phases are:

* Threat Modeling
* Code Inspection

In the first part of this article we will look at threat modeling and in the second part we will delve into code inspection.

Threat modeling is a technique that was pioneered by Microsoft and is essentially a pragmatic twist on a classic security technique called risk analysis. The idea of threat modeling is to systematically explore the application while thinking like an adversary. Thinking like an adversary (attacker) is a key element. The process forces users to explore weaknesses in the architecture and determine if adequate countermeasures are in place. Building a threat model is best conducted in a workshop format with a team that includes the system architects and system developers all gathered in one place (physically or virtually). The team should appoint a scribe to document the model and a lead who is an experienced threat modeler to lead the process. It is worth noting that workshops work well only when people’s cell phones are turned off, PDAs shut down and the team is able to exclusively dedicate their thought process. The tasks we have to complete are as follows:

* Describe the System
* List and Tag Assets
* Define System and Trust Boundaries
* List and Rank Threats
* List Countermeasures

It is rare that these steps are conducted in a strict incremental manner. As with software development, iterative processes tend to be more pragmatic and more complete.

Describe the System

The modeling process starts with a brainstorming session in which the objective of the team is to describe the complete system both graphically and textually. Experienced modelers will ensure that the team describes the system as it is going to be or has been built, and not as the team would ideally like it to be built! In general you should not focus on the style or layout but on the content itself. Heavy usage of white-boards or flip-charts is common and effective. We have found UML models such as use case diagrams and sequence diagrams to be very useful starting points, but certainly not essential. Microsoft recommends data flow diagrams, but our experience is that specific formats are not important; developing an accurate and realistic description of the system is. Graphical descriptions work well as they are unambiguous and easily refactored. As well as architectural descriptions (usually graphical), the team should describe the functionality and features, such as “Login” or “Obtain Balances”. An acid test of this task is for a lay person to be able to understand what the system does and how it does it.

List and Tag Assets

Using the drawings, annotations and lists that describe the system, the team should then create a list of the assets that make up the entire system. Assets generally can be split into tangible assets and intangible assets. Tangible assets may include things like password files, configuration files, source code, databases and network connections. Intangible assets of systems are very difficult to describe and many people waste an unnecessary amount of time attempting to list them. The team should list the assets in a table and attempt to group them into common asset types if possible. The next step, tagging the assets, is extremely important. There are several ways to do this and several tags worth considering adding to assets to help with the code inspection. At the highest level you should consider determining whether the asset is architecturally significant from a security perspective. That is to say, does the asset play a role in enforcing the security model? If it does, it is highly likely that you will want to perform a security code inspection on it. Other things to consider could be security mechanisms such as authentication, authorization, data protection, and data validation.
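A tagged asset list from this step might look like the following sketch. The asset names, tag fields and mechanism labels are illustrative assumptions, not from any real model; the point is that the security-significant tag directly drives what gets code-inspected later.

```python
# Hypothetical output of the "List and Tag Assets" step: each asset gets
# a type, a security-significance tag and the mechanisms it touches.
assets = [
    {"name": "password file", "type": "tangible", "security_significant": True,
     "mechanisms": ["authentication", "data protection"]},
    {"name": "config file",   "type": "tangible", "security_significant": True,
     "mechanisms": ["authorization"]},
    {"name": "product logo",  "type": "tangible", "security_significant": False,
     "mechanisms": []},
]

# Assets tagged security-significant are the candidates for code inspection.
to_inspect = [a["name"] for a in assets if a["security_significant"]]
assert to_inspect == ["password file", "config file"]
```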

Define System and Trust Boundaries

Next, leveraging the system architecture model and the asset list, the team must define system and trust boundaries. System boundaries are places where data can flow into or out of the system and/or its various components. For example, if a B2B web service is the evaluation target, the system would have a system boundary where the service is exposed to the public network. These system boundaries are later used to explore requirements for things like authentication, authorization and data validation. As well as socket connectivity, system boundaries include things like access to registry keys or file systems. UML sequence diagrams have proven to be very valuable in defining system boundaries and should be used if available.

At the same time as the team defines the system boundaries, they should also define what are called trust boundaries. These are virtual domains of components or parts of the system that implicitly trust each other in some shape or form. Typically, groups of components or pieces of code in a system place a degree of trust in each other. By describing these trust boundaries we can explore, in the next task, the path of least resistance an attacker may travel to exploit an asset. In complex models we can tag trust boundaries with a level of trust in such a way that we can model implied trust and explicit trust.

Checkpoint: At this point of the process we have gathered a great deal of information. We know exactly what the system does, the components it comprises, and how it all functions from a security perspective. We will have spent approximately 4-6 hours so far.

List and Rank Threats

Now the fun starts and the team turns to the creative portion of the threat modeling exercise. The idea is to develop a list of realistic threats to the system. Confusion at this point is one of the most common mistakes made by inexperienced modelers. To be clear, a threat is not a vulnerability (a weakness in a system). We define threats and vulnerabilities as:

* Threat: capabilities, intentions, and attack methods of adversaries to exploit, or any circumstance or event with the potential to cause harm to, information or an information system. [NIS]
* Vulnerability: a feature or bug in a system or program which enables an attacker to bypass security measures or [CERT 1993]: an aspect of a system or network that leaves it open to attack.

Examples of threats include “steal the password file” or “modify the configuration file to obtain unauthenticated access”.

Listing threats is both creative and fun. The team is attacking the system on paper and tends to become engaged very quickly. But a word of warning: this part of the process can be enlightening, competitive and difficult to constrain. Enlightening because it quickly becomes obvious if the system could be compromised by certain common threat vectors. Competitive because human nature kicks in as team members try to develop “cool” threats. Last but not least, this process can be difficult to constrain. The leader must ensure that the team remembers that this is a model and not an attempt to develop a list of every possible threat. The leader must also (often delicately) constrain the imagination of some individuals who decide that alien hackers from Mars melting the electrons in transport is a threat!

Each threat must then be categorized and ranked according to criteria that enable the team to prioritize them. Microsoft has developed several schemes for this ranking. The two widely discussed schemes in the Microsoft world are STRIDE and DREAD (although I am reliably informed they are moving away from DREAD). STRIDE is a classification scheme standing for Spoofing, Tampering, Repudiation, Information disclosure, Denial of service and Elevation of privilege. DREAD is a ranking model and an acronym for Damage potential, Reproducibility, Exploitability, Affected users and Discoverability.

Our experience at Foundstone is that each customer can usually develop very effective ranking models that work best for their own business extending or modifying the basic principles of STRIDE and DREAD.
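A DREAD-style ranking can be sketched very simply. The scoring scale (1-3 per axis), the averaging, and the example threats and scores below are all illustrative assumptions; real teams, as noted above, extend or modify the scheme to fit their business.

```python
# A hypothetical DREAD-style ranking: score each threat 1-3 on the five
# DREAD axes and average the ratings into a single rank.
DREAD_AXES = ("damage", "reproducibility", "exploitability",
              "affected_users", "discoverability")

def dread_score(ratings: dict) -> float:
    """Average the five DREAD axis ratings into one rank."""
    return sum(ratings[a] for a in DREAD_AXES) / len(DREAD_AXES)

threats = {
    "steal the password file":          dict(zip(DREAD_AXES, (3, 3, 2, 3, 2))),
    "modify config for unauth access":  dict(zip(DREAD_AXES, (2, 2, 2, 2, 1))),
}

# Mitigate highest-scoring threats first.
ranked = sorted(threats, key=lambda t: dread_score(threats[t]), reverse=True)
assert ranked[0] == "steal the password file"
```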

List Countermeasures

For each threat the team must now decide if there are adequate countermeasures in place to prevent or mitigate the attack from being successful. Countermeasures may include things like the fact that the password file is encrypted, or that access control is in place on the database, allowing only certain users to access the password file. Any threat without a countermeasure is by definition a vulnerability, and this is, in many respects, the end game of threat modeling: discovering the vulnerabilities through a system modeling process.


Threat modeling seems a simple concept, and it is; however, there is a definite skill that teams will develop through practice and experience. It is also an incredibly powerful, effective and valuable technique that, in our opinion, can be applied to software both during the development process and after the software has been built. We have saved customers literally millions of dollars by helping them avoid committing thousands of development hours to designs that were fundamentally flawed. You too can leverage the process to pragmatically analyze your software and build better software.

Part Two

In part two of this article we will describe the code inspection part of the code review process. We will show you how to take the security-significant parts of the system and find both flaws and bugs in the code. Flaws are issues in the code due to poor implementation choices, such as storing cryptographic keys in configuration files or choosing weak random numbers to seed key generation. Bugs are issues in code usually due to incorrect semantic constructs, such as buffer overflows. We will release a free tool to accompany part two of this article called .NETMon, part of the upcoming Foundstone S3i .NET Security Toolkit (free and open source). It watches the .NET Common Language Runtime (CLR) and observes how security is enforced by the .NET Framework and by bespoke code. This tool is one of many that have been developed at Foundstone to help code review teams and software security professionals.

Mark Curphey is the founder of the Open Web Application Security Project (OWASP), and is director of software security consulting at Foundstone, now a McAfee company. OWASP, created to help organizations understand and improve the security of their Web applications, publishes a list of Top 10 Web application security vulnerabilities each January. Curphey is the former director of software security for Charles Schwab and has a Masters Degree in cryptography.

David Raphael is a Senior Software Security Consultant at Foundstone. As an expert in both J2EE and .NET, David performs software security reviews and provides training to both J2EE and .NET developers on writing secure software.

SoftwareMag.com - Software Magazine

SoftwareMag.com - Home

Bret Pettichord's Software Testing Hotlist

20th IEEE/ACM International Conference on Automated Software Engineering Online Conference Site

Friday, May 20, 2005

Comparison: Web-based Tracker

.: CounterSoft\Gemini Project Issue Tracking\Gemini Features :.

Open Source Issue Tracking Software in C#

Bug Tracking Software

Tracking Down a Defect Management Tool:

Articles: "Tracking Down a Defect Management Tool:
How to Select the Right Defect Tracking Solution for Your Business

This article by LogiGear CEO Hung Q. Nguyen, author of the bestselling Testing Computer Software (Wiley, 2nd ed. 1999), offers sample evaluation forms, advice on defining business and technical requirements, and an example tool feature list to assist you in choosing a defect tracking system. Download PDF..."

Testing Articles in PDF format

Testing [Cigital Labs]

Testing Techniques

Getting Information when there is no specification

• Whatever specs exist
• Software change memos that come with each new internal version of the program
• User manual draft (and previous version’s manual)
• Product literature
• Published style guide and UI standards
• Published standards (such as the C language standard)
• 3rd party product compatibility test suites
• Published regulations
• Internal memos (e.g. project mgr. to engineers, describing the feature definitions)
• Marketing presentations, selling the concept of the product to management
• Bug reports (responses to them)
• Reverse engineer the program.
• Interview people, such as:
  • development lead
  • tech writer
  • customer service
  • subject matter experts
  • project manager
• Look at header files, source code, database table definitions
• Specs and bug lists for all 3rd party tools that you use
• Prototypes, and lab notes on the prototypes
• Interview development staff from the last version.
• Look at customer call records from the previous version. What bugs were found in the field?
• Usability test results
• Beta test results
• Ziff-Davis SOS CD and other tech support CD’s, for bugs in your product and common bugs in your niche or on your platform
• BugNet magazine / web site for common bugs
• News Groups, CompuServe Fora, etc., looking for reports of bugs in your product and other products, and for discussions of how some features are supposed (by some) to work.
• Localization guide (probably one that is published, for localizing products on your platform.)
• Get lists of compatible equipment and environments from Marketing (in theory, at least.)
• Look at compatible products, to find their failures (then look for these in your product), how they designed features that you don’t understand, and how they explain their design. See listserv’s, NEWS, BugNet, etc.
• Exact comparisons with products you emulate
• Content reference materials (e.g. an atlas to check your on-line geography program)

Welcome to SHAREPOINTCOMMUNITY.COM - Resources and Support for Microsoft SharePoint Technology Administrators, Managers, Developers and Users.

.NET Security Toolkit Released 3/08/2005

The Foundstone S3i™ (Strategic Secure Software Initiative) .NET Security Toolkit is designed to help application developers and architects build secure and reliable .NET software applications. The new toolkit comprises the Validator.NET, .NETMon and SecureUML template tools, which help developers validate, debug and analyze vulnerabilities during the design and development of .NET applications.


Scott Galloway's Personal Blog : Possibly the ultimate developer tools list

Pitfalls of relying on/or looking only for "Expected" behavior in Testing


Testers should specify the expected result of every test, in advance?
This is another piece of guidance from Myers (1979) that has had lasting influence. See, for example, ISEB's current syllabus for test practitioner certification at www1.bcs.org.uk/DocsRepository/00900/913/docs/practsyll.pdf. One fundamental problem with this idea is that it is misguided: every test has many results, and no one specifies them all. An "expected result" points the tester at one or a few of these results, but away from the others. For example, suppose we test a program that adds numbers. Give it 2+3 (the intended inputs) and get back 5 (the monitored output). This looks like a passing test, but suppose that the test took 6 hours, or that there was a memory leak, or that it erased a file on the hard disk. If all we compare against is the expected result (5), we won't notice these other bugs.

The problem of an observer (such as a tester) missing obvious failures is not just theoretical. I've seen it at work, and psychologists have described it in research, naming it inattentional blindness (Mack & Rock, 2000). If you are focusing your attention on something else, then you have a significant probability of completely missing a surprising event that happens right in front of you. Some striking demos are available at http://viscog.beckman.uiuc.edu/djs_lab/demos.html.

Does this mean that it is a bad idea to develop expected results? No, of course not. It means that there are pluses and minuses in using, and relying on, expected results.

Expected results are valuable when an answer is difficult or time-consuming to compute. They are valuable when testing is done by junior staff who don't know the product (but realize that these people are especially likely to miss other failures). They are valuable for the test planner who is thinking through how the product works while she designs the tests—working the tests through to their predicted conclusions is an important tool for learning the product. They are of course valuable for automated tests—but the ability to automatically derive an expected result is much more valuable for automated testing (because it is so much faster) than the ability of a human to write the result down.
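The 2+3 example above can be shown in code. This is a contrived sketch (the function and file names are invented for illustration): an addition routine that returns the right answer but also destroys a file, and a check that compares only against the expected result and therefore passes.

```python
import os
import tempfile

def buggy_add(a, b, scratch_dir):
    """Adds correctly, but has a destructive side effect: it deletes
    every file in scratch_dir. A check on the sum alone never sees this."""
    for f in os.listdir(scratch_dir):
        os.remove(os.path.join(scratch_dir, f))
    return a + b

scratch = tempfile.mkdtemp()
open(os.path.join(scratch, "important.txt"), "w").close()

# The expected-result check alone: 2+3 gives 5, so this "passes".
assert buggy_add(2, 3, scratch) == 5

# A broader observation notices the collateral damage the expected-result
# comparison was blind to:
assert os.listdir(scratch) == []  # important.txt is gone
```

The inattentional-blindness point is exactly this: an automated comparison is an attention filter, and whatever the oracle does not observe, the test cannot fail on.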