The outside beta test can be a very exciting stage in the software development process. Dedicated, loyal customers who are anxious to see your company and its products succeed will eagerly download the software, thereby exposing it to a much broader range of platforms than you can hope to have available in your lab—and all for free. The marketing department is thrilled because your product is being exposed to privileged customers who will spread great word of mouth before it hits the shelves. For the product team, the release of a beta is usually regarded as the last milestone before the retail version of the product ships.
Beta tests can, however, be laden with unfulfilled promise and wasted cycles. Testers don't always find defects or report them—often you'll get a number of beta test reports that say nothing more than "Everything's great!" Many of the defects that are reported are trivial. Some testers—especially the better ones—report a few defects, and then are never heard from again. A large beta test might be a useful publicity stunt, but it's debatable that numbers alone improve the quality of the test. Microsoft proudly announces that tens of thousands of people have beta tested Office and Windows. Ever seen a defect in any of those products?
A beta test, like life itself, is like a sewer: what you get out of it depends on what you put into it. The beta cycle can be used purely as a marketing ploy, with little useful feedback for engineering. However, if you prepare your product carefully, choose your testers wisely, and provide testers with appropriate incentives, you can get useful information from the beta. There are a few issues for which you should prepare:
In a well-written function, the best way to assure optimal performance is to reduce extra work and wasted cycles inside loops. If a function takes a value, does some work, and returns exactly the same value, most people would remove that function from the code. And yet exactly this kind of sub-optimal performance appears when a product with plenty of known defects is released to beta test. Consider two scenarios:
Case 0: The developer fixes the known defect before the beta ships. Total cost: n minutes of the developer's time.
Case 1: The known defect ships in the beta. A tester finds the defect and writes up a report; the beta test coordinator (BTC) receives the report and enters it in the defect tracker; QA reproduces the problem and confirms that it is already known; the developer reads the report, recognizes it, and finally makes the same fix. Total cost: n minutes of the developer's time, plus roughly another 42 minutes spread across everyone else involved.
I would consider these time estimates to be hopelessly optimistic in general, although the estimate for each individual step might be reasonable in a best-case scenario. I haven't included the time associated with processing the same report from two or more different testers. Nor have I included any wait states in this breakdown; there will be considerable lag between the moment the tester submits a report and the moment the BTC enters it in the defect tracker, for instance. Turnaround times of a day are easy to imagine at several of the steps described above.
But even ignoring all those factors, look at the difference between the two scenarios! In the first case, the developer fixes the known problem before the beta, which takes n minutes. In the second case, a whole bunch of other people get involved pointlessly, and that costs n plus 42 minutes, according to my highly optimistic assessment. Multiply that by a thousand defect reports—which would not be uncommon for beta test programs that I have observed—and you've got 42,000 staff minutes. That translates to 700 staff hours, or more than 17 staff weeks, of wasted time. Or, to put it another way, you're wasting four people's time for a month for every thousand reports—all to get to exactly the same place you would have been had the beta shipped without the known, fixable defects. And that's just the wasted time—never mind the time that the product team needs to do the rest of their jobs. While developers, QA people, and outside testers are distracting themselves with the known defects, the unknown ones still lurk.
If the process above were rendered as a function in C, most developers would have no problem identifying the wasted cycles, and would immediately optimize out the unnecessary steps. Why not do in real life what you'd do in a few minutes in front of a debugger?
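To make the analogy concrete, here is a minimal sketch in C. The function names and the per-step minute figures are mine, invented purely for illustration; the point is that both paths end with exactly the same defect fixed, but one of them burns about 42 extra staff minutes along the way.

    #include <stdio.h>

    #define FIX_MINUTES 30   /* "n": the developer's time to fix the known defect */

    /* Case 0: fix the known defect before the beta ships. */
    static int fix_before_beta(void)
    {
        return FIX_MINUTES;                     /* n minutes, and we're done */
    }

    /* Case 1: ship the known defect and let the beta process rediscover it.
       The per-step costs below are illustrative guesses, not measurements. */
    static int fix_after_beta_report(void)
    {
        int minutes = 0;
        minutes += 15;          /* a tester stumbles on the defect and writes a report        */
        minutes += 10;          /* the beta test coordinator logs it in the defect tracker    */
        minutes += 10;          /* QA reproduces the problem and confirms it is already known */
        minutes += 7;           /* the developer re-reads the report and marks the duplicate  */
        minutes += FIX_MINUTES; /* ...and then makes exactly the same fix                     */
        return minutes;         /* n + 42 minutes, to arrive at the same place                */
    }

    int main(void)
    {
        printf("Case 0: %d staff minutes\n", fix_before_beta());
        printf("Case 1: %d staff minutes\n", fix_after_beta_report());
        return 0;
    }

Any reviewer would delete the dead steps in the second function on sight; that's precisely the optimization being argued for here.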
The most important optimization by far is to eliminate defects as early as possible in the development process. Most people consider fixing defects to be unpleasant; it's a lot more fun and a lot more exciting to add features. However, it's a much better use of time to address problems while they're fresh in your mind, and before the problems become embedded in the program's source. That means subjecting the requirements, the functional specification, and the source itself to review, with the goal of eliminating defects before they start to waste cycles.
Many people equate defects with programming errors. However, to paraphrase Gerald Weinberg, a program that has a lousy design but no programming errors is still a lousy program. The most important person in the program's design is the person who is going to use it. Make sure that the program follows the user's task, rather than making the user follow the programmers' functions and procedures. Forestall the first wave of beta test reports—in most betas, you'll get plenty of suggestions on how to improve the user interface or the feature set of the program—by being highly conscious of the user during the design process, by reviewing functional specifications and prototypes carefully, and by performing usability testing on your product long before the beta ships.
Programmers are sometimes resistant to participating in code review. Usually the problem relates to ego and insecurity; sometimes additional resistance comes from a bad past experience. However, review is standard practice in every other engineering discipline. No one would consider building a bridge alone, and no engineering company would permit it. Development managers should make sure that reviews are conducted by the development team with the goal of finding defects, not finding fault. Code review is also a very inexpensive way for developers to teach and to learn. If a developer is absolutely certain that his code is robust and free from defects, he is probably wrong; but in any case, you can leverage his pride by suggesting that he teach his defect prevention techniques to other developers.
Automated tools such as Lint and BoundsChecker can find subtle problems, improving quality while saving enormous amounts of time. Remember also that testing alone tells you neither the cause of a problem nor where to find it in the code. Unless you choose to look for it, a defect such as a null pointer dereference or a memory leak can easily lie submerged until the product ships to paying customers.
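For instance, a contrived fragment like the one below (the function is hypothetical, written only to illustrate the point) compiles cleanly and will usually appear to work, yet a Lint-style static analyzer or a run-time checker such as BoundsChecker will flag each of its problems and point to the offending lines:

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical example: it compiles cleanly and usually appears to work. */
    static char *build_greeting(const char *name)
    {
        char *buffer = malloc(64);
        strcpy(buffer, "Hello, ");   /* if malloc failed, buffer is NULL: a crash         */
        strcat(buffer, name);        /* if name is long enough, this overruns the buffer  */

        buffer = malloc(64);         /* the first block is now unreachable: a memory leak */
        strcpy(buffer, name);        /* and this copy can crash or overrun as well        */
        return buffer;
    }

Casual testing with short names and plenty of free memory will never trip over any of these; a tool that checks every allocation and every copy will find them in seconds.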
A product's installation program is often written by a junior programmer with relatively little supervision. However, the installation program is critical to assuring a good beta test. If a program cannot be installed at all, you will lose an entire beta cycle for each tester affected by the problem. If the product seems to install correctly but leaves out something crucial, testers will report problems that don't really exist in the core product, netting you nothing but red herrings and unnecessary work. Finally, if the product doesn't uninstall correctly, later beta cycles may fail to identify problems with missing files or configuration settings—those items will be left on the tester's platform from the previous cycle. For this reason, the installation program should be reviewed by senior developers and checked carefully against the product's specifications—another good way for experienced programmers to bring the junior ones along.
Several studies have shown that outside beta testers are dramatically less effective than internal QA staff. There are several reasons for this: QA staff have experience with the product, and often have access to the developers. Good management and a good test plan mean that QA tests more methodically and thoroughly than any outsider could; an outsider usually cannot hope to have the preparation or the discipline that a well-managed QA team can. Outside beta testers might not provide the kind of clarity or consistency that you expect from your own staff. Finally, there's a motivational issue. Your QA department is being paid to test your software; outside testers are typically volunteers.
There are a few obvious ways to improve on this. The first is to qualify your outside testers, and to continue to monitor the quantity and quality of the problem reports that you receive. If you find that a tester is costing you time by returning inadequate reports, drop him from the program. Because of the high cost of processing each report, the quality of beta testers is generally more important than their quantity. Second, prepare your testers; provide them with instructions and tools to help make their reports detailed, consistent, and efficient. Get detailed data on each tester's platform once; assign that system an identifier, and make sure that the tester includes the identifier in defect reports. Give your testers clear instructions on testing goals and the areas of the product that you expect them to inspect. If you are aware of any serious problems, note them carefully and clearly. Don't send a product whose feature set is not complete; first, you'll get remarks on the missing features, and second, after you add those features, they will receive less testing coverage than the rest of the product. On later rounds of the test, use your problem tracking tool to generate a report that indicates clearly which problems have been fixed.
Provide incentives to your testers for producing plenty of clear reports. A free copy of the released software is fine as a courtesy to all of the testers, but it's not likely to be an inducement to spend several hours on serious testing and clear reporting, which is what you're looking for. Consider substantial monetary awards for the top three providers of useful reports. You should be able to demonstrate a good business case for doing this easily: current studies suggest that a single technical support call costs a minimum of $20, so finding a single defect that would otherwise generate five calls is worth at least $100.
Most importantly, remember that you waste your testers' time and your own by releasing beta products without finding and resolving all of the clear, obvious defects first. Beta testers want to find and report defects; if you make defects too easy to find, they'll simply find and report the easy ones. If your product has defects that can be resolved, don't stick to the beta release date just to say that you made it; you'll cause far more work for your product team almost immediately. Check the business case: it's usually worth delaying your beta release for a few days or even weeks, since each known defect that generates a report has a high probability of taking an hour or more of staff time away from your organization.
A beta test can give you valuable feedback and the assurance that your product works on a wide variety of platforms. By releasing a product that is truly ready for the test, and by preparing your testers properly, you can make sure that the positive feedback does not cost you unnecessary time and effort, and helps to improve the quality of your product.