Path: utzoo!utgpu!news-server.csri.toronto.edu!rutgers!uwm.edu!ux1.cso.uiuc.edu!ux1.cso.uiuc.edu!m.cs.uiuc.edu!marick
From: marick@m.cs.uiuc.edu
Newsgroups: comp.software-eng
Subject: Re: Software Quality Assurance (SQA) re
Message-ID: <39400106@m.cs.uiuc.edu>
Date: 14 Jun 90 20:53:00 GMT
References: <191025@<1990Jun12>
Lines: 129
Nf-ID: #R:<1990Jun12:191025:m.cs.uiuc.edu:39400106:000:5829
Nf-From: m.cs.uiuc.edu!marick    Jun 14 15:53:00 1990

I've narrowed the question down to "Who does testing?".  Because
everyone seems to have a different idea of what SQA does, this may not
speak to the original question.  If so, sorry.

==========

The two extremes are that developers do all their own testing and that
all testing is contracted out to a completely independent
organization.  Although people do make hard-and-fast rules, like
"testing should be separate from development", designing a testing
strategy and organization is just like designing anything else -- it
involves making tradeoffs between conflicting goals.  You have to
figure out your goals, weigh the tradeoffs, and make a choice that
will work for you.  And then, when you find out it didn't work, you
figure out why and try to improve it next time.

For example, here's an idealized picture of the tradeoff between
developer testing and independent testing.  The developer curve is
marked with an = and the independent-testing curve with a %.
Developer testing finds more bugs earlier, while the independent
tester is still getting up to speed.  In the long run, though, the
independent tester wins out, because the developer will miss bugs
caused by his own misunderstandings.

    |                                          %
    |
    |                                    %
    |
    |                              %
   B|         =  =  =
   U|       =        =
   G|      =       %
   S|     =
    |     =      %
   F|    =
   O|    =     %
   U|   =     %
   N|   =    %
   D|   =%  %  %
    --------------D1---------------------------D2----- TIME

So, which do you pick?  Well, if you have an absolute deadline at D1,
you should pick developer testing.  If you can wait until D2,
independent testing will yield the better product.

However, there's an important question that the graph doesn't capture
-- what *kind* of bugs are found by the different people?  Does the
developer discover a few deep, important bugs, while the independent
tester discovers a lot of low-priority bugs?  It often seems that way.
Much of this is because independent testers have the software "thrown
over the fence", and they don't have the time or opportunity to learn
it well enough to test it well.

And, sad to say, a lot of the people in testing aren't very good.
They're either employees not good enough for development or freshouts
learning the ropes in their first assignment (and ready to jump ship
to development the first chance they get).  And you want good tests in
this situation? -- especially given that effective testing is highly
heuristic, meaning it's based on experience and skill as a programmer
and designer.  In such a situation, you're often better off having the
developer test his or her own code.

It's a myth that developers can't test.  They might not do as well as
the hypothetical equally competent independent tester, but most of
them can do quite well.  The trouble is that they don't want to, so
they do a half-hearted job.  And they don't know how, so their
half-hearted job is a bad one.  And they don't have decent tools, or a
system designed to be testable, so they rebel against the scutwork
required.  (Freshouts can't rebel, you see, so they get stuck with the
work.)

All things being equal, developer testing would be best, because it's
cheapest.  But, in practice, developer testing often means bad
testing.  What's a reasonable compromise?
A moderately independent, semi-rotating testing organization that
"loans" people to projects seems to work well.  What does *that* mean?

1. Moderately independent means that a project manager can't swipe
   testers to fill development holes without a fight with an equal.
   Testers who work for you are too tempting (and too easily
   overruled).

2. Developers rotate in and out of testing.  They learn to test.
   Because developers know testing, and testers know development, the
   quality of both development and testing rises.

3. But there has to be a semi-permanent core to the testing group.
   They're the experts.  They do the training.  They build or buy
   tools.  They improve the process.

4. "Loaning" means that the testers act as full members of the
   projects they're testing, except that they also report to the
   testing manager.  This helps avoid the us-vs-them tribal battles
   that are endemic to separate testing organizations.

5. Loaning also means that testers enter the project early.  They
   participate in design at all levels.  Their job is to act as the
   Devil's Advocate -- the person who persistently and annoyingly
   asks, "What could go wrong here?  Why won't this work?"  (In a
   mature development+testing organization, I would expect this to be
   the tester's most important role.)  If no tester is available,
   someone should still explicitly have this role, as it will flush
   out nasty errors.

6. Since they're loaned early, testers can write the test cases early.
   There's no good reason not to have test cases for an interface when
   the interface is finished.  You *will* discover design errors that
   can be corrected early -- why wait until a bunch of code has to be
   thrown away?

7. Since the testers will be infected with developer
   misunderstandings, an independent testing team that makes a
   reasonably quick pretend-I'm-a-user test run over the system is a
   useful backup.

There are certainly situations where this organization is not
appropriate.  As always, the way you do things depends very much on
what you're trying to do.  What are your quality goals?  What fits
into your culture?  What are your organization-development goals?
What are your time/resource constraints?

One thing: when starting up such a testing organization, it helps to
have a strong-willed manager who's widely respected, and also
technically strong team members (who "smell right" to developers) who
nevertheless have good social skills and common sense.

Brian Marick
Motorola @ University of Illinois
marick@cs.uiuc.edu, uiucdcs!marick