Path: utzoo!utgpu!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!swrinde!elroy.jpl.nasa.gov!decwrl!mcnc!duke!crm
From: crm@duke.cs.duke.edu (Charlie Martin)
Newsgroups: comp.software-eng
Subject: Re: COCOMO
Message-ID: <677386979@macbeth.cs.duke.edu>
Date: 20 Jun 91 03:03:01 GMT
References: <1991Jun18.033606.1362@netcom.COM> <568@smds.UUCP> <1991Jun19.174716.6861@netcom.COM>
Distribution: comp
Organization: Duke University Computer Science Dept.; Durham, N.C.
Lines: 52

In article <1991Jun19.174716.6861@netcom.COM> jls@netcom.COM (Jim Showalter) writes:
>>On the other hand I will grant you that modern software and hardware
>>technology does mean that you can do more in the same number of lines
>>of code, and that it is much easier to build large programs.
>
>Well, this wasn't my original objection, but as long as you've brought
>it up, this is the OTHER thing I find absurd about the COCOMO approach:
>the focus on SLOC. You cranked out 20 KSLOC of code in 1962. What language
>was it? Assembler? FORTRAN? In 1990 I cranked out 20 KSLOC of Ada,
>complete with tasking, genericity, exception handlers, constraint checking,
>etc. To achieve this same level of functionality in a 1962 language, I
>might well have had to write 10-100 times as many lines of code. How
>much assembler would you have to write to provide dynamic binding and
>inheritance a la C++? My guess is: considerably more than you could write
>in a year (ask Stroustrup!).

It's a good point as far as it goes: the productivity in *function*
realized with a high-level language is higher, so what you get for
your 20 KSLOC is more stuff, in some sense that is awfully hard to
measure precisely. On the other hand, we still tend to get 20 KSLOC
of it per man-year. I think the answer is that SLOC can't be taken
as the *only* measurement of interest, but in terms of effort and
costing models, the fact that SLOC per man-hour remains relatively
constant across environments makes COCOMO and related models very
attractive. (For the curious, I've appended a small worked example
of the basic COCOMO formulas after my sig.)

>As with all such metrics, the baselines should be calculated in terms of
>COMPLEXITY. How many complexities could you write per year in 1962 vs
>1991? THAT'S the critical issue, and I believe there is a wealth of
>evidence that modern languages and environments have greatly increased
>productivity when it is measured in this way. SLOC obscures the truth.

Sure, and I hope you'll forgive me if I point out that it's also
rather old news: the paradoxes of SLOC counts between HLLs and
assembler were being pointed out at least as early as the late
sixties.

Another point is one that Dijkstra harps on a bit: SLOC measures a
kind of bulk productivity that can create a disincentive to good and
elegant programming. You must be careful not to *reward* people on
the basis of SLOC per hour at a desk, because you'll get crap -- big
crap.

But all the other metrics that are predictive -- Halstead volume,
McCabe cyclomatic complexity, function points -- also turn out to
predict effort about as well as Intermediate COCOMO. (This is less
of a surprise than it sounds: since Intermediate COCOMO is predictive
and the others are also predictive, they *must* correlate.) Again,
the correlation is closer than *any* method's accuracy, so there is
no empirical reason to prefer the others.
--
Charlie Martin (...!mcnc!duke!crm, crm@cs.duke.edu)
13 Gorham Place/Durham, NC 27705/919-383-2256
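
The promised example. Since I keep citing Boehm, here is a quick
sketch of the *basic* COCOMO formulas for anyone who doesn't have
_Software Engineering Economics_ (1981) handy. The coefficients are
Boehm's published values for the three project modes; the little C
program wrapped around them is just my own illustration, not anything
out of the book.

/* Basic COCOMO (Boehm, 1981).  Effort E = a * (KSLOC)^b in
 * person-months; schedule TDEV = c * E^d in calendar months.
 * The (a, b, c, d) values below are Boehm's published coefficients.
 * Compile with: cc cocomo.c -lm
 */
#include <stdio.h>
#include <math.h>

struct mode { const char *name; double a, b, c, d; };

static const struct mode modes[] = {
    { "organic",      2.4, 1.05, 2.5, 0.38 },
    { "semidetached", 3.0, 1.12, 2.5, 0.35 },
    { "embedded",     3.6, 1.20, 2.5, 0.32 },
};

int main(void)
{
    double ksloc = 20.0;        /* Jim's 20 KSLOC of Ada */
    int i;

    for (i = 0; i < 3; i++) {
        double effort = modes[i].a * pow(ksloc, modes[i].b);
        double tdev   = modes[i].c * pow(effort, modes[i].d);
        printf("%-13s %6.1f person-months, %5.1f months\n",
               modes[i].name, effort, tdev);
    }
    return 0;
}

For 20 KSLOC in organic mode this comes out to roughly 56
person-months over about 11.5 months. Bear in mind this is the
*basic* model only; Intermediate COCOMO multiplies the effort
estimate by adjustment factors derived from fifteen cost drivers
(product, hardware, personnel, and project attributes).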
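
And since I toss McCabe's number around above: cyclomatic complexity
for a routine whose control-flow graph has E edges, N nodes, and P
connected components is

	V(G) = E - N + 2P

which for a single structured routine with binary decisions reduces
to (number of decision points) + 1. So a straight-line routine
scores 1, and a routine with three ifs and a while scores 5. (The
formula is McCabe's; the worked example is mine.)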