Path: utzoo!utgpu!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!zaphod.mps.ohio-state.edu!mips!apple!netcomsv!jls
From: jls@netcom.COM (Jim Showalter)
Newsgroups: comp.software-eng
Subject: Re: COCOMO
Message-ID: <1991Jun18.033606.1362@netcom.COM>
Date: 18 Jun 91 03:36:06 GMT
References: <677047335@macbeth.cs.duke.edu>
Distribution: comp
Organization: Netcom - Online Communication Services UNIX System {408 241-9760 guest}
Lines: 23

>(1) Empirically, in any organization, man-months per 1000 lines of code
>(KSLOC) is roughly constant, no matter what language or environment is
>used. So, we can always assume that effort in man-months is
>proportional to size in KSLOC.

The primary complaint I and others have with the COCOMO model is the
above claim. To assert that a person writing in some homebrew dialect
of FORTRAN, using a line editor on an IBM mainframe with a circa-1962
debugger, is as productive (or even within two orders of magnitude as
productive) as a person using the latest and greatest software
development environment and one of the modern software-engineering
oriented languages (e.g. Ada, Eiffel, C++) is prima facie absurd,
claims of empiricism notwithstanding.

I have empirically observed exactly the opposite: productivity varies
wildly between software development organizations, and those that are
more productive have a significant competitive edge. I believe any
estimation model that fails to take this into account is inherently
inaccurate. At a minimum, I suggest that estimation models contain a
factor for an organization's SEI rating; a rough sketch of how such a
factor might plug into the model follows below.
--
*** LIMITLESS SOFTWARE, Inc: Jim Showalter, jls@netcom.com, (408) 243-0630 ****
*Proven solutions to software problems. Consulting and training on all aspects*
*of software development. Management/process/methodology. Architecture/design/*
*reuse. Quality/productivity. Risk reduction. EFFECTIVE OO usage. Ada/C++.    *
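
To make the suggestion concrete, here is a minimal sketch in C of a
Basic COCOMO estimate ("organic" mode) extended with a hypothetical
SEI-rating multiplier. The coefficients 2.4 and 1.05 are Boehm's
published organic-mode constants; the cocomo_effort function and the
sei_factor values are invented purely for illustration and are not
part of any published COCOMO variant.

    #include <math.h>
    #include <stdio.h>

    /* Basic COCOMO (Boehm, 1981), organic mode:
     *     effort = 2.4 * pow(KSLOC, 1.05)   (person-months)
     * The sei_factor[] multipliers are hypothetical: COCOMO defines
     * no SEI cost driver. They merely show where an organizational
     * maturity term could be bolted onto the model.
     */
    static double cocomo_effort(double ksloc, int sei_level)
    {
        static const double sei_factor[] =
            { 0.0, 1.50, 1.25, 1.00, 0.80, 0.65 };  /* index by level 1..5 */
        double nominal = 2.4 * pow(ksloc, 1.05);

        if (sei_level < 1 || sei_level > 5)
            return nominal;         /* no adjustment if level is unknown */
        return nominal * sei_factor[sei_level];
    }

    int main(void)
    {
        double ksloc = 50.0;        /* a 50 KSLOC project */
        int level;

        for (level = 1; level <= 5; level++)
            printf("SEI level %d: %6.1f person-months\n",
                   level, cocomo_effort(ksloc, level));
        return 0;
    }

Under these (made-up) multipliers, a level-5 shop would be quoted less
than half the effort of a level-1 shop for the same 50 KSLOC, which is
the kind of organizational spread the unadjusted model cannot express.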