Path: utzoo!attcan!uunet!tut.cis.ohio-state.edu!rutgers!mcnc!duke!romeo!crm
From: crm@romeo.cs.duke.edu (Charlie Martin)
Newsgroups: comp.software-eng
Subject: Re: Reuse and Abstraction (was: reu
Message-ID: <19855@duke.cs.duke.edu>
Date: 29 May 90 00:40:03 GMT
References: <4979@stpstn.UUCP> <102100009@p.cs.uiuc.edu> <80449@tut.cis.ohio-state.edu> <19614@duke.cs.duke.edu> <80685@tut.cis.ohio-state.edu> <19760@duke.cs.duke.edu> <80884@tut.cis.ohio-state.edu> <5122@stpstn.UUCP>
Sender: news@duke.cs.duke.edu
Reply-To: crm@romeo.UUCP (Charlie Martin)
Organization: Duke University CS Dept.; Durham, NC
Lines: 138

In article <5122@stpstn.UUCP> cox@stpstn.UUCP (Brad Cox) writes:

   ambiguity [is eliminated]... by developing a *common vocabulary* of *tests* that both parties, producer and consumer, can agree on as determining the meaning of these terms. For example, "pound" is defined by a test procedure involving a scale, and "ten-penny" by a test procedure involving shape recognition by the natural senses.

I rather like this idea, in general. It also fits in with the problems of describing "formally" what meeting a requirement means. But I've got some questions about its application....

   Since test procedures based on scales and shape recognition fail to help with intangible concepts like "Set", we must become ultra-diligent in developing and *publishing* test procedures to detect each of the terms involved in specifying software components, to create the very *basis* for non-ambiguous producer/consumer dialog. Then, with a sufficiently large, robust, and widely-accepted library of such test procedures, we can take the next step of building a specification/testing *language* that can compose meaningful sentences, i.e. *specifications*, from these primitive terms, and then compile these sentences to build go/nogo gauges that can determine whether a putative implementation is in fact an actual implementation of that specification.

   Charlie also argues:

      Actually, it's pretty hard to show a library of reusable components that are generally agreed to be reusable, and non-trivial.

   The Smalltalk environment is one; the Objective-C System-building Environment is another.

No question that these are more reusable than many other possibilities, but both of these (and most others, certainly all others of which I'm aware) impose a further implicit constraint on the systems that can be built with them: they require that the systems be realized using only components from a particular "catalogue". This is a problem that occurs over and over again with reuse by composition from a collection of spare parts: you can't merge parts from two catalogues. Thus if one company's catalogue doesn't have what is needed, you have no other options.

This isn't necessarily a bad thing from a commercial standpoint, and it certainly isn't part of the usual company's reward structure to build stuff that makes it easy to include competitors' products. But it does mean that one buys into a raft of new problems as soon as one gets into reuse of commercial components.

Also, my experience with both Smalltalk and Objective-C (admittedly limited) has been that to use the libraries of classes available, one had to buy into a lot of architectural assumptions that might or might not fit what was needed, e.g., garbage-collection or interpretive codes. (At a little higher level, things like model-view-controller.) To use any other architectural assumption meant backing up and rebuilding the universe from primitives.
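To see whether I understand the proposal, here is roughly what I imagine one of those go/nogo gauges looking like for something as small as "Set", sketched in plain C. Everything here (the operations table, the particular checks, the toy array implementation) is invented for this post; it is not Stepstone's machinery or anybody's published specification. The point is only the flavor: the meaning of "Set" becomes whatever behavior survives the published test procedure.

/* Hypothetical sketch of a "go/nogo gauge": an executable test
 * procedure that pins down what producer and consumer agree the
 * word "Set" means.  The interface, the gauge, and the toy
 * implementation are all invented for this post. */

#include <stdio.h>
#include <stdlib.h>

/* Whatever the producer ships must be reachable through some such
 * table of operations before the gauge can be applied to it. */
typedef struct {
    void *(*create)(void);
    void  (*destroy)(void *s);
    void  (*add)(void *s, int e);
    void  (*drop)(void *s, int e);
    int   (*contains)(void *s, int e);
    int   (*size)(void *s);
} SetOps;

/* The gauge: returns 1 ("go") only if the putative implementation
 * behaves the way we agree a Set behaves. */
static int set_gauge(const SetOps *ops)
{
    void *s = ops->create();
    int ok = 1;

    ok = ok && ops->size(s) == 0;        /* a fresh set is empty          */
    ops->add(s, 42);
    ok = ok && ops->contains(s, 42);     /* what is added is a member     */
    ok = ok && ops->size(s) == 1;
    ops->add(s, 42);
    ok = ok && ops->size(s) == 1;        /* adding twice adds nothing new */
    ops->drop(s, 42);
    ok = ok && !ops->contains(s, 42);    /* what is dropped is gone       */
    ok = ok && ops->size(s) == 0;

    ops->destroy(s);
    return ok;
}

/* A deliberately naive array-based Set, just so the gauge has
 * something to measure. */
#define CAP 64
typedef struct { int item[CAP]; int n; } ArraySet;

static void *as_create(void)     { return calloc(1, sizeof(ArraySet)); }
static void  as_destroy(void *s) { free(s); }
static int   as_size(void *s)    { return ((ArraySet *)s)->n; }
static int   as_contains(void *s, int e)
{
    ArraySet *a = s;
    int i;
    for (i = 0; i < a->n; i++) if (a->item[i] == e) return 1;
    return 0;
}
static void  as_add(void *s, int e)
{
    ArraySet *a = s;
    if (!as_contains(s, e) && a->n < CAP) a->item[a->n++] = e;
}
static void  as_drop(void *s, int e)
{
    ArraySet *a = s;
    int i;
    for (i = 0; i < a->n; i++)
        if (a->item[i] == e) { a->item[i] = a->item[--a->n]; return; }
}

int main(void)
{
    SetOps ops = { as_create, as_destroy, as_add, as_drop,
                   as_contains, as_size };
    printf("gauge verdict: %s\n", set_gauge(&ops) ? "go" : "nogo");
    return 0;
}

The toy is easy, of course; the hard part Brad is pointing at is getting a library of such gauges large enough, and agreed on widely enough, that composing them really does amount to a specification language.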
      It's an interesting sort of task, but it's one that academic types are not attracted to, because a formal specification of a basically uninteresting library isn't going to make a good publication, and one which industrial types aren't likely to try unless someone comes up with the money.

   Re: academic types, please see Ralph London's formal specification (in Z, as I recall) of the Smalltalk Set class.

The Oxford people are an interesting case, because (for reasons I don't completely understand) you apparently get some sort of reward for doing things that wouldn't be considered quite "research" over here. Perhaps because if Tony Hoare and Robin Milner say it's good enough, no one would dare to object. Or perhaps it has to do with the Oxford name on the box. In any case, this is good stuff. There ought to be more of it. Why isn't there? Noting that there is one example doesn't make much of a case for it being a hot research area.

   Re: industrial types, please note my previous transmissions about Stepstone's use of specification/testing for all of our Software-IC products. We do not view this as an academic matter, but as a matter right at the heart of our business. How will a robust commercial marketplace in fine-granularity software components even be *thinkable* unless producers and consumers can develop a way of agreeing on what is to be bought and sold?

I'm with you here, and I'm looking forward to seeing the paper you've mentioned. Another part of this is the apparent insensitivity of the marketplace to issues of quality: you can't sell "more correct", but you can sell "faster" or "earlier availability". (Even Stepstone is selling rapid prototyping, not correct codes.)

   Actually, there are some academic types (e.g., me) who don't think the exercise is so trivial, and who don't believe that there are "no technical difficulties" in developing a well-designed library of reusable components. Even things like stacks, lists, associative search structures, graphs, etc., are NOT designed properly for reuse as they appear in the CS textbooks or in existing component libraries. In fact, though, judging from personal experience, Charlie is right if he is suggesting it is hard to convince many people of this. People don't really want to know WHY it's hard to "get it right", nor do they want to consider alternative designs even if they come with (so far, at least) an irrefutable rationale. This means it IS difficult to publish new designs for things people believe they already understand, and almost as difficult to find industrial funding. We hope, however, it does not turn out to be impossible :-).

   To be somewhat more controversial here, the academic priesthood is highly unlikely to make great contributions precisely *because* they're conditioned to believe that only technical problems are significant. They focus on the *weapons* (i.e. OO technology), and neglect the *war* (software industrial revolution). Please take it from one who *knows*: the costs of building commercially robust components are *not* in getting the code running. There is at least a ten-fold greater cost in getting them tested and documented, and another ten-fold increment in sales/marketing. Both of these latter steps (100-fold) are unrelated to *implementation*, and closely related to *specification*.

Since I'm at an academic site, I think I need to throw in the obligatory complaint about the implication that we at academic sites don't *know*. Don't fall for that; lots of us at .edu sites have done substantial real-world work.
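One concrete illustration of the point above about textbook structures, again sketched in plain C and again entirely invented for this post (neither interface below is anyone's actual library): the textbook stack never has to decide who owns the storage, what the element type is, or what happens on overflow, and those are exactly the decisions a consumer of a catalogued part needs pinned down before the part can be dropped into an arbitrary system.

/* Toy contrast: a "textbook" stack versus one shaped more like a
 * reusable component.  Both interfaces are invented for this example. */

#include <stdio.h>
#include <string.h>

/* --- Textbook flavour: one global stack of ints, fixed size,
 *     overflow and underflow silently undefined. ------------------- */
#define STACKSIZE 100
static int tb_stack[STACKSIZE];
static int tb_top = 0;
static void tb_push(int x) { tb_stack[tb_top++] = x; }   /* overflow?  */
static int  tb_pop(void)   { return tb_stack[--tb_top]; } /* underflow? */

/* --- Reuse-oriented flavour: element type, capacity, storage, and
 *     error policy all belong to the client; the component does no
 *     I/O and never exits the program on its own. ------------------ */
typedef struct {
    void  *items;      /* client-supplied storage           */
    size_t item_size;  /* size of one element in bytes      */
    size_t capacity;   /* number of elements storage holds  */
    size_t count;
} Stack;

typedef enum { STACK_OK, STACK_FULL, STACK_EMPTY } StackStatus;

static void stack_init(Stack *s, void *buf, size_t item_size,
                       size_t capacity)
{
    s->items = buf; s->item_size = item_size;
    s->capacity = capacity; s->count = 0;
}

static StackStatus stack_push(Stack *s, const void *item)
{
    if (s->count == s->capacity) return STACK_FULL;  /* caller decides */
    memcpy((char *)s->items + s->count * s->item_size, item, s->item_size);
    s->count++;
    return STACK_OK;
}

static StackStatus stack_pop(Stack *s, void *item_out)
{
    if (s->count == 0) return STACK_EMPTY;
    s->count--;
    memcpy(item_out, (char *)s->items + s->count * s->item_size,
           s->item_size);
    return STACK_OK;
}

int main(void)
{
    double buf[8];
    double x = 3.14, y = 0.0;
    Stack s;

    tb_push(1);                      /* works until it silently doesn't */
    printf("textbook pop: %d\n", tb_pop());

    stack_init(&s, buf, sizeof(double), 8);
    stack_push(&s, &x);
    if (stack_pop(&s, &y) == STACK_OK)
        printf("component pop: %g\n", y);
    return 0;
}

Even the second version leaves plenty undecided (growth policy, sharing vs. copying of elements, concurrency), which is part of why "just a stack" is harder to get right for reuse than it looks.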
But let's continue the war analogy a little further. Outside of a few special cases (the British vs. the Zulu, say), neither the weapons nor the desirability of the goal has really affected the way the war came out. What does seem to make a difference is the right tactics. So far, we seem to be agreeing that the right tactics are more "formal" or rigorous in some sense. What I wish we could do is measure the effect of those tactics.

Charlie Martin (...!mcnc!duke!crm, crm@summanulla.mc.duke.edu)
O: NBSR/One University Place/Suite 250/Durham, NC 27707/919-490-1966
H: 13 Gorham Place/Durham, NC 27705/919-383-2256