Path: utzoo!attcan!uunet!aplcen!uakari.primate.wisc.edu!zaphod.mps.ohio-state.edu!tut.cis.ohio-state.edu!mailrus!ames!haven!uvaarpa!mcnc!duke!romeo!crm
From: crm@romeo.cs.duke.edu (Charlie Martin)
Newsgroups: comp.software-eng
Subject: Re: Reuse and Abstraction (was: reu
Message-ID: <19946@duke.cs.duke.edu>
Date: 31 May 90 15:29:14 GMT
References: <4979@stpstn.UUCP> <102100009@p.cs.uiuc.edu>
	<80449@tut.cis.ohio-state.edu> <19614@duke.cs.duke.edu>
	<80685@tut.cis.ohio-state.edu> <19760@duke.cs.duke.edu>
	<80884@tut.cis.ohio-state.edu> <5122@stpstn.UUCP>
	<19855@duke.cs.duke.edu> <5131@stpstn.UUCP>
Sender: news@duke.cs.duke.edu
Reply-To: crm@romeo.UUCP (Charlie Martin)
Organization: Duke University CS Dept.; Durham, NC
Lines: 162

You know, I really love it when these net discussions get productive
like this.....

In article <5131@stpstn.UUCP> cox@stpstn.UUCP (Brad Cox) writes:
>
>The IEEE article argues that we should begin thinking of object-oriented
>(and older) technologies, not as fancier ways of building things from
>scratch, but as modularity/binding technologies.  And just as other domains
>have become quite adept at deploying diverse modularity/binding technologies
>in common (i.e. sheep herding, shearing, spinning, weaving, sewing,
>zippers/buttons/bows can all be viewed as modularity/binding technologies
>at different, but thoroughly compatible levels), we need to stop viewing each
>technology as a panacea (i.e. Ada vs Smalltalk, or Objective-C vs C++), but
>as a tool relevant to a specific level of the software producer/consumer
>hierarchy, and usually irrelevant at others.  For example, the fact that
>C++ provides outstanding gate/block-level integration facilities is
>relevant to those whose work demands a better C, and irrelevant to those
>whose work demands a chip-level (Software-IC) modularity/binding technology.
>

I *think* I agree with you to a great extent, although I'm beginning to
get a bit lost here.  My understanding and responses:

We shouldn't think of OOP and reuse as ways of building things from
components.  Instead they represent "modularity/binding technologies":
different mechanisms for combining what you have into what you want,
depending on the level of abstraction implied by what you need.
Different languages and such are suited to different levels of
abstraction.  As an example, C++ is well suited to binding at a low
level (analogous to discrete components) and other things are better
suited to a higher level.  You give Software-IC as an example, from
which I presume you mean that Objective-C is such a higher-level
mechanism.

The level-of-abstraction point is a good one, and is why we've mostly
stopped writing machine language programs.  (There are only a few
levels of abstraction where machine language is appropriate.)  But
every time I try to get my hands on the modularity/binding-technologies
point, it falls apart and seems to turn into "putting things together
from components."  If you have higher-level components it is easier, so
long as the higher-level components are suited to what you are trying
to do.  I guess that sounds tautologous now.  Can you expand on your
point a little?

On the gate-level vs. higher-level point, I've used both Objective-C
and C++ and I have to admit I didn't see much difference in the
language per se.  There is/was a bigger library of higher-level
components to draw on, but then this is the result of longer and
better-funded development of those components.
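Just so it's clear what picture I have in my head, here's a toy sketch
of the two levels as I understand them.  All the names and classes
below are invented on the spot for illustration -- this is not
Stepstone's library or anyone else's, and I may well be misreading you:

    /* "Gate level": building a little text buffer by hand out of raw parts. */
    #include <string.h>

    class CharBuffer {
        char buf[1024];
        unsigned len;
    public:
        CharBuffer() : len(0) { buf[0] = '\0'; }
        void append(const char *s) {
            unsigned n = strlen(s);
            if (len + n >= sizeof(buf)) return;  /* growth elided in this sketch */
            memcpy(buf + len, s, n);
            len += n;
            buf[len] = '\0';
        }
        const char *contents() const { return buf; }
    };

    /* "Chip level": plugging into a prefabricated editor component through
       its published interface, never looking inside the package. */
    class TextEditor {                     /* the "socket", so to speak */
    public:
        virtual void insert(const char *s) = 0;
        virtual void save(const char *filename) = 0;
        virtual ~TextEditor() {}
    };

    void takeNotes(TextEditor &ed) {       /* client binds only to the socket */
        ed.insert("notes from today's meeting");
        ed.save("notes.txt");
    }

At the gate level I'm soldering the parts myself; at the chip level
whatever sits behind the socket is somebody else's problem, so long as
it honors the interface.  Is that roughly the distinction you mean?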
I once had a small argument with Tom Love that came down to his saying:
"Here's a big application.  How many lines for you to write it from
scratch, no fair copying in parts of vi or using dbm?"  "50,000 lines."
"I can write it in Objective-C in 200 lines."  "How?"  "Well, first I
instantiate my editor and data base class libraries...."  "Isn't that
reuse?"  "No, that's class inheritance."  The point being that if you
*define* the problem statement the right way, it's easy to get the
right answer.

So is your second point that Objective-C is that much higher level than
C++, and if so, on what basis?  Or is it that the bigger IC libraries
give OC the advantage?

>In other words, think of Objective-C, not as a *language*, but an
>environment that supports a modularity/binding technology analogous to
>socketed pluggable hardware ICs.  There are many ways of building
>chips that can plug into these sockets, and many vendors who are doing
>so (many of whose work has been bought repackaged, and is now being
>distributed by Stepstone).  But as long as the components abide by the
>socket standard, they need not be built via Objective-C.  They
>can be built in plain C, or C++, or even Cobol, and Objective-C
>components can be used via any of these.  It's only a matter of abiding
>by the low-level socket standard...which brings me right back to where
>we started, the need for specification/testing languages capable of
>describing, and then verifying compliance to, these standards.

I still don't see that OC is particularly advantageous compared to C++,
but maybe that's a side issue.  In any case, it's your product and your
idea, and you're entitled to some pride of ownership: it really is a
substantial thing.

One problem with the "socket standard" -- and this is a practical
problem, not an objection to your definition -- is that the number of
reasonable degrees of freedom is so large in software.  Unlike
circuits, the bandwidth of our connections is very high.  This leaves
more room for variation, and it gets used.

>>Also, my experience with both Smalltalk and Objective-C (admittedly
>>limited) has been that to use the libraries of classes available, one
>>had to buy into a lot of architectural assumptions that might or
>>might not fit what was needed, e.g., garbage-collection or
>>interpretive codes.  (At a little higher level, things like
>>model-view-controller.)  To use any other architectural assumption
>>meant backing up and rebuilding the universe from primitives.

>Software's abstract/concrete hybrid nature is simultaneously our curse, and
>our blessing (in that it guarantees us higher salaries than those who work
>in more concrete domains, like flipping burgers).  But apart from this
>fundamental difference, I'd argue that other domains experience exactly
>the same kind of complications, but have grown thoroughly adept at mastering
>them.  For example, you might want to take advantage of a hardware store's
>new inventory in copper plumbing pipes, but in a house with existing iron
>pipes, you'd face corrosion problems; a preexisting architectural decision.
>But somehow mature industries like plumbing always seem to provide a way
>of getting around incompatible architectures; i.e. by selling a suitable
>adaptor.  I grant you, we're far more immature an industry than plumbers,
>and off-the-shelf adaptors are still rare and raise cultural objections,
>such as thinking of such adaptors as kludges.
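For what it's worth, here is the kind of thing I take a software
"adaptor" to be -- again a toy sketch with names I've made up on the
spot, not anybody's real product: a wrapper that lets a component built
under one set of architectural assumptions present the interface that
another body of code expects.

    /* Invented names throughout; only the shape of the thing matters. */

    class VendorSpellChecker {           /* the "iron pipe": its own conventions */
    public:
        int checkWord(const char *w) { return 0; }  /* 0 == spelled correctly;
                                                       stubbed for the sketch */
    };

    class SpellingService {              /* the "copper pipe": what my code expects */
    public:
        virtual int isCorrect(const char *word) = 0;   /* nonzero == correct */
        virtual ~SpellingService() {}
    };

    class VendorAdaptor : public SpellingService {     /* the pipe fitting */
        VendorSpellChecker &vendor;
    public:
        VendorAdaptor(VendorSpellChecker &v) : vendor(v) {}
        int isCorrect(const char *word) { return vendor.checkWord(word) == 0; }
    };

The trouble, per the degrees-of-freedom point above, is that the
mismatch is rarely this tidy: the assumptions that really hurt
(garbage collection, interpretive codes, model-view-controller) don't
always fit behind a one-class fitting.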
>
>> Re: academic types, please see Ralph London's formal specification (in Z,
>> as I recall) of the Smalltalk Set class.
>>
>
>I get your point re: Oxford, but Ralph London is an American thru and
>thru.  Works at Tektronix Labs, I think in Beaverton.  Wish I had a
>reference to his article to give you, but it was beautiful work, and
>insightful into the difficulty of applying a mathematical approach to a
>well-understood piece of concrete engineering effort.

Sorry, I'd remembered the paper as having been published as an Oxford
monograph.  Can't find the reference right at hand.  But this is still
someone other than an academic....

>Without disagreeing with you one bit that quality costs money that the
>market can be reluctant to pay, be aware that although Stepstone's code
>started out as rapid prototypes (like any other code), after seven
>years of improving it is considerably better than that.  The same is
>true of Smalltalk, except they've been at it longer (twenty years or
>so).  As to *correct*, we're back to the original topic...what can it
>even *mean* if I say Stepstone's *putative* Set is an *actual*
>implementation of some abstract definition of Set, other than to point
>to our set of gauges that at least does verify compliance within some
>stated tolerance.  Since we *have* gone so far as to define what we mean
>by *correct*, I'd claim (sticking neck waaay out) that our code is not
>mere rapid prototype, but also *correct* within a defined sense of
>"correctness".  I'd be surprised if we'll ever be truly able to exceed
>that level, based on the fact that other mature disciplines (bridge
>builders, plumbers, space shuttles, etc.) have never managed to.

Sorry, I think I wasn't clear there: what I meant to say was that the
*advantage* you-all sell is not greater code correctness -- which
doesn't seem to sell well -- but the productivity advantages.  Or am I
mistaking your approach from the PPI days?

On the Brits-vs-Zulus point, what I meant was that superior weapons
must be DAMNED superior before they are a telling point in warfare.
The British defeated many thousands of Zulus armed with spears and
shields by using Gatling guns and just a few hundred soldiers.  It
wasn't because the Zulus were unskilled or cowards (the opposite: a
prudent force would have run like rabbits!) but because in that case
the technology really was that much better.  Your US-in-Viet-Nam
example is a good analogy for the other side: given appropriate
tactics, small, similarly armed forces can whup bigger ones.  And my
point was that maybe it isn't the coding etc. technology at all, but
rather Something Else.  I think rigor in reasoning about software MIGHT
be that something else.

Charlie Martin (...!mcnc!duke!crm, crm@summanulla.mc.duke.edu)
O: NBSR/One University Place/Suite 250/Durham, NC 27707/919-490-1966
H: 13 Gorham Place/Durham, NC 27705/919-383-2256