Path: utzoo!utgpu!jarvis.csri.toronto.edu!rutgers!aramis.rutgers.edu!athos.rutgers.edu!nanotech
From: dmocsny@uceng.uc.edu (daniel mocsny)
Newsgroups: sci.nanotech
Subject: Re: Intractability of active-shield testing
Message-ID:
Date: 22 Jun 89 02:33:20 GMT
Sender: nanotech@athos.rutgers.edu
Organization: Univ. of Cincinnati, College of Engg.
Lines: 66
Approved: nanotech@aramis.rutgers.edu

In article , dmo@turkey.philips.com (Dan Offutt) writes:
> Suppose that AI-based design systems that can think a million times as
> fast as a human designer become possible, inexpensive, and numerous.
> What changes would this imply in the rate of technological advance?

It would change everything in ways we can hardly imagine at present.
But we can amuse ourselves by speculating, and arguing ;-)

> ... the increase will be much less than
> proportional to the hardware speedup obtained.  Million-times-faster
> designers cannot bring in one year the designs that unspeeded
> designers would bring in a million years.  One reason, briefly, is
> that a speedup in conscious design cannot serve as a substitute for
> real-world testing of design realizations.  Real-world testing takes
> time, cannot be speeded up without substantial risk, and produces
> empirical data about design performance that cannot be obtained in
> any other way and which is a critical ingredient in subsequent
> design efforts.

OK, but hold on a second! Think about all the data consumers generate
every day that vendors have no choice but to ignore, because (1) they
can't handle the data volume, (2) no communication systems are in
place to make gathering the data easy, and (3) the data is unavailable
for political reasons (e.g., trade secrets, inter- and intra-corporate
rivalry).

If we grant your original premise, that mechanical super-intelligence
is cheap and ubiquitous (and further assume that humans will be able
to stay on top of it!), then vendors will have *vastly* increased
ability to gather data and use it. Similarly, consumers will have a
vastly increased ability to record and report complaints. Even if
real-world data doesn't get generated any faster, if we simply start
using a vastly larger portion of the data now going to waste, product
improvements will speed up drastically.

Think of all the millions of consumers out there using all of those
products. How much time passes now before major design flaws filter
back to the vendors and are corrected? Too much.

Similarly, enormous amounts of data are already available for every
major product category one cares to name. I suggest that most of what
a present-day vendor will learn from a product-testing program must
already be available in principle. Here we can divide the data into
essence and accident. The accidents are all those things you should
have already known (for example, so many automobiles have been sold
that by now the general outline of consumer preference should not be
any great surprise), whereas the essence is whatever really is new
about the product and heretofore untested.

With massive increases in data-gathering and -handling power, vendors
will be able to greatly increase their efficiency in designing
products that work the first time. But we will observe even more
fundamental changes. For example, if we had super-intelligent
machines, we would probably not use them to build better automobiles.
Instead, we would no longer need the present levels of automobile use,
because the existence of such machines would imply the existence of
communication technology fast and transparent enough to make most of
our present physical travel a waste of time.

When the jet engine appeared, nobody tried to mount one on a horse.
New technological capabilities do not always help you do better what
you are already doing. Instead, they often push you into doing
entirely new things.

Dan Mocsny
Internet: dmocsny@uceng.UC.EDU
Snail: Dept. of Chemical Engng. M.L. 171, University of Cincinnati,
       Cincinnati, Ohio 45221-0171
513/751-6824 (home)
513/556-2007 (lab)