Path: utzoo!utgpu!news-server.csri.toronto.edu!rutgers!apple!agate!ucbvax!DECWRL.DEC.COM!mwm
From: mwm@DECWRL.DEC.COM (Mike Meyer, My Watch Has Windows)
Newsgroups: comp.society.futures
Subject: Re: Thinking Machines
Message-ID: <9011301857.AA03669@raven.pa.dec.com>
Date: 30 Nov 90 18:57:22 GMT
Sender: daemon@ucbvax.BERKELEY.EDU
Organization: The Internet
Lines: 22

>> I feel that people would be too tempted to let such machines take over
>> previously human thinking tasks such as figuring out difficult
>> mathematical problems or searching for new elementary physics particles
>> or even writing poetry.

So? Having done a few of these, I will categorically state that having
non-human intelligences doing them wouldn't keep me from doing them. People
motivated to do those things will do them whether NHIs are doing the same
thing or not; people not motivated to do them won't, whether NHIs etc. Can
you propose a mechanism whereby having the NHIs doing these things would
cause people to be non-motivated?

On the other hand, having a non-human viewpoint on a problem could lead to
a solution that otherwise wouldn't be found. For the activities you talk
about, this isn't critical. But consider critical problems that humans
haven't found good solutions to (or at least haven't been able to apply
globally when solutions have been found): freedom from vs. freedom to;
distribution of wealth; greed; aggression; etc. Potentially solving these
problems is worth quite a bit of risk.