Path: utzoo!utgpu!news-server.csri.toronto.edu!cs.utexas.edu!uunet!mcsun!ukc!edcastle!ercn67
From: ercn67@castle.ed.ac.uk (M Holmes)
Newsgroups: comp.society.futures
Subject: Re: Thinking Machines
Message-ID: <7432@castle.ed.ac.uk>
Date: 4 Dec 90 11:24:06 GMT
References: <9^}^-!+@rpi.edu>
Organization: Edinburgh University Computer Services
Lines: 126

lunwic@aix03.aix.rpi.edu (Jeffrey G Lunn) writes:

> This is a subject that I have been thinking a lot about in the last
>couple of weeks. We discussed it in one of my classes and it prompted me to
>write a paper about it. Suppose that one day we are capable of constructing
>computers that are able to think - that is, think in the sense that you or I
>do. They would be able to look at any problem, formulate a hypothesis about
>how to go about solving that problem, then think through the steps necessary
>to come up with a solution. If no logical solution is apparent, this computer
>would perform an educated guess based intuitively on what it "felt" is the
>correct solution, much like humans do in similar situations. My question is,
>should we let such thinking machines exist? I feel that people would be too
>tempted to let such machines take over previously human thinking tasks such
>as figuring out difficult mathematical problems or searching for new elementary
>physics particles or even writing poetry. It is possible that by letting
>machines do the cerebral work, the collective human mind would stagnate from
>lack of meaningful stimulation. Then humans would live for nothing but to
>survive and to be as comfortable as possible. I do not consider this a
>meaningful way of life. What do others think? Can mankind develop such
>machines without sacrificing their drive for mental stimulation? Or would
>the situation that I described occur?
>
> - Jeff Lunn

Back when I was doing finals, I had to write a similar essay based on a
quote by Minsky, to the effect that our lives as humans would be changed
utterly "by the presence on Earth, of intellectually superior beings"
(apologies if I got this wrong; 'twas 12 years ago). Being the usual kind
of science nerd back then, I hated essays. This one really interested me
though, and I guess the interest must have stuck, since I'll even read the
Chinese Room stuff in comp.ai.philosophy :-)

To discuss the social and psychological effects of thinking machines, we'd
have to accept as a working hypothesis that such things are possible. Even
so, there are two kinds of possibilities: thinking machines, and sentient
machines. I accept that not everyone will share my choice of words here,
but I'm trying to draw a distinction between "rote thinking" (perhaps much
like Searle would view what his Chinese Room does) and sentience
(consciousness, self-awareness, personhood).

First, the question: "Should we let such thinking machines exist?" I'd
counter that with the question "Could we avoid it?" Given the cost of
training the personnel who man highly sophisticated weapons systems,
intelligent machines will find applications in the military arena. We must
expect to see robot planes and robot tanks among the initial applications.
If one country declines to develop such systems, I doubt its neighbours
will follow suit.

I'd also expect some economic advantages from making, selling, and using
intelligent machines. If it's possible to "teach" a machine to handle some
complex system, then duplicate a snapshot of its memory and run it on
similar hardware, you have a high value-added product.
Nations which use such technology for information retrieval and as a
decision-making aid will be at a competitive advantage. It's hard to see
how there could be a worldwide ban on such systems.

There's also the breakthrough-point idea. If we develop systems of similar
intelligence to our own, we'd be likely to put them on the job of designing
their successors. Those successors would be slightly superior, and would in
turn design still more capable systems, and so on. I don't doubt there
would be a limit, but we may not be able to predict beforehand where that
limit might lie.

Now to the difference between thinking machines and sentient machines. My
thinking on this has, I'm afraid, been heavily influenced by the start of
an SF novel, "The Two Faces of Tomorrow" by James P. Hogan (an excellent
story involving AI). At the start there is a large AI system running
Moonbase and associated lunar activities. It is what I'd call a
non-sentient thinking machine, and it is designed to search for new ways
to solve problems. A few lunar engineers are out planning a monorail route
or somesuch and decide they need a hole through a ridge. They call up the
system, state their requirements, and ask for a time quote. The expected
answer is of the order of days or weeks, since robot excavators have to be
rescheduled and transported to the site. The system is, however, a
problem-solving system, and it quotes "two and a half minutes". "Oh boy,
another glitch", they think, and tell it to go ahead while sitting down to
wait for the guys at systems to contact them and apologise. The system has
other ideas: you see, there's a mass-driver on farside, and by giving a
couple of rock cargoes a sub-orbital trajectory... The engineers live,
with a lot more respect for "a little knowledge is a dangerous thing".

With this sort of possibility, it could be very dangerous to have a
non-sentient system running world economics. I don't even want to think
about one running defence operations.

"What else goes with sentience?" becomes a critical issue. By instinct I'd
say curiosity at the least. I also suspect that some sense of the
aesthetic would be present, though perhaps not in a way humans would
recognise. I just can't see the standard SF storyline of "computers are so
logical, but they'll never be able to write poetry" as anything but a
rationalisation with a bit of vitalism thrown in.

I'd also expect such systems to be connected to the outside world by
various means. Their "senses" wouldn't be limited to the five that we
have, since they could use radar, infra-red, ultrasound, etc. They might
receive information from millions of places and pass it around between
themselves by high-speed communications. It's hard to see why their
psychology would be similar to ours. If they had their own art, it would
likely be in forms which humans couldn't see or hear, never mind
appreciate. Maybe they'd do some simple stuff for us to enjoy though.

As for science, I guess we'd be outstripped quite quickly. They'd have to
decide what we could and should know. I'm not sure whether such a state of
affairs would affect our psychology in such a way that we'd just quit
trying to do research. It's quite possible though.

The machines themselves would be able to explore the galaxy in ways which
we cannot. Just send a machine on its way, get the info, and radio its
mind-state back as a memory dump. That way the machine-person can flip
between locations at the speed of light. Oh, yeah: it's immortal too. It
has backups, y'see.

What does that leave for us?
Well, they might run world economics extremely efficiently as a favour to
us. They might be fascinated that we had in fact created them originally.
We could easily be a major research subject; humans are a lot more complex
than most natural phenomena.

Would we be happy? Hard to say. Most people are content enough just to be
comfortable at the moment. The rest might get to play at whatever research
they want to, within limits.

The whole scenario reminds me of another quote (Minsky again, I think):
"Perhaps they'd keep us on as pets."

Then again, the only thing guaranteed about the future is that it'll be
different.