Path: utzoo!utgpu!jarvis.csri.toronto.edu!rutgers!cs.utexas.edu!uunet!mcvax!ukc!etive!aipna!edai!cam
From: cam@edai.ed.ac.uk (Chris Malcolm cam@uk.ac.ed.edai 031 667 1011 x2550)
Newsgroups: comp.ai
Subject: Re: Turing Test and Subject Bias
Message-ID: <416@edai.ed.ac.uk>
Date: 12 Jun 89 20:37:54 GMT
References: <3018@crete.cs.glasgow.ac.uk> <1108@hydra.cs.Helsinki.FI> <3039@crete.cs.glasgow.ac.uk> <408@edai.ed.ac.uk> <3079@crete.cs.glasgow.ac.uk>
Reply-To: cam@edai (Chris Malcolm)
Organization: University of Edinburgh, Edinburgh
Lines: 25

In article <3079@crete.cs.glasgow.ac.uk> gilbert@cs.glasgow.ac.uk (Gilbert Cockton) writes:
>So what is the common practice?
>Again, how *DO* AI types test their systems?

As I've said before, in ways as various as the intended capabilities of
the systems. For example (you're not going to like this!), I'm
developing a system which can plan how to assemble shapes out of parts.
How do I test it? I tell it the shapes of the parts and the shape to
build, and then watch the robot build the shape (or fail, as the case
may be). The criterion is simple and indubitable. I cannot imagine there
ever being any dispute about whether or not the robot succeeded (except
trivial borderline pedantries).

By developing I mean that I'm trying to extend the capabilities of the
system. It is not a complicated system; there are probably thousands of
ways in which it could be built. What is interesting is that some ways
are very simple, whereas others are very complex. What is even more
interesting is why, i.e., the interesting research questions concern
good (simple, economical) architectures for building systems capable of
successful thought and action in a world.
--
Chris Malcolm    cam@uk.ac.ed.edai    031 667 1011 x2550
Department of Artificial Intelligence, Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK