Path: utzoo!utgpu!jarvis.csri.toronto.edu!rutgers!apple!versatc!mips!prls!philabs!linus!mbunix!bwk
From: bwk@mbunix.mitre.org (Barry W. Kort)
Newsgroups: comp.ai
Subject: Re: Free will and responsibility.
Summary: Therapy for Carbon-Based Neural Networks and Silicon-Based Machines
Keywords: Libertarianism, behaviorism, existentialism
Message-ID: <56038@linus.UUCP>
Date: 13 Jun 89 21:01:56 GMT
References: <10333@ihlpb.ATT.COM> <3850@uhccux.uhcc.hawaii.edu> <52019@linus.UUCP> <54908@linus.UUCP> <1385@lzfme.att.com>
Sender: news@linus.UUCP
Reply-To: bwk@mbunix (Barry Kort)
Organization: Protoplasmics Ltd., Cleft Chasm, NM
Lines: 19

In article <1385@lzfme.att.com> jwi@lzfme.att.com
(Jim Winer @ AT&T, Middletown, NJ) writes:

 > My own experience has shown that accurate observational reports are
 > useful only if you (as therapist) are trained (and willing),
 > and the subject (patient) is also willing...

I agree.  Therapy must be by mutual consent.

 > It would be interesting to put an artificial intelligence into
 > abreactive crisis.  I have no idea how this would be done, or even
 > what it would mean, given the state of the art -- but it would be
 > interesting.

I think there was a Star Trek episode in which Kirk gave the machine
its "Goedel Sentence", and the machine, realizing the error of its
ways, turned itself off.

--Barry Kort