Path: utzoo!attcan!utgpu!jarvis.csri.toronto.edu!rutgers!att!cbnewsh!mbb
From: mbb@cbnewsh.ATT.COM (martin.b.brilliant)
Newsgroups: comp.ai
Subject: Re: Therapy for Carbon-Based Neural Networks and Silicon-Based Machines
Message-ID: <1617@cbnewsh.ATT.COM>
Date: 21 Jun 89 14:07:31 GMT
References: <1420@lzfme.att.com>
Organization: AT&T Bell Laboratories
Lines: 39

From article <1420@lzfme.att.com>, by jwi@lzfme.att.com (Jim Winer @ AT&T, Middletown, NJ):
> .....
> Forcing an abreactive state for a machine intelligence would involve
> simulating an earlier surround in which there is a highly charged
> emotional state. For sequential computing devices, this seems
> unlikely. For a neural net this might correspond to a period of
> intense negative feedback.....

It shouldn't be that hard.  The therapeutic situation Jim described is
basically what happens when a learning system makes an invalid
generalization.  Suppose it mistakenly concludes that a certain set of
conditions is too dangerous to allow at any time.  Sure, if it is a
simple system you go in and do a manual adjustment, and a man-made
system should be built so you can do that, but suppose for some reason
you couldn't do that.  You would have to force the system to go back
and re-evaluate that supposedly dangerous situation.

> ...... Thus, in a complex net, changing a strong
> early pattern of response might have interesting effects on the
> response to later patterns (if not totally destructive). I wonder if
> there would be any behavior changes that might relate to the type of
> behavior changes that result from putting a human through an
> abreactive crisis?

Yep, I think that's the right question.  I'm suggesting that the
machine would resist therapy at first.  If it consented to review its
assumptions, it might have to unlearn and relearn everything it
learned after it made the incorrect generalization.  It might behave
rather immaturely for a while.

I'm not thinking necessarily of a net, but more of an expert system
that learns from its Q and A's.  To stay sane, such a system would
have to remember when and how it learned what it thinks it knows.

M. B. Brilliant                                 Marty
AT&T-BL HO 3D-520       (201) 949-1858
Holmdel, NJ 07733       att!hounx!marty1 or marty1@hounx.ATT.COM

Disclaimer: Opinions stated herein are mine unless and until my
employer explicitly claims them; then I lose all rights to them.
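
[Editorial sketch]  The closing idea, an expert system that remembers when and
how it learned each thing it believes, can be illustrated with a toy
provenance-tracking knowledge base.  Everything below (the class names, the
fields, the sample "valve" facts, the retraction logic) is my own hypothetical
illustration, not anything taken from the post: each belief records a
timestamp and the beliefs that supported it, and retracting an invalid
generalization also retracts whatever was derived from it afterward, which is
roughly the "unlearn and relearn" the article describes.

    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class Belief:
        statement: str
        learned_at: datetime                               # when it was learned
        learned_from: list = field(default_factory=list)   # how: supporting beliefs

    class KnowledgeBase:
        """Toy expert-system memory that records the provenance of each belief."""

        def __init__(self):
            self.beliefs = {}        # statement -> Belief

        def learn(self, statement, supports=()):
            self.beliefs[statement] = Belief(statement, datetime.now(), list(supports))

        def unlearn(self, statement):
            """Retract a belief and, recursively, every belief that depended on it."""
            if statement not in self.beliefs:
                return
            del self.beliefs[statement]
            dependents = [s for s, b in self.beliefs.items()
                          if statement in b.learned_from]
            for s in dependents:
                self.unlearn(s)

    # "Therapy": the invalid generalization and its descendants are unlearned,
    # leaving earlier, independently learned beliefs intact.
    kb = KnowledgeBase()
    kb.learn("valve V3 sticks when cold")
    kb.learn("all valves are unsafe below 0 C",          # the invalid generalization
             supports=["valve V3 sticks when cold"])
    kb.learn("shut the plant down every winter",
             supports=["all valves are unsafe below 0 C"])

    kb.unlearn("all valves are unsafe below 0 C")
    print(sorted(kb.beliefs))   # ['valve V3 sticks when cold']

The re-evaluation step the reply asks for would then amount to re-running the
learning dialogue over the retracted region of the knowledge base, which is
exactly why such a system would need the provenance records in the first place.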