Path: utzoo!attcan!uunet!tut.cis.ohio-state.edu!pt.cs.cmu.edu!andrew.cmu.edu!jk3k+
From: jk3k+@andrew.cmu.edu (Joe Keane)
Newsgroups: comp.ai
Subject: Re: Are neural nets stumped by change?
Message-ID:
Date: 15 Aug 89 05:21:19 GMT
References: <4331@lindy.Stanford.EDU>
Organization: Mathematics, Carnegie Mellon, Pittsburgh, PA
Lines: 33
In-Reply-To: <4331@lindy.Stanford.EDU>

In article <4331@lindy.Stanford.EDU> GA.CJJ@forsythe.stanford.edu (Clifford Johnson) writes:
>In my original message I did clarify this somewhat.  The point is
>that neural nets in essence automate Bayesian types of induction
>algorithms.  In adapting to change, they only do so according to
>statistical/numerical rules that are bounded by their (implicit
>or explicit) preprogrammed characterizations and
>parameterizations of their inputs.

Some neural networks have carefully hand-crafted topologies.  But if you use
a standard topology and a standard training algorithm in a new domain, where
is the ``preprogramming''?  Similarly, with a standard topology you aren't
giving the net any ``parameterization''; it learns the parameters all by
itself.  (The first sketch at the end of this post makes that concrete.)

>Thus, a change in the basic
>*type* of pattern is beyond their cognition.

This doesn't follow.  It may seem intuitive to you, but I think it's false.
Fill in some more steps and I'll tell you where I think the problem is.

>Second, a change in
>the parameters of patterns they can adaptively recognize is only
>implemented over the time it takes for them to make enough
>mistakes that the earlier statistics are in effect overwritten.

What's this about mistakes?  You can train a net simply by reinforcing it on
good examples.  But it is often better to use corrective training: you
_force_ the net to make a mistake and then correct it, which is exactly what
gets it well trained.  (See the second sketch below.)

A neural net can have some characteristics which are strongly selected and
some which are easily changed.  So it can learn a new part of the input
space without much changing its performance on the part it has already
learned.  This behavior can come out of the simplest nets.
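To make the ``no preprogramming'' point concrete, here is a toy sketch in
Python with numpy (my choice of language; every name, constant, and data set
below is mine, not from the article).  A stock two-layer net is trained by
plain backpropagation on XOR.  The topology and learning rate are the only
things fixed in advance; all the weights are learned from the examples.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # A standard topology: 2 inputs, 4 hidden units, 1 output.
    # Nothing here encodes the domain; the weights start random.
    n_in, n_hid = 2, 4
    W1 = rng.normal(0.0, 0.5, (n_hid, n_in))
    b1 = np.zeros(n_hid)
    W2 = rng.normal(0.0, 0.5, n_hid)
    b2 = 0.0

    def forward(x):
        h = sigmoid(W1 @ x + b1)
        return h, sigmoid(W2 @ h + b2)

    def train(examples, epochs=5000, lr=0.5):
        # Plain backpropagation; the net finds its own ``parameterization''.
        global W1, b1, W2, b2
        for _ in range(epochs):
            for x, t in examples:
                h, y = forward(x)
                dy = (y - t) * y * (1.0 - y)      # output delta (squared error)
                dh = dy * W2 * h * (1.0 - h)      # hidden deltas
                W2 -= lr * dy * h
                b2 -= lr * dy
                W1 -= lr * np.outer(dh, x)
                b1 -= lr * dh

    # XOR: a mapping no single input statistic captures.
    xor = [(np.array([0.0, 0.0]), 0.0), (np.array([0.0, 1.0]), 1.0),
           (np.array([1.0, 0.0]), 1.0), (np.array([1.0, 1.0]), 0.0)]
    train(xor)
    for x, t in xor:
        print(x, '->', round(float(forward(x)[1]), 2), '(target', t, ')')

Depending on the random start it may need more epochs, but the point stands:
nothing in the code characterizes the input domain.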
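And on ``mistakes'' and stability: in the classic perceptron rule the weights
move only on errors, so deliberately presenting examples the net currently
gets wrong is exactly the corrective training described above, while examples
it already handles leave the weights alone.  A toy sketch, assuming linearly
separable data (again Python/numpy; names and data are mine):

    import numpy as np

    def train_perceptron(w, examples, epochs=100):
        # Error-driven (corrective) updates: weights change only on mistakes.
        for _ in range(epochs):
            for x, t in examples:                  # t is +1.0 or -1.0
                xb = np.append(x, 1.0)             # input plus a bias term
                if t * (w @ xb) <= 0.0:            # currently misclassified
                    w = w + t * xb                 # correct toward this example
        return w

    def accuracy(w, examples):
        return np.mean([t * (w @ np.append(x, 1.0)) > 0.0 for x, t in examples])

    rng = np.random.default_rng(1)
    label = lambda p: 1.0 if p[0] + p[1] > 0.0 else -1.0

    # Region A: points near the origin, labeled by the sign of x0 + x1.
    A = [(p, label(p)) for p in rng.normal(0.0, 1.0, (50, 2))]
    # Region B: a new patch of the input space, same underlying rule.
    B = [(p, label(p))
         for p in rng.normal(0.0, 1.0, (50, 2)) + np.array([5.0, -5.0])]

    w = train_perceptron(np.zeros(3), A)
    print('A after training on A:', accuracy(w, A))
    w = train_perceptron(w, B)                     # now learn the new region
    print('A after also training on B:', accuracy(w, A), ' B:', accuracy(w, B))

Because correct examples cause no updates, training on the new patch B
perturbs the boundary only where the net was wrong, and accuracy on A stays
high.  That is the sense in which a simple net learns a new part of the input
space without overwriting what it already knows.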