Path: utzoo!utgpu!jarvis.csri.toronto.edu!mailrus!tut.cis.ohio-state.edu!gem.mps.ohio-state.edu!ctrsol!sdsu!ucsd!ogccse!orstcs!tgd
From: tgd@orstcs.CS.ORST.EDU (Tom Dietterich)
Newsgroups: comp.ai
Subject: Re: Backpropagation applications
Summary: RE: approximation results "useless"
Message-ID: <13658@orstcs.CS.ORST.EDU>
Date: 9 Nov 89 06:07:43 GMT
References: <1690@cod.NOSC.MIL> <11283@phoenix.Princeton.EDU>
Organization: Oregon State University, Corvallis
Lines: 22

By approximation results, I assume you are referring to various proofs
that multi-layer feedforward networks can, with sufficient numbers of
hidden units, approximate any function arbitrarily closely.  Some
people have jumped from these results to the conclusion that neural
networks can learn any function.  This is true, but only if there is
no bound on the amount of training data presented to the learning
system (and of course, no bound on the number of hidden units).

In real applications, the important question is "Can learning
algorithm X learn my unknown function given that I have only M
training examples?"  In other words, how effectively does the
learning algorithm exploit its training data?  The approximation
results provide no insight into this question, which is why they are
not very useful.

For further details, see "Limitations on Inductive Learning",
Proceedings of the Sixth International Conference on Machine Learning
(available from Morgan-Kaufmann Publishers, San Mateo, CA).

--Tom Dietterich
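
[Editor's illustrative sketch, not part of the original post.]  The
distinction above (approximation capacity versus learning from only M
examples) can be made concrete with a small, hypothetical experiment:
a one-hidden-layer network with many hidden units, trained by plain
gradient descent (backpropagation) on only M = 10 samples of an
"unknown" target function.  The target function, network size, and
step sizes below are made-up choices for illustration; the point is
simply that the training error can be driven very low while the fit
between the training points typically remains poor.

    # Illustrative sketch (assumed setup, not from the post): ample
    # approximation capacity, but only M = 10 training examples.
    import numpy as np

    rng = np.random.default_rng(0)

    def target(x):
        # The "unknown" function the learner is trying to recover.
        return np.sin(3.0 * x)

    M = 10                                    # only M training examples
    x_train = rng.uniform(-np.pi, np.pi, size=(M, 1))
    y_train = target(x_train)

    x_test = np.linspace(-np.pi, np.pi, 500).reshape(-1, 1)
    y_test = target(x_test)                   # dense stand-in for the true function

    H = 200                                   # many hidden units: plenty of capacity
    W1 = rng.normal(0.0, 1.0, size=(1, H))
    b1 = np.zeros(H)
    W2 = rng.normal(0.0, 1.0 / np.sqrt(H), size=(H, 1))
    b2 = np.zeros(1)

    def forward(x):
        h = np.tanh(x @ W1 + b1)              # hidden-layer activations
        return h, h @ W2 + b2                 # network output

    lr = 0.05
    for step in range(20000):
        h, y_hat = forward(x_train)
        err = y_hat - y_train                 # gradient of squared error w.r.t. output
        # Backpropagate through the output and hidden layers.
        gW2 = h.T @ err / M
        gb2 = err.mean(axis=0)
        dh = (err @ W2.T) * (1.0 - h ** 2)    # tanh derivative
        gW1 = x_train.T @ dh / M
        gb1 = dh.mean(axis=0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1

    _, y_hat_train = forward(x_train)
    _, y_hat_test = forward(x_test)
    print("train MSE:", float(np.mean((y_hat_train - y_train) ** 2)))
    print("test  MSE:", float(np.mean((y_hat_test - y_test) ** 2)))

Under this assumed setup, the training error falls toward zero while
the error on the dense test grid stays much larger, which is the gap
the approximation theorems say nothing about.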