Path: utzoo!utgpu!jarvis.csri.toronto.edu!mailrus!cs.utexas.edu!samsung!gem.mps.ohio-state.edu!ctrsol!sdsu!ucsd!ogccse!orstcs!tgd
From: tgd@orstcs.CS.ORST.EDU (Tom Dietterich)
Newsgroups: comp.ai
Subject: Re: Backpropagation applications
Summary: Re: Backpropagation applications
Message-ID: <13660@orstcs.CS.ORST.EDU>
Date: 9 Nov 89 06:15:40 GMT
References: <1690@cod.NOSC.MIL> <77404@linus.UUCP>
Organization: Oregon State University, Corvallis
Lines: 15

> furthermore, workers at los alamos used the same training set as a toy
> problem for one of their very early non-linear interpolation codes.
> after a _single_ pass through the training set, their program
> performed perfectly on the training material and had lower than a 5%
> error rate on the novel material. they didn't publish this work
> because they thought that sejnowski's work was over-sensationalized
> and too trivially replicable by conventional means.

If this is true, I'd be very interested in seeing the results. One
often hears rumors of great things happening at Los Alamos. I'd like
to see the work peer-reviewed and published. Until then it is just a
rumor.

--Tom Dietterich
Editor, Machine Learning Journal