Path: utzoo!attcan!uunet!mcsun!ukc!warwick!csuyk
From: csuyk@warwick.ac.uk (FUNG Wai Wa)
Newsgroups: comp.ai.neural-nets
Subject: Scaling the input data when using BP
Message-ID: <1991Feb8.201313.14002@warwick.ac.uk>
Date: 8 Feb 91 20:13:13 GMT
Sender: news@warwick.ac.uk (Network news)
Organization: Computing Services, Warwick University, UK
Lines: 48

Hi, netters,

I am learning the BP algorithm and have a question about scaling the
input data for training. For example, to train the XOR problem, a
'normal' training data set would be:

    0 0  <- input      0.1  <- desired output
    0 1                0.9
    1 0                0.9
    1 1                0.1

Now, if I arbitrarily re-assign the 'low' value as -2 and the 'high'
value as 1 in the input data, I get the training set:

    -2 -2              0.1
    -2  1              0.9
     1 -2              0.9
     1  1              0.1

If I train the network in this way, am I still doing XOR training? If
so, does a difference in training speed indicate that the network is
sensitive to the order of magnitude of the input values? I'd guess so,
judging from the weight-update formula of the BP algorithm. And is it
better to scale the inputs symmetrically, i.e. low <- -2 and
high <- 2? (I can't see any reason why this would be necessary.)

Sorry if this is a trivial question. Would anyone kindly suggest some
papers on this topic?

Thanks a lot in advance.

FUNG W.W.
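
P.S. In case it helps anyone reproduce the experiment, here is a
minimal pure-Python sketch of what I mean (not a definitive BP
implementation — just a 2-2-1 sigmoid net with plain gradient descent;
the `train_xor` helper and all parameter values are my own choices):

```python
import math
import random

def train_xor(low, high, epochs=5000, lr=0.5, seed=0):
    """Train a 2-2-1 sigmoid network with plain backprop on XOR,
    encoding the logical 0/1 inputs as low/high. Targets stay at
    0.1/0.9. Returns the final mean squared error over the 4 patterns."""
    rnd = random.Random(seed)
    # The four XOR patterns under the chosen input encoding.
    data = [((low, low), 0.1), ((low, high), 0.9),
            ((high, low), 0.9), ((high, high), 0.1)]
    # Weights: two hidden units (2 inputs + bias each), one output
    # unit (2 hidden inputs + bias), initialised small and random.
    w_h = [[rnd.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(2)]
    w_o = [rnd.uniform(-0.5, 0.5) for _ in range(3)]

    def sig(x):
        # Clamp to avoid math.exp overflow for extreme net inputs.
        x = max(-60.0, min(60.0, x))
        return 1.0 / (1.0 + math.exp(-x))

    for _ in range(epochs):
        for (x1, x2), t in data:
            # Forward pass.
            h = [sig(w[0] * x1 + w[1] * x2 + w[2]) for w in w_h]
            y = sig(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
            # Backward pass: deltas use the sigmoid derivative y(1-y).
            d_o = (t - y) * y * (1 - y)
            d_h = [d_o * w_o[j] * h[j] * (1 - h[j]) for j in range(2)]
            # Weight updates (hidden deltas were computed from the
            # old output weights, as BP requires).
            for j in range(2):
                w_o[j] += lr * d_o * h[j]
            w_o[2] += lr * d_o
            for j in range(2):
                w_h[j][0] += lr * d_h[j] * x1
                w_h[j][1] += lr * d_h[j] * x2
                w_h[j][2] += lr * d_h[j]

    # Final mean squared error over the training set.
    err = 0.0
    for (x1, x2), t in data:
        h = [sig(w[0] * x1 + w[1] * x2 + w[2]) for w in w_h]
        y = sig(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
        err += (t - y) ** 2
    return err / 4

print("0/1 encoding  MSE:", train_xor(0.0, 1.0))
print("-2/1 encoding MSE:", train_xor(-2.0, 1.0))
```

Comparing the two printed errors (or the epochs needed to reach a
fixed error) for several random seeds is exactly the speed comparison
I am asking about; note that with only two hidden units the net can
occasionally get stuck in a local minimum regardless of encoding.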