Path: utzoo!utgpu!jarvis.csri.toronto.edu!mailrus!tut.cis.ohio-state.edu!quanta.eng.ohio-state.edu!kaa.eng.ohio-state.edu!rob
From: rob@kaa.eng.ohio-state.edu (Rob Carriere)
Newsgroups: comp.dsp
Subject: Re: Psychoacoustics
Message-ID: <3506@quanta.eng.ohio-state.edu>
Date: 13 Nov 89 22:17:52 GMT
References: <1989Oct31.193130.1685@eddie.mit.edu> <1989Nov2.180644.28647@sj.ate.slb.com> <13729@orstcs.CS.ORST.EDU>
Sender: news@quanta.eng.ohio-state.edu
Reply-To: rob@kaa.eng.ohio-state.edu (Rob Carriere)
Organization: Ohio State Univ, College of Engineering
Lines: 21

In article <13729@orstcs.CS.ORST.EDU> pvo3366@sapphire.OCE.ORST.EDU (Paul O'Neill) writes:
>Our ears have a different frequency response at different azimuths and
>elevations.  We use the frequency content of arriving sounds as one of
>our localization inputs.
>
>Demonstration:  Plug one ear with your hand.  Close your eyes.  Click
>the fingernails of your thumb and forefinger on the other hand at
>various positions around your other ear.  Can you localize the clicks?
>How?  You're only using one ear.

I agree with the content of the post, but the demonstration is bogus.
Quite apart from any auditory clues as to the location of the sound,
you have the clues arising from the fact that you know where your
fingers are.  There is no obvious way to tell whether or not this
extra information is used to ``cheat''.  A proper demonstration (which
works quite well, incidentally) would be to have a second person
produce the sounds.

SR
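[A minimal sketch of the monaural spectral cue being discussed above. The pinna acts as a direction-dependent filter, so the same click arrives at the eardrum with a different spectrum depending on the source position. The two FIR filters below are made-up stand-ins for the ear's response at two positions (real head-related transfer functions are measured, not invented); the point is only that an identical source yields direction-dependent received spectra.]

```python
import math

def dft_magnitude(x, k):
    """Magnitude of the k-th DFT bin of a real signal x."""
    n = len(x)
    re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
    im = -sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
    return math.hypot(re, im)

def convolve(signal, fir):
    """Direct-form convolution of a signal with an FIR filter."""
    out = [0.0] * (len(signal) + len(fir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(fir):
            out[i + j] += s * h
    return out

click = [1.0] + [0.0] * 31   # an idealized fingernail click (unit impulse)

# Hypothetical FIR responses of one ear for two source positions.
front = [1.0, 0.5, 0.2]      # made-up response, source in front
above = [1.0, -0.5, 0.2]     # made-up response, source overhead

at_front = convolve(click, front)
at_above = convolve(click, above)

# The received spectra differ even though the emitted click is the same;
# that spectral difference is the one-eared localization cue.
for k in (2, 8, 14):
    print(k, round(dft_magnitude(at_front, k), 3),
             round(dft_magnitude(at_above, k), 3))
```

[Under these toy assumptions the two filtered clicks show clearly different bin magnitudes, which is all the quoted argument needs: frequency content alone can, in principle, distinguish directions with a single ear.]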