Path: utzoo!utgpu!news-server.csri.toronto.edu!cs.utexas.edu!sdd.hp.com!spool.mu.edu!agate!bionet!raven.alaska.edu!milton!hlab
From: jdb9608@ultb.isc.rit.edu (J.D. Beutel)
Newsgroups: sci.virtual-worlds
Subject: Re: Japanese stereo TV/computer terminals
Message-ID: <1991Jun19.193449.20694@milton.u.washington.edu>
Date: 19 Jun 91 19:00:35 GMT
References: <1991Jun18.161206.19250@milton.u.washington.edu> <1991Jun18.
Sender: hlab@milton.u.washington.edu (Human Int. Technology Lab)
Organization: Rochester Institute of Technology
Lines: 95
Approved: cyberoid@milton.u.washington.edu

I've never heard of the NTT display before, but I have actually used a
very similar display from Dimension Technologies here in Rochester.
(I'm not affiliated with them in any way.)

Their first commercial product is a monochrome LCD, and it has the
drawback of about a one-second update speed (yes, a whole second).
They've been working on their next project for the government---a color
LCD with a reasonable refresh rate, which they expect to release within
a year.  Their current screen is PC compatible (with its own display
cards, I think), and their next screen will have cards for some
graphics workstations as well (SPARCs, I think).

This technology struck me as exciting (especially for something like
television in the near future), but not applicable to most VR
applications.  VR gives a 360 degree field of view, whereas video
screens provide just a sliver of that.  Furthermore, I don't see how
this technology can be applied to VR, because as long as you're going
to have a private screen stuck to your face, two little ones are as
good as one big one (or better, because they're close to your eyes).

brucec@phoebus.labs.tek.com (Bruce Cohen) writes:
>hlab@milton.u.washington.edu (Human Int. Technology Lab) writes:
>
>> NTT's display has
>> two infrared sensors that track a viewer's head position
>> and adjust for these movements. NTT hopes to produce
>> its screens for computer terminals and video phones but
>> says commercial systems are still two years away.
>>
>> (Edited by Robert Buderi)
>
>Fascinating!  Some questions come to mind:
>
>1) How bad is the view of a screen for one person when the screen is
>   tracking another person?  Is this inherently a solo device?

The DTI screen has no tracking device.  The users must sit within a
certain range of viewing distances from the screen.  There are
overlapping diamonds of correct viewing perspectives, which the
viewers must find for themselves by moving their heads (a little L/R
key on the screen makes it easy: close one eye and check whether the
other is seeing the correct letter).  Sitting directly in front of the
screen gives the best perspective.  Several people can share the same
perspective vertically (e.g., someone can stand behind you and look
over your head).  Additionally, there are several good zones to either
side.  The 3D perspective from a side zone is slightly distorted,
because you see exactly what you would see sitting in front, whereas
if there really were a 3D object there you would see something
slightly different.  Overall, though, the side views are not bad.

The NTT device may have the same properties.  If so, and it tracks one
person into the right zone, then all the other viewers will have to
move too, so the tracking makes it even more of a solo device.  (But
then, it should have some switch for turning the tracking off for
multiple viewers.)

>2) [Use position tracker for motion parallax simulation?]

You have a fascinating idea there.
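To make the parallax idea concrete: if the display knew where your
head was, it could re-project every virtual point onto the plane of
the screen along the line from your eye to that point.  Here is a
rough sketch of that geometry in C (the coordinate conventions and
numbers are mine, purely for illustration; this is not a description
of anything DTI or NTT actually does):

#include <stdio.h>

typedef struct { double x, y, z; } vec3;

/* Project world point p onto the screen plane (z = 0) as seen from
 * the tracked eye position e.  The ray e + t*(p - e) crosses z = 0
 * at t = e.z / (e.z - p.z).
 */
vec3 project(vec3 e, vec3 p)
{
    double t = e.z / (e.z - p.z);
    vec3 s;

    s.x = e.x + t * (p.x - e.x);
    s.y = e.y + t * (p.y - e.y);
    s.z = 0.0;
    return s;
}

int main(void)
{
    vec3 point = { 0.0, 0.0, -20.0 };  /* point 20 cm behind screen */
    vec3 eye1  = { 0.0, 0.0,  60.0 };  /* head centered, 60 cm away */
    vec3 eye2  = { 5.0, 0.0,  60.0 };  /* head moved 5 cm right     */
    vec3 a, b;

    a = project(eye1, point);
    b = project(eye2, point);
    printf("centered: (%.2f, %.2f)   moved: (%.2f, %.2f)\n",
           a.x, a.y, b.x, b.y);
    return 0;
}

With the head centered, the point lands at (0.00, 0.00); after the
5 cm move it lands at (1.25, 0.00).  Points behind the screen shift in
the same direction as the head (but by less), while points in front of
it shift the opposite way, which is exactly the parallax cue a tracked
screen could fake.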
>3) How intrusive is the target for the tracking device (correct me if my
>   assumption is wrong, but I would guess from your description that the
>   viewer has to wear some sort of optical target which the sensors detect)?

I don't know about the NTT device.  The DTI does no tracking, and of
course nobody needs to wear glasses to look at it, so it's less
intrusive than most 3D viewing systems.  Even with a target, it would
probably be less intrusive than electric glasses.  Both of the
salespeople who did the demo had a distant stare, however, which made
me worry that the way one must relax one's eyes to look at the screen
may have some long-term effects.

A clever gentleman at the demo suggested that one use for a 3D system
could be the detection of movement in satellite photos.  If a
satellite takes pictures of the same place on different orbits, and
the pictures are shown to the left and right eyes respectively, then
anything that changed position between orbits will stand out
(literally).  This would be especially useful for complex pictures, as
long as nothing moves so far that we stop perceiving it as the same
object.

There was a similar discussion in this newsgroup a while ago about how
we could write programs to translate those positional differences into
three-dimensional data, for turning orbital photos of other planets
into VR models (see the P.S. below for the basic geometry).  Of
course, our brains do it already.  I have seen some papers on neural
networks that simulate the hypercolumns and other structures of the
visual cortex, which may be the way to get computers to see in three
dimensions and/or extract that extra data.  I can provide a reference
if anyone's interested.  But I haven't seen anything that can actually
do it, besides the wet grey stuff.

--
-- J. David Beutel   11011011   jdb9608@cs.rit.edu
"I am, therefore I am."
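P.S.  For anyone who wants to play with the numbers, here is the
similar-triangles formula such a program would start from.  This is
only a back-of-the-envelope sketch in C; the focal length, baseline,
and disparity below are made-up values, and real photogrammetry also
has to handle camera orientation, lens distortion, and the matching
problem, none of which appears here.

#include <stdio.h>

/* Range of a point, given its disparity d (in pixels) between two
 * views taken from positions separated by baseline B (in meters),
 * with the focal length f expressed in pixels.  Similar triangles
 * give Z = f * B / d.
 */
double depth_from_disparity(double f, double B, double d)
{
    return f * B / d;
}

int main(void)
{
    double f = 5000.0;    /* focal length in pixels (made up)       */
    double B = 10000.0;   /* 10 km between orbital passes (made up) */
    double d = 100.0;     /* measured disparity in pixels (made up) */

    printf("range: %.0f m\n", depth_from_disparity(f, B, d));
    return 0;
}

This prints a range of 500000 m, i.e. a 100-pixel shift would put the
point about 500 km from the camera.  The hard part, of course, is
measuring d: deciding which pixel in one photo corresponds to which
pixel in the other, which is just the matching problem that the visual
cortex (and those neural-network models) handles so effortlessly.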