Path: utzoo!utgpu!jarvis.csri.toronto.edu!mailrus!iuvax!rutgers!netnews.upenn.edu!grad1.cis.upenn.edu!ranjit
From: ranjit@grad1.cis.upenn.edu (Ranjit Bhatnagar)
Newsgroups: comp.graphics
Subject: Re: Texture mapping by spatial position
Keywords: 3-d texture mapping
Message-ID: <14266@netnews.upenn.edu>
Date: 12 Sep 89 01:44:28 GMT
References: <9119@pyr.gatech.EDU> <170@vsserv.scri.fsu.edu>
Reply-To: ranjit@grad1.cis.upenn.edu.UUCP (Ranjit Bhatnagar)
Organization: University of Pennsylvania
Lines: 90

In article <170@vsserv.scri.fsu.edu> prem@geomag.UUCP (Prem Subrahmanyam) writes:
>
> I would strongly recommend obtaining copies of both DBW_Render and
> QRT, as both have very good texture mapping routines.  DBW uses
> absolute spatial coordinates to determine texture, while QRT uses
> a relative position per each object type mapping.

The combination of 3-d spatial texture-mapping (where the map for a particular point is determined by its position in space rather than its position on the patch or polygon) with a nice 3-d turbulence function can give really neat results for marble, wood, and such. Because the texture is 3-d, objects look like they are carved out of the texture function rather than veneered with it. It works well with non-turbulent texture functions too, like bricks, 3-d checkerboards, waves, and so on.

However, there's a disadvantage to this kind of texture function that I haven't seen discussed before: as generally proposed, it's highly unsuited to _animation._ The problem is that you generally define one texture function throughout all of space. If an object happens to move, its texture changes accordingly. It's a neat effect - try it - but it's not what one usually wants to see.
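As a rough sketch of what such a solid 3-d texture looks like in code (this is not DBW's or QRT's actual routine; the noise function here is a cheap deterministic stand-in for proper Perlin-style noise, and the constants are arbitrary):

```python
import math

def noise3(x, y, z):
    # Smooth, deterministic stand-in for band-limited 3-d noise.
    # Any real implementation would use lattice gradient noise instead.
    return math.sin(x * 1.7 + math.sin(y * 2.3) + math.sin(z * 3.1))

def turbulence(x, y, z, octaves=4):
    # Classic turbulence: sum of |noise| at doubling frequencies,
    # each octave weighted by the inverse of its frequency.
    t, f = 0.0, 1.0
    for _ in range(octaves):
        t += abs(noise3(x * f, y * f, z * f)) / f
        f *= 2.0
    return t

def marble(x, y, z):
    # Marble: sine banding along x, perturbed by turbulence.
    # Returns a value in [0, 1] used to index a colour ramp.
    # Note the input is a *spatial position* -- the same point in
    # space always yields the same texture value, which is exactly
    # why a moving object appears to swim through the texture.
    return 0.5 * (1.0 + math.sin(x * 4.0 + 5.0 * turbulence(x, y, z)))
```

Because `marble` depends only on (x, y, z), an object translated through space samples a different slab of the texture each frame - the animation problem described above.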
The obvious solution to this is to define a separate 3-d texture for each object, and, further, _cause the texture to be rotated, translated, and scaled with the object._ DBW does not allow this, so if you want to do animations of any real complexity with DBW, you can't use the nice wood or marble textures.

This almost solves the problem. However, it doesn't handle the case of an object whose shape changes. Consider a sphere that metamorphoses into a cube, or a human figure which walks, bends, and so on. There's no way to keep the 3-d texture function consistent in such a case. Actually, the real world has a similar defect, so to speak: if you carve a statue out of wood and then bend its limbs around, the grain of the wood will be distorted. If you want to simulate the real world in this way and get animated objects whose textures stay consistent as they change shape, you have to use ordinary surface-mapped (2-d) textures. But 3-d textures are so much nicer for wood, stone, and such! There are a couple of ways to get the best of both worlds:

[I assume that an object's surface is defined as a constant set of patches, whether polygonal or smooth; though the control points may be moved around, the topology of the patches that make up the object never changes, and patches are neither added to nor deleted from the object during the animation.]

1) Define the base-shape of your object, and _sample its surface_ in the 3-d texture. You can then use these sample tables as ordinary 2-d texture maps for the animation.

2) Define the base-shape of your object, and for each metamorphosed shape, keep pointers to the original shape. Then, whenever a ray strikes a point on the surface of the metamorphosed shape, find the corresponding point on the original shape and look up its properties (i.e. color, etc.) in the 3-d texture map. [Note: I use ray-tracing terminology, but the same trick should be applicable to other techniques.]
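The "texture travels with the object" idea for rigid motion can be sketched as follows: undo the object's modelling transform on the hit point before the texture lookup, so the texture is evaluated in object space. (The transform order and the single-axis rotation here are assumptions for brevity, as is the dictionary layout of `obj`; `texture3d` is any function like the marble above.)

```python
import math

def world_to_object(p, translate, rotate_z, scale):
    # Invert the object's transform, assumed applied as
    # scale, then rotate about z, then translate.
    x, y, z = (p[i] - translate[i] for i in range(3))
    c, s = math.cos(-rotate_z), math.sin(-rotate_z)
    x, y = x * c - y * s, x * s + y * c
    return (x / scale, y / scale, z / scale)

def sample_texture(world_point, obj, texture3d):
    # Look the texture up at the *object-space* point, not the
    # world point, so a moving object carries its grain with it.
    local = world_to_object(world_point, obj["translate"],
                            obj["rotate_z"], obj["scale"])
    return texture3d(*local)
```

With this in place, translating or rotating the object leaves every surface point's texture value unchanged - which is exactly what rigid motion should do, and exactly what breaks down once the shape itself deforms.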
The first technique is perhaps simpler, and does not require you to modify your favorite renderer which supports 2-d surface texture maps: you just write a preprocessor which generates 2-d maps from the 3-d texture and the base-shape of the object. However, it is susceptible to really nasty aliasing and loss of information.

The second technique has to be built into the renderer, but is amenable to all the antialiasing techniques possible in an ordinary renderer with 3-d textures, such as DBW. Since the notion of 'the same point' on a particular patch when the control points have moved is well-defined except in degenerate cases, the mapping shouldn't be a problem -- though it does add an extra level of antialiasing to worry about. [Why? Imagine that a patch which is very large in the original base-shape has become very small - sub-pixel size - in the current animated shape. Then a single pixel-sized sample in the current shape maps back to a large area of the original, which must be filtered down - using, for instance, stochastic sampling or analytic techniques.]

If anyone actually implements these ideas, I'd like to hear from you (and get credit, heh heh, if I thought of it first). I doubt that I will have the opportunity to try it.

If you post a reply to this article, please include this paragraph. If you see this paragraph in a follow-up, but didn't see the original article, please send me mail. My postings often seem to get very limited and unpredictable distribution, and I'm hoping to track down the problem. (ranjit@eniac.seas.upenn.edu / Ranjit Bhatnagar, 4211 Pine St., Phila PA 19104)

-- 
ranjit "Trespassers w"  ranjit@eniac.seas.upenn.edu  mailrus!eecae!netnews!eniac!...
"Such a brute that even his shadow breaks things."  (Lorca)
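The heart of the second technique - finding 'the same point' on the base-shape - can be sketched for the simplest patch type, a bilinear quad. The key observation is that an intersection gives you the patch's (u, v) parameters, and those parameters identify the corresponding point on the undeformed patch. (The bilinear patch and the function names here are illustrative assumptions; a real renderer would do this for its own patch types.)

```python
def bilinear_patch(corners, u, v):
    # Evaluate a bilinear patch at parameters (u, v).
    # corners is the four control points in (p00, p10, p01, p11) order.
    def lerp(a, b, t):
        return tuple(a[i] + (b[i] - a[i]) * t for i in range(3))
    return lerp(lerp(corners[0], corners[1], u),
                lerp(corners[2], corners[3], u), v)

def texture_at(u, v, base_patch, texture3d):
    # Technique 2: the ray hit on the *deformed* patch supplies (u, v);
    # the 3-d texture is then looked up at the same (u, v) on the
    # *base* (undeformed) patch, so the pattern stays glued to the
    # surface no matter how the control points move.
    return texture3d(*bilinear_patch(base_patch, u, v))
```

Note that the deformed patch's geometry never enters `texture_at` at all - only its parametrization does - which is why the texture stays consistent through any deformation that preserves patch topology.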