Relay-Version: version B 2.10 5/3/83; site utzoo.UUCP
Posting-Version: version B 2.10.2 9/18/84; site uwvax.UUCP
Path: utzoo!watmath!clyde!cbosgd!ihnp4!qantel!dual!lll-crg!gymble!umcp-cs!seismo!uwvax!derek
From: derek@uwvax.UUCP (Derek Zahn)
Newsgroups: net.graphics
Subject: Re: ray casting (* l o n g *)
Message-ID: <312@uwvax.UUCP>
Date: Mon, 16-Sep-85 14:55:27 EDT
Article-I.D.: uwvax.312
Posted: Mon Sep 16 14:55:27 1985
Date-Received: Thu, 19-Sep-85 07:15:46 EDT
References: <1858@bmcg.UUCP>
Organization: U of Wisconsin CS Dept
Lines: 154
> ... Anyone care to post a generalized explanation to the net?
>
> Thanks much, Fred Cordes
I have been experimenting with ray tracing a bit recently, so I will
post something (although my experience is doubtless less than that of
others who read this, so any gurus out there, please post expansions
and clarifications).
Ray-tracing is a rendering technique. It is a method of transforming
objects (stored in any of a multitude of ways) into an image -- typically
a RESOLUTION_X x RESOLUTION_Y array of pixel color values.
Consider the screen as a window looking out on the object-space world.
It has a size, and each of its corners has a location in space. Your
eye also has a location in space. Now, divide the window into a grid,
where each point on the grid represents one pixel on the screen.
Basically, the idea is to treat each of these pixels as a separate
entity. I will describe a procedure for one pixel, but it is clear that
it generalizes easily to all pixels in the image space.
Construct a ray that begins at the eye and travels through the point
in space (grid location) representing the given pixel. A form of this
consisting of an origin point and a direction vector is easy to compute.
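In modern Python-ish pseudocode, that construction might look like the
sketch below (all of the names and conventions here are mine, purely
for illustration):

```python
# Sketch: build a primary ray for pixel (i, j).  The screen window is
# described by one corner plus two vectors spanning one pixel step.

def normalize(v):
    """Return v scaled to unit length."""
    length = sum(x * x for x in v) ** 0.5
    return tuple(x / length for x in v)

def primary_ray(eye, screen_corner, du, dv, i, j):
    """Ray from the eye through the grid point for pixel (i, j).

    screen_corner is the window's corner in world space; du and dv are
    world-space vectors for one pixel step horizontally and vertically.
    """
    point = tuple(screen_corner[k] + i * du[k] + j * dv[k] for k in range(3))
    direction = normalize(tuple(point[k] - eye[k] for k in range(3)))
    return eye, direction   # an origin point and a direction vector
```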
Now (this is the inefficient but simple method), for each object that
has been entered into an object database before the actual ray-tracing
started, perform an intersection calculation. This tells us which object
was hit by the ray. If none of the intersection calculations produced
an intersection, then the ray hits nothing. If more than one gave an
intersection, choose the one that intersected closest to the eye.
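The "test everything, keep the closest" loop is short enough to sketch
directly (again, the interface here -- an intersect() method that
returns a distance or None -- is just my convention for the example):

```python
# Sketch: find the closest intersection among all objects in the
# database.  Each object is assumed to supply an
# intersect(origin, direction) method returning the distance t along
# the ray to the hit point, or None on a miss.

def closest_hit(origin, direction, objects):
    best_t, best_obj = None, None
    for obj in objects:
        t = obj.intersect(origin, direction)
        # Keep only hits in front of the origin, and only the nearest one.
        if t is not None and t > 0 and (best_t is None or t < best_t):
            best_t, best_obj = t, obj
    return best_t, best_obj   # (None, None) if the ray hits nothing
```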
Example: Suppose that we have an object database that consists of one
sphere and one triangle. For each of these, some data must be stored.
For the sphere, we need to store the coordinates of the center and the
value of the radius. For the triangle, the vertices must be kept.
In addition, other values used in coloring and texturing the object
can be kept, such as the color (or the name of a color function), the
amount of light that is reflected diffusely and specularly, etc.
The intersection with the sphere is a simple algebraic exercise:
substitute the ray equation into the equation of the sphere and solve.
The intersection calculation for the triangle is a bit more complex
(though not necessarily slower) -- one way is to find the intersection
with the plane that contains the triangle (another simple algebra
exercise) and then determine whether or not this point lies inside the
triangle itself.
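For the sphere, the substitution works out to a quadratic in t. Here
is a sketch of it (my own arrangement of the algebra; it assumes the
direction vector has unit length, so the quadratic's leading
coefficient is 1):

```python
# Sketch: substitute P = O + t*D into |P - C|^2 = r^2 and solve the
# resulting quadratic t^2 + b*t + c = 0 for t.  Returns the nearest
# positive root, or None if the ray misses the sphere.

def intersect_sphere(origin, direction, center, radius):
    oc = tuple(origin[k] - center[k] for k in range(3))
    b = 2.0 * sum(oc[k] * direction[k] for k in range(3))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4.0 * c           # discriminant (a == 1 for unit D)
    if disc < 0:
        return None                  # no real roots: the ray misses
    sq = disc ** 0.5
    for t in ((-b - sq) / 2.0, (-b + sq) / 2.0):
        if t > 1e-9:
            return t                 # nearest hit in front of the origin
    return None                      # sphere is entirely behind the ray
```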
Once we have the point at which the ray intersected some object, we are
ready to determine what color to make this pixel. This is based on
many factors. First, we need to know the surface normal vector for the
intersected object at this point. We also need to know the color of the
object here. Now, for each of the light sources in the light source
database, we want to trace another ray from the light source to the
original intersection point. If some other object gets in the way before
the ray reaches our point, then the contribution of this light source
to the color of the pixel is nil (the point is in shadow).
Otherwise, compute the contribution of this light source (I am going to
hand-wave that for now; it is a bit complex, and this is already longer
than I thought it would be). Once the contributions of all light sources
are summed, the pixel color values are determined, and we can move on to
the next pixel.
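To make the hand-waving a little more concrete, here is a sketch of the
shadow test plus a simple diffuse (Lambertian) contribution -- one common
choice, not the only one. The blocked() callback stands in for a
shadow-ray trace against the object database, and all names are mine:

```python
# Sketch: sum diffuse contributions from point light sources, skipping
# any light whose path to the point is blocked (the point is in shadow
# with respect to that light).

def shade(point, normal, surface_color, lights, blocked):
    """lights: list of (position, intensity); blocked(point, to_light)
    returns True if some object lies between the point and the light."""
    color = [0.0, 0.0, 0.0]
    for light_pos, light_intensity in lights:
        to_light = tuple(light_pos[k] - point[k] for k in range(3))
        dist = sum(x * x for x in to_light) ** 0.5
        to_light = tuple(x / dist for x in to_light)
        if blocked(point, to_light):
            continue                       # in shadow: contribution is nil
        # Lambert's cosine law: contribution scales with N . L
        n_dot_l = sum(normal[k] * to_light[k] for k in range(3))
        if n_dot_l > 0:
            for k in range(3):
                color[k] += surface_color[k] * light_intensity * n_dot_l
    return tuple(color)
```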
That's it. Now, bear in mind that this is a VERY simple ray-tracing
algorithm. Almost everything I just said could be changed to produce
more realistic images or to generate them faster, but such improvements
are embellishments of this basic framework.
For example, the above does not deal at all with reflection or refraction
(although these are easy; just recursive calls to the ray-tracing routine
with a new ray that starts at the intersection point and whose direction
depends on the normal vector, index of refraction, etc), or fuzzy shadows,
or lots of other good_stuff().
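For the mirror-reflection case, the direction of the new recursive ray
is just the incoming direction mirrored about the surface normal; a
one-function sketch (my notation):

```python
# Sketch: reflect an incoming direction D about a unit surface normal N,
# giving the secondary ray direction R = D - 2 (D . N) N.

def reflect(direction, normal):
    d_dot_n = sum(direction[k] * normal[k] for k in range(3))
    return tuple(direction[k] - 2.0 * d_dot_n * normal[k] for k in range(3))
```

The secondary ray then starts at the intersection point (nudged slightly
off the surface to avoid re-hitting it) and is traced recursively like
any primary ray.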
The things that you read dealing with Jacobians and the like are all
refinements of the ray-tracing procedure. Such refinements can generally
be divided into several areas:
1) Object modeling -- this area covers the various surface types and
methods of ray-tracing them. For example, fractally-generated
terrain is interesting. How can one efficiently intersect a ray with
a fractal mountain consisting of 20,000 triangles? Also, modeling of
free-form surfaces is interesting. Points in space may be connected
with parametric cubic spline functions; intersecting these patches is
not always easy; this is one place where you see Newton's iteration.
Also, sophisticated ways of combining these methods are needed for
realistic modeling of objects like trees.
2) Surface modeling -- once an intersection point is determined, how
do we color it to provide a realistic image? Many methods exist for
mapping 2-dimensional color/texture maps onto 3-D objects, and others
exist for using 3-D functions to determine color/texture directly.
Recent work indicates that controlled stochastic methods of surface
structure/color/detail generation may provide significant increases
in visual realism and complexity at moderate cost (this is a particular
interest of mine). All told, the variations of surface modeling techniques
are as numerous as the variations of real surfaces.
3) Photographic quality -- computer generated images typically suffer
from resolution problems. Only part of this is due to the limited
screen resolution. Antialiasing attempts to provide smoother images,
and methods for doing this are interesting areas of research. On a
related note, typically real pictures have significant amounts of "dirt"
or "noise" in them (you may think of it as disharmonious surface
detail) which computer generated images usually lack. I am
not aware of any work being done to model this phenomenon; is any being
done?
4) Efficient image generation -- this area has received a great deal of
attention, but there is still much to be done. Basically, how can we
generate complex pictures in as little time as possible? To answer this,
we must examine the method of producing a ray-traced image. This will
point to the areas where the time can be cut down.
A) The image
It may be possible to cut down the number of rays that are cast. For
example, if all pixels of a certain area are the same color, there is no
need to trace them all. I am not aware of any published works describing
implementations of this idea, although I am sure that there are some.
B) The ray
1) The number of intersection calculations
Various ways of subdividing the object space have been proposed.
Basically, the idea is to divide space (or hierarchically defined
objects) into areas of interest. Each object is associated with
only the areas of space which it intersects. Then a ray need only be
tested against objects in the spatial areas through which it passes.
2) Efficiency of an intersection calculation
For some objects this is not a big deal, but for other, more
complex objects (like procedurally defined objects or free-form
surfaces composed of many patches) it can be.
C) The point
Once the intersection is calculated, how can we provide visually complex
surface detail at reasonable cost?
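The space-subdivision idea in B.1 can be sketched in miniature. The toy
example below subdivides along just one axis (a real system cuts all
three and walks the cells a ray crosses in order; the one-axis
simplification and all the names are mine):

```python
# Toy sketch of spatial subdivision: the x range is cut into equal
# slabs, each object is registered in every slab its x extent overlaps,
# and a ray marching in +x only tests objects in the slabs it crosses.

def build_slabs(objects, x_min, x_max, n_slabs):
    """objects: list of (obj, obj_x_min, obj_x_max) tuples."""
    width = (x_max - x_min) / n_slabs
    slabs = [[] for _ in range(n_slabs)]
    for obj, lo, hi in objects:
        first = max(0, int((lo - x_min) / width))
        last = min(n_slabs - 1, int((hi - x_min) / width))
        for i in range(first, last + 1):
            slabs[i].append(obj)       # object overlaps this slab
    return slabs

def candidates(slabs, ray_x_start, x_min, x_max, n_slabs):
    """Objects a +x ray starting at ray_x_start could hit, in slab
    order, without duplicates."""
    width = (x_max - x_min) / n_slabs
    start = max(0, int((ray_x_start - x_min) / width))
    seen, out = set(), []
    for i in range(start, n_slabs):
        for obj in slabs[i]:
            if obj not in seen:
                seen.add(obj)
                out.append(obj)
    return out
```

Objects entirely in slabs the ray never enters are never even
considered, which is the whole point of the subdivision.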
Well, I am done. In case anybody has read this far, I would be very
interested in discussion of ray-tracing topics on this newsgroup.
I am sorry for my horrible grammar and style, and also for not citing
references. I cannot remember where many of these ideas came from.
I would be interested in building a ray-tracing bibliography on the
newsgroup -- if one already exists, could someone tell me about it?
Now, I hope that this will spark some interest in ray-tracing discussion
on this group, or at least force someone who knows what s/he is talking
about to correct my blunders.
derek
--
Derek Zahn @ wisconsin
...!{allegra,heurikon,ihnp4,seismo,sfwin,ucbvax,uwm-evax}!uwvax!derek
derek@wisc-rsch.arpa