An Introduction to Ray Tracing


0. The project


Our goal in this project is to "visualize the circle-light universe." The circle-light universe is a universe that is in all ways similar to our own, with one slight difference: light travels not in straight lines, but in circles of fixed radius. This creates a fascinating situation. For large radii, it's very similar to our own universe, but as the radius decreases, things start to "fuzz out." It gets even more interesting once you start dealing with mirrors. For a quick introduction to what we're talking about, consider some interesting facts about this universe. If anything is farther away from you than the diameter of the light circles, you can't see it no matter how many magnifying glasses you use! Moreover, if there isn't anything out there, you don't see black, empty space. You see the back of your own head! It's obviously a cool and interesting project. But first, we need to understand what we're doing and how we're doing it. The aim of this text is to explain that to someone who wants to do more than just "see the pictures."

Ray tracing is perhaps one of the most straightforward ways to get a computer to model our 3-D world on a 2-D monitor. It is natural for us to understand light in terms of photons traveling in straight paths, or rays of light, and this is exactly what ray tracing uses to obtain images of objects.

Ray tracing is quite possibly the most widely used technique in film and television for achieving computer special effects, because it is so precise. It is one of the best techniques for handling reflection and the placement of light sources. Its prime disadvantage, however, is speed; complex images can take hours to render, and possibly more. Thus it has its limitations.

To study the circle-light universe, however, ray tracing is quite useful. The intuitive understanding of rays allows us to model the circle-light universe without having to develop a whole new system for 3-D renderings! By taking this approach, we save ourselves a lot of hassle (although there is a fair bit of math involved even using this technique - not a surprise considering this was developed at Mathcamp).

Now, on to the tracing!



1. In the normal universe


Ray tracing functions via a simple concept: tracing rays. It'd be vastly inefficient to do it the way nature does, however, by throwing rays of light out from the sun and seeing if they hit our eye. We'd have to trace billions and billions of rays to get even the grainiest picture! There has to be a better way...

[Figure: A model of the human eye. You can see how the images actually get reversed; the bottom is on the top, and the top is on the bottom.]

First, let's look at the eye itself. Light rays come in through the lens. The lens is essentially a point (or, at least, very small) and admits only the rays that pass through it properly. These rays hit the back of the eye, the retina, which is equipped to detect the photons hitting it. The image arrives reversed, but the brain corrects for this and displays it properly.

From a ray tracing perspective, this is interesting, but still somewhat difficult to model. Once again, we can't afford to cast enough rays for a useful number of them to make it through the lens. This approach also means storing the entire image and then reversing it, which is somewhat computationally intensive (though this pales in comparison to casting the rays). There has to be a better way...

[Figure: The screen is placed in front of the eye. The boxes represent pixels, and the "thing" on the left is how we represent the eye.]

What computer scientists do is model the process in reverse. They put the retina - or the screen, as far as we're concerned - in front of the eye. How far ahead it is determines how large the field of view is (and thus, the magnification). This alleviates the problem of having to reverse the image. Note that objects can, in fact, be between the lens (or, for simplification, just the eye) and the screen - an intersection will still be recorded.
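The relationship between screen placement and field of view can be sketched in a few lines of Python (the function name and parameters here are our own illustration, not anything from the actual program):

```python
import math

def field_of_view(screen_width, screen_distance):
    """Horizontal field of view (in radians) for a screen of the given
    width placed screen_distance in front of the eye."""
    return 2.0 * math.atan((screen_width / 2.0) / screen_distance)

# Moving the same screen farther from the eye narrows the view,
# which acts like magnification:
wide = field_of_view(2.0, 1.0)    # screen close to the eye
narrow = field_of_view(2.0, 4.0)  # same screen, farther away
```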

By dividing the screen up into pixels, as a computer monitor is, we now have the essential elements for defining a ray, which are the same as for defining a line: two points in 3-space. Now, we can just intersect this line with all the objects that we have laid out for our universe. If the ray hits nothing, then it's recorded as the default background color (usually black). If the ray does hit something, then we have to deal with other factors.

[Figure: This shows how rays are cast from the eye to the screen. If they intersect nothing, as with the top ray, they are set to the background color. If, like the bottom one, they hit a sphere, they're set to the object's color with some other calculations involved (this ray would probably be a shade of green).]

The most basic thing to do would be to just set the color of the pixel to the color of the closest intersected object (this way, if one object is in front of another, it will appear that way). But this ignores light sources, reflection/refraction, specularity, and other surface elements.

[Figure: This shows specularity. The purple sphere is very bright near the light source. This image was rendered with MARTI.]

First we'll take care of lighting. This isn't so hard, actually. The computer can just take the intersection point and a light source, and define a line with those two points. If that line intersects other objects on the way, the intersected object is in shadow (from that light source); otherwise it's lit, and the color of the pixel should be adjusted to reflect the color and intensity of the light source.

We can also deal with specularity now. If you look at a sphere, especially a reflective one, you'll notice a place on it where the light is really bright (this is true for most curved objects, but it's most recognizable on a sphere). This is specularity, and it should be reflected in ray tracing. What you'll notice is that the highlight is brightest where the object's surface is perpendicular to the rays cast by the light source. Thus, by taking a vector perpendicular to the surface (the normal) and comparing it to the direction of the light source, it's easy to see how much specularity to add.
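The shading step above can be sketched in Python. We assume the shadow test (whether the point-to-light line hits anything first) has already been done with the same intersection routine, and the sharp highlight here simply follows the perpendicular-vector description in the text rather than any particular standard model; all names are our own:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def shade(point, normal, light_pos, in_shadow, shininess=32):
    """Brightness at an intersection point: zero if a shadow ray toward
    the light is blocked, otherwise a diffuse term (how directly the
    surface faces the light) plus a specular term that spikes where the
    surface is perpendicular to the incoming light."""
    if in_shadow:
        return 0.0
    to_light = normalize(tuple(l - p for l, p in zip(light_pos, point)))
    n = normalize(normal)
    diffuse = max(0.0, dot(n, to_light))
    specular = diffuse ** shininess   # sharp bright spot on curved surfaces
    return min(1.0, diffuse + specular)
```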

Reflection is another matter; this takes advantage of a programming technique known as "recursion." A simple example of recursion would be a math problem: "Let f(0) = 0, and define f(x) = x + f(x - 1) for x > 0." This is actually a rather complicated way to take the sum of the first n integers (although there's a much simpler formula for it). How we use recursion in this case is not too complicated. The final color of the pixel is one constant of reflectivity times the color of the sphere, plus another constant of reflectivity times the color of the reflected ray, which is calculated mathematically (using the normal vector again). Of course, if the reflected ray hits another object which is reflective, it calls the same algorithm again, and sure enough, the whole thing recurses.
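Both halves of this paragraph can be sketched in Python. In the second function, `cast()`, `reflect()`, and `BACKGROUND` are hypothetical stand-ins for the tracer's own routines, and a depth limit stops the recursion between two facing mirrors:

```python
def sum_to(n):
    """The recursive sum from the text: f(n) = n + f(n - 1), f(0) = 0."""
    return 0 if n <= 0 else n + sum_to(n - 1)

def ray_color(ray, depth=0, max_depth=5):
    """Recursive reflection: blend the surface's own color with the
    color seen along the mathematically reflected ray."""
    hit = cast(ray)                    # nearest intersection, or None
    if hit is None or depth >= max_depth:
        return BACKGROUND
    own = hit.surface_color
    bounced = ray_color(reflect(ray, hit), depth + 1)
    k = hit.reflectivity               # 0 = matte, 1 = perfect mirror
    return tuple((1 - k) * o + k * b for o, b in zip(own, bounced))
```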

Well, now you've had a tour of how normal ray tracing works. Now, we delve into the meat of the project: circle-light ray tracing.



2. In the circle-light universe


What changes in the circle-light universe? Can't we just do the same thing? Well, almost, but not really.

[Figure: Here you can see how two circle rays are cast through the same pixel. Notice that if you rotate this plane around the two points (the eye and the pixel on the screen), you continue to make more and more circles, all different, while the line is unchanged. That's why the images start to look so weird - because there are infinitely many circles going through that one pixel!]

The problem is this: it takes three points in 3-space to define a circle. For a line, it takes just two. That means that infinitely many circles go through each pixel, and that sometimes more than one object is hit in one pixel - the colors are a blend of the colors of multiple objects! What you see to the right is just two such possible circles - and those are just for the plane of the image! It gets really weird - or, as Clayton put it, "trippy."

If a creature lived in this universe, it would probably develop eyes with a second lens - this would clear things up a bit, giving it a third point through which to define the circle. We want to know what things would look like for us, though, so we draw the pictures exactly as we would see them. It does lead to some funky stuff - but it's still cool. We'll take you through a brief summary of what's necessary to program the computer here, but we won't go very far in-depth.

The first issue is this - with two points and a radius, we can define the centers for the circles. We can't draw infinitely many circles, but we can draw a large number through each pixel in the viewing area. The first step is to pick this number - in general, 500 is a good choice. Unfortunately, such an image takes about ten straight hours to render - but we can live with that with the help of some dedicated MARTI@home'ers.
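Finding one such center is a bit of vector geometry: every valid center sits at distance sqrt(r^2 - (d/2)^2) from the midpoint of the eye-pixel chord (d being the eye-pixel distance), in the plane perpendicular to that chord. A Python sketch with hypothetical names; it requires the radius to be at least half the eye-pixel distance, since otherwise no circle fits:

```python
import math

def one_center(eye, pixel, radius):
    """One valid center for a light circle of the given radius passing
    through both the eye and the pixel.  Every other center comes from
    rotating this one about the eye-pixel axis."""
    chord = tuple(p - e for e, p in zip(eye, pixel))
    d = math.sqrt(sum(c * c for c in chord))
    h = math.sqrt(radius * radius - (d / 2.0) ** 2)  # midpoint-to-center distance
    mid = tuple((e + p) / 2.0 for e, p in zip(eye, pixel))
    # any unit vector perpendicular to the chord will do; build one by
    # crossing the chord with whichever axis it is least aligned with
    u = (1.0, 0.0, 0.0) if abs(chord[0]) < abs(chord[2]) else (0.0, 0.0, 1.0)
    perp = (chord[1] * u[2] - chord[2] * u[1],
            chord[2] * u[0] - chord[0] * u[2],
            chord[0] * u[1] - chord[1] * u[0])       # chord x u
    pn = math.sqrt(sum(c * c for c in perp))
    return tuple(m + h * c / pn for m, c in zip(mid, perp))
```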

[Figure: We can define one center and then rotate it by 2*pi/(number of rays) for each new circle. This gets us all the circles we cast.]
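The rotation described above is a rotation about the eye-pixel axis, which can be done with Rodrigues' rotation formula. A sketch, again with names of our own invention:

```python
import math

def rotate_about_axis(point, axis_point, axis_dir, theta):
    """Rotate point by angle theta about the line through axis_point
    with direction axis_dir (Rodrigues' rotation formula)."""
    n = math.sqrt(sum(a * a for a in axis_dir))
    k = tuple(a / n for a in axis_dir)               # unit axis
    v = tuple(p - a for p, a in zip(point, axis_point))
    kv = (k[1] * v[2] - k[2] * v[1],
          k[2] * v[0] - k[0] * v[2],
          k[0] * v[1] - k[1] * v[0])                 # k x v
    kdv = sum(ki * vi for ki, vi in zip(k, v))       # k . v
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    return tuple(a + vi * cos_t + kvi * sin_t + ki * kdv * (1 - cos_t)
                 for a, vi, kvi, ki in zip(axis_point, v, kv, k))

def all_centers(first_center, eye, pixel, n_rays):
    """Spin one known circle center about the eye-pixel axis in steps
    of 2*pi / n_rays to get every circle we cast through this pixel."""
    axis = tuple(p - e for e, p in zip(eye, pixel))
    return [rotate_about_axis(first_center, eye, axis,
                              2.0 * math.pi * i / n_rays)
            for i in range(n_rays)]
```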

From here, it's a "simple" task to intersect these circles with all of the other objects. While it _is_ possible to do this directly via some equations, it's not at all easy. In general, our method is simpler to program (as opposed to several pages of equations otherwise!): through some rotations, we get the circle onto the xy-plane, and take the slice of each sphere cut by that plane. These can be intersected much more simply, and then we just rotate everything back. It works well, though it was a small pain to program.
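Once everything is rotated so the light circle lies in the xy-plane, the sphere's slice through that plane is itself a circle, so the whole problem reduces to intersecting two circles in 2-D. A sketch of that final step (our own names, not the project's):

```python
import math

def circle_circle(c1, r1, c2, r2):
    """Intersection points of two circles in the plane, given centers
    and radii.  Returns a list of 0, 1, or 2 (x, y) points."""
    dx, dy = c2[0] - c1[0], c2[1] - c1[1]
    d = math.hypot(dx, dy)
    if d > r1 + r2 or d < abs(r1 - r2) or d == 0.0:
        return []                               # separate, nested, or concentric
    a = (r1 * r1 - r2 * r2 + d * d) / (2.0 * d) # distance along the center line
    h = math.sqrt(max(0.0, r1 * r1 - a * a))    # distance off the center line
    mx, my = c1[0] + a * dx / d, c1[1] + a * dy / d
    if h == 0.0:
        return [(mx, my)]                       # tangent circles: one point
    return [(mx - h * dy / d, my + h * dx / d),
            (mx + h * dy / d, my - h * dx / d)]
```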

Once we have the intersections, we can get on to other matters. Checking for intersections with light sources is relatively easy - although there are infinitely many rays that can hit the object from the light source, we can trace those the same way we traced using two points - the eye and screen from before. Reflection is also handled similarly to normal ray tracing, and we have our image!

Indeed, the images are interesting. The project will be open-sourced soon, and we encourage everyone to play around with it, send in bug reports/errors, and just enjoy the whole thing. If you have any more questions, comments, or suggestions for this document, the images, or indeed anything else (flames too!), please send them off to us.



Document written by Dan Zaharopol.