Tea Candles

CSE 168 2010 Final Project

Carlos Dominguez and Holmes Futrell

  1. Introduction
  2. Modeling the Scene
    1. Instancing
    2. Constructive Solid Geometry
    3. The Model
  3. Distribution Ray Tracing
    1. Depth of Field
    2. Glossy Reflections
  4. Photon Mapping
    1. Diffuse Interreflection
    2. Caustics
    3. Participating Media
  5. Final Image


For our final project we wanted to model a scene from a real photograph as closely as possible while at the same time applying advanced techniques from the course. We had some vague initial ideas about the type of image we wanted to reproduce, and after some searching we found a photo (shown below) by Paulo Rodrigues that caught our attention. The photograph shows a living room scene with the camera placed just above a coffee table. In the foreground there are four tea candles, a teapot, and an incense burner. In the background the viewer can see a bookcase and then see past a divider and into the kitchen.

We based our rendering on Paulo's photo because of the great number of effects present which we could attempt to capture. Most of the lighting in the living room arrives after one or more bounces from the kitchen. The glass candle holders serve to focus the light from the candle flames to form colored caustics on the coffee table. And there are also a number of "soft" phenomena present: the soft shadows cast by the kitchen lights into the living room, the glossy reflections in the coffee table, and the depth of field from the camera itself.

Modeling the Scene

Neither of us had experience modeling with 3rd-party tools, which presented difficulties since the scene we wished to model is rather complex. We had a handful of .obj models at our disposal, one of which luckily included a teapot, but we had to model the rest of the scene ourselves. To accomplish this we added a number of capabilities to our renderer and modeled the scene procedurally.


Instancing

Instancing is a technique for efficiently rendering copies of objects. Instanced copies may share the same mesh data or bounding volume hierarchy while having different transformations and material properties, which define position, orientation, and color. If a scene contains many copies of an object, this optimization can reduce memory use and rendering time by orders of magnitude.

In our project we used instancing less to optimize rendering than to help us model the scene. For example, we modeled the four tea candle holders as instances of a single object with different material colors and positions. Beyond creating copies where necessary, we applied instancing to most other objects in the scene, modifying their transformation matrices to give each object the proper position, orientation, and scale.
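As an illustrative sketch (in Python, with hypothetical names rather than our renderer's actual classes), instancing amounts to holding a reference to shared geometry alongside per-instance state:

```python
class Instance:
    """One instanced copy: shared geometry plus a per-instance transform and color."""
    def __init__(self, shared_mesh, transform, color):
        self.mesh = shared_mesh    # referenced, never copied
        self.transform = transform # 4x4 row-major matrix: position, orientation, scale
        self.color = color

def translate(tx, ty, tz):
    """4x4 homogeneous translation matrix."""
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

# Four tea-candle holders sharing one mesh, each with its own position and color.
holder_mesh = {"triangles": "..."}   # stand-in for a real mesh / BVH
holders = [Instance(holder_mesh, translate(x, 0.0, z), color)
           for (x, z), color in [((-1, -1), (1.0, 0.2, 0.2)),
                                 (( 1, -1), (0.2, 1.0, 0.2)),
                                 ((-1,  1), (0.2, 0.2, 1.0)),
                                 (( 1,  1), (1.0, 1.0, 0.2))]]
```

However many instances exist, the mesh (and its acceleration structure) is stored only once.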

Constructive Solid Geometry

Constructive Solid Geometry (CSG) uses set operations on objects to define new objects. This is very useful to those like us without modeling skills because it means a small number of geometric primitives can be used to create a range of more interesting objects.

We used CSG in our project to model all of the complex objects in the scene for which we did not have pre-existing models. For example, each glass candle holder is modeled as a positive outer sphere hollowed out by a smaller negative sphere inside it. The top edge of the candle holder is formed by intersecting this hollow sphere with a box. Another example, the chrome handles on the kitchen cabinet in the scene, is illustrated below:
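The set operations can be sketched with point-membership tests (a minimal Python illustration; the radii and box bounds here are made up, and a ray tracer would of course operate on intersection intervals rather than points):

```python
def inside_sphere(p, center, radius):
    return sum((a - c) ** 2 for a, c in zip(p, center)) <= radius ** 2

def inside_box(p, lo, hi):
    return all(l <= a <= h for a, l, h in zip(p, lo, hi))

def inside_holder(p):
    """Outer sphere minus inner sphere (the glass wall), intersected with a
    box that trims the top of the sphere to form the holder's rim."""
    outer = inside_sphere(p, (0.0, 0.0, 0.0), 1.0)
    inner = inside_sphere(p, (0.0, 0.0, 0.0), 0.8)
    box = inside_box(p, (-1.0, -1.0, -1.0), (1.0, 0.6, 1.0))
    return outer and not inner and box
```

Difference hollows the sphere; intersection with the box cuts the opening.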

The Model

We applied constructive solid geometry, instancing, and pre-existing teapot and bunny .obj models to create the scene based on the reference photograph (taking some artistic license along the way). A wireframe of the scene and a rendering without global illumination or distribution ray tracing effects are shown below (note how dark the scene looks without global illumination).

Distribution Ray Tracing

We use distribution ray tracing to model the "soft" phenomena present in the scene such as soft shadows, glossy reflections on the coffee table, and depth of field from the camera. To accomplish this, multiple sample rays per pixel are cast, distributed over the domain of interest (e.g. the area of the light source, the set of outgoing reflection directions, or the surface of the lens).
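At its core this is just averaging: each pixel's value is the mean of many randomly distributed samples. A minimal sketch (hypothetical names; `shade` stands in for the whole trace-and-shade pipeline):

```python
import random

def render_pixel(shade, n_samples, rng=None):
    """Average n_samples jittered sub-pixel samples; `shade` maps a
    sub-pixel offset in [0,1)^2 to a radiance value."""
    rng = rng or random.Random(0)
    total = 0.0
    for _ in range(n_samples):
        total += shade(rng.random(), rng.random())
    return total / n_samples
```

The same averaging applies unchanged when the randomness moves to the lens, the light, or the reflection direction.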

Depth of Field

To simulate depth of field we implemented a simple lens model: a square lens of finite size and a focus plane a set distance away. For each pixel in the image a fixed number of rays are cast through random locations on the lens and then through the focus plane. This causes areas both in front of and behind the focus plane to appear blurred. To reduce noise it is often necessary to use a large number of rays per pixel, especially if the lens is large.
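A sketch of the ray generation (Python, illustrative names; the camera is assumed at `cam` with the lens in its x/y plane): every sample ray for a pixel passes through the same point on the focus plane, so geometry on that plane stays sharp while everything else blurs.

```python
import random

def dof_ray(cam, pixel_dir, focus_dist, lens_size, rng):
    """Jitter the ray origin over a square lens around the camera and aim it
    at the point where the pixel's pinhole ray meets the focus plane."""
    focus_pt = tuple(c + focus_dist * d for c, d in zip(cam, pixel_dir))
    origin = (cam[0] + (rng.random() - 0.5) * lens_size,
              cam[1] + (rng.random() - 0.5) * lens_size,
              cam[2])
    d = tuple(f - o for f, o in zip(focus_pt, origin))
    norm = sum(c * c for c in d) ** 0.5
    return origin, tuple(c / norm for c in d)
```

A larger `lens_size` spreads the origins further apart, increasing both the blur and the sample count needed to resolve it.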

Glossy Reflections

To simulate glossy reflections we adopted the glossy reflection portion of the Schlick BRDF model. In essence the half vector (the vector halfway between the incoming and outgoing ray directions) is randomly varied about the surface normal, so that the outgoing ray direction is randomly perturbed from that of an ideal mirror reflection.
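The effect can be sketched with a simpler perturbation than the full Schlick sampling (a Python illustration, not our actual implementation): jitter the mirror direction by a roughness-scaled random offset and reject directions below the surface.

```python
import random

def reflect(d, n):
    """Mirror-reflect incoming direction d about the unit normal n."""
    dn = sum(a * b for a, b in zip(d, n))
    return tuple(a - 2.0 * dn * b for a, b in zip(d, n))

def glossy_reflect(d, n, roughness, rng):
    """Perturb the ideal mirror direction by a random offset scaled by
    `roughness`; roughness = 0 reduces to a perfect mirror."""
    while True:
        p = tuple(c + roughness * (2.0 * rng.random() - 1.0)
                  for c in reflect(d, n))
        norm = sum(c * c for c in p) ** 0.5
        p = tuple(c / norm for c in p)
        if sum(a * b for a, b in zip(p, n)) > 0.0:  # stay above the surface
            return p
```

Averaging many such rays per hit point produces the blurry table-top reflections, at the cost of the sampling noise discussed below.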

Photon Mapping

Diffuse Interreflection

We use photon mapping to compute the lighting on diffuse surfaces which arrives after one or more bounces from other diffuse surfaces (diffuse interreflection). To do this we trace photons from each light source and after the first diffuse reflection we store the photon in the global photon map at each interaction with a diffuse surface. In the ray-tracing pass we calculate direct illumination by casting shadow rays and calculating the irradiance directly. We calculate indirect illumination from other diffuse surfaces by considering the area density of nearby photons in the global photon map.
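The density estimate at the heart of this step can be sketched as follows (Python, with a linear scan for clarity; a real renderer would gather via a kd-tree): sum the power of the k nearest photons and divide by the area of the disc that bounds them.

```python
import math

def irradiance_estimate(photons, x, k):
    """Gather the k photons nearest to x and divide their summed RGB power
    by the area pi * r^2 of the disc bounding them."""
    def d2(ph):
        return sum((a - b) ** 2 for a, b in zip(ph["pos"], x))
    nearest = sorted(photons, key=d2)[:k]
    r2 = d2(nearest[-1])          # squared radius of the gather region
    power = [0.0, 0.0, 0.0]
    for ph in nearest:
        for i in range(3):
            power[i] += ph["power"][i]
    return tuple(p / (math.pi * r2) for p in power)
```

Larger k smooths the estimate (good for low-frequency interreflection) at the cost of blurring detail.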


Caustics

Photon mapping is also useful for rendering caustics. To do this we store photons which arrive at diffuse surfaces via specular reflection or refraction in a separate caustic photon map. To render the caustics on diffuse surfaces we use an area density estimate from the caustic map. In addition, we apply a cone filter in the irradiance estimate in order to sharpen the caustics.
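The cone filter weights each gathered photon by its distance from the query point, so photons near the center count more than those at the edge of the gather radius (a Python sketch over an already-gathered photon set; the filter constant k and the normalization 1 - 2/(3k) follow the standard cone-filter formulation):

```python
import math

def cone_filtered_estimate(photons, x, k_cone=1.1):
    """Cone-filtered density estimate: weight each photon by 1 - d/(k*r)
    and normalize by (1 - 2/(3k)) * pi * r^2."""
    dists = [math.dist(ph["pos"], x) for ph in photons]
    r = max(dists)
    power = [0.0, 0.0, 0.0]
    for ph, d in zip(photons, dists):
        w = 1.0 - d / (k_cone * r)
        for i in range(3):
            power[i] += w * ph["power"][i]
    norm = (1.0 - 2.0 / (3.0 * k_cone)) * math.pi * r * r
    return tuple(p / norm for p in power)
```

Compared to the unfiltered estimate, this keeps the bright cores of the caustics from being smeared across the gather radius.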

Participating Media

In our project we consider the air as a participating medium in order to model the candle flames. To accomplish this we use a volume photon map which stores interaction events between photons emitted from the flames and the surrounding air. In the rendering pass we use ray marching with a volume radiance estimate from the volume photon map to compute the radiance in-scattered toward the camera.
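The marching step can be sketched as a midpoint-rule integral along the ray (Python, assuming for simplicity a homogeneous medium with scattering coefficient sigma_s and extinction coefficient sigma_t; `radiance_at` stands in for the volume photon map lookup):

```python
import math

def march_inscatter(origin, direction, t_max, step, radiance_at, sigma_s, sigma_t):
    """Accumulate in-scattered radiance along the ray, attenuating each
    sample by the transmittance exp(-sigma_t * t) back to the origin."""
    L = 0.0
    t = 0.5 * step
    while t < t_max:
        x = tuple(o + t * d for o, d in zip(origin, direction))
        L += math.exp(-sigma_t * t) * sigma_s * radiance_at(x) * step
        t += step
    return L
```

For a constant radiance field this converges to the analytic result (sigma_s / sigma_t) * (1 - exp(-sigma_t * t_max)), which makes the step size easy to sanity-check.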

Final Image

Our final image shows the results of our model combined with distribution ray tracing effects and global illumination via photon mapping. To render the image we used 5,000,000 photons, distributed equally amongst the light sources. Although the candles are of much lower wattage than the lights in the kitchen, we gave them an equal number of photons because the caustics they produce are very important to the final image.

We traced 50 sample rays per pixel, and would have used more had time permitted: the depth of field and glossy reflection effects generate high-frequency noise that takes a huge number of samples to eliminate. The final render at 1024x768 resolution with 50 sample rays per pixel took approximately 6 hours, almost all of it spent in the ray-tracing pass (relatively little time was spent tracing photons). The render would have taken even longer had we not been running on a quad-core machine with a multi-threaded ray-tracing pass. Our final image is the result of this overnight render and is shown below:

Thanks for reading!

Download a full resolution, lossless version of the final image.
