The two main scene elements are a shishi-odoshi, or "deer-scarer", and an
ishi-doro, a stone lantern. Both are common elements in a Japanese garden;
I chose them for the simplicity and elegance of their geometry. I modeled
all of the scenery myself in Maya and imported it into my renderer. The
final image is 1024x1024, using 100,000 photons for global illumination,
no caustics, 128 samples per pixel for soft shadows, and 144 samples in
the indirect illumination gathering step. The render was distributed over
24 machines in the SDSC Visualization Lab; each node took an average of
18 minutes.
This page describes my final project for CSE168: Rendering
Algorithms. The project consists of writing a raytracer from scratch in
under 10 weeks, which I dubbed RenderCam :). I roughly followed the
outline given in the course, but added several features out of my own
interest.
The only primitives it currently supports are triangle meshes (a
triangle is just a mesh with one triangle) and spheres. It has a
reasonably generic shader system, with several shaders implemented.
All of the raytracing code is written from scratch, except for the
Vector/Matrix classes provided in the course (with a few modifications
of my own) and Henrik Wann Jensen's photon map implementation.
Here is a very early but nice-looking test image.
A very clean (lots of samples) path tracing image illustrating color bleeding.
Direct visualization of the global photon map.
A simple Cornell box replica, also using path tracing.
The global (indirect) illumination pass.
The final composite.
A caustic formed by a glass teapot.
Here is a list of features, broken up by category.
The two algorithms I most regret not having had the time to implement
were irradiance caching (rendering is painfully slow without it) and
importance sampling (without it, it's difficult to get sharp-looking
results).
- Direct Illumination Raytracing
- Specular Reflection/Refraction - implemented with proper Fresnel equations
- Monte Carlo Raytracing - Monte Carlo techniques are mainly used for shadows,
but stratified distribution raytracing is used in the gathering step of
the photon mapping illumination.
- Path Tracing - Naive path tracing is implemented as a reference for more
sophisticated algorithms, and was a starting point for global illumination.
- Tone Mapping - I use Schlick's simple rational approximation for tone mapping the final images.
- Photon Mapping - Photon maps are used for caustics and global illumination,
using the two-pass algorithm Jensen describes in his book.
- Low-Discrepancy Monte Carlo Sequences - For photon stratification, Halton
sequences, a type of low-discrepancy quasi-random number sequence, are used
to help distribute samples more evenly.
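To make the Halton idea concrete, here is a minimal sketch of the radical-inverse construction (the function names are my own, not RenderCam's):

```cpp
#include <cassert>
#include <cmath>
#include <utility>
#include <vector>

// Radical inverse of index i in a (prime) base: mirror the base-b digits
// of i around the radix point, yielding a low-discrepancy value in [0, 1).
double halton(unsigned i, unsigned base) {
    double result = 0.0;
    double f = 1.0 / base;
    while (i > 0) {
        result += f * (i % base);
        i /= base;
        f /= base;
    }
    return result;
}

// 2D Halton points using bases 2 and 3; successive points fill the unit
// square far more evenly than independent uniform random numbers.
std::vector<std::pair<double, double>> haltonPoints2D(unsigned n) {
    std::vector<std::pair<double, double>> pts;
    pts.reserve(n);
    for (unsigned i = 1; i <= n; ++i)
        pts.emplace_back(halton(i, 2), halton(i, 3));
    return pts;
}
```

Pairing two coprime bases like this is a common way to stratify 2D quantities such as photon emission directions.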
- Distributed Rendering - Since I work at SDSC
(note the web address), I figured I might as well make the most of my
experience and resources. The renderer is packaged as a standard Linux
binary, and can be run on the command line with a scene script as
input. This is one place where my choice of scene scripting helped.
A high-level command script asks the user for general rendering
parameters: number of samples, resolution, base scene file, etc. The
script then generates N auxiliary scene scripts by appending
dynamically created Lua code to the end of the scene file, then
launches a scheduling system (APST, developed by the Grid Programming
Lab at UCSD/SDSC) for N jobs, one per appended script. Each job
renders some chunk of the scene to a simplified raw floating-point
format; the chunks are then composited together (you have to love the
simplicity: the unix command 'cat'), tone mapping is applied, and the
final image is output.
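Schlick's operator is usually written as a rational function rather than a true exponential; assuming that is the mapping used in the final step above, a minimal sketch:

```cpp
#include <cassert>
#include <cmath>

// Schlick's rational tone-mapping operator: maps scene luminance
// L in [0, Lmax] into display range [0, 1]. The brightness parameter
// p > 1 lifts dark regions; at L = Lmax the result is exactly 1.
double schlickToneMap(double L, double Lmax, double p) {
    return (p * L) / (p * L - L + Lmax);
}
```

Applied per pixel to the composited raw floating-point buffer, this is the last step before writing the displayable image.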
I implemented most of my algorithms as shaders. There is a generic
Shader class and a Material class, where each material contains a list
of pairs of Shader instances and weights used to calculate the overall
illumination. For example, there is a PathTracingDiffuseShader class
and a PhotonDiffuseShader class. The elegance of this system is that
for a large portion of the algorithms, only the Lambertian surface
shader needs to be rewritten. And since shaders are loaded on the fly,
one can switch between rendering algorithms without recompiling.
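As a rough sketch of this design (the class and member names here are illustrative, not RenderCam's actual code):

```cpp
#include <cassert>
#include <cmath>
#include <memory>
#include <utility>
#include <vector>

// Illustrative names only -- not RenderCam's actual classes.
struct HitInfo { /* hit point, normal, view direction, etc. */ };

struct Shader {
    virtual ~Shader() = default;
    // Radiance reduced to a scalar here for brevity.
    virtual double shade(const HitInfo& hit) const = 0;
};

// A trivial shader so the sketch is runnable.
struct ConstantShader : Shader {
    double value;
    explicit ConstantShader(double v) : value(v) {}
    double shade(const HitInfo&) const override { return value; }
};

// A material is a weighted list of shaders; the overall illumination is
// the weighted sum, so swapping algorithms means swapping shader instances.
struct Material {
    std::vector<std::pair<std::shared_ptr<Shader>, double>> layers;
    double shade(const HitInfo& hit) const {
        double total = 0.0;
        for (const auto& layer : layers)
            total += layer.second * layer.first->shade(hit);
        return total;
    }
};
```

The virtual `shade` call is the single extension point: a path-tracing diffuse shader and a photon-map diffuse shader can be swapped without touching the material.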
- Diffuse shader with shadow rays
- Specular shader (reflection/refraction)
- Phong highlight shader (uses an exponential to approximate the reflection of a visible light)
- Perlin noise shader for doing procedural textures such as marble or wood grains.
- A general 2D smoothed noise shader (good for granite or stone)
- A generic photon mapping shader, with options for direct visualization of
the photon map or the full compositing algorithm as described in Jensen's book.
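A 2D smoothed-noise shader of the kind listed above is commonly implemented as value noise: hashed lattice values blended with a smooth fade. A hedged sketch (the hash constants and names are mine, not the renderer's):

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Hash an integer lattice point to a repeatable pseudo-random value in [0, 1).
double latticeValue(int x, int y) {
    uint32_t h = static_cast<uint32_t>(x) * 374761393u +
                 static_cast<uint32_t>(y) * 668265263u;
    h = (h ^ (h >> 13)) * 1274126177u;
    h ^= h >> 16;
    return (h & 0xFFFFFF) / double(0x1000000);
}

// Smooth fade so the interpolation has zero derivative at lattice points.
double fade(double t) { return t * t * (3.0 - 2.0 * t); }

// 2D value noise: bilinearly interpolate hashed lattice values with the
// fade curve -- the kind of smoothed noise a granite or stone shader can
// threshold or sum over several octaves.
double smoothedNoise2D(double x, double y) {
    int xi = static_cast<int>(std::floor(x));
    int yi = static_cast<int>(std::floor(y));
    double tx = fade(x - xi), ty = fade(y - yi);
    double a = latticeValue(xi, yi),     b = latticeValue(xi + 1, yi);
    double c = latticeValue(xi, yi + 1), d = latticeValue(xi + 1, yi + 1);
    double top = a + (b - a) * tx;
    double bot = c + (d - c) * tx;
    return top + (bot - top) * ty;
}
```

Perlin's gradient noise replaces the hashed values with hashed gradients, which removes the blocky look at low frequencies.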
The acceleration structures are hardcoded, since they would conceivably change very seldom.
- Vector: a naive test of all scenery objects
- Uniform Grid: a standard uniform grid implementation
- Hierarchical Grid: an optimized two level hierarchical grid
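A uniform grid like the one listed above boils down to mapping points into cells; a simplified sketch (illustrative only, not the renderer's actual structure):

```cpp
#include <cassert>

// Illustrative uniform-grid lookup: map a point inside an axis-aligned
// bounding box to a flat cell index. Objects are binned into the cells
// their bounding boxes overlap, and rays step cell to cell.
struct UniformGrid {
    double minX, minY, minZ, maxX, maxY, maxZ;
    int nx, ny, nz;  // number of cells along each axis

    int cellIndex(double x, double y, double z) const {
        int ix = axisCell((x - minX) / (maxX - minX), nx);
        int iy = axisCell((y - minY) / (maxY - minY), ny);
        int iz = axisCell((z - minZ) / (maxZ - minZ), nz);
        return ix + nx * (iy + ny * iz);  // flatten the 3D index
    }

private:
    // Clamp so points on the max boundary land in the last cell.
    static int axisCell(double t, int n) {
        int i = static_cast<int>(t * n);
        if (i < 0) i = 0;
        if (i >= n) i = n - 1;
        return i;
    }
};
```

A two-level hierarchical grid follows the same idea, nesting a finer grid inside densely occupied cells of a coarse one so empty space stays cheap.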
Instead of using flex/bison to code up a parser, I decided to leverage the flexibility of an existing scripting language, Lua.
Instead of developing a proprietary syntax for describing scene
elements, I added simple bindings to several of the C++ classes that
store the description and parameters of the scene. I found this makes
it much easier to add general functionality (such as rendering multiple
frames for animation), since you can leverage the facilities of a
Turing-complete language. Here's an example (the first test image):
-- camera setup
camera = Camera()
-- final scene setup
scene = Scene()
You'll notice that the script also dictates what action is taken; right
now the options are either to run the scene previewer app (Tester), or
to render directly to an image and write it to a file using this
snippet:
r = RenderCam()
image = Image()
I find this method of scripting compact and easy to use.
One thing that motivated me to do it this way was Doug Zongker's Slithy,
a presentation tool that is likewise driven by a general-purpose
scripting language.