"Shishi-Odoshi"

Background

The two main scene elements are a shishi-odoshi, or "deer scarer", and an ishi-doro, a stone lantern. Both are common elements in a Japanese garden, and I chose them for the simplicity and elegance of their geometry. I modeled all of the scenery myself in Maya and imported it into my renderer. The final image is 1024x1024, rendered with 100,000 photons for global illumination (no caustics), 128 samples per pixel for soft shadows, and 144 samples in the indirect illumination gathering step. The render was distributed over 24 machines in the SDSC Visualization Lab; each node took an average of 18 minutes.


Overview

This page describes my final project for CSE168: Rendering Algorithms. The project consists of writing a raytracer from scratch in under 10 weeks, which I dubbed RenderCam :). I roughly followed the outline given in the course, but there are several additional features that I added out of my own interest.

The only primitives it currently supports are triangle meshes (a single triangle is just a mesh with one triangle) and spheres. It has a reasonably generic shader system, and I have implemented several shaders.

All of the raytracing code is written from scratch except for the Vector/Matrix classes provided in the course (with a few modifications of my own) and Henrik Wann Jensen's photon map implementation.


Images


Here is a very early but nice-looking test image.

A very clean (lots of samples) path-traced image illustrating color bleeding.

Direct visualization of the global photon map.

A simple Cornell box replica, also rendered with path tracing.

The illumination components: diffuse/specular only, the caustics, the global (indirect) illumination, and the final composite.

A caustic formed by a glass teapot.

Features

Here is a rundown of the features, broken up by category.

Algorithms

The two algorithms I most regret not having had the time to implement are irradiance caching (rendering is painfully slow otherwise) and importance sampling (without it, it's difficult to get sharp-looking caustics).

Shaders

I implemented most of my algorithms as shaders. There is a generic Shader class and a Material class; each material contains a list of pairs of Shader instances and weights, which are combined to calculate the overall illumination. For example, there is a PathTracingDiffuseShader class and a PhotonDiffuseShader class. The elegance of this system is that, for a large portion of the algorithms, only the Lambertian surface shader needs to be rewritten. And since shaders are loaded on the fly, you can switch between rendering algorithms without recompiling.
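For instance, switching an object over to a different rendering algorithm is purely a scene-script edit. Here is a minimal sketch, assuming the PathTracingDiffuseShader and PhotonDiffuseShader constructors take the same Color argument as DiffuseShader (s1 is the sphere from the script further down):
	-- the same red material under three different algorithms
	red_whitted=Material()
	red_whitted:addShader(DiffuseShader(Color(1,0,0)),1)
	
	red_path=Material()
	red_path:addShader(PathTracingDiffuseShader(Color(1,0,0)),1)
	
	red_photon=Material()
	red_photon:addShader(PhotonDiffuseShader(Color(1,0,0)),1)
	
	-- swapping this one line switches algorithms, no recompile needed
	s1.m=red_path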

Acceleration Structures

The acceleration structures are hardcoded, since they would conceivably be changed very seldom.

Scene Scripts

Instead of using flex/bison to code up a parser, I decided to leverage the flexibility of an existing scripting language, Lua. Rather than developing a proprietary syntax for describing scene elements, I added simple bindings to several of the C++ classes that store the description and parameters of the scene. I found this makes it much easier to add general functionality (such as rendering multiple frames for animation; see the frame-loop sketch at the end of this section), since you can leverage the facilities of a Turing-complete language. Here's an example (the first test image):
	--- Materials
	red_diffuse=Material()
	red_diffuse:addShader(DiffuseShader(Color(1,0,0)),1)
	
	green_diffuse=Material()
	green_diffuse:addShader(DiffuseShader(Color(0,1,0)),1)
	
	marble=Material()
	marble:addShader(PerlinShader(Color(.8,.8,.8),Color(.3,.3,.3)),.8)
	marble:addShader(SpecularShader(),.2)
	marble:addShader(PhongHighlightShader(),1)
	
	mirror=Material()
	mirror:addShader(SpecularShader(),1)
	
	red_shiny=Material();
	red_shiny:addShader(DiffuseShader(Color(1,0,0)),.7)
	red_shiny:addShader(SpecularShader(),.3)
	red_shiny:addShader(PhongHighlightShader(),1)
	
	--- Geometry
	s1=Sphere(Vector(0,2,0),1)
	s1.m=red_shiny
	s1:transform(Matrix():translate(Vector(0,1,0)))
	
	floor=TriangleMesh("plane.obj")
	floor.m=marble
	floor:transform(Matrix():scale(Vector(5,5,5)))
	
	-- camera setup
	camera = Camera()
	camera.FoV=1.04719755
	camera.position=Vector(4,4,0)
	camera:lookAt(Vector(0,0,0))
	
	-- final scene setup
	scene = Scene()
	scene.resx=512
	scene.resy=512
	scene:addObject(s1)
	scene:addObject(floor)
	
	scene.camera=camera
	scene:addLight(Light(Vector(3,3,3),100))
	scene:addLight(Light(Vector(-3,3,-3),100))
	scene:addLight(Light(Vector(0,5,0),10))
	
	t=Tester();
	t:Run(scene)

You'll notice that the script also dictates what action is taken. Right now the options are to either run the scene previewer app (Tester), as above, or render the scene to an image and write it to a file using this snippet:
	r = RenderCam()
	image = Image()
	image=r:render(scene)
	image:write_image("render_output.ppm")
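And since the whole script is ordinary Lua, rendering multiple frames for an animation is just a loop. Here is a minimal sketch that orbits the camera around the origin and writes one image per frame (the orbit radius, height, and frame count are made up for illustration, and it assumes the scene keeps a reference to the camera object set above):
	r = RenderCam()
	-- orbit the camera around the origin and render 24 frames
	for frame=0,23 do
		local a = 2*math.pi*frame/24
		camera.position=Vector(4*math.cos(a),4,4*math.sin(a))
		camera:lookAt(Vector(0,0,0))
		local image=r:render(scene)
		image:write_image(string.format("frame_%02d.ppm",frame))
	end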
I find this method of scripting to be compact and easy to use. One reason I was motivated to do it this way was my experience using Doug Zongker's Slithy