CSE 168 Final Project

Water Pillar

Joel Chelliah

June 8th, 2010


For my final project I wanted to render a pillar of water hitting a water surface. I was inspired by the following image I found on deviantArt by givemerocknroll. I was really fascinated by this image and wanted to see how well I could render a similar scene. The parts of the image I was most interested in were how the bottom of the water is lit up purely by caustics, and how the background is a lot clearer when seen through the pillar of water. These were the things I wanted to focus on the most in my rendering.


I modelled the scene using Blender. This was my first time modelling anything, so I was afraid I would spend too much of my time just learning Blender, but after looking at a few tutorials on the web, making the water turned out to be pretty easy. I did the entire simulation in Blender and picked out my favorite frame to use in the rendering.

In the scene, I also place a large plane that intersects the entire body of water, but I do this in my code just before rendering, so it is not shown in the Blender screenshots above. This plane is where all the photons that refract through the water are stored.
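Storing a refracted photon on that plane boils down to a ray-plane intersection followed by recording the hit. A minimal sketch of the idea (function and record names are my own, not from my actual code), assuming the storage plane is horizontal at height plane_y:

```python
# Sketch: intersect a refracted photon's ray with a horizontal storage
# plane y = plane_y and, if it hits, record position and power.

def store_photon_on_plane(origin, direction, power, plane_y, photon_map):
    """origin/direction are 3-tuples (x, y, z); the plane is y = plane_y."""
    oy, dy = origin[1], direction[1]
    if abs(dy) < 1e-8:          # ray parallel to the plane: no hit
        return False
    t = (plane_y - oy) / dy
    if t <= 0:                  # plane is behind the ray origin
        return False
    hit = tuple(o + t * d for o, d in zip(origin, direction))
    photon_map.append({"pos": hit, "power": power})
    return True

# Example: a photon travelling straight down from y = 2 onto the plane y = 0.
photons = []
store_photon_on_plane((0.0, 2.0, 0.0), (0.0, -1.0, 0.0),
                      (0.1, 0.1, 0.1), 0.0, photons)
```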


I'm using the following features for this scene: photon mapping, depth of field, and anti-aliasing through stratified supersampling.

Since my model is pretty big (246,000 triangles) and takes some time to render, I used the Cornell box scene to test all my implementations before using them in my scene. Below are some images from testing my photon mapping for different parameter values. The values under the images are the total number of photons emitted and the number of photons sampled during the irradiance estimate.

50000 photons emitted, 100 sampled
5000000 photons emitted, 2000 sampled

5000000 photons emitted, 2000 sampled

After getting some good results with photon mapping, my next goal was to implement depth of field. I wanted the focus on the pillar of water so that I would get a good blur on all the water drops flying around, hoping that this would give a good feeling of depth to the scene. Below we see the difference between no depth of field and depth of field using 80 rays per pixel. This greatly improves the image and makes it look a lot more realistic.

500000 photons emitted, 200 sampled, without depth of field
500000 photons emitted, 200 sampled, 80 rays per pixel
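Depth of field with a thin-lens camera works by jittering each ray's origin over the aperture disc while aiming every jittered ray at the same point on the focal plane, so geometry at the focal distance stays sharp and everything else blurs. A sketch in camera space (the parameter names are my own, not from my actual code):

```python
import math
import random

# Sketch of thin-lens depth of field: sample the ray origin uniformly on
# the aperture disc and aim the ray at this pixel's point on the focal plane.

def dof_ray(pixel_dir, aperture_radius, focal_dist, rng=random):
    """pixel_dir: normalized direction of the pinhole ray from the eye.
    Returns (origin, direction) of one lens-sampled ray in camera space."""
    # the point this pixel focuses on
    focal_point = tuple(focal_dist * d for d in pixel_dir)
    # uniform sample on the aperture disc via polar coordinates
    r = aperture_radius * math.sqrt(rng.random())
    phi = 2.0 * math.pi * rng.random()
    origin = (r * math.cos(phi), r * math.sin(phi), 0.0)
    # new direction: from the lens sample towards the focal point
    direction = tuple(f - o for f, o in zip(focal_point, origin))
    norm = math.sqrt(sum(c * c for c in direction))
    return origin, tuple(c / norm for c in direction)
```

Averaging many such rays per pixel (80 in the image above) produces the blur; a larger aperture_radius gives a shallower depth of field.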

The final step was adding anti-aliasing to the scene. I implemented stratified supersampling and played around with dividing each pixel into different numbers of subpixels. In the end I got pretty good results using 10 rays for depth of field and 25 rays for anti-aliasing (dividing each pixel into 5x5 subpixels). The result can be seen below.

500000 photons emitted, 200 sampled, 80 DoF rays per pixel
500000 photons emitted, 200 sampled, 10 rays for DoF, 25 rays for AA

The difference seems to be very small. To me, the first image still looks better than the one with AA, even though that one uses 10*25 = 250 rays per pixel while the first image only uses 80. I also feel there is a greater sense of depth in the first image. So for the final image I decided not to do anti-aliasing and instead shoot a lot more DoF rays per pixel.
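The stratified supersampling used above can be sketched in a few lines: divide each pixel into n x n subpixel cells and pick one random sample inside each cell, so the samples cannot clump together the way purely random samples can:

```python
import random

# Sketch of stratified (jittered) supersampling for a single pixel:
# one random sample inside each of the n x n subpixel cells.

def stratified_samples(px, py, n, rng=random):
    """Returns the n*n sample positions inside pixel (px, py)."""
    samples = []
    for i in range(n):
        for j in range(n):
            sx = px + (i + rng.random()) / n   # jitter within cell column i
            sy = py + (j + rng.random()) / n   # jitter within cell row j
            samples.append((sx, sy))
    return samples
```

With n = 5 this gives the 25 AA samples per pixel used in the comparison above; the pixel's final color is the average of the rays traced through these positions.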

Final image

For the final image I emitted 10,000,000 photons, sampled 4000 in the irradiance estimate, and used 100 rays per pixel for depth of field. If I had had more time for the rendering I would also have included anti-aliasing, but I would have needed at least 3x3 subpixels to see any difference, which means I would have had to shoot a total of 900 rays per pixel, and I didn't have time for that. I'm still really happy with how my image turned out. My favorite part of this project was seeing the improvement in the image after implementing each new algorithm; this was a huge motivation to keep working on making the image look better. I am definitely going to work on improving this image even more after the rendering contest!

Thank you for reading.

- Joel Chelliah