This is the webpage for my final project. Unfortunately, for a variety of reasons,
I was unable to complete my intended final project image, seen below. The original
image was to have included an implementation of depth of field, soft shadows, the use
of textures, antialiasing, and reflections off of metal and lacquered surfaces.
My plan was to complete the project in a series of phases: implementing the features above, finding a model and verifying it would work with my renderer, setting up the scene, and ultimately rendering the image.
As stated above, the final image is not complete. Due to time constraints, unexpected software and hardware malfunctions, and general stubbornness (to name a few), it just did not get done. This shouldn't be mistaken for a lack of commitment or effort; it was a mismanagement of time and a series of unforeseen problems that ultimately left the project incomplete.
Below I explain my implementation of the features that did get accomplished, and how far each one got. It is not what I had planned, of course, but it is all I have to show for the time I spent on this project.
1. Scene Contents and Geometry
This isn't particularly related to the core of the project, which was to write the advanced algorithms that add up to a photorealistic rendering. In the beginning, I had assumed that obtaining geometry for what I intended to do would be as easy as going to Google, searching, and clicking the first link, which would magically download a complete OBJ file formatted perfectly for my software. In retrospect, this was naive of me. When I finally researched it, perhaps too late, I realized that people sell models online, and that only low-quality models are available for free. It was as if I had expected Gibson to host downloadable OBJ files on their website.
In the end, I did find a model, but I could not convert it into the OBJ file I needed without significant effort. So I ended up modeling the room I originally wanted to use, an office space I have access to, and instead of my guitar sitting on top of the cardboard box, a red sphere now sits.
2. Depth of Field
Depth of field was particularly difficult for me to grasp at first. I initially tried to implement the feature directly inside the function I use to generate my EyeRays. That turned out to be the wrong answer, because it effectively changed my FOV in the process. After reading a number of papers I found online, it finally came to me: take an eye ray shot from the camera as normal, find that ray's intersection with the focal plane, and then apply the thin-lens formula for DOF. This works like a charm, but drastically increases render time due to the increased number of rays shot. Though my implementation is far from perfect, it essentially accomplishes what I had intended.
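The thin-lens step can be sketched roughly like this. This is a simplified illustration rather than my renderer's actual code: plain tuples stand in for its vector class, and the lens is sampled in the camera's xy-plane instead of a proper right/up basis.

```python
import math
import random

def thin_lens_ray(origin, direction, focal_dist, aperture_radius):
    """Perturb one eye ray to simulate a thin lens.

    origin/direction: the original eye ray (direction assumed normalized).
    focal_dist: distance along the ray to the focal plane.
    aperture_radius: lens radius; 0 degenerates to a pinhole camera.
    """
    # Point on the focal plane this eye ray would hit; all lens samples
    # are re-aimed at this point, so geometry at focal_dist stays sharp.
    focal_point = tuple(origin[i] + focal_dist * direction[i] for i in range(3))

    # Rejection-sample a point on the unit disk for the lens aperture.
    while True:
        dx = random.uniform(-1.0, 1.0)
        dy = random.uniform(-1.0, 1.0)
        if dx * dx + dy * dy <= 1.0:
            break
    lens_origin = (origin[0] + aperture_radius * dx,
                   origin[1] + aperture_radius * dy,
                   origin[2])

    # New direction: from the lens sample toward the shared focal point,
    # so objects off the focal plane blur in proportion to their distance.
    d = tuple(focal_point[i] - lens_origin[i] for i in range(3))
    length = math.sqrt(sum(c * c for c in d))
    return lens_origin, tuple(c / length for c in d)
```

Averaging many such rays per pixel produces the blur; this is also where the extra render time comes from.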
3. Soft Shadows
Though not a terribly difficult feature to implement, soft shadows add a great deal of realism to any scene, since hard shadows look unnatural except under very specific lighting. Soft shadows in my implementation are done in the Shader function by sampling various points on the surface of the light. I had started to write code to handle spherical light surfaces, but decided against it since none were present in the scene I had intended to render. I focused on area lights instead, sampling at various positions within the boundaries of the light. For each sampled position, I cast a shadow ray from the hit point to that point on the light. Averaging these visibility results gives me the gradient I need.
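The sampling loop amounts to something like the sketch below. The `occluded` callback is a hypothetical stand-in for the renderer's shadow-ray intersection test, and the rectangular light is parameterized by a corner plus two edge vectors.

```python
import random

def soft_shadow_factor(hit_point, light_corner, light_u, light_v,
                       occluded, samples=16):
    """Fraction of the area light visible from hit_point, in [0, 1].

    light_corner + s*light_u + t*light_v (s, t in [0, 1]) parameterizes
    the rectangular light. occluded(p, q) is a stand-in callback that
    returns True if a shadow ray from p to q hits geometry first.
    """
    visible = 0
    for _ in range(samples):
        s, t = random.random(), random.random()
        sample = tuple(light_corner[i] + s * light_u[i] + t * light_v[i]
                       for i in range(3))
        if not occluded(hit_point, sample):
            visible += 1
    # Averaging the binary visibility tests yields the penumbra gradient:
    # points partially blocked from the light get intermediate values.
    return visible / samples
```

The returned factor scales the light's contribution in the shader; fully lit points get 1.0, fully shadowed points 0.0, and the penumbra falls in between.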
4. Photon-Mapping
I attempted to implement photon mapping in my renderer and believe I came close, but ultimately fell short. Essentially, photon mapping traces light from a light source into the scene, recursively and randomly re-shooting photons from each surface that is hit, with a random probability of stopping at each bounce. At each hit point you store various values describing the photon, such as its BRDF, outgoing direction, and position. These photons are stored spatially in a kd-tree and retrieved using a modified version of the nearest-neighbor algorithm. The kd-tree and the search are implemented and working in my renderer, but the setting of the alpha value and the interpolation of the photons is where I think I went wrong. I set up my renderer to shoot rays from the light in a hemispherical fashion, but when I attempted to integrate, I got inconsistent results. I believe this is due to an error in my understanding of the actual process. At this point, however, I believe it would be quick to fix.
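The tracing pass with its random stopping probability (Russian roulette) can be sketched as follows. This is a simplified illustration, not my renderer's code: `scene_hit` is a hypothetical intersection routine, the photon record is reduced to a (position, incoming direction, power) tuple, and a real implementation would sample the bounce direction from the surface BRDF and then insert the records into the kd-tree.

```python
import random

def trace_photon(position, direction, power, scene_hit, photon_map,
                 absorb_prob=0.3, max_bounces=8):
    """Follow one photon through the scene, storing a record at each hit.

    scene_hit(pos, dir) is a stand-in intersection routine returning
    (hit_point, new_direction) or None on a miss.
    """
    for _ in range(max_bounces):
        hit = scene_hit(position, direction)
        if hit is None:
            return
        hit_point, new_direction = hit
        # Record the photon for the kd-tree: where it landed, the
        # direction it arrived from, and its current power.
        photon_map.append((hit_point, direction, power))
        # Russian roulette: stop with fixed probability, and boost the
        # power of surviving photons so the estimate stays unbiased.
        if random.random() < absorb_prob:
            return
        power = tuple(p / (1.0 - absorb_prob) for p in power)
        position, direction = hit_point, new_direction
```

The radiance estimate then queries the kd-tree for the nearest photons around a shading point and averages their contributions, which is the interpolation step I believe I got wrong.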
5. Anti-Aliasing
Though it isn't visible in the final image I was able to compute, anti-aliasing was going to be implemented as part of the photon-mapping / path-tracing implementation using Monte Carlo integration. AA is a fundamental feature in CG, so I am not trying to downplay its importance; I just didn't get it implemented as I had planned.
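The Monte Carlo approach I had in mind amounts to jittered supersampling: average several randomly offset sub-pixel rays per pixel. A minimal sketch, where `trace` is a hypothetical function returning the radiance for a ray through image-plane coordinates:

```python
import random

def pixel_color(x, y, trace, samples=4):
    """Average stratified, jittered sub-pixel samples for pixel (x, y).

    trace(u, v) is a stand-in returning an (r, g, b) radiance for the
    ray through image-plane coordinates (u, v).
    """
    n = int(samples ** 0.5)  # samples per axis (assumes a square count)
    total = [0.0, 0.0, 0.0]
    for i in range(n):
        for j in range(n):
            # Jitter within each stratum so edge pixels average the
            # colors on both sides instead of aliasing to one of them.
            u = x + (i + random.random()) / n
            v = y + (j + random.random()) / n
            c = trace(u, v)
            for k in range(3):
                total[k] += c[k]
    return tuple(t / (n * n) for t in total)
```

Since each sample is already a full path-traced ray, this folds naturally into the same Monte Carlo loop as the path tracing itself.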
As I said above, while the project did not get finished, that shouldn't be mistaken for a lack of effort or commitment. Personally, I tend to work out problems on my own with little help, which proved quite detrimental to getting this project done. I realize many opportunities for help were presented to me and missed because of my stubborn nature. In the end, however, I feel I learned a great deal about the techniques and possibilities of computer graphics, and I hope to continue doing something related to the field.