CSE 168 Spring 2005
I initially started the project with great ambitions to implement a number of advanced techniques and expand my experience with the Maya modelling program. I had to curtail my ambitions as time progressed in order to fit within the confines of the allotted time. Perhaps in the future I will revisit this project, complete all the goals I had set out and implement a more advanced scene.
The final image I settled on is my own version of the Cornell Box. This box was the basis for most of my development and testing, as it allowed me to exercise a number of ray tracing techniques.
Techniques

In my final version of the project I have implemented the following additional features in my ray tracer:
I also added a number of new loading features to the default miro loader. Each primitive now has the option to load in specular, reflectivity, refraction and colour values. This allows me to specify more exactly how I want objects to behave within the scene, gave me much more versatility during testing, and allowed changes to be made far more rapidly than editing the 3d model's object file. I also added support for area lights: a rectangular area light can now be added to the scene to support the use of Soft Shadows.
I will explain each of the techniques in depth: how they work and how I went about implementing them. Additional screenshots will illustrate how each technique affected the final image.
Refraction is the effect of how light is altered when it enters a different body. This effect can be clearly seen in real life, when water distorts the image of the world behind it. It is due to light changing speed as it passes between media, which bends its path. Different media will have different effects on the resulting image. In order to render this effect, we need to simulate how the light is affected and calculate the resulting image that should be seen.
As with most of the techniques in ray tracing, we accomplish this by generating more rays. When we hit an object, we determine how the light is bent by the medium. The new direction is calculated using Snell's law (the proportions of reflected and transmitted light are given by the Fresnel equations). We then generate a new ray from the hit point along the newly calculated path. We follow this ray until we exit the medium, then trace a ray to the next hit location. This final hit point is then our pixel value (assuming no other reflections). We do, however, need to be careful: when shooting the rays, we need to be able to determine whether we have hit the outside or the inside of an object. For example, when shooting a ray through a sphere, we will hit the inside of the sphere again as we exit. To determine this I used the dot product to compare whether the ray's direction was in the same hemisphere as the hit normal; if so we hit the inside, if not we hit the outside.
To simulate different media, we need to change the index of refraction. This tends to be slightly hit and miss and so requires trial and error to get satisfactory results. However, there are conventional values: an index of 1.0 for air and 1.33 for water, for example.
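The direction calculation and the inside/outside test described above can be sketched as follows. This is a minimal Python sketch, not the actual miro code; the vector maths is written out with plain tuples for clarity.

```python
import math

def refract(direction, normal, n1, n2):
    """Compute the refracted ray direction via Snell's law.
    direction and normal are unit-length 3-tuples; n1 and n2 are the
    indices of refraction of the media being left and entered.
    Returns None on total internal reflection."""
    cos_i = -sum(d * n for d, n in zip(direction, normal))
    # If the ray direction is in the same hemisphere as the normal,
    # we hit the inside of the object: flip the normal and swap indices.
    if cos_i < 0.0:
        normal = tuple(-n for n in normal)
        n1, n2 = n2, n1
        cos_i = -cos_i
    eta = n1 / n2
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None  # total internal reflection: no transmitted ray
    return tuple(eta * d + (eta * cos_i - math.sqrt(k)) * n
                 for d, n in zip(direction, normal))
```

A ray entering water head-on passes straight through unchanged, while a ray inside glass at a shallow enough angle gives total internal reflection, which is why the function can return None.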
The results of my implementation of refraction can be seen below:
As can be seen from the screenshot above, there is an element of refraction in the sphere. This is rendered with an index of 1.33. The sphere gives a nice refraction and we can see the green walls slightly warped through it. However, there are a number of artifacts in the image. There is a problem with the shadow interfering with the refraction effect. Also, the specular highlight seems to have been refracted as well, which is obviously incorrect. This could be due to the specular reflection being calculated and then added into the refraction effect. This is a start, but needs some fine tuning to provide a satisfactory effect.
I furthered my implementation of refraction by applying Beer's Law. The previous implementation did not take into account the distance a ray travels through the refracting object, so we get an incorrect image. We need to use that distance to determine the correct shading, and we must take into account the colour of the original object. This is achieved by implementing Beer's Law, which can be expressed in the following formula:
light_out = light_in * e^(-(ε * c * d))
For our needs in ray tracing, we can simplify this equation, as we only want the light to fall off inside a material that is not 100% translucent: the absorptivity ε and concentration c can be folded into a single density constant for the material, giving:
light_out = light_in * e^(-(d * C))
Where d is the path length and C is the density of the medium. Beer's Law should allow us to generate a more realistic refraction effect.
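The simplified form of Beer's Law translates directly into code. A minimal Python sketch, standing in for the ray tracer's shading step (the function name is my own):

```python
import math

def beer_attenuate(light_in, distance, density):
    """Attenuate light that has travelled `distance` through a medium
    of the given density, per the simplified Beer's Law:
    light_out = light_in * e^(-(d * C))."""
    return light_in * math.exp(-distance * density)
```

A ray that exits the object immediately (distance 0) is unattenuated, and longer paths through the medium darken the result exponentially, which is exactly the depth-dependent look the plain refraction was missing. For coloured glass this would be applied per colour channel with a different density for each.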
Soft shadows are a technique that comes as a result of implementing area lights. Originally we had only been using point lights, which distribute light equally in all directions. This makes lighting the scene easy, as we need only sample the single point of the light and determine whether or not it is obscured and thus whether we are in shadow.
To implement soft shadows we need to sample the area light. When we find a ray collision with an object, we construct a set of shadow rays to be shot towards the light, each with a slightly varying direction vector. This is done by sectioning the light into a grid and selecting the centre of each cell as the target for a ray. We then fire all these rays and work out the contribution from the light to this point. This gives us the effect of soft shadows: part of the light might be obscured but we still get some contribution from the light source, and hence we are not in full shadow.
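The grid sampling can be sketched as follows. This is a Python sketch under my own naming; `occluded` is a hypothetical stand-in for the renderer's shadow-ray test.

```python
def light_sample_points(corner, edge_u, edge_v, n):
    """Generate the centres of an n-by-n grid of cells on a rectangular
    area light defined by a corner point and two edge vectors (3-tuples)."""
    points = []
    for i in range(n):
        for j in range(n):
            # Cell centre in the light's parameter space.
            u, v = (i + 0.5) / n, (j + 0.5) / n
            points.append(tuple(c + u * eu + v * ev
                                for c, eu, ev in zip(corner, edge_u, edge_v)))
    return points

def shadow_fraction(hit_point, points, occluded):
    """Fraction of the area light visible from hit_point, where
    occluded(a, b) is the renderer's shadow-ray test (hypothetical here)."""
    visible = sum(0 if occluded(hit_point, p) else 1 for p in points)
    return visible / len(points)
```

The returned fraction scales the light's contribution at the hit point: 1.0 means fully lit, 0.0 fully shadowed, and anything in between is penumbra.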
This was my first attempt at rendering the soft shadow. Notice the large amount of shadow acne present. Whilst it looks bad, I must be on the right track: notice the definite shadow created on the plane.
I went back and double checked all my calculations. The failed render was down to two failings. Firstly, I needed to add an epsilon value to protect against inaccuracies in the calculations, which guards against shadow acne. Secondly, and the major cause, was my tracing of the ray: it was incorrectly determining the closest hit point, returning a shadow when it hit the light itself. This required only a minor modification.
With these adjustments made, I rendered my first successful soft shadow. This was rendered using 9 samples for the light. Notice the slight banding that occurs at the edge of the shadow. This could be improved by sampling the light more.
Cornell Scene rendered with 100W light source
Cornell Scene rendered with 1000W light source
Obviously there is a lot in this project that can be extended and improved. I will now discuss improvements to the current techniques I am using, and further techniques that I would have liked to include.
Soft Shadows - Poker Faces & Pimples
As seen in my implementation of soft shadows, there is a major issue of banding, where the shadow has bands of varying intensity and looks incorrect. This can be minimized by increasing the number of samples taken; however, this is very bad from a performance perspective. The banding is caused by the area light only being sampled at regular intervals: every shadow test samples the same set of points on the light, and thus bands of shadow occur.
One way to tackle this problem is to take random samples of the light. By dividing the area light into a grid, we can pick a random spot within each cell to get an averaged result. As the points are different each time we sample, we eradicate the banding.
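This jittered (stratified) sampling is only a small change to the grid sampler sketched for soft shadows: instead of the cell centre, each cell contributes a random point within its bounds. Again a Python sketch under my own naming:

```python
import random

def jittered_sample_points(corner, edge_u, edge_v, n, rng=random.random):
    """Pick one random point inside each cell of an n-by-n grid on a
    rectangular area light, so successive shadow tests sample different
    positions and banding is broken up."""
    points = []
    for i in range(n):
        for j in range(n):
            # Random offset within cell (i, j) rather than its centre.
            u = (i + rng()) / n
            v = (j + rng()) / n
            points.append(tuple(c + u * eu + v * ev
                                for c, eu, ev in zip(corner, edge_u, edge_v)))
    return points
```

Keeping one sample per cell (rather than n*n fully random points) preserves the even coverage of the grid, so the noise introduced stays low for a given sample count.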
Unfortunately, this is not a perfect solution: the random sampling introduces an element of noise into the image. So in effect we have replaced the banding with noise. Is this better or worse? I would say better. Noise is a much easier artifact for the eye to adjust to and forgive; humans tend to be naturally less sensitive to noise in an image. Again, we can reduce the level of noise by increasing the number of samples taken of the light source.
Supersampling (sometimes called oversampling) is a technique used to implement anti-aliasing, which removes jagged edges from an image to create smooth ones. These jagged edges are caused by aliasing: smooth curves and other lines become jagged because the resolution of the graphics device or file is not high enough to represent them.
Supersampling is a simple technique: multiple rays are shot through each pixel, and the results are averaged to give a smoothed value. Obviously the quality of the anti-aliasing will depend upon the number of rays shot, and this carries a dramatic speed hit, as each extra ray per pixel is in effect like rendering the whole scene again. Hence, shooting say 16 rays per pixel incurs a 16-fold performance hit.
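A minimal sketch of uniform supersampling in Python, where `trace` is a hypothetical stand-in for the renderer's per-ray trace call and the returned value represents a single colour channel:

```python
def render_pixel(trace, x, y, n):
    """Average n*n rays per pixel: shoot rays through an n-by-n grid of
    sub-pixel positions and average the returned values. trace(px, py)
    stands in for the renderer's ray-trace call."""
    total = 0.0
    for i in range(n):
        for j in range(n):
            # Offset each ray to the centre of its sub-pixel cell.
            px = x + (i + 0.5) / n
            py = y + (j + 0.5) / n
            total += trace(px, py)
    return total / (n * n)
```

With n = 4 this is the 16-rays-per-pixel case mentioned above: 16 calls to `trace` per pixel instead of one.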
This technique can be advanced further to attempt to get the best of both worlds, performance and quality. This is done by only supersampling at the edges of primitives, as these are the areas where major aliasing occurs. This can be done by storing a record of what each ray has hit and only shooting the extra rays when we detect the edge of a primitive. This should lead to a decent looking image without a significant performance hit.
Photon Mapping is an advanced ray tracing technique in which we aim to simulate the photons emitted from a light source and how they interact with the scene, in order to capture advanced lighting effects such as caustics.
In order to implement photon mapping, it is necessary to preprocess the scene and create a photon map (or two: typically a global map plus a separate caustics map). This map stores where photons have landed in the scene. It is then used along with standard rendering techniques to provide a global illumination model, as the photon map captures advanced effects such as indirect illumination and caustics.
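At a very high level the two passes can be sketched as below. This is a deliberately simplified Python sketch: `intersect` is a hypothetical stand-in for the renderer's scene intersection, the map is a flat list, and the estimate uses a simple gather disc. A real implementation would store photons in a kd-tree and also track incoming directions and surface properties.

```python
import math
import random

def build_photon_map(light_pos, light_power, n_photons, intersect):
    """First pass: shoot photons from the light in random directions and
    record where they land. intersect(origin, direction) stands in for
    the renderer's scene intersection, returning a hit position or None.
    Each stored photon carries an equal share of the light's power."""
    photons = []
    share = light_power / n_photons
    for _ in range(n_photons):
        # Uniform random direction on the unit sphere.
        z = random.uniform(-1.0, 1.0)
        phi = random.uniform(0.0, 2.0 * math.pi)
        r = math.sqrt(1.0 - z * z)
        d = (r * math.cos(phi), r * math.sin(phi), z)
        hit = intersect(light_pos, d)
        if hit is not None:
            photons.append((hit, share))
    return photons

def radiance_estimate(photons, point, radius):
    """Second pass (at render time): estimate the light arriving near
    `point` by summing the power of photons within `radius` of it and
    dividing by the area of the gather disc."""
    r2 = radius * radius
    power = sum(p for pos, p in photons
                if sum((a - b) ** 2 for a, b in zip(pos, point)) <= r2)
    return power / (math.pi * r2)
```

The density of stored photons around a point is what encodes the lighting: caustics show up simply as regions where many refracted photons have been concentrated.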
I decided that I would give a review of what I have learnt from this course and my view on current ray tracing topics.
This course has taught me a number of things concerning software development. The cycle of development for this course has been very progressive, with each assignment building on top of the previous. However, I had a number of small bugs and poor implementations in the early stages of the course. When I tried to add more functionality to the ray tracer, these bugs reared their heads and I had to force new code and functionality onto a poor base. As is often the case, this problem only got worse, and by the time of the final project I spent most of my time fixing old code or forcing new code to work with it. On reflection, I believe I should have rewritten the ray tracer from the ground up, had time allowed.
Overall, I feel disappointed with my final project. I had great aspirations for my final image, but I was unable to even attempt to model this due to the problems with the ray tracer. On a more positive note, I feel that I will certainly revisit ray tracing. I found the course very interesting and would definitely like to explore the subject further in my own time. Hopefully, I will be able to render the image I had planned one day.