CSE 168 Final Project: The Lone Tree

Putt Sakdhnagool, Yi Chen

psakdhnagool at cs.ucsd.edu, yic016 at cs.ucsd.edu

Introduction

Our project was inspired by the hot spring picture in Figure 1. The beauty of light scattering through the fog and the reflections from the water caught our attention.


Figure 1. Hot spring image
(source: http://porbital.multiply.com/photos/album/98/When_smoke_meets_light#photo=8)

We decided to use photon mapping and participating media to render this scene. To add more challenge to the project, we also decided to render the scene without using any mesh files. Instead of describing the objects in the scene with 3D mesh files such as Wavefront OBJ, we use procedural modelling to generate most of the scene: a heightmap generates the terrain and an L-system generates the tree. One of our result images is shown in Figure 2. The details of each implementation are described below.


Figure 2. The lone tree

Photon Mapping and Participating Media

We use the code provided on the class website as the base code for photon mapping. We implemented both a global photon map and a caustic photon map for this project, and we show the results of our implementation using the Cornell box scene. Figure 3a shows the result of the first pass of photon mapping. This Cornell box scene uses a red diffuse material for the left wall and a cyan mirror material for the right wall. The white dots indicate global photons, the red dots represent shadow photons, and the green dots represent caustic photons.

Figure 3b shows the indirect illumination from the global photon map and Figure 3c shows the indirect illumination from the caustic map. Figure 3d shows the combination of both indirect illumination terms, and the final image is shown in Figure 3e.

Figure 3. Results from photon mapping: (a) photon map from the first pass, (b) indirect illumination from the global photon map, (c) indirect illumination from the caustic map, (d) combined indirect illumination, (e) final image
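
The indirect illumination in Figures 3b-3d comes from a density estimate over the stored photons. The following is a minimal sketch of such an estimate for a diffuse surface, assuming hypothetical Vector3, Photon, and PhotonMap types standing in for the class base code; it is not our exact code.

    #include <vector>

    // Hypothetical types standing in for the class base code.
    struct Vector3 { float x, y, z; };

    struct Photon {
        Vector3 position;   // where the photon was stored
        Vector3 power;      // flux carried by the photon
        Vector3 direction;  // incoming direction
    };

    struct PhotonMap {
        std::vector<Photon> photons;   // filled during the first pass

        // Brute-force gather of photons within 'radius' of 'p';
        // the real code would use a kd-tree instead.
        std::vector<Photon> gather(const Vector3& p, float radius) const {
            std::vector<Photon> result;
            float r2 = radius * radius;
            for (const Photon& ph : photons) {
                float dx = ph.position.x - p.x;
                float dy = ph.position.y - p.y;
                float dz = ph.position.z - p.z;
                if (dx * dx + dy * dy + dz * dz <= r2) result.push_back(ph);
            }
            return result;
        }
    };

    // Estimate the reflected radiance at a diffuse surface point: sum the flux
    // of the nearby photons, divide by the area of the gather disc, and
    // multiply by the Lambertian BRDF (diffuse color / pi).
    Vector3 estimateRadiance(const PhotonMap& map, const Vector3& p,
                             const Vector3& diffuseColor, float radius)
    {
        const float kPi = 3.14159265f;
        std::vector<Photon> nearby = map.gather(p, radius);
        Vector3 sum = {0.0f, 0.0f, 0.0f};
        for (const Photon& ph : nearby) {
            sum.x += ph.power.x;
            sum.y += ph.power.y;
            sum.z += ph.power.z;
        }
        float invArea = 1.0f / (kPi * radius * radius);
        return { diffuseColor.x / kPi * sum.x * invArea,
                 diffuseColor.y / kPi * sum.y * invArea,
                 diffuseColor.z / kPi * sum.z * invArea };
    }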

Participating Media

We also use photon mapping for the participating media. Like surface photon mapping, it is a two-pass procedure. In the first pass we send photons from the light source, and whenever a photon interacts with the medium (it is either scattered or absorbed), we record the interaction point in a volume photon map. In the second pass, when rendering the scene, we evaluate both the contribution along the ray direction and the in-scattered contribution from other directions. We use ray marching to evaluate the first term. In the following figures we use a uniform step for the ray marching, which is why some artifacts appear at points where the illumination changes quickly. For the in-scattered contribution, we gather the photons within some distance of each sample point and evaluate the irradiance there. Figure 4a shows the volume photon map.

Participating Media: Implementation

We first used a bounding geometry for the medium, which also tells us where the ray marching ends. However, this implementation causes many difficulties in complicated scenes (for example, when the bounding geometry intersects other geometry in the scene). Figure 4b shows participating media rendered with a bounding geometry, and Figure 4c shows a different media density. Because of these difficulties, we considered filling the whole scene with the medium instead (every object is inside the medium), but then it is hard to decide when the ray marching should stop. We therefore chose a scene with a bounding geometry to avoid this problem.
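
As a rough illustration of the second pass, the sketch below marches a ray with a uniform step between the entry and exit points given by the bounding geometry, gathering nearby volume photons at each step for the in-scattered term. It reuses the hypothetical Vector3/Photon/PhotonMap types from the photon mapping sketch above; the extinction coefficient and the normalization are simplifying assumptions, not our exact code.

    #include <cmath>
    #include <vector>

    // March from tMin to tMax along the ray (origin, dir) with a uniform step.
    // 'sigma_t' is the extinction coefficient of the (uniform) medium and
    // 'gatherRadius' is the radius used to gather volume photons.
    Vector3 marchRay(const PhotonMap& volumeMap, const Vector3& origin,
                     const Vector3& dir, float tMin, float tMax,
                     float step, float sigma_t, float gatherRadius)
    {
        const float kPi = 3.14159265f;
        float gatherVolume = (4.0f / 3.0f) * kPi
                           * gatherRadius * gatherRadius * gatherRadius;
        Vector3 result = {0.0f, 0.0f, 0.0f};
        float transmittance = 1.0f;  // attenuation between the sample point and the eye

        for (float t = tMin; t < tMax; t += step) {
            Vector3 p = { origin.x + t * dir.x,
                          origin.y + t * dir.y,
                          origin.z + t * dir.z };

            // In-scattering: flux of the volume photons near p, normalized by
            // the gather sphere volume (isotropic scattering assumed).
            std::vector<Photon> nearby = volumeMap.gather(p, gatherRadius);
            Vector3 inScatter = {0.0f, 0.0f, 0.0f};
            for (const Photon& ph : nearby) {
                inScatter.x += ph.power.x;
                inScatter.y += ph.power.y;
                inScatter.z += ph.power.z;
            }

            float scale = transmittance * step / gatherVolume;
            result.x += inScatter.x * scale;
            result.y += inScatter.y * scale;
            result.z += inScatter.z * scale;

            // Attenuate by the extinction of the medium over this step.
            transmittance *= std::exp(-sigma_t * step);
        }
        return result;
    }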

We control the medium through its density and scattering rate. Figure 4d shows the medium with a lower uniform density, and Figure 4e shows the density made proportional to height (the y coordinate in this case).

We also model the light beam to make the scene more realistic. The beam is modelled like a spot light: we only shoot photons within some cone angle. The previous figures use this beam model; Figure 4f shows the scene without it.
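
A minimal sketch of this beam restriction is given below: photon directions are drawn uniformly on the unit sphere and rejected unless they fall inside the cone around the beam direction. The Vector3 type is the one from the sketches above, uniformRandom is a hypothetical helper, and the real emission code may well sample the cone directly instead of using rejection.

    #include <cmath>
    #include <cstdlib>

    // Hypothetical helper returning a uniform random number in [0, 1].
    static float uniformRandom() { return float(std::rand()) / float(RAND_MAX); }

    // Sample a photon direction for the beam: draw uniform directions on the
    // unit sphere and keep only those within the cone of half-angle
    // acos(cosConeAngle) around 'beamDir' (assumed to be normalized).
    Vector3 sampleBeamDirection(const Vector3& beamDir, float cosConeAngle)
    {
        for (;;) {
            // Uniform direction on the unit sphere via rejection in the unit ball.
            float x, y, z, len2;
            do {
                x = 2.0f * uniformRandom() - 1.0f;
                y = 2.0f * uniformRandom() - 1.0f;
                z = 2.0f * uniformRandom() - 1.0f;
                len2 = x * x + y * y + z * z;
            } while (len2 > 1.0f || len2 < 1e-6f);
            float invLen = 1.0f / std::sqrt(len2);
            Vector3 d = { x * invLen, y * invLen, z * invLen };

            // Accept the direction only if it lies inside the beam cone.
            float cosTheta = d.x * beamDir.x + d.y * beamDir.y + d.z * beamDir.z;
            if (cosTheta >= cosConeAngle)
                return d;
        }
    }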

We also perturb the density with a noise function to make the medium look more like smoke. Figure 4g shows the medium with a density function that uses three levels of Perlin noise turbulence.
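
The sketch below illustrates one way to put these density controls together: a base density, a height-dependent factor (as in Figure 4e), and a three-level Perlin noise turbulence (as in Figure 4g). The perlinNoise function is a crude stand-in for the project's noise, and the exact scaling factors are assumptions for illustration.

    #include <cmath>

    // Crude stand-in for the project's Perlin noise; any smooth noise
    // returning values in roughly [-1, 1] would do for this sketch.
    static float perlinNoise(float x, float y, float z)
    {
        return std::sin(x * 1.7f + 0.3f) * std::cos(y * 2.3f + 1.1f)
             * std::sin(z * 1.3f + 2.7f);
    }

    // Three levels of turbulence: sum of |noise|, doubling the frequency and
    // halving the amplitude at each level.
    static float turbulence3(float x, float y, float z)
    {
        float sum = 0.0f, freq = 1.0f, amp = 1.0f;
        for (int i = 0; i < 3; ++i) {
            sum += amp * std::fabs(perlinNoise(x * freq, y * freq, z * freq));
            freq *= 2.0f;
            amp *= 0.5f;
        }
        return sum;
    }

    // Media density at a point: a base density scaled by a height-dependent
    // factor and perturbed by turbulence.
    float mediaDensity(float x, float y, float z,
                       float baseDensity, float maxHeight)
    {
        float heightFactor = y / maxHeight;  // density proportional to height (Figure 4e)
        return baseDensity * heightFactor * turbulence3(x, y, z);  // noise perturbation (Figure 4g)
    }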

Figure 4. Results from photon mapping with participating media: (a) volume photon map, (b) participating media with a bounding geometry, (c) a different media density, (d) lower uniform density, (e) density proportional to height, (f) without the spot-light beam, (g) density perturbed by Perlin noise turbulence

Generating Terrain using a Heightmap

In this project, we use a heightmap to define the terrain of the scene.

Heightmap

A heightmap is an image in which each pixel stores a height value of the terrain. In this project, we use the heightmap definition from the POV-Ray documentation [3]. The heightmap image defines the terrain over a unit square with a maximum height of 1 (Figure 5). Increasing the resolution of the heightmap increases the smoothness of the terrain, not its size.

For an image of size w x h, each pixel (u,v) represents the position (u/w, v/h) on the terrain, and the value c of pixel (u,v) is the height of the terrain at that position (Figure 5). So the pixel (u,v) with value c represents the point (u/w, c, v/h) of the terrain.


Figure 5. Heightmap definition (image from [3])

To generate a triangle mesh from the heightmap, we use four neighboring points to define a square, and each square is split into two triangles. For an image of size w x h, we have (w-1)x(h-1) squares, which are divided into 2x(w-1)x(h-1) triangles.
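
A minimal sketch of this construction is shown below, assuming the heightmap has already been read into a w x h array of values in [0, 1]; Vertex and Triangle are simple stand-ins for the renderer's mesh types.

    #include <vector>

    struct Vertex { float x, y, z; };
    struct Triangle { Vertex a, b, c; };

    // heights is a w x h grid of values in [0, 1], stored row by row.
    std::vector<Triangle> buildTerrain(const std::vector<float>& heights, int w, int h)
    {
        // Pixel (u, v) with value c maps to the point (u/w, c, v/h).
        auto vertex = [&](int u, int v) -> Vertex {
            float c = heights[v * w + u];
            return { float(u) / float(w), c, float(v) / float(h) };
        };

        std::vector<Triangle> tris;
        // (w-1) x (h-1) squares, each split into two triangles.
        for (int v = 0; v + 1 < h; ++v) {
            for (int u = 0; u + 1 < w; ++u) {
                Vertex p00 = vertex(u, v),     p10 = vertex(u + 1, v);
                Vertex p01 = vertex(u, v + 1), p11 = vertex(u + 1, v + 1);
                tris.push_back({ p00, p10, p11 });
                tris.push_back({ p00, p11, p01 });
            }
        }
        return tris;
    }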

Heightmap Example


Figure 6. Heightmap examples: Worley noise heightmap, F0 (left), F3*F1 (middle), and Perlin noise heightmap (right)

Tree Generation using L-System

We use a system called the "L-system" for tree generation.

The L-System

The L-system, or Lindenmayer system, is a string rewriting system that can be used to generate fractals with dimension between 1 and 2 [1]. The most powerful part of the L-system is its rewriting mechanism: complex strings can be generated from a simple starting string and a set of rewriting rules. An L-system consists of a set of variables, a set of constants, and rules for replacing the variables. For example, an L-system can be defined as follows:

Variable: A,B
Constant: +,-
Rules:
      A->A+B
      B->-A

The rule A->A+B means that the variable A is replaced by the string "A+B", and B->-A means that the variable B is replaced by the string "-A". Starting from the string "A", the result of each iteration is shown below.

0 : A
1 : A+B
2 : A+B+-A
3 : A+B+-A+-A+B
4 : A+B+-A+-A+B+-A+B+-A
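
A minimal sketch of this rewriting step is shown below: every variable that has a rule is replaced in parallel and the constants are copied unchanged. Running it with the rules above reproduces the strings listed; the program structure is ours and not part of any particular L-system library.

    #include <iostream>
    #include <map>
    #include <string>

    // Apply one rewriting step: replace each symbol that has a rule,
    // copy every other symbol unchanged.
    std::string rewrite(const std::string& s, const std::map<char, std::string>& rules)
    {
        std::string out;
        for (char c : s) {
            auto it = rules.find(c);
            out += (it != rules.end()) ? it->second : std::string(1, c);
        }
        return out;
    }

    int main()
    {
        std::map<char, std::string> rules = { {'A', "A+B"}, {'B', "-A"} };
        std::string s = "A";
        for (int i = 0; i <= 4; ++i) {
            std::cout << i << " : " << s << "\n";  // print iterations 0..4
            s = rewrite(s, rules);
        }
        return 0;
    }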

The L-System for tree generation

For generating the tree, we use this definition [2]:

Variable: A,B
Constant: f,l,[,],+,-,^,v,<,>
Constant definition:
      f : Create a branch
      l : Create a leaf
      [ and ] : Save and restore the current state (branching)
      + and - : Rotate about the x-axis
      ^ and v : Rotate about the y-axis
      < and > : Rotate about the z-axis

In addition, we need several other parameters for generating the tree. These parameters are listed below, followed by a sketch of how the expanded string is interpreted.

angle : the rotation angle of the branches for the constants +,-,^,v,<,>
iteration : the number of iterations for rewriting the string
radius : the radius of the branches
radius reduction : the rate at which the branch radius decreases
height : the length of each branch
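
Below is a rough sketch of how such a string can be interpreted with a turtle-style state machine. The state stack, the choice of rotation axes, and the point where the radius reduction is applied are our assumptions for illustration; createBranch and createLeaf stand in for the renderer's geometry creation.

    #include <cmath>
    #include <stack>
    #include <string>

    struct Vector3 { float x, y, z; };

    struct TurtleState {
        Vector3 position;
        Vector3 heading;   // current growth direction
        float radius;      // current branch radius
    };

    // Rotate v about one coordinate axis (0 = x, 1 = y, 2 = z) by 'deg' degrees.
    static Vector3 rotateAxis(const Vector3& v, int axis, float deg)
    {
        float r = deg * 3.14159265f / 180.0f, c = std::cos(r), s = std::sin(r);
        switch (axis) {
            case 0:  return { v.x, c * v.y - s * v.z, s * v.y + c * v.z };
            case 1:  return { c * v.x + s * v.z, v.y, -s * v.x + c * v.z };
            default: return { c * v.x - s * v.y, s * v.x + c * v.y, v.z };
        }
    }

    // Placeholders standing in for the renderer's geometry creation.
    static void createBranch(const Vector3&, const Vector3&, float) {}
    static void createLeaf(const Vector3&) {}

    void buildTree(const std::string& s, float angle, float radius,
                   float radiusReduction, float height)
    {
        TurtleState cur = { {0, 0, 0}, {0, 1, 0}, radius };
        std::stack<TurtleState> saved;
        for (char c : s) {
            switch (c) {
                case 'f': {  // grow a branch segment of length 'height'
                    Vector3 end = { cur.position.x + cur.heading.x * height,
                                    cur.position.y + cur.heading.y * height,
                                    cur.position.z + cur.heading.z * height };
                    createBranch(cur.position, end, cur.radius);
                    cur.position = end;
                    break;
                }
                case 'l': createLeaf(cur.position); break;
                case '[': saved.push(cur); cur.radius *= radiusReduction; break;
                case ']': cur = saved.top(); saved.pop(); break;
                case '+': cur.heading = rotateAxis(cur.heading, 0,  angle); break;
                case '-': cur.heading = rotateAxis(cur.heading, 0, -angle); break;
                case '^': cur.heading = rotateAxis(cur.heading, 1,  angle); break;
                case 'v': cur.heading = rotateAxis(cur.heading, 1, -angle); break;
                case '<': cur.heading = rotateAxis(cur.heading, 2,  angle); break;
                case '>': cur.heading = rotateAxis(cur.heading, 2, -angle); break;
                default: break;  // variables A, B carry no geometry
            }
        }
    }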

Tree Example

Example 1:
Rules:
      A->^fB>>>B>>>>>B
      B->[^^f>>>>>>A]
initial string : fA
angle : 25
iteration : 5
radius : 0.2
radius reduction : 0.7
height : 0.05

Example 2:
Rules:
      A->fB-[[A]+A]+fB[+fBA]-A
      B->-fB
initial string : fA
angle : 25
iteration : 5
radius : 0.2
radius reduction : 0.7
height : 0.05
Figure 7. Sample trees generated by the L-system with the parameters above

Bump Mapping

Bump mapping is a technique for manipulating the surface lighting using a texture: the texture is used to perturb the surface normal.

Bump Mapping Implementation

For this project, we extended the cellular texture from the previous assignment to compute the bump mapping effect. Instead of using a 2D texture and UV coordinates, we use the gradient of the procedural texture to perturb the normal [4]. If the surface at position (x,y,z) has normal N = (nx,ny,nz) and the gradient of the texture is (∇x,∇y,∇z), we compute the new normal as

Nnew = (nx*∇x,ny*∇y,nz*∇z)

The gradients ∇x, ∇y, and ∇z are calculated by

∇x = texture(x-δ,y,z) - texture(x+δ,y,z)
∇y = texture(x,y-δ,z) - texture(x,y+δ,z)
∇z = texture(x,y,z-δ) - texture(x,y,z+δ)

where δ is a small offset and texture(x,y,z) is the value of the texture at position (x,y,z).
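
As a rough sketch of these formulas, the function below estimates the gradient with central differences of a procedural 3D texture and combines it with the normal component-wise as above. The texture3D function is a crude stand-in for our Worley/Perlin textures, and the final renormalization is an added assumption rather than part of the formula.

    #include <cmath>

    struct Vector3 { float x, y, z; };

    // Crude stand-in for the procedural texture value at (x, y, z); the
    // project uses its Worley/Perlin noise textures here.
    static float texture3D(float x, float y, float z)
    {
        return 0.5f + 0.5f * std::sin(x * 12.9f) * std::cos(y * 7.3f) * std::sin(z * 5.1f);
    }

    // Perturb the surface normal n at point p using the texture gradient,
    // estimated with central differences of step 'delta'.
    Vector3 bumpNormal(const Vector3& p, const Vector3& n, float delta)
    {
        float gx = texture3D(p.x - delta, p.y, p.z) - texture3D(p.x + delta, p.y, p.z);
        float gy = texture3D(p.x, p.y - delta, p.z) - texture3D(p.x, p.y + delta, p.z);
        float gz = texture3D(p.x, p.y, p.z - delta) - texture3D(p.x, p.y, p.z + delta);

        // Component-wise combination, as in the formula above.
        Vector3 nn = { n.x * gx, n.y * gy, n.z * gz };

        // Renormalize before shading (an assumption; not stated in the formula).
        float len = std::sqrt(nn.x * nn.x + nn.y * nn.y + nn.z * nn.z);
        if (len > 0.0f) { nn.x /= len; nn.y /= len; nn.z /= len; }
        return nn;
    }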

Bump Mapping Example


Figure 8. Bump mapping examples: Perlin noise bump mapping (left), Worley noise bump mapping (middle), and a teapot with a Perlin noise bump map (right)

Problems

We ran into many problems during development. The hardest one was merging the code from two people, which took a lot of time. Many parts of our implementation share the same code; for example, both bump mapping and participating media require modifications to the same material code, and changes to the photon mapping code affect the participating media implementation. This caused us to miss some code updates and left our code inconsistent.

Conclusion

We produced our final images, shown in Figure 9, and implemented all the important techniques we proposed in the proposal within a one-week period.

Figure 9. Result images (a)-(d)

Division of Labour

Putt was responsible for the photon mapping code, the procedural modelling (heightmap, tree generation, and bump mapping), and writing the report. Yi was responsible for the participating media code and the report. The final scene is the work of both team members.

References