Finally, A Ray Tracer!

After working 150-something hours in the past two weeks, I've finally finished my CS488 ray tracer project. If I include the 60 hours I spent on the simple ray tracer from Assignment 4, my total comes to about 210 hours over 3 weeks! Man, I really hope this is the last time I ever have to work this hard (70 hours/week). But now that I'm finished, I can finally get some sleep and catch up on my other courses. Without any further ramblings from my sleep-deprived mind, here's a gallery of my rendered images!

1. Refractions

You'd think that copying Snell's law and the Fresnel equations from the textbook would be simple and straightforward, but this objective actually took me the longest to complete. I can't believe it took me 40 hours of debugging to find an incorrect minus sign. And when I say debugging, I mean reading through a 100 MB log file that contains every matrix and vector used to render each pixel!
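For reference, the refraction direction from Snell's law can be sketched like this (a minimal Python sketch, not my actual C++ renderer code; the sign conventions are exactly the kind of place where a minus sign can hide):

```python
import math

def refract(d, n, eta_i, eta_t):
    """Refract unit direction d through a surface with unit normal n.
    Returns the refracted direction, or None on total internal reflection."""
    cos_i = -sum(di * ni for di, ni in zip(d, n))  # d points into the surface
    eta = eta_i / eta_t
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)    # negative => no transmitted ray
    if k < 0.0:
        return None  # total internal reflection
    return [eta * di + (eta * cos_i - math.sqrt(k)) * ni for di, ni in zip(d, n)]

# A ray hitting a glass surface (index 1.0 -> 1.5) head-on passes straight through:
print(refract((0.0, 0.0, -1.0), (0.0, 0.0, 1.0), 1.0, 1.5))  # -> [0.0, 0.0, -1.0]
```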

For this objective I placed a glass sphere with varying refraction indices in front of two boxes and put everything inside a cube room. I generated these initial images without applying Beer's Law.

Refraction Index 1.0
Refraction Index 1.5

For these higher refraction indices, I got a weird black circle inside my sphere. My guess is that anything outside the dark circle (i.e. the light outer ring) is experiencing total internal reflection. As a result, those light rings only see reflections instead of both reflections and refractions. It's odd that the Fresnel equations don't provide a smooth interpolation between reflections and refractions.
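For context, the reflection/refraction split at each hit comes from the Fresnel reflectance; Schlick's approximation is a common stand-in for the full equations and sketches the idea (my renderer used the textbook equations, so this is just illustrative):

```python
def schlick_reflectance(cos_i, eta_i, eta_t):
    """Schlick's approximation to the Fresnel reflectance for unpolarized light."""
    r0 = ((eta_i - eta_t) / (eta_i + eta_t)) ** 2  # reflectance at normal incidence
    return r0 + (1.0 - r0) * (1.0 - cos_i) ** 5

# At normal incidence, air-to-glass reflects about 4% of the light:
print(round(schlick_reflectance(1.0, 1.0, 1.5), 3))  # -> 0.04
```

The rest of the light's energy (about 96% here) goes into the refracted ray, so the two contributions always sum to one.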

Also note that the horizontal line in these spheres is simply a reflection of my sunset background.

Refraction Index 2.5
Refraction Index 4.5

Beer's Law

For some reason, I decided to mention Beer's Law when writing my objectives list, even though refraction alone is enough for a valid objective. Ah, me and my big mouth...

I re-rendered the above images after implementing Beer's Law using a very crude approximation. Instead of taking into account the material's internal attenuation factors, I simply used the inverse of the bounding volume's size. While not realistic, it does improve my non-attenuated images (somewhat... maybe?).
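The crude approximation described above can be sketched like this (the function names are mine; in a proper implementation the attenuation coefficient would be a material property, not a bounding-volume hack):

```python
import math

def beer_attenuation(distance_inside, bound_size):
    """Attenuate light traveling `distance_inside` through a medium (Beer's Law).
    Crude version: the absorption coefficient is approximated as
    1 / bounding-volume size instead of a real material constant."""
    sigma = 1.0 / bound_size
    return math.exp(-sigma * distance_inside)

# A ray crossing the full extent of the bounding volume keeps e^-1 (about 37%) of its light:
print(round(beer_attenuation(100.0, 100.0), 3))  # -> 0.368
```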

Refraction Index 1.0
Refraction Index 1.5
Refraction Index 2.5
Refraction Index 4.5


Translucency

It turns out translucency is implemented the same way as glossy reflections, so this extra feature was trivial to add.

Translucency in a Cornell Box

2. Glossy Reflections

This objective was actually the simplest to implement. All I had to do was perform Monte Carlo sampling on a Phong-distributed hemisphere. Once the exponent value is over 10000, the glossiness is no longer noticeable and the material can be approximated as a true mirror.
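The sampling step can be sketched as follows: draw a direction with probability density proportional to cos^n(theta) using the standard inverse-CDF formulas (a minimal sketch; the renderer still has to rotate this z-up sample onto the actual mirror-reflection direction):

```python
import math, random

def sample_phong_lobe(exponent, rng=random):
    """Sample a direction around the +z axis with pdf proportional to cos(theta)^exponent."""
    u1, u2 = rng.random(), rng.random()
    cos_theta = u1 ** (1.0 / (exponent + 1.0))   # inverse-CDF of the Phong lobe
    sin_theta = math.sqrt(max(0.0, 1.0 - cos_theta * cos_theta))
    phi = 2.0 * math.pi * u2
    return (sin_theta * math.cos(phi), sin_theta * math.sin(phi), cos_theta)

# A huge exponent concentrates samples at the lobe's center (a near-perfect mirror):
rng = random.Random(1)
x, y, z = sample_phong_lobe(100000, rng)
print(z > 0.999)  # -> True
```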

Glossy Reflections with Varying Distribution Exponents
Sphere           Exponent Value
Bottom Left      1
Bottom Center    10
Bottom Right     100
Top Left         1000
Top Center       10000
Top Right        100000

3. Perlin Noise

Not much to say here... I ported Ken Perlin's Java implementation and then experimented with different scales and octaves until I found a texture that seemed realistic. The columns from left to right vary the octaves from 1 to 4. The rows from bottom to top vary the scaling from 1 to 4.
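The octave loop itself is short. Here's a sketch of the fractal summation, with a trivial sine-based stand-in for Perlin's noise(x, y, z) so the snippet runs on its own (the real port uses Perlin's gradient noise, of course):

```python
import math

def fake_noise3(x, y, z):
    """Stand-in for Perlin's noise(x, y, z); any smooth [-1, 1] function works here."""
    return math.sin(x * 12.9898 + y * 78.233 + z * 37.719)

def fbm(x, y, z, octaves, scale):
    """Sum `octaves` layers of noise, doubling frequency and halving amplitude each layer."""
    total, amplitude, frequency = 0.0, 1.0, scale
    for _ in range(octaves):
        total += amplitude * fake_noise3(x * frequency, y * frequency, z * frequency)
        amplitude *= 0.5
        frequency *= 2.0
    return total

# More octaves add finer detail; the amplitudes sum to < 2, so the result stays bounded:
print(abs(fbm(0.3, 0.7, 0.0, 4, 2.0)) < 2.0)  # -> True
```

Note the z-value of 0 in the demo call, matching how I feed uv-coordinates into the 3D noise function.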

Marble Texture
Wood Texture

One thing that stumped me while implementing this was that the noise generator is a 3D function. This means that to texture a surface, I had to use the uv-mapped 2D surface coordinates with a z-value of 0 instead of the actual 3D coordinates.

4. Texture Mapping

This was a pretty straightforward implementation from my course notes. However, it did require a significant amount of refactoring. Before this objective, all of my visible classes such as Material and Background only stored the necessary colours. After refactoring, these classes store a Texture instead. Of course, to maintain backwards compatibility and keep my interfaces uniform, I implemented both a SolidColourTexture class and an ImageTexture class.
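The refactored interface looks roughly like this (a Python sketch of the design; the class names come from the project, but the nearest-pixel lookup in ImageTexture is my assumption):

```python
class Texture:
    """Anything that can produce a colour at uv-coordinates (u, v) in [0, 1]^2."""
    def colour_at(self, u, v):
        raise NotImplementedError

class SolidColourTexture(Texture):
    """Wraps a single colour so plain materials fit the same interface."""
    def __init__(self, colour):
        self.colour = colour
    def colour_at(self, u, v):
        return self.colour

class ImageTexture(Texture):
    """Looks up the nearest pixel in a row-major image (a list of rows of colours)."""
    def __init__(self, pixels):
        self.pixels = pixels
    def colour_at(self, u, v):
        rows, cols = len(self.pixels), len(self.pixels[0])
        col = min(int(u * cols), cols - 1)
        row = min(int(v * rows), rows - 1)
        return self.pixels[row][col]

red = SolidColourTexture((1.0, 0.0, 0.0))
checker = ImageTexture([[(0, 0, 0), (1, 1, 1)], [(1, 1, 1), (0, 0, 0)]])
print(red.colour_at(0.5, 0.5), checker.colour_at(0.9, 0.1))  # -> (1.0, 0.0, 0.0) (1, 1, 1)
```

Since Material and Background only ever call colour_at, neither cares whether the colour comes from a constant or an image.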

Sphere Mapping

Sphere Mapping
Earth Texture

Cube Mapping

The most annoying part of this objective is that I had to reboot into Windows to use Photoshop to create these cube textures. I can't believe there isn't a user-friendly alternative on Linux yet.

Cube Mapping with a Debug Texture
Debug Texture
Cube Mapping
Minecraft Cube Texture

Mesh Mapping

To uv-map a mesh, I cast a ray from the mesh's centroid through each surface point to an intermediate bounding container (either a sphere or a cube) and use the container's mapping for that point.
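As a sketch, with a bounding sphere as the intermediate container (the uv formulas are the standard spherical ones; the helper names are mine):

```python
import math

def sphere_uv(d):
    """Standard spherical uv for a unit direction d = (x, y, z)."""
    u = 0.5 + math.atan2(d[2], d[0]) / (2.0 * math.pi)
    v = 0.5 - math.asin(d[1]) / math.pi
    return (u, v)

def mesh_uv_via_sphere(point, centroid):
    """Project a mesh surface point away from the centroid onto a bounding sphere,
    then reuse the sphere's uv mapping for it."""
    dx, dy, dz = (point[i] - centroid[i] for i in range(3))
    length = math.sqrt(dx * dx + dy * dy + dz * dz)
    return sphere_uv((dx / length, dy / length, dz / length))

# A point straight "north" of the centroid lands at the top edge of the texture:
print(mesh_uv_via_sphere((0.0, 5.0, 0.0), (0.0, 0.0, 0.0)))  # -> (0.5, 0.0)
```

Swapping sphere_uv for a cube-face mapping gives the cube-container variant shown below.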

Mesh Mapping

With the magic of polymorphism, I'm also able to choose the type of intermediate container to perform the mapping.

Sphere and Cube Mapping For Mesh

5. Bump Mapping

With the uv-mapping from the previous objective, it was easy to implement bump mapping.
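A minimal sketch of the idea: finite-difference the heightmap at the uv-coordinates and tilt the shading normal along the surface tangents (the tangent-frame inputs and the strength parameter are my assumptions, not necessarily how my renderer is structured):

```python
import math

def bumped_normal(normal, tangent, bitangent, height, u, v, eps=1e-3, strength=1.0):
    """Tilt a surface normal by the finite-difference gradient of a heightmap h(u, v)."""
    dh_du = (height(u + eps, v) - height(u - eps, v)) / (2.0 * eps)
    dh_dv = (height(u, v + eps) - height(u, v - eps)) / (2.0 * eps)
    n = [normal[i] - strength * (dh_du * tangent[i] + dh_dv * bitangent[i])
         for i in range(3)]
    length = math.sqrt(sum(c * c for c in n))
    return [c / length for c in n]

# A flat heightmap leaves the normal untouched:
flat = lambda u, v: 0.0
print(bumped_normal((0.0, 0.0, 1.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), flat, 0.5, 0.5))
# -> [0.0, 0.0, 1.0]
```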

Light at x=-200
Light at x=0
Light at x=200
Bump Map
The Earth can't be complete without the moon!

6. Bounding Volume Hierarchy

When there are only a few objects, using a tree hierarchy to organize the scene is more expensive than a simple array. For more complex scenes (most use cases), the tree hierarchy significantly reduces rendering time.
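The payoff comes from the cheap bounding-box test at each tree node: if a ray misses a node's box, the entire subtree is skipped. The slab test at the heart of it can be sketched as:

```python
def hit_aabb(ray_o, ray_d, box_min, box_max):
    """Slab test: does the ray (origin, direction) hit the axis-aligned box?"""
    t_near, t_far = -float("inf"), float("inf")
    for axis in range(3):
        if ray_d[axis] == 0.0:  # ray parallel to these slabs
            if not (box_min[axis] <= ray_o[axis] <= box_max[axis]):
                return False
            continue
        t0 = (box_min[axis] - ray_o[axis]) / ray_d[axis]
        t1 = (box_max[axis] - ray_o[axis]) / ray_d[axis]
        t_near = max(t_near, min(t0, t1))
        t_far = min(t_far, max(t0, t1))
    return t_near <= t_far and t_far >= 0.0

# A ray down the z-axis hits the unit box but misses one shifted off to the side:
print(hit_aabb((0.5, 0.5, -5.0), (0.0, 0.0, 1.0), (0, 0, 0), (1, 1, 1)))   # -> True
print(hit_aabb((0.5, 0.5, -5.0), (0.0, 0.0, 1.0), (3, 3, 0), (4, 4, 1)))  # -> False
```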

Scene          Rendering Time without BVH (s)    Rendering Time with BVH (s)
Cornell Box    2.92                              3.3
Chess Board    163.22                            33.44
Cornell Box
Chess Board

7. Anti-Aliasing

No Anti-Aliasing
Super Sampling 4x
Adaptive Sampling Depth 2
Threshold 0.01
Where Additional Sampling was Performed

Although there are a couple of jagged edges left in the adaptively sampled image, it's still a big improvement over no anti-aliasing, and it performs much better than super sampling.
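The recursion can be sketched in one dimension: compare the samples at a cell's endpoints and subdivide only where they differ by more than the threshold, down to a maximum depth (a toy sketch; the real version compares the 2D corners of each pixel):

```python
def adaptive_sample(shade, x0, x1, threshold, depth):
    """Average a 1D 'pixel footprint' [x0, x1]: if the endpoint colours differ by
    more than `threshold`, split in half and recurse, up to `depth` subdivisions."""
    c0, c1 = shade(x0), shade(x1)
    if depth == 0 or abs(c0 - c1) <= threshold:
        return 0.5 * (c0 + c1)
    mid = 0.5 * (x0 + x1)
    return 0.5 * (adaptive_sample(shade, x0, mid, threshold, depth - 1)
                  + adaptive_sample(shade, mid, x1, threshold, depth - 1))

# A hard edge at x = 0.3 (true average 0.7) gets resolved more precisely as depth grows:
edge = lambda x: 1.0 if x > 0.3 else 0.0
print(adaptive_sample(edge, 0.0, 1.0, 0.01, 0))  # -> 0.5
print(adaptive_sample(edge, 0.0, 1.0, 0.01, 4))
```

Smooth regions terminate at depth 0, which is exactly where the time savings over super sampling come from.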

Anti-Aliasing        Render Time (s)
No Anti-Aliasing     2.8
Super Sampling       36.8
Adaptive Sampling    11.24

8. Soft Shadows

This was one of the first Monte Carlo techniques that I implemented: sampling random points on area lights. Beyond 49 samples (a 7x7 grid), there isn't much visible improvement.
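A sketch of the sampling: jitter one shadow ray into each cell of an n x n grid on the rectangular light and count the fraction that reach it unoccluded (the visibility callback here stands in for the real shadow-ray test):

```python
import random

def soft_shadow_fraction(visible, light_corner, edge_u, edge_v, grid, rng=random):
    """Estimate the unoccluded fraction of a rectangular area light by shooting one
    shadow ray to a jittered point in each cell of a grid x grid partition."""
    hits = 0
    for i in range(grid):
        for j in range(grid):
            su = (i + rng.random()) / grid   # jittered position within the cell
            sv = (j + rng.random()) / grid
            point = tuple(light_corner[k] + su * edge_u[k] + sv * edge_v[k]
                          for k in range(3))
            if visible(point):
                hits += 1
    return hits / (grid * grid)

# A hypothetical occluder blocking the light's left half gives roughly 50% shadow:
half_blocked = lambda p: p[0] > 0.5
frac = soft_shadow_fraction(half_blocked, (0.0, 10.0, 0.0),
                            (1.0, 0.0, 0.0), (0.0, 0.0, 1.0), 7)
print(abs(frac - 0.5) < 0.2)  # -> True
```

The jittered grid (stratified sampling) converges with less noise than purely uniform random points on the light.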

1 Sample
4 Samples
49 Samples
100 Samples

9. Depth of Field

Another Monte Carlo technique, this time sampling various points on the camera's aperture.
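A sketch of the lens sampling: jitter the ray origin on an aperture disk and re-aim it at the focal point, so only geometry at the focal distance stays sharp (the camera frame here is a simplified look-down-+z setup, not my renderer's actual camera model):

```python
import math, random

def lens_ray(pinhole_dir, focal_distance, aperture, rng=random):
    """Jitter the camera origin on a lens disk and re-aim at the focal point so
    that geometry at `focal_distance` stays sharp while everything else blurs."""
    # Rejection-sample a point on the aperture disk.
    while True:
        lx = (2.0 * rng.random() - 1.0) * aperture
        ly = (2.0 * rng.random() - 1.0) * aperture
        if lx * lx + ly * ly <= aperture * aperture:
            break
    focal_point = tuple(focal_distance * c for c in pinhole_dir)
    d = (focal_point[0] - lx, focal_point[1] - ly, focal_point[2])
    length = math.sqrt(sum(c * c for c in d))
    origin = (lx, ly, 0.0)
    direction = tuple(c / length for c in d)
    return origin, direction

# With a zero aperture, every ray degenerates to the original pinhole ray:
o, d = lens_ray((0.0, 0.0, 1.0), 250.0, 0.0)
print(o == (0.0, 0.0, 0.0), d == (0.0, 0.0, 1.0))  # -> True True
```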

Focal Distance of 250
Focal Distance of 500
Focal Distance of 750

10. Final Scene

For my final scene, I wanted to render a chess board onto a real photo. Ideally, it should blend seamlessly into the background image like CGI. Unfortunately, I made my final scene too complex, so it wasn't able to finish before the deadline (I killed the process after it made less than 10% progress in 9 hours with 40 threads).

Preview of Original Final Scene with Bounding Volumes instead of Mesh

To save time, I designed this alternative scene with only 3 chess pieces instead of all 16. Sadly, this also wasn't able to finish at full resolution before the deadline. Sigh, I really wish Waterloo had better undergrad servers for these heavy computations.

Preview of Final Scene 2


After giving up on the undergrad machines, I spent $20 on a c4.8xlarge instance on Amazon EC2 to render my image before the report's deadline. I can't believe 36 virtual threads on Intel Xeon processors beat the 56 threads on the AMD processors in our CS servers.

Final Scene 2