Finally, A Ray Tracer!
After working 150-something hours in the past two weeks, I've finally finished my CS488 ray tracer project. If I include the 60 hours I spent on my simple ray tracer from Assignment 4, my total comes to about 210 hours over 3 weeks! Man, I really hope this is the last time I ever have to work this hard (70 hours/week). But now that I'm finished, I can finally get some sleep and catch up on my other courses. Without any further ramblings from my sleep-deprived mind, here's an image gallery of my rendered images!
You'd think that copying Snell's law and the Fresnel equations from the textbook would be simple and straightforward, but this objective actually took me the longest to complete. I can't believe it took me 40 hours of debugging to find an incorrect minus sign. And when I say debugging, I mean reading through a 100 MB log file that contains every matrix and vector used to render each pixel!
For this objective I placed a glass sphere with varying refractive indices in front of two boxes and put everything inside a cube room. I generated these initial images without applying Beer's Law.
For the higher refractive indices, I got a weird black circle inside my sphere. My guess is that anything outside the dark circle (i.e. the light outer ring) is experiencing total internal reflection. As a result, those light rings only see reflections instead of both reflections and refractions. It's odd that the Fresnel equations don't provide a smooth interpolation between reflection and refraction.
Also note that the horizontal line in these spheres is simply a reflection of my sunset background.
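As a rough sketch of what this objective involves (not my actual code), here is how Snell's law and the unpolarized Fresnel equations for a dielectric might look. The Vec3 type is a stand-in, and refract() returns false on total internal reflection, which is exactly the case that produces the bright outer ring on the high-index spheres.

```cpp
// A minimal sketch, not the project's actual code. refract() returns false on
// total internal reflection; fresnel() averages the s- and p-polarized terms.
#include <cmath>

struct Vec3 {
    double x, y, z;
    Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
};
double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Snell's law: n1 * sin(theta_i) = n2 * sin(theta_t).
// 'incident' and 'normal' are unit vectors, with the normal facing the incident ray.
bool refract(const Vec3& incident, const Vec3& normal, double n1, double n2, Vec3& out) {
    double eta   = n1 / n2;
    double cos_i = -dot(incident, normal);            // the kind of minus sign that costs 40 hours
    double sin2t = eta * eta * (1.0 - cos_i * cos_i);
    if (sin2t > 1.0) return false;                    // total internal reflection
    double cos_t = std::sqrt(1.0 - sin2t);
    out = incident * eta + normal * (eta * cos_i - cos_t);
    return true;
}

// Unpolarized Fresnel reflectance for a dielectric; the refracted weight is 1 - R.
double fresnel(double cos_i, double cos_t, double n1, double n2) {
    double rs = (n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t);
    double rp = (n1 * cos_t - n2 * cos_i) / (n1 * cos_t + n2 * cos_i);
    return 0.5 * (rs * rs + rp * rp);
}
```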
For some reason, I decided to mention Beer's Law when writing my objectives list, even though refraction alone is enough for a valid objective. Ah, me and my big mouth...
I re-rendered the above images after implementing Beer's Law using a very crude approximation. Instead of taking the material's internal attenuation factors into account, I simply used the inverse of the bounding volume's size. While not realistic, it does improve my non-attenuated images (somewhat... maybe?).
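For illustration, here's roughly what that crude approximation could look like; the attenuate() helper and its parameters are stand-ins, not the project's real interface.

```cpp
// A sketch of the crude approximation described above; attenuate() and its
// parameters are stand-ins, not the project's real interface.
#include <cmath>

struct Colour { double r, g, b; };

// 'distance' is how far the refracted ray travelled inside the object.
// 'boundingVolume' is the size of the object's bounding volume, used here as a
// stand-in for a proper per-material attenuation factor.
Colour attenuate(const Colour& c, double distance, double boundingVolume) {
    double k = 1.0 / boundingVolume;        // crude "density"
    double t = std::exp(-k * distance);     // Beer's Law: I = I0 * exp(-k * d)
    return { c.r * t, c.g * t, c.b * t };
}
```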
It turns out translucency is implemented the same way as glossy reflections, so this extra feature was trivial to implement.
This objective was actually the simplest to implement. All I had to do was perform Monte-Carlo sampling on a Phong-distributed hemisphere. Once the exponent value exceeds 10000, the glossiness is no longer noticeable and the material can be approximated as a true mirror.
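Here's a small sketch of what Phong-lobe sampling can look like; the Vec3 helpers and samplePhongLobe() are assumptions, not the project's actual code. With a huge exponent, cos(alpha) is pushed towards 1 and the sampled directions collapse onto the mirror ray, which is why the material degenerates to a true mirror.

```cpp
// A sketch of Phong-lobe sampling, not the project's actual code. The direction
// is drawn around the mirror ray 'r' with a pdf proportional to cos^n(alpha).
#include <cmath>
#include <random>

struct Vec3 { double x, y, z; };
Vec3 operator*(const Vec3& v, double s) { return {v.x*s, v.y*s, v.z*s}; }
Vec3 operator+(const Vec3& a, const Vec3& b) { return {a.x+b.x, a.y+b.y, a.z+b.z}; }
Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
Vec3 normalize(const Vec3& v) {
    double len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return {v.x/len, v.y/len, v.z/len};
}
const double PI = 3.14159265358979323846;

Vec3 samplePhongLobe(const Vec3& r, double exponent, std::mt19937& rng) {
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    double u1 = uni(rng), u2 = uni(rng);
    double cosAlpha = std::pow(u1, 1.0 / (exponent + 1.0));   // huge exponent => cosAlpha ~ 1
    double sinAlpha = std::sqrt(1.0 - cosAlpha * cosAlpha);
    double phi = 2.0 * PI * u2;

    // Orthonormal basis (u, v, r) around the mirror direction.
    Vec3 helper = std::fabs(r.x) > 0.9 ? Vec3{0, 1, 0} : Vec3{1, 0, 0};
    Vec3 u = normalize(cross(helper, r));
    Vec3 v = cross(r, u);

    return normalize(u * (std::cos(phi) * sinAlpha) +
                     v * (std::sin(phi) * sinAlpha) +
                     r * cosAlpha);
}
```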
Not much to say here... I ported Ken Perlin's Java implementation and then experimented with different scales and octaves until I found a texture that seemed realistic. The columns, from left to right, vary the octaves from 1 to 4. The rows, from bottom to top, vary the scaling from 1 to 4.
One thing that stumped me while implementing this was that the noise generator function is a 3D function. This means that for texturing a surface, I had to use the uv-mapped 2D surface coordinate with a z-value of 0 instead of the actual 3D coordinate.
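A sketch of the octave/scale experiment, assuming a noise(x, y, z) function ported from Perlin's reference implementation (declaration only, not shown) and a made-up octaveNoise() helper:

```cpp
// A sketch of the octave summation, assuming a noise(x, y, z) function ported
// from Perlin's reference implementation (declaration only, body not shown).
double noise(double x, double y, double z);   // ported Perlin noise (assumed)

// Sum 'octaves' layers of noise, doubling the frequency and halving the
// amplitude each octave. A uv-mapped surface point goes in as (u, v, 0)
// because the generator itself is 3D.
double octaveNoise(double u, double v, int octaves, double scale) {
    double total = 0.0, amplitude = 1.0, frequency = scale, maxValue = 0.0;
    for (int i = 0; i < octaves; ++i) {
        total    += amplitude * noise(u * frequency, v * frequency, 0.0);
        maxValue += amplitude;
        amplitude *= 0.5;
        frequency *= 2.0;
    }
    return total / maxValue;   // normalize back to roughly [-1, 1]
}
```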
This was a pretty straightforward implementation from my course notes. However, it did require a significant amount of refactoring. Before this objective, all of my visible classes such as Background only stored the necessary colours. After refactoring, these classes store a Texture instead. Of course, to maintain backwards compatibility and keep my interfaces uniform, I implemented both a SolidColourTexture class and an
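My guess at the shape of that refactored interface looks something like the following; beyond SolidColourTexture, the names are assumptions.

```cpp
// A guess at the refactored interface, not the project's actual class names
// beyond SolidColourTexture: a visible object holds a Texture and asks it for
// a colour at a uv coordinate, so a solid colour is just a texture that
// ignores uv.
struct Colour { double r, g, b; };

class Texture {
public:
    virtual ~Texture() = default;
    virtual Colour colourAt(double u, double v) const = 0;
};

// Backwards-compatible wrapper: old scenes that specify a plain colour get a
// texture that returns it everywhere.
class SolidColourTexture : public Texture {
public:
    explicit SolidColourTexture(const Colour& c) : m_colour(c) {}
    Colour colourAt(double, double) const override { return m_colour; }
private:
    Colour m_colour;
};
```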
The most annoying part of this objective was that I had to reboot into Windows to use Photoshop to create these cube textures. I can't believe there isn't a user-friendly alternative on Linux yet.
To map a mesh, I first send out a ray from its centroid to an intermediate bounding container (either a sphere or a cube) and use the bounding container to map the points.
With the magic of polymorphism, I'm also able to choose the type of intermediate container to perform the mapping.
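For the bounding-sphere case, a sketch of the centroid-projection idea might look like this; sphereMapUV() is my own naming, and the cube case would do the analogous projection onto the dominant face.

```cpp
// A sketch of the centroid projection for the bounding-sphere case; sphereMapUV()
// is my own naming. The cube case would project onto the dominant face instead.
#include <cmath>

struct Vec3 { double x, y, z; };
const double PI = 3.14159265358979323846;

// Map a mesh point to (u, v) by pushing it out from the mesh centroid onto an
// intermediate bounding sphere and reusing the sphere's uv parameterization.
void sphereMapUV(const Vec3& point, const Vec3& centroid, double& u, double& v) {
    Vec3 d = { point.x - centroid.x, point.y - centroid.y, point.z - centroid.z };
    double len = std::sqrt(d.x*d.x + d.y*d.y + d.z*d.z);
    d = { d.x/len, d.y/len, d.z/len };              // direction from centroid to point

    u = 0.5 + std::atan2(d.z, d.x) / (2.0 * PI);    // longitude
    v = 0.5 - std::asin(d.y) / PI;                  // latitude
}
```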
With the uv-mapping from the previous objective, it was easy to implement bump mapping.
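As a hedged illustration of the general idea (not my exact code), a bump-mapped normal can be computed by finite-differencing a height map at the uv coordinate and nudging the geometric normal along the tangent directions; heightAt() and the tangent basis below are assumptions.

```cpp
// A sketch of finite-difference bump mapping; heightAt() and the tangent basis
// are assumptions about the surrounding code, not the project's interface.
#include <cmath>

struct Vec3 { double x, y, z; };
Vec3 operator*(const Vec3& v, double s) { return {v.x*s, v.y*s, v.z*s}; }
Vec3 operator+(const Vec3& a, const Vec3& b) { return {a.x+b.x, a.y+b.y, a.z+b.z}; }
Vec3 normalize(const Vec3& v) {
    double len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return {v.x/len, v.y/len, v.z/len};
}

double heightAt(double u, double v);   // greyscale bump-map lookup (assumed)

// Perturb the shading normal by the gradient of the height map at (u, v).
// 'tangent' and 'bitangent' span the surface at the shading point.
Vec3 bumpNormal(const Vec3& normal, const Vec3& tangent, const Vec3& bitangent,
                double u, double v, double strength) {
    const double eps = 1e-3;
    double du = (heightAt(u + eps, v) - heightAt(u - eps, v)) / (2.0 * eps);
    double dv = (heightAt(u, v + eps) - heightAt(u, v - eps)) / (2.0 * eps);
    return normalize(normal + tangent * (-du * strength) + bitangent * (-dv * strength));
}
```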
When there are only a few objects, using a tree hierarchy to store the scene was more expensive than a simple array. For more complex scenes (most use cases), the tree hierarchy significantly improved the rendering time.
| Scene | Rendering Time without BVH (s) | Rendering Time with BVH (s) |
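To illustrate why the hierarchy pays off, here's a minimal traversal sketch: a ray only descends into a subtree whose bounding box it actually hits, so most of the scene is pruned per ray. The Box, Ray, and Primitive types are placeholders.

```cpp
// A traversal sketch; Box, Ray, and Primitive are placeholders for the scene types.
#include <memory>
#include <vector>

struct Ray;
struct Box       { bool intersects(const Ray&) const; };   // slab test, body omitted
struct Primitive { bool intersect(const Ray&) const; };    // body omitted

struct BVHNode {
    Box bounds;                               // bounds of everything below this node
    std::unique_ptr<BVHNode> left, right;     // null for leaves
    std::vector<const Primitive*> primitives; // filled only at leaves
};

// A ray only descends into a subtree whose bounding box it hits,
// so most of the scene is skipped for each ray.
bool hit(const BVHNode& node, const Ray& ray) {
    if (!node.bounds.intersects(ray)) return false;  // prune the whole subtree
    if (!node.left && !node.right) {                  // leaf: test the few primitives here
        for (const Primitive* p : node.primitives)
            if (p->intersect(ray)) return true;
        return false;
    }
    return (node.left  && hit(*node.left,  ray)) ||
           (node.right && hit(*node.right, ray));
}
```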
Although there are a couple of jagged edges with adaptive sampling, it's still a big improvement over no anti-aliasing, and it performs much better than supersampling.
| Anti-Aliasing | Render Time (s) |
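A rough sketch of the adaptive idea, under the assumption that a pixel is subdivided only when its corner samples disagree by more than a threshold; tracePixel() and the colour-difference metric are placeholders.

```cpp
// A sketch of the adaptive scheme; tracePixel() and the colour-difference
// threshold are placeholders, not the project's actual interface.
#include <cmath>

struct Colour {
    double r, g, b;
    Colour operator+(const Colour& o) const { return {r + o.r, g + o.g, b + o.b}; }
    Colour operator*(double s) const { return {r * s, g * s, b * s}; }
};

Colour tracePixel(double x, double y);   // shoot one primary ray (assumed)

double difference(const Colour& a, const Colour& b) {
    return std::fabs(a.r - b.r) + std::fabs(a.g - b.g) + std::fabs(a.b - b.b);
}

// Sample the square [x, x+size] x [y, y+size]; subdivide only where the
// corner samples disagree by more than 'threshold'.
Colour adaptiveSample(double x, double y, double size, int depth, double threshold) {
    Colour c00 = tracePixel(x, y),        c10 = tracePixel(x + size, y);
    Colour c01 = tracePixel(x, y + size), c11 = tracePixel(x + size, y + size);
    Colour avg = (c00 + c10 + c01 + c11) * 0.25;

    bool uniform = difference(c00, avg) < threshold && difference(c10, avg) < threshold &&
                   difference(c01, avg) < threshold && difference(c11, avg) < threshold;
    if (uniform || depth == 0) return avg;

    double h = size * 0.5;   // corners disagree: split into four sub-squares
    return (adaptiveSample(x,     y,     h, depth - 1, threshold) +
            adaptiveSample(x + h, y,     h, depth - 1, threshold) +
            adaptiveSample(x,     y + h, h, depth - 1, threshold) +
            adaptiveSample(x + h, y + h, h, depth - 1, threshold)) * 0.25;
}
```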
This was one of the first Monte-Carlo techniques that I implemented: sampling random points on the area lights. After 49 samples (a 7x7 grid), there isn't much improvement.
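Roughly, the estimator could look like the following sketch, where the light's rectangle is split into an n-by-n grid (7x7 above) and one jittered shadow ray is shot per cell; AreaLight and occluded() are stand-ins for the real scene interface.

```cpp
// A sketch of the estimator; AreaLight, occluded(), and the jitter are
// stand-ins for the real scene interface.
#include <random>

struct Vec3 { double x, y, z; };

struct AreaLight {
    Vec3 corner, edgeU, edgeV;   // rectangle: corner + s*edgeU + t*edgeV, with s, t in [0, 1]
};

bool occluded(const Vec3& from, const Vec3& to);   // shadow-ray test (assumed)

// Fraction of the area light visible from 'point', estimated with n*n jittered samples.
double lightVisibility(const Vec3& point, const AreaLight& light, int n, std::mt19937& rng) {
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    int visible = 0;
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < n; ++j) {
            double s = (i + uni(rng)) / n;   // jittered position within grid cell (i, j)
            double t = (j + uni(rng)) / n;
            Vec3 sample = { light.corner.x + s * light.edgeU.x + t * light.edgeV.x,
                            light.corner.y + s * light.edgeU.y + t * light.edgeV.y,
                            light.corner.z + s * light.edgeU.z + t * light.edgeV.z };
            if (!occluded(point, sample)) ++visible;
        }
    }
    return static_cast<double>(visible) / (n * n);
}
```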
This is another Monte-Carlo technique, implemented by sampling various points on the camera.
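Presumably this is depth of field via lens sampling; as a hedged sketch of that technique, each primary ray is jittered across a disc-shaped aperture and re-aimed at the point where the original ray crosses the focal plane. The Ray type and the camera basis vectors are assumptions.

```cpp
// A sketch of lens sampling; the Ray type and the camera basis vectors
// ('right', 'up') are assumptions about the surrounding camera code.
#include <cmath>
#include <random>

struct Vec3 {
    double x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
};

struct Ray { Vec3 origin, direction; };

const double PI = 3.14159265358979323846;

// Jitter a pinhole ray into a lens ray: pick a point on a disc-shaped aperture
// and aim it at the point where the original ray crosses the focal plane.
Ray lensRay(const Ray& pinhole, const Vec3& right, const Vec3& up,
            double aperture, double focalDistance, std::mt19937& rng) {
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    double radius = aperture * std::sqrt(uni(rng));      // uniform over the disc
    double theta  = 2.0 * PI * uni(rng);
    Vec3 offset = right * (radius * std::cos(theta)) + up * (radius * std::sin(theta));

    Vec3 focalPoint = pinhole.origin + pinhole.direction * focalDistance;   // assumes unit direction
    Vec3 origin = pinhole.origin + offset;
    return { origin, focalPoint - origin };              // usually normalized afterwards
}
```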
For my final scene, I wanted to render a chess board onto a real photo. Ideally, it should blend seamlessly into the background image like CGI. Unfortunately, I made my final scene too complex, so it wasn't able to finish before the deadline (I killed the process after it made less than 10% progress in 9 hours with 40 threads).
To save time, I designed this alternative scene with only 3 chess pieces instead of all 16. Sadly, this was also not able to finish at full resolution before the deadline. Sigh, I really wish Waterloo had better undergrad servers for doing these heavy computations.
After giving up on the undergrad machines, I spent $20 on a c4.8xlarge instance on Amazon EC2 to render my image before the report's deadline. I can't believe 36 virtual threads on Intel Xeon processors beat the 56 threads on the AMD processors in our CS servers.