diff --git a/.DS_Store b/.DS_Store index ed31069..92c2b2f 100644 Binary files a/.DS_Store and b/.DS_Store differ diff --git a/hw3/.DS_Store b/hw3/.DS_Store new file mode 100644 index 0000000..5008ddf Binary files /dev/null and b/hw3/.DS_Store differ diff --git a/hw3/images/.DS_Store b/hw3/images/.DS_Store new file mode 100644 index 0000000..8978714 Binary files /dev/null and b/hw3/images/.DS_Store differ diff --git a/hw3/images/CBbunny_H_16_8.png b/hw3/images/CBbunny_H_16_8.png new file mode 100644 index 0000000..41fa8a8 Binary files /dev/null and b/hw3/images/CBbunny_H_16_8.png differ diff --git a/hw3/images/CBbunny_H_1_1.png b/hw3/images/CBbunny_H_1_1.png new file mode 100644 index 0000000..c53e9a5 Binary files /dev/null and b/hw3/images/CBbunny_H_1_1.png differ diff --git a/hw3/images/CBbunny_H_1_16.png b/hw3/images/CBbunny_H_1_16.png new file mode 100644 index 0000000..f46a5e8 Binary files /dev/null and b/hw3/images/CBbunny_H_1_16.png differ diff --git a/hw3/images/CBbunny_H_1_4.png b/hw3/images/CBbunny_H_1_4.png new file mode 100644 index 0000000..e897cda Binary files /dev/null and b/hw3/images/CBbunny_H_1_4.png differ diff --git a/hw3/images/CBbunny_H_1_64.png b/hw3/images/CBbunny_H_1_64.png new file mode 100644 index 0000000..3435d4d Binary files /dev/null and b/hw3/images/CBbunny_H_1_64.png differ diff --git a/hw3/images/CBbunny_H_64_32.png b/hw3/images/CBbunny_H_64_32.png new file mode 100644 index 0000000..0236df1 Binary files /dev/null and b/hw3/images/CBbunny_H_64_32.png differ diff --git a/hw3/images/bunny.png b/hw3/images/bunny.png new file mode 100644 index 0000000..b52cd27 Binary files /dev/null and b/hw3/images/bunny.png differ diff --git a/hw3/images/bunny_1_1.png b/hw3/images/bunny_1_1.png new file mode 100644 index 0000000..a59c848 Binary files /dev/null and b/hw3/images/bunny_1_1.png differ diff --git a/hw3/images/bunny_1_16.png b/hw3/images/bunny_1_16.png new file mode 100644 index 0000000..2c7e408 Binary files /dev/null and b/hw3/images/bunny_1_16.png differ diff --git a/hw3/images/bunny_1_1_2.png b/hw3/images/bunny_1_1_2.png new file mode 100644 index 0000000..a9829c4 Binary files /dev/null and b/hw3/images/bunny_1_1_2.png differ diff --git a/hw3/images/bunny_1_1_6.png b/hw3/images/bunny_1_1_6.png new file mode 100644 index 0000000..e9b4231 Binary files /dev/null and b/hw3/images/bunny_1_1_6.png differ diff --git a/hw3/images/bunny_1_4.png b/hw3/images/bunny_1_4.png new file mode 100644 index 0000000..ad0e6cb Binary files /dev/null and b/hw3/images/bunny_1_4.png differ diff --git a/hw3/images/bunny_1_64.png b/hw3/images/bunny_1_64.png new file mode 100644 index 0000000..2db03f9 Binary files /dev/null and b/hw3/images/bunny_1_64.png differ diff --git a/hw3/images/bunny_64_32.png b/hw3/images/bunny_64_32.png new file mode 100644 index 0000000..4e75220 Binary files /dev/null and b/hw3/images/bunny_64_32.png differ diff --git a/hw3/images/bunny_rate.png b/hw3/images/bunny_rate.png new file mode 100644 index 0000000..d125704 Binary files /dev/null and b/hw3/images/bunny_rate.png differ diff --git a/hw3/images/bvh/beast.png b/hw3/images/bvh/beast.png new file mode 100644 index 0000000..dda0a26 Binary files /dev/null and b/hw3/images/bvh/beast.png differ diff --git a/hw3/images/bvh/beetle.png b/hw3/images/bvh/beetle.png new file mode 100644 index 0000000..a9b3778 Binary files /dev/null and b/hw3/images/bvh/beetle.png differ diff --git a/hw3/images/bvh/cow.png b/hw3/images/bvh/cow.png new file mode 100644 index 0000000..1a67be2 Binary files /dev/null and 
b/hw3/images/bvh/cow.png differ diff --git a/hw3/images/bvh/maxplanck.png b/hw3/images/bvh/maxplanck.png new file mode 100644 index 0000000..fb14641 Binary files /dev/null and b/hw3/images/bvh/maxplanck.png differ diff --git a/hw3/images/dragon_64_32.png b/hw3/images/dragon_64_32.png new file mode 100644 index 0000000..7120a38 Binary files /dev/null and b/hw3/images/dragon_64_32.png differ diff --git a/hw3/images/example_image.jpg b/hw3/images/example_image.jpg new file mode 100644 index 0000000..4cd77ba Binary files /dev/null and b/hw3/images/example_image.jpg differ diff --git a/hw3/images/global-illum/bunny_0.png b/hw3/images/global-illum/bunny_0.png new file mode 100644 index 0000000..aca83dd Binary files /dev/null and b/hw3/images/global-illum/bunny_0.png differ diff --git a/hw3/images/global-illum/bunny_0_F.png b/hw3/images/global-illum/bunny_0_F.png new file mode 100644 index 0000000..f6c7868 Binary files /dev/null and b/hw3/images/global-illum/bunny_0_F.png differ diff --git a/hw3/images/global-illum/bunny_1.png b/hw3/images/global-illum/bunny_1.png new file mode 100644 index 0000000..d11a99b Binary files /dev/null and b/hw3/images/global-illum/bunny_1.png differ diff --git a/hw3/images/global-illum/bunny_1_F.png b/hw3/images/global-illum/bunny_1_F.png new file mode 100644 index 0000000..9522b16 Binary files /dev/null and b/hw3/images/global-illum/bunny_1_F.png differ diff --git a/hw3/images/global-illum/bunny_2.png b/hw3/images/global-illum/bunny_2.png new file mode 100644 index 0000000..3aba89b Binary files /dev/null and b/hw3/images/global-illum/bunny_2.png differ diff --git a/hw3/images/global-illum/bunny_2_F.png b/hw3/images/global-illum/bunny_2_F.png new file mode 100644 index 0000000..031aea5 Binary files /dev/null and b/hw3/images/global-illum/bunny_2_F.png differ diff --git a/hw3/images/global-illum/bunny_3.png b/hw3/images/global-illum/bunny_3.png new file mode 100644 index 0000000..917e20c Binary files /dev/null and b/hw3/images/global-illum/bunny_3.png differ diff --git a/hw3/images/global-illum/bunny_3_F.png b/hw3/images/global-illum/bunny_3_F.png new file mode 100644 index 0000000..1977ded Binary files /dev/null and b/hw3/images/global-illum/bunny_3_F.png differ diff --git a/hw3/images/global-illum/bunny_4.png b/hw3/images/global-illum/bunny_4.png new file mode 100644 index 0000000..d9d9d9b Binary files /dev/null and b/hw3/images/global-illum/bunny_4.png differ diff --git a/hw3/images/global-illum/bunny_4_F.png b/hw3/images/global-illum/bunny_4_F.png new file mode 100644 index 0000000..5e93c32 Binary files /dev/null and b/hw3/images/global-illum/bunny_4_F.png differ diff --git a/hw3/images/global-illum/bunny_5.png b/hw3/images/global-illum/bunny_5.png new file mode 100644 index 0000000..0a86d1d Binary files /dev/null and b/hw3/images/global-illum/bunny_5.png differ diff --git a/hw3/images/global-illum/bunny_5_F.png b/hw3/images/global-illum/bunny_5_F.png new file mode 100644 index 0000000..c8dbec4 Binary files /dev/null and b/hw3/images/global-illum/bunny_5_F.png differ diff --git a/hw3/images/global_bunny.png b/hw3/images/global_bunny.png new file mode 100644 index 0000000..90211d2 Binary files /dev/null and b/hw3/images/global_bunny.png differ diff --git a/hw3/images/global_spheres.png b/hw3/images/global_spheres.png new file mode 100644 index 0000000..112c5ff Binary files /dev/null and b/hw3/images/global_spheres.png differ diff --git a/hw3/images/p1t1CBempty.png b/hw3/images/p1t1CBempty.png new file mode 100644 index 0000000..f0755d3 Binary files /dev/null and 
b/hw3/images/p1t1CBempty.png differ diff --git a/hw3/images/p1t1banana.png b/hw3/images/p1t1banana.png new file mode 100644 index 0000000..7056b07 Binary files /dev/null and b/hw3/images/p1t1banana.png differ diff --git a/hw3/images/p1t4CBspheres.png b/hw3/images/p1t4CBspheres.png new file mode 100644 index 0000000..9203deb Binary files /dev/null and b/hw3/images/p1t4CBspheres.png differ diff --git a/hw3/images/p3/bunny_ray_1.png b/hw3/images/p3/bunny_ray_1.png new file mode 100644 index 0000000..22ef5a0 Binary files /dev/null and b/hw3/images/p3/bunny_ray_1.png differ diff --git a/hw3/images/p3/bunny_ray_16.png b/hw3/images/p3/bunny_ray_16.png new file mode 100644 index 0000000..17260c0 Binary files /dev/null and b/hw3/images/p3/bunny_ray_16.png differ diff --git a/hw3/images/p3/bunny_ray_4.png b/hw3/images/p3/bunny_ray_4.png new file mode 100644 index 0000000..36fb919 Binary files /dev/null and b/hw3/images/p3/bunny_ray_4.png differ diff --git a/hw3/images/p3/bunny_ray_64.png b/hw3/images/p3/bunny_ray_64.png new file mode 100644 index 0000000..66bd6b4 Binary files /dev/null and b/hw3/images/p3/bunny_ray_64.png differ diff --git a/hw3/images/pt1CBcoil.png b/hw3/images/pt1CBcoil.png new file mode 100644 index 0000000..68ea954 Binary files /dev/null and b/hw3/images/pt1CBcoil.png differ diff --git a/hw3/images/pt1t3CBempty.png b/hw3/images/pt1t3CBempty.png new file mode 100644 index 0000000..0c19d32 Binary files /dev/null and b/hw3/images/pt1t3CBempty.png differ diff --git a/hw3/images/pt2CBlucy.png b/hw3/images/pt2CBlucy.png new file mode 100644 index 0000000..8819afd Binary files /dev/null and b/hw3/images/pt2CBlucy.png differ diff --git a/hw3/images/pt2cowAfterBVH.png b/hw3/images/pt2cowAfterBVH.png new file mode 100644 index 0000000..3adffd7 Binary files /dev/null and b/hw3/images/pt2cowAfterBVH.png differ diff --git a/hw3/images/pt2cowBeforeBVH.png b/hw3/images/pt2cowBeforeBVH.png new file mode 100644 index 0000000..3612fd1 Binary files /dev/null and b/hw3/images/pt2cowBeforeBVH.png differ diff --git a/hw3/images/pt2maxplanck.png b/hw3/images/pt2maxplanck.png new file mode 100644 index 0000000..4e516e8 Binary files /dev/null and b/hw3/images/pt2maxplanck.png differ diff --git a/hw3/images/pt3t2CBbunny_16_8.png b/hw3/images/pt3t2CBbunny_16_8.png new file mode 100644 index 0000000..07f1237 Binary files /dev/null and b/hw3/images/pt3t2CBbunny_16_8.png differ diff --git a/hw3/images/roulette/roulette_0.png b/hw3/images/roulette/roulette_0.png new file mode 100644 index 0000000..1f2d771 Binary files /dev/null and b/hw3/images/roulette/roulette_0.png differ diff --git a/hw3/images/roulette/roulette_1.png b/hw3/images/roulette/roulette_1.png new file mode 100644 index 0000000..90f7626 Binary files /dev/null and b/hw3/images/roulette/roulette_1.png differ diff --git a/hw3/images/roulette/roulette_100.png b/hw3/images/roulette/roulette_100.png new file mode 100644 index 0000000..94f0e3f Binary files /dev/null and b/hw3/images/roulette/roulette_100.png differ diff --git a/hw3/images/roulette/roulette_2.png b/hw3/images/roulette/roulette_2.png new file mode 100644 index 0000000..1b2678a Binary files /dev/null and b/hw3/images/roulette/roulette_2.png differ diff --git a/hw3/images/roulette/roulette_3.png b/hw3/images/roulette/roulette_3.png new file mode 100644 index 0000000..ae1feb6 Binary files /dev/null and b/hw3/images/roulette/roulette_3.png differ diff --git a/hw3/images/roulette/roulette_4.png b/hw3/images/roulette/roulette_4.png new file mode 100644 index 0000000..dcd0db0 Binary files 
/dev/null and b/hw3/images/roulette/roulette_4.png differ diff --git a/hw3/images/spheres_direct.png b/hw3/images/spheres_direct.png new file mode 100644 index 0000000..f69d88f Binary files /dev/null and b/hw3/images/spheres_direct.png differ diff --git a/hw3/images/spheres_indirect.png b/hw3/images/spheres_indirect.png new file mode 100644 index 0000000..9cb638a Binary files /dev/null and b/hw3/images/spheres_indirect.png differ diff --git a/hw3/images/spheres_lambertian.png b/hw3/images/spheres_lambertian.png new file mode 100644 index 0000000..729ba17 Binary files /dev/null and b/hw3/images/spheres_lambertian.png differ diff --git a/hw3/images/spheres_lambertian_rate.png b/hw3/images/spheres_lambertian_rate.png new file mode 100644 index 0000000..7730774 Binary files /dev/null and b/hw3/images/spheres_lambertian_rate.png differ diff --git a/hw3/images/spp/spp_sphere_1.png b/hw3/images/spp/spp_sphere_1.png new file mode 100644 index 0000000..34b909c Binary files /dev/null and b/hw3/images/spp/spp_sphere_1.png differ diff --git a/hw3/images/spp/spp_sphere_1024.png b/hw3/images/spp/spp_sphere_1024.png new file mode 100644 index 0000000..adef08f Binary files /dev/null and b/hw3/images/spp/spp_sphere_1024.png differ diff --git a/hw3/images/spp/spp_sphere_128.png b/hw3/images/spp/spp_sphere_128.png new file mode 100644 index 0000000..7784af5 Binary files /dev/null and b/hw3/images/spp/spp_sphere_128.png differ diff --git a/hw3/images/spp/spp_sphere_16.png b/hw3/images/spp/spp_sphere_16.png new file mode 100644 index 0000000..dbc420e Binary files /dev/null and b/hw3/images/spp/spp_sphere_16.png differ diff --git a/hw3/images/spp/spp_sphere_2.png b/hw3/images/spp/spp_sphere_2.png new file mode 100644 index 0000000..207414a Binary files /dev/null and b/hw3/images/spp/spp_sphere_2.png differ diff --git a/hw3/images/spp/spp_sphere_4.png b/hw3/images/spp/spp_sphere_4.png new file mode 100644 index 0000000..2a7821e Binary files /dev/null and b/hw3/images/spp/spp_sphere_4.png differ diff --git a/hw3/images/spp/spp_sphere_64.png b/hw3/images/spp/spp_sphere_64.png new file mode 100644 index 0000000..1149b1a Binary files /dev/null and b/hw3/images/spp/spp_sphere_64.png differ diff --git a/hw3/images/spp/spp_sphere_8.png b/hw3/images/spp/spp_sphere_8.png new file mode 100644 index 0000000..3a9cc3e Binary files /dev/null and b/hw3/images/spp/spp_sphere_8.png differ diff --git a/hw3/index.html b/hw3/index.html index c30176b..7572aaa 100644 --- a/hw3/index.html +++ b/hw3/index.html @@ -1,7 +1,686 @@ - -
- - - Homework 3 index.html here - - \ No newline at end of file
+ In our comprehensive exploration of advanced rendering techniques, we embarked on a nuanced journey that spanned from foundational ray generation and scene intersection methodologies to the implementation of sophisticated global illumination and adaptive sampling strategies. Our initial focus was on mastering the intricacies of ray generation and accurately determining intersections within complex scenes, establishing a solid foundation for subsequent advancements.
+
+ Progressing to the optimization of rendering times through the implementation of a Bounding Volume Hierarchy (BVH), we achieved significant efficiency improvements, particularly in scenes with complex geometry. This optimization was crucial for handling detailed scenes effectively and allowed us to increase the scale of detail within our scenes.
+
+ Our exploration further extended into the realms of direct and indirect illumination, where we employed techniques such as uniform hemisphere sampling and light importance sampling. These methods allowed us to capture the subtle interplay of light and shadow with remarkable realism, enhancing the visual depth and authenticity of our renderings. When we incorporated adaptive sampling, this enabled us to allocate computational resources dynamically across various regions of an image based on complexity and variance. This strategic allocation not only improved the overall image quality but also optimized the rendering process, striking a balance between computational efficiency and visual fidelity.
+
+ Throughout this journey, we encountered and surmounted numerous challenges, from achieving the desired lighting effects to optimizing rendering workflows. The specific challenges we encountered were:
+1. Updating the max_depth of the rays generated for global illumination.
+2. Ensuring the formulas from the algorithms were implemented correctly.
+3. Iterating through the correct total number of light samples, so the image did not get dimmer as we increased the number of light samples.
+4. Verifying that adaptive sampling achieved the correct results, using the generated sampling-rate images.
+5. Disabling Russian roulette during adaptive sampling, since its random early termination interfered with our convergence statistics.
+6. Testing whether the different types of illumination functioned correctly; we examined small 100x100 px regions so we would not invest too much time rendering an outcome whose errors could have been spotted earlier.
+There were several other conceptual and implementation challenges along the way, but we persevered and verified the correctness and functionality of the overall product.
+
+ We learned significant lessons in pinpointing locations in the code that needed improvement by varying different parameters (flags) across several renderings. We learned the differences between direct and indirect illumination, and saw those differences reflected in the overall brightness of renders with higher maximum ray depths. Learning the physics formulas needed to implement these characteristics of light in a ray tracer was fascinating and taught us significant lessons in the realm of computer graphics rendering.
+
+ We were committed to ensuring each function worked as desired, to create beautiful renderings that are not only visually appealing but also computationally efficient. This exploration not only advanced our technical proficiency in computer graphics but also reinforced our dedication to pushing the boundaries of digital rendering technology.
+
Generating the Ray: To generate the ray, our primary task is to convert from image space (a 1x1 grid) to camera space (a 2x2 grid) with a different origin, applying scaling transformations between the two spaces. Given the camera position, the camera-to-world transformation matrix, the horizontal field of view (hFov), and the vertical field of view (vFov), we implemented the following algorithm:
+1. Translate: shift the image coordinates so they are centered at the origin, by subtracting 0.5 from both the x and y coordinates.
+2. Transform the normalized image coordinates to camera space using the tangent of the half field of view: the sensor plane at z = -1 spans from (-tan(hFov/2), -tan(vFov/2)) to (tan(hFov/2), tan(vFov/2)).
+3. Calculate the sensor_x and sensor_y positions by scaling the translated coordinates by 2 * tan(hFov/2) and 2 * tan(vFov/2), respectively.
+4. Generate the ray in camera space, using the 3D coordinates we know with sensor_x, sensor_y, and the z = -1 value as provided in the visualization.
+5. Transform the ray into world space.
+6. Normalize the direction vector into the unit vector.
+7. Create the ray from the calculated direction vector using the camera’s position as the origin for the vector.
+This process computes the position of the input sensor sample coordinate on the canonical sensor plane one unit along -z from the pinhole.
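A minimal sketch of this procedure, assuming CS184-style Camera fields (hFov and vFov stored in degrees, a c2w camera-to-world rotation matrix, and pos for the camera origin; these names are illustrative rather than the exact skeleton):

```cpp
// Sketch of camera ray generation. Assumes hFov/vFov are in degrees,
// c2w is the camera-to-world rotation, and pos is the camera origin.
Ray Camera::generate_ray(double x, double y) const {
  // Steps 1-3: map (x, y) in [0,1]^2 onto the sensor plane at z = -1.
  double sensor_x = (2.0 * x - 1.0) * tan(0.5 * radians(hFov));
  double sensor_y = (2.0 * y - 1.0) * tan(0.5 * radians(vFov));

  // Step 4: the ray direction in camera space.
  Vector3D dir_camera(sensor_x, sensor_y, -1.0);

  // Steps 5-6: rotate into world space and normalize.
  Vector3D dir_world = (c2w * dir_camera).unit();

  // Step 7: the ray originates at the camera position.
  return Ray(pos, dir_world);
}
```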
+ +
+ The Triangle Intersection Algorithm: The triangle intersection algorithm checks if a ray intersects a triangle in 3D space. A triangle is defined by three points: A, B, and C. A ray is defined by an origin O and a direction D. The goal is to find if the ray hits the triangle and, if so, where.
+
+ Step 1: Find the Plane of the Triangle:
+ First, we determine the plane in which the triangle lies. This is done by calculating the cross product of two edges of the triangle (from A to B and from A to C), which gives us the normal vector (N) of the plane. The plane's equation can be expressed as N dot (P - A) = 0, where P is any point on the plane.
+
+ Step 2: Ray-Plane Intersection:
+ Next, we check if the ray intersects this plane. We find the intersection point P by solving the equation N dot (O + tD - A) = 0 for t, which gives t = (N dot (A - O)) / (N dot D); t is the scalar that, when multiplied with the ray's direction D and added to the origin O, gives us the intersection point P. If t is negative, the intersection point is behind the ray's origin, and we conclude there is no intersection with the triangle.
+
+ Step 3: Inside-Outside Test (Barycentric Method):
+ After finding the intersection point P on the plane, we need to check if P is inside the triangle. One common method is to use barycentric coordinates, which express P as a combination of the triangle's vertices: P = alpha A + beta B + gamma C, where alpha, beta, and gamma are the barycentric coordinates of P.
+ For P to be inside the triangle, it must satisfy three conditions: alpha >= 0, beta >= 0, and gamma >= 0 (where gamma = 1 - alpha - beta). These conditions ensure that P is not only on the plane of the triangle but also within its boundaries.
+
+ Overall, the triangle intersection algorithm is fundamental to the rendering pipeline, enabling the accurate depiction of complex 3D models by identifying which rays from the camera intersect with objects in the scene. By efficiently and accurately identifying these intersections, the algorithm facilitates subsequent shading and rendering processes that bring the scene to life. Sphere intersection works in a similar manner, except that instead of a ray-plane test we substitute the ray equation into the implicit sphere equation and solve the resulting quadratic for t.
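The steps above can be folded into a single pass using the standard Möller–Trumbore formulation, which solves for t and the barycentric coordinates simultaneously. A self-contained sketch (the Vec3 helpers stand in for whatever vector type the renderer uses):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
  return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}

// Moller-Trumbore: solve O + tD = (1 - b1 - b2)A + b1*B + b2*C
// for the ray parameter t and barycentric weights b1, b2.
bool intersect_triangle(Vec3 O, Vec3 D, Vec3 A, Vec3 B, Vec3 C,
                        double &t, double &b1, double &b2) {
  Vec3 e1 = sub(B, A), e2 = sub(C, A), s = sub(O, A);
  Vec3 s1 = cross(D, e2), s2 = cross(s, e1);
  double denom = dot(s1, e1);
  if (std::fabs(denom) < 1e-12) return false;  // ray parallel to the plane
  t  = dot(s2, e2) / denom;  // distance along the ray
  b1 = dot(s1, s)  / denom;  // weight of vertex B
  b2 = dot(s2, D)  / denom;  // weight of vertex C
  // Inside-outside test: all barycentric coordinates non-negative.
  return t >= 0 && b1 >= 0 && b2 >= 0 && b1 + b2 <= 1;
}
```

In the renderer itself, the valid-t check would also respect the ray's min_t/max_t interval before recording the hit.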
[image table: Part 1 renderings]
BVH stands for bounding volume hierarchy, a structure built by recursively partitioning the scene's primitives (triangles or spheres) into a tree whose leaf nodes each contain a small set of primitives. Our BVH construction algorithm is as follows:
+1. Bounding Box Calculation: For each primitive, calculate a bounding box that fully encloses it. The bounding box of a primitive is the smallest axis-aligned box that contains the primitive.
+2. Centroid Calculation: For each primitive, calculate the centroid (geometric center). This is done by averaging the coordinates of all points in the primitive.
+3. Axis Selection: Choose an axis to split the primitives along. This can be the x, y, or z axis. The choice can be made based on various criteria, such as the axis with the greatest range of centroid coordinates.
+4. Primitive Sorting: Sort the primitives based on their centroid coordinates along the chosen axis. This is done using the std::nth_element function, which partially sorts the range of primitives such that all elements before the nth element are less than or equal to the elements after it. The nth element is chosen to be the middle of the range, effectively dividing the primitives into two roughly equal-sized groups.
+5. Node Creation: Create a new BVH node that encloses all the primitives in the current range. The bounding box of the node is the smallest box that contains the bounding boxes of all primitives in the range.
+6. Recursion: Recursively apply the process to each group of primitives to create the left and right child nodes of the current node. The base case of the recursion is when a group contains a small number (e.g., less than or equal to a specified maximum leaf size) of primitives, in which case a leaf node is created that directly contains those primitives.
+The heuristic we chose for picking the splitting point was the axis of maximum extent: we computed the extent of the centroids along each axis and assigned a value of 0 if the x-axis had the maximum extent, 1 if the y-axis did, and 2 if the z-axis did. We then sorted the primitives by their centroid coordinate along that axis; a sketch of the construction appears below. Sorting the primitives during BVH construction is crucial for creating an efficient tree structure that minimizes the number of intersection tests needed during ray tracing. The sorted order is used to find a good splitting point for the BVH node: the primitives are divided into two groups around this splitting point, each group forming a child node. This process repeats recursively, resulting in a tree where each node encloses a subset of the primitives and each leaf node contains only a few. The goal is a BVH in which each ray can quickly eliminate large numbers of primitives by testing against the bounding boxes of the BVH nodes; by sorting the primitives and creating a balanced BVH, the number of intersection tests is significantly reduced, leading to faster ray tracing.
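A sketch of the recursive construction (the types and field names follow the homework skeleton's general shape — BBox, BVHNode, iterator-based primitive ranges — but should be read as illustrative):

```cpp
#include <algorithm>  // std::nth_element

BVHNode *BVHAccel::construct_bvh(std::vector<Primitive *>::iterator start,
                                 std::vector<Primitive *>::iterator end,
                                 size_t max_leaf_size) {
  // Steps 1-2: bound all primitives, and separately bound their centroids.
  BBox bbox, centroid_box;
  for (auto p = start; p != end; p++) {
    bbox.expand((*p)->get_bbox());
    centroid_box.expand((*p)->get_bbox().centroid());
  }

  BVHNode *node = new BVHNode(bbox);
  size_t count = end - start;
  if (count <= max_leaf_size) {  // base case: make a leaf
    node->start = start;
    node->end = end;
    return node;
  }

  // Step 3: pick the axis with the greatest centroid extent (0=x, 1=y, 2=z).
  Vector3D extent = centroid_box.extent;
  int axis = 0;
  if (extent.y > extent.x) axis = 1;
  if (extent.z > extent[axis]) axis = 2;

  // Step 4: partial sort so the median centroid splits the range in half.
  auto mid = start + count / 2;
  std::nth_element(start, mid, end, [axis](Primitive *a, Primitive *b) {
    return a->get_bbox().centroid()[axis] < b->get_bbox().centroid()[axis];
  });

  // Steps 5-6: recurse on each half.
  node->l = construct_bvh(start, mid, max_leaf_size);
  node->r = construct_bvh(mid, end, max_leaf_size);
  return node;
}
```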
[image table: BVH renderings of beast, beetle, cow, and maxplanck]
+ The following are terminal outputs from executing the rendering command on various .dae files. For cow.dae specifically, we rendered before and after adding the BVH; the rendering speedup from using the BVH is massive.
+
BEFORE BVH
+ [PathTracer] Rendering... 100%! (18.7786s)
+ [PathTracer] BVH traced 478626 rays.
+ [PathTracer] Average speed 0.0255 million rays per second.
+ [PathTracer] Averaged 625.178969 intersection tests per ray.
+ [PathTracer] Saving to file: cow.png... Done!
+ [PathTracer] Job completed.
+
+ AFTER BVH
+ [PathTracer] Rendering... 100%! (0.0369s)
+ [PathTracer] BVH traced 377293 rays.
+ [PathTracer] Average speed 10.2334 million rays per second.
+ [PathTracer] Averaged 0.000000 intersection tests per ray.
+ [PathTracer] Saving to file: cow.png... Done!
+ [PathTracer] Job completed.
+
+ The following are large files that can only be rendered using BVH. This shows the amount of rays, average speed, and render time duration to complete the rendering path tracing process.
+ For beetle, beast, and max planck:
+
+ beetle.dae
+ [PathTracer] Rendering... 100%! (0.0333s)
+ [PathTracer] BVH traced 330924 rays.
+ [PathTracer] Average speed 9.9451 million rays per second.
+ [PathTracer] Averaged 0.000000 intersection tests per ray.
+ [PathTracer] Saving to file: beetle.png... Done!
+ [PathTracer] Job completed.
+
+ beast.dae
+ [PathTracer] Rendering... 100%! (0.0467s)
+ [PathTracer] BVH traced 397005 rays.
+ [PathTracer] Average speed 8.5062 million rays per second.
+ [PathTracer] Averaged 0.000000 intersection tests per ray.
+ [PathTracer] Saving to file: beast.png... Done!
+ [PathTracer] Job completed.
+
+ maxplanck.dae
+ [PathTracer] Rendering... 100%! (0.0555s)
+ [PathTracer] BVH traced 402286 rays.
+ [PathTracer] Average speed 7.2458 million rays per second.
+ [PathTracer] Averaged 0.000000 intersection tests per ray.
+ [PathTracer] Saving to file: maxplanck.png... Done!
+ [PathTracer] Job completed.
+
+ CBlucy.dae
+ [PathTracer] Rendering... 100%! (1.9990s)
+ [PathTracer] BVH traced 438961 rays.
+ [PathTracer] Average speed 0.2196 million rays per second.
+ [PathTracer] Averaged 0.000000 intersection tests per ray.
+ [PathTracer] Saving to file: CBlucy.png... Done!
+ [PathTracer] Job completed.
+
+ This clearly illustrates the improvement that using bounding volume hierarchies has on the overall rendering process, as shown by the differences in rendering times. The speedup comes from reducing the number of necessary ray-primitive intersection tests, organizing the scene's geometry in a way that optimizes traversal, and enabling efficient parallel processing.
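The cheap test that makes this pruning possible is the ray vs. axis-aligned bounding box intersection. A sketch of the standard slab method (function and parameter names are illustrative, with a CGL-style Vector3D assumed):

```cpp
#include <algorithm>  // std::swap, std::min, std::max

// Slab method: intersect the ray o + t*d with each pair of axis-aligned
// planes and keep the running overlap [t0, t1] of the three intervals.
bool bbox_intersect(const Vector3D &o, const Vector3D &d,
                    const Vector3D &bmin, const Vector3D &bmax,
                    double &t0, double &t1) {
  for (int a = 0; a < 3; a++) {
    double inv = 1.0 / d[a];  // IEEE infinities cover most axis-parallel rays
    double tnear = (bmin[a] - o[a]) * inv;
    double tfar  = (bmax[a] - o[a]) * inv;
    if (tnear > tfar) std::swap(tnear, tfar);
    t0 = std::max(t0, tnear);
    t1 = std::min(t1, tfar);
    if (t0 > t1) return false;  // the slab intervals do not overlap
  }
  return true;  // the ray passes through the box on [t0, t1]
}
```

During traversal, a node whose box test fails lets the renderer skip every primitive beneath that node, which is where the timing differences above come from.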
+
+
+
Uniform hemisphere sampling is a technique in graphics used in the context of global illumination and rendering algorithms like path tracing. This method involves generating sample directions uniformly distributed over a hemisphere centered around the normal vector at a point on a surface. The goal is to simulate the way light scatters after hitting surfaces, by sampling the incoming light from all possible directions over the hemisphere.
+ We implemented uniform hemisphere sampling through the following process:
+
+
1. Coordinate System Transformation: First, a local coordinate system is created at the point of intersection (isect) on a surface, where the surface normal (isect.n) aligns with the Z-axis of this local coordinate system. This transformation is crucial for simplifying the calculations, as it allows sampling in a standardized hemisphere oriented along the Z-axis.
+2. Sampling Directions: For each sample, a direction (w_in) is generated uniformly across the hemisphere. This involves generating two random numbers to represent the spherical coordinates (excluding the radius, since it's a direction vector on a unit hemisphere) and converting them to Cartesian coordinates. The uniform distribution ensures that every direction is equally likely to be sampled, which is important for accurately simulating diffuse lighting.
+3. Converting Samples to World Space: The sampled direction in the local coordinate system is then transformed back to the world coordinate system. This step is necessary because the rest of the scene (including light sources, other objects, etc.) is defined in world coordinates, and we need to trace rays in this space.
+4. Ray Tracing for Light Contribution: For each sampled direction, a ray is cast from the intersection point into the scene. If this ray hits a light source, the light contribution from that direction is calculated based on the surface's Bidirectional Scattering Distribution Function (BSDF), the light's intensity, and the geometric and visibility terms. The probability density function (PDF) for a uniform hemisphere sample is 1 / (2 * PI), as the solid angle of a hemisphere is 2 * PI steradians.
+5. Accumulating Light Contributions: The contributions from all samples are accumulated to estimate the total direct lighting at the point of intersection. This involves summing the contributions and averaging them over the number of samples, approximating the integral of the incoming light over the hemisphere; the loop iterates num_samples times. A sketch of the full estimator follows this list.
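The sketch below follows the general shape of the homework skeleton; make_coord_space, hemisphereSampler, ns_area_light, EPS_F and the other names are illustrative assumptions rather than guaranteed API:

```cpp
// Sketch: direct lighting estimated by uniform hemisphere sampling.
Vector3D PathTracer::estimate_direct_lighting_hemisphere(
    const Ray &r, const Intersection &isect) {
  // Step 1: local frame at the hit point, normal mapped to +Z.
  Matrix3x3 o2w;
  make_coord_space(o2w, isect.n);
  Matrix3x3 w2o = o2w.T();

  Vector3D hit_p = r.o + r.d * isect.t;
  Vector3D w_out = w2o * (-r.d);

  int num_samples = scene->lights.size() * ns_area_light;
  double pdf = 1.0 / (2.0 * PI);  // uniform over the hemisphere
  Vector3D L_out;

  for (int i = 0; i < num_samples; i++) {
    // Step 2: uniform direction in local coordinates.
    Vector3D w_in = hemisphereSampler->get_sample();
    // Step 3: back to world space for tracing.
    Vector3D w_in_world = o2w * w_in;

    // Step 4: cast the ray (offset to avoid self-intersection).
    Ray sample_ray(hit_p + EPS_F * w_in_world, w_in_world);
    Intersection light_isect;
    if (bvh->intersect(sample_ray, &light_isect)) {
      Vector3D L = light_isect.bsdf->get_emission();  // nonzero only for lights
      // cos(theta) is simply w_in.z in the local frame.
      L_out += L * isect.bsdf->f(w_out, w_in) * w_in.z / pdf;
    }
  }
  // Step 5: average over all samples.
  return L_out / num_samples;
}
```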
This process is essential for simulating realistic lighting in computer graphics, as it approximates complex light interactions in a scene without explicitly simulating every photon's path.
Light importance sampling follows a related procedure, but targets the lights directly:
Coordinate System Transformation: As in uniform hemisphere sampling, a local coordinate system is established at the intersection point on a surface, aligning the surface normal with the Z-axis. This simplifies the calculations by providing a consistent reference frame for sampling.
+1. Sampling Light Sources: Instead of sampling all possible directions over the hemisphere, importance sampling targets light sources directly. For each light source in the scene, a number of samples proportional to the light's apparent size and intensity from the point of view of the intersection point are generated. This approach prioritizes directions where light is more concentrated, increasing the efficiency of the sampling process.
+2. Visibility Check: For each sampled direction, a ray is cast from the intersection point towards the light source to determine whether the light source is visible (i.e., not blocked by other geometry in the scene). This step is crucial for accurately calculating the contribution of each light source to the overall lighting.
+3. Light Contribution Calculation: For visible samples, the light contribution is calculated based on the surface's Bidirectional Scattering Distribution Function (BSDF), the intensity of the sampled light, and the geometric relationship between the light source, the surface point, and the viewing direction. The contribution is weighted by the inverse of the probability density function (PDF) associated with the sampling strategy, which compensates for the non-uniform sampling distribution.
+4. Accumulating Light Contributions: The contributions from all samples are accumulated to estimate the total direct lighting from all light sources at the point of intersection. The result is an approximation of the integral of the incoming light over the hemisphere, weighted by the importance of each light source.
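A corresponding sketch for light importance sampling (again skeleton-shaped; sample_L, is_delta_light, and the other names are illustrative assumptions):

```cpp
// Sketch: direct lighting estimated by sampling the lights directly.
Vector3D PathTracer::estimate_direct_lighting_importance(
    const Ray &r, const Intersection &isect) {
  Matrix3x3 o2w;
  make_coord_space(o2w, isect.n);
  Matrix3x3 w2o = o2w.T();
  Vector3D hit_p = r.o + r.d * isect.t;
  Vector3D w_out = w2o * (-r.d);
  Vector3D L_out;

  for (SceneLight *light : scene->lights) {
    // A delta (point) light needs only one sample; area lights get many.
    int ns = light->is_delta_light() ? 1 : ns_area_light;
    Vector3D L_light;
    for (int i = 0; i < ns; i++) {
      Vector3D w_in_world;
      double dist_to_light, pdf;
      Vector3D radiance = light->sample_L(hit_p, &w_in_world,
                                          &dist_to_light, &pdf);
      Vector3D w_in = w2o * w_in_world;
      if (w_in.z < 0) continue;  // light is behind the surface

      // Shadow ray: count the sample only if nothing blocks the light.
      Ray shadow(hit_p + EPS_F * w_in_world, w_in_world);
      shadow.max_t = dist_to_light - EPS_F;
      Intersection blocker;
      if (!bvh->intersect(shadow, &blocker))
        L_light += radiance * isect.bsdf->f(w_out, w_in) * w_in.z / pdf;
    }
    L_out += L_light / ns;  // average this light's samples
  }
  return L_out;
}
```

Dividing by each sample's pdf is what compensates for the non-uniform, light-directed sampling distribution described in step 3.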
Uniform Hemisphere Sampling | Light Sampling
---|---
[image table: side-by-side renderings produced with uniform hemisphere sampling (left column) and light importance sampling (right column) at increasing numbers of light rays]
+ When focusing on a scene with at least one area light, such as the CBbunny.dae file, and comparing the noise levels in soft shadows while rendering with varying numbers of light rays (1, 4, 16, and 64) using light sampling (not uniform hemisphere sampling), we observe a clear trend in the quality of the rendered images, particularly in terms of noise reduction and shadow softness.
+
+ 1 Light Ray: With just a single light ray, the rendering of the bunny scene exhibits pronounced noise within the soft shadows. This is because the sampling is highly limited, capturing only a fraction of the possible light paths from the area light to the surface. The result is a shadow that, while directionally correct, lacks smoothness and detail, with stark contrasts between light and dark areas due to the insufficient sampling of the light's distribution.
+
+ 4 Light Rays: Increasing the number of light rays to four begins to mitigate the noise within the soft shadows, as more potential light paths are sampled and contribute to the final image. The shadows start to show a bit more gradation, with a slight improvement in the transition between light and shadow regions. However, noise remains visible, indicating that while four samples provide a better approximation of the light's effect, it's still not enough to capture the full complexity of the lighting in the scene.
+
+ 16 Light Rays: At sixteen light rays, there's a noticeable improvement in the quality of the soft shadows. The increased number of samples allows for a more accurate representation of the area light's influence, smoothing out the transitions between light and dark areas and significantly reducing noise. The shadows appear more natural, with subtle gradations that enhance the realism of the scene. This level of sampling strikes a balance between computational cost and visual fidelity, offering a clearer and more detailed rendering.
+
+ 64 Light Rays: Rendering the scene with sixty-four light rays further refines the soft shadows, virtually eliminating noise in these areas. With this high number of samples, the light's distribution is captured with great precision, resulting in smooth, realistic shadows that accurately reflect the area light's subtleties. The shadows blend seamlessly into the light, with very soft edges and a depth that adds dimensionality to the scene. This level of detail comes at a higher computational cost but achieves the highest quality in terms of shadow softness and overall lighting realism.
+
+ Comparing the results between uniform hemisphere sampling and light importance sampling for the bunny.dae file reveals distinct differences in image quality, particularly in the rendering of shadows and illumination effects. Uniform hemisphere sampling, by distributing samples evenly across the hemisphere above each point on the surface, tends to produce images with softer shadows and a more diffuse lighting effect. This method, while simple and unbiased, can lead to increased noise and less accurate representations of direct lighting effects, especially in scenes with strong, directional light sources. On the other hand, light importance sampling, which strategically focuses sampling efforts towards the directions of significant light sources, results in images with more accurately rendered shadows and highlights. This method significantly reduces noise in the rendered image and enhances the realism of lighting effects, such as the sharpness of shadows and the brightness of illuminated areas. In the context of the bunny.dae file, light importance sampling would likely produce a more visually appealing and realistic rendering, capturing the nuances of light interaction with the bunny's geometry more effectively than uniform hemisphere sampling. +
The function at_least_one_bounce_radiance computes the radiance (light energy per unit area per unit solid angle) arriving at a point after at least one bounce off surfaces in the scene. This is a key component of global illumination, which simulates indirect lighting effects such as color bleeding, soft shadows, and the interplay of light between different surfaces. Our implementation proceeds as follows:
+Coordinate System Transformation: At the beginning of the function, a local coordinate system is established at the intersection point (isect) on a surface, with the surface normal (isect.n) aligned with the Z-axis of this local system. This simplifies the calculations by providing a consistent frame of reference for sampling and evaluating the BSDF (Bidirectional Scattering Distribution Function).
+Base Case Handling: The function first checks the depth of the incoming ray (r.depth). If the depth is 0, it means no bounces have occurred, and it returns the result of zero_bounce_radiance, which typically accounts for direct lighting or an ambient term. If the depth is 1, it returns one_bounce_radiance, which accounts for light after exactly one bounce. This step is crucial for recursion termination and ensures that the function can compute lighting for scenes with varying levels of indirect illumination complexity.
+Sampling a New Direction: Using the BSDF at the intersection point, a new direction for the ray (w_in) is sampled. This new direction is where the next bounce of light will be simulated. The BSDF also returns a value (f) representing the fraction of light reflected in the new direction and a probability density function (pdf) value, which is used for importance sampling.
+Russian Roulette Termination: This is a technique used to probabilistically terminate the recursion of light bounces. It helps in reducing the computation time by stopping the recursion on less significant light paths while ensuring energy conservation through proper weighting of the paths that are continued.
+Recursive Ray Tracing: A new ray (bounce_ray) is created in the sampled direction from just above the surface (to avoid self-intersection) and traced into the scene. If this ray intersects another surface (bvh->intersect(bounce_ray, &new_isect)), the function recursively calculates the radiance for this new intersection point, simulating an additional bounce of light.
+Accumulating Radiance: The radiance returned from the recursive call is weighted by the BSDF value (f), the cosine of the angle between the new direction and the surface normal (dot(w_in_world, isect.n)), and inversely by the PDF value. This weighted radiance is then added to the outgoing radiance (L_out). If isAccumBounces is true, the function also adds the radiance from the current bounce (bounce_radiance) to L_out. This accumulation simulates the additive nature of light, where each bounce contributes to the total illumination.
+Returning the Result: The function returns the accumulated radiance (L_out) for the point, after at least one bounce of light. If no new intersection is found for the bounce ray, it returns a zero vector, indicating no contribution from that path.
+Overall, this function simulates indirect lighting by recursively tracing rays bounced off surfaces and accumulating their contributions to the total radiance. It combines BSDF sampling for directionality, Russian roulette for efficiency, and recursive ray tracing to simulate the complex interactions of light within a scene, capturing the subtle effects that contribute to the overall appearance of the scene. A sketch of the function follows.
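The sketch below mirrors the recursion described above (skeleton-shaped but illustrative; coin_flip(p) is assumed to return true with probability p, and the continuation probability is an assumed constant). It shows the accumulating case; the isAccumBounces flag mentioned above would gate whether each bounce's radiance is added to L_out or only the deepest bounce is returned.

```cpp
// Sketch of at_least_one_bounce_radiance with Russian roulette.
Vector3D PathTracer::at_least_one_bounce_radiance(const Ray &r,
                                                  const Intersection &isect) {
  Matrix3x3 o2w;
  make_coord_space(o2w, isect.n);
  Matrix3x3 w2o = o2w.T();
  Vector3D hit_p = r.o + r.d * isect.t;
  Vector3D w_out = w2o * (-r.d);

  // Direct lighting at this vertex (the "one bounce" part).
  Vector3D L_out = one_bounce_radiance(r, isect);
  if (r.depth <= 1) return L_out;  // base case: no more bounces allowed

  // Sample the BSDF for the next bounce direction.
  Vector3D w_in;
  double pdf;
  Vector3D f = isect.bsdf->sample_f(w_out, &w_in, &pdf);

  // Russian roulette: continue with probability cpdf, reweight by 1/cpdf.
  double cpdf = 0.7;  // assumed continuation probability
  if (coin_flip(cpdf)) {
    Vector3D w_in_world = o2w * w_in;
    Ray bounce_ray(hit_p + EPS_F * w_in_world, w_in_world);
    bounce_ray.depth = r.depth - 1;

    Intersection new_isect;
    if (bvh->intersect(bounce_ray, &new_isect)) {
      Vector3D L_bounce = at_least_one_bounce_radiance(bounce_ray, new_isect);
      // Weight by BSDF value, cosine term, PDF, and roulette probability.
      L_out += L_bounce * f * w_in.z / pdf / cpdf;
    }
  }
  return L_out;
}
```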
[image table: global illumination renderings]
For direct vs. indirect illumination, the differences are substantial.
+Direct Illumination Only: Rendering with direct illumination involves calculating light that comes directly from light sources without any bounce off surfaces. The bunny would appear relatively flat, with hard shadows directly opposite the light sources. The parts of the bunny facing the light sources would be well-lit, while areas facing away would be much darker. There would be a lack of soft shadows and the subtle color bleeding effects that come from light bouncing off the walls and the bunny itself.
+Indirect Illumination Only: This involves light that has bounced at least twice off surfaces. Without direct illumination, the scene would initially appear much darker since the primary light paths from light sources to the camera are not considered. However, as indirect illumination accumulates from multiple bounces, the scene would reveal more nuanced shading and color variations. The bunny would show softer shadows, and areas not directly visible to light sources could still be illuminated through light bouncing off other surfaces. This would result in a more realistic and visually rich image, with subtle details and color variations that are not present with direct illumination alone.
[image table: direct-only vs. indirect-only illumination renderings]
Max Ray Depth = 0: Renders with no light bounces, essentially capturing direct illumination only. The result is similar to the direct illumination scenario described above.
+Max Ray Depth = 1: Includes one bounce of light, allowing for some indirect illumination. Shadows start to soften, and there might be a slight color bleed from the floor or walls onto the bunny, enhancing realism.
+Max Ray Depth = 2 and 3 (Second and Third Bounce): With two and three bounces, indirect lighting effects become more pronounced. By the second bounce, you'd notice significant improvements in the softness of shadows and the presence of color bleeding, where light colors from the environment start to tint the bunny subtly. The third bounce enhances these effects further, filling in shadows more and adding to the overall illumination of the scene. These bounces contribute significantly to the quality of the image, making it appear more natural and realistic compared to rasterization, which cannot simulate these complex light interactions.
+Max Ray Depth = 4 and 5: Additional bounces refine the lighting even more, with diminishing returns. The scene is fully illuminated, with very soft shadows and a very natural blend of colors from the environment. The image has a high degree of realism, with light behaving as it does in the real world.
[image tables: bunny renderings at max ray depth 0 through 5 and Russian roulette renderings]
+ We used the -s flag to adjust the number of camera rays per pixel, fixing 4 light rays across all images. Varying the samples per pixel rate dramatically affects the noise level and clarity of the rendered image: +
+Low Samples (1, 2, 4, 8, 16): The rendered images show significant noise, especially in areas requiring complex light calculations like soft shadows and indirect illumination. The images progressively improve in quality as the sample rate increases, with noise levels decreasing and details becoming clearer.
+Higher Samples (64, 128, 1024): With 64 samples, the image is much cleaner, with most noise eliminated and details well-represented. At 1024 samples per pixel, the image is very smooth and detailed, with very accurate lighting effects. The trade-off is the increased computational cost, but the result is a high-quality image that closely approximates real-world lighting.
+ +Overall, the visual outcome of rendering a scene with a bunny model varies significantly with the rendering settings. Direct illumination provides a basic understanding of the scene's lighting but lacks the subtlety and richness of indirect illumination. Increasing the max ray depth enhances the realism of the scene by simulating complex light interactions, with diminishing returns as the depth increases. Russian Roulette optimizes the rendering process without sacrificing visual quality. Finally, increasing the samples per pixel rate reduces noise and improves image clarity, essential for capturing detailed and realistic lighting effects.
+Adaptive sampling is a technique used in computer graphics, particularly in rendering, to optimize the allocation of computational resources while maintaining or improving image quality. The core idea behind adaptive sampling is to vary the number of samples taken across different parts of an image based on the complexity or variance within those areas. Regions of an image that are more complex or have higher variance (e.g., sharp edges, detailed textures, or areas with significant lighting changes) receive more samples to accurately capture the detail and minimize noise. In contrast, simpler areas with less variance (e.g., flat surfaces or uniform colors) require fewer samples, as additional samples would not significantly improve image quality.
+Batch Sampling and Radiance Accumulation: In a nuanced approach to adaptive sampling, radiance values are accumulated in batches. For each batch, a set number of samples (samplesPerBatch) is processed, with each sample's radiance contributing to both the sum and squared sum of illuminances. This batch processing allows for a more granular assessment of variance within the pixel, facilitating a responsive adjustment to the sampling rate based on real-time feedback from the scene's lighting complexity.
+Variance Calculation and Convergence Checking: The algorithm calculates the mean and variance of the sample illuminances, using these statistical measures to assess the uniformity of the lighting within the pixel. A key element of this process is a convergence criterion based on the standard deviation of the illuminance values: the confidence-interval half-width of the mean illuminance, I, is compared against a predefined tolerance level (maxTolerance) times the mean. Sampling continues iteratively until this convergence criterion is met or the maximum number of samples is reached, ensuring that pixels with higher variance receive more attention to detail (a sketch of this check appears below).
+Final Integration and Termination: Once the adaptive sampling criterion is satisfied, the average radiance for the pixel is calculated by dividing the total radiance sum by the number of samples taken. This average radiance is then used to update the pixel's color in the sample buffer. Additionally, the final number of samples processed for each pixel is recorded, providing insight into the adaptive sampling process's efficiency and effectiveness.
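A sketch of the convergence check, using the running sums described above (s1 = sum of per-sample illuminance, s2 = sum of squared illuminance; the 1.96 factor is the usual 95% confidence multiplier):

```cpp
#include <cmath>

// Returns true when the pixel's illuminance has converged:
//   mean       mu      = s1 / n
//   variance   sigma^2 = (s2 - s1*s1 / n) / (n - 1)
//   half-width I       = 1.96 * sigma / sqrt(n)
// converged when I <= maxTolerance * mu.
bool pixel_converged(double s1, double s2, int n, double maxTolerance) {
  double mean = s1 / n;
  double var  = (s2 - s1 * s1 / n) / (n - 1);
  double I    = 1.96 * std::sqrt(var / n);
  return I <= maxTolerance * mean;
}
```

In the pixel loop this check runs once per batch of samplesPerBatch samples rather than after every sample, keeping the bookkeeping cheap relative to the tracing itself.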
+A challenge we faced was not realizing the effect Russian roulette had on our sample statistics: its random early termination conflicted with our adaptive sampling. Once we disabled Russian roulette for these renders, our sampling methods worked as expected.
+Overall, by applying the intended formulas for adaptive sampling, we were able to reduce sampling in low-variance regions of the scene and concentrate effort where it matters, improving and optimizing the overall sampling process.
[image table: adaptive sampling renderings with their sample-rate visualizations]