<meta charset="utf-8">
<!-- Markdeep: https://casual-effects.com/markdeep/ -->
**Ray Tracing: The Rest of Your Life**
[Peter Shirley][]
edited by [Steve Hollasch][] and [Trevor David Black][]
<br>
Version 3.2.3, 2020-12-07
<br>
Copyright 2018-2020 Peter Shirley. All rights reserved.
Overview
====================================================================================================
In _Ray Tracing in One Weekend_ and _Ray Tracing: The Next Week_, you built a “real” ray tracer.
In this volume, I assume you will be pursuing a career related to ray tracing, and we will dive into
the math of creating a very serious ray tracer. When you are done you should be ready to start
messing with the many serious commercial ray tracers underlying the movie and product design
industries. There are many many things I do not cover in this short volume; I dive into only one of
many ways to write a Monte Carlo rendering program. I don’t do shadow rays (instead I make rays more
likely to go toward lights), bidirectional methods, Metropolis methods, or photon mapping. What I do
is speak in the language of the field that studies those methods. I think of this book as a deep
exposure that can be your first of many, and it will equip you with some of the concepts, math, and
terms you will need to study the others.
As before, https://in1weekend.blogspot.com/ will have further readings and references.
Thanks to everyone who lent a hand on this project. You can find them in the acknowledgments section
at the end of this book.
A Simple Monte Carlo Program
====================================================================================================
Let’s start with one of the simplest Monte Carlo (MC) programs. MC programs give a statistical
estimate of an answer, and this estimate gets more and more accurate the longer you run it. This
basic characteristic of simple programs producing noisy but ever-better answers is what MC is all
about, and it is especially good for applications like graphics where great accuracy is not needed.
Estimating Pi
--------------
<div class='together'>
As an example, let’s estimate $\pi$. There are many ways to do this, with the Buffon Needle
problem being a classic case study. We’ll do a variation inspired by that. Suppose you have a circle
inscribed inside a square:
![Figure [circ-square]: Estimating π with a circle inside a square
](../images/fig-3.01-circ-square.jpg)
</div>
<div class='together'>
Now, suppose you pick random points inside the square. The fraction of those random points that end
up inside the circle should be proportional to the area of the circle. The exact fraction should in
fact be the ratio of the circle area to the square area. Fraction:
$$ \frac{\pi r^2}{(2r)^2} = \frac{\pi}{4} $$
</div>
<div class='together'>
Since the $r$ cancels out, we can pick whatever is computationally convenient. Let’s go with $r=1$,
centered at the origin:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#include "rtweekend.h"
#include <iostream>
#include <iomanip>
#include <math.h>
#include <stdlib.h>
int main() {
    int N = 1000;
    int inside_circle = 0;
    for (int i = 0; i < N; i++) {
        auto x = random_double(-1,1);
        auto y = random_double(-1,1);
        if (x*x + y*y < 1)
            inside_circle++;
    }

    std::cout << std::fixed << std::setprecision(12);
    std::cout << "Estimate of Pi = " << 4*double(inside_circle) / N << '\n';
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [estpi-1]: <kbd>[pi.cc]</kbd> Estimating π]
</div>
The estimate of $\pi$ will vary from computer to computer, based on the initial random seed.
On my computer, this gives me the answer `Estimate of Pi = 3.0880000000`
Showing Convergence
--------------------
<div class='together'>
If we change the program to run forever and just print out a running estimate:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#include "rtweekend.h"
#include <iostream>
#include <iomanip>
#include <math.h>
#include <stdlib.h>
int main() {
    int inside_circle = 0;
    int runs = 0;
    std::cout << std::fixed << std::setprecision(12);
    while (true) {
        runs++;
        auto x = random_double(-1,1);
        auto y = random_double(-1,1);
        if (x*x + y*y < 1)
            inside_circle++;

        if (runs % 100000 == 0)
            std::cout << "Estimate of Pi = "
                      << 4*double(inside_circle) / runs
                      << '\n';
    }
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [estpi-2]: <kbd>[pi.cc]</kbd> Estimating π, v2]
</div>
Stratified Samples (Jittering)
-------------------------------
<div class='together'>
We get very quickly near $\pi$, and then more slowly zero in on it. This is an example of the *Law
of Diminishing Returns*, where each sample helps less than the last. This is the worst part of MC.
We can mitigate this diminishing return by *stratifying* the samples (often called *jittering*),
where instead of taking random samples, we take a grid and take one sample within each:
![Figure [jitter]: Sampling areas with jittered points](../images/fig-3.02-jitter.jpg)
</div>
<div class='together'>
This changes the sample generation, but we need to know how many samples we are taking in advance
because we need to know the grid. Let’s take a hundred million and try it both ways:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#include "rtweekend.h"
#include <iostream>
#include <iomanip>
int main() {
    int inside_circle = 0;
    int inside_circle_stratified = 0;

    int sqrt_N = 10000;
    for (int i = 0; i < sqrt_N; i++) {
        for (int j = 0; j < sqrt_N; j++) {
            auto x = random_double(-1,1);
            auto y = random_double(-1,1);
            if (x*x + y*y < 1)
                inside_circle++;

            x = 2*((i + random_double()) / sqrt_N) - 1;
            y = 2*((j + random_double()) / sqrt_N) - 1;
            if (x*x + y*y < 1)
                inside_circle_stratified++;
        }
    }

    auto N = static_cast<double>(sqrt_N) * sqrt_N;
    std::cout << std::fixed << std::setprecision(12);
    std::cout
        << "Regular    Estimate of Pi = "
        << 4*double(inside_circle) / N << '\n'
        << "Stratified Estimate of Pi = "
        << 4*double(inside_circle_stratified) / N << '\n';
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [estpi-3]: <kbd>[pi.cc]</kbd> Estimating π, v3]
On my computer, I get:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Regular Estimate of Pi = 3.14151480
Stratified Estimate of Pi = 3.14158948
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
</div>
Interestingly, the stratified method is not only better, it converges with a better asymptotic rate!
Unfortunately, this advantage decreases with the dimension of the problem (so for example, with the
3D sphere volume version the gap would be less). This is called the *Curse of Dimensionality*. We
are going to be very high dimensional (each reflection adds two dimensions), so I won't stratify in
this book, but if you are ever doing single-reflection or shadowing or some strictly 2D problem, you
definitely want to stratify.
One Dimensional MC Integration
====================================================================================================
Integration is all about computing areas and volumes, so we could have framed
chapter [A Simple Monte Carlo Program] in an integral form if we wanted to make it maximally
confusing. But sometimes integration is the most natural and clean way to formulate things.
Rendering is often such a problem.
Integrating x²
---------------
Let’s look at a classic integral:
$$ I = \int_{0}^{2} x^2 dx $$
<div class='together'>
In computer sciency notation, we might write this as:
$$ I = \text{area}( x^2, 0, 2 ) $$
We could also write it as:
$$ I = 2 \cdot \text{average}(x^2, 0, 2) $$
</div>
<div class='together'>
This suggests a MC approach:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#include "rtweekend.h"
#include <iostream>
#include <iomanip>
#include <math.h>
#include <stdlib.h>
int main() {
    int N = 1000000;
    auto sum = 0.0;
    for (int i = 0; i < N; i++) {
        auto x = random_double(0,2);
        sum += x*x;
    }
    std::cout << std::fixed << std::setprecision(12);
    std::cout << "I = " << 2 * sum/N << '\n';
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [integ-xsq-1]: <kbd>[integrate_x_sq.cc]</kbd> Integrating $x^2$]
</div>
This, as expected, produces approximately the exact answer we get with algebra, $I = 8/3$. We could
also do it for functions that we can’t analytically integrate like $\log(\sin(x))$. In graphics, we
often have functions we can evaluate but can’t write down explicitly, or functions we can only
probabilistically evaluate. That is in fact what the ray tracing `ray_color()` function of the last
two books is -- we don’t know what color is seen in every direction, but we can statistically
estimate it in any given direction.
One problem with the random program we wrote in the first two books is that small light sources
create too much noise. This is because our uniform sampling doesn’t sample these light sources often
enough. Light sources are only sampled if a ray scatters toward them, but this can be unlikely for a
small light, or a light that is far away. We could lessen this problem if we sent more random
samples toward this light, but this will cause the scene to be inaccurately bright. We can remove
this inaccuracy by downweighting these samples to adjust for the over-sampling. How do we do that
adjustment? For that, we will need the concept of a _probability density function_.
Density Functions
------------------
<div class='together'>
First, what is a _density function_? It’s just a continuous form of a histogram. Here’s an example
from the histogram Wikipedia page:
![Figure [histogram]: Histogram example](../images/fig-3.03-histogram.jpg)
</div>
<div class='together'>
If we added data for more trees, the histogram would get taller. If we divided the data into more
bins, it would get shorter. A discrete density function differs from a histogram in that it
normalizes the frequency y-axis to a fraction or percentage (just a fraction times 100). A
continuous histogram, where we take the number of bins to infinity, can’t be a fraction because the
height of all the bins would drop to zero. A density function is one where we take the bins and
adjust them so they don’t get shorter as we add more bins. For the case of the tree histogram above
we might try:
$$ \text{bin-height} = \frac{(\text{Fraction of trees between height }H\text{ and }H')}{(H-H')} $$
</div>
<div class='together'>
That would work! We could interpret that as a statistical predictor of a tree’s height:
$$ \text{Probability a random tree is between } H \text{ and } H' = \text{bin-height}\cdot(H-H')$$
</div>
If we wanted to know about the chances of being in a span of multiple bins, we would sum.
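For example, with hypothetical bin boundaries $H_0 < H_1 < H_2$ (these labels are just for
illustration), the chance of landing anywhere in the two adjacent bins is the sum of the per-bin
probabilities:

$$ \text{Probability } H_0 < h < H_2
   = \text{bin-height}_1 \cdot (H_1 - H_0) + \text{bin-height}_2 \cdot (H_2 - H_1) $$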
A _probability density function_, henceforth _PDF_, is that fractional histogram made continuous.
Constructing a PDF
-------------------
Let’s make a _PDF_ and use it a bit to understand it more. Suppose I want a random number $r$
between 0 and 2 whose probability is proportional to itself: $r$. We would expect the PDF $p(r)$ to
look something like the figure below, but how high should it be?
![Figure [linear-pdf]: A linear PDF](../images/fig-3.04-linear-pdf.jpg)
<div class='together'>
The height is just $p(2)$. What should that be? We could reasonably make it anything by
convention, and we should pick something that is convenient. Just as with histograms we can sum up
(integrate) the region to figure out the probability that $r$ is in some interval $(x_0,x_1)$:
$$ \text{Probability } x_0 < r < x_1 = C \cdot \text{area}(p(r), x_0, x_1) $$
</div>
<div class='together'>
where $C$ is a scaling constant. We may as well make $C = 1$ for cleanliness, and that is exactly
what is done in probability. We also know that $r$ must take _some_ value in $[0,2]$, so the total
probability is 1. For this case that means
$$ \text{area}(p(r), 0, 2) = 1 $$
</div>
<div class='together'>
Since $p(r)$ is proportional to $r$, _i.e._, $p = C' \cdot r$ for some other constant $C'$
$$
\text{area}(C'r, 0, 2) = \int_{0}^{2} C' r \, dr
    = \frac{C'r^2}{2} \biggr|_{r=0}^{r=2}
    = \frac{C' \cdot 2^2}{2} - \frac{C' \cdot 0^2}{2}
    = 2C'
$$
Since that area must equal 1, we get $C' = \frac{1}{2}$, and so $p(r) = r/2$.
</div>
How do we generate a random number with that PDF $p(r)$? For that we will need some more machinery.
Don’t worry, this doesn’t go on forever!
<div class='together'>
Given a random number from `d = random_double()` that is uniform and between 0 and 1, we should be
able to find some function $f(d)$ that gives us what we want. Suppose $e = f(d) = d^2$. This is no
longer a uniform PDF. The PDF of $e$ will be bigger near 0 than it is near 1 (squaring a number
between 0 and 1 makes it smaller, so values pile up near 0). To convert this general observation to
a function, we need the
cumulative probability distribution function $P(x)$:
$$ P(x) = \text{area}(p, -\infty, x) $$
</div>
<div class='together'>
Note that for $x$ where we didn’t define $p(x)$, $p(x) = 0$, _i.e._, the probability of an $x$ there
is zero. For our example PDF $p(r) = r/2$, the $P(x)$ is:
$$ P(x) = 0 : x < 0 $$
$$ P(x) = \frac{x^2}{4} : 0 < x < 2 $$
$$ P(x) = 1 : x > 2 $$
</div>
<div class='together'>
One question is, what’s up with $x$ versus $r$? They are dummy variables -- analogous to the
function arguments in a program. If we evaluate $P$ at $x = 1.0$, we get:
$$ P(1.0) = \frac{1}{4} $$
</div>
<div class='together'>
This says _the probability that a random variable with our PDF is less than one is 25%_. This gives
rise to a clever observation that underlies many methods to generate non-uniform random numbers. We
want a function `f()` that when we call it as `f(random_double())` we get a return value with a PDF
$\frac{x^2}{4}$. We don’t know what that is, but we do know that 25% of what it returns should be
less than 1.0, and 75% should be above 1.0. If $f()$ is increasing, then we would expect $f(0.25) =
1.0$. This can be generalized to figure out $f()$ for every possible input:
$$ f(P(x)) = x $$
</div>
<div class='together'>
That means $f$ just undoes whatever $P$ does. So,
$$ f(x) = P^{-1}(x) $$
</div>
<div class='together'>
The -1 means “inverse function”. Ugly notation, but standard. For our purposes, if we have PDF $p()$
and cumulative distribution function $P()$, we can use this "inverse function" with a random number
to get what we want:
$$ e = P^{-1} (\text{random_double}()) $$
</div>
<div class='together'>
For our PDF $p(x) = x/2$, and corresponding $P(x)$, we need to compute the inverse of $P$. If we
have
$$ y = \frac{x^2}{4} $$
we get the inverse by solving for $x$ in terms of $y$:
$$ x = \sqrt{4y} $$
Thus our random number with density $p$ is found with:
$$ e = \sqrt{4\cdot\text{random_double}()} $$
</div>
Note that this ranges from 0 to 2 as hoped, and if we check our work by replacing `random_double()`
with $\frac{1}{4}$ we get 1 as expected.
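As a quick numerical sanity check -- a minimal sketch reusing `random_double()` from
`rtweekend.h`, not one of the book's listings -- we can draw a pile of samples this way and confirm
that about 25% of them land below 1.0:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#include "rtweekend.h"

#include <cmath>
#include <iostream>
#include <iomanip>

int main() {
    int N = 1000000;
    int below_one = 0;
    for (int i = 0; i < N; i++) {
        // Sample with PDF p(x) = x/2 by inverting P(x) = x^2/4
        auto e = sqrt(4*random_double());
        if (e < 1.0)
            below_one++;
    }

    // Should print something close to 0.25, matching P(1.0) = 1/4
    std::cout << std::fixed << std::setprecision(6);
    std::cout << "Fraction below 1.0 = " << double(below_one) / N << '\n';
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~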
<div class='together'>
We can now sample our old integral
$$ I = \int_{0}^{2} x^2 \, dx $$
</div>
<div class='together'>
We need to account for the non-uniformity of the PDF of $x$. Where we sample too much we should
down-weight. The PDF is a perfect measure of how much or little sampling is being done. So the
weighting function should be proportional to $1/pdf$. In fact it is exactly $1/pdf$:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
inline double pdf(double x) {
    return 0.5*x;
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
int main() {
    int N = 1000000;
    auto sum = 0.0;
    for (int i = 0; i < N; i++) {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
        auto x = sqrt(random_double(0,4));
        sum += x*x / pdf(x);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
    }
    std::cout << std::fixed << std::setprecision(12);
    std::cout << "I = " << sum/N << '\n';
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [integ-xsq-2]: <kbd>[integrate_x_sq.cc]</kbd> Integrating $x^2$ with PDF]
</div>
Importance Sampling
--------------------
Since we are sampling more where the integrand is big, we might expect less noise and thus faster
convergence. In effect, we are steering our samples toward the parts of the distribution that are
more _important_. This is why using a carefully chosen non-uniform PDF is usually called _importance
sampling_.
<div class='together'>
If we take that same code with uniform samples, so that the PDF is $1/2$ over the range $[0,2]$, we can use
the machinery to get `x = random_double(0,2)`, and the code is:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
inline double pdf(double x) {
    return 0.5;
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
int main() {
    int N = 1000000;
    auto sum = 0.0;
    for (int i = 0; i < N; i++) {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
        auto x = random_double(0,2);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
        sum += x*x / pdf(x);
    }
    std::cout << std::fixed << std::setprecision(12);
    std::cout << "I = " << sum/N << '\n';
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [integ-xsq-3]: <kbd>[integrate_x_sq.cc]</kbd> Integrating $x^2$, v3]
</div>
<div class='together'>
Note that we don’t need that 2 in the `2*sum/N` anymore -- the PDF is $1/2$, so dividing by it
supplies that factor of 2 for us. You’ll note that importance sampling helps a little, but not a
ton. We could make the PDF follow the integrand exactly:
$$ p(x) = \frac{3}{8}x^2 $$
And we get the corresponding
$$ P(x) = \frac{x^3}{8} $$
and
$$ P^{-1}(x) = (8x)^\frac{1}{3} $$
</div>
<div class='together'>
This perfect importance sampling is only possible when we already know the answer (we got $P$ by
integrating $p$ analytically), but it’s a good exercise to make sure our code works. For just 1
sample we get:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
inline double pdf(double x) {
    return 3*x*x/8;
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
int main() {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
    int N = 1;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
    auto sum = 0.0;
    for (int i = 0; i < N; i++) {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
        auto x = pow(random_double(0,8), 1./3.);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
        sum += x*x / pdf(x);
    }
    std::cout << std::fixed << std::setprecision(12);
    std::cout << "I = " << sum/N << '\n';
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [integ-xsq-4]: <kbd>[integrate_x_sq.cc]</kbd> Integrating $x^2$, final version]
</div>
This always returns the exact answer.
<div class='together'>
Let’s review now because that was most of the concepts that underlie MC ray tracers.
1. You have an integral of $f(x)$ over some domain $[a,b]$
2. You pick a PDF $p$ that is non-zero over $[a,b]$
3. You average a whole ton of $\frac{f(r)}{p(r)}$ where $r$ is a random number with PDF $p$.
Any choice of PDF $p$ will always converge to the right answer, but the closer that $p$
approximates $f$, the faster that it will converge.
</div>
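To make that recipe concrete, here is a small sketch -- the names `estimate`, `f`, `pdf`, and
`sample` are illustrative only, not part of the book's code -- that averages $f(r)/p(r)$ for
whatever sampling strategy you plug in:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#include "rtweekend.h"

#include <cmath>
#include <iostream>
#include <iomanip>

// Generic MC estimator: average f(r)/p(r) over N samples r drawn with PDF p
template <typename F, typename Pdf, typename Sample>
double estimate(int N, F f, Pdf pdf, Sample sample) {
    auto sum = 0.0;
    for (int i = 0; i < N; i++) {
        auto r = sample();        // a random number with PDF p
        sum += f(r) / pdf(r);     // down-weight wherever we over-sample
    }
    return sum / N;
}

int main() {
    // Integrate x^2 on [0,2] with the linear PDF p(x) = x/2 from earlier
    auto I = estimate(
        1000000,
        [](double x) { return x*x; },               // f
        [](double x) { return 0.5*x; },             // p
        []() { return sqrt(4*random_double()); }    // sample with PDF p
    );
    std::cout << std::fixed << std::setprecision(12);
    std::cout << "I = " << I << '\n';
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Swapping in `[](double) { return 0.5; }` and `[]() { return random_double(0,2); }` reproduces the
uniform-sampling version above.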
MC Integration on the Sphere of Directions
====================================================================================================
In our ray tracer we pick random directions, and directions can be represented as points on the
unit sphere. The same methodology as before applies, but now we need to have a PDF defined over 2D.
Suppose we have this integral over all directions:
$$ \int \cos^2(\theta) $$
By MC integration, we should just be able to sample $\cos^2(\theta) / p(\text{direction})$, but what
is _direction_ in that context? We could make it based on polar coordinates, so $p$ would be in
terms of $(\theta, \phi)$. However you do it, remember that a PDF has to integrate to 1 and
represent the relative probability of that direction being sampled. Recall that we have vec3
functions to take uniform random samples in (`random_in_unit_sphere()`) or on
(`random_unit_vector()`) a unit sphere.
<div class='together'>
Now what is the PDF of these uniform points? As a density on the unit sphere, it is $1/\text{area}$
of the sphere or $1/(4\pi)$. If the integrand is $\cos^2(\theta)$, and $\theta$ is the angle with
the z axis:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
inline double pdf(const vec3& p) {
    return 1 / (4*pi);
}

int main() {
    int N = 1000000;
    auto sum = 0.0;
    for (int i = 0; i < N; i++) {
        vec3 d = random_unit_vector();
        auto cosine_squared = d.z()*d.z();
        sum += cosine_squared / pdf(d);
    }
    std::cout << std::fixed << std::setprecision(12);
    std::cout << "I = " << sum/N << '\n';
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [main-sphereimp]: <kbd>[sphere_importance.cc]</kbd>
Generating importance-sampled points on the unit sphere]
</div>
The analytic answer (if you remember enough advanced calc, check me!) is $\frac{4}{3} \pi$, and the
code above produces that. Next, we are ready to apply that in ray tracing!
The key point here is that all the integrals and probability and all that are over the unit sphere.
The area on the unit sphere is how you measure the directions. Call it direction, solid angle, or
area -- it’s all the same thing. Solid angle is the term usually used. If you are comfortable with
that, great! If not, do what I do and imagine the area on the unit sphere that a set of directions
goes through. The solid angle $\omega$ and the projected area $A$ on the unit sphere are the same
thing.
![Figure [solid-angle]: Solid angle / projected area of a sphere
](../images/fig-3.05-solid-angle.jpg)
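One way to internalize that equivalence is to measure a solid angle by counting directions. The
sketch below is only a sanity check -- it assumes the `random_unit_vector()` helper and the `pi`
constant from the previous books -- and estimates the solid angle of the cap of directions with
$\cos(\theta) > \frac{1}{2}$; the analytic answer is $2\pi(1 - \frac{1}{2}) = \pi$:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#include "rtweekend.h"

#include <iostream>
#include <iomanip>

int main() {
    int N = 1000000;
    int in_cap = 0;
    for (int i = 0; i < N; i++) {
        vec3 d = random_unit_vector();   // uniform direction on the unit sphere
        if (d.z() > 0.5)                 // inside the cap around +z where cos(theta) > 1/2
            in_cap++;
    }

    // (fraction of the sphere's area) * (total area 4*pi) = solid angle of the cap
    std::cout << std::fixed << std::setprecision(12);
    std::cout << "Cap solid angle = " << 4*pi*double(in_cap)/N << '\n';
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~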
Now let’s go on to the light transport equation we are solving.
Light Scattering
====================================================================================================
In this chapter we won't actually program anything. We will set up for a big lighting change in the
next chapter.
Albedo
-------
Our program from the last books already scatters rays from a surface or volume. This is the commonly
used model for light interacting with a surface. One natural way to model this is with probability.
First, is the light absorbed?

Probability of light scattering: $A$

Probability of light being absorbed: $1-A$
Here $A$ stands for _albedo_ (Latin for _whiteness_). Albedo is a precise technical term in some
disciplines, but in all cases it is used to define some form of _fractional reflectance_. This
_fractional reflectance_ (or albedo) will vary with color and (as we implemented for our glass in
book one) can vary with incident direction.
Scattering
-----------
In most physically based renderers, we would use a set of wavelengths for the light color rather
than RGB. We can extend our intuition by thinking of R, G, and B as specific algebraic mixtures of
long, medium, and short wavelengths.
If the light does scatter, it will have a directional distribution that we can describe as a PDF
over solid angle. I will refer to this as its _scattering PDF_: $s(direction)$. The scattering PDF
can also vary with _incident direction_, which is the direction of the incoming ray. You can see
this varying with incident direction when you look at reflections off a road -- they become
mirror-like as your viewing angle (incident angle) approaches grazing.
<div class='together'>
The color of a surface in terms of these quantities is:
$$ Color = \int A \cdot s(direction) \cdot \text{color}(direction) $$
</div>
Note that $A$ and $s()$ may depend on the view direction or the scattering position (position on a
surface or position within a volume). Therefore, the output color may also vary with view direction
or scattering position.
The Scattering PDF
-------------------
<div class='together'>
If we apply the MC basic formula we get the following statistical estimate:
$$ Color = \frac{A \cdot s(direction) \cdot \text{color}(direction)}{p(direction)} $$
where $p(direction)$ is the PDF of whatever direction we randomly generate.
</div>
For a Lambertian surface we already implicitly implemented this formula for the special case where
$p()$ is a cosine density. The $s()$ of a Lambertian surface is proportional to $\cos(\theta)$,
where $\theta$ is the angle relative to the surface normal. Remember that all PDFs need to integrate
to one. For $\cos(\theta) < 0$ we have $s(direction) = 0$, and the integral of $\cos(\theta)$ over the
hemisphere is $\pi$.
<div class='together'>
To see that, remember that in spherical coordinates:
$$ dA = \sin(\theta) d\theta d\phi $$
So:
$$ Area = \int_{0}^{2 \pi} \int_{0}^{\pi / 2} \cos(\theta) \sin(\theta) \, d\theta \, d\phi =
    2 \pi \cdot \frac{1}{2} = \pi $$
</div>
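If you would rather check that constant numerically than trust the calculus, here is a minimal
sketch in the same style as `sphere_importance.cc` above (again assuming `random_unit_vector()` and
`pi` from the previous books). The integrand is $\cos(\theta)$ above the horizon and 0 below, and
the result should come out close to $\pi$:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#include "rtweekend.h"

#include <iostream>
#include <iomanip>

int main() {
    int N = 1000000;
    auto sum = 0.0;
    for (int i = 0; i < N; i++) {
        vec3 d = random_unit_vector();                  // uniform over the whole sphere, PDF 1/(4*pi)
        auto cosine = d.z();
        sum += (cosine > 0 ? cosine : 0) / (1/(4*pi));  // cos(theta) on the upper hemisphere, 0 below
    }

    std::cout << std::fixed << std::setprecision(12);
    std::cout << "Integral of cos(theta) over the hemisphere = " << sum/N << '\n';
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~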
<div class='together'>
So for a Lambertian surface the scattering PDF is:
$$ s(direction) = \frac{\cos(\theta)}{\pi} $$
</div>
<div class='together'>
If we sample using a PDF that equals the scattering PDF:
$$ p(direction) = s(direction) = \frac{\cos(\theta)}{\pi} $$
The numerator and denominator cancel out, and we get:
$$ Color = A \cdot color(direction) $$
This is exactly what we had in our original `ray_color()` function! However, we need to generalize
so we can send extra rays in important directions, such as toward the lights.
The treatment above is slightly non-standard because I want the same math to work for surfaces and
volumes. To do otherwise will make some ugly code.
</div>
<div class='together'>
If you read the literature, you’ll see reflection described by the bidirectional reflectance
distribution function (BRDF). It relates pretty simply to our terms:
$$ BRDF = \frac{A \cdot s(direction)}{\cos(\theta)} $$
So for a Lambertian surface for example, $BRDF = A / \pi$. Translation between our terms and BRDF is
easy.
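To see where that $A / \pi$ comes from, plug the Lambertian scattering PDF into the relation:

$$ BRDF = \frac{A \cdot \cos(\theta) / \pi}{\cos(\theta)} = \frac{A}{\pi} $$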
For participating media (volumes), our albedo is usually called _scattering albedo_, and our
scattering PDF is usually called _phase function_.
</div>
Importance Sampling Materials
====================================================================================================
<div class='together'>
Our goal over the next two chapters is to instrument our program to send a bunch of extra rays
toward light sources so that our picture is less noisy. Let’s assume we can send a bunch of rays
toward the light source using a PDF $pLight(direction)$. Let’s also assume we have a PDF related to
$s$, and let’s call that $pSurface(direction)$. A great thing about PDFs is that you can just use
linear mixtures of them to form mixture densities that are also PDFs. For example, the simplest
would be:
$$ p(direction) = \frac{1}{2} \cdot pLight(direction)
                + \frac{1}{2} \cdot pSurface(direction) $$
</div>
As long as the weights are positive and add up to one, any such mixture of PDFs is a PDF. Remember,
we can use any PDF: _all PDFs eventually converge to the correct answer_. So, the game is to figure
out how to make the PDF larger where the product $s(direction) \cdot color(direction)$ is large. For
diffuse surfaces, this is mainly a matter of guessing where $color(direction)$ is high.
For a mirror, $s()$ is huge only near one direction, so it matters a lot more. Most renderers in
fact make mirrors a special case, and just make the $s/p$ implicit -- our code currently does that.
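Before wiring this into the renderer, here is a hedged 1D sketch of how a mixture density gets used
in practice -- the names `p_uniform`, `p_linear`, and their samplers are illustrative only, reusing
the $x^2$ integral from earlier. The key point: you flip a coin to decide which strategy *generates*
the sample, but you always divide by the *averaged* density:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#include "rtweekend.h"

#include <cmath>
#include <iostream>
#include <iomanip>

// Two component PDFs on [0,2] and a sampler for each
double p_uniform(double)  { return 0.5; }
double p_linear(double x) { return 0.5*x; }
double sample_uniform() { return random_double(0,2); }
double sample_linear()  { return sqrt(4*random_double()); }

int main() {
    int N = 1000000;
    auto sum = 0.0;
    for (int i = 0; i < N; i++) {
        // Pick a component with probability 1/2 each...
        auto x = (random_double() < 0.5) ? sample_uniform() : sample_linear();
        // ...but weight by the mixture density, not by the chosen component's density
        auto p_mix = 0.5*p_uniform(x) + 0.5*p_linear(x);
        sum += x*x / p_mix;
    }

    std::cout << std::fixed << std::setprecision(12);
    std::cout << "I = " << sum/N << '\n';   // still converges to 8/3
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~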
Returning to the Cornell Box
-----------------------------
Let’s do a simple refactoring and temporarily remove all materials that aren’t Lambertian. We can
use our Cornell Box scene again, and let’s generate the camera in the function that generates the
model.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
...

color ray_color(...) {
    ...
}

hittable_list cornell_box() {
    hittable_list objects;

    auto red   = make_shared<lambertian>(color(.65, .05, .05));
    auto white = make_shared<lambertian>(color(.73, .73, .73));
    auto green = make_shared<lambertian>(color(.12, .45, .15));
    auto light = make_shared<diffuse_light>(color(15, 15, 15));

    objects.add(make_shared<yz_rect>(0, 555, 0, 555, 555, green));
    objects.add(make_shared<yz_rect>(0, 555, 0, 555, 0, red));
    objects.add(make_shared<xz_rect>(213, 343, 227, 332, 554, light));
    objects.add(make_shared<xz_rect>(0, 555, 0, 555, 555, white));
    objects.add(make_shared<xz_rect>(0, 555, 0, 555, 0, white));
    objects.add(make_shared<xy_rect>(0, 555, 0, 555, 555, white));

    shared_ptr<hittable> box1 = make_shared<box>(point3(0,0,0), point3(165,330,165), white);
    box1 = make_shared<rotate_y>(box1, 15);
    box1 = make_shared<translate>(box1, vec3(265,0,295));
    objects.add(box1);

    shared_ptr<hittable> box2 = make_shared<box>(point3(0,0,0), point3(165,165,165), white);
    box2 = make_shared<rotate_y>(box2, -18);
    box2 = make_shared<translate>(box2, vec3(130,0,65));
    objects.add(box2);

    return objects;
}

int main() {
    // Image

    const auto aspect_ratio = 1.0 / 1.0;
    const int image_width = 600;
    const int image_height = static_cast<int>(image_width / aspect_ratio);
    const int samples_per_pixel = 100;
    const int max_depth = 50;

    // World

    auto world = cornell_box();

    color background(0,0,0);

    // Camera

    point3 lookfrom(278, 278, -800);
    point3 lookat(278, 278, 0);
    vec3 vup(0, 1, 0);
    auto dist_to_focus = 10.0;
    auto aperture = 0.0;
    auto vfov = 40.0;
    auto time0 = 0.0;
    auto time1 = 1.0;

    camera cam(lookfrom, lookat, vup, vfov, aspect_ratio, aperture, dist_to_focus, time0, time1);

    // Render

    std::cout << "P3\n" << image_width << ' ' << image_height << "\n255\n";

    for (int j = image_height-1; j >= 0; --j) {
        ...
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [cornell-box]: <kbd>[main.cc]</kbd> Cornell box, refactored]
<div class='together'>
At 500×500 my code produces this image in 10min on 1 core of my Macbook:
![Image 1: Cornell box, refactored](../images/img-3.01-cornell-refactor1.jpg class=pixel)
Reducing that noise is our goal. We’ll do that by constructing a PDF that sends more rays to the
light.
</div>
First, let’s instrument the code so that it explicitly samples some PDF and then normalizes for
that. Remember MC basics: $\int f(x) \approx f(r)/p(r)$. For the Lambertian material, let’s sample
like we do now: $p(direction) = \cos(\theta) / \pi$.
<div class='together'>
We modify the base-class `material` to enable this importance sampling:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
class material {
    public:
        virtual bool scatter(
            const ray& r_in, const hit_record& rec, color& albedo, ray& scattered, double& pdf
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
        ) const {
            return false;
        }
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
        virtual double scattering_pdf(
            const ray& r_in, const hit_record& rec, const ray& scattered
        ) const {
            return 0;
        }
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
        virtual color emitted(double u, double v, const point3& p) const {
            return color(0,0,0);
        }
};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [class-material]: <kbd>[material.h]</kbd>
The material class, adding importance sampling]
</div>
<div class='together'>
And _Lambertian_ material becomes:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
class lambertian : public material {
    public:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
        lambertian(const color& a) : albedo(make_shared<solid_color>(a)) {}
        lambertian(shared_ptr<texture> a) : albedo(a) {}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
        virtual bool scatter(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
            const ray& r_in, const hit_record& rec, color& alb, ray& scattered, double& pdf
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
        ) const override {
            auto scatter_direction = rec.normal + random_unit_vector();

            // Catch degenerate scatter direction
            if (scatter_direction.near_zero())
                scatter_direction = rec.normal;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
            scattered = ray(rec.p, unit_vector(scatter_direction), r_in.time());
            alb = albedo->value(rec.u, rec.v, rec.p);
            pdf = dot(rec.normal, scattered.direction()) / pi;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
            return true;
        }
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
        double scattering_pdf(
            const ray& r_in, const hit_record& rec, const ray& scattered
        ) const {
            auto cosine = dot(rec.normal, unit_vector(scattered.direction()));
            return cosine < 0 ? 0 : cosine/pi;
        }
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
    public:
        shared_ptr<texture> albedo;
};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [class-lambertian-impsample]: <kbd>[material.h]</kbd>
Lambertian material, modified for importance sampling]
</div>
<div class='together'>
And the `ray_color` function gets a minor modification:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
color ray_color(const ray& r, const color& background, const hittable& world, int depth) {
    hit_record rec;

    // If we've exceeded the ray bounce limit, no more light is gathered.
    if (depth <= 0)
        return color(0,0,0);

    // If the ray hits nothing, return the background color.
    if (!world.hit(r, 0.001, infinity, rec))
        return background;

    ray scattered;
    color attenuation;
    color emitted = rec.mat_ptr->emitted(rec.u, rec.v, rec.p);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
    double pdf;
    color albedo;
    if (!rec.mat_ptr->scatter(r, rec, albedo, scattered, pdf))
        return emitted;

    return emitted
         + albedo * rec.mat_ptr->scattering_pdf(r, rec, scattered)
                  * ray_color(scattered, background, world, depth-1) / pdf;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [ray-color-impsample]: <kbd>[main.cc]</kbd>
The ray_color function, modified for importance sampling]
</div>
You should get exactly the same picture.
Random Hemisphere Sampling
---------------------------
<div class='together'>
Now, just for the experience, try a different sampling strategy. As in the first book, let’s choose
randomly from the hemisphere above the surface. This would be $p(direction) = \frac{1}{2\pi}$.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
virtual bool scatter(
    const ray& r_in, const hit_record& rec, color& alb, ray& scattered, double& pdf
) const override {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
    auto direction = random_in_hemisphere(rec.normal);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
    scattered = ray(rec.p, unit_vector(direction), r_in.time());
    alb = albedo->value(rec.u, rec.v, rec.p);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
    pdf = 0.5 / pi;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
    return true;
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [scatter-mod]: <kbd>[material.h]</kbd> Modified scatter function]
</div>
<div class='together'>
And again I _should_ get the same picture except with different variance, but I don’t!
![Image 2: Cornell box, with different sampling strategy
](../images/img-3.02-cornell-refactor2.jpg class=pixel)
</div>
It’s pretty close to our old picture, but there are differences that are not noise. The front of the
tall box is much more uniform in color. So I have the most difficult kind of bug to find in a Monte
Carlo program -- a bug that produces a reasonable-looking image. I also don’t know if the bug is in
the first version of the program, or the second, or both!
Let’s build some infrastructure to address this.
Generating Random Directions
====================================================================================================
In this and the next two chapters, let’s harden our understanding and tools and figure out which
Cornell Box is right.
Random Directions Relative to the Z Axis
-----------------------------------------
Let’s first figure out how to generate random directions. To simplify things, let’s assume the
z-axis is the surface normal, and $\theta$ is the angle from the normal. We’ll get them oriented to
the surface normal vector in the next chapter. We will only deal with distributions that are
rotationally symmetric about $z$. So $p(direction) = f(\theta)$. If you have had advanced calculus,
you may recall that on the sphere in spherical coordinates $dA = \sin(\theta) \cdot d\theta \cdot
d\phi$. If you haven’t, you’ll have to take my word for the next step, but you’ll get it when you
take advanced calculus.
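If you would rather not take it entirely on faith, one quick consistency check (not a proof) is that
this area element reproduces the total surface area of the unit sphere:

$$ \int_{0}^{2\pi} \int_{0}^{\pi} \sin(\theta) \, d\theta \, d\phi = 2\pi \cdot 2 = 4\pi $$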
<div class='together'>
Given a directional PDF, $p(direction) = f(\theta)$ on the sphere, the 1D PDFs on $\theta$ and