Monte Carlo Integration
Monte Carlo Integration is a numerical method for computing an integral which only requires being able to evaluate the function at arbitrary points. It is especially useful for higher-dimensional integrals (involving multiple variables) such as $\iiint f(x, y, z)\,dx\,dy\,dz$. In this article I will touch on the basics of Monte Carlo Integration, as well as explain a number of variance reduction techniques which we can use to get a more accurate result with fewer computations.
Numerical Integration
Say we have a function $f(x) = x^3$ which we wish to integrate. Recall that the integral of $f$ can, in some sense, be thought of as the (signed) area under the graph of $f$:
Suppose we're interested in integrating this function between $0$ and $1$. We can do this analytically:
$$\int_0^1 x^3\,dx = \left[\frac{x^4}{4}\right]_0^1 = \frac{1}{4}$$
Quadrature Rules
One simple way of approximating the integral would be to:
- Partition the domain $[0, 1]$ into $n$ equally-sized subintervals
- Let $\Delta x = \frac{1}{n}$ be the size of each subinterval
- Define $x_i = i\,\Delta x$ and approximate
$$\int_0^1 f(x)\,dx \approx \sum_{i=0}^{n-1} f(x_i)\,\Delta x$$
This gives the following JavaScript code (integrating between zero and one):
// Approximate the integral of f over [0, 1] with a left Riemann sum
// of n rectangles of equal width
function integrate_zero_one(f, n) {
    var sum = 0.0;
    var delta_x = 1 / n;
    for (var i = 0; i < n; i++) {
        sum += delta_x * f(i * delta_x);
    }
    return sum;
}
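For example, with the $f(x) = x^3$ integrand from above (true value $0.25$), the approximation slightly underestimates, because $x^3$ is increasing and we evaluate it at the left edge of each rectangle:
// With n = 1000 this prints roughly 0.2495
console.log(integrate_zero_one(x => (x * x * x), 1000));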
Visually, this can be thought of as summing the area of rectangles of equal width, with height determined by the integrand at the left of each rectangle:
There are other methods for numerical approximation that work similarly to this basic method (known as a left Riemann sum), such as Newton-Cotes rules (including the Trapezoidal rule and Simpson's rule), or Gauss-Legendre rules. Collectively these types of methods are called 'quadrature rules'.
Problems with Quadrature Rules
Quadrature rules are typically excellent for integrating functions of one variable. However, they all suffer from a problem known as the 'curse of dimensionality', meaning that the approximations they give converge extremely slowly to the desired integral in multiple dimensions. See Veach's thesis (section 2.2) for a more rigorous analysis of quadrature rules.
Monte Carlo Integration
For integrating functions of multiple variables, it may be preferable to use a technique called Monte Carlo Integration.
Let's say that we want to compute the value of an integral $\int_a^b f(x)\,dx$.
Suppose we are given a set of random variables $X_1, \dots, X_N$ distributed uniformly on $[a, b]$ (typically called 'samples'). We would like to find some function $F_N$ of these samples such that
$$\mathbb{E}[F_N] = \int_a^b f(x)\,dx$$
Using the jargon, this means that $F_N$ is an 'estimator' of $\int_a^b f(x)\,dx$. It should be intuitive that
$$\frac{1}{N}\sum_{i=1}^{N} f(X_i) \approx \mathbb{E}[f(X)]$$
That is to say, the mean of our computed samples approaches the expectation of $f(X)$ as $N$ gets larger (according to the law of large numbers). Since the $X_i$ are distributed uniformly, we know that $p(x) = \frac{1}{b - a}$, so using the law of the unconscious statistician gives
$$\mathbb{E}[f(X)] = \int_a^b f(x)\,p(x)\,dx = \frac{1}{b - a}\int_a^b f(x)\,dx$$
hence
$$(b - a)\,\mathbb{E}[f(X)] = \int_a^b f(x)\,dx$$
This is excellent, as we can now choose $F_N$ so that its expectation equals this left hand side quantity. Define
$$F_N = \frac{b - a}{N}\sum_{i=1}^{N} f(X_i)$$
This gives
$$\mathbb{E}[F_N] = (b - a)\,\mathbb{E}[f(X)] = \int_a^b f(x)\,dx$$
which is exactly what we want!
Monte Carlo Estimator
In fact, we can sample random variables from any distribution - we just need to change our $F_N$ slightly. Suppose now that the $X_i$ are sampled from some distribution with an arbitrary probability density function $p(x)$. Then define the Monte Carlo estimator of $\int_a^b f(x)\,dx$ with $N$ samples to be:
$$F_N = \frac{1}{N}\sum_{i=1}^{N} \frac{f(X_i)}{p(X_i)}$$
Again, algebraic manipulation shows that:
$$\mathbb{E}[F_N] = \frac{1}{N}\sum_{i=1}^{N} \mathbb{E}\!\left[\frac{f(X_i)}{p(X_i)}\right] = \int_a^b \frac{f(x)}{p(x)}\,p(x)\,dx = \int_a^b f(x)\,dx$$
It will become clear later why it is useful to be able to pick the from any distribution.
Returning to the previous case, setting $p(x) = \frac{1}{b - a}$, we can write some code which computes the value of the Monte Carlo estimator of $\int_a^b f(x)\,dx$ with $n$ samples:
// Monte Carlo estimate of the integral of f over [a, b] using n
// samples drawn uniformly, i.e. with p(x) = 1 / (b - a)
function integrate_monte_carlo(f, a, b, n) {
    var sum = 0.0;
    for (var i = 0; i < n; i++) {
        // Obtain x_i ~ Unif(a, b) from Unif(0, 1)
        var x_i = Math.random() * (b - a) + a;
        sum += f(x_i);
    }
    return (b - a) / n * sum;
}
// e.g. integrate_monte_carlo(x => (x * x * x), 0.0, 1.0, 100) ≈ 0.25
Monte Carlo Integration in Multiple Dimensions
The Monte Carlo estimator extends easily to multiple dimensions. For example, in three dimensions: let $\mathbf{X}_i = (X_i, Y_i, Z_i)$ be drawn according to a joint probability density function $p(x, y, z)$. Then the Monte Carlo estimator of $\iiint f(x, y, z)\,dx\,dy\,dz$ is again
$$F_N = \frac{1}{N}\sum_{i=1}^{N} \frac{f(\mathbf{X}_i)}{p(\mathbf{X}_i)}$$
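As a small illustrative sketch (the integrand and function name here are my own choices, not fixed by anything above), here is the three-dimensional estimator with samples drawn uniformly from the unit cube, so that $p(x, y, z) = 1$:
// Estimate the integral of f over the unit cube [0, 1]^3 using n
// uniformly distributed samples, so the joint PDF is identically 1
function integrate_monte_carlo_3d(f, n) {
    var sum = 0.0;
    for (var i = 0; i < n; i++) {
        // (x_i, y_i, z_i) ~ Unif([0, 1]^3)
        sum += f(Math.random(), Math.random(), Math.random());
    }
    return sum / n;
}
// e.g. integrate_monte_carlo_3d((x, y, z) => x * y * z, 10000) ≈ 0.125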
Keep in mind that the Monte Carlo estimator is a function of $N$ - for low values of $N$, the estimate will be quite inaccurate, but the law of large numbers tells us that as $N$ increases, the estimate gets better and better. It is possible to show analytically that the error of the Monte Carlo estimator (defined by $\sqrt{\mathbb{E}[(F_N - I)^2]}$, where $I$ is the true value of the integral) is $O(N^{-1/2})$ in any number of dimensions, whereas a quadrature rule of order $r$ has an error no better than $O(N^{-r/d})$ in $d$ dimensions, which degrades rapidly as the dimension grows. Again, see Veach's thesis for a more rigorous exploration of these results.
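As a rough empirical sketch of the $O(N^{-1/2})$ rate (reusing integrate_monte_carlo and the $x^3$ example, whose true value is $0.25$), each tenfold increase in the sample count should shrink the typical error by a factor of about $\sqrt{10} \approx 3.2$:
// Print the absolute error of the estimate for increasing sample counts
var ns = [100, 1000, 10000, 100000];
for (var i = 0; i < ns.length; i++) {
    var estimate = integrate_monte_carlo(x => (x * x * x), 0.0, 1.0, ns[i]);
    console.log(ns[i], Math.abs(estimate - 0.25));
}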
Variance Reduction Techniques
Ideally we want our estimator to give us an accurate result with as small a value of $N$ as possible (since this implies fewer computations). Mathematically speaking, we would like to minimize the variance $\mathbb{V}[F_N]$.
Importance Sampling
One extremely clever way of reducing the variance of a Monte Carlo estimator is to strategically sample the $X_i$ according to some probability density $p$ that closely approximates the integrand $f$. To see why this works, consider picking $p(x) = c\,f(x)$¹ (where $c$ is some constant which ensures that $p$ integrates to 1). Then
$$F_N = \frac{1}{N}\sum_{i=1}^{N} \frac{f(X_i)}{c\,f(X_i)} = \frac{1}{c} = \int_a^b f(x)\,dx$$
since $c = 1 \big/ \int_a^b f(x)\,dx$.
This would be the perfect estimator! For all values of $N$, our estimator gives us the exact value of the integral. Unfortunately, it is not possible to choose such a $p$ in the first place, because computing the normalization constant $c$ involves computing the integral of $f$, which is exactly the thing we're trying to calculate. Also, it may be difficult to sample the $X_i$ from the probability density function $p$ if we cannot find an analytic formula for its cumulative distribution function (which is required for CDF inversion sampling). Therefore we usually have to settle for picking samples from probability density functions which merely approximate the integrand.
To think about why this is an improvement over picking from a uniform distribution, consider the variance of our estimator:
$$\mathbb{V}[F_N] = \mathbb{V}\!\left[\frac{1}{N}\sum_{i=1}^{N} \frac{f(X_i)}{p(X_i)}\right] = \frac{1}{N^2}\sum_{i=1}^{N} \mathbb{V}\!\left[\frac{f(X_i)}{p(X_i)}\right] = \frac{1}{N}\,\mathbb{V}\!\left[\frac{f(X)}{p(X)}\right]$$
(where the second equality holds if the $X_i$ are independent). Now, if our choice of $p$ has a similar shape to $f$, then the ratio $\frac{f(X)}{p(X)}$ should be almost constant, hence $\mathbb{V}\!\left[\frac{f(X)}{p(X)}\right]$ will be low, which is exactly what we want for our estimator.
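To make this concrete, here is a sketch for our running example $\int_0^1 x^3\,dx$ (the PDF choice and the function name are my own, for illustration). The density $p(x) = 3x^2$ integrates to 1 on $[0, 1]$ and has a similar shape to the integrand; its CDF is $P(x) = x^3$, so CDF inversion maps $u \sim \mathrm{Unif}(0, 1)$ to $X = u^{1/3}$:
// Importance-sampled estimate of the integral of x^3 over [0, 1],
// drawing samples from the PDF p(x) = 3x^2
function integrate_importance(n) {
    var sum = 0.0;
    for (var i = 0; i < n; i++) {
        // CDF inversion: if u ~ Unif(0, 1), then u^(1/3) has PDF 3x^2
        var x_i = Math.cbrt(Math.random());
        // Each term f(x_i) / p(x_i) = x_i^3 / (3 x_i^2) = x_i / 3
        // varies far less than f itself does under uniform sampling
        sum += (x_i * x_i * x_i) / (3 * x_i * x_i);
    }
    return sum / n;
}
// e.g. integrate_importance(100) ≈ 0.25
For this particular pair of $f$ and $p$, the per-sample variance works out to $\frac{1}{240}$ versus $\frac{9}{112}$ for uniform sampling - roughly 19 times lower for the same number of samples.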
Importance sampling is absolutely crucial for reducing the amount of computational work. For example, in computer graphics we are often trying to calculate the color of a point on a surface by (very loosely speaking) integrating the energy of light rays arriving at the point over a hemisphere of directions. We know that incoming light rays arriving perpendicular to the surface have a greater effect on its color than light rays arriving parallel to the surface, so we can sample more light rays close to the normal of the surface and get a faster-converging result!
Low Discrepancy Sampling
One final technique I will talk about for reducing the variance of our estimator is uniform sample placement.
In addition to importance sampling, it is intuitive that we would like to explore the domain of our function as evenly as possible. For example, recall the case of the Monte Carlo estimator with a uniform PDF. It should be clear that we would like our samples to be evenly spaced: they should not be clumped up together (as they would be retrieving values of $f$ that are nearly the same, providing little additional information), and similarly there should not be large gaps between them that leave parts of the domain unexplored. In the case of a non-uniform PDF, the same holds true, though the connection is a little harder to see intuitively.
We can mathematically quantify how 'evenly spaced' the points in a sequence are using a measurement called the Star Discrepancy. Using a low discrepancy sequence (LDS) gives us a slightly lower variance (especially for small sample counts) than naive pseudorandom sampling.
The image below uses pseudorandom samples to generate the coordinates of each point.
As you can see, there are areas of higher point density and lower point density. This results in high variance when doing Monte Carlo integration. The image below instead uses a low discrepancy sequence called a Sobol sequence for each coordinate:
The points are much more evenly spaced throughout the image, which is what we want.
One of the simplest low discrepancy sequences is called the van der Corput sequence. The base-$b$ van der Corput sequence is defined by:
$$x_n = \sum_{k=0}^{\infty} d_k(n)\, b^{-k-1}$$
where $d_k(n)$ is the $k$th digit of the expansion of $n$ in base $b$. With $b = 2$, the sequence begins
$$\frac{1}{2},\ \frac{1}{4},\ \frac{3}{4},\ \frac{1}{8},\ \frac{5}{8},\ \frac{3}{8},\ \frac{7}{8},\ \dots$$
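A sketch of this in code (assuming $n \ge 1$ and an integer base $b \ge 2$):
// Compute the nth element of the base-b van der Corput sequence by
// mirroring the base-b digits of n around the radix point
function van_der_corput(n, b) {
    var result = 0.0;
    var denom = 1.0;
    while (n > 0) {
        denom *= b;
        result += (n % b) / denom;
        n = Math.floor(n / b);
    }
    return result;
}
// e.g. with b = 2: van_der_corput(1, 2) = 0.5, van_der_corput(2, 2) = 0.25,
// van_der_corput(3, 2) = 0.75, van_der_corput(4, 2) = 0.125, ...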
Conclusion
Monte Carlo integration is a powerful tool for evaluating high-dimensional integrals. We have seen how its variance can be reduced significantly through importance sampling and through choosing a low discrepancy sequence, both of which reduce the amount of computational work we need to do to obtain a reasonable result.
In the next article, I will talk about a technique called Multiple Importance Sampling which allows us to combine samples from multiple different probability density functions that we think match the shape of the integrand, reducing variance without introducing bias.
Footnotes
1: For the rest of this post we assume that $f$ is non-negative, otherwise such a choice of PDF would not be possible. We also assume that the PDF $p$ is non-zero wherever $f$ is non-zero, to avoid division by zero.
References
- Eric Veach's thesis is a truly excellent resource and covers everything here and more in greater detail (mainly section 2).
- Physically Based Rendering: From Theory to Implementation chapter 10 covers Monte Carlo Integration. Chapter 7 describes the theory and implementation behind a number of low discrepancy sequences.