Monte Carlo Methods

Estimate pi, integrate curves, and simulate Buffon's needle

Monte Carlo methods use random sampling to solve problems that might be deterministic in principle but intractable in practice. The core idea is simple: generate many random samples, use them to compute some quantity of interest, and rely on the law of large numbers to guarantee convergence to the true answer as the number of samples grows.

Named after the Monte Carlo Casino in Monaco, these methods were formalized during the Manhattan Project by Stanislaw Ulam and John von Neumann. Today they are indispensable in physics, finance, machine learning, and computational biology. In this lesson, you will estimate pi two different ways and explore how Monte Carlo integration approximates definite integrals.

Estimating Pi with Random Points

Inscribe a circle of radius 1 inside a 2x2 square. The ratio of the circle's area (π) to the square's area (4) equals π/4. By generating uniform random points in the square and counting how many fall inside the circle, we estimate this ratio -- and hence π itself. Press Start and watch the estimate converge.
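The sampling loop behind this demo can be sketched in a few lines. This is a minimal Python sketch, not the lesson's own widget code; the function name, fixed seed, and sample count are illustrative choices:

```python
import random

def estimate_pi(n_samples, seed=0):
    """Estimate pi by sampling uniform points in the 2x2 square [-1, 1]^2."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    inside = 0
    for _ in range(n_samples):
        x = rng.uniform(-1.0, 1.0)
        y = rng.uniform(-1.0, 1.0)
        if x * x + y * y <= 1.0:  # point falls inside the unit circle
            inside += 1
    # inside / n_samples estimates pi/4, so scale by 4
    return 4.0 * inside / n_samples
```

With a few hundred thousand samples the estimate typically lands within a few thousandths of π, consistent with the O(1/√N) rate discussed below.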

Key insight: The estimate converges at a rate of O(1/√N) -- to halve the error, you need four times as many samples. This slow convergence is a fundamental limitation of basic Monte Carlo, but the method works in any number of dimensions where traditional numerical integration becomes impractical.

Monte Carlo Integration

To estimate ∫ f(x) dx, enclose the function in a bounding box and throw random darts. The fraction landing below the curve, multiplied by the box area, approximates the integral. Try different functions and observe how the error decreases with more samples. The convergence chart on the right compares actual error against the theoretical 1/√N rate.

Key insight: Unlike deterministic quadrature rules (trapezoid, Simpson's), Monte Carlo integration does not suffer from the curse of dimensionality. Its convergence rate of O(1/√N) is independent of the number of dimensions, making it the method of choice for high-dimensional integrals in physics and finance.
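To make the dimensional advantage concrete, the same hit-counting idea estimates the volume of a d-dimensional unit ball, a quantity grid methods struggle with as d grows. A hedged Python sketch (the function name and parameters are illustrative):

```python
import math
import random

def ball_volume_mc(dim, n_samples, seed=0):
    """Estimate the volume of the unit ball in `dim` dimensions by
    sampling uniform points in the enclosing cube [-1, 1]^dim."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(n_samples):
        r2 = sum(rng.uniform(-1.0, 1.0) ** 2 for _ in range(dim))
        if r2 <= 1.0:  # point lies inside the unit ball
            inside += 1
    cube_volume = 2.0 ** dim
    return cube_volume * inside / n_samples

# Exact volume for comparison: pi^(d/2) / Gamma(d/2 + 1)
exact_5d = math.pi ** 2.5 / math.gamma(3.5)  # about 5.2638 for d = 5
```

The relative error at a fixed N is roughly the same in 2 dimensions as in 5, which is the point of the O(1/√N) rate being dimension-independent.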

Buffon's Needle

One of the oldest problems in geometric probability: drop a needle of length L onto a floor with parallel lines spaced t apart. In the short-needle case (L ≤ t), the probability of the needle crossing a line is P = 2L / (πt). Rearranging gives an estimate of π. Adjust the needle-to-spacing ratio and watch needles accumulate. Crossing needles are highlighted in amber.

Formula: π ≈ 2L / (t · P) where L = needle length, t = line spacing, P = proportion of needles crossing a line.
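Simulating a drop only needs two random numbers: the distance from the needle's center to the nearest line and the needle's angle. A minimal Python sketch of the short-needle case (L ≤ t); the function name, seed, and defaults are illustrative:

```python
import math
import random

def buffon_pi(n_needles, L=1.0, t=1.0, seed=0):
    """Estimate pi via Buffon's needle. Assumes L <= t, so the
    crossing probability is P = 2L / (pi * t)."""
    rng = random.Random(seed)
    crossings = 0
    for _ in range(n_needles):
        # Distance from needle center to the nearest line, and needle angle;
        # by symmetry it suffices to sample these restricted ranges uniformly.
        center = rng.uniform(0.0, t / 2.0)
        theta = rng.uniform(0.0, math.pi / 2.0)
        if center <= (L / 2.0) * math.sin(theta):  # needle crosses a line
            crossings += 1
    p = crossings / n_needles
    return 2.0 * L / (t * p)  # invert P = 2L / (pi * t)
```

Note that dividing by the observed proportion makes this estimator noisier than the area-based methods above, so expect a wider spread at the same sample count.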

Key insight: Buffon's Needle (1777) was one of the first connections between probability and geometry. The crossing probability depends on π because it involves the projection of a randomly oriented line segment -- an inherently circular (angular) calculation. When L/t = 1, the crossing probability is exactly 2/π ≈ 0.6366.

Key Takeaways

  • Random sampling -- Monte Carlo methods convert hard deterministic problems into statistical estimation by generating random samples and averaging results
  • Convergence rate -- the error decreases as O(1/√N), meaning four times the samples halves the error; this rate is independent of problem dimension
  • Dimensional advantage -- Monte Carlo integration shines in high dimensions where grid-based methods become exponentially expensive (the curse of dimensionality)
  • Geometric probability -- Buffon's Needle demonstrates that π emerges naturally from random geometric experiments, connecting probability theory with classical geometry