Bayesian Inference

Prior-to-posterior updating, medical tests, and Bayesian reasoning


Bayes' theorem is the mathematical engine of rational belief updating. Given a prior belief about a hypothesis and new evidence, it produces an updated posterior belief. The formula is deceptively simple: P(H|E) = P(E|H) · P(H) / P(E), where the evidence term expands by the law of total probability as P(E) = P(E|H) · P(H) + P(E|¬H) · P(¬H). But its implications are profound -- it tells us exactly how to change our minds when we learn new things.
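The update can be sketched in a few lines of Python. This is a minimal illustration, not part of the interactive demo; the function name and the example numbers (a 30% prior, evidence four times more likely under H) are chosen here for illustration:

```python
def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """Bayes' theorem for a hypothesis H versus its complement."""
    # Evidence term P(E) via the law of total probability
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

# Prior 0.30; the evidence is 4x more likely if H is true (0.8 vs 0.2)
print(posterior(0.30, 0.8, 0.2))  # ≈ 0.632
```

A 30% prior belief rises to about 63% after one piece of moderately favorable evidence.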

Visual Bayes' Theorem

See the three components of Bayes' theorem side by side: the prior (what you believed before), the likelihood (how well each hypothesis explains the evidence), and the posterior (what you should believe after). Drag the sliders to see how prior and evidence compete.


Key insight: When the evidence strongly favors one hypothesis (likelihoods are very different), the posterior shifts dramatically regardless of the prior. But when evidence is ambiguous (likelihoods are similar), the prior dominates -- your initial belief barely changes.
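The competition between prior and likelihood in the key insight can be checked numerically. A rough sketch (the `update` helper is hypothetical, mirroring the two-hypothesis form of Bayes' theorem):

```python
def update(prior, lik_h, lik_alt):
    """Posterior for H given likelihoods under H and the alternative."""
    p_e = lik_h * prior + lik_alt * (1 - prior)
    return lik_h * prior / p_e

# Strong evidence (likelihoods 0.9 vs 0.1): a 10% prior jumps to 50%
print(update(0.10, 0.9, 0.1))    # 0.5

# Ambiguous evidence (0.52 vs 0.48): the same prior barely moves
print(update(0.10, 0.52, 0.48))  # ≈ 0.107
```

With a 9:1 likelihood ratio the posterior shifts dramatically; with a near-1:1 ratio the prior dominates, just as the slider demo shows.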

Bayesian Updating

Bayesian updating is Bayes' theorem applied repeatedly. Start with a Beta prior over the unknown probability θ, then observe successes and failures one at a time. Each observation sharpens and shifts the posterior. With enough data, the posterior concentrates around the true value, regardless of the starting prior.

Start with a Beta prior (dashed), then click Success/Failure to add observations. Watch the posterior (solid amber) sharpen and shift toward the true proportion. The posterior is always Beta(α + successes, β + failures).

Key insight: Try starting with a very wrong prior (e.g., α=1, β=10, which concentrates near 0) and then add observations generated with true proportion p=0.7. Watch how the data gradually overwhelms the prior -- with enough evidence, Bayesians and frequentists agree.
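The Beta-Binomial update rule from the demo -- posterior = Beta(α + successes, β + failures) -- can be simulated directly. A minimal sketch of the "wrong prior" experiment, using the posterior mean α/(α+β) as a summary (the seed and trial count are arbitrary choices here):

```python
import random

def beta_mean(a, b):
    """Mean of a Beta(a, b) distribution."""
    return a / (a + b)

random.seed(0)
a, b = 1.0, 10.0   # badly wrong prior: Beta(1, 10), mean ≈ 0.09
p_true = 0.7       # the true success probability

for _ in range(500):
    if random.random() < p_true:
        a += 1     # success: alpha + 1
    else:
        b += 1     # failure: beta + 1

print(beta_mean(a, b))  # close to 0.7 despite the prior near 0
```

After 500 observations the 11 pseudo-counts of the prior are swamped by the data, and the posterior mean sits near the true 0.7.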

The Medical Test Paradox

Perhaps the most counterintuitive application of Bayes' theorem: if a disease affects 1% of the population and a test is 95% accurate (95% sensitivity and 95% specificity), what's the probability that a positive test actually means you're sick? Most people guess 95%, but the answer is much lower -- around 16%. This is the base rate fallacy.

The base rate fallacy: when a disease is rare, most positive tests are false positives. Drag prevalence to see how prior probability dominates the result.

Key insight: When a disease is rare, there are far more healthy people than sick people. Even with a small false positive rate, the sheer number of healthy people tested generates more false positives than true positives. That's why screening tests often require confirmation.
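The arithmetic behind the paradox fits in one function. A sketch assuming 95% sensitivity and 95% specificity, as in the example above (the function name is illustrative):

```python
def p_sick_given_positive(prevalence, sensitivity, specificity):
    """P(sick | positive test) via Bayes' theorem."""
    true_pos = sensitivity * prevalence            # sick and flagged
    false_pos = (1 - specificity) * (1 - prevalence)  # healthy but flagged
    return true_pos / (true_pos + false_pos)

# Rare disease: 1% prevalence -> a positive test means only ~16%
print(round(p_sick_given_positive(0.01, 0.95, 0.95), 3))  # 0.161

# Common disease: 20% prevalence -> the same test is far more convincing
print(round(p_sick_given_positive(0.20, 0.95, 0.95), 3))  # 0.826
```

At 1% prevalence, the 0.0495 mass of false positives dwarfs the 0.0095 mass of true positives -- exactly the effect the prevalence slider demonstrates.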

Key Takeaways

  • Posterior ∝ Prior × Likelihood -- the posterior is proportional to the product of what you believed before and how well the evidence fits.
  • Data overwhelms priors -- with enough evidence, the posterior concentrates on the truth regardless of where you started.
  • Base rates matter -- ignoring prior probabilities leads to systematic errors, especially when testing for rare events.
  • Sequential updating -- you can apply Bayes' theorem one observation at a time; yesterday's posterior becomes today's prior.
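The last takeaway -- yesterday's posterior becomes today's prior -- implies that updating one observation at a time and updating on all the data at once give identical answers. A small sketch with the Beta-Binomial model (the prior and data here are arbitrary):

```python
def update_beta(a, b, outcome):
    """One conjugate update: success bumps alpha, failure bumps beta."""
    return (a + 1, b) if outcome else (a, b + 1)

data = [1, 1, 0, 1, 0, 0, 1, 1]   # 5 successes, 3 failures

# Sequential: feed observations one at a time
a, b = 2.0, 2.0                   # Beta(2, 2) prior
for x in data:
    a, b = update_beta(a, b, x)

# Batch: add all counts in one step
a_batch = 2.0 + sum(data)
b_batch = 2.0 + len(data) - sum(data)

print((a, b) == (a_batch, b_batch))  # True
```

Both routes land on Beta(7, 5): the order and grouping of the evidence don't matter, only the total counts.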