Distribution Explorer

Interactive PDF, CDF, and parameter exploration for common distributions

Probability Distributions

A probability distribution describes how likely different outcomes are. The probability density function (PDF) gives the relative likelihood of each value, while the cumulative distribution function (CDF) gives the probability of being at or below a value. Every distribution is defined by a few parameters that control its shape, center, and spread.
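These two functions can be written directly for the Normal distribution using only Python's standard library. This is a minimal sketch; the function names are illustrative:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Relative likelihood of x under a Normal(mu, sigma) distribution."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

def normal_cdf(x, mu=0.0, sigma=1.0):
    """P(X <= x): probability of an outcome at or below x."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

print(normal_pdf(0))  # peak density at the mean, 1/sqrt(2*pi) ≈ 0.3989
print(normal_cdf(0))  # 0.5 -- half the probability mass lies below the mean
```

Note that the PDF value at a point is a density, not a probability; probabilities come from areas under the PDF, which is exactly what the CDF accumulates.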

Distribution Explorer

Choose a distribution and drag the parameter sliders to see how the PDF changes in real time. Toggle the CDF overlay to see cumulative probabilities. Notice how changing parameters shifts, stretches, or reshapes the distribution.

The Normal distribution -- the familiar bell curve -- is the most important distribution in statistics.

Key insight: The total area under any PDF is exactly 1 -- it represents 100% of all possible outcomes. The CDF always starts at 0 and ends at 1, increasing monotonically. At any point x, CDF(x) = P(X ≤ x).
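Both properties can be checked numerically. The sketch below approximates the area under the standard Normal PDF with a Riemann sum and accumulates the same slices into a CDF, verifying it never decreases (the interval and step size are chosen for illustration):

```python
import math

def normal_pdf(x):
    """Standard Normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

# Riemann sum of the PDF over a wide interval: should be very close to 1.
dx = 0.001
area = sum(normal_pdf(-10 + i * dx) * dx for i in range(int(20 / dx)))
print(round(area, 4))  # ≈ 1.0

# Accumulating the same slices gives the CDF, which rises monotonically.
cdf, last = 0.0, 0.0
for i in range(int(20 / dx)):
    cdf += normal_pdf(-10 + i * dx) * dx
    assert cdf >= last  # monotone non-decreasing
    last = cdf
```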

Distribution Relationships

Distributions don't exist in isolation -- they form a rich web of connections. The Binomial arises from summing independent Bernoulli trials. As the number of trials grows, the Binomial converges to the Normal (an instance of the Central Limit Theorem). The Gamma generalizes the Exponential: a Gamma with shape parameter 1 is an Exponential. Hover over each node to explore these relationships.
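The Binomial-to-Normal convergence is easy to see numerically. This sketch compares the exact Binomial pmf against the Normal density with matching mean and variance (n and p are arbitrary illustration values):

```python
import math

def binom_pmf(k, n, p):
    """Exact probability of k successes in n independent trials."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def normal_pdf(x, mu, sigma):
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

# Binomial(n, p) has mean n*p and variance n*p*(1-p); for large n its pmf
# at each k is close to the matching Normal density (Central Limit Theorem).
n, p = 1000, 0.3
mu, sigma = n * p, math.sqrt(n * p * (1 - p))
for k in (280, 300, 320):
    print(k, round(binom_pmf(k, n, p), 5), round(normal_pdf(k, mu, sigma), 5))
```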

Hover over distributions to highlight their connections. Each node shows a mini PDF. Arrows indicate how one distribution arises from or converges to another.

Key insight: Understanding how distributions relate to each other helps you choose the right model. For instance, if you're counting rare events, use Poisson; if modeling wait times, use Exponential; if summing many small effects, the result is approximately Normal.
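The "rare events" rule of thumb can be made concrete: a Binomial with many trials and a tiny success probability is well approximated by a Poisson with rate lam = n * p. A minimal sketch (the specific n and p are illustrative):

```python
import math

def poisson_pmf(k, lam):
    """Probability of exactly k events when events occur at average rate lam."""
    return lam**k * math.exp(-lam) / math.factorial(k)

def exponential_pdf(x, lam):
    """Density of the waiting time until the next event at rate lam."""
    return lam * math.exp(-lam * x) if x >= 0 else 0.0

# 10,000 trials, each succeeding with probability 0.0003: lam = 3.
n, p = 10_000, 0.0003
lam = n * p
binom = math.comb(n, 2) * p**2 * (1 - p)**(n - 2)
print(binom, poisson_pmf(2, lam))  # nearly identical
```

The two distributions are complementary views of the same process: Poisson counts the events in a fixed window, while the Exponential describes the gap between consecutive events.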

Random Sampler

Draw random samples from any distribution and watch the empirical histogram take shape. With few samples, the histogram is noisy and irregular. As you add more samples, it converges to the smooth theoretical PDF -- a visual demonstration of the law of large numbers at work.

Draw samples from any distribution and watch the histogram build up. The dashed line shows the sample mean. Compare the empirical shape to the theoretical PDF.
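The sample mean's convergence can be reproduced in a few lines with Python's standard library. This sketch draws increasingly large samples from a Normal with mean 5 (the distribution and parameters are chosen for illustration):

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

# As the number of draws from Normal(5, 2) grows, the sample mean
# settles near the true mean of 5 (law of large numbers).
for n in (10, 1_000, 100_000):
    xs = [random.gauss(5, 2) for _ in range(n)]
    print(n, round(sum(xs) / n, 3))
```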

Key insight: Try drawing just 10 samples from the bimodal distribution -- you might not even see two modes. With 10,000 samples, the two peaks become unmistakable. This illustrates why sample size matters for statistical inference.
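The bimodal experiment is easy to simulate. The sketch below uses a hypothetical bimodal distribution -- an equal mixture of two Normals -- and counts how many samples land near each mode:

```python
import random

random.seed(0)

def sample_bimodal():
    """Hypothetical bimodal: equal mixture of Normal(-2, 0.5) and Normal(2, 0.5)."""
    return random.gauss(-2, 0.5) if random.random() < 0.5 else random.gauss(2, 0.5)

# With 10 samples one mode can be nearly or entirely missed;
# with 10,000 the two modes show up in roughly equal counts.
for n in (10, 10_000):
    xs = [sample_bimodal() for _ in range(n)]
    left = sum(1 for x in xs if x < 0)
    print(n, left, n - left)  # counts near the left and right modes
```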

Key Takeaways

  • PDF and CDF -- the PDF gives relative likelihood; the CDF gives cumulative probability. They contain the same information in different forms.
  • Parameters control shape -- the mean shifts the center, the variance controls spread, and shape parameters determine features such as skewness and tail weight.
  • Distributions are connected -- most distributions arise as special cases or limits of more general families.
  • Empirical converges to theoretical -- with enough samples, the histogram approximates the true PDF arbitrarily well.