State transitions, stationary distributions, and random walks on graphs
A Markov chain is a stochastic process that hops between a finite set of states, where the probability of the next state depends only on the current state -- not on the history of how it arrived there. This is the memoryless property (or Markov property): the future is conditionally independent of the past given the present.
Markov chains model everything from weather patterns and web page navigation to genetic drift and financial markets. In this lesson, you will visualize state transitions, watch distributions converge to their stationary limits, and explore random walks on graphs.
Choose a preset Markov chain and watch a particle hop between states according to the transition probabilities. Each arrow is labeled with its probability, and the histogram below tracks how often each state has been visited. Over time, the visit frequencies reveal the chain's long-run behavior.
Key insight: The transition matrix P fully determines the chain's behavior. Entry P(i, j) is the probability of moving from state i to state j. Notice how absorbing states (like $0 and $4 in the Gambler's Ruin) trap the particle forever once entered.
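The hopping particle can be sketched in a few lines of code. The sketch below uses a hypothetical 3-state weather chain (not one of the lesson's presets) whose transition matrix P has rows that each sum to 1; sampling the next state from the current state's row is all the Markov property requires.

```python
import numpy as np

# Hypothetical 3-state weather chain: Sunny, Cloudy, Rainy.
# Entry P[i, j] is the probability of moving from state i to state j,
# so every row must sum to 1.
P = np.array([
    [0.7, 0.2, 0.1],   # from Sunny
    [0.3, 0.4, 0.3],   # from Cloudy
    [0.2, 0.5, 0.3],   # from Rainy
])

rng = np.random.default_rng(0)
state = 0                          # start in Sunny
visits = np.zeros(3, dtype=int)

for _ in range(100_000):
    visits[state] += 1
    # Sample the next state from row `state` of P: only the current
    # state matters, not how we got there.
    state = rng.choice(3, p=P[state])

print(visits / visits.sum())       # long-run visit frequencies
```

After many steps the printed visit frequencies settle near the chain's stationary distribution, mirroring the histogram in the visualization.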
Start with many particles distributed across states and advance them all simultaneously. Regardless of where the particles begin, the population fractions converge to a unique stationary distribution -- a vector that satisfies pi = pi * P. The dashed lines show the theoretical target.
200 particles transition simultaneously at each step. Dashed lines show the theoretical stationary distribution. Regardless of the starting distribution, the population fractions converge to the same stationary values.
Key insight: For an irreducible, aperiodic Markov chain, there exists a unique stationary distribution pi such that pi * P = pi. No matter the initial distribution, the chain converges to pi. This is the Fundamental Theorem of Markov Chains.
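Convergence to the stationary distribution can be checked directly by repeatedly multiplying a distribution by P (power iteration). The sketch below uses a hypothetical 3-state chain and starts from two opposite initial distributions; both end at the same fixed point pi = pi * P.

```python
import numpy as np

# Hypothetical irreducible, aperiodic 3-state chain.
P = np.array([
    [0.7, 0.2, 0.1],
    [0.3, 0.4, 0.3],
    [0.2, 0.5, 0.3],
])

# Two very different starting distributions.
pi_a = np.array([1.0, 0.0, 0.0])   # all mass on state 0
pi_b = np.array([0.0, 0.0, 1.0])   # all mass on state 2

for _ in range(200):
    pi_a = pi_a @ P                # one synchronous step for the whole population
    pi_b = pi_b @ P

print(pi_a)   # both converge to the same stationary distribution
print(pi_b)
```

Each matrix-vector product is one simultaneous step of the entire particle population; after enough steps both vectors satisfy the fixed-point equation pi = pi @ P to numerical precision.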
A random walk on a graph is a Markov chain where each step moves to a uniformly random neighbor. The nodes are colored by visit frequency (cold blue to hot amber) and edges thicken with each traversal. Try different graph topologies and see how the structure determines where the walker spends its time.
At each step the particle chooses a neighbor uniformly at random. Node color indicates visit frequency (blue = cold, amber = hot). Edge thickness reflects traversal count. The stationary distribution is proportional to each node's degree.
Key insight: For a random walk on an undirected graph, the stationary distribution is proportional to each node's degree: pi(i) = deg(i) / (2|E|). Nodes with more connections are visited more often. This is the foundation of the PageRank algorithm.
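The degree formula is easy to verify empirically. The sketch below runs a random walk on a hypothetical small undirected graph (not one of the lesson's topologies) and compares the observed visit frequencies to deg(i) / (2|E|); since the sum of all degrees equals 2|E|, the theoretical values are just the degrees normalized.

```python
import numpy as np

# Hypothetical undirected graph as an adjacency list (a path plus a triangle,
# so the walk is aperiodic). 5 nodes, 5 edges.
graph = {
    0: [1],
    1: [0, 2, 3],
    2: [1, 3],
    3: [1, 2, 4],
    4: [3],
}

rng = np.random.default_rng(1)
node = 0
visits = np.zeros(len(graph), dtype=int)

for _ in range(200_000):
    visits[node] += 1
    node = rng.choice(graph[node])        # move to a uniformly random neighbor

freqs = visits / visits.sum()
degrees = np.array([len(graph[v]) for v in sorted(graph)])
theory = degrees / degrees.sum()          # deg(i) / (2|E|): degree sum = 2|E|
print(np.round(freqs, 3))
print(np.round(theory, 3))
```

High-degree nodes accumulate visits fastest, which is the intuition PageRank builds on: being well connected makes a node easy to reach from many directions.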