Convergence Theorems

MCT, Fatou's lemma, and DCT — the power tools of modern analysis

When Can We Swap Limits and Integrals?

One of the central questions in analysis is: given a sequence of functions fₙ → f, when does ∫ fₙ → ∫ f? In other words, when can we interchange the limit and the integral? This interchange is not always valid — there are simple examples where fₙ → 0 pointwise yet ∫ fₙ → ∞, or where ∫ fₙ → c ≠ ∫ f. The convergence theorems of Lebesgue theory provide precise conditions under which the interchange is justified.

The three pillars are the Monotone Convergence Theorem (MCT), Fatou's Lemma, and the Dominated Convergence Theorem (DCT). Together, they form the backbone of Lebesgue integration theory. MCT handles increasing sequences, Fatou's Lemma provides a universal one-sided inequality, and DCT covers the most common case in applications — sequences bounded by an integrable function.

These results have no adequate analogues in Riemann integration theory. They are the primary reason the Lebesgue integral is preferred in modern analysis, PDE theory, probability, and physics. Mastering them is essential for any serious work in these fields.

The Monotone Convergence Theorem

Theorem (MCT): Let (fₙ) be a sequence of non-negative measurable functions with fₙ ≤ fₙ₊₁ for all n (i.e., the sequence is increasing). Let f = limₙ fₙ (which exists in [0, ∞] by monotonicity). Then ∫ f dμ = limₙ ∫ fₙ dμ.

The MCT says that for increasing sequences of non-negative functions, we can always pass the limit through the integral. No additional hypotheses are needed beyond monotonicity and non-negativity. The result is remarkable because the limit function f might not even be bounded — the theorem allows ∫ f = ∞.
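A quick numerical sketch can make the statement concrete. The sequence below, fₙ(x) = x^(1/n) on [0, 1], is an illustrative choice (not from the theorem itself): it is non-negative, increases pointwise to the constant function 1 on (0, 1], and its integrals are known in closed form, ∫₀¹ x^(1/n) dx = n/(n+1), so the convergence ∫ fₙ → ∫ 1 = 1 predicted by MCT can be checked directly. The midpoint-rule integrator is just an approximation device for the demo.

```python
# MCT demo: f_n(x) = x**(1/n) on [0, 1] is non-negative and increasing in n,
# with pointwise limit f ≡ 1 on (0, 1]. MCT predicts ∫ f_n -> ∫ 1 = 1.
# Closed form for comparison: ∫₀¹ x**(1/n) dx = n/(n+1).

def integral_fn(n, m=100_000):
    # midpoint rule for ∫₀¹ x**(1/n) dx
    h = 1.0 / m
    return sum(((k + 0.5) * h) ** (1.0 / n) for k in range(m)) * h

for n in (1, 2, 10, 100):
    print(n, integral_fn(n), n / (n + 1))  # numeric vs exact n/(n+1)
```

As n grows, both columns climb toward 1 = ∫₀¹ lim fₙ, exactly as MCT guarantees.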

The MCT is used constantly in the construction of the Lebesgue integral itself (to extend from simple functions to general non-negative functions), in the proof of Fubini's theorem, and whenever we need to integrate a series term-by-term: if gₙ ≥ 0, then ∫ Σₙ gₙ = Σₙ ∫ gₙ (by applying MCT to the partial sums).
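The term-by-term integration consequence can also be checked numerically. The series gₙ(x) = xⁿ/n! is an illustrative choice of non-negative terms (not from the text): its sum is eˣ, so ∫₀¹ Σₙ gₙ = e − 1, while each term integrates to ∫₀¹ xⁿ/n! dx = 1/(n+1)!.

```python
import math

# g_n(x) = x**n / n! >= 0 on [0, 1]; the series sums to e**x.
# MCT applied to the partial sums gives ∫₀¹ Σ g_n = Σ ∫₀¹ g_n.
lhs = math.e - 1  # ∫₀¹ e**x dx in closed form
rhs = sum(1.0 / math.factorial(n + 1) for n in range(30))  # Σ 1/(n+1)!
print(lhs, rhs)  # the two sides agree to machine precision
```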

The hypotheses are sharp: without monotonicity or non-negativity, the conclusion can fail. For example, fₙ = n · χ_{(0, 1/n)} satisfies ∫ fₙ = 1 for all n but fₙ → 0 pointwise, so limₙ ∫ fₙ = 1 ≠ 0 = ∫ limₙ fₙ. This sequence is not monotone increasing, so MCT does not apply.

Fatou's Lemma

Lemma (Fatou): If (fₙ) is a sequence of non-negative measurable functions, then ∫ lim inf fₙ dμ ≤ lim inf ∫ fₙ dμ. In words: the integral of the lim inf is at most the lim inf of the integrals. Fatou's Lemma requires only non-negativity — no monotonicity, no domination, no pointwise convergence.

The inequality in Fatou's Lemma can be strict. Consider fₙ = n · χ_{(0, 1/n)} on [0, 1]. Then fₙ → 0 pointwise, so lim inf fₙ = 0 and ∫ lim inf fₙ = 0. But ∫ fₙ = 1 for every n, so lim inf ∫ fₙ = 1. We get 0 < 1, a strict inequality. Mass can "escape to infinity" (or concentrate and then vanish), and Fatou's Lemma only guarantees a lower bound.
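The strict inequality in this example can be verified numerically. The sketch below uses a midpoint rule on [0, 1] as an approximation device; it shows that every ∫ fₙ equals 1 while fₙ(x) vanishes at each fixed x once n > 1/x.

```python
# f_n = n * indicator of (0, 1/n) on [0, 1]: the "escaping mass" example.
def f(n, x):
    return float(n) if 0.0 < x < 1.0 / n else 0.0

def integral(n, m=100_000):
    # midpoint rule on [0, 1]
    h = 1.0 / m
    return sum(f(n, (k + 0.5) * h) for k in range(m)) * h

print([integral(n) for n in (1, 10, 100)])   # each is 1: lim inf ∫ f_n = 1
print([f(50, x) for x in (0.05, 0.5, 0.9)])  # each is 0: ∫ lim inf f_n = 0
```

The mass of each fₙ sits on the shrinking interval (0, 1/n); it never disappears from the integrals, but it slips past every fixed point, which is exactly the gap Fatou's inequality allows.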

Despite the inequality going only one way, Fatou's Lemma is an indispensable tool. It is used in the proof of the Dominated Convergence Theorem, in establishing existence results via weak convergence, and throughout probability theory (e.g., proving that expectations of non-negative random variables are lower semicontinuous under convergence in distribution). It is often the first result applied when analyzing a sequence of integrals.


Key insight: The "escape of mass" phenomenon — where a sequence of functions with constant integral converges pointwise to zero — is the fundamental obstruction to interchanging limits and integrals. MCT prevents it via monotonicity, and DCT prevents it via domination. Fatou's Lemma acknowledges it and gives the best possible one-sided bound.

The Dominated Convergence Theorem

Theorem (DCT): Let (fₙ) be a sequence of measurable functions with fₙ → f pointwise almost everywhere. Suppose there exists an integrable function g ≥ 0 with |fₙ| ≤ g for all n (the dominating function). Then f is integrable, and ∫ f dμ = limₙ ∫ fₙ dμ. Moreover, ∫ |fₙ − f| dμ → 0 (convergence in L¹).

The DCT is the most widely used convergence theorem in practice. Its hypotheses are easy to check: find a single integrable function that bounds the entire sequence. The conclusion is strong: not only do the integrals converge, but convergence holds in the L¹ norm. The DCT is used to differentiate under the integral sign, to justify term-by-term integration of series, and throughout probability theory (e.g., computing expectations of transformed random variables).

The proof of DCT is elegant: apply Fatou's Lemma to the non-negative functions g + fₙ and g − fₙ separately, then combine. The dominating function g absorbs the "escape of mass" that would otherwise cause the inequality in Fatou's Lemma to be strict.
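Spelled out, the two applications of Fatou's Lemma read as follows (both g + fₙ and g − fₙ are non-negative because |fₙ| ≤ g):

```latex
% Fatou applied to g + f_n \ge 0:
\int (g + f)\,d\mu \le \liminf_n \int (g + f_n)\,d\mu
  = \int g\,d\mu + \liminf_n \int f_n\,d\mu
\;\Longrightarrow\; \int f\,d\mu \le \liminf_n \int f_n\,d\mu.

% Fatou applied to g - f_n \ge 0:
\int (g - f)\,d\mu \le \liminf_n \int (g - f_n)\,d\mu
  = \int g\,d\mu - \limsup_n \int f_n\,d\mu
\;\Longrightarrow\; \limsup_n \int f_n\,d\mu \le \int f\,d\mu.
```

Chaining the two conclusions gives limsup ∫ fₙ ≤ ∫ f ≤ liminf ∫ fₙ, which forces limₙ ∫ fₙ = ∫ f. (Subtracting ∫ g from both sides is legitimate precisely because g is integrable, so ∫ g is finite.)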

Example: Let fₙ(x) = sin(x/n) / (1 + x²) for x ∈ ℝ. Then fₙ → 0 pointwise, and |fₙ(x)| ≤ 1/(1 + x²) = g(x), which is integrable (∫ g = π). By DCT, ∫ fₙ → ∫ 0 = 0. (In this particular case each ∫ fₙ over ℝ is already 0 by oddness, but the same DCT argument works on [0, ∞), where the integrals are genuinely nonzero.) Without the DCT, establishing such limits would require ad hoc estimates for each specific sequence.
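A numerical sketch of this example on the half-line [0, ∞) (over all of ℝ each integral vanishes by symmetry, since sin(x/n)/(1 + x²) is odd). Truncating the domain at a large upper limit and using a midpoint rule are approximation choices for the demo; the neglected tail is controlled by the dominating function, ∫_R^∞ dx/(1 + x²) = π/2 − arctan R ≈ 1/R.

```python
import math

def f(n, x):
    # f_n(x) = sin(x/n) / (1 + x**2), dominated by g(x) = 1/(1 + x**2)
    return math.sin(x / n) / (1.0 + x * x)

def integral(n, upper=200.0, m=200_000):
    # midpoint rule on [0, upper]; the neglected tail is at most
    # pi/2 - atan(upper), roughly 1/upper
    h = upper / m
    return sum(f(n, (k + 0.5) * h) for k in range(m)) * h

for n in (1, 10, 100):
    print(n, integral(n))  # shrinks toward 0 as n grows, as DCT predicts
```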

Key Takeaways

  • MCT for monotone limits — if 0 ≤ fₙ ↗ f, then ∫ fₙ → ∫ f. No domination needed; monotonicity and non-negativity suffice. This also gives term-by-term integration of non-negative series.
  • Fatou gives a one-sided bound — ∫ lim inf fₙ ≤ lim inf ∫ fₙ for non-negative sequences. The inequality can be strict when mass escapes, but this lower bound is universally applicable and surprisingly powerful.
  • DCT is the workhorse — if |fₙ| ≤ g with g integrable and fₙ → f a.e., then ∫ fₙ → ∫ f. This is the theorem you reach for most often in applications: differentiation under the integral, term-by-term integration, and computing limits of expectations.