Learn to analyze sequences of operations using potential functions, proving that expensive operations are rare enough to be efficient
Sometimes individual operations can be expensive, but they "pay for themselves" by making future operations cheaper. Amortized analysis captures this by spreading costs over sequences of operations.
The potential method assigns "potential energy" to data structure states. Expensive operations drain potential that cheap operations accumulated. If the potential starts at zero and never goes negative, total actual cost ≤ total amortized cost.
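Written out, with cᵢ the actual cost of operation i and Dᵢ the structure after it, the amortized cost and the telescoping sum are:

```latex
\hat{c}_i = c_i + \Phi(D_i) - \Phi(D_{i-1}),
\qquad
\sum_{i=1}^{n} \hat{c}_i
  = \sum_{i=1}^{n} c_i + \Phi(D_n) - \Phi(D_0)
  \;\ge\; \sum_{i=1}^{n} c_i
\quad \text{when } \Phi(D_n) \ge \Phi(D_0) = 0.
```

The ΔΦ terms cancel pairwise, so only the first and last potentials survive; this is why the sum of amortized costs bounds the sum of actual costs.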
In this section, you'll see how dynamic arrays achieve O(1) amortized insertion, how splay trees achieve O(log n) amortized access, and how to design your own potential functions.
Inserting into a full array requires copying all n elements — an O(n) operation. But with the potential function Φ = 2n - capacity, each insert has amortized cost O(1). Watch the potential rise and fall!
When we insert without resizing, n grows by 1 while capacity stays fixed, so Φ increases by 2 (amortized cost = 1 + 2 = 3). When the array is full (n = capacity, so Φ = 2n − n = n), resizing doubles the capacity and the insert costs n + 1; the new potential is 2(n + 1) − 2n = 2, so the potential drop of n − 2 gives amortized cost (n + 1) + (2 − n) = 3.
| Op | Actual | Φ before | Φ after | ΔΦ | Amortized |
|---|---|---|---|---|---|
| + 1 | 1 | 0 | 1 | +1 | 2 |
| 🔄 2 | 2 | 1 | 2 | +1 | 3 |
| 🔄 3 | 3 | 2 | 2 | +0 | 3 |
| + 4 | 1 | 2 | 4 | +2 | 3 |
| 🔄 5 | 5 | 4 | 2 | -2 | 3 |
| + 6 | 1 | 2 | 4 | +2 | 3 |
| + 7 | 1 | 4 | 6 | +2 | 3 |
| + 8 | 1 | 6 | 8 | +2 | 3 |
| 🔄 9 | 9 | 8 | 2 | -6 | 3 |
| + 10 | 1 | 2 | 4 | +2 | 3 |
| + 11 | 1 | 4 | 6 | +2 | 3 |
| + 12 | 1 | 6 | 8 | +2 | 3 |
| + 13 | 1 | 8 | 10 | +2 | 3 |
| + 14 | 1 | 10 | 12 | +2 | 3 |
| + 15 | 1 | 12 | 14 | +2 | 3 |
| + 16 | 1 | 14 | 16 | +2 | 3 |
| 🔄 17 | 17 | 16 | 2 | -14 | 3 |
| + 18 | 1 | 2 | 4 | +2 | 3 |
| + 19 | 1 | 4 | 6 | +2 | 3 |
| + 20 | 1 | 6 | 8 | +2 | 3 |
| + 21 | 1 | 8 | 10 | +2 | 3 |
| + 22 | 1 | 10 | 12 | +2 | 3 |
| + 23 | 1 | 12 | 14 | +2 | 3 |
| + 24 | 1 | 14 | 16 | +2 | 3 |
| + 25 | 1 | 16 | 18 | +2 | 3 |
| + 26 | 1 | 18 | 20 | +2 | 3 |
| + 27 | 1 | 20 | 22 | +2 | 3 |
| + 28 | 1 | 22 | 24 | +2 | 3 |
| + 29 | 1 | 24 | 26 | +2 | 3 |
| + 30 | 1 | 26 | 28 | +2 | 3 |
| + 31 | 1 | 28 | 30 | +2 | 3 |
| + 32 | 1 | 30 | 32 | +2 | 3 |
Key Insight:
Even though individual insertions can cost O(n), the average cost over n insertions is only O(1) because expensive resizes are rare and "paid for" by the potential accumulated during cheap insertions.
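The table above can be reproduced with a short simulation. This is a minimal sketch, assuming a doubling growth policy that starts from capacity 1; the helper name `simulate_inserts` is hypothetical:

```python
def simulate_inserts(n_ops):
    """Simulate appends into a doubling array, tracking Phi = 2*size - capacity."""
    size, capacity = 0, 0
    phi = 0  # Phi = 2*size - capacity, starts at 0
    rows = []
    for _ in range(n_ops):
        if size == capacity:
            # Resize: copy `size` elements into a doubled array, then insert one.
            actual = size + 1
            capacity = max(1, 2 * capacity)
        else:
            actual = 1  # room available: plain insert
        size += 1
        new_phi = 2 * size - capacity
        amortized = actual + (new_phi - phi)  # actual cost + change in potential
        rows.append((actual, amortized))
        phi = new_phi
    return rows

rows = simulate_inserts(32)
print(rows[:5])   # first five (actual, amortized) pairs
print(rows[16])   # the expensive resize at op 17
```

Running this reproduces the table: op 1 is (1, 2), the resize at op 17 is (17, 3), and every amortized cost is at most 3 even though individual actual costs grow without bound.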
Incrementing a binary counter can flip many bits (like 0111 → 1000 flips 4 bits). But with Φ = number of 1-bits, the amortized cost per increment is just O(1). Each 1→0 flip uses potential saved by a previous 0→1 flip.
Incrementing a binary counter flips trailing 1s to 0s, then one 0 to 1. Each 0→1 flip costs 1 and increases Φ by 1 (amortized = 2). Each 1→0 flip costs 1 but decreases Φ by 1 (amortized = 0).
Why O(1) Amortized?
Each increment flips one bit from 0→1 (adds 1 to Φ) and possibly many bits from 1→0 (subtracts from Φ). But we can only flip 1→0 if we previously flipped 0→1. Over n increments: total flips ≤ 2n.
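The bookkeeping is easy to verify in code. This sketch (bits stored LSB-first; the helper name `increment` is hypothetical) checks that actual cost + ΔΦ is exactly 2 for every increment:

```python
def increment(bits):
    """Increment a binary counter (list of 0/1, least significant bit first).
    Returns the number of bit flips performed (the actual cost)."""
    flips = 0
    i = 0
    while i < len(bits) and bits[i] == 1:
        bits[i] = 0   # 1 -> 0 flip: releases one unit of potential
        flips += 1
        i += 1
    if i == len(bits):
        bits.append(0)  # counter overflowed its current width; grow it
    bits[i] = 1       # 0 -> 1 flip: stores one unit of potential
    return flips + 1

bits = [0] * 4
total_flips = 0
for _ in range(16):
    phi_before = sum(bits)            # Phi = number of 1-bits
    cost = increment(bits)
    phi_after = sum(bits)
    assert cost + (phi_after - phi_before) == 2  # amortized cost is exactly 2
    total_flips += cost
print(total_flips)  # total flips over 16 increments
```

Each increment performs exactly one 0→1 flip (ΔΦ contribution +1) and t 1→0 flips (ΔΦ contribution −t) at actual cost t + 1, so the amortized cost is always (t + 1) + (1 − t) = 2.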
Splay trees move accessed nodes to the root, potentially requiring O(n) rotations. The rank-based potential function Φ = Σ log₂(size(x)) proves that each access costs O(log n) amortized.
Each node's rank is log₂ of its subtree size, and Φ sums the ranks of all nodes. Splaying moves the accessed node to the root, restructuring the tree. Unbalanced trees hold more potential than balanced ones: a chain of n nodes has Φ = log₂(n!) ≈ n log₂ n, while a balanced tree has Φ = O(n), so expensive restructuring accesses are paid for by the potential they release. In the table below, the actual cost of an access is rotations + 1, and amortized = (rotations + 1) + ΔΦ.
| Access | Rotations | Φ before | Φ after | Amortized |
|---|---|---|---|---|
| 4 | 3 | 10.0 | 5.0 | -1.0 |
| 2 | 1 | 5.0 | 6.0 | 3.0 |
| 6 | 3 | 6.0 | 5.0 | 3.0 |
| 1 | 3 | 5.0 | 6.0 | 5.0 |
| 7 | 3 | 6.0 | 7.0 | 5.0 |
| 3 | 4 | 7.0 | 7.0 | 5.0 |
| 5 | 4 | 7.0 | 7.0 | 5.0 |
Why O(log n) Amortized?
The Access Lemma proves that splaying has amortized cost ≤ 3(rank(root) - rank(x)) + 1. Since rank(root) = O(log n) and rank(x) ≥ 0, each access costs O(log n) amortized.
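A full splay-tree implementation is too long for a short example, but the potential function itself is simple to compute. This sketch (the helper name `rank_potential` and the `{node: (left, right)}` tree encoding are assumptions for illustration) confirms that a degenerate chain on seven keys stores far more potential than a balanced tree on the same keys, which is exactly the reserve a splay access can spend on restructuring:

```python
import math

def rank_potential(children):
    """Phi = sum over all nodes of log2(subtree size), for a tree encoded as
    {node: (left_child, right_child)} with None marking a missing child."""
    def size(node):
        if node is None:
            return 0
        left, right = children[node]
        return 1 + size(left) + size(right)
    return sum(math.log2(size(node)) for node in children)

# Seven keys as a right-leaning chain: 1 -> 2 -> ... -> 7
chain = {i: (None, i + 1 if i < 7 else None) for i in range(1, 8)}
# The same seven keys as a balanced BST rooted at 4
balanced = {4: (2, 6), 2: (1, 3), 6: (5, 7),
            1: (None, None), 3: (None, None),
            5: (None, None), 7: (None, None)}

print(rank_potential(chain))     # log2(7!) ~ 12.3
print(rank_potential(balanced))  # ~ 5.98
```

The chain's potential is log₂(7!) ≈ 12.3 versus about 6.0 for the balanced tree, matching the Φ values in the table above in scale: accesses that flatten a long path release several units of potential.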
Explore different amortized analysis techniques: the potential method uses energy functions, while the accounting method assigns credits to operations. Both prove the same O(1) amortized bounds!
PUSH costs 1 and increases Φ by 1 → amortized cost = 2. MULTIPOP(k) costs k but decreases Φ by k → amortized cost = 0.
| Operation | Actual | Φ before | Φ after | ΔΦ | Amortized |
|---|---|---|---|---|---|
| push | 1 | 0 | 1 | +1 | 2 |
| push | 1 | 1 | 2 | +1 | 2 |
| push | 1 | 2 | 3 | +1 | 2 |
| push | 1 | 3 | 4 | +1 | 2 |
| push | 1 | 4 | 5 | +1 | 2 |
| multipop(3) | 3 | 5 | 2 | -3 | 0 |
| push | 1 | 2 | 3 | +1 | 2 |
| push | 1 | 3 | 4 | +1 | 2 |
| multipop(4) | 4 | 4 | 0 | -4 | 0 |
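The same table falls out of a few lines of code. This sketch (the helper name `run_ops` is hypothetical) replays the sequence above and checks each amortized cost against Φ = number of elements on the stack:

```python
def run_ops(ops):
    """Replay push / multipop operations on a stack, returning
    (actual, amortized) per op with Phi = len(stack)."""
    stack = []
    results = []
    for op in ops:
        phi_before = len(stack)
        if op == "push":
            stack.append(0)
            actual = 1
        else:                       # ("multipop", k)
            k = min(op[1], len(stack))  # can't pop more than the stack holds
            if k:
                del stack[-k:]
            actual = k
        amortized = actual + (len(stack) - phi_before)
        results.append((actual, amortized))
    return results

ops = ["push"] * 5 + [("multipop", 3)] + ["push"] * 2 + [("multipop", 4)]
print(run_ops(ops))
```

Every push has amortized cost 2 and every multipop has amortized cost 0, so the totals agree: 14 actual = 14 amortized, since Φ returns to zero at the end. This is the same argument the accounting method makes by depositing one credit per push to pay for its eventual pop.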
You now understand how to prove amortized bounds: choose a potential function Φ that starts at zero and never goes negative, show that actual cost + ΔΦ stays small for every operation, and conclude that the total actual cost of any sequence is bounded by the sum of the amortized costs.
Next: We'll explore Union-Find with path compression and the mysterious inverse Ackermann function — achieving nearly O(1) per operation!