Linear regression, portfolio optimization, and optimal transport
Optimization is everywhere: machine learning minimizes loss functions, finance maximizes risk-adjusted returns, and logistics minimizes transportation costs. This section walks through three examples: linear regression, portfolio optimization, and optimal transport.
Linear regression fits a line to data by minimizing the mean squared error. The loss surface is a convex paraboloid, guaranteeing gradient descent finds the global minimum. Watch the fit improve as GD converges.
Key insight: Linear regression is the simplest case of convex optimization in ML. The normal equations give the closed-form solution, but GD scales to millions of data points.
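The two solution routes can be compared directly. The sketch below, using made-up synthetic data and an illustrative step size, runs gradient descent on the MSE and checks it against the closed-form least-squares solution:

```python
import numpy as np

# Illustrative sketch: fit y = w*x + b by gradient descent on the MSE,
# then compare with the normal-equation (closed-form) solution.
# The data, learning rate, and iteration count are assumptions for the demo.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)
y = 3.0 * x + 1.0 + rng.normal(0, 0.1, 200)

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    err = w * x + b - y              # residuals
    w -= lr * 2 * np.mean(err * x)   # d(MSE)/dw
    b -= lr * 2 * np.mean(err)       # d(MSE)/db

# Closed form: solve the least-squares system with a bias column.
X = np.column_stack([x, np.ones_like(x)])
w_cf, b_cf = np.linalg.lstsq(X, y, rcond=None)[0]
print(w, b, w_cf, b_cf)
```

Because the loss is a convex paraboloid, both routes land on the same minimizer; the GD estimates agree with the closed form to several decimal places.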
Markowitz mean-variance optimization finds portfolios that maximize expected return for a given level of risk. The efficient frontier traces the best possible risk-return tradeoffs.
Move the slider to explore portfolios along the frontier, from minimum variance to maximum return.
Key insight: Diversification is not just intuition — it is an optimization result. The minimum variance portfolio typically assigns non-zero weight to many assets, reducing risk below any individual asset.
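The minimum variance portfolio has a closed form when the only constraint is that weights sum to one (shorting allowed): w = Σ⁻¹1 / (1ᵀΣ⁻¹1). A minimal sketch, using a made-up 3-asset covariance matrix:

```python
import numpy as np

# Sketch of the closed-form minimum-variance portfolio.
# The covariance matrix below is an illustrative assumption, not real data.
Sigma = np.array([[0.04, 0.006, 0.010],
                  [0.006, 0.09, 0.012],
                  [0.010, 0.012, 0.16]])

ones = np.ones(3)
w = np.linalg.solve(Sigma, ones)   # Σ⁻¹1
w /= ones @ w                      # normalize: w = Σ⁻¹1 / (1ᵀΣ⁻¹1)

port_var = w @ Sigma @ w
print(w.round(3), port_var)
```

Here the portfolio variance comes out below the variance of every individual asset, which is the diversification result stated above: concentrating all weight on the safest single asset is itself a feasible portfolio, so the optimum can only do better.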
Optimal transport asks: what is the cheapest way to move mass from one distribution to another? The cost of a plan is the sum, over all moves, of distance times mass moved. The optimal plan minimizes this total cost, giving the Earth Mover's Distance between the distributions.
Compare transport plans: Nearest greedily matches nearby bins, Spread distributes evenly, and Optimal minimizes total distance times mass moved.
Key insight: Optimal transport connects to LP duality (Kantorovich formulation), geometry (Wasserstein metric), and machine learning (Wasserstein GANs). It is one of the most active areas of modern applied mathematics.
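In one dimension with |i − j| cost, the optimal plan has a simple form: mass is matched in cumulative order, so the Earth Mover's Distance reduces to the sum of absolute differences between the two CDFs. A sketch with two illustrative histograms on the same unit-spaced grid:

```python
import numpy as np

# 1-D Earth Mover's Distance between two histograms on the same grid.
# In 1-D the optimal transport plan matches cumulative mass, so
# EMD = sum_i |CDF_a(i) - CDF_b(i)| (times bin spacing, here 1).
# The two distributions below are made-up examples.
a = np.array([0.1, 0.4, 0.3, 0.2, 0.0])   # source distribution
b = np.array([0.0, 0.1, 0.2, 0.4, 0.3])   # target distribution

emd = np.abs(np.cumsum(a - b)).sum()
print(emd)
```

This closed form only holds in one dimension; in general the problem is the Kantorovich linear program mentioned above, solvable with LP or Sinkhorn-style solvers.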