📈 Nonlinear Optimization Unit 7 Review

7.2 Lagrange multiplier theory

Written by the Fiveable Content Team • Last updated September 2025
Lagrange multipliers are game-changers in constrained optimization. They let us tackle tricky problems by adding new variables and turning constraints into part of the main function. It's like having a secret weapon for solving complex math puzzles.

This method introduces the Lagrangian function, which combines our goal and limits into one expression. We're looking for a special point called a saddle point, where the function is at its highest in one direction and lowest in another. It's the key to cracking these optimization challenges.

Lagrange Multiplier Theory

Fundamental Concepts and Functions

  • Lagrange multipliers introduce additional variables to solve constrained optimization problems
  • Lagrangian function combines objective function and constraints into a single expression
    • Formulated as $L(x, \lambda) = f(x) + \lambda g(x)$, where $f(x)$ is the objective function and $g(x)$ represents the constraint
    • Allows transformation of constrained problem into unconstrained problem
  • Saddle point of the Lagrangian represents the solution to the optimization problem
    • Characterized by a minimum with respect to the original variables $x$ and a maximum with respect to the multiplier $\lambda$ (for a minimization problem)
    • Occurs where all partial derivatives of the Lagrangian function equal zero (a worked sketch follows this list)
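
To make the stationarity idea concrete, here is a minimal sketch that solves $\nabla L = 0$ symbolically with SymPy. The example problem (minimize $f(x, y) = x^2 + y^2$ subject to $x + y = 1$) is our own illustration, not one from the guide.

```python
# Minimal Lagrange-multiplier sketch (illustrative example, not from the guide):
# minimize f(x, y) = x^2 + y^2 subject to g(x, y) = x + y - 1 = 0
# by solving the stationarity conditions of L = f + lam * g with SymPy.
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)

f = x**2 + y**2        # objective function
g = x + y - 1          # equality constraint, g(x, y) = 0
L = f + lam * g        # Lagrangian

# Set every partial derivative of the Lagrangian to zero and solve the system.
stationarity = [sp.diff(L, v) for v in (x, y, lam)]
solutions = sp.solve(stationarity, (x, y, lam), dict=True)
print(solutions)       # [{x: 1/2, y: 1/2, lam: -1}]
```

The solver returns $x = y = 1/2$ with $\lambda = -1$, the unique stationary point of this Lagrangian and the constrained minimizer of $f$.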

Duality and Theoretical Foundations

  • Duality principle establishes relationship between primal and dual problems
    • Primal problem focuses on original optimization problem
    • Dual problem provides alternative formulation, often easier to solve
  • Weak duality theorem states optimal value of dual problem provides lower bound for primal problem
  • Strong duality theorem asserts optimal values of primal and dual problems are equal under certain conditions (for example, convexity together with a constraint qualification such as Slater's condition)
  • Duality gap measures difference between primal and dual optimal values
    • Zero duality gap indicates strong duality holds (see the worked example after this list)
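
As a small worked illustration (our own example, not one from the guide), take the convex problem $\min_x x^2$ subject to $x \ge 1$, written with the inequality constraint $g(x) = 1 - x \le 0$; its Lagrange dual can be computed in closed form:

```latex
% Illustrative primal-dual pair (not from the study guide).
\begin{align*}
  \text{Primal: } & p^* = \min_{x}\; x^2 \quad \text{s.t. } 1 - x \le 0
      \qquad\Rightarrow\qquad p^* = 1 \text{ at } x^* = 1, \\
  \text{Dual function: } & q(\mu) = \min_{x}\; \bigl(x^2 + \mu(1 - x)\bigr)
      = \mu - \tfrac{\mu^2}{4} \quad (\mu \ge 0), \\
  \text{Dual: } & d^* = \max_{\mu \ge 0}\; q(\mu) = 1 \text{ at } \mu^* = 2, \\
  \text{Duality gap: } & p^* - d^* = 0 \quad \text{(strong duality holds for this convex problem).}
\end{align*}
```

Every dual value $q(\mu)$ with $\mu \ge 0$ lies below $p^* = 1$ (weak duality), and the best dual value reaches it exactly, so the duality gap is zero.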

Optimality Conditions

Karush-Kuhn-Tucker (KKT) Conditions

  • KKT conditions provide necessary conditions for optimal solutions in nonlinear programming
  • Primal feasibility ensures all constraints are satisfied
    • Includes equality constraints $h_i(x) = 0$ and inequality constraints $g_j(x) \leq 0$
  • Dual feasibility requires non-negative Lagrange multipliers for inequality constraints
    • Expressed as $\mu_j \geq 0$ for all $j$, where $\mu_j$ is the multiplier attached to inequality constraint $g_j$
  • Stationarity condition states gradient of Lagrangian function must equal zero at optimal point
    • Formulated as $\nabla f(x^*) + \sum_{i=1}^{m} \lambda_i^* \nabla h_i(x^*) + \sum_{j=1}^{p} \mu_j^* \nabla g_j(x^*) = 0$ (a numerical check of the full KKT system is sketched after this list)
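
To see the four conditions operationally, here is a minimal numerical check; the problem (minimize $(x_0 - 2)^2 + (x_1 - 2)^2$ subject to $x_0 + x_1 \le 2$) and the candidate pair $x^* = (1, 1)$, $\mu^* = 2$ are illustrative assumptions, not an example from the guide.

```python
# KKT verification sketch at a candidate point (illustrative example).
import numpy as np

def grad_f(x):
    # Gradient of f(x) = (x0 - 2)^2 + (x1 - 2)^2
    return np.array([2 * (x[0] - 2), 2 * (x[1] - 2)])

def g(x):
    # Inequality constraint g(x) = x0 + x1 - 2 <= 0
    return x[0] + x[1] - 2

def grad_g(x):
    return np.array([1.0, 1.0])

x_star = np.array([1.0, 1.0])   # candidate optimal point (assumed)
mu_star = 2.0                   # candidate multiplier (assumed)
tol = 1e-8

stationarity = np.allclose(grad_f(x_star) + mu_star * grad_g(x_star), 0.0, atol=tol)
primal_feasible = g(x_star) <= tol
dual_feasible = mu_star >= 0
complementary = abs(mu_star * g(x_star)) <= tol

print(stationarity, primal_feasible, dual_feasible, complementary)  # True True True True
```

All four booleans come back true, so $(x^*, \mu^*)$ satisfies the KKT conditions for this problem.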

Constraint Qualification and Complementary Slackness

  • Constraint qualification ensures KKT conditions are necessary for optimality
    • Linear independence constraint qualification (LICQ) requires gradients of active constraints to be linearly independent
    • Mangasarian-Fromovitz constraint qualification (MFCQ) provides alternative condition when LICQ fails
  • Complementary slackness connects dual variables to constraints
    • For inequality constraints, either constraint is active or corresponding Lagrange multiplier is zero
    • Expressed as $\mu_j^* g_j(x^*) = 0$ for all $j$
  • Strict complementarity occurs when exactly one of constraint or multiplier is zero for each inequality
    • Combined with constraint qualifications such as LICQ, supports uniqueness and stability of the Lagrange multipliers at the optimal solution (a small numerical check of LICQ and strict complementarity is sketched below)
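
The sketch below checks LICQ and strict complementarity numerically at a candidate point. The constraints, the point $x^* = (1, 1)$, and the multipliers $\mu^* = (2, 0)$ are illustrative assumptions, not an example taken from the guide.

```python
# LICQ and strict-complementarity check at a candidate point (illustrative).
import numpy as np

def g(x):
    # Inequality constraints: g1(x) = x0 + x1 - 2 <= 0,  g2(x) = -x0 <= 0
    return np.array([x[0] + x[1] - 2, -x[0]])

def grad_g(x):
    return np.array([[1.0, 1.0],     # gradient of g1
                     [-1.0, 0.0]])   # gradient of g2

x_star = np.array([1.0, 1.0])    # candidate optimal point (assumed)
mu_star = np.array([2.0, 0.0])   # candidate multipliers (assumed)
tol = 1e-8

g_vals = g(x_star)
active = np.abs(g_vals) <= tol            # here g1 is active, g2 is inactive

# LICQ: gradients of the active constraints are linearly independent
# (the stacked gradient matrix has full row rank).
A = grad_g(x_star)[active]
licq_holds = np.linalg.matrix_rank(A) == A.shape[0]

# Strict complementarity: for each j, exactly one of g_j(x*) and mu_j* is zero.
strictly_complementary = np.all((np.abs(g_vals) <= tol) != (np.abs(mu_star) <= tol))

print(licq_holds, strictly_complementary)   # True True
```

Both checks pass here: the single active gradient is trivially independent, and for each constraint exactly one of $g_j(x^*)$ and $\mu_j^*$ vanishes.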