📊 Mathematical Methods for Optimization: Unit 8 Review

8.1 Optimality conditions for unconstrained problems

Written by the Fiveable Content Team • Last updated September 2025

Unconstrained optimization is all about finding the best solution without any limits. We'll look at how to spot optimal points using gradients and Hessian matrices, which tell us about a function's slope and curvature.

We'll also explore the difference between local and global optima, and how convexity affects our search. Understanding these concepts is key to solving real-world problems where we want to maximize or minimize something.

Optimality Conditions for Unconstrained Optimization

Gradient and Hessian in Optimality Conditions

  • First-order necessary condition (FONC) for local optimum requires gradient of objective function to equal zero at optimal point
  • Gradient vector ∇f(x) contains first-order partial derivatives of objective function f(x) with respect to each decision variable
  • Second-order necessary condition (SONC) involves Hessian matrix of objective function
    • Must be positive semidefinite for local minimum
    • Must be negative semidefinite for local maximum
  • Hessian matrix H(x) contains second-order partial derivatives of f(x)
  • Second-order sufficient condition (SOSC) for local minimum requires positive definite Hessian matrix at stationary point (see the sketch below)
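To make these conditions concrete, here is a minimal sketch (assuming NumPy is available) that checks the FONC and the SOSC at a candidate point. The objective is the quadratic f(x, y) = x^2 + 2y^2 worked out later in this guide; the helpers grad_f and hess_f are our own illustration, not library functions.

```python
import numpy as np

# Objective: f(x, y) = x^2 + 2y^2 (the quadratic example from this guide)
def grad_f(x):
    # First-order partial derivatives
    return np.array([2 * x[0], 4 * x[1]])

def hess_f(x):
    # Second-order partial derivatives (constant for a quadratic)
    return np.array([[2.0, 0.0], [0.0, 4.0]])

x = np.array([0.0, 0.0])  # candidate point

# FONC: the gradient must vanish at a local optimum
fonc = np.allclose(grad_f(x), 0.0)

# SOSC for a local minimum: Hessian positive definite,
# i.e. all eigenvalues strictly positive
eigs = np.linalg.eigvalsh(hess_f(x))
sosc = bool(np.all(eigs > 0))

print(f"FONC: {fonc}, eigenvalues: {eigs}, SOSC: {sosc}")
```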

Global Optimality and Coercivity

  • Global optimality conditions depend on convexity or concavity of objective function over entire feasible region
  • For twice continuously differentiable function, stationary point is global minimum if function convex and global maximum if function concave
  • Coercivity concept crucial for establishing existence of global optima in unbounded domains
    • Occurs when objective function approaches infinity as the norm of the decision vector approaches infinity, i.e. f(x) → ∞ as ‖x‖ → ∞ (illustrated below)
  • Global optimality more challenging to determine in non-convex problems
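Coercivity can be illustrated (though of course not proved) numerically by evaluating the objective along directions of growing norm and watching the values blow up. A minimal sketch, assuming NumPy; the coercive test function f(x) = ||x||^4 - 3*||x||^2 is our own hypothetical example:

```python
import numpy as np

# Hypothetical coercive objective: f(x) = ||x||^4 - 3*||x||^2.
# It grows without bound as ||x|| -> infinity, so a global
# minimizer exists even though the domain R^2 is unbounded.
def f(x):
    r2 = float(np.dot(x, x))
    return r2**2 - 3 * r2

rng = np.random.default_rng(0)
for r in (1.0, 10.0, 100.0):
    d = rng.standard_normal(2)
    d /= np.linalg.norm(d)            # random unit direction
    print(f"||x|| = {r:6.1f}  ->  f(x) = {f(r * d):.3e}")
```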

First- and Second-Order Optimality Conditions

Identifying and Classifying Stationary Points

  • Stationary point identified when gradient vector equals zero (∇f(x) = 0)
  • Hessian matrix used to classify stationary points
    • Positive definite Hessian indicates local minimum
    • Negative definite Hessian indicates local maximum
    • Indefinite Hessian indicates saddle point (neither local minimum nor maximum)
  • Eigenvalues of Hessian matrix at stationary point determine its definiteness
    • All positive eigenvalues indicate positive definiteness (local minimum)
    • All negative eigenvalues indicate negative definiteness (local maximum)
    • Mixed positive and negative eigenvalues indicate indefiniteness (saddle point)
  • Singular Hessian (at least one zero eigenvalue) may require higher-order derivatives to determine nature of stationary point (treated as inconclusive in the sketch below)
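The eigenvalue test translates directly into code. A minimal sketch, assuming NumPy; classify_stationary_point is a hypothetical helper written for this guide, not a library routine:

```python
import numpy as np

def classify_stationary_point(H, tol=1e-10):
    """Classify a stationary point from the eigenvalues of its (symmetric) Hessian H."""
    eigs = np.linalg.eigvalsh(H)
    if np.all(eigs > tol):
        return "local minimum (positive definite)"
    if np.all(eigs < -tol):
        return "local maximum (negative definite)"
    if np.any(eigs > tol) and np.any(eigs < -tol):
        return "saddle point (indefinite)"
    return "inconclusive (singular Hessian: check higher-order derivatives)"

print(classify_stationary_point(np.array([[2.0, 0.0], [0.0, 4.0]])))   # local minimum
print(classify_stationary_point(np.array([[2.0, 0.0], [0.0, -2.0]])))  # saddle point
```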

Examples of Optimality Conditions

  • Quadratic function: f(x, y) = x^2 + 2y^2
    • Gradient: ∇f(x, y) = (2x, 4y)
    • Hessian: H = \begin{bmatrix} 2 & 0 \\ 0 & 4 \end{bmatrix}
    • Stationary point at (0, 0); positive definite Hessian indicates local minimum
  • Saddle function: f(x, y) = x^2 - y^2
    • Gradient: ∇f(x, y) = (2x, -2y)
    • Hessian: H = \begin{bmatrix} 2 & 0 \\ 0 & -2 \end{bmatrix}
    • Stationary point at (0, 0); indefinite Hessian indicates saddle point
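Both worked examples can be reproduced symbolically. A minimal sketch, assuming SymPy is installed; it recovers the gradients, stationary points, and Hessian eigenvalues stated above:

```python
import sympy as sp

x, y = sp.symbols("x y", real=True)

for f in (x**2 + 2*y**2, x**2 - y**2):
    grad = [sp.diff(f, v) for v in (x, y)]          # gradient components
    H = sp.hessian(f, (x, y))                       # Hessian matrix
    stationary = sp.solve(grad, (x, y), dict=True)  # solve grad f = 0
    print(f"f = {f}: gradient {grad}, stationary {stationary}, "
          f"Hessian eigenvalues {list(H.eigenvals())}")
```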

Convex vs Non-Convex Optimization Problems

Properties of Convex Optimization

  • Function f(x) convex if line segment between any two points on its graph lies above or on graph
  • Second-order condition for convexity requires positive semidefinite Hessian matrix for all x in domain
  • Convex optimization problems have simplified optimal solution search
    • Any local minimum also global minimum
  • Set of optimal solutions for convex problem forms convex set
  • Convex problems satisfy Karush-Kuhn-Tucker (KKT) conditions as sufficient conditions for optimality (also necessary under mild regularity assumptions such as a constraint qualification)
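The practical payoff of "any local minimum is also global" is that an ordinary local solver suffices for a convex problem. A minimal sketch, assuming SciPy; several local searches on the convex quadratic from this guide, started from scattered points, all land on the same minimizer:

```python
import numpy as np
from scipy.optimize import minimize

# Convex objective: f(x, y) = x^2 + 2y^2 (constant positive definite Hessian)
f = lambda z: z[0]**2 + 2 * z[1]**2

rng = np.random.default_rng(1)
solutions = [minimize(f, rng.uniform(-10, 10, size=2)).x for _ in range(5)]
print(np.round(solutions, 6))  # every row is (approximately) (0, 0)
```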

Challenges in Non-Convex Optimization

  • Non-convex optimization problems may have multiple local optima
    • Challenging to determine global optimum
  • Non-convex problems may have disconnected sets of optimal solutions
  • KKT conditions necessary but not sufficient for global optimality in non-convex problems
    • Requires additional global optimization techniques (genetic algorithms, simulated annealing)
  • Examples of non-convex functions
    • f(x) = x^4 - 2x^2 + 1 (multiple local minima)
    • f(x, y) = x^2 + y^2 + sin(xy) (oscillating surface with multiple local optima)
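A minimal sketch of why multiple local optima complicate the search, assuming SciPy. Running a local solver on the first function above from different starting points yields different minimizers (x = -1 or x = +1) depending on where the search begins, which is exactly why multistart or global techniques are needed:

```python
import numpy as np
from scipy.optimize import minimize

# Non-convex objective: f(x) = x^4 - 2x^2 + 1 = (x^2 - 1)^2,
# with minimizers at x = -1 and x = +1 and a local maximum at x = 0.
f = lambda x: x[0]**4 - 2 * x[0]**2 + 1

for x0 in (-2.0, -0.5, 0.5, 2.0):
    res = minimize(f, np.array([x0]))
    print(f"start {x0:+.1f}  ->  minimizer {res.x[0]:+.4f}, f = {res.fun:.2e}")
```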