Fiveable

📈 Nonlinear Optimization Unit 12 Review

12.2 Heuristic methods

Written by the Fiveable Content Team • Last updated September 2025

Heuristic methods are problem-solving techniques that find good solutions quickly, even if they're not perfect. They're crucial for tackling complex nonconvex optimization problems where traditional methods might fail or take too long.

From hill climbing to ant colony optimization, these approaches offer diverse strategies for exploring solution spaces. While they may not guarantee global optima, heuristics provide practical ways to solve real-world problems efficiently.

Local Search Methods

Hill Climbing and Greedy Algorithms

  • Hill climbing iteratively improves a solution by moving to neighboring solutions with better objective function values
    • Starts with an initial solution and explores adjacent solutions
    • Selects the best neighboring solution in each iteration
    • Continues until no better neighboring solution can be found
  • Hill climbing can get stuck in local optima, limiting its effectiveness for complex problems
  • Greedy algorithms make locally optimal choices at each step
    • Construct solutions incrementally by selecting the best available option
    • Often provide fast but suboptimal solutions
    • Work well for problems with optimal substructure and the greedy-choice property (fractional knapsack problem)
  • Both hill climbing and greedy algorithms are computationally efficient but may not find global optima
  • Local search explores the solution space by moving between neighboring solutions
    • Defines a neighborhood structure for the problem
    • Evaluates neighboring solutions using an objective function
    • Accepts moves that improve the current solution
  • Local search can be enhanced with restart mechanisms to escape local optima
  • Iterated local search alternates between local search and perturbation phases
    • Applies local search to find a local optimum
    • Perturbs the solution to escape local optima
    • Repeats the process for a specified number of iterations or until a stopping criterion is met
  • Iterated local search balances exploitation (local search) and exploration (perturbation)
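The hill-climbing and iterated-local-search loop described above can be sketched in Python. This is a minimal illustration for a one-dimensional continuous problem; the step size, perturbation strength, and test function are illustrative choices, not from the text:

```python
import random

def hill_climb(f, x, step=0.1, max_iter=1000):
    """Greedy local search: move to the better of x - step / x + step
    until neither neighbor improves (a local minimum on this grid)."""
    for _ in range(max_iter):
        best = min(x - step, x + step, key=f)
        if f(best) >= f(x):      # no improving neighbor: stuck at a local optimum
            return x
        x = best
    return x

def iterated_local_search(f, x0, restarts=20, kick=2.0, seed=0):
    """Alternate local search (exploitation) with random perturbation
    (exploration) to escape the basin of the current local optimum."""
    rng = random.Random(seed)
    best = hill_climb(f, x0)
    for _ in range(restarts):
        # Perturb the incumbent, then re-run local search from there.
        x = hill_climb(f, best + rng.uniform(-kick, kick))
        if f(x) < f(best):       # keep the perturbed result only if it improves
            best = x
    return best
```

On a multimodal function such as f(x) = (x² − 1)² + 0.3x, plain hill climbing started at x = 2 stops in the inferior right-hand basin near x = 1, while the perturbation phase lets iterated local search reach the better basin near x = −1.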
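The greedy strategy can be illustrated on the knapsack problem mentioned above. For the fractional variant, taking items in order of value-per-weight ratio is actually optimal; for the 0/1 variant the same rule is only a heuristic. A small sketch (function and variable names are illustrative):

```python
def greedy_fractional_knapsack(items, capacity):
    """Greedy heuristic: take items in decreasing value/weight order.

    items: list of (value, weight) pairs; capacity: weight limit.
    Fractions of items are allowed, so the locally optimal choice
    at each step yields a globally optimal total value.
    """
    total = 0.0
    remaining = capacity
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if remaining <= 0:
            break
        take = min(weight, remaining)          # take all of the item, or what fits
        total += value * (take / weight)
        remaining -= take
    return total
```

For example, with items (value, weight) = (60, 10), (100, 20), (120, 30) and capacity 50, the greedy rule takes the first two items whole and two thirds of the third.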

Metaheuristic Algorithms

Tabu Search and Particle Swarm Optimization

  • Tabu search enhances local search by using memory structures
    • Maintains a tabu list of recently visited solutions or solution attributes
    • Prevents revisiting previously explored areas of the search space
    • Allows accepting non-improving moves to escape local optima
    • Uses aspiration criteria to override tabu status in certain cases
  • Particle swarm optimization simulates the social behavior of bird flocking or fish schooling
    • Initializes a population of particles with random positions and velocities
    • Updates particle positions based on personal best and global best solutions
    • Particles move through the search space, guided by their own experience and swarm intelligence
    • Balances exploration and exploitation through inertia weight and acceleration coefficients
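A minimal sketch of tabu search over bit strings, showing the tabu list, the acceptance of non-improving moves, and the aspiration criterion; the tenure and move structure are illustrative assumptions:

```python
from collections import deque

def tabu_search(f, x0, tenure=3, max_iter=50):
    """Maximize f over bit strings by flipping one bit per move.

    Recently flipped positions are tabu for `tenure` iterations; a tabu
    move is still admissible if it beats the best solution found so far
    (aspiration criterion).
    """
    x = list(x0)
    best, best_val = x[:], f(x0)
    tabu = deque(maxlen=tenure)                 # short-term memory of recent moves
    for _ in range(max_iter):
        move, move_val = None, float("-inf")
        for i in range(len(x)):
            y = x[:]
            y[i] = 1 - y[i]
            val = f(tuple(y))
            admissible = i not in tabu or val > best_val   # aspiration
            if admissible and val > move_val:
                move, move_val = i, val
        if move is None:
            break
        x[move] = 1 - x[move]                   # accept best move, even if worse
        tabu.append(move)
        if move_val > best_val:
            best, best_val = x[:], move_val
    return tuple(best), best_val
```

Because the best admissible move is accepted even when it worsens the objective, the search can walk out of a local optimum instead of stopping there, while the tabu list prevents it from immediately undoing the escape.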
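The particle update described above can be sketched as a basic global-best particle swarm; the search box, swarm size, inertia weight, and acceleration coefficients below are illustrative defaults:

```python
import random

def pso(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over [-5, 5]^dim with a global-best particle swarm."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]                       # personal best positions
    pval = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]              # global (swarm) best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull (personal best) + social pull (global best)
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            val = f(X[i])
            if val < pval[i]:
                pbest[i], pval[i] = X[i][:], val
                if val < gval:
                    gbest, gval = X[i][:], val
    return gbest, gval
```

The inertia weight w damps old velocity (favoring exploitation as it shrinks), while c1 and c2 weight each particle's own experience against the swarm's, which is the exploration/exploitation balance the bullet points describe.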

Ant Colony Optimization and Differential Evolution

  • Ant colony optimization mimics the foraging behavior of ants
    • Simulates ants depositing pheromones on paths as they search for food
    • Constructs solutions by probabilistically choosing solution components
    • Updates pheromone levels based on solution quality
    • Applies pheromone evaporation to forget poor solutions over time
  • Ant colony optimization works well for combinatorial optimization problems (traveling salesman problem)
  • Differential evolution evolves a population of candidate solutions
    • Generates new solutions through mutation and crossover operations
    • Mutation creates a donor vector by combining three random population members
    • Crossover produces a trial vector by mixing the donor vector and a target vector
    • Selection compares the trial vector with the target vector, keeping the better solution
  • Differential evolution adapts well to different problem types and dimensions
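The ant colony loop (probabilistic construction, pheromone deposit, evaporation) can be sketched for a small symmetric traveling salesman instance; the parameter values and the visibility term 1/distance are common illustrative choices, not prescribed by the text:

```python
import random

def aco_tsp(dist, n_ants=10, iters=50, alpha=1.0, beta=2.0, rho=0.5, seed=0):
    """Ant colony optimization for a symmetric TSP given a distance matrix."""
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]             # pheromone on each edge
    best_tour, best_len = None, float("inf")
    for _ in range(iters):
        tours = []
        for _ in range(n_ants):
            tour = [rng.randrange(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                i = tour[-1]
                cand = list(unvisited)
                # choice probability ∝ pheromone^alpha * (1/distance)^beta
                weights = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                           for j in cand]
                nxt = rng.choices(cand, weights)[0]
                tour.append(nxt)
                unvisited.remove(nxt)
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        # Evaporation forgets old (poor) trails ...
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1 - rho)
        # ... and deposits reinforce the edges of good tours.
        for tour, length in tours:
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a][b] += 1.0 / length
                tau[b][a] += 1.0 / length
    return best_tour, best_len
```

On four cities at the corners of a unit square, the ants quickly concentrate pheromone on the perimeter tour of length 4.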
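The mutation, crossover, and selection steps above can be sketched as the classic DE/rand/1/bin scheme; the population size, scale factor F, and crossover rate CR are illustrative defaults:

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, iters=100, seed=0):
    """Minimize f with DE/rand/1/bin over box constraints `bounds`."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    vals = [f(x) for x in pop]
    for _ in range(iters):
        for i in range(pop_size):
            # Mutation: donor = a + F * (b - c) from three distinct random members.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            donor = [pop[a][d] + F * (pop[b][d] - pop[c][d]) for d in range(dim)]
            # Crossover: mix donor and target; jrand guarantees at least
            # one component comes from the donor.
            jrand = rng.randrange(dim)
            trial = [donor[d] if (rng.random() < CR or d == jrand) else pop[i][d]
                     for d in range(dim)]
            # Clip to bounds, then select: keep trial only if it is no worse.
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            tv = f(trial)
            if tv <= vals[i]:
                pop[i], vals[i] = trial, tv
    best = min(range(pop_size), key=lambda i: vals[i])
    return pop[best], vals[best]
```

Because mutation step sizes are built from differences between current population members, they shrink automatically as the population converges, which is one reason DE adapts well across problem types and dimensions.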