Augmented Lagrangian Method
Resources
- Nocedal, Jorge, and Stephen J. Wright. Numerical Optimization. 2nd ed. Springer, 2006.
Related
Main Idea
To solve a Constrained Optimization problem, solve a sequence of unconstrained subproblems whose objective adds both a penalty term and a Lagrange multiplier term to the original objective. The penalty parameter still increases between subproblems, but because the Lagrange multipliers are explicitly approximated, it does not need to grow nearly as large. Similarly, the (convex) quadratic penalty term convexifies the subproblem near a solution, which avoids issues when the original problem is non-convex.
Equality Constrained
Consider the equality constrained problem below.

$$\min_x \; f(x) \quad \text{subject to} \quad c_i(x) = 0, \; i \in \mathcal{E}$$
The Augmented Lagrangian is given as

$$\mathcal{L}_A(x, \lambda; \mu) = f(x) - \sum_{i \in \mathcal{E}} \lambda_i c_i(x) + \frac{\mu}{2} \sum_{i \in \mathcal{E}} c_i^2(x)$$
This resembles both the Lagrangian and the objective function from a quadratic Penalty Method. The unconstrained minimization of this yields the first-order optimality condition (as a vector equation):

$$\nabla_x \mathcal{L}_A = \nabla f(x) - \sum_{i \in \mathcal{E}} \left[\lambda_i - \mu\, c_i(x)\right] \nabla c_i(x) = 0$$
or equivalently,

$$\nabla f(x) = \sum_{i \in \mathcal{E}} \left[\lambda_i - \mu\, c_i(x)\right] \nabla c_i(x)$$
Yet, the stationarity of the Lagrangian of the original problem at a solution $x^*$ requires

$$\nabla f(x^*) = \sum_{i \in \mathcal{E}} \lambda_i^* \nabla c_i(x^*)$$
To make these two optimality conditions equivalent, we need that

$$\lambda_i^* \approx \lambda_i - \mu\, c_i(x)$$
This also gives

$$c_i(x) \approx -\frac{1}{\mu}\left(\lambda_i^* - \lambda_i\right)$$
as opposed to the Quadratic Penalty Method, where

$$c_i(x) \approx -\frac{\lambda_i^*}{\mu}$$

so here the constraint violation can be driven to zero by improving the multiplier estimate $\lambda_i \to \lambda_i^*$, rather than only by sending $\mu \to \infty$.
Thus, after minimizing $\mathcal{L}_A$ at outer iteration $k$, we should update the multipliers as

$$\lambda_i^{k+1} = \lambda_i^k - \mu_k\, c_i(x_k)$$
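Assuming the derivation above, the full loop (inner minimization plus multiplier update) can be sketched on a toy problem. The problem, inner solver, step size, and penalty schedule below are all illustrative choices, not a production implementation.

```python
# Sketch of the augmented Lagrangian method on a toy problem:
#   minimize f(x) = x1^2 + x2^2  subject to  c(x) = x1 + x2 - 1 = 0,
# whose solution is x* = (0.5, 0.5) with multiplier lambda* = 1.

def c(x):
    return x[0] + x[1] - 1.0

def minimize_LA(x, lam, mu, steps=100):
    """Inner loop: gradient descent on
    L_A(x) = f(x) - lam * c(x) + (mu / 2) * c(x)**2."""
    lr = 1.0 / (2.0 + 2.0 * mu)   # step size matched to this quadratic's curvature
    for _ in range(steps):
        coeff = -lam + mu * c(x)  # weight on grad c from the last two terms
        g = [2.0 * xi + coeff for xi in x]  # grad f = 2x, grad c = (1, 1)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

def augmented_lagrangian(x, lam=0.0, mu=1.0, outer=10):
    for _ in range(outer):
        x = minimize_LA(x, lam, mu)
        lam = lam - mu * c(x)  # multiplier update from the derivation above
        mu *= 2.0              # penalty still grows, but only modestly
    return x, lam

x, lam = augmented_lagrangian([0.0, 0.0])
print(x, lam)  # → approximately [0.5, 0.5] and 1.0
```

Note that $\mu$ only doubles here; with a pure penalty method it would have to grow much faster (and ill-condition the subproblems) to reach the same constraint accuracy.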
Inequality Constrained
If we instead have the more general inequality constrained optimization problem given below, there are further nuances.

$$\min_x \; f(x) \quad \text{subject to} \quad c_i(x) = 0, \; i \in \mathcal{E}, \qquad c_i(x) \ge 0, \; i \in \mathcal{I}$$
By introducing a vector of slack variables $s \ge 0$, each inequality becomes an equality with a simple bound:

$$c_i(x) - s_i = 0, \qquad s_i \ge 0, \quad i \in \mathcal{I}$$
Taking the new design variable as $z = (x, s)$, the problem becomes equality constrained, plus box constraints on $z$.
The bound-constrained Lagrangian approach only takes the equality constraints (originally from $\mathcal{E}$, plus those produced by the slack reformulation of $\mathcal{I}$) into the augmented Lagrangian, leaving the bounds explicit in the subproblem:

$$\min_z \; \mathcal{L}_A(z, \lambda; \mu) \quad \text{subject to} \quad l \le z \le u$$
This formulation is then very similar to the equality constrained case, but the subproblem must be solved subject to the box constraints. One option for the subproblem is projected gradient descent. The older package LANCELOT and the newer GALAHAD library use this approach.
The unconstrained approach is based on the so-called proximal point approach. The optimization problem (taking only inequality constraints, for simplicity) can be rewritten as

$$\min_x \; \max_{\lambda \ge 0} \; f(x) - \sum_{i \in \mathcal{I}} \lambda_i c_i(x)$$
If we have a current multiplier estimate $\lambda^k$, the proximal point approach penalizes deviation of $\lambda$ from it:

$$\min_x \; \max_{\lambda \ge 0} \; f(x) - \sum_{i \in \mathcal{I}} \lambda_i c_i(x) - \frac{1}{2\mu} \sum_{i \in \mathcal{I}} \left(\lambda_i - \lambda_i^k\right)^2$$
In this formulation, the quadratic term keeps the new multipliers close to the current estimate, and the inner maximization over $\lambda$ has a closed-form solution.
We can rewrite this over individual components of $\lambda$:

$$\max_{\lambda_i \ge 0} \; -\lambda_i c_i(x) - \frac{1}{2\mu}\left(\lambda_i - \lambda_i^k\right)^2$$
For each $i$, the unconstrained maximizer is $\lambda_i = \lambda_i^k - \mu\, c_i(x)$; projecting onto $\lambda_i \ge 0$ gives

$$\lambda_i = \max\!\left(0, \; \lambda_i^k - \mu\, c_i(x)\right)$$
Finally, we plug this into the expression for the objective, yielding an unconstrained augmented Lagrangian

$$\mathcal{L}_A(x, \lambda^k; \mu) = f(x) + \sum_{i \in \mathcal{I}} \psi\!\left(c_i(x), \lambda_i^k; \mu\right), \qquad \psi(t, \sigma; \mu) = \begin{cases} -\sigma t + \dfrac{\mu}{2} t^2 & \text{if } \mu t \le \sigma, \\[4pt] -\dfrac{\sigma^2}{2\mu} & \text{otherwise.} \end{cases}$$
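The piecewise function $\psi$ and the clamped multiplier update are straightforward to state in code (a sketch of the formulas above; the function names and test values are my own):

```python
# psi(t, sigma, mu): contribution of one inequality constraint with
# value t = c_i(x), multiplier estimate sigma, and penalty mu.

def psi(t, sigma, mu):
    if mu * t <= sigma:                   # constraint active or violated
        return -sigma * t + 0.5 * mu * t * t
    return -sigma * sigma / (2.0 * mu)    # constraint comfortably satisfied

def update_multiplier(t, sigma, mu):
    """lambda_i^{k+1} = max(0, lambda_i^k - mu * c_i(x_k))."""
    return max(0.0, sigma - mu * t)

# A well-satisfied constraint (t = 5) contributes only a constant and
# drives its multiplier to zero; a violated one (t = -0.5) is penalized
# quadratically and its multiplier grows.
print(psi(5.0, 1.0, 10.0), update_multiplier(5.0, 1.0, 10.0))   # → -0.05 0.0
print(psi(-0.5, 1.0, 10.0), update_multiplier(-0.5, 1.0, 10.0)) # → 1.75 6.0
```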