Sequential quadratic programming (SQP) is an iterative method for constrained nonlinear optimization which may be considered a quasi-Newton method. SQP methods are used on mathematical problems for which the objective function and the constraints are twice continuously differentiable.
SQP methods solve a sequence of optimization subproblems, each of which optimizes a quadratic model of the objective subject to a linearization of the constraints. If the problem is unconstrained, then the method reduces to Newton's method for finding a point where the gradient of the objective vanishes. If the problem has only equality constraints, then the method is equivalent to applying Newton's method to the first-order optimality conditions, or Karush–Kuhn–Tucker conditions, of the problem.
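For the equality-constrained case, the equivalence can be stated directly: with constraints $g(x) = 0$ and multipliers $\sigma$ (using the sign convention of the Lagrangian defined below), the first-order optimality conditions are the nonlinear system

\[
\nabla f(x) - \nabla g(x)\,\sigma = 0, \qquad g(x) = 0,
\]

and applying Newton's method to this system in the variables $(x, \sigma)$ yields exactly the SQP iteration described below.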
Consider a nonlinear programming problem of the form:

\[
\begin{aligned}
\min_{x} \quad & f(x) \\
\text{subject to} \quad & h(x) \ge 0, \\
& g(x) = 0.
\end{aligned}
\]
The Lagrangian for this problem is

\[
\mathcal{L}(x, \lambda, \sigma) = f(x) - \lambda^{\top} h(x) - \sigma^{\top} g(x),
\]

where $\lambda$ and $\sigma$ are Lagrange multipliers.
The standard Newton's method searches for a solution of $\nabla \mathcal{L}(x, \lambda, \sigma) = 0$ by iterating the following equation:

\[
\begin{bmatrix} x_{k+1} \\ \lambda_{k+1} \\ \sigma_{k+1} \end{bmatrix}
=
\begin{bmatrix} x_{k} \\ \lambda_{k} \\ \sigma_{k} \end{bmatrix}
-
\left[ \nabla^{2} \mathcal{L}(x_k, \lambda_k, \sigma_k) \right]^{-1} \nabla \mathcal{L}(x_k, \lambda_k, \sigma_k),
\]

where $\nabla^{2} \mathcal{L}$ denotes the Hessian of the Lagrangian with respect to $(x, \lambda, \sigma)$.
However, because the matrix $\nabla^{2} \mathcal{L}(x_k, \lambda_k, \sigma_k)$ is generally singular (and therefore non-invertible), the Newton step cannot be calculated directly. Instead, the basic sequential quadratic programming algorithm defines an appropriate search direction $d_k$ at an iterate $(x_k, \lambda_k, \sigma_k)$ as a solution of the quadratic programming subproblem

\[
\begin{aligned}
\min_{d} \quad & f(x_k) + \nabla f(x_k)^{\top} d + \tfrac{1}{2}\, d^{\top} \nabla_{xx}^{2} \mathcal{L}(x_k, \lambda_k, \sigma_k)\, d \\
\text{subject to} \quad & h(x_k) + \nabla h(x_k)^{\top} d \ge 0, \\
& g(x_k) + \nabla g(x_k)^{\top} d = 0.
\end{aligned}
\]
Note that the term $f(x_k)$ in the expression above may be left out for the minimization problem, since it is constant under the $\min_{d}$ operator.
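When only equality constraints are present, this subproblem's own optimality conditions form a single symmetric linear system in the step and the subproblem multipliers. The following sketch (a minimal NumPy illustration under assumed inputs, not code from any particular library; jac_g holds the constraint gradients as rows) solves that system directly:

import numpy as np

def solve_qp_step(grad_f, hess_L, g_val, jac_g):
    """Solve the equality-constrained QP subproblem
        min_d   grad_f @ d + 0.5 * d @ hess_L @ d
        s.t.    g_val + jac_g @ d = 0
    via its KKT conditions, the symmetric linear system
        [ hess_L  jac_g^T ] [ d  ]   [ -grad_f ]
        [ jac_g      0    ] [ mu ] = [ -g_val  ].
    Returns the step d and the subproblem multipliers mu.
    """
    n, m = grad_f.size, g_val.size
    kkt = np.block([[hess_L, jac_g.T],
                    [jac_g, np.zeros((m, m))]])
    rhs = np.concatenate([-grad_f, -g_val])
    sol = np.linalg.solve(kkt, rhs)
    return sol[:n], sol[n:]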
Together, the SQP algorithm starts by choosing the initial iterate $(x_0, \lambda_0, \sigma_0)$, then calculating $\nabla^{2} \mathcal{L}(x_0, \lambda_0, \sigma_0)$ and $\nabla \mathcal{L}(x_0, \lambda_0, \sigma_0)$. Then the QP subproblem is built and solved to find the Newton step direction $d_0$, which is used to update the parent problem iterate via $(x_{k+1}, \lambda_{k+1}, \sigma_{k+1}) = (x_k, \lambda_k, \sigma_k) + d_k$. This process is repeated for $k = 0, 1, 2, \ldots$ until the parent problem satisfies a convergence test.
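To make the loop concrete, here is a minimal self-contained sketch for an equality-constrained toy problem. The problem data, starting point, and tolerance are illustrative assumptions; the constraint is adjoined with a plus sign (so the multiplier's sign is flipped relative to $\sigma$ above); and the plain Newton iteration has no line search or other globalization safeguards, so convergence is only local:

import numpy as np

# Toy problem (illustrative):  minimize   f(x) = -x1 - x2
#                              subject to g(x) = x1^2 + x2^2 - 1 = 0,
# whose minimizer is x* = (1/sqrt(2), 1/sqrt(2)).

def grad_f(x):
    return np.array([-1.0, -1.0])

def g(x):
    return np.array([x[0]**2 + x[1]**2 - 1.0])

def jac_g(x):
    return np.array([[2.0 * x[0], 2.0 * x[1]]])

def hess_L(x, sigma):
    # Hessian of f(x) + sigma^T g(x) with respect to x; f is linear,
    # so only the constraint curvature contributes.
    return 2.0 * sigma[0] * np.eye(2)

x, sigma = np.array([2.0, 1.0]), np.array([1.0])
for k in range(20):
    gv, A = g(x), jac_g(x)
    grad_L = grad_f(x) + A.T @ sigma          # stationarity residual
    if np.linalg.norm(grad_L) < 1e-10 and np.linalg.norm(gv) < 1e-10:
        break                                  # convergence test satisfied
    # Newton step on the KKT conditions, i.e. the QP subproblem above.
    kkt = np.block([[hess_L(x, sigma), A.T],
                    [A, np.zeros((1, 1))]])
    step = np.linalg.solve(kkt, np.concatenate([-grad_L, -gv]))
    x, sigma = x + step[:2], sigma + step[2:]

print(x, sigma)  # approx. [0.7071, 0.7071] and [0.7071]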
Related methods include sequential linear programming, sequential linear-quadratic programming, and the augmented Lagrangian method.
SQP methods have been implemented in well-known numerical environments such as MATLAB and GNU Octave, and numerous software libraries, including open-source ones, provide them. For example, SciPy (the de facto standard for scientific Python) offers the SQP-type solver SLSQP through its scipy.optimize.minimize interface.
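A small illustrative call (the toy objective and constraints here are assumptions, not from the article) might look like this:

from scipy.optimize import minimize

# Minimize x1^2 + x2^2 subject to x1 + x2 = 1 and x1 >= 0.2.
# SLSQP expects inequality constraints in the form fun(x) >= 0.
result = minimize(
    fun=lambda x: x[0] ** 2 + x[1] ** 2,
    x0=[1.0, 0.0],
    method="SLSQP",
    constraints=[
        {"type": "eq", "fun": lambda x: x[0] + x[1] - 1.0},
        {"type": "ineq", "fun": lambda x: x[0] - 0.2},
    ],
)
print(result.x)  # approx. [0.5, 0.5]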