Optimization Problem Types
Linear Programming (LP) Problems
A linear programming (LP) problem is one in which the objective and all of the constraints are linear functions of the decision variables. An example of a linear function is:
75 x1 + 50 x2 + 35 x3
...where x1, x2 and x3 are decision variables. The variables are multiplied by coefficients (75, 50 and 35 above) that are constant in the optimization problem; they can be computed by your Excel worksheet or custom program, as long as they don't depend on the decision variables.
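To make this concrete, a linear objective like the one above can be evaluated as a dot product of a constant coefficient vector with the vector of decision variables. The short Python sketch below (our own illustration using NumPy, not part of any particular Solver product) shows this:

    import numpy as np

    # Coefficients of the linear objective 75 x1 + 50 x2 + 35 x3.
    # They are constants in the optimization problem; they may be computed
    # elsewhere, as long as they do not depend on the decision variables.
    c = np.array([75.0, 50.0, 35.0])

    def linear_objective(x):
        # Evaluate 75*x1 + 50*x2 + 35*x3 as a dot product.
        return float(c @ x)

    print(linear_objective(np.array([1.0, 2.0, 3.0])))  # 75 + 100 + 105 = 280.0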
Since all linear functions are convex, linear programming problems are intrinsically easier to solve than general nonlinear (NLP) problems, which may be non-convex. In a non-convex NLP there may be more than one feasible region and the optimal solution might be found at any point within any such region. In contrast, an LP has at most one feasible region with 'flat faces' (i.e. no curves) on its outer surface, and the optimal solution will always be found at a 'corner point' on the surface where the constraints intersect. This means that an LP Solver needs to consider many fewer points than an NLP Solver, and it is always possible to determine (subject to the limitations of finite precision computer arithmetic) that an LP problem (i) has no feasible solution, (ii) has an unbounded objective, or (iii) has a globally optimal solution (either a single point or multiple equivalent points along a line).
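As a rough illustration of these three outcomes (with invented data), the sketch below passes a small LP to SciPy's linprog function; its status code reports whether an optimal corner point was found, or whether the problem is infeasible or unbounded:

    from scipy.optimize import linprog

    # Maximize 75 x1 + 50 x2 + 35 x3 subject to two resource limits.
    # linprog minimizes, so the objective coefficients are negated.
    c = [-75.0, -50.0, -35.0]
    A_ub = [[1.0, 1.0, 1.0],   # at most 100 total units
            [2.0, 1.0, 0.5]]   # at most 150 machine-hours
    b_ub = [100.0, 150.0]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3, method="highs")

    # res.status distinguishes the outcomes discussed above:
    # 0 = optimal solution found, 2 = infeasible, 3 = unbounded.
    print(res.status, res.x, -res.fun)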
Quadratic Programming (QP) Problems
A quadratic programming (QP) problem has an objective which is a quadratic function of the decision variables, and constraints which are all linear functions of the variables. An example of a quadratic function is:
2 x1² + 3 x2² + 4 x1 x2
where x1 and x2 are decision variables. A widely used QP problem is the Markowitz mean-variance portfolio optimization problem, where the quadratic objective is the portfolio variance (a sum of the variances and covariances of the individual securities), and the linear constraints specify a lower bound for portfolio return.
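Written in matrix form, a quadratic objective is x'Qx for a symmetric matrix Q; in the Markowitz problem, Q is the covariance matrix of security returns. A minimal Python sketch of the small example above (our own illustration):

    import numpy as np

    # The quadratic 2 x1² + 3 x2² + 4 x1 x2 written as x'Qx, where the
    # cross term 4 x1 x2 is split across the two off-diagonal entries of Q.
    Q = np.array([[2.0, 2.0],
                  [2.0, 3.0]])

    def quadratic_objective(x):
        return float(x @ Q @ x)

    print(quadratic_objective(np.array([1.0, 1.0])))  # 2 + 3 + 4 = 9.0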
QP problems, like LP problems, have at most one feasible region with "flat faces" on its surface (since the constraints are linear), but the optimal solution may be found anywhere within the region or on its surface. The quadratic objective function may be convex, which makes the problem easy to solve, or non-convex, which makes it very difficult to solve.
The "best" quadratics have Hessians that are positive definite (in a minimization problem) or negative definite (in a maximization problem). You can picture the graph of these functions as having a "round bowl" shape with a single bottom (or top) -- a convex function. Portfolio optimization problems are usually of this type.
A quadratic with a positive semi-definite Hessian (in a minimization problem) is still convex: it has a bowl shape with a flat "trough" along which many points share the same objective value. An optimizer will normally find a point in the "trough" with the best objective function value.
A quadratic with an indefinite Hessian has a "saddle" shape, a non-convex function. Its true minimum or maximum is not found in the interior of the feasible region but on its boundary, where the constraints are active and where there may be many locally optimal points. Minimizing or maximizing an indefinite quadratic is a difficult global optimization problem, and is outside the scope of most specialized quadratic solvers.
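The three cases can be distinguished numerically by examining the signs of the Hessian's eigenvalues. The sketch below shows one simple way to do this with NumPy (the tolerance and the example matrices are our own):

    import numpy as np

    def classify_hessian(H, tol=1e-10):
        # Classify a symmetric Hessian by the signs of its eigenvalues.
        eig = np.linalg.eigvalsh(H)
        if np.all(eig > tol):
            return "positive definite (convex 'round bowl')"
        if np.all(eig >= -tol):
            return "positive semi-definite (convex bowl with a 'trough')"
        if np.all(eig < -tol):
            return "negative definite (concave)"
        if np.all(eig <= tol):
            return "negative semi-definite (concave)"
        return "indefinite ('saddle' shape, non-convex)"

    print(classify_hessian(np.array([[2.0, 0.0], [0.0, 3.0]])))  # positive definite
    print(classify_hessian(np.array([[1.0, 1.0], [1.0, 1.0]])))  # positive semi-definite
    print(classify_hessian(np.array([[1.0, 3.0], [3.0, 1.0]])))  # indefinite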
Solving LP and QP Problems
LP problems are usually solved via the Simplex method. This method, originally developed by Dantzig in 1947, has been dramatically enhanced in the last decade using advanced methods from numerical linear algebra. This has made it possible to solve LP problems with hundreds of thousands, and sometimes millions, of decision variables and constraints.

An alternative to the Simplex method, called the Interior Point or Newton-Barrier method, was developed by Karmarkar in 1984. This method has also been dramatically enhanced in the last decade with advanced linear algebra methods, so that it is now often competitive with the Simplex method, especially on very large problems.
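In practice, many LP codes let you choose between the two families of methods. For instance, SciPy's linprog exposes both a dual Simplex and an interior point variant of the HiGHS solver; the sketch below reuses the invented example from earlier:

    from scipy.optimize import linprog

    c = [-75.0, -50.0, -35.0]
    A_ub = [[1.0, 1.0, 1.0], [2.0, 1.0, 0.5]]
    b_ub = [100.0, 150.0]

    # "highs-ds" selects the dual Simplex code, "highs-ipm" the interior
    # point code; both should reach the same optimal objective value.
    for method in ("highs-ds", "highs-ipm"):
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3, method=method)
        print(method, res.x, -res.fun)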
LP and convex QP problems are special cases of SOCP problems (second order cone programming, a type of conic optimization), and they can be solved with high performance by SOCP Solvers, most of which currently use interior point methods.
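For example, a convex QP such as a small Markowitz problem can be stated in a conic modeling framework such as CVXPY, which reformulates it and hands it to a conic (SOCP-capable) solver. The sketch below uses invented covariance and return data and assumes CVXPY and one of its conic solvers are installed:

    import cvxpy as cp
    import numpy as np

    Sigma = np.array([[0.04, 0.01],    # covariance matrix (invented data)
                      [0.01, 0.09]])
    mu = np.array([0.08, 0.12])        # expected returns (invented data)

    w = cp.Variable(2)                 # portfolio weights
    problem = cp.Problem(cp.Minimize(cp.quad_form(w, Sigma)),   # portfolio variance
                         [cp.sum(w) == 1,                       # fully invested
                          mu @ w >= 0.10,                       # lower bound on return
                          w >= 0])                              # no short sales
    problem.solve()                    # dispatched to an installed conic solver
    print(w.value, problem.value)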
Since a QP problem is a special case of a smooth nonlinear problem, it can be solved by a smooth nonlinear optimization method such as the GRG or SQP method. However, a faster and more reliable way to solve a QP problem is to use an extension of the Simplex method or an extension of the Interior Point or Barrier method.
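As a point of comparison, the small convex QP built from the quadratic in the earlier example can also be handed to a general SQP-type nonlinear method, such as the SLSQP method in SciPy (an illustration only; a dedicated QP code would normally be faster and more reliable):

    import numpy as np
    from scipy.optimize import minimize

    Q = np.array([[2.0, 2.0], [2.0, 3.0]])   # positive definite, so the QP is convex

    def objective(x):
        return float(x @ Q @ x)

    def gradient(x):
        return 2.0 * Q @ x

    # One linear constraint, x1 + x2 >= 1, plus nonnegativity bounds.
    constraints = [{"type": "ineq", "fun": lambda x: x[0] + x[1] - 1.0}]
    res = minimize(objective, x0=np.array([0.5, 0.5]), jac=gradient,
                   method="SLSQP", bounds=[(0, None), (0, None)],
                   constraints=constraints)
    print(res.x, res.fun)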