Optimization Problem Types
Smooth Nonlinear Optimization (NLP) Problems
A smooth nonlinear optimization problem, or nonlinear programming (NLP) problem, is one in which the objective or at least one of the constraints is a smooth nonlinear function of the decision variables. An example of a smooth nonlinear function is:
2 X1^2 + X2^3 + log X3
...where X1, X2 and X3 are decision variables. Unlike linear functions, nonlinear functions may involve variables that are raised to a power or multiplied or divided by other variables; they may also use transcendental functions such as exp, log, sine and cosine. Nonlinear functions may be convex or non-convex, as described below. A quadratic programming (QP) problem is a special case of a smooth nonlinear optimization problem, but it is usually solved by specialized, more efficient methods.
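For illustration only, the example function above can be written as a short Python function of the decision variables. This is a sketch of the algebra, not part of any Frontline product or API, and the names used are assumptions:

    import math

    # Sketch of the smooth nonlinear example above: 2 X1^2 + X2^3 + log X3.
    # X3 must be positive for the logarithm to be defined.
    def objective(x1, x2, x3):
        return 2 * x1**2 + x2**3 + math.log(x3)

    # Evaluate the function at one trial point.
    print(objective(1.0, 2.0, 3.0))   # 2 + 8 + log 3, approximately 11.0986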
NLP problems and their solution methods require that the problem functions be continuous, and (usually) further require that they be smooth -- which means that the derivatives of these functions with respect to each decision variable, i.e. the function gradients, are continuous.
A continuous function has no "breaks" in its graph. The Excel function =IF(C1>10,D1,2*D1) is discontinuous if C1 is a decision variable, because its value "jumps" from D1 to 2*D1. The Excel function =ABS(C1) is continuous, but nonsmooth -- its graph is an unbroken "V" shape, but its derivative is discontinuous, since it jumps from -1 to +1 at C1=0.
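The same distinction can be sketched numerically. The Python sketch below is an illustration added here, not part of Frontline's tutorial; it mirrors the two Excel formulas, with D1 fixed at 1, and shows the jump in value at C1 = 10 and the jump in slope at C1 = 0:

    # =IF(C1>10, D1, 2*D1), with D1 fixed at 1 for illustration.
    def if_formula(c1, d1=1.0):
        return d1 if c1 > 10 else 2 * d1

    # =ABS(C1)
    def abs_formula(c1):
        return abs(c1)

    # Discontinuous: the value jumps from 2*D1 to D1 as C1 crosses 10.
    print(if_formula(10.0), if_formula(10.0001))      # 2.0  1.0

    # Continuous but nonsmooth: the one-sided slopes at C1 = 0 differ.
    h = 1e-6
    print((abs_formula(h) - abs_formula(0.0)) / h)     # approximately +1
    print((abs_formula(0.0) - abs_formula(-h)) / h)    # approximately -1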
An NLP problem where the objective and all constraints are convex functions can be solved efficiently to global optimality, up to very large size; interior point methods are normally very effective on the largest convex problems. But if the objective or any constraints are non-convex, the problem may have multiple feasible regions and multiple locally optimal points within such regions. It can take time exponential in the number of variables and constraints to determine that a non-convex NLP problem is infeasible, that the objective function is unbounded, or that an optimal solution is the "global optimum" across all feasible regions.
Although functions can be non-smooth but convex (or smooth but non-convex), you can expect much better performance with most Solvers if your problem functions are all smooth and convex.
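As a small illustration of how a convex NLP solves cleanly to its global optimum, the sketch below uses SciPy's SLSQP method, an SQP-type method chosen here only as an assumption for demonstration; it is not one of Frontline's solvers:

    from scipy.optimize import minimize

    # Convex NLP: minimize X1^2 + X2^2 subject to X1 + X2 >= 1.
    result = minimize(
        fun=lambda x: x[0]**2 + x[1]**2,
        x0=[0.0, 0.0],
        method="SLSQP",
        constraints=[{"type": "ineq", "fun": lambda x: x[0] + x[1] - 1.0}],
    )

    # Because the problem is convex, any locally optimal point is the global
    # optimum; the solver returns X1 = X2 = 0.5 regardless of the start point.
    print(result.x)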
Solving NLP Problems
There are a variety of methods for solving NLP problems, and no single method is best for all problems. The most widely used and effective methods, all offered in Frontline's solvers, are the Generalized Reduced Gradient (GRG) and Sequential Quadratic Programming (SQP) methods, both of which are active-set methods, and the Interior Point or Barrier methods.
NLP solvers generally exploit the smoothness of the problem functions by computing gradient values at various trial solutions, and moving in the direction of the negative gradient (when minimizing; the positive gradient when maximizing). They usually also exploit second derivative information to follow the curvature as well as the direction of the problem functions. To solve constrained problems, NLP solvers must take into account feasibility and the direction and curvature of the constraints as well as the objective.
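The toy sketch below, a deliberately simplified assumption far cruder than a production GRG, SQP or Interior Point code, shows the basic idea of repeatedly stepping in the direction of the negative gradient on an unconstrained minimization problem:

    # Minimize f(x, y) = (x - 1)^2 + 2*(y + 3)^2 by plain gradient descent.
    # Real NLP solvers add line searches, curvature (second-derivative)
    # information, and constraint handling on top of this basic idea.
    def gradient(x, y):
        return 2 * (x - 1), 4 * (y + 3)

    x, y, step = 0.0, 0.0, 0.1
    for _ in range(200):
        gx, gy = gradient(x, y)
        x, y = x - step * gx, y - step * gy

    print(round(x, 4), round(y, 4))   # approaches the minimum at (1, -3)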
As noted above, if the problem is non-convex, NLP solvers normally can find only a locally optimal solution, in the vicinity of the starting point supplied by the user. It is frequently possible, but considerably more difficult, to find the globally optimal solution. To learn more about this issue, see Global Optimization Methods.
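To see why the starting point matters, the sketch below (again a SciPy-based illustration, not Frontline's global optimization methods) minimizes a one-variable non-convex function from two different starting points and reaches two different locally optimal points:

    from scipy.optimize import minimize

    # Non-convex function with two local minima: f(x) = x^4 - 3*x^2 + x.
    f = lambda x: x[0]**4 - 3 * x[0]**2 + x[0]

    for start in (-2.0, 2.0):
        result = minimize(f, x0=[start], method="BFGS")
        print(start, round(result.x[0], 3), round(result.fun, 3))

    # Starting at -2 finds the global minimum near x = -1.30;
    # starting at +2 finds only a local minimum near x = 1.13.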