Advanced Methods
The MOSEK Solver includes state-of-the-art implementations of both the primal and dual Simplex methods and an Interior Point (Barrier) method, the Homogeneous Self-Dual method, to solve LP, QP, QCP, and SOCP problems, as well as convex smooth nonlinear (NLP) problems, with no fixed limits on problem size.
It can automatically transform quadratic objectives and constraints into conic (SOCP) constraints, and it has been used to solve SOCP problems with over 100,000 variables. The MOSEK Solver's problem rescaling options range from conservative to aggressive, and the Solver can choose the degree of scaling automatically.
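As a simple illustration of our own (not drawn from the MOSEK documentation), a convex quadratic constraint of the form x1^2 + x2^2 + ... + xn^2 <= t^2, with t >= 0, is exactly the statement that the vector (t, x1, ..., xn) lies in a second-order cone, so a constraint written in quadratic form can be handed to the conic optimizer without any general nonlinear machinery.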
Sparse Matrix Storage
Sparse matrix storage methods take advantage of sparsity in large models, where subsets of the constraints typically depend on only a small subset of the variables. For example, an LP coefficient matrix for a problem with 10,000 variables and 10,000 constraints would require about 800 megabytes using dense storage methods, but if the problem has the sparsity typical of larger models, it would take only about 24 megabytes using the sparse storage methods in the MOSEK Solver. (To learn more, click on Problem Size and Sparsity, covered in our Solver Tutorial.)
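As a back-of-the-envelope check of those figures, the Python sketch below compares a dense array of 8-byte doubles with a compressed-column layout that stores only the nonzero values plus their integer indices. The nonzero count is a hypothetical value chosen to match the 24 megabytes quoted above; MOSEK's internal data structures are not exposed.

    # Rough storage arithmetic only; illustrative, not MOSEK's actual layout.
    m = n = 10_000                        # constraints x variables
    dense_mb = m * n * 8 / 1e6            # an 8-byte double for every entry

    nnz = 2_000_000                       # hypothetical nonzero count (about 200 per row)
    # Compressed-column storage: one 8-byte value and one 4-byte row index per
    # nonzero, plus one 4-byte pointer per column.
    sparse_mb = (nnz * (8 + 4) + (n + 1) * 4) / 1e6

    print(f"dense:  {dense_mb:.0f} MB")   # 800 MB
    print(f"sparse: {sparse_mb:.0f} MB")  # 24 MB

Index sizes, pointer arrays, and the solver's other working storage move the real totals around, but the orders of magnitude are what matter.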
Matrix Factorization
Large models require hundreds of thousands to millions of floating-point arithmetic calculations to solve. Because of the finite precision inherent in computer arithmetic, small numerical errors occur in these calculations. Using conventional matrix representation, these errors typically have a cumulative effect, leading to a numerically unstable problem and possibly large errors in the "solution."
The MOSEK Solver Engine computes the Cholesky factorization of the matrix of normal equations, using advanced methods that minimize "fill-in" (and hence preserve sparsity) while maintaining matrix conditioning and numerical stability. These methods enable the MOSEK Solver to limit the effect of floating-point roundoff errors and find the optimal solution much faster than conventional methods.
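Why the ordering used during factorization matters can be seen on a toy example. The Python sketch below does not touch MOSEK's internal factorization; it uses our own contrived "arrow" matrix and simply counts the nonzeros in the Cholesky factor under two different orderings.

    import numpy as np
    from scipy.linalg import cholesky

    # "Arrow" matrix: dense first row and column, diagonal elsewhere;
    # strictly diagonally dominant, hence positive definite.
    n = 10
    M = 2.0 * np.eye(n)
    M[0, :] = 1.0
    M[:, 0] = 1.0
    M[0, 0] = n + 1.0

    L_bad = cholesky(M, lower=True)        # dense row/column first: the factor fills in completely
    perm = np.arange(n)[::-1]              # reorder so the dense row/column comes last
    L_good = cholesky(M[np.ix_(perm, perm)], lower=True)   # same system, almost no fill-in

    print(np.count_nonzero(L_bad), np.count_nonzero(L_good))   # 55 vs. 19

The same principle, applied with far more sophisticated ordering heuristics and sparse data structures, is what keeps the factorization of large normal-equations matrices compact enough to be practical.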
Tolerances for LP/QP/QCP, Conic, and Nonlinear Problems
The MOSEK Solver offers nearly 40 options and tolerances that you can set to fine-tune the solution of linear programming problems, quadratic and quadratically constrained problems, and convex nonlinear optimization problems. These include the primal and dual feasibility tolerance, the complementarity gap tolerance, the relative step size to the constraint boundary, and the central path tolerance.
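In the Premium Solver Platform these tolerances are set through the MOSEK Solver's option dialogs. Purely as an illustration, the same quantities also appear as interior-point parameters in MOSEK's standalone Optimizer API for Python; the sketch below uses parameter names as we understand them from MOSEK's documentation, and they may differ between versions.

    import mosek

    # Sketch only: set interior-point tolerances on an empty task.
    with mosek.Env() as env:
        with env.Task(0, 0) as task:
            task.putdouparam(mosek.dparam.intpnt_tol_pfeas, 1.0e-8)    # primal feasibility tolerance
            task.putdouparam(mosek.dparam.intpnt_tol_dfeas, 1.0e-8)    # dual feasibility tolerance
            task.putdouparam(mosek.dparam.intpnt_tol_rel_gap, 1.0e-8)  # relative complementarity gap tolerance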
Options for Mixed-Integer Linear Problems
The MOSEK Solver uses the Premium Platform's Branch and Bound method to handle integer variables and "alldifferent" constraints. If your problem includes integer constraints, you can obtain a quick solution of the relaxation (temporarily ignoring the integer constraints) without having to delete these constraints and re-enter them later. You can control the number of Branch and Bound subproblems and the number of integer feasible solutions found before the Solver stops, and you can speed up the solution of problems with integer constraints by supplying an integer cutoff value, often known from a previous run.
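These controls are exposed through the Solver's own dialogs. Again purely as an illustration of the same ideas, roughly equivalent limits exist as mixed-integer parameters in MOSEK's standalone Optimizer API for Python; the parameter names below are our best reading of MOSEK's documentation, not the Premium Platform interface, and should be checked against your version.

    import mosek

    # Sketch only: subproblem and solution limits plus an objective cutoff on an empty task.
    with mosek.Env() as env:
        with env.Task(0, 0) as task:
            task.putintparam(mosek.iparam.mio_max_num_branches, 50_000)   # cap on Branch and Bound subproblems
            task.putintparam(mosek.iparam.mio_max_num_solutions, 5)       # stop after this many integer solutions
            task.putdouparam(mosek.dparam.upper_obj_cut, 1234.5)          # cutoff from a previously known objective (minimization)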
The MOSEK Solver offers several methods for faster solution of linear programming problems with integer variables. It uses preprocessing and probing techniques to preset values for some binary integer variables based on the settings of others. It automatically recognizes and takes advantage of "cliques" (also called Special Ordered Sets), tightens bounds on constraints, and reorders the variables to be branched upon.
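As a simple illustration of our own, a constraint such as x1 + x2 + x3 <= 1 over binary variables is a clique: at most one of the three can equal 1, so once probing fixes x1 = 1, the other two variables can immediately be fixed to 0 throughout that part of the Branch and Bound tree.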
It also uses cut generation techniques to automatically add new constraints, known as knapsack cuts or lifted cover inequalities, that reduce the size of the LP feasible region without eliminating any integer solutions. These methods can save significant time on LP/MIP problems, depending on the model.
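As a small, self-contained illustration (our example, not one from the MOSEK documentation), the Python snippet below takes a single knapsack-style row, derives a simple (unlifted) cover inequality from it, and checks by enumeration that the cut removes fractional points without removing any integer-feasible ones.

    from itertools import product

    # Hypothetical knapsack row 3*x1 + 4*x2 + 5*x3 <= 6 over binary variables.
    # {x1, x2} is a "cover" because 3 + 4 > 6, giving the cover inequality x1 + x2 <= 1.
    integer_points = [x for x in product((0, 1), repeat=3)
                      if 3 * x[0] + 4 * x[1] + 5 * x[2] <= 6]

    # The cut keeps every integer-feasible point ...
    assert all(x[0] + x[1] <= 1 for x in integer_points)

    # ... but cuts off LP-feasible fractional points such as x1 = x2 = 0.75:
    x = (0.75, 0.75, 0.0)
    print(3 * x[0] + 4 * x[1] + 5 * x[2] <= 6, x[0] + x[1] <= 1)   # True False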