It is helpful to understand what the GRG Nonlinear Solving method can and cannot do, and what each of the possible Solver Result Messages means for this Solver engine. At best, the GRG Solving method alone – like virtually all “classical” nonlinear optimization algorithms – can find a locally optimal solution to a reasonably well-scaled, non-convex model. At times, Solver will stop before finding a locally optimal solution, when it is making very slow progress (the objective function is changing very little from one trial solution to another) or for other reasons.
Locally Versus Globally Optimal Solutions
When the message “Solver found a solution” appears, it means that the GRG method has found a locally optimal solution – there is no other set of values for the decision variables close to the current values that yields a better value for the objective function. Figuratively, this means that Solver has found a “peak” (if maximizing) or “valley” (if minimizing) – but if the model is non-convex, there may be other taller peaks or deeper valleys far away from the current solution. Mathematically, this message means that the Karush-Kuhn-Tucker (KKT) conditions for local optimality have been satisfied (to within a certain tolerance, related to the Precision setting in the Solver Options dialog).
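For reference, the KKT conditions for a generic smooth problem of minimizing f(x) subject to inequality constraints g_i(x) ≤ 0 and equality constraints h_j(x) = 0 can be sketched as follows (this is the standard textbook statement, not anything specific to a particular Solver model):

```latex
\begin{aligned}
\text{Stationarity:} \quad & \nabla f(x^*) + \sum_i \mu_i \nabla g_i(x^*) + \sum_j \lambda_j \nabla h_j(x^*) = 0 \\
\text{Primal feasibility:} \quad & g_i(x^*) \le 0, \qquad h_j(x^*) = 0 \\
\text{Dual feasibility:} \quad & \mu_i \ge 0 \\
\text{Complementary slackness:} \quad & \mu_i \, g_i(x^*) = 0
\end{aligned}
```

Solver verifies these conditions only to within a tolerance, which is why the Precision setting affects when this message appears.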
When Solver has Converged to the Current Solution
When the message “Solver has converged to the current solution” appears, it means that the objective function value has been changing very slowly over the last few iterations or trial solutions. More precisely, the GRG method stops if the absolute value of the relative change in the objective function is less than the value in the Convergence box in the Solver Options dialog for the last 5 iterations. While the default value of 1E-4 (0.0001) is suitable for most problems, it may be too large for some models, causing the GRG method to stop prematurely when this test is satisfied, instead of continuing for more iterations until the KKT conditions are satisfied.
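As an illustration only (Solver’s exact internal formula is not given here), a convergence test of this kind might look like the following sketch; the `convergence` threshold and the window of 5 iterations mirror the description above, while the precise definition of “relative change” is an assumption.

```python
def has_converged(objective_history, convergence=1e-4, window=5):
    """Return True if the relative change in the objective has stayed below
    `convergence` for the last `window` iterations (illustrative sketch only)."""
    if len(objective_history) < window + 1:
        return False
    recent = objective_history[-(window + 1):]
    for prev, curr in zip(recent, recent[1:]):
        # Relative change, guarded against division by very small objective values.
        rel_change = abs(curr - prev) / max(abs(prev), 1.0)
        if rel_change >= convergence:
            return False
    return True
```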
A poorly scaled model is more likely to trigger this stopping condition, even if the Use Automatic Scaling check box in the Solver Options dialog is selected. So it pays to design your model to be reasonably well scaled in the first place: The typical values of the objective and constraints should not differ from each other, or from the decision variable values, by more than three or four orders of magnitude.
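A quick way to check whether a model respects this rule of thumb is to compare the orders of magnitude of its typical values. The helper below is a hypothetical illustration of that check, not part of Solver.

```python
import math

def scaling_spread(typical_values):
    """Return the spread, in orders of magnitude, among typical nonzero values
    of the objective, constraints, and decision variables (hypothetical helper)."""
    magnitudes = [math.log10(abs(v)) for v in typical_values if v != 0]
    return max(magnitudes) - min(magnitudes)

# Example: objective ~1e7 (dollars), variables ~1e1 (units), a constraint ~1e3.
spread = scaling_spread([1e7, 1e1, 1e3])
print(spread)  # 6.0 -- more than four orders of magnitude, so consider
               # re-expressing the objective in, say, millions of dollars.
```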
If you are getting this message when you are seeking a locally optimal solution, you can change the setting in the Convergence edit box to a smaller value such as 1E-5 or 1E-6; but you should also consider why the objective function is changing so slowly. Perhaps you can add constraints or use different starting values for the variables, so that Solver does not get “trapped” in a region of slow improvement.
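By analogy, in a code-based tool such as SciPy (shown here only to illustrate the same two ideas – a tighter convergence tolerance and several different starting points – not Excel Solver itself), the approach might look like this:

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):
    # Toy non-convex objective used purely for illustration.
    return np.sin(3 * x[0]) + 0.1 * x[0] ** 2

best = None
for x0 in [np.array([-3.0]), np.array([0.0]), np.array([3.0])]:
    # A tighter tolerance plays the same role as lowering Solver's Convergence setting.
    result = minimize(objective, x0, method="SLSQP", options={"ftol": 1e-6})
    if best is None or result.fun < best.fun:
        best = result

print(best.x, best.fun)  # Best of the three local solutions found.
```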
When Solver Cannot Improve the Current Solution
With the GRG Nonlinear Solving method, the message “Solver cannot improve the current solution” occurs only rarely. It means that the model is degenerate and Solver is probably cycling. One possibility worth checking is that some of your constraints are redundant and should be removed. If this suggestion doesn’t help and you cannot reformulate the problem, try using the Evolutionary Solving method.
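As a rough analogue of switching to an evolutionary method, SciPy’s differential evolution routine (shown only as an illustration, not Excel Solver’s Evolutionary engine) searches a bounded region without relying on gradient information, so it is not affected by degeneracy in the same way:

```python
from scipy.optimize import differential_evolution

def objective(x):
    # Toy non-convex objective, for illustration only.
    return (x[0] ** 2 - 4) ** 2 + (x[1] - 1) ** 2

# An evolutionary-style search needs only bounds on the variables, not derivatives.
result = differential_evolution(objective, bounds=[(-5, 5), (-5, 5)], seed=0)
print(result.x, result.fun)
```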