CONSTRAINED_MIN

The CONSTRAINED_MIN procedure solves nonlinear optimization problems of the following form:

Minimize or maximize g_p(X), subject to:

glb_i ≤ g_i(X) ≤ gub_i   for i = 0, ..., nfuns-1, i ≠ p

xlb_j ≤ x_j ≤ xub_j   for j = 0, ..., nvars-1.

X is a vector of nvars variables, x_0, ..., x_(nvars-1), and G is a vector of nfuns functions g_0, ..., g_(nfuns-1), all of which depend on X. Any of these functions may be nonlinear. Any of the bounds may be infinite and any of the constraints may be absent. If there are no constraints, the problem is solved as an unconstrained optimization problem. The program solves problems of this form by the Generalized Reduced Gradient method. See References 1-4.

CONSTRAINED_MIN uses the first partial derivatives of each function g_i with respect to each variable x_j. These are automatically computed by finite-difference approximation (either forward or central differences) unless the user supplies a function that evaluates them.
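The two finite-difference schemes mentioned above are language-neutral, so they can be sketched briefly in Python. This is an illustration of the idea only, for a single scalar function g; the function names and the step size h are illustrative choices, not the internals of CONSTRAINED_MIN:

```python
def forward_diff(g, x, j, h=1e-7):
    # Forward difference: dg/dx_j ~ (g(x + h*e_j) - g(x)) / h
    # One extra evaluation of g per variable; O(h) truncation error.
    xp = list(x)
    xp[j] += h
    return (g(xp) - g(x)) / h

def central_diff(g, x, j, h=1e-5):
    # Central difference: dg/dx_j ~ (g(x + h*e_j) - g(x - h*e_j)) / (2h)
    # Two extra evaluations per variable; O(h^2) truncation error.
    xp, xm = list(x), list(x)
    xp[j] += h
    xm[j] -= h
    return (g(xp) - g(xm)) / (2 * h)
```

Central differences cost roughly twice as many function evaluations per gradient but are noticeably more accurate, which is the usual trade-off between the two options.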

CONSTRAINED_MIN is based on an implementation of the GRG algorithm supplied by Windward Technologies, Inc. See Reference 11.

Calling Sequence

CONSTRAINED_MIN, X, Xbnd, Gbnd, Nobj, Gcomp, Inform

Arguments

Xbnd

Bounds on the variables. Xbnd is an nvars x 2 element array: Xbnd[j,0] is the lower bound and Xbnd[j,1] is the upper bound for variable x_j.

Gbnd

Bounds on the constraint functions. Gbnd is an nfuns x 2 element array: Gbnd[i,0] is the lower bound and Gbnd[i,1] is the upper bound for function g_i.

Bounds on the objective function are ignored; set them to 0.

Nobj

Index of the objective function.

Gcomp

A scalar string specifying the name of a user-supplied IDL function. This function must accept an nvars-element vector argument x of variable values and return an nfuns-element vector G of function values.

X

An nvars-element vector. On input, X contains the initial variable values. On output, X contains the final variable values determined by CONSTRAINED_MIN.

Inform

Termination status returned from CONSTRAINED_MIN.

Inform argument values:

0: Kuhn-Tucker conditions satisfied. This is the best possible indicator that an optimal point has been found.

1: Fractional change in objective less than EPSTOP for NSTOP consecutive iterations. See Keywords below. This is not as good as Inform = 0, but still indicates the likelihood that an optimal point has been found.

2: All remedies have failed to find a better point. The user should check functions and bounds for consistency and, perhaps, try other starting values.

3: Number of completed one-dimensional searches exceeded LIMSER. See Keywords below. The user should check functions and bounds for consistency and, perhaps, try other starting values. It might also help to increase LIMSER (set LIMSER = larger_value).

4: Objective function is unbounded. CONSTRAINED_MIN has observed dramatic change in the objective function over several steps, which is a good indication that the objective function is unbounded. If this is not the case, the user should check functions and bounds for consistency.

5: Feasible point not found. CONSTRAINED_MIN was not able to find a feasible point. If the problem is believed to be feasible, the user should check functions and bounds for consistency and perhaps try other starting values.

6: Degeneracy has been encountered. The point returned may be close to optimal. The user should check functions and bounds for consistency and perhaps try other starting values.

-1: Fatal error. Some condition, such as nvars < 0, was encountered. CONSTRAINED_MIN documents the condition in the report file and terminates. In this case, the user needs to correct the input and rerun CONSTRAINED_MIN.

Keywords

EPSTOP

Set this keyword to specify the CONSTRAINED_MIN convergence criterion. If the fractional change in the objective function is less than EPSTOP for NSTOP consecutive iterations, the current point is accepted as optimal. CONSTRAINED_MIN also accepts the current point as optimal if the Kuhn-Tucker optimality conditions are satisfied to within EPSTOP. By default, EPSTOP = 1.0e-4.

LIMSER

If the number of completed one-dimensional searches exceeds LIMSER, CONSTRAINED_MIN terminates and returns Inform = 3. By default, LIMSER = 10000.

NSTOP

Set this keyword to specify the CONSTRAINED_MIN convergence criterion. If the fractional change in the objective function is less than EPSTOP for NSTOP consecutive iterations, CONSTRAINED_MIN will accept the current point as optimal. By default, NSTOP = 3.
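The EPSTOP/NSTOP stopping rule described above is simple enough to sketch in a few lines of Python. This is an illustration of the rule, not CONSTRAINED_MIN's internal code; the function name and the relative-change formula (scaled by max(|f|, 1) to guard against division by zero) are assumptions:

```python
def objective_converged(f_history, epstop=1e-4, nstop=3):
    # Accept the current point as optimal when the fractional change
    # in the objective stays below epstop for nstop consecutive
    # iterations (the defaults mirror EPSTOP = 1e-4, NSTOP = 3).
    if len(f_history) < nstop + 1:
        return False
    recent = f_history[-(nstop + 1):]
    for prev, cur in zip(recent, recent[1:]):
        if abs(cur - prev) > epstop * max(abs(prev), 1.0):
            return False
    return True
```

With the defaults, the objective must hold essentially steady over three consecutive iterations before the rule fires, which is why Inform = 1 is a weaker certificate than the Kuhn-Tucker test (Inform = 0).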

REPORT

Set this keyword to specify a name for the CONSTRAINED_MIN report file. If the specified file does not exist it will be created. If it exists, it will be overwritten. By default, CONSTRAINED_MIN does not create a report file.

TITLE

Set this keyword to specify a title for the problem in the CONSTRAINED_MIN report.

Example

This example has 5 variables {X0, X1, ..., X4}, bounded above and below, a quadratic objective function {G3}, and three quadratic constraints {G0, G1, G2}, with both upper and lower bounds. See the Himmelblau text [7], problem 11.

Minimize:

G3 = 5.3578547 X2^2 + 0.8356891 X0 X4 + 37.293239 X0 - 40792.141

Subject to:

0 ≤ G0 = 85.334407 + 0.0056858 X1 X4 + 0.0006262 X0 X3 - 0.0022053 X2 X4 ≤ 92

90 ≤ G1 = 80.51249 + 0.0071317 X1 X4 + 0.0029955 X0 X1 + 0.0021813 X2^2 ≤ 110

20 ≤ G2 = 9.300961 + 0.0047026 X2 X4 + 0.0012547 X0 X2 + 0.0019085 X2 X3 ≤ 25

and,

78 ≤ X0 ≤ 102

33 ≤ X1 ≤ 45

27 ≤ X2 ≤ 45

27 ≤ X3 ≤ 45

27 ≤ X4 ≤ 45

This problem is solved starting from X = {78, 33, 27, 27, 27} which is infeasible because constraint G2 is not satisfied at this point.

The constraint functions and objective function are evaluated by HMBL11:

FUNCTION HMBL11, x
; Himmelblau Problem 11, 5 variables and 4 functions.
g = DBLARR(4)
g[0] = 85.334407 + 0.0056858*x[1]*x[4] + 0.0006262*x[0]*x[3] $
   - 0.0022053*x[2]*x[4]
g[1] = 80.51249 + 0.0071317*x[1]*x[4] + 0.0029955*x[0]*x[1] $
   + 0.0021813*x[2]*x[2]
g[2] = 9.300961 + 0.0047026*x[2]*x[4] + 0.0012547*x[0]*x[2] $
   + 0.0019085*x[2]*x[3]
g[3] = 5.3578547*x[2]*x[2] + 0.8356891*x[0]*x[4] $
   + 37.293239*x[0] - 40792.141
RETURN, g
END

; Example problem for CONSTRAINED_MIN
; Himmelblau Problem 11
; 5 variables and 3 constraints
; Constraints and objective defined in HMBL11

xbnd = [[78, 33, 27, 27, 27], [102, 45, 45, 45, 45]]
gbnd = [[0, 90, 20, 0], [92, 110, 25, 0]]
nobj = 3
gcomp = 'HMBL11'
title = 'IDL: Himmelblau 11'
report = 'hmbl11.txt'
x = [78, 33, 27, 27, 27]

CONSTRAINED_MIN, x, xbnd, gbnd, nobj, gcomp, inform, $
   REPORT = report, TITLE = title

g = HMBL11(x)
PRINT, g[nobj] ; Print minimized objective function for HMBL11 problem.
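For readers who want to verify the result outside IDL, the same problem can be cross-checked with SciPy's SLSQP solver. Note this is a different algorithm from the GRG method used by CONSTRAINED_MIN, and the names below (hmbl11, cons) are illustrative Python, not part of the IDL API; SLSQP takes the two-sided function bounds as pairs of one-sided inequality constraints:

```python
# Cross-check of Himmelblau Problem 11 with scipy.optimize (SLSQP),
# starting from the same infeasible point {78, 33, 27, 27, 27}.
from scipy.optimize import minimize

def hmbl11(x):
    # Same four functions as the IDL HMBL11 routine.
    g0 = 85.334407 + 0.0056858*x[1]*x[4] + 0.0006262*x[0]*x[3] \
         - 0.0022053*x[2]*x[4]
    g1 = 80.51249 + 0.0071317*x[1]*x[4] + 0.0029955*x[0]*x[1] \
         + 0.0021813*x[2]*x[2]
    g2 = 9.300961 + 0.0047026*x[2]*x[4] + 0.0012547*x[0]*x[2] \
         + 0.0019085*x[2]*x[3]
    g3 = 5.3578547*x[2]*x[2] + 0.8356891*x[0]*x[4] \
         + 37.293239*x[0] - 40792.141
    return g0, g1, g2, g3

# Each two-sided constraint glb <= g_i <= gub becomes two
# one-sided 'ineq' constraints (fun(x) >= 0) for SLSQP.
cons = []
for i, (lo, hi) in enumerate([(0, 92), (90, 110), (20, 25)]):
    cons.append({'type': 'ineq', 'fun': lambda x, i=i, lo=lo: hmbl11(x)[i] - lo})
    cons.append({'type': 'ineq', 'fun': lambda x, i=i, hi=hi: hi - hmbl11(x)[i]})

res = minimize(lambda x: hmbl11(x)[3],
               x0=[78, 33, 27, 27, 27],
               bounds=[(78, 102), (33, 45), (27, 45), (27, 45), (27, 45)],
               constraints=cons, method='SLSQP')
print(res.x, res.fun)
```

The minimized objective should come out near the well-known optimum for this benchmark (roughly -30665.5), which provides an independent check on the CONSTRAINED_MIN report.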

References

1. Lasdon, L.S., Waren, A.D., Jain, A., and Ratner, M., "Design and Testing of a Generalized Reduced Gradient Code for Nonlinear Programming", ACM Transactions on Mathematical Software, Vol. 4, No. 1, March 1978, pp. 34-50.

2. Lasdon, L.S. and Waren, A.D., "Generalized Reduced Gradient Software for Linearly and Nonlinearly Constrained Problems", in "Design and Implementation of Optimization Software", H. Greenberg, ed., Sijthoff and Noordhoff, pubs, 1979.

3. Abadie, J. and Carpentier, J. "Generalization of the Wolfe Reduced Gradient Method to the Case of Nonlinear Constraints", in Optimization, R. Fletcher (ed.), Academic Press London; 1969, pp. 37-47.

4. Murtagh, B.A. and Saunders, M.A. "Large-scale Linearly Constrained Optimization", Mathematical Programming, Vol. 14, No. 1, January 1978, pp. 41-72.

5. Powell, M.J.D., "Restart Procedures for the Conjugate Gradient Method," Mathematical Programming, Vol. 12, No. 2, April 1977, pp. 241-255.

6. Colville, A.R., "A Comparative Study of Nonlinear Programming Codes," I.B.M. T.R. no. 320-2949 (1968).

7. Himmelblau, D.M., Applied Nonlinear Programming, McGraw-Hill Book Co., New York, 1972.

8. Fletcher, R., "A New Approach to Variable Metric Algorithms", Computer Journal, Vol. 13, 1970, pp. 317-322.

9. Smith, S. and Lasdon, L.S., "Solving Large Sparse Nonlinear Programs Using GRG", ORSA Journal on Computing, Vol. 4, No. 1, Winter 1992, pp. 1-15.

10. Luenberger, David G., Linear and Nonlinear Programming, Second Edition, Addison-Wesley, Reading, Massachusetts, 1984.

11. Windward Technologies, GRG2 User's Guide, http://web.wt.net/~wti, 1997.