The CONSTRAINED_MIN procedure solves nonlinear optimization problems of the following form:
Minimize or maximize g[p](X), subject to:

   glb[i] ≤ g[i](X) ≤ gub[i]   for i = 0, ..., nfuns-1, i ≠ p
   xlb[j] ≤ x[j] ≤ xub[j]      for j = 0, ..., nvars-1
X is a vector of nvars variables, x[0], ..., x[nvars-1], and G is a vector of nfuns functions g[0], ..., g[nfuns-1], each of which depends on X. Any of these functions may be nonlinear. Any of the bounds may be infinite and any of the constraints may be absent. If there are no constraints, the problem is solved as an unconstrained optimization problem. CONSTRAINED_MIN solves problems of this form by the Generalized Reduced Gradient Method. See References 1-4.
CONSTRAINED_MIN uses first partial derivatives of each function g i with respect to each variable x j . These are automatically computed by finite difference approximation (either forward or central differences) unless the user supplies a function that evaluates them.
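As an illustration of what such a finite-difference approximation computes (a sketch only; FD_JACOBIAN and its step size h are hypothetical names introduced here, not part of the CONSTRAINED_MIN interface):

   ; Sketch of a forward-difference Jacobian estimate, jac[j,i] ~ dg[i]/dx[j].
   ; gcomp is the name of a user function with the interface described below.
   FUNCTION FD_JACOBIAN, gcomp, x, H = h
      IF N_ELEMENTS(h) EQ 0 THEN h = 1.0e-6
      g0 = CALL_FUNCTION(gcomp, x)
      nvars = N_ELEMENTS(x)
      nfuns = N_ELEMENTS(g0)
      jac = DBLARR(nvars, nfuns)
      FOR j = 0, nvars-1 DO BEGIN
         xp = x
         xp[j] = xp[j] + h   ; perturb one variable at a time
         jac[j, *] = (CALL_FUNCTION(gcomp, xp) - g0) / h
      ENDFOR
      RETURN, jac
   END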
CONSTRAINED_MIN is based on an implementation of the GRG algorithm supplied by Windward Technologies, Inc. See Reference 11.
Bounds on the constraint functions. Gbnd is an nfuns x 2 array: Gbnd[i, 0] holds the lower bound glb[i] and Gbnd[i, 1] holds the upper bound gub[i] for function g[i].
Bounds on the objective function are ignored; set them to 0.
A scalar string specifying the name of a user-supplied IDL function. This function must accept an nvars-element vector argument x of variable values and return an nfuns-element vector G of function values.
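A minimal sketch of this interface might look as follows; the name MYFUNCS and the functions it computes are illustrative only:

   ; Illustrative Gcomp function: 2 variables, 2 functions.
   FUNCTION MYFUNCS, x
      g = DBLARR(2)
      g[0] = x[0]^2 + x[1]^2        ; a constraint function, g0
      g[1] = (x[0] - 1)^2 + x[1]    ; the objective function, g1
      RETURN, g
   END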
An nvars-element vector. On input, X contains the initial values of the variables. On output, X contains the final variable values determined by CONSTRAINED_MIN.
Termination status returned from CONSTRAINED_MIN.
Set this keyword to specify the CONSTRAINED_MIN convergence criterion. CONSTRAINED_MIN accepts the current point as optimal if the Kuhn-Tucker optimality conditions are satisfied to within EPSTOP, or if the fractional change in the objective function is less than EPSTOP for NSTOP consecutive iterations. By default, EPSTOP = 1.0e-4.
If the number of completed one-dimensional searches exceeds LIMSER, CONSTRAINED_MIN terminates and returns inform = 3. By default, LIMSER = 10000.
Set this keyword to specify the CONSTRAINED_MIN convergence criterion. If the fractional change in the objective function is less than EPSTOP for NSTOP consecutive iterations, CONSTRAINED_MIN accepts the current point as optimal. By default, NSTOP = 3.
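For example, a call that tightens the stopping tolerance and raises the search limit might look like this (the keyword values are illustrative, and the positional arguments are assumed to be set up as in the example below):

   CONSTRAINED_MIN, x, xbnd, gbnd, nobj, gcomp, inform, $
      EPSTOP = 1.0e-5, LIMSER = 20000, NSTOP = 5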
This example has 5 variables {X0, X1, ..., X4}, bounded above and below, a quadratic objective function {G3}, and three quadratic constraints {G0, G1, G2}, with both upper and lower bounds. See the Himmelblau text [7], problem 11.
Minimize the objective:

   G3 = 5.3578547*X2*X2 + 0.8356891*X0*X4 + 37.293239*X0 - 40792.141

subject to the constraints:

   0 ≤ G0 = 85.334407 + 0.0056858*X1*X4 + 0.0006262*X0*X3 - 0.0022053*X2*X4 ≤ 92
   90 ≤ G1 = 80.51249 + 0.0071317*X1*X4 + 0.0029955*X0*X1 + 0.0021813*X2*X2 ≤ 110
   20 ≤ G2 = 9.300961 + 0.0047026*X2*X4 + 0.0012547*X0*X2 + 0.0019085*X2*X3 ≤ 25
This problem is solved starting from X = {78, 33, 27, 27, 27}, which is infeasible because constraint G2 is not satisfied at this point.
The constraint functions and objective function are evaluated by HMBL11:
FUNCTION HMBL11, x
   ; Himmelblau Problem 11: 5 variables and 4 functions.
   g = DBLARR(4)
   g[0] = 85.334407 + 0.0056858*x[1]*x[4] + 0.0006262*x[0]*x[3] - 0.0022053*x[2]*x[4]
   g[1] = 80.51249 + 0.0071317*x[1]*x[4] + 0.0029955*x[0]*x[1] + 0.0021813*x[2]*x[2]
   g[2] = 9.300961 + 0.0047026*x[2]*x[4] + 0.0012547*x[0]*x[2] + 0.0019085*x[2]*x[3]
   g[3] = 5.3578547*x[2]*x[2] + 0.8356891*x[0]*x[4] + 37.293239*x[0] - 40792.141
   RETURN, g
END
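As a quick check of the infeasibility noted above, HMBL11 can be evaluated at the starting point; G2 comes out near 16.76, below its lower bound of 20:

   x0 = [78d, 33, 27, 27, 27]   ; the starting point
   g = HMBL11(x0)
   PRINT, g[2]                  ; roughly 16.76, violating 20 <= G2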
; Example problem for CONSTRAINED_MIN:
; Himmelblau Problem 11,
; 5 variables and 3 constraints.
; Constraints and objective are defined in HMBL11.
gcomp = 'HMBL11'                 ; function evaluating G
nobj = 3                         ; index of the objective within G
title = 'Himmelblau 11'          ; title for the report
report = 'hmbl11.txt'            ; report file name
xbnd = [[78, 33, 27, 27, 27], [102, 45, 45, 45, 45]]
gbnd = [[0, 90, 20, 0], [92, 110, 25, 0]]
x = [78, 33, 27, 27, 27]         ; starting point
CONSTRAINED_MIN, x, xbnd, gbnd, nobj, gcomp, inform, $
   REPORT = report, TITLE = title
g = HMBL11(x)                    ; evaluate the functions at the solution
PRINT, g[nobj]                   ; print minimized objective for the HMBL11 problem
1. Lasdon, L.S., Waren, A.D., Jain, A., and Ratner, M., "Design and Testing of a Generalized Reduced Gradient Code for Nonlinear Programming", ACM Transactions on Mathematical Software, Vol. 4, No. 1, March 1978, pp. 34-50.
2. Lasdon, L.S. and Waren, A.D., "Generalized Reduced Gradient Software for Linearly and Nonlinearly Constrained Problems", in "Design and Implementation of Optimization Software", H. Greenberg, ed., Sijthoff and Noordhoff, 1979.
3. Abadie, J. and Carpentier, J., "Generalization of the Wolfe Reduced Gradient Method to the Case of Nonlinear Constraints", in "Optimization", R. Fletcher, ed., Academic Press, London, 1969, pp. 37-47.
4. Murtagh, B.A. and Saunders, M.A. "Large-scale Linearly Constrained Optimization", Mathematical Programming, Vol. 14, No. 1, January 1978, pp. 41-72.
5. Powell, M.J.D., "Restart Procedures for the Conjugate Gradient Method," Mathematical Programming, Vol. 12, No. 2, April 1977, pp. 241-255.
6. Colville, A.R., "A Comparative Study of Nonlinear Programming Codes", IBM T.R. No. 320-2949, 1968.
7. Himmelblau, D.M., Applied Nonlinear Programming, McGraw-Hill Book Co., New York, 1972.
8. Fletcher, R., "A New Approach to Variable Metric Algorithms", Computer Journal, Vol. 13, 1970, pp. 317-322.
9. Smith, S. and Lasdon, L.S., "Solving Large Sparse Nonlinear Programs Using GRG", ORSA Journal on Computing, Vol. 4, No. 1, Winter 1992, pp. 1-15.
10. Luenberger, David G., Linear and Nonlinear Programming, Second Edition, Addison-Wesley, Reading, Massachusetts, 1984.
11. Windward Technologies, GRG2 User's Guide, http://web.wt.net/~wti, 1997.