Turkish Journal of Mathematics




We consider a general class of nonlinear constrained optimization problems in which derivatives of the objective function and the constraints are unavailable. This lack of derivative information can severely impede the performance of optimization algorithms, most of which compute a quasi-Newton direction and then apply line search techniques. We propose a smoothing algorithm that does not require a penalty function. The new algorithm modifies the trust region and handles the constraints by means of radial basis functions (RBFs). The objective function value is reduced according to the predicted reduction in constraint violation achieved by the trial step. At each iteration, the constraints are approximated by a quadratic model obtained from RBFs. The aim of the present work is to keep the interpolation points well positioned so that an accurate approximation is obtained even in a small trust region. Numerical results are presented for standard test problems.
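The RBF surrogate models referred to above can be illustrated with a minimal sketch. The abstract does not specify the paper's kernel or model details, so the choice below (a cubic RBF with a linear polynomial tail, a common setup in derivative-free trust-region methods) is an assumption; the interpolant is obtained by solving a small saddle-point linear system over the sample points:

```python
import numpy as np

def rbf_model(points, values, phi=lambda r: r**3):
    """Fit an RBF interpolant s(x) = sum_i lam_i * phi(||x - x_i||) + c + g·x.

    Illustrative sketch only: a cubic kernel with a linear polynomial
    tail is assumed, not necessarily the kernel used in the paper.
    """
    pts = np.asarray(points, dtype=float)
    f = np.asarray(values, dtype=float)
    n, d = pts.shape
    # Kernel matrix from pairwise distances, and linear tail P = [1, x]
    Phi = phi(np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2))
    P = np.hstack([np.ones((n, 1)), pts])
    # Saddle-point interpolation system enforcing s(x_i) = f_i plus
    # the orthogonality conditions on the RBF coefficients
    A = np.block([[Phi, P], [P.T, np.zeros((d + 1, d + 1))]])
    rhs = np.concatenate([f, np.zeros(d + 1)])
    coef = np.linalg.solve(A, rhs)
    lam, poly = coef[:n], coef[n:]

    def s(x):
        x = np.asarray(x, dtype=float)
        r = np.linalg.norm(pts - x, axis=1)
        return lam @ phi(r) + poly[0] + poly[1:] @ x

    return s

# Example: interpolate f(x) = x1^2 + x2^2 at six well-positioned points
pts = [[0, 0], [1, 0], [0, 1], [1, 1], [2, 0], [0, 2]]
vals = [0.0, 1.0, 1.0, 2.0, 4.0, 4.0]
s = rbf_model(pts, vals)
```

Within a trust-region framework, a model of this kind would be built for each constraint (and, in a fully derivative-free setting, for the objective), and its accuracy in a small trust region depends on the sample points remaining well positioned, as emphasized in the abstract.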


Exact penalty function, derivative-free method, trust-region method, nonsmooth optimization, radial basis functions, constrained optimization, nonlinear programming
