There are several minimization methods available. The best choice of
algorithm depends on the problem. Each of the algorithms uses the value
of the function and its gradient at each evaluation point.
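As a concrete picture of the interface these minimizers share, the sketch below fills in a gsl_multimin_function_fdf for a simple two-parameter paraboloid. The objective itself, its parameter array par, and the names my_f, my_df, my_fdf and my_func are invented for illustration; only the struct layout and the GSL calls come from the library.

#include <gsl/gsl_multimin.h>
#include <gsl/gsl_vector.h>

/* Illustrative objective f(x,y) = a (x - x0)^2 + b (y - y0)^2 + c,
   with params = { x0, y0, a, b, c }.  The minimizers only ever see
   the value callback f, the gradient callback df, and the combined
   callback fdf. */

static double
my_f (const gsl_vector *v, void *params)
{
  double *p = (double *) params;
  double x = gsl_vector_get (v, 0);
  double y = gsl_vector_get (v, 1);
  return p[2] * (x - p[0]) * (x - p[0])
       + p[3] * (y - p[1]) * (y - p[1]) + p[4];
}

static void
my_df (const gsl_vector *v, void *params, gsl_vector *df)
{
  double *p = (double *) params;
  double x = gsl_vector_get (v, 0);
  double y = gsl_vector_get (v, 1);
  gsl_vector_set (df, 0, 2.0 * p[2] * (x - p[0]));
  gsl_vector_set (df, 1, 2.0 * p[3] * (y - p[1]));
}

static void
my_fdf (const gsl_vector *v, void *params, double *f, gsl_vector *df)
{
  *f = my_f (v, params);   /* value and gradient computed together */
  my_df (v, params, df);
}

/* Bundle the callbacks for this 2-dimensional problem. */
static double par[5] = { 1.0, 2.0, 10.0, 20.0, 30.0 };
static gsl_multimin_function_fdf my_func = { &my_f, &my_df, &my_fdf, 2, par };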
- Minimizer: gsl_multimin_fdfminimizer_conjugate_fr
This is the Fletcher-Reeves conjugate gradient algorithm. The conjugate
gradient algorithm proceeds as a succession of line minimizations. The
sequence of search directions is used to build up an approximation to the
curvature of the function in the neighborhood of the minimum. An
initial search direction p is chosen using the gradient and line
minimization is carried out in that direction. The accuracy of the line
minimization is specified by the parameter tol. At the minimum
along this line the function gradient g and the search direction
p are orthogonal. The line minimization terminates when
@math{dot(p,g) < tol |p| |g|}. The
search direction is updated using the Fletcher-Reeves formula
@math{p' = g' - \beta g} where @math{\beta=-|g'|^2/|g|^2}, and
the line minimization is then repeated for the new search
direction.
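As a rough sketch of how this minimizer is driven in practice, the loop below allocates a Fletcher-Reeves minimizer for an objective such as my_func from the sketch near the top of this section. The starting point, the initial step size of 0.01, the tol of 1e-4 and the gradient tolerance of 1e-3 are illustrative choices, not values taken from the text above.

#include <stdio.h>
#include <gsl/gsl_errno.h>
#include <gsl/gsl_multimin.h>

/* fdf points to a filled-in gsl_multimin_function_fdf (e.g. &my_func
   from the earlier sketch). */
void
minimize_fr (gsl_multimin_function_fdf *fdf)
{
  size_t iter = 0;
  int status;

  gsl_vector *x = gsl_vector_alloc (2);   /* illustrative starting point */
  gsl_vector_set (x, 0, 5.0);
  gsl_vector_set (x, 1, 7.0);

  gsl_multimin_fdfminimizer *s =
    gsl_multimin_fdfminimizer_alloc (gsl_multimin_fdfminimizer_conjugate_fr, 2);

  /* step_size = 0.01 scales the first trial step; tol = 1e-4 is the
     line minimization accuracy, i.e. stop when dot(p,g) < tol |p| |g|. */
  gsl_multimin_fdfminimizer_set (s, fdf, x, 0.01, 1e-4);

  do
    {
      iter++;
      status = gsl_multimin_fdfminimizer_iterate (s);
      if (status)
        break;                /* the iteration could not make progress */

      /* stop once the gradient norm drops below 1e-3 (illustrative) */
      status = gsl_multimin_test_gradient (s->gradient, 1e-3);

      printf ("%5zu  f = %10.5f\n", iter, s->f);
    }
  while (status == GSL_CONTINUE && iter < 100);

  gsl_multimin_fdfminimizer_free (s);
  gsl_vector_free (x);
}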
- Minimizer: gsl_multimin_fdfminimizer_conjugate_pr
This is the Polak-Ribiere conjugate gradient algorithm. It is similar
to the Fletcher-Reeves method, differing only in the choice of the
coefficient @math{\beta}. Both methods work well when the evaluation
point is close enough to the minimum of the objective function that it
is well approximated by a quadratic hypersurface.
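In the API, switching from the Fletcher-Reeves to the Polak-Ribiere variant only changes the type handed to the allocator; the helper name and the caller-supplied dimension below are illustrative.

#include <gsl/gsl_multimin.h>

/* Same driver as in the Fletcher-Reeves sketch; only the type differs. */
gsl_multimin_fdfminimizer *
alloc_pr_minimizer (size_t n)
{
  return gsl_multimin_fdfminimizer_alloc (gsl_multimin_fdfminimizer_conjugate_pr, n);
}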
- Minimizer: gsl_multimin_fdfminimizer_vector_bfgs
This is the vector Broyden-Fletcher-Goldfarb-Shanno conjugate gradient
algorithm. It is a quasi-Newton method which builds up an approximation
to the second derivatives of the function @math{f} using the difference
between successive gradient vectors. By combining the first and second
derivatives the algorithm is able to take Newton-type steps towards the
function minimum, assuming quadratic behavior in that region.
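Selecting this minimizer is again just a matter of the type constant; the sketch below also prints the algorithm name via gsl_multimin_fdfminimizer_name to show which method a handle was created with (the helper name is made up).

#include <stdio.h>
#include <gsl/gsl_multimin.h>

/* Allocate a BFGS minimizer for an n-dimensional problem and report
   which algorithm the handle uses. */
gsl_multimin_fdfminimizer *
alloc_bfgs_minimizer (size_t n)
{
  gsl_multimin_fdfminimizer *s =
    gsl_multimin_fdfminimizer_alloc (gsl_multimin_fdfminimizer_vector_bfgs, n);
  printf ("allocated the %s minimizer\n", gsl_multimin_fdfminimizer_name (s));
  return s;
}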
- Minimizer: gsl_multimin_fdfminimizer_steepest_descent
The steepest descent algorithm follows the downhill gradient of the
function at each step. When a downhill step is successful the step-size
is increased by a factor of two. If the downhill step leads to a higher
function value then the algorithm backtracks and the step size is
decreased using the parameter tol. A suitable value of tol
for most applications is 0.1. The steepest descent method is
inefficient and is included only for demonstration purposes.
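As a sketch of how the two parameters enter for this minimizer, the helper below starts a steepest descent run with tol set to the 0.1 suggested above; the helper name, the initial step size of 0.01 and the caller-supplied starting point are assumptions made for illustration.

#include <gsl/gsl_multimin.h>

/* Allocate the steepest descent minimizer for an n-dimensional problem
   and start it from x0.  tol = 0.1 as suggested above; the initial
   step size of 0.01 is an illustrative guess. */
gsl_multimin_fdfminimizer *
start_steepest_descent (gsl_multimin_function_fdf *fdf,
                        const gsl_vector *x0, size_t n)
{
  gsl_multimin_fdfminimizer *s =
    gsl_multimin_fdfminimizer_alloc (gsl_multimin_fdfminimizer_steepest_descent, n);
  gsl_multimin_fdfminimizer_set (s, fdf, x0, 0.01, 0.1);
  return s;
}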