The algorithms described in this section do not require any derivative information to be supplied by the user. Any derivatives needed are approximated by finite differences, using @code{gsl_multiroots_fdjac} with a relative step size of @code{GSL_SQRT_DBL_EPSILON}.
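For orientation, the fragment below is a minimal sketch of how a derivative-free solver can be selected through the standard @code{gsl_multiroot_fsolver} interface. The test system, starting point and tolerance are arbitrary illustrative choices, not part of the library; @code{gsl_multiroot_fsolver_broyden} could be substituted for @code{gsl_multiroot_fsolver_dnewton} in the same way.

@example
#include <stdio.h>
#include <gsl/gsl_vector.h>
#include <gsl/gsl_errno.h>
#include <gsl/gsl_multiroots.h>

/* Illustrative system: f_0 = x_0^2 + x_1^2 - 1, f_1 = x_0 - x_1 */
static int
example_f (const gsl_vector * x, void *params, gsl_vector * f)
{
  double x0 = gsl_vector_get (x, 0);
  double x1 = gsl_vector_get (x, 1);

  gsl_vector_set (f, 0, x0 * x0 + x1 * x1 - 1.0);
  gsl_vector_set (f, 1, x0 - x1);

  return GSL_SUCCESS;
}

int
main (void)
{
  const size_t n = 2;
  gsl_multiroot_function F = { &example_f, n, NULL };

  gsl_vector *x = gsl_vector_alloc (n);
  gsl_vector_set (x, 0, 0.5);
  gsl_vector_set (x, 1, 0.1);

  /* Select a derivative-free solver; no Jacobian function is supplied. */
  gsl_multiroot_fsolver *s =
    gsl_multiroot_fsolver_alloc (gsl_multiroot_fsolver_dnewton, n);
  gsl_multiroot_fsolver_set (s, &F, x);

  int status;
  size_t iter = 0;

  do
    {
      iter++;
      status = gsl_multiroot_fsolver_iterate (s);
      if (status)              /* the solver is stuck or an error occurred */
        break;
      status = gsl_multiroot_test_residual (s->f, 1e-7);
    }
  while (status == GSL_CONTINUE && iter < 100);

  printf ("status = %s\n", gsl_strerror (status));
  printf ("root   = (%g, %g)\n",
          gsl_vector_get (s->x, 0), gsl_vector_get (s->x, 1));

  gsl_multiroot_fsolver_free (s);
  gsl_vector_free (x);

  return 0;
}
@end example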
The discrete Newton algorithm is the simplest method of solving a multidimensional system. It uses the Newton iteration

@math{x \to x - J^{-1} f(x)}

where the Jacobian matrix @math{J} is approximated by taking finite differences of the function @math{f}. The approximation scheme used by this implementation is,

@math{J_{ij} = (f_i(x + \delta_j) - f_i(x)) / \delta_j}
where @math{\delta_j} is a step of size @math{\sqrt\epsilon |x_j|} with @math{\epsilon} being the machine precision (@c{$\epsilon \approx 2.22 \times 10^{-16}$} @math{\epsilon \approx 2.22 \times 10^{-16}}). The order of convergence of Newton's algorithm is quadratic, but the finite differences require @math{n^2} function evaluations on each iteration. The algorithm may become unstable if the finite differences are not a good approximation to the true derivatives.
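As an illustration of this scheme only (it is not the library's internal routine), the following standalone fragment approximates the Jacobian of a simple two-dimensional system by forward differences with step @math{\delta_j = \sqrt\epsilon |x_j|}. The function @code{example_f} and the fixed dimension are chosen purely for demonstration.

@example
#include <stdio.h>
#include <math.h>
#include <float.h>

#define N 2

/* Illustrative system: f_0 = x_0^2 + x_1^2 - 1, f_1 = x_0 - x_1 */
static void
example_f (const double x[N], double f[N])
{
  f[0] = x[0] * x[0] + x[1] * x[1] - 1.0;
  f[1] = x[0] - x[1];
}

/* Forward-difference approximation of J_ij = df_i/dx_j, using a
   step delta_j = sqrt(eps) * |x_j| in each coordinate direction. */
static void
approx_jacobian (const double x[N], double J[N][N])
{
  double f0[N], f1[N], xt[N];
  size_t i, j;

  example_f (x, f0);

  for (j = 0; j < N; j++)
    {
      double delta = sqrt (DBL_EPSILON) * fabs (x[j]);
      if (delta == 0.0)
        delta = sqrt (DBL_EPSILON);       /* guard against x_j = 0 */

      for (i = 0; i < N; i++)
        xt[i] = x[i];
      xt[j] += delta;

      example_f (xt, f1);

      for (i = 0; i < N; i++)
        J[i][j] = (f1[i] - f0[i]) / delta;
    }
}

int
main (void)
{
  double x[N] = { 0.5, 0.1 };
  double J[N][N];
  size_t i, j;

  approx_jacobian (x, J);

  for (i = 0; i < N; i++)
    for (j = 0; j < N; j++)
      printf ("J[%zu][%zu] = % .6f\n", i, j, J[i][j]);

  return 0;
}
@end example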
The Broyden algorithm is a version of the discrete Newton algorithm which attempts to avoid the expensive update of the Jacobian matrix on each iteration. The changes to the Jacobian are also approximated, using a rank-1 update,

@math{J^{-1} \to J^{-1} - (J^{-1} df - dx) dx^T J^{-1} / (dx^T J^{-1} df)}
where the vectors @math{dx} and @math{df} are the changes in @math{x} and @math{f}. On the first iteration the inverse Jacobian is estimated using finite differences, as in the discrete Newton algorithm. This approximation gives a fast update but is unreliable if the changes are not small, and the estimate of the inverse Jacobian deteriorates as the iteration proceeds. The algorithm has a tendency to become unstable unless it starts close to the root. The Jacobian is refreshed if this instability is detected (consult the source for details).
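For illustration, the following standalone fragment applies this rank-1 update to a small dense approximation of the inverse Jacobian. It is a sketch of the formula only, not of the library's internal bookkeeping, and the sample data in @code{main} (an identity matrix as the initial estimate and arbitrary steps) are chosen purely for demonstration.

@example
#include <stdio.h>

#define N 2

/* Apply the rank-1 update
     Jinv <- Jinv - (Jinv df - dx) (dx^T Jinv) / (dx^T Jinv df)
   to an existing approximation Jinv of the inverse Jacobian. */
static void
broyden_update (double Jinv[N][N], const double dx[N], const double df[N])
{
  double v[N];          /* v = Jinv df            */
  double w[N];          /* w = dx^T Jinv (row)    */
  double denom = 0.0;   /* dx^T Jinv df           */
  size_t i, j;

  for (i = 0; i < N; i++)
    {
      v[i] = 0.0;
      for (j = 0; j < N; j++)
        v[i] += Jinv[i][j] * df[j];
    }

  for (j = 0; j < N; j++)
    {
      w[j] = 0.0;
      for (i = 0; i < N; i++)
        w[j] += dx[i] * Jinv[i][j];
    }

  for (i = 0; i < N; i++)
    denom += dx[i] * v[i];

  /* A vanishing denominator signals that the update is unreliable;
     a full implementation would refresh the Jacobian by finite
     differences at this point. */
  if (denom == 0.0)
    return;

  for (i = 0; i < N; i++)
    for (j = 0; j < N; j++)
      Jinv[i][j] -= (v[i] - dx[i]) * w[j] / denom;
}

int
main (void)
{
  double Jinv[N][N] = { { 1.0, 0.0 }, { 0.0, 1.0 } };
  double dx[N] = { 0.1, -0.05 };
  double df[N] = { 0.08, -0.2 };
  size_t i, j;

  broyden_update (Jinv, dx, df);

  for (i = 0; i < N; i++)
    for (j = 0; j < N; j++)
      printf ("Jinv[%zu][%zu] = % .6f\n", i, j, Jinv[i][j]);

  return 0;
}
@end example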
This algorithm is not recommended and is included only for demonstration purposes.