The simplest way to solve an algebraic equation of the form `g(z) =
0`, for some function `g`, is known as *bisection*.
The method assumes that we start with two values of `z` that
bracket a root: `z1` (to the left) and `z2` (to the
right), say. Then, we iteratively narrow the range as follows.
First, define `zm` to be the midpoint of the range: `zm =
(z1 + z2)/2`. If `g(zm)` has the same sign as `g(z1)` then,
assuming that there is only one root in the range, the root must lie
in `[zm,z2]`, and we can replace `z1` by `zm`. If not, the root lies
in `[z1,zm]`, so `zm` can replace `z2`.

A C fragment to implement this logic is:

if (g(zm)*g(z1) > 0) z1 = zm; else z2 = zm;

The process continues until `|z2-z1|` becomes less than some
specified tolerance. Obviously, the uncertainty in the value of the
root (the width of the range) decreases by a factor of two with each
iteration; since 2^10 is roughly 10^3, this means that we gain
approximately one decimal digit of accuracy every three iterations.
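
Putting the pieces together, a complete bisection routine might look
like the sketch below. The function name `bisect`, the tolerance
parameter `tol`, and the assumption that `g(z1)` and `g(z2)` have
opposite signs on entry are illustrative choices, not part of the
discussion above.

#include <math.h>

/* Bisection sketch: assumes g(z1)*g(z2) < 0 on entry. */
double bisect(double (*g)(double), double z1, double z2, double tol)
{
    while (fabs(z2 - z1) > tol) {
        double zm = (z1 + z2) / 2;   /* midpoint of the current range */
        if (g(zm) * g(z1) > 0)
            z1 = zm;                 /* root lies in [zm, z2] */
        else
            z2 = zm;                 /* root lies in [z1, zm] */
    }
    return (z1 + z2) / 2;            /* best estimate of the root */
}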

Of course, finding the initial range `[z1,z2]` may not be a
simple task. We will return to this later.

The bisection method relies solely on the assumption that the function
`g` is *continuous*, so that a sign change across the range
guarantees a root somewhere inside it. If
`g` is *differentiable*, we can do better. In that
case, a straight-line approximation to `g` in the range
`[z1,z2]` is appropriate, and we can estimate the root by
linear interpolation (or extrapolation). After each iteration, the
range is refined to incorporate the improved information.

In C, we can write:

/* False position iteration. */
zs = z1 + (z2 - z1) * (-g(z1)) / (g(z2) - g(z1));
if (g(zs)*g(z1) > 0) z1 = zs; else z2 = zs;

In the secant method, we always use `zs` and the previous
estimate of the root (`z2` say):

/* Secant iteration. */
zs = z1 + (z2 - z1) * (-g(z1)) / (g(z2) - g(z1));
z1 = z2; z2 = zs;

Note that, with false position, we are guaranteed that our range
always spans the root, and convergence is assured, although the method
is generally slower than the secant method. Specifically, in the
configuration shown above, the iterates would approach the solution
from the right, leaving `z1` unchanged from one iteration to the next.
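
For concreteness, a self-contained secant loop along these lines might
look as follows; the function name `secant`, the stopping test, and
the iteration cap are illustrative assumptions, and a production
version would also guard against `g(z2) - g(z1)` vanishing.

#include <math.h>

/* Secant-method sketch: iterates from the two starting guesses z1, z2. */
double secant(double (*g)(double), double z1, double z2, double tol)
{
    for (int i = 0; i < 100 && fabs(z2 - z1) > tol; i++) {
        double zs = z1 + (z2 - z1) * (-g(z1)) / (g(z2) - g(z1));
        z1 = z2;    /* discard the older estimate... */
        z2 = zs;    /* ...and keep the two most recent ones */
    }
    return z2;
}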

Generally speaking, these methods work best when we are already
close to the solution. One can construct situations where the secant
method performs far *worse* than bisection but, as a rule of
thumb, it can be shown that, once we are close to a root, the secant
method more than *doubles* the number of digits of accuracy of
the answer every *two* iterations. False position, while
slower, still converges substantially faster than bisection.

Very often, we start a root-finding exercise by using bisection,
because that method, while relatively slow to converge, is very
stable. However, it is common to perform a couple of secant
iterations at the end of the process, once we are fairly close to the
solution, to "refine" the results and gain a great deal more
accuracy.
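
Using the illustrative `bisect` and `secant` sketches above, that
hybrid strategy might read as follows (the tolerances here are
arbitrary choices):

/* Hypothetical usage: coarse bisection first, then a secant polish. */
double root = bisect(g, z1, z2, 1e-3);              /* stable but slow  */
root = secant(g, root - 1e-3, root + 1e-3, 1e-12);  /* rapid refinement */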

The most efficient method for finding a root of an equation is known
as *Newton-Raphson*. In this method, instead of doing linear
interpolation between two points known to straddle the root, as in the
secant method, we use the value of the function `g` *and its
derivative* `g'` at some point `z`, and simply
follow the tangent to the point where it crosses the axis.

Obviously, following the tangent line from the point `(z, g(z))` to
the axis gives the new estimate `z - g(z)/g'(z)`.
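
A minimal Newton-Raphson sketch along these lines is shown below; the
function name `newton`, the iteration cap, and the stopping test are
illustrative assumptions, and the caller must supply the derivative
`gp` as well as `g` itself.

#include <math.h>

/* Newton-Raphson sketch: follows the tangent from the current estimate. */
double newton(double (*g)(double), double (*gp)(double),
              double z, double tol)
{
    for (int i = 0; i < 100; i++) {
        double dz = -g(z) / gp(z);   /* step along the tangent */
        z += dz;
        if (fabs(dz) < tol)          /* step small enough: converged */
            break;
    }
    return z;
}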

When it can be used, Newton-Raphson converges extremely rapidly.
In fact, the number of digits of accuracy *doubles* at each
iteration. However, the method requires that we know `g'`,
which may not always be the case (for example, if `g` is known
only as a tabulated or numerically defined function). Note, though,
that it is possible to make numerical estimates of `g'` by
evaluating `g` at two nearby points. The method also suffers
from the fact that it can be wildly unstable if we start far from the
root (for example, if we started at the point labeled `z2` in
the earlier figures, Newton-Raphson would send us off in entirely the
wrong direction). And of course, if we happen to land near a turning
point, where `g'(z) = 0`, the method will fail badly.
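
As a sketch of the numerical derivative estimate just mentioned, one
could use a centered difference; the helper name `gprime` and the step
size `h` are illustrative choices, and `h` should be scaled to the
problem at hand.

/* Centered-difference estimate of g'(z), using g at two nearby points. */
double gprime(double (*g)(double), double z)
{
    const double h = 1e-6;   /* illustrative step size */
    return (g(z + h) - g(z - h)) / (2 * h);
}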

Like the secant method, Newton-Raphson is often used to refine our results after bisection has already brought us fairly close to the desired solution.