For second-order problems like (4), in which f depends only on x, the second-order Leapfrog integration scheme is widely used. Its simplicity makes it an attractive alternative. However, it requires a modification to the way we have been thinking about how -- and when -- our data are defined.
Up to now, we have assumed (quite reasonably) that all data are synchronous -- that is, all the components of the vector are defined at the same time t_i. However, in second-order systems at least, it is often advantageous to define the velocities (v = dx/dt) at the mid-points of the intervals -- the velocities are said to be staggered with respect to the positions x. Setting aside for the moment how this is accomplished in practice, let us define, following our earlier convention,

    v_{i+1/2} = v(t_i + Δt/2),

where Δt is the timestep.
With this definition, we can write down a statement of the Leapfrog scheme that advances x_i to x_{i+1} and v_{i+1/2} to v_{i+3/2}:

    x_{i+1}   =  x_i      +  v_{i+1/2} Δt
    v_{i+3/2} =  v_{i+1/2} +  f(x_{i+1}) Δt
It is depicted graphically below. Notice the symmetry between the ways x and v are advanced in time. You can easily verify, by expanding x and v in Taylor series about t_i, that this scheme does indeed give second-order accuracy in x. In fact, it is formally equivalent to the Mid-point and second-order predictor-corrector methods.
Of course, initial conditions are rarely specified at the staggered times required by the leapfrog scheme! Typically, we must use a so-called ``self-starting'' scheme (like Euler, Mid-point or Runge-Kutta-4) to take the first half step and establish the value of v_{1/2}. The program leapfrog.c implements the leapfrog scheme for the inverse-square problem studied earlier. The program simple_leapfrog.c applies leapfrog to a simpler 1-D problem. Both use the Euler method to offset the initial velocity. You should verify that this integrator really is second-order.
You might well ask, ``Since we have to start off with one of the other schemes anyway, and since the Mid-point method is already very simple to program, why should I ever bother with the Leapfrog scheme?'' The answer is that, unlike any of the other methods we have described, the Leapfrog integrator is time reversible -- and that property gives it some very important advantages.
To see the time reversibility explicitly, reconsider equation (24) and imagine that we wished to ``reverse our tracks'' and step backward from t_{i+1} to t_i. Applying the algorithm, we do the following:

    v_{i+1/2} =  v_{i+3/2} -  f(x_{i+1}) Δt
    x_i       =  x_{i+1}   -  v_{i+1/2} Δt
But these are precisely the steps (in reverse) that we took to advance the system in the first place! In other words, if we use the Leapfrog scheme to integrate forward in time, then reverse the velocities (and the sign of the timestep) and use the same integrator to return to time t = 0, we will arrive precisely at our starting point -- not approximately, as would be the case with the other integrators, but exactly, at least up to rounding error. Verify this for yourself by modifying leapfrog.c to reverse itself and integrate backwards after integrating forward for some interval -- 10 time units, say.
The Leapfrog scheme is time reversible because of the symmetric way in which it is defined. None of the earlier schemes have this property, because they all evaluate derivatives in an asymmetrical way. For example, in the Euler method, it is clear that the forward and backward steps would not cancel out precisely -- they use different derivatives, evaluated at different times. In the Mid-point method, which uses an estimate of the derivative at the center of the range, that estimate is still based on an extrapolation from the left-hand side of the interval. On time reversal, the corresponding estimate would be based on the derivative at the right-hand edge, and would not yield precisely the same result. The difference is small, but it is enough to prevent the scheme from being exactly time reversible. Similar reasoning applies to Runge-Kutta-4.
Time reversibility is important because, in many cases, it guarantees conservation of energy, angular momentum -- and any other conserved quantity. Consider again the problem of a simple elliptical solution of the gravitational two-body problem. Imagine that our integrator makes an energy error of ΔE as it integrates the system forward through 1 orbital period. Now imagine reversing the integration. You might guess that the energy error in the reverse integration would be -ΔE, but this is not the case. In any system where the equations of motion are unchanged by time reversal (and, specifically, in any case where the function f depends only on the coordinates), the time-reversed orbit is itself a solution of the original ODE (with v simply replaced by -v), so the energy error is still ΔE. But if our integration scheme is time-reversible, we know that the final energy error is zero (because we return to our starting point). The only possible way that this can occur is if ΔE = 0 (i.e. energy is exactly conserved!).
You can demonstrate for yourself the difference in energy conservation between the various methods we have discussed by plotting the energy error as a function of time for the standard initial conditions used in the earlier program inverse_square.c. As illustrated below, even the small error made by Runge-Kutta-4 (exaggerated here by choosing a timestep 5 times greater than that used in the other two integrators) is systematic, leading to a long-term drift in the orbital parameters. The energy error in the Leapfrog scheme has no such long-term trend. There is a periodic error over the course of an orbit, at the same level as the error in the Mid-point scheme, but the errors incurred over the outgoing portion of the orbit exactly cancel those produced on the incoming segment, so no net error results. The Leapfrog method is only second-order accurate, but it is very stable.
In situations where we are interested in long-term small changes in the properties of a nearly periodic orbit, and where even small systematic errors would mask the true solution, time-reversible integrators such as the Leapfrog scheme are essential.