Computing Particle Motion


Newton's second law of motion for a particle of mass $m$ moving under the influence of a force ${\bf F}$ is

\begin{displaymath}
m \frac{d^2{\bf x}}{dt^2} =  {\bf F}({\bf x}, {\bf v}, t),
\end{displaymath}

where ${\bf v} = d{\bf x}/dt$. This is perhaps the best-known differential equation, at least among physicists. The equation is easy to solve if ${\bf F}$ has some simple form--this is pretty much what Contemporary Physics or PHYS 111 was all about! In general, however, the analytic solution of such a second-order, possibly nonlinear, differential equation is not at all an easy matter.

In PHYS 105 you studied the kinematics of uniformly accelerated motion to compute the trajectory of a projectile under gravity, with the inclusion of various complicating factors, such as air resistance and the variation of gravity with height. We repeat some of these solutions here.


A Simple Integration Scheme

The basic idea in numerical integration of the particle's equation of motion is to subdivide the trajectory into many short pieces--for simplicity, let's imagine that we describe the motion as a series of time segments, each of constant duration $\delta t$. The key (which also seems intuitively reasonable) is to choose $\delta t$ sufficiently short that the force ${\bf F}$ may be regarded as constant during the interval. You can quickly convince yourself that, if ${\bf F}$ is sufficiently ``well behaved''--differentiable, or even just continuous--this requirement on ${\bf F}$ can always be met by choosing a sufficiently short time step $\delta t$.

Having made the assumption that ${\bf F}$ may be regarded as constant, we can easily connect the positions and velocities at the start and end of the step. More precisely, if we let $t_i (= i\delta t)$, ${\bf x}_i = {\bf x}(t_i)$, and ${\bf v}_i = {\bf v}(t_i)$ represent the state of the system at the end of the $i$-th step, and we define ${\bf a}_i = {\bf F}({\bf x}_i, {\bf v}_i, t_i)/m$, we can write

\begin{eqnarray*}
{\bf x}_{i+1} & = & {\bf x}_i + {\bf v}_i\delta t
                    + {\textstyle\frac12}{\bf a}_i\delta t^2 , \\
{\bf v}_{i+1} & = & {\bf v}_i + {\bf a}_i\delta t , \\
t_{i+1}       & = & t_i + \delta t .
\end{eqnarray*}

This integration scheme--the mapping from state $i$ to state $i+1$--is the discrete numerical analog of the continuous integral operator that maps the initial conditions of the problem (at $t =
0$) into the state of the system at time $t$.

Note that the map is explicit--the procedure for getting from state $i$ to state $i+1$ depends only on quantities determinable at time $t_i$. At time $t_{i+1}$, a new ${\bf a}_{i+1} = {\bf F}({\bf x}_{i+1}, {\bf v}_{i+1}, t_{i+1})/m$ is computed, and the process repeats.


Exercise 1:  Write a program to follow the (1-dimensional) motion of mass $m = 1$ moving under the influence of a restoring force $F = -kx$, with ``spring constant'' $k = 4$, starting with $x(0) = 1$, $v(0) = 0$. Plot your numerical solution $x_{num}$ and compare it with the analytic solution $x_{an} = \cos 2t$, for $\delta t = 0.001$ and $0 \le t \le 10$.
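One possible sketch of a solution to Exercise 1, using only the simple scheme described above (the function and variable names here are our own choices, and plotting is omitted for brevity):

```python
import math

def integrate_sho(dt=0.001, t_max=10.0, m=1.0, k=4.0, x0=1.0, v0=0.0):
    """Integrate m x'' = -k x with the simple explicit scheme and
    return lists of times, numerical x, and analytic x = cos 2t."""
    x, v, t = x0, v0, 0.0
    ts, xs_num, xs_an = [t], [x], [math.cos(2.0 * t)]
    n_steps = round(t_max / dt)
    for _ in range(n_steps):
        a = -k * x / m                    # acceleration at the start of the step
        x = x + v * dt + 0.5 * a * dt**2  # constant-acceleration kinematics
        v = v + a * dt
        t = t + dt
        ts.append(t)
        xs_num.append(x)
        xs_an.append(math.cos(2.0 * t))   # analytic solution for these parameters
    return ts, xs_num, xs_an

ts, xs_num, xs_an = integrate_sho()
max_err = max(abs(n - a) for n, a in zip(xs_num, xs_an))
print("maximum |x_num - x_an| =", max_err)
```

Plotting `xs_num` against `ts` (with, say, matplotlib) and overlaying `xs_an` should show the two curves nearly coincide at this step size, with a slowly growing amplitude error in the numerical solution.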


The equations presented above are not in the standard form usually adopted when numerical algorithms for the solution of ODEs are derived and expressed, nor is the scheme as stated widely used. Nevertheless, the approach serves as a quite intuitive introduction to the subject.



Numerical Errors

A numerical integration scheme such as the one just described solves an approximation to the true differential equation. Thus, it necessarily makes errors--deviations between the numerical and the true solution that have nothing to do with mistakes in the program or with misconceptions on the part of the programmer. Errors are part and parcel of the approximations underlying the scheme.

In our case, since we approximate ${\bf F}$ by a constant for the duration of each step, if ${\bf F}$ is differentiable it is clear that the error we incur in ${\bf F}$ is $O(\delta t)$--actually ${\bf F}^\prime\delta t$, the next term in the Taylor series expansion of ${\bf F}$, which we explicitly throw away. The corresponding errors in ${\bf x}$ and ${\bf v}$ at the end of the step are evidently $O(\delta t^3)$ and $O(\delta t^2)$, respectively. The lowest power of $\delta t$ appearing in the error terms is referred to as the order of the integration scheme. The total error in the calculation is the accumulation of these errors over all steps. Understanding these errors and how they scale with $\delta t$ is critical to any study of numerical integration.


Exercise 2:  Analyze the errors in the numerical solution of Newton's equation in exercise #1.
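A simple way to start the error analysis of Exercise 2 is to integrate to a fixed final time with two different step sizes and compare the final errors. The sketch below (our own names and parameter choices, for the oscillator of Exercise 1) measures how the error scales with $\delta t$:

```python
import math

def final_error(dt, t_max=10.0, m=1.0, k=4.0):
    """Error in x at t_max for the simple scheme applied to m x'' = -k x,
    starting from x(0) = 1, v(0) = 0 (analytic solution x = cos 2t)."""
    x, v, t = 1.0, 0.0, 0.0
    for _ in range(round(t_max / dt)):
        a = -k * x / m
        x += v * dt + 0.5 * a * dt**2
        v += a * dt
        t += dt
    return abs(x - math.cos(2.0 * t))

e1 = final_error(0.001)
e2 = final_error(0.0005)
print("error(dt)   =", e1)
print("error(dt/2) =", e2)
print("ratio       =", e1 / e2)
```

If the accumulated error were $O(\delta t^2)$, halving the step would reduce the error by a factor of four; a ratio near two instead indicates first-order scaling, which is the puzzle taken up in the next section.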



Realistic Numerical Errors

Your results from the exercise above should be at odds with the discussion of errors just presented. You should have found that, for any choice of ${\bf F}$ for which an analytic solution is known, both the position and the velocity errors scale only as the first power of $\delta t$--in other words, reducing the time step results in a far smaller improvement in accuracy than the earlier analysis would suggest. Why is this?

Part of the problem comes from the fact that the errors in our scheme are coherent, that is, errors in successive steps tend to be of the same sign--they don't jump around randomly. Thus, if the error after one step is $\epsilon$, then the error after two is roughly $2\epsilon$, the error after $N$ is $\sim N\epsilon$, and so on. (We assume here that the neglected terms in ${\bf F}^\prime$ don't vary too much with time.) Since we are normally interested in integrating across a fixed time interval, the number of steps needed varies as $N\propto1/\delta t$. We thus immediately lose one power of $\delta t$ in our error assessments when we move from integration over a single step to integration over an interval.

A second problem is that the order of the ${\bf x}$ integration is greater than that of the ${\bf v}$ integration. Applying the reasoning of the previous paragraph, we come to the conclusion that the net scaling of the error in ${\bf v}$ is linear in $\delta t$, as observed. However, this in turn affects the ${\bf x}$ integration, since ${\bf v}$ appears in the second term. The result is that the per-step error in ${\bf x}$ acquires a term of the form $\delta {\bf v}\delta t$, i.e. $O(\delta t^2)$, and this becomes $O(\delta t)$ when integrated over a finite interval. The error in ${\bf v}$ actually overwhelms any benefit of including the $\frac12{\bf a}\delta
t^2$ term! You can verify this for yourself by simply omitting this term from your program.
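A quick numerical check of this claim (a sketch under the same oscillator setup as Exercise 1; the names and the toggle flag are ours) runs the integration with and without the $\frac12{\bf a}\delta t^2$ term and compares the final errors:

```python
import math

def run(dt=0.001, t_max=10.0, with_half_term=True):
    """Integrate x'' = -4x from x(0)=1, v(0)=0; return |x - cos 2t| at t_max."""
    x, v, t = 1.0, 0.0, 0.0
    for _ in range(round(t_max / dt)):
        a = -4.0 * x                      # F/m with k = 4, m = 1
        if with_half_term:
            x += v * dt + 0.5 * a * dt**2
        else:
            x += v * dt                   # plain Euler update for x
        v += a * dt
        t += dt
    return abs(x - math.cos(2.0 * t))

err_with = run(with_half_term=True)
err_without = run(with_half_term=False)
print("error with the 1/2 a dt^2 term    :", err_with)
print("error without the 1/2 a dt^2 term :", err_without)
```

Both errors come out the same order of magnitude and both scale linearly with $\delta t$, consistent with the argument above: the first-order error inherited from the velocity update dominates either way.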


Better Integration Schemes

Newton's equation of motion

\begin{displaymath}
m \frac{d^2{\bf x}}{dt^2} =  {\bf F}({\bf x}, {\bf v}, t),
\end{displaymath}

constitutes an ordinary differential equation of second order with specified initial values for ${\bf x}$ and ${\bf v}$. Any second-order differential equation can be split into two coupled first-order differential equations with the same initial values. Newton's equation then becomes

\begin{displaymath}
m \frac{d{\bf v}}{dt} =  {\bf F}({\bf x}, {\bf v}, t)
\end{displaymath}


\begin{displaymath}
\frac{d{\bf x}}{dt} =  {\bf v} .
\end{displaymath}

In the following sections we will systematically describe better integration schemes to solve such coupled systems of ordinary first order differential equations.
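The split into a coupled first-order system can be sketched in code as follows (a minimal illustration, with our own function names; the right-hand side shown is the restoring force of Exercise 1, and the stepper is plain explicit Euler, not one of the better schemes to come):

```python
import math

def rhs(x, v, t, m=1.0, k=4.0):
    """Right-hand sides of the coupled first-order system:
    dx/dt = v, dv/dt = F(x, v, t)/m, here with F = -kx."""
    dxdt = v
    dvdt = -k * x / m
    return dxdt, dvdt

def euler_step(x, v, t, dt):
    """Advance the state (x, v, t) by one explicit Euler step."""
    dxdt, dvdt = rhs(x, v, t)
    return x + dxdt * dt, v + dvdt * dt, t + dt

# Integrate from t = 0 to t = 1 with dt = 0.001.
x, v, t = 1.0, 0.0, 0.0
for _ in range(1000):
    x, v, t = euler_step(x, v, t, 0.001)
print("x(1) numerical:", x, "  analytic:", math.cos(2.0))
```

Any one-step method then amounts to a different rule for advancing the state $({\bf x}, {\bf v})$ using evaluations of the same right-hand-side function.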



Michel Vallieres 2009-01-06