Numerical Solution of Partial Differential Equations
We are looking for solutions of the partial differential equations encountered in Mathematical Physics. The object of this section is to model physical systems and predict their behavior.
Finite Difference Approach
We will use the Finite Difference approach in solving these equations. In this approach the solution of the equations is sought on an equally spaced lattice in both space and time.
The Taylor series expansion of a function near $x_0$ reads
\[ f(x_0+\delta) = f(x_0) + \delta \left. \frac{d f}{d x}\right|_{x_0} + \left. \frac{\delta^2}{2!} \frac{d^2 f}{d x^2} \right|_{x_0} + O(\delta^3) \]
and
\[ f(x_0-\delta) = f(x_0) - \delta \left. \frac{d f}{d x}\right|_{x_0} + \left. \frac{\delta^2}{2!} \frac{d^2 f}{d x^2} \right|_{x_0} + O(\delta^3) \]
at symmetrical distances $x_0 \pm \delta$ about $x_0$. Solving for the derivative terms yields
\[ \begin{aligned} \left. \frac{d f}{d x}\right|_{x_0} &= \frac{f(x_0+\delta) - f(x_0)}{\delta} + O(\delta) \\ \left. \frac{d f}{d x}\right|_{x_0} &= \frac{f(x_0) - f(x_0-\delta)}{\delta} + O(\delta). \end{aligned} \]
The neglected terms in these one-sided (forward and backward) differences are of order $O(\delta)$.
A more accurate form follows by combining the two Taylor series expansions. Subtracting them isolates the first derivative,
\[ \left. \frac{d f}{d x} \right|_{x_0} = \frac{f(x_0+\delta) - f(x_0-\delta)}{2 \delta} + O(\delta^2), \]
and adding them, together with the function evaluated at $x_0$, isolates the second derivative,
\[ \left. \frac{d^2 f}{d x^2 } \right|_{x_0} = \frac{f(x_0+\delta) - 2 f(x_0) + f(x_0-\delta)} {\delta^2} + O(\delta^2). \]
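These error orders can be checked numerically. The sketch below compares the one-sided and centered first-derivative formulas for $f(x) = \sin x$ (the test point and step size are illustrative choices):

```python
import math

def forward_diff(f, x0, delta):
    # One-sided difference: truncation error is O(delta)
    return (f(x0 + delta) - f(x0)) / delta

def centered_diff(f, x0, delta):
    # Symmetric difference: truncation error is O(delta^2)
    return (f(x0 + delta) - f(x0 - delta)) / (2 * delta)

def centered_second_diff(f, x0, delta):
    # Three-point second derivative: truncation error is O(delta^2)
    return (f(x0 + delta) - 2 * f(x0) + f(x0 - delta)) / delta**2

x0, delta = 1.0, 1e-3
exact = math.cos(x0)            # d/dx sin(x) = cos(x)
err_fwd = abs(forward_diff(math.sin, x0, delta) - exact)
err_ctr = abs(centered_diff(math.sin, x0, delta) - exact)
print(err_fwd, err_ctr)         # the centered error is far smaller
```

Halving $\delta$ roughly halves the one-sided error but cuts the centered error by a factor of four, consistent with the stated orders.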
Laplace and Poisson Equations
The Laplace equation (in 2-D)
\[ \nabla^2 u = \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0 \]
and the Poisson equation (in 2-D)
\[ \nabla^2 u = \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = S(x,y) \]
are encountered in the solution for potential functions, for instance in electrostatic and gravitational problems, and in describing the steady state reached in heat transmission problems. The function $u(x,y)$ is a time independent field, i.e., a potential or a temperature profile function; $S(x,y)$ represents a source term, i.e., a mass distribution, a charge distribution, or a heat source.
These equations are said to be of elliptic type. No time dependence is involved here. A time independent solution for $u(x,y)$ is sought after within a domain $\Omega$. In 2-D the domain could be a simple rectangle, $0 \le x \le 1$ and $0 \le y \le 1$, or could be of arbitrary shape.
The solution, or its derivative, is specified at the boundary of the domain, $\Gamma$. The possibilities are:
- Dirichlet boundary conditions, in which the function is specified on the boundary of the domain,
\[ u(x,y) = f(x,y), \]
- Neumann boundary conditions, in which the normal derivative is specified,
\[ \frac{\partial u}{\partial n} = g(x,y), \]
- The Fourier (mixed) case, in which a combination of function values and normal derivatives is specified.
Numerical Solution
In a finite difference solution of the partial differential equation, an equally spaced grid is set ($h_x = h_y$ for simplicity) to cover the domain. The number of grid points is $N_x \times N_y$. The grid coordinates are $x_i = h_x i$ and $y_j = h_y j$. The unknown field and the source are written on this grid.
\[ \begin{aligned} u(x_i,y_j) &= u_{i,j} \\ S(x_i,y_j) &= S_{i,j} \end{aligned} \]
Using the centered formula for the second partial derivatives leads to the following form for the partial differential equation:
\[ \frac{1}{h^2} \left( u_{i+1,j} + u_{i-1,j} + u_{i,j+1} + u_{i,j-1} - 4 u_{i,j} \right) = S_{i,j} \]
"Exact" numerical solution
The equations above form a linear system which in principle could be solved for $u_{i,j}$ at all points in the interior of the domain $\Omega$. This set of equations is difficult to solve for various reasons: first, there is no obvious "pattern" to the left and right hand sides, especially since the boundary values of the field must be transferred to the known right hand side. Second, the number of unknowns can be very large, of the order of $N_x \times N_y$ in 2-D. A 3-D problem using a $1000 \times 1000 \times 1000$ lattice would involve $10^9$ equations!
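For a small grid the linear system can still be assembled and solved directly. The following is a minimal sketch with NumPy, assuming (as an illustrative choice) zero Dirichlet boundary values and a uniform source $S_{i,j} = 1$:

```python
import numpy as np

# Assemble the 5-point discrete Poisson operator on an n x n interior grid,
# with u = 0 on the boundary (an assumption made here for simplicity).
n = 20                       # interior points per side
h = 1.0 / (n + 1)            # grid spacing on the unit square
N = n * n                    # total number of unknowns
A = np.zeros((N, N))
S = np.ones(N)               # uniform source term S_{i,j} = 1

def idx(i, j):
    return i * n + j         # flatten the 2-D grid index to 1-D

for i in range(n):
    for j in range(n):
        A[idx(i, j), idx(i, j)] = -4.0 / h**2
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= i + di < n and 0 <= j + dj < n:
                A[idx(i, j), idx(i + di, j + dj)] = 1.0 / h**2
            # neighbors outside the grid carry the (zero) boundary value

u = np.linalg.solve(A, S).reshape(n, n)
```

Even here the matrix has $N^2 = 160{,}000$ entries, almost all zero; for large grids a sparse or iterative method is essential, which motivates the next section.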
Iterative numerical solution
A solution of the finite difference equations can be obtained iteratively. An initial value for the field $u_{i,j}$ is guessed at all interior points in the domain. The value of the field at grid point $(i,j)$ is then solved for, assuming that the field at the other grid locations is known.
\[ u_{i,j}^{k+1} = \frac{1}{4} \left( u_{i+1,j}^k + u_{i-1,j}^k + u_{i,j+1}^k + u_{i,j-1}^k - h^2 S_{i,j} \right) \]
This equation is used to update $u_{i,j}$ at all interior grid points within the domain $\Omega$. The index $k$ refers to the iteration number.
The Jacobi method uses a new "array" to store the new value of the field $u_{i,j}$ at the interior grid points.
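One Jacobi sweep can be sketched as follows, assuming the boundary values live in the outer rows and columns of the array and stay fixed:

```python
import numpy as np

def jacobi_sweep(u, S, h):
    """One Jacobi sweep: new values go into a separate array, so every
    neighbor on the right hand side still holds iteration-k values."""
    u_new = u.copy()                      # boundary entries are kept as-is
    u_new[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1]
                                + u[1:-1, 2:] + u[1:-1, :-2]
                                - h**2 * S[1:-1, 1:-1])
    return u_new
```

Repeated sweeps drive the interior values toward the solution of the discrete Poisson equation; because only iteration-$k$ values appear on the right hand side, the sweep vectorizes naturally.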
The Gauss-Seidel method consists of replacing the new value of the field within the same "array", so that already-updated neighbors enter the right hand side. This corresponds to the following form:
\[ u_{i,j}^{k+1} = \frac{1}{4} \left( u_{i+1,j}^k + u_{i-1,j}^{k+1} + u_{i,j+1}^k + u_{i,j-1}^{k+1} - h^2 S_{i,j} \right) \]
The iterations are repeated until convergence. This is checked by calculating the largest change between the old and new field at all interior points in the domain at each iteration.
\[ \mathrm{diff} = \max_{i,j} \left| u_{i,j}^{k+1} - u_{i,j}^{k} \right| \]
and requiring this difference to reach $\mathrm{diff} \le \epsilon$, where $\epsilon$ is a small pre-defined number.
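Putting the update and the convergence test together, a Gauss-Seidel solver can be sketched as follows (the tolerance and iteration cap are illustrative choices):

```python
import numpy as np

def gauss_seidel(u, S, h, eps=1e-6, max_iter=10_000):
    """Gauss-Seidel iteration with the max-change convergence test.
    Updates are written back into the same array, so the i-1 and j-1
    neighbors already hold iteration k+1 values when u[i, j] is computed."""
    for k in range(max_iter):
        diff = 0.0
        for i in range(1, u.shape[0] - 1):
            for j in range(1, u.shape[1] - 1):
                new = 0.25 * (u[i + 1, j] + u[i - 1, j]
                              + u[i, j + 1] + u[i, j - 1]
                              - h**2 * S[i, j])
                diff = max(diff, abs(new - u[i, j]))
                u[i, j] = new
        if diff <= eps:                   # largest change below tolerance
            return u, k + 1
    return u, max_iter
```

The function returns both the field and the number of iterations used, which makes the slow convergence discussed below easy to observe.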
This numerical solution is unconditionally stable, i.e., it will converge to a solution. The accuracy of the solution is a separate issue; it depends on the adequacy of the numerical grid to describe the regions of high curvature of the field and on the ability of the numerical grid to cover the domain.
The Gauss-Seidel method converges faster than the Jacobi method. However, both are notoriously slow, often requiring a very large number of iterations to achieve convergence. A better rate of convergence is achieved by the Successive Over-Relaxation (SOR) method. In this approach, the old and new fields are mixed via a parameter $\omega$.
\[ \begin{aligned} u_{i,j}^{k+\frac{1}{2}} &= \frac{1}{4} \left( u_{i+1,j}^k + u_{i-1,j}^{k+\frac{1}{2}} + u_{i,j+1}^k + u_{i,j-1}^{k+\frac{1}{2}} - h^2 S_{i,j} \right) \\ u_{i,j}^{k+1} &= \omega u_{i,j}^{k+\frac{1}{2}} + ( 1-\omega) u_{i,j}^k \end{aligned} \]
$\omega$ may change as a function of the iteration number. It is taken to be small for the first few iterations, increased to a value close to 1 as the iterations proceed, and eventually taken larger than 1 to "accelerate" convergence in the latter stages of the iteration process. The choice of the best value for $\omega$ is discussed in Numerical Recipes in the section on SOR.
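A single SOR sweep can be sketched as follows; $\omega$ is held fixed here, whereas the schedule described above would vary it with the iteration number (the value $\omega = 1.5$ used in the test is an illustrative choice):

```python
import numpy as np

def sor_sweep(u, S, h, omega):
    """One SOR sweep: a Gauss-Seidel half-step mixed with the old field.
    omega = 1 recovers Gauss-Seidel; 1 < omega < 2 over-relaxes."""
    for i in range(1, u.shape[0] - 1):
        for j in range(1, u.shape[1] - 1):
            half = 0.25 * (u[i + 1, j] + u[i - 1, j]
                           + u[i, j + 1] + u[i, j - 1]
                           - h**2 * S[i, j])
            u[i, j] = omega * half + (1 - omega) * u[i, j]
    return u
```

For the model Poisson problem on a uniform grid, the optimal fixed value is often quoted as $\omega = 2 / (1 + \sin \pi h)$, which approaches 2 as the grid is refined.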
Based on notes by Guillermo Marshall, University of Buenos Aires.