Numerical Solution of Partial Differential Equations


Based on notes by Guillermo Marshall, University of Buenos Aires


We are looking for solutions of the partial differential equations encountered in Mathematical Physics. The object of this section is to model physical systems and predict their behavior.



Finite Difference Approach

We will use the finite difference approach to solve these equations. In this approach the solution of the equations is sought on an equally spaced lattice in both space and time.

The Taylor series expansion of a function near $x_0$ reads

\begin{displaymath}f(x_0+\delta) = f(x_0) + \delta \left. \frac{d f}{d x }\right\vert _{x_0}
+ \left. \frac{\delta^2}{2!} \frac{d^2 f}{d x^2 } \right\vert _{x_0} + O(\delta^3)
\end{displaymath} (1)

and

\begin{displaymath}f(x_0-\delta) = f(x_0) - \delta \left. \frac{d f}{d x }\right\vert _{x_0}
+ \left. \frac{\delta^2}{2!} \frac{d^2 f}{d x^2 } \right\vert _{x_0} + O(\delta^3)
\end{displaymath} (2)

at symmetrical distances $x_0 \pm \delta$ about $x_0$. Truncating after the first-order term and solving for the derivative yields the forward and backward difference approximations

\begin{displaymath}\frac{d f}{d x} = \frac{f(x_0+\delta) - f(x_0)}{\delta}
\end{displaymath} (3)


\begin{displaymath}\frac{d f}{d x} = \frac{f(x_0) - f(x_0-\delta)}{\delta}.
\end{displaymath} (4)

The neglected terms are of order $O(\delta)$; these one-sided differences are therefore only first-order accurate.

More accurate forms follow by combining the two Taylor series expansions: subtracting (2) from (1) cancels the even-order terms and gives the centered first derivative, while adding them (together with the function evaluated at $x_0$) gives the second derivative; both are accurate to $O(\delta^2)$:

\begin{displaymath}\frac{d f}{d x} = \frac{f(x_0+\delta) - f(x_0-\delta)}{2 \delta}
\end{displaymath} (5)

and

\begin{displaymath}\frac{d^2 f}{d x^2 } = \frac{f(x_0+\delta) - 2 f(x_0) + f(x_0-\delta)}
{\delta^2}
\end{displaymath} (6)
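As a quick numerical check, the four difference formulas can be compared against known derivatives. This is a minimal sketch; the test function $f(x) = \sin x$ and the point $x_0 = 1$ are arbitrary choices for illustration:

```python
import math

# Test function (an arbitrary choice): f(x) = sin x, so f'(x) = cos x
# and f''(x) = -sin x are known exactly.
f = math.sin
x0, d = 1.0, 1e-4

forward  = (f(x0 + d) - f(x0)) / d                      # Eq. (3), error O(d)
backward = (f(x0) - f(x0 - d)) / d                      # Eq. (4), error O(d)
central  = (f(x0 + d) - f(x0 - d)) / (2 * d)            # Eq. (5), error O(d^2)
second   = (f(x0 + d) - 2 * f(x0) + f(x0 - d)) / d**2   # Eq. (6), error O(d^2)

# The one-sided errors are ~ (d/2)|f''(x0)| ~ 4e-5; the centered formulas
# are more accurate by several orders of magnitude.
print(abs(forward - math.cos(x0)))
print(abs(central - math.cos(x0)))
print(abs(second + math.sin(x0)))
```

Shrinking $\delta$ by a factor of 10 reduces the one-sided error tenfold but the centered error a hundredfold, which is the practical meaning of first- versus second-order accuracy.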



Laplace and Poisson Equations

The Laplace equation (in 2-D)

\begin{displaymath}\nabla^2 u = \frac{\partial^2 u}{\partial x^2} +
\frac{\partial^2 u}{\partial y^2} = 0
\end{displaymath} (7)

and the Poisson equation (in 2-D)

\begin{displaymath}\nabla^2 u = \frac{\partial^2 u}{\partial x^2} +
\frac{\partial^2 u}{\partial y^2} = S(x,y)
\end{displaymath} (8)

are encountered in the solution for potential functions, for instance in electrostatic and gravitational problems, and in describing the steady state reached in heat transmission problems. The function u(x,y) is a time independent field, i.e., a potential or a temperature profile; S(x,y) represents a source term, i.e., a mass distribution, a charge distribution, or a heat source.

These equations are said to be of elliptic type. No time dependence is involved here: a time independent solution for u(x,y) is sought within a domain $\Omega$. In 2-D the domain could be a simple rectangle, $0 \le x \le 1$ and $0 \le y \le 1$, or could be of arbitrary shape.

The solution, or its derivative, is specified at the boundary of the domain, $\Gamma$. The possibilities are:

1.
Dirichlet Boundary conditions, in which the function is specified on the boundary of the domain

u(x,y) = f(x,y), (9)

2.
Neumann boundary conditions, in which the normal derivative is specified,

\begin{displaymath}\frac{\partial u}{\partial n} = g(x,y),
\end{displaymath} (10)

3.
Fourier (Robin) boundary conditions, in which a mix of function values and normal derivatives is specified.



Numerical Solution

In a finite difference solution of the partial differential equation, an equally spaced grid is set up ($h_x = h_y \equiv h$ for simplicity) to cover the domain. The number of grid points is $N_x \times N_y$. The grid coordinates are $x_i = i\,h_x$ and $y_j = j\,h_y$. The unknown field and the source are written on this grid.

$u(x_i,y_j) = u_{i,j}$ (11)


$S(x_i,y_j) = S_{i,j}$ (12)

Using the centered difference formula for the second derivative in each direction leads to the following form for the partial differential equation:

\begin{displaymath}\frac{1}{h^2}
\left( u_{i+1,j} + u_{i-1,j} + u_{i,j+1} + u_{i,j-1} - 4 u_{i,j} \right)
= S_{i,j}
\end{displaymath} (13)
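The five-point stencil on the left-hand side can be sanity-checked on a field whose Laplacian is known exactly. The test field $u = x^2 + y^2$ below is an arbitrary choice; its Laplacian is 4 everywhere, and the stencil reproduces it exactly because the $O(\delta^2)$ truncation terms vanish for quadratics:

```python
# Apply the discrete Laplacian of Eq. (13) to u(x,y) = x^2 + y^2,
# an arbitrary test field whose exact Laplacian is S = 4.
h = 0.1

def u(x, y):
    return x * x + y * y

x, y = 0.5, 0.5
lap = (u(x + h, y) + u(x - h, y) + u(x, y + h) + u(x, y - h) - 4 * u(x, y)) / h**2
print(lap)  # 4.0 up to roundoff: the stencil is exact for quadratic fields
```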




``Exact'' numerical solution

The equations above form a linear system which could in principle be solved for $u_{i,j}$ at all points in the interior of the domain $\Omega$. This is a difficult system to solve for several reasons: first, there is no obvious ``pattern'' to the left and right hand sides, especially since the boundary values of the field must be transferred to the known right hand side. Second, the number of unknowns can be very large, of the order $N_x \times N_y$ in 2-D. A 3-D problem using a $1000 \times 1000 \times 1000$ lattice would involve $10^9$ equations!




Iterative numerical solution

A solution of the finite difference equations can be obtained iteratively. An initial value for the field $u_{i,j}$ is guessed at all interior points in the domain. The field at grid point $(i,j)$ is then solved for, assuming that the field at the neighbouring grid points is known.

\begin{displaymath}u_{i,j}^{k+1} =
\frac{1}{4} \left( u_{i+1,j}^k + u_{i-1,j}^k + u_{i,j+1}^k + u_{i,j-1}^k
- h^2 S_{i,j}
\right)
\end{displaymath} (14)

This equation is used to calculate $u_{i,j}$ at all interior grid points within the domain $\Omega$. The index k refers to the iteration number.

The Jacobi method uses a separate ``array'' to store the new values of the field $u_{i,j}$ at the interior grid points, so that every update uses only values from the previous iteration.
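A minimal Jacobi sketch, in Python, for Laplace's equation ($S = 0$) on a small square grid; the grid size and the boundary data ($u = 1$ on one edge, $u = 0$ on the others) are assumptions made for illustration:

```python
# Jacobi iteration for Laplace's equation (S = 0) on an N x N grid.
# Assumed Dirichlet boundary data: u = 1 on the i = 0 edge, u = 0 elsewhere.
N, eps = 20, 1e-6
u = [[0.0] * N for _ in range(N)]
for j in range(N):
    u[0][j] = 1.0

diff = eps + 1.0
while diff > eps:
    unew = [row[:] for row in u]       # Jacobi: updates go into a fresh array
    diff = 0.0
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            unew[i][j] = 0.25 * (u[i+1][j] + u[i-1][j] + u[i][j+1] + u[i][j-1])
            diff = max(diff, abs(unew[i][j] - u[i][j]))
    u = unew                           # replace the old field wholesale
```

Because the new values are held in `unew` until the sweep is complete, every update sees only iteration-$k$ data, which is the defining property of the Jacobi method.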

The Gauss-Seidel method instead replaces the values of the field in place, within the same ``array'', so that newly computed neighbours enter later updates. This corresponds to the following form:

\begin{displaymath}u_{i,j}^{k+1} =
\frac{1}{4} \left( u_{i+1,j}^k + u_{i-1,j}^{k+1} +
u_{i,j+1}^k + u_{i,j-1}^{k+1}
- h^2 S_{i,j}
\right)
\end{displaymath} (15)

The iterations are repeated until convergence. This is checked by calculating, at each iteration, the largest change between the old and new field over all interior points in the domain,

\begin{displaymath}\mathrm{diff} = \max_{i,j} \left\vert u_{i,j}^{k+1} - u_{i,j}^{k} \right\vert
\end{displaymath} (16)

and requiring $\mathrm{diff} \le \epsilon$, where $\epsilon$ is a small pre-defined number.
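A Gauss-Seidel sketch needs no second array: the update overwrites $u_{i,j}$ immediately, so the $i-1$ and $j-1$ neighbours already carry iteration $k+1$ values when sweeping in increasing $i$ and $j$. The model problem here (Laplace's equation with $u = 1$ assumed on one edge) is an illustrative choice:

```python
# Gauss-Seidel iteration for Laplace's equation (S = 0) on an N x N grid,
# updating the field in place as in Eq. (15).
# Assumed Dirichlet boundary data: u = 1 on the i = 0 edge, u = 0 elsewhere.
N, eps = 20, 1e-6
u = [[0.0] * N for _ in range(N)]
for j in range(N):
    u[0][j] = 1.0

iters, diff = 0, eps + 1.0
while diff > eps:
    diff = 0.0
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            new = 0.25 * (u[i+1][j] + u[i-1][j] + u[i][j+1] + u[i][j-1])
            diff = max(diff, abs(new - u[i][j]))
            u[i][j] = new              # in place: no second array is needed
    iters += 1
print(iters)
```

The in-place update is what distinguishes this from Jacobi; on this kind of model problem it typically reaches the convergence threshold in roughly half as many sweeps.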

This iterative scheme is unconditionally stable, i.e., it will always converge to a solution. The accuracy of the solution is a separate issue; it depends on the adequacy of the numerical grid in describing the regions of high curvature of the field and on the ability of the grid to cover the domain.

The Gauss-Seidel method converges faster than the Jacobi method. Yet it is a notoriously slow method, often requiring a very large number of iterations to achieve convergence. A better rate of convergence is achieved by the Successive Over Relaxation (SOR) method. In this approach, the old and new fields are mixed via a parameter $\omega$.

\begin{displaymath}u_{i,j}^{k+\frac{1}{2}} =
\frac{1}{4} \left( u_{i+1,j}^k + u_{i-1,j}^{k+\frac{1}{2}} +
u_{i,j+1}^k + u_{i,j-1}^{k+\frac{1}{2}} - h^2 S_{i,j}
\right)
\end{displaymath} (17)


\begin{displaymath}u_{i,j}^{k+1} = \omega u_{i,j}^{k+\frac{1}{2}}
+ ( 1-\omega) u_{i,j}^k
\end{displaymath} (18)

$\omega$ may change as a function of the iteration number. It is typically taken small during the first few iterations, increased to a value close to 1, and eventually taken larger than 1 (over-relaxation; stability requires $0 < \omega < 2$) to ``accelerate'' convergence in the latter stages of the iteration process. The choice of the best value for $\omega$ is discussed in ``Numerical Recipes'' in the section on SOR.
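A SOR sweep is a Gauss-Seidel sweep combined with the mixing step of Eq. (18). In this sketch the model problem (Laplace's equation on a small grid with $u = 1$ on one edge) and the fixed value $\omega = 1.7$ are ad hoc illustrative choices:

```python
# SOR for Laplace's equation (S = 0) on an N x N grid: each point gets a
# Gauss-Seidel value which is then mixed with the old value via omega
# (Eqs. 17-18). omega = 1 recovers plain Gauss-Seidel; 0 < omega < 2 is
# required for stability. Assumed boundary data: u = 1 on the i = 0 edge.
N, eps, omega = 20, 1e-6, 1.7
u = [[0.0] * N for _ in range(N)]
for j in range(N):
    u[0][j] = 1.0

iters, diff = 0, eps + 1.0
while diff > eps:
    diff = 0.0
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            gs = 0.25 * (u[i+1][j] + u[i-1][j] + u[i][j+1] + u[i][j-1])
            new = omega * gs + (1.0 - omega) * u[i][j]
            diff = max(diff, abs(new - u[i][j]))
            u[i][j] = new
    iters += 1
print(iters)  # far fewer sweeps than Jacobi or plain Gauss-Seidel need
```

With a near-optimal $\omega$ the number of sweeps grows roughly like $N$ rather than $N^2$, which is why SOR was long the workhorse for elliptic problems on modest grids.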

 



About this document ...

This document was generated using the LaTeX2HTML translator Version 98.1p1 release (March 2nd, 1998)

The command line arguments were:
latex2html Poisson.tex.

The translation was initiated by Michel Vallieres on 2001-05-11

