In order to avoid going on another (epic) rant this week, I’ll change gears and touch upon a classic technique for analyzing the stability of numerical methods, along with extensions of the traditional approach.

Before diving into partial differential equations, I thought it would be beneficial to analyze the stability of ordinary differential equation integrators first. This provides the basis of the approach. I will start with the forward Euler method, then show how the analysis proceeds for a second-order explicit predictor-corrector method, and then analyze a second-order implicit backward differentiation formula, which introduces an important wrinkle. I will close by providing the Mathematica commands used to produce the results.

It is always good to have references that can be consulted for more detail and explanation, so I will give a few seminal ones here:

* Ascher, Uri M., and Linda R. Petzold. Computer methods for ordinary differential equations and differential-algebraic equations. Vol. 61. SIAM, 1998.

* Durran, Dale R. Numerical methods for wave equations in geophysical fluid dynamics. No. 32. Springer, 1999.

* LeVeque, Randall J. Numerical methods for conservation laws. Vol. 132. Basel: Birkhäuser, 1992.

* LeVeque, Randall J. Finite volume methods for hyperbolic problems. Vol. 31. Cambridge University Press, 2002.

* Strikwerda, John C. Finite difference schemes and partial differential equations. SIAM, 2004.

Let’s jump into the analysis of ODE solvers by looking at a fairly simple method, the forward Euler method. For the model ODE, $y^\prime = \lambda y$, the solver can be written as simply $y^{n+1} = y^n + \Delta t \lambda y^n$. We take the right-hand side, $f(y) = \lambda y$, and do some algebra. We have several principal goals: establish conditions for stability, accuracy, and the overall behavior of the method.
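As a concrete illustration, here is the forward Euler update applied to the model problem in a few lines of Python (rather than the post’s Mathematica); the values of `lam`, `dt`, and `steps` are hypothetical choices for the sketch:

```python
import numpy as np

# Forward Euler for the model problem y' = lambda*y, y(0) = 1.
# Illustrative sketch with hypothetical parameter values.
lam = -1.0     # a stable test value of lambda
dt = 0.1       # time step
steps = 10

y = 1.0
for n in range(steps):
    y = y + dt * lam * y   # y^{n+1} = y^n + dt*lambda*y^n

exact = np.exp(lam * dt * steps)
print(y, exact)            # agree to O(dt) after a fixed time
```

Note that after the loop `y` is exactly `(1 + lam*dt)**steps`, which is the amplification-factor view of the scheme that the analysis below exploits.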

For stability we determine how much the value of the solution is amplified by the action of the integration scheme, $y^{n+1} = A y^n$. We remove the variable and call the result the “symbol” of the integrator, $A = 1 + \lambda \Delta t$. We then take its magnitude, $|A| = |1 + \lambda \Delta t|$, which must be less than one for stability. We can also plot this result easily (see the commands I used in Mathematica at the end of the post). On all the plots the horizontal axis is the real value of $\lambda \Delta t$ and the vertical axis is the imaginary value.

This plot just includes the values where the amplitude of the symbol is less than or equal to one.
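The condition $|1 + \lambda \Delta t| \le 1$ describes the unit disk centered at $-1$ in the complex $\lambda \Delta t$ plane, which a quick spot check confirms (the two sample points are hypothetical):

```python
# Forward Euler stability: |1 + lambda*dt| <= 1, the unit disk at -1.
inside = -1.0 + 0.5j    # a point inside the disk (stable)
outside = 0.2 + 0.0j    # right half-plane, outside the disk (unstable)
print(abs(1 + inside) <= 1, abs(1 + outside) <= 1)
```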

Next we look at accuracy using a Taylor series expansion. The Taylor series is simple given the analytical solution to the ODE, $y(t) = y(0) e^{\lambda t}$, and the expansion is classical, $e^{\lambda \Delta t} = 1 + \lambda \Delta t + \frac{1}{2}(\lambda \Delta t)^2 + O(\Delta t^3)$. We simply subtract the symbol of the operator from this Taylor series and look at the remainder, $E = \frac{1}{2}(\lambda \Delta t)^2 + O(\Delta t^3)$, where the time has been replaced by the time step size.

The last couple of twists can add some significant texture to the behavior of the integration scheme. We can plot the “order stars,” which show whether the numerical scheme changes the amplitude of the answer more or less than the exact operator (the ratio $|A|/|e^{\lambda \Delta t}|$). These are called stars because they start to show star-like shapes for higher-order methods (mostly starting at third- and higher-order accuracy). Here is the plot for forward Euler.

The last thing we will examine for the forward Euler scheme is the order of accuracy you should see during a time step refinement study as part of a verification exercise. Usually this is thought of as being the same as the order of the numerical scheme, but for a finite time step size the result deviates from the analytical order substantially. Computing this is really quite simple: one computes the symbol of the operator for half the time step size, $A_{\Delta t/2} = 1 + \frac{1}{2}\lambda \Delta t$, applied for two time steps (so that the time ends up at the same place as a single step with $\Delta t$). This is simply the square of the operator at the smaller time step size, $A_{\Delta t/2}^2 = \left(1 + \frac{1}{2}\lambda \Delta t\right)^2$. To get the order of accuracy you take the operators, subtract the exact solution, take the absolute value of the results, then compute the order of accuracy as in the usual verification, $n = \log\left(\left|A - e^{\lambda \Delta t}\right| / \left|A_{\Delta t/2}^2 - e^{\lambda \Delta t}\right|\right)/\log 2$.
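The same construction is easy to evaluate numerically; this small Python sketch (with hypothetical values of $\lambda \Delta t$) shows the observed order drifting away from one as the step size grows:

```python
import numpy as np

# Observed order of accuracy of forward Euler at finite step size:
# compare one step of size dt with two steps of dt/2 against the
# exact amplification exp(x), where x = lambda*dt (illustrative values).
for x in (0.01, 0.1, 0.5):
    A = 1 + x                 # symbol at dt
    A2 = (1 + x / 2) ** 2     # two half steps
    exact = np.exp(x)
    n = np.log(abs(A - exact) / abs(A2 - exact)) / np.log(2)
    print(x, n)               # n approaches 1 only as x -> 0
```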

We can plot the result easily with Mathematica. It is notable how different the results are from the asymptotic value of one for reasonable, but finite, values of $\lambda \Delta t$. As the operator becomes unstable, the computed convergence rate actually becomes very large. This is a word of warning to the practitioner that very high rates of convergence can actually be a very bad sign for a calculation.


We can now examine a second-order method with relative ease. Doing the analysis is akin to writing a computer code, albeit symbolically. The second-order method uses a predictor-corrector format where a half step is taken using forward Euler and this result is used to advance the solution the full step. This is an improved forward Euler method. It is explicit in that the solution can be evaluated solely in terms of the initial data. The scheme is the following: $y^{n+1/2} = y^n + \frac{1}{2}\Delta t \lambda y^n$ for the predictor, and $y^{n+1} = y^n + \Delta t \lambda y^{n+1/2}$ for the corrector. The symbol is computed as before, giving $A = 1 + \lambda \Delta t + \frac{1}{2}(\lambda \Delta t)^2$. Getting the full properties of the method now just requires “turning the crank” as we did for the forward Euler scheme.
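One step of the predictor-corrector pair can be sketched directly in Python (illustrative values of `lam` and `dt`), confirming that the two-stage update reproduces the symbol exactly:

```python
# One step of the improved (predictor-corrector) Euler method for
# y' = lambda*y, checking its symbol A = 1 + x + x^2/2 with x = lambda*dt.
# Hypothetical parameter values for the sketch.
lam, dt, y = -1.0, 0.1, 1.0
y_half = y + 0.5 * dt * lam * y   # predictor: forward Euler half step
y_new = y + dt * lam * y_half     # corrector: full step with midpoint slope

x = lam * dt
A = 1 + x + 0.5 * x ** 2          # symbol of the scheme
print(y_new, A * y)               # the update equals A times the old value
```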

The truncation error has gained an order of accuracy and is now $E = \frac{1}{6}(\lambda \Delta t)^3 + O(\Delta t^4)$.

The stability plot is more complex, giving a larger region of stability, particularly along the imaginary axis.

The order star looks much more like a star.

Finally, the convergence rate plot is much less pathological, although some of the same conclusions can be drawn from the behavior where the scheme is unstable (giving the very large convergence rates).

We will finish this week’s post by turning our attention to a second-order implicit scheme, the backward differentiation formula (BDF2). Everything will follow from the previous two examples, but the scheme adds an important twist (or two). The first twist is that the method is implicit, meaning that the left- and right-hand sides of the method are coupled, and the second is that the method depends on three time levels of data, not two as in the first couple of methods.

The update for the method is written $\frac{3}{2}y^{n+1} - 2y^n + \frac{1}{2}y^{n-1} = \Delta t \lambda y^{n+1}$, and the amplification is now a quadratic equation, $\frac{3}{2}A^2 - 2A + \frac{1}{2} = \lambda \Delta t A^2$, with two roots. One of these roots will have a Taylor series expansion that demonstrates second-order accuracy for the scheme; the other will not. The inaccurate root must still be stable for the scheme to be stable. The accurate root is $A_1 = \left(2 + \sqrt{1 + 2\lambda \Delta t}\right)/\left(3 - 2\lambda \Delta t\right)$,

with an error of $E = -\frac{1}{3}(\lambda \Delta t)^3 + O(\Delta t^4)$.

The second, inaccurate root is also called spurious and has the form $A_2 = \left(2 - \sqrt{1 + 2\lambda \Delta t}\right)/\left(3 - 2\lambda \Delta t\right)$.

The stability of the scheme requires taking the maximum of the magnitudes of both roots.
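A small numerical spot check of this condition (in Python, not the post’s plotting code) evaluates both roots at a few hypothetical points of the complex $\lambda \Delta t$ plane; consistent with BDF2’s A-stability, the left-half-plane samples are stable while the right-half-plane sample is not:

```python
import numpy as np

# Max-of-both-roots stability check for BDF2 at sample points
# x = lambda*dt, using the roots derived in the text.
def bdf2_roots(x):
    s = np.sqrt(1 + 2 * x + 0j)                  # complex square root
    return (2 + s) / (3 - 2 * x), (2 - s) / (3 - 2 * x)

results = []
for x in (-4 + 1j, -0.5 - 2j, 0.5 + 0j):         # hypothetical sample points
    A1, A2 = bdf2_roots(x)
    results.append(max(abs(A1), abs(A2)) < 1)    # stable iff max magnitude < 1
print(results)
```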

Using the accurate root we can examine the order star, and the rate of convergence of the method as before.

Next week we will look at a simple partial differential equation analysis, which adds new wrinkles.

While this sort of analysis can be done by hand, the greatest utility can be achieved by using symbolic or numerical packages such as Mathematica. Below I’ve included the Mathematica code used for the analyses given above.

Soln = Collect[Expand[Normal[Series[Exp[h L], {h, 0, 6}]]], h]

(* Forward Euler *)
a =.; b =.
A = 1 + h L
Aab2 = (1 + h L/2)^2
Collect[Expand[Normal[Series[A, {h, 0, 6}]]], h];
Soln - %
L = (a + b I)/h;
ContourPlot[If[Abs[A] < 1, -Abs[A], 1], {a, -3, 1}, {b, -2, 2},
 PlotPoints -> 250,
 Contours -> {-0.1, -0.2, -0.3, -0.4, -0.5, -0.6, -0.7, -0.8, -0.9},
 ContourShading -> False, Axes -> {True, True},
 AxesLabel -> {a dt, b dt}, LabelStyle -> Directive[18, Bold, Black]]
Export["forwardEuler.jpg", %]
ContourPlot[
 If[Abs[A]/Abs[Exp[a + b I]] < 1, -Abs[A]/Abs[Exp[a + b I]], 1],
 {a, -3, 2}, {b, -2, 2}, PlotPoints -> 250,
 Contours -> {-0.1, -0.2, -0.3, -0.4, -0.5, -0.6, -0.7, -0.8, -0.9},
 ContourShading -> False, Axes -> {True, True},
 AxesLabel -> {a dt, b dt}, LabelStyle -> Directive[18, Bold, Black]]
Export["forwardEuler-star.jpg", %]
Plot3D[Log[Abs[A - Exp[a + b I]]/Abs[Aab2 - Exp[a + b I]]]/Log[2],
 {a, -3, 3}, {b, -2, 2}, PlotPoints -> 100,
 AxesLabel -> {a dt, b dt, n},
 LabelStyle -> Directive[18, Bold, Black], PlotRange -> All]
Export["forwardEuler-order.jpg", %]
ContourPlot[
 Log[Abs[A - Exp[a + b I]]/Abs[Aab2 - Exp[a + b I]]]/Log[2],
 {a, -3, 3}, {b, -2, 2}, PlotPoints -> 250,
 Contours -> {0, 0.25, 0.5, 0.75, 0.9, 1, 1.1, 1.25, 1.5, 2, 3, 4, 5, 10},
 ContourShading -> False, ContourLabels -> All,
 Axes -> {True, True}, AxesLabel -> {a dt, b dt},
 LabelStyle -> Directive[18, Bold, Black], PlotRange -> All]
Export["forwardEuler-order-contour.jpg", %]
ContourPlot[
 If[Abs[A] < 1, Abs[A - Exp[a + b I]]/Abs[Aab2 - Exp[a + b I]], 0],
 {a, -3, 2}, {b, -2, 2}, PlotPoints -> 250,
 Contours -> {2, 2.5, 3, 3.5, 4}, ContourShading -> False,
 Axes -> {False, True}]
Plot3D[Abs[A - Exp[a + b I]]/Abs[Aab2 - Exp[a + b I]],
 {a, -3, 3}, {b, -2, 2}, PlotPoints -> 100]
ContourPlot[
 If[Abs[A]/Abs[Exp[a + b I]] < 1, -Abs[A]/Abs[Exp[a + b I]], 1],
 {a, -3, 2}, {b, -2, 2}, PlotPoints -> 250,
 Contours -> {-0.1, -0.2, -0.3, -0.4, -0.5, -0.6, -0.7, -0.8, -0.9},
 ContourShading -> False, Axes -> {False, True}]

(* RK 2 *)
L =.
A1 = 1 + 1/2 h L
A = 1 + h L A1
A12 = 1 + 1/4 h L;
Aab2 = (1 + h L A12/2)^2
Collect[Expand[Normal[Series[A, {h, 0, 6}]]], h] - Soln
L = (a + b I)/h;
ContourPlot[If[Abs[A] < 1, -Abs[A], 1], {a, -3, 1}, {b, -2, 2},
 PlotPoints -> 250,
 Contours -> {-0.1, -0.2, -0.3, -0.4, -0.5, -0.6, -0.7, -0.8, -0.9},
 ContourShading -> False, Axes -> {True, True},
 AxesLabel -> {a dt, b dt}, LabelStyle -> Directive[18, Bold, Black]]
Export["rk2.jpg", %]
ContourPlot[
 If[Abs[A]/Abs[Exp[a + b I]] < 1, -Abs[A]/Abs[Exp[a + b I]], 1],
 {a, -3, 2}, {b, -2, 2}, PlotPoints -> 250,
 Contours -> {-0.1, -0.2, -0.3, -0.4, -0.5, -0.6, -0.7, -0.8, -0.9},
 ContourShading -> False, Axes -> {True, True},
 AxesLabel -> {a dt, b dt}, LabelStyle -> Directive[18, Bold, Black]]
Export["rk2-star.jpg", %]
Plot3D[Log[Abs[A - Exp[a + b I]]/Abs[Aab2 - Exp[a + b I]]]/Log[2],
 {a, -3, 3}, {b, -2, 2}, PlotPoints -> 100,
 AxesLabel -> {a dt, b dt, n},
 LabelStyle -> Directive[18, Bold, Black], PlotRange -> All]
Export["rk2-order.jpg", %]
ContourPlot[
 Log[Abs[A - Exp[a + b I]]/Abs[Aab2 - Exp[a + b I]]]/Log[2],
 {a, -3, 3}, {b, -2, 2}, PlotPoints -> 250,
 Contours -> {0, 0.25, 0.5, 0.75, 0.9, 1, 1.1, 1.25, 1.5, 2, 3, 4, 5, 10},
 ContourShading -> False, ContourLabels -> All,
 Axes -> {True, True}, AxesLabel -> {a dt, b dt},
 LabelStyle -> Directive[18, Bold, Black], PlotRange -> All]
Export["rk2-order-contour.jpg", %]

(* BDF 2 *)
A =.
L =.
Solve[3/2 A^2 - 2 A + 1/2 == h A^2 L, A]
A1 = (-2 - Sqrt[1 + 2 h L])/(-3 + 2 h L);
A2 = (-2 + Sqrt[1 + 2 h L])/(-3 + 2 h L);
Solve[3/2 A^2 - 2 A + 1/2 == 1/2 h A^2 L, A]
Aab2 = ((-2 - Sqrt[1 + h L])/(-3 + h L))^2;
Collect[Expand[Normal[Series[A1, {h, 0, 6}]]], h] - Soln
Collect[Expand[Normal[Series[A2, {h, 0, 6}]]], h] - Soln
L = (a + b I)/h;
ContourPlot[
 If[Max[Abs[A1], Abs[A2]] < 1, -Max[Abs[A1], Abs[A2]], 1],
 {a, -8, 6}, {b, -5, 5}, PlotPoints -> 250,
 Contours -> {-0.1, -0.2, -0.3, -0.4, -0.5, -0.6, -0.7, -0.8, -0.9},
 ContourShading -> False, Axes -> {True, True},
 AxesLabel -> {a dt, b dt}, LabelStyle -> Directive[18, Bold, Black]]
Export["bdf2.jpg", %]
ContourPlot[
 If[Abs[A1]/Abs[Exp[a + b I]] < 1, -Abs[A1]/Abs[Exp[a + b I]], 1],
 {a, -3, 2}, {b, -2, 2}, PlotPoints -> 250,
 Contours -> {-0.1, -0.2, -0.3, -0.4, -0.5, -0.6, -0.7, -0.8, -0.9},
 ContourShading -> False, Axes -> {True, True},
 AxesLabel -> {a dt, b dt}, LabelStyle -> Directive[18, Bold, Black]]
Export["bdf2-star.jpg", %]
Plot3D[Log[Abs[A1 - Exp[a + b I]]/Abs[Aab2 - Exp[a + b I]]]/Log[2],
 {a, -8, 6}, {b, -5, 5}, PlotPoints -> 100,
 AxesLabel -> {a dt, b dt, n},
 LabelStyle -> Directive[18, Bold, Black]]
Export["bdf2-order.jpg", %]
ContourPlot[
 Log[Abs[A1 - Exp[a + b I]]/Abs[Aab2 - Exp[a + b I]]]/Log[2],
 {a, -8, 6}, {b, -5, 5}, PlotPoints -> 250,
 Contours -> {-1, 0, 0.5, 0.9, 1, 1.1, 1.5, 2, 5, 10},
 ContourShading -> False, ContourLabels -> All, Axes -> {True, True},
 AxesLabel -> {a dt, b dt}, LabelStyle -> Directive[18, Bold, Black],
 PlotRange -> All]
Export["bdf2-order-contour.jpg", %]
