There are a lot of numerical methods out there, and it can be hard to decide which one is right for your purpose.  A large part of the decision should be driven by the basic properties of the method, and those properties can be uncovered with some basic analysis techniques.  I highly recommend doing these analyses personally rather than relying on what the authors of papers provide you.  A good place to start is reproducing the analysis found in textbooks, then in research papers; this provides the template for really learning how to do it.  These analyses have a number of benefits that I’ll outline below.

In the writing I will assume you’re pretty well versed in numerical methods, and that you’ve had advanced mathematics classes at least at the undergraduate level (even that may fall short, because most of this is graduate-level material).  On the other hand, why dumb it down? There is too much of that these days!

The classical starting point for numerical analysis is round-off error, which mattered greatly back in the day when computers were crude.  Round-off error could be a real practical problem, and even simple problems could be corrupted by the manner in which numbers were represented.  The analysis of Gaussian elimination is the archetype.  These issues have faded into the woodwork, but they are poised to make a return to prominence as computers become less reliable.  This lack of reliability is related to the end of Moore’s law and the physical limits of our present computing engineering.  As circuits shrink to the point where the size of an atom becomes substantial, the difference between encoding a zero and a one becomes fuzzy.  Cosmic rays can cause issues, and the round-off problems will be back from the grave.
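
To see how round-off can corrupt even a trivial computation, here is a minimal Python sketch (my own illustration, not tied to any particular reference) of catastrophic cancellation, the same digit-loss mechanism the Gaussian elimination analysis quantifies:

```python
import numpy as np

# Catastrophic cancellation: f(x) = (1 - cos(x)) / x**2 has the exact limit 1/2
# as x -> 0, but the naive form subtracts two nearly equal numbers and loses
# essentially all of its significant digits in double precision.
x = 1.0e-8
naive = (1.0 - np.cos(x)) / x**2

# Rewriting with the identity 1 - cos(x) = 2*sin(x/2)**2 removes the
# subtraction and recovers full accuracy.
stable = 2.0 * np.sin(x / 2.0)**2 / x**2

print(f"naive:  {naive}")    # prints 0.0 -- all digits lost
print(f"stable: {stable}")   # prints ~0.5 -- correct
```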

Another topic you might cover in undergraduate classes is interpolation error, a closely related subject.  In fact, many numerical methods are designed via interpolation, so interpolation error carries over directly into the error of the methods built on it.  The interpolation problem is multifaceted.
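
A classic example of just how multifaceted is Runge’s phenomenon.  Here is a short Python sketch (my own example) showing that raising the polynomial degree on equally spaced points makes the interpolation error worse, not better:

```python
import numpy as np
from numpy.polynomial import Polynomial

# Runge's phenomenon: interpolating a smooth function at equally spaced nodes
# with polynomials of increasing degree makes the error *grow* near the ends.
f = lambda x: 1.0 / (1.0 + 25.0 * x**2)
xs = np.linspace(-1.0, 1.0, 1001)

for n in (4, 8, 16):
    nodes = np.linspace(-1.0, 1.0, n + 1)
    p = Polynomial.fit(nodes, f(nodes), deg=n)  # degree-n interpolant
    print(f"degree {n:2d}: max error = {np.max(np.abs(p(xs) - f(xs))):.3e}")
```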

The most fundamental aspect of numerical analysis is the stability of an algorithm, and the simplest setting in which to analyze it is the solution of ordinary differential equations.  The analysis is as simple as it gets, and is a good starting point.  Furthermore, many algorithms can be split into two pieces, time and space differencing.  The ODE analysis gives results for the temporal part and introduces the basic concepts.  Like many of the basic analysis techniques, the analysis only formally applies to linear methods and models.  Below, a method that can cope with nonlinearity will be given (modified equation analysis).   The basic techniques can be found in many books, but Ascher and Petzold’s SIAM book is my favorite.
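
To give a concrete taste of the ODE analysis, here is a small Python sketch (my own example) of absolute stability for forward Euler applied to the standard linear test equation y' = λy, where one step multiplies the solution by the amplification factor 1 + λh:

```python
import numpy as np

# Forward Euler on y' = lam*y: one step gives y_{n+1} = (1 + lam*h) * y_n,
# so the method is stable exactly when |1 + lam*h| <= 1.
lam = -10.0
for h in (0.05, 0.19, 0.21):          # stability boundary is h = 2/|lam| = 0.2
    amp = 1.0 + lam * h               # amplification factor per step
    y = 1.0
    for _ in range(50):               # take 50 explicit Euler steps
        y = y + h * lam * y
    print(f"h={h:.2f}  |1+lam*h|={abs(amp):.2f}  y after 50 steps = {y:.3e}")
```

The exact solution decays, yet the h = 0.21 run blows up: a stability failure, not an accuracy failure.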

For partial differential equations, von Neumann stability analysis is the gold standard.  It was first introduced in the late 1940s at a lecture in Los Alamos, and the report on the topic was classified for decades.  Nevertheless, von Neumann shared his method with others on the outside, and it rapidly spread to become the standard.  It was also part of the paper, coauthored by von Neumann, that introduced artificial viscosity.  The technique is also called Fourier analysis because it involves replacing the spatial variables with a complex Fourier series.  Aside from this, the conduct of von Neumann’s method is almost identical to ODE analysis, but it produces a richer set of outcomes because both the space and time aspects are characterized.  Truncation error estimates and dissipation and dispersion (phase, or propagation) properties all come out of the analysis.    Durran’s book on numerical methods for geophysics and Strikwerda’s numerical analysis book both contain good introductions.
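
To show what the mechanics look like, here is a sketch in Python/SymPy (my example; the classical analyses were done by hand) of von Neumann analysis applied to first-order upwind differencing for the advection equation u_t + a u_x = 0:

```python
import sympy as sp

# Von Neumann analysis of first-order upwind differencing:
#     u_j^{n+1} = u_j^n - C*(u_j^n - u_{j-1}^n),   C = a*dt/dx (Courant number)
# Substituting the Fourier mode u_j^n = g**n * exp(I*j*theta) and dividing
# through gives the amplification factor g(theta).
C, theta = sp.symbols('C theta', real=True)
g = 1 - C * (1 - sp.exp(-sp.I * theta))

# Stability requires |g| <= 1 for every wavenumber theta.
g2 = sp.simplify(sp.re(g)**2 + sp.im(g)**2)   # |g|^2
print(g2)                                     # rearranges to 1 - 2*C*(1-C)*(1-cos(theta))
print(sp.simplify(g2.subs(theta, sp.pi)))     # worst mode: (1-2*C)**2, so 0 <= C <= 1
```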

Pure dispersion analysis is closely related.  It is most cleanly applied to the pure spatial operator and used similarly to ODE analysis.  Unlike ODE analysis, it also involves a Fourier series expansion, but only in space.  It is a simple, but limited, alternative to von Neumann’s method.
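
A minimal sketch of the idea, using the standard “modified wavenumber” of the second-order centered difference (my own example):

```python
import sympy as sp

# Applying (u_{j+1} - u_{j-1}) / (2*dx) to the Fourier mode exp(I*k*x) yields
# I*k_mod*exp(I*k*x) with k_mod = sin(k*dx)/dx, instead of the exact I*k.
k, dx = sp.symbols('k dx', positive=True)
k_mod = sp.sin(k * dx) / dx

# The Taylor expansion confirms second-order accuracy and shows that k_mod
# lags k: poorly resolved waves propagate too slowly (numerical dispersion).
print(sp.series(k_mod, dx, 0, 5))   # k - dx**2*k**3/6 + dx**4*k**5/120 + O(dx**5)
```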

Modified equation analysis has less power than any of the methods above, but it formally applies to nonlinear equations and, importantly, to nonlinear methods.  While it is limited in scope, it carries powerful intuitive capability.  One expands the method in Taylor series and looks at the remainder.  The remainder always involves more complex, higher-order differential equations than the original equation.  For example, modified equations provided a very clear picture of the implicit numerical diffusion arising from upwind differencing.  One derives the differential equation that the method actually approximates more closely than the original; in the case of upwind differencing that is the original advection law plus a diffusion term, and computed solutions show exactly that diffusive smearing.  This was important in the theoretical development because it showed how to connect numerical solutions to entropy conditions, which determine which solutions are physical.
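
Here is a sketch of that derivation in Python/SymPy (my own example; any CAS will do): expand the upwind scheme in Taylor series and read off the leading remainder, which is the advection-diffusion modified equation.

```python
import sympy as sp

# Modified equation for first-order upwind applied to u_t + a*u_x = 0, a > 0:
#     (u(x, t+dt) - u(x, t))/dt + a*(u(x, t) - u(x-dx, t))/dx = 0
# Taylor-expand both differences to expose the leading truncation terms.
x, t, dx, dt, a = sp.symbols('x t dx dt a', positive=True)
u = sp.Function('u')(x, t)

time_diff = (u.subs(t, t + dt) - u) / dt       # forward difference in time
space_diff = (u - u.subs(x, x - dx)) / dx      # upwind difference in space

scheme = time_diff + a * space_diff
expanded = scheme.series(dt, 0, 2).removeO().series(dx, 0, 2).removeO()
print(sp.expand(expanded))
# In shorthand: u_t + a*u_x + (dt/2)*u_tt - (a*dx/2)*u_xx to leading order.
# Using u_tt = a**2 * u_xx from the PDE itself, the scheme more closely solves
#     u_t + a*u_x = (a*dx/2)*(1 - C)*u_xx,   C = a*dt/dx,
# an advection-diffusion equation: upwinding adds numerical diffusion.
```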

Energy methods are a form of analysis that I have little experience with.  I know there are some tests and conditions used within the finite element community, but I can’t personally vouch for their utility.

Symbolic arithmetic makes it easy to do these analyses.  The course I took was to look at a very simple method (e.g., upwind differencing) and analyze it by hand.  I then took on a more complex method, but rapidly discovered that the analysis is terribly tedious and error prone.  For me, Mathematica came to the rescue!  I could do simple methods easily, then extend to more complicated methods, and still more complicated methods.  You can start to analyze systems of equations, solutions in multiple dimensions, or multi-level methods.  At some point you can push current symbolic capabilities to the breaking point, but by then you’re examining quite exotic methods.  The point is that you can do a great deal with relative ease.  It also really makes you appreciate the skill and patience of those who successfully analyzed complex schemes by hand!

Here are the benefits, in no particular order:

  1. You can do it easily, so there really isn’t an excuse for not doing it.  If you have the basic analysis at hand, it can be reused to understand a particular case you’re interested in.
  2. You learn by actually doing, not simply reading, and not simply coding and implementing (although that part is recommended too).  Holistically speaking, you really know a topic when you’ve read about it, thought about it, analyzed it personally, and implemented it personally.  At that point you’ve successfully learned.  Learning is the heart of excellence.  Learning is the heart of expertise.
  3. You’ll understand why your calculations behave in ways you never anticipated.
  4. You’ll start to see how to improve things, and you’ll be equipped to test, understand, and prove that your improvements are better.
  5. Your view of numerical methods will become nuanced, and you’ll be a better user of them.
  6. The difficulty of solving differential equations will become clear, and your appreciation for progress in the field will be more complete.
  7. You will get a feel for numerical stability, dispersion, dissipation and accuracy.  This understanding will allow you to diagnose the nature of your simulations like a boss.
  8. Your simulations will start to tell you new things that you never appreciated or even saw before.
  9. You will learn something new almost every time you analyze a new method, or analyze an old method in a new way.
  10. It will push you to learn new math, and expand your knowledge into areas you might not have thought you needed.
  11. Your curiosity will be piqued.
  12. You won’t rely on the published literature for knowledge.  It’s “give a man a fish and you feed him for a day; teach him to fish and you feed him for a lifetime.”
