The core idea here is that adaptive mesh technology seems to encourage a certain level of outright sloppiness in codes, because the adaptivity engenders a false sense of confidence.

An interesting (and generally good) development in computation is the increasingly pervasive availability of mesh adaptivity, particularly in computational fluid dynamics.  My comments are not only relevant to CFD, although its practitioners might be the worst abusers.  Of course, things like verification are most often done in CFD, so it might all work out in the wash.  Hand-in-hand with the use of mesh adaptivity comes a seeming laxness in attitudes toward solution quality.  In other words, adaptive meshes seem to fill people with the belief that they are inoculated against worrying about the quality of their calculations.  This is a bit ironic considering that users of adaptive meshes are exhibiting an implicit commitment to mesh-converged solutions, yet often avoid doing any of the things that might confirm convergence.  There seems to be an innate belief that the solution is of sufficiently high quality if the mesh is refined enough, which ignores some of the deeper dangers associated with adaptivity.  In fact, an adaptive mesh poses a substantially greater risk to solution quality, and even greater care is necessary.

The first and most common incarnation of this misbegotten belief in intrinsic quality involves mesh sensitivity, or poor man’s verification.  The common statement is, “we refined the mesh and the answer didn’t change, thus the mesh is fine enough.”  This might be true, or it might simply indicate a stalled, stagnant solution.  Without assessing the actual convergence of the solution toward a mesh-independent answer, the two situations cannot easily be distinguished from one another.  Adaptivity also adds the danger that, should the refinement criterion be chosen poorly, the adaptive process may itself cause the solution to stagnate.  I’ve actually seen this happen when a problem is focused on a single region of the flow: we bias the mesh refinement toward that region while ignoring the rest.  Instead of getting a better answer where we think we care about the result, we actually hurt its quality by ignoring important parts of the solution that influence it.  For some people this may be counter-intuitive, but the honest truth is that the quality of the solution where we care about it is not solely a function of the solution at that particular location.  The way to avoid this problem is simple enough: check the results for convergence and make certain the solution is not stagnating.  It may also be unwise to de-refine the mesh where something important is happening, even if we don’t care about the details in that location.
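The convergence check described above can be sketched concretely.  Given a quantity of interest computed on three successively refined meshes (the numbers below are purely illustrative, with a refinement ratio of 2), the observed order of convergence distinguishes a genuinely converging sequence from a stalled one — "the answer didn't change" only counts if the differences shrink at the scheme's formal rate:

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r=2.0):
    """Estimate the observed convergence order p from a quantity of
    interest computed on three mesh levels with refinement ratio r.
    A healthy calculation gives p near the scheme's formal order;
    a stagnating one gives p near zero."""
    return math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)

# Converging sequence: differences shrink ~4x per refinement -> p near 2
p_good = observed_order(1.120, 1.030, 1.0075)
# Stagnating sequence: the answer "didn't change," but only because it stalled
p_stalled = observed_order(1.120, 1.1190, 1.11815)
print(f"converging:  p = {p_good:.2f}")
print(f"stagnating:  p = {p_stalled:.2f}")
```

The small differences in the second sequence look reassuring on their own; only the near-zero observed order reveals the stagnation.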

The second issue with adaptivity relates to the overall attitude toward solution quality.  It has been said that adaptivity makes all verification unnecessary.  This is simply wrong-headed; adaptivity actually makes verification more necessary, because the mesh selection procedure can be thought of as an additional approximation.  One is using the mesh selection to approximate the effect of having a fine mesh everywhere.  Adaptive code users often seem to implicitly expect that they are getting the fine-grid-everywhere solution at a discount.  This might be approximately true under ideal circumstances, but where part of the mesh is left unrefined, one expects the solution to suffer.  One must also remember that the criteria for selecting where the mesh is refined (and where it stays at lower resolution) are not perfect; they are often based on heuristics.  We need to check whether the resulting degradation is significant, and that check is the verification process.  The faith placed in adaptivity is that the degradation occurs only where the solution is unimportant.  This faith can be seriously misplaced, and the entire procedure can go off the rails.  A large part of the cure is systematic code verification in which the adaptivity itself is tested.
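To make "verification" concrete: the core exercise is measuring whether a code's observed order of accuracy matches its formal order against a problem with a known solution.  Here is a minimal, self-contained sketch (my own toy example, not any particular production code) for a second-order finite-difference solve of -u'' = π²sin(πx) on [0,1] with u(0)=u(1)=0, whose exact solution is sin(πx); the same harness, run with the adaptive machinery switched on, is what tests that adaptivity has not broken the discretization:

```python
import math

def solve_poisson(n):
    """Solve -u'' = pi^2 sin(pi x), u(0)=u(1)=0, by second-order central
    differences on a uniform grid of n intervals (Thomas tridiagonal solve)."""
    h = 1.0 / n
    x = [i * h for i in range(n + 1)]
    m = n - 1                      # number of interior unknowns
    sub = [-1.0 / h**2] * m        # sub-diagonal (sub[0] unused)
    diag = [2.0 / h**2] * m
    sup = [-1.0 / h**2] * m        # super-diagonal (sup[-1] unused)
    rhs = [math.pi**2 * math.sin(math.pi * x[i + 1]) for i in range(m)]
    for i in range(1, m):          # forward elimination
        w = sub[i] / diag[i - 1]
        diag[i] -= w * sup[i - 1]
        rhs[i] -= w * rhs[i - 1]
    u = [0.0] * m                  # back substitution
    u[-1] = rhs[-1] / diag[-1]
    for i in range(m - 2, -1, -1):
        u[i] = (rhs[i] - sup[i] * u[i + 1]) / diag[i]
    return x, [0.0] + u + [0.0]

errors = []
for n in (16, 32, 64):
    x, u = solve_poisson(n)
    errors.append(max(abs(ui - math.sin(math.pi * xi)) for xi, ui in zip(x, u)))
orders = [math.log2(errors[i] / errors[i + 1]) for i in range(2)]
print("max errors:", errors)
print("observed orders:", orders)   # both should be close to 2
```

If adding adaptivity to such a solver drops the observed order well below 2, the mesh selection or interface differencing is degrading the solution — exactly the check that faith in adaptivity tempts people to skip.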

The third issue is the change in the discretization due to the variation in mesh density.  One needs to make sure that the differencing of the equations in the presence of mesh adaptation does not upset the basic properties of the original solution procedure.  Generally, one expects the accuracy of the adapted solution to equal the unadapted case AT BEST.  Usually the differencing in the face of adaptation is degraded in terms of accuracy.  Again, the quality of the code and solution needs to be extensively tested.  In fact, adaptation places a greater burden on the testing of quality, rather than lessening it as many assume.
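A standard, easily demonstrated instance of this degradation: the central difference (u(x+h₂) − u(x−h₁))/(h₁+h₂) is second-order accurate only when h₁ = h₂; across a 2:1 refinement interface it is formally first order.  A quick numerical check on u = sin(x) (my own illustration, with arbitrary spacings):

```python
import math

def central_diff_error(h, ratio):
    """Error of the central difference (u(x+h2)-u(x-h1))/(h1+h2) for
    u = sin(x) at x0 = 1, with left spacing h1 = h and right h2 = ratio*h."""
    x0, h1, h2 = 1.0, h, ratio * h
    approx = (math.sin(x0 + h2) - math.sin(x0 - h1)) / (h1 + h2)
    return abs(approx - math.cos(x0))

# On a uniform mesh (ratio=1) halving h cuts the error ~4x: second order.
# Across a 2:1 mesh-size jump (ratio=2) halving h only halves it: first order.
for ratio in (1.0, 2.0):
    e1 = central_diff_error(0.02, ratio)
    e2 = central_diff_error(0.01, ratio)
    print(f"spacing ratio {ratio}: error reduction under h-halving = {e1 / e2:.2f}")
```

Production codes use corrected interface stencils to recover accuracy, which is precisely why the adapted differencing must be tested rather than assumed to inherit the uniform-mesh properties.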

Finally, I can speak to the upside of adaptivity.  Computational practitioners should consider adaptivity as providing two chief benefits: you can get refined meshes at lower cost (but greater danger), and you can get solutions at multiple resolutions.  If you have done a sufficient job of testing the quality of your code, then the refined-mesh solutions are a massive benefit.  With solutions on multiple meshes you can test for convergence and estimate numerical errors more easily than with standard approaches.
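The multiple-resolution benefit can be sketched with Richardson extrapolation: given a quantity of interest on two mesh levels and an observed order, you get both an estimate of the remaining discretization error and a better answer.  The values below are illustrative (a second-order sequence with refinement ratio 2), not from any real calculation:

```python
def richardson(f_coarse, f_fine, p, r=2.0):
    """Extrapolate a quantity of interest from two mesh levels, given the
    observed order p and refinement ratio r.  Returns the extrapolated
    value and the estimated discretization error remaining in f_fine."""
    err_fine = (f_fine - f_coarse) / (r**p - 1.0)
    return f_fine + err_fine, err_fine

# Illustrative second-order values converging toward 1.0
f_extrap, err = richardson(1.030, 1.0075, p=2.0)
print(f"extrapolated value: {f_extrap:.6f}, estimated fine-mesh error: {err:+.6f}")
```

Note the estimate is only trustworthy when the observed order p has actually been measured and matches expectations — which loops back to the convergence checks above.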

In summary, adaptive meshes are a wonderful technique, but rather than lessening the burden on computational codes and analysts to prove quality, they increase it.  This is largely due to the greater complexity of adaptive mesh codes, and the potential to cut corners in the implementation.  As with most things, adaptivity is not a something-for-nothing technique: with great power comes great responsibility!

Use adaptivity, use it wisely, and trust, but verify!