Reviews are for readers, not writers. If I get a bad one, I shrug it off. If I get a good one, I don’t believe it.
― William Meikle

A week ago I received bad news: the reviews for a paper were back. One might think that getting a review back would be good, but it rarely is. These reviews are too often a horrible, soul-crushing experience. In this case I had reports from two reviewers, and one of them delivered the ego thrashing I’ve come to fear.

 I’ve found the best way to revise your own work is to pretend that somebody else wrote it and then to rip the living shit out of it.

― Don Roff

In total, the two reviews were generally consistent on the details of the paper and the sorts of suggestions for bringing it into the condition needed for publication. The difference was the tone of the reviews. One of the reviews was completely constructive and detailed in its critique. Each and every criticism was offered in a positive light, even when the error was pure carelessness.

The other review couldn’t have been more different in tone. From the outset it felt like an attack on me, and it took me several days before I could read it in a manner that allowed me to take constructive action. For example, a comment that says “the writing is terrible” is basically an attack on the authors (yes, it feels personal). The same point could be stated much more effectively: “I believe that you have something important to say here, but the ideas do not come across clearly.” Both say the same thing, but one of them invites a positive and constructive response. I invite readers to write their own reviews in a manner that invites authors to improve. One of my co-authors, who has a somewhat more unbiased eye, noted that the referee’s report seemed a bit defensive.

So now I’m taking the path of revising the paper. A visceral report makes this much more difficult to accomplish. The constructive review is relatively easy to accommodate, and makes a good blueprint for progress. The nasty review is much harder to employ in the same fashion. I feel that I’m finally on the path to do this, but it could have been much easier. There is nothing wrong with being critical, but the way it’s done matters a lot.

That’s the magic of revisions – every cut is necessary, and every cut hurts, but something new always grows.

― Kelly Barnhill

Just for the record, the paper is titled “Robust Verification Analysis,” by myself, Jim Kamm (Los Alamos), and Walt Witkowski and Tim Wildey (Sandia); it was submitted to the Journal of Computational Physics. As part of the revision I’ve taken the liberty of rewriting the abstract:

We introduce a new methodology for inferring the accuracy of computational simulations through the practice of solution verification. Our methodology is well suited to both well- and ill-behaved sequences of simulations. Our approach to the analysis of these sequences of simulations incorporates expert judgment into the process directly via a powerful optimization framework, and the application of robust statistics. The expert judgment is systematically applied as constraints to the analysis, and together with the robust statistics guards against over-emphasis on anomalous analysis results. We have named our methodology Robust Verification Analysis.

The practice of verification is a key aspect for determining the correctness of computer codes and their respective computational simulations. In practice verification is conducted through repeating simulations with varying discrete resolution and conducting a systematic analysis of the results. The accuracy of the calculation is computed directly against an exact solution, or inferred by the behavior of the sequence of calculations.

Nonlinear regression is a standard approach to producing the analysis necessary for verification results. We note that nonlinear regression is equivalent to solving a nonlinear optimization problem. Our methodology is based on utilizing multiple constrained optimization problems to solve the verification model in a manner that varies the solution’s underlying assumptions. Constraints applied in the solution can include expert judgment regarding convergence rates (bounds and expectations) as well as bounding values for physical quantities (e.g., positivity of energy or density). This approach then produces a number of error models, which are then analyzed through robust statistical techniques (median instead of mean statistics).

This provides self-contained, data-driven error estimation including uncertainties for both the solution and order of convergence. Our method will produce high-quality results for the well-behaved cases consistent with existing practice. The methodology will also produce reliable results for ill-behaved circumstances. We demonstrate the method and compare the results with standard approaches used for both code and solution verification on well-behaved and more challenging simulations. We pay particular attention to the case where few calculations are available and these calculations are conducted on coarse meshes. These are compared to analytical solutions, or calculations on highly refined meshes.
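To make the optimization-plus-robust-statistics idea in the abstract a little more concrete, here is a deliberately simplified toy sketch of my own, not the actual implementation from the paper. It fits the standard error ansatz f(h) ≈ f_exact + C·hᵖ to a sequence of mesh results, re-solves the fit under several different expert-judgment bounds on the convergence rate p (a crude stand-in for the constrained optimizations), and then summarizes the resulting error models with the median. The mesh spacings, bounds, and manufactured data are all invented for illustration.

```python
import numpy as np

def fit_error_model(h, f, p_bounds):
    """Fit f(h) ~ f_exact + C * h**p by scanning candidate rates p inside
    the expert-judgment bounds; for each trial p the remaining coefficients
    (f_exact, C) follow from a linear least-squares solve."""
    best = None
    for p in np.linspace(p_bounds[0], p_bounds[1], 401):
        A = np.column_stack([np.ones_like(h), h ** p])
        coef, *_ = np.linalg.lstsq(A, f, rcond=None)
        resid = float(np.sum((A @ coef - f) ** 2))
        if best is None or resid < best[0]:
            best = (resid, p, coef)
    _, p, (f_exact, C) = best
    return p, f_exact

# A sequence of mesh spacings and manufactured "simulation" results that
# converge at exactly second order to f_exact = 1.0 (illustrative data only).
h = np.array([0.1, 0.05, 0.025, 0.0125])
f = 1.0 + 0.5 * h ** 2

# Re-solve under several different constraint sets (bounds on the rate p),
# then summarize with the median -- the robust-statistics step.
bounds_sets = [(0.5, 4.0), (1.0, 3.0), (1.5, 2.5)]
estimates = [fit_error_model(h, f, b) for b in bounds_sets]
p_med = np.median([p for p, _ in estimates])
print(f"median convergence rate ~ {p_med:.2f}")  # -> 2.00 for this data
```

The real method in the paper poses genuine constrained nonlinear optimization problems and a richer set of constraints; the point of the toy is only the shape of the workflow: many constrained fits, one robust summary.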

Here is the abstract from the original submission:

Code and solution verification are key aspects for determining the quality of computer codes and their respective computational simulations. We introduce a verification method that can produce quality results more generally with less well-behaved calculations. We have named this methodology Robust Verification Analysis. Nonlinear regression is a standard approach to producing the analysis necessary for verification results. Nonlinear regression is equivalent to solving a nonlinear optimization problem. We base our methodology on utilizing multiple constrained optimizations to solve the verification model. Constraints can include expert judgment regarding convergence rates and bounding values for physical quantities. This approach then produces a number of error models, which are then analyzed through robust statistical techniques (e.g., median instead of mean statistics). This provides self-contained, data driven error estimation including uncertainties for both the solution and order of convergence. Our method will produce high quality results for the well-behaved cases consistent with existing practice as well. We demonstrate the method and compare the results with standard approaches used for both code and solution verification on well-behaved and challenging data sets.

 There is a saying: Genius is perseverance. While genius does not consist entirely of editing, without editing it’s pretty useless.

― Susan Bell

When you print out your manuscript and read it, marking up with a pen, it sometimes feels like a criminal returning to the scene of a crime.
― Don Roff
