This commentary provides a consensus view by two reviewers on a paper appearing in this issue that investigates the change in the quality of evidence published in clinical journals over a period of 25 years.
John Mayberry sought the opinion of Robert Newcombe as the statistical referee on the accompanying paper1 after it had been revised twice. At this stage, differences of opinion between the authors and the original referee, Justin Stebbing, remained unresolved. Robert suggested that the paper should be published as it stands, accompanied by the reviewer’s comments and his appraisal of the situation. The referees acknowledge that (1) all papers have deficiencies and can be improved, (2) it is meritorious to publish original work that advances a field, and (3) such work is often useful even if it generates more questions than answers. The comments that follow are a consensus view arrived at by these two reviewers.
The paper did not seem a particularly strong contender for journal space. Straightaway, the reference to a 30-year perspective, meaning 1983–2003, did little to inspire confidence. Here, it is important to bear in mind that the whole scientific publication system is based on trust—the reader cannot possibly verify everything that is presented in an article, nor indeed can a referee. The refereeing process is largely a negative one, looking for obvious deficiencies that need correcting, rather than providing positive reassurance that what has been done is appropriate. Moreover, in the domain of quality control, a retrospective approach is no adequate substitute for a prospective one. Furthermore, the paper seems to fail the “So what?” test. Nevertheless, it provides a useful springboard for reflecting on the research and publication system in more general terms.
The main areas of dispute seem to be the range of years studied and the statistical methods used. To …