Article Text
The most common measures of diagnostic test performance, sensitivity and specificity, involve two simplifying assumptions: that disease status is binary (present or absent) and that test results are binary (positive or negative). Bayes’ theorem tells us that these features must be combined with a third ingredient, the prior or pretest probability (PreTP) of disease, to update the disease probability following the observed test result. As we1 2 and others3 4 have written, readers and authors of medical literature often interpret test results in ways that violate Bayes’ rule. Publications celebrating diagnostic tests that on close inspection perform at near-chance level represent an important example of this interpretation risk.5 6 In some cases, even frankly paradoxical combinations of sensitivity and specificity are reported, for example with certain physical examination findings,7 in which a ‘positive’ result lowers the disease probability and a ‘negative’ result increases it.
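For concreteness, Bayes’ theorem written directly in these terms (a restatement in our notation) gives the post-test probability after a positive and after a negative result as:

\[
\text{PostTP}_{\text{positive}} = \frac{\text{sensitivity}\times\text{PreTP}}{\text{sensitivity}\times\text{PreTP}+(1-\text{specificity})\times(1-\text{PreTP})},
\]
\[
\text{PostTP}_{\text{negative}} = \frac{(1-\text{sensitivity})\times\text{PreTP}}{(1-\text{sensitivity})\times\text{PreTP}+\text{specificity}\times(1-\text{PreTP})}.
\]

As an illustrative (invented) example, with PreTP = 0.20, sensitivity = 0.90 and specificity = 0.80, a positive result gives PostTP = 0.18/(0.18 + 0.16) ≈ 0.53.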
We recently pointed out a simple litmus test that can be used at a glance to recognise the potential for chance or paradoxical performance, which we term the ‘rule of 100’.6 Any test for which the sensitivity and specificity add to 100% does not modify the PreTP of disease; that is, the result (positive or negative) provides a post-test probability (PostTP) identical to the PreTP. Such tests therefore provide no information. Here is a three-line proof:
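A sketch of such a proof, in our notation and under the assumption that the argument applies Bayes’ theorem to a positive result (with sensitivity + specificity = 100%, so that 1 − specificity = sensitivity):

\[
\begin{aligned}
\text{PostTP} &= \frac{\text{sensitivity}\times\text{PreTP}}{\text{sensitivity}\times\text{PreTP}+(1-\text{specificity})\times(1-\text{PreTP})}\\
&= \frac{\text{sensitivity}\times\text{PreTP}}{\text{sensitivity}\times\text{PreTP}+\text{sensitivity}\times(1-\text{PreTP})}\\
&= \text{PreTP}.
\end{aligned}
\]

The same cancellation for a negative result (where 1 − sensitivity = specificity) again gives PostTP = PreTP.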
The second line of the proof makes use of the fact that $1-\text{specificity}=\text{sensitivity}$ when $\text{sensitivity}+\text{specificity}=100\%$.
For those comfortable with combining sensitivity and specificity into the likelihood ratio (LR) value, …
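The identity behind that observation, given here only as our own sketch, is that when sensitivity + specificity = 100% both likelihood ratios equal 1:

\[
\text{LR}+ = \frac{\text{sensitivity}}{1-\text{specificity}} = \frac{\text{sensitivity}}{\text{sensitivity}} = 1,
\qquad
\text{LR}- = \frac{1-\text{sensitivity}}{\text{specificity}} = \frac{\text{specificity}}{\text{specificity}} = 1,
\]

and multiplying the pretest odds by an LR of 1 leaves the odds, and hence the probability, unchanged.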
Footnotes
Contributors AMW, DS, MBW and MTB all participated in planning the study, analysing the data and writing the manuscript. MBW submitted the study.
Funding This research received no specific grant from any funding agency in the public, commercial or not-for-profit sectors.
Competing interests None declared.
Provenance and peer review Not commissioned; externally peer reviewed.