The Importance of Accurate Testing [Updated]

The D.C. neurosurgeon whom I quoted here and whose mortality chart I reproduced here writes to comment on the need for, and difficulty of obtaining, accurate COVID-19 testing. Please note, especially, his conclusion:

The accuracy of testing for COVID-19 needs to be discussed, as all the numbers we are following (and the massive reaction to them) hinge on the tests’ accuracy. As of now, global testing for COVID is based on a varied group of genetic tests that differ by country, region, and laboratory. The CDC test, for example, tests a different set of genes than, say, the German WHO test. These tests were rapidly developed under a lot of pressure and potentially have errors (as we already saw with the CDC test).

For the most part, the validation data and specifications for these tests have not been publicly released, and they probably vary across methods and even laboratories. As clinicians, we look at the false positive and false negative rates of a diagnostic test. However, even a very accurate test can produce a very high proportion of false positives if the condition’s prevalence is low. How? The answer is conditional (Bayesian) probability. Imagine a population of 100 people, of which 1 truly has Disease X. Your test has a 5% false positive rate and a 5% false negative rate. If you test all 100, about 6 will test positive (roughly 1 true positive and 5 false positives), but only 1 truly has the disease. This means that even with a 95% accurate test, there is roughly an 84% chance that a positive result is a false positive!
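As a sanity check on the arithmetic, here is a minimal sketch of the same calculation. The population size, prevalence, and error rates are the illustrative numbers from the paragraph above, not the properties of any real COVID test:

```python
# Bayes' rule sanity check for the worked example above (illustrative numbers only).
population = 100        # total people tested
true_cases = 1          # prevalence of 1%
sensitivity = 0.95      # 5% false negative rate
specificity = 0.95      # 5% false positive rate

true_positives = true_cases * sensitivity                          # ~0.95 people
false_positives = (population - true_cases) * (1 - specificity)    # ~4.95 people
total_positives = true_positives + false_positives                 # ~5.9 people

ppv = true_positives / total_positives
print(f"Expected positives: {total_positives:.1f}")           # ~5.9
print(f"Positive predictive value: {ppv:.0%}")                 # ~16%
print(f"Chance a positive is actually false: {1 - ppv:.0%}")   # ~84%
```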

We know very little about the false positive and false negative rates of these varied tests. Similar PCR (gene) tests for common respiratory viruses have a 1-2% false positive/false negative rate (https://www.biomerieux-diagnostics.com/filmarrayr-respiratory-panel). For COVID, since the numbers being reported (e.g., on Worldometer) are positives, it is very possible that a large fraction of that number represents false positives in relatively low-prevalence situations. Avoiding this depends on adequate pre-test screening and limiting the test to high-risk populations (where have we heard this before?!). As we test more, we will find more positives, but only a fraction can be expected to be true, and the true positive rate will decrease as testing expands. As Dr. Birx (the White House COVID coordinator) said on Tuesday, quality testing is “paramount”: “It doesn’t help to put out a test where 50 percent are false positives.” This is a crucial point, with significant political implications. The CDC’s methodological and quality concerns are being mistaken for “botching,” when they reflect a quality concern and a very basic epidemiological principle.
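To make the prevalence point concrete, here is a small sketch of how the positive predictive value falls as testing expands into lower-prevalence populations. The 2% error rates are borrowed from the respiratory-panel figure cited above and are only an assumption about COVID tests:

```python
# How positive predictive value (PPV) changes with disease prevalence,
# holding the test's error rates fixed.
sensitivity = 0.98   # assumed 2% false negative rate
specificity = 0.98   # assumed 2% false positive rate

for prevalence in [0.20, 0.10, 0.05, 0.01, 0.001]:
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    ppv = true_pos / (true_pos + false_pos)
    print(f"prevalence {prevalence:>6.1%}  ->  PPV {ppv:.0%}")
# At 20% prevalence most positives are real (~92%); at 0.1% prevalence
# only about 5% of positives are true positives.
```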

UPDATE: Our neurosurgeon adds, in response to some of the comments:

Briefly, on the possibility that the false negative rate is large as well: that is certainly possible. The point is that nobody really knows what the false positive or false negative rates are for COVID testing. Given nearly equal false positive and false negative rates, in low-prevalence situations the likelihood that a positive represents a true positive (the “positive predictive value”) is a lot lower than the likelihood that a negative represents a true negative (the “negative predictive value”). It’s not an issue of a desired “narrative,” but simply an issue of math. But if the false negative rate is high, then the negative predictive value may be low as well. The real question is: where is the data validating the test, so that we can understand these likelihoods better?
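A brief sketch of the asymmetry he describes, using an assumed low prevalence and equal, assumed error rates (the specific numbers are illustrative only):

```python
# PPV vs. NPV when prevalence is low and the error rates are equal.
prevalence = 0.01    # assumed 1% prevalence
sensitivity = 0.95   # assumed 5% false negative rate
specificity = 0.95   # assumed 5% false positive rate

true_pos  = prevalence * sensitivity
false_pos = (1 - prevalence) * (1 - specificity)
true_neg  = (1 - prevalence) * specificity
false_neg = prevalence * (1 - sensitivity)

ppv = true_pos / (true_pos + false_pos)   # ~16%: most positives are false
npv = true_neg / (true_neg + false_neg)   # ~99.9%: almost all negatives are true
print(f"PPV: {ppv:.1%}   NPV: {npv:.1%}")
```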

By the way, determining a clinically relevant false negative rate for a test in these situations is hard, because you need a KNOWN population of true positives, which is hard to find on the upswing of an epidemic in a short period of time. How do you know someone is truly infected in order to “test the test”? You would need a “gold standard” test; there are possibilities here, but they are laborious. Also keep in mind that smaller labs with less experience may get results that differ from the technical specifications determined at a reference laboratory (e.g., through user error). Looking at the numbers globally, one wonders how much quality checking these labs have gone through in such a short time frame under intense pressure. But even a small amount of lab error leads to large changes in these probabilities.
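To illustrate that last point, here is a sketch of how a small shift in the false positive rate (say, from lab-to-lab variation) moves the positive predictive value at an assumed 1% prevalence; all of these numbers are hypothetical:

```python
# Sensitivity of PPV to small changes in the false positive rate.
prevalence = 0.01    # assumed 1% prevalence
sensitivity = 0.98   # assumed 2% false negative rate

for false_positive_rate in [0.001, 0.01, 0.02, 0.05]:
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * false_positive_rate
    ppv = true_pos / (true_pos + false_pos)
    print(f"FP rate {false_positive_rate:.1%}  ->  PPV {ppv:.0%}")
# Moving the false positive rate from 0.1% to 2% drops the PPV
# from roughly 90% to roughly 33% in this hypothetical scenario.
```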

Finally, a large false positive rate and a large false negative rate are not mutually exclusive; it is entirely possible that both coexist. In this rapidly developing situation, the assumption that the test has been rigorously validated is questionable and needs to be challenged. This would normally be done in the FDA regulatory process, which has been bypassed. All the more reason to ask for significant scrutiny of the technical aspects of the testing, in order to make the right political, financial, and social decisions.
