Welcome to Math Mutation, the podcast where we discuss fun, interesting, or weird corners of mathematics that you would not have heard in school. Recording from our new headquarters in Wichita, Kansas, this is Erik Seligman, your host.

Recently the infamous Elizabeth Holmes of Theranos has been in the news again, apparently filing new motions to delay her trial. As you may recall, Theranos was the company that claimed to have developed powerful blood testing kits, which could run hundreds of standard medical tests at home on a single drop of blood. It turned out that the invention just didn’t work, and Holmes was eventually charged with fraud as the company collapsed. But too few people have noticed that the lies about the science-fiction technology weren’t the only problem with Theranos’s basic concept. We also need to think about the flaws in its fundamental premise of “democratizing” your health information: the idea that average consumers should be encouraged to run lots of tests, for rare diseases or issues, on their own blood. We especially need to pay attention now that numerous non-fraudulent companies, like the well-intentioned Everlywell, have entered this space.

At first glance, the core concept sounds like an unmitigated benefit. Why not let everyone run their own blood tests, without worrying about expensive doctors? And there are good philosophical arguments why this should be allowed, as a matter of individual freedom, regardless of the mathematical issues I’m about to discuss. (I won’t be getting into those arguments, as that’s beyond the scope of this podcast!) But there is a key element of the math behind these tests that too many consumers are likely to overlook or be unaware of: the fact that if a highly accurate test shows positive for an extremely rare disease, you probably DON’T actually suffer from that disease.

To make this more concrete, let’s assume there is a blood test which can, with 99% accuracy, determine if you suffer from the deadly virus of Math Madness, or MM; and in the general population, only one person out of every million has this disease. You run the test, and it shows up positive. You might intuitively think you are 99% likely to have MM. However, let’s think about the total numbers here. Out of every million people tested, only one actually has MM, given its frequency in the population. Yet with a 99% accurate test, 1% of the approximately 1 million healthy people, or about 10,000 people, are going to incorrectly test positive. So of the roughly 10,001 positive results, only about one comes from someone who actually has the disease: a given person who tests positive has only about a 1 in 10,000 chance of carrying it.
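For listeners following along with the transcript, the counting argument above can be sketched in a few lines of Python, using the same hypothetical MM numbers from the example:

```python
# Sanity check of the counting argument, with the podcast's
# hypothetical "Math Madness" (MM) numbers.
population = 1_000_000
accuracy = 0.99             # test gives the correct answer 99% of the time
prevalence = 1 / 1_000_000  # one MM sufferer per million people

true_sufferers = population * prevalence          # 1 person
healthy = population - true_sufferers             # 999,999 people
false_positives = healthy * (1 - accuracy)        # ~10,000 healthy people test positive
true_positives = true_sufferers * accuracy        # ~0.99 sick people test positive

# Of everyone who tests positive, what fraction is actually sick?
chance_sick_given_positive = true_positives / (true_positives + false_positives)
print(f"{chance_sick_given_positive:.6f}")  # about 0.000099, i.e. roughly 1 in 10,000
```

Running this confirms that the crowd of false positives from the huge healthy population swamps the single true positive.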

How did our basic intuition fail us here? The key problem is that the conditional probability of A given B is quite different from the probability of B given A. That 99% represents the probability of a positive test given that you have the disease (and of a negative test given that you don’t). But it doesn’t measure the chance that you have the disease given a positive test, which is the reverse of what that 99% describes. When we reverse the terms like that, we need to convert the probability using Bayes’ Theorem:

P(A|B) = P(B|A)P(A)/P(B)

That P(A) term, the prior probability of the condition being tested, is the key factor here that drastically cuts down the ultimate chance of having the disease. For our MM example, P(B), the overall chance of a positive test, is about 1/100, since it is dominated by the 1% false-positive rate among the overwhelmingly healthy population. That gives us .99 * (1/1000000)/(1/100), or approximately 1/10000.
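Plugging the example’s numbers straight into the formula gives the same answer as the counting argument; here is a minimal Python sketch, with A meaning “has MM” and B meaning “tests positive”:

```python
# Bayes' Theorem, P(A|B) = P(B|A) * P(A) / P(B), for the MM example.
p_b_given_a = 0.99   # P(positive | disease): the test's 99% accuracy
p_a = 1 / 1_000_000  # P(disease): the prior, one in a million
p_b = 1 / 100        # P(positive): dominated by the 1% false-positive rate

p_a_given_b = p_b_given_a * p_a / p_b
print(f"{p_a_given_b:.6f}")  # about 0.000099, roughly 1 in 10,000
```

The tiny prior P(A) in the numerator is what drags the 99% down by four orders of magnitude.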

Now you might point out that a false positive test is OK, as this is just an initial check to see if we should consult the doctor for more accurate testing and followup. But the problem is that once the “easy” tests are out of the way, often much more intrusive, stressful, and life-altering testing and treatment is required. This was brought home to me by an interesting poster I saw at my doctor’s office, provided by the US Preventive Services Task Force, on whether men under 70 should get PSA tests for prostate cancer. Prostate cancer is an interesting case because, while deadly in the worst cases like all cancers, mild versions of it often do little harm and can be ignored. The poster points out that out of every 1000 men given the PSA test, 1 death from prostate cancer will be prevented. But: 240 of those men will initially test positive, and have to go through a painful biopsy. Then 80 of them will, after testing positive at biopsy, go through long, painful (and unnecessary) courses of surgery or radiation treatment, after which 50 will permanently suffer erectile dysfunction, and 15 will suffer from permanent urinary incontinence. So, adding up those 50 and 15 men, we’re 65 times more likely to suffer really painful lifetime consequences than we are to save our life by taking the test. It might still be worth it, but you really have to think hard.
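That 65x figure is simple arithmetic on the poster’s per-1000-men numbers, which a few lines of Python can lay out explicitly:

```python
# Outcomes per 1,000 men given the PSA test, per the USPSTF poster cited above.
deaths_prevented = 1          # lives saved by the screening
biopsies = 240                # men sent to a painful biopsy after a positive test
unnecessary_treatment = 80    # men given unneeded surgery or radiation
erectile_dysfunction = 50     # men left with permanent erectile dysfunction
incontinence = 15             # men left with permanent urinary incontinence

# Permanent lifetime harms versus lives saved:
lifetime_harms = erectile_dysfunction + incontinence  # 65 men
ratio = lifetime_harms / deaths_prevented
print(ratio)  # 65.0
```

So for every life saved, 65 men are left with a permanent, life-altering side effect, before even counting the 240 biopsies and 80 unnecessary treatment courses.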

Thus, ultimately, it often makes the most sense to avoid medical testing for rare conditions unless there is some overt symptom that causes your doctor to suspect an issue. Otherwise the followup resulting from the test can actually lead to many very negative patient outcomes. This falls naturally out of the common fallacy where people fail to apply Bayes’ Theorem, which requires factoring in the prior probability of a condition before you can properly interpret a test’s results. Can average consumers be expected to understand these issues, and the reasons why running every possible test on your blood might not be the wisest course of action? At the very least, I think companies entering this space should be very clear about the issue, and put up posters like the one at my doctor’s office, so their customers will approach the topic with their eyes open.

And this has been your math mutation for today.

References:

https://www.uspreventiveservicestaskforce.org/Home/GetFileByID/3795

https://en.wikipedia.org/wiki/Bayes%27_theorem

https://www.inc.com/christine-lagorio-chafkin/everlywell-democratizing-health-information.html