
# Doctors' Statistical Ignorance


Take a test for a disease that has a false positive rate of 5%, and a disease prevalence of 1 in 1000--lupus, say. If you test positive in a random assay, what are the odds that you actually have the disease?

Most people--even, apparently, a shocking number of doctors--would say that the odds are 95%. But this is all wrong. If you test 1,000 people for lupus, 1 of them will correctly test positive for lupus--and 50 of them will falsely test positive. The chances are only 1 in 51, less than 2%, that you actually have the disease.
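
The arithmetic can be checked in a few lines (a sketch using the post's stipulated numbers; perfect sensitivity is assumed, as in the example above):

```python
# Base-rate check: screen 1,000 random people for a disease with
# prevalence 1 in 1,000 and a 5% false positive rate.
prevalence = 1 / 1000
false_positive_rate = 0.05
sensitivity = 1.0  # assume the test catches every true case

n = 1000
true_positives = n * prevalence * sensitivity                 # 1 person
false_positives = n * (1 - prevalence) * false_positive_rate  # ~50 people

# P(disease | positive test), i.e. the positive predictive value:
p_disease_given_positive = true_positives / (true_positives + false_positives)
print(round(p_disease_given_positive, 4))  # 0.0196 -- about 1 in 51
```

The positive result moves you from a 0.1% chance of disease to about a 2% chance, not 95%.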

These are in fact the actual numbers for anti-nuclear antibody tests and systemic lupus, at least as relayed to me by my immunologist after I got a borderline positive result on a screen. These suggest that no one should ever do a random ANA; the information it gives is garbage, particularly since they don't treat lupus until you manifest symptoms. Yet lots of doctors, including mine, do.

I have written about this issue many times, but it is worth writing about again. One of the first things that doctors forget when they leave medical school is elementary probability theory. Their statistical ignorance has real effects on health care delivery. It is one of the reasons why I believe we need a medical guidelines commission, and I am willing to suspend "lose the we" and consider a commission chartered by government (although it does not have to be).

### Comments and Sharing

COMMENTS (22 to date)
writes:

"Take a test for a disease that has a false positive rate of 5%, and a disease prevalence of 1 in 1000--lupus, say. If you test positive in a random assay, what are the odds that you actually have the disease?"

Part of the problem here is the ambiguity in the phrase "false positive rate of 5%" -- 5% of what? 5% of those tested or 5% of those who tested positive? If it were the latter, the doctors would be correct.

Obviously, doctors were taught 5% of what in medical school and must have since forgotten. This indicates bad teaching in medical schools, not generally a lack of grasp of statistics.

I am a doctor writes:

As a physician, I would agree with your assertion that many doctors don't understand statistical probability all that well, but the author missed the point with this case. When looking for lupus, the ANA is a 'screening' test (relatively low cost/risk... high sensitivity... you don't want to miss any; low specificity... it's not confirmatory, but it tells you to do more or stop there). It shouldn't be done for population screening, and it's not. Physicians use the test appropriately; apparently the author just doesn't understand that.

However, they understand economics well enough to know that, "Yeah, this Kling guy's back pain is probably nothing, but it costs me (the doctor) nothing to get an MRI, just in case it is the 5% of the time when it is something bad... but it'll cost me (personally) a whole lot if I don't do an MRI and it is something bad (malpractice)." It's well known that the threat of malpractice leads to defensive (read: overtesting) medicine. I'd put this somewhere in the range of 15-25% of medical spending.

And, patients usually want to know the information (and they rarely if ever understand the statistical basis of what various results mean). If people will play the lottery with 1:30 million odds, they'll sure as hell spend someone else's money (3rd party insurers) to take a 1:1000 chance of finding something wrong with them.

It's simple economics. You should understand that.

General Specific writes:

Sadly, most people--even experts--operate on anecdotal auto-pilot. It works by and large but isn't efficient and infrequently leads to catastrophe.

writes:

Part of the problem here is the ambiguity in the phrase "false positive rate of 5%" -- 5% of what? 5% of those tested or 5% of those who tested positive? If it were the latter, the doctors would be correct.

"False positive" is not terribly ambiguous. It means: if you test someone who does not have x, what is the chance that the test will show x? So what you want is "5% of those who do not have lupus."

I think I see what you mean. "The latter," in this case, would be to interpret "5% false positive rate" as meaning "5% of the positives are false." There is no sense in which that is correct, but it is an understandable and apparently quite common mistake.

writes:

Right, I know that it isn't the correct interpretation, but the phrase "the rate of false positives" could theoretically mean that. I remember being confused by it in my biochem class. Apparently, some doctors never got un-confused.

I maintain that it does not mean that the doctors don't understand probabilities or statistics generally, but rather that they don't understand the meaning of false positives. A serious problem, but a more narrow one.

writes:

I am a doctor: as I stated, my doctor in fact used it on a blind screen, and he isn't the only one; I know three other people in New York who have had the same experience I did. Doctors *shouldn't* ever blind screen for ANA, but they indisputably sometimes do, because there was no reason to test me for lupus.

Biomed Tim writes:

This sort of stuff is actually taught in med schools, but unfortunately it doesn't stick. Many of my med student friends treat biostatistics as a distraction from "what they really need to know."

Alas, Bayes's theorem tends to get lost in the midst of anatomy, physiology, pharm, path.....etc.

I am a doctor writes:

In response to Megan McArdle, who writes:
"as I stated, my doctor in fact used it on a blind screen"

Two possibilities: 1. Your doctor indeed has no understanding of statistics and just randomly ordered this test (a distinct possibility). Or, 2. Your doctor had some indication to screen you for lupus, and performed this non-invasive, relatively cheap (needle stick) test as a screening (see above). Hard to say what sort of complaints you presented to him to judge his reasoning.

"I know three other people in New York who have had the same experience I did." - Again, without knowing these people's presentations and complaints, it's hard to say if they should have been checked or not. Do you know the indications for screening for lupus or any other connective tissue disease? Didn't think so.

"Because there was no reason to test me for lupus" - ANA doesn't just screen for lupus, it can be positive in all range of connective tissue disorders, and in normal people. Do you know all the presentations of lupus? All the clinical manifestations?

Do doctors do unnecessary tests? Sure. Do patients understand everything that doctors do? Nope. Just because you don't understand the indication for every little thing doesn't mean it was done erroneously.

writes:

I Am a Doctor: You can bet I damn well do know the indications for screening for lupus, since I freaked the hell out when I got a borderline positive ANA on a routine, I repeat routine, annual physical. I have none of the symptoms of lupus except being a fair-skinned female: no fever, no joint pain, no photosensitivity, no fatigue, no edema, no butterfly rash, etc. I had no medical complaints when I was screened for lupus other than things I had had for a long time, like asthma, which are also not symptoms of lupus. Nor did any of the other fair-skinned females in New York City I talked to who got ANA tests. The immunologist I followed up with was also astounded.

bingo writes:

False positive: a positive result in the absence of disease. The rate of false positive results is also called "sensitivity". Megan is actually asking what the % chance of a true positive would be in this population (incidence 1/1000) if the sensitivity of the test is 95%. The question is posed in a way that BOTH Megan's answer and the "typical" physician's answer could be correct depending on how you interpret the term "false positive". In order to properly use the "false positive" rate we must, by definition, be looking at a population in which the gold standard test has provided a diagnosis against which we can measure the accuracy of the test in question. In the context of Megan's rant against overuse of tests it is easy to see how she arrives at her conclusion.

"I am a doctor" notes two important points, each of which properly removes the question from the pure vacuum of economic analysis. We are not dealing with statistics on a page, we are dealing with individuals who bring bias to the equation. Was there a symptom which has defied a common diagnosis that prompted the test? Is this a patient who is anxiety-prone (certainly not Megan, of course, but in general) who the doctor knows will only be satisfied if no stone is left unturned? Has the doctor recently been sued for malpractice in a case of a bad medical outcome in the absence of medical misadventure, making him particularly defensive? Did Megan really try to prove her point with a self-reported retrospective cohort study with four subjects?

I think that all of the economists who have blogged on health care finance have underestimated the problem of provider demand, medical care prescribed to avoid a tort. Not only does this distort the cost structure but it prevents us from performing a root cause analysis on recurring problems because information is not freely provided for fear of that tort. Are tests overused, medicine overprescribed, and treatments overperformed? Yup. The question is why, and I don't think the answer is quite as tidy as we physicians forgot our statistics.

writes:

Gerd Gigerenzer's "Calculated Risks: How to Know When Numbers Deceive You" lists very similar examples along with other types of statistical mistakes made by all of us. A very interesting read, recommended for anyone interested in the subject.

I am a doctor writes:

So... if you knew you couldn't possibly have had lupus since you weren't manifesting any symptoms, why did you let them do the test on you? He must have said, "And we'll draw some blood to run this test because you might have lupus," and you could have said, "No." If he didn't tell you what he was ordering and why (not just that you didn't listen closely or said, 'Yeah, do what you need to do'), then that's pretty crappy on his part, but it's hardly reason to question medicine as a whole. Find a different doctor.

John Thacker writes:

The rate of false positive results is also called "sensitivity".

Sorry, Bingo, you've got that backwards.

"Sensitivity" is the proportion of actual positives that are correctly detected; in other words, the ratio of true positives to all actual positives (true positives plus false negatives (Type II errors)). It is the probability that someone with the disease will be spotted by the test. Sensitivity has nothing to do with false positives.

"Specificity" is the probability that a negative is correctly detected, i.e., that the test indicates negative if someone is negative. In other words, the ratio of true negatives to all actual negatives (true negatives plus false positives (Type I errors)). That's what you're thinking of, though the false positive rate is actually 1 - specificity.

Megan is actually asking what the % chance of a true positive would be in this population (incidence 1/1000) if the sensitivity of the test is 95%. The question is posed in a way that BOTH Megan's answer and the "typical" physician's answer could be correct depending on how you interpret the term "false positive". In order to properly use the "false positive" rate we must, by definition, be looking at a population in which the gold standard test has provided a diagnosis against which we can measure the accuracy of the test in question.

No. Megan is asking what the % chance is of a true positive given a positive result on this test, assuming the test is performed completely randomly on a population with incidence 1/1000. There is only one mathematical answer to the question. The precision rate, or positive predictive value, is unambiguously a function of the sensitivity, specificity, and prevalence: true positives divided by the sum of true positives and false positives. She did not mention the false negative rate, which is needed for calculating the sensitivity-- however, she obtained an upper bound on precision by assuming that the sensitivity was 1, or perfect.

Given what she stated, the probability that you actually have the disease is at most 1 in 51-- it is possible that the test has imperfect sensitivity and a non-zero false-negative rate. For example, if the sensitivity is also 95%, then the precision is .95/50.95, or ~1.86%, and if it's a mere 50%, then the precision is only .50/50.50, or ~1%.

The "typical" physician's answer is unambiguously wrong. It is true that in order to use the false positive rate you must by definition have a gold standard test that you use to judge whether the positive is false or not. However, Megan's question already assumed that by saying that the incidence of the disease in the given population tested was 1 in 1000, which we must assume is according to the gold standard. In her very posing of the question she claimed that it was a random assay. Yes, obviously the rates are quite different if you only give the test to those who are already likely to have the disease. But that's irrelevant here.
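
Thacker's point that the positive predictive value is pinned down by sensitivity, specificity, and prevalence can be sketched as a small function (an illustration, not anyone's official code; the prevalence and false positive rate are the thread's stipulated numbers, and the sensitivity values are the ones he tries):

```python
def positive_predictive_value(prevalence, sensitivity, false_positive_rate):
    """P(disease | positive test): true positives over all positives."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * false_positive_rate
    return true_pos / (true_pos + false_pos)

prev, fpr = 1 / 1000, 0.05
for sens in (1.0, 0.95, 0.50):
    ppv = positive_predictive_value(prev, sens, fpr)
    print(f"sensitivity {sens:.0%}: PPV ~ {ppv:.2%}")
```

At 100% sensitivity this reproduces the 1-in-51 upper bound (~1.96%); at 95% and 50% it lands near the ~1.86% and ~1% figures above (tiny differences come from whether the false positives are counted out of 999 people or 1,000).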

John Thacker writes:

He must have said, "And we'll draw some blood to run this test because you might have lupus,"

In my experience, quite a few doctors say "And we'll draw some blood in order to have all the normal blood work done," and then go ahead and run a bunch of tests on it, some of which are justified medically and some of which aren't. I certainly grant your point that there's lawsuit-covering in there, and there's a bit of "heck, the insurance company pays for tests anyway" in there on the part of both patient and doctor often, and lots of other contributing factors. However, lots of doctors still do get the statistics wrong. The average doctor certainly does better than the average patient would (and hence, yes, the average patient might well insist on such a test if given the option), but the statistical error is still too common.

John Thacker writes:

I realize that I shouldn't be so hard on someone for mixing up "sensitivity" and "specificity," but considering that that person insisted on using them instead of the (to me) easier to remember "false positive rate," which is just as unambiguously defined in the literature and perfectly acceptable, I felt that in this case it was justified. Certainly people confuse what "false positive rate" means-- that was Megan's point. However, it has an unambiguous technical definition, and "sensitivity" and "specificity" are confused just as easily.

bingo writes:

John Thacker corrects my definition and I agree. The sensitivity is correctly defined in his post. The false positive rate is therefore the % of tests in which a positive result is present in the absence of disease, in this case 5%. However, I stand by my assertion that Megan's answer does not follow directly from the question because of the inexact wording of the question. Having written a white paper while in medical school on the proper use of diagnostic tests in evaluating a population at risk for coronary artery disease using Bayesian analysis, and personally having a very clear understanding of the concept of positive predictive value, I am equally guilty of presenting my view in a clumsy manner.

I stand by my conclusion, however.


Gary Rogers writes:

I would say that 1 in 51 is not the correct probability that you have lupus unless you took the test for no good reason. The fact that the test was performed tells me that there is some a priori information that needs to be factored in using Bayes' theorem. Statistics can be a useful tool, but they can also lead to wrong conclusions if not tempered with common sense. I do get your point, though, and it is a good one.

senderista writes:

Note that this example assumes that the "false negative" rate (i.e. the percentage of people with lupus who test negative) is zero. In general this cannot be assumed. E.g., if probabilities are as above except the false negative rate is 95%, the probability that a person who tests positive for lupus actually has the disease is exactly the same as the incidence of the disease, i.e. 1 in 1000. (Note that this is expected since any test with equal false positive and "true positive" rates is by definition uncorrelated with the factor being tested.)
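
Senderista's equal-rates case can be verified directly (a sketch; the 5% figure and 1/1000 prevalence are the thread's numbers):

```python
# If the true positive rate (sensitivity) equals the false positive rate,
# a positive result is uninformative: P(disease | positive) = prevalence.
prevalence = 1 / 1000
rate = 0.05  # both the sensitivity AND the false positive rate

true_pos = prevalence * rate
false_pos = (1 - prevalence) * rate
p_disease_given_positive = true_pos / (true_pos + false_pos)

# The rate factors out of numerator and denominator,
# so the answer collapses back to the base rate of 1 in 1000.
assert abs(p_disease_given_positive - prevalence) < 1e-12
```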

writes:

As I think I have been trying to communicate for a day now, the test was done as part of my routine annual bloodwork. I have no idea why it was run, but it wasn't because I had any symptoms indicating for it; and I didn't know it was being done until it came back borderline positive. For all I know, he also had me tested for rickets and dengue fever.

The doctor. was. wrong. The arrogant dismissals of me are really kind of offensive, and also silly; if memorizing the list of lupus indications is really a doctor's idea of something so complicated and arcane that only a doctor is capable of understanding whether I have any of the symptoms, then I'm never going to a doctor again. Apparently I can do their job in five minutes at home on a computer.

bingo writes:

So I re-read my posts above looking for arrogance and dismissive language for two reasons. 1) such language is indicative of a lack of kindness, and each interaction I have is done with kindness in mind, regardless of how much I may disagree with someone. 2) I am a physician and we have rightly been accused of both from time immemorial. Sorry, but I don't see either in my posts.

Was the doctor wrong to order an ANA? Was there insufficient evidence in Megan's history or exam to justify the expense and the anguish caused by what we are asked to assume is a false positive result? I see no reason to doubt Megan's account or even her conclusion about her care.

But I stand by my conclusion that the leap from the ordering of a "screening" ANA to "doctors don't understand statistics," and from there to THIS being the root cause of over-prescribing and overutilization, is more than a stretch; it's fanciful. Many doctors may not understand statistics and related analyses--true. There is a significant amount of care prescribed in the U.S. that does not affect health outcomes--true.

True. True. But not nearly as related as Megan declares.

I am a doctor writes:

" I have no idea why it was run, but it wasn't because I had any symptoms indicating for it; and I didn't know it was being done until it came back borderline positive. For all I know, he also had me tested for rickets and dengue fever."

And herein lies the root problem. You let someone draw blood from you and run some tests with no knowledge of what was being done. You probably spend more time reading food labels at the grocery store than you spend thinking about medical decisions, as do most people. If you don't pay for something, or don't pay its true cost, you don't value it as much. As such, instead of being an active participant and taking an interest in everything that was being done, you just went along for the ride, because it probably cost you a \$20 copay or something and it was one blood draw. (And you know what, I agree with your premise that the test was probably not necessary... ANAs are pretty worthless, but that doesn't excuse you from finding out what was going on and saying, 'Nope, don't need that.')

So, I have no idea of the exact complexities, or lack thereof, of your visit with this doctor, and he may indeed be a quack. But the conclusion you draw is rather far-fetched.

"Apparently I can do their job in five minutes at home on a computer."
Then by all means. Don't come in for your annual check-up (there is no medical evidence that such routine visits are of any value... check out a number of recent articles). Why did you go in? Because you really don't know all that much about medicine (despite what you can dig up on the internet... go try to take care of someone in the ICU for a day and let me know how it goes for you). You, like so many others, go in because you aren't responsible for the full cost of that activity. People go in for annual screenings because that's usually what their insurer covers. You didn't bother to ask what was being done with your blood, or what it would cost, because you probably have good insurance and therefore will not be directly responsible for the full cost of what was done, so you cared less about it.

If you don't pay for something, or don't pay its true cost, you don't value it. If someone added an extra \$50 to your dinner tab, you'd probably take notice and complain. But if a doctor decides to do a few extra tests that may or may not be necessary, which you're not paying for anyway, well, may as well, right?

Next time maybe you'll take a little more vested interest in your own medical decision making. Who knows, maybe you can figure it out yourself on the internet. If you do, good for you. You'll get the care you need and cost the system less money. If you could have figured it out on your own, then you went in for no good reason, and it's you who doesn't understand statistics very well.

"Borderline positive", huh? So I would imagine you had a titer of like 1:120 or something. Is that 'borderline positive'? Nope. It's not anything other than a titer of 1:120. Did your doctor do any further work-up? Probably not, because without a strong pre-test probability, this test result would leave you with a lower post-test probability and lead to no further testing. What you have overlooked is that most tests in medicine don't have binary, yes/no results (and that's why you'll need more than an internet connection to play doctor). If you want to talk statistics, feel free to look at complex diagnostic algorithms and factor in things like pre-test probability, positive predictive value, etc., and then try to figure out if Mr. Jones has cancer, whether you should treat him, and if so, with what. Then you can run your mouth and say you'll never visit a doctor again. Oh yeah, and factor in that humans aren't machines and don't play by the rules. Statistics are great for batting averages, but when it comes to a 10% chance of saving grandma, you'll probably give it a try now, won't you? Do you honestly think that, say, if you were the parent of a 5-year-old child and we said we could do all kinds of stuff that may cost (anonymous taxpayers) who knows how much, but it would give your child a 5% chance of living vs. certain death, you're going to be statistically rational and say, "No, thanks, but statistics show that those tax dollars could fund the care of 200 other children with definite benefit; let my child die"? That's why statistics and medicine are so complicated.

Should we do away with politicians because of one case of corruption? Do away with the judicial system because one person is wrongly acquitted? Do away with education because one teacher doesn't teach her students a thing? What is your statistical basis for saying, "Doctors don't understand basic statistics"? Based on your anecdotal evidence... not too strong, I'd imagine. But I'm sure you've already done all the calculations and figured it out, because every decision you make is statistically sound.

Comments for this entry have been closed