It used to be assumed that differences among hospitals or doctors in a particular specialty were generally insignificant. If you plotted a graph showing the results of all the centers treating cystic fibrosis--or any other disease, for that matter--people expected that the curve would look something like a shark fin, with most places clustered around the very best outcomes. But the evidence has begun to indicate otherwise. What you tend to find is a bell curve: a handful of teams with disturbingly poor outcomes for their patients, a handful with remarkably good results, and a great undistinguished middle.
There would be nothing alarming about this if there were continuous improvement, with the weaker health care providers either shaping up or going out of business. Such a Darwinian process can be taken for granted in most markets, but not in health care.
So what do we have to do to compare different doctors? A multivariate analysis. Here's the bad news - if you don't know what a multivariate analysis is, you probably can't do one.
There's never anything inherently wrong with information, so I would never be opposed to compiling information. The problem is, raw information can be hijacked by the ignorant.
Gawande is arguing that the medical profession suffers from a dearth of useful data on outcomes. (I would say the same thing, even more strongly, about the education profession.) In this regard, Dwight is not being constructive by complaining about the need for multivariate analysis. First, establish the principle that data should be gathered, disclosed, and analyzed. Then complain about methodological impurity.
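To make the methodological point concrete, here is a minimal sketch of what a multivariate adjustment does and why raw comparisons mislead. All the numbers below are invented for illustration (a hypothetical Hospital A and Hospital B); the regression is ordinary least squares coded from scratch so the example is self-contained.

```python
def ols(X, y):
    """Ordinary least squares via the normal equations,
    solved by Gaussian elimination with partial pivoting."""
    n, k = len(X), len(X[0])
    # Build X'X and X'y.
    xtx = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)]
           for i in range(k)]
    xty = [sum(X[r][i] * y[r] for r in range(n)) for i in range(k)]
    # Forward elimination.
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[piv] = xtx[piv], xtx[col]
        xty[col], xty[piv] = xty[piv], xty[col]
        for r in range(col + 1, k):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, k):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    # Back substitution.
    beta = [0.0] * k
    for i in reversed(range(k)):
        beta[i] = (xty[i] - sum(xtx[i][j] * beta[j]
                                for j in range(i + 1, k))) / xtx[i][i]
    return beta

# Toy data: outcome score (higher = worse), patient severity (0-10),
# and which hospital treated the patient (0 = A, 1 = B).
severity = [1, 2, 3, 4, 5, 5, 6, 7, 8, 9]
hospital = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
outcome  = [2, 3, 4, 5, 6, 5, 6, 7, 8, 9]

# Raw comparison: Hospital B's average outcome looks worse...
mean_b = sum(o for o, h in zip(outcome, hospital) if h) / 5
mean_a = sum(o for o, h in zip(outcome, hospital) if not h) / 5
raw_gap = mean_b - mean_a

# ...but B treats sicker patients. Regress outcome on an intercept,
# severity, and the hospital indicator to adjust for case mix.
X = [[1.0, s, h] for s, h in zip(severity, hospital)]
beta = ols(X, outcome)  # beta[2] is the severity-adjusted hospital gap

print(f"raw gap: {raw_gap:.2f}, adjusted gap: {beta[2]:.2f}")
```

In this made-up data the raw gap makes B look worse by 3 points, while the severity-adjusted coefficient shows B is actually better by 1 point. That is Dwight's point about needing multivariate analysis, and also Kling's: you cannot even run this adjustment until someone has gathered and disclosed the data.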
For Discussion. One thought I have had recently is that it is the goal of much regulation to protect the mediocre from superior competition. To what extent does regulation in medical care fit that model?