Arnold Kling  

Happy April Fools' Day

My Tribute to Bill Niskanen... All-Volunteer Matrimony...

Unfortunately, this story came out on March 28th, and it is not a joke.

During a decade as head of global cancer research at Amgen, C. Glenn Begley identified 53 "landmark" publications -- papers in top journals, from reputable labs -- for his team to reproduce. Begley sought to double-check the findings before trying to build on them for drug development.

Result: 47 of the 53 could not be replicated. He described his findings in a commentary piece published on Wednesday in the journal Nature.

Thanks to Russ Roberts for the pointer.

Good thing that research at universities is not corrupted by the profit motive, the way it is at drug companies.

My reaction to this story is somewhat optimistic. We can fix this problem. If government and other funders of research were to shift more resources toward replication, this would do two things. First, it would catch more bad science sooner. Second, it would take away some of the incentive to do bad science, because it would raise the risk of getting caught.

Unfortunately, the situation in economics is much more difficult. Very little research involves repeatable experiments. What natural sciences might try to achieve with replication we can at best achieve by doing meta-analysis of many studies using different methods to analyze the same issue.
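To make the meta-analysis idea concrete, here is a minimal sketch of the most common flavor, fixed-effect inverse-variance pooling: each study's estimate of the same effect is weighted by its precision (one over its variance). The effect estimates and standard errors below are made up purely for illustration; nothing here comes from the Begley study.

```python
# Minimal sketch of fixed-effect (inverse-variance) meta-analysis:
# pool estimates of the same effect from several studies, weighting
# each by the precision (1 / variance) of its estimate.
# The estimates and standard errors below are invented for illustration.

from math import sqrt

studies = [          # (effect estimate, standard error)
    (0.30, 0.10),
    (0.12, 0.08),
    (0.25, 0.15),
]

# Weight each study by the inverse of its sampling variance.
weights = [1 / se**2 for _, se in studies]

# Precision-weighted average of the effect estimates.
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)

# Standard error of the pooled estimate.
pooled_se = sqrt(1 / sum(weights))

print(f"pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
```

Of course, this only helps if the underlying studies are comparable and honestly reported, which is exactly what the file-drawer problem discussed below undermines.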


COMMENTS (10 to date)
AMW writes:

Unfortunately, even in the one area where replication is trivially easy - experimental economics - not much direct replication actually occurs. The market rewards one much more for coming up with novel experiments than for double-checking established ones.

Costard writes:

"If government and other funders of research were to shift more resources toward replication, this would do two things. First, it would catch more bad science sooner. Second, it would take away some of the incentive to do bad science, because it would raise the risk of getting caught."

The money is political and the results are political. The vast majority of this research is only useful as a lever for public opinion, and this was the purpose for which it was always intended. You think that politicians, bureaucrats and foundations who cloak themselves in the authority of research, will jeopardize this arrangement by calling into question the infallibility of science? Good grief. They're on the same side of the table.

R. Pointer writes:

It would be interesting to see whether the studies that did replicate can themselves be replicated again. The 6 of 53 surely include some false positives.
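R. Pointer's worry can be made concrete with a little binomial arithmetic. As a deliberately extreme hypothetical (an editorial sketch, not a claim from the thread): if all 53 original results were true nulls, each tested at the conventional 5% significance level, chance alone would be expected to produce about 2.65 "replications."

```python
from math import comb

# Hypothetical worst case: suppose all 53 "landmark" results were really
# null effects, each tested at the conventional 5% significance level.
n, alpha = 53, 0.05

# Expected number of false positives among 53 null studies.
expected_false_positives = n * alpha  # 2.65

# Probability that chance alone yields 6 or more "successes"
# (6 being the number Begley's team could actually replicate).
p_six_or_more = 1 - sum(
    comb(n, k) * alpha**k * (1 - alpha)**(n - k) for k in range(6)
)

print(f"expected false positives: {expected_false_positives:.2f}")
print(f"P(>=6 by chance alone):  {p_six_or_more:.3f}")
```

So under this worst-case assumption, a few of the 6 successful replications could easily be chance, though getting all 6 by luck alone would be a roughly one-in-twenty event.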

rpl writes:

"The vast majority of this research is only useful as a lever for public opinion, and this was the purpose for which it was always intended."

I know, it's a total conspiracy. I mean, I heard the other day that you can't even get research on treating cancer with unicorn horns published in a reputable journal. That just shows how ruthless the anti-cancer cabal is about suppressing dissenting voices in their relentless pursuit of a "cure" for "cancer."
Costard writes:

rpl -- it's a transaction. But if this spoils your satire, you can continue to call it a "conspiracy".

Perhaps I phrased my point rashly. Let me try again. The incentive in academia is not to produce an accurate study, but one that will get published, earn a grant, or work towards tenure. The university wants funding from above or tuition from below, and a productive (and perhaps well-known or politically relevant) professor gives them this. Agencies and private foundations provide funds for studies relevant to their own goals, and government money flows naturally towards research that shows a potential need for government action. At every level the use of this research is, in a word, political. In most fields a practical application is never envisioned; and if the results in cancer research provide us with a rare window into the soundness of this system, the implications are much broader.

The notion that we can "fix this problem" - or any other - by enlisting the fox to guard the henhouse doesn't leave me nearly as optimistic as it does Arnold.

rpl writes:

Costard, let me get this straight; you're going to double-down on the claim that cancer research is a politically motivated "lever for public opinion" and that this is the "purpose for which it was always intended"? Pray, what are these devious cancer researchers trying to influence the public to believe? I confess, their diabolical strategy is so deep as to elude me. I have been laboring under the delusion that their research was "always intended" to uncover biological pathways that might someday help combat cancer. Who knew it was all a ruse?

Kevin Driscoll writes:

@rpl His point isn't that researchers aren't trying to cure cancer; of course they are. If they were to succeed, both they and their employer would become exceedingly rich. His point is about the motives. Presumably, curing or effectively treating various forms of cancer would require several landmark advancements in biology, medicine, pharmacology, etc. None of these advances will do it by itself; it takes many steps to solve a big problem.

Groups of people at universities (of which I am one, though not in this field) and private/public labs have to pay for that research and make their bosses happy. Even if actually curing cancer is the ultimate goal, they have secondary goals and these two goal-sets don't always line up. If you really need funding for your next project (that might actually make a difference), you can do some bad science to get a little bit of extra money for this project. As Costard pointed out, each group has at least some motive for engaging in deception (again, even if their ultimate goal really is to cure cancer).

rpl writes:

Of course there are perverse incentives in science, as there are in any human activity. People sometimes game the metrics or pass off substandard work, but that happens in every other field one might work in too. I mean, what are these careers that he (and you?) imagine are free from bad incentives and temptations to cut corners? If all he's saying is that science is done by people with the usual complement of human frailties, then that's a pretty vacuous statement, isn't it?

Looking back at what Costard actually wrote, would you say that's a fair description of the practice of science in your experience? If someone insisted that your own work was "political at every level" and "only useful as a lever for public opinion," would you accept that as an accurate characterization? What about the claim that this "was always the purpose for which it was intended"? That would seem to indicate that any pretense of an attempt to discover anything was a sham from the beginning. Is that a fair assessment of your work?

Let's stop pretending that we don't know what Costard is about. There are some scientific results out there that seem to support policies that he doesn't like. In response, he's constructed a narrative whereby he can dismiss them because scientists are all in bed with his political opponents, and besides their stuff is all wrong anyhow (just look at the bad cancer results!). It's drivel, and it should be called out as such.

There's a conversation to be had about improving the incentives in science, but it doesn't start with making a blanket accusation that the entire scientific enterprise is based on mendacity. Pretending that someone who tries to take the conversation in that direction has anything constructive to add is a fool's game, and we should stop playing it.

NormD writes:

WSJ had a good article on this last December.

Mike Rulle writes:

It should be illegal not to reveal what are euphemistically referred to as "file drawer" experiments. When Russ sat down with one researcher, he discovered that they had run the experiment 6 times before they got one with a sufficiently low p-value. This is straight-out fraud.

We somehow have come to view failed experiments as "bad," so we hide them. It is hard not to assign bad motives. I believe it was Edison who praised failed experiments as a great path toward successful ones.

Even the finding that 6 of the 53 experiments could be replicated is possibly itself a function of randomness. This is one reason I have never felt fully comfortable with drug company patent laws, although that is admittedly a much larger issue. So much incentive to waste money. Academics have their own economic "incentives" to deceive.

As far as meta-studies in economics are concerned -- well -- good luck on that one!
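The "ran it 6 times" anecdote can be quantified with a one-line calculation (an editorial illustration, assuming six independent runs of a genuinely null experiment, each tested at the conventional 5% threshold): the chance that at least one run comes up "significant" is not 5% but roughly 26%.

```python
# If a null experiment is run repeatedly and only the "significant" run is
# reported, the effective false-positive rate balloons. Assumes six
# independent runs, each tested at the 5% level.
alpha = 0.05
tries = 6  # the researcher in the anecdote ran the experiment six times

# Chance of at least one p < 0.05 across six independent runs of a true null:
p_at_least_one = 1 - (1 - alpha) ** tries

print(f"P(at least one 'significant' run): {p_at_least_one:.3f}")  # ~0.265
```

This is the arithmetic behind treating selective reporting as fraud: the reported p-value of 0.05 no longer means what the reader thinks it means.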
