Bryan Caplan  

The Observational/RCT Correlation

What's the correlation between results from observational studies and results from randomized controlled trials?  Medicine's the obvious place to start, but if you know of any research on any topic that addresses this question, please share.


COMMENTS (12 to date)

Chapter 3 of Deeks et al. (2003) and the systematic reviews included there:

One of the included reviews was updated in:



LemmusLemmus writes:

An older entry from social psychology:

If I understand your question, I venture a commonality: both (observational studies and RCTs) are seeking the same thing.

A difference, on the other hand, is that an RCT is more advanced science, in the sense of already having a concept of what is not being sought. The difference lies in the clarity with which the background of the question (the null hypothesis) has been defined.

Your question may be getting into philosophy of science. Are you already versed in that field? I could recommend a half dozen books.

Tiago writes:

Great short post. This is the kind of question that, once asked, makes us think: how can we not know this?

gwern writes:

Am I glad you asked. I have been compiling a bibliography of studies which explicitly or implicitly compare them:

The upshot so far is that there is no simple difference in average effect size, but the variance of the comparisons is extremely high: the observational effect can easily over- or underestimate the randomized effect by a large factor, or even get the sign wrong.

So there's no easy meta-analytic correction where you can just subtract d=0.5 and get a decent causal estimate, disappointing as this may sound; it's more of a mixture model setup, where there's a probability of ~1/3 that your observational estimate (no matter how precise or how many covariates you add) is totally worthless and entirely unrelated to the actual causal effect, and you will never know until you run a randomized experiment.

(This is maybe not as surprising as it sounds when it comes to economics and sociology, as they are totally confounded by genetics, but it's still disappointing to see it so pervasive in medicine and other areas too.)

Evan writes:

There's some focus on this issue in the realm of electric energy efficiency, where program administrators (e.g., utilities) are often required to demonstrate that their interventions have produced actual usage reductions. See, for example, the discussion on page 19 of this report from the Department of Energy, which compares the results of 17 programs under RCT and observational measurement methods:

I was unable to track down the original paper they cited (Allcott 2011).

Fabio Rojas writes:

On a recent EconTalk podcast, one of Russ Roberts's guests (Adam Cifu, a physician) said that about 4 out of 5 correlational studies in a top medical journal held up under RCT. Maybe you can hunt down the citation. Here is the link:

Alex Tabarrok writes:

I cover this in a video at MRU.

Kevin writes:

Here's one from Cochrane:

gwern writes:

Lemmus: Anderson 1999 is about external vs. internal validity, not randomized vs. correlational designs. A correlation inside the lab and a correlation outside the lab can be considered a replication, but that doesn't tell you anything about what would happen if you randomized one of the variables. So I don't think it's relevant here.

Stuart Buck writes:

Thomas Cook has done a lot of interesting work on this issue recently, including within-study comparisons of the effects seen from the RCT comparison and from other methods:

ZW writes:

Bloom, HS, Michalopoulos, C, and Hill, CJ. 2005. "Using experiments to assess nonexperimental comparison-group methods for measuring program effects." Chapter 5 of ISBN: 978-0-87154-133-8.
