Joshua D. Angrist and Jörn-Steffen Pischke write,
Empirical microeconomics has experienced a credibility revolution, with a consequent increase in policy relevance and scientific impact.
The quote is from the paper, which is gated, and the link is to the abstract, which is not.
[UPDATE: ungated version here.]
I easily found ungated versions of replies by Chris Sims and by Ed Leamer.
I think it is a really important controversy. I place myself on one side of it (the minority, naturally). Remaining comments below the fold.

The question, broadly speaking, is whether sophisticated empirical work in economics is persuasive. Sophisticated empirical work consists of taking a single data set and using the best econometric techniques to arrive at estimates of the interesting parameters.
In contrast, when I have an empirical question, I look at a variety of data sources. For example, one interesting question is whether economic growth since 1800 has been much faster than economic growth for the preceding 1500 years. I believe that the answer is “yes,” based on a variety of indicators. Many of these can be found in chapter 2 of From Poverty to Prosperity, so I will not reproduce them here. The bottom line is that there are many ways to look at the question, and as far as I know, all of them point to essentially the same answer.
Another question might be: in mortgage performance, is the borrower’s equity an important determinant of default? I am convinced that the answer is “yes.” Again, this is not because of any one study, but because of a variety of studies that have looked at default rates relative to original loan-to-value ratios and relative to estimates of current loan-to-value ratios, studies that have compared default rates under different economic conditions, etc.
Chris Sims represents the opposite school of thought. He believes in the triumph of state-of-the-art technique over weak data. I don’t know if it’s still true, but his professional reputation used to be very imposing. To question Sims was to make yourself look bad. I personally never saw the attraction. He may be gifted and clever, but I have never found him persuasive. If you’re one of those people who regards Sims as super-human, then you probably will not be on my side in the controversy.
Leamer is much more my champion in this. Here are two sample quotes from his paper.
Can we economists agree that it is extremely hard work to squeeze truths from our data sets and what we genuinely understand will remain uncomfortably limited? We need words in our methodological vocabulary to express the limits. We need sensitivity analyses to make those limits transparent. Those who think otherwise should be required to wear a scarlet-letter O around their necks, for “overconfidence.”
…Let’s face it. The evolving, innovating, self-organizing, self-healing human system we call the economy is not well described by a fictional “data-generating process.” The point of the sensitivity analyses that I have been advocating begins with the admission that the historical data are compatible with countless alternative data-generating models.
In an email about a paper in which I express my skepticism about macroeconometrics, Jeffrey Friedman asked me
Why are macro models so bogus? Is it because we just don’t have the right models yet, or because the world is inherently too complex to make sense of, or because there are too many factors at work at any given time, or because history never repeats itself (or some or all of the above)?
The problem is definitely not that we “just don’t have the right models yet.” I think that is close to Sims’ view–we did not have the right models in the 1970s, but now we are getting there. I just cannot agree.
I think that “the world is inherently too complex” has some validity, but it is too much of a cop-out. We have to deal with the world as it is, as best we can.
I think the main issue is “too many factors at work at any given time.” In statistical theory, every time you add a new observation you get more information. That is because the theory assumes that the number of relevant factors is constant, and an increase in sample size gives you more variation in the relevant factors and thereby enables you to separate the influence of the different factors with greater precision.
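A toy simulation (hypothetical numbers of my own, not from any study discussed here) shows the textbook mechanics: with a fixed data-generating process and a fixed set of relevant factors, the standard error of an estimated coefficient shrinks steadily as observations accumulate.

```python
import numpy as np

rng = np.random.default_rng(0)

def slope_se(n):
    # Fixed data-generating process: y = 2x + noise, one relevant factor
    x = rng.normal(size=n)
    y = 2.0 * x + rng.normal(size=n)
    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - 2)          # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)     # OLS covariance matrix
    return float(np.sqrt(cov[1, 1]))          # standard error of the slope

for n in (50, 500, 5000):
    print(n, slope_se(n))  # the standard error falls as n grows
```

Each tenfold increase in the sample cuts the standard error by roughly a factor of three (the square root of ten) — but only because the set of relevant factors never changes.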
In macro, adding observations does not help. When we get a quarter of more-or-less normal growth, the relevant factors do not vary by enough to provide any interesting news. There is not much noise, but not much signal, either. The statistically valuable observations are episodes like the Great Depression or the recent downturn. Unfortunately, in those cases, the list of potential causal factors is too long for the data to be able to distinguish. I think that the best count of potential causes of the financial crisis is well into the twenties. The list of theories of why the Great Depression was so deep and lasted so long is even more extensive. Thus, we are always in the position of having more theories than meaningful data points.
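The arithmetic of that position is easy to demonstrate. In the hypothetical sketch below, twenty-five candidate causes confront only ten informative episodes, and least squares cannot choose among radically different “theories” that all fit the data perfectly:

```python
import numpy as np

rng = np.random.default_rng(1)

n, p = 10, 25  # 10 informative episodes, 25 candidate causal factors
X = rng.normal(size=(n, p))
y = rng.normal(size=n)

# The minimum-norm least-squares solution fits the data exactly
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fits_exactly = np.allclose(X @ beta, y)

# Perturbing beta along the null space of X gives a very different
# coefficient vector -- a different "theory" -- with the same perfect fit
_, _, Vt = np.linalg.svd(X)
beta_alt = beta + 5.0 * Vt[-1]        # Vt[-1] satisfies X @ Vt[-1] ~ 0
alt_fits_exactly = np.allclose(X @ beta_alt, y)

print(fits_exactly, alt_fits_exactly)  # both True
```

With more theories than meaningful data points, the data alone cannot reject any of them.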
The main point of the Angrist-Pischke paper is that in microeconomics, the use of natural experiments has made econometrics more credible. As an empirical matter, I am not sure that this is true. See the appendix in Kling-Merrifield, where I found the “natural experiments” that supposedly prove a high return to education to be shockingly flimsy. If this is the sort of work that is supposed to take the “con” out of econometrics, it hasn’t.
I think if you step back and look at econometric history, you see the rise and fall of fads: simultaneous-equations estimators, distributed lags, VARs, instrumental variables, regression discontinuity, and so forth. If you jump through the right hoops at the right time, you get your paper published. But what you publish is not reliable.
Most economists eventually see through the older fads. But while a technique is going through its boom phase, woe betide the young economist who fails to jump on the bandwagon. It shouldn’t have to be that way.
READER COMMENTS
tom
May 13 2010 at 5:03pm
You might be interested in this open letter to the SEC re: agent-based modelling: http://www.sec.gov/comments/s7-02-10/s70210-109.pdf
[bit.ly url replaced with target url. Please do not use shortened URLs on EconLog. We are not so space-constrained that our readers shouldn’t be able to see where they are going.–Econlib Ed.]
Contemplationist
May 13 2010 at 5:04pm
In other words, the ‘Cyclical Nature of Econometric Fads’
Arthur_500
May 13 2010 at 5:06pm
I tend to agree with you on two points; that our models need to have sufficient data in order for them to be valuable and, we may have too many factors taking place at any given time.
People have been stating for a long time that we can build models on limited information that will represent a greater population. I haven’t bought into this theory although in limited circumstances it can work.
The reason it works is the limited variance of inputs. We have complex lives and a complex economy coupled with the reality that people do not always do things in their own best interest.
Evidence developed in the years following the Great Depression is probably not relevant to today because we are not in a World War, we are not going to college on GI training programs, we are not flush with cash and we have this Internet thing that changes information exchange quickly.
It appears we go back to Statistics 101 where we learn we need adequate inputs and we need to recognize the relevant range. To think otherwise is to promote replacing humans with computers because they can be made perfect.
mlb
May 13 2010 at 5:36pm
I think you are missing the biggest problem…economic participants know those same variables and models and adjust behavior accordingly. The same inputs could result in two different outcomes.
Michael Bishop
May 13 2010 at 5:58pm
I agree that looking at multiple data sets is a good idea, but that doesn’t seem like a minority position to contrast with the others. Wouldn’t you agree with Angrist and Pischke’s central claim that microeconomic empirical work has improved considerably? Sims and Leamer are right that there is no magic wand.
Sims seems too reluctant to acknowledge the improvements in micro. He shouldn’t feel ashamed of the fact that macroeconometrics is harder.
As chance would have it, I just attended a lecture by Angrist on Monday, a solid charter school (KIPP) evaluation.
Michael Bishop
May 13 2010 at 6:17pm
I don’t know if this is any less gated but here is the working paper version of A & P
http://www.nber.org/papers/w15794
Eric de Souza
May 14 2010 at 4:27am
You can find an ungated copy here:
http://ftp.iza.org/dp4800.pdf
David Pearson
May 14 2010 at 9:08am
The problem with econometrics is that the single-minded focus on “squeezing the data” doesn’t foster critical thinking. In contrast, reading economic history does.
The housing bubble happened, in part, because econometricians and financial analysts “squeezed” the past fifteen years of housing and credit data to determine that mortgage defaults were correlated with unemployment, period. They didn’t question whether lending standards could decline indefinitely without producing a stronger correlation coefficient. They didn’t question whether unprecedented leverage — resulting from their own analysis — could increase the volatility of house prices. These were elementary criticisms of their work, and the economists were seemingly immune to self-examination.
Somewhere along the line, probably on the road to tenure, econometricians lost the ability to think critically. The question is twofold: when do they get it back; or, alternatively, when do we stop listening to them?
fundamentalist
May 14 2010 at 9:22am
Very interesting post. I like econometrics and have used it in business modeling for years. I think more businesses should use it as a guide to decision making. But I have to constantly fight two extremes. One group thinks it’s all voodoo and won’t have anything to do with it. The other group thinks it is so accurate and powerful that you almost don’t need data to get good results. Econometrics is a tool. It can give you an edge, but it can’t produce certainty.
However, theory is far more important than the math. If you have bad theory behind your model, no amount of data or fancy techniques will improve on it. But if you have sound theory, simple techniques prove powerful.
Friedman: “Why are macro models so bogus?”
Hayek addressed that in his 1974 Nobel speech. Essentially, he said that we don’t have the right data and getting the right data is very difficult. Because aggregate demand data is easily available, mainstream econ decided to build models around it. But that’s like the drunk looking for his keys under the street lamp because that’s where the light is. As long as mainstream econ has this fetish with aggregate demand, its econometric models will fail.
Hayek and Mises and others fought this battle for most of their lives. Mises addresses this issue in the first part of “Human Action.” Hayek does it in several places, one of the best being “The Counter-Revolution in Science.” This is an epistemology issue, not just a math issue. Until mainstream econ realizes that it is an epistemology issue they’ll never get it right.
Mainstream economists think that if they parse the data long enough, the correct theory will emerge. Essentially, they are trying to apply the techniques of physics to economics, an error about which great economists from Menger on have warned against. They have failed consistently for the past century, but refuse to give up.
I would think that if it were possible to derive theory from data in economics, as physicists do (the inductive “scientific” method), then neural network programs would be able to do it. But I have used NN programs on company and macro data quite a bit. They are certainly more analytically powerful and accurate than any statistical method that exists. What a good NN software package will do (I recommend StatSoft’s) is create several models of the data and choose the best based on your criteria. I usually asked for the 10 best models. But when you compare the models you realize that they are all very different interpretations of the data. They achieve relatively similar results in that the MSE is the smallest, but the predictions tend to be very different.
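The effect is easy to reproduce without any commercial package. This hypothetical sketch (using scikit-learn’s MLPRegressor, my substitution, not the commenter’s tool) trains the same small network from several random starts: in-sample errors come out similar, but the models disagree once asked to predict outside the data they were fit on.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(60, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=60)

# Fit the same architecture from five different random initializations
models = [MLPRegressor(hidden_layer_sizes=(30,), solver="lbfgs",
                       max_iter=5000, random_state=seed).fit(X, y)
          for seed in range(5)]

# In-sample mean squared errors are all small and similar...
mses = [float(np.mean((m.predict(X) - y) ** 2)) for m in models]

# ...but predictions outside the training range diverge
x_new = np.array([[2.5]])
preds = [float(m.predict(x_new)[0]) for m in models]

print(mses)
print(preds)
```

Five “different interpretations of the data,” each defensible by its fit.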
Steve Roth
May 14 2010 at 12:00pm
Three thoughts:
1. Looking at different data sets builds confidence, but multiple analyses of multiple data sets builds even more. Need to judge both the quality of the data and the quality of the analysis. Meta-analyses can help, but it’s ultimately a matter of sifting the evidence in your head. More data and more analyses improve the sifting, potentially “smoothing out” the unreliability of both. Each study is a sample point, but you have to subjectively weight the sample points.
2. The inevitably observational nature of economics (improved somewhat by research designs taking advantage of natural experiments) is most useful in *dis*proving hypotheses and models. Popper and all that.
3. tom’s link is very much on-point. Surprising that this discussion (and the economics profession in general) gives so little thought to agent-based/simulation modeling, given that economies show every sign of being emergent systems.
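For what “weighting the sample points” can look like formally: a fixed-effect meta-analysis pools study estimates by inverse variance, so precise studies count more and the pooled estimate is tighter than any single study. A sketch with made-up numbers:

```python
import numpy as np

# Hypothetical effect estimates and standard errors from five studies
effects = np.array([0.42, 0.55, 0.30, 0.61, 0.48])
ses     = np.array([0.10, 0.25, 0.08, 0.30, 0.15])

# Fixed-effect meta-analysis: weight each study by 1 / variance
w = 1.0 / ses**2
pooled = float(np.sum(w * effects) / np.sum(w))
pooled_se = float(np.sqrt(1.0 / np.sum(w)))

print(pooled, pooled_se)  # pooled estimate is more precise than any one study
```

Subjective weighting of study quality enters on top of this, but the inverse-variance step is the mechanical core of most meta-analyses.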
fundamentalist
May 14 2010 at 12:52pm
Steve: “The inevitably observational nature of economics (improved somewhat by research designs taking advantage of natural experiments) is most useful in *dis*proving hypotheses and models.”
In theory, yes. But how has that worked out for us? Not very well. Do you know of any theories that natural experiments have disproved? The biggest natural experiment of all has been depressions and financial crises, which have proven mainstream econ theory wrong every single time, but no one is abandoning any of it. Like brave soldiers wading through the mud, mainstream econ just keeps plugging away through the contradictions.
Steve Roth
May 14 2010 at 1:49pm
fundamentalist:
>Do you know of any theories that natural experiments have disproved?
First, I’m not just talking about natural experiments but the large body of (necessarily) observational research, of which they are a (perhaps superior) part.
The first theory that comes to mind is the one saying that larger government leads to slower growth. There have been dozens, hundreds of studies slicing various data sets every which way from Sunday. Sifting all those studies (with the help of some meta-analyses) gives me high confidence that that theory is false. (In prosperous countries over the last five to ten decades — IOW where data’s available — and for tax levels within the ranges seen in those prosperous countries.)
Based on all those data sets and all those analyses, that theory has been disproven to my satisfaction — freeing me up to move on to what seem (promise?) to be more fruitful and accurately predictive theories.
Steve Roth
May 14 2010 at 1:52pm
This does not alter the fact, of course, that people — even people with the knowledge, skills and understanding to know better — will continue to cling to beliefs even after the grounds for those beliefs have eroded to nothing. cf. Dawkins on group selection.
Philo
May 14 2010 at 2:14pm
“We have to deal with the world as it is, as best we can.” And to the extent that it is inherently too complex to be understood, we have to acknowledge that.
fundamentalist
May 14 2010 at 5:01pm
Steve: “The first theory that comes to mind is the one saying that larger government leads to slower growth. There have been dozens, hundreds of studies slicing various data sets every which way from sunday. Sifting all those studies (with the help of some meta-analyses) gives me high confidence that that theory is false.”
So you’re saying that the research shows that the size of the government doesn’t matter to growth? So if the state was large enough that it took 100% of income in taxes, that wouldn’t have any impact on growth? That’s tantamount to saying that pure communism works just as well as pure capitalism.
I guess I haven’t seen the research you mention because the research I have seen says there is an optimal size for the state at around 25% of gdp. If the state takes more than 25% of gdp then per capita gdp growth slows, as it does if the state takes less than 25%. At the extremes, that is, if the state took 100% of gdp or 0% of gdp, there would be no growth.
My experience with natural experiments has been very similar to Dr. Kling’s: they don’t change anyone’s mind but create squabbles over data quality and methodology.
Chris Koresko
May 14 2010 at 5:50pm
@fundamentalist: Your first post on this topic is one of the most interesting things I’ve read in a long time.
I wonder how many econometrics guys realize how damning your neural network results are (or would be if they hold up when applied to the national or global economy).
Steve Roth
May 14 2010 at 8:41pm
@fundamentalist:
>So if the state was large enough that it took 100% of income in taxes, that wouldn’t have any impact on growth?
Utterly unbelievable. Did you ignore the parenthetical beginning “In…” on purpose, or was that unconsciously intentional blindness? (I’m going to make you go look rather than getting it for you myself.)
I’m absolutely certain that you know the fallacy of the extremes when you see it, and object to it mightily, as you rightly should.
But you just employed it shamelessly.
mulp
May 14 2010 at 11:15pm
In contrast, when I have an empirical question, I look at a variety of data sources. For example, one interesting question is whether economic growth since 1800 has been much faster than economic growth for the preceding 1500 years. I believe that the answer is “yes,” based on a variety of indicators.
If “looking at multiple data sources” means finding data that supports your preconceived opinion, that is far from science.
On the other hand, if you posit that as a hypothesis and then analyze every data set you can find and present all the analysis results with the preponderance of the results supporting your claim, you have done science. Of course, your analysis must stand the scrutiny and criticism of others.
Einstein’s special and general relativity wasn’t accepted as theory based on just one data set, ten data sets, or even a thousand. Even today, his theory is still tested because it is certain to apply to a bounded domain, and the quest continues for data outside that domain.
But here is an interesting hypothesis:
If the progress in the Americas from 1800 to the present has been greater than the progress from year 0 to 1800, can the existence of a strong central government, pursuing activist economic planning and implementation, be rejected as the driving force behind the high rate of progress since 1800?
And from the beginning, the Federal government has been an agent of “wealth redistribution,” taking land from legacy landowners who failed to exploit it for growth and giving it as an incentive to those who served the policymakers’ objective of high economic growth. Just one example: railroad companies were given 25 million acres of land plus millions in cash as incentives to build rail lines, furthering the central planners’ desire for cheap access to promote growth. One can track the economic growth that resulted from those government-incentivized rail lines.
The earlier Americans did not have a strong central government promoting growth in North America, nor the history of strong Asian and European governments driving economic growth and promoting the development of new technologies and their spread. North America lacked only a horse or cow that could be domesticated to match every European natural resource. Was it the lack of a government, or the lack of a horse or cow, that explains the slow American growth before both became common in the Americas?
fundamentalist
May 15 2010 at 11:29am
Chris: “I wonder how many econometrics guys realize how damning your neural network results are (or would be if they hold up when applied to the national or global economy).”
Thanks for the compliment. From what I have read about econometricians, they don’t know much about NN. Stat guys hate NN. They have many years of hard work invested in traditional stats and don’t want to admit that a machine can do a better job than they can with all of their years of training. But many head-to-head tests have been done comparing NN with traditional stats, and NN wins hands down every time in accuracy of forecasting.
This is off topic, but interesting trivia. Bank regulators and FERC won’t let banks and utilities use NN. They claim they can’t tell how the program arrives at its results, but that’s nonsense. It’s pretty easy to tell if you understand NN. So many companies use NN to make their in-house forecasts and then come up with a stat model that comes as close as possible to the accuracy of the NN model.
And I’m pretty certain that the same results would apply to national and global data because I have done some work with data from FRED. There is no way that the human mind can analyze hundreds of variables for importance, nonlinearity and interactions, yet that is what NN is best at.
Michael Bishop
May 16 2010 at 1:06pm
@fundamentalist, want to cite the papers demonstrating how great neural networks are?
Comments are closed.