Ten years ago, ultra-mathematical theorists were the kings of the economics profession. Now they seem to be nigh irrelevant. Clever and relatively open-minded empiricists rule the roost. Cementing the trend, few of the students coming out of top programs are math theorist types.

How did this transformation happen? Once the math theory people were on top, why would they hire and promote researchers so different from themselves? In short, why did they commit what seems tantamount to memetic suicide?

A few possibilities:

1. People *want* to hire people who won't be their direct competition, so cycling is to be expected.

This seems fishy. Junior faculty who work in your area are more likely to be valuable co-authors than scary rivals.

2. Math theory was a mature literature, with little room left for original research.

Maybe. But this is mostly just an outsiders' impression. To insiders, a seemingly stagnant field can still look like it's full of interesting questions.

3. Senior faculty value quality more than conformity. So if the new talent doesn't want to do math theory, they get hired anyway.

Also possible. But why would grad students enter fields dominated by work they aren't interested in? I certainly didn't expect things to change as much as they have - if at all - when I started my Ph.D.

4. Differential shirking. Theorists in each department would have been better off if they tightly controlled their hiring committees. But they chose the individually-optimal strategy of letting someone else serve on the committee.

Sounds promising, but why would theorists be unusually egregious shirkers?

5. Outside influence. Non-economists (administrators? donors? bloggers?) thought math econ was boring and/or stupid, and were somehow able to influence the hiring process in their preferred direction.

Problem: This goes contrary to all first-hand observation of how departmental politics works. Outsiders barely remember that economics departments even exist. They certainly aren't paying attention to the mix of specialties getting hired.

Do any of these stories work? Anyone got a better explanation?

But there are some types of outside influence that presumably affect departmental politics, such as the Nobel committee, which gave Vernon Smith the big prize. I don't know if that by itself would be enough to explain the trend though.

I believe that cycling is a reason for the change, but not due to potential competition, as you mention. Rather, students strive for some kind of differentiation from the majority, putting them at an advantage over others who follow the established patterns of behavior.

Some people have some insightful clever observations. Then the people that come after them spend a couple decades filling in all the math and testing of predictions and whatnot suggested by the clever insightful people. Then some more insightful clever people come along and the cycle repeats.

I don't know.

Statistical software is better now, and it is easier to get datasets over the internet.

Multiple nonlinear regression can't be done in any old spreadsheet software. If the dataset is big enough, it also can't be opened in any old spreadsheet software. Not to mention that running, say, Solver in Excel is slow on anything other than a fairly recent vintage PC.

So, powerful PCs, good software, and datasets galore on the Internet are the answer.
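To make the point above concrete: a multiple nonlinear regression that would defeat an old spreadsheet takes only a few lines in modern statistical software. The sketch below uses SciPy; the model form, parameter values, and data are invented purely for illustration.

```python
# A minimal sketch of the commenter's point: fitting a nonlinear model
# with two regressors, which no "any old spreadsheet" handles gracefully.
# Model and data are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Hypothetical model: y = a * exp(-b * x1) + c * x2
def model(X, a, b, c):
    x1, x2 = X
    return a * np.exp(-b * x1) + c * x2

# Simulated dataset with known parameters plus noise
x1 = rng.uniform(0, 5, 200)
x2 = rng.uniform(0, 5, 200)
true_params = (2.5, 0.8, 1.3)
y = model((x1, x2), *true_params) + rng.normal(0, 0.05, 200)

# Nonlinear least squares in one call
params, cov = curve_fit(model, (x1, x2), y, p0=(1.0, 1.0, 1.0))
```

The fitted `params` recover something close to the true values, and `cov` gives the estimated parameter covariance for free, which is exactly the kind of routine capability that cheap computing and good libraries brought within everyone's reach.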

Perhaps mathematical rigor gets in the way of advocating whatever policies the people who pay the economists' salaries want them to spout?

The best minds in any field achieve new discoveries that lead to breakthroughs with one primary method:

Look at the problem differently.

When a field is dominated by a specific methodology, it rapidly begins to ignore other ways of looking at the problem. In short order, it has looked at every problem with the methodology du jour, and whatever discoveries were to be had with it are already matters of record. The field then stagnates.

Fortunately, along comes the next crop of bright young minds, ready to make their own breakthroughs. They intuitively understand (or stumble over) the concept that looking through the same lens will give you the same view, so they bring a new lens in the form of a competing methodology.

Once someone brings a methodology that proves productive, more people start using it, and breakthroughs start falling out of the process. The field is soon dominated by the new way of viewing the problems, and we end up back where we started: only the names have been changed.

Since we tend to uncover a new layer of problems with the new methodology, we have a side benefit of reviving the potential usefulness of the old methodology, so any of the "old guard" who refused to change still have a chance to be relevant. Later, the old methodology can return with renewed vigor and solve more problems.

Basically, the cycling process proves that the system works: a new methodology arrives when the old methodology reaches a point of diminishing returns, takes over the field, and is itself replaced once it has outlived its usefulness.

Or I could be wrong.

As a sometime mathematical economist (how I am described in Wikipedia without the "sometime," although you have to find it under the entry for my father, with the same name), I think a major reason has been that an awful lot of people have become just too aware that the assumptions made in most of the axiomatic theoretical literature are seriously empirically incorrect. This has pushed things off into other directions.

The Nobel for Vernon Smith is indeed part of this as is the Clark medal for Rabin (Levitt is simply a practitioner, one who claims to believe in standard economic theory, not one who goes around showing how its basic assumptions are false). So, we now know that the axioms of preference do not generally apply to most people, as any non-economist social scientist would have told anybody listening over the last half a century, if not longer. But the experimental evidence is in, and it is beyond ignoring.

Also, rational expectations, although still in the advanced macro textbooks, is looking pretty cheesy, again partly due to experiments, but also due to such events as the stock market crash of 1987. That looks in retrospect to have been the canary in the coal mine on all this, or maybe more like the Richter scale earthquake shock on the more simplistic axiomatic views. If you want to explain events like that, the standard theoretical models do not get you very far.

The pain to insight quotient got just too high in math econ. The demand for yet another game theoretic equilibrium refinement became small (after the 99th refinement) and economists became more interested in doing something interesting with the already powerful tools they had.

But the pendulum swings. An economist who can do both theory and empirical research (like, say, Daron Acemoglu, last year's John Bates Clark medalist) will always be a more valuable commodity and a more insightful researcher than the economist who entered the field with a big splash mainly because they had access to a novel dataset.

Within a few years the wow factor currently associated with someone having 'neat data' or a cute 'natural experiment' will start to wear off.

And there might also be a new 'theory' breakthrough somewhere around the corner -- some new idea that opens up an entire new field. We haven't really had such an opening since the new trade and new growth theories of the early 90s. Within more mathematical economics the last major really exciting ideas probably happened in the 1980s with the development of new insights in information constrained economies, but most work in those fields since then has been derivative hair-splitting, building upon the work of early pioneers.

Perhaps it is the complementarity of specialists, not in a different area but with different tools (i.e., auction theorists and experimentalists, macro theorists and econometricians, etc.), that generates the cycling.

This seems especially plausible if specialization is knowing more about less.

Does it have anything to do with the sort of critique that McCloskey is fond of making, i.e., that overly-abstract mathematical theorizing is virtually worthless without some connection to empirical reality?

Taking McCloskey's critique seriously involves a vastly more mathematical approach to economics. Her criticisms of both theoretical and empirical work are very cutting -- she says that existence theorems are basically useless, and that analyzing datasets without having good loss functions is similarly useless.

The existence of an effect is generally not an important question, because in real life all the effects you can think of are usually in play. But some effects are important, and some aren't. This means that in order to evaluate an economic theory, your theory has to make clear predictions about the magnitude of an effect. This is much harder than asking whether there is some margin at which an effect could possibly change the outcome, and requires more sophisticated mathematical tools to get useful results.

Likewise, taking a dataset and running a regression on it to find out whether something is statistically significant is utterly wrong-headed: you use the analysis for some particular purpose, and that purpose gives you a loss function, which is what tells you whether you have a significant effect or not. You can't judge significance in isolation. And that means that if you want to connect theory with practice, your statistical tests must arise from a deep understanding of the theory you are putting to the test, which must itself be quantitative enough to be testable.
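The loss-function point above can be made concrete with a toy calculation. All numbers below are invented for illustration: an effect can clear the conventional p &lt; 0.05 bar yet still not be worth acting on once the decision's actual costs and benefits are specified.

```python
# Toy illustration (hypothetical numbers throughout): statistical
# significance vs. significance under an explicit loss function.
import math

estimate = 0.02   # estimated policy effect (e.g., +2% of some outcome)
std_err = 0.005   # standard error of the estimate

# Conventional two-sided test: p-value from the normal approximation
z = estimate / std_err
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
statistically_significant = p_value < 0.05

# Decision problem: adopt the policy only if expected benefit beats cost.
benefit_per_unit = 100.0   # hypothetical value of one unit of effect
fixed_cost = 5.0           # hypothetical cost of adopting the policy
expected_loss_if_adopt = fixed_cost - benefit_per_unit * estimate
worth_adopting = expected_loss_if_adopt < 0
```

Here `z = 4`, so the effect is wildly "significant" in the textbook sense, but the expected benefit (2.0) falls short of the cost (5.0), so the loss function says don't adopt. The test statistic alone never could have told you that.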

I think part of it has to do with the growth in the premium that industry is willing to pay for highly quantitative skills, relative to what academia will pay. This is definitely a major factor in finance; many of the most quant-focused people I went to grad school with, myself included, didn't even consider going into academia.

Marc and Neel,

High salaries for quants would not reduce math in fin or econ research. It would stimulate it.

The point (see my comment on the next math bubble thread) is that what is needed is a different sort of math, a more pragmatic math, if you will. This reflects changes within math itself, where formal axiomatization is now out of fashion and things like computer simulation are much more used. These are also the tools of the econophysicists, who claim to have a competing set of math for financial market practitioners.

I'm sure some mathematical physicists would like to think that formalization is going out of fashion, but that's certainly not the trend from where I sit.

I do research in computer science, in the formal semantics of programming languages, and it looks like my subfield is on the verge of a phase change -- I think that very soon paper proofs of theorems will stop being acceptable for publication; we will have to submit formal, machine-checked proofs. That's a tremendous increase in the level of formal rigor. Now, formal semantics is closely related to mathematical logic, and you might argue that we're embracing formalization only because of the cultural influence of the foundations-of-math crowd. But we see it starting to happen in other parts of math, as well -- Thomas Hales is formalizing his proof of the Kepler conjecture, because it was too large and complex to fully peer review.

"[Mathematical approaches] have had fifty years of ever-increasing hegemony in economics. The empirical evidence on their contribution is decidedly negative." Sound familiar, Mr. Caplan?

The Austrian school was right. Mathematical rigor in economics cannot reliably predict human action. What caused this paradigm shift, you ask? Why, the French Revolution II, of course. (http://www.paecon.net/HistoryPAE.htm)