Arnold Kling  

Tournament Ranking of Colleges

Christopher Avery, Mark Glickman, Caroline Hoxby, and Andrew Metrick use a chess-rating methodology to rank colleges. If a student admitted to both Harvard and Yale selects Harvard, then that is a "win" for Harvard. The schools that rank at the top based on this approach are Harvard, Yale, Stanford, Caltech, and MIT. The authors conclude:

If a revealed preference ranking constructed using our procedure were used in place of manipulable indicators like the crude admissions rate and crude matriculation rate, much of the pressure on colleges to manipulate admissions would be relieved. In addition, students and parents would be informed by significantly more accurate measures of revealed preference. We close by reminding readers that measures of revealed preference are just that: measures of desirability based on students and families making college choices. They do not necessarily correspond to educational quality.

Note that if students were to make choices based on this sort of ranking system, then rankings would be self-perpetuating. The best students would see that Harvard is the highest-rated school, and they would pick Harvard. This would lead to Harvard being the highest-rated school. It becomes perfectly circular.

I like the chess-rating approach. However, I would prefer to see it applied to student outcomes (e.g., performance on the same introductory economics exam) rather than to students' choice of school.
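The authors' actual statistical model is more elaborate than a plain chess rating, but the core idea can be sketched with a simple Elo update, where each head-to-head matriculation decision counts as one "game." The school names, starting ratings, and K-factor below are purely illustrative:

```python
def expected_score(r_a, r_b):
    """Elo-model probability that school A 'wins' (is chosen over B)."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update(ratings, winner, loser, k=32):
    """One student admitted to both schools picks `winner`; adjust both ratings."""
    e_w = expected_score(ratings[winner], ratings[loser])
    ratings[winner] += k * (1.0 - e_w)
    ratings[loser] -= k * (1.0 - e_w)

# Toy data: every school starts at the same rating.
ratings = {"Harvard": 1500.0, "Yale": 1500.0, "Stanford": 1500.0}

# Each tuple is (school chosen, school turned down) by one admitted student.
choices = [("Harvard", "Yale"), ("Harvard", "Stanford"), ("Yale", "Stanford"),
           ("Harvard", "Yale"), ("Stanford", "Yale")]
for winner, loser in choices:
    update(ratings, winner, loser)

ranking = sorted(ratings, key=ratings.get, reverse=True)
print(ranking)  # Harvard ends up first in this toy data
```

Note how the update rewards beating a higher-rated opponent more than beating a lower-rated one, which is what distinguishes this from simply counting matriculation wins.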

For Discussion. Will we ever reach a point where college quality is measured in terms of value added?

COMMENTS (11 to date)
Brian Doss writes:

Aren't MBA schools already judged in this manner? I often see "average starting salary of graduates" listed on pages comparing MBA programs.

Mark Nau writes:

For most people, the main value they get from college education is the ability to claim that they are college educated. The more prestigious the school, the more valuable that claim is.

Michael Tinkler writes:

Here at Hobart & William Smith Colleges we are starting to work on the feedback from our 10-year Middle States Association of Colleges and Schools review. "Assessment" and "value added" are all over the place, but it's not clear to everyone that the educrats at Middle States are sure what a value added measurement would BE.

Any help?

BC writes:

The problem is that some schools, particularly at the undergraduate level, offer differing amounts of financial aid. I got accepted into a more prestigious school than the one I'm at, but it was also more expensive and offered me less aid. There are also other variables besides the prestige/quality of the school and its cost.

Maybe I'm misunderstanding you. I suppose that at higher-level colleges, financial aid works differently, too.

Patri Friedman writes:

While this is useful information, it's far from a complete indication of the quality of a school. I would be more interested in seeing comparisons made by students who had attended multiple schools (perhaps for undergrad and grad) about which was better. Surely students who have actually been to a school have a more useful opinion than prospectives who are choosing.

Your outcomes idea also sidesteps this problem.

I think that someday, in a more efficient and libertarian society, college educations will be ranked partly by value-added.

Bernard Yomtov writes:

No. We will never reach a point where college quality is measured in terms of value added.

At least I hope not. College certainly has economic benefits, but it has, or should have, other benefits as well. To try to lump all this into an economic measure strikes me as very foolish. Human beings are more than consumption/production machines.

Once the college ranking game gets beyond fairly broad categories I think it is dysfunctional. The notion that a school ranked, say, number 16 is automatically better than the one ranked number 20 is absurd.

Economists ought to understand this well. Choosing a school to attend is choosing a consumption bundle. The value different students assign to different components of the bundle can quite reasonably vary widely, leading to different choices, all rational.

I am at an age when some of my friends' children are making, or have recently made, college choices. I am pleased to report that their decision processes are vastly more sensible than worrying about rankings.

greifer writes:

What's the metric for "value added" going to be? And at what point in time? Will students publish their own, or will we have to use some other?

One possible interesting metric for "value added, as perceived by recipient" is alumni giving. Looking at MIT vs. Harvard shows some interesting evidence: MIT alumni give far less than Harvard alumni do (and less per capita than alumni of many other Ivy League schools) for the first ten years after graduation. Apparently, this trend reverses later. (Perhaps the value added is not perceived until many years have passed?)

Of course, a less subjective view might be best. The BankBoston study on MIT's impact on innovation showed that MIT-related companies, had they formed an independent nation, would have ranked as the 24th-largest economy in the world in 1994; MIT-related companies employed three-quarters of a million people that year, at the time 1 out of every 170 jobs in America.

Honestly, though, I think students (who have choices) choose schools based on what they perceive to be the best value added at the time of their choice. Some students are considering their future in that calculation; others are not. But both are probably rational: those who don't consider it have no way to calculate those probabilities anyway. Isn't this the most rational decision they could make in the face of such uncertainty (at least given that they've decided to go to college)? In that case, isn't the issue how to bring a student's perceived value added in line with what society will value later in their lives?

Robert Schwartz writes:

Clearly this method is pure GIGO. The sole source of data is the least knowledgeable participants in the system.

The fact that Harvard is always first rats the system out to me. Harvard has a wonderful faculty of guys who did their best work twenty years ago and who never ever come into contact with an undergraduate. I have forbidden my children from applying to Harvard. Better educations are available elsewhere.

How to construct a better system is a harder question. MBA programs can be ranked on income; that is why MBA students go to school. The Wall Street Journal polls recruiters, which is a really clever take.

Most professional schools can be ranked on things like placements and income.

Undergrad is much harder because the product is so variable. My oldest just graduated with a degree in theater. Fortunately, the one thing she learned is that she has no future on stage. But many of her classmates will chase the dream. A few will make it. Even so, the average income of the class for the next ten years will be $20,000 a year.

Daughter number 2 is interested in theoretical physics. She and her closest friends may spend many years as grad students and post docs before they get halfway decent jobs.

My point is just that finding the metric is really hard.

Jim Glass writes:

I'll say this as a former tournament chess player: chess players go nuts about their Elo ratings.

If universities start taking these Elo ratings anything like as seriously as chess players do, then all those disputes about magazine rankings of colleges will be as nothing -- the insanity is just beginning.

Patrick Winslow writes:

No. College quality will never, ever, ever be measured in terms of value added.

Colleges (at least the top tier or two) are all about sorting people into rough IQ strata -- and, to a lesser extent, on the basis of their willingness to be educated. This is why the SAT is such a big deal. It's a de facto IQ test: it correlates as strongly with other IQ tests as it does with a second administration of itself.

If exit exams were administered, we'd be embarrassed by the results. Smart students who are willing to work will graduate highly educated, whether they go to MIT or Michigan State.

As Robert Schwartz noted, most undergrads have little contact with their famous professors at Harvard. It makes sense, though. You don't need to be a Nobel laureate to teach differential equations. The Ivies, at least for undergraduates, are like designer jeans.

Graduate schools, on the other hand, are somewhat ranked in terms of value added. This is partly because the student population is already highly sorted. The remaining differences in intelligence are small and pretty inconsequential, so other factors loom large.

I don't mean to imply that colleges don't add much value, but rather that the value added depends mostly on the ability and discipline of the students. Education is indispensable capital for the economy, but stupid people can't acquire it and unwilling students won't acquire it unless they become willing.

sourcreamus writes:

The only metric you could use to measure value added would be some form of standardized test. Compare the results of the graduates of a school with what they scored before they were enrolled, and the difference is the value added. This will never happen because colleges are currently making lots of money without having to prove there is any value added.