Arnold Kling  

Two Thoughts on Economic Education


First, just a technical note. A fellow named Chris Cardinale sent me a tip that software called "Total Recorder" from the web site highcriteria.com creates really small sound files from .wav files. I downloaded and registered the software, for twelve bucks, and it works great. It compressed an 18-minute lecture down to a 1-meg file, as you can find here. So now I'm even more sold on the idea of a web outline with an audio file as a way to present a lecture. However, .wma files require Windows Media Player 9 or higher, and Chris says that older Mac users may be out of luck altogether.

Second, I have been thinking about issues related to distance learning, specialization in education, and inter-operability. For instance, why should a student at Harvard take a different economics course than a student at Stanford?

One idea I came up with is that instead of giving students school-specific course grades, they could be given ratings, analogous to chess ratings. Suppose that the average grade on the final in my intro econ class is an 82, and that if those same students take a test in a Harvard class they average a 73. We could then adjust the scores on my final downward to make them comparable.

Once we have a whole bunch of different schools' exams standardized, then a student could take two or three exams from different schools to get a reliable rating.

Once we develop confidence in the rating system, then we can get away from the notion of "required courses" or "required credits" and instead go to a system of required ratings. That is, to meet the distribution requirement in economics, you have to achieve a rating of X. To have a minor in economics, your rating has to be Y. To have a major in economics, your rating has to be Z.
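As a minimal sketch of the adjustment and rating steps described above (all the numbers, function names, and the simple average-gap method are illustrative assumptions, not a worked-out design):

```python
# Sketch of the cross-school score adjustment described above.
# Assumption: a common group of students takes both exams, so the gap
# in their average scores estimates each exam's relative difficulty.

def exam_offset(common_scores_a, common_scores_b):
    """Offset to add to exam B scores to put them on exam A's scale."""
    mean_a = sum(common_scores_a) / len(common_scores_a)
    mean_b = sum(common_scores_b) / len(common_scores_b)
    return mean_a - mean_b

def rating(adjusted_scores):
    """A student's rating: the average of several adjusted exam scores."""
    return sum(adjusted_scores) / len(adjusted_scores)

# The post's example: the same students average 82 on my final and 73
# on the Harvard exam, so my final is about 9 points "easier".
my_final = [82, 85, 79]   # hypothetical scores of the common group
harvard = [73, 76, 70]    # the same students on Harvard's exam
offset = exam_offset(harvard, my_final)   # put my final on Harvard's scale
print(offset)             # -9.0

# A student scoring 88 on my final gets 79 on the Harvard scale.
print(88 + offset)        # 79.0
```

With several exams standardized this way, a distribution requirement of "rating X" is just a threshold on the average of a student's adjusted scores.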

For Discussion. Comment on the "rating" idea.




CATEGORIES: Economic Education



COMMENTS (11 to date)
Kieran writes:

Prof. Kling, curiosity is more important than knowledge. I have had many professors who taught me the truth and bored me so badly I will never think about their topic again. However, I have had professors who probably taught me things that were completely wrong, but made me so curious and passionate about the topic that I have continued learning about it daily for the last decade.

Show us passion and we will fill in the details ourselves.

I think that is why a school's reputation is a better measure. People have had first-hand experience watching students apply things they reasonably should have long forgotten.

Patri Friedman writes:

I really like the idea of class mobility. When visiting Cambridge, I thought it was interesting that a "college" at the University was a living/dining/social group, and the division into departments and majors was totally separate. It would be interesting if higher education evolved in that direction: your college provides your physical amenities - dorms, libraries, computers, health care, support, advice - but your classes are telecommuted from anywhere.

But I don't understand how this rating system replaces breadth or depth requirements.

What it does is correct grades for the difficulty of the class and/or the abilities of the other students. I actually implemented a similar system to solve a Mathematical Contest in Modeling problem that asked for a way to compensate for exactly that.

But your ratings are based on a single class. Taking a single class in a subject and getting a high adjusted score doesn't indicate that you've mastered the field, merely that you've mastered the subject. A requirement to explore a field in breadth and in depth still seems necessary to ensure that a student is competent to practice in that field. The ratings you've described are just adjusted grades, and a single grade does not a major make.

That said, combining grade adjustment with mobility of class location does solve the problem of comparing grades between different schools. (If you adjust grades within a school, that still doesn't tell you how tough the school is.) Although this problem is partly solved by standardized tests already.

John Johnson writes:

People have different learning styles, and perhaps that's one of the reasons they choose different colleges. I can see a standard set of materials as part of a course, perhaps even the information portion of the class. However, class projects, discussion, and homework need to be adjusted for the individual needs of the class. I can see where Harvard may differ from Howard, Stanford, or UNC-Chapel Hill. Minimal standards are enforced through accreditation.

So, I have to tend toward the negative on the ratings idea.

Kenny Evitt writes:

Is it really so easy to create useful standardized tests? One of my professors made a point I thought interesting and highly plausible: the best measure of how well one understands a subject is how well one can write about it; he even went so far as to allow students to write a paper as a substitute for taking an exam.

I've always thought that the inherent difficulty in creating tests in general is measuring understanding rather than test-taking skill. Standardized tests only exacerbate the problem, because it becomes that much easier to study the test as the subject.

Thinking about your analogy to chess ratings: the supposed efficacy of such a system relies on competition between pairs and the assumption that, in general, the person with the better rating should win. Which gives me an idea for a more strongly analogous rating system: students earn ratings through competition; some form of testing, maybe even multiple forms (standardized tests, essays, graded discussions, etc.), is used to grade students, and both students and each test are rated, so that a rating earned reflects how well students and tests 'beat' the other side.
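A minimal sketch of the student-vs-test idea the comment describes, using the standard Elo update (the K-factor, the 1500 starting ratings, and treating "passing" as a win are all assumptions for illustration):

```python
# Elo-style ratings where each exam attempt is a "match": the student
# gains rating by passing a hard test, and the test gains rating by
# failing a strong student.

K = 32  # update step, as in standard Elo

def expected(r_student, r_test):
    """Elo expected score: probability the student 'beats' the test."""
    return 1 / (1 + 10 ** ((r_test - r_student) / 400))

def update(r_student, r_test, student_won):
    """Return the new (student, test) ratings after one attempt."""
    s = 1.0 if student_won else 0.0
    e = expected(r_student, r_test)
    return r_student + K * (s - e), r_test + K * (e - s)

student, exam = 1500.0, 1500.0
student, exam = update(student, exam, student_won=True)
print(round(student), round(exam))   # 1516 1484
```

As in chess, an upset moves both ratings the most: a weak student passing a highly rated exam earns nearly the full K points.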

Lawrance George Lux writes:

Overcomplicated. Simply make every school advance a standard Final for every Course, have a Commission of Professors choose a Final to be taken, make the Instructors take the Final themselves--their Scores will determine the weight of their Students' scores on the Final chosen. The Slant rate for the Students would be the percentage difference of their Instructor's score off a B- average percentage. The Slant weight can be positive or negative. lgl
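One possible reading of this "Slant" scheme, sketched with assumed numbers (taking the B- benchmark as 80% and weighting student scores in the same direction as the instructor's deviation; the comment leaves both choices open):

```python
# Sketch of the Slant adjustment: the instructor takes the chosen
# Final, and students' scores are weighted by how far the instructor's
# own score sits above or below a B- benchmark.

B_MINUS = 0.80  # assumed B- average percentage

def slant_rate(instructor_score):
    """Fractional difference of the instructor's score off B-."""
    return (instructor_score - B_MINUS) / B_MINUS

def adjusted(student_score, instructor_score):
    """Weight the student's score by the instructor's slant."""
    return student_score * (1 + slant_rate(instructor_score))

# An instructor scoring 90% gives a +12.5% slant; one scoring 70%
# gives -12.5%.
print(round(adjusted(0.80, 0.90), 3))   # 0.9
print(round(adjusted(0.80, 0.70), 3))   # 0.7
```

Under this reading the sign of the weight follows the instructor's performance, which is one way the Slant weight "can be positive or negative."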

Giles writes:

I think what you'll find would happen is that Tioga college of Adult Education would just take the exam later and mark it more leniently.

Think about the problems with administering international SATs, GREs, etc., and multiply them by 10.

This idea also implicitly precludes asking any subjective or open-ended questions.

Lawrance George Lux writes:

I just glanced through Manski's work on Prediction Markets (followed the general trend, but screwed up with the numbers). I personally thought there was not significant attention paid to degree of risk in the analysis. Then I thought of Arnold's analysis here.

What if there were Group Finals chosen? The criticism from an above Post about the use of subjective questions and essays need not apply. A point per mentioned item per question works efficiently, as everyone who has graded essays knows. Then you turn to the quality of education, remembering this will apply to Final Grade awards, not Course awards. Pay Instructors different Payscales per Grade awarded for the Final. An A for a Student on the Final would give the highest Pay, a D Grade the lowest.

The Instructor's total payscale would be determined by these Final award Grades. A base salary could be guaranteed, but real losses introduced with consistently low grades. Instructors would be induced to study up on all elements of the Course, and propelled to ensure these elements were integrated into the Students. lgl

Austin Johnson writes:

Standardized testing eliminates the elitism of universities. Why should someone who went to a lower-ranked school, but learned just as much, have a tougher time earning a job?

I think being able to list your "chess score" on your resume would be a much better comparison of the quality of an education than comparing a 3.3 GPA at Stanford versus a 3.9 GPA at a lesser-ranked school.

another bob writes:

"Comment on the "rating" idea."

fascinating. what changes would we have to make to the concept of 'education' to sustain the analogy to a chess rating?

first, separate the role of teacher/facilitator from the role of tester/opponent.

take your classes anywhere (Harvard, Haverford, the College of Alameda) or nowhere (buy books from amazon. watch arnold's class room lectures without matriculating to GMU).

now, when you are ready, sign up for a test/match. this creates a new role for a tester/opponent who presents challenges within a highly defined subject area but with a huge number of possible variations so that the same question is never asked twice. (actually, this role exists in the software certification world and is performed by a company called PROMETRIC.)

just as one can play any number of matches, a student can take an exam any number of times. exams could be ranked for difficulty.

people who are teachers/facilitators at an institution could also be testers/opponents with a certification/testing organization.

the tests can be a combination of multiple-guess, numeric answer, prose or even verbal response. the prose and verbal responses, too difficult to grade in an automated way, would be graded by more than one tester/opponent. the more difficult matches/tests might include oral discussion with a panel of testers/opponents.

each student's prose or verbal response would have to be judged by a multi-person tester/opponent. sort of like figure-skating judges.

hmmm...interesting, but only good for highly constrained fields of study, not unlike the highly constrained nature of a chess game.

where there might continue to be a plethora of teaching institutions, one might expect that there would be only one or two testing institutions.

Jeremy Loscheider writes:

Standardization has promise, though I fear that the biases held now against "lesser" schools would simply be factored into a standardized score. I have met econ students from public colleges (Truman State, U of Tennessee, U of Missouri - Kansas City, New Mexico State) who can run circles around Stanford, Princeton and Yale grads when it comes to theory, application and investigation. But the Ivy grads get the World Bank and Council of Economic Advisors posts, while we public school grads end up as private sector analysts, barely utilizing our training.

If a standardized score is to be produced, let it be specific to the material alone; do not factor in the school or the professor. There is ample "core" economic subject matter to definitively say what should be known by the time a degree is awarded.

another bob writes:

so, jeremy, are you suggesting a post-graduate SAT/GRE/Achievement style economics test that anyone could take? sounds great to me.

what do you think motivates the buyers/hirers of economics graduates to hire from brand-name schools rather than non-brand-name schools? are the hirers showing preferences for their alma maters? are the hirers naive about economics and hiring a brand because they can't directly test the abilities of individual candidates? are the hirers simply saving time by playing the averages; the average prestigious-school grad is better than the average state-school grad?

Comments for this entry have been closed