Some countries are more corrupt than others: Haiti is not Finland. Measures like the Corruption Perceptions Index attempt to quantify these differences.
Some academic disciplines are more bogus than others: Women's Studies is not Mathematics. But as far as I know, no one has tried to quantify these differences.
Questions for discussion:
1. How would you propose to measure a discipline's level of intellectual corruption?
2. If you had to guess, how do you think the ranking would turn out?
3. What field's integrity is most overrated? Most underrated?
P.S. Since I'm itching to answer #3: The most overrated field is psychiatry. The most underrated is political science.
My undergrad thesis tried to answer some allied questions: broadly, which academic departments' grading is the most bogus (least meaningful as an index of student quality by extrinsic measures).
Result: women's studies is not mathematics (etc.).
1. We could measure this one by testing to see if we could publish a set of entirely bogus papers. The disciplines that let the most through would thus be the most corrupt.
2. The hard sciences (math, physics, chemistry, and allied fields like engineering) would let the fewest through. Biology, a semi-hard science, might have a few more. The soft sciences would probably have a couple get through. The humanities would probably have the most. Now, the reason for this is probably not what people think. You will notice that as we move up this list, we go from the simplest systems to the most complex systems. Incredibly high levels of complexity leave more room for a high b.s. factor. With something as simple as math, you can simply go through and double-check the proof.
3. The most overrated fields are any that have the word "Studies" in them. They are overrated because most of them have practically no standards. The most underrated is Literature, because even most of the people in literature do not understand the true value and complexity of what they study.
Let me add a fourth. The most misunderstood and misused "discipline" is Interdisciplinary Studies. Done properly, it is the most complex major possible. Done improperly (that is, the way it is done in every university in the U.S.), it acts as a clearing house to help pass students who should have never been allowed into the university in the first place.
You could use the standard deviation of grades given to the same paper by different instructors as a metric.
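The grading-variance idea above can be sketched in a few lines. This is a minimal illustration, not a real study: the discipline names and grade lists below are invented, standing in for grades that several instructors gave the same paper.

```python
# Sketch of the grading-variance metric: for each discipline, several
# instructors grade the SAME paper; a larger spread of grades suggests
# weaker shared standards. All data below is purely illustrative.
from statistics import pstdev

# grades[discipline] = grades different instructors gave one identical paper
grades = {
    "mathematics": [88, 90, 87, 89],
    "hypothetical_studies": [95, 62, 78, 85],
}

# Population standard deviation of the grades, per discipline.
variance_metric = {d: pstdev(g) for d, g in grades.items()}

# Rank disciplines from most to least consensual grading.
ranked = sorted(variance_metric, key=variance_metric.get)
print(ranked)
```

With real data one would of course want many papers per discipline and instructors drawn independently, so that the spread reflects the field's standards rather than one odd grader.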
The most overrated fields are probably soft (non-mathematical) economics and philosophy. The most underrated one is history.
Perhaps this might help as a place to start (note: the examinations referred to are the LSAT, GMAT, and GREs; the quote is taken from the link in the response for #2 below):
Students who major in a field characterized by formal thought, structural relationships, abstract models, symbolic language, and deductive reasoning consistently outperform others on these examinations.
Now, here's how I'd answer the three questions:
1. The degree to which a particular discipline deviates from the academic standards exemplified by those fields that are characterized by formal thought, structural relationships, abstract models and deductive reasoning.
2. Link: Ranking School Smarts by Major
3. Overrated: Education. Underrated: Philosophy.
Going back to the first question: there might be a relatively easy method of measuring academic corruption. Many universities subsidize their inferior academic programs by requiring students majoring in the more rigorous undergraduate disciplines (business, sciences, engineering, pre-law, pre-med, etc. - basically all the academic programs that lead to high starting salaries) to take a number of "elective" classes from academically inferior programs.
Meanwhile, those majoring in the inferior programs have no such requirement to take electives from the classes offered in the more rigorous disciplines. One could quickly determine the most corrupt disciplines by seeing how "one-way" the elective options are.
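The "one-way electives" test described above could be scored directly from a course catalog. The sketch below is hypothetical: the credit-hour numbers and field names are invented placeholders for whatever a real catalog specifies.

```python
# Sketch of the "one-way electives" asymmetry test. We tabulate how many
# elective credit hours majors in field A are REQUIRED to take in field B,
# and vice versa. The catalog data here is invented for illustration.
required = {  # required[(major, elective_field)] = required credit hours
    ("engineering", "humanities"): 9,
    ("humanities", "engineering"): 0,
}

def asymmetry(a, b):
    """Positive if A's majors are pushed into B's classes more than the reverse."""
    return required.get((a, b), 0) - required.get((b, a), 0)

print(asymmetry("engineering", "humanities"))  # 9: the traffic flows one way
```

A strongly positive, consistent asymmetry against a field across many catalogs would be the "one-way" signature the commenter describes.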
Got a copy of GMU's course catalog?
Intensity of Non-Integrity could be measured by the number of researchers paid to obtain certain results (medical sciences apparently suffer from this), the frequency of plagiarism as Troy mentions above (hard to measure?), the number of unresolved debates where people agree to disagree, the percentage of citation-less results (math fares badly there, although a study of the culture of citations might reveal strategic citation behavior in other sciences), the relative level of successful unfunded results (if only well-funded results are celebrated that becomes suspicious).
I'm a little confused as to what exactly would constitute a bogus, say, philosophy paper. One filled with "if p then q; q; therefore p" arguments? Misquoting primary sources? It seems to me that the first kind is just bad philosophy (probably not going to get published), and the second kind is just as easily checked as a mathematical proof, provided references are properly made. I guess I don't see how to differentiate bogus from just plain bad in the humanities. I'm also not sure about the claim that something like lit. theory is more complex than quantum physics, lending it more easily to bs-ing.
I honestly have no idea how one would test the bogosity of a given field, but I won't let that stop me from giving my two cents.
Underrated: Philosophy. This pick might be made out of a pure desire to retroactively justify decisions I've already made. But given the amount of flak that we philosophy majors get, you'd never guess that philosophy majors outperform almost all other majors (excepting math, engineering, and physics) on the GRE and the LSAT.
Overrated: Sociology. I don't think I can do much to justify this one. Pretty self-evident, I think.
There's a dirty joke about integration by parts to be made somewhere, but this is probably neither the time nor the place for that.
As for measuring academic rigor, it strikes me the following should be assessed:
1) How much does scholarship matter?
2) How much of the material is normative and how much is positive?
3) When using material from other fields, how much does it dumb the material down?
I like the idea of testing via bogus paper submissions, but I think the hypothesis that "simple" fields will weed out bad results more effectively is off. The systems being discussed in physics (say, a ferromagnetic plate) may be simpler than those in macroecon (say, the economy of an entire nation), but the statement of results may be much more complicated or more detailed in physics. The complexity of the result being published is probably negatively correlated with that of the system being studied. (E.g., Fermat's Last Theorem can be stated in one sentence, but the proof is hundreds of pages.) Most of the simple, rigorous and true statements to be made in math, physics, etc. were made centuries ago, so everything being said now is much more complicated. There are still fairly straightforward results being stated in many social sciences, since they've only been rigorously investigated for a century or less.
Another thing that needs to be considered if you actually tried this is that growing fields will have increasing numbers of conferences, and some of those will be trying to up attendance at the expense of rigorous review. You would need to submit papers only to the consensus top journals or conferences in each field to account for this.
And I second the notion that any "Studies" is likely corrupt. They're an excuse for balkanization, overspecialization, and in my opinion, narcissism. At least go through the effort of drumming up a pseudo-Greek name.
Interesting. I'll join the chorus praising the Sokal test. (And, fwiw, I expect it would be pretty difficult to get a bogus paper into (analytic) philosophy journals.)
Along the lines of Fazal's grading-variance metric, we might look at how much consensus there is about prestige in the field (which are considered the top departments, etc.). [Kieran Healy did this for philosophy, here.] But I'm not sure whether homogeneity vs. heterogeneity in assessment standards reliably indicates the quality of those standards. (The alleged link is suspicious in both directions: one could have legitimate contestation, or ideological group-think.)
There is already a paradigm for measuring how scientific certain disciplines are. See these papers:
How Hard is Hard Science, How Soft is Soft Science? The Empirical Cumulativeness of Research.
I'll quote the abstract to this one to give you a flavor of some of the variables used as clues:
Psychology's status as a scientific discipline: Its empirical placement within an implicit hierarchy of the sciences:
Psychology's standing within a hypothesized hierarchy of the sciences was assessed in a 2-part analysis. First, an internally consistent composite measure was constructed from 7 primary indicators of scientific status (theories-to-laws ratio, consultation rate, obsolescence rate, graph prominence, early impact rate, peer evaluation consensus, and citation concentration). Second, this composite measure was validated through 5 secondary indicators (lecture disfluency, citation immediacy, anticipation frequency, age at receipt of Nobel Prize, and rated disciplinary hardness). Analyses showed that the measures reflected a single dimension on which 5 disciplines could be reliably ranked in the following order: physics, chemistry, biology, psychology, and sociology. Significantly, psychology placed much closer to biology than to sociology, forming a pair of life sciences clearly separated from the other sciences.
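The abstract above describes building a single composite scale from several primary indicators. A common way to do this (and roughly what such studies do) is to standardize each indicator across disciplines and average the z-scores. The sketch below is only illustrative: it uses two of the seven indicator names from the abstract, with invented numbers, whereas the real study uses all seven and checks internal consistency.

```python
# Sketch of a composite "scientific status" scale: standardize each raw
# indicator across disciplines (z-scores), then average per discipline.
# All numeric values here are made up for illustration.
from statistics import mean, pstdev

indicators = {  # indicator name -> {discipline: raw score}
    "peer_evaluation_consensus": {"physics": 0.9, "sociology": 0.4},
    "graph_prominence":          {"physics": 0.7, "sociology": 0.2},
}

def zscores(raw):
    """Standardize one indicator so disciplines are on a common scale."""
    mu, sigma = mean(raw.values()), pstdev(raw.values())
    return {d: (v - mu) / sigma for d, v in raw.items()}

# Composite = mean z-score across indicators, per discipline.
composite = {
    d: mean(zscores(raw)[d] for raw in indicators.values())
    for d in {"physics", "sociology"}
}
print(sorted(composite, key=composite.get, reverse=True))  # "hardest" first
```

The point of averaging z-scores rather than raw values is that indicators on wildly different scales (ages, rates, ratios) get equal weight in the composite.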
How beautiful, so many oxen getting gored!
I suspect cultural anthropology is roughly as scientific as, say, astrology.
There is also probably a nontrivial correlation between the quality of output in a given scientific discipline and the average GRE score of the academics who populate it: 'IQ & the Wealth of Disciplines'.
Obviously this isn't always the case, and should just be added as one informative ranking variable.
Fazal Majid writes "The most overrated fields are probably soft (non-mathematical) economics and philosophy. The most underrated one is history." Maybe. Many historians are quite impressive, but the _Arming America_ affair impresses me as much as the Sokal affair.
"You could use the standard deviation of grades given to the same paper by different instructors as a metric."
I agree that that is an important thing to understand. For example, to know what conclusions to draw from the much-talked-about result about women musicians getting downrated in auditions until the auditions were made blind, I would very much like to know whether two independent panels of judges listening to the same blind auditions can reliably give similar rankings of performers.
That said, though, I see at least one serious problem with using it as a guide to overall bogusness of a field: multiple non-bogus sets of priorities can exist in a non-bogus field. For example, physics has a tension between experimentalists and theorists. (Other fields like computer science do too.) A person of an experimental bent might be unimpressed with difficult-to-interpret screwing around with hypotheses about partial differential equations on the square root of the probability distribution. A person of a theoretical bent might be unimpressed with grotty little-theoretical-interest screwing around with practical techniques for getting materials from merely one part in a million purity to better than one part in a billion. In retrospect, anyone using the Internet should agree that both lines of work were far from bogus (notably because of how they came together in the transistor). At the time the work was done, though, I doubt it was so obvious. Part of the problem is superficial, the technical difficulty of evaluating work from another school, but I think there's also a deeper problem of sometimes systematically undervaluing the other school's work.
Interesting question and interesting answers, but I have a few questions of my own.
Does lack of rigor in terms of quantitative observations automatically bring integrity into question?
How do we evaluate qualitative observations? In utilitarian terms perhaps?
What are the political agendas here?
1) Measuring a discipline's level of intellectual corruption will itself be suspect of intellectual corruption.
2) How can a discipline be corrupt? Per William of Ockham: entities should not be multiplied unnecessarily.
Why not departments, colleges, even professors? Yet I see problems and controversy in interpreting any coalesced particular results derived. For example:
3) Overrated: Excuses for thinking = Mathematics/ Econometric scientism: validity contra truth. Think you know? Plan an economy. And put your money on it.
Underrated: Homo sociologicus juxtaposed with Homo economicus [individualism not atomism]. It is (micro-)Sociology not Political Science that is overrated (Game Theory R.C. is elemental). Note: Meat is in the particulars (increasing expert knowledge dissonance rate) & CULTURE IS RATIONAL (Weingast, Chong, Chwe, Opp, Hechter, Schelling).
[Cultural anthropology has the most potential, but needs some economic imperialism; yet I don't see economists doing ethnography. Constitutional Econ has a foundational-theoretical shoo-in but needs to avoid the P.D. fetish (see the Stag Hunt game).]
Political science is a bag of peanut brittle. Some professors are whole bars, others are crumbs. Constitutional law will be mostly bars. Modern political history will be some bars and lots of crumbs.
"Studies" courses can't be overrated because so very little is expected of them in the first place. No one, not even their students, expect them to have any objectivity. As a result, perhaps, I've been pleasantly surprised by what I've heard about "studies" professors.
"soft studies": English. I know of too many very credible stories, first and second hard, of professors grading English papers by the paper's subject, ideology, or the prof's attitude towards the student.
"hard studies": Engineering. Teaching ability is only treasured for courses taught to students from other departments. In department, research grant acquisition is the most rewarded skill. As a result, in department grading is wildly inconsistent between profs teaching the same course.
I can't think of any departments that are underrated.
A webbed paper in the research program mentioned by Jason Malloy can be found at psychology.ucdavis.edu/experimetrix/Papers/Simonton.pdf .
I would quibble with at least one of their metric choices, "citation immediacy." Their argument for its relevance makes sense, but I think it tends to conflate messiness of problem with messiness of thinking. E.g., I don't mind dumping on schools of education, but I don't think it would be fair to use this metric to compare them with physics. Education seems to have resolved to attack an economically important problem regardless of how messy it gets. My impression is that they have done it rather badly, but to demonstrate that honestly, I think you'd need to include comparisons to approximation-heavy fields like aeronautical engineering or engineering mechanics (or, admittedly, some subfields of physics and chemistry).
Also, incidentally, speaking of "immediacy": "Received January 31, 2006. Accepted February 8, 2006." Wow!
Actually, if you could separate intellectual rigor from confounding factors, my field, physics, might be most overrated.
For example, in my graduate work, I took data with a signal-to-noise ratio of 600:1, then fit it with an RMS error in the tenths-of-a-percent range. I didn't really understand the statistics involved in interpreting the results, but when the result is 90 standard deviations above chance, you can be pretty sloppy with the statistics and not hurt anything.
An educational psychologist never sees data like that. He has no margin for sloppiness. And even so, it's likely that the information he seeks just cannot be extracted from the data available. His results may sound less impressive than mine, but for reasons other than intellectual rigor.
I cannot bring myself to deem any academic discipline "bogus." I propose that a discipline should be measured on how it challenges a student to think critically, use analytical skills, and reason, instead of solely focusing on the material being studied. As a Political Science double major, I often find that this field is regarded as inefficient and non-beneficial for a collegiate student. I have to agree with Caplan that this is an underrated field. My classes are comprised of intense research and are constantly challenging me to think critically and form educated responses to important worldwide issues, all of which are working to better prepare me for my future endeavors.
Although Political Science is not Mathematics, it does not mean that it holds any less value in the academic world. It is a beneficial area of study that is adequately preparing me for the competitive and educated job market.
It is unfortunate that in our society fields such as Women's Studies, Sociology, Psychiatry, and even Political Science are regarded as "bogus." There are important academic concepts to be learned from all of these disciplines, and perhaps it would be beneficial to everyone in the academic world to be more open to all that can be gained from these so-called "bogus" disciplines.
Ilkka Kokkarinen discussed that idea here.
I'm glad someone got my allusion to Sokal. That is the kind of bogus paper I was talking about.
Someone mentioned education -- and I don't know why I didn't think of it, because I think that education is incredibly overrated, and should be completely eliminated as a major. I might be convinced to keep it as a minor, just to introduce a teaching-methods class or two. Beyond that, my experience with education as a major is that it is a colossal waste of time and money.
As for complexity, people don't seem to really understand what complexity is. Math and quantum physics are difficult, yes, and the explanations seem complicated, but the fact is that they include an incredibly small number of variables. We are far, far closer to completely understanding quantum physics than we are to completely understanding Homer's Iliad. The reason is complexity.
hehe, so you're saying that Philosophy takes some of the brightest people in the world and produces absolutely nothing with them?
The integrity of mathematicians is the most overrated.
Their maths may be honest, but their misuse of English is incredible. The barefaced dishonesty with which they use English words like "clear," "easily apparent," and "obvious" makes mathematicians into liars on a scale unimaginable by mere mortals such as politicians and used-car salesmen.
1. See if you can find any questions in the field for which the practitioners are completely sure about the answer but for which the answer varies from expert to expert. This means that they're willing to make statements of certainty about subjects for which there obviously isn't consensus, a sure sign of integrity problems. Various branches of medicine would fail this test, as Andy Grove found. Another test is to get someone from a field that passes the first test to examine the other disciplines and see what they find. You could argue that the first test only measures consensus and the second is subjective but, since we're dealing with a profession that is supposed to be learning and exploring the frontiers, you're not going to find a cut-and-dried litmus test.
2. I think engineering would turn out at the top because it constantly has to pass the reality test, which is like the market test but even more stringent. Literature would end up at the bottom, you can see how the reality test would be much weaker there.
3. Some commenters seem to be wrongly making the overrated and underrated designation based on which discipline is the worst or best, rather than which one is worst but isn't popularly considered to be. By that criterion, mathematics is the most overrated, which is reaffirmed by Caplan's mentioning it. Mathematics may be more internally consistent than any other discipline, but because it doesn't have to face the reality test much, it's allowed flights of fancy that are pretty useless and harmful. The most underrated is economics, as it is the social science that actually tries to answer some tough questions and, because of some of the great thinkers who have adorned the field, does make some headway.
Most posters' comments seem to hinge on what subjects that they personally value rather than on what subjects are bogus. From my understanding, women's studies is sociology, literature, history, etc. viewed through a specific lens. The kinds of critical thinking skills and research that this subject demands would depend a great deal on the expectations and skill of the professor teaching the class just as would be the case in other disciplines. Anthropology can be either bogus or incredibly demanding depending upon the approach. Many posts place relative value on different types of neurological processes as if the right frontal lobe were somehow 'better' than other parts of the brain. If you want to make an economic argument or even a utilitarian argument, that would seem more useful. To claim that women's studies is bogus without providing any level of data or analysis is nothing more than saying, "We don't value women's perspectives and achievements. Women are bogus!"
If women's studies was about the perspectives of women, you'd be correct. However, that's not what women's studies is actually about. Women's studies is about the promotion of anti-Western, postmodern, neo-Marxist ideology. If I wanted to learn about women's perspectives, I'd go to an evolutionary psychologist, not a women's studies professor.