James Schneider  

The Popularity of Silly Methods


In Bryan's blog post Predicting the Popularity of Obvious Methods, he suggests that social scientists are more likely to pursue non-obvious methods when the obvious methods don't provide the answer they like. In the spirit of his post, the use of non-obvious or overly sophisticated methods can signal that the researcher kept trying until they got the "right" answer. Skeptics might view this as a warning sign about the research.

Most people do not consume research firsthand. Instead, it is filtered through the media. Journalists also have preferences over answers. If a newspaper doesn't like the answer that a piece of research provides, it has wide latitude to ignore it. This means that one can infer the media's preferences from the research they do cite. Instead of sophisticated methods being the "tell," journalists show their preferences by citing weak research.

To support their preferences for certain questions or certain answers, journalists might discuss research from lower-quality journals, or research with inferior methods. In medicine, randomized controlled trials (RCTs) are accepted as better than observational studies. And within observational studies, it is agreed that larger samples are preferred to smaller ones. A recent paper compares articles that get covered in the leading newspapers to articles published in the best medical journals. It finds that newspapers are more likely to discuss observational studies than RCTs, and that the observational studies they cover have smaller sample sizes than the observational studies published in the best medical journals. The paper suggests that the press skews reporting away from the most reliable research. (Given that this paper is not a million-subject RCT published in the New England Journal of Medicine, you should infer that I "like" its conclusion.)

An article posted Friday in The Atlantic brought this topic to mind. It described how researchers took 37 girls and randomly assigned each to play with one of three dolls: a standard Barbie doll, a "doctor" Barbie doll, or a Mrs. Potato Head. Each girl played with her assigned doll for five minutes and then answered a short survey about what types of jobs she could do when she grew up. The girls assigned Barbie dolls saw fewer jobs open to them.
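A back-of-the-envelope power calculation illustrates why 37 subjects is a thin basis for comparison. This is a rough sketch under stated assumptions, not the paper's actual analysis: it assumes roughly 12 girls per doll condition (the exact split isn't given here), a conventional "medium" standardized effect of d = 0.5, and the normal approximation to a two-sided two-sample test.

```python
# Rough power of a two-sided two-sample comparison with ~12 subjects
# per group and an assumed "medium" effect size d = 0.5.
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def approx_power(d, n_per_group, z_crit=1.96):
    """Normal-approximation power for a two-sided test at alpha = 0.05."""
    ncp = d * math.sqrt(n_per_group / 2)  # noncentrality parameter
    return normal_cdf(ncp - z_crit) + normal_cdf(-ncp - z_crit)

print(round(approx_power(0.5, 12), 2))  # roughly 0.23
```

Under these assumptions the study has only about a one-in-four chance of detecting a genuine medium-sized effect, which is one reason small-sample results that do reach significance deserve extra skepticism.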

Halfway through, the Atlantic article reads:

The paper has a few limitations: The sample size was small, as was the effect size. Still, it's ... icky. Why does a plastic spud make your daughter more likely to think she can be a scientist than an actual scientist doll does?

That would be a great question, if it were prompted by a study with more than 37 girls. I wondered whether other media outlets would try to downplay the weakness of the research methodology. Interestingly, the LA Times included the weaknesses in the very first paragraph -- including the five-minute play period. Their first paragraph states:

After spending just five minutes with Jane Potato-Head, girls believed they could grow up to do pretty much anything a boy could do.


CATEGORIES: Economic Methods

COMMENTS (10 to date)
Tom West writes:

I'll say the obvious. Papers want to report "interesting" results, where interesting usually means unambiguous and easily interpretable by the layman.

Since good science only rarely finds things of interest to the layman, there's going to be a significant bias towards reporting science that is both observational and uses low sample sizes (and is trying to push towards an interesting result).

I doubt if a typical journalist can tell the difference between a strong study and a weak one.

Ted Craig writes:

How many journalists even read the study and how many just read the press release from the university?

Roman Lombardi writes:

"Stop telling girls they can be whatever they want to be when they grow up. It never ever occurs to them that they couldn't until you say something like that"

-Sarah Silverman

Josiah writes:

Here is an interesting example of this effect. Last week, the LA Times ran a story entitled "Five-Second Rule Has Some Merit, Study Says." The study in question purports to test the old adage that it's okay to eat food dropped on the ground as long as you pick it up within five seconds. After describing in detail the results of the study, the following appears as the very last sentence of the article:

"The study has yet to be peer reviewed or officially published."

Mark V Anderson writes:

Thank you for this post. This study was in my local paper in Minneapolis, and I wondered how good this study was. The paper mentioned that there were only 37 subjects, but it never said the results were based on playing with the dolls for only 5 minutes. Heck, this study couldn't have taken more than a week to run; did they have some open time in their schedule? There are thousands of scientific studies that are done every year. It is amazing what makes the papers. Well, not amazing, but really icky, to quote the Atlantic person.

Shane L writes:

I would second the first three comments by Tom, Joseph and Ted. I used to work as a journalist, occasionally freelancing for a tabloid. We had never been trained in statistics at even a basic level so we really treated scientists as priests and presumed that they knew what they were talking about: "scientists say", "researchers now believe", etc. We would scan the press release, give the research team a quick phone call for an additional quote, maybe phone someone who we thought might disagree, and quickly write an article. We simply didn't have time to read a research paper cover to cover and ponder it.

If it was boring or obvious, the editor wouldn't touch it. If it was interesting or bizarre, he'd love it!

Terry Hulsey writes:

The sillier the method, and the sillier the results, the more empowered are those who wear a white lab coat like a shaman's garment. If a method or result is commonsensical, then why is the self-anointed expert necessary? Obviously then, the more outrageous the statement that can be foisted on the layman, the more empowered is the occult practitioner.
Consider the following statements that assault common sense. "There can be general underconsumption." "There can be a 'paradox of thrift' such that savings are not invested." "There can be a general economic disequilibrium that can only be alleviated by government spending." "Money has no intrinsic worth." "Inflation produces prosperity."
These statements should be proof that in modern times the economist has assumed the position of the primitive shaman.

ChrisA writes:

Actually, I think this lazy approach by media types is gradually changing as a result of the internet, and especially comment forums like this. I must admit that probably 15 years ago, if I read something in a reputable paper, I would regard it as something like peer review -- they wouldn't have published it unless it was real, would they? The only exception was in areas where I was somewhat familiar with the science, and there I remember I usually found what was published simplistic at best and quite often wrong. But I never really connected the dots and extrapolated to other areas. Nowadays I scroll down to the comments and can usually see someone providing a counter view. So as well as propagating false memes, the internet is also good at killing them, and the media are getting away with far less than they used to.

Shane L writes:

Incidentally, here is an honest comment from the Science & Technology Correspondent of the broadcaster RTE, admitting the difficulties of reporting science stories:

"It is next to impossible for us in the media to verify for certain the veracity of the claims being made in such journals. Instead we rely heavily on the editorial rigour and long-term credibility of the established peer-review journals, for whom sloppy mistakes or in the worst case scenario fraudulent results can prove hugely embarrassing. We can ask independent experts in the field for an opinion on findings and claims being made in papers. But in the final analysis those comments remain opinions, until such time as those commentators have a chance to probe them in the lab themselves.

"It’s a tricky challenge, and one which underlines the importance of science journalists putting every new “discovery”, “breakthrough” and “ground-breaking” development in context."
