Arnold Kling  

Statistical vs. Material Significance

I appreciate Bryan's pointer to an article by Alan Gerber and others on personality and ideology. However, the article illustrates what I consider to be a methodological error. That error is to use statistical significance as a metric.

Statistical significance is not a measure of the importance of a relationship. Statistical significance is a measure of the unlikelihood that you got your results solely due to chance sampling error.

In measuring the importance of personality traits in predicting political ideology, I would ask how much of political ideology can be explained by personality. The answer, if you look at the R-squared in Table 6, is "not much." For economic conservatism, in fact, the R-squared ranges from .07 to .15. Because the sample sizes are large, these results are statistically significant. But from a practical point of view, one would have to describe variation in economic ideology as almost entirely unexplained by these variables.
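A small simulation can illustrate the point. (This is a sketch, not the paper's data: the effect size, sample size, and noise level below are made up.) With a large sample, a weak true relationship produces a large t-statistic even though it explains little of the variation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1200                               # large sample
x = rng.standard_normal(n)             # stand-in for a personality score
y = 0.3 * x + rng.standard_normal(n)   # weak true effect, plus noise

r = np.corrcoef(x, y)[0, 1]
r2 = r**2                                  # share of variance explained
t = r * np.sqrt(n - 2) / np.sqrt(1 - r2)   # t-statistic for the slope

# R-squared comes out below .10, yet t is far past any significance cutoff.
print(f"R-squared = {r2:.3f}, t = {t:.1f}")
```

The t-statistic answers "is the effect distinguishable from zero?"; the R-squared answers "how much does it explain?" Only the second is a measure of importance.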

In statistical research, I believe strongly that you should focus on the practical significance of results. Statistical significance is not practical significance.

For example, suppose that you found that the elasticity of demand for good X was -3.0, but the t-ratio was 1.8, while the elasticity of demand for good Y was -.06 with a t-ratio of 8. The practical significance is that X is elastic while Y is inelastic. But if you were to use the t-ratio as a metric, you would conclude that the demand for Y is more elastic than the demand for X. That would be incorrect.
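A quick back-of-envelope check of this example (using only the hypothetical numbers above; the standard error can be recovered as |estimate| / t-ratio, and a rough 95% confidence interval is estimate ± 1.96 × SE):

```python
# Hypothetical numbers from the paragraph above: (good, elasticity, t-ratio).
for good, elasticity, t_ratio in [("X", -3.0, 1.8), ("Y", -0.06, 8.0)]:
    se = abs(elasticity) / t_ratio          # implied standard error
    lo = elasticity - 1.96 * se
    hi = elasticity + 1.96 * se
    print(f"good {good}: estimate {elasticity:+.2f}, 95% CI [{lo:+.2f}, {hi:+.2f}]")
```

The interval for X is wide and crosses zero (insignificant), but the point estimate says demand is elastic; the interval for Y is tight and excludes zero (highly significant), but everything in it is inelastic. The t-ratio ranks precision, not magnitude.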


Comments and Sharing

COMMENTS (6 to date)
conchis writes:
But if you were to use the t-ratio as a metric, you would conclude that the demand for Y is more elastic than the demand for X. That would be incorrect.

Of course, the standard move in this case would be to test whether the difference between the two estimates is itself significant - which it almost certainly would not be. So it would also be a stretch to conclude that e(X)>e(Y).

While statistical significance !=> practical significance, statistical insignificance => unclear practical significance.
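[This point can be checked against the numbers in the post. Assuming the two estimates are independent, the test of whether e(X) differs from e(Y) is roughly:]

```python
import math

# Standard errors implied by the post's hypothetical estimates and t-ratios.
e_x, t_x = -3.0, 1.8
e_y, t_y = -0.06, 8.0
se_x, se_y = abs(e_x) / t_x, abs(e_y) / t_y

diff = e_x - e_y
se_diff = math.sqrt(se_x**2 + se_y**2)   # SE of the difference, if independent
t_diff = diff / se_diff

print(f"difference = {diff:.2f}, t = {t_diff:.2f}")  # |t| < 1.96: not significant at 5%
```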

Norman writes:

I have to agree with the first commenter. The actual practical significance here is that good Y is inelastic, but there is insufficient evidence to make any sort of claim about good X. The purpose of statistical significance is to identify how confident we can be in the practical results obtained. So while I agree that statistical significance is not sufficient for practical significance, in general it is necessary.

Radford Neal writes:

Certainly one should look at the actual magnitude of the effect for any effects where there is statistical significance. But it's not really right to conclude in this case that personality explains almost nothing about ideology just because the R-squared is small. That would be valid only if the measures of personality and ideology were precise. Actually, both measures are likely to be very imprecise, which could result in a small R-squared even if the actual relationship is very strong.

Marc Resnick writes:

You need both. Without statistical significance, the practical significance is just imaginary. But without practical significance, the statistical significance is meaningless.

I recently completed a study on the dashboard computers that cops have in their patrol cars. We found a statistically significant difference in reaction time based on some interface design variables (yes, I am an engineer). But the actual reaction time difference was fractions of a second. Not very helpful in the real world.

Dr. T writes:

I agree with Arnold Kling, though I would have made a much blunter argument.

We see similar statistics in medical papers about risk evaluations (if your test A result is above cutoff B, then your risk of disease C is D times higher than if your test A result is lower than cutoff E), utility of diagnostic tests, efficacy of treatments, etc. Biased authors tout the statistical significance of their studies while minimizing the ridiculously low correlation coefficients and the clinical insignificance of the studies. I blame journal reviewers and editors for not stomping on such misrepresentations.

VRE writes:

Isn't this the same sort of critique made by Deirdre McCloskey regarding statistical vs economic significance?
