Bryan Caplan  

Four Unrelated Points on Education

1. Tyler Cowen responds on signaling.  He saved his best argument, though, for this morning's conversation in his office.  Are we really to believe, he asked me, that signaling is the one force that vitiates the rule that everyone earns their marginal product?  My response: Signaling is just a special case of a broader force: statistical discrimination.  In statistical discrimination models, people earn the average marginal product of people who superficially resemble them.  If you think that statistical discrimination often has lasting effects, you should have no fundamental objection to the signaling model.  (And if you think that taste-based discrimination often has lasting effects, you should be even more open-minded!)

2. Arnold prefers credentialism to signaling.  (We've debated this before; see here, here, and here for starters).  I'm happy to admit that government pay grades and licensing are part of the reason why seemingly useless studies pay off in the labor market.  But private employers of workers in unlicensed occupations still hand out interviews and jobs partly on the basis of your grades in Ancient Philosophy and European History, too.

3. My primary complaint about Tyler here is that he falsely insists that the returns to education literature is relevant to signaling.  But I've got a secondary complaint with this statement:
What's striking about the work surveyed by Card is how many different methods are used and how consistent their results are.  You can knock down any one of them ("are identical twins really identical?", etc.), but at the end of the day which are the pieces -- using natural or field experiments -- standing on the other side of the scale?
None of these methods find much evidence of ability bias, true enough.  But there is one plausible empirical approach that finds large ability bias.  Namely: Directly measure ability, then re-estimate the return to education controlling for ability.  The most famous example, of course, is estimating the return to education controlling for IQ, which reduces naive estimates of the return to education by about 40%. 

Now tell me.  Which is a more compelling test of ability bias: using compulsory schooling as an instrumental variable, or statistically comparing people with the same measured ability and different amounts of schooling?
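The comparison can be made concrete with a small simulation (not from the post; all coefficients here are invented for illustration). When schooling and ability are correlated and wages reward both, the naive OLS return to schooling is inflated by ability bias, and directly controlling for measured ability recovers the true return:

```python
import random

random.seed(0)

def mean(v):
    return sum(v) / len(v)

def cov(x, y):
    mx, my = mean(x), mean(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)

def slope(y, x):
    """OLS slope of y on x (intercept handled by centering)."""
    return cov(y, x) / cov(x, x)

def slope_controlling(y, x, z):
    """Coefficient on x in OLS of y on x and z (Frisch-Waugh-Lovell:
    residualize x on z, then regress y on the residual)."""
    xr = [a - slope(x, z) * c for a, c in zip(x, z)]
    return cov(y, xr) / cov(xr, xr)

n = 20000
ability = [random.gauss(0, 1) for _ in range(n)]
# Abler people get more schooling; wages reward both schooling and ability.
school = [0.5 * a + random.gauss(0, 1) for a in ability]
logwage = [0.06 * s + 0.10 * a + random.gauss(0, 0.3)
           for s, a in zip(school, ability)]

naive = slope(logwage, school)                      # inflated by ability bias
controlled = slope_controlling(logwage, school, ability)  # ~ true return 0.06
```

With these made-up numbers the naive estimate comes out around 0.10 against a true return of 0.06, a reduction of roughly 40% once ability is controlled for, which is the magnitude cited above.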

4. David Leonhardt reignited this debate by remarking that "a bachelor's degree pays off for jobs that don't require one: secretaries, plumbers and cashiers."  Observation: Replace the word require with the word use, and this supposedly pro-education factoid becomes yet another reason to believe in ability bias and/or signaling.

P.S. I couldn't resist searching David Card's 63-page article on "The Causal Effect of Earnings on Education" for "signaling/signalling."  There is only one appearance in the entire body of an article that Tyler treats as a decisive refutation of the signaling model.  And Card explicitly states that his approach makes no distinction between education that causally raises wages by increasing productivity and education that causally raises wages via signaling.  Footnote 14:
The market opportunity locus y(S) may reflect productivity effects of higher education, and/or other forces such as signalling.


COMMENTS (11 to date)
Steve Miller writes:

Can someone please explain to me why we live in a world where instrumental variables are preferred to direct controls?

Noah writes:

I was hoping for the EconLog Trifecta, but I guess I'll have to keep dreaming... ;-)

Nathan Smith writes:

I wonder whether anyone has taken into account the following hypothesis: that people with education earn more because they have debts to pay, and therefore have to work harder. That's my life.

Finch writes:

> that people with education earn more because they
> have debts to pay, and therefore have to work
> harder.

The corporate finance literature suggests firms behave this way. Some debt pressures them into making better decisions.

It wouldn't be that surprising to see that individuals feel the same effect.

Jack writes:

Small typo: the book chapter's title is: causal effect of education on earnings

Adam writes:

The Card paper includes estimates that control for IQ scores. For the IV estimates, the marginal effect of schooling is the same. (And for the OLS estimates the reduction is 20%, not 40%.)

I also think your view is incompatible with what you might call more macro evidence. A model of statistical discrimination has strong empirical predictions about, for example, the share of the population with a college (or high school) diploma and the wage premium for college. It must be true that, on average, each group gets paid its marginal product. So in a country where rates of college completion vary from about 20% in WV to 60% in HI, and coming off an era (say, 1915 to 2000) in which high school and then college completion rates rose from 0 to 90% and from 0 to ~30%, how is it possible that the return to college has been relatively constant?

A more general point is that you guys are conflating the claim that "college should be subsidized" with the claim that the marginal person to go to college reaps a significant reward. You are basically telling 18 year olds on the fence about college not to go, because the social return is low even though the private return is high. There are many reasons to argue that we do a bad/inefficient job of financing private investments in education. But it sounds like you are throwing that 18 year old kid under the bus to advance a different point.

scott cunningham writes:

Steve asked "Can someone please explain to me why we live in a world where instrumental variables are preferred to direct controls?"

Because rarely do these controls satisfy the Gauss-Markov condition that the conditional mean of the disturbance be zero. IQ is measured with error, for instance, making it endogenous (classical errors-in-variables).
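Scott's errors-in-variables point can be checked in a short simulation (invented coefficients, not estimates from any study): when the ability control is measured with error, it soaks up only part of the ability bias, so the coefficient on schooling lands between the naive estimate and the true return.

```python
import random

random.seed(1)

def mean(v):
    return sum(v) / len(v)

def cov(x, y):
    mx, my = mean(x), mean(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)

def slope(y, x):
    return cov(y, x) / cov(x, x)

def slope_controlling(y, x, z):
    # Coefficient on x in OLS of y on x and z (residualize x on z first).
    xr = [a - slope(x, z) * c for a, c in zip(x, z)]
    return cov(y, xr) / cov(xr, xr)

n = 20000
ability = [random.gauss(0, 1) for _ in range(n)]
school = [0.5 * a + random.gauss(0, 1) for a in ability]
logwage = [0.06 * s + 0.10 * a + random.gauss(0, 0.3)
           for s, a in zip(school, ability)]
# Measured IQ = true ability plus classical measurement error.
iq = [a + random.gauss(0, 1) for a in ability]

with_true_ability = slope_controlling(logwage, school, ability)  # ~0.06
with_noisy_iq = slope_controlling(logwage, school, iq)  # between 0.06 and 0.10
```

Note which way this cuts: on this logic a noisy IQ control understates the ability bias, so a 40% reduction from controlling for IQ would be a lower bound on the bias rather than an overstatement.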

dbc2 writes:

As an aspiring econometrician, albeit not one who plans to work in labor economics, this post lets me bring up one of my favorite underappreciated statistical concepts: post-treatment bias. Suppose one wants to measure the causal effect of A on B, and one wants to know whether or not to control for C. Generally, if one believes C may be correlated with A and there may be some causal pathway between C and B, it is necessary to control for C to get a consistent estimate of the direct effect of A on B (A here being education, B being earnings, and C being ability). However, even if C is correlated with A and there exists a path from C to B, it is not always appropriate to control for C. The general case when this should be avoided is when A has some (direct or indirect) causal effect on C. In this case, controlling for C will lead to biased and inconsistent estimates of the effect of A on B, for the simple reason that the estimates will not account for the effect of A on B that is mediated via C.

So, for example, if one wants to know the causal effect of being shot in the head on mortality, it is incorrect to control for, say, the presence of a hole in the brain. Indeed, one could run such a regression and would likely find that the estimated coefficient on being shot in the head is quite small, much smaller than the uncontrolled estimates. However, it would be quite unwise to conclude from this that being shot in the head has little effect on mortality.
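The post-treatment-bias mechanism dbc2 describes is easy to verify in simulation (the coefficients below are made up; A causes B both directly and through the mediator C): controlling for C strips out the mediated part of A's effect.

```python
import random

random.seed(2)

def mean(v):
    return sum(v) / len(v)

def cov(x, y):
    mx, my = mean(x), mean(y)
    return sum((p - mx) * (q - my) for p, q in zip(x, y)) / len(x)

def slope(y, x):
    return cov(y, x) / cov(x, x)

def slope_controlling(y, x, z):
    # Coefficient on x in OLS of y on x and z.
    xr = [p - slope(x, z) * q for p, q in zip(x, z)]
    return cov(y, xr) / cov(xr, xr)

n = 20000
a = [random.gauss(0, 1) for _ in range(n)]        # treatment (education)
c = [0.8 * t + random.gauss(0, 0.5) for t in a]   # mediator (e.g. measured IQ)
b = [0.3 * t + 0.5 * m + random.gauss(0, 0.3)     # outcome (earnings)
     for t, m in zip(a, c)]

total_effect = slope(b, a)                   # ~0.7 = 0.3 + 0.5 * 0.8
direct_only = slope_controlling(b, a, c)     # ~0.3: mediated effect removed
```

So if schooling itself raises measured IQ, a regression that controls for IQ will understate the total return to schooling, which is the comment's worry about the controls-based estimates.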

Why is this relevant here? Well, there may be some reason to believe that IQ scores or proxies for them are themselves affected by education. Primary evidence for this comes from actual randomized controlled experiments, such as the Perry preschool program, for which weak instruments or instrument validity should not be a concern, in addition to quasi-experimental methods.*1 The results suggest that the causal effect of education on measured IQ is positive and statistically significant, although critics have loudly pointed out that effects appeared to be modest and to fade over time for the intervention considered. Though one may again quibble regarding instrument validity or relevance for many such quasi-experiments, the presence of such evidence suggests that one should be cautious regarding straight OLS (GLS, NLLS, Nadaraya-Watson regression, series least squares, parametric or nonparametric quantile regression, etc. etc. you get the point) estimates controlling for IQ, as they may be somewhat biased by the presence of such effects. The solution, of course, is once again proper experiments or credible quasi-experiments, which should provide estimates of effects which are not influenced by either omitted variable bias or post-treatment bias. (Under some assumptions, which would require finding good controls for IQ, the method of extended instrumental variables of Chalak and White (2011) *2 provides a method which should allow the quantification of such bias. That paper also provides a good reference regarding precisely when and how to control for variables which may be related to the variables of interest). 
And so, given the preponderance of quasi-experiments and actual randomized controlled experiments finding a significantly higher effect for education than the method using controls, it seems at least as reasonable to treat that method as untrustworthy as to place significant credence in its estimated coefficients as measures of the true causal effect.

This may be less relevant, of course, if one uses estimates of IQ taken prior to the administration of the treatment. In such a case, there still exist concerns as the causal effect of education on income may differ across subjects with different measured IQ, but methods exist to account for this which do not put too many additional requirements on the quality of the data beyond those used in estimating causal effects more generally. Heckman and Vytlacil *3 have a number of papers on estimating treatment effects in the presence of an essentially arbitrary degree of heterogeneity in returns which should be relevant here, though not doing labor economics I don't know whether such techniques have been used to address this particular question. However, given that the Card paper you reference suggests that studies which use both a quasi-experimental method and a control for IQ have found little difference in estimated effects, this suggests that the finding that controlling for IQ reduces the effects by the magnitudes you suggest may be a feature of the particular measure used in the studies (unreferenced here) you have examined, and I would be quite careful regarding whether the measure is pre or post treatment, as well with areas of heterogeneity which are known to affect the estimated return (age, previous education, location, time period of sample, etc), as such results may not be representative of the population of relevance for the debate.

As for the other issues raised, I don't have much to say, except to note that even you seem to be confusing signaling and ability bias explanations. The reason the Card paper notes that most of the methods have little to say regarding signaling theories per se is not that there exists residual concern that these estimates are capturing ability bias, whether that ability is conventional intelligence, work ethic, or any other sort of characteristics relevant to job performance: these appear to be reasonable estimates of the true causal effect of schooling, and so signaling models cannot be taken to say that students should not be concerned with their level of education. Thus, your point 4, which provides an indirect argument for ability bias (which requires, to be a valid critique, that the causal effect of education on performance in these jobs is small or zero, which may or may not be true but appears not to be an interpretation accepted by other participants in the debate, and so ought to be evaluated by some test more severe than mere introspection), seems irrelevant in the face of the large literature which shows that the causal effect, taking into account ability bias, is indeed substantial. Rather the concern is that these measure individual rather than equilibrium effects. If little Johnny gets one more year of school, he probably will earn approximately 5-15% more because of it (depending on lots of things). The issue is whether this is socially beneficial, or if Johnny is just free-riding off of other students who are on average more able than him so that employers will believe he is more able than he truly is, or if this reflects real improvement in work-relevant abilities.

The appropriate evidence for such a claim cannot be gleaned purely from micro-level studies of the sort considered in the returns to education literature, and must rely on macro-level evidence. Cross-national evidence finds a positive correlation between aggregate educational attainment and output, but that is not sufficient to make conclusions regarding true causal effects at the macro level (which would be decisive against a large signaling effect as at the market level the standard Spence story of pure signaling should yield a zero or slightly negative correlation between aggregate schooling and output as additional schooling may have an opportunity cost in terms of amount worked). Barro and Lee have studies on this, and note that these effects are similar when using an IV approach, though again one can try to dispute their IV. Significant causal effects at the macro level may be consistent with a signaling explanation under a certain strange combination of circumstances which I have not heard advocated by anyone but is at least logically possible, which is that individually schooling has no causal effect on job-relevant ability and serves purely to differentiate high and low types, but at the aggregate level schooling has significant positive production externalities. This could be the case, say, if the act of schooling itself has some sort of external benefit (college students contribute to the stock of knowledge or what have you). Such a story is at least consistent with the Barro-Lee evidence, which finds that at the aggregate level the returns are highest for higher education, while the micro-evidence suggests that returns are highest for low levels of education. Of course, this would require one to believe that the signaling value of preschool is greater than the signaling value of college, which may not be the most palatable assumption, as college education is often put on resumes but preschool education is not. 
As such, more plausible causal stories are that there are both individual ability and aggregate externality effects of education and the former predominate for primary education while the latter predominate for secondary and tertiary. Or, as the macro evidence is much less developed, there could be some issues with the instruments such that we should discount the macro-level studies as well. A previous commenter mentions time series evidence which is also consistent with significant positive aggregate returns, which should account for unobserved country-level heterogeneity (fixed effects) but again may not be conclusive due to the possibility of endogeneity over time.

One may also look to predictions regarding the specific mechanisms involved in signaling and statistical discrimination. A little bit of bibliography-hopping directs me to a paper which tackles this very question, estimating the relative effect of signaling by examining how quickly wages converge upon hiring to the level consistent with true rather than inferred ability (using achievement test scores as their unobservable measure of true ability).*4 They find that employers learn relatively quickly, and that an upper bound of 10% for the amount of the return to education accounted for by signaling seems plausible, though they admit that if you fiddle around with the parameters enough you can get this upper bound to reach values of at most 45%, suggesting that estimates of signaling effect around 40% are reaching the upper bound of the upper bound of plausible estimates. Note also that since this study uses achievement test scores as a measure of ability, the measure of returns to education may be subject to the post-treatment bias discussed above, and so this study will overestimate the amount to which signaling can contribute to the returns to education.

Given that the micro evidence is not supportive of large ability bias, and neither the macro evidence nor direct tests of statistical discrimination present evidence for substantial signaling effects, it appears that at best, if one were to discount the data due to lack of certainty regarding causal effect estimation, one would not be able to conclusively support point estimates where signaling accounts for a large proportion of the returns to education, except as high points of a very wide confidence band centered at a much lower level. The return to education is probably one of the most studied areas in all of empirical economics, and yet there seems to be very little substantial evidence in favor of strong signaling effects.
Generally speaking, if one were to take even a very high estimate of the degree of ability and signaling bias in estimating the true productivity returns to education and very low estimates of the uncorrected return, one would have an estimate still substantially higher than that of the average return on bonds, and taking even moderate values of the true return suggests rates much more comparable to the average return to equity or higher, which might be what someone who considered the topic naively would expect if human capital has properties roughly similar to physical capital. Even with high estimates, whether such accumulation should be subsidized is not answered by measures of the true return, as that depends on what one believes about the presence of market failures (externalities, barriers to arbitrage, etc) in human capital markets, which requires more information than just rates of return.

1: Hansen, Heckman, and Mullen (2004), "The effect of schooling and ability on achievement test scores," Journal of Econometrics, 121 1-2, pp. 39-98.
2: Chalak, K. and H. White (2011), “An Extended Class of Instrumental Variables for the Estimation of Causal Effects,” Canadian Journal of Economics, 44, pp. 1-51.
3: Heckman, James J. (2010), "Building bridges between structural and program evaluation approaches to evaluating policy" NBER working paper 16110, http://www.nber.org/papers/w16110 provides a relatively comprehensive and nontechnical summary of this work.
4: Lange, Fabian (2007) "The Speed of Employer Learning," Journal of Labor Economics. See also Lange, F. and Robert Topel (2006), "The Social Value of Education and Human Capital" Handbook of the Economics of Education, vol. 1, pp. 459-509. for a similar study containing a broad literature review on the issues here.

Dan Carroll writes:

Wow, that last comment (dbc2) was long.

I wonder what is the correlation coefficient between believing that ability is genetic and believing in the signaling model of education.

Generally, my observations are:
1. Signaling and ability bias account for all of the difference in earnings between Harvard grads and grads of other reasonably well-run 4-year schools. This is corroborated by the fact that earnings of Harvard grads get an initial boost after graduation, but dissipate over time relative to comparable grads elsewhere. Indeed, anecdotal evidence suggests that one can get better instruction at the latter.

2. Both signaling and education (college versus no college) have high ROIs for the individual. Education has positive externalities as well, as does perhaps signaling.

3. However, it is not clear that formal institutional training in its current form is superior to alternatives, such as privately motivated training verified by certification and testing. Indeed, the private sector has been evolving towards a certification regime for various specialties (and in many cases going too far by legislating it).

Troy Camplin writes:

Please ask Tyler what it is I am signaling with my Ph.D. in the humanities, M.A. in English, and B.A. in Recombinant Gene Technology, because whatever it is I am signaling, it's not that I should be hired to do anything. I'm serious. Ask him. If education is signaling, he should be able to answer what it signals in specific cases.

Martin writes:

@dbc2
Great post. I feel I must read all of your work. Could you post some links please?

Thanks
