Bryan Caplan  

The Ethics and Etiquette of Statistical Discrimination: A Critique of Readers' Comments

Last week, I posed the following challenge:
[T]he inevitable existence of some statistical discrimination doesn't make the practice immune to criticism.  You can grant that it's OK to some degree, but - even if the law is silent - still limited by ethics and/or etiquette.  But precisely what limitations do you think are justified, and why?
Many readers took the bait.  Here's my critique of the most interesting responses.

From Phil:
Perhaps if the variation within the group is much higher than the group difference from the mean, the benefit (the amount of error reduced by statistical discrimination multiplied by the cost of error) is much, much less than the cost...
But don't market forces already provide incentives to take within-group variation into account?  If half of employers act as if everyone in group X is average, that leaves the remaining employers with plenty of cream to skim.  So is there any obligation to go beyond this market outcome?
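Phil's condition can be made concrete with a toy calculation.  All the numbers below are invented for illustration: two groups whose means differ by a small amount relative to the spread of individuals within each group.

```python
# Hypothetical numbers: two equally sized groups whose mean gap is
# small relative to the within-group spread.
within_sd = 15.0      # spread of individuals inside each group
gap = 2.0             # difference between the two group means

# Predicting everyone at the grand mean has mean squared error equal
# to within-group variance plus between-group variance (means sit at
# +/- gap/2 around the grand mean).
mse_ignore_group = within_sd**2 + (gap / 2) ** 2

# Predicting each person at their own group's mean removes only the
# between-group component.
mse_use_group = within_sd**2

reduction = mse_ignore_group - mse_use_group
print(mse_ignore_group, mse_use_group, reduction)
```

With these particular numbers, using group membership cuts prediction error by less than half a percent, which is Phil's point; a larger gap or a smaller within-group spread would change the answer.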

From HispanicPundit (who doesn't "necessarily agree with these reasons"):

What about when it harms someone else? That seems like a fair limitation.

Statistical discrimination always harms people who are above average for their group, so on this theory statistical discrimination is never permissible.  Which is crazy for the reasons I gave in the original post.

If statistical discrimination leads me to give someone lower preference in an interview, for example, maybe I should give that person a deeper look to compensate for what could be my statistical error?

Statistical error is always possible, but it could go in either direction, so it's hard to see why compensation is typically in order.

Or how about, if it runs parallel to racial stereotypes. For obvious historical reasons.

Racial stereotyping is of course one common form of statistical discrimination.  But everyone uses racial stereotypes some of the time; even black taxi drivers hesitate to pick up groups of young black male passengers.  So blanket objections aren't plausible.

From Henry:

[T]he reasons for different standards mostly lack any philosophical depth and are instead post-hoc reactions to past injustices. We frown upon racial discrimination not necessarily because racial discrimination is currently worse than any other kind, but because of what it used to be like.

I tend to agree.  In signaling terms, people criticize statistical discrimination because they don't want people to think that they'd engage in taste-based discrimination.

From David O:

I would make a distinction between mutable and immutable characteristics. Then I would recognize a few of those are a little fuzzy because while these characteristics may be changeable, they are very difficult to change.

We should avoid discriminating based on truly immutable characteristics such as race, gender and sexual orientation.
But everyone statistically discriminates on the basis of immutable characteristics - when you prefer a female baby-sitter, market Maxim to men, speak Spanish to someone who looks Hispanic, etc.  In fact, it's often seen as rude not to statistically discriminate; for example, focusing on stereotypically male interests in a mixed-sex conversation.

David O adds:
I would also argue that from a political economy standpoint, libertarians should oppose statistical discrimination. The idea of equality of opportunity, that anyone from any background can become successful, is one of the best safeguards against redistributive socialism. Destroy it and watch how the politics becomes a discussion of class warfare.
The problem, though, is that market forces often strongly encourage statistical discrimination.  So libertarians have two choices: either join the popular chorus against it, but insist that it should still be legal; or argue that it should be legal because it isn't nearly as bad as it's perceived to be.  The latter might make markets less popular, as David suggests; but endorsing an ideal that markets will never meet seems counterproductive in the long run.

From stephen:
It seems the "wrongness" of discrimination is inversely proportional to its usefulness. The more you know about the individual(s) the less you need statistics. It would be silly to assume someone is going to take on the average value of the population when you know they are in the 95th percentile, for instance.
From Oliver Beatson:
I think that discrimination is probably only unethical (if at all) when it is intentionally harmful.
Aren't both of these statements just another way of saying that taste-based discrimination is bad, but statistical discrimination is OK?

From Carlsson:

Statistical discrimination makes a lot of sense if rationally applied, but think of the data requirements in order to apply the theory correctly. You need a whole lot more than the means of two (or more) distributions, you'll need variances too, along with skewness. You'll need to be able to compute probabilities of Type I and II errors, obviously...

What most people call statistical discrimination is just the application of rules of thumb, based on little or no data. Prejudice is not statistics.

This strikes me as an unreasonably binary perspective.  In the real world, there's a continuum between careful actuarial analysis and pure prejudice.  And I'll bet (terms open to negotiation) that popular stereotypes correlate highly with higher-quality statistical work.

Philo gets closest to my own view:
Relying on statistical inference is not unethical; it is simply rational. But etiquette will sometimes require that we conceal some statistical inference we have made about someone from that person; we ought not needlessly to hurt other people's feelings.
Bad etiquette and bad ethics overlap.  It's rude and wrong to tell people they're ugly, even (especially?) when it's true.  The same goes for statistical discrimination.  In fact, there's a good case for a broader ethic of "Don't ask, don't tell."  It's rude and wrong to ask others if they think you're ugly - or whether they take statistics about your group into account when they decide how to treat you.

From John:

It's perfectly fine to discriminate. Rules of thumb are fine. Incorrect rules are fine. Prejudice is fine...

This seems to me to be the obvious position for any libertarian.

The obvious position for any libertarian is that these things should be legal, nothing more.  "Fine" is much stronger.  In any case, I'm not just presenting the libertarian position; I'm trying to find common ground between libertarians and reasonable people with other views.

Two points that I don't think anyone made in the comments:

1. While "Don't ask, don't tell" is the most reasonable limitation on statistical discrimination, a norm of "give people a chance" is also plausible when the cost is low.  It's like letting others merge in front of you in traffic.

2.  Many people have contractual or semi-contractual obligations not to statistically discriminate in some ways.  Professors, for example, are supposed to grade students based on their classroom performance alone.  In purely statistical terms, this is suboptimal - a weighted average of classroom performance and outside information would yield more accurate predictions.  But part of the product universities sell is "judging people as individuals" - and it would be wrong to renege on that promise.  In a free market, I suspect that many other businesses would pledge allegiance to similar principles - provided, of course, that it's inexpensive to do so.
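The "weighted average" claim in point 2 can be sketched with a toy normal-normal model.  Every parameter here is a hypothetical: true ability is drawn around a known group mean, and the classroom grade is a noisy measurement of it.

```python
import random

random.seed(0)

# Illustrative assumptions (not real data):
group_mean = 70.0   # known mean of the student's group
ability_sd = 10.0   # spread of true ability within the group
noise_sd = 10.0     # measurement noise in classroom performance

# Bayes-optimal weight on the noisy grade under normal-normal shrinkage.
w = ability_sd**2 / (ability_sd**2 + noise_sd**2)

n = 50_000
err_grade_only = 0.0
err_weighted = 0.0
for _ in range(n):
    ability = random.gauss(group_mean, ability_sd)
    grade = ability + random.gauss(0.0, noise_sd)
    err_grade_only += (grade - ability) ** 2
    err_weighted += (w * grade + (1 - w) * group_mean - ability) ** 2

# Shrinking the grade toward the group mean predicts true ability more
# accurately than the grade alone.
print(err_grade_only / n, err_weighted / n)
```

The simulated mean squared error of the weighted predictor comes out well below that of the grade alone - exactly the sense in which grading on classroom performance by itself is statistically suboptimal.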


COMMENTS (14 to date)
Hyena writes:

When would the cost be high? Statistically speaking, we live in a very low risk world. When the cost truly is high, we usually engage in lots of research ahead of time. Anyone who doesn't is at a disadvantage: research can uncover false signals or crystallize views based on verifiable facts. I've watched a lot of people be hired through false signaling and myself have often been hired over much better applicants (I love knowing who I beat out) because I'm very good at disabling the interview process.

The issue that arises with point two is that it assumes things about the people judging. You're assuming that bias or non-academic preference would not overtake any signals. The alternate story is one of reproducibility: objective grading standards mean that any person should be able to reproduce your results.

Likewise, the other tests of discrimination should resolve on that issue: can other people reproduce your decisions using merely the transcripts of your interactions? If not, then you're relying on "secret knowledge" and accepting this seems like an epistemic sin.

Steve Sailer writes:

The ethical response is to try to judge people individually by objective measures, avoiding favoritism, just as institutions should avoid nepotism, cronyism, and the spoils system. For example, firemen and other civil servants could be promoted by blind-graded civil service exams, an idea that was invented in China 1400 years ago, and was brought to the federal government in the Chester Arthur administration.

This entire discussion seems to be going on in a Bizarro Universe in which the Supreme Court never invented the legal rule of "disparate impact" 39 years ago. The legal issue for the last 39 years has not been whether you can unfairly discriminate against protected classes -- you can't -- but whether you can use objective, fair measures to treat job applicants in an unbiased fashion, or if you'll have to systematically discriminate against unprotected classes to avoid legal trouble.

Steve Sailer writes:

Hyena writes:

"I've watched a lot of people be hired through false signaling and myself have often been hired over much better applicants (I love knowing who I beat out) because I'm very good at disabling the interview process."

Precisely. And the regime of disparate impact law over the last 39 years has made interviewing (along with resume keywords) the legally safest way to hire, even though it's obviously much more whimsical than written methods. For example, when I was at Dun & Bradstreet in 1993, I was told to hire a computer programmer. So, I went to Human Resources and asked for their computer programmer test to give job applicants. I was told that if D&B had a written test, they'd need to expensively validate it for the EEOC as meeting the strict test of "business necessity." I was, however, free to make up any programming questions I liked and give them to job applicants orally, as long as I didn't write them down, which would leave a paper trail.

On a much vaster scale, in 1981 the outgoing Carter Administration junked the federal civil service exam, the new and sophisticated PACE test, in a consent decree settling the Luevano disparate impact discrimination case. Carter's people promised that a new federal civil service exam would be introduced as soon as one was invented that didn't have disparate impact.

That was 29 years ago.

In place of the civil service test, the federal government has since used a hodge-podge of hiring techniques.

Je writes:

I am curious about your analysis of whether grades should be based on statistical discrimination. If you wanted the grades to be maximally useful in isolation as a predictor, you would discriminate; I agree on that.

But if you are supposed to use the grades in combination with other information it is far less clear. Potential employers presumably know the gender for example of the applicant in addition to the grades. So there would be no reason to adjust the grades of women upwards just because women are usually better academically. The employer can make that adjustment themselves.

Tracy W writes:

Professors, for example, are supposed to grade students based on their classroom performance alone. In purely statistical terms, this is suboptimal - a weighted average of classroom performance and outside information would yield more accurate predictions.

More accurate predictions of what? Shouldn't grades be measuring classroom performance? How does adding in outside information make for more accurate predictions in that case?

david writes:

Certain kinds of statistical discrimination are more likely to create entrenched social problems than others, and the law isn't perfectly intended to counteract this but instead is a patchwork job?

Josh Weil writes:

@Tracy W

"More accurate predictions of what? Shouldn't grades be measuring classroom performance? How does adding in outside information make for more accurate predictions in that case?"

I think grades - to a degree - also measure how well one knows the material. For example, if a student produces a whole semester's worth of high quality work and has an emergency and does extremely poorly on an exam, I think a professor can take those circumstances into account when assigning a grade, as long as the cost is low.

Philo writes:

You should probably distinguish between prejudice and discrimination, the former being purely epistemic--a matter simply of belief or expectation--while the latter is practical--a matter of *action* (directed toward the person being discriminated for or against). Statistical prejudice (assuming one makes use of all the available evidence) is simply rational, but it is a further question how to act, given one's epistemic condition.

Example: suppose the quicker I can find someone with property P and bestow a valuable prize upon him/her, the greater the reward *I* will receive. If I encounter you, and (initially) all I know about you is that you belong to group G, and I know that 90% of the members of G lack property P, I should not expect you to have property P. (If property P is a good property, I will be displaying prejudice *against* you.) But it is another question how I should treat you--what action I should take toward you. If I dismiss you from consideration, proceeding to examine other people for possible possession of property P, I will be discriminating against you. This may or may not be rational and proper. My best course might be to gather more information *about you* (to "give you a chance"); perhaps you can even demonstrate to me, quickly and easily, that you do, after all, possess P. Among the additional information that goes into determining whether discrimination is proper: how quickly and easily can I pass on to other candidates, how common is P in the general population, how easy is it likely to be to test other candidates for P, how rapidly does my own potential reward decline with the passage of time, etc.

It is easy to see that, for now, I should expect you *not* to have P; I should be prejudiced against you. It is a much more complicated question how I should deal with you: what form (if any) of *discrimination against* you would be proper.
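Philo's distinction boils down to comparing expected values of actions, given the same prejudiced belief.  A minimal sketch, in which the prize value, costs, and probabilities are all invented numbers for illustration:

```python
# Toy version of Philo's search problem (all numbers hypothetical).
p_has_P = 0.10        # 90% of group G lack property P
prize_value = 100.0   # my reward for finding someone with P
test_cost = 2.0       # cost of checking whether a candidate has P
base_rate = 0.30      # frequency of P in the general population
next_cost = 1.0       # cost of moving on to another candidate

# Expected value of testing the current (group-G) candidate now:
ev_test_now = p_has_P * prize_value - test_cost

# Expected value of skipping to a fresh candidate drawn from the
# general population (one extra search step, then a test):
ev_move_on = base_rate * prize_value - test_cost - next_cost

print(ev_test_now, ev_move_on)
```

This one-step comparison ignores repeated search, but it captures Philo's point: the prejudiced belief is fixed, yet whether discrimination (moving on) is rational depends on the costs.  With these particular numbers moving on wins; a lower base rate or a higher search cost flips the answer, in which case "giving you a chance" is the rational act.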

gabriel writes:

another model would be if statistical discrimination were endogenous to the traits on which it is based. or in plain english, imagine if the reason why a group tends to be characterized by certain problems is because people assume that members of the group have those problems. (i believe roland fryer has a formal model along these lines). a norm of ignoring the priors would then be a collective action attempt to shift to a better equilibrium.

Steve Sailer writes:

Let's consider a real-world example to see what America is actually like in 2010, rather than in a theoretical world in which Bayesian insights are held in high regard.

Robert Klitgaard, now with the Rand Institute, pointed out in his book Choosing Elites in the 1980s based on his work in the Harvard admissions office and wide review of the social science literature, that African-Americans who score high on the SAT tend to underperform in college in terms of GPA by about half a standard deviation. There are a lot of theories about why this is, but the most unsettling and Occam's Razorish is the old statistical tendency of regression toward the mean. High scores by blacks on individual tests are more likely to be flukes than high scores by Asians, because the black mean toward which data points regress is lower than the Asian mean.

Now, in the quarter century since Klitgaard published his finding, has a wave of discrimination against high-scoring blacks swept the country as everyone comes to understand this logic?

Not exactly. In fact, almost nobody has ever heard of it.

Instead, the dominant assumption in the American discourse is that the SAT is biased against blacks in that it underpredicts subsequent performance. That it tends to overpredict subsequent achievement is simply unknown, if not unthinkable. In countless testing situations, the legal assumption is that blacks and whites ought to score the same on the hiring or promotion test, and if they don't lead to outcomes meeting the EEOC's Four-Fifths Rule (which they always don't), the employer risks expensive litigation. (So, if they're smart, employers impose de facto quotas on themselves.)

My personal view is that in formal selection processes, we shouldn't worry about the Klitgaard Effect. We should make the formal process as predictively valid as is reasonable within a cost-benefit structure, and put up with some inevitable regression toward the mean effects. But the actual public debate is definitely not between the Bayesian advocating color-aware admissions and hiring and the Sailerites advocating colorblind selection, even at the cost of some efficiency.
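Sailer's regression-toward-the-mean logic can be checked with a quick simulation.  The group labels and every parameter below are invented, not drawn from any real data: two groups with different mean true ability but the same spread, and a test that adds independent noise.

```python
import random

random.seed(1)

# Hypothetical parameters (illustration only, not real data):
means = {"A": 100.0, "B": 85.0}  # group means of true ability
ability_sd = 15.0                # within-group spread, same for both
noise_sd = 10.0                  # independent test noise
cutoff = 120.0                   # "high score" threshold

avg_ability = {}
for grp, mu in means.items():
    total, count = 0.0, 0
    for _ in range(200_000):
        ability = random.gauss(mu, ability_sd)
        score = ability + random.gauss(0.0, noise_sd)
        if score >= cutoff:
            total += ability
            count += 1
    avg_ability[grp] = total / count

# Among high scorers, members of the lower-mean group have lower
# average true ability: their high scores are more often flukes,
# because they regress toward a lower mean.
print(avg_ability)
```

The simulation reproduces the qualitative pattern Sailer attributes to Klitgaard: conditioning on the same high observed score, expected true ability is lower for the group with the lower mean.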

Bob Layson writes:

If we are to treat people according to 'their' group, do we not have a moral/intellectual duty to get the group right? They may be sortable into two groups or many more, or be in a group of their own. What to expect of a member of many groups may be unknowable statistically, or rather, the expectations may point in opposite directions.

Tracy W writes:

Josh Weil - agreed, but that's not statistical discrimination. That's using past information about the person themself. And I don't see how statistical discrimination would help a university professor - most topics taught at university are only learnt by a minority of people, no matter what ethnic group or age or gender breakdown. Students doing my electrical engineering degree may have been 90% male, but that doesn't mean that 90% of men know a lot about Maxwell's Equations.

Hyena writes:

Steve Sailer,

I actually work for the Federal government and we use a civil service exam plus a scoring system for resumes. Interviews add very few points to an applicant and only to applicants screened through the civil service exam and resume review as having very high scores relative to the pool (top 3).

I find this to be a very good idea for applicant screening, though our implementation and other aspects of personnel practice leave much to be desired and generally mitigate the benefits of the system.

Steve Sailer writes:

The State Department and some other departments came up with their own exams to replace the PACE. The military, of course, continues to use the AFQT/ASVAB, and very seldom allows anyone with an IQ below 92 to enlist, even in wartime.
