Bryan Caplan  

Lake Wobegon on the Job

Neat stuff from Baker, Jensen, and Murphy's "Compensation and Incentives: Practice vs. Theory" (Journal of Finance, 1988):
The lack of financial incentives reported by Medoff and Abraham [32] and summarized in Table I is surprising, but even more surprising is the result that supervisors tend to assign uniform performance ratings and tend not to assign poor performance ratings. Only 0.2 percent of the 4,788 employees in Company A received the lowest rating; 94.5 percent were rated "Good" or "Outstanding". None of the 2,841 Company B employees received an "Unacceptable" or "Minimum Acceptable" rating, and only 1.2 percent received a rating of "Satisfactory"; 95 percent of the Company B employees were rated "Good" or "Superior".

The general reluctance of managers to give poor performance evaluations to employees is puzzling but consistent with well-documented evidence that most people believe their performance is better than average. Of several studies cited in Meyer [31], one indicates that 58 percent of a sample of white-collar clerical and technical workers rated their own performance as falling within the top 10 percent of their peers in similar jobs, and 81 percent rated themselves in the top 20 percent. Only about 1 percent rated themselves below the median. Another study of 1,088 managerial and professional employees found an even stronger bias: 47 percent rated their own performance in the top 5 percent, 83 percent rated their performance in the top 10 percent, and no one rated their performance below the 75th percentile.

The biased perceptions of individuals regarding their own performance may explain why supervisors appear to have a strong aversion to giving subordinates poor evaluations. There will be more dissatisfaction induced by telling someone that he or she is in the bottom 20 percent than there will be satisfaction induced by giving a top-20-percent rating. Telling everyone that they are average will make almost everyone unhappy. Forced-ranking systems will therefore generate considerable conflict in organizations. Similarly, pay-for-performance systems that provide large rewards for good performance and small rewards for mediocre performance will be avoided, since these schemes force managers to give poor evaluations to a large number of employees. Visible rewards will not be granted for superior performance unless there is significant incentive for superiors to undertake the unpleasant task of telling subordinates that they are poor or even average performers.
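The self-rating figures quoted above imply a hard arithmetic floor on how many respondents must be wrong: at most 10 percent of a group can truly be in its top 10 percent. A minimal sketch (the percentages are the ones cited from Meyer [31]; nothing else here is from the paper) tabulating the minimum share of each sample that must be overrating itself:

```python
# Compare the share of workers who placed themselves in a top bracket
# with the largest share that could actually belong there.
claims = [
    # (description, share claiming the bracket, share that can truly be in it)
    ("clerical/technical: top 10%", 0.58, 0.10),
    ("clerical/technical: top 20%", 0.81, 0.20),
    ("managers/professionals: top 5%", 0.47, 0.05),
    ("managers/professionals: top 10%", 0.83, 0.10),
]

for label, claimed, possible in claims:
    # The gap is the minimum fraction of respondents who must be mistaken.
    overclaim = claimed - possible
    print(f"{label}: {claimed:.0%} claim it, at most {possible:.0%} can be right "
          f"-> at least {overclaim:.0%} are overrating themselves")
```

So in the first sample, at least 48 percent of workers must be wrong about being in the top decile, whatever the true ability distribution looks like.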
Would my pet proposal for improving student evaluations make any difference here?

COMMENTS (10 to date)
Tom West writes:

The finding has been quoted several times, and I tend to agree: having a realistic sense of exactly where you stand in the hierarchy of ability is highly correlated with depression.

An organization that fails to account for the fact that its employees are human beings and not homo economicus is probably a failing organization.

I understand the intellectual desire to have reality match nice, elegant economic models, but trying to force square human pegs into homo economicus holes is going to result in unhappy human beings and disastrous productivity.

In other words, realistic job evaluations are almost certainly not in *anyone's* interest, including the organization's.

Emily writes:

This is an issue in military performance ratings. It makes it hard from a research perspective to tell things like which kinds of officers are higher-performing (for instance, by commissioning source) or what the relationship is between performance and attrition (are your best officers leaving?) when your measure of performance has so little variation. Time to promotion gets used a lot instead, and you can look at performance in the actual commissioning sources (like, how good were their grades at the Academy), but it'd be nice to have functional OERs as well.

However, I gather they still figure out how to communicate who the high-performers really are via the written sections. I suspect the same thing happens among civilians- that for the purposes at least of internal promotion, there are ways to distinguish between candidates who all have "superior" ratings.

David writes:

Supervising a number of employees who do a similar, but not easily quantifiable, job offers few incentives to rack and stack them by performance. The best may not be demonstrably better than the worst, leading to value judgements that make performance reviews painful for supervisor and supervised alike. It only gets worse if compensation is tied to the review. It's easier to give "average" grades and comp.

MingoV writes:

Anyone who has been a supervisor or a manager understands the temptation to give all workers an above-average rating (the Lake Wobegon effect). Honest evaluations result in angry employees. That is worsened if evaluations occur only once per year and employees receive no feedback throughout the year.

I was a laboratory director who required supervisors to give honest employee evaluations based on the radical concept that an average employee gets an average score. Supervisors had to show me their evaluations before discussing them with employees, and I rejected upward-shifted evaluations. The honest evaluation system resulted in many angry employees, some of whom settled down after we posted histograms of ratings by lab section. There were few problems in subsequent years.

Low performers were counseled and warned. High performers were praised and given higher-level (and more interesting) duties. Our small pool of bonus money was split among the highest performers.

Handle writes:

In the Air Force, enlisted accessions are indeed Lake Wobegon - practically all of them score above average on the AFQT (equivalent to a percentile-scoring IQ test - Category I + II + IIIA = 50th percentile and above).

Maniel writes:

Evaluations can be useful, but they are most definitely not trivial. In the late ‘80s, I took a one-week engineering management class from Dr. Frank Wagner on the 5 levels of commitment. His message was that we would benefit from committing to 1) our company, and therefore, our direct supervisor, 2) those who report to us, 3) our customer, 4) our specialty (e.g., electrical engineering), and 5) ourselves (and our careers). He constructed elaborate processes for 1) obtaining and independently analyzing anonymous evaluations from our direct reports in a variety of areas including leadership, listening, objectivity, etc. and 2) following up the evaluation results with plans to exploit strengths and address challenges. His messages were that each of us could learn a great deal from the observations of others and that those lessons-learned held the potential to strengthen “the team.” He gave much higher priority to the goal of improvement than to that of comparison. I found his approach to be valuable; it is consistent with many aspects of Six Sigma and ISO 9001 which came afterwards.

ThomasH writes:

If employees have different ideas of what "good performance" is and each does what she thinks is best, everyone will think they are above average.

Tom West writes:

Here's an interestingly timed article, "The Poisonous Employee-Ranking System That Helps Explain Microsoft's Decline," which appeared in Slate today.

This is pretty much the thesis:

At the center of the cultural problems was a management system called “stack ranking.” Every current and former Microsoft employee I interviewed—every one—cited stack ranking as the most destructive process inside of Microsoft, something that drove out untold numbers of employees.

Granted, there are no figures, but it certainly meshes with my expectations about the results of such a system.

Glen Smith writes:

In my field (engineering), I found that ratings inflation was not really all that severe a problem. Anybody still employed was at least meeting expectations; otherwise they would have been fired or somehow moved to another position where they were average or better. So the starting point for an employed engineer on a scale of 1-5 was about 3. I guess that generally meant that if you saw an engineer rated above expectations, that employee was probably just a good employee. The problem was more that there wasn't much room to give high marks to good performers without subjecting the manager to expectations (usually compensation-wise) that he/she couldn't meet. So there was also a lot of deflation, if you will.

WM13 writes:

As a manager, I note that (i) subordinates who receive middling ratings will be resentful, (ii) anyone who is seriously sub-par should be fired, and (iii) there is room for only a small number of promotions. Accordingly, the optimal strategy is to (i) rate the one or two people you want to get rid of as "bad," (ii) rate the one or two people you want promoted as "excellent," and (iii) rate almost everyone else as "good." And that is what I do, and what I think most managers do. Switching all those "good" ratings to "average" might be intellectually satisfying, but it would achieve nothing.
