
James Bullard has a PowerPoint presentation on neo-Fisherian economics. It concludes with 8 bullet points; here is the first:

Policymaker conventional wisdom and NK theory both suggest low nominal rates should cause inflation to rise.

Is this actually what they believe? I have mixed feelings. On the one hand, IS/LM certainly doesn't say that low interest rates will lead to higher inflation; rather, it says that low interest rates caused by a rightward shift in the LM curve (i.e. easy money) will raise inflation. But on the other hand, Bullard is certainly correct that both monetary policymakers and New Keynesian economists often talk as if low rates lead to higher inflation. I'm not sure if they really believe that, or if they are just temporarily "reasoning from a price change".

In contrast, the Neo-Fisherian view is that low interest rates lead to lower inflation. This view is also an example of reasoning from a price change, and hence is also false. Low rates lead to low inflation only if the low rates are caused by a leftward shift in the IS curve.

The market monetarist view is that easy money leads to higher inflation, and easy money sometimes lowers interest rates and sometimes raises them. Any reductions in interest rates tend to occur in the short run, whereas higher interest rates tend to result in the long run. In addition, it's more useful to think in terms of causation as going from inflation to interest rates, rather than interest rates to inflation. (The one exception is if you hold the money supply and the IOR rate constant, in which case lower rates are deflationary.)

The simple empirical evidence reviewed here suggests this is not happening even after 6.5 years of ZIRP.

That's right, and that's why the conventional view is wrong.
Even if the Fed begins normalization this year, U.S. and other rates will still be exceptionally low over the medium term.

These very low rates may be pulling inflation and inflation expectations lower via the neo-Fisherian mechanism.

That seems extremely unlikely, given that the markets respond to news of a surprise Fed rate increase in a way that suggests the market views this policy as disinflationary.

For now, I am willing to argue that current inflation is low in part due to temporary commodity price movements, and that inflation expectations remain well anchored.

If the neo-Fisherian effect is strong in the quarters and years ahead, however, we will need to think about monetary policy in alternative ways.

But how would we know if the "neo-Fisherian effect is strong in the quarters and years ahead"? What evidence would we look for? Certainly not a correlation between inflation and interest rates; that's also a prediction of the Market Monetarist model, which relies on the old-fashioned Fisher effect. Instead, we'd want to see evidence that markets interpret unexpected Fed rate increases as inflationary and rate decreases as deflationary.

Don't hold your breath waiting for such evidence.

PS. I travel to England this weekend, to the Warwick Economics Summit, so blogging will be light.


Here's the link.

I will also be giving a talk at St. John's on Wednesday night but I don't have the link for that yet.

Via email, Easterly tells me that their "distance from the equator" variable is "the ratio of degrees of latitude to 90 degrees."  So what do their results imply about the net effect of immigration?  Let's return to their key table, remembering that the published paper accidentally flipped the sign on "landlocked."


So suppose you move the population of a landlocked equatorial country to a coastal country at 45 degrees latitude.  45 degrees latitude corresponds to a "distance from the equator" score of .5.  CEG's results in column (3) therefore predict that their log per-capita GDP will rise by .5*1.68 + .25*1.063 + .646 = 1.75.  That almost exactly equals the effect of moving from ancestors with a tech score of 0 (the minimum) to ancestors with a tech score of 1 (the maximum).
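The back-of-envelope sum above is easy to verify (a minimal sketch; the coefficients 1.68, 1.063, and .646 are taken from CEG's column (3) as quoted in this post, with the landlocked sign already corrected):

```python
# Predicted change in log per-capita GDP from moving a landlocked
# equatorial population to a coastal country at 45 degrees latitude.
lat_score = 45 / 90                     # "distance from the equator" = 0.5
lat_effect = lat_score * 1.68           # linear latitude term
lat_sq_effect = lat_score**2 * 1.063    # squared latitude term (0.25 * 1.063)
landlocked_effect = 0.646               # gain from losing landlocked status

total = lat_effect + lat_sq_effect + landlocked_effect
print(round(total, 2))  # -> 1.75
```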

The comparison for columns (1) and (2) is even more lopsided.  If you think CEG provides a clear-cut rationale for restricting Third World immigration, you're suffering from confirmation bias.

David R. Henderson  

Comedian Tackles Immigration


Comedian Steve Gerben has a 30-minute comedy act on immigration.

His idea is brilliant. I had never before thought of reaching people on that touchy subject with comedy, but he does an excellent job.

I won't do my usual comprehensive "go to the x:yz point and watch this" routine because I think that would undercut the way the show builds. But I will highlight a few things.

0:58: How to reduce immigration.
4:40: He makes a big math error. Using George Borjas's estimate, the loss to a high-school dropout who is a minimum wage worker is about $731 a year, not $45 a year.
NOTE: To his credit, Steve, less than 24 hours after I posted my criticism on Don Boudreaux's CafeHayek post, went on YouTube and corrected his error. He has also contacted me to get the math right. Impressive. He explained to me over the phone how he reached his error. Since Borjas didn't say explicitly that the 4.8% loss was a loss in the wage rate, Steve took it to be a cumulative loss over a period of years and divided by the number of years to get a very tiny annual loss.
7:25: The unintended, but totally predictable, consequences of Alabama's immigration policy.
12:20: Picky criticism because I find economists making this mistake nowadays. If your wage increased to 16 times what it was, you did not get a 1600% increase; you got a 1500% increase.
16:20: Is there, for most people, a legal way to come to the United States? And check what he does with that.
18:30: I like what he does with Ann Coulter's views, but I actually think, contrary to Steve, that she's quite pretty.
20:50: He segues to "work the problem." Less humorous but nicely done.
29:30: A moving story to end his piece.
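The two arithmetic points above (at 4:40 and 12:20) can be reproduced in a few lines. This is a sketch; the $7.25 minimum wage and roughly 2,100 paid hours per year are my own assumptions, chosen only to illustrate how a 4.8% wage loss translates into dollars per year:

```python
# 4:40 -- Borjas's 4.8% figure is a loss in the wage *rate*, so it
# recurs every year; it is not a cumulative loss to be divided by years.
wage = 7.25            # assumed minimum wage, dollars per hour
hours = 2100           # assumed paid hours per year (illustrative)
annual_loss = wage * hours * 0.048
print(round(annual_loss))  # -> 731

# 12:20 -- a wage that rises to 16 times its old level is a 1500%
# increase, not 1600%: percent increase = (new/old - 1) * 100.
pct_increase = (16 / 1 - 1) * 100
print(pct_increase)  # -> 1500.0
```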

One overall criticism. The first rule of comedy is never to laugh at your own jokes. Never? Well, hardly ever. Gerben is way better at this than the worst comedian on regular TV, Conan O'Brien. That's a low bar. Gerben is actually pretty good. But there's room for improvement.

By the way, the Alabama example above reminds me of an argument I had with my father in the late 1960s. The Canadian government, on a nationalist jag sometime in the 1960s, had changed the tax law so that no Canadian corporation that advertised in a non-Canadian publication could deduct advertising expenses. But Time and Reader's Digest already had set up Canadian versions of their publications. Time had a special 8-page section and Reader's Digest had a special section. They sold advertising to Canadian firms for these sections. So the government, not wanting to disrupt, grandfathered the law to allow Canadian companies to deduct advertising expenses if they advertised in those two publications. Then Pierre Trudeau got rid of that grandfather clause. My father loved his Canadian edition of Time, especially the 8-page section on Canada. He also favored changing the law to benefit Canadian magazines. When the change in the law was being discussed, I told my father that a likely consequence, if the law changed, would be that Time would drop its special Canadian section. Didn't matter. He wanted that change in law. Within months (it might have even been just weeks) of the change in the law, Time did just what I had predicted. My father's reaction? Those bastards. Whom do you think he was referring to? Pierre Trudeau? Guess again.

HT2 Don Boudreaux.

Civil engineers are rarely able to predict the collapse of US highway bridges, at least to within a period of 12 months. How should we feel about that fact? Maybe I'm biased, as my grandfather was a highway engineer in Michigan, but I feel much better knowing that civil engineers are unable to accurately predict highway bridge collapses.

One reason they are so unsuccessful is that they are responsible for both predicting and preventing bridge collapses. So while their conditional forecasts of bridge collapses may be excellent, their unconditional forecasts are lousy. Thus civil engineers may be able to say, "If you don't fix the crumbling 3rd avenue bridge overpass, it's likely to collapse within 12 months". But then other civil engineers in the state highway department go out and prop up the bridge. So it doesn't end up collapsing. Conditional and unconditional forecasting ability---they are very different concepts.

Economists have lots of good models of the economy, and would probably be able to make pretty good conditional forecasts of recessions. "If you let the price of NGDP futures contracts fall by 8%, a recession is likely."

But economists don't just predict; they also control monetary policy. As soon as a recession is predicted, the Fed tries to prevent it by raising NGDP growth expectations. Thus while conditional forecasts of recessions might well be excellent, unconditional forecasts of recessions are lousy. And that makes me feel much better about the economics profession, just as the inability of civil engineers to predict bridge collapses makes me feel much better about civil engineering.

Engineers and economists, creators of the modern world. Without either, you are in the Stone Age. Add engineers and you get up to North Korea, with its 300-meter-high skyscrapers and its atomic bombs (and starving people). Add engineers and economists and you get to South Korea.


Welcome to the second installment of the EconLog Reading Club on Ancestry and Long-Run Growth. This week's paper: Comin, Diego, William Easterly, and Erick Gong. 2010. "Was the Wealth of Nations Determined in 1000 BC?" American Economic Journal: Macroeconomics 2 (3): 65-97.  The authors' data is here.


We've already seen what happens if we try to predict nations' current economic conditions using measures of early political organization and agriculture.  As long as we measure nationality by ancestry rather than location, historical precocity predicts modern prosperity.  What happens if we measure early technology in the same way?  Comin, Easterly, and Gong construct a data set on societies' technologies in 1000 BC, 0 AD, and 1500 AD.  Measured by location, these technology measures are mildly predictive.  Measured by ancestry - using Putterman-Weil's migration matrix - these technology measures are highly predictive.

What historic technologies do CEG consider?  For 1000 BC and 0 AD, the basics:


For 1500 AD, measured techs are far more advanced.


Finally, here is how CEG measure current technology.
This measure captures (one minus) the average gap in the intensity of adoption of ten major current technologies with respect to the United States. These technologies are electricity (in 1990), Internet (in 1996), PCs (in 2002), cell phones (in 2002), telephones (in 1970), cargo and passenger aviation (in 1990), trucks (in 1990), cars (in 1990), and tractors (in 1970) all in per capita terms.

More specifically, for each technology, Comin, Hobijn, and Rovito (2008) measure how many years ago the United States last had the usage of technology "x" that country "c" currently has. We take these estimates and normalize them by the number of years since the invention of the technology to make them comparable across technologies, take the average across technologies and multiply the average lag by minus one and add one to obtain a measure of the average gap in the intensity of adoption with respect to the United States, whose adoption level is one, by construction.
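Mechanically, the measure CEG quote works like this (a minimal sketch with made-up lags; the variable names and numbers are mine, not theirs, chosen purely to illustrate the normalize-average-flip construction):

```python
# Each entry: (years by which country c's current usage of technology x
# lags the United States, years since technology x was invented).
# All numbers are invented for illustration.
tech_lags = [
    (20, 120),   # e.g. electricity
    (3, 30),     # e.g. Internet
    (10, 90),    # e.g. telephones
]

# Normalize each lag by the technology's age, average across
# technologies, then flip: 1 means US-level adoption ("no gap"),
# lower values mean a larger average adoption gap.
normalized = [lag / age for lag, age in tech_lags]
avg_lag = sum(normalized) / len(normalized)
adoption = 1 - avg_lag
print(round(adoption, 3))  # -> 0.874
```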
What are the basic facts of technological development over time?  Here's a fun table.


In 1000 BC, China and the Arab world were the pinnacle of civilization; Western Europe and India lagged well behind.  By 0 AD, all four were almost maxed out.  By 1500 AD, Western Europe took the lead, with China close behind.  Today, Western Europe dominates (though it's still way behind the United States), followed by the Arab world, with China and India at the bottom.  Despite these contrarian findings, CEG find it necessary to remark:
Why do our historical rankings differ from the view that ancient Europeans were barbarians, while China and the Middle East/Islamic civilizations were well in the lead for most of our sample period and produced most of the useful inventions? Basically, it is because what we are measuring is the adoption of technologies rather than the invention...
As long as "ancient" means "1000 BC," though, CEG confirm the "ancient European barbarians" stereotype.

After several intervening sections, CEG estimate the effect of historical technology on modern conditions.  Unadjusted for migration, the results are only mildly impressive.

Effect sizes are big for 1000 BC and 1500 AD, but not for 0 AD.  The statistical significance for 1000 BC, however, is frail.  And this is before adding a single control variable.

Adjusting for ancestry, however, dramatically pumps up the estimates.  How do CEG do this?  By following the yellow brick road laid out in last week's paper, where Putterman and Weil remarked:
The [ancestry] matrix can be used as a tool to adjust historical data to reflect the status in the year 1500 of the ancestors of a country's current population. That is, we can convert any measure applicable to countries into a measure applicable to the ancestors of the people who now live in each country.

Here's what CEG find in the Emerald City:


Takeaway: "Our key 1500 AD result implies large magnitudes. Regressing income today on the migration-weighted index for 1500 AD, a coefficient of 3.261 implies that a movement from 0 to 1 is associated with an increase in per capita income today by a factor of 26.1. The log difference in per capita income today between Western Europe and sub-Saharan Africa is 2.59 (a factor of 13.3). This income difference is usually attributed to the post-1500 slave trade, colonialism, and post-independence factors in sub-Saharan Africa."

CEG handily show that in a horse-race, ancestry-adjusted measures of technology crush location-based measures.  But how do the results for log per-capita income hold up in the face of well-established geographic controls? 


Overall, pretty poorly.  Adjusting for ancestry, adding standard geographic controls reverses the sign of technology in 1000 BC, leaves a barely statistically significant effect of technology in 0 AD, and reduces the coefficient on 1500 AD tech by almost 50%.  Since we're predicting log income, that's an even bigger downward revision than it looks.  Moving from 0 to 1 now multiplies income by a factor of roughly 5.9, not 26.1.  (By the way, if you're puzzled by the benefits of being landlocked, so was I.  When I checked with Easterly, he confirmed those positive landlocked coefficients are typos; all should be negative.)  Note further that the lack of statistical significance on distance from the equator probably stems from the inclusion of a linear and squared term; if CEG stuck with a linear specification like Putterman and Weil, absolute latitude would have its usual clear-cut payoff.
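Because the dependent variable is log income, a coefficient b on a 0-to-1 regressor multiplies predicted income by exp(b), which is why halving the coefficient shrinks the implied effect by far more than half. A quick check of the magnitudes (a sketch; the post-controls coefficient of about 1.78 is inferred from the "almost 50%" description, not read off the table):

```python
import math

# Ancestry-adjusted 1500 AD coefficient quoted from CEG.
b_full = 3.261
print(round(math.exp(b_full), 1))     # -> 26.1, matching the quoted factor

# After geographic controls, the coefficient falls by almost half
# (illustrative value), and the implied income factor collapses.
b_reduced = 1.78
print(round(math.exp(b_reduced), 1))  # -> 5.9
```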

CEG wrap up the paper with further robustness tests.  As far as I can tell, they don't explicitly answer the question, "Was the Wealth of Nations Determined in 1000 BC?"  But their data say No.

Critical Comments

1. This is another very impressive paper.  While there's plenty to debate in the history of technology, CEG make a serious effort to code the basic facts.  If I re-did their work, I'd expect a correlation of at least .8 between their measures and mine.  And it's great to see CEG extend PW's ancestry matrix to a new and plausibly important variable.

2. A careful reading shows that CEG is widely misread.  They don't show that events thousands of years ago matter today - even adjusting for ancestry!  After adding standard controls, tech in 1000 BC doesn't matter.  Neither does tech in 0 AD.  Tech in 1500 AD definitely seems important.  But since only one out of three measures of tech works as expected, the wise interpretation is unclear.

3. Suppose CEG found a much larger effect of tech in 1000 BC.  Would this really show "the wealth of nations was determined in 1000 BC"?  Only if you interpret the wealth of nations in relative terms: the most advanced countries in the past are the richest countries in the present.  This emphatically does not mean, however, that countries that were technologically backward in 1000 BC are doomed to be absolutely poor today.  Worldwide economic growth over the last two centuries means that, holding the past fixed, the present is dramatically improving.  Graphically:


Does this graph show per-capita income is "determined by historic technology"?  At any point in time, yes.  But the overall function is still dramatically shifting up over time.  My point: Contrary to some, CEG provide no support for fatalism.  Frankly, they should have tried harder to prevent that misreading.

4. Do CEG show low-skilled immigration is bad?  No more than Putterman-Weil did.  In both cases, you need to remember all the results.  Yes, countries inhabited by the descendants of more advanced civilizations do better - and you can't change your ancestors.  But CEG, like PW, also detect enormous long-run benefits of geography.  And you can effectively change geography by letting people move to the parts of the globe most hospitable to prosperity.  Contrary to popular opinion, this is not a great "national sacrifice."  Since migration dramatically increases productivity, migrants enrich themselves by enriching their customers - starting with their new neighbors.

5. While CEG's results are interesting and important, they're noticeably weaker than PW's.  In a three-way race between State History, Agriculture, and Technology - Garett Jones' "SAT score" - State History and Agriculture are roughly on par, but Technology lags behind.  Why isn't anyone doing a standard econometric horse race with all three explanatory variables in the same regression?  My best guess is that they're too correlated to disentangle.  If I learn more, I'll post it here.

David R. Henderson  

Studies of Public Goods


In a comment on a recent post of mine about government spending on infrastructure, Ben H. wrote:

This seems to ignore the problem of public goods. It is, arguably, pretty easy to identify public goods for which spending $1 increases total welfare across the population by more than $1, but which no individual person has an incentive to spend their own $1 upon.

I didn't think it was easy at all. I know that the theoretical argument for public goods is easy to make and I make it in every class I teach. But I've found that it's much harder to identify specific public goods on which we can be quite confident that $1 of spending creates substantially more than $1 of benefits. I say "substantially more" because we need also to take account of the deadweight losses from the taxes to pay for those public goods, and the deadweight losses from those taxes tend to be 30% or more of the revenue raised.
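The "substantially more" threshold is easy to make concrete (a sketch; the 30% deadweight-loss figure is the one cited in the paragraph above):

```python
# If raising $1 of tax revenue destroys roughly $0.30 of additional
# value in deadweight loss, then $1 of public-goods spending must
# generate more than $1.30 of benefits just to break even.
dwl_rate = 0.30
spending = 1.00
break_even_benefit = spending * (1 + dwl_rate)
print(break_even_benefit)  # -> 1.3
```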

Because I didn't think it was easy to identify such cases, I challenged Ben H. to give cites. He replied:

To those demanding citations of specific papers, etc.: sorry, I'm not an expert in public goods theory and I'm not going to try to pretend that I am. It's a vast, vast literature, and Google Scholar can provide you with thousands of citations in less than a second. If you want more curated cites to particular studies that are particularly rigorous and convincing, I'm sure there are lots of public goods theorists out there who you could ask, and who would happily provide you with their opinion on that. If you honestly want to know - if demanding cites is not just a rhetorical tactic intended to shut me up - then I suggest you pursue that avenue.

He's right that the literature on public goods theory is vast. But his original claim was not about public goods theory--it was that it's easy to identify cases where government can create value substantially above cost. That's public goods empirics.

By the way, recall also that my original post was not about public goods per se. It was about infrastructure. With much infrastructure, though not all, it is relatively low-cost technologically to exclude non-payers, which means that the infrastructure fails to satisfy one of the two criteria that Paul Samuelson laid down in the early 1950s for a public good.


Louis Putterman responds to my reading club post on Putterman-Weil.  Reprinted with his kind permission:

I just wanted to offer one correction and one further quick comment. 

You wrote: "How could one even begin to construct such a matrix?  Whenever possible, P&W use actual genetic data, then supplements genetics with history." For better or worse, this isn't really the case. We rely heavily on heroic assumptions that, unless identified by sources like the ones mentioned in our appendix, such as Encyclopaedia Britannica, World Christian Encyclopedia, etc., people living in a country conventionally assumed to be fairly ethnically homogeneous, e.g. France or Spain, are descended from people of that same country. The sources do mention migrants, for instance Algerians in France, but there may well be an undercount of mobility of the presumptively French population, who could well have ancestors who crossed what is now a border with Germany, Italy, Spain, or even a couple of borders (from, say, Poland) sometime in the 500 years ending in 2000. (We have to hope that such migration doesn't make a big difference, as descendants will in the meantime have perhaps become for all practical purposes like those with only ancestors within current French boundaries during that half millennium.) We used sources based on genetic studies only to get estimates of the regional ancestries of populations described as being of mixed origin, e.g. "mestizos", in those same sources, and only for countries having a large share of such people, 30% or more, for instance Mexico. The method is described on p. 1632 of our paper, and I think I can see how your misunderstanding might have arisen based on the first sentence on that page: "whenever possible we have used genetic evidence as the basis for dividing the ancestry of modern mixed groups that account for large fractions of their country's population." As you can see from a closer reading, we do this only for mixed groups such as "mestizo," "mulatto," etc., and only when they are a large fraction.
The extant DNA studies mainly attempt to pick up differences between long separated populations, such as sub-Saharan Africans and Europeans, but not between members of populations that haven't been as separated, like Germans and Italians. 

My other comment is that you write that "civilized migration - where people voluntarily move to a new country to peacefully improve their lives - is an extreme historical rarity." We don't take any definite position on that, but offhand it seems wrong and we didn't mean to suggest it. Although there was a lot of forced movement of Africans to the New World early on, there was overall even more voluntary movement to the Americas, mostly of people from Europe but also ones from other regions, and likewise large parts of the overall migration to places like Australia, New Zealand, South Africa, Singapore, and Taiwan were voluntary on the part of the migrants.  Whether these arrivals were welcomed by the original inhabitants (Native Americans, indigenous Australians and Taiwan indigenous peoples, etc.) is a whole other story.

By the way, regarding state history working well with 5% discounting, I believe we tested a bit and found it relatively insensitive, so we used the discount that had been applied in other papers. It does turn out that if one looks back to years before 1 CE, some results become more sensitive to discounting. Borcan, Olsson and I have a working paper in which we report state history for roughly the same number of countries, but going back to the first states, in Mesopotamia before 3,000 BCE. We find that current GDP is concave in this longer-term state history, especially when using a low discount like 1% so that the more ancient periods still get non-negligible weight; by concave, I mean that the oldest states such as Iraq are predicted to have lower GDP than "middle aged" ones like England and Germany.

- Louis

My replies:

1. I did indeed overstate PW's reliance on genetic data.  My mistake.

2. I included the word "peaceful" in my definition of "civilized migration" to exclude migrants who took  the land (and often lives) of the existing inhabitants.  Voluntary on the part of the migrant isn't enough.  Of course, there's a continuum.  But European colonization was largely predicated on military conquest, unlike immigration as we usually conceive it today.

3. Thanks for the details on the state history discounting.

Bryan Caplan  

Two Election Bets

1. Steve Pearlstein bets $50 at even odds that Ted Cruz wins the Republican nomination.  I bet against.

2. Nathaniel Bechhofer bets $40 at 2:1 that Hillary Clinton wins the 2016 presidential election.  I bet against.

My reasoning, in both cases, is that betting markets are more accurate than the people I eat lunch with.

David R. Henderson  

Is the TPP Good or Bad on Net?


If you read the latest Econlib Feature Article by Pierre Lemieux, "Free Trade and TPP," you won't have an answer to the title question of this post. So why did I ask him to write it? Because I knew that Pierre would do a great job of laying out the pros and cons.

Here's what I said in the blurb that goes with the piece:

Free trade is good for almost everyone and so are movements to freer trade. But is that enough for economists to judge that the proposed Trans-Pacific Partnership (TPP) is a good idea? No, argues economist Pierre Lemieux. Lemieux points out that the TPP is an instance of managed trade. Whether the TPP is good or bad depends heavily on how it is carried out. If you want to know how a careful economist with no dog in the hunt thinks about the TPP, this article is for you.

And here are some of my favorite lines from his article:
Contrary to what some critics claim, TPP is certainly not too dangerous to national sovereignty--which amounts often to nothing more than the right of a state to oppress its citizens. Indeed, TPP is not dangerous enough, as, in many ways, it reinforces the legitimacy of states' power of regulation and control.

Perhaps the decisive argument is about signaling. What signal would the approval or the defeat of TPP send? Defeat would likely signal in public opinion that free trade is a dead horse, while approval would hopefully signal that real free trade is still an alternative on the table. If this appraisal is correct, TPP would be a small step--a very small step--toward free trade.

This is, I suggest, how one should think about TPP. But the conclusion, as we have seen, is not clear. As French biologist Jean Rostand wrote, "We are not writing an exam where it is better to write anything than nothing." One thing, however, is certain: economists and educators should continue to present the argument for real free trade, whether agreed to multilaterally or declared unilaterally.

What a great line, and a great use of the line, by Rostand.

Bryan Caplan  

ADHD Reconsidered

Several readers have taken issue with my use of the term "ADHD."  To be honest, I'm not comfortable with it either, but my reason is the opposite of my critics'.  Like the late great Thomas Szasz, I object that labels like ADHD medicalize people's choices - partly to stigmatize, but mostly to excuse.  In his words, "The business of psychiatry is to provide society with excuses disguised as diagnoses, and with coercions justified as treatments."  I realize this is an unwelcome view, but I do have a whole paper defending it, and I stand by it.

My general claim:

[A] large fraction of what is called mental illness is nothing other than unusual preferences - fully compatible with basic consumer theory. Alcoholism is the most transparent example: in economic terms, it amounts to an unusually strong preference for alcohol over other goods. But the same holds in numerous other cases. To take a more recent addition to the list of mental disorders, it is natural to conceptualize Attention Deficit Hyperactivity Disorder (ADHD) as an exceptionally high disutility of labor, combined with a strong taste for variety.

Consider how economists would respond if anyone other than a mental health professional described a person's preferences as 'sick' or 'irrational'. Intransitivity aside, the stereotypical economist would quickly point out that these negative adjectives are thinly disguised normative judgments, not scientific or medical claims. Why should mental health professionals be exempt from economists' standard critique?

This is essentially the question asked by psychiatry's most vocal internal critic, Thomas Szasz. In his voluminous writings, Szasz has spent over 40 years arguing that mental illness is a 'myth' - not in the sense that abnormal behavior does not exist, but rather that 'diagnosing' it is an ethical judgment, not a medical one. In a characteristic passage, Szasz (1990: 115) writes that:
Psychiatric diagnoses are stigmatizing labels phrased to resemble medical diagnoses, applied to persons whose behavior annoys or offends others. Those who suffer from and complain of their own behavior are usually classified as 'neurotic'; those whose behavior makes others suffer, and about whom others complain, are usually classified as 'psychotic'.
The American Psychiatric Association's (APA) 1973 vote to take homosexuality off the list of mental illnesses is a microcosm of the overall field (Bayer 1981). The medical science of homosexuality had not changed; there were no new empirical tests that falsified the standard view. Instead, what changed was psychiatrists' moral judgment of it - or at least their willingness to express negative moral judgments in the face of intensifying gay rights activism. Robert Spitzer, then head of the Nomenclature Committee of the American Psychiatric Association, was especially open about the priority of social acceptance over empirical science. When publicly asked whether he would consider removing fetishism and voyeurism from the psychiatric nomenclature, he responded, 'I haven't given much thought to [these problems] and perhaps that is because the voyeurs and the fetishists have not yet organized themselves and forced us to do that' (Bayer 1981: 190). Even if the consensus view of homosexuality had remained constant, of course, the 'disease' label would have remained a covert moral judgment, not a value-free medical diagnosis.

Although Szasz does not use economic language to make his point, this article argues that most of his objections to official notions of mental illness fit comfortably inside the standard economic framework. Indeed, at several points he comes close to reinventing the wheel of consumer choice theory:
We may be dissatisfied with television for two quite different reasons: because the set does not work, or because we dislike the program we are receiving. Similarly, we may be dissatisfied with ourselves for two quite different reasons: because our body does not work (bodily illness), or because we dislike our conduct (mental illness). (Szasz 1990: 127)

My analysis of ADHD specifically:

4.2. Attention-Deficit Hyperactivity Disorder

Substance abuse is a particularly straightforward case for economists to analyze, since it involves the trade-off between (1) one's consumption level of a commodity and (2) the effects of this consumption on other areas of life. But numerous mental disorders have the same structure. One way to be diagnosed with ADHD, for example, is to have six or more of the symptoms of inattention shown in Table 2.


Overall, the most natural way to formalize ADHD in economic terms is as a high disutility of work combined with a strong taste for variety. Undoubtedly, a person who dislikes working will be more likely to fail to 'finish school work, chores or duties in the workplace' and be 'reluctant to engage in tasks that require sustained mental effort'. Similarly, a person with a strong taste for variety will be 'easily distracted by extraneous stimuli' and fail to 'listen when spoken to directly', especially since the ignored voices demand attention out of proportion to their entertainment value.

A few of the symptoms of inattention - especially (2), (5) and (9) - are worded to sound more like constraints. However, each of these is still probably best interpreted as a description of preferences. As the DSM uses the term, a person who 'has difficulty' 'sustaining attention in tasks or play activities' could just as easily be described as 'disliking' sustaining attention. Similarly, while 'is often forgetful in daily activities' could be interpreted literally as impaired memory, in context it refers primarily to conveniently forgetting to do things you would rather avoid. No one accuses a boy diagnosed with ADHD of forgetting to play videogames.

What about all the contrary scientific evidence?  It's not really contrary.  The best empirics in the world can't resolve fundamental questions of philosophy of mind.

Another misconception about Szasz is that he denies the connection between physical and mental activity. Critics often cite findings of 'chemical imbalances' in the mentally ill. The problem with these claims, from a Szaszian point of view, is not that they find a connection between brain chemistry and behavior. The problem is that 'imbalance' is a moral judgment masquerading as a medical one. Suppose we found that nuns had a brain chemistry verifiably different from non-nuns. Would we infer that being a nun is a mental illness?

A closely related misconception is that Szasz ignores medical evidence that many mental illnesses can be effectively treated. Once again, though, the ability of drugs to change brain chemistry and thereby behavior does nothing to show that the initial behavior was 'sick'. If alcohol makes people less shy, is that evidence that shyness is a disease? An analogous point holds for evidence from behavioral genetics. If homosexuality turns out to be largely or entirely genetic, does that make it a disease?

Bottom line: My use of the term "ADHD" was indeed problematic because the concept itself is problematic.  Then why use it?  Because you can grasp my original point without sharing my broader perspective - and if I started with my broader perspective, it would drown out my original point.

CATEGORIES: Economic Philosophy

Of all the things that puzzle me about macroeconomics, the relationship between changes in monetary policy and changes in long-term interest rates is perhaps the most confusing. I get why monetary injections tend to reduce short-term rates---prices are sticky and short-term rates temporarily fall to equilibrate money supply and money demand, until prices have time to adjust. But prices are not sticky for 30 years.

I also notice that persistent periods of easy money (such as the 1960s and 1970s) lead to higher interest rates, and periods of tight money like 1929-33 lead to lower interest rates.

What confuses me is that (unexpected) easy money announcements will sometimes raise long-term rates (as in January 2001 and September 2007) and at other times easy money shocks will cause long-term rates to fall, as in the market response to the recent moves by the Bank of Japan. And here is what's even stranger; in both cases stock prices soared. Stocks soared as long-term bond yields rose in response to easy money announcements in January 2001 and September 2007, and stock prices soared in response to easier money in Japan that lowered long-term interest rates. This suggests that stocks are responding to the easy money itself, not the impact the easy money has on long-term bond yields.

I find my inability to understand why easy money sometimes raises rates and sometimes lowers rates to be very frustrating. But now I have an idea that might make the problem at least slightly less mystifying. When mathematicians are struggling to prove a difficult theorem, they will sometimes look for connections with other theorems that are better understood. Let's translate the "response of long-term bond yields to money shocks" puzzle into another area of economics, one that is much better understood.

Back in 1976, Rudi Dornbusch showed that an easy money policy would cause exchange rate "overshooting". The argument went as follows: Easy money would reduce domestic interest rates. Because of the interest parity theorem, lower interest rates imply a higher rate of appreciation of the domestic currency over time. (For simplicity, assume the interest rates in both countries were originally equal.) But the quantity theory of money implies that the easy money will raise the price level, at least in the long run. And purchasing power parity implies that the higher future price level will lead the exchange rate to depreciate, again in the long run. So how can the exchange rate be expected to both appreciate and depreciate? Dornbusch showed that it would do so by overshooting its long run equilibrium. Thus an expansionary money announcement might cause the exchange rate to instantly fall by 5%, and then gradually appreciate by 3%, still ending up 2% lower than the original exchange rate.
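The overshooting arithmetic is easy to pin down numerically. Here is a minimal sketch using the purely illustrative percentages from the paragraph above, treated as log changes so that they add exactly (nothing here is an estimate of a real exchange rate):

```python
import math

# Numeric sketch of Dornbusch-style overshooting. The 5%, 3%, and 2%
# figures are the illustrative numbers from the text, treated as log
# changes so they sum exactly.

initial = 100.0                         # exchange rate index before the shock

on_impact = initial * math.exp(-0.05)   # instant 5% fall on the easy-money news
long_run = on_impact * math.exp(0.03)   # gradual 3% appreciation afterwards

# Net change: -5% + 3% = -2%. The rate overshot its new long-run level,
# so it depreciates on impact yet is expected to appreciate thereafter.
net_change = math.log(long_run / initial)
```

Because the currency falls below its new long-run equilibrium on impact, it can be expected to appreciate along the adjustment path while still ending up depreciated relative to its starting point, which is exactly how Dornbusch resolved the apparent contradiction.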

I'd like to translate the money shock/long bond yield puzzle into the language of the interest parity theorem. To do so, I'd like you to consider a central bank that uses the exchange rate, rather than the interest rate (the Fed's instrument), as its policy instrument. More specifically, I'd like you to consider two policy options for the central bank of Singapore, which does use the exchange rate as its policy instrument:

Option A: A policy of suddenly depreciating the currency by 5%, but then promising to gradually appreciate it afterwards at a rate 0.1% higher than was previously expected.

Option B: A policy of suddenly depreciating the currency by 5%, and then promising to gradually depreciate it afterwards at a rate 0.1% higher than was previously expected.

These are both expansionary monetary policies. Both policies are clearly feasible for the central bank of Singapore. And yet option A lowers long-term bond yields by 0.1%, while option B raises long-term bond yields by 0.1%.
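The opposite yield responses drop straight out of uncovered interest parity. Here is a minimal sketch; the 3% foreign yield and the flat baseline exchange-rate path are assumptions chosen purely for illustration:

```python
# Uncovered interest parity: the domestic yield equals the foreign yield
# plus the expected depreciation rate of the domestic currency (a negative
# value means expected appreciation). The 3% foreign yield and flat prior
# path are illustrative assumptions, not data.

def domestic_yield(foreign_yield: float, expected_depreciation: float) -> float:
    return foreign_yield + expected_depreciation

foreign = 0.03                            # hypothetical foreign long-term yield
baseline = domestic_yield(foreign, 0.0)   # currency previously expected flat

# Option A: after the one-time 5% fall, the currency is expected to
# appreciate 0.1%/year faster than before.
yield_a = domestic_yield(foreign, -0.001)

# Option B: after the same 5% fall, the currency is expected to
# depreciate 0.1%/year faster than before.
yield_b = domestic_yield(foreign, +0.001)

# yield_a sits 0.1 percentage points below the baseline and yield_b 0.1
# above it, even though both options start with the identical 5% fall.
```

Note that the expansionary part of both policies (the 5% level shock) never appears in the yield equation at all; only the promised future path of the exchange rate does.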

If you are very smart, like a Paul Krugman or a Nick Rowe, you are probably bored stiff by this exercise. "Tell me something new." But for me this new framing helps to clarify some issues. Now I can think about what sort of policy regimes might lead to each of the two outcomes. Under a Bretton Woods type regime, with a gold price peg, one might view any easy money policy as likely to be temporary. Then you get option A occurring. And since that type of regime describes most of our history, it's no surprise that the "liquidity effect" (i.e. easy money leads to low interest rates) is the normal way that most people look at monetary policy.

On the other hand when there's a lot of uncertainty about the future path of policy, option B might occur more often. Easy money leads to fears of inflation, and higher long-term bond yields.

This exercise also clarifies that the puzzle I opened the post with is essentially a "levels vs. growth rates" puzzle. Monetary policy can be thought of as working in two dimensions: changes in the price level, and changes in the rate of inflation. We don't see this clearly because most prices are sticky, so even level shifts look a lot like growth rate shifts. But one price is not at all sticky---exchange rates! They are sort of the canary in the coal mine, showing you what the impact of a monetary policy shock would be in the long run. Think of the exchange rate as a sort of "shadow price level", showing what changes in the price level (in response to monetary shocks) would look like if prices were not sticky.

Because monetary policy operates in terms of both growth rate and level shocks, it's really complicated. But what makes it far, far more complicated is that the interest rate (which can go either way) is also used as a policy instrument. So now investors need to figure out how a given change in the target interest rate will:

a. Impact the expected future path of short-term interest rates, relative to a natural interest rate that is itself influenced by the path of monetary policy.

b. Impact the current and expected future path of exchange rates, via both the current change in interest rates and the change in the future path of short-term rates relative to the natural rate.

c. Impact current long-term bond yields, via the change in the expected future path of exchange rates.

Notice that the last step is a sort of afterthought. Easier money will have its effects (lower exchange rate, higher inflation, more jobs, more NGDP, higher stock and commodity prices, etc.) regardless of whether long-term rates rise or fall in response to easier money. Long rates are just an epiphenomenon.

This post doesn't really solve anything, but for me it makes the mystery a bit less confusing. For instance, look at the response of the yen to the recent BOJ announcement:

[Chart: the yen's response to the recent BOJ announcement]
Notice the zigzag immediately after the announcement. I recall a similar pattern for US stock indices right after the September meeting, where the Fed failed to boost rates. In my view this uncertainty is caused by the incredible complexity of monetary policy.

Remember that the market is like a giant brain, far smarter than any individual. So the fact that even it has trouble figuring out this levels/growth rate problem shows just how confusing monetary policy actually is. Still, the market gives us the best estimate we have for the effects of monetary policy.

Some commenters argued that markets can be irrational and that the tendency of currencies to depreciate after negative IOR, or QE, might be an irrational response. They say it "doesn't prove anything". What they forget is that the exchange rate is itself an important macro variable. If easier money causes exchange rate depreciation, then easier money is effective at boosting prices and NGDP, even if the market is completely stupid in driving exchange rates lower. Perhaps markets never studied Post Keynesian endogenous money theory. But even if the market is in some sense "wrong", its beliefs become self-fulfilling.

I believe that in the 21st century, macro will be increasingly seen as a field focused on stabilizing various market price indicators, and that people like Roger Farmer will be increasingly influential, even though his approach (focusing on stock prices) is somewhat different from my obsession with NGDP futures prices.

This is my long overdue comment on what I observed at the meetings of the San Francisco Federal Reserve Bank in April 2014. I had previously given some highlights of my talk here and commented on Glenn Rudebusch's talk here.

When I made the agreement with the San Francisco Fed two years ago, I was told that the talks of the three presenters, of whom I was one, would be put up on the web within a few months. They weren't. When I inquired almost a year later, I was told that "regrettably" they never ended up posting the videos. I'm virtually positive that it was not because one of the technicians had messed up. They were obviously professionals and the equipment they used was very high quality.

I'm guessing that their not using the videos had more to do with the content of my talk, and the fact that in the Q&A period, one of the other presenters, Atif Mian, said that he agreed with me that the Federal Reserve could probably not spot bubbles in advance and that Hayek had a good point about information. I may never know the real reason.

Now to an observation that I made at the time, an observation that leads to the first two words of this post. Both in the lunch and dinner and in the various presentations, a number of people would refer to the previous president of the SF Fed, Janet Yellen, as, simply, Janet. I heard business people from Utah, from California, from Oregon, from pretty much every state in the SF Fed's district, refer to her as Janet.

Why, I wondered. Here's what I think. Janet Yellen is shrewd. Duh! And she figured out some time ago that one of the main ways to get people on your side is to encourage them to be familiar and buddy-buddy with you.

Moreover, the various regular everyday business people whom I spoke to or who spoke up betrayed no particular understanding of monetary policy. The impression I got is that they wanted to feel as if they were "in on things."

But why would the SF Fed want this? Here's what I think. And, if I were doing this "Scott Alexander style," I would give it a higher than 80% probability. The SF Fed did it to create good will, at very little cost to the Fed and somewhat of a cost to the U.S. Treasury. The strategy reminded me of a passage from Enoch Crowder, The Spirit of Selective Service, 1920, which is quoted in David M. Kennedy's excellent book Over Here: The First World War and American Society. The passage is about Selective Service administrator Crowder's strategy for having local draft boards manned by prominent citizens rather than having a central board. Crowder wrote:

[T]hey became the buffers between the individual citizen and the Federal Government, and thus they attracted and diverted, like local grounding wires in an electric coil, such resentment or discontent as might have proved a serious obstacle to war measures, had it been focused on the central authorities. Its diversion and grounding at 5000 local points dissipated its force, and enabled the central war machine to function smoothly without the disturbance that might have been caused by the concentrated total of dissatisfaction.

Bryan Caplan  

ADHD Shall Save Us

Bryan Caplan
Back in 2012, I wrote:
The median American is no Nazi, but he is a moderate national socialist - statist to the core on both economic and social policy.  Given public opinion, the policies of First World democracies are surprisingly libertarian.
Since then, popular yearning for national socialism has grown even more pronounced.  But I still don't expect policies to get too much worse.  The same psychological force that thwarted the masses' wishes before 2012 continues to shield us.  What is that force?  For want of a better term, ADHD - Attention Deficit Hyperactivity Disorder.  Populist policy preferences go hand-in-hand with intellectual laziness and intellectual impatience.  As a result, populist voters fail to hold their leaders' feet to the proverbial fire - allowing wiser, elitist heads to prevail.

Take protectionism.  Keeping imports out of our country is perennially popular.  Never mind centuries of economics classes on the wonders of comparative advantage; the masses are convinced that cheap foreign products make us poorer.  Given public opinion, then, it's amazing that trade barriers are as low as they are.  What's particularly striking is that presidential candidates routinely make protectionist noises to curry favor with the masses.  Once elected, however, they get convenient amnesia.

Why would vote-seeking politicians show so little follow-through?  Because talking about foreign trade, titillating at first, gets old fast.  And actually measuring the change in trade barriers bores the masses instantly.  As a result, protectionist promises are cheap to break.  The masses delight to hear politicians vow to get "tough on China," but they don't want to have to think about Chinese imports months after the election, much less monitor their leaders' concrete efforts to cut China down to size. 

The same goes for the War on Terror.  Americans were quick to back the Afghanistan and Iraq wars in their early stages.  After all, they were angry.  But after a year or two, their minds wandered.  If they combined their anger with determination and follow-through, we'd now be well into World War III.  Millions would be dead, and American soldiers would occupy most of the Middle East, fending off ten thousand guerrilla armies.  ADHD spared us these horrors.

Emotionally, I look down on the public's ADHD.  When I get an idea into my head, it stays there until someone (possibly myself) argues me out of it.  I'm a puritan.  Once convinced something is true, I tenaciously act on it.  But I'm the first to admit that these are conditional virtues.  If you've genuinely figured out the right thing to do, determination and follow-through are wonderful.  Otherwise, though, they're a menace. 

Mankind can and should shape up across the board, but it won't.  I'll bet on it.  And since mankind won't discover a passion for rationality anytime soon, we should be thankful its ADHD isn't going away either.

David R. Henderson  

Don't Make Canadians Poorer

David Henderson
In short, because the price of oil has fallen, Canadians as a whole are somewhat poorer than if the price had not fallen.

What should government do about this? Not make them poorer. Wasteful government spending is bad enough when the taxpayers who pay for it are doing well. It's even worse when the taxpayers are worse off than they were.

Unfortunately, government officials are often very much like the politician in the British comedy "Yes, Prime Minister." They say, "Something must be done. This is something. Therefore it must be done."

This is from my recent blog post at the Fraser Institute site.

Here's part of my reasoning:

When government spends on infrastructure, it doesn't use market signals that tell where money is best spent. So the government is flying blind. This means that the odds that even the most well-intentioned government officials will spend it better than people would spend their own money are vanishingly small.

It gets worse. Government officials have perverse incentives because they are spending other people's money. People spend other people's money more carelessly than they spend their own. The result--the odds that the money will be spent well are even closer to zero. More government spending will make Canadians poorer. When you're poorer, it's even more important not to waste resources.

"Yes, we're planning to add this note to all future stories about Trump," the [Huffington Post] spokesperson said. "No other candidate has called for banning 1.6 billion people from the country! If any other candidate makes such a proposal, we'll append a note under pieces about them."
This is from Peter Sterne, "HuffPost to publish anti-Trump kicker with all Trump coverage," On Media, January 28, 2016.

Is the Huffington Post spokesperson unaware that under current immigration law, only about a million or so new immigrants are allowed into the United States every year? Is he/she also unaware that none of the Democratic and Republican candidates for president advocates substantially liberalizing that law?

This means that all of the candidates advocate banning over 6 billion people from the country.

Of course, you can argue that Trump wants not even to let those 1.6 billion people enter the country, so that the issue is simply entry, not immigration.

But not so fast. Talk sometime to people from other countries who routinely get turned down. While visiting my daughter in Thailand a few years ago, I ran into two young Chinese people who told me that the U.S. government would not give them visas to enter the United States. Has the U.S. government said no 1.6 billion times? Probably not. It doesn't need to. Nor would Donald Trump's government need to say no 1.6 billion times either, because (1) it's a certainty that not all 1.6 billion Muslims would want to enter and (2) even of those many Muslims who do want to visit the United States, many would be discouraged by the experience of their fellow Muslims.

I look forward to the Huffington Post's appending a note about Hillary Clinton the next time they write about her. How long do you think I will have to wait?

Scott Sumner  

Influence, target, control

Scott Sumner

I often get commenters complaining that the Fed should not be controlling interest rates. They think the market should set rates. In one sense I agree, but in another sense I wonder if the commenters are confused. I wonder if people think the Fed moves interest rates away from the market equilibrium. It does not; it influences the market equilibrium. Let's start with an analogy from the oil industry, and while doing so I want you to think about how inadequate the English language is in this area.

Consider two cases:

1. President Carter introduces price controls on oil, setting a price ceiling at $25/barrel, even as the market price is $34/barrel. A shortage results, with long gas lines.

2. The Saudi oil company Aramco adjusts oil production to keep market prices at $36/barrel. There is no shortage, no gas lines.

In ordinary English, these are two ways that governments could be said to "control" prices. But I think you'll agree that they are vastly different methods, and have vastly different implications for the economy. The term 'control' is being used in two very different ways.

I hope it's obvious that the Fed's control over interest rates is like the Aramco case, not the price ceiling case. There are at least three different senses in which the Fed could be said to control interest rates:

1. The Fed might fix interest rates, via usury laws.

2. The Fed might target interest rates (as the Fed actually does).

3. The Fed might target some other variable like the money supply or exchange rates, and yet nonetheless Fed actions indirectly determine the level of interest rates.

Or I suppose the Fed could be abolished. But as long as the Fed exists, the meaning of the term 'control' is more ambiguous than you might think. Consider 3 policy options:

1. The Fed has a 2% inflation target, and sets a new fed funds target every 6 weeks, adjusting rates as needed to target inflation. In that case is the Fed controlling inflation, interest rates, or both?

2. The Fed has a 2% inflation target, and sets a new fed funds target every single day, adjusting it as needed to target inflation. In that case is the Fed controlling inflation, interest rates, or both?

3. The Fed has a 2% inflation target, and has no fed funds target. Instead it targets the TIPS spread, or a CPI futures contract. In that case is the Fed controlling inflation, interest rates, or both?

Cases one and two are pretty similar, as we've merely shortened the time between meetings from 6 weeks to 1 day. And yet arguably cases #2 and #3 are much more similar to each other than to case #1. That's because in both cases #2 and #3, the interest rate changes each day, in a way that the Fed believes will stabilize expected inflation over time. The interest rate path is quite similar in those two cases.

Here's the problem. Fed policy affects all nominal variables, including nominal interest rates, nominal exchange rates, nominal GDP, the nominal prices of apples, oranges and Brazilian hot wax treatments. All nominal variables. In the 1970s, the Fed printed lots of money, causing almost all nominal variables to be higher than they would have otherwise been. But we usually don't think of the Fed as "controlling" those prices, for two unrelated reasons. One reason is that the Fed's actions don't impact real variables, in the long run. The other reason is that the Fed was not targeting most of those variables. Does it matter if they were? Not as much as you might think.

Consider two policies: In one case the Fed continually adjusts the target fed funds rate in order to keep expected inflation at 2%. In the other case the Fed continually adjusts the target nominal price of zinc, in order to keep expected inflation at 2%. If the Fed does its job in a competent way, then the path of zinc prices would be identical under either regime. Does it then make sense to talk about the Fed "controlling" zinc prices in one case but not another?

Yesterday I did a post criticizing Mike Konczal on the distinction between the Fed acting and not acting. Let me provide a better analogy over here, as some missed the point. It might make sense to distinguish between an "activist ship engine" that is fed lots of fuel and an idle engine. But the steering wheel of a ship is different. There is no single setting of the steering wheel that is more active than another. It doesn't require any more fuel to set steering at NNE than at NNW.

By analogy, it makes sense to talk about using fiscal policy, as the natural benchmark is a fiscal policy set on classical cost/benefit considerations, and the alternative is a deficit that would not otherwise occur, used to boost GDP in a recession. Deficits require costly future taxes with deadweight losses. In contrast, it's nonsensical to talk about "using monetary policy" unless your alternative is pure barter. There is no benchmark of not using monetary policy; rather there are merely different settings of various monetary indicators. (Even with no Fed, the private money issuers would have some sort of "policy".) If the Fed would like to see NGDP growth at X%, and a certain policy setting would achieve that, then any failure to achieve X% growth can be said to be "caused" by the Fed.

Konczal talked about a housing crisis causing a drop in AD, and the Fed not responding. But there is no evidence that that's what happened. If, after July 2007, the Fed had kept the monetary base growing at exactly the same rate as in the 5 previous years, there might well have been no recession. Or there might have been a Great Depression. (In my view there would have been no recession in 2008, but major problems later, but we can't ever know.) So it's meaningless to talk about a housing crash causing a drop in AD in 2008, without knowing the Fed's policy regime.

The Fed meets every 6 weeks to set policy. The only question is whether they set the right or wrong policy. They have no ability to "do nothing"; it's a meaningless concept for monetary policy.

PS. I just got this email from the Hypermind prediction market:

Bravo ssumner, Suite à la fermeture d'un concours sur Hypermind, votre compte de gains en Euro a été crédité de 10€ [Translation: "Congratulations ssumner, following the close of a contest on Hypermind, your Euro winnings account has been credited with €10."]

It's one of those good news/bad news situations. The good news is that I'm 10 euros richer. The bad news is that my ability to beat the market undercuts my belief in the EMH. Should I hope that I was right or wrong about the EMH?

(I shorted the one year NGDP contract early in 2015, when it was trading around 4.2%. NGDP growth ended up being 2.9%)

I'm looking forward to reading Robert Gordon's new book, The Rise and Fall of American Growth. PBS recently did an 8-minute segment on it that lays out his argument nicely.

HT2 John Cochrane and Greg Mankiw.

I won't try to resolve the issue of whether future growth will be higher or lower than that of the period from 1870 to 1970 here--or anywhere. It can't be resolved. We simply can't know whether economic growth, properly measured, will be higher. John Cochrane points out that real GDP does not measure consumer surplus. And that's a problem.

I do want to point out, though, a simple error that PBS reporter Paul Solman makes that one would expect anyone who understands basic math not to make.

Note the quote from President Obama near the top of the story (at the 0:33 point):

Anyone claiming that America's economy is in decline is peddling fiction.

Solman then asks: "But really? Tell that to eminent economist Robert Gordon, a Democrat, who's peddling a distinctly nonfictional new book, The Rise and Fall of American Growth."

But there's no contradiction. Obama claimed that the U.S. economy is not in decline. What does this mean? That growth is positive. Is growth positive? Yes. Has growth, measured by real GDP, fallen since the golden era Gordon discusses? Possibly yes.

It's basic math, but it's basic math that people often forget. One of the most valuable things I learned in Ben Klein's course in monetary theory at UCLA was always to distinguish between levels and rates of change.

Solman does it again at the 2:08 point, saying, "But MIT's Erik Brynjolfsson doesn't buy the argument that the U.S. economy's best days are over."

I should hope he doesn't. Nor does Gordon. If per capita economic growth persists, then the U.S. economy's best days are ahead.
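The levels-versus-growth-rates point is easy to make numerically. In this sketch the growth rates are made-up illustrations, not Gordon's estimates:

```python
# Made-up growth rates to illustrate levels vs. rates of change: growth can
# fall well below its golden-era pace while the LEVEL of GDP keeps rising,
# so "slower growth" never implies "an economy in decline".

golden_era_growth = 0.03   # hypothetical annual growth in the 1870-1970 era
recent_growth = 0.015      # hypothetical slower growth today

gdp = 100.0
for _ in range(10):
    gdp *= 1 + recent_growth          # the level rises every year

growth_has_fallen = recent_growth < golden_era_growth
economy_in_decline = recent_growth < 0
# growth_has_fallen is True while economy_in_decline is False and gdp > 100:
# Gordon's thesis and Obama's claim can both be right at the same time.
```

As long as the growth rate stays positive, the level keeps setting new highs, which is all "not in decline" requires.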

UPDATE: Commenter David below reminds me that I've discussed the consumer surplus point before. Here's one instance.

My post on elite high schools and college admission led an Ivy League admissions officer to email me.  Here's what he wrote, with his kind permission.  Name and school redacted.


Your post on TJ caught my eye, as last year I read admissions files from NOVA for [redacted Ivy League school] (along with an actually qualified admissions officer), including those from TJ. Feel free to share any of this, though for reasons of discretion (and being junior faculty!) please don't include my name/exact institution.

Much of what you wrote rings true from my experience, but I'm not sure I'd reach the same conclusion you did. 

We definitely held students from TJ to a higher standard than those from less prestigious schools. In fact, if memory serves they were held to the highest standard of any NOVA students by a decent margin. 

Why? From our perspective, mainly knowing that students there get lots of encouragement/coaching to do the kinds of things that look good on an application, so a student from TJ that looks equally good on paper as someone from another school (setting aside class rank) is probably less good of a student. Also, there was some desire to give students who had fewer opportunities a leg up, though this effect probably wouldn't help the children of a professor even if they went to a less prestigious school. 

On the other hand, we also certainly accounted for the strength of the school when interpreting class rank. I don't remember exact numbers, but I think we gave students from TJ a close look in the 2nd and even 3rd decile while this would be a kiss of death from most other schools. 

From a parent/student perspective, the question is whether the boost in application quality from being surrounded by high achievers and resources/opportunities students don't get elsewhere outweighs the fact that they will face a higher bar when admissions officers read their file.  Theoretically, I would think that causal effect of going to TJ on chances of admissions is probably neutral to somewhat positive. To the extent that admissions officers care about getting talented students who are prepared for an elite college, even if they fully filter out the better preparation at schools like TJ when making inferences about talent, they will still appreciate the better preparation in and of itself. Nothing in my empirical observations led me to think this theoretical expectation is wrong. 

(Of course this sets aside the impacts outside of chances of admission to an elite school, like what they actually learn or how they might be harmed by the pressures of going to such a school!)

Hope this is of interest,


David A. Graham argues that firms that make charitable contributions create long-term problems.

For those who haven't been paying attention to the news lately, the government of Flint, Michigan totally messed up, selling lead-laced water to Flint residents for 16 months, despite repeated objections from local residents complaining about the water. The Michigan Department of Environmental Quality and the federal government's Environmental Protection Agency messed up too. For more details see Shikha Dalmia, "Flint's Water crisis isn't a failure of austerity. It's a failure of government."

In a recent article at The Atlantic titled "The Private Sector Is Now Providing Basic Services to Flint," David A. Graham writes:

A coalition of some of America's biggest companies is organizing a trucklift for Flint, promising to deliver 6.5 million bottles of water to the city in order to provide clean drinking water for schoolchildren through 2016. Walmart, Coca-Cola, Nestlé, and PepsiCo say they will deliver 6.5 million bottles to Flint, enough for the city's 10,000 students.

Graham admits that the generosity shown by Walmart, Coca-Cola, Nestlé, and PepsiCo in donating water to schools in Flint, Michigan is good in the short run.

He also admits the problems with government, writing:

Failures of government and the effective disenfranchisement of Flint voters produced the crisis

But he has a problem with it in the long run. As Mona Lisa Vito asks Vinny Gambini in My Cousin Vinny, after she helps him win a case: "So what's your [his] problem?"

Graham's problem is much like Vinny's, who "wanted to win [his] first case without any help from anybody." Graham writes:

But the big water donation might raise even more uncomfortable questions. Walmart, Coca-Cola, Nestlé, and Pepsi aren't just charitable organizations that might have their own ideologies. They're for-profit companies. And by providing water to the public schools for the remainder of the year, the four companies have effectively supplanted the local water authorities and made themselves an indispensable public utility, but without any amount of public regulation or local accountability. Many people in Flint may want government to work better, but with sufficient donations, they may find that the private sector has supplanted many of government's functions altogether.

This is quite striking. Graham looks government institutions straight in the eye and admits that they have failed. He also admits that private charity has succeeded. But the donors aren't regulated. OMG. He doesn't argue that the private firms are giving poisoned water, the way the government did. (I correct myself: the government didn't give poisoned water; it sold poisoned water, and is insisting that Flint residents keep paying for it.) He also worries that the private businesses aren't accountable. Really? So Graham thinks that if somehow the water they gave ended up poisoning people, they wouldn't be liable? Isn't he confusing private firms with government monopolies?

HT: Robby Soave.
