Bryan Caplan  

So Far

This world contains mighty social forces that have worked wonders within the observed range.  Top cases:

1. Industrialization.  So far, industrialization has launched mankind from the ubiquitous poverty of the farmer age to the amazing plenty of the modern age.

2. Population growth.  So far, population growth has greatly improved living standards by increasing the total number of idea creators, and hence the global rate of innovation.

3. Computers.  So far, computers have made human existence not only markedly richer, but much more entertaining.

4. Nuclear physics.  So far, nuclear physics has allowed the creation of cheap, clean energy.  And it's far from clear that the net body count of the nuclear bomb even exceeds zero.  A conventional conquest of Japan probably would have exceeded the body counts of Hiroshima and Nagasaki.

5. Immigration.  So far, immigration - especially from poor countries to rich countries - has lifted hundreds of millions of people and their descendants out of poverty, with no clear harm to host countries' institutions.

Still, in each case, the "so far" proviso sounds ominous.  Anyone given to morbid thinking can imagine that these so-far-wonder-working social forces are leading to utter disaster.  Let the nightmares begin!  Industrialization could lead to environmental apocalypse or global totalitarianism.  Population growth could lead to mass famine.  Computers could lead to ultra-unfriendly Artificial Intelligence - the Terminator scenario.  Nuclear physics could lead to all-out nuclear war.  Immigration could destroy First World institutions, making the whole world into the Third World.

For the numerate, of course, the mere ability to weave nightmare scenarios is no reason to lift a finger.  Probabilities are essential.  How can we estimate these probabilities?  In practice, most of us pick out a few "serious" nightmares on ideological grounds, and dismiss the rest as too silly to entertain.  But that hardly seems like a reliable way to proceed.

How should we ballpark disaster probabilities?  Think like superforecasters.

First step: Remember the base rate.  Disasters are inherently rare.  As I put it a decade ago:
The fact that we've gotten as far as we have shows that true disaster must be extremely rare. Unless fears almost always failed to materialize, we'd already be back in the Stone Age, or plain extinct. It's overwhelmingly unlikely that we've gotten lucky a million times in a row.
Second step: See how long the observed range has lasted.  Centuries of success deserve heavy weight.

Third step: See how historically prevalent doom-saying about X has been.  Why does this matter?  Because it measures humans' topic-specific propensity for paranoia.  If everyone thought industrialization was fine until ten years ago, we should be more worried than if industrializationphobes started squawking in 1750.
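The base-rate logic in the first step can be made concrete with a quick sketch (my illustration, with hypothetical numbers, not Caplan's): if each historical "nightmare" episode independently carried disaster probability p, the chance of surviving n such episodes is (1 - p)^n, so a long unbroken track record is strong evidence that p is tiny.

```python
def survival_prob(p, n):
    """Probability of no disaster across n independent episodes,
    each carrying disaster probability p."""
    return (1 - p) ** n

# A 1-in-100,000 per-episode risk makes surviving a million
# episodes astronomically unlikely...
print(survival_prob(1e-5, 1_000_000))   # ~4.5e-05
# ...while a 1-in-a-billion risk is fully consistent with our record.
print(survival_prob(1e-9, 1_000_000))   # ~0.999
```

The independence assumption is of course contestable, which is part of why the second and third steps matter too.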

Not good enough?  If you're morbid across the board, I've got nothing more to say - though I'm eager to bet you.  If you're selectively morbid, though, I'd like to know why the nightmares that keep you up at night are so much more compelling than the nightmares that put you to sleep.


COMMENTS (31 to date)
Graham Peterson writes:

I'm not sure we've looked closely enough at what inspires Sky Is Falling mythology before rushing to contradict it. Abrahamic religions, the religion of social and environmental justice, and (so I hear) lots of mythology from other cultures includes a romantic Eden, fall from grace, and eventual cataclysm/judgement. Reformers tend to reproduce some version of that story when they claim that we all need to change course and undo some terrific moral harm, lest we be punished.

Thomas writes:

A probability is a statement about a very large number of like events. It says nothing about an individual event. It certainly says nothing about what will happen next. Further, the events of human history don't lend themselves to probabilistic interpretation: they're too varied and too dependent on human intentions. All of that said, I'm not morbid and don't have apocalyptic nightmares. It's just that I object to the misuse of the concept of probability.

Lawrence D'Anna writes:

1, 2, and 3 have exponential growth. The past can't be much of a guide to how that growth curve will end, but it must end.

David N writes:

Thomas, the word you are looking for is "frequency." A probability may refer to the likelihood of occurrence for a single, unique event.

Blackbeard writes:

Countries like the U.S. (capitalist liberal democracies) are rare in human history and a recent development. For most of human history, most of us lived short, nasty, brutish lives under some sort of tyranny. As I look at political developments in Europe and the U.S., it seems to me that the Age of Enlightenment is coming to an end.

I would appreciate it if you, or someone, could show I'm wrong.

Bernie writes:

I read that Microsoft's Tay AI passed the Turing test! (or at least the Turing twitter test)

Koenfucius writes:

I instinctively share the optimistic outlook in this article, but I'm not sure the arguments against the 'so far' moderation are robust.

The base rate remark in particular is doubtful IMV. Yes, true disasters appear rare, but that perception is coloured by the self-centred bias.

The ability to cause big disasters is relatively recent: ancestors as recent as maybe 10 generations ago simply did not possess the technology to wipe out the planet or its inhabitants, or to throw mankind back to the Stone Age. So to say it's unlikely that we have been lucky a million times in a row is ignoring the fact that the first 999,995 times there was no luck involved.

On a similar note, we only really know about the one planet that we live on. Maybe tens, hundreds, or millions of planets have suffered a terrible fate, without us knowing about it. We simply don't know the base rate.

So the argument appears very similar to that of a trader who believes his success, having beaten the market 20 quarters in a row, is due to his skill and perspicacity, rather than luck. It's the same kind of self-centred bias.

Tyler Wells writes:

I fear cancer, car crashes, and gang/gun violence (I live in Guatemala). Maybe I'm incredibly selfish, but my world will end some day soon. I don't understand why people worry about remote, uncontrollable events. You are just as dead from a car crash as from a meteor hitting the earth.

Here's another thought: if you are really paranoid, why would you want to give up certain gain now (from all the wonderful things Bryan mentions) when mass destruction almost certainly lies in the future?

Daniel Klein writes:

Here is Adam Smith on "so far":

"The annual produce of the land and labour of England, for example, is certainly much greater than it was, a little more than a century ago, at the restoration of Charles II. Though, at present, few people, I believe, doubt of this, yet during this period, five years have seldom passed away in which some book or pamphlet has not been published, written too with such abilities as to gain some authority with the public, and pretending to demonstrate that the wealth of the nation was fast declining, that the country was depopulated, agriculture neglected, manufactures decaying, and trade undone. Nor have these publications been all party pamphlets, the wretched offspring of falsehood and venality. Many of them have been written by very candid and very intelligent people; who wrote nothing but what they believed, and for no other reason but because they believed it."

Shane L writes:

Regarding the nuclear attacks on Japan, I gather there is some debate about the Japanese government's willingness to surrender around that time anyway, without atomic bombs or invasion. The Soviet Union had just declared war and conquered areas like Manchuria so their destruction seemed likely. Anyway, I'm not certain surrender was imminent but there is some discussion.

"So far, immigration - especially from poor countries to rich countries - has lifted hundreds of millions of people and their descendants out of poverty, with no clear harm to host countries' institutions."

What's more, the success of the liberal United States probably served as inspiration for republicans in the source countries of Europe and elsewhere.

I wonder: does the mass migration of Europeans to the Americas count as "immigration"? This resulted in catastrophic damage to the indigenous American peoples, as did the migration of Germanic peoples into the late Roman Empire. These were invasions rather than controlled migration processes, but I'm not clear on the difference in some cases; if the indigenous government or people don't favour it, is it an invasion?

Psmith writes:

Blackbeard's got a point. In general, this is all very sensitive to the time frame you induct on. Last three or four centuries, everything looks pretty good. Entirety of human history--nasty, brutish and short.

And for that matter, "Centuries of success deserve heavy weight."--compared to what? Lots of potentially civilization-ending events--asteroid impacts, massive volcanic eruptions, rapid climate change--have occurred seldom or never within written human history but pretty frequently on geologic timescales.

Noah writes:

The Russians would have been in Japan a few days later. Nearly every major population centre had already been bombed out to a greater extent than even the damage inflicted by the nuclear weapons. Japan clearly didn't care about civilian casualties, and the nuclear bomb should not have seemed any more threatening than a standard course of urban civilian bombardment, especially considering that the decision makers responsible for the capitulation were only aware that, yet again, some city had been demolished. It is certainly not clear that any lives were saved by dropping those two nukes, but it is clear that the Russians were coming. Might Japan have preferred American occupation to being divided between two foreign empires?

If there were lives saved through nuclear weapons, it is in the role that nuclear weapons played in ensuring that America and the Soviets did not engage in conventional warfare, which was largely left up to proxies.

Roger Sweeny writes:

So far, population growth has greatly improved living standards by increasing the total number of idea creators, and hence the global rate of innovation.

1) There are good ideas and bad ideas. Some idea creators have worsened living standards. The more idea creators, the more bad ideas. Would we be better off with two Karl Marxes? You are implicitly assuming that the good ideas will be more powerful than the bad ideas. What kind of ideas people generate depends a lot on what kind of society they grow up in and live in.

2) Whether a potential idea creator will generate ideas that increase living standards depends on the opportunities to develop and transmit those ideas. If Richard Feynman (or Milton Friedman) had been born into a Zimbabwean family 30 years ago, he'd probably be a starving peasant today. Similar things could be said for much of Africa, the Middle East, etc.

Chris writes:

"though I'm eager to bet you."

Ha, ha, that's funny. Sure, if you win you get my money. If I win, we're both dead so it doesn't matter. Either way, I lose.

Mark Bahner writes:
1, 2, and 3 have exponential growth.

As I tried to point out at the "Peak Prosperity" website, global human population has not been growing exponentially for the last ~50 years. (But they don't let their readers see such things. It's bad for the apocalypse business.)

Sub-exponential growth since ~1965

Tom West writes:

My disaster scenario - robots + AI make the market wage for most humans below subsistence.

In the Capital vs. Labour equation, labour drops to near zero as a robot costs, say $50K, + $1K/year for maintenance.

It requires no malice, just progress, yet it inexorably leads to a 90+% population drop (compare horse populations before and after the internal combustion engine).

Luckily, I figure it's at least 30 years before we reach that spot, and as a rich North American (by world standards, anyway), I'll have some capital.

Andrew_FL writes:

"no clear harm to host countries' institutions."

Here we see, ladies and gentlemen, the problem with living inside a beautiful bubble. This statement is clearly false to anyone living in, you know, the actual United States and not inside a beautiful bubble.

Brian Holtz writes:

The sleight of hand here is: there is no "so far" about Caplan's proposed open-borders policy. That policy would be a massive discontinuity, not an extrapolation. For obvious reasons, that policy has never been tried anywhere during the modern era of

  • radically decreased costs (in both transportation and cultural dislocation) of emigration

  • radical international differences in depth of capital -- industrial, infrastructural, ecological, human, and institutional

  • radically increased population.

All three of the above have recent hockey sticks in their graphs, and that's why extrapolation from pre-hockey-stick norms is historically illiterate. These factors combine to conjure not an increasing river of immigration, but an impending dam break. (If it's not akin to a dam break, then whence the open-borders complaint?) Gallup finds that, under current migration policies, 380M adults worldwide would like to immigrate to Western 1st-world countries. I'd bet that a unilaterally open-borders America would get half a billion immigrants in the first decade.

The U.S. should indeed unilaterally increase immigration -- but not so much as to dilute or deplete our industrial, infrastructural, ecological, human, and institutional capital.

If open-borders advocates really believe their rosy predictions of immigration impacts, then they should advocate increased immigration conditional on the absence of negative impacts. Instead, all I hear is moralizing and untestable predictions -- aka signaling.

HM writes:

I'm not sure I buy the low likelihood of disaster. Sure, over a 200 year stretch of time it looks good now, but 1914-1989 was a rough ride for many parts of the world.

Most importantly, it was an *extremely* rough ride for the dominant bourgeoisie from the turn-of-the-century onwards. I also think the fate of this class is a relevant benchmark as they supported reasonably well-functioning institutions for their own interests, but also pursued strong exclusionary policies to cling to their power -- just like the west does with migration restrictions today.

In 1914 most corners of the world had a bourgeoisie, and they were often allowed to pursue their life without extreme censorship or risk of expropriation. I.e. they were guaranteed the liberties the upper-middle class seemed to value the most.

Sometimes this happened in fully-fledged democracies -- the US and, after voting rights acts, the UK and its offshoots -- but often in limited democracies where the middle class was guaranteed excess influence (Vienna, much of continental Europe), or in autocracies that made a limited upper-middle-class life possible as long as you didn't blatantly undermine the state's authority (Austria-Hungary and Russia), or under colonial powers that guaranteed many negative liberties for the colonizers and an indigenous bourgeoisie (either by direct control or in concessions like Shanghai).

During this time, there was a strong push for increased participation, via voting rights, revolutions, decolonization, etc. Many worried that the large migration to the cities brought about by urbanization, plus the inclusion of more people in the political process, together with radical ideas, would threaten the stability of the system.

The bourgeoisie could legitimately wonder if their rights would be protected.

Looking at the outcome, the pessimists were often vindicated from the bourgeois perspective. War and revolutions wrecked most of continental Europe and any semblance of ordinary lives until 1945, and in Eastern Europe until 1989.

Outside of Europe, bourgeois strongholds such as Alexandria and Shanghai vanished in revolutions and decolonization processes. (In Alexandria, the foreign community was expelled in 1957.)

Decolonization in Sub-Saharan Africa ended the racism of the colonial regimes, but for the middle class, which had had a good relationship with those regimes, the outcome wasn't as rosy: the genocide-like treatment in Algeria that led to the large migration to France, the expulsion of the Indians in Uganda, the dire situation of the Indians in Mozambique after the communist takeover.

In Ethiopia, the Emperor enriched the landlords, but also allowed private ownership and a business class. After the Derg, rich people were viciously killed and/or expropriated.

Of course we do not need to get started with how 1913-elites in Cambodia, Vietnam, and China fared.

The era of liberalism in Latin America gave way to decades of populist parties and various degrees of expropriation.

Bryan's perspective seems very much colored by the US which had a reasonably problem-free inclusion of non-elites in the political system, but this is far from the general experience of all parts of the world.

I think that the process of expanding representation and ending colonization was messy but also important in the long-run. The broad-based prosperity that is built in many countries today is more stable than the previous elite-dominated community guaranteed by security police. For similar reasons I support higher levels of migration, but not open borders.

Because make no bones about it. Drastically increasing political inclusion is a drastic thing. And the 1913-pessimists in the bourgeoisie do not look that silly in hindsight.

Anonymous writes:

@Tom West

If in your scenario robots can do everything humans can do, why would the price of a robot be so low? People would be willing to pay quite a bit more than that for labor, right? So why wouldn't they - more precisely why wouldn't the output of robots be increased until the marginal cost of producing a robot was equal to the price people are willing to pay for labor?

When people describe this scenario it seems to me that they take it for granted that it is possible to produce infinite robots - that the marginal production cost of a robot is zero. I don't see why. That's true of software, but robots are hardware, and hardware cannot be endlessly reproduced.

Where you might have a problem is if the inputs to human labor can be more productively used for something else. That's the situation that horses faced. It's not the situation we face just from robots that can do everything we can do, unless you add in the extra assumption that they use the same inputs as us - are grown on farms, for example. If robots are not competing with humans for inputs then it seems that comparative advantage would apply and there would be no problem.

David Condon writes:

Thomas Malthus used similar reasoning to argue we would permanently live at subsistence level. Of the issues you mentioned, two are 20th-century innovations (nuclear technology and computers), one is an 18th-century innovation (industrialization), and only two have been around for a significant amount of time (population growth and immigration), and historically not at anything close to current levels of change. So we don't really have centuries of data to look back on for any of these issues, or for anything comparable to them. Individually, none are likely, but collectively I think there is a reasonably good chance (greater than 30%) that one of them will come true within the next century. I would rank the dangers in this order:

1) Population growth
2) Computers
3) Nuclear War
4) Industrialization
5) Immigration

There is a relationship between these. For instance, there is the isolated danger of population growth without any other problem, the isolated danger of industrialization, and the combined danger of the two to worry about.

ThaomasH writes:

The problem is not whether further movement along the trajectory of variable Z implied by policy X has a zero chance of causing great harm, but the benefit/cost of adjusting X so as to change Z marginally.

Immigration has been positive so far and at rates of change so far observed but that does not imply that the non-marginal change from the present crazy and unjust system to Open Borders would be an improvement. Why not make marginal changes to the system?

ChrisA writes:

AI will be the end of the human race, per Bostrom et al. I do think most of the hand-wringing about technology ending humanity is the innumerate musing of clever writers, like global warming or peak oil. But we have already faced a serious existential risk: nuclear war certainly could have been a civilisation ender.

The real question I think is will AI be the end of the human race in a benign way or will it be a malicious end.

Tom West writes:

Anonymous - Comparative advantage only means that humans could earn *something*. It doesn't have to be above subsistence level.

There doesn't have to be an infinite number of robots, just enough to put most of the population below subsistence.

Now robots wouldn't compete for inputs directly, but there are costs for agriculture. It doesn't matter if I'm the only one in the world who wants to buy a car, I still won't be able to buy one for $1.

And of course, a major input humans need is land to live on. Bad time to be a renter, or have a mortgage, as the price of land also drops like a rock.

After all, it doesn't matter how cheap everything becomes when labour costs drop to near $0 when you're also earning ~$0.

Eliezer Yudkowsky writes:

[Comment removed pending confirmation of email address. Email the to request restoring this comment. A valid email address is required to post comments on EconLog and EconTalk.--Econlib Ed.]

Ari T writes:

I'd like to hear Robin Hanson's response. Caplan is witty but Hanson doesn't let emotions sway judgement.

Anonymous writes:

@Tom West

But I think you need a reason to expect it not to be above subsistence level. All else being equal, comparative advantage means both parties gain even if one of them is less productive at everything. To make the claim you're making, there should be some extra piece of the argument justifying why this won't be the case, unless you think this effect applies much more broadly than just to automation.

The idea of growing robots on farms was kind of glib but I do think this could happen in a more plausible form. For example: maybe it is always more efficient to use computers for robots rather than as tools for human use, so all human workers will have to do without computers, cutting productivity and therefore wages. Or maybe the effect won't be quite that strong and we'll return to timesharing. Will this maybe affect people who directly use computers more than, say, people whose productivity is increased indirectly by computers - automated logistics, for example? What are the inputs to automation, which of them are also used for human labor, what is the cost of producing each - will the limiting factor be something only robots need, or something necessary to both robots and humans?

I don't know the answer to any of these questions, but I don't think you can make any confident statement on the expected effects of more automation without having such answers. So I think this issue is much more nuanced than the simple argument "when robots are more efficient than humans then there will be no jobs left for humans to do" suggests.

I also think that in the long run, it is inevitable that the most efficient way to use any resource will probably not be human labor. So I would argue that eventually this will become a problem, but that for it to happen, robots being able to do everything a human can is a necessary condition but not a sufficient one.

Mark Bahner writes:
If in your scenario robots can do everything humans can do, why would the price of a robot be so low? People would be willing to pay quite a bit more than that for labor, right?

Why wouldn't the price of labor fall to meet the price of robots? For example, let's say a computer smart enough to drive a car costs $2000, and can drive 24/365 for 20+ years. Why would anyone ever pay taxi or Uber/Lyft drivers (more than pennies an hour) in such a situation?

Plucky writes:

When dealing with disaster probabilities, the inherent problem is that you have little relevant data. From a historical perspective, the closest you can do is ask "how close did we get to that outcome?", where "how close" is measured in the number of "failure points" rather than probabilities. Typically, large industrial disasters require 6 or 7 independent failures, non-failure of any one of which would have prevented disaster. That should probably also be the framework for morbid thinking. I'll rank your 5 proposals in order of historical closeness:

1) Nuclear. MAD-level nuclear war was absolutely a plausible result of the Cuban Missile Crisis, and arguably a plausible outcome of the Able Archer exercise in the early 80s. In each case, there was only a single point of failure remaining, that being the collective decision of at most a dozen people. In the Cuban Missile Crisis, there were two separate single points of failure, either one of which could have triggered that outcome. Coming within a single point of failure twice inside the first 40 years of the invention is not terribly comforting. Geopolitics now makes MAD-level nuclear disaster much less likely than during the Cold War, but nuclear terrorism or regional nuclear war is sufficiently plausible to be worth worrying quite a bit about. A lost Pakistani nuke blowing up Mumbai, an Iranian/Israeli nuclear war, or a security failure at a nuclear power plant near a major city to a paramilitary-level terrorist group are all thinkable outcomes, any one of which would probably take the net benefit of nuclear technology into negative territory.

2) Global Totalitarianism. Semi-permanent Nazi domination of Eurasia was a plausible outcome of WW2. Nazi capture of Moscow and Stalingrad could well have precipitated the Soviet Union suing for peace, which would have made the Allied invasions of France and Italy doubtful unless paired with the use of nuclear weapons on a scale that wouldn't have been achievable until '46 or '47. That wouldn't have been "global", but an all-totalitarian continental Europe would be pretty disastrous. We came within 2-3 points of failure on that outcome, with those points of failure being major military engagements.

The next three bad outcomes are, historically speaking, orders of magnitude further away from realization. Mass famine would have required a coordinated global failure of improvements in agricultural productivity: tens if not hundreds of thousands of failure points. Widespread institutional failure from immigration is hundreds or thousands of points away if you consider each "institution" as a single point. That granted, in some areas in Europe it has resulted in institutional collapse, but not one that has been general or society-wide. Unfriendly AI has not gotten anywhere close to being a societal threat historically, but that is not, in my opinion, the most relevant disaster scenario from computers. The most likely disaster scenario from computers is the collapse of critical infrastructure reliant on automation, with no manual backup plan in the event of computer system failure. It's not easy to guess how close we've gotten to that kind of scenario, but the number of failure points on, say, the electric grid is probably at most on the order of 100. Way more probable than famine, way less probable than nuclear war.

knb writes:

1. Industrialization. Industrialization hasn't made us much happier but I do agree that it appears to be a net win.

2. Population growth: Nope, there is no reason to believe population growth has been net positive. Humans were miserably Malthusian-trapped for thousands of years, with lots of people starving off at the margins. If people had been able to coordinate to lower birth rates, they would have been vastly better off. If population were such a great thing for innovation, the Asian countries which developed intensive rice cultivation would have launched the industrial revolution and probably had "The Singularity" by now. Instead, the divergence from Malthusian constraints happened in low-density Europe in the wake of the massive culling of the Black Death. Nick Szabo has lots of information about this on his blog.

3. Computers: It's pretty dubious that computers have made the world so much richer. Economic growth actually slowed down compared to the previous decades during the peak of the computer revolution (the 1970s-2000s had lower GDP growth in developed countries). The hedonic treadmill means we should also doubt the value of "more entertaining."

4. Nuclear physics. "So far, nuclear physics has allowed the creation of cheap, clean energy." Nuclear energy isn't cheap, at least so far (installed costs for new PWR are generally several times higher than for coal, natural gas, or wind).

5. Immigration: This is obviously a negative-sum game in many cases. Plenty of research (e.g. Putnam) shows people are happier the more homogeneous their communities are. We have plenty of historical examples of mass movements of peoples leading to catastrophic social collapse for the hosts: the Bronze Age Collapse, the Fall of the Roman Empire, the colonization of the Americas.

Basically I think you're insanely over-optimistic about all of these things.

Floccina writes:

So far, fossil fuels, even with the dumping of CO2 into the air, have been great for us. Heck, even the added CO2 in the air has so far been great, making us a little warmer and crops grow a little faster.


I have seen some folks on my Facebook page go anti-vaccination. I cannot get over how bad that seems to me. So far, vaccinations have saved billions of people!
