I'm not advocating awkward language. My point is that Robin barely discusses what is, by normal standards, the most important aspect of the Age of Em: The lives of biological human beings. My explanation: Despite his claims of agnosticism, Robin thinks biological humans will be unworthy of his interest once trillions of ems exist. This makes sense if ems are literally conscious. Otherwise, not.
Robin's explanation - that "very little happens to biological humans during the em era" - is bizarre. On his own account, biological humans become fabulously wealthy people of leisure almost overnight. That's a big deal in itself, with far-reaching social and political implications. How happy will humans be? How many kids will they have? How will status games change? What will happen to partisanship? Religion?
In Robin's scenario, such concerns are silly. Does it make sense to feel bad for GM's majority shareholders because they don't personally assemble cars?
My complaint is not just that "conservatism" is used in many ways, and Robin picks one. My complaint is that "conservatism" is primarily used in one way that isn't Robin's way, leading to confusion. This is a symptom of the unfortunate diary-like style of The Age of Em - the fact that you have to pre-understand Hansonian thought in great detail to grasp what he's saying.
This looks like motte-and-bailey to me. Robin routinely tries to paint the Age of Em in a favorable light. Here's one memorable instance. When you point out that his arguments are unconvincing, he protests he's merely trying to describe the future accurately, however awful it may be. A few days later, he resumes his advocacy.
Never mind "guarantees." My argument above implies biological humans are likely to be wiped out one year into the Age of Em as Robin describes it. Is my argument wrong? If so, why?
Bryan's last two objections, on economics, are the ones I take most seriously.
As I said, this large literature focuses on motivating workers who are free to quit. If workers aren't free to quit, terror is an effective motivator - even for complex tasks. Again, read the history of the Soviet nuclear program. Stalin's top scientists worked in the shadow of death. Since they couldn't flee, they worked like dogs to give him nuclear weapons, and reached their goal rapidly. Just one example, but a powerful one nonetheless.
I agree Soviet and Nazi slaves' productivity was normally low, but the reason is simple: Their labor camps did not prioritize productivity; their main aim was crushing hated enemies, not maximizing output.
Historic slave systems did often augment negative incentives (torture, death) with positive incentives (better treatment, cash). But there's a simple economic story: When slave-owners have imperfect information about slaves' productivity, high quotas lead to lots of counter-productive punishments. Threatening to execute everyone who falls below the 90th percentile of output, for example, requires slave-owners to kill 90% of their slaves. Information about ems' productivity, however, should be much more accurate, especially since most descend from a small number of exceptional humans. These are ideal conditions for heavy use of negative incentives.
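The percentile arithmetic here can be made concrete with a tiny simulation (the distribution and numbers are illustrative, not from the original):

```python
import random

random.seed(0)

# Illustrative sketch: an owner with noisy information sets a quota at
# the 90th percentile of observed output. By construction, 90% of
# workers fall below the quota and get punished, regardless of how
# hard anyone works.
outputs = [random.gauss(100, 15) for _ in range(10_000)]
quota = sorted(outputs)[int(0.9 * len(outputs))]
punished_share = sum(1 for x in outputs if x < quota) / len(outputs)
print(punished_share)  # 0.9
```

The same logic scales with the quota: a quota at the p-th percentile punishes a fraction p of the workforce whenever output can't be observed precisely, which is why accurate information about ems' productivity changes the calculus.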
If they're not free to quit, I think exactly that.
To make global GDP double every month, you don't just have to overcome some bottlenecks. You have to overcome an accelerating series of them. The bigger and faster the changes you seek, the more obstacles you meet.
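A back-of-the-envelope sketch shows just how extreme monthly doubling is once you compound it:

```python
# Doubling every month compounds to a factor of 2**12 per year,
# roughly 4096x annual growth, against the 2-5% annual rates
# growth economists actually observe.
monthly_factor = 2
annual_factor = monthly_factor ** 12
print(annual_factor)  # 4096
```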
Whatever you call it, I'm exercising common-sense skepticism. When someone predicts huge changes, I scoff unless they have overwhelming evidence in their favor. So should we all.
Technically, there was manned flight in 1850. More to the point, there's a world of difference between predicting specific technological advances, and claiming they'll quickly and constructively revolutionize society. I can believe there's a 1% chance ems will emerge in a century. That's not crazy. But it is crazy to think the emergence of ems will lead global GDP to start doubling on a monthly basis. For that, a conditional probability of one-in-a-million is generous.
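Multiplying the two stated probabilities gives the unconditional chance this paragraph assigns to the whole scenario (the numbers are from the text; the calculation is just a sketch):

```python
# Stated numbers: ~1% chance ems emerge within a century, and
# (generously) a one-in-a-million chance that, conditional on ems,
# global GDP starts doubling monthly.
p_ems = 0.01
p_doubling_given_ems = 1e-6
p_joint = p_ems * p_doubling_given_ems
print(p_joint)  # 1e-08
```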
Robin may protest he's simply applying standard growth theory to a novel situation. A better description is that he's mechanically applying standard growth theory to an unprecedented situation the model was never designed to handle.
Not just "many others." I predict at least 95% of empirical growth economists would agree with me. Indeed, I'd be surprised if Robin could find any empirical growth economist with no prior affiliation with futurism or science fiction who'd view his conditional growth prediction as plausible.
Robin has helped me more than anyone else to internalize Bayesian thinking. I'm flabbergasted, then, at how un-Bayesian his growth predictions are. If "Technology X will cause global GDP to double" doesn't deserve an extraordinarily low prior probability, what does? And what evidence has Robin produced to justify raising that prior above the microscopic level?
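The Bayesian point can be sketched numerically: starting from a microscopic prior, even evidence a hundred times likelier under the claim leaves the posterior tiny (all numbers here are hypothetical):

```python
# Hypothetical numbers: a one-in-a-million prior on the claim, and a
# likelihood ratio of 100 in its favor from the evidence produced.
prior = 1e-6
likelihood_ratio = 100

prior_odds = prior / (1 - prior)
posterior_odds = prior_odds * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)
print(posterior < 1e-3)  # True: still far below even a 0.1% chance
```

The evidence moves the prior, but nowhere near enough to make the prediction plausible, which is the un-Bayesian gap the paragraph is pointing at.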