
Today is the official release date for Robin Hanson’s first book, The Age of Em.  In far mode, I’m delighted.  Robin is one of my favorite minds in the universe, a perpetual motion machine of intellectual delight.  I’ve long been worried that he would never write a book; now he’s put my fears to rest.

In near mode, however, I’m frustrated by the uneven quality of the book.  Robin makes many excellent points, but he interweaves them with overconfident claims that range from speculative to insane.  While The Age of Em embodies most of my guidelines for worthy non-fiction, it still reads like a diary.  If you don’t already know Robin’s thought well, the book is simply hard to follow.  And while he doesn’t quite preach to the choir, his efforts to connect to readers who don’t read sci-fi are perfunctory at best.

For release day, though, let me focus on the highlights.  The Age of Em embodies a new idea, or even a new genre: hard social science fiction.*  In ordinary hard science fiction, authors try to make their stories consistent with known laws of physical science.  Robin tries to make his analysis consistent with known laws of social science as well.  The goal is to nudge social scientists to think more about the future, and futurists to think more about economics, sociology, psychology, and beyond.  What would really happen if we could fully digitize existing human minds?

While Robin applies a wide range of social science, the Malthusian model is the eye of the storm.  For biological human beings, the model has been a colossal failure for centuries – not to mention a perennial rationale for savagery.  But once we can copy human minds like software – and conveniently upload them into robots and virtual reality – the Malthusian model finally comes into its own, in a weird way.

Thus the introduction of competitively supplied ems should greatly lower wages, to near the full cost of the computer hardware needed to run em brains. Such a scenario is famously called “Malthusian,” after Thomas Malthus who in 1798 argued that when population can grow faster than total economic output, wages fall to near subsistence levels.

Note that in this section we are assuming that enough ems are willing to copy themselves to fill new job openings, and that they have not organized to avoid competing with each other. We shall consider these assumptions in more detail in the section “Enough Ems”.

Note also that having em wages near subsistence levels should eliminate most of the familiar wage premiums for workers who are smarter, healthier, prettier, etc., than others. Because ems can be copied so easily, even the most skilled ems can be just as plentiful as any other kind of em. While wages vary to compensate for the costs of training to learn particular tasks, wages do not compensate much for other general differences. This should greatly reduce wage inequality (although not necessarily wealth inequality), and increase the relative fraction of workers hired that are of the types that earn higher wages today. For example, if today we hire fewer lawyers compared with janitors because lawyers are more expensive, in a similar situation ems hire more lawyers and fewer janitors.
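
To see the quoted logic in miniature, here is a toy sketch (made-up numbers of my own, not Robin’s): as long as a copy of an em earns more than the hardware it runs on costs, it pays to make another copy, so the wage for every task gets bid down to the hardware cost and today’s skill premiums evaporate.

    # Toy illustration of the quoted Malthusian logic: because any em can be
    # copied, the wage for every task is bid down toward the cost of the
    # hardware needed to run one more copy.  All numbers are illustrative
    # assumptions, not figures from the book.

    HARDWARE_COST = 1.0  # assumed per-period hardware rental cost for one em

    # Assumed starting wages, with a large skill premium for lawyers.
    wages = {"lawyer": 80.0, "janitor": 10.0}

    for _ in range(100):
        for task, wage in wages.items():
            if wage > HARDWARE_COST:
                # A copy earns more than its hardware costs, so more copies
                # enter this task and compete the wage down.
                wages[task] = max(HARDWARE_COST, wage * 0.9)

    print(wages)  # both wages end at the hardware cost; the skill premium is gone

The only wage differences that survive in this toy world are the ones Robin mentions above: compensation for the costs of training to learn particular tasks.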

Another virtue of the book is that Robin powerfully defends the intellectual value of futurism.  If the past is worth studying, so is the future:

If the future matters more than the past, because we can influence it, why do we have far more historians than futurists? Many say that this is because we just can’t know the future. While we can project social trends, disruptive technologies will change those trends, and no one can say where that will take us. In this book, I’ve tried to prove that conventional wisdom wrong, by analyzing in unprecedented breadth and detail the social implications of minds “uploaded” into computers…

And whatever flaws the book has, you can’t accuse Robin of wishful thinking. 

My most basic method in this book is to focus first on expectations, rather than on hopes or fears. I seek first what is likely to happen if no special effort is made to avoid it, instead of what I might prefer to happen, or what I might want to warn others to avoid. It is hard to speak usefully about which directions to push the future if you have little idea of what the future will be if you don’t push. And we shouldn’t overestimate our ability to push.

[…]

This book violates a standard taboo, in that it assumes that our social systems will mostly fail to prevent outcomes that many find lamentable, such as robots dominating the world, sidelining ordinary humans, and eliminating human abilities to earn wages. Once we have framed a topic as a problem that we’d want our social systems to solve, it is taboo to discuss the consequences of a failure to solve that problem.  Discussing such consequences is usually only acceptable as a way to scare people into trying harder to solve the problem.

If anything, Robin suffers from “sweet grapes” – trying to convince readers that however scary the future sounds, we’ll like it once we have it.  As I’ll soon argue, however, he makes this task needlessly difficult by stubbornly sidelining the welfare of the creatures his readers actually care about: biological humans!

* Term coined by Alex Tabarrok, if I remember correctly.