Bryan Caplan  

AI and GE: Answers

Last week I posed the following question from my Ph.D. Micro midterm:

Suppose artificial intelligence researchers produce and patent a perfect substitute for human labor at zero MC.  Use general equilibrium theory to predict the overall economic effects on human welfare before AND after the Artificial Intelligence software patent expires.

 As promised, here's my answer:

While the patent lasts, the patent-holder will produce a monopoly quantity of AIs.  As a result, the effective labor supply increases, and wages for human beings fall - but not to 0 because the patent-holder keeps P>MC.  The overall effect on human welfare, however, is still positive!  Since the AIs produce more stuff, and only humans get to consume, GDP per human goes up.  How is this possible if wages fall?  Simple: Earnings for NON-labor assets (land, capital, patents, etc.) must go up.  Humans who only own labor are worse off, but anyone who owns a home, stocks, etc. experiences offsetting gains.

When the patent expires, this effect becomes even more extreme.  With 0 fixed costs, wages fall to MC=0, but total output - and GDP per human - skyrockets.  Human owners of land, capital, and other non-labor assets capture 100% of all output.  Humans who only have labor to sell, however, will starve without charity or tax-funded redistribution.
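Caplan's two-stage story can be illustrated with a toy Cobb-Douglas model (my own construction for illustration; the production function, parameter values, and AI quantities are all assumed, not from the post):

```python
# Toy illustration: Y = L^a * K^(1-a), where L = human labor + AI labor
# (perfect substitutes). Under competitive factor pricing, the wage is
# w = a*Y/L and non-labor income is (1-a)*Y.
def economy(ai_labor, human_labor=1.0, capital=1.0, a=0.7):
    L = human_labor + ai_labor
    Y = L**a * capital**(1 - a)
    wage = a * Y / L               # per unit of labor (human or AI)
    nonlabor_income = (1 - a) * Y  # accrues to owners of capital/land/patents
    return Y, wage, nonlabor_income

# No AI, a restricted (patent-era) quantity, and a post-patent flood:
for ai in (0.0, 10.0, 1000.0):
    Y, w, r = economy(ai)
    print(f"AI labor {ai:7.0f}: GDP/human {Y:8.2f}, wage {w:6.3f}, "
          f"non-labor income {r:7.2f}")
```

As the stock of AI labor grows, the wage per unit of labor falls toward zero while GDP per human and non-labor income rise, matching the qualitative prediction above.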

Also check out responses by Capt J Parker and Alex Godofsky in the comments.


COMMENTS (21 to date)
david writes:

Godofsky rightly points out that it is quite likely that the patent holder will purchase the good from themselves as if P=0.

More generally, it seems quite implausible that the law of one price will hold here.

Tim writes:

So what you're saying is that I need to buy a house before the singularity.

Mike C. writes:

One thing troubles me about this. If the AI can replace all human labor, who says they (the robots, not the patent holders) will be content doing any labor on behalf of humans.

At this point, humans no longer have anything to offer robots so they have little incentive to let any of us survive (regardless of whether we own a house).

If they do let us survive, it will be because they are charitable. I doubt this charity will be contingent on traditional laws of human ownership.

Jim Rose writes:

sounds like the replicators on star trek.

who operates the machines? who tells them what to do? what not to do?

after the patent expires, would anyone care if the poor stole/copied the AI machines and made them work for themselves? who cares if a free good is stolen?

is it a crime to steal a replicator on star trek?

Stefano writes:
At this point, humans no longer have anything to offer robots so they have little incentive to let any of us survive (regardless of whether we own a house).

Speaking of incentives assumes that AIs can decide for themselves, that they have utility and a will.

Humans, as social animals, have evolved intelligence, but also survival instinct, reproductive urges, desire to outperform their peers, etc.

It is conceivable that AIs could be designed with human-level intelligence (or more), but without the other traits, which are useless in a worker.

Grant Gould writes:

You seem to be assuming that AIs only produce and never consume. This seems odd.

For an AI to replace a human worker, it would have to have similar local knowledge to that of a human worker. If the AI has local knowledge, efficiency demands at minimum that it controls a certain amount of consumption, if only to bid on its own inputs. If AIs can control consumption, then there is a fraction of the economy that consists of AIs producing at the direction of other AIs -- a purely intra-AI market.

Once this market exists, if as you assume that human wages go to zero then production by AIs for AIs will necessarily wholly replace production by AIs for humans (because why would the AIs produce for humans if humans can provide zero marginal benefit in return?). Ergo the humans will have incentive to produce for humans and their wage will not go to zero (though it may be quite low).

The alternative is to assume that the AIs are held in a state of slavery. This at least has the benefit of being well-explored economic territory: the ability of slave labor to drive the wages of unskilled non-slave labor to near zero has historical precedents. What does not have historical precedent is this scenario being economically or socially stable: It suffers all the same information and incentive problems of communism, but without the redeeming benefit of a superficial veneer of moral palatability.

(Note: I write this as someone who writes software for robots that do formerly human jobs for a living, so I've got a complex interest in this question.)

Mordatar writes:

I completely agree with your answer. However, I would put more stress on the effects on humans which only have their labor to sell, since this is the vast majority of the global population.
I believe you'll agree that the provision of minimum levels of subsistence is a public good, which would be underfunded if it relied on voluntary contributions. This makes the case for worldwide tax-funded redistribution, if the future of the world is anything like the scenario you described (and I think we're moving to something close to that).
With tax-funded redistribution, this future world is the very definition of paradise. Without it, it is plain hell.

Daublin writes:

Worth emphasizing is the "perfect substitute" aspect. There is more hidden in that assumption than might appear.

Many things where labor is applied directly require being a human being. Sports is one such.

Many other things are better if they are done by a human. Stand-up comedy, for example.

So with more realistic assumptions, the world will segregate into land owners and stand-up comedians.

Stefano writes:
The alternative is to assume that the AIs are held in a state of slavery.

Slavery is keeping some person against their will.

Are AIs persons? Do they have a will? Do they desire to do anything different than what their programmers created them to do?

Do will, desires, dreams, self-consciousness follow from high intelligence, or could AI simply be very intelligent (i.e. able to process information, even in creative ways) without necessarily being self-conscious?

rapscallion writes:

Great news! Human welfare increases!--except for all the people who starve to death.

P G writes:
Great news! Human welfare increases!--except for all the people who starve to death.

I think charity will play a much more important role in the transition period than it does today, given the exponential nature of the growth. If the median income for the "rich" class of people becomes 1000x today's median income (in real terms), then the "rich" folks voluntarily giving up 1% of their income would support an "underclass" that is 10x the size of the "upper class" at today's standard of living (which will look pathetic compared to the 990x-today's-wealth enjoyed by the rich, but is hardly subsistence farming). I don't think it's unreasonable to imagine that people commanding that much wealth would want to spend, on average, a small percentage of it to save so many lives.
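P G's arithmetic can be checked directly (a sketch; the 1000x income multiple and 1% giving rate are the comment's hypothetical numbers, not data):

```python
# Income measured in units of today's median income.
rich_income_multiple = 1000   # hypothetical: rich earn 1000x today's median
giving_rate = 0.01            # hypothetical: each rich person gives 1%

# Transfer per donor, in units of today's median income:
transfer = rich_income_multiple * giving_rate
print(transfer)  # 10.0 -> each donor supports 10 people at today's median standard
```

So a 1% giving rate does support an underclass 10x the size of the donor class at today's standard of living, as the comment claims.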

Brian writes:

Sorry, but this is such a silly question. I understand it is only a thought exercise, but it is 100% pure fantasy. This assumes that robots become better at creative content than humans and can replace things like Sex, Religion and Philosophy. Also, AIs would need bodies to perform many tasks, so there is a cost to making and maintaining those bodies.

Lastly, based on all the science I have read on human consciousness, there will never be a singularity. All the evidence shows a human's consciousness does not reside in our brains but is somehow attached to a given body through some other force.

Kevin L writes:

I was expecting someone to have already considered this:

If we're assuming that an AI can be created that is a perfect substitute for all human labor, why not say we can make the AI desire human satisfaction? If the AI "consumes" human gratification and positive feedback, then every person on the planet becomes a producer with zero marginal cost as well. So machines do the labor, self-replicate, and gladly give away the fruits of their labor to whoever gets the most satisfaction from it.

This kind of idea may already be happening with tech companies that operate near zero MC, like Google, Facebook, etc. While they sell advertisement and promotion services, one of their main inputs is user satisfaction.

John writes:

But how is P>MC=0 maintained? These monopolists are leaving money on the table, a la Coase.

Foobarista writes:

If you have space travel tech, land and other resources collapse in value as well fairly quickly.

After all, our magic AIs could terraform Mars or Venus, and build out settlements on the Moon, so offworld settlement (by humans) becomes much easier. Since these are presumably von Neumann machines (and can reproduce themselves on-site using local resources), they should be able to do the terraforming or base-building rather quickly.

In the same way, they'd make great asteroid miners and would have the space elevators ready in a jiffy, so control of earth assets would similarly become uninteresting.

Gee, sounds cool, as long as these AIs are Three Laws-safe (and aren't good lawyers).

John writes:

One outcome I considered but didn't mention in the original post, since it doesn't easily fit the GE requirement, was that two economies emerge. Those with just labor and limited capital tend to trade with one another to ensure they have some independence. This then opens the Comparative Advantage door for cross-economy trades.

I find this a more palatable solution as everyone gets to live.

Mark V Anderson writes:

Mordatar - No, the scenario that Bryan describes would definitely be Hell. Essentially scarcity disappears (except for land), and so humans no longer have any purpose. Even the Arts have no meaning when there is no struggle for survival. I suppose some people might do okay with an entirely frivolous existence, but most of us would go insane.

Mordatar writes:

Mark,
I suspect that, empirically, you would not find that people only have purpose if there is scarcity. I don't see why performing a task that you need to do to get your subsistence is less frivolous than performing the exact same task, or whatever task one prefers, for the sake of it.
Any car can go faster than Usain Bolt. But he's spent his entire life dedicated to going as fast as he can on his feet, just to see how fast he could get. We could all establish goals like that.
Admittedly, I haven't read anything on this, so if you are aware of research on this, do share.

Jim Rose writes:

is this any different from a time machine putting someone from the 19th century in 21st century USA? they would have trouble finding a job.

third world to first-world immigration: why do people want to slip in from mexico to the USA, where most of what they do at home is done by a machine in the states?

Mark Bahner writes:
If the median income for the "rich" class of people becomes 1000x today's median income (in real terms), then the "rich" folks voluntarily giving up 1% of their income would support an "underclass" that is 10x the size of the "upper class" at today's standard of living (which will look pathetic compared to the 990x-today's-wealth enjoyed by the rich, but is hardly subsistence farming). I don't think it's unreasonable to imagine that people commanding that much wealth would want to spend, on average, a small percentage of it to save so many lives.

I made a similar point several years ago:

Why everyone will be a millionaire...by the end of this century, if not much sooner

However, my point ignored charity, and was that the masses could simply tax the rich to get the money.

P.S. I don't think this scenario is tremendously far off from what I see likely to happen even in the next 10-30 years.

Mark Bahner writes:
Sorry but this is such a silly question, I understand it is only a thought exercise but it is 100% pure fantasy. This assume that robots become better at creative content than humans, and can replace things like Sex, Religion and Philosophy. Also AI’s would need bodies to perform many task so there is a cost to making and maintaining those bodies.

I don't think it's "silly" at all. Based on current trends, within 1 decade, a 1 petaflop computer (about the power of a human brain) will cost $1000. And if trends continue, it will cost $1 only 1 decade after that. So we have a 1 petaflop computer costing one dollar in 20 years. It's not unreasonable to think of a robot with the capability of a human body costing only a few thousand dollars. And the power to run a human body is trivial...about 100 watts, continuous...or about 1 penny an hour.

That's not "zero" marginal cost, but it's pretty darn low.
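The back-of-envelope numbers in the comment above can be verified (a sketch; the cost-decline factor is the comment's extrapolation, and the electricity price of ~$0.10/kWh is my assumption):

```python
# Cost trend: $1000 per petaflop-scale computer a decade out, falling by
# a factor of 1000 per decade (the comment's extrapolation).
cost_now = 1000.0
decline_per_decade = 1000
cost_next_decade = cost_now / decline_per_decade
print(cost_next_decade)  # 1.0 -> one dollar, two decades from the post

# Power cost: 100 W continuous at an assumed ~$0.10/kWh.
watts = 100.0
price_per_kwh = 0.10
cost_per_hour = (watts / 1000) * price_per_kwh  # -> about $0.01 per hour
print(cost_per_hour)
```

Both figures in the comment check out under these assumptions: roughly a dollar per petaflop in twenty years, and about a penny an hour to power it.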
