Bryan Caplan  

AI and GE

My favorite question from this year's Ph.D. Micro midterm:

Suppose artificial intelligence researchers produce and patent a perfect substitute for human labor at zero MC.  Use general equilibrium theory to predict the overall economic effects on human welfare before AND after the Artificial Intelligence software patent expires.

Please share your answers in the comments.  I'll post the best responses - and my suggested answer - in a day or two.




COMMENTS (20 to date)
J.D. writes:

It depends on whether the artificial intelligence researchers' programming adhered to Asimov's Three Laws of Robotics.

Finch writes:

> It depends on whether the artificial intelligence
> researchers' programming adhered to Asimov's
> Three Laws of Robotics.


The relevant Dilbert:
http://www.dilbert.com/strips/comic/2013-03-28/

Alex Godofsky writes:

I'm going to assume a few things, such as the patent actually being enforced/enforceable and the patent owners actually wanting to enforce it. If the patent owners get sufficient utility from giving it away for free, the "before" case doesn't happen and the analysis is much less interesting:

Before the patent expires, the patent owners are monopolists and engage in standard monopolist profit-maximization (which is also revenue-maximization at 0 MC). The interesting thing here is that we have to rethink what "revenue" means. Money isn't a good measure, because most of what money buys is labor, and the owners can provide themselves with an infinite quantity of labor. (For most monopolists, their own product does not comprise a large portion of their own consumption.) So revenue has to be measured by its ability to purchase non-labor inputs - and specifically non-labor inputs that cannot be substituted for with even an infinite amount of labor. I think this is actually a pretty small class of goods, depending on how exactly we nail down the abilities of the AI. For example, with an infinite quantity of labor you can probably produce a lot of land.

The upshot is that the owners try to get their hands on as much of this stuff as possible, and potentially buy all of it. In the case that they buy all of it, the quantity produced is probably indeterminate, because there should be a wide range of quantities that are sufficient.

After the patent expires, those same people then enjoy the fruits of infinite labor plus all of the non-labor inputs they managed to purchase, while the rest of the world gets by on the remaining non-labor inputs. Ownership of these inputs constitutes the entirety of tradeable wealth, and exchange of them comprises all economic activity.

That said, a lot of this relies very heavily on a literal reading of the problem construction. I'm considering things like social acclaim to be the products of "human labor". If for some reason the AIs/robots aren't able to produce that for the owners, then other members of society still have "labor" they can perform and exchange for goods and services from the owners.

bryan willman writes:

What are the physical limits on the robotic vessels for this AI?

If the AI is merely able to replace all humans who work sitting at computers, it has one effect.

If the AI is able to control robots, as well as build them, the controllers of the AI (whether the owners of the patent or not) are on a fast track to physical domination of the world.
(Bryan's robot army of 10 billion AI robots persuades 7 billion feeble humans to name him King of the World and do his bidding.)

Of course, such a ruler has low demands of the population (stop murdering each other) since most anything she or he consumes is provided by the robot army.

Silas Barta writes:

Assume for simplicity that

1) The patent is effectively enforced.
2) "Labor" here refers to all labor, whether or not it's currently compensated through a cash market (i.e., it includes household work, intimacy, friendship, etc.).

As labor can now be produced at zero MC, the value of human labor is likewise zero, and wages will go to approximately that level. The price at which they sell the robot-human (RH) labor will have to recoup fixed costs, but these will be negligible since the RHes can themselves produce more RHes, amortizing the fixed costs over a large number of units.

Non-owners of the RHes will, in the limit, be unable to afford the RHes, since they will cost as much as "hiring themselves". Non-RH-owner humans (NRHOHs) will therefore only be able to buy from each other, in amounts too small to let them save enough to buy the robot labor.

The interaction between the RH owners and the NRHOHs is too complicated for me to work through, so that's about all I've got at the moment.

rapscallion writes:

With wages at 0, it seems to me all non-capital-owning humans would die without charity.

Jason writes:

See the movie WALL-E.

Richard Besserer writes:

I think I see what you're really asking here...

All right.

The assumption of zero MC can easily be relaxed, clearly (and should be, given its violation of the no-free-lunch principle). All that is required is that the MC of artificial intelligence be very small compared to that of human labour.

Clearly, production will increase along with the effective (human plus machine) labour force. In the short run, the rate of return on (non-AI) capital will increase. The higher rate of return, of course, will only be temporary, as higher retained earnings get plowed into greater investment in capital and the capital/effective-labour ratio returns to normal levels.

So, humans who make most of their income from renting capital will see their consumption and welfare increase greatly.

Before the patent expires, the patent holder, a Bertrand monopolist, will set his price at or slightly below what humans consider a subsistence wage. Any higher and humans can underbid, at least in principle. After the patent expires, the cost of the AIs falls to zero, along with his profits.

Obviously, the patent holder will command a large share of output. During the lifetime of the patent his income will be in the form of monopoly profits, which he will invest in non-AI capital to allow him and his descendants to maintain their consumption after the patent expires.

The share of output going to humans who make most of their income from selling labour power, of course, will decline sharply.

It's just possible, of course, that a few jobs will remain that only humans can do. Let's make our example, just for fun, members of the patent holder's seraglio. (They are guarded by AI eunuchs, who do their own job better than any human ever could.) The implied assumption of barriers to entry (time, effort, intelligence, stamina, surgery) into the privileged professions can be relaxed without much loss of generality.

In any event, the members of the seraglio will see their welfare greatly increase. To the extent that the seraglio's services are complements to the more mundane ones supplied by AIs, its members' effective wage will increase accordingly.

Non-privileged labourers, of course, will quickly lose their jobs with little hope of ever finding any others (unless they are attractive enough to qualify for the seraglio). The increase in leisure will only improve welfare on balance if they have some capital income as a backup and/or can depend on transfers from friends or relatives who do. (Preferences are rarely separable that way.)

In short: while the output of the economy as a whole will greatly increase, most if not all of the surplus will benefit capital holders and, of course, the monopolist (at first from monopoly profits, later from rents from capital), and a few privileged labourers who cannot easily be replaced with AIs.

The welfare of non-privileged labourers will almost certainly decline grievously. Their ability to even reproduce themselves will be put into question. (Children need to be fed, somehow, and non-privileged female labourers of reproductive age will presumably only be able to raise to maturity the children of capital owners willing to support them. AIs, meanwhile, cannot breed as such.) For his part the patent holder, like Genghis Khan, may well end up as distant ancestor to a large chunk of posterity.

To ask the non-privileged to accept their fate as the price of progress or as "socially optimal" in any meaningful sense is asking quite a lot.

A natural question is whether the patent holder is best advised to distribute some of his wealth as transfer payments to non-privileged labourers in the form of a "national dividend," or invest in a top-notch AI army able to put down attempts at rebellion or expropriation led by the able-bodied non-privileged. Presumably the answer will depend on the patent holder's personal philosophy of social justice and, more importantly, the cost of hiring a general to lead an AI army against unpredictable human rebels.

If AIs cannot be trusted to competently lead the army, a bargaining problem arises between the patent holder and those humans qualified to serve as generals. An attempt at solving any such bargaining problem I leave to future work.

In the meantime, it can be said, without too much fear of contradiction, that the reasonable set of responses to the question of whether some technological or policy change improves human welfare includes the question: "Well, which humans do you mean?" The idea of welfare of humanity as a whole, as opposed to individuals---never mind a social welfare function---is at best a convenient shortcut helpful for those mostly interested in modelling aggregate responses to technological or policy changes. What the idea can prove to be at its worst I'll leave to students of history to detail.

---

Remark: I've assumed above that the AI technology's marginal product is zero unless combined with non-AI capital or raw materials. I consider the assumption reasonable for agricultural or manufacturing production necessary for human subsistence (food, potable water, shelter, etc.). It would be hasty to assume a happy ending after the patent expires and anybody can access the AI technology. Man cannot live on services or information alone. Sooner or later he has to step away from the keyboard and get a sandwich, somewhere, somehow.

---

PS Economics PhD from the early 21st century, bored at work at a central bank. None of this is necessarily the opinion of my central bank or any other.

Would I have passed the GMU comp?

Philo writes:

The supposition strains the imagination, because of the great variety - including especially the *qualitative* variety - of actual human labor, and even more so because of the infinite *potential* variety. The AI researchers would have to have invented a means to do what the best human artists, athletes, scientists, journalists, religious leaders, service workers (from auto mechanics to prostitutes), etc., do, and even more: what the best human artists, etc., *who could possibly exist* would do; all this at zero cost. Like all fairy-tale scenarios this is probably impossible. But let’s put that worry aside.

If a single person had access to this invention, why would he bother to patent it? He could enslave everyone else and rule the world. But if more than one person had it, the outcome would be indeterminate: each would supposedly have virtually unlimited power, including unlimited power over the others. If they were inclined to rivalry, the first to strike a blow against the others would be the victor, eliminating his rivals. But perhaps they would be inclined not to rivalry but to omni-benevolent cooperation, and all would live peaceably together in Cloud-cuckoo Land.

I think the patent theme is a red herring.

Rob writes:

Suppose the actors in the economy include consumers, a final goods firm that uses labor and machines to produce a consumption good, and a research firm that produces the machine and is the temporary patent holder. Households get utility from consumption and leisure, own both firms, and spend their income on consumption goods. Their income consists of wage income, profits from the research firm, and profits from the final goods firm. Suppose also that the final goods firm is perfectly competitive, implying its profits are zero.

During the patent, the research firm sets the price equal to the wage times the relative productivity of machines to workers. Any price above that leads the final goods firm to strictly prefer hiring workers; any price below it isn't profit-maximizing for the research firm. Consumers use their wage income and the profits from the research firm to purchase the final good. Although consumers are rebated the monopoly profits, they would be better off without the monopoly distortion.

After the patent expires and the recipe for building machines is common knowledge, the price of machines equals their marginal cost, which is zero. Final goods firms strictly prefer using machines to labor provided the wage stays above 0. Since consumers get disutility from working, they provide no labor at a wage of 0. Therefore, consumers end up consuming at least as much after the patent expires, and they consume more leisure. Welfare is higher after the patent expires.
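Rob's before/after comparison can be checked with a toy numeric sketch. Everything here is an assumption for illustration: a concave production function Y = A(1 - e^(-T)) in effective labor T = L + M, an inelastic human labor supply of 1 while the wage is positive, and linear utility from leisure; it is not the unique way to formalize his setup.

```python
import math

# Toy numbers, all assumed: output scale A, value of leisure b, human time
# endowment 1, machines a perfect substitute in effective labor T = L + M.
A = 10.0
b = 1.0

def output(T):   # concave production: Y = A * (1 - e^-T)
    return A * (1 - math.exp(-T))

def mp(T):       # marginal product of effective labor
    return A * math.exp(-T)

# During the patent: the household inelastically supplies L = 1; the research
# firm (an MC = 0 monopolist) chooses total effective input T >= 1, renting
# out M = T - 1 machines at the competitive rental p = MP(T) = w.
best_T, best_profit = 1.0, 0.0
T = 1.0
while T < 20.0:
    profit = mp(T) * (T - 1.0)   # p * M
    if profit > best_profit:
        best_T, best_profit = T, profit
    T += 0.001

# The household owns both firms, so it consumes all output but gets no leisure.
u_during = output(best_T)

# After the patent: machines are free, M grows without bound, so w -> 0,
# L = 0, consumption -> A, and the household enjoys full leisure.
u_after = A + b

print(f"monopoly optimum T* = {best_T:.2f}, profit = {best_profit:.2f}")
print(f"welfare during patent = {u_during:.2f}, after = {u_after:.2f}")
```

With these assumed numbers the grid search lands at T* = 2, and welfare is higher after expiry, matching Rob's conclusion.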

Capt J Parker writes:

I’ll take a swipe at this since it is such a great question, not that I know anything about general equilibrium. If you read anything on this blog you should be able to guess some things about what the answer will look like from the fact that Dr. Caplan says it’s his favorite question.

First, it should be pretty obvious that there will be a huge economic benefit from greatly decreasing the cost of production with zero marginal cost labor, triggering a massive expansion of the economy. If you think there will be a catastrophe from the collapse of labor prices, try this experiment: What if instead of a perfect substitute for human labor we were talking about a perfect non-polluting, non-greenhouse gas releasing substitute for fossil fuels also with zero marginal cost? Would that be worth something to ya laddie? Why is it all so different if the commodity is labor instead of energy? Do you really believe only the Mahogany Row dwellers at Exxon-Mobil will suffer if the energy market collapsed?

Now, with suddenly almost free labor (or suddenly almost free energy), you would expect there to be winners and losers. Obviously labor consumers are the winners and labor suppliers (i.e. the 99%) are the losers. So, as far as human welfare goes, it seems the question is: is there any way the winners can compensate the losers (and still be winners)? Here, I think, the form of the question is giving you a big hint when it tells you to consider the day the AI patent expires: think about the classic supply and demand curves for labor (the only thing I remember from micro). The day the patent expires you are going to see another big drop in the price of labor as all the generic AI/MC-free labor companies jump in. The labor consumers see an economic gain in moving from the old equilibrium price to the new lower price, and the suppliers see a loss. But the total economic gain by the labor consumers is always more than the loss by the labor suppliers. To see this you need to draw the curves and do the econ 101 analysis, which is pretty much the same as analyzing what happens when someone mandates a price below market. The area from the price axis to the demand curve between the old price and the new lower price is the economic gain to labor consumers. The economic loss to suppliers is always a subset of this. So, human welfare INCREASES with AI zero-MC labor, assuming of course the losers ARE compensated.

That’s about the best I can do. The icing on the cake would be some proof that market forces (as opposed to government) will automatically compensate the losers. If there is such a proof, someone else will need to show it to you.
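Capt J Parker's geometric claim, that the buyers' gain between the old and new price always contains the suppliers' loss, can be illustrated with a toy linear labor market (every number below is an assumption chosen for the sketch):

```python
# Toy linear labor market (numbers assumed): demand p = a - q, supply p = q.
a = 10.0
p_old = a / 2     # old equilibrium price: demand = supply at q = 5
p_new = 1.0       # lower post-patent price set by zero-MC AI labor

def demand(p):    # quantity of labor demanded at price p
    return a - p

def supply(p):    # quantity of labor humans supply at price p
    return p

# Gain to labor consumers: area left of the demand curve between the prices.
# Loss to human suppliers: area left of the supply curve between the prices.
# Both are computed by midpoint-rule numerical integration.
n = 100_000
dp = (p_old - p_new) / n
cs_gain = sum(demand(p_new + (i + 0.5) * dp) for i in range(n)) * dp
ps_loss = sum(supply(p_new + (i + 0.5) * dp) for i in range(n)) * dp

print(f"labor consumers gain {cs_gain:.1f}, human suppliers lose {ps_loss:.1f}")
```

The gain exceeds the loss because, below the old equilibrium price, the supply curve lies to the left of the demand curve, so the suppliers' lost area is contained in the consumers' gained area.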


If you still want to obsess about the collapse of labor prices and insist on outlawing new technology because of the damage it will cause (the modern liberal economic equivalent of bloodletting), I offer this quote from Milton Friedman: “Why don’t you use spoons?”

Maximum Liberty writes:

I think we are missing something by focusing too much on marginal cost. Marginal cost controls decisions in the short run. That's often a very good predictor of human behavior, because most humans discount the future heavily: information about the future is hard to get.

But the relevant time frame for a patent-holder will be the period of time for which the patent-holder believes the patent will last. The patent holder will maximize profits over that time.

If we use that entire period for the "quantity" axis, then I think normal monopoly analysis works OK.

But once you look at that period, you can no longer ignore the cost of building the perfect substitute. Just because it has no marginal cost does not mean that it has no fixed cost. Because that element is missing, I don't think we can answer Bryan's question.

Or, of course, you could again re-define the marginal unit to be one of the patented machines. But I don't think that's what he meant.

Max

Hana writes:

The initial development of the AI code was not cost-free. However, that cost can be amortized across an infinite number of installations, resulting in a unit cost approaching zero. That is how software works. As an example, in the case of Microsoft's operating systems there are different prices for different applications, classes of trade, and markets.
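Hana's amortization point in a few-line sketch (the development cost figure is assumed purely for illustration):

```python
fixed_cost = 100_000_000.0   # assumed one-time cost of developing the AI code
for n in (1, 1_000, 1_000_000, 1_000_000_000):
    # amortized unit cost falls toward zero as installations grow
    print(f"{n:>13,} installations -> ${fixed_cost / n:,.2f} per unit")
```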

The host hardware for the code will not be cost-free. As an example, the hardware to run IBM's Watson cost over $3 million and included 16 terabytes of RAM (and undoubtedly used enough electricity to light a medium-sized village). So even if the AI is free, there is still an expense to utilizing it. Even though the MC is zero, the cost to produce each machine is not. The operating costs include power, training, reprogramming, maintenance, and repairs.

Having 'perfect' software is not the same as having appropriate software. Each individual AI would need to be programmed for its specific duties. During the ramp-up of each implementation, I would expect armies of humans to create the data sets and knowledge base for the AI (or is the assumption that the perfect AI already possesses all known and unknown knowledge?).

Based on the openness of the question, there is no reason to limit the use of the AI to humanoid configurations. Since we are already out on a limb, why not nano-sized or Transformer-sized AIs? Similarly, humans are limited by physical structure and energy requirements to a relatively narrow set of work environments. Our new AI machines are not. High-value uses of different configurations could include deep-sea, volcanic, and space exploration and mining. It is not too much of a reach to envision a host of AI machines mining the sea floor.

The capital requirements for implementing the AI will restrict its usage. Therefore the profit maximizing inventors should initially license the technology. They should be able to achieve the highest revenue value for their invention where the scalability is the easiest and carries the lowest non-software costs. While replacing the counter employees at a fast food restaurant seems like a possibility, the reality is that the required ROI for a fast food AI machine may be prohibitive. This doesn't even consider the reaction the AI machine may cause in humans in face to face encounters.

On the other hand, replacing call center (voice interaction only) employees with AI would meet little resistance (are you ever really talking to Terri or Jim?). Scaling up for call centers would be easily achieved by just initiating a new AI to handle each call. The patience of the AI in dealing with troubled callers may be a real advantage. In cases where someone wishes to speak with a supervisor, the supervisor AI (would that even need to exist?) would instantly have access to and awareness of the entire conversation.

A secondary high-value proposition would be in applications, such as deep-sea or moon mining, where the fixed, variable, and risk costs of using humans are prohibitively high. Expanding on novel uses, nanoagents could patrol your bloodstream, making repairs and extending human lifespans. Again, licensing the technology will maximize revenue with a minimum of capital.

Another set of high-value uses would be in tasks where human tedium and exhaustion can be dangerous. The transportation industry in particular comes to mind: pilots integrated directly into planes, drivers integrated directly into buses and trucks, captains built into freighters. Since the AI is perfect, once properly programmed its performance should always be perfect too. Additionally, amortized over the life of the plane, vehicle, or vessel, the hardware would be competitive with the cost of using a human. Rules governing hours of operation or mandated rest would no longer apply to an AI that operates continuously.

We have to believe that the AI developers do not have knowledge or expertise in all industries; therefore, given an aggressive licensing strategy, the overall impact of the AI on human welfare would be positive both pre- and post-patent. While the benefits and applications will be greater post-patent, that is as much a function of overall improvements in the implementation of technology and capital. Cell phones were first sold 40 years ago; they are still not ubiquitous worldwide.

Stefano writes:

An AI is software: it needs hardware to run on, and hardware to interact with the world. Whether such hardware can be produced at a cost low enough to undercut a human for many jobs remains to be proved. A Roomba, for example, is very limited compared to a human cleaner. It cannot climb stairs; it cannot clean anything that's not at floor level. To be competitive with a human cleaner it needs legs and arms, and its price point would soar by at least an order of magnitude.

Let's suppose that such problems have been solved, and that robotic labor is available at a cost marginally less than human labor. On the supply side, as long as the cost of robotic labor is of the same magnitude as the human kind, there would probably be little change. Other production factors like land, materials, and energy would probably rise a little in price as labor sinks.

The labor cost for the robots would be paid to the hardware and software providers (including the AI patent holders). I don't think the IP cost of the AI license would be large enough to make much of a difference before and after the patents expire.

On the demand side, there would potentially be a much larger disruption, as all labor income would practically vanish, and the corresponding purchasing power would be transferred to capital rents (i.e., to a much smaller fraction of the consumers).

The solution I see as most probable is some kind of politically mandated redistribution (either nationalization or higher taxes on earnings from capital), so that the former laborers are given some purchasing power and the ratio between investment and consumption is partially restored.

As the unemployment rate nears 100%, I have no doubt that such a socialist solution would be hugely popular.

The effects on society as a whole would probably be dramatic, as much in our western societies is based on merit and work ethics. It would be difficult to adjust to a situation where everybody is unemployed and 90% are on the dole.

Stefano writes:

An interesting question is whether there are jobs that an AI could not replace.

Probably some would be made safe for humans by regulation. I don't think there will ever be AI politicians.

Consumer psychology would matter also: I wouldn't trust a scissors-wielding robot near my head, so barbers should be safe too.

I wonder if fund managers, or venture capitalists would be replaced.

What about entrepreneurs? CEOs?

John writes:

w = MC(L), so when AI is substituted for labor, w → 0. Labor now has no effective demand in the economy, so output and output prices are determined entirely by the owners of capital and the cost of capital.

Since AI is actually a form of K rather than L, the cost of K faces the same downward pressure on its price that labor faced. Ultimately the capital base of the economy must become self-replicating, and consumer goods become nearly free.

A few other implications: old-school economists' heads explode as they attempt to apply their favorite models with K/L ratios in a world where L = 0.

Education becomes a meaningless requirement because the AI knows how to do everything and is doing all the learning. Lots of SF-postulated outcomes here, but the obvious implication is that Bryan doesn't have a job on two counts: the AI is doing it, and no one has much interest in working towards a degree anymore because it no longer provides a wage premium.

JHK writes:

The patent holder will leverage monopoly power and maximize revenue by adjusting the price at which AI, a perfect substitute for human labor, is offered, with MC zero for the firm – basically an indifference curve that is the firm’s budget constraint. The firm will have to decide if the value of AI is worth the explicit cost in addition to pushback from the labor force and households. This creates a hawk-dove game between labor and the firms: if the AI is perceived as valuable, the firm will behave aggressively; if the costs are too high, the firm will become averse. The same is true for the labor side: if the value of the job is high, they will aggressively oppose the AI, but if not, they will behave passively. This latter case could apply to marginal employees, where the cost of opposing the AI is greater than the welfare they would be eligible for after losing their job.

The dove laborer could be a minority or a majority. If public welfare is fixed and the dependency ratio becomes more one-sided, the relationship between firms and labor will evolve into a war of attrition. Attrition could be avoided by increasing the probability of dove behavior: by increasing welfare and lessening the value of the job. Put another way: if an employee values their job because they need food, healthcare, retirement, education for their children, but the list gets shorter and shorter because of the increased scope of public welfare, then employees may become more dovish (like a frog in a pot of water slowly heated to a boil).

This would be especially true once the patent has expired and companies begin making cheaper alternatives, perhaps for household use, while the patent holder makes AI-2, 3, 4, 4s … The firms will become more hawkish and the households/laborers more open to the technology. As far as human welfare goes, I don’t know; it would seem that welfare would increase until we are enslaved by robots. Before that we would live in a tyranny of government welfare, run by some aristocracy. So the increase would be nominal; somewhere down the line, we would be subhuman. I’d be curious to see what entrepreneurship would look like among those stuck in this status – I imagine they would behave hawkishly: sabotaging AI, blackmailing the aristocracy, and finding loopholes in the welfare system to gain competitive advantage. We would remain belligerently human.

Stefano writes:

I wonder if robot soldiers would be protected under the 2nd amendment.

In that case, I see the things turning dystopic very fast.

John writes:

So when do we get to see the picks and Bryan's answer?

Mark Bahner writes:

> What if instead of a perfect substitute for human labor we
> were talking about a perfect non-polluting, non-greenhouse
> gas releasing substitute for fossil fuels also with zero
> marginal cost? Would that be worth something to ya laddie?

When I thought cold fusion was possibly real, I was actually fairly concerned. I figured that someone would figure out relatively quickly how to make basically an atomic IED (but this was before I was familiar with the term IED).

Anyway, based on the very sad experience of today's Boston Marathon, it can be seen that a nuclear energy source with "zero marginal cost" might actually have a very high cost.
