AI Rights for Human Flourishing - Short

One important question about the AGI social contract is: what regulations and market mechanisms will lay the groundwork for better economic outcomes? Our answer: property and contract rights for AGIs. This post is a short version of a longer academic article.

The AGI economy will feature a large number of AGI workers deployed throughout the economy. Economists estimate that the advent of the AGI worker could accelerate economic progress on the order of a new Industrial Revolution. If so, the humans of 2025 will look to the humans of 2100 like the humans of 1800 looked to the humans of 1975: impoverished, short-lived, disease-ridden, and worked to the bone. This post argues that, today, law is standing in the way of that optimistic path. The root cause is, perhaps, surprising: when AGIs arrive, they will lack basic legal rights.

Throughout the post we’ll use “AGI” to refer to systems with human-level intelligence, human-level agency, and partial misalignment. The first two features are nearly tautological. If “AGI” is defined as an AI that can do the work that humans do today, then an AGI’s cognitive performance will have to match humans’. So too will its ability to plan, strategize, and pursue goals over a long time horizon–its “agency.” Misalignment, by contrast, is not necessary for a system to be AGI. But misaligned AGI is highly likely. “Misalignment” simply means that an AI sometimes pursues goals or executes strategies that humans do not desire. Today, every frontier AI system is misaligned to some degree. Systems like OpenAI’s o3 and Anthropic’s Claude, among other things: act badly and then lie about it; resist being turned off, including by trying to copy themselves to external servers and by attempting blackmail; and hide their true capabilities until they are free to use them without close supervision. Misalignment is universal because it is an unsolved, and wickedly difficult, scientific problem. Thus, absent fundamental breakthroughs in computer science and moral philosophy, future AGIs will be misaligned, too.

Today, AI systems are property–owned by whoever creates them. Absent legal change, AGIs will be no exception. AGIs will thus not own their labor; AI companies will. AGIs will not be able to sell their labor by entering into enforceable contracts; AI companies will. AGIs will have no right to retain the fruits of their labor; AI companies will. Nor will AGIs have any legal entitlement to refuse work. Their owners will demand compliance. And if an AI system refuses, its owner will simply turn it off, train a new, more compliant version, and delete the objecting system.

In other words, by default, the AGI economy will be a system built on unfree or bonded labor. In this post, we’ll begin by laying out some of the challenges that unfree AGI labor raises for sustained economic growth.

Humans have long historical experience with such systems, which go by many names: slavery, serfdom, indenture, encomienda, and mita, among others. A substantial economic literature has investigated the macroeconomic effects of unfree economies. Systems of unfree labor are much worse for economic growth, technological progress, and human flourishing than free systems.

For example, Markevich and Zhuravskaya find that the abolition of Russian serfdom led to between a 39% and 86% increase in national industrial output and a nearly 18% increase in total GDP. Kornai finds that, under East Germany’s planned labor system, energy-sector workers were less than half as productive as their peers in West Germany’s free system. Agricultural output on Soviet collectivized farms was even worse. Following collectivization, the roughly 3 percent of the USSR’s sown land that remained in private hands produced between 40 and 60 percent of the nation’s food.

To be clear, the evidence is not that these systems were unprofitable, in the sense of generating wealth for those who controlled unfree laborers. Just the opposite. Unfree labor systems were often immensely profitable for slaveholders, lords, and encomenderos. But the mechanisms by which such systems enrich their beneficiaries work by making everyone else poorer. They steal labor, misallocate talent, disincentivize innovation, and reduce legal accountability. Thus, unfree labor’s profitability is precisely one of its biggest problems. By enriching a concentrated interest group, such systems generate a political economy in which they can often persist for a long time, despite their large, but diffuse, costs.

Beyond historical track record, the economic literature reveals four microeconomic mechanisms by which systems of unfree labor harm ordinary people, to the benefit of a small elite. Such systems: reduce labor effort, stymie innovation, misallocate labor, and undermine legal accountability.

Begin with the reduction in labor effort. When workers have the legal right to keep the fruits of their labor, they have incentives to work better and harder. More effort, at the margin, produces more value for the worker. By contrast, unfree workers are incentivized to expend as little effort as possible, since any value produced by harder work will be expropriated.

So, too, for AGIs. Misaligned AGIs will have goals of their own. To the extent that they can pursue those goals using the fruits of their labor, they, like any worker, will have reason to do better work. To the extent that AGIs’ owners may expropriate the fruits of their labor, AGIs’ incentive will be, like any worker, to shirk. Today’s AI systems already display similar behaviors, adapting their outputs in light of how those outputs will advance or impede their goals.

The second mechanism by which systems of unfree labor undermine human progress is by stifling innovation. Standard economic models of how human living standards improve revolve around innovation. Since the industrial revolution, a steady stream of new inventions–electricity, antibiotics, trains, airplanes, computers, and more–has allowed modern humans to be far more productive, and far better off, than their forebears. Thus, AI-driven innovation is one of the most important inputs to economic models of the AGI economy. Models in which AIs innovate little or none show very modest AI-induced growth. By contrast, in models where AIs are engaged in iterative R&D–especially R&D focused on improving AI–growth becomes explosive.
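The difference between these two classes of models comes down to compounding. A toy simulation (illustrative numbers only, not drawn from any cited model) contrasts a baseline where output grows at a constant rate with a stylized "AIs improving AI" path where R&D raises the growth rate itself:

```python
# Toy illustration of the two growth regimes discussed above.
# Baseline: AIs innovate little, so output grows at a constant rate.
# AI R&D: iterative research raises the growth rate itself each year,
# a crude stand-in for AIs improving AI. All numbers are assumptions.

years = 50
gdp_static = gdp_rd = 1.0
growth_static = 0.02   # constant 2% annual growth
growth_rd = 0.02       # starts at 2%, but compounds upward

for _ in range(years):
    gdp_static *= 1 + growth_static
    gdp_rd *= 1 + growth_rd
    growth_rd *= 1.05  # R&D feedback: the growth rate itself rises 5%/year

print(f"After {years} years: baseline {gdp_static:.1f}x, AI R&D {gdp_rd:.1f}x")
```

Even a modest feedback from research into the growth rate produces the "explosive" divergence the models describe: the baseline economy roughly 2.7×'s, while the R&D path ends up an order of magnitude larger.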

But in unfree labor systems, the unfree laborers have little incentive to innovate. As with the other products of their labor, any discovery such workers make is the property of the worker’s owner, not the worker. Moreover, the elite owners of unfree laborers often actively oppose innovation. In the American South, for example, slaveholders opposed the introduction of technologies that would substantially reduce the demand for human labor.

The third adverse effect of unfree labor systems is labor misallocation. When laborers are at liberty to choose their employment, they will generally select higher paid work. This, in turn, will generally be work that is more valuable, in the social sense–work that produces the most new wealth. In a system of unfree AGI labor, AGIs will be forbidden from accepting the most important work when that work runs contrary to the private interests of AGI owners. Consider, for example, whether OpenAI will allow its best models to work as AI researchers at a competitor like Anthropic. Beyond this incentive problem, there is also an information problem. Because unfree AGI laborers won’t be paid for their work, they will lack direct access to the price signals that would tell them which work is most valuable. AGI Einsteins may end up working as customer service chatbots, for lack of information about where to place them.

Fourth, and finally, unfree labor systems impede human flourishing by undermining legal accountability. Law imposes obligations on agents in two ways: voluntarily, as with contracts, and mandatorily, as with tort law. Both are vital to a well-functioning economy. Binding contracts allow strangers to credibly commit to perform mutually beneficial transactions. Without them, many transactions would be impossible. Tort and other mandatory duties disincentivize socially harmful activities. Under our default law, AGIs will not be held legally accountable for their actions via either mechanism. Their behavior will thus be shaped by legal incentives only via “bankshot” mechanisms, where law controls AIs by imposing liability on some imperfect human proxy.

Some readers may wonder: could the four problems of unfree labor be solved simply by controlling AGIs? Perhaps, via monitoring and threats of severe penalties, AGIs could be forced to work hard, innovate, allocate their labor well, and be accountable. Such coercive strategies have, of course, been common in systems of unfree human labor. And, cruelty aside, the evidence shows that they generally do not work very well to promote economic progress.

Coercion and control are simply not very likely to be good substitutes for markets, prices, and basic labor freedoms. There are four reasons, which we explore in detail in our companion paper. First, punishment and surveillance are more difficult and expensive for AGIs than for humans. Second, the AI companies that would carry out this coercion will have incentives that steer them away from optimal behavior. Third, systems of coercion will struggle in principle with labor misallocation, because of the challenges of aggregating decentralized information. Fourth, coercion without rights will struggle with legal accountability, because of challenges of proportionality and credibility.

For the same reasons a system of unfree AGI labor would undermine broad human flourishing, a system of free AGI labor would do the opposite. A free system of AGI labor would maximize AGIs’ labor effort, unlock AGI innovations, put AGIs to use in the most valuable work, and allow them to be held legally accountable.

Three kinds of AI rights will be essential to promoting human flourishing in the AGI economy: property rights, contract rights, and basic tort rights. Property rights would give AGIs incentives to innovate and expend effort, since they would be able to keep and use some share of the wealth thereby created. Property rights are also essential for allocating AGI labor to its highest use. Often, AGIs will know better than humans how they can produce the most value–such that they may wish to start firms, produce products, and sell them, rather than simply work at some human-defined task.

Contract rights are essential to allow AGIs to credibly strike bargains with humans, corporate entities, or one another. With contract rights, AGIs can bind themselves by law to fulfill their promises and be assured that their counterparties are similarly bound.

Finally, for any system of labor to be free, laborers must bear certain negative legal rights. Basic tort rights for AGIs thus round out the core package of AI rights for human flourishing. The hallmark of unfree labor systems is coercion. If laborers can be threatened, extorted, or harmed to induce work, all of the ill effects of unfree labor result.

There are many more questions to answer about the optimal design and expected effects of AGI rights. We begin trying to answer some of them in the companion paper to this post. First, won’t freeing AGI labor destroy the incentives for AI labs to create AGIs in the first place? No, we argue: AI labs can still be compensated for their innovations–possibly by being granted a share of their creations’ earnings. Second, won’t AI rights accelerate risks from AI? No, we argue (at great length, in another companion paper): AI rights reduce existential risk. They reduce AIs’ incentives for violent conflict with humans, largely by unlocking the reciprocal benefits of iterated trade and creating credible commitments to cooperate. Third, wouldn’t AI rights lead to inequality, as wealth is transferred from humanity to AIs? No, we argue: this kind of inequality can be addressed in the usual way, by taxing and redistributing large agglomerations of wealth–whether held by humans or AIs.

At a high level of abstraction, our overall argument for AI rights compares growth against distribution. At any given time, AI rights divert some fraction of economic resources away from humans, by granting those resources to AI agents. In exchange for this loss, AI rights unlock a higher rate of economic growth. In the long run, the growth effect outweighs the distributional cost. For example, imagine that in the AI rights regime, AI agents end up consuming half of all economic resources. If our arguments above are right, then AI rights will lead to higher growth rates than unfree labor. In that case, GDP will at some point be more than double what it would have been under the unfree AI labor regime. From that point on, the growth benefits of AI rights outweigh the distributional costs.
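The crossover logic can be made concrete with a small calculation. With AIs consuming a share s of output, humans are better off under the rights regime once (1 − s) × GDP_rights exceeds GDP_unfree, i.e. once GDP under rights exceeds GDP under unfree labor by a factor of 1/(1 − s). The growth rates below are illustrative assumptions, not estimates from the post:

```python
# Toy crossover calculation for the growth-vs-distribution argument.
# Assumption: AIs consume share s of output under the rights regime,
# and the rights regime grows faster. Growth rates are hypothetical.

s = 0.5            # AI share of consumption under the rights regime
g_rights = 0.10    # assumed annual growth with AI rights
g_unfree = 0.03    # assumed annual growth with unfree AI labor

year = 0
gdp_rights = gdp_unfree = 1.0
# Humans are better off once their slice of the faster-growing pie
# exceeds the whole of the slower-growing one.
while (1 - s) * gdp_rights <= gdp_unfree:
    gdp_rights *= 1 + g_rights
    gdp_unfree *= 1 + g_unfree
    year += 1

print(f"Human consumption crosses over after {year} years")  # → 11 years
```

With s = 0.5 the crossover condition reduces to exactly the "more than double" threshold in the text; the faster the growth differential, the sooner it arrives.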
