The Missing Institution: A Global Dividend System for the Age of AI
May 28, 2025
If AI breaks the link between work and income, who gets the upside? This piece explores a future of concentrated AI wealth and makes the case for a global dividend system to share it. It maps the scenario, the risks, how it could work, and why we need to start planning now.
Anna Yelizarova is a founding researcher of the Windfall Trust, and this exploration builds on one direction of her recent research.
A Global Economic Transition We Are Not Ready For
Over the past decade, the conversation around artificial intelligence has been dominated by a familiar set of concerns: breakthroughs in machine learning, competitive pressures between labs, and how to govern rapidly advancing systems that even their creators don’t fully understand. Yet far less attention has been paid to a more profound, systemic question: How will AI transform the economic foundations of society?
No one can predict the future with certainty. But one scenario that deserves far more attention is the possibility that advanced AI could dramatically reduce the role of human labor in the economy. What happens if machines can do an increasing portion of economically valuable work better, faster, and cheaper than people? How would we distribute wealth, maintain social cohesion, and preserve human dignity in a world where labor is no longer the organizing principle of economic life?
We are approaching a world where advanced AI functions less like a tool and more like a drop-in replacement for a white-collar worker—a shift that is no longer theoretical, but one that researchers at leading AI labs have come to expect within the next few years. This is a more profound disruption than most current economic policy debates account for—but that is precisely why it requires our attention now.
Importantly, this isn’t some fringe worry. It’s a design goal, embedded in the mission statements of leading AI labs. OpenAI’s charter, for instance, explicitly commits to building systems that “outperform humans at most economically valuable work.” Whether you believe that future is just over the horizon or still decades away, the fact that it is being seriously pursued and funded should give us pause. It raises urgent questions about what happens if the labs succeed in their stated goal, questions that should be addressed before such capabilities are realized.
The countries most exposed to AI-driven disruption are often the least equipped to shape how its benefits and burdens are shared. They lack the tax base to cushion job loss, the institutional infrastructure to adapt, and the geopolitical leverage to demand inclusion. Left unaddressed, this asymmetry could widen existing global inequalities and destabilize already fragile political orders.
Most policy conversations today focus on national fixes: tax reform, regional UBI pilots, retraining programs. But what do we do when the problem is global?
Defining the World We’re Planning For
Too often, debates about AI and the economy break down because of unspoken differences in assumptions—about dominant risks, timelines, or where the disruption will hit.
Some imagine a slow wave of job displacement offset by eventual job creation. Others see a future where human labor is largely obsolete. Still others anticipate a world where AI supercharges productivity but the gains flow disproportionately to the most talented and industrious, while everyone else treads water. These futures demand different responses, and without clarity about which one we’re planning for, it is hard to have a coherent conversation about policy, let alone priorities.
So let’s be specific about the scenario this proposal engages with: it’s a world where advanced AI systems can perform a growing share of economically valuable tasks. Not just drafting emails or debugging code, but coordinating logistics, optimizing business strategy, designing drugs, producing media, and autonomously completing complex, multi-week projects—more efficiently and cost-effectively than humans. In this world, labor’s role in value creation shrinks significantly and wages begin to decouple from productivity. The share of income going to workers declines while value accrues to capital owners. Millions find themselves displaced from jobs.
In this world, wage-based incomes could stagnate or fall, even as goods and services become cheaper to produce. If purchasing power falls faster than prices, cheaper goods won’t translate to greater well-being. And so we may find ourselves in a paradox: high productivity, low demand. What happens to supply chains, investment, and economic dynamism when consumer demand falters, not from scarcity, but from exclusion?
At the same time, wealth will begin to concentrate among the firms best positioned to automate labor at scale—AI labs, cloud providers, chipmakers, and large corporate users, all of whom stand to capture enormous value. Not everyone can become an entrepreneur or license proprietary models. An early glimpse into pricing models suggests that access to the most capable systems will remain at the level of firms and out of reach for most individuals. The result may be a bifurcated economy, where a minority of firms operate at superhuman efficiency, while the majority struggle with weakened demand and eroding margins.
Meanwhile, governments could face a growing fiscal squeeze. Today’s tax systems rely heavily on income taxes from workers, while often failing to capture corporate profits effectively due to tax loopholes, profit shifting, and relatively low rates on business income. As AI reduces the role of human labor and allows companies to operate with fewer employees, this setup begins to break down. With less income to tax from workers, government revenues decline—just as the demand for public support rises. In many countries, social safety nets are already fragile. Without reform, the gap between what governments can provide and what people need is likely to widen.
This future is also geopolitically asymmetric. Countries home to leading AI labs and digital infrastructure may capture a disproportionate share of the economic value. Others, especially those reliant on cheap human labor or foreign remittances, would struggle to adapt.
For decades, offshoring to low-wage countries has been a rational strategy: it maximized margins and minimized labor costs. But in a world where robotics and automation continue to advance, that logic could begin to reverse. If machines can undercut even the lowest wage floors, it might become more cost-effective for companies to bring production back home. That shift wouldn’t just simplify logistics; it could undermine the foundations of economic growth in many parts of the Global South, where export-driven industrial jobs have long been central to national development. If those jobs were to disappear without new ones rising in their place, the social and economic consequences could be profound.
This isn’t just a theoretical concern. Many governments in the Global South already face strained budgets and limited capacity to deliver large-scale social support. And yet the economic asymmetry is likely to deepen—even by the most conservative accounts. PwC estimates that AI could add $15.7 trillion to global GDP by 2030, but just $1.7 trillion of that is projected to reach the Global South (excluding China). The divide is not just about money but about the tools for resilience: fiscal capacity and institutional readiness.
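The scale of that asymmetry follows directly from the PwC figures cited above. A quick back-of-envelope check (the dollar amounts are the report's projections; the percentage is simply derived from them):

```python
# PwC projection: AI's contribution to global GDP by 2030, in trillions of USD
global_gain_tn = 15.7
# Portion projected to reach the Global South, excluding China, in trillions
global_south_tn = 1.7

# Share of the projected AI windfall reaching the Global South
south_share = global_south_tn / global_gain_tn
print(f"Global South share of projected gains: {south_share:.1%}")
```

In other words, on these projections roughly one dollar in ten of the AI windfall reaches the Global South, despite it being home to the majority of the world's population.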
This scenario shouldn't be treated as a forecast, and it certainly isn’t the only future we should prepare for. A more decentralized world, where access to AGI is widespread and development is diffuse, would bring its own economic dynamics and governance challenges. The goal isn’t to bet on one outcome, but to map the range of plausible trajectories where our current institutions fall short.
But across many trajectories, one thing is consistent: we’re entering a world our existing institutions weren’t designed to navigate.
Beyond Borders: Distributing Value Through a Global Dividend
If this is the future we’re heading toward—a world of soaring output but declining roles for human labor—then the question isn’t just what will happen, but what kind of institution could meet that challenge. We’ll need mechanisms that don’t just stabilize economies, but match the global scale of the disruption. One emerging proposal is a system to distribute AI-generated value across borders, anchored in a clear but ambitious idea: a global dividend.
A global dividend system would deliver recurring payouts to individuals, grounded in the principle that every person holds a legitimate claim to a share of the value created by transformative technologies. This isn’t about charity; it’s about recognizing that modern AI systems are built atop shared infrastructure, public data, and a long arc of collective human knowledge. If AI labs want broad exemptions from copyright law, while the rest of society wants a living income, then the political question becomes: why not exchange one for the other?
The institutional form this might take is still experimental, but one early vision resembles a global sovereign wealth fund: a vehicle to hold and grow AI-driven economic surplus, not in the name of any one country, but in service of humanity as a whole. And while the global scale is unprecedented, the core idea is not.
Alaska’s Permanent Fund Dividend, for example, distributes annual cash payouts—typically between $1,000 and $2,000—to every resident, funded by the state’s oil revenues. These payments are statutory, not contractual: Alaskans do not hold formal ownership, and the state can reduce or pause distributions during budget crises.
By contrast, the Eastern Band of Cherokee Indians distributes income dividends—ranging from $3,000 to $6,000 twice per year—from a sovereign wealth fund built on casino revenues. These dividends are protected under tribal law, with community-led governance and legal entitlements. Children’s shares are placed in trust and accessed at adulthood, often with financial literacy training required. It’s a powerful model of how sovereign wealth can be shared directly, equitably, and intergenerationally.
A global dividend would build on that logic, extending it beyond borders and anchoring it not in geography, but in our shared exposure to the economic upheaval AI may bring. Such a system could encode economic stakeholding where individuals hold formal, durable claims to wealth derived from an AI-driven economy.
This isn’t completely outside the Overton window—even some industry leaders have gestured towards exploring ownership-based models. OpenAI CEO Sam Altman, for instance, proposed in Moore’s Law for Everything (2021) that large corporations be made to provide equity stakes to a public fund, which could make annual distributions to all US citizens. His proposal was national in scope, but the underlying intuition applies more broadly: that alignment and inclusion may depend not just on access, but on ownership.
In practice, such a system could start small: a rising global income floor, beginning with the poorest communities and expanding over time. This offers a pathway to deploy funds in the near term and help people even before enough wealth has been generated for full global coverage. It’s a way to begin building the infrastructure and legitimacy needed for broader distribution later, while offering immediate support where it’s needed most.
In the long run, such a system could ensure that everyone retains the means to meet basic needs in a world where labor is no longer the main vehicle for income. But even in the short term, modest distributions could serve as a stabilizing force, and targeting the most exposed could soften the landing and buy time for broader adaptation.
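To give the income-floor idea a sense of magnitude, here is a hypothetical sketch of the underlying arithmetic: a fund that pays out only its sustainable real return each year supports a per-person dividend determined by its capital, its return rate, and the covered population. Every number below is an illustrative assumption, not a figure from this essay.

```python
def annual_dividend(fund_capital_usd: float, real_return: float, population: int) -> float:
    """Per-person annual payout if only the fund's real return is distributed,
    leaving the principal intact."""
    return fund_capital_usd * real_return / population

# Illustrative assumptions: a $1 trillion fund, a 4% sustainable payout rate,
# and coverage targeted at roughly 700 million people in extreme poverty.
payout = annual_dividend(1e12, 0.04, 700_000_000)
print(f"${payout:.0f} per person per year")
```

Even under these deliberately modest assumptions, the payout is small in absolute terms but non-trivial relative to extreme-poverty income lines, which is the logic behind starting with the poorest communities and expanding coverage as the fund grows.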
One of the key challenges will be legal: how do we codify the idea that every person has a rightful economic stake in this emerging infrastructure? That will require creative legal thinking to embed personhood-based entitlements into a durable and enforceable institutional framework. This reframes economic inclusion as a fundamental human right: a form of economic membership in a shared global system.
History offers lessons in how wealth-sharing mechanisms can succeed—or fail—depending on design. After the fall of the Soviet Union, Russia launched a bold experiment in voucher privatization: distributing shares of state-owned companies to citizens to jump-start a market economy. But with little public education, most people, unfamiliar with markets and in dire need of cash, sold their vouchers cheaply to brokers. The result was a dramatic concentration of wealth and the rise of the Russian oligarchy. Around the same time, Czechoslovakia adopted a similar approach but achieved better outcomes. Why? The Czech government paired privatization with extensive campaigns to educate citizens, and the country enjoyed a more stable economic context at the time of privatization.
The lesson is clear: redistribution mechanisms need to be trusted, understood, and protected before value is captured.
Building something like this wouldn’t be easy. It would require legal imagination, international coordination, and public trust. It would likely face political resistance from those reluctant to cede sovereignty, and from industries wary of redistribution. But the idea rests on a principle that deserves more space in our collective imagination: collective stewardship of shared wealth in an era of massive disruption.
Capturing the Windfall: Funding the Global Dividend System
If a global dividend system is to be more than a thought experiment, we have to answer a hard question: where will the money come from?
A common critique is that there simply isn’t enough money to make something like this work globally. But in an AI-driven economy, two things seem increasingly likely. First, the removal of labor from production across key sectors could generate enormous new wealth—value that doesn’t exist today. Second, in a world of low marginal costs, goods and services could become significantly cheaper. In that world, even modest transfers, well below the scale of our current welfare systems, might go a long way.
Creating economic value is what markets do. Capturing a portion of that value for public use is a political and institutional challenge.
One early answer to this came in the form of the Windfall Clause, a report by researchers at the Center for the Governance of AI. It proposed that AI labs voluntarily pre-commit, in a legal contract, to redistributing a share of their profits once those profits crossed a certain economic threshold—say, 1% of global GDP. The concept remains interesting, but the original formulation is likely insufficient for today’s landscape. The mechanism struggles against familiar headwinds. Profits are pliable: they can be shifted, hidden, or reinvested. Thresholds can be gamed or endlessly deferred. Reviving or building on the Windfall Clause would require significant redesign: tighter definitions, stronger enforcement, and external levers to reinforce accountability.
There are ways to strengthen and diversify the toolkit. The Windfall Clause was a proposal tailored to AI labs, but they may not be the only entities capturing transformative value. And voluntary profit-sharing alone may not offer the enforceability or durability needed. What follows are several approaches that respond directly to those limitations: offering legal footholds, broader targets, and potential paths toward more stable institutional buy-in.
One idea is to secure equity stakes in the firms or infrastructure providers most likely to benefit from AI-led transformation. Unlike profit-sharing pledges, equity provides a legal claim on future value, and can be harder to evade or obscure. But equity-based models assume early action and leverage: that we know where value will accrue, and that public actors can move before capital lock-in. That window may close quickly and the value chain is likely to splinter across labs, cloud providers, chipmakers, and downstream companies that are quietly automating labor, often without public scrutiny or oversight.
Another approach might shift focus to the users of AI systems rather than their producers. Imagine a framework where large companies that automate away significant portions of their workforce are required to pay into a global fund as a condition, imposed by the providers, for continued access to advanced AI—licensing not just the technology, but the responsibility. But that, too, rests on a fragile foundation: a willingness from AI labs to coordinate, a willingness from firms to comply, and enough shared incentive to keep the system from unraveling the moment it becomes inconvenient.
Then there’s the state. Governments could implement taxes on AI-driven productivity gains or capital income, and route a portion of the proceeds into an international fund. These policies would be hard to coordinate, but the incentives may change. If AI accelerates job loss and weakens demand, governments may find themselves scrambling to sustain consumption and prevent collapse. Under those conditions, redistribution could become a tool of macroeconomic stabilization, not just social justice.
And yet, even fragile foundations have a way of hardening under pressure. Because the history of social policy is not a history of foresight. Many of the major welfare systems we rely on today didn’t exist before the Industrial Revolution. They emerged in response to breakdown. As factories replaced farms, as urban poverty swelled and wealth concentrated, societies created public schools, labor laws, pensions, and insurance systems. Not because they envisioned them in advance, but because the cost of doing nothing became too high.
The same pattern holds at the global level. In 1944, as World War II neared its end and the Great Depression had shattered confidence in free-market capitalism, 44 nations gathered in Bretton Woods to design a new economic order. Out of that meeting came the IMF, the World Bank, and a global monetary framework. It wasn’t consensus that was the driving force. It was crisis.
If AI drives a similar rupture, then crisis will once again force the question. And when it does, the difference will be whether we have something to reach for: a policy framework, an institutional prototype, a draft on the table. It is a question that deserves far more attention, experimentation, and collective foresight.
Capturing a share of AI-driven economic surplus for public benefit won’t be easy. There are no turnkey solutions, and many mechanisms face serious technical and political challenges. Some of these proposals may sound idealistic, but history reminds us that today’s political nonstarters can become tomorrow’s economic necessities. In a world where labor becomes less central to value creation and there are simply fewer jobs to go around, the boundary of what’s politically and economically feasible could shift—possibly faster than we expect.
Recognizing the Limits
No single policy can carry the weight of a future this complex. A global dividend system won’t be enough on its own. It cannot answer deeper questions of identity, purpose, or meaning—challenges that will require cultural, social, and political responses far beyond its scope. But it can offer a starting point for economic security in an era when the foundations of income and labor are being redefined.
There are other ways to imagine redistribution in this scenario. Some might advocate for universal basic services provided by nations, perhaps funded through multilateral tax treaties. Others might suggest public ownership of key AI infrastructure. None will be easy. Getting governments to tax powerful domestic industries is hard. Getting multinational firms to commit to global contributions is harder. But politics has always been about power, and the exercise of it. If the legitimacy of economic systems begins to fray, even entrenched interests may discover that redistribution is in their own long-term interest.
The model outlined here assumes a specific kind of future: one in which the economic gains of advanced AI accrue not broadly, but to a handful of companies and countries. That may sound dystopian to many, but that doesn’t mean we should resign ourselves to it. Others in the ecosystem are focused on preventing extreme power concentration altogether, through tools like antitrust law, public AI infrastructure, or efforts to slow the race itself. If this future seems undesirable, then we should start sketching alternatives, designing institutions and incentives that push us toward better outcomes.
This is one proposal, built for one possible future. It isn’t offered as a definitive answer, but as a starting point for debate, refinement, and imagination. The assumptions behind it may prove incomplete or incorrect; only time will tell. But in an era of rapid transformation and deep uncertainty, putting concrete ideas on the table is a way to clarify thinking, surface disagreements, and inspire alternative approaches.
At its best, this proposal is more than a policy suggestion: it’s an invitation to imagine a global economy grounded in solidarity, and to help catalyze discussion about what kind of futures people actually want and what institutions will be needed to make them viable.
Special thanks to Deric Cheng, Anton Korinek, and Adrian Brown for helpful comments and suggestions.