Three Nobel-Caliber Economists Say AI Is on the Wrong Track for Workers
Acemoglu, Autor, and Johnson argue that current AI development favors automation over augmentation — and propose nine policies to redirect it toward pro-worker outcomes.
When Daron Acemoglu, David Autor, and Simon Johnson publish a paper together, the economics profession pays attention. These are not fringe voices. Acemoglu and Johnson shared the 2024 Nobel Prize in Economics. [Fact] Autor created the "tasks framework" that reshaped how we think about technology and labor markets. [Fact] Johnson, a former IMF chief economist and MIT Sloan professor, co-wrote (with Acemoglu) Power and Progress, a book on how concentrations of power shape economic outcomes. [Fact]
Their new paper, published through the Hamilton Project at the Brookings Institution in February 2026, carries a blunt message: the way we are building AI right now is not designed to help workers. [Claim] And unless we deliberately change course, the technology that could be the greatest force multiplier in modern labor history will instead become another engine of inequality.
The Core Problem: AI Is Automating, Not Augmenting
The authors draw a crucial distinction that most AI discourse glosses over. They classify technologies into five categories based on how they affect labor: labor-augmenting (makes workers more effective), capital-augmenting (improves machines), automating (replaces human tasks with machines), expertise-leveling (lets new workers perform specialized tasks), and new task-creating (generates entirely new kinds of human work). [Fact]
Of these five, only "new task-creating" technologies unambiguously benefit workers. [Fact] Everything else involves trade-offs — and the current AI investment landscape is heavily tilted toward automation.
As the authors put it: "A great deal of the current AI focus is on task automation and the development of high-level capabilities in line with artificial general intelligence, with less energy and investment flowing toward the development of pro-worker AI." [Fact] The reason is straightforward economics: leading firms see greater returns in automating expertise than in creating new tasks for humans. [Claim]
This framing matters for anyone watching their own occupation's AI exposure numbers. When we report that software developers have high AI exposure, the natural question is: exposure to what, exactly? Automation that replaces their work, or augmentation that amplifies it? Acemoglu, Autor, and Johnson argue that the answer depends on policy choices we are making right now.
What "Pro-Worker AI" Actually Looks Like
The paper defines "pro-worker technology" as technology that makes human skills and expertise more valuable — not less. [Fact] Think of a diagnostic AI tool that helps a nurse practitioner catch conditions they might have missed, rather than an AI system that eliminates the need for the practitioner entirely. Think of code-completion tools that let a software developer build features faster, rather than fully autonomous coding agents that make the developer redundant.
The distinction matters enormously for specific occupations. For customer service representatives, pro-worker AI means tools that surface relevant information instantly during complex calls, helping agents resolve issues faster. The alternative — chatbots that handle most queries without any human involvement — is automation, and it is currently winning the investment race.
For accountants, it is the difference between AI that automates routine compliance checks (freeing accountants for advisory work) and AI that handles the advisory work too, collapsing the profession's value chain.
For administrative assistants, the question is whether AI scheduling and email tools make them indispensable coordinators of complex workflows, or whether those same tools simply make the role unnecessary.
Nine Policy Recommendations
The paper does not stop at diagnosis. It proposes nine concrete interventions to redirect AI development toward pro-worker outcomes. [Fact]
The most striking recommendation targets the U.S. tax code. [Fact] Current tax policy — through provisions like Section 168 bonus depreciation — makes it cheaper for companies to invest in equipment and software than to invest in hiring or training workers. [Fact] The authors argue this creates a systematic bias toward automation: when replacing a worker with software is tax-advantaged but training that worker is not, the economic incentives push toward displacement.
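To see the asymmetry concretely, here is an illustrative back-of-the-envelope sketch, not drawn from the paper itself. All rates are assumptions chosen for clarity: a 21% corporate tax rate, a 7.65% employer-side payroll tax, and a 5% discount rate. Wages are deductible but carry payroll tax on top; software spending under 100% bonus depreciation is deducted immediately, while slower depreciation schedules push the tax savings into the future.

```python
# Illustrative only: assumed rates, not actual tax advice or figures
# from the Acemoglu-Autor-Johnson paper.
CORP_TAX = 0.21       # assumed corporate income tax rate
PAYROLL_TAX = 0.0765  # assumed employer-side payroll tax rate
DISCOUNT = 0.05       # assumed discount rate for future deductions

def after_tax_labor_cost(wage: float) -> float:
    """Wages are deductible, but the employer also owes payroll tax."""
    gross = wage * (1 + PAYROLL_TAX)
    return gross * (1 - CORP_TAX)

def after_tax_software_cost(price: float, years: int = 1) -> float:
    """Software cost net of depreciation deductions spread over `years`.

    years=1 approximates 100% bonus depreciation (immediate expensing);
    larger values model slower schedules, whose deductions are worth
    less in present-value terms.
    """
    annual_deduction = price / years
    pv_tax_savings = sum(
        annual_deduction * CORP_TAX / (1 + DISCOUNT) ** t
        for t in range(years)  # first deduction taken immediately
    )
    return price - pv_tax_savings

spend = 100_000
print(f"Labor:                ${after_tax_labor_cost(spend):,.0f}")
print(f"Software (expensed):  ${after_tax_software_cost(spend, 1):,.0f}")
print(f"Software (5-yr depr): ${after_tax_software_cost(spend, 5):,.0f}")
```

Under these assumed rates, $100,000 spent on software costs roughly $79,000 after tax with immediate expensing, while the same $100,000 spent on a worker's wages costs about $85,000 once payroll tax is added. The wedge is modest per dollar, but applied across an entire payroll it is exactly the kind of systematic tilt toward automation the authors describe.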
Other recommendations include directing federal grant-making toward pro-worker AI research, establishing DARPA-style competitive prizes for pro-worker innovation, strengthening antitrust enforcement to ensure technology competition, and creating legal frameworks that protect workers' expertise from being extracted by AI systems — what the authors call preventing "expertise theft." [Fact]
Two recommendations focus on specific sectors: healthcare and education. [Fact] The authors see these as areas where pro-worker AI could have outsized positive impact — where AI-augmented professionals could dramatically expand access to services rather than simply cutting costs.
The final set addresses power dynamics: mechanisms for worker voice in AI deployment decisions, and loosening licensure restrictions that prevent newly AI-empowered workers from practicing at the top of their expanded capabilities. [Fact]
How This Connects to What We Already Know
This paper adds an important policy dimension to a growing body of evidence. We have covered research showing that AI-exposed jobs were already declining before ChatGPT launched, that firms are measurably substituting AI for human labor, and that Brookings found 6 million U.S. workers face high AI risk with low adaptation capacity.
Acemoglu, Autor, and Johnson provide the theoretical scaffolding for why these trends are not inevitable. The displacement is not happening because AI is inherently anti-worker — it is happening because the incentive structure favors automation over augmentation. [Claim] Change the incentives, and the technology can be redirected.
This is a more optimistic framing than it might appear. It means the outcome is not predetermined. But it also means that passively waiting for the market to produce pro-worker AI is, in the authors' view, naive. Simon Johnson has stated directly: "We are currently not on a pro-worker AI path." [Fact]
What This Means for Your Career
If you are a software developer, accountant, customer service representative, or administrative assistant, this paper offers a framework for thinking about your future that goes beyond simple automation risk scores.
The question is not just "will AI affect my job?" — it almost certainly will. The question is whether your employer, your industry, and your government are investing in AI that makes you more valuable or AI that makes you replaceable. [Claim]
Practically, this means three things. First, seek out roles and organizations where AI is deployed as a tool in your hands, not a replacement for your position. Second, develop expertise in the judgment-intensive parts of your work — the areas where AI augments rather than automates. Third, pay attention to the policy conversation. The tax code, antitrust enforcement, and labor regulations will shape whether AI becomes your co-pilot or your successor.
The full paper is available through the Hamilton Project at Brookings.
This analysis was generated with AI assistance based on the original research by Acemoglu, Autor, and Johnson (2026). For detailed AI exposure data on specific occupations, visit our occupation pages.