Will AI Erase 10 Million Jobs? NBER Forecasters Can't Agree
A new NBER paper compared five forecaster groups on AI's labor market impact. The median says GDP grows 2.5%/year. The rapid scenario says ~10M jobs gone by 2050. The disagreement reveals more than the numbers.
62 percent. That's where Tetlock's superforecasters and academic economists think U.S. labor force participation could land by 2050 if AI advances quickly — down from 62.7% today. Push the scenario harder, into "transformative AI" territory, and the same forecasters split apart: some see participation collapsing to 55%, costing roughly 10 million jobs, while others see almost no change at all.
That gap isn't about whether AI will get better. It's about what happens to the economy _after_ it does — and a new working paper from the National Bureau of Economic Research suggests the experts can't even agree on the question, let alone the answer.
If you've been waiting for the people who study this for a living to tell you whether your job is safe, this paper is the closest thing you'll get to a real answer. Spoiler: there isn't one. But the _shape_ of the disagreement tells you almost everything about what to plan for.
What the paper actually did
In _Forecasting the Economic Effects of AI_ (NBER Working Paper w35046, April 2026), Ezra Karger, Otto Kuusela, Philip Tetlock, and 12 co-authors ran a structured forecasting exercise across five very different groups: academic economists, employees at frontier AI companies, policy researchers, "superforecasters" with documented accuracy track records, and a representative sample of the U.S. general public.
Each group was asked to predict the same set of macroeconomic variables — annual GDP growth, labor force participation, AI's share of those changes — across a baseline scenario and a "rapid AI progress" scenario, all anchored to 2050.
The methodology matters here. Tetlock's prior work (_Superforecasting_, 2015) established that calibrated forecasters consistently outperform domain experts on geopolitical questions. Putting them next to economists and AI insiders on the _same questions_ is what makes this paper unusual: it's not just "what do experts think," it's "do the people with the best forecasting track records agree with the people with the deepest domain knowledge?"
They mostly don't.
The headline numbers
Across all groups, the median forecast for annual U.S. GDP growth through 2050 was 2.5%. That's notably above the U.S. government's official baseline of 2.0% medium-run / 1.7% long-run (CBO 2026 Long-Term Budget Outlook). In other words, every forecasting community in the study expects AI to add growth — they just disagree about how much.
Under the rapid AI progress scenario, things diverge sharply:
- Median GDP growth jumps to roughly 4% per year — nearly double the baseline.
- Median labor force participation falls from 62.7% today to ~55% by 2050.
- Forecasters attributed about half of the participation drop directly to AI, implying ~10 million U.S. jobs displaced by AI specifically; the other half is demographic (aging, retirement, immigration shifts).
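The ~10 million figure follows from simple participation arithmetic. A minimal sketch of the back-of-envelope calculation, assuming a 2050 civilian noninstitutional population of roughly 270 million people aged 16+ (an illustrative round number, not a figure from the paper):

```python
# Back-of-envelope check on the AI-attributed job-loss figure.
# ASSUMPTION: 2050 civilian noninstitutional population of ~270M (16+).
# This is an illustrative round number, not taken from the paper.
POP_2050_M = 270.0   # millions of people aged 16+
PART_TODAY = 0.627   # labor force participation rate today
PART_RAPID = 0.55    # median forecast under rapid AI progress
AI_SHARE = 0.5       # fraction of the drop forecasters attribute to AI

def ai_attributed_losses(pop_m, p_now, p_then, ai_share):
    """Jobs lost to AI (in millions), given a participation-rate decline."""
    total_drop_m = pop_m * (p_now - p_then)  # fewer labor force participants
    return total_drop_m * ai_share

losses = ai_attributed_losses(POP_2050_M, PART_TODAY, PART_RAPID, AI_SHARE)
print(f"{losses:.1f} million")  # ~10.4 million, in line with the paper's ~10M
```

A 7.7-point participation drop on that population base removes roughly 21 million people from the labor force; attributing half of it to AI lands near the 10 million headline.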
But "median" hides the real story. The interquartile range on AI-attributed job losses spans from essentially zero to well over 20 million. AI company employees and superforecasters tended to cluster on the higher end. Academic economists and the general public clustered lower. Policy researchers were the most spread out of any group.
Why economists and AI insiders disagree
The paper's most interesting finding isn't _that_ the groups disagree — it's _why_.
When the authors decomposed the variance, the dominant driver of disagreement wasn't predictions about how fast AI capabilities will improve. Most groups gave fairly similar timelines for things like "human-level performance on most cognitive tasks." The big gap was over what a high-capability AI actually does to the economy once it exists.
Academic economists tended to lean on historical analogies — electrification, computing, the internet — where productivity gains were large but labor market disruption was gradual and absorbed by new job categories. AI company employees were more likely to argue that this time is structurally different because the _substitutability_ of AI for cognitive labor is broader and faster than any prior technology.
Neither side has a clean empirical case. The paper notes that this is exactly the kind of question where you'd expect Tetlock-style superforecasters to outperform — they're trained to update on weak signals rather than commit to a worldview — but even the superforecasters split, with the median sitting closer to the AI-insider view than to the economist consensus.
What different groups want done about it
The paper also surveyed policy preferences, and the split is the most politically loaded result in the study:
- Experts (economists, AI employees, policy researchers) overwhelmingly preferred targeted retraining programs and portable benefits as the primary response.
- The general public preferred universal basic income and federal job guarantees by a wide margin.
- Superforecasters split, but leaned toward retraining and wage insurance over UBI.
This gap matters because political feasibility runs through the public, not through experts. If AI displacement does play out near the high end of the forecasts, the policy response is more likely to look like UBI and job-guarantee debates than like the targeted retraining frameworks economists favor. That's a planning signal for anyone in policy, government, or labor market roles.
What this means for your career
You shouldn't take any single number from this paper as a personal forecast. The whole point of the study is that the experts disagree by orders of magnitude, and that disagreement is structural, not something more data will quickly resolve.
What you can take away:
- The base case is still growth. Every group's median forecast had GDP higher than the government baseline. AI is not a recession story in the median view — it's a redistribution story.
- Labor force participation is the variable to watch. A drop from 62.7% to 55% would be the largest peacetime decline in U.S. history, larger than the post-2008 drop that defined a decade of policy debates. If the BLS monthly participation rate starts showing sustained declines, the rapid scenario is becoming the operative one.
- The cognitive/manual split is fuzzier than the old narrative said. AI insiders in this study didn't carve out manual labor as a safe haven the way earlier automation literature did. They expect cognitive work to be hit _first_, but not exclusively.
- Plan for either scenario, bet on neither. If the median experts are right, you have ~25 years and the response is gradual retraining. If the AI insiders are right, the timeline compresses and the response involves benefits restructuring. Skills that travel across either world — judgment, client trust, physical presence, oversight of AI systems — are the safer bets.
The forecasting paradox the paper exposes
There's a deeper finding buried in the methodology section that doesn't get a headline number, but probably should.
When the authors compared the _confidence_ each group expressed in their forecasts, AI company employees were the most certain — narrow distributions, strong views — while academic economists were the most uncertain, with the widest distributions. Superforecasters were calibrated _between_ the two, but closer to the economists in stated uncertainty.
That pattern is informative. In Tetlock's prior research, the people who turn out to be most accurate over time tend to be the ones who hold their views with appropriate humility — the "foxes" rather than the "hedgehogs." On AI's economic effects, the most confident forecasters are the people closest to the technology. History suggests proximity often produces overconfidence rather than insight, especially when the proximate group has a financial or career stake in the outcome being big.
None of that means AI insiders are wrong. It means their confidence shouldn't be weighted higher than their accuracy track record on this specific question — and the track record doesn't exist yet, because the question is novel.
Who should pay attention right now
If you work in a role where labor market signals matter — HR, workforce planning, economic development, retirement planning — this paper is the most current expert-elicitation snapshot available, and worth bookmarking even if you disagree with its conclusions. Human resources managers in particular will find the policy-preference gap relevant: the workforce policies your employees want may diverge sharply from what your finance team's economists are modeling.
If you're an individual worker, the practical move is the same one good career advisors have given for a decade: build skills with optionality, watch the leading indicators (participation rate, AI capex as share of GDP, BLS occupational projections), and don't restructure your life around any single forecast — including this one.
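The "watch the participation rate" advice can be made concrete with a toy monitoring rule. A minimal sketch, assuming a 62.7% baseline and a 1.5-point alert threshold (both thresholds are illustrative assumptions, not from the paper):

```python
# Toy monitoring rule for the rapid-scenario signal discussed above.
# ASSUMPTIONS: the 62.7% baseline and 1.5-point threshold are illustrative,
# not figures from the paper; monthly_rates would come from BLS releases.
def rapid_scenario_flag(monthly_rates, baseline=62.7, threshold_pp=1.5):
    """Return True if the trailing 12-month average participation rate
    has fallen more than `threshold_pp` percentage points below baseline."""
    if len(monthly_rates) < 12:
        return False  # not enough data for a trailing-year average
    trailing = sum(monthly_rates[-12:]) / 12
    return (baseline - trailing) > threshold_pp

# A gentle drift down is not flagged...
assert not rapid_scenario_flag([62.5] * 12)
# ...but a sustained slide well below baseline is.
assert rapid_scenario_flag([60.5] * 12)
```

The trailing average matters: single-month participation readings are noisy, and only a sustained decline distinguishes the rapid scenario from ordinary churn.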
Sources
- Karger, E., Kuusela, O., Tetlock, P., et al. (2026). _Forecasting the Economic Effects of AI_. NBER Working Paper w35046. Published April 2026.
- CBO 2026 Long-Term Budget Outlook — used by the paper as baseline.
- BLS Civilian Labor Force Participation Rate — the monthly indicator to watch.
_This article was written with AI-assisted analysis based on the NBER working paper cited above. All numerical claims reflect the paper's reported figures; interpretations are our own and may differ from the authors'._
Analysis based on the Anthropic Economic Index, U.S. Bureau of Labor Statistics, and O*NET occupational data.
Update history
- First published on May 9, 2026.
- Last reviewed on May 9, 2026.