
Yale Budget Lab: AI Exposure Hasn't Moved Workforce Distribution Since ChatGPT

29% of US workers are in occupations with the lowest AI exposure. 18% are in the highest. And the share has not budged since ChatGPT launched. The Yale Budget Lab's February 2026 synthesis finds AI exposure is real and measurable — but it has not yet translated into measurable employment displacement.


29% of US workers are in occupations with the lowest AI exposure. 18% are in the highest. And the share has not budged since ChatGPT launched.

That's the core finding from a Yale Budget Lab synthesis published February 2026 — and it's the kind of result that should make every "AI is coming for your job" headline sound a little quieter. The Lab, led by economist Martha Gimbel, has been tracking this question monthly using Current Population Survey (CPS) microdata. Their answer so far is uncomfortable for both the AI doomers and the AI utopians: we can measure exposure, but we cannot yet see exposure becoming displacement. [Fact]

If you've been refreshing your job board out of fear, the data says you can take a breath. If you've been counting on AI to fix the productivity slowdown, the data says you should not pop the champagne yet. Here's what the numbers actually show.

What "AI exposure" actually means

The Yale Budget Lab does not invent a new exposure score. It pools the leading academic measures and compares them: Felten et al. (which captures whether AI capabilities can perform an occupation's tasks), Webb (which matches occupation task descriptions against AI patent text), and Eloundou et al. 2024 (whose human_rating_beta and dv_rating_beta ask whether GPTs can speed up O*NET tasks, the former annotated by humans and the latter by GPT-4). [Fact]

This matters because the four metrics agree on which jobs are _least_ exposed and disagree most on which are _most_ exposed. The Felten measure rates managers, professionals, and office and administrative staff very high — and service, production, and construction workers very low — because the latter involve manual tasks that current algorithms cannot do. The Eloundou measures rank by task speed-up, which surfaces a slightly different list. [Fact] The takeaway: any single ranking you read in the press is one slice of a noisier picture.
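The agree-at-the-bottom, disagree-at-the-top pattern is easy to see once you rank occupations under each measure side by side. Here is a minimal sketch of that comparison; the occupation names and scores below are illustrative stand-ins, not the published Felten or Eloundou values.

```python
# Sketch: comparing two hypothetical exposure measures by occupation.
# All scores are illustrative, not the published Felten/Eloundou numbers.

def rank(scores):
    """Map each occupation to its rank (1 = least exposed)."""
    ordered = sorted(scores, key=scores.get)
    return {occ: i + 1 for i, occ in enumerate(ordered)}

felten_like = {    # capability-overlap style score (illustrative)
    "construction laborer": 0.10,
    "food service worker": 0.15,
    "office clerk": 0.70,
    "computer programmer": 0.80,
    "financial analyst": 0.85,
}
eloundou_like = {  # task speed-up style score (illustrative)
    "construction laborer": 0.05,
    "food service worker": 0.12,
    "office clerk": 0.60,
    "financial analyst": 0.65,
    "computer programmer": 0.90,
}

r1, r2 = rank(felten_like), rank(eloundou_like)
# The bottom of both rankings matches (manual work); the top shuffles:
# one measure puts financial analysts first, the other programmers.
for occ in felten_like:
    print(occ, r1[occ], r2[occ])
```

Run on real data, this is why a press ranking of "the most exposed jobs" depends heavily on which measure the journalist happened to pick, while "the least exposed jobs" list is robust.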

The number that surprised me

Across all of these measures, the share of US workers in the lowest, middle, and highest occupational exposure groups has stayed stable at roughly 29%, 46%, and 18% respectively. [Fact] Not after ChatGPT launched in November 2022. Not after GPT-4. Not after Claude Opus or Gemini reasoning models. The distribution has barely moved.

That is striking because exposure scores are not zero. Roughly one in five US workers sits in the highest-exposure quintile. If exposure translated quickly into displacement, you would expect those workers to be churning out of their occupations — moving toward less exposed ones — at a higher rate than everyone else. [Estimate]
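The statistic the Lab is tracking here is simple to reproduce in spirit: bucket each worker's occupation by exposure score, then compute survey-weighted shares per bucket and watch them month over month. This sketch uses synthetic records and illustrative cutoffs, not actual CPS microdata.

```python
# Sketch: weighted share of workers by exposure group, CPS-style.
# Records, weights, and group cutoffs are synthetic illustrations.

workers = [
    # (occupation exposure score, survey person-weight)
    (0.15, 120), (0.20, 170),   # low-exposure occupations
    (0.45, 230), (0.50, 230),   # middle
    (0.80, 100), (0.85, 80),    # high
]

def group(score):
    if score < 0.33:
        return "low"
    if score < 0.66:
        return "mid"
    return "high"

total = sum(w for _, w in workers)
shares = {}
for score, w in workers:
    g = group(score)
    shares[g] = shares.get(g, 0.0) + w / total

# Track this dict month over month: if high-exposure workers were
# churning out of their occupations, the "high" share would drift down.
print({g: round(s, 2) for g, s in shares.items()})
```

The whole displacement question reduces to whether that "high" share trends downward over time, and in the Lab's data it does not.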

They are not.

Unemployed workers do come from exposed jobs — but only modestly

The Lab's monthly CPS updates do find that unemployed workers tend to come from occupations where, on average, 25% to 35% of tasks could be performed by generative AI. [Fact] That is a real signal. But it is not dramatically above the workforce average for high-exposure groups, which is the comparison that matters.

In other words: yes, people losing jobs in 2026 are slightly more concentrated in exposed occupations than the workforce as a whole. No, this is not a tidal wave. The Lab's blunt summary in their March 2026 update: measures of exposure, automation, and augmentation show no sign of being related to changes in employment or unemployment. [Fact]
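The comparison the Lab is running is a difference in means: average task exposure of unemployed workers' prior occupations versus the workforce overall. A small positive gap supports "slightly more concentrated"; a large one would support "tidal wave." The values below are synthetic illustrations of that comparison, not Lab data.

```python
# Sketch: is exposure higher among the unemployed than the workforce overall?
# Exposure = fraction of an occupation's tasks generative AI could perform.
# All values are synthetic illustrations, not Yale Budget Lab figures.

def mean(xs):
    return sum(xs) / len(xs)

employed   = [0.20, 0.25, 0.30, 0.35, 0.28]  # exposure of employed workers' occupations
unemployed = [0.25, 0.30, 0.35, 0.32]        # exposure of unemployed workers' prior occupations

gap = mean(unemployed) - mean(employed)
# A small positive gap is a real signal, but "slightly above average"
# is very different from displacement concentrated in exposed jobs.
print(round(mean(unemployed), 3), round(mean(employed), 3), round(gap, 3))
```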

This is the part the Twitter-thread industrial complex skips. Exposure is real and measurable. Displacement, so far, is not.

Why this might not last

I want to be careful here. "No effect yet" is not the same as "no effect ever." Three things could change the picture in the next 18 months. [Opinion]

First, Anthropic's own usage telemetry — which the Yale Lab has begun cross-referencing — suggests AI usage is shifting toward _automation_ (AI doing the task end-to-end) rather than _augmentation_ (AI helping a human). Both the November 2025 and February 2026 Anthropic samples show this pattern. [Fact] If automation share keeps rising, the lag between exposure and displacement could close.

Second, the headline CPS numbers smooth over what's happening at the entry level. Federal Reserve Governor Michael Barr noted on February 17, 2026 that early-career workers in AI-exposed occupations — software developers and customer service representatives among them — have already seen employment decline relative to other early-career workers. [Fact] If you are a 23-year-old looking for your first job in those fields, the average across all workers is not your reality.

Third, AI adoption itself is still early. Only 17% of US firms reported using AI in business functions as of December 2025, rising to about 30% among large firms (250+ employees). [Fact] The headline workforce stats are stable in part because the technology has not yet been deployed everywhere.

What this means for your career

If you work in a high-exposure occupation — and the Felten measure suggests that includes a lot of data scientists, financial analysts, administrative assistants, accountants, and especially computer programmers — three things are worth doing right now.

Stop reading the apocalypse takes. The data does not support them yet. The Yale Budget Lab is one of the few groups doing this work with rigor and they are publishing monthly. Read their updates instead of the breathless thread.

Watch the entry-level signal in your own field. If junior hiring is collapsing while mid-career hiring is stable, that is the canary. The Yale Lab does not yet break out hiring by experience level, but Fed speeches and industry reports are starting to. The displacement story, if it comes, will likely show up here first.

Get good at the part of your job that AI is _bad_ at, not the part it is good at. The Felten methodology is explicit that "exposure" measures task-level overlap — it does not measure whether a human is still needed for judgment, accountability, client trust, or system integration. Those are still yours.

The bigger picture

What I find most useful about the Yale Budget Lab's work is the discipline. They publish methodology. They update monthly. They cross-reference multiple data sources. They are willing to publish the boring conclusion — _we don't see it yet_ — instead of dressing the data up for a conference talk.

In a discourse where "AI will eliminate 300 million jobs" has been a Goldman Sachs headline since 2023 and "AI is just a productivity tool" has been a counter-headline ever since, the Lab's answer is the most honest one we have: the labor market is changing, but the change has not yet shown up where we keep looking for it. [Opinion]

That should make us more humble, not more certain.

_This article was written with AI assistance and edited for accuracy. Statistics are current as of the source publication dates. We update this analysis as new Yale Budget Lab releases and CPS data become available._

Analysis based on the Anthropic Economic Index, U.S. Bureau of Labor Statistics, and O*NET occupational data.

Update history

  • First published on May 5, 2026.
  • Last reviewed on May 5, 2026.


Tags

#ai-exposure #yale-budget-lab #labor-market #cps-data #eloundou #felten