
OpenAI Maps 921 US Jobs: 18% Face Higher Short-Term AI Automation Risk

OpenAI's April 2026 framework maps 921 US occupations and finds 18% face higher short-term automation risk — concentrated in legal support, office admin, and education paperwork. Lawyers, nurses, and teachers are insulated. Here's what it means for your week.


18% of US jobs face a higher risk of short-term AI automation — and the list of who's on it might surprise you. [Fact]

That's not a forecast from a think tank or a journalist. It's what OpenAI itself put in writing this April, in a 4-dimension framework that maps 921 occupations covering 99.7% of US employment. If your work touches contracts, classrooms, or office paperwork, you're statistically more exposed than a plumber, a registered nurse, or a kindergarten teacher. [Fact]

Here's what the report actually says, why the cutoff is drawn where it is, and what it means for the job you go to on Monday morning.

What OpenAI's framework does differently

Most AI-and-jobs reports start with task lists. They look at what your job involves, ask whether AI can do those tasks, and stop there. The OpenAI Jobs Transition Framework, written by economist Alex Martin Richmond with a foreword from OpenAI Chief Economist Ronnie Chatterji, layers three more dimensions on top of that. [Fact]

Dimension one is the obvious one: technical capability. Can AI actually perform the tasks the job requires today? [Fact]

Dimension two is the one most reports skip — human necessity. Some work has to be done by humans even when AI can technically do it. The framework breaks this into three reasons: regulatory (judges, lawyers in courtrooms, certain medical procedures), relational (teachers, therapists, hospice nurses), and physical (plumbers, electricians, hands-on caregivers). A task can be 90% automatable and still legally or socially require a human. [Claim]

Dimension three is demand elasticity. If AI makes accounting 10× cheaper, do firms buy 10× as much accounting work and keep their accountants busy? Or do they do the same amount of work with fewer people? OpenAI admits this is the hardest dimension to observe and uses structured approximations. [Estimate]

Dimension four is where the report gets unusual. OpenAI ran the analysis through anonymized ChatGPT consumer usage data from the second half of 2025, cross-referenced with US Current Population Survey unemployment data and a GPT-5.4 occupation classifier. They found ChatGPT usage was higher in jobs the framework flagged as high-risk than in low-risk ones — a real-world signal, not a theoretical projection. [Fact]

The numbers, spelled out

Across 921 occupations, the framework breaks the US workforce into four buckets. [Fact]

  • 18% of jobs face higher short-term automation risk. The clusters are legal support work, classroom-administrative roles, and office and administrative support overall. [Fact]
  • 46% are likely to see less change — work that's harder to automate or where human necessity blocks it. [Fact]
  • 24% could see employment decline as task composition shifts inside the role. The job title survives; the headcount shrinks. [Estimate]
  • 12% could grow because of AI — usually because cheaper output drives more demand. [Estimate]

Notice that 18% + 46% + 24% + 12% = 100%, but "higher risk" and "employment decline" overlap conceptually. The framework is sorting by _type_ of pressure, not netting winners against losers.

Who's actually on the high-risk list

Three job families dominate the 18%: legal support, education, and office/administrative. [Fact]

In legal support, paralegals and legal secretaries do enormous volumes of document review, citation checking, and templated drafting — exactly what large language models do well. [Estimate] If you work as a paralegal or a legal secretary, the report says you're in the highest-pressure category right now.

In education, the affected slice is administrative — not the teaching itself. [Claim] OpenAI specifically calls out teachers, preschool teachers, and kindergarten teachers as insulated roles, because the relational dimension makes them necessary in a way AI can't substitute. The exposure is in scheduling, grading paperwork, parent communication drafts, and lesson-plan boilerplate.

In office and administrative work, administrative assistants, executive secretaries, customer service representatives, data entry keyers, bookkeeping clerks, and office clerks all sit in the high-pressure band. The common thread is structured language work at scale — the exact thing that's now $0.03 a query. [Estimate]

Who's insulated, and why the reasons matter

The report's clearest finding for workers is which roles are buffered, and why.

Lawyers (the licensed, courtroom kind, distinct from paralegals), judges, registered nurses, nurse practitioners, and front-line teachers are all flagged as insulated. [Fact] The reasons aren't the same.

A lawyer is buffered by regulation. Bar admission, courtroom appearance rules, and signature liability mean a human has to be the one whose name goes on the brief — even if AI wrote a draft. A nurse is buffered by physical and relational necessity: catheter insertion, bedside care, the human presence that calms a frightened patient. [Claim] A teacher is buffered by relational demand: parents, school boards, and students themselves expect a person, not a chatbot, in front of the classroom.

Here's why that nuance matters for your career planning. If your role is buffered by regulation, the buffer can disappear with one statute change. If it's buffered by physical necessity, robotics could erode it over a decade. If it's buffered by relational demand — by what humans actually want from other humans — it tends to be the most durable. [Claim]

What the ChatGPT usage data adds

This is the part of the report that's hardest to dismiss. OpenAI didn't just predict — they measured. ChatGPT usage in H2 2025 was higher in occupations the framework flagged as high-risk than in low-risk ones. [Fact]

That's a behavioral signal, not a survey response. Workers in legal support, office administration, and white-collar coordination roles are _already using AI heavily on the job_. The transition isn't a future shock — it's a current, ongoing substitution. [Claim]

Independent reporting from EdTech Innovation Hub framed this as the central tension: the 18% number is striking, but the more important fact is that adoption is concentrated in exactly the roles that are most exposed. If you're in those roles and you're _not_ using AI yet, you're already behind colleagues who are. [Estimate]

What this means for your week

Three concrete things if your job is on the 18% list.

One — measure your own task mix this week. Take any five-day stretch and write down what you actually did each hour. The OpenAI framework is occupation-level; your specific job is task-level. If 70% of your week is structured language work (drafting, summarizing, classifying, retrieving), the framework's pressure applies to you. If 70% is judgment calls, client relationships, or physical work, it applies less.

Two — adopt before you're forced to. The ChatGPT usage data shows the substitution is already underway in your category. Workers who use AI to handle the 18%-bucket tasks faster have more headroom to take on the work the framework calls insulated — relational, regulated, judgment-heavy. That headroom is your career insurance.

Three — watch for the 24% bucket. Even if your role is "insulated," the 24% quiet-decline scenario means firms keep the title but cut the headcount. Watch for hiring freezes, role consolidations, and "we'll backfill that later" language on your team. Those are leading indicators that your firm expects AI to absorb the lost capacity.

The framework is useful precisely because it doesn't pretend to be a forecast. It maps pressure. What you do with that pressure — or what your employer does — is the part the report doesn't try to predict.


AI-Assisted Analysis Disclosure

This article was drafted by Claude (Anthropic) using OpenAI's published April 2026 framework as the primary source. The full PDF was not directly retrievable in this session due to file-size limits, so figures and findings are quoted from independent secondary coverage (EdTech Innovation Hub, BCG re-publication) and the report's own publicly listed summary. All quoted percentages and the 921-occupation count come from those sources. Editorial judgments — which roles to highlight, how to frame the dimensions, the action steps — are this site's analysis. Original report and primary citations are linked above for verification.

Update History

  • 2026-04-28 — Initial publication, summarizing the OpenAI Jobs Transition Framework released April 2026.




Tags

#openai #ai-jobs #automation-risk #labor-market #chatgpt #2026-research #transition-framework