Updated: March 21, 2026

Karpathy Scored Every US Job for AI Exposure — Here's What the Data Says

OpenAI co-founder Andrej Karpathy rated 342 US occupations for AI exposure. 42% of workers — 59.9 million people — land in the high-exposure zone. What does this mean for your career?

When Andrej Karpathy — co-founder of OpenAI and former director of AI at Tesla — decides to spend a weekend scraping the entire Bureau of Labor Statistics Occupational Outlook Handbook and rating every job for AI exposure, people pay attention. And they should, because the results paint one of the most comprehensive pictures we've seen of how artificial intelligence is reshaping the American labor market.

Karpathy analyzed 342 occupations covering roughly 143 million US workers [Fact]. He assigned each job an AI exposure score from 0 to 10, based on how much of the work could plausibly be handled by large language models and related AI systems. The weighted average across the entire US workforce came out to 4.9 out of 10 [Fact] — essentially saying that about half of what Americans do at work is now within reach of AI capabilities.
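The employment-weighted average described above is straightforward to reproduce. The sketch below uses made-up occupation rows (Karpathy's actual dataset covers 342 occupations and roughly 143 million workers); the names, worker counts, and scores are illustrative only.

```python
# Hypothetical illustration of an employment-weighted average exposure score.
# The rows below are invented for the sketch, not Karpathy's actual data.
occupations = [
    # (occupation, number of workers, exposure score 0-10)
    ("Medical transcriptionist", 50_000, 10),
    ("Accountant", 1_400_000, 9),
    ("Roofer", 200_000, 0),
    ("Home health aide", 3_700_000, 1),
]

total_workers = sum(w for _, w, _ in occupations)

# Weight each occupation's score by how many people hold that job.
weighted_avg = sum(w * s for _, w, s in occupations) / total_workers

# Share of workers in "high exposure" occupations (score of 7 or higher),
# the same cut Karpathy uses for his 42% figure.
high_exposure_share = (
    sum(w for _, w, s in occupations if s >= 7) / total_workers
)

print(f"weighted average exposure: {weighted_avg:.1f}")
print(f"high-exposure share: {high_exposure_share:.0%}")
```

Weighting by employment matters: a 10/10 score for a small occupation like medical transcription moves the national average far less than a mid-range score for a multi-million-worker occupation.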

That headline number, though, hides enormous variation. And it's in the extremes where things get really interesting.

The High-Exposure Reality: 59.9 Million Workers

Of those 143 million workers, roughly 59.9 million — or 42% of the workforce — are in occupations scoring 7 or higher on Karpathy's scale [Fact]. These aren't fringe jobs. Collectively, they earn approximately .7 trillion in annual wages [Fact]. That's not a rounding error in the economy; it's the economy.

Who's at the top of the list? Medical transcriptionists scored a perfect 10 out of 10 [Fact] — essentially every core task in the role is something an LLM can already do well. Accountants and lawyers both scored 9 out of 10 [Fact], reflecting how much of their work involves processing, analyzing, and generating text-based documents. If you work in one of these fields, the data doesn't mean you'll be unemployed next year, but it does mean the nature of your work is likely to change dramatically.

At the other end, roofers scored 0 out of 10 [Fact], home health aides scored 1 out of 10 [Fact], and construction laborers also scored 1 out of 10 [Fact]. The pattern is unmistakable: jobs that require physical presence, manual dexterity, and real-world environmental judgment remain almost entirely outside AI's reach.

The Income Paradox: Higher Pay, Higher Risk

Perhaps the most striking finding is the relationship between income and AI exposure. Workers earning ,000 or more annually face an average exposure score of 6.0 out of 10, while those earning under ,000 score just 3.4 out of 10 [Fact]. Education shows a related, though not perfectly monotonic, pattern: workers with a bachelor's degree average 5.7, compared to 4.7 for those with professional degrees [Fact].

This directly contradicts the old automation playbook, where factory workers and cashiers were the ones losing sleep over machines. Generative AI flips the script. It excels at precisely the tasks that high earners get paid for: analyzing complex documents, drafting professional communications, synthesizing research, and generating structured outputs. The barista and the plumber are, paradoxically, safer than the corporate lawyer.

This finding echoes what we've been tracking at AI Changing Work. Our own data shows similar patterns — software developers, for instance, face significant AI exposure on code generation and documentation tasks, while their debugging, architecture, and stakeholder communication work remains harder to automate.

How This Compares to Other Research

Karpathy's analysis doesn't exist in a vacuum, and placing it alongside other major studies reveals both convergence and tension.

OpenAI's own "GPTs are GPTs" paper (Eloundou et al., 2023) estimated that about 80% of US workers are in occupations where at least 10% of tasks could be affected by LLMs [Fact]. That's a broader but shallower claim than Karpathy's — it says most people feel some impact, without specifying how much.

The Anthropic Economic Index (2025), drawing from millions of real-world API conversations, found that AI is currently used more for augmentation than replacement — workers using AI to do their existing jobs better, rather than AI doing the jobs outright [Fact]. Only about 4% of observed AI usage constituted full automation of tasks [Fact]. This is a critical reality check: exposure potential and actual displacement are very different things.

Brookings Institution researchers have consistently argued that the labor market data simply doesn't show the mass displacement that exposure scores might predict [Fact]. Employment in supposedly high-risk occupations has remained remarkably stable, suggesting that adoption lags, organizational inertia, regulatory requirements, and human preferences for human interaction all act as buffers.

So where does Karpathy's work fit? Think of it as a ceiling estimate [Claim] — what AI could theoretically do based on task descriptions, not what it is doing or will do on any specific timeline. The gap between that ceiling and reality is where human judgment, institutional complexity, and market dynamics live.

What Karpathy's Method Gets Right — and Wrong

Karpathy's approach has real strengths. He used the BLS's own detailed task descriptions rather than inventing categories, and his scoring was systematic across all 342 occupations. The transparency of a simple 0-10 scale makes results immediately interpretable.

But there are important limitations to keep in mind [Claim]. The scores are generated by an LLM evaluating its own capabilities — essentially asking AI how much of each job it thinks it can do. That creates an obvious confidence bias. LLMs are notoriously poor at knowing what they don't know. A model might rate legal document review at 9/10, not accounting for the contextual judgment, client relationship nuances, and regulatory constraints that make real legal work much harder than processing legal text.

The analysis also treats each occupation as a monolith. A "lawyer" who drafts contracts all day faces very different AI exposure than a trial lawyer who spends most of their time in courtrooms reading juries. Similarly, "accountants" ranges from bookkeepers doing data entry to forensic accountants investigating complex fraud schemes.

Finally, Karpathy's method doesn't account for the complementarity effect — the well-documented phenomenon where AI tools make human workers more productive rather than replacing them, sometimes increasing demand for their skills [Claim].

What This Means for You

If your job scored high on Karpathy's scale, the worst response is panic. The second worst is denial. The data points to a more nuanced reality:

The task matters more than the title. Within any high-scoring occupation, some tasks are highly automatable and others aren't. Focus on understanding which of your specific tasks are most exposed, and start shifting your time toward the ones that aren't.

AI fluency is becoming non-negotiable. Across every high-exposure occupation, the workers who learn to use AI tools effectively will outcompete those who don't. This isn't about becoming a programmer — it's about understanding what AI can and can't do in your specific domain.

The timeline is uncertain but the direction is not. Whether the full impact takes 3 years or 15, the trajectory toward greater AI capability in knowledge work is clear. Use the uncertainty in timing to your advantage — start adapting now while the job market still values your existing skills.

For a deeper look at how AI affects your specific occupation, explore our detailed analysis pages for any of the 1,000+ occupations we track.

Sources

  • Karpathy, A. (2026). "AI Exposure Score for US Occupations." Analysis based on BLS Occupational Outlook Handbook data.
  • Eloundou, T. et al. (2023). "GPTs are GPTs: An early look at the labor market impact potential of large language models." OpenAI.
  • Anthropic. (2025). "The Anthropic Economic Index."
  • Brookings Institution. (2025-2026). Multiple reports on AI and labor market stability.

Update History

  • 2026-03-22: Initial publication based on Karpathy's AI exposure analysis of 342 US occupations.

This article was generated with AI assistance using data from the cited sources. All factual claims are attributed and tagged with confidence indicators ([Fact], [Claim], [Estimate]). For detailed occupation-level data, visit the individual occupation pages linked above. Learn more about our AI-assisted content process.


Tags

#ai-exposure #karpathy #labor-market #white-collar-automation #ai-risk-assessment