Updated: March 22, 2026

Anthropic Engineers Use AI for 59% of Their Work — What Their Internal Data Reveals

Anthropic surveyed 132 engineers and analyzed 200,000 Claude Code transcripts. AI usage doubled to 59%, productivity grew 50%, and 27% of AI-assisted work was entirely new.

What happens when an AI company turns the microscope on itself? Anthropic just published the answer, and the numbers are striking — not because they are impossibly high, but because they reveal exactly how AI integration actually looks in practice.

In August 2025, Anthropic surveyed 132 of its own engineers and researchers, conducted 53 in-depth interviews, and analyzed over 200,000 internal Claude Code transcripts spanning February through August 2025. [Fact] The result is one of the most granular looks we have at how knowledge workers use AI day to day — not in a hypothetical survey, but in their actual workflows.

From 28% to 59%: The Usage Curve Is Steepening

A year ago, Anthropic employees reported using Claude in about 28% of their work. [Fact] By August 2025, that figure had jumped to 59% — more than doubling in twelve months. [Fact]

Productivity gains followed a similar trajectory. Self-reported productivity improvement went from 20% to 50% over the same period. [Fact] And 14% of respondents reported gains exceeding 100% — essentially doubling their output with AI assistance. [Fact]

These are not hypothetical projections. They come from people who build AI tools and use them every day. If anyone should be good at getting value from AI, it is the engineers who create it. That context matters, and we will come back to it.

What Engineers Actually Use AI For

The most common daily use case might surprise you: debugging. Fifty-five percent of respondents use Claude every day to track down bugs. [Fact] Code understanding comes second at 42%, followed by feature implementation at 37%. [Fact]

Design and planning tasks — the higher-level thinking that requires architectural judgment — remain the area with the lowest AI adoption. [Fact] Engineers are choosing to delegate tasks where the output is easily verifiable: if Claude writes buggy code, the test fails and you know immediately. If Claude makes a bad design decision, you might not discover the consequences for months.

This pattern of selective delegation is consistent with what we have seen in our analysis of Anthropic's Economic Index, which found that AI use clusters around "augmentation" rather than full automation. Workers remain in the loop, choosing which tasks to hand off based on risk and verifiability.

The 27% That Changes Everything

Perhaps the most consequential finding: 27% of AI-assisted work would not have been done at all without AI. [Fact] These are not tasks that got faster — they are tasks that simply would not have existed. Engineers used Claude to explore unfamiliar codebases, write tests they would have skipped, fix minor annoyances (so-called "papercut fixes" accounted for 8.6% of Claude Code usage [Fact]), and prototype ideas that felt too time-consuming to attempt manually.

This challenges the simple narrative that AI either "replaces" or "augments" human work. A substantial chunk of AI's impact is creating entirely new work — expanding what individuals consider feasible within their time constraints.

For software developers and computer programmers, this is a meaningful signal. AI is not just making existing tasks faster; it is expanding the scope of what one person can accomplish. A backend engineer can now build a frontend interface. A researcher can create data visualizations without learning a new framework. The boundary between specializations is blurring.

Growing Autonomy, Growing Concerns

Claude Code's autonomy has expanded measurably. [Fact] The number of consecutive tool calls — actions Claude takes without human intervention — doubled from about 10 to 20 over six months. Meanwhile, human turns per conversation dropped 33%, from 6.2 to 4.1. [Fact]

Engineers are stepping back and letting AI handle longer stretches of work independently. Feature implementation as a use case grew from 14% to 37%, and even design and planning work climbed from 1% to 10%. [Fact]

But the interviews reveal an undercurrent of concern. One engineer noted that "when producing output is so easy and fast, it gets harder and harder to actually take the time to learn something." [Fact] Another pointed to a paradox: using Claude effectively requires exactly the kind of coding expertise that might atrophy from relying on Claude too heavily.

Some reported short-term optimism paired with long-term uncertainty. As one put it: AI will likely "make me and many others irrelevant" eventually. [Fact] This is not the voice of a technophobe — it is someone who builds these systems for a living.

What This Means for Software Professionals

For software developers, data scientists, and computer programmers, this study offers both encouragement and a warning.

The encouragement: AI is currently making developers more productive, not replacing them. Anthropic's merged pull requests per engineer per day increased by 67% [Fact], but headcount did not shrink correspondingly. The work expanded to fill the new capacity.

The warning: the trajectory is clear. Usage doubled in one year. Autonomy doubled in six months. Design tasks — long considered the most human part of engineering — are starting to be delegated too. If you are a developer whose primary value is writing code rather than understanding problems, the comfortable middle ground is eroding.

[Claim] The developers who will thrive are those who get good at the meta-skill: knowing when to delegate, what to verify, and how to direct AI effectively. This study shows that even at an AI company, more than half of respondents only fully delegate 0-20% of their work. [Fact] The skill of the future is not prompting — it is judgment.

The Caveat You Should Not Ignore

Anthropic employees are not typical knowledge workers. They build Claude, understand its capabilities intimately, and work in an environment designed to maximize AI adoption. [Claim] If the ceiling of AI productivity gains is around 50% with a 59% integration rate, most companies operating with less AI expertise and weaker tooling will see substantially lower numbers.

The study also acknowledges significant limitations: selection bias toward engaged users, social desirability effects in non-anonymous responses, and the inherent difficulty of self-reporting productivity gains. [Fact]

Still, this is valuable precisely because it shows the upper bound of what current AI can do for technical work. It is a preview, not a prophecy — but it is one that every software professional should pay attention to.

Update History

  • 2026-03-23: Initial publication based on Anthropic internal study (December 2025).

This analysis was generated with AI assistance. All factual claims are tagged with [Fact], opinions and interpretations with [Claim], and projections with [Estimate]. Source data and methodology details can be found in the linked paper. For detailed occupation-level data, visit individual occupation pages.


Tags

#anthropic #ai-productivity #software-development #claude-code #internal-data