Anthropic’s Data Proves Your Team’s AI Fluency Matters More Than the Model You Pick

I keep hearing the same conversation in enterprise AI meetings. Which model should we standardise on? Should we wait for GPT-6? Is Claude better than Gemini for our use case?

Anthropic just published data that makes these conversations largely irrelevant. Their fifth Economic Index report, released March 24, provides hard evidence that a user's skill and experience with AI tools are a bigger determinant of outcomes than model selection.

The data is specific, controlled, and uncomfortable for anyone who thinks buying the right licence is the hard part of AI adoption.

The Core Finding

High-tenure Claude users — those with six or more months of experience on the platform — have a 10% higher success rate in their conversations compared to newer users. This finding holds after controlling for task type, country, language, model selection, and use case.

Let me repeat that: same model, same task, different outcomes — driven entirely by user experience.

Anthropic’s researchers ran multiple regression specifications to rule out obvious confounders. Even in the most stringent tests — comparing experienced and new users doing the exact same narrowly defined task — the success gap was 3 to 4 percentage points. The full model with all controls shows 4 percentage points, roughly a 10% relative improvement (consistent with a baseline success rate in the neighbourhood of 40%).
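To make the arithmetic concrete, here is a minimal sketch of how a percentage-point gap translates into a relative improvement. The numbers below are illustrative assumptions chosen so the figures line up, not Anthropic's raw data:

```python
def relative_improvement(baseline_rate: float, gap_pp: float) -> float:
    """Convert an absolute percentage-point gap into a relative improvement
    over the baseline rate (both expressed as fractions)."""
    return gap_pp / baseline_rate

# Illustrative: a 4-percentage-point gap over a hypothetical ~40% baseline
baseline = 0.40  # assumed success rate for newer users
gap = 0.04       # the 4-point gap from the full model with controls

print(f"{relative_improvement(baseline, gap):.0%}")  # prints "10%"
```

The point of the exercise: "percentage points" measure the absolute gap, while "10% higher" is relative to where new users start, so the two figures describe the same result.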

What Experienced Users Actually Do Differently

The report breaks down the behavioural differences between high-tenure and low-tenure users, and it reads like a playbook for AI fluency:

They match models to tasks. Experienced users are more likely to select Opus (Anthropic’s most capable model class) for complex, high-value work and Sonnet for simpler tasks. For every additional $10 of hourly wage associated with a task, Opus usage increases by 1.5 percentage points. API users show even stronger model-switching behaviour, with a 2.8 percentage point increase per $10 of task value.

They collaborate rather than delegate. High-tenure users are more likely to use iterative, feedback-loop interaction patterns. They treat AI as a thought partner, not a command-line tool. Newer users are more likely to use directive patterns — give a single instruction and accept whatever comes back.

They bring harder problems. The tasks that experienced users bring to Claude require almost a full additional year of formal education to understand. They’re doing corporate financial analysis, manuscript revision, AI research — not checking sports scores or writing haikus.

They use AI for work, not entertainment. High-tenure users have 10% fewer personal conversations and are 7 percentage points more likely to be using Claude for work-related tasks.

Why Model Selection Anxiety Is Misplaced

Here’s the uncomfortable implication: most enterprises are optimising the wrong variable. They’re spending weeks evaluating model benchmarks, running proof-of-concept comparisons between vendors, and debating context window sizes. Meanwhile, their teams are using whatever model they have access to at a fraction of its potential because they haven’t invested in learning how to use it effectively.

Anthropic’s data doesn’t say models don’t matter at all. More capable models do perform better on harder tasks. But the data clearly shows that the gap between a skilled user on an average model and an unskilled user on the best model is significant and measurable. The user’s skill is the larger variable.

In my experience working with enterprise AI deployments, this rings true. I’ve seen teams on older models outperform teams on cutting-edge models purely because they’d spent months building effective prompting habits, workflow integration patterns, and quality review processes.

The Learning Curve Is Real — And It Compounds

One detail in Anthropic’s report deserves special attention: the level of education reflected in user prompts rises by almost a full year of formal schooling for every additional year of Claude usage. People don’t just learn to use AI. They learn to bring increasingly sophisticated work to it.

This is a compounding effect. The more experience you have, the harder the problems you tackle with AI, the more value you extract, and the more experience you gain. It’s a flywheel.

The inverse is also true. Teams that delay adoption, that limit AI access to “approved use cases,” that treat AI as a productivity tool rather than a thinking partner — they’re falling behind on a curve that compounds quarterly.

What This Means for Your AI Strategy

Stop optimising for model selection. Start optimising for user development. Your choice between GPT-5, Claude Opus, and Gemini matters far less than whether your people know how to use whatever model you give them. Invest in training, practice time, and feedback loops.

Measure engagement depth, not just adoption breadth. How many people have an AI licence is a vanity metric. How many people have been actively using AI tools for more than six months — and how their usage patterns have evolved — is the metric that actually predicts value creation.

Create the conditions for learning-by-doing. Anthropic’s data suggests that experience itself is the teacher. People get better at AI by using AI. The implication: give your teams time, permission, and encouragement to experiment. Structured training helps, but sustained practice is what moves the needle.

Watch for the internal skill gap. The same inequality Anthropic sees globally — where early adopters pull further ahead of late adopters — is almost certainly forming inside your organisation right now. The teams that started using AI six months ago are producing measurably better results than the teams that started last month.

The Bottom Line

Anthropic’s Economic Index provides the first large-scale empirical evidence for something many of us have suspected: AI fluency is the competitive advantage, not AI access.

The model wars make for good headlines. But the real differentiator in 2026 isn’t which model you chose. It’s how deeply your people have learned to work with it. And that’s a gap you can only close with time, practice, and organisational commitment.

The evidence is in. Act on it.
