Research shows AI helps people do parts of their job faster. In an observational study of Claude.ai data, we found AI can speed up some tasks by 80%. But does this increased productivity come with trade-offs? Other research shows that when people use AI assistance, they become less engaged with their work and reduce the effort they put into doing it—in other words, they offload their thinking to AI.
It’s unclear whether this cognitive offloading can prevent people from growing their skills on the job, or—in the case of coding—understanding the systems they’re building. Our latest study, a randomized controlled trial with software developers as participants, investigates this potential downside of using AI at work.
This question has broad implications—for how to design AI products that facilitate learning, for how workplaces should approach AI policies, and for broader societal resilience, among others. We focused on coding, a field where AI tools have rapidly become standard. Here, AI creates a potential tension: as coding grows more automated and speeds up work, humans will still need the skills to catch errors, guide output, and ultimately provide oversight for AI deployed in high-stakes environments. Does AI provide a shortcut to both skill development and increased efficiency? Or do productivity increases from AI assistance undermine skill development?
In a randomized controlled trial, we examined 1) how quickly software developers picked up a new skill (in this case, a Python library) with and without AI assistance; and 2) whether using AI made them less likely to understand the code they’d just written.
We found that using AI assistance led to a statistically significant decrease in mastery. On a quiz that covered concepts they’d used just a few minutes before, participants in the AI group scored 17% lower than those who coded by hand, or the equivalent of nearly two letter grades. Using AI sped up the task slightly, but this didn’t reach the threshold of statistical significance.
Importantly, using AI assistance didn’t guarantee a lower score. How someone used AI influenced how much information they retained. The participants who showed stronger mastery used AI assistance not just to produce code but to build comprehension while doing so—whether by asking follow-up questions, requesting explanations, or posing conceptual questions while coding independently.
I would love to read an independent study on this, but this is from Anthropic (the guys that make Claude) so it’s definitely biased.
Speaking for myself, I’ve been using LLMs to help bridge small gaps in knowledge. For example, I know what I need to do, I just don’t know or remember the specific functions or libraries I need to do it in Python. An LLM is extremely useful in those moments, and it’s faster than searching and asking on forums. And to be transparent, I did learn a few tricks here and there.
But if someone lets the LLM do most of the work - like vibe coders - I doubt they will learn anything.
I do the same. I start with the large task, break it into smaller chunks, and I usually end up writing most of them myself. But occasionally there will be one function that is just so cookie-cutter, so insignificant to the overall program, and so far outside my normal area of expertise that I’ll offload it to an LLM.
They actually do pretty well when given a targeted task like that, with very specific inputs and outputs, and I can learn a bit by looking at what they end up generating. I’d say only about 5-10% of the code I write falls into the category where an LLM could realistically take it on, though.
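As a made-up illustration of that kind of narrow, well-specified task (the function and its spec are invented for this example, not something I actually delegated), something like “flatten a nested dict into dotted keys” has completely pinned-down inputs and outputs:

```python
def flatten_dict(nested: dict, parent_key: str = "", sep: str = ".") -> dict:
    """Flatten a nested dict into a single level with dotted keys.

    >>> flatten_dict({"a": {"b": 1}, "c": 2})
    {'a.b': 1, 'c': 2}
    """
    flat = {}
    for key, value in nested.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            # Recurse into nested dicts, carrying the accumulated key prefix
            flat.update(flatten_dict(value, new_key, sep=sep))
        else:
            flat[new_key] = value
    return flat
```

With a spec that tight there’s little room for the model to wander, and the result is small enough to actually read and learn something from.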
OP, please add a fat disclaimer at the bottom that this is from Anthropic, a major AI company.
Importantly, using AI assistance didn’t guarantee a lower score. How someone used AI influenced how much information they retained. The participants who showed stronger mastery used AI assistance not just to produce code but to build comprehension while doing so—whether by asking follow-up questions, requesting explanations, or posing conceptual questions while coding independently.
importantly, in our own funded study, we found that those who used our product the most did the best
“you’re holding it wrong”
I’m trying out Claude on a problem at work that has been frustrating: lots of unexpected edge cases that require big changes.
I definitely know less about my “solution” (it’s not done yet, but it’s getting close) than if I had actually sat down and done it all myself. And it’s taken a lot of back and forth to get to where I am.
It’d probably have gone better if, once Claude provided a result, I’d gone through it completely and made sure I understood every aspect of it. But man, when it just spits out a full script, the urge to just run it and see if it works is strong. And if it’s close but not quite right, the feeling is “well, let me just ask about this one part while I’m already here,” and then you get a new complete script to try. That loop continues, and I never get around to really going through and fully understanding the code.
Do you tell Claude to make a plan first?
That helps me tremendously. Whenever something needs to be modified, I tell it to update the plan first, and to stick to the plan.
That way, Claude doesn’t rewrite code that has already been implemented as part of the plan.
And understanding the plan makes it easier to understand the code.
Sometimes if I know there will be a lot of code produced, I’ll tell it to add a quick comment on every piece it adds or modifies with a reference to the step in the plan it refers to. Makes code reviewing much more pleasant and easier to follow. And the bugs and hallucinations stick out more too.
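To picture what that looks like, here’s a made-up snippet in that style; the plan steps, function names, and logic are all invented for illustration:

```python
import time

# Plan step 2.1: validate the incoming record before any processing
def validate_record(record: dict) -> None:
    missing = [field for field in ("id", "timestamp") if field not in record]
    if missing:
        raise ValueError(f"record is missing required fields: {missing}")

# Plan step 2.2: wrap the flaky call in retries with exponential backoff
def call_with_retry(func, attempts: int = 3, base_delay: float = 0.5):
    for attempt in range(attempts):
        try:
            return func()
        except OSError:
            if attempt == attempts - 1:
                raise
            # Plan step 2.2: back off between attempts (0.5s, 1s, 2s, ...)
            time.sleep(base_delay * 2 ** attempt)
```

When every chunk points back at a plan step, anything that doesn’t map to the plan is the first thing you question in review.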
Agreed, using a planning phase makes a huge difference. It breaks the implementation into steps, which makes reviewing or manually refactoring parts of the code far easier.