You’re Shipping Faster, But Learning Less Than You Think
AI speeds up work but can quietly erode skill. How to use it as a helper without losing the ability to explain, debug, and grow.
A few months ago, I watched a friend ship a little browser tool in one weekend. It worked. It looked clean, had no obvious bugs, and even came with a README that covered everything.
On Monday, one tiny thing broke. A button stopped saving settings after a refresh. Nothing dramatic, just annoying.
We jumped on a call. It was late, someone’s microphone kept rubbing against a hoodie string, and I was eating the kind of sad desk snack that leaves salt on your fingers. He pulled up the code and started scrolling like he was looking for a familiar street sign in a city he’d visited once.
He built almost all of it with an AI assistant (Claude 4.5).
When the bug appeared, he had no mental map. Not “I forgot the exact line,” but “I don’t know what this part is supposed to do.” He could describe the UI. He could describe the feature. He couldn’t explain the flow. He couldn’t predict where state lived, or what got cached, or what re-ran when the page loaded.
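To make that concrete, here's a minimal sketch of the kind of thing he couldn't see, with invented names and the assumption that settings were meant to round-trip through localStorage. None of this is his code; it's just the class of bug that stays invisible without a mental map of where state lives and what runs on load.

```typescript
// Invented example, not his actual code: settings that are supposed to
// survive a refresh by round-tripping through localStorage.

type Settings = { darkMode: boolean; autosave: boolean };

const DEFAULTS: Settings = { darkMode: false, autosave: true };

// In-memory copy that the UI reads from.
let settings: Settings = { ...DEFAULTS };

// Runs on page load: restore whatever was saved last time.
function loadSettings(): void {
  const raw = localStorage.getItem("settings");
  if (raw) {
    settings = { ...DEFAULTS, ...JSON.parse(raw) };
  }
}

// Runs when the button is clicked: updates the in-memory copy...
function updateSetting<K extends keyof Settings>(key: K, value: Settings[K]): void {
  settings[key] = value;
  // ...but if nothing ever writes it back, every change silently dies
  // on refresh. The UI keeps "working" until you reload.
  // localStorage.setItem("settings", JSON.stringify(settings));
}

loadSettings();
```

You can't even form that hypothesis if you don't know whether the settings live in memory, in localStorage, or on a server somewhere.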
So he did what most people do now. He pasted the error into the model and waited for a fix.
It suggested a change. He applied it, and he got a new error.
Another suggestion. Another patch, another new error. And now the tool still didn’t work, plus the code was drifting into a shape neither of us recognized.
I’ve been around software teams, analytics teams, and recruiting teams long enough to recognize the pattern. Not incompetence, not laziness, but something else entirely: a specific kind of competence that ships but can’t explain itself.
AI makes you faster at producing outputs. It also makes it easier to skip the part that turns work into skill.
People don’t notice because everything looks fine until it doesn’t. The first time you need to debug, extend, teach, defend, or rebuild without the helper, you find out what you actually learned.
What You Lose Before You Realize It
The problem isn’t that AI is “too good.” The problem is that it’s good in a way that changes your posture.
Before, you wrote a thing, and your brain had to keep track of cause and effect. You held a few constraints in working memory, you tested a guess, you got it wrong, you felt the friction, and then you adjusted. You built a messy internal model as you went.
Now the workflow is different. You prompt. You receive. You skim. You accept or reject. The role quietly shifts from “builder” to “editor.”
That sounds fine, because editing still feels like thinking. Sometimes it is; often it isn’t.
There’s research on what happens when humans rely on automated aids. Parasuraman and Riley1 framed it as misuse and disuse of automation, including over-reliance and reduced monitoring, which can lead to errors being missed even by capable operators.
That’s the part most people associate with planes and medical devices. The same pattern shows up in everyday knowledge work, just in a less dramatic costume.
You accept output you didn’t generate.
You stop checking parts that used to be effortful.
You lose the ability to notice what’s off.
It gets worse, because AI is fluent. It produces confident prose, plausible code, tidy summaries. Fluency tricks the brain. You feel like you understood because you recognized the words and the structure. Recognizing something is easy, but recalling it from scratch is hard.
And retrieval is the expensive step that builds the wiring.
One reason learning sticks is that forcing yourself to pull information out of your head changes memory. Roediger and Karpicke2 showed that taking tests improves later retention, even without feedback, compared to repeated studying, especially when you measure days later rather than minutes later.
That’s not a motivational poster idea. It’s a finding that shows up again and again, including in applied settings like medical education, where repeated testing can improve long-term retention compared to restudying.
AI-heavy workflows remove a lot of those forced retrieval moments.
You don’t have to recall the syntax.
You don’t have to wrestle the structure.
You don’t have to generate the explanation.
You just judge the output.
There’s another angle here that makes me uneasy, because it’s not only about memory. It’s about where your brain expects knowledge to live.
Sparrow, Liu, and Wegner’s3 “Google effects on memory” paper found that people tend to remember where to find information rather than the information itself, and that thinking about hard questions can prime thinking about computers.
That’s not “internet bad.” It’s more specific: when external access is reliable, the brain adapts by shifting effort away from internal storage. You can argue that this is rational. You can also admit it changes you.
A quick digression that I can’t fully defend, except that I keep noticing it: I’m seeing younger people get surprisingly calm about not knowing basics. Not embarrassed, just calm. Like “why would I store that in my head.” Maybe it’s fine. Maybe it’s healthy. I don’t know. I just don’t think we’ve sat with what it means when the default is “I can fetch it” for everything, including the parts you used to need in order to think.
The Change You Don’t Feel Right Away
If you want the short explanation, it’s this: skill comes from producing, not approving.
When you generate an answer yourself, even a bad one, you expose gaps. You get immediate error signals. You also encode the path you took. That path is often more valuable than the final answer.
There’s a classic finding called the generation effect. People remember information better when they generate it themselves rather than just reading it. Slamecka and Graf4 demonstrated this across multiple experiments in 1978.
When you use an AI assistant for first drafts, first solutions, first explanations, you skip generation. You get the final product without the cognitive work that makes it “yours.”
The result looks like competence. You can deliver. You can talk about it at a high level. You might even sound smarter because the phrasing is cleaner than what you’d write on your own.
Then you hit the moment that requires an internal model.
Debugging is a good example. You can’t debug by vibes. You need a chain of reasoning about what should happen, what did happen, and where the divergence likely lives. That chain comes from having built similar chains before.
People assume the danger is “hallucinations.” That’s real, but it’s the loud risk. The quieter risk is that you stop doing the kind of thinking that makes you resilient.
I’m not arguing for some romantic version of suffering. A lot of struggle is wasted time. I’ve wasted plenty of it.
I’m saying there’s a category of friction that’s doing a job, and AI removes it by default.
Why Smooth Is a Warning Sign
It’s hard to accept, but when you’re focused on output, a learning process that feels smooth is often a bad sign.
When studying feels easy, you’re usually rereading, recognizing, nodding along. That produces comfort, not capability. When studying feels harder, you’re often retrieving, generating, or spacing practice. That discomfort is part of why it sticks.
The testing effect research is one example. Another is spacing. Another is interleaving. They all share the same annoying feature: the practice feels worse in the moment.
AI makes work feel smoother. It reduces pauses. It reduces dead ends. It reduces the “wait, why isn’t this working” loop.
That loop is where a lot of learning lives.
Automation research also points to a monitoring problem: when a system usually works, humans monitor it less, and they get worse at noticing when it fails. Parasuraman and Riley’s misuse concept includes complacency and reduced vigilance.
Put those together and you get a nasty combo:
Less generation
Less retrieval
Less monitoring
More fluency
More trust
I’ve seen this on small teams and side projects, not because people are careless, but because they’re busy. They’re rewarded for shipping. They’re not rewarded for being able to rebuild the whole thing from scratch a month later.
Don’t get me wrong: I like these AI tools. I use them every day. I’ve shipped faster because of them. I also notice that if I let the model do the first draft of my thinking too often, I get lazier about holding the whole system in my head.
I catch myself doing it with writing.
I’ll ask for a paragraph, think “yeah that’s basically it,” paste it, and move on. Then someone asks me a sharp question about a claim inside it, and I have that awful moment where I realize I agreed with a sentence I didn’t fully earn.
That’s on me.
It also feels like a predictable outcome of the tool.
One more messy story: I was building a simple salary calculator with a friend, just a weekend thing, nothing tied to my job. We were half arguing about whether the rounding should happen before or after deductions. It was raining. Someone kept sending memes in the group chat as if we weren’t actively trying to finish. I used the model to generate the tax logic for one country I didn’t know well.
It produced something that looked correct. Variable names, comments, even edge cases.
A week later a user emailed: “Your net pay is off by about 70 euros.”
We traced it back. The model had assumed a threshold that used to be true but wasn’t anymore. I should’ve checked the source law table. I didn’t. I saw tidy code and felt safe.
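Here’s roughly the shape of that failure, as a sketch with invented numbers rather than any country’s real rules: one stale constant, internally consistent code, comments that sound right.

```typescript
// Invented numbers, not any country's real tax rules: a sketch of how a
// single stale threshold quietly shifts net pay.

const CONTRIBUTION_RATE = 0.10;

// What the model "knew": the threshold from an older tax year.
const STALE_THRESHOLD = 14_000;

// What the current rules actually say (hypothetical update).
const CURRENT_THRESHOLD = 14_700;

function contribution(gross: number, threshold: number): number {
  // The rate only applies to income above the threshold.
  return Math.max(0, gross - threshold) * CONTRIBUTION_RATE;
}

function netPay(gross: number, threshold: number): number {
  return gross - contribution(gross, threshold);
}

const gross = 32_000;

// With the stale threshold, too much income gets the contribution applied,
// so net pay comes out (14_700 - 14_000) * 0.10 = 70 euros low.
console.log(netPay(gross, STALE_THRESHOLD));   // 30_200
console.log(netPay(gross, CURRENT_THRESHOLD)); // 30_270
```

Everything in that sketch would pass a casual skim. None of it survives a check against the actual table.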
That mistake doesn’t prove AI makes you dumb. It proves something smaller and more annoying: output that looks finished makes it easier to skip verification.
And yes, I’ve also seen the opposite, where AI helped someone learn faster because they used it like a tutor and forced themselves to explain each step. So the effect isn’t automatic; it depends on posture.
My Current System (Imperfect but Real)
Don’t ban AI, but constrain its use. On purpose. In a way that keeps the learning loop intact.
Here’s a system I’ve used, and still fail at sometimes.