When the AI breaks its own fix

"A specific kind of break keeps showing up in AI-built apps: the fix you got last week is gone, and the AI doesn't remember making it."

On Tuesday, you asked the AI to fix the email verification. It was broken. New users were getting the email, but the link wouldn't log them in. The AI looked at it, made a change, and the link worked. You moved on.

On Friday, you asked the AI to add a small thing to the signup form. A field for company name. Trivial. The AI made the change. The signup form now has the field. The email verification is broken again.

Not in the same way it was broken on Tuesday. A new way. But broken.

You ask the AI what happened. It looks at the email verification, says it sees the problem, writes a fix. The fix works. Friday's signup form is now broken. You ask the AI to fix that, and the dashboard is suddenly empty for half your users.

This is the moment the post is about. The one where you stop asking what's broken and start wondering whether anything you've fixed is still fixed.

It's not random. It's a shape.

It's tempting to read this kind of break as bad luck. The AI happened to mess up. You'll be more careful next time. The next round, when something else falls over, you blame the request, or the prompt, or the day's bigger fix.

It isn't bad luck. It's a specific pattern, and once you've seen it twice, you can spot it from across the room.

The shape is this: a thing the AI fixed in one conversation gets broken, in a later conversation, by the same AI. Not because the new request is touching the old fix on purpose. Because the new request is touching code near the old fix, and the AI has no idea the old fix matters.

It doesn't remember Tuesday. It cannot remember Tuesday. Every time you open a new conversation, the AI is meeting your project for the first time. It looks at what's there, makes inferences from what it sees, and proceeds. The inferences are usually right. Sometimes they're wrong in ways that quietly remove a fix that was holding something together.

Each conversation is the AI's first day on the job

It read the project this morning. It will read it again tomorrow. Nothing carries over. Not the conclusion you reached together about why something needs to stay the way it is. Not the workaround you agreed on for the third-party tool that does that weird thing. Not the comment you both decided was important enough to leave at the top of the file.

A real coworker keeps a running model of the project in their head. They remember that the email verification was broken once, and that the fix had a particular shape. When something near that fix changes, the model raises a quiet flag.

The AI has no model. It has the file open and the request at hand. That's the whole picture.

This isn't a flaw in any one tool. It's the shape of the conversation. You will hit this with Cursor. You will hit this with Claude. You will hit this with the next thing that comes out next month. The conversation is the boundary. Past the boundary, the AI knows nothing.

Three flavors of "the AI broke its own fix"

The pattern shows up in a few different ways. They feel different in the moment. They're the same thing underneath.

The same fix, twice. The bug you fixed on Tuesday is back on Friday. The AI looks at it, recognizes the bug, applies a fix that looks similar to Tuesday's. You feel a small chill of recognition. You don't say anything. You move on. By the third time it happens, you start to suspect the fix isn't really sticking. It is sticking. The AI is just removing it again every time it touches a nearby file.

The unrelated rewrite. You asked the AI to add a field. The AI added the field, and also rewrote a piece three folders away "for consistency." The piece three folders away was the email verification fix from Tuesday. The AI didn't see why it had been written that way. It saw something it could tidy. Friday's break is Tuesday's fix, neatened into nothing.

The phantom file. You can see, with your own eyes, that a file is different from how it was last week. You ask the AI if it touched it. The AI says it didn't. Maybe it's right. Maybe a different conversation, three days ago, did. Either way, the AI is not lying. It has no record to consult. The change is real and the explanation is missing.

I see the bug. I know how to fix it.
I'm seeing this for the first time. Again.

I made the change you asked for.
I made the change, and four other changes I didn't mention.

I didn't touch that file.
No conversation I'm in touched that file.

The implementation is consistent.
Inside this one file. The rest is someone else's problem.

"Just tell it to remember" doesn't work

When founders first notice the pattern, the natural fix is to try to give the AI a memory. Add a comment at the top of the file. Write down the fix in a notes document. Tell the AI, at the start of every conversation, "do not change the email verification logic without asking."
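
To make the attempt concrete, here is a sketch of what that file-top warning tends to look like. Everything in it is hypothetical: the file name, the verifyEmailToken function, and the token-before-session detail are stand-ins for whatever your own fix happens to be, not code from any particular project or tool.

// verifyEmail.ts (hypothetical; every name here is a stand-in for your own code)
//
// DO NOT "tidy" or reorder this function. The token must be consumed BEFORE
// the session is created, or the verification link stops logging new users in.
// This ordering is the fix from Tuesday's conversation. It looks redundant. It isn't.

type UserId = string;

// Stand-in helpers so the sketch runs on its own; real ones would live elsewhere.
async function consumeToken(token: string): Promise<UserId | null> {
  // A real implementation would look the token up and invalidate it in one step.
  return token.length > 0 ? "user-123" : null;
}

async function createSession(userId: UserId): Promise<void> {
  console.log(`session created for ${userId}`);
}

export async function verifyEmailToken(token: string): Promise<boolean> {
  const userId = await consumeToken(token); // must happen first: this is the fix
  if (!userId) return false;

  await createSession(userId);
  return true;
}

The only thing protecting that ordering is the comment.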

It mostly doesn't work. Not because the AI ignores you. Because the parts of the project the AI is willing to touch include comments and notes. The next round of "I'll just tidy this up while I'm here" prunes the warning along with the rest. The notes document gets stale because nobody is looking at it. Each new conversation starts in the same place the last one started. In the present tense, with no past.

There are tools that help on the margins. Project rules files. Memory features. Larger context windows. They push the boundary further out, sometimes by a lot. They don't change the shape of the problem. The AI is still meeting the project fresh. It just has a slightly bigger sticky note this time.

You can't write your way out of a memory problem with the thing that's losing its memory.

Where this fits

The longer story is the one we wrote about in "when vibe coding stops paying off." The AI sees a slice. The project has grown past what fits in the slice. New code gets written in light of one part of the project, breaking quiet assumptions in another part. The fix-then-break loop is one of the loudest ways that gap shows itself.

It's also one of the eight signs we listed in "how to tell if your AI-generated code is any good." On its own, the same bug coming back once is normal. As a recurring pattern, it's the project asking for a different kind of attention than the AI can give.

The way to think about it: the AI is good at the slice in front of it, in the moment in front of it. The project is wider than the slice and longer than the moment. The mismatch is real and structural. It isn't a sign you did something wrong. It's the shape of building this way.

What helps

There isn't a fix you can apply tonight that will make the AI remember what it did last week. There are a few smaller things that help.

When something breaks that you'd already fixed once, stop. Don't ask the AI to fix it again. Open the conversation that did the original fix, if you still have it, and reread what you and the AI agreed on. The reason for the original fix matters more than the fix itself.

Don't accept the AI's "while we're at it" rewrites in any conversation that's also doing real work. Mixing a tidy-up with a feature change is the single most common way the post-fix bug gets introduced. If a rewrite is worth doing, ask for it on its own, in a separate conversation, where you can see what it touches.

If you're already in the middle of an incident and the question is what changed in the last hour, that's a different kind of moment. We wrote about it in "how to know what just broke." The pattern in this post is the slower one. The same area keeps breaking again across days and weeks, despite fixes that seemed clean at the time.

When that pattern repeats, prompting harder is not the move. The project is telling you the area has a shape the AI can't see, and that the AI will keep walking through it as if it were a doorway. The cheaper thing is to bring in someone who can see the shape.

What we do in a session

You share your screen. We open the project together. We look at the area that keeps breaking. We name what's actually true about it. Which constraints are real, which workarounds the project actually needs, which assumptions the AI keeps quietly removing. By the end, you have a description of the area in plain language you can paste into the next conversation, and a sense of which parts the AI is safe to keep working on without close supervision.

It isn't a fix for the memory problem. There isn't one. It's a way of working around the gap, one that holds.

The AI writes the code. It can't read what it wrote yesterday.