Why AI quietly skips the security parts
Vibe coding security gaps follow one pattern. The AI builds what you ask for, and the safety lives in the things you didn't know to ask for.
You find the hole on a Sunday afternoon.
You were poking at your own app, doing the thing customers do, and you tried something dumb on purpose. Logged out. Pasted a friend's account ID into a URL. Saw their dashboard. Their settings. Their billing.
You sit there for a minute. Then you ask the AI what happened.
The AI is helpful. It looks at the file, finds the missing check, suggests three ways to add it, and offers to write the patch. It does. The patch works. You test it again. The hole is closed.
Then you stop typing.
Because the AI never said this is one of seven holes I left open in the same shape. It didn't volunteer the list. It didn't even know there was a list. You asked about the door it forgot to lock, and it locked the door. The other six doors are still unlocked, in rooms you haven't walked into yet.
That's the moment most founders meet vibe coding security. Not the breach. The realization that the breach was findable, the AI fixed it cheerfully, and you have no idea what else looks the same.
Name the pattern
Here's the thing nobody tells you when you start building this way: AI is honest about the code in front of it. It cannot be honest about the code that isn't there.
Security is almost entirely the code that isn't there. It is the limit you didn't add. The check you didn't write. The refusal you didn't think to put on the form. The error message you didn't realize was telling a stranger which emails are real. Every one of those is a piece of your app made of nothing, and the AI is precise about building what you describe. So it builds the piece you described and leaves the nothing alone.
The pattern, then, is the same across every founder we sit down with: the asked-for parts are there. The unasked-for parts are missing. The unasked-for parts are where the safety lives.
Four shapes of the same skip
Once you see the pattern, you start to recognize it everywhere. Not as a checklist, but as a shape. The shape repeats. Here are four versions of it from the conversations we have most often, in the words founders use when they describe what happened.
The asked-for door, the unasked-for rooms
You asked for a login form. You got a login form. It works. People can sign in.
What you didn't ask for is the question that comes after the door. Once someone is inside, who is allowed to look at what? The login form proves somebody is logged in. It does not prove that the somebody asking for account 42's data is the somebody who owns account 42. That second check is its own piece of code, and you never asked for it, because if you had known to ask for it, you would not have needed an AI in the first place.
This is the most common vibe coding security gap we see. The door is real. The rooms behind the door trust whoever turns the knob. The fix is small once you know to ask. Without the asking, the building is open.
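If you want to see what that second check looks like, here is a minimal sketch. It assumes an Express-style route in TypeScript; `db.getAccount`, the shape of the account record, and the auth middleware that sets the signed-in user are all hypothetical stand-ins for whatever your app actually has.

```typescript
import express from "express";

const app = express();

// Hypothetical data helper; stands in for whatever your app actually uses.
declare const db: {
  getAccount(id: string): Promise<{ id: string; ownerId: string } | null>;
};

app.get("/accounts/:id", async (req, res) => {
  // Assumed: auth middleware has already verified the session and
  // attached the signed-in user's id to the request.
  const userId = (req as unknown as { user?: { id: string } }).user?.id;
  if (!userId) return res.status(401).end();

  const account = await db.getAccount(req.params.id);
  if (!account) return res.status(404).end();

  // The unasked-for check. The login form proved someone is signed in;
  // this line proves the someone owns account :id. Without it, any
  // signed-in user can paste another account's id into the URL.
  if (account.ownerId !== userId) return res.status(403).end();

  return res.json(account);
});
```

The whole fix is one comparison. The hard part was never writing it. The hard part was knowing the route needed it.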
The happy path, the unwritten branch
You described a payment that works. The AI wrote the code that handles a payment that works. The customer pays, the account upgrades, the receipt sends. All true.
Then a real card gets declined at 2 AM. The customer's bank flags a fraud alert. The renewal three months from now silently fails. None of those were the prompt. The AI cannot write the code for the failure case if the failure case was never the conversation. So the failure case isn't there, and the first time you find out is when a customer emails to ask why their account was downgraded weeks ago.
A working app is not the same as an app that knows what to do when something stops working. The AI builds the first one by default. The second one is a different conversation, and somebody has to start it.
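Here is roughly what the unwritten branch looks like once somebody starts that conversation. This is a sketch, not a drop-in handler: the event names follow Stripe's conventions but stand in for whatever your payment provider actually sends, and all three helpers are hypothetical.

```typescript
import express from "express";

const app = express();
app.use(express.json());

// Hypothetical helpers; your app's equivalents will differ.
declare function downgradeAccount(customerId: string): Promise<void>;
declare function emailCustomer(customerId: string, template: string): Promise<void>;
declare function alertFounder(message: string): Promise<void>;

app.post("/webhooks/payments", async (req, res) => {
  // In production, verify the webhook signature before trusting the body.
  // A top-level customerId is a simplification; real providers nest this.
  const event = req.body as { type?: string; customerId?: string };
  const customerId = event.customerId ?? "unknown";

  switch (event.type) {
    case "invoice.payment_succeeded":
      // The only branch the original prompt ever produced.
      break;

    case "invoice.payment_failed":
      // The declined card at 2 AM and the renewal that silently fails.
      await emailCustomer(customerId, "payment-failed");
      await alertFounder(`Payment failed for ${customerId}`);
      break;

    case "charge.dispute.created":
      // The fraud alert from the customer's bank.
      await downgradeAccount(customerId);
      await alertFounder(`Dispute opened for ${customerId}`);
      break;
  }

  // Acknowledge receipt so the provider stops retrying.
  res.status(200).end();
});
```

Notice the happy-path case is almost empty. Most of the code that matters lives in the branches the prompt never mentioned.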
The user, not the stranger
When you say "user" to the AI, the AI pictures the user you pictured. Someone clicking through the form the way you would. Filling in their email the way you would. Pressing the button.
A stranger is something else. A stranger sends the form ten thousand times in a minute. A stranger sends an empty submission. A stranger sends a name field with a paragraph of HTML in it, or a script that signs up a fake account every two seconds until your inbox is full and your sender reputation is gone.
The AI was building for the user. The stranger never came up. So the limits and the checks and the second look at what came in, those don't exist, because in the imagined room there was no stranger to put them there for.
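The limits and checks are small once the stranger is in the picture. Here is a minimal sketch of a signup route that expects one, with a hand-rolled in-memory rate limiter; for anything real you would reach for a shared store or a maintained library, but the shape is the point.

```typescript
import express from "express";

const app = express();
app.use(express.json());

// A minimal in-memory rate limiter. Enough for a sketch, not for
// production: it resets on restart and isn't shared across servers.
const hits = new Map<string, { count: number; windowStart: number }>();
const WINDOW_MS = 60_000;
const MAX_PER_WINDOW = 5;

function tooMany(ip: string): boolean {
  const now = Date.now();
  const entry = hits.get(ip);
  if (!entry || now - entry.windowStart > WINDOW_MS) {
    hits.set(ip, { count: 1, windowStart: now });
    return false;
  }
  entry.count += 1;
  return entry.count > MAX_PER_WINDOW;
}

app.post("/signup", (req, res) => {
  // The stranger sending the form ten thousand times in a minute.
  if (tooMany(req.ip ?? "unknown")) return res.status(429).end();

  // The empty submission and the paragraph of HTML in the name field.
  const { email, name } = (req.body ?? {}) as { email?: unknown; name?: unknown };
  if (typeof email !== "string" || !email.includes("@")) {
    return res.status(400).json({ error: "invalid input" });
  }
  if (typeof name !== "string" || name.length > 100 || /[<>]/.test(name)) {
    return res.status(400).json({ error: "invalid input" });
  }

  // ...only now create the account.
  return res.status(201).end();
});
```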
The error that helps the wrong person
Your login page says user not found when the email isn't registered, and wrong password when it is. That is helpful. The AI was being helpful when it wrote it. It was helpful to you, sitting at the page, trying to figure out which thing went wrong.
It is also helpful to a person with a list of ten thousand emails who wants to know which ones have accounts on your app. The same message, spoken to the wrong audience, becomes the leak.
This one is the cleanest example of the whole pattern. The AI is not careless. It is helpful in the only way it knows how. Helpful and safe are not the same word, and the AI was never told the difference.
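The fix is to make the message boring. One more sketch, same caveats as above: the lookup and password check are hypothetical helpers, and the point is that both failure paths answer with the same words.

```typescript
import express from "express";

const app = express();
app.use(express.json());

// Hypothetical helpers; names are illustrative.
declare function findUser(email: string): Promise<{ passwordHash: string } | null>;
declare function verifyPassword(password: string, hash: string): Promise<boolean>;

app.post("/login", async (req, res) => {
  const { email, password } = (req.body ?? {}) as {
    email?: unknown;
    password?: unknown;
  };

  const user = typeof email === "string" ? await findUser(email) : null;
  const ok =
    user && typeof password === "string"
      ? await verifyPassword(password, user.passwordHash)
      : false;

  if (!ok) {
    // One message for both failures. Splitting it into "user not found"
    // and "wrong password" hands a stranger with ten thousand emails a
    // free way to learn which ones have accounts here.
    // (A subtler leak is timing: skipping verifyPassword when the user
    // is missing makes that path faster. Hashing a dummy value closes it.)
    return res.status(401).json({ error: "Invalid email or password" });
  }

  // ...create the session and respond.
  return res.status(200).end();
});
```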
Why this isn't the AI's fault, and why it isn't yours either
It is tempting to read this and decide the AI is the problem. It isn't. The AI is doing the thing it was built to do, and it does it well. It answers the question.
It is also tempting to read this and feel like you should have known. You shouldn't have. The list of vibe coding security risks the AI quietly skipped is not on a page anywhere. It is the lived experience of people who have launched apps, watched them get attacked, and learned the next thing to ask. That experience does not transfer through prompts. It transfers through someone sitting next to you, looking at your app, and saying did you ever put a limit on that signup form, because if you didn't, here is what someone will do with it on Tuesday.
The gap isn't malice. It isn't laziness. It isn't the AI being bad at its job, and it isn't you being bad at yours. It is the simple fact that LLMs answer the question they were asked, and security is everything that lives in the question nobody asked.
A peer who has seen a hundred of these knows what the unasked question usually contains. That is most of what the job is. Judgment is not a longer prompt. It is the practiced reflex of looking at an app and noticing what isn't there. You can't write that reflex into a prompt because the reflex is the thing that decides what to put in the prompt.
What to do with this
The pattern is the answer to why. It isn't the answer to what.
We wrote the what down already. The secrets that ended up in the browser. The data access that doesn't check who's asking. The payments that fail quietly. The forms that trust whatever a stranger types. They all live on the list of the things AI didn't know to add, with the small test you can run to find each one on your own app today.
This post is the reason that list exists. The other one is the list.
If you want a faster version of the same thing, we do this on a live call. You share your screen, we open your app together, and we name the unasked-for parts as we find them. We don't read every line. Nobody does. We look for the shapes from this post and call them out where they live in your app, and together we sort them into what's urgent, what can wait, and what never mattered as much as it sounded. You leave with a clearer view of where the gaps are and what's worth doing about them, in the order that fits your project.
You don't have to fix any of this alone. You also don't have to fix all of it at once. The first move is knowing what the AI didn't tell you, and that move is short.
The AI builds the parts you can describe. A peer's job is to know the parts you didn't think to type.
