Is Your Vibe-Coded App Actually Secure? 7 Vulnerabilities AI Keeps Writing
Claude Code, Codex, and Kimi can ship your entire app in a weekend. But every AI coding agent has the same blind spots. Here are the 7 security vulnerabilities we keep finding — and how to catch them in minutes.
Vibe coding changed everything. You describe what you want in plain English, and an AI agent writes the code, sets up the database, configures deployment, and ships a working app before your coffee gets cold.
Tools like Claude Code, Codex, Kimi, and Cursor have made it possible for solo founders to build in a weekend what used to take a team months. The speed is real. The productivity gains are real.
The security gaps are also real.
We run a security scanner that checks websites for vulnerabilities — 33 checks per scan, covering everything from SSL configuration to attack path analysis. Over the last few weeks, we've noticed a clear pattern: apps built with AI coding agents have a remarkably consistent set of security blind spots.
Not because the AI is bad at coding. It's actually quite good. The problem is that AI optimizes for making things work, not for making things safe. Security is almost never in the prompt, so it's almost never in the output.
Here are the 7 vulnerabilities we keep finding in vibe-coded apps. All of them are preventable. Most of them take less than 10 minutes to fix.
The 7 vulnerabilities
Hardcoded secrets in source code
This is the single most common security issue in AI-generated code. When you tell an AI agent "connect to the database" or "add Stripe payments," it writes working code — with the API key right in the source file.
```javascript
// AI-generated code — works perfectly, ships your secrets
const stripe = new Stripe("sk_live_4eC39HqLyjWDarjtT1zdp7dc");
const supabase = createClient(
  "https://abc123.supabase.co",
  "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9..."
);
```

The AI knows the code needs a key to work. It does not know that the key should live in an environment variable, never in the committed source. Even when it uses .env files, it often fails to add .env to .gitignore.
How to fix: Run gitleaks or trufflehog on your repo. Move every secret to environment variables. Rotate any key that was ever in a commit — even if you deleted it later, it's still in git history.
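The environment-variable pattern can be sketched with a tiny helper (the helper name is ours, not from any library) that reads each secret at startup and fails loudly if one is missing, instead of shipping a broken or hardcoded value:

```javascript
// Hypothetical helper: fail fast if a secret is missing from the environment
function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Secrets now come from the environment, never from source, e.g.:
// const stripe = new Stripe(requireEnv("STRIPE_SECRET_KEY"));
```

Commit a `.env.example` containing variable names only, and make sure the real `.env` is in `.gitignore`.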
Missing rate limiting on API routes
AI agents create API endpoints that work. They almost never add rate limiting. This means anyone can hit your /api/login endpoint 10,000 times per second with different password combinations. Or your /api/send-email endpoint to spam from your domain. Or your /api/ai-chat endpoint to burn through your entire OpenAI budget overnight.
How to fix: Add middleware-level rate limiting. In Next.js, use middleware.ts with IP-based throttling. Most frameworks have a rate-limit library that takes 5 lines to configure. Start with 60 requests per minute per IP on auth routes, 10 per minute on email/AI routes.
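The core of a fixed-window limiter fits in a dozen lines. This is a framework-agnostic sketch, not a specific library's API; note that an in-memory Map only works for a single server instance — use a shared store like Redis if you scale out:

```javascript
// Minimal in-memory fixed-window rate limiter (per-instance only)
const hits = new Map(); // key -> { count, windowStart }

function rateLimit(key, limit, windowMs, now = Date.now()) {
  const entry = hits.get(key);
  if (!entry || now - entry.windowStart >= windowMs) {
    // New window: reset the counter for this key
    hits.set(key, { count: 1, windowStart: now });
    return true;
  }
  entry.count += 1;
  return entry.count <= limit; // false once the window's budget is spent
}
```

In middleware you would key on the client IP, e.g. `rateLimit(ip, 60, 60_000)`, and return a 429 response whenever it comes back false.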
No Content Security Policy (CSP) headers
CSP headers tell the browser which scripts, styles, and resources are allowed to run on your page. Without them, an attacker who finds any XSS vector can inject arbitrary JavaScript — steal sessions, redirect users, mine crypto in your visitors' browsers.
AI coding agents almost never configure CSP. They generate pages that work in the browser, but the response headers are completely empty of security directives. No Content-Security-Policy, no X-Frame-Options, no Strict-Transport-Security.
How to fix: Add security headers in your next.config.js or vercel.json headers section. At minimum: CSP, HSTS, X-Frame-Options, X-Content-Type-Options. Mozilla Observatory gives you a free check and tells you exactly what to add.
Broken access control (IDOR everywhere)
When AI writes an API route like /api/users/[id], it typically checks if the user is authenticated. It rarely checks if the authenticated user is authorized to access that specific ID. This means User A can read User B's data just by changing the ID in the URL.
```javascript
// AI-generated: checks auth but not authorization
export async function GET(req, { params }) {
  const session = await getSession(); // ✅ Authenticated
  if (!session) return new Response("Unauthorized", { status: 401 });
  // ❌ No check: does session.user.id === params.id?
  const userData = await db.users.findById(params.id);
  return Response.json(userData);
}
```

This is called Insecure Direct Object Reference (IDOR), a form of Broken Access Control — the #1 category in the OWASP Top 10, for a reason. AI agents write this pattern by default because the code "works" — the happy path succeeds.
How to fix: Every API route that takes a user-specific ID must verify the authenticated user owns that resource. In Supabase, use Row Level Security (RLS) policies instead of relying on application-level checks. RLS makes IDOR structurally impossible at the database layer.
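If you do keep application-level checks, the missing step is a few lines. A sketch against the route above (the helper name and return shape are illustrative, not a library API):

```javascript
// Sketch: verify the caller actually owns the requested resource
function authorizeOwner(session, resourceUserId) {
  if (!session) return { ok: false, status: 401 }; // not authenticated
  if (session.user.id !== resourceUserId) {
    return { ok: false, status: 403 }; // authenticated, but not the owner
  }
  return { ok: true };
}
```

In the route handler: `const auth = authorizeOwner(session, params.id); if (!auth.ok) return new Response("Forbidden", { status: auth.status });` — now changing the ID in the URL returns 403 instead of someone else's data.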
Supabase RLS policies set to true (or missing entirely)
This one is specific to the Next.js + Supabase stack that powers a huge portion of vibe-coded apps. When AI agents create Supabase tables, they often set RLS policies to true (allow everything) or skip RLS entirely. This means anyone with your Supabase anon key can read every row in every table.
```sql
-- AI-generated migration: RLS enabled but policy allows all
ALTER TABLE public.orders ENABLE ROW LEVEL SECURITY;
CREATE POLICY "allow_all" ON public.orders
  FOR ALL USING (true); -- ❌ Anyone can read all orders
```

Your anon key is public — it's in your frontend JavaScript bundle. The only thing protecting your data is RLS policies. If those policies say true, your data is public.
How to fix: Audit every RLS policy in your Supabase dashboard. Every USING clause should reference auth.uid() to scope access to the authenticated user. Supabase's security advisor (in the dashboard) flags tables with weak policies automatically.
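A scoped version of the policy from the example above might look like this — it assumes the orders table has a user_id column that stores the owning user's auth ID; adjust names to your schema:

```sql
-- Replace the permissive policy with one scoped to the row's owner
DROP POLICY IF EXISTS "allow_all" ON public.orders;
CREATE POLICY "owner_only" ON public.orders
  FOR ALL USING (auth.uid() = user_id);
```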
Unvalidated input at every API boundary
AI-generated API routes trust whatever the client sends. No schema validation, no type checking, no sanitization. The function signature says "take a body, use the fields" — and it does exactly that, including fields the client shouldn't be able to set (like role: "admin" or price: 0).
How to fix: Use Zod (TypeScript) or Pydantic (Python) to validate every API input. Define the exact shape you expect, reject everything else. This also protects against SQL injection and NoSQL injection as a side effect — if the input must match a schema, arbitrary payloads get rejected before they reach the database.
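Zod is the idiomatic choice in TypeScript, but the principle fits in plain JavaScript: declare exactly which fields you accept and drop or reject everything else, including privileged fields like role. A hand-rolled sketch (the helper name and schema format are ours, for illustration only):

```javascript
// Sketch of allowlist validation: only declared fields pass through
function validateBody(body, schema) {
  const clean = {};
  for (const [field, type] of Object.entries(schema)) {
    if (typeof body[field] !== type) {
      throw new Error(`Invalid or missing field: ${field}`);
    }
    clean[field] = body[field];
  }
  return clean; // undeclared fields like role or price are dropped
}
```

Usage: `const input = validateBody(await req.json(), { name: "string", quantity: "number" });` — a client-supplied `role: "admin"` never reaches your database write.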
CORS set to allow all origins
When an AI agent gets a CORS error during development, it does what any frustrated developer does: sets Access-Control-Allow-Origin: * and moves on. The error goes away. The code ships to production. Now any website on the internet can make authenticated requests to your API on behalf of your users.
```javascript
// AI fix for CORS error — works but opens your API to the world
export async function middleware(request) {
  const response = NextResponse.next();
  response.headers.set("Access-Control-Allow-Origin", "*"); // ❌
  response.headers.set("Access-Control-Allow-Credentials", "true");
  return response;
}
```

The dangerous combination is Allow-Origin: * together with Allow-Credentials: true. Browsers actually refuse that exact pair, so the workaround that usually ships next — reflecting the request's Origin header back verbatim — is what does the damage: it passes the browser's check while still letting any site on the internet make credentialed requests to your API as your users.
How to fix: Set Access-Control-Allow-Origin to your actual production domain, not *. For development, use an environment variable that switches between localhost:3000 and your production URL.
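One way to sketch it: keep an explicit allowlist and only echo an origin back when it's on the list. The environment variable name here is an assumption for illustration:

```javascript
// Sketch: echo the request origin only if it's explicitly allowed
const ALLOWED_ORIGINS = new Set([
  process.env.APP_ORIGIN || "https://example.com", // production domain (assumed env var)
  "http://localhost:3000", // local development
]);

function corsOrigin(requestOrigin) {
  // Returns the Allow-Origin value, or null to send no CORS header at all
  return ALLOWED_ORIGINS.has(requestOrigin) ? requestOrigin : null;
}
```

In middleware, set `Access-Control-Allow-Origin` only when `corsOrigin(...)` returns a value; unknown origins get no CORS headers and the browser blocks the cross-origin read.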
Why this keeps happening
None of these vulnerabilities exist because AI coding agents are broken. They exist because of a fundamental mismatch between what the agent optimizes for and what production software requires.
When you tell Claude Code to "build me an e-commerce app," it optimizes for: the app runs, the checkout works, products display correctly, users can log in. All true. All functional. All shippable within hours.
What nobody prompted: "make sure nobody can brute-force the login," "make sure the database isn't publicly readable," "make sure error messages don't leak stack traces," "make sure the API keys aren't committed to GitHub."
Security is almost never in the prompt because the developer doesn't know to ask for it. That's not a criticism — it's the whole point of vibe coding. You're supposed to describe what you want, not how to make it safe. But the gap between "works" and "safe" is where attackers live.
The 60-second check
You don't need to become a security expert to find these issues. You need to run a scan.
We built IsMySiteHacked.com specifically for developers who ship fast and need to know what they missed. 33 security checks, including every vulnerability on this list. Takes about two minutes. No signup required for the free scan.
If you vibe-coded your app last weekend, scan it this weekend. You will almost certainly find at least 2-3 items from this list. Better you find them than someone else.
Built your app with AI? Check what it missed.
33 security checks. Real attack paths. Plain-English findings. Free scan, no signup.
Scan your vibe-coded app now

Related: What Happens When You Let AI Write Your Security Scanner — we code-reviewed an AI-generated pentest framework and found dead code, 484 markdown files, and a Windows-breaking filename.