Security · 9 min read

The Most Common Security Issues in AI-Generated Code

We've audited dozens of AI-built apps. These are the vulnerabilities we find most often.

We have audited over fifty applications built with AI coding tools — Cursor, Bolt, Lovable, Replit, and others. Some were MVPs. Some were live products with paying customers. Some were handling sensitive data without a single security measure in place.

The specifics vary, but the patterns do not. The same classes of mistake appear regardless of which tool generated the code or what kind of application you are building. This post documents the eight vulnerabilities we encounter most frequently, ranked by how often we find them and how severe they tend to be.

These are not edge cases. These are the default output of AI coding tools in the absence of explicit security guidance.

1. Authentication Bypasses

Found in approximately 60% of audits. Severity: Critical.

Authentication Bypass: An attacker can access protected resources without valid credentials, potentially gaining full access to user accounts and administrative functionality.

Authentication bypasses in AI-generated code rarely look like a missing login check. They are more subtle: a middleware that runs on most routes but misses a few, a token verification function that catches some error types but not others, or a client-side auth guard with no corresponding server-side check.

The most common pattern is an API route that checks whether a user is authenticated but fails to handle the case where the token is expired, malformed, or simply absent. The error handling falls through to the main logic, and the request is processed as if it were authenticated.

Before
// Silent failure = bypass
export async function GET(req: Request) {
  try {
    const user = await getUser(req);
    const data = await getUserData(user.id);
    return Response.json(data);
  } catch {
    // Auth failure falls through to here
    // but still returns empty data, not a 401
    return Response.json([]);
  }
}
After
// Explicit auth check with early return
export async function GET(req: Request) {
  const user = await getUser(req);

  if (!user) {
    return Response.json(
      { error: 'Unauthorized' },
      { status: 401 }
    );
  }

  const data = await getUserData(user.id);
  return Response.json(data);
}

Why AI tools get this wrong: They generate the happy path first and add error handling as an afterthought. The error handling is designed to prevent crashes, not to enforce security boundaries.

2. Missing Authorization Checks

Found in approximately 70% of audits. Severity: Critical.

Broken Access Control (IDOR): Authenticated users can access or modify resources belonging to other users by manipulating identifiers in API requests.

This is the single most common vulnerability we find. The application checks that the user is logged in, but does not check that they own the resource they are requesting. An attacker simply changes an ID in the URL or request body to access another user’s data.

Before
// Checks auth but not ownership
export async function GET(
  req: Request,
  { params }: { params: { invoiceId: string } }
) {
  const user = await requireAuth(req);
  const { invoiceId } = params;

  // Any authenticated user can fetch
  // ANY invoice by changing the ID
  const invoice = await db.invoice.findUnique({
    where: { id: invoiceId },
  });

  return Response.json(invoice);
}
After
// Checks both auth and ownership
export async function GET(
  req: Request,
  { params }: { params: { invoiceId: string } }
) {
  const user = await requireAuth(req);
  const { invoiceId } = params;

  const invoice = await db.invoice.findUnique({
    where: {
      id: invoiceId,
      userId: user.id, // Scoped to the user
    },
  });

  if (!invoice) {
    return Response.json(
      { error: 'Not found' },
      { status: 404 }
    );
  }

  return Response.json(invoice);
}

IDOR vulnerabilities are trivial to exploit. An attacker does not need special tools — they just change a number in the URL. Automated scanners catch these instantly. If your app has them, they will be found.

Why AI tools get this wrong: The prompt says “fetch the invoice” and the AI fetches the invoice. It does not consider that the requester might not be the owner. Ownership scoping requires threat modelling that AI tools do not perform.

3. Exposed API Keys

Found in approximately 55% of audits. Severity: High.

Exposed Secrets: API keys, database credentials, or third-party service tokens are visible in client-side code, version control, or build artifacts.

We covered this in detail in our Cursor security audit, but it bears repeating: AI tools hardcode secrets into source code as a default behaviour. This includes Stripe secret keys, database connection strings, OAuth client secrets, and third-party API tokens.

The most dangerous variant is when a server-side secret ends up in client-side code. In Next.js, this happens when a hardcoded secret sits in a module that a client component imports, when a server-only variable is given the NEXT_PUBLIC_ prefix to silence an error (which inlines it into the browser bundle), or when it is embedded in a server action that gets serialised to the client.
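
Moving the secret into a server-side environment variable is the fix. A minimal sketch of the pattern, assuming the stripe Node library; the key shown is a placeholder:

Before
// Hardcoded secret: now in version control and
// every build artifact (placeholder value)
const stripe = new Stripe('sk_live_xxxxxxxxxxxxxxxxxxxxxxxx');
After
// Secret read from the server environment at runtime
import Stripe from 'stripe';

const key = process.env.STRIPE_SECRET_KEY;
if (!key) {
  throw new Error('STRIPE_SECRET_KEY is not set');
}

const stripe = new Stripe(key);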

Why AI tools get this wrong: Hardcoding a value is the fastest path to working code. The AI is optimising for the immediate success of the current prompt, not for the security implications downstream.

4. SQL and NoSQL Injection

Found in approximately 35% of audits. Severity: High.

Injection Vulnerability: User-controlled input is incorporated into database queries without proper sanitisation or parameterisation, allowing attackers to read, modify, or delete arbitrary data.

Injection vulnerabilities appear most often in search features, filtering logic, and anywhere the AI generates raw SQL or MongoDB queries to handle complex data retrieval. ORMs protect against this, but AI tools sometimes bypass the ORM to write raw queries when the ORM syntax is not immediately obvious.

Before
// MongoDB NoSQL injection
const user = await db.collection('users').findOne({
  email: req.body.email,
  password: req.body.password,
});
// Attacker sends: { "password": { "$ne": "" } }
// This matches any non-empty password
After
import { z } from 'zod';
import bcrypt from 'bcrypt';

// Validate types before querying
const email = z.string().email().parse(req.body.email);
const password = z.string().min(8).parse(req.body.password);

const user = await db.collection('users').findOne({
  email: email,
});

if (!user || !(await bcrypt.compare(password, user.passwordHash))) {
  throw new AuthError('Invalid credentials');
}
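
The raw SQL variant has the same shape. A sketch assuming Prisma, whose query API the other examples appear to use, and a hypothetical products table: $queryRawUnsafe splices user input straight into the statement, while the $queryRaw tagged template turns every interpolated value into a bound parameter.

// Vulnerable: search is spliced into the SQL string
const rows = await db.$queryRawUnsafe(
  `SELECT * FROM products WHERE name LIKE '%${search}%'`
);

// Safe: the tagged template binds search as a parameter
const safeRows = await db.$queryRaw`
  SELECT * FROM products WHERE name LIKE ${'%' + search + '%'}
`;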

Why AI tools get this wrong: They prioritise readable, concise code, and interpolating user input straight into a query string is shorter than parameterising it. For NoSQL databases, the injection vectors are less well known, so the training data contains fewer defensive patterns.

5. Missing Row Level Security Policies

Found in approximately 80% of Supabase audits. Severity: Critical.

Missing RLS Policies: Supabase tables are accessible to any authenticated user (or the public) because Row Level Security is disabled or has overly permissive policies.

This is so pervasive that we wrote an entire post about it. When AI tools set up Supabase, they frequently disable RLS to avoid query errors during development and never re-enable it. Even when RLS is enabled, the policies are often too broad to provide meaningful protection.
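
The exposure is easy to demonstrate: the anon key ships in your client bundle, so anyone who opens your site has it. A minimal sketch using supabase-js, with a hypothetical invoices table:

// Anyone can construct this client: the URL and anon key are public
import { createClient } from '@supabase/supabase-js';

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
);

// With RLS disabled, this returns EVERY user's invoices,
// not just the current user's
const { data, error } = await supabase.from('invoices').select('*');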

Why AI tools get this wrong: RLS errors during development are confusing and block progress. The AI disables RLS to make the query work, and there is no mechanism to remind it to re-enable and configure it later.

6. Insecure File Uploads

Found in approximately 40% of audits with upload features. Severity: High.

Insecure File Upload: File uploads lack proper validation of file type, size, and content, allowing attackers to upload malicious files or consume excessive storage.

When an application has file upload functionality, AI tools generate code that saves the file based on the user-provided filename and MIME type. Neither of these is trustworthy. An attacker can upload an executable disguised as a JPEG, or a massive file designed to exhaust your storage quota.

Before
// Trusts user-provided metadata
export async function POST(req: Request) {
  const formData = await req.formData();
  const file = formData.get('file') as File;

  // No type checking, no size limit
  await storage.upload(file.name, file);

  return Response.json({ url: `/uploads/${file.name}` });
}
After
// Validates content, restricts types and size
import { fileTypeFromBuffer } from 'file-type';

const MAX_SIZE = 5 * 1024 * 1024; // 5MB
const ALLOWED_TYPES = ['image/jpeg', 'image/png', 'image/webp'];

export async function POST(req: Request) {
  const formData = await req.formData();
  const file = formData.get('file') as File;

  if (file.size > MAX_SIZE) {
    return Response.json({ error: 'File too large' }, { status: 400 });
  }

  const buffer = Buffer.from(await file.arrayBuffer());
  const detected = await fileTypeFromBuffer(buffer);

  if (!detected || !ALLOWED_TYPES.includes(detected.mime)) {
    return Response.json({ error: 'Invalid file type' }, { status: 400 });
  }

  const safeName = `${crypto.randomUUID()}.${detected.ext}`;
  await storage.upload(safeName, buffer);

  return Response.json({ url: `/uploads/${safeName}` });
}

Why AI tools get this wrong: File validation requires multiple layers (magic byte inspection, size limits, name sanitisation) that are not part of the core feature request. The AI implements the upload, not the security around it.

7. CORS Misconfiguration

Found in approximately 50% of audits. Severity: Medium.

CORS Misconfiguration: Cross-Origin Resource Sharing headers are either missing entirely or set to allow all origins, enabling cross-site attacks against authenticated users.

CORS issues take two forms in AI-generated code: headers are completely absent (which actually blocks cross-origin requests but causes integration problems), or the AI sets Access-Control-Allow-Origin: * to fix a development error without understanding the security implications.

The wildcard is particularly dangerous when combined with Access-Control-Allow-Credentials: true, which allows any website to make authenticated requests to your API on behalf of your users.
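
The correct pattern is an explicit allowlist that reflects only trusted origins. A minimal sketch for a route handler; the origins are placeholders for your real domains:

const ALLOWED_ORIGINS = [
  'https://app.example.com',
  'https://www.example.com',
];

export async function GET(req: Request) {
  const origin = req.headers.get('origin') ?? '';
  const headers = new Headers({ Vary: 'Origin' });

  // Reflect the origin only if it is explicitly trusted;
  // never combine '*' with credentials
  if (ALLOWED_ORIGINS.includes(origin)) {
    headers.set('Access-Control-Allow-Origin', origin);
    headers.set('Access-Control-Allow-Credentials', 'true');
  }

  return Response.json({ ok: true }, { headers });
}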

Why AI tools get this wrong: CORS errors are opaque and frustrating. When the AI encounters one, its solution is to allow everything. The concept of an origin allowlist requires understanding the deployment architecture, which is context the AI rarely has.

8. Missing Rate Limiting

Found in approximately 90% of audits. Severity: Medium.

Missing Rate Limiting: API endpoints have no request rate restrictions, allowing brute-force attacks, credential stuffing, and resource exhaustion.

This is the most consistently absent security feature across every AI tool we have tested. Rate limiting is never added unless explicitly requested, and even then it is often applied to only one or two endpoints.

Without rate limiting, your login endpoint can be hit thousands of times per second. Your password reset flow can be used to enumerate valid email addresses. Your email-sending endpoint can be abused to spam third parties.
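
Even a crude limiter raises the cost of these attacks dramatically. A minimal fixed-window sketch that keeps state in process memory. It only works on a single long-lived server, so treat it as an illustration; production setups usually need a shared store such as Redis:

const WINDOW_MS = 60_000; // 1-minute window
const MAX_REQUESTS = 10; // per key, per window

const hits = new Map<string, { count: number; windowStart: number }>();

function rateLimit(key: string): boolean {
  const now = Date.now();
  const entry = hits.get(key);

  // Start a fresh window if none exists or the old one has expired
  if (!entry || now - entry.windowStart > WINDOW_MS) {
    hits.set(key, { count: 1, windowStart: now });
    return true;
  }

  entry.count += 1;
  return entry.count <= MAX_REQUESTS;
}

export async function POST(req: Request) {
  const ip = req.headers.get('x-forwarded-for') ?? 'unknown';

  if (!rateLimit(`login:${ip}`)) {
    return Response.json(
      { error: 'Too many requests' },
      { status: 429 }
    );
  }

  // ...handle the login attempt as normal
  return Response.json({ ok: true });
}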

Why AI tools get this wrong: Rate limiting is purely defensive infrastructure. It adds no user-facing functionality. AI tools do not add it because it is never the thing you asked for.

The Pattern Behind the Pattern

If you look at these eight issues as a group, a clear picture emerges: AI tools build features, not defences. Every vulnerability on this list represents something that should exist alongside the feature code but is not part of the feature itself.

This is not going to change with better models. It is a fundamental property of how prompt-driven development works. You ask for a feature. You get a feature. You do not get the security context around that feature unless you explicitly ask for it — and even then, the results are inconsistent.

The solution is to treat AI-generated code the way you would treat code from a very fast, very junior developer: it needs review before it goes to production. Our Vibe Coding Security Checklist is a good starting point for self-review. For a thorough professional assessment, see our audit packages.

Ready to ship with confidence?

Get your AI-generated app audited by UK security experts.

See Pricing

Or email us at hello@vibecodeaudits.co.uk
