Vibe Coding Security Checklist: Before You Launch
The essential security checklist for AI-generated applications. Check these before going live.
You have built your app with Cursor, Bolt, Lovable, or another AI coding tool. It works. It looks good. You are ready to launch. But before you put it in front of real users — and their real data — you need to verify that the AI has not left the door wide open.
This checklist is based on the issues we find most frequently when auditing AI-generated applications. It is not theoretical. Every item here represents a vulnerability we have seen in production code generated by one or more AI tools.
Work through each section. If you cannot confidently check off an item, that is where you need to focus before going live.
Authentication and Sessions
Authentication is the front door to your application. AI tools will often generate authentication flows that work in the happy path but fail to handle edge cases, session management, and token security properly.
Authentication bypasses are the most critical class of vulnerability we find in AI-generated apps. A single flaw here can give an attacker full access to every user account in your system.
- Passwords are hashed with bcrypt, scrypt, or Argon2. Check that you are not storing plaintext passwords or using a fast hash like MD5 or SHA-256, which is unsuitable for passwords even with a salt. AI tools sometimes skip hashing entirely in early iterations.
- Session tokens are generated using cryptographically secure randomness. Tokens must use `crypto.randomBytes()` or an equivalent, not `Math.random()`. Session tokens generated with `Math.random()` are predictable and can be brute-forced. A sketch covering password hashing and token generation follows this list.
- JWTs have short expiration times and proper validation. Access tokens should expire within 15-30 minutes. Always validate the `iss`, `aud`, and `exp` claims on the server (see the JWT sketch after this list). See our Cursor security audit for specific JWT configuration guidance.
- Refresh token rotation is implemented. Each time a refresh token is used, it should be invalidated and a new one issued. This prevents token replay attacks if a refresh token is stolen.
- Authentication state is verified server-side on every protected route. Do not rely on client-side checks alone. Every API route and server-rendered page that requires authentication must independently verify the session or token.
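If you are on a Node or TypeScript stack, the first two items can be covered in a few lines. This is a minimal sketch, assuming the `bcryptjs` package and Node's built-in `crypto` module; the function names are illustrative, not from any particular framework.

```typescript
import { randomBytes } from "crypto";
import bcrypt from "bcryptjs";

// Hash passwords with a slow, salted algorithm, never a fast hash and never plaintext.
export async function hashPassword(plaintext: string): Promise<string> {
  return bcrypt.hash(plaintext, 12); // cost factor 12; bcrypt generates the salt itself
}

export async function verifyPassword(plaintext: string, hash: string): Promise<boolean> {
  return bcrypt.compare(plaintext, hash);
}

// Session tokens must come from a CSPRNG, not Math.random().
export function createSessionToken(): string {
  return randomBytes(32).toString("hex"); // 256 bits of entropy
}
```

A cost factor around 12 is a common bcrypt default; raise it as far as your login latency budget allows.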
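For the JWT item, the exact configuration depends on your auth provider. As a rough sketch using the widely used `jsonwebtoken` package (an assumption about your stack, not a requirement), issuing and verifying tokens with pinned `iss` and `aud` values looks like this; the issuer and audience strings are placeholders.

```typescript
import jwt from "jsonwebtoken";

const JWT_SECRET = process.env.JWT_SECRET!; // never hardcode this

export function verifyAccessToken(token: string) {
  // jwt.verify checks the signature and the exp claim by default;
  // issuer and audience must be requested explicitly.
  return jwt.verify(token, JWT_SECRET, {
    algorithms: ["HS256"],                   // pin the algorithm
    issuer: "https://your-app.example.com",  // expected iss claim (placeholder)
    audience: "your-app-api",                // expected aud claim (placeholder)
  });
}

// Keep access-token lifetimes short (15-30 minutes).
export function issueAccessToken(userId: string) {
  return jwt.sign({ sub: userId }, JWT_SECRET, {
    algorithm: "HS256",
    expiresIn: "15m",
    issuer: "https://your-app.example.com",
    audience: "your-app-api",
  });
}
```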
API Route Protection
Every API route is a potential attack surface. AI tools generate routes that handle the core logic but rarely add the defensive layers needed for production.
- Every API route checks authentication before processing. Middleware is the cleanest approach. If your framework supports it, apply authentication middleware globally and explicitly opt out for public routes rather than opting in for protected ones.
- Authorization is checked on every request, not just authentication. Being logged in is not enough. Verify that the authenticated user has permission to perform the specific action on the specific resource. A user should not be able to edit another user’s data by changing an ID in the request.
The most dangerous pattern we see: API routes that check if a user is logged in but do not verify they own the resource they are accessing. This is called an Insecure Direct Object Reference (IDOR) and it is extremely common in AI-generated code.
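To make the pattern concrete, here is a sketch of the ownership check in a Next.js route handler. The route, the `getCurrentUser` helper, and the Prisma model are hypothetical; the point is the second check, not the specific stack.

```typescript
// app/api/projects/[id]/route.ts (hypothetical route)
import { NextResponse } from "next/server";
import { getCurrentUser } from "@/lib/auth"; // assumed helper that reads the session
import { prisma } from "@/lib/db";           // assumed Prisma client instance

export async function DELETE(
  req: Request,
  { params }: { params: { id: string } }
) {
  const user = await getCurrentUser();
  if (!user) {
    return NextResponse.json({ error: "Unauthorized" }, { status: 401 });
  }

  // The authorization step AI-generated code usually skips:
  // confirm the record actually belongs to the authenticated user.
  const project = await prisma.project.findUnique({ where: { id: params.id } });
  if (!project || project.ownerId !== user.id) {
    return NextResponse.json({ error: "Not found" }, { status: 404 });
  }

  await prisma.project.delete({ where: { id: params.id } });
  return NextResponse.json({ ok: true });
}
```

Returning 404 rather than 403 for records the user does not own also avoids confirming to an attacker that the resource exists.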
- Rate limiting is applied to sensitive endpoints. At minimum: login, registration, password reset, email-sending endpoints, and any endpoint that performs expensive operations. Use Upstash, Redis, or framework-level rate limiting (see the rate-limiting sketch after this list).
- Request body size limits are configured. Without explicit limits, an attacker can send enormous payloads to exhaust your server’s memory. Set a reasonable maximum (e.g., 1MB for JSON bodies, appropriate limits for file uploads).
- All input is validated with a schema validation library. Use Zod, Yup, or equivalent. Validate types, lengths, formats, and ranges. Reject anything that does not match the expected schema with a `400 Bad Request` (a sketch follows this list).
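A minimal validation sketch with Zod in a Next.js route handler; the schema fields are invented for illustration.

```typescript
import { z } from "zod";
import { NextResponse } from "next/server";

// Hypothetical schema for a "create invite" endpoint.
const inviteSchema = z.object({
  email: z.string().email().max(254),
  role: z.enum(["viewer", "editor"]),
  message: z.string().max(500).optional(),
});

export async function POST(req: Request) {
  const body = await req.json().catch(() => null);
  const parsed = inviteSchema.safeParse(body);

  if (!parsed.success) {
    // Reject anything that does not match the schema with a 400.
    return NextResponse.json(
      { error: "Invalid request", issues: parsed.error.issues },
      { status: 400 }
    );
  }

  // parsed.data is now typed and validated.
  // ... handle the invite ...
  return NextResponse.json({ ok: true }, { status: 201 });
}
```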
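A rate-limiting sketch for a login endpoint using Upstash, assuming the `@upstash/ratelimit` and `@upstash/redis` packages and their environment variables are configured; tune the window and limit per endpoint.

```typescript
import { Ratelimit } from "@upstash/ratelimit";
import { Redis } from "@upstash/redis";
import { NextResponse } from "next/server";

// 5 attempts per minute per identifier is a sane starting point for login.
const loginLimiter = new Ratelimit({
  redis: Redis.fromEnv(), // reads UPSTASH_REDIS_REST_URL / UPSTASH_REDIS_REST_TOKEN
  limiter: Ratelimit.slidingWindow(5, "1 m"),
});

export async function POST(req: Request) {
  // Prefer a stable identifier (user ID, attempted email) over raw IP where possible.
  const ip = req.headers.get("x-forwarded-for") ?? "unknown";
  const { success } = await loginLimiter.limit(`login:${ip}`);

  if (!success) {
    return NextResponse.json({ error: "Too many attempts" }, { status: 429 });
  }

  // ... proceed with the login flow ...
  return NextResponse.json({ ok: true });
}
```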
Database Security
Whether you are using Supabase, PlanetScale, Neon, or a self-hosted database, the AI-generated data layer is almost always under-secured.
- Parameterised queries are used for all database operations. No string concatenation in SQL queries. If you are using an ORM like Prisma or Drizzle, this is handled for you. If you are writing raw SQL, use parameterised statements (a sketch follows this list). Refer to our guide on common AI code vulnerabilities for examples.
- Row Level Security (RLS) is enabled and tested if using Supabase. This is critical. We have written an entire post on why AI tools get RLS wrong. Do not skip this if you are on Supabase.
- Database credentials use least-privilege access. Your application should not connect to the database with an admin or superuser role. Create a dedicated role with only the permissions the application needs.
- Sensitive data is encrypted at rest. Personal information, payment details, and any data subject to GDPR or other regulations should be encrypted in the database, not stored as plaintext.
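If you are writing raw SQL, for example with the `pg` (node-postgres) client, the difference between the two approaches is small but decisive. A sketch, with a hypothetical `users` table:

```typescript
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// VULNERABLE (shown only for contrast): string concatenation lets `email`
// inject arbitrary SQL into the query text.
export async function findUserUnsafe(email: string) {
  return pool.query(`SELECT * FROM users WHERE email = '${email}'`);
}

// SAFE: the value is passed as a parameter and never spliced into the SQL text.
export async function findUser(email: string) {
  return pool.query("SELECT * FROM users WHERE email = $1", [email]);
}
```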
Environment and Secrets
AI tools are remarkably casual about secrets management. This section is non-negotiable.
We have seen live Stripe secret keys, database passwords, and OAuth client secrets committed to public GitHub repositories by founders who followed AI-generated code without reviewing it. Rotating compromised credentials after the fact does not undo the damage.
- No secrets are hardcoded in source code. Search your codebase for strings that look like API keys, connection strings, or tokens. Check for `sk_live_`, `sk_test_`, `postgres://`, `mongodb+srv://`, and similar patterns.
- `.env` files are in `.gitignore` and have never been committed. Check your Git history: `git log --all --full-history -- .env`. If the file appears, the secret is compromised regardless of whether you removed it later.
- Client-side code does not contain server secrets. In Next.js, only variables prefixed with `NEXT_PUBLIC_` are exposed to the browser. Verify that private keys (Stripe secret keys, database passwords, etc.) are only used in server-side code.
- A `.env.example` file documents all required variables. This helps your team (and your AI tool) understand what environment variables are needed without exposing actual values. A sketch of validating these variables at startup follows this list.
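One habit that catches both missing and misplaced variables is validating `process.env` at startup. This is a sketch using Zod; the variable names are examples and should mirror your own `.env.example`.

```typescript
// env.ts -- import this only from server-side code.
import { z } from "zod";

const envSchema = z.object({
  DATABASE_URL: z.string().min(1),
  STRIPE_SECRET_KEY: z.string().startsWith("sk_"),
  JWT_SECRET: z.string().min(32),
  // Public values are the only ones that belong in client bundles.
  NEXT_PUBLIC_APP_URL: z.string().url(),
});

// Fails fast at boot with a clear error instead of failing mysteriously at runtime.
export const env = envSchema.parse(process.env);
```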
Input Validation
Every piece of data that enters your application from the outside world is untrusted. AI tools treat incoming data as if it has already been validated.
- All form inputs are validated on both client and server. Client-side validation is a user experience feature, not a security feature. Server-side validation is mandatory because client-side checks can be bypassed trivially.
- File uploads are restricted by type, size, and content. Do not trust the file extension or the MIME type from the `Content-Type` header; both are trivially spoofed. Use a library that inspects file magic bytes. Set hard size limits. Store uploads outside your application directory.
- Email addresses, URLs, and other structured data are validated against proper formats. Use Zod’s `.email()`, `.url()`, and similar validators. Do not accept an arbitrary string where a structured format is expected.
- Rich text and HTML inputs are sanitised to prevent XSS. If your application accepts any form of HTML or Markdown from users and renders it, use a sanitisation library like DOMPurify (a sketch follows this list). AI tools almost never add sanitisation.
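A sanitisation sketch, assuming the `isomorphic-dompurify` package (a server-friendly wrapper around DOMPurify) and that your Markdown pipeline has already produced HTML:

```typescript
import DOMPurify from "isomorphic-dompurify";

// Sanitise user-supplied HTML before storing or rendering it.
export function sanitiseUserHtml(dirty: string): string {
  return DOMPurify.sanitize(dirty, {
    ALLOWED_TAGS: ["p", "br", "strong", "em", "ul", "ol", "li", "a", "code", "pre"],
    ALLOWED_ATTR: ["href"],
  });
}

// Example: <img src=x onerror=alert(1)> and <script> blocks are stripped,
// while ordinary formatting tags survive.
```

Starting from a small allowlist and adding tags only when a real use case appears is safer than starting permissive and trying to block known-bad patterns.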
Security Headers
Security headers are your application’s instructions to the browser about how to handle your content. They are almost always missing from AI-generated code.
- Content Security Policy (CSP) is configured. A well-configured CSP prevents XSS attacks by controlling which scripts, styles, and other resources can be loaded. Start with a restrictive policy and loosen it as needed.
- Standard security headers are set. At minimum, your application should return these headers (a configuration sketch follows this list):
  - `X-Content-Type-Options: nosniff`
  - `X-Frame-Options: DENY` (unless you need iframe embedding)
  - `Strict-Transport-Security: max-age=31536000; includeSubDomains`
  - `Referrer-Policy: strict-origin-when-cross-origin`
- Cookies are configured with secure attributes. All cookies must have the `HttpOnly`, `Secure`, and `SameSite=Lax` (or `Strict`) attributes. AI tools frequently generate cookies without these flags, making them vulnerable to theft via XSS or cross-site request forgery.
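If you are on Next.js (an assumption; most frameworks expose an equivalent hook), these headers can be applied globally from the config file. A sketch:

```typescript
// next.config.ts (recent Next.js; older versions use next.config.js with module.exports)
import type { NextConfig } from "next";

const securityHeaders = [
  { key: "X-Content-Type-Options", value: "nosniff" },
  { key: "X-Frame-Options", value: "DENY" },
  { key: "Strict-Transport-Security", value: "max-age=31536000; includeSubDomains" },
  { key: "Referrer-Policy", value: "strict-origin-when-cross-origin" },
  // Add a Content-Security-Policy header here once you have tested your policy.
];

const nextConfig: NextConfig = {
  async headers() {
    return [
      {
        source: "/(.*)", // apply to every route
        headers: securityHeaders,
      },
    ];
  },
};

export default nextConfig;
```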
You can test your security headers for free using securityheaders.com. Aim for at least an A grade before launching.
How to Use This Checklist
Print it. Work through it item by item. For each item you cannot check off, create a ticket in your project tracker with a priority level. Critical items (authentication, secrets, database security) must be fixed before launch. Medium items (headers, rate limiting) should be addressed within the first week.
If you want us to go through this checklist for you — and catch the things you might miss — that is exactly what our audit packages are designed for. We will go deeper than any checklist can, testing each area with real attack techniques rather than just checking configuration values.