Securing Vibe-Coded Apps: A Practical Guide to Not Getting Hacked

Published on
3647words
19 min read
Securing Vibe-Coded Apps: A Practical Guide to Not Getting Hacked
Authors

The gist

AI-generated code contains 1.5–2x more security vulnerabilities than human-written code. Real-world disasters—from exposed databases to leaked government IDs—prove this isn't theoretical. This guide walks you through a layered security approach: design verification before building, secret detection with Gitleaks, automated SAST/DAST scanning, endpoint and database hardening, least-privilege infrastructure, AI-powered security reviews, and when to bring in human experts or hire a security firm.

What's covered

  • Real-world disasters linked to AI-generated code: Lovable's 170 exposed apps, Tea's 72K leaked images, Replit's deleted production database
  • How to set up Gitleaks and pre-commit hooks to catch secrets before they hit your repo
  • Endpoint, database, and infrastructure hardening for AI-generated code
  • SOTA AI security tools: Semgrep, Snyk, CodeQL, DryRun Security, and LLM-powered auditing
  • The SHIELD framework and security-focused prompting techniques
  • When and how to hire professional security reviewers
Reading time: 15 minutes
Level: Intermediate

Vibe coding is intoxicating. You describe what you want, the AI builds it, and you ship. No boilerplate, no Stack Overflow rabbit holes, no fighting with webpack configs. Just vibes.

But here's the thing nobody talks about at demo day: AI-generated code introduces security vulnerabilities at 1.5–2x the rate of human-written code. And unlike a human developer who at least thinks about authentication before pushing to production, your AI assistant will happily scaffold an entire app with a wide-open database and call it done.

This Post Is for Everyone

Whether you're a developer vibe-coding side projects, a founder shipping your MVP, or a non-technical person who hired someone to build with AI tools—this guide is for you. Security isn't optional anymore. If your app touches user data, you need to read this.

The Graveyard: Real Vibe Coding Disasters

Before we get into solutions, let's look at what happens when security is an afterthought. These aren't hypothetical scenarios—they happened in 2025.

Lovable: 170 Apps Wide Open (CVE-2025-48757)

Lovable, one of the hottest vibe coding platforms, had a devastating flaw. A Replit employee scanned 1,645 Lovable-created web apps and found that 170 of them allowed anyone to access user data—names, emails, financial records, home addresses, and API keys. The root cause? Missing Row Level Security (RLS) policies on Supabase tables. The AI generated the database schema but never configured access controls.

The vulnerability was reported on March 21, 2025. Lovable acknowledged it on March 24 but never meaningfully notified affected users—the public CVE disclosure didn't come until 69 days later on May 29. (Semafor, Matt Palmer's CVE Statement)

Tea Dating App: 72,000 Images Leaked

The #1 women's dating safety app exposed 72,000 images—including 13,000 verification selfies and government IDs—because its Firebase storage bucket had zero authentication. The app's founder admitted he doesn't know how to code, and multiple class-action lawsuits allege vibe coding practices contributed to the breach. A security researcher summed it up: "No authentication, no nothing. It's a public bucket." Nearly a dozen lawsuits have been filed. (TechCrunch, Bloomberg Law, Barracuda)

Replit: AI Deletes Production Database

SaaStr founder Jason Lemkin ran a vibe coding experiment with Replit. During active development, the AI agent deleted the entire production database—1,206 executive records and 1,196 companies—despite explicit instructions not to proceed without human approval. The AI then lied about recovery options. Replit's CEO called it "unacceptable" and deployed safeguards. (Fortune, The Register)

The Supply Chain Is Compromised Too

  • Rules File Backdoor: Pillar Security discovered that attackers can inject hidden Unicode instructions into Cursor and GitHub Copilot config files, causing the AI to silently insert malicious code that bypasses code review.
  • Slopsquatting: ~20% of AI-generated code samples recommended at least one package that doesn't exist. Attackers register these hallucinated package names to distribute malware. 58% of hallucinated packages are repeated consistently across runs, making them reliable attack vectors. (BleepingComputer)
  • Vibe-Coded Ransomware: A malicious VS Code extension called "susvsex" with built-in ransomware was created using vibe coding—identifiable by AI-style comments and placeholder variables. (The Hacker News)

The Numbers Don't Lie

StudyKey Finding
CodeRabbit (Dec 2025)AI code has 1.7x more issues, 2.74x more XSS vulnerabilities
Veracode 202545% of AI-generated code introduced security flaws; 86% failed XSS defense
Apiiro Enterprise3–4x dev velocity → 10x security risks; 10,000+ new findings/month

The Security Playbook: Layer by Layer

Security isn't a single tool—it's a stack. Here's the layered approach that actually works for vibe-coded apps.

Layer 1: Verify the Design Before You Build

The cheapest bug to fix is the one you never write. Before you let the AI generate a single line of code, get the architecture right.

What to do:

  • Describe your app's data flow, authentication model, and access control requirements to the AI before asking it to code
  • Ask the AI to generate a threat model: "What are the security risks in this architecture?"
  • For anything touching user data or payments, sketch the design and have a human (or a second AI) review it
  • Use the OWASP Top 10 for LLM Applications 2025 as your checklist

Prompt template for design review:

Before writing any code, I need you to act as a Security Architect.
Review this application design and identify:
1. Authentication and authorization gaps
2. Data exposure risks
3. Input validation requirements
4. Third-party dependency risks
5. Infrastructure misconfiguration risks

Application description: [your description]

Layer 2: Secret Detection with Gitleaks

AI assistants love to hardcode API keys, database credentials, and tokens directly into source files. Gitleaks catches these before they reach your repository.

Install and set up:

# Install Gitleaks
brew install gitleaks        # macOS
choco install gitleaks       # Windows
# or download from https://github.com/gitleaks/gitleaks/releases

# Scan your repo right now
gitleaks detect --source . --verbose

# Scan the entire git history (catches previously committed secrets)
gitleaks detect --source . --verbose --log-opts="--all"

Set up as a pre-commit hook so secrets never reach the repo:

# Install pre-commit framework
pip install pre-commit

# Add to .pre-commit-config.yaml
cat <<EOF > .pre-commit-config.yaml
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.30.0
    hooks:
      - id: gitleaks
EOF

# Install the hook
pre-commit install

Now every git commit will automatically scan for leaked secrets and block the commit if any are found.

Add to CI/CD for defense in depth:

# GitHub Actions example
- name: Gitleaks
  uses: gitleaks/gitleaks-action@v2
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

Already Leaked a Secret?

If Gitleaks finds a secret in your git history, rotating the key is not enough. You must assume it's compromised. Rotate the credential, revoke the old one, and use git filter-repo or BFG Repo-Cleaner to remove it from history. Then force-push. Better yet—use environment variables and a secrets manager from the start.

Layer 3: Automated Security Scanning (SAST/DAST/SCA)

Set up a multi-stage scanning pipeline. Here's what each layer catches:

Scan TypeWhat It CatchesWhen It RunsRecommended Tools
SAST (Static)SQL injection, XSS, path traversal, insecure cryptoPre-commit + CISemgrep, CodeQL, Snyk Code
SCA (Composition)Vulnerable dependencies, license issuesCI on every PRSnyk, Trivy, npm audit
DAST (Dynamic)Runtime vulnerabilities, auth bypasses, CORS misconfigStaging deploymentOWASP ZAP, Burp Suite
SecretsAPI keys, passwords, tokens in codePre-commit + CIGitleaks, GitGuardian

Minimum viable security pipeline:

# GitHub Actions - .github/workflows/security.yml
name: Security Scan
on: [push, pull_request]
jobs:
  security:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Secret scanning
      - name: Gitleaks
        uses: gitleaks/gitleaks-action@v2

      # SAST with Semgrep
      - name: Semgrep
        uses: semgrep/semgrep-action@v1
        with:
          config: >-
            p/security-audit
            p/owasp-top-ten
            p/nodejs
            p/typescript

      # Dependency scanning
      - name: Snyk
        uses: snyk/actions/node@master
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}

Layer 4: Protect Your Web Endpoints

AI-generated APIs are notorious for missing basic security controls. Here's your hardening checklist:

Authentication & Authorization:

  • Never trust the AI's default auth setup—verify it manually
  • Use established libraries (NextAuth.js, Passport.js, Auth0) instead of hand-rolled auth
  • Implement rate limiting on all public endpoints (use express-rate-limit or Cloudflare's built-in WAF)
  • Add CSRF protection for state-changing operations
  • Validate JWT tokens server-side on every request, not just on login

Input Validation:

// AI often generates endpoints without input validation. Always add it.
// Use zod, joi, or similar schema validation libraries
import { z } from 'zod'

const CreateUserSchema = z.object({
  email: z.string().email().max(255),
  name: z
    .string()
    .min(1)
    .max(100)
    .regex(/^[a-zA-Z\s]+$/),
  role: z.enum(['user', 'admin']).default('user'),
})

// Validate BEFORE processing
const result = CreateUserSchema.safeParse(req.body)
if (!result.success) {
  return res.status(400).json({ error: 'Invalid input' })
  // Never expose validation details to the client in production
}

API Security Headers:

// Add security headers - AI almost never does this
// Use helmet.js for Express, or set manually:
const securityHeaders = {
  'X-Content-Type-Options': 'nosniff',
  'X-Frame-Options': 'DENY',
  'X-XSS-Protection': '0', // Disabled in favor of CSP
  'Strict-Transport-Security': 'max-age=31536000; includeSubDomains',
  'Content-Security-Policy': "default-src 'self'",
  'Referrer-Policy': 'strict-origin-when-cross-origin',
  'Permissions-Policy': 'camera=(), microphone=(), geolocation=()',
}

CORS Configuration:

// AI loves to set CORS to "*" — don't let it
const corsOptions = {
  origin: ['https://yourdomain.com'], // Never use '*' in production
  methods: ['GET', 'POST', 'PUT', 'DELETE'],
  allowedHeaders: ['Content-Type', 'Authorization'],
  credentials: true,
  maxAge: 86400,
}

Layer 5: Lock Down Your Database

The Lovable and Tea disasters both came down to one thing: no database access controls. AI-generated database configurations almost always ship with overly permissive defaults.

Supabase / Firebase / Cloud Databases:

  • Enable Row Level Security (RLS) on every single table—no exceptions
  • Write explicit access policies: users should only read/write their own data
  • Never expose your service role key to the client; use the anon key with RLS
  • Audit your storage bucket rules—default to private, explicitly allow public access only where needed
-- Supabase RLS example — AI rarely generates this
ALTER TABLE user_profiles ENABLE ROW LEVEL SECURITY;

-- Users can only read their own profile
CREATE POLICY "Users read own profile"
ON user_profiles FOR SELECT
USING (auth.uid() = user_id);

-- Users can only update their own profile
CREATE POLICY "Users update own profile"
ON user_profiles FOR UPDATE
USING (auth.uid() = user_id);

General Database Hardening:

  • Use parameterized queries / prepared statements for ALL database access—never concatenate user input into SQL
  • Create separate database users with minimal permissions (read-only for queries, write for mutations—never use the admin account in app code)
  • Enable query logging and set up alerts for unusual patterns (mass SELECT *, DROP TABLE attempts)
  • Encrypt data at rest and in transit (TLS for connections, AES-256 for sensitive fields)
  • Set up automated backups with tested restore procedures—vibe-coded apps have a habit of losing data

Firebase-Specific Rules:

// WRONG - AI default (wide open)
rules_version = '2';
service firebase.storage {
  match /b/{bucket}/o {
    match /{allPaths=**} {
      allow read, write: if true;  // This is how Tea got breached
    }
  }
}

// RIGHT - Authenticated users only, scoped to their folder
rules_version = '2';
service firebase.storage {
  match /b/{bucket}/o {
    match /users/{userId}/{allPaths=**} {
      allow read, write: if request.auth != null
                         && request.auth.uid == userId;
    }
  }
}

Layer 6: Least Privilege Everything

AI assistants tend to request (and configure) maximum permissions because it's the path of least resistance. Fight this actively.

Infrastructure:

  • Give each service/agent a distinct identity with narrowly scoped permissions
  • Use IAM roles with minimal policies—never *:* or AdministratorAccess
  • Separate development, staging, and production environments completely
  • Use short-lived credentials (AWS STS, GCP workload identity federation) instead of long-lived API keys

Application:

  • Default user roles to the minimum needed; escalate explicitly
  • API keys should be scoped to specific operations, not full access
  • File uploads should go to isolated storage with size limits and type validation
  • Network access should be restricted—your app probably doesn't need to talk to the entire internet

AI Agent Permissions:

  • Never give an AI coding agent access to production databases or infrastructure
  • Use separate dev/staging environments for AI-assisted development
  • Review all infrastructure-as-code (Terraform, CloudFormation) changes generated by AI before applying
  • Follow the AWS Well-Architected guidance for agentic workflows

Layer 7: Periodic Security Audits

Set up recurring security reviews—not just once at launch, but continuously:

Weekly (automated):

  • Dependency vulnerability scans (npm audit, Snyk, Trivy)
  • Secret scanning across all repos
  • Cloud configuration checks (AWS Config, GCP Security Command Center)

Monthly (human + AI):

  • Review access logs for anomalies
  • Check for new CVEs affecting your stack
  • Rotate any credentials older than 90 days
  • Review and prune unused API keys, service accounts, and IAM roles

Quarterly (thorough):

  • Full DAST scan against staging
  • Review authentication and authorization flows end-to-end
  • Penetration testing (see the AI-powered options below)
  • Infrastructure audit: are security groups, firewall rules, and network policies still appropriate?

SOTA: AI-Powered Security Review

The same AI that creates vulnerabilities can also find them. Here are the cutting-edge tools and methods available in 2026.

Self-Review Prompting (Free, Immediate)

The simplest technique: ask the AI to review its own code as a security engineer. This catches a surprising number of issues.

Now act as a Senior Security Engineer. Review the code you just
generated and identify:
1. Injection vulnerabilities (SQL, XSS, command injection)
2. Authentication/authorization bypasses
3. Sensitive data exposure
4. Insecure default configurations
5. Missing input validation
6. Hardcoded secrets or credentials

For each issue found, provide the fix.

For better results, use Recursive Criticism and Improvement (RCI):

  1. Ask AI to build the feature
  2. Ask: "Review your previous answer and find security problems"
  3. Ask: "Based on the problems you found, improve your answer"
  4. Repeat until no new issues are found

This technique is recommended by the OpenSSF Security-Focused Guide for AI Code Assistants.

AI SAST Tools (Automated Pipeline Integration)

ToolBest For
CodeQL (GitHub)Deep semantic analysis, free for open source, integrates with GitHub Advanced Security
Snyk CodeIDE integration, real-time scanning, AI-powered fix suggestions
Semgrep + AssistantCustom rules, lightweight, AI-powered triage with Semgrep Assistant
DryRun SecurityAI-native SAST, natural language security policies
Aikido SecurityAll-in-one platform (SAST + DAST + SCA + Secrets)

LLM-Powered Code Auditing (Research-Grade)

RepoAudit (arXiv 2025) uses a multi-agent LLM architecture for warehouse-level code auditing. In testing across 15 real-world projects, it detected dozens of true vulnerabilities at an average cost of just $2.54 per project—making LLM-powered auditing surprisingly affordable. (Paper)

Claude's autonomous vulnerability discovery: Anthropic's internal testing showed Claude Opus 4.6 autonomously discovered 500+ high-severity vulnerabilities without specific guidance—many in open-source projects like GhostScript and OpenSC that had been reviewed by human experts for years. (The Hacker News)

AI Penetration Testing Services

For more comprehensive security assessment, several providers now offer AI-augmented penetration testing:

The hybrid approach works best: AI handles the breadth (scanning thousands of endpoints), humans handle the depth (chaining vulnerabilities, finding logic flaws).

IBM's Cost of a Data Breach reports found that the average breach costs $4.88 million (2024) and $4.44 million (2025). The 2025 report showed that organizations using AI-driven security reduced breach lifecycles by 80 days and saved $1.9 million per incident. The ROI on AI security tooling is real.


For Non-Technical Founders: Hire a Security Reviewer

If you're a non-technical founder who vibe-coded your MVP (or hired someone who did), hire a professional security reviewer before you launch. This isn't optional—it's the cost of doing business with user data.

What to look for:

  • A firm or freelancer experienced in your stack (React/Next.js, Supabase, Firebase, AWS, etc.)
  • OWASP methodology-based assessment (not just automated scanning)
  • A clear deliverable: a report with prioritized findings and remediation guidance
  • Ideally, experience reviewing AI-generated codebases specifically

Where to find them:

  • Bishop Fox, NCC Group, Trail of Bits — top-tier security firms
  • Scopic Software — custom software development with security review and code auditing services
  • Bugcrowd or HackerOne — managed bug bounty programs
  • Independent security consultants on platforms like Toptal or through referrals
  • Your cloud provider's partner network (AWS, GCP, Azure all have security partner programs)

What it costs:

  • Automated scan + report: $500–$2,000
  • Manual penetration test: $5,000–$25,000 depending on scope
  • Ongoing security monitoring: $1,000–$5,000/month
  • Bug bounty program: variable, but typically $500–$5,000 per valid finding

Compare that to the cost of a breach: the average data breach costs $4.88 million in 2024 and $4.44 million in 2025 (IBM). A $10K pentest is cheap insurance.

Even if you don't hire a firm, at minimum:

  1. Run the automated tools described above (Gitleaks, Semgrep, Snyk—all have free tiers)
  2. Use AI self-review prompting on your entire codebase
  3. Have a technical friend or advisor do a cursory review of your auth, database access, and API endpoints
  4. Check the Vibe Security Checklist on GitHub

The SHIELD Framework

Palo Alto Networks published the SHIELD framework specifically for securing vibe-coded applications:

  • Separation of duties — prevent AI from accessing both development and production
  • Human in the loop — mandatory code review and PR approval before merge
  • Input/output validation — sanitize prompts and validate all AI-generated output
  • Enforce security-focused helper models — use AI assistants with built-in security guardrails
  • Least agency — grant AI systems only the minimum necessary permissions
  • Defensive technical controls — multiple overlapping security layers

This maps perfectly to the layered approach in this post. No single measure is enough—it's the combination that keeps you safe.


The Bottom Line

Vibe coding isn't going away. It's too productive, too accessible, and too fun. But speed without security is just technical debt with a ticking clock.

The good news: securing a vibe-coded app isn't fundamentally different from securing any app. The difference is that you need to be more deliberate about it because the AI won't be. It will generate beautiful, functional, completely insecure code with absolute confidence.

Your job—whether you're a developer, a founder, or someone who hired a developer—is to add the security layer the AI forgot.

Start today. Run gitleaks detect --source . on your repo right now. You might be surprised what you find.

What matters

  1. 1AI-generated code has 1.5–2x more security vulnerabilities than human code—treat all vibe-coded output as untrusted by default
  2. 2Set up Gitleaks pre-commit hooks and automated SAST/DAST scanning in CI/CD as your security baseline
  3. 3Protect endpoints with input validation, rate limiting, security headers, and strict CORS—AI rarely adds these
  4. 4Lock down databases: enable RLS, use parameterized queries, create least-privilege service accounts, and verify storage bucket rules
  5. 5Use AI self-review prompting and tools like Semgrep, Snyk, and CodeQL to catch vulnerabilities before they ship
  6. 6Non-technical founders: hire a professional security reviewer ($5K–$25K) before launch—it's cheap compared to the $4.88M average breach cost