Jay Doolan

5 Prompts Which Make Any Web App More Secure

Posted on 05/03/26 · 6 min read
vibe-coded-security

Let's be real - when you're vibe coding your way through an MVP at 1am in your deepest flow state imaginable, security is the last thing on your mind.

But here's the thing: AI tools like Antigravity, Cursor or Claude will build you something that looks solid. Clean UI, routes wired up, data flowing - with absolutely nothing stopping someone from injecting a script tag into your form or hammering your API into the floor.

The good news? You can fix most of it with a few targeted prompts. Here are five copy-paste prompts that will add essential security to any vibe-coded application.


1. Input Sanitisation

The problem: Your AI agent built the form. It did not protect the form. Those are two very different things.

By default you're probably wide open to XSS and injection attacks the moment a user can type anything into your app. Especially if you're persisting that input straight to a database.

Paste this into your AI tool:

Add input validation to all forms that:
- Strips HTML tags and script elements from text inputs
- Validates email formats before saving
- Limits text input length to sensible maximums
- Uses parameterised queries (never raw string concatenation) for database access
- Returns specific, helpful error messages for invalid input

What it stops: XSS attacks, SQL injection, corrupted data from malformed input. The basics — but the basics most people skip.
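To make the prompt concrete, here's roughly what the first few checks look like. A minimal TypeScript sketch — the function names and the 500-character limit are illustrative choices, not a library API:

```typescript
const MAX_TEXT_LENGTH = 500; // pick a sensible maximum for your fields

// Drop script blocks entirely, then strip any remaining HTML tags.
function stripHtml(input: string): string {
  return input
    .replace(/<script[\s\S]*?<\/script>/gi, "")
    .replace(/<[^>]*>/g, "");
}

// Basic shape check for emails before anything touches the database.
function isValidEmail(email: string): boolean {
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
}

// Combine the checks and return specific, helpful error messages.
function validateTextField(
  value: string
): { ok: boolean; value?: string; error?: string } {
  const cleaned = stripHtml(value).trim();
  if (cleaned.length === 0) {
    return { ok: false, error: "This field cannot be empty." };
  }
  if (cleaned.length > MAX_TEXT_LENGTH) {
    return { ok: false, error: `Keep it under ${MAX_TEXT_LENGTH} characters.` };
  }
  return { ok: true, value: cleaned };
}
```

For the database side, prefer parameterised queries over manual escaping — every mainstream driver and ORM supports them out of the box.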


2. Proper Authentication

The problem: A login screen is not authentication. AI tools love generating a login form with zero enforcement behind it - no lockout, no session expiry, nothing.

If you're using something like Clerk or BetterAuth you've got a head start, but that doesn't mean your session management & password policies are sorted by default.

Paste this:

Implement secure authentication with:
- Password requirements: minimum 8 characters, mix of letters and numbers
- Account lockout after 5 failed login attempts
- Session timeout after 30 minutes of inactivity
- Secure password reset via email verification only
- Force logout when a user's role or permissions change

What it stops: Brute force attacks, session hijacking & people getting back into accounts they shouldn't have after a role change.
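Here's roughly what the lockout and session rules translate to. An in-memory TypeScript sketch with hypothetical names (a real app would back this with a database or Redis, not a Map that resets on every restart):

```typescript
const MAX_ATTEMPTS = 5;
const SESSION_TTL_MS = 30 * 60 * 1000; // 30 minutes of inactivity

const failedAttempts = new Map<string, number>();
const sessions = new Map<string, { userId: string; lastSeen: number }>();

// Minimum 8 characters, mix of letters and numbers.
function isStrongPassword(pw: string): boolean {
  return pw.length >= 8 && /[a-zA-Z]/.test(pw) && /\d/.test(pw);
}

// Returns true once the account should be locked.
function recordFailedLogin(email: string): boolean {
  const count = (failedAttempts.get(email) ?? 0) + 1;
  failedAttempts.set(email, count);
  return count >= MAX_ATTEMPTS;
}

// Sliding-window expiry: activity extends the session, silence ends it.
function isSessionActive(token: string, now = Date.now()): boolean {
  const session = sessions.get(token);
  if (!session) return false;
  if (now - session.lastSeen > SESSION_TTL_MS) {
    sessions.delete(token); // expired: force a fresh login
    return false;
  }
  session.lastSeen = now;
  return true;
}
```

If you're on Clerk or BetterAuth, most of this is a config setting rather than code — but it's worth knowing what the setting actually does.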


3. Access Control

The problem: Your AI built multi-user functionality. It did not build the walls between users.

This one catches a lot of people out. You've got user A & user B. User A can probably see user B's data by tweaking the ID in a URL or API call. I've seen this in codebases people were about to ship. Don't be that person.

Paste this:

Add role-based access control where:
- Users can only view and edit their own data
- Admins require a separate confirmation step for sensitive actions
- API endpoints verify user permissions before returning any data
- Direct URL access to restricted pages redirects to login
- Database queries automatically filter results by user ownership

What it stops: Data leaks between users, privilege escalation & someone enumerating your entire user base by incrementing an ID.

If you're on Convex, lean into their built-in auth helpers here — the ctx.auth.getUserIdentity() pattern makes per-user data scoping genuinely easy.
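Framework aside, the ownership check itself is only a few lines. A minimal TypeScript sketch — the types and names are illustrative, not any framework's API:

```typescript
interface User {
  id: string;
  role: "user" | "admin";
}

interface Item {
  id: string;
  ownerId: string;
}

// Deny by default: only the owner (or an admin) may touch a record.
function canAccess(user: User, item: Item): boolean {
  return user.role === "admin" || user.id === item.ownerId;
}

// Scope list queries to the caller, so tweaking an ID returns nothing
// rather than someone else's data.
function scopeToOwner(items: Item[], user: User): Item[] {
  return user.role === "admin"
    ? items
    : items.filter((item) => item.ownerId === user.id);
}
```

The key habit is doing this filtering server-side, in the query itself — never trusting an ID the client sends you.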


4. Secure Data Storage

The problem: Plain text passwords still exist in the wild in 2026. I wish I was joking.

AI tools will generate a User schema & just... store the password. As a string. In your database. Next to the email. Right there for anyone with database access to read.

Paste this:

Secure sensitive data by:
- Hashing all passwords with bcrypt before storing them
- Encrypting personally identifiable information (PII) like emails and phone numbers
- Never storing payment or card details directly — use Stripe or a proper payment provider
- Adding database constraints to prevent duplicate sensitive records
- Creating audit logs for all data access and modifications

What it stops: A database breach turning into a full credential dump. If your hashes are strong, a breach is bad - a plain text leak is catastrophic.
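Password hashing is the non-negotiable part. A minimal sketch using Node's built-in scrypt — the prompt asks for bcrypt, which works the same way but pulls in an extra dependency, so treat this as one reasonable variant rather than the one true answer:

```typescript
import { scryptSync, randomBytes, timingSafeEqual } from "node:crypto";

// Never store the plain password — only a random salt plus the hash.
function hashPassword(password: string): string {
  const salt = randomBytes(16).toString("hex");
  const hash = scryptSync(password, salt, 64).toString("hex");
  return `${salt}:${hash}`;
}

function verifyPassword(password: string, stored: string): boolean {
  const [salt, hash] = stored.split(":");
  const candidate = scryptSync(password, salt, 64);
  // Timing-safe comparison avoids leaking information via response time.
  return timingSafeEqual(candidate, Buffer.from(hash, "hex"));
}
```

The salt means two users with the same password get different hashes, so a breach can't be cracked with one precomputed table.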


5. API Security

The problem: AI-generated APIs are often wide open. No rate limiting. No proper error handling. Just raw endpoints sat there waiting to be abused.

This is especially relevant if you're exposing any kind of AI feature — those token costs add up fast when someone decides to hammer your endpoint.

Paste this:

Secure all API endpoints with:
- Rate limiting: max 100 requests per user per minute
- Authentication required on all data-modifying endpoints
- Generic error messages that don't leak system information
- CORS headers configured for your specific domain only
- Request logging for monitoring suspicious activity

What it stops: DDoS attacks, API abuse & error messages accidentally telling attackers exactly what your stack looks like.
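Rate limiting doesn't need much code to get started. A minimal in-memory fixed-window sketch matching the 100-requests-per-minute rule above — illustrative only, since production setups usually share this state via something like Redis so it survives deploys and multiple instances:

```typescript
const LIMIT = 100; // max requests
const WINDOW_MS = 60 * 1000; // per minute, per user

const windows = new Map<string, { count: number; start: number }>();

// Returns true if the request is allowed, false once the user is over quota.
function allowRequest(userId: string, now = Date.now()): boolean {
  const w = windows.get(userId);
  if (!w || now - w.start >= WINDOW_MS) {
    windows.set(userId, { count: 1, start: now }); // fresh window
    return true;
  }
  w.count += 1;
  return w.count <= LIMIT;
}
```

When `allowRequest` returns false, respond with HTTP 429 and a generic message — no need to tell the caller anything about how the limiter works.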


Quick Sanity Check

Once you've applied these, do a quick manual test:

  1. Submit a form with <script>alert('xss')</script> — nothing should execute
  2. Try accessing another user's data by changing an ID in a URL
  3. Test your password reset flow with an email that doesn't exist - the response shouldn't reveal whether an account exists
  4. Fire rapid requests at your API and check the rate limiting kicks in
  5. Read your error messages — do they expose anything about your stack?

If any of those behave in a way that surprises you, you've found something worth fixing before someone else does.


Why AI Doesn't Handle This By Default

It's not that AI tools don't know about security - they absolutely do. The issue is that security adds friction, complexity & things that can go wrong during a demo. So by default, the path of least resistance wins.

The fix is simple: be explicit. The more specific your security requirements are in the prompt, the more seriously the AI takes them.


What This Doesn't Cover

These prompts give you a solid baseline, not a full security posture. For anything beyond an MVP you'll also want to think about:

  • HTTPS/SSL (most hosting platforms handle this, but worth confirming)
  • Regular dependency updates — outdated packages are a massive attack surface
  • Penetration testing if you're handling real user data at scale
  • Compliance requirements like GDPR if you're based in or serving the UK/EU

For most personal projects & MVPs, what's above is more than enough to not be the easiest target in the room. And that's really the goal - security isn't about being impenetrable, it's about not being the low-hanging fruit.

Build it properly from the start. Future you will be grateful.