There is a specific kind of frustration that vibe coding sessions manufacture reliably. You've spent a productive afternoon with Cursor building a FastAPI backend. Everything works — the endpoints return the right data, the Swagger docs render perfectly, the React frontend talks to the API without complaint. You push to your VPS (Virtual Private Server — a rented cloud machine where production apps run), wire up nginx, and within 60 seconds the entire app is broken. Every API call returns 404. The OpenAPI docs redirect to a page that doesn't exist. CORS errors flood the browser console like a log-spam attack. Sound familiar?
The code is not broken. Cursor wrote perfectly functional code. The problem is that Cursor generated code for a world it understood — a world where your app sits alone at the root URL on a known port — and you deployed it into an entirely different world that Cursor had never been told about. This article is about that gap, why it exists, and exactly how to close it so that your AI agent generates deployment-aware code from the first prompt.
The Deployment Blindspot: What Your AI Agent Doesn't Know
Quick Answer: Cursor and Claude Code know only what is in their context window. They have never seen your nginx config, your docker-compose setup, or your domain structure. Without that information they will always generate code that assumes localhost at the root — and that assumption breaks the moment a reverse proxy is involved.

Why local and production are two different mental models
When you run uvicorn main:app --reload locally, your FastAPI app is the only thing listening on port 8000. It owns the root URL /. Your frontend dev server runs on port 3000. They're both on localhost, so CORS is permissive or disabled entirely. Static files are served directly by the app or by Vite's dev server. Nothing stands between the browser and your code.
Production looks nothing like this. nginx (a high-performance web server used by roughly a third of all internet sites) listens on port 443 and routes requests based on URL prefix. Your API might live at /api/, your frontend at /, your media files at /static/. SSL terminates at nginx, so your FastAPI app internally receives plain HTTP. Cookies need the Secure flag. CORS must allow your real domain, not localhost. Every assumption the development setup encoded is wrong.
The four deployment assumptions Cursor bakes in silently
These aren't bugs in Cursor. They're the most reasonable defaults given what the model can see. But each one will break in production:
- API at root path: All generated API routes assume the app is mounted at `/`, not `/api/`.
- CORS origins as localhost: The default CORS config allows `http://localhost:3000`, which the browser will refuse on your real domain.
- HTTP cookies without Secure flag: Session cookies work locally over plain HTTP, but on an HTTPS production site they need the Secure flag: browsers reject any SameSite=None cookie that lacks it, and a non-Secure session cookie can leak over a plain-HTTP request.
- Static files served by the app: FastAPI's `StaticFiles` mount conflicts with nginx's own static serving and creates path collisions at the proxy layer.
"The AI wrote code that was correct for the environment it was shown. I just never showed it my environment." — A pattern we hear from vibe coders every week.
- Write down your production stack in one paragraph: proxy software, URL layout, port assignments, SSL strategy
- Share that paragraph with Cursor at the start of your next session — before you write a single line of code
- Check whether your current `.cursorrules` or `CLAUDE.md` mentions your deployment topology — if it doesn't, add it today
FastAPI root_path: The One Setting That Fixes the 404 Cascade
Quick Answer: When FastAPI sits behind a reverse proxy at a sub-path like /api/, you must tell it via root_path so it generates correct redirect URLs and OpenAPI doc links. Without it, every internal redirect and the Swagger UI will point to the wrong path, causing 404s that look like broken routes when the routes themselves are fine.

Why /api/ works in nginx but 404s inside FastAPI
Here is the sequence that breaks. nginx receives a request for https://yourdomain.com/api/users. It strips the /api prefix and forwards /users to FastAPI on port 8000. FastAPI correctly handles /users — this part works. The problem appears when FastAPI generates a redirect (like after a login) pointing to /dashboard. The browser follows that redirect and hits nginx expecting /api/dashboard — but FastAPI told it to go to /dashboard. 404.
The same problem manifests in the OpenAPI docs. Swagger UI loads from /docs and fetches its schema from /openapi.json. But nginx exposed Swagger at /api/docs — so both requests are immediately 404'd before they reach FastAPI.
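To make the failure concrete, here is a tiny model in plain Python (not FastAPI's actual internals; `login_redirect_location` is a hypothetical helper for this sketch) of the Location header a root_path-aware redirect produces:

```python
def login_redirect_location(target: str, root_path: str = "") -> str:
    """Model of a proxy-aware redirect: the configured root_path is
    prepended to the app-relative target, mirroring what FastAPI's
    url_for-based redirects do once root_path is set."""
    return root_path + target

# Without root_path, the browser is sent to /dashboard, which nginx
# has no route for (the 404 described above):
print(login_redirect_location("/dashboard"))           # -> /dashboard

# With root_path="/api", the public prefix survives the round trip:
print(login_redirect_location("/dashboard", "/api"))   # -> /api/dashboard
```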
The root_path fix and how to set it correctly
FastAPI follows the ASGI (Asynchronous Server Gateway Interface — the Python standard for connecting async web frameworks to servers) spec, which includes a root_path parameter for exactly this scenario. Set it and every internal URL FastAPI generates will automatically include the prefix:
```python
# main.py
import os

from fastapi import FastAPI

app = FastAPI(
    title="My App",
    root_path=os.getenv("ROOT_PATH", ""),  # "" in dev, "/api" in production
)

@app.get("/users")
async def list_users():
    return []
```
Set ROOT_PATH=/api in your production .env and leave it empty in your local .env. The matching nginx configuration to make the prefix stripping work correctly is:
```nginx
# nginx.conf — forward /api/ requests to FastAPI, strip the prefix
location /api/ {
    proxy_pass http://fastapi:8000/;  # trailing slash strips /api
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```
The trailing slash in proxy_pass http://fastapi:8000/ is not optional — it's what tells nginx to strip the /api prefix before forwarding. Remove it and FastAPI will receive /api/users directly, which doesn't match any route definition and returns 404 immediately.
| Scenario | root_path Setting | Learner's First Step |
|---|---|---|
| App at root / (direct or Caddy proxy) | Empty string "" | No change needed — default behavior is correct |
| App at /api/ behind nginx | "/api" | Set ROOT_PATH=/api in production .env and use env var in FastAPI() |
| App at /v1/service/ (multi-tenant platform) | "/v1/service" | Use dynamic env var; set in container orchestration (Kubernetes, Docker Compose) |
| Multiple FastAPI apps on one server | Unique per app | Each service gets its own ROOT_PATH environment variable in docker-compose |
- Open your `main.py` and check whether `FastAPI()` uses a hardcoded path or reads from an environment variable
- If hardcoded, refactor to `root_path=os.getenv("ROOT_PATH", "")` before your next deployment
- Check your nginx config for the trailing-slash rule on `proxy_pass`
Docker and nginx: Bridging the Local-to-Production Gap
Quick Answer: Docker Compose gives you a local multi-service environment, but without an nginx container mimicking production, every developer test happens against assumptions that won't hold on your server. Add a production-equivalent nginx service to docker-compose.prod.yml and test against it before pushing — the diff between local and production shrinks to near zero.

How nginx rewrites URLs your FastAPI app never expected
The most disorienting part of the nginx proxy relationship is that your FastAPI app receives a request that looks completely normal — it has no idea it was once addressed to a different URL. By the time a request for https://yourdomain.com/api/orders/42 reaches FastAPI, nginx has already stripped the /api prefix and rewritten the scheme from HTTPS to HTTP. FastAPI sees GET /orders/42 HTTP/1.1 with no indication it came via HTTPS or that the client knows it as /api/orders/42. This creates two failure modes: broken redirects (because the app doesn't know its public prefix) and broken cookie security (because the app doesn't know the connection is actually HTTPS).
The solution is to pass extra headers through nginx that tell the downstream application the original request context. The X-Forwarded-Proto header preserves the original scheme (https vs http), X-Forwarded-For carries the real client IP, and Host preserves the original hostname. These are already shown in the nginx config snippet above — the key is that your FastAPI app also needs to trust and read them.
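A minimal sketch of that trust-and-read step (an illustrative helper, not part of FastAPI's API; in practice, running uvicorn with `--proxy-headers` behind a trusted proxy does this for you):

```python
def effective_scheme(headers: dict[str, str], direct_scheme: str = "http") -> str:
    """Return the client-facing scheme, preferring X-Forwarded-Proto
    when the trusted reverse proxy has set it."""
    return headers.get("x-forwarded-proto", direct_scheme)

# Behind nginx the app sees plain HTTP, but the header tells the truth:
print(effective_scheme({"x-forwarded-proto": "https"}))  # -> https

# In local dev there is no proxy and no header:
print(effective_scheme({}))                              # -> http
```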
Writing a deployment-aware Docker Compose setup
The practical solution is a two-file Docker Compose strategy. Your docker-compose.yml handles local development — no nginx, ports exposed directly, hot reload enabled. Your docker-compose.prod.yml mirrors production — nginx in front, ports not exposed externally, environment variables from your production .env. Running the production compose locally before deploying catches 90% of the breakages described in this article before they ever hit your server.
```yaml
# docker-compose.prod.yml
version: "3.9"

services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
      - ./frontend/dist:/usr/share/nginx/html:ro
    depends_on:
      - fastapi

  fastapi:
    build: ./backend
    environment:
      - ROOT_PATH=/api
      - DATABASE_URL=${DATABASE_URL}
      - CORS_ORIGINS=${CORS_ORIGINS}
      - SECRET_KEY=${SECRET_KEY}
    # No ports: section — only nginx can reach this service
```
"The fastest way to find production-only bugs is to run your production Docker Compose setup on your laptop before you push." — A pattern consistently validated by teams that ship without surprises.
- Create a `docker-compose.prod.yml` that includes an nginx container in front of your app
- Run it locally with `docker compose -f docker-compose.prod.yml up` and test against `http://localhost/api/`
- Any failure at this stage is a production bug you just caught for free
Teaching Cursor and Claude Code About Your Deployment Stack
Quick Answer: Add a "Deployment Context" section to your .cursorrules and CLAUDE.md files. Include the reverse proxy path prefix, the production domain, environment variable names, and which services handle static files. AI agents read these files at the start of every session — giving them this context costs you five minutes once and prevents deployment bugs permanently.

What belongs in .cursorrules and CLAUDE.md
Both .cursorrules (read by Cursor) and CLAUDE.md (read by Claude Code) are plain-text files at your repository root. They're the equivalent of onboarding documentation for your AI agents — persistent context that loads automatically at the start of every session. Most developers use them for code style rules and off-limits files. Fewer use them for deployment context, which is a significant missed opportunity.
The reason deployment context matters so specifically is that it directly shapes which code the agent generates. If Cursor knows your API lives at /api/ behind nginx, it will use os.getenv("ROOT_PATH", "") rather than hardcoding "/". If it knows your production domain is HTTPS-only, it will set secure=True on cookies from the start. These are not corrections the agent makes after deployment breaks — they're decisions it makes before the code is written. For broader context on how AI tools like Cursor handle their configuration, see our comparison of Cursor, Copilot, and Codeium.
A deployment context template you can paste right now
Copy this block and adapt it to your stack. The specific values matter less than the fact that Cursor and Claude Code will have them available at every session:
```markdown
## Deployment Context (REQUIRED reading before writing any backend code)

Production stack:
- Reverse proxy: nginx
- API sub-path: /api/ (nginx strips this before forwarding to FastAPI)
- Frontend path: / (served by nginx from /usr/share/nginx/html)
- Static assets: /static/ (served directly by nginx, NOT by FastAPI)
- Production domain: https://yourdomain.com (HTTPS enforced)
- Local dev: http://localhost:8000 (FastAPI direct, no proxy)

Environment variable conventions:
- ROOT_PATH: reverse proxy path prefix (empty in dev, "/api" in prod)
- CORS_ORIGINS: comma-separated allowed origins
- DATABASE_URL: full connection string including credentials
- SECRET_KEY: session signing key (never commit this)
- ENV: "development" or "production"

Rules:
- Never hardcode localhost URLs or port numbers in backend code
- Always read ROOT_PATH from env for FastAPI root_path
- Always read CORS_ORIGINS from env — never hardcode origins
- Set cookie Secure=True when ENV == "production"
- Do not add FastAPI StaticFiles mounts — nginx handles static
- Do not expose FastAPI port directly; all traffic goes through nginx
```
This is the kind of context that transforms an AI agent from a colleague who has never seen your infrastructure into one who has just read the deployment runbook. For a full guide on how to leverage these instruction files to prevent agents from breaking your codebase, read our developer's survival guide to AI agent loops.
- Create or open your `.cursorrules` file and add a Deployment Context section using the template above
- Do the same in `CLAUDE.md` if you also use Claude Code
- Ask Cursor to summarise your deployment stack and verify it matches reality — if it can't, your context block isn't clear enough
Static Files, CORS, and Session Cookies: The Three Production Tripwires
Quick Answer: Static files, CORS headers, and session cookies all behave differently behind a reverse proxy and under HTTPS. Each requires a specific one-time configuration change that Cursor cannot guess without knowing your production topology. These three fixes together eliminate the most common category of production-only breakage in vibe-coded apps.

Static content path conflicts when nginx proxies
FastAPI makes it very easy to mount static files via app.mount("/static", StaticFiles(directory="static"), name="static"). Locally this works perfectly. In production behind nginx it creates a routing conflict: both nginx and FastAPI think they own /static/. Whether the request reaches nginx's file system or FastAPI's Python handler depends on the order of nginx location blocks — and the wrong order returns either a 404 (nginx has the block but the file isn't where it's looking) or unexpectedly slow static serving (Python is handling requests that nginx could serve from disk instantly).
The clean fix is to remove FastAPI's StaticFiles mount entirely in production and let nginx own the path exclusively. Add this to your .cursorrules deployment rules section and Cursor will stop generating StaticFiles mounts automatically.
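One way to keep StaticFiles convenient locally while letting nginx own the path in production is to gate the mount on the `ENV` variable used elsewhere in this article (a sketch; `should_mount_static` is a hypothetical helper):

```python
import os

def should_mount_static(env: str) -> bool:
    """The dev server serves /static/ itself; production leaves it to nginx."""
    return env == "development"

# In main.py this gates the mount (shown as a comment so the sketch
# stays runnable without fastapi installed):
#
#   from fastapi.staticfiles import StaticFiles
#   if should_mount_static(os.getenv("ENV", "development")):
#       app.mount("/static", StaticFiles(directory="static"), name="static")

print(should_mount_static("development"))  # -> True
print(should_mount_static("production"))   # -> False
```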
CORS misconfigurations that only appear on production domains
In development, your frontend and backend typically share localhost as the origin, so CORS errors are rare — the browser treats same-host requests permissively. Once your frontend is at https://yourdomain.com and your API is at https://yourdomain.com/api/, they are technically the same origin and CORS wouldn't even be needed — but if your CORS config still lists http://localhost:3000 and doesn't list https://yourdomain.com, every credentialed request from your production domain will be blocked.
```python
# main.py — environment-driven CORS origins
import os

from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI(root_path=os.getenv("ROOT_PATH", ""))

origins = os.getenv("CORS_ORIGINS", "http://localhost:3000").split(",")

app.add_middleware(
    CORSMiddleware,
    allow_origins=origins,  # Never use ["*"] with credentials
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)
```
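One sharp edge in the snippet above: a value like `CORS_ORIGINS="https://yourdomain.com, http://localhost:3000"` (note the space) yields an origin with a leading space that will never match the browser's Origin header. A small hardening sketch (`parse_origins` is a hypothetical helper, not a FastAPI API):

```python
def parse_origins(raw: str) -> list[str]:
    """Split a comma-separated CORS_ORIGINS value, dropping stray
    whitespace and empty entries left by trailing commas."""
    return [origin.strip() for origin in raw.split(",") if origin.strip()]

print(parse_origins("https://yourdomain.com, http://localhost:3000,"))
# -> ['https://yourdomain.com', 'http://localhost:3000']
```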

Secure session cookies and SameSite policy in production
Locally, cookies work over plain HTTP with no Secure flag required — the browser allows this on localhost. In production over HTTPS, session cookies should carry the Secure flag: browsers reject any SameSite=None cookie that lacks it, and a non-Secure cookie can be replayed over a plain-HTTP request. When a cookie is rejected, sessions appear to work (the login succeeds) but immediately expire (the next request is treated as unauthenticated). The catch is that your FastAPI app receives plain HTTP from nginx internally, so request.url.scheme will report http rather than https. You cannot trust the scheme from the request object in a proxied app — you must read the X-Forwarded-Proto header (or run uvicorn with --proxy-headers) or simply use an environment variable:
```python
# Setting cookies with environment-aware security
import os

from fastapi import Response

IS_PRODUCTION = os.getenv("ENV", "development") == "production"

def set_session_cookie(response: Response, token: str):
    response.set_cookie(
        key="session",
        value=token,
        httponly=True,         # JS cannot read this cookie
        secure=IS_PRODUCTION,  # HTTPS-only in prod, plain HTTP in dev
        samesite="lax",        # Prevents CSRF while allowing top-level navigation
        max_age=3600,
    )
```
The combination of httponly=True, secure=IS_PRODUCTION, and samesite="lax" is the safe default for most web apps. Once this pattern is in your .cursorrules as a rule — "always use this function for session cookies" — Cursor will apply it consistently rather than generating ad-hoc set_cookie calls that vary per feature. For a deep dive into broader AI coding security risks including cookie and session management, see our AI coding security risks guide.
- Search your codebase for `set_cookie` calls — check each one for `secure=True` and `httponly=True`
- Search for `allow_origins` in your middleware — ensure it reads from an environment variable
- Add the cookie pattern and CORS pattern as explicit rules to `.cursorrules` so Cursor never deviates from them
aicourses.com Verdict
The irony of the "works locally, breaks in production" problem is that it has almost nothing to do with the quality of your AI agent and almost everything to do with the quality of the context you give it. Cursor and Claude Code are not guessing poorly — they are extrapolating correctly from an incomplete picture. Your local development environment encodes assumptions about where the app lives, who can call it, and how sessions should behave. Those assumptions are invisible when they work and catastrophic when they don't translate to production.
The practical fix is a one-time investment: write a deployment context block in .cursorrules and CLAUDE.md, adopt environment variables for every configuration value that differs between local and production, and create a docker-compose.prod.yml that tests your app behind nginx before every push. Completed once, these three changes make every future vibe coding session deployment-aware from the first prompt. Understanding the broader mechanics of how AI tools reason about your code is also essential — we break that down in how AI coding tools actually work. Configuration values and framework APIs verified as of March 2026.
The next article in this cluster digs into a related failure mode: context window management during long vibe coding sessions — why your agent starts hallucinating after 40 minutes, and how to restructure your prompts so the context stays clean from start to finish.
Frequently Asked Questions
Why does my FastAPI app return 404 for all API routes when deployed behind nginx?
The most common cause is a missing root_path configuration. When nginx proxies requests from /api/ to your FastAPI app on port 8000, FastAPI generates internal URLs relative to / (the root), not /api/. Setting root_path='/api' in your FastAPI() constructor or via the ROOT_PATH environment variable tells the app where it actually lives in the URL hierarchy, fixing redirects and OpenAPI docs.
What is the difference between localhost and production for a Docker-deployed app?
Locally, your app is the only thing listening — it runs at the root path on a specific port. In production, nginx sits in front and routes different URL prefixes to different services. Your app suddenly lives at a sub-path like /api/, its static files need to be served separately, and cookies and CORS headers must reference the real domain rather than localhost.
Does Cursor know about my nginx or Docker configuration?
Not unless you tell it. Cursor and Claude Code read only what is in their context window — your source files, error messages, and any configuration documents you provide. If you don't share your nginx config or a written description of your deployment topology, the AI will generate code that works perfectly on localhost but ignores the production URL structure entirely.
What should I put in .cursorrules to give Cursor my deployment context?
Include: (1) whether the app runs behind a reverse proxy and at what sub-path, (2) the production domain and whether HTTPS is enforced, (3) environment variable names for secrets and runtime config, (4) which services handle static files, and (5) any CORS origins allowed in production. This prevents Cursor from hardcoding localhost URLs or generating development-only CORS settings.
How do I fix CORS errors that only appear in production and not locally?
Production CORS errors almost always mean your allow_origins list includes localhost but not your real domain. Use an environment variable: origins = os.getenv('CORS_ORIGINS', 'http://localhost:3000').split(','), then set CORS_ORIGINS=https://yourdomain.com in your production .env. Never use allow_origins=['*'] in production — it bypasses cookie and credential security.
Why do my session cookies work in development but not in production?
Production session cookies require secure=True (HTTPS only). When your app is behind nginx terminating SSL, the browser sees HTTPS but your app's set-cookie header may still emit Secure=False — because the app only sees plain HTTP from nginx internally. Use an ENV environment variable to set secure=IS_PRODUCTION rather than reading request.url.scheme.
What is a reverse proxy and why does it break my app's URL assumptions?
A reverse proxy (commonly nginx) sits in front of your app and forwards incoming requests to it. This translation process strips or alters the original URL, scheme, and host headers — so if your app generates absolute URLs or redirects, it will use the wrong base unless you explicitly configure it for the proxy using root_path and the X-Forwarded-Proto header.
How do I serve static files correctly when my FastAPI app is behind nginx?
Let nginx serve static files directly from the filesystem, bypassing FastAPI entirely. In nginx, add location /static/ { alias /app/static/; } before the proxy_pass block. Remove FastAPI's StaticFiles mount in production. This prevents path collisions and is far faster since nginx doesn't invoke Python for each file request.
Should I use CLAUDE.md or .cursorrules for deployment configuration context?
Use both if you work with both tools. .cursorrules is read by Cursor; CLAUDE.md is read by Claude Code. They can contain the same deployment context section — a short, structured block describing your production stack that each agent will automatically incorporate into its reasoning at the start of every session.
Conclusion
The deployment gap is a context gap. Your AI agent built great code for the environment it knew — localhost, port 8000, nothing in the way. Production is nginx, sub-paths, HTTPS-only cookies, and environment-specific CORS. Close that gap by putting your production topology into .cursorrules and CLAUDE.md, adopt environment variables for every deployment-sensitive value, and test against a local production-equivalent Docker stack before pushing. Do those three things once and you'll stop hitting this class of bug entirely. Want to learn more about AI? Download our aicourses.com app through this link and claim your free trial!


