AIUIG is a one-week, end-to-end proof-of-concept that converts natural language into functional React + Tailwind components, with live preview. It demonstrates AI engineering applied to UI generation, full-stack delivery (NestJS + Neon Postgres + Render + Vercel), and developer-centric UX. It is explicitly aligned with the responsibilities described in Vercel’s v0 AI Engineer role: prompt-to-code, fast prototyping, evaluation hooks, and production-ready deployment.
- Backend (Render): https://aiuig.onrender.com/api
- Frontend (Vercel): https://aiuig-frontend.vercel.app/
- Health endpoint: `GET /api/health` on the backend
Monorepo layout:
```
AIUIG/
├── apps/
│   ├── backend/              # NestJS 11 (Node 20), REST API, OpenAI integration
│   │   ├── src/
│   │   │   ├── main.ts
│   │   │   ├── app.module.ts
│   │   │   ├── health/...
│   │   │   └── ai/...
│   │   ├── package.json
│   │   └── .env (not committed) / .env.example
│   └── frontend/             # React + Vite + Tailwind, React Live preview
│       ├── src/
│       │   └── App.tsx
│       ├── index.html
│       └── .env (VITE_API_URL)
├── README.md
└── pnpm-workspace.yaml (optional)
```
Key choices:
- Backend: NestJS for fast, structured APIs and clean module boundaries.
- Database: Neon Postgres (serverless, `sslmode=require`) to stay fully cloud-first.
- Hosting: Render for the API, Vercel for the frontend, each with auto-deploy on push.
- Frontend: Vite + React + Tailwind for speed and UX; React Live to safely render generated components.
- Prompt → React TSX + Tailwind code generation via OpenAI’s Chat Completions.
- Live dual view: code and rendered preview (React Live sandbox).
- Strict output contract: a single TSX component, self-contained, no external UI libs.
- Input validation (zod) and basic sanitization on the backend.
- Health checks for monitoring and uptime tools.
- CORS configured to allow Vercel frontend and local dev.
Base URL: https://<your-render-service>.onrender.com
Endpoint: GET /api/health
- Status and timestamp for monitoring.
Endpoint: POST /api/generate-ui
- Request body:
  - `prompt` string (8..2000 chars): natural language description of the desired UI.
  - `maxTokens` optional number (256..4096): cap for model output.
Example request body:

```json
{
  "prompt": "Create a login form with email and password fields, remember-me checkbox, and a submit button styled with Tailwind."
}
```

Example response:

```json
{
  "code": "export default function Ui() { /* TSX component */ }",
  "meta": { "model": "gpt-4o-mini", "tokens_used": 632 }
}
```
Contract:
- The backend enforces a single TSX component named `Ui`.
- No external imports beyond React; styling is Tailwind only.
- The frontend displays the code and renders it inside a sandboxed preview.
Backend .env (local dev). File: apps/backend/.env
```env
DATABASE_URL=postgresql://<user>:<password>@<project>-<hash>.neon.tech/<dbname>?sslmode=require
DATABASE_SSL=true
JWT_SECRET=supersecretkey
JWT_EXPIRES_IN=3600s
PORT=5000
NODE_ENV=development
```
Backend template (committed). File: apps/backend/.env.example
```env
DATABASE_URL=
DATABASE_SSL=true
JWT_SECRET=
JWT_EXPIRES_IN=3600s
PORT=5000
NODE_ENV=development
```
Frontend .env. File: apps/frontend/.env
```env
VITE_API_URL=https://<your-render-service>.onrender.com
```
Notes:
- Do not commit `.env`. Commit `.env.example` as a contract for required keys.
- On Render, do not set `PORT`; Render injects it automatically.
- Neon requires SSL. Use `?sslmode=require` and `DATABASE_SSL=true`.
Entry point. File: apps/backend/src/main.ts
- Sets global prefix `api`.
- Enables CORS for `http://localhost:5173` and your Vercel domain.
- Binds to `PORT` (5000 locally; Render injects its own at runtime).
- Registers `ValidationPipe` for DTO safety.
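The CORS part of that bootstrap boils down to an origin allowlist. A minimal sketch of that predicate (the origin values are the ones documented in this README; the exact bootstrap code may differ):

```typescript
// Sketch of the CORS origin check configured in main.ts.
// Allowed origins per this README: local Vite dev server and the Vercel frontend.
const ALLOWED_ORIGINS = [
  "http://localhost:5173",
  "https://aiuig-frontend.vercel.app",
];

function isAllowedOrigin(origin: string | undefined): boolean {
  // Non-browser clients (curl, uptime monitors) send no Origin header; allow them.
  if (origin === undefined) return true;
  return ALLOWED_ORIGINS.includes(origin);
}
```

NestJS's `app.enableCors({ origin: ... })` accepts exactly this kind of allowlist or callback.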
Health module. Files: apps/backend/src/health/health.controller.ts, health.module.ts
- `GET /api/health` returns `{ status, timestamp, uptime }`.
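The handler is essentially a payload builder. A sketch of the response shape (field names come from this README; the actual controller code may differ):

```typescript
// Sketch of the /api/health response payload.
function healthPayload(): { status: string; timestamp: string; uptime: number } {
  return {
    status: "ok",
    timestamp: new Date().toISOString(), // current time, RFC 3339
    uptime: process.uptime(),            // seconds since the Node process started
  };
}
```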
AI module. Files: apps/backend/src/ai/ai.controller.ts, ai.service.ts
- `POST /api/generate-ui`: validates the payload, calls OpenAI, extracts the TSX block, applies a basic safety check, and returns `{ code, meta }`.
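The "extracts the TSX block" step can be sketched as pulling the first fenced code block out of the model's reply, falling back to the raw text when no fence is present (function name and fallback behavior are illustrative):

```typescript
// Sketch of the TSX extraction step in ai.service.ts.
const FENCE = "`".repeat(3); // triple backtick, built dynamically to keep this README readable

function extractTsxBlock(reply: string): string {
  // Match an optional language tag (tsx/typescript/jsx) after the opening fence,
  // then capture lazily up to the closing fence.
  const re = new RegExp(`${FENCE}(?:tsx|typescript|jsx)?\\s*\\n([\\s\\S]*?)${FENCE}`);
  const fenced = reply.match(re);
  return (fenced ? fenced[1] : reply).trim();
}
```

Falling back to the raw reply keeps the endpoint resilient when the model ignores the fencing instruction; the safety check still runs on whatever comes out.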
OpenAI client. File: apps/backend/src/ai/openai.client.ts
- Wraps the Chat Completions call with `model = process.env.OPENAI_MODEL || "gpt-4o-mini"` and a strict system prompt enforcing a single TSX component.
Prompting helper. File: apps/backend/src/ai/prompt.ts
- System and user prompts that steer the model to produce one clean TSX component with Tailwind classes and no external imports.
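A sketch of what that message construction looks like (the prompt wording here is illustrative, not the exact `prompt.ts` text):

```typescript
// Sketch of the prompting helper: a strict system prompt plus the user's request.
const SYSTEM_PROMPT = [
  "You generate a single, self-contained React component in TSX.",
  "Rules: export default function Ui() { ... }; style with Tailwind classes only;",
  "no imports beyond React; no external UI libraries;",
  "reply with only one fenced tsx code block.",
].join(" ");

function buildMessages(
  userPrompt: string
): Array<{ role: "system" | "user"; content: string }> {
  return [
    { role: "system", content: SYSTEM_PROMPT },
    { role: "user", content: userPrompt },
  ];
}
```

Keeping the contract in the system message (rather than appended to the user prompt) makes it harder for user input to override the output format.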
Security notes:
- Basic blacklist for obviously unsafe patterns.
- The frontend preview is sandboxed; still, do not execute server-side code from generations.
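The blacklist can be sketched as a list of regexes over the generated code. The pattern list below is illustrative, not the project's exact list, and a blacklist is best-effort only; the sandboxed preview remains the primary guard:

```typescript
// Sketch of the unsafe-pattern filter applied to generated code (patterns illustrative).
const UNSAFE_PATTERNS: RegExp[] = [
  /\beval\s*\(/,            // arbitrary code execution
  /\bnew\s+Function\s*\(/,  // eval in disguise
  /\bdocument\.cookie\b/,   // credential exfiltration vector
  /\bfetch\s*\(/,           // generated components should be purely presentational
  /\blocalStorage\b/,       // no persistent state from untrusted code
  /\bimport\s*\(/,          // dynamic imports
];

function isSafeGeneratedCode(code: string): boolean {
  return !UNSAFE_PATTERNS.some((re) => re.test(code));
}
```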
Core screen. File: apps/frontend/src/App.tsx
- Textarea for `prompt`, Generate button, loading state.
- Tabs: Code and Preview.
- Code tab: shows returned TSX.
- Preview tab: renders the component via React Live.
- API client uses `import.meta.env.VITE_API_URL`.
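The request construction around `VITE_API_URL` can be sketched as a small helper that produces the arguments for `fetch` (function name and return shape are illustrative):

```typescript
// Sketch of the frontend API client's request construction.
// The returned { url, init } pair is what would be passed to fetch(url, init).
function buildGenerateRequest(
  apiUrl: string,
  prompt: string
): { url: string; init: { method: string; headers: Record<string, string>; body: string } } {
  return {
    // Strip a trailing slash so VITE_API_URL works with or without one.
    url: `${apiUrl.replace(/\/$/, "")}/api/generate-ui`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ prompt }),
    },
  };
}
```

In `App.tsx` this would be called as `buildGenerateRequest(import.meta.env.VITE_API_URL, prompt)`.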
Tailwind:
- Standard Tailwind 4+ setup with Vite.
- Minimal, neutral styling; emphasis on clarity and developer ergonomics.
Local dev:
- Backend: http://localhost:5000
- Frontend: http://localhost:5173
- CORS allows both local dev and your Vercel domain.
Backend on Render (Web Service):
- Branch: `main`
- Root Directory: `apps/backend`
- Environment: Node
- Build Command: `pnpm install && pnpm build`
- Start Command: `pnpm start`
- Auto Deploy: Yes
- Health Check Path: `/api/health`
- Environment Variables:
  - `DATABASE_URL=postgresql://<user>:<password>@<project>-<hash>.neon.tech/<dbname>?sslmode=require`
  - `DATABASE_SSL=true`
  - `JWT_SECRET=<your-secret>`
  - `JWT_EXPIRES_IN=3600s`
- Do not set `PORT`; Render injects it.
Database on Neon:
- Create project `aiuig` in `eu-central-1` (Frankfurt).
- Copy the “Connection string” from Connection Details into `DATABASE_URL`.
- Ensure `sslmode=require`.
- Use a dedicated user with limited scope if needed.
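Since a missing `sslmode=require` is an easy mistake to ship, a startup guard can normalize the connection string before it reaches the database client (a minimal sketch; the function name is illustrative):

```typescript
// Sketch of a startup guard: ensure the Neon connection string enforces SSL.
function ensureSslMode(databaseUrl: string): string {
  if (/[?&]sslmode=require\b/.test(databaseUrl)) return databaseUrl;
  // Append with the correct separator depending on whether a query string exists.
  const sep = databaseUrl.includes("?") ? "&" : "?";
  return `${databaseUrl}${sep}sslmode=require`;
}
```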
Frontend on Vercel:
- Project root: `apps/frontend`
- Build: `pnpm build`
- Env: `VITE_API_URL=https://<your-render-service>.onrender.com`
- Automatic deploy on push to `main`.
Monitoring:
- UptimeRobot can ping `https://<your-render-service>.onrender.com/api/health` every 5 minutes.
Day 1
- Scaffolding backend (NestJS 11, Node 20).
- Health endpoint and CORS; port set to 5000 to avoid clashes with other projects.
Day 2
- OpenAI integration and strict prompt contract.
- `POST /api/generate-ui` returning a single TSX component.
Day 3
- Frontend Vite + React + Tailwind setup.
- React Live integration for embedded rendering.
Day 4
- CORS hardening and input validation with zod.
- Basic unsafe pattern filtering for generated code.
Day 5
- Render configuration:
  - Branch `main`, Root `apps/backend`, Node runtime.
  - Build `pnpm install && pnpm build`, Start `pnpm start`.
  - Health check `/api/health`, auto-deploy on push.
Day 6
- Neon database project creation; connection string with `sslmode=require`.
- Environment variables added to Render and local `.env`.
- End-to-end test: frontend calls backend, shows code and preview.
Day 7
- README completion and deployment polish.
- Prepared narrative aligning the project with Vercel v0’s mission.
- It demonstrates natural language to UI generation using LLMs.
- It bridges research prompts with production constraints and developer UX.
- It ships fully deployed infrastructure, not just local scripts.
- It includes validation, safety, and preview ergonomics favored by UI engineers.
- It is fast to iterate and extend with evaluation pipelines.
- Style presets in the prompt (minimal, dashboard, marketing).
- History and replay: persist prompts and generations in Neon.
- Feedback loop: thumbs up/down stored and aggregated, preparing A/B experiments.
- Structured evaluation harness for generation quality across fixtures.
Backend
- File: `apps/backend/.env`
- Ensure `DATABASE_URL` points to Neon with `sslmode=require`, and `PORT=5000`.
- Command: `pnpm start:dev`
- Verify: `curl http://localhost:5000/api/health`
Frontend
- File: `apps/frontend/.env`
- Set `VITE_API_URL=http://localhost:5000`
- Commands: `pnpm install`, then `pnpm run dev`
- Open http://localhost:5173
- The project was intentionally scoped to the essentials to focus on AI UI generation.
- It showcases full-stack ownership under tight time constraints.
- It is designed to be extended with evaluation metrics and user-study hooks.
UNLICENSED for now. Contact the author for permissions regarding demo usage.
Charles. Built in one week to target AI Engineer roles focused on UI generation and developer experience.



