VibeStart.dev is an end-to-end, AI-powered coding environment that turns text into runnable apps inside an isolated sandbox. It streams tool progress to the UI, shows logs, previews running servers, and lets you inspect generated files in real time.
This is my version of Vercel's open-source Vibe Coding Platform. It's community-driven (issues and PRs are welcome) and the roadmap will evolve with feedback. This repository is public and intended to remain public.

- Primary repository: Noisemaker111/vibestart.dev
- Upstream example: vercel/examples → apps/vibe-coding-platform
- We acknowledge the upstream inspiration and continue to leverage Vercel services (AI Gateway, Sandbox) where applicable.
Built with Next.js 15, React 19, Tailwind CSS, and the AI SDK v5. It integrates with the Vercel AI Gateway for multi-model support (OpenAI GPT-5 and o4-mini, Anthropic Claude, Google Gemini, Amazon Nova, xAI Grok, etc.).
```bash
# 1) Install deps (Bun recommended)
bun install

# 2) Set the env var for Vercel AI Gateway (use your Gateway endpoint URL)
# Windows (PowerShell): $env:AI_GATEWAY_BASE_URL="https://gateway.vercel.ai/api/v1"
# macOS/Linux (bash):   export AI_GATEWAY_BASE_URL="https://gateway.vercel.ai/api/v1"

# 3) Start the dev server
bun run dev
# Open http://localhost:3000
```

Other scripts:

- Lint: `bun run lint`
- Build: `bun run build`
- Start: `bun run start`

Note: this repo currently has no test script configured.
- Lets you chat with an agent that can:
  - Create a fresh sandbox (via Vercel Sandbox, an ephemeral Linux container)
  - Generate and upload complete files into the sandbox
  - Run commands (install, build, start) and wait for completion
  - Expose ports and fetch a public preview URL
- Streams every tool action back to the UI as it happens
- Shows a live file explorer and tailing command logs
- Provides an in-app browser preview of the running app
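Under the hood, these capabilities map onto the Vercel Sandbox SDK. A rough sketch of the lifecycle the agent drives (option and field names are approximate and may vary across `@vercel/sandbox` versions; verify against the SDK docs):

```ts
import { Sandbox } from '@vercel/sandbox'

// create → write files → run → inspect → preview (names approximate)
async function lifecycle() {
  const sandbox = await Sandbox.create({ timeout: 600_000, ports: [3000] }) // timeout in ms
  await sandbox.writeFiles([
    { path: 'index.mjs', content: Buffer.from('console.log("hello")') },
  ])
  const run = await sandbox.runCommand({ cmd: 'node', args: ['index.mjs'] })
  console.log(run.exitCode)         // non-detached runs resolve when the command exits
  console.log(sandbox.domain(3000)) // public preview URL for an exposed port
}
```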
```mermaid
graph TD;
  UI["Next.js App (React 19)"] -->|/api/chat| ChatAPI
  ChatAPI["/app/api/chat/route.ts"] --> AI["AI SDK streamText"]
  AI --> Tools["UI Tool Calls"]
  Tools --> Sandbox["@vercel/sandbox"]
  Tools --> Gateway["Vercel AI Gateway"]
  Gateway --> LLMs["OpenAI / Anthropic / Gemini / ..."]
  subgraph "UI Panels"
    ChatPanel["Chat"]
    FilesPanel["File Explorer"]
    LogsPanel["Logs"]
    PreviewPanel["Preview"]
  end
  ChatPanel -->|messages| ChatAPI
  FilesPanel -->|/api/sandboxes/:id/files| Sandbox
  LogsPanel -->|/api/sandboxes/:id/cmds/:id/logs| Sandbox
  PreviewPanel -->|domain port| Sandbox
```
Key flows

- `/api/chat` streams model output and tool events to the browser
- Tool calls orchestrate the Sandbox lifecycle: create → generate files → run → wait → preview
- A Zustand store keeps sandbox state (id, status, paths, commands, url) client-side, sketched below
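A minimal sketch of that store's shape, assuming illustrative field and action names (the real `app/state.ts` also includes the data mapper for streamed tool parts):

```ts
import { create } from 'zustand'

// Illustrative store shape; names are assumptions, not the repo's exact API.
interface SandboxSession {
  sandboxId?: string
  status: 'idle' | 'running' | 'stopped'
  paths: string[]      // files uploaded into the sandbox
  commands: string[]   // command ids, used to tail logs
  url?: string         // public preview URL
  setSandboxId: (id: string) => void
  addPaths: (paths: string[]) => void
  addCommand: (id: string) => void
  setUrl: (url: string) => void
}

export const useSandboxStore = create<SandboxSession>((set) => ({
  status: 'idle',
  paths: [],
  commands: [],
  setSandboxId: (sandboxId) => set({ sandboxId, status: 'running' }),
  addPaths: (paths) => set((s) => ({ paths: [...new Set([...s.paths, ...paths])] })),
  addCommand: (id) => set((s) => ({ commands: [...s.commands, id] })),
  setUrl: (url) => set({ url }),
}))
```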
```
.
├─ ai/
│ ├─ constants.ts # Default model, supported models, sample prompts
│ ├─ gateway.ts # Vercel AI Gateway provider + model options
│ ├─ messages/
│ │ ├─ data-parts.ts # Zod schema for tool data payloads
│ │ └─ metadata.ts # Zod schema for message metadata
│ └─ tools/ # Tool implementations (used by the agent)
│ ├─ create-sandbox.ts # Creates a sandbox
│ ├─ generate-files.ts # Streams file generation and uploads to sandbox
│ ├─ get-sandbox-url.ts # Returns public URL for an exposed port
│ ├─ run-command.ts # Starts a command (detached)
│ ├─ wait-command.ts # Waits for a command to finish
│ └─ index.ts # Registers tools for streamText
│
├─ app/
│ ├─ api/
│ │ ├─ chat/
│ │ │ ├─ prompt.md # System prompt for the coding agent
│ │ │ └─ route.ts # Streams chat + tool calls
│ │ ├─ models/route.tsx # Filters available models via Gateway
│ │ └─ sandboxes/[...]/ # Sandbox helper endpoints (status, files, logs)
│ ├─ chat.tsx # Chat panel wiring (useChat, ModelSelector)
│ ├─ file-explorer.tsx # Connects store → File Explorer component
│ ├─ preview.tsx # Connects store → Preview component
│ ├─ logs.tsx # Logs panel container
│ ├─ actions.ts # Server actions (e.g., dismiss welcome)
│ ├─ state.ts # Zustand stores + data mapper for tool events
│ ├─ layout.tsx # Root layout, toasts, SandboxState watcher
│ └─ page.tsx # Tabs layout (Chat, Preview, Files, Logs)
│
├─ components/
│ ├─ chat/ # Message rendering and tool-part UIs
│ │ ├─ message.tsx
│ │ ├─ message-part/
│ │ │ ├─ create-sandbox.tsx
│ │ │ ├─ generate-files.tsx
│ │ │ ├─ get-sandbox-url.tsx
│ │ │ ├─ run-command.tsx
│ │ │ ├─ wait-command.tsx
│ │ │ └─ text.tsx, reasoning.tsx
│ │ └─ tool-message.tsx, tool-header.tsx, message-spinner.tsx
│ ├─ file-explorer/ # Sandbox filesystem browser
│ ├─ preview/ # In-app browser (iframe) with refresh
│ ├─ modals/ # Welcome + Sandbox status dialog
│ ├─ panels/ # Panel scaffolding
│ └─ ui/ # Shadcn-style primitives
│
├─ next.config.ts # MD loader, images, botid wrapper
├─ package.json # Scripts and dependencies
├─ tsconfig.json # TS config (paths, JSX, bundler resolution)
└─ public/               # Static assets
```
- User submits a message from `app/chat.tsx` via `useChat`
- The request hits `app/api/chat/route.ts`, which:
  - Validates bots with `checkBotId`
  - Fetches models from the Gateway and selects model options
  - Calls `streamText` with `system` from `app/api/chat/prompt.md`, `tools` from `ai/tools/index.ts`, and `stopWhen: stepCountIs(20)`
- As the model streams tokens and triggers tool calls, the server converts them to UI message parts and streams them back
- The client maps each incoming part to local state via `useDataStateMapper` in `app/state.ts`:
  - New sandbox id → store it
  - Uploaded file paths → update the file explorer
  - New command id → append to the logs list
  - New preview URL → update the preview
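For orientation, the route looks roughly like this (a simplified sketch, not the exact source; the `modelId` body field and the `.md` import via the loader configured in `next.config.ts` are assumptions):

```ts
// Simplified sketch of app/api/chat/route.ts
import { convertToModelMessages, stepCountIs, streamText, type UIMessage } from 'ai'
import { tools } from '@/ai/tools'
import prompt from './prompt.md'

export async function POST(req: Request) {
  const { messages, modelId }: { messages: UIMessage[]; modelId: string } = await req.json()
  const result = streamText({
    model: modelId,              // a gateway model id such as 'openai/gpt-5'
    system: prompt,
    messages: convertToModelMessages(messages),
    tools,                       // registered in ai/tools/index.ts
    stopWhen: stepCountIs(20),   // cap the agent at 20 steps per request
  })
  return result.toUIMessageStreamResponse()
}
```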
All tools live in `ai/tools/` and are registered in `ai/tools/index.ts`.

- `createSandbox`
  - Input: `{ timeout?: number, ports?: number[] }`
  - Output stream: `data-create-sandbox`
  - Creates a sandbox (via Vercel Sandbox) and streams a `sandboxId`
- `generateFiles`
  - Input: `{ sandboxId: string }`
  - Implementation detail: a nested `streamObject` call with the schema `{ files: Array<{ path: string, content: string }> }`
  - Streams progress: `generating` → `uploading` → `uploaded` → `done`
  - Uses `Sandbox.writeFiles` to upload files incrementally
- `runCommand`
  - Input: `{ sandboxId: string, command: string, args?: string[], sudo?: boolean }`
  - Starts a detached command in the Sandbox and streams `{ commandId }`
  - Important: each command runs in a fresh shell session → use absolute or relative paths; do not rely on `cd`
- `waitCommand`
  - Input: `{ sandboxId, commandId, command, args }`
  - Waits until completion and returns `exitCode`, `stdout`, `stderr`
- `getSandboxURL`
  - Input: `{ sandboxId: string, port: number }`
  - Returns a public URL for a port; the port must have been exposed when the sandbox was created

UI components under `components/chat/message-part/*` provide rich, per-tool inline status rendering (spinners, checkmarks, links, etc.).
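To make the tool shape concrete, here is an approximate sketch of `runCommand` using the AI SDK's `tool()` helper and the Sandbox SDK (simplified: the streaming of status data parts is omitted, and exact SDK field names may differ):

```ts
import { tool } from 'ai'
import { z } from 'zod'
import { Sandbox } from '@vercel/sandbox'

// Approximate shape; the real implementation also streams status to the UI.
export const runCommand = tool({
  description: 'Start a detached command inside an existing sandbox',
  inputSchema: z.object({
    sandboxId: z.string(),
    command: z.string(),
    args: z.array(z.string()).optional(),
    sudo: z.boolean().optional(),
  }),
  execute: async ({ sandboxId, command, args = [], sudo }) => {
    const sandbox = await Sandbox.get({ sandboxId })
    // detached: resolve immediately with a handle instead of waiting for exit
    const cmd = await sandbox.runCommand({ cmd: command, args, sudo, detached: true })
    return { commandId: cmd.cmdId } // field name approximate
  },
})
```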
- Models displayed in the UI come from `GET /api/models`, which calls the Gateway and filters by `SUPPORTED_MODELS` in `ai/constants.ts`.
- Default model: `DEFAULT_MODEL` (`openai/gpt-5`).
- Per-model behaviors (e.g., OpenAI reasoning options, Anthropic headers) are configured in `ai/gateway.ts#getModelOptions`.
- To add or remove a model from the dropdown, edit `SUPPORTED_MODELS`.
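For orientation, the constants file has roughly this shape (entries beyond `openai/gpt-5` are illustrative, not the repo's actual list):

```ts
// Illustrative shape of ai/constants.ts
export const DEFAULT_MODEL = 'openai/gpt-5'

export const SUPPORTED_MODELS: string[] = [
  'openai/gpt-5',
  'openai/o4-mini',
  'anthropic/claude-sonnet-4',
  'google/gemini-2.5-pro',
  // add or remove entries to change the model dropdown
]
```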
Environment
- Required: `AI_GATEWAY_BASE_URL` → the base URL for your Vercel AI Gateway
- Set it locally in your shell (do not commit `.env`). On Vercel, add it under Project Settings → Environment Variables.
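A minimal sketch of how `ai/gateway.ts` might wire the provider from that env var, assuming the `@ai-sdk/gateway` package (the repo's actual wiring, including `getModelOptions`, may differ):

```ts
import { createGateway } from '@ai-sdk/gateway'

// Provider instance resolved from the environment; never hard-code secrets.
export const gateway = createGateway({
  baseURL: process.env.AI_GATEWAY_BASE_URL,
})
```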
- State: `app/state.ts` defines a Zustand store for the sandbox session and a data mapper that reacts to streamed tool parts
- Panels: `components/panels` frames each area (Chat, Files, Logs, Preview)
- File Explorer: `components/file-explorer` builds a navigable tree from uploaded paths and fetches file content from `/api/sandboxes/:id/files?path=...`
- Logs: `components/commands-logs` streams NDJSON logs from `/api/sandboxes/:sandboxId/cmds/:cmdId/logs`
- Preview: `components/preview/preview.tsx` renders an iframe with refresh, open-in-new, and retry helpers
- `POST /api/chat` — primary streaming endpoint for chat + tools
- `GET /api/models` — list of allowed models (via the Gateway)
- `GET /api/sandboxes/:sandboxId` — ping to detect a stopped sandbox
- `GET /api/sandboxes/:sandboxId/files?path=...` — stream file content
- `GET /api/sandboxes/:sandboxId/cmds/:cmdId` — command status
- `GET /api/sandboxes/:sandboxId/cmds/:cmdId/logs` — NDJSON logs
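As an example of consuming the logs endpoint, a client can read the NDJSON stream line by line (an illustrative sketch; the exact shape of each record depends on the route's implementation):

```ts
// Tail command logs from the NDJSON endpoint, one JSON record per line.
async function tailLogs(sandboxId: string, cmdId: string) {
  const res = await fetch(`/api/sandboxes/${sandboxId}/cmds/${cmdId}/logs`)
  const reader = res.body!.pipeThrough(new TextDecoderStream()).getReader()
  let buffer = ''
  for (;;) {
    const { value, done } = await reader.read()
    if (done) break
    buffer += value
    const lines = buffer.split('\n')
    buffer = lines.pop() ?? '' // keep any partial trailing line for the next chunk
    for (const line of lines) {
      if (line.trim()) console.log(JSON.parse(line))
    }
  }
}
```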
- Change the system prompt
  - Edit `app/api/chat/prompt.md` (supports Markdown)
- Add/remove models
  - `ai/constants.ts` → `SUPPORTED_MODELS`
  - Optionally tweak behavior in `ai/gateway.ts#getModelOptions`
- Add a new tool
  - Create `ai/tools/my-tool.ts` using `tool({...})`
  - Register it in `ai/tools/index.ts`
  - Add a UI renderer in `components/chat/message-part/`
  - Update the `ai/messages/data-parts.ts` schema with your part shape (see the sketch after this list)
- Change the file explorer or logs UI
  - Edit `components/file-explorer/*` or `components/commands-logs/*`
- Adjust panels and layout
  - `app/page.tsx`, `components/panels/*`
- Theming/styles
  - Tailwind classes across components; global styles in `app/globals.css`
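For the data-parts step, the addition might look like this (a hypothetical part for a tool named `my-tool`; match the fields to whatever your tool actually streams):

```ts
// Hypothetical addition to ai/messages/data-parts.ts; field names are
// illustrative and should mirror your tool's streamed payload.
import { z } from 'zod'

export const myToolDataPart = z.object({
  status: z.enum(['loading', 'done', 'error']),
  output: z.string().optional(),
})
```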
- Install: `bun install`
- Run dev: `bun run dev` → http://localhost:3000
- Lint: `bun run lint`
- Build: `bun run build`
- Start: `bun run start`
If you change TypeScript config or aliases, restart the dev server.
- Set `AI_GATEWAY_BASE_URL` in Project → Settings → Environment Variables
- Enable Edge/Streaming as needed (default Next.js settings work)
- `next.config.ts` already configures MD loading and image remote patterns
- Use Conventional Commits for messages (e.g., `feat:`, `fix:`, `chore:`)
- No secrets in code; use environment variables only
- Avoid reading `.env*` files in code; pass required values via process env at runtime
- Each sandbox command runs in a fresh shell: use full or relative paths, and do not rely on `cd` or shell state between commands (see the sketch below)
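A quick illustration of the fresh-shell rule (the `cwd` option on `runCommand` is an assumption about the `@vercel/sandbox` API; verify against the SDK docs):

```ts
import { Sandbox } from '@vercel/sandbox'

// Each runCommand call is a fresh shell, so a prior `cd` is forgotten:
//   runCommand('cd app') then runCommand('npm install') will NOT run in app/
const sandbox = await Sandbox.get({ sandboxId: 'sbx_...' }) // id from createSandbox
await sandbox.runCommand({ cmd: 'npm', args: ['install'], cwd: 'app' }) // cwd assumed
await sandbox.runCommand({ cmd: 'npm', args: ['run', 'dev'], cwd: 'app', detached: true })
```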
- Models list is empty
  - Ensure `AI_GATEWAY_BASE_URL` is set and reachable
  - Your Gateway project must have the models you expect enabled
- Preview is blank
  - Confirm you exposed the correct port when creating the sandbox
  - Verify the app inside the sandbox is listening on that port
- Commands appear stuck
  - Use `waitCommand` after `runCommand` for dependent steps
  - Check the NDJSON logs at `/api/sandboxes/:id/cmds/:cmdId/logs`
- Why Vercel Sandbox?
  - It provides an isolated, ephemeral Linux environment for safe code execution, with streaming APIs and robust file/command primitives.
- Why the AI Gateway?
  - One endpoint for many providers, with consistent auth, observability, rate limiting, and advanced features like safety, caching, and fallbacks.
- Can I add my own UI around the agent?
  - Yes. The agent is just a streaming endpoint plus a toolbelt; you can build any front-end on top.
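For instance, a bare-bones client against `POST /api/chat` could be as small as this (a sketch using the AI SDK's React hook; styling, model selection, and error handling omitted):

```tsx
'use client'
// Minimal front-end over the streaming endpoint, using the AI SDK React hook.
import { useChat } from '@ai-sdk/react'

export function MiniChat() {
  const { messages, sendMessage } = useChat()
  return (
    <div>
      {messages.map((m) => (
        // Each message is a list of parts (text, reasoning, tool data, ...)
        <pre key={m.id}>{JSON.stringify(m.parts, null, 2)}</pre>
      ))}
      <button onClick={() => sendMessage({ text: 'Build me a todo app' })}>
        Send
      </button>
    </div>
  )
}
```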
You don’t need to be an expert to help. Small, focused changes are perfect.
- How to contribute
  - Fork the repo and create a branch, e.g. `feature/short-title`
  - Set up locally: `bun install` → `bun run dev`
  - Make a change (docs copy, UI polish, bug fix, feature)
  - Check it: `bun run lint`, and optionally `bun run build` (I don't always run this, but you should)
  - Open a PR describing the problem, what changed, and a before/after screenshot if it touches the UI
- Good first ideas
  - Improve README/docs (typos, clarity, examples)
  - Small UI polish (accessibility, spacing, labels)
  - Add a sample app template for file generation
  - Tweak the model list or defaults in `ai/constants.ts`
- Scope & style
  - Keep PRs small and single-purpose
  - No secrets in code; follow existing formatting
  - Aim for clear naming and accessible UI
If you’re unsure, open an issue with your idea. Happy to help scope it down.
- Model presets in the UI and per-model defaults
- Save/load chat + sandbox session state
- Export generated project as a downloadable repo/zip
- Example templates gallery for common stacks
- Better onboarding copy and accessibility passes
Got a suggestion? Open an issue or PR.
MIT – see LICENSE.