1 change: 1 addition & 0 deletions docs.json
@@ -45,6 +45,7 @@
{
  "group": "Use cases",
  "pages": [
    "docs/use-cases/vibe-coding",
    "docs/use-cases/computer-use",
    "docs/use-cases/ci-cd"
  ]
3 changes: 3 additions & 0 deletions docs.mdx
@@ -8,7 +8,7 @@

## What is E2B?

E2B provides isolated sandboxes that let agents safely execute code, process data, and run tools. Our SDKs make it easy to start and manage these environments.

Spin up a sandbox and run code in a few lines:

@@ -63,6 +63,9 @@
## Examples

<CardGroup cols={2}>
  <Card title="Vibe Coding" icon="wand-magic-sparkles" href="/docs/use-cases/vibe-coding">
    Build AI app generators that turn prompts into running web apps using E2B sandboxes for secure code execution.
  </Card>
  <Card title="Computer Use" icon="desktop" href="/docs/use-cases/computer-use">
    Build AI agents that see, understand, and control virtual Linux desktops using E2B Desktop sandboxes.
  </Card>
170 changes: 170 additions & 0 deletions docs/use-cases/vibe-coding.mdx
@@ -0,0 +1,170 @@
---
title: "Vibe Coding"
description: "Build AI-powered app generators that turn natural language into running code using E2B sandboxes for secure execution and live previews."
icon: "wand-magic-sparkles"
---

Vibe coding tools let users describe an app in plain language and get back running code instantly. Your app handles the LLM interaction and UI, then uses E2B sandboxes to prepare and serve the generated app. Since the generated code never runs on your infrastructure, it can't cause damage even if it's buggy or malicious.

For a complete working implementation, see [Fragments](https://github.com/e2b-dev/fragments) — an open-source vibe coding platform you can try via the [live demo](https://fragments.e2b.dev).

## Why E2B

- **Secure execution** — AI-generated code runs in isolated sandboxes, not on your servers
- **Live preview URLs** — each sandbox exposes a [public URL](/docs/sandbox/internet-access) you can embed in an iframe
- **Custom templates** — pre-install frameworks like Next.js, Streamlit, or Gradio so sandboxes start instantly via [templates](/docs/template/quickstart)
- **Multi-framework support** — same API whether the generated app is React, Vue, Python, or anything else

## Install the SDK

[Fragments](https://github.com/e2b-dev/fragments) uses the [E2B Code Interpreter SDK](https://github.com/e2b-dev/code-interpreter).

<CodeGroup>
```bash JavaScript & TypeScript
npm i @e2b/code-interpreter
```
```bash Python
pip install e2b-code-interpreter
```
</CodeGroup>

## Core Implementation

Your app orchestrates the flow from its own server — the sandbox is used purely to prepare and serve the generated code.

### Create a sandbox from a template

Each sandbox starts from a [template](/docs/template/quickstart) with the target framework pre-installed and a dev server already running. See the [Next.js template example](/docs/template/examples/nextjs).

<CodeGroup>
```typescript JavaScript & TypeScript
import { Sandbox } from '@e2b/code-interpreter'

const sandbox = await Sandbox.create('nextjs-app', {
  timeoutMs: 300_000,
})
```
```python Python
from e2b_code_interpreter import Sandbox

sandbox = Sandbox.create("nextjs-app", timeout=300)
```
</CodeGroup>

### Install dependencies and write code

Install any extra packages the LLM requested, then write the generated code to the sandbox [filesystem](/docs/filesystem/read-write).

<CodeGroup>
```typescript JavaScript & TypeScript
// Install additional packages requested by the LLM
await sandbox.commands.run('npm install recharts @radix-ui/react-icons')

// Write the generated code
await sandbox.files.write('/home/user/pages/index.tsx', generatedCode)
```
```python Python
# Install additional packages requested by the LLM
sandbox.commands.run("npm install recharts @radix-ui/react-icons")

# Write the generated code
sandbox.files.write("/home/user/pages/index.tsx", generated_code)
```
</CodeGroup>
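Note that the package list above comes from model output and is interpolated into a shell command, so it is worth validating the names first. A minimal sketch of such a guard — `safeInstallArgs` and its pattern are our own illustration, not part of the E2B SDK:

```typescript
// Hypothetical helper (not part of the E2B SDK): filter LLM-requested
// package names before interpolating them into `npm install`, so model
// output can't smuggle shell metacharacters into the sandbox shell.
// npm names are lowercase, optionally scoped (@scope/name), and drawn
// from a small character set, so a strict pattern is sufficient here.
const NPM_NAME = /^(@[a-z0-9][a-z0-9-._~]*\/)?[a-z0-9][a-z0-9-._~]*$/

function safeInstallArgs(requested: string[]): string[] {
  return requested.filter((name) => NPM_NAME.test(name))
}

// The shell-injection attempt is dropped; valid names pass through.
console.log(safeInstallArgs(['recharts', '@radix-ui/react-icons', 'x; rm -rf /']))
```

You would then run `npm install` with `safeInstallArgs(packages).join(' ')` instead of the raw model output.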

### Get the preview URL

The dev server picks up changes automatically. Retrieve the sandbox's [public URL](/docs/sandbox/internet-access) and embed it in your frontend.

<CodeGroup>
```typescript JavaScript & TypeScript
const host = sandbox.getHost(3000)
const previewUrl = `https://${host}`
// Embed previewUrl in an iframe for the user
```
```python Python
host = sandbox.get_host(3000)
preview_url = f"https://{host}"
# Embed preview_url in an iframe for the user
```
</CodeGroup>
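Since the framed content is untrusted AI-generated code, it is sensible to give the embedding iframe a restrictive `sandbox` attribute. A hypothetical frontend sketch — the helper names are ours, not part of the SDK, and the host string is a placeholder for what `getHost()` returns:

```typescript
// Hypothetical frontend helpers (not part of the E2B SDK): build the
// preview URL from the host returned by getHost() and render it in an
// iframe. The preview is served cross-origin from an E2B domain, so the
// sandbox attribute below still constrains the generated app.
function previewUrl(host: string): string {
  return `https://${host}`
}

function previewIframe(host: string): string {
  return (
    `<iframe src="${previewUrl(host)}" ` +
    `sandbox="allow-scripts allow-same-origin allow-forms" ` +
    `style="width:100%;height:600px;border:0"></iframe>`
  )
}

// 'example-sandbox-host' is a placeholder; pass sandbox.getHost(3000).
console.log(previewIframe('example-sandbox-host'))
```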

### Full example

A complete flow: LLM generates code, sandbox prepares and serves it. Simplified from [Fragments](https://github.com/e2b-dev/fragments).

<CodeGroup>
```typescript JavaScript & TypeScript expandable
import { Sandbox } from '@e2b/code-interpreter'
import OpenAI from 'openai'

// 1. Get code from the LLM
const openai = new OpenAI()
const response = await openai.chat.completions.create({
  model: 'gpt-5.2-mini',
  messages: [
    {
      role: 'system',
      content:
        'Generate a single Next.js page component using TypeScript and Tailwind CSS. Return only code, no markdown.',
    },
    { role: 'user', content: 'Build a calculator app' },
  ],
})
const generatedCode = response.choices[0].message.content

// 2. Create a sandbox and prepare the app
const sandbox = await Sandbox.create('nextjs-app', { timeoutMs: 300_000 })
await sandbox.files.write('/home/user/pages/index.tsx', generatedCode)

// 3. Return the preview URL
const previewUrl = `https://${sandbox.getHost(3000)}`
console.log('App is live at:', previewUrl)

// Later, when the user is done:
await sandbox.kill()
```
```python Python expandable
from e2b_code_interpreter import Sandbox
from openai import OpenAI

# 1. Get code from the LLM
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-5.2-mini",
    messages=[
        {
            "role": "system",
            "content": "Generate a single Next.js page component using TypeScript and Tailwind CSS. Return only code, no markdown.",
        },
        {"role": "user", "content": "Build a calculator app"},
    ],
)
generated_code = response.choices[0].message.content

# 2. Create a sandbox and prepare the app
sandbox = Sandbox.create("nextjs-app", timeout=300)
sandbox.files.write("/home/user/pages/index.tsx", generated_code)

# 3. Return the preview URL
preview_url = f"https://{sandbox.get_host(3000)}"
print("App is live at:", preview_url)

# Later, when the user is done:
sandbox.kill()
```
</CodeGroup>
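The system prompt asks for raw code, but models sometimes wrap their reply in a Markdown fence anyway. A small post-processing step before `files.write` guards against that — `stripCodeFence` is a hypothetical helper we sketch here, not something the SDK or Fragments provides:

```typescript
// Hypothetical post-processing helper: unwrap a single Markdown code
// fence from the model's reply, if present, before writing the file
// into the sandbox. Unfenced replies pass through unchanged.
function stripCodeFence(output: string): string {
  const trimmed = output.trim()
  const match = trimmed.match(/^```[\w-]*\n([\s\S]*?)\n?```$/)
  return match ? match[1] : trimmed
}

// A fenced reply is unwrapped; a plain reply is returned as-is.
console.log(stripCodeFence('```tsx\nexport default function Page() {}\n```'))
```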

## Related Guides

<CardGroup cols={3}>
  <Card title="Custom Templates" icon="cube" href="/docs/template/quickstart">
    Pre-install frameworks and tools so sandboxes start instantly
  </Card>
  <Card title="Connect LLMs" icon="brain" href="/docs/quickstart/connect-llms">
    Integrate AI models with sandboxes using tool calling
  </Card>
  <Card title="Internet Access" icon="globe" href="/docs/sandbox/internet-access">
    Access sandbox apps via public URLs and control network policies
  </Card>
</CardGroup>