An in-browser IDE built with React + TypeScript and powered by Monaco. It includes a streaming AI assistant with provider adapters for OpenAI, Anthropic, Google Gemini, and local Ollama, plus an apply-plan pipeline that safely turns AI output into real file edits.
🏷️ Tags / Keywords
- react, typescript, vite, monaco-editor, sse, jsonl, adapters, openai, anthropic, gemini, ollama
- streaming, apply-plan, normalize, redaction, telemetry, e2e, vitest, playwright, styled-components, radix-ui
- file explorer, tabs, editor bridge, vite proxy, devtools, tracing, flags, timeouts, resilience, fetch retry
- Overview and highlights
- Quickstart (Windows/PowerShell friendly)
- Providers, endpoints, streaming, and models
- Architecture and data flow
- Contracts (types and events)
- Configuration (env vars, flags, storage)
- Security, privacy, and telemetry
- Operations runbook (debugging/tracing/health)
- Performance characteristics and tuning
- Accessibility status
- Project layout
- Testing and quality gates
- Deployment options
- Roadmap and scope notes
- Contributing guidelines
- License
- Appendices (A–F)
Synapse IDE is a client-first, browser-based IDE shell featuring:
- Monaco editor wrapper with themes and keyboard integration
- AI assistant with unified streaming across providers
- Apply-plan pipeline to transform AI output into safe file edits
- Local-first storage (settings, telemetry buffer) via localStorage
- Strong developer ergonomics: TypeScript types, lint/format, unit + e2e tests
Highlights
- React 19 + Vite 6 + TS 5; fast HMR and code-split bundles
- Streaming adapters for: OpenAI (SSE), Anthropic (SSE), Gemini (simulated streaming), Ollama (JSONL)
- Dev proxies for `/openai` and `/ollama` to avoid browser CORS during development
- Feature flags for tracing and E2E; centralized timeouts for predictable UX
Demo video: `synapse.mp4`
Prerequisites
- Node.js 18+ recommended
Install

```bash
npm install
```

Run the dev server

```bash
npm run dev
```

Build and preview

```bash
npm run build
npm run preview
```

Type-check, lint, and tests

```bash
npm run type-check
npm run lint
npm run test
npm run e2e
```

Useful scripts

- `dev:trace` — start with tracing flag (`/?trace=1`)
- `e2e:ci` — line reporter for CI
- `format` / `format:check` — Prettier
- `release:*` — version + changelog helpers
Note: do not forget to extract `src.zip` before running.
Provider adapters: `src/services/ai/adapters/index.ts`

- OpenAI → endpoint: `openai_chat_completions` — transport: SSE
- Anthropic → endpoint: `anthropic_messages` — transport: SSE
- Google (Gemini) → endpoint: `gemini_generate` — transport: JSON; streaming simulated client-side
- Ollama (local) → endpoint: `ollama_generate` — transport: server JSONL streaming
Model registry: `src/utils/ai/models/registry.ts`

- OpenAI examples: `gpt-5`, `gpt-5-mini`, `gpt-5-nano`, `gpt-4o`, `gpt-4o-mini`, `chatgpt-4o-latest`, `gpt-4-turbo`, `gpt-4`, `gpt-3.5-turbo`
- Anthropic examples: `claude-4-{opus|sonnet|haiku}`, `claude-3.5-{sonnet|haiku|vision|opus}`, `claude-3-{opus|sonnet|haiku}`
- Google examples: `gemini-2.0-{pro-exp|flash-exp|flash-lite}`, `gemini-1.5-{pro(-latest)|flash|flash-8b}`, `gemini-pro`, `gemini-pro-vision`
- Ollama examples: `llama3.1` (and `:70b`), `llama3`, `llama2`, `codellama`, `mistral`, `mixtral`, `phi3`/`phi`, `deepseek-coder`, `qwen2.5-coder`, `starcoder2`
Notes

- Model metadata includes `supportsVision` and `supportsTools` flags; function/tool-calling is not wired in the current UI.
- The adapters normalize events to a common stream format: start → delta* → usage? → done | error.
The table below is generated from src/utils/ai/models/registry.ts. To refresh it after changing the registry, run the script described later in this README.
| Provider | Model | Vision | Tools | Cap Tokens | Endpoint |
|---|---|---|---|---|---|
| openai | gpt-5 | ✓ | | 200000 | openai_chat_completions |
| openai | gpt-5-mini | ✓ | | 128000 | openai_chat_completions |
| openai | gpt-5-nano | ✓ | | 64000 | openai_chat_completions |
| openai | gpt-4o | ✓ | ✓ | 128000 | openai_chat_completions |
| openai | gpt-4o-mini | ✓ | ✓ | 128000 | openai_chat_completions |
| openai | chatgpt-4o-latest | ✓ | ✓ | 128000 | openai_chat_completions |
| openai | gpt-4-turbo | ✓ | | 128000 | openai_chat_completions |
| openai | gpt-4 | ✓ | | 8192 | openai_chat_completions |
| openai | gpt-3.5-turbo | | | 4096 | openai_chat_completions |
| anthropic | claude-4-opus | ✓ | | 200000 | anthropic_messages |
| anthropic | claude-4-sonnet | ✓ | | 200000 | anthropic_messages |
| anthropic | claude-4-haiku | | | 200000 | anthropic_messages |
| anthropic | claude-3.5-opus-20241022 | ✓ | | 200000 | anthropic_messages |
| anthropic | claude-3.5-vision-20241022 | ✓ | ✓ | 200000 | anthropic_messages |
| anthropic | claude-3-5-sonnet-20241022 | ✓ | | 200000 | anthropic_messages |
| anthropic | claude-3-5-sonnet-20240620 | ✓ | | 200000 | anthropic_messages |
| anthropic | claude-3-5-haiku-20241022 | | | 200000 | anthropic_messages |
| anthropic | claude-3-opus-20240229 | | | 200000 | anthropic_messages |
| anthropic | claude-3-sonnet-20240229 | | | 200000 | anthropic_messages |
| anthropic | claude-3-haiku-20240307 | | | 200000 | anthropic_messages |
| google | gemini-2.0-pro-exp | ✓ | | 1000000 | gemini_generate |
| google | gemini-2.0-flash-lite | | | 1000000 | gemini_generate |
| google | gemini-2.0-flash-exp | | | 1000000 | gemini_generate |
| google | gemini-1.5-pro-latest | ✓ | | 1000000 | gemini_generate |
| google | gemini-1.5-pro | ✓ | | 1000000 | gemini_generate |
| google | gemini-1.5-flash | | | 1000000 | gemini_generate |
| google | gemini-1.5-flash-8b | | | 1000000 | gemini_generate |
| google | gemini-pro | | | 120000 | gemini_generate |
| google | gemini-pro-vision | ✓ | | 120000 | gemini_generate |
| ollama | llama3.1 | | | 32768 | ollama_generate |
| ollama | llama3.1:70b | | | 32768 | ollama_generate |
| ollama | llama3 | | | 8192 | ollama_generate |
| ollama | llama2 | | | 8192 | ollama_generate |
| ollama | codellama | | | 8192 | ollama_generate |
| ollama | codellama:13b | | | 8192 | ollama_generate |
| ollama | mistral | | | 32768 | ollama_generate |
| ollama | mixtral | | | 32768 | ollama_generate |
| ollama | phi3 | | | 4096 | ollama_generate |
| ollama | phi | | | 4096 | ollama_generate |
| ollama | deepseek-coder | | | 16384 | ollama_generate |
| ollama | deepseek-coder:33b | | | 16384 | ollama_generate |
| ollama | qwen2.5-coder | | | 32768 | ollama_generate |
| ollama | starcoder2 | | | 16384 | ollama_generate |
High-level graph
```mermaid
flowchart TD
  UI[Chat UI] --> HK[useAiStreaming]
  HK --> ADP[Provider Adapters]
  ADP -->|SSE/JSONL| HTTP[requestSSE / fetchWithRetry]
  ADP -->|delta events| UI
  UI --> NORM[normalizeOutput]
  NORM --> PLAN[buildApplyPlan]
  PLAN --> EXEC[executeApplyPlan]
  EXEC --> MONACO[Monaco Editor]
  EXEC --> FS[File Explorer Store]
```
Key modules

- Streaming: `src/services/ai/http.ts` (SSE with open timeout), adapters at `src/services/ai/adapters/index.ts`
- Models & validation: `src/utils/ai/models/{registry,schema,validator}.ts`
- Output normalization: `src/utils/ai/lang/{languageMap,normalizeOutput}.ts`
- Apply plan: `src/utils/ai/apply/{buildApplyPlan,executeApplyPlan}.ts`
- IDE shell/editor: `src/components/ide/EnhancedIDE.tsx`, `src/components/editor/MonacoEditor.tsx`
Editor bridge capabilities (observed in IDE/editor components)
- Insert at cursor, replace active file, open tab, set active tab
- Integration points with apply-plan and command palette
Streaming events (from `src/services/ai/adapters/types.ts`)

- `start` — adapter acknowledges request; provides `requestId`
- `delta` — token chunk with `text`
- `usage` — optional `{ prompt, completion }`
- `done` — end of stream (with optional `finishReason`)
- `error` — normalized `UnifiedError { code, provider?, status? }`
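For orientation, a minimal TypeScript sketch of how such a discriminated union could be modeled (shapes inferred from the list above; the canonical definitions live in `src/services/ai/adapters/types.ts`):

```ts
// Sketch only; the real types live in src/services/ai/adapters/types.ts.
type UnifiedErrorCode =
  | 'network' | 'timeout' | 'rate_limit' | 'auth' | 'permission'
  | 'content_blocked' | 'invalid_request' | 'server' | 'cancelled' | 'unknown';

interface UnifiedError {
  code: UnifiedErrorCode;
  provider?: string;
  status?: number;
  raw?: unknown;
}

// One event per stage of the stream: start → delta* → usage? → done | error.
type AdapterStreamEvent =
  | { type: 'start'; requestId: string }
  | { type: 'delta'; text: string }
  | { type: 'usage'; usage: { prompt: number; completion: number } }
  | { type: 'done'; finishReason?: string }
  | { type: 'error'; error: UnifiedError };
```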
Adapter API (summary)

- `stream({ requestId, signal, baseUrl?, apiKey?, options, messages, onEvent, timeoutMs? })`
- `complete({ baseUrl?, apiKey?, options, messages, signal?, timeoutMs? })` returns `{ text, usage?, finishReason? }`
Apply plan types (from `buildApplyPlan.ts` / `executeApplyPlan.ts`)

- Inputs: `{ rawAssistantText, selectedLanguageId, mode, defaultDir?, existingPaths }`
- Output plan: `{ mode, items: [{ path, action: 'create'|'replace', code, monaco, ext, exists }], warnings }`
- Execution API: `{ createFile, replaceFile, insertIntoActive?, setActiveTab?, pushUndoSnapshot }`
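As a TypeScript sketch assembled from the bullets above (interface names such as `ApplyPlanInput` and `ApplyExecutor` are illustrative; `buildApplyPlan.ts` / `executeApplyPlan.ts` remain the source of truth):

```ts
// Illustrative shapes derived from the summary above.
interface ApplyPlanInput {
  rawAssistantText: string;
  selectedLanguageId: string;
  mode: 'beginner' | 'pro';
  defaultDir?: string;
  existingPaths: string[];
}

interface ApplyPlanItem {
  path: string;
  action: 'create' | 'replace';
  code: string;
  monaco: string;   // Monaco language id
  ext: string;      // file extension
  exists: boolean;  // true if the path already exists in the explorer
}

interface ApplyPlan {
  mode: 'beginner' | 'pro';
  items: ApplyPlanItem[];
  warnings: string[];
}

// Editor-facing API the executor calls into.
interface ApplyExecutor {
  createFile(path: string, code: string): void;
  replaceFile(path: string, code: string): void;
  insertIntoActive?(code: string): void;
  setActiveTab?(path: string): void;
  pushUndoSnapshot(): void;
}
```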
Timeouts (from `src/config/timeouts.ts`)

- `sseOpenMs` (10s prod / 1s e2e), `idleMs`, `hardMs`, `retryBackoffMs`
Errors (from `src/services/ai/adapters/errors.ts`)

- `UnifiedErrorCode`: `network` | `timeout` | `rate_limit` | `auth` | `permission` | `content_blocked` | `invalid_request` | `server` | `cancelled` | `unknown`
- HTTP status mapping for `errorFromResponse` and `fromHttpError`
Vite dev server (`vite.config.ts`)

- Port: 3000; opens browser automatically
- Proxy `/ollama` → `http://localhost:11434` (path rewritten)
- Proxy `/openai` → `https://api.openai.com` (path rewritten)
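A sketch of what the proxy portion of `vite.config.ts` could look like, shown only to illustrate the rewrite behavior (the actual config may differ):

```ts
// vite.config.ts (excerpt, illustrative)
import { defineConfig } from 'vite';

export default defineConfig({
  server: {
    port: 3000,
    open: true,
    proxy: {
      // Browser calls /openai/v1/... and they are forwarded to https://api.openai.com/v1/...
      '/openai': {
        target: 'https://api.openai.com',
        changeOrigin: true,
        rewrite: (path) => path.replace(/^\/openai/, ''),
      },
      // Browser calls /ollama/api/... and they are forwarded to http://localhost:11434/api/...
      '/ollama': {
        target: 'http://localhost:11434',
        changeOrigin: true,
        rewrite: (path) => path.replace(/^\/ollama/, ''),
      },
    },
  },
});
```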
Environment variables (Vite)

- `VITE_AI_TELEMETRY_ENDPOINT` — dev telemetry endpoint (optional)
- `VITE_API_URL` — optional app API base; defaults handled in code
- `VITE_AI_TRACE` — `1` enables verbose adapter/HTTP tracing
- `VITE_E2E` — `1` enables e2e-mode timeouts and hooks
- `VITE_AUTORUN` — `1` auto-runs the test harness in preview
- `VITE_OPENAI_API_KEY` — dev convenience for seeding the key into settings
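An example `.env.local` for local development; all values are placeholders:

```bash
# .env.local (development only; never commit real keys)
VITE_AI_TRACE=1
VITE_E2E=0
# optional dev telemetry sink (placeholder URL)
VITE_AI_TELEMETRY_ENDPOINT=http://localhost:8787/telemetry
# dev convenience: seeds the OpenAI key into AI settings
VITE_OPENAI_API_KEY=sk-...
```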
AI settings store (`src/stores/aiSettings.schema.ts`)

- Storage key: `synapse.ai.settings.v2`
- Defaults: `provider=openai`, `model=gpt-4o`, `stream=true`, `timeoutMs=30000`
- Presets: `BEGINNER_DEFAULT`, `PRO_DEFAULT`
- Merging semantics via `mergeAiSettingsV2`
Flags (`src/config/flags.ts`, usage across code)

- `aiTrace` — env/query/localStorage
- `e2e` — env/query/localStorage
- `a11yEnabled` — experimental
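The env → query → localStorage precedence could be resolved roughly as below; `readFlag` and the storage key are illustrative, not the actual exports of `src/config/flags.ts`:

```ts
// Illustrative only: shows the env/query/localStorage precedence, not the real implementation.
function readFlag(name: string, envValue: string | undefined): boolean {
  if (envValue === '1') return true;                       // e.g. import.meta.env.VITE_AI_TRACE
  const qs = new URLSearchParams(window.location.search);
  if (qs.get(name) === '1') return true;                   // e.g. /?trace=1
  return localStorage.getItem(`synapse.flag.${name}`) === '1'; // hypothetical storage key
}

// Hypothetical usage:
// const aiTrace = readFlag('trace', import.meta.env.VITE_AI_TRACE);
```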
Context safety (from `src/utils/ai/context/sanitize.ts` and `src/utils/safety/redactor.ts`)

- Redacts `.env`-style lines (`KEY=VALUE` → `KEY=[REDACTED]`)
- Masks emails, phone numbers, IPv4 addresses
- Strips nested code fences in context slices; normalizes whitespace; masks injection phrases
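A sketch of the kind of transformations involved (regexes are illustrative; the real rules live in `src/utils/safety/redactor.ts`):

```ts
// Illustrative redaction pass, not the project's actual rules.
export function redact(text: string): string {
  return text
    // KEY=VALUE lines → KEY=[REDACTED]
    .replace(/^([A-Z0-9_]+)=.+$/gm, '$1=[REDACTED]')
    // email addresses
    .replace(/[\w.+-]+@[\w-]+\.[\w.]+/g, '[EMAIL]')
    // IPv4 addresses
    .replace(/\b(?:\d{1,3}\.){3}\d{1,3}\b/g, '[IP]');
}
```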
Telemetry (dev-only)

- Buffered to localStorage; endpoint configurable via `VITE_AI_TELEMETRY_ENDPOINT`
- No raw prompts or user code are sent by default (development convenience only)
Fast checks

- OpenAI in dev: confirm the `/openai` proxy is active; verify the API key
- Ollama: ensure the service is running on `http://localhost:11434` and the model is pulled; dev proxy `/ollama`
- Streaming doesn't start: enable tracing and check `[SSE][OPEN]` and adapter `[ADAPTER_CHUNK]` logs
Enable tracing

```bash
npm run dev:trace
# or add /?trace=1 to the URL
```

Inspecting usage and costs (where implemented)

- `useAiStreaming` sets usage via `setUsageAndMaybeCost()` when adapters emit `usage`
Common errors and mapping

- 401 → `auth`, 403 → `permission`, 404/409/422 → `invalid_request`, 408 → `timeout`, 429 → `rate_limit`, 5xx → `server`
Abort and late-token guards

- `useAiStreaming` ignores deltas after settle; buffers are flushed once on abort/error to avoid losing the last chunks
Bundles

- Rollup manual chunks split `vendor` and `monaco` for better caching (`vite.config.ts`)
Streaming
- Buffered rendering via requestAnimationFrame to avoid per-token re-renders
- Open-timeout for SSE to fail fast; JSONL incremental parsing for Ollama
- Gemini simulates streaming from a single JSON response for consistent UX
Timeouts

- Tuned via `src/config/timeouts.ts`; the e2e profile shortens them significantly
- Experimental `a11yEnabled` flag exists; not fully wired across all UI components
- Future work: focus management and keyboard-first command palette
- `src/components/ide/EnhancedIDE.tsx` — IDE shell and wiring
- `src/components/editor/MonacoEditor.tsx` — Monaco wrapper, themes, preview
- `src/services/ai/http.ts` — SSE client
- `src/services/ai/adapters/index.ts` — provider adapters (OpenAI/Anthropic/Gemini/Ollama)
- `src/utils/ai/models/*` — model registry, schema, validation
- `src/utils/ai/lang/*` — language map and output normalization
- `src/utils/ai/apply/*` — apply plan construction and execution
- `src/stores/**` — app/editor/file explorer/AI settings stores
- `e2e/**` — Playwright tests; `tests/**` — Vitest suites
- `github/src/**` — mirrored tree for deployment-oriented builds
Quality gates

- Type-check: `npm run type-check`
- Lint: `npm run lint`
- Unit tests: `npm run test`
- E2E tests: `npm run e2e`
CI reporter

- `npm run test:ci` writes JUnit output to `junit.xml`
- Docker/Docker Compose present: `Dockerfile`, `docker-compose.yml`
- Fly.io: `fly.toml`; Render: `render.yaml`; Kubernetes: `k8s/`
- Tool/function-calling: models carry `supportsTools` but the UI is not wired yet
- Provider health checks in the UI (esp. Gemini/Ollama)
- Accessibility coverage and keyboard-first workflows
- Optional unified server proxy for providers
- Keep public APIs stable; add/adjust tests if behavior changes
- Use ESLint/Prettier (`npm run lint`, `npm run lint:fix`)
- Follow module conventions for `utils/`, `stores/`, `services/`
Private; no license specified in package.json. Add a license before public distribution.
🔌 Appendix A — Provider matrix (transport, endpoint, keys)
Provider adapters: `src/services/ai/adapters/index.ts`

- OpenAI
  - Endpoint: `openai_chat_completions`
  - Transport: SSE
  - Dev proxy: `/openai` → `https://api.openai.com`
  - Key: `AiSettings.keys.openai` (seed via `VITE_OPENAI_API_KEY` in dev)
- Anthropic
  - Endpoint: `anthropic_messages`
  - Transport: SSE
  - Key: `AiSettings.keys.anthropic`
- Google (Gemini)
  - Endpoint: `gemini_generate`
  - Transport: JSON; streaming simulated from the full response
  - Key: `AiSettings.keys.google`
- Ollama
  - Endpoint: `ollama_generate`
  - Transport: server JSONL stream
  - Base URL: `AiSettings.keys.ollama` (default `http://localhost:11434`)
  - Dev proxy: `/ollama` → local service
📑 Appendix B — Models (selected, from registry)
See src/utils/ai/models/registry.ts. Examples include:
- OpenAI: `gpt-5`, `gpt-5-mini`, `gpt-5-nano`, `gpt-4o`, `gpt-4o-mini`, `chatgpt-4o-latest`, `gpt-4-turbo`, `gpt-4`, `gpt-3.5-turbo`
- Anthropic: `claude-4-{opus|sonnet|haiku}`, `claude-3.5-{sonnet|haiku|vision|opus}`, `claude-3-{opus|sonnet|haiku}`
- Google: `gemini-2.0-{pro-exp|flash-exp|flash-lite}`, `gemini-1.5-{pro(-latest)|flash|flash-8b}`, `gemini-pro`, `gemini-pro-vision`
- Ollama: `llama3.1`, `llama3.1:70b`, `llama3`, `llama2`, `codellama`, `mistral`, `mixtral`, `phi3`, `phi`, `deepseek-coder`, `qwen2.5-coder`, `starcoder2`
Each entry carries optional flags like supportsVision/supportsTools, and an endpoint (explicit or inferred by provider).
❗ Appendix C — Error codes and mapping
From src/services/ai/adapters/errors.ts and types.ts:
- Codes: `network`, `timeout`, `rate_limit`, `auth`, `permission`, `content_blocked`, `invalid_request`, `server`, `cancelled`, `unknown`
- HTTP → code mapping (example): 401 → `auth`, 403 → `permission`, 404/409/422 → `invalid_request`, 408 → `timeout`, 429 → `rate_limit`, 5xx → `server`
All adapter errors are normalized to UnifiedError { code, provider?, status?, raw? }.
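A sketch of that status-to-code mapping (illustrative; the real logic lives in `src/services/ai/adapters/errors.ts`):

```ts
// Illustrative mapping, mirroring the table above.
type Code =
  | 'auth' | 'permission' | 'timeout' | 'rate_limit'
  | 'invalid_request' | 'server' | 'unknown';

function codeFromStatus(status: number): Code {
  if (status === 401) return 'auth';
  if (status === 403) return 'permission';
  if (status === 408) return 'timeout';
  if (status === 429) return 'rate_limit';
  if (status === 404 || status === 409 || status === 422) return 'invalid_request';
  if (status >= 500) return 'server';
  return 'unknown';
}
```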
📶 Appendix D — Streaming hook behavior
From src/hooks/useAiStreaming.ts:
- Single-flight per `groupKey` with abort of the prior in-flight request
- Buffered delta handling via rAF; guards against late tokens after settle
- Tracing spans: `build_prompt`, `connect`, `stream`; usage attribution when provided
- Central `timeouts.sseOpenMs` applied to adapters for snappy connects (esp. in E2E)
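A simplified sketch of the rAF buffering and late-token guard described above (the hook's real state machine is more involved):

```ts
// Simplified illustration of rAF-buffered deltas with a late-token guard.
let settled = false;        // set true on done/error/abort
let pending = '';           // deltas accumulated since the last frame
let rafId: number | null = null;

function onDelta(text: string, render: (chunk: string) => void): void {
  if (settled) return;                    // ignore tokens arriving after settle
  pending += text;
  if (rafId === null) {
    rafId = requestAnimationFrame(() => {
      rafId = null;
      const chunk = pending;
      pending = '';
      render(chunk);                      // one UI update per frame, not per token
    });
  }
}

function settle(render: (chunk: string) => void): void {
  settled = true;
  if (pending) { render(pending); pending = ''; }  // flush once so the tail is not lost
  if (rafId !== null) { cancelAnimationFrame(rafId); rafId = null; }
}
```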
🧩 Appendix E — Apply plan pipeline
From src/utils/ai/apply/{buildApplyPlan,executeApplyPlan}.ts:
- Normalize assistant output → infer files and monaco language IDs
- Beginner mode: replace-existing vs create-new selection
- Pro mode: collision-safe renaming (`file-2.ts`), then create
- Execution API supports `insertIntoActive` (when desired) and undo snapshots
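The collision-safe rename could look roughly like this (illustrative helper, not the actual implementation):

```ts
// Illustrative: pick file-2.ts, file-3.ts, ... until the path is free.
function collisionSafePath(path: string, existing: Set<string>): string {
  if (!existing.has(path)) return path;
  const dot = path.lastIndexOf('.');
  const stem = dot === -1 ? path : path.slice(0, dot);
  const ext = dot === -1 ? '' : path.slice(dot);
  for (let n = 2; ; n++) {
    const candidate = `${stem}-${n}${ext}`;
    if (!existing.has(candidate)) return candidate;
  }
}
```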
🧪 Appendix F — Dev utilities and flags
- Flags: `aiTrace`, `e2e`, `a11yEnabled` (experimental)
- SSE client traces: `[SSE][INIT|OPEN|DONE|RETRY|OPEN_TIMEOUT]`
- Adapter traces: `[ADAPTER_REQUEST|FIRST_CHUNK|STREAM_END|CHUNK]`
- HTTP retry/backoff behavior centralized in `requestSSE` and `fetchWithRetry`
Source: src/stores/aiSettings.schema.ts
Interface AiSettings

- `mode`: `'beginner' | 'pro'`
- `provider`: `'openai' | 'anthropic' | 'google' | 'ollama'`
- `model`: `string` (model id)
- `languageId`: `string` (e.g., `'typescript'`)
- `temperature`: `number`
- `maxTokens`: `number`
- `stream`: `boolean`
- `timeoutMs`: `number`
- `safetyLevel?`: `'standard' | 'strict' | 'off'`
- `keys?`: `Partial<Record<ProviderKey, string>>`
- `activePresetId?`: `string`
- `usePresets?`: `boolean` (default false)
- `useWorkspaceContext?`: `boolean` (default false)

Defaults (`DEFAULT_AI_SETTINGS_V2`)

- `mode`: `beginner`
- `provider`: `openai`
- `model`: `gpt-4o`
- `languageId`: `typescript`
- `temperature`: `0.2`
- `maxTokens`: `2048`
- `stream`: `true`
- `timeoutMs`: `30000`
- `safetyLevel`: `standard`
- `keys`: `{ ollama: 'http://localhost:11434' }`
- `usePresets`: `false`
- `useWorkspaceContext`: `false`

Persistence (v2)

- Storage key: `synapse.ai.settings.v2`
- Helpers: `loadAiSettingsV2`, `saveAiSettingsV2`, `subscribeAiSettingsV2`, `mergeAiSettingsV2`

Presets

- `BEGINNER_DEFAULT` → `{ mode: 'beginner', temperature: 0.3, stream: true }`
- `PRO_DEFAULT` → `{ mode: 'pro', temperature: 0.2, stream: true }`
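Put together as TypeScript (reconstructed from the field list above; the canonical definition is in `src/stores/aiSettings.schema.ts`, and `ProviderKey` is assumed to cover the four providers):

```ts
// Reconstructed from the field list above; check the schema file for the real definition.
type ProviderKey = 'openai' | 'anthropic' | 'google' | 'ollama';

interface AiSettings {
  mode: 'beginner' | 'pro';
  provider: ProviderKey;
  model: string;
  languageId: string;
  temperature: number;
  maxTokens: number;
  stream: boolean;
  timeoutMs: number;
  safetyLevel?: 'standard' | 'strict' | 'off';
  keys?: Partial<Record<ProviderKey, string>>;
  activePresetId?: string;
  usePresets?: boolean;
  useWorkspaceContext?: boolean;
}

const DEFAULT_AI_SETTINGS_V2: AiSettings = {
  mode: 'beginner',
  provider: 'openai',
  model: 'gpt-4o',
  languageId: 'typescript',
  temperature: 0.2,
  maxTokens: 2048,
  stream: true,
  timeoutMs: 30000,
  safetyLevel: 'standard',
  keys: { ollama: 'http://localhost:11434' },
  usePresets: false,
  useWorkspaceContext: false,
};
```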
OpenAI (SSE)
- POST `/v1/chat/completions` with `{ model, messages, stream: true }`
- Parse SSE lines; on `choices[0].delta.content`, emit `delta`
- On `[DONE]` or `finish_reason`, emit `done`
- Usage tokens (if present) mapped to `{ prompt, completion }`
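A condensed sketch of that flow using `fetch` (the real adapter adds retries, tracing, abort wiring, and timeout handling):

```ts
// Illustrative OpenAI SSE consumption, simplified from the adapter described above.
async function streamOpenAI(apiKey: string, model: string, messages: unknown[],
                            onDelta: (text: string) => void): Promise<void> {
  const res = await fetch('/openai/v1/chat/completions', {   // dev proxy path
    method: 'POST',
    headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${apiKey}` },
    body: JSON.stringify({ model, messages, stream: true }),
  });
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split('\n');
    buffer = lines.pop() ?? '';                              // keep the partial last line
    for (const line of lines) {
      if (!line.startsWith('data: ')) continue;
      const payload = line.slice(6).trim();
      if (payload === '[DONE]') return;                      // end of stream
      const text = JSON.parse(payload).choices?.[0]?.delta?.content;
      if (text) onDelta(text);
    }
  }
}
```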
Anthropic (SSE)
- POST `/v1/messages` with a consolidated user message (system + user)
- Parse SSE lines; extract `delta.text` or `content_block_delta.text`
- Emit `delta`; on stream end or `[DONE]`, emit `done`
Gemini (JSON → simulated)
- POST `models/{model}:generateContent`
- Extract the full text from `candidates[0].content.parts[0].text`
- Split into ~60-char chunks and emit `delta` with small delays
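The simulated streaming is essentially chunked emission of the complete response; a sketch (chunk size and delay are illustrative):

```ts
// Illustrative: turn a full Gemini response into delta events at a steady cadence.
async function simulateStreaming(fullText: string, onDelta: (text: string) => void,
                                 chunkSize = 60, delayMs = 15): Promise<void> {
  for (let i = 0; i < fullText.length; i += chunkSize) {
    onDelta(fullText.slice(i, i + chunkSize));
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
}
```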
Ollama (JSONL)
- POST `/api/generate` with `{ model, prompt, stream: true }`
- Read the stream via `ReadableStream.getReader()`; decode line by line
- Each JSON line's `response` is a `delta`; when `json.done`, emit `done`
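A sketch of the JSONL loop (simplified; error handling and abort wiring omitted):

```ts
// Illustrative Ollama JSONL streaming; one JSON object per line.
async function streamOllama(model: string, prompt: string,
                            onDelta: (text: string) => void): Promise<void> {
  const res = await fetch('/ollama/api/generate', {          // dev proxy path
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model, prompt, stream: true }),
  });
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split('\n');
    buffer = lines.pop() ?? '';
    for (const line of lines) {
      if (!line.trim()) continue;
      const json = JSON.parse(line);
      if (json.response) onDelta(json.response);             // each line carries a delta
      if (json.done) return;                                 // final line signals completion
    }
  }
}
```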
Source: src/services/ai/http.ts
- Supports JSON requests with retry/backoff and SSE with open-timeout
- Trace tags (dev): `[HTTP][INIT|ERROR_STATUS|RETRY|DONE|FAIL]`, `[SSE][INIT|ERROR_STATUS|RETRY|OPEN_TIMEOUT|OPEN|DONE]`
- `openTimeoutMs` aborts connections that don't open in time
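The open-timeout can be built from an `AbortController` that fires if headers have not arrived in time; a simplified sketch, not the project's exact code:

```ts
// Illustrative open-timeout: abort the request if the connection doesn't open in time.
async function fetchWithOpenTimeout(url: string, init: RequestInit,
                                    openTimeoutMs: number): Promise<Response> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), openTimeoutMs);
  try {
    // The timeout only covers connection/headers; body streaming continues afterwards.
    return await fetch(url, { ...init, signal: controller.signal });
  } finally {
    clearTimeout(timer);
  }
}
```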
Source: vite.config.ts
- `/ollama` → `http://localhost:11434`; rewrite strips the `^/ollama` prefix (keeps `/api/*`)
- `/openai` → `https://api.openai.com`; rewrite strips the `^/openai` prefix (keeps `/v1/*`)
Sources: src/utils/ai/context/sanitize.ts, src/utils/safety/redactor.ts
- Redact `.env`-style `KEY=VALUE` lines → `KEY=[REDACTED]`
- Mask PII: emails, phone numbers, IPv4 addresses
- Strip nested fence openers; normalize whitespace
- Mask likely injection phrases with neutral markers
Sources: src/utils/ai/lang/normalizeOutput.ts, src/utils/ai/lang/languageMap.ts, src/utils/ai/apply/*
- Normalize AI output into `{ files: [{ path, code, monaco, ext }], warnings }`
- Beginner mode prefers replace-existing; Pro mode prefers create with collision-safe names
- Execute via the editor API: `createFile`, `replaceFile`, optional `insertIntoActive`, `pushUndoSnapshot`
Sources: src/services/ai/adapters/errors.ts, types.ts
- Normalized `UnifiedError` with `code`, optional `provider` and `status`
- Typical mappings: auth/permission/rate_limit/timeout/server/invalid_request
- Use dev proxy for OpenAI/Ollama to avoid CORS and reduce latency
- Keep `aiTrace` off in production-like runs to minimize console overhead
- Prefer Pro mode for multi-file code generation to avoid destructive replaces
Files: playwright.config.ts, e2e/chat.spec.ts and tests/*
- E2E focuses on chat/streaming flows; `VITE_E2E=1` shortens timeouts
- Unit tests target normalization, guards, and stores
Scripts from package.json
- `release:version` — bump the package version
- `release:changelog` — generate the changelog
- `release` — run bump + changelog + build
- SSE — Server-Sent Events (one-way streaming from server)
- JSONL — JSON Lines (newline-delimited JSON objects)
- rAF — requestAnimationFrame (browser frame callback)
- E2E — End-to-End tests (Playwright)
- HMR — Hot Module Replacement (dev server live updates)
- ide, in-browser, monaco, react, typescript, vite, zustand, styled-components, radix-ui, lucide-react
- sse, jsonl, fetch, retry, adapters, provider, endpoint, gemini, anthropic, openai, ollama
- streaming, apply-plan, normalization, redaction, telemetry, tracing, e2e, vitest, playwright
- file-explorer, tabs, editor-bridge, vite-proxy, timeouts, resilience, safety, sanitize, usage
Core
- `start` → `vite`
- `dev` → `vite`
- `dev:trace` → `vite --open '/?trace=1'`
- `dev:safe` → `vite --host 0.0.0.0 --port 3000`
- `dev:cmd` → `vite`
Build & type-check
- `build` → `tsc -b && vite build`
- `typecheck` → `tsc -p tsconfig.json --noEmit`
- `type-check` → `tsc --noEmit`
Testing
- `test` → `vitest run`
- `test:ci` → `vitest run --reporter=junit --outputFile=junit.xml`
- `e2e` → `playwright test`
- `e2e:ci` → `playwright test --reporter=line`
Linting & formatting
- `lint` → `eslint . --ext ts,tsx --report-unused-disable-directives`
- `lint:fix` → `eslint . --ext ts,tsx --fix`
- `format` → `prettier --write "src/**/*.{ts,tsx,js,jsx,json,css,md}"`
- `format:check` → `prettier --check "src/**/*.{ts,tsx,js,jsx,json,css,md}"`
Preview & eval
- `test:dev` → `vite --open http://localhost:5173?tests=1&autorun=1`
- `test:preview` → `vite preview --open http://localhost:4173?tests=1&autorun=1`
- `preview` → `vite preview`
- `clean` → `rimraf dist`
- `build:eval` → `tsc -p tsconfig.eval.json`
- `eval` / `eval:record` / `eval:ci` → run evaluation scripts under `scripts/eval`
Releases
- `release:changelog`, `release:version`, `release:tag`, `release`
- `@` → `./src`
- `@/components` → `./src/components`
- `@/hooks` → `./src/hooks`
- `@/utils` → `./src/utils`
- `@/types` → `./src/types`
- `@/styles` → `./src/styles`
- `@/contexts` → `./src/contexts`
- `@/pages` → `./src/pages`
- `@/assets` → `./src/assets`
Top-level
- `src/` — application source
- `public/` — static assets
- `tests/` — Vitest suites
- `e2e/` — Playwright specs and utilities
- `k8s/` — deployment manifests
- `scripts/` — CI, eval, release scripts
- `github/` — mirrored build with `src/` subtree
Testing artifacts
- `playwright-report/`, `test-results/`, `junit.xml`
- `e2e/utils/sse.ts` — helpers for SSE in E2E contexts
- `e2e/chat.spec.ts` — streaming chat spec
- `eslint.config.js` — ESLint configuration
- `vitest.config.{ts|mjs}` — Vitest configuration
- `playwright.config.ts` — Playwright configuration
- `postcss.config.js` — PostCSS, used with the Tailwind configuration present
- `tsconfig*.json` — TypeScript project configs (app, node, eval)
- `build.outDir` → `dist`
- `sourcemap: true` for production builds
- `chunkSizeWarningLimit: 1200`
- `rollupOptions.output.manualChunks` → `{ vendor: [...], monaco: [...] }`
- Dev server: `http://localhost:3000`
- OpenAI proxy: `/openai` (rewritten to `https://api.openai.com`)
- Ollama proxy: `/ollama` (rewritten to `http://localhost:11434`)
- `styled-components` for theming and component styles
- `@radix-ui/react-*` for primitives (dialog, dropdown, tabs, toast)
- `lucide-react` for icons
- `framer-motion` for motion primitives