<img src="./assets/openui.png" width="150" alt="OpenUI" />
</p>

<p align="center">
<strong>AI-powered UI component generation: describe it, see it rendered, and iterate in real time</strong>
</p>

Building UI components can be a slog. OpenUI aims to make the process fun, fast, and flexible. It's also a tool we're using at [W&B](https://wandb.com) to test and prototype our next-generation tooling for building powerful applications on top of LLMs.

## Table of Contents

- [Features](#features)
- [Demo](#demo)
- [Quick Start](#quick-start)
- [Installation](#installation)
- [Docker (Recommended)](#docker-recommended)
- [From Source](#from-source--python)
- [Supported Models](#supported-models)
- [Configuration](#configuration)
- [Development](#development)
- [Contributing](#contributing)
- [Troubleshooting](#troubleshooting)

## Features

- **Natural Language UI Generation**: Describe components in plain English and see them rendered instantly
- **Multiple Framework Support**: Convert between HTML, React, Svelte, Web Components, and more
- **Real-time Iteration**: Ask for changes and refinements with live preview
- **Multi-Model Support**: Works with OpenAI, Anthropic, Groq, Gemini, and any LiteLLM-compatible model
- **Local Model Support**: Run completely offline with Ollama
- **Live Demo Available**: Try it immediately at [openui.fly.dev](https://openui.fly.dev)

## Demo

![Demo](./assets/demo.gif)

OpenUI lets you describe UI using your imagination, then see it rendered live. You can ask for changes and convert HTML to React, Svelte, Web Components, etc. It's like [v0](https://v0.dev) but open source and not as polished :stuck_out_tongue_closed_eyes:.

## Quick Start

🚀 **Try it now**: [Live Demo](https://openui.fly.dev)

🐳 **Run locally with Docker**:
```bash
docker run --rm -p 7878:7878 -e OPENAI_API_KEY=your_key_here ghcr.io/wandb/openui
```

Then visit [http://localhost:7878](http://localhost:7878)

## Installation

### Prerequisites

- **For Docker**: Docker installed on your system
- **For Source**: Python 3.8+, git, and [uv](https://github.com/astral-sh/uv)
- **API Keys**: At least one LLM provider API key (OpenAI, Anthropic, etc.) or Ollama for local models

## Supported Models

OpenUI supports multiple LLM providers through direct APIs and [LiteLLM](https://docs.litellm.ai/docs/). Set the appropriate environment variables for your chosen provider:

| Provider | Environment Variable | Get API Key |
|----------|---------------------|-------------|
| **OpenAI** | `OPENAI_API_KEY` | [platform.openai.com](https://platform.openai.com/api-keys) |
| **Anthropic** | `ANTHROPIC_API_KEY` | [console.anthropic.com](https://console.anthropic.com/settings/keys) |
| **Groq** | `GROQ_API_KEY` | [console.groq.com](https://console.groq.com/keys) |
| **Gemini** | `GEMINI_API_KEY` | [aistudio.google.com](https://aistudio.google.com/app/apikey) |
| **Cohere** | `COHERE_API_KEY` | [cohere.com](https://cohere.com) |
| **Mistral** | `MISTRAL_API_KEY` | [mistral.ai](https://mistral.ai) |
| **OpenAI Compatible** | `OPENAI_COMPATIBLE_ENDPOINT`<br>`OPENAI_COMPATIBLE_API_KEY` | For tools like [LocalAI](https://localai.io/) |
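
For the OpenAI-compatible option, here is a minimal sketch (assuming a LocalAI-style server listening on port 8080; the key may be optional depending on your server):

```bash
# Point OpenUI at an OpenAI-compatible server (endpoint and key are placeholders)
export OPENAI_COMPATIBLE_ENDPOINT=http://localhost:8080/v1
export OPENAI_COMPATIBLE_API_KEY=sk-placeholder
```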

### Local Models with Ollama

For completely offline usage, install [Ollama](https://ollama.com/download) and pull a model:

```bash
# Start the Ollama server (install it first from ollama.com/download)
ollama serve

# Pull a vision-capable model
ollama pull llava
```

**Docker configuration**: Set `OLLAMA_HOST=http://host.docker.internal:11434` when running OpenUI in Docker.
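
For example, a minimal local-only sketch (no cloud API keys, assuming Ollama is already serving on its default port on the host):

```bash
# OpenUI in Docker talking to Ollama running on the host machine
docker run --rm -p 7878:7878 \
  -e OLLAMA_HOST=http://host.docker.internal:11434 \
  ghcr.io/wandb/openui
```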

### Docker (Recommended)

**Basic usage**:
```bash
docker run --rm -p 7878:7878 -e OPENAI_API_KEY=your_key_here ghcr.io/wandb/openui
```

**With multiple providers and Ollama**:
```bash
export ANTHROPIC_API_KEY=your_anthropic_key
export OPENAI_API_KEY=your_openai_key
docker run --rm --name openui -p 7878:7878 \
-e OPENAI_API_KEY \
-e ANTHROPIC_API_KEY \
-e OLLAMA_HOST=http://host.docker.internal:11434 \
ghcr.io/wandb/openui
```

Open [http://localhost:7878](http://localhost:7878) and start generating UIs!

### From Source / Python

**Requirements**: Python 3.8+, git, and [uv](https://github.com/astral-sh/uv)

```bash
# Clone the repository
git clone https://github.com/wandb/openui
cd openui/backend

# Install dependencies
uv sync --frozen --extra litellm
source .venv/bin/activate

# Set API keys for your chosen LLM providers
export OPENAI_API_KEY=your_openai_key
export ANTHROPIC_API_KEY=your_anthropic_key

# Start the server
python -m openui
```

Visit [http://localhost:7878](http://localhost:7878) to use OpenUI.

## Configuration

### Custom LiteLLM Configuration

OpenUI automatically generates a LiteLLM config from your environment variables. For advanced configurations, you can provide a custom [proxy config](https://litellm.vercel.app/docs/proxy/configs).

**Config file locations** (in order of preference):
1. `litellm-config.yaml` in the current directory
2. `/app/litellm-config.yaml` (when running in Docker)
3. Path specified by `OPENUI_LITELLM_CONFIG` environment variable
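
As a rough sketch, a custom config could register a single Ollama-backed model (the model name below is only an example; see the LiteLLM proxy docs for the full schema):

```bash
# Hypothetical minimal litellm-config.yaml written from the shell
cat > litellm-config.yaml <<'EOF'
model_list:
  - model_name: llava
    litellm_params:
      model: ollama/llava
EOF
```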

**Docker example**:
```bash
docker run -p 7878:7878 \
-v $(pwd)/litellm-config.yaml:/app/litellm-config.yaml \
ghcr.io/wandb/openui
```

**Source example**:
```bash
pip install .[litellm]
export ANTHROPIC_API_KEY=your_key
export OPENAI_COMPATIBLE_ENDPOINT=http://localhost:8080/v1
python -m openui --litellm
```

### Model Selection

Once running, click the settings icon (⚙️) in the navigation bar to:
- Switch between available models
- Configure model-specific settings
- View usage statistics

### Docker Compose

> NOTE: Other (local) models might also be used with a bigger Gitpod instance type. Required models are not preconfigured in Gitpod but can easily be added as documented above.

### Development Resources

For detailed development information, see the component-specific documentation:
- [Frontend README](./frontend/README.md) - React/TypeScript frontend
- [Backend README](./backend/README.md) - Python FastAPI backend

## Contributing

We welcome contributions! Here's how to get started:

### Development Setup

1. **Fork and clone** the repository
2. **Choose your development environment**:
- [Dev Container](.devcontainer/devcontainer.json) (recommended)
- [GitHub Codespaces](https://github.com/wandb/openui)
- [Gitpod](https://gitpod.io/#https://github.com/wandb/openui)
- Local development

### Making Changes

1. **Backend development**:
```bash
cd backend
uv sync --frozen --extra litellm
source .venv/bin/activate
python -m openui --dev
```

2. **Frontend development** (in a new terminal):
```bash
cd frontend
npm install
npm run dev
```

3. **Access the development server** at [http://localhost:5173](http://localhost:5173)

### Pull Request Process

1. Create a feature branch from `main`
2. Make your changes with clear, focused commits
3. Add tests for new functionality
4. Ensure all tests pass
5. Update documentation as needed
6. Submit a pull request with a clear description

## Troubleshooting

### Common Issues

**🔑 API Key Issues**
- Verify your API key is correctly set: `echo $OPENAI_API_KEY`
- Check that the key has sufficient credits/permissions
- Ensure the key is for the correct service (OpenAI vs Anthropic, etc.)

**🐳 Docker Issues**
- **Port conflicts**: Change the port mapping, e.g. `-p 8080:7878` (see the example after this list)
- **Permission errors**: Add `--user $(id -u):$(id -g)` to docker run
- **Ollama connection**: Use `host.docker.internal:11434` not `localhost:11434`
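
For example, a hedged sketch of remapping the host port when 7878 is already taken (host port 8080 is arbitrary):

```bash
# Serve OpenUI on host port 8080; the container still listens on 7878
docker run --rm -p 8080:7878 -e OPENAI_API_KEY ghcr.io/wandb/openui
```

Then open [http://localhost:8080](http://localhost:8080) instead.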

**🔧 Installation Issues**
- **uv not found**: Install with `curl -LsSf https://astral.sh/uv/install.sh | sh`
- **Python version**: Requires Python 3.8+; check with `python --version`
- **Git LFS**: Some assets require Git LFS: `git lfs pull`

**🖥️ Local Development**
- **Hot reload not working**: Check that both frontend (port 5173) and backend (port 7878) are running
- **Model not available**: Verify API keys and check the settings gear icon
- **Slow generation**: Try switching to a faster model like Groq

### Getting Help

- **Documentation**: Check the [frontend](./frontend/README.md) and [backend](./backend/README.md) READMEs
- **Issues**: Report bugs on [GitHub Issues](https://github.com/wandb/openui/issues)
- **Discussions**: Join the conversation in [GitHub Discussions](https://github.com/wandb/openui/discussions)

---

<p align="center">
Made with ❤️ by the <a href="https://wandb.com">Weights & Biases</a> team
</p>