A modern, feature-rich chat interface supporting multiple AI providers with advanced reasoning capabilities
Features • Quick Start • Usage • Development • Contributing
Open Claude is a powerful, open-source chat interface inspired by Anthropic's Claude. It provides a clean, intuitive UI for interacting with multiple Large Language Models (LLMs) including OpenAI GPT models, Google Gemini, Groq, and local models via Ollama or LM Studio.
- Deep Research Scientist Mode: Autonomous iterative research loop for professional multi-page reports
- Advanced Thinking UI: Collapsible thinking/reasoning display showing the AI's thought process
- Premium Design: Modern, responsive interface with dark/light mode
- Multi-Provider: Switch between OpenAI, Groq, Gemini, Ollama, and LM Studio
- Intelligent Web Search: Integrated Tavily search with dynamic tool selection
- Artifacts: Preview HTML, React, SVG, and research papers in a dedicated panel
- Chat History: Multiple conversations with persistent local storage
- Markdown Support: Full GitHub-flavored markdown with syntax highlighting and automatic citations
A groundbreaking feature that displays the AI's internal reasoning process:
- Collapsible Sections: View or hide the model's thought process
- Token Tracking: See how many tokens were used for thinking
- Duration Display: Track how long the model spent reasoning
- Streaming Support: Watch the AI think in real-time
- Premium Design: Beautiful gradient UI with smooth animations
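Under the hood, the token and duration readouts can be driven by a small accumulator fed from the streaming handler. The sketch below is illustrative only; `ThinkingStats` and its 4-characters-per-token estimate are assumptions, not the app's actual implementation:

```javascript
// Accumulates the model's streamed "thinking" output and derives the
// stats the UI shows: an approximate token count and elapsed duration.
class ThinkingStats {
  constructor(now = Date.now) {
    this.now = now; // injectable clock, so the logic is testable
    this.text = "";
    this.startedAt = null;
    this.endedAt = null;
  }

  // Called once per streamed reasoning chunk.
  addChunk(chunk) {
    if (this.startedAt === null) this.startedAt = this.now();
    this.text += chunk;
    this.endedAt = this.now();
  }

  // Rough heuristic (~4 characters per token); a real count would
  // come from the provider's usage metadata or a tokenizer.
  get tokenCount() {
    return Math.ceil(this.text.length / 4);
  }

  get durationMs() {
    return this.startedAt === null ? 0 : this.endedAt - this.startedAt;
  }
}
```

The UI can then render `tokenCount` and `durationMs` live while chunks stream in, and freeze them once the reasoning section closes.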
| Provider | Typical Models | Hosted | Notes |
|---|---|---|---|
| OpenAI | GPT-5.2, GPT-5, GPT-4.x | Cloud | Best general purpose + tools |
| Google Gemini | Gemini 3 Pro / Flash, 2.5 Flash / Pro | Cloud | Excellent multimodal, long context |
| Groq | llama3, mixtral, gemma, openai/gpt-oss | Cloud / optimized servers | Ultra-fast inference |
| Ollama | LLaMA, Gemma, Qwen, Mistral, Mixtral, CodeLlama | Local | Self-hosted open models |
| LM Studio | Any local GGUF model | Local | Local hosting with OpenAI-style API |
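Several of these providers speak an OpenAI-compatible API, which is what makes switching between them cheap: one OpenAI-style client can serve multiple providers by swapping the base URL. A minimal sketch of the defaults (the cloud base URLs are the providers' publicly documented ones, and the local ports match this README; the shape of the config object is an assumption, not the app's actual code):

```javascript
// Default endpoints per provider. Groq and LM Studio expose
// OpenAI-compatible APIs; Ollama and Gemini use their own schemas.
const PROVIDER_ENDPOINTS = {
  openai:   { baseUrl: "https://api.openai.com/v1",                  openaiCompatible: true,  local: false },
  groq:     { baseUrl: "https://api.groq.com/openai/v1",             openaiCompatible: true,  local: false },
  gemini:   { baseUrl: "https://generativelanguage.googleapis.com",  openaiCompatible: false, local: false },
  ollama:   { baseUrl: "http://localhost:11434",                     openaiCompatible: false, local: true  },
  lmstudio: { baseUrl: "http://localhost:1234/v1",                   openaiCompatible: true,  local: true  },
};

// Local providers need no API key.
function requiresApiKey(provider) {
  return !PROVIDER_ENDPOINTS[provider].local;
}
```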
- Clean Design: Inspired by Claude's minimalist aesthetic
- Dark/Light Mode: Automatic theme switching based on system preferences
- Responsive: Works seamlessly on desktop and mobile
- Syntax Highlighting: Beautiful code blocks with react-syntax-highlighter
- Artifact Panel: Side-by-side code preview for HTML, React, and SVG
- Powered by Tavily API
- Real-time web search results
- Context-aware responses with up-to-date information
Before you begin, ensure you have Node.js (which includes npm) installed.
- Clone the repository

  ```bash
  git clone https://github.com/Damienchakma/Open-claude.git
  cd Open-claude
  ```

- Install dependencies

  ```bash
  npm install
  ```

- Start the development server

  ```bash
  npm run dev
  ```

- Open in your browser

  Navigate to http://localhost:5173
That's it! The application is now running.
Open Claude requires API keys for cloud providers. Here's how to get them:
- Visit OpenAI Platform
- Sign up or log in
- Go to API Keys section
- Click Create new secret key
- Copy and save the key (you won't see it again!)
- Visit Google AI Studio
- Sign in with your Google account
- Click Create API Key
- Copy the generated key
- Visit Groq Console
- Create an account or log in
- Navigate to API Keys
- Generate a new API key
- Visit Tavily
- Sign up for an account
- Get your API key from the dashboard
- Click the Settings icon (gear) in the sidebar
- Enter your API keys in the respective fields
- Click Save
- Your keys are stored locally in your browser (never sent to any server)
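Key storage can be sketched as a thin wrapper over `localStorage`. The storage backend is injected here so the logic also runs outside a browser (pass `window.localStorage` in the app); the `open-claude-settings` key name is illustrative, not necessarily what the app uses:

```javascript
// Keys are serialized straight into browser storage and never leave
// the machine. "storage" is any object with getItem/setItem.
const SETTINGS_KEY = "open-claude-settings";

function saveApiKeys(keys, storage) {
  storage.setItem(SETTINGS_KEY, JSON.stringify(keys));
}

function loadApiKeys(storage) {
  const raw = storage.getItem(SETTINGS_KEY);
  return raw ? JSON.parse(raw) : {};
}
```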
- Install Ollama from ollama.ai
- Pull a model:
  ```bash
  ollama pull llama2
  ```

- Ollama runs on `localhost:11434` by default
- Select Ollama in the model dropdown
- Install LM Studio
- Download and load a model
- Start the local server (port 1234)
- Select LM Studio in the model dropdown
- Select a provider and model from the dropdown
- Type your message in the input field
- Press Enter or click Send
- View the AI's response
- Click the Globe icon to enable web search
- Your queries will include real-time web results
- The AI will provide context-aware answers with current information
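For reference, a search call can be composed roughly like this (no network request is made here; the field names follow Tavily's public search API, but check their documentation for the current schema before relying on this):

```javascript
// Builds the URL and fetch options for a Tavily search request.
// Separating composition from the network call keeps it testable.
function buildTavilyRequest(apiKey, query, maxResults = 5) {
  return {
    url: "https://api.tavily.com/search",
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({ query, max_results: maxResults }),
    },
  };
}
```

In the app, the result would be passed to `fetch(req.url, req.options)` and the JSON results appended to the model's context.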
When the AI generates code (HTML, React, SVG):
- An artifact card appears in the chat
- Click the card to open the preview panel
- Toggle between Preview and Code views
- Copy code with the copy button
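Artifact detection can be approximated by scanning the model's markdown response for previewable fenced code blocks. This is a simplified stand-in for the app's `BoltArtifactParser`, not its actual logic:

```javascript
// Languages the artifact panel can render as a live preview.
const PREVIEWABLE = new Set(["html", "jsx", "tsx", "svg", "react"]);

// Returns { lang, code } for every fenced block with a previewable language.
function extractArtifacts(markdown) {
  const artifacts = [];
  const fence = /```(\w+)?\n([\s\S]*?)```/g;
  let match;
  while ((match = fence.exec(markdown)) !== null) {
    const lang = (match[1] || "").toLowerCase();
    if (PREVIEWABLE.has(lang)) artifacts.push({ lang, code: match[2] });
  }
  return artifacts;
}
```

Each returned artifact would back one card in the chat; clicking it loads `code` into the preview panel.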
The flagship feature for professional investigation:
- Autonomous Loop: The agent decides what to search, evaluates results, and digs deeper until requirements are met.
- 3-Minute Minimum: Ensures high-quality, non-trivial research outputs.
- PDF-Ready Reports: Generates formal research papers with Abstracts, Methodology, and References.
- Automatic Citations: All claims are cited to verifiable sources.
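The control flow of the loop can be sketched as follows. The `search`, `refineQuery`, and `isSufficient` callbacks are injected placeholders for the real Tavily/LLM calls, and the loop is shown synchronously for clarity; the real tool awaits network requests:

```javascript
// Keep searching and refining until BOTH the minimum duration has
// elapsed and the findings look sufficient (or we hit maxIterations).
function researchLoop(question, {
  search,          // query -> array of findings
  refineQuery,     // (question, findings) -> next query
  isSufficient,    // findings -> boolean
  now = Date.now,
  minDurationMs = 3 * 60 * 1000, // the 3-minute minimum
  maxIterations = 20,
}) {
  const startedAt = now();
  const findings = [];
  let query = question;
  for (let i = 0; i < maxIterations; i++) {
    findings.push(...search(query));
    const longEnough = now() - startedAt >= minDurationMs;
    if (longEnough && isSufficient(findings)) break;
    query = refineQuery(question, findings); // dig deeper
  }
  return findings;
}
```

The accumulated findings then feed the report generator, which structures them into abstract, methodology, and cited references.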
- New Chat: Click the New Chat button in the sidebar
- Switch Chats: Click on any chat in the history
- Delete Chat: Hover over a chat and click the trash icon
- Persistence: Your chats and artifacts are saved to your browser's local storage and isolated per session.
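Per-session persistence can be sketched as a storage wrapper keyed by session ID, so two sessions never see each other's history. Storage is injected for testability (pass `window.localStorage` in the browser); the key naming is illustrative, not the app's actual scheme:

```javascript
// A tiny chat store: list, create/update, and delete chats,
// all serialized under a session-scoped storage key.
function chatStore(sessionId, storage) {
  const key = `open-claude-chats:${sessionId}`;
  const load = () => JSON.parse(storage.getItem(key) || "[]");
  const save = (chats) => storage.setItem(key, JSON.stringify(chats));
  return {
    list: load,
    upsert(chat) {
      const chats = load().filter((c) => c.id !== chat.id);
      chats.push(chat);
      save(chats);
    },
    remove(chatId) {
      save(load().filter((c) => c.id !== chatId));
    },
  };
}
```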
For models that support reasoning (like OpenAI o1):
- The thinking section appears above the response
- Click to expand/collapse
- View token count and duration
- See the AI's step-by-step reasoning
```
Open-claude/
├── src/
│   ├── components/              # React components
│   │   ├── ArtifactPanel.jsx    # Code & Research preview panel
│   │   ├── BuildMode.jsx        # AI App Builder mode
│   │   ├── ChatMessage.jsx      # Individual message display
│   │   ├── ChatMode.jsx         # Main chat interface logic
│   │   ├── CitationDisplay.jsx  # Research source citations
│   │   ├── CodeMode.jsx         # Dedicated code editor
│   │   ├── ModeSwitcher.jsx     # Switch between Chat, Build, Code
│   │   ├── SettingsModal.jsx    # Provider & Model configuration
│   │   ├── ThinkingDisplay.jsx  # Reasoning process UI
│   │   └── Workbench/           # Build mode development area
│   ├── context/                 # Global state management
│   │   ├── ChatContext.jsx      # Core chat & artifacts state
│   │   ├── ModeContext.jsx      # Application mode state
│   │   └── BuildContext.jsx     # Build mode state
│   ├── lib/                     # Core logic & tools
│   │   ├── llm/                 # LLM factory and clients
│   │   ├── IntelligentSearchTool.js  # Deep Research & Web Search
│   │   └── BoltArtifactParser.js     # Advanced artifact extraction
│   ├── App.jsx                  # Main app entry layout
│   ├── main.jsx                 # React mounting point
│   └── index.css                # Global design system
├── public/                      # Static assets
├── index.html                   # Main entry template
├── package.json                 # Dependencies
└── tailwind.config.js           # UI design tokens
```
- Frontend Framework: React 18.2
- Build Tool: Vite 5.1
- Styling: Tailwind CSS + Custom CSS Variables
- Markdown: react-markdown + remark-gfm
- Syntax Highlighting: react-syntax-highlighter
- Icons: Lucide React
- Animations: Framer Motion
```bash
# Start development server
npm run dev

# Build for production
npm run build

# Preview production build
npm run preview

# Lint code
npm run lint
```

Running `npm run build` creates an optimized production build in the `dist/` folder.
The app doesn't use `.env` files; all settings are stored in your browser's `localStorage`, so API keys never leave your machine.
Edit src/index.css to customize the color scheme:
```css
:root {
  --accent: #d97757;       /* Primary accent color */
  --bg-primary: #FFFFFF;   /* Main background */
  --text-primary: #2F2F2F; /* Main text color */
  /* ... more variables */
}
```

To add a new LLM provider:

- Create a new client class in `src/lib/llm/clients.js`
- Extend `BaseClient` and implement `streamChat()`
- Add a case to the `LLMFactory.getClient()` switch
- Update `ModelService.js` to fetch the available models
- Add UI elements in `SettingsModal.jsx`
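These extension points can be illustrated with stubs. `BaseClient`, `streamChat()`, and `LLMFactory` mirror the names used in `src/lib/llm/`, but the bodies below are illustrative, not the app's actual code:

```javascript
// Base contract every provider client fulfills: an async generator
// that yields text chunks as they stream from the provider.
class BaseClient {
  constructor(apiKey) { this.apiKey = apiKey; }
  async *streamChat(messages) { throw new Error("not implemented"); }
}

// Example provider. A real implementation would POST `messages` to the
// provider's streaming endpoint and yield text deltas as they arrive.
class MyProviderClient extends BaseClient {
  async *streamChat(messages) {
    yield `echo: ${messages[messages.length - 1].content}`;
  }
}

// Factory case mapping the provider id from the settings UI to a client.
const LLMFactory = {
  getClient(provider, apiKey) {
    switch (provider) {
      case "myprovider": return new MyProviderClient(apiKey);
      default: throw new Error(`Unknown provider: ${provider}`);
    }
  },
};
```

With the client registered, the chat UI only ever talks to `streamChat()`, so no other component needs to know which provider is active.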
Contributions are welcome! Here's how you can help:
- Check if the bug has already been reported in Issues
- Create a new issue with:
- Clear title and description
- Steps to reproduce
- Expected vs actual behavior
- Screenshots if applicable
- Open an issue with the `enhancement` label
- Describe the feature and why it would be useful
- Provide examples or mockups if possible
- Fork the repository
- Create a new branch: `git checkout -b feature/your-feature-name`
- Make your changes
- Test thoroughly
- Commit: `git commit -m "Add: your feature description"`
- Push: `git push origin feature/your-feature-name`
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- Inspired by Anthropic's Claude interface
- Built with modern web technologies
- Community-driven development
- Issues: GitHub Issues
- Discussions: GitHub Discussions
If you find this project useful, please consider giving it a star! ⭐
Owner: Damien Chakma