
🤖 Open Claude

A modern, feature-rich chat interface supporting multiple AI providers with advanced reasoning capabilities


Features • Quick Start • Usage • Development • Contributing


📖 Overview

Open Claude is a powerful, open-source chat interface inspired by Anthropic's Claude. It provides a clean, intuitive UI for interacting with multiple Large Language Models (LLMs) including OpenAI GPT models, Google Gemini, Groq, and local models via Ollama or LM Studio.

🌟 Highlights

  • 🔬 Deep Research Scientist Mode: Autonomous iterative research loop for professional multi-page reports
  • 🧠 Advanced Thinking UI: Unique collapsible thinking/reasoning display showing the AI's thought process
  • 🎨 Premium Design: Modern, responsive interface with dark/light mode
  • 🔄 Multi-Provider: Switch between OpenAI, Groq, Gemini, Ollama, and LM Studio
  • 🔍 Intelligent Web Search: Integrated Tavily search with dynamic tool selection
  • 📦 Artifacts: Preview HTML, React, SVG, and research papers in a dedicated panel
  • 💬 Chat History: Multiple conversations with persistent local storage
  • 🎭 Markdown Support: Full GitHub-flavored markdown with syntax highlighting and automatic citations

✨ Features

🧠 Thinking/Reasoning Mode

A groundbreaking feature that displays the AI's internal reasoning process:

  • Collapsible Sections: View or hide the model's thought process
  • Token Tracking: See how many tokens were used for thinking
  • Duration Display: Track how long the model spent reasoning
  • Streaming Support: Watch the AI think in real-time
  • Premium Design: Beautiful gradient UI with smooth animations
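
Under the hood, a display like this must separate reasoning tokens from the final answer while the stream is still arriving. A minimal sketch, assuming the model emits its reasoning inside `<think>…</think>` tags (as several open reasoning models do; the app's actual parser may differ):

```javascript
// Split an accumulating stream of model output into the "thinking" part and
// the visible answer. Works on partial text, so it can run on every chunk.
// Assumption: reasoning is delimited by <think>...</think> tags.
function splitThinking(text) {
  const match = text.match(/<think>([\s\S]*?)(?:<\/think>|$)/);
  if (!match) return { thinking: "", answer: text, done: true };
  const closed = text.includes("</think>");
  // "</think>".length === 8; everything after the close tag is the answer.
  const answer = closed ? text.slice(text.indexOf("</think>") + 8) : "";
  return { thinking: match[1], answer, done: closed };
}
```

Calling this on each streamed chunk lets the UI render the collapsible thinking section live and switch to the answer once the close tag arrives.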

🤖 Multi-Provider Support

| Provider | Typical Models | Hosted | Notes |
| --- | --- | --- | --- |
| OpenAI | GPT-5.2, GPT-5, GPT-4.x | Cloud | Best general purpose + tools |
| Google Gemini | Gemini 3 Pro / Flash, 2.5 Flash / Pro | Cloud | Excellent multimodal, long context |
| Groq | llama3, mixtral, gemma, openai/gpt-oss | Cloud / optimized servers | Ultra-fast inference |
| Ollama | LLaMA, Gemma, Qwen, Mistral, Mixtral, CodeLlama | Local | Self-hosted open models |
| LM Studio | Any local GGUF model | Local | Local hosting with OpenAI-style API |

🎨 User Interface

  • Clean Design: Inspired by Claude's minimalist aesthetic
  • Dark/Light Mode: Automatic theme switching based on system preferences
  • Responsive: Works seamlessly on desktop and mobile
  • Syntax Highlighting: Beautiful code blocks with react-syntax-highlighter
  • Artifact Panel: Side-by-side code preview for HTML, React, and SVG

🔍 Web Search Integration

  • Powered by Tavily API
  • Real-time web search results
  • Context-aware responses with up-to-date information
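
The search flow above can be sketched as a direct call to Tavily's REST endpoint. The endpoint and payload follow Tavily's documented API, but treat the exact field names as assumptions to verify against the current docs; formatResults is an illustrative helper, not the app's actual code:

```javascript
// Illustrative helper: flatten search hits into a numbered context block
// that can be prepended to the LLM prompt (and later cited as [1], [2], ...).
function formatResults(results) {
  return results
    .map((r, i) => `[${i + 1}] ${r.title}\n${r.url}\n${r.content}`)
    .join("\n\n");
}

// Sketch of a Tavily search call; payload fields per Tavily's REST API.
async function tavilySearch(apiKey, query, maxResults = 5) {
  const res = await fetch("https://api.tavily.com/search", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ api_key: apiKey, query, max_results: maxResults }),
  });
  if (!res.ok) throw new Error(`Tavily error: ${res.status}`);
  const data = await res.json();
  return formatResults(data.results ?? []);
}
```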

🚀 Quick Start

Prerequisites

Before you begin, ensure you have:

  • Node.js 18.0 or higher (Download)
  • npm (comes with Node.js) or yarn
  • Git (Download)

Installation

  1. Clone the repository
git clone https://github.com/Damienchakma/Open-claude.git
cd Open-claude
  2. Install dependencies
npm install
  3. Start the development server
npm run dev
  4. Open in your browser

Navigate to http://localhost:5173

That's it! 🎉 The application is now running.


⚙️ Configuration

Getting API Keys

Open Claude requires API keys for cloud providers. Here's how to get them:

OpenAI API Key

  1. Visit OpenAI Platform
  2. Sign up or log in
  3. Go to API Keys section
  4. Click Create new secret key
  5. Copy and save the key (you won't see it again!)

Google Gemini API Key

  1. Visit Google AI Studio
  2. Sign in with your Google account
  3. Click Create API Key
  4. Copy the generated key

Groq API Key

  1. Visit Groq Console
  2. Create an account or log in
  3. Navigate to API Keys
  4. Generate a new API key

Tavily API Key (Optional - for web search)

  1. Visit Tavily
  2. Sign up for an account
  3. Get your API key from the dashboard

Setting Up API Keys in the App

  1. Click the Settings icon (gear) in the sidebar
  2. Enter your API keys in the respective fields
  3. Click Save
  4. Your keys are stored locally in your browser (never sent to any server)
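
The local-only storage described in step 4 can be sketched as follows; the "openclaude.keys" storage key and the shape of the settings object are hypothetical, not the app's actual schema:

```javascript
// Persist provider API keys in the browser's localStorage only.
// NOTE: the storage key name below is illustrative, not the app's real one.
const STORAGE_KEY = "openclaude.keys";

function saveApiKeys(keys) {
  // e.g. keys = { openai: "sk-...", gemini: "...", groq: "..." }
  localStorage.setItem(STORAGE_KEY, JSON.stringify(keys));
}

function loadApiKeys() {
  try {
    return JSON.parse(localStorage.getItem(STORAGE_KEY)) ?? {};
  } catch {
    return {}; // corrupted or missing entry: fall back to empty settings
  }
}
```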

Using Local Models (No API Key Required)

Ollama Setup

  1. Install Ollama from ollama.ai
  2. Pull a model: ollama pull llama2
  3. Ollama runs on localhost:11434 by default
  4. Select Ollama in the model dropdown
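
Once Ollama is running, the app can talk to it over plain HTTP. A sketch using Ollama's documented /api/chat endpoint (the buildOllamaRequest helper is illustrative, and the model name is whatever you pulled in step 2):

```javascript
// Build the request for Ollama's local chat endpoint (default port 11434).
function buildOllamaRequest(prompt, model = "llama2") {
  return {
    url: "http://localhost:11434/api/chat",
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model,
        messages: [{ role: "user", content: prompt }],
        stream: false, // set true for token-by-token NDJSON streaming
      }),
    },
  };
}

// Fire the request and return the assistant's reply text.
async function ollamaChat(prompt, model) {
  const { url, options } = buildOllamaRequest(prompt, model);
  const res = await fetch(url, options);
  if (!res.ok) throw new Error(`Ollama error: ${res.status}`);
  return (await res.json()).message.content;
}
```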

LM Studio Setup

  1. Install LM Studio
  2. Download and load a model
  3. Start the local server (port 1234)
  4. Select LM Studio in the model dropdown
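
Because LM Studio's local server speaks the OpenAI chat-completions dialect, a generic request works with no API key. The model name below is a placeholder (LM Studio serves whichever model is loaded in its UI), and extractReply is an illustrative helper:

```javascript
// Illustrative helper: pull the reply text out of an OpenAI-style response.
function extractReply(data) {
  return data.choices[0].message.content;
}

// Sketch: chat with LM Studio's OpenAI-compatible server on port 1234.
async function lmStudioChat(prompt, model = "local-model") {
  const res = await fetch("http://localhost:1234/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model, // placeholder; LM Studio uses the model loaded in its UI
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`LM Studio error: ${res.status}`);
  return extractReply(await res.json());
}
```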

🎯 Usage

Basic Chat

  1. Select a provider and model from the dropdown
  2. Type your message in the input field
  3. Press Enter or click Send
  4. View the AI's response

Using Web Search

  1. Click the Globe icon to enable web search
  2. Your queries will include real-time web results
  3. The AI will provide context-aware answers with current information

Viewing Artifacts

When the AI generates code (HTML, React, SVG):

  1. An artifact card appears in the chat
  2. Click the card to open the preview panel
  3. Toggle between Preview and Code views
  4. Copy code with the copy button
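
Artifact detection boils down to scanning the assistant's markdown for fenced code blocks of previewable types. A simplified sketch (the real BoltArtifactParser handles more formats and metadata):

```javascript
// Find fenced code blocks whose language is previewable (html, jsx, svg)
// so each can become an artifact card in the chat.
function extractArtifacts(markdown) {
  const artifacts = [];
  const fence = /```(\w+)?\n([\s\S]*?)```/g;
  let m;
  while ((m = fence.exec(markdown)) !== null) {
    const lang = (m[1] || "text").toLowerCase();
    if (["html", "jsx", "svg"].includes(lang)) {
      artifacts.push({ lang, code: m[2] });
    }
  }
  return artifacts;
}
```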

🔬 Deep Research Scientist Mode

The flagship feature for professional investigation:

  1. Autonomous Loop: The agent decides what to search, evaluates results, and digs deeper until requirements are met.
  2. 3-Minute Minimum: Ensures high-quality, non-trivial research outputs.
  3. PDF-Ready Reports: Generates formal research papers with Abstracts, Methodology, and References.
  4. Automatic Citations: All claims are cited to verifiable sources.
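
The autonomous loop described above can be sketched as follows; searchFn and evaluateFn are hypothetical stand-ins for the app's real search tool and LLM-based evaluator, and the search → evaluate → refine shape is the point:

```javascript
// Iterative research loop: search, accumulate findings, ask an evaluator
// whether the evidence suffices or a sharper follow-up query is needed.
// searchFn and evaluateFn are hypothetical injected dependencies.
async function researchLoop(question, searchFn, evaluateFn, maxRounds = 5) {
  const findings = [];
  let query = question;
  for (let round = 0; round < maxRounds; round++) {
    const results = await searchFn(query);
    findings.push(...results);
    // The evaluator either declares the research done or proposes the
    // next, more specific query to dig deeper.
    const verdict = await evaluateFn(question, findings);
    if (verdict.done) break;
    query = verdict.nextQuery;
  }
  return findings;
}
```

In the app, a time-based condition (the 3-minute minimum) would be one more term in the evaluator's stopping decision.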

Managing Chats

  • New Chat: Click the New Chat button in the sidebar
  • Switch Chats: Click on any chat in the history
  • Delete Chat: Hover over a chat and click the trash icon
  • Persistence: Your chats and artifacts are saved to your browser's local storage and isolated per session.

Viewing AI Thinking Process

For models that support reasoning (like OpenAI o1):

  1. The thinking section appears above the response
  2. Click to expand/collapse
  3. View token count and duration
  4. See the AI's step-by-step reasoning

🛠️ Development

Project Structure

Open-claude/
├── src/
│   ├── components/          # React components
│   │   ├── ArtifactPanel.jsx      # Code & Research preview panel
│   │   ├── BuildMode.jsx          # AI App Builder mode
│   │   ├── ChatMessage.jsx        # Individual message display
│   │   ├── ChatMode.jsx           # Main chat interface logic
│   │   ├── CitationDisplay.jsx    # Research source citations
│   │   ├── CodeMode.jsx           # Dedicated code editor
│   │   ├── ModeSwitcher.jsx       # Switch between Chat, Build, Code
│   │   ├── SettingsModal.jsx      # Provider & Model configuration
│   │   ├── ThinkingDisplay.jsx    # Reasoning process UI
│   │   └── Workbench/             # Build mode development area
│   ├── context/             # Global state management
│   │   ├── ChatContext.jsx        # Core chat & artifacts state
│   │   ├── ModeContext.jsx        # Application mode state
│   │   └── BuildContext.jsx       # Build mode state
│   ├── lib/                 # Core logic & tools
│   │   ├── llm/                   # LLM factory and clients
│   │   ├── IntelligentSearchTool.js # Deep Research & Web Search
│   │   └── BoltArtifactParser.js   # Advanced artifact extraction
│   ├── App.jsx              # Main app entry layout
│   ├── main.jsx             # React mounting point
│   └── index.css            # Global design system
├── public/                  # Static assets
├── index.html               # Main entry template
├── package.json             # Dependencies
└── tailwind.config.js       # UI design tokens

Tech Stack

  • Frontend Framework: React 18.2
  • Build Tool: Vite 5.1
  • Styling: Tailwind CSS + Custom CSS Variables
  • Markdown: react-markdown + remark-gfm
  • Syntax Highlighting: react-syntax-highlighter
  • Icons: Lucide React
  • Animations: Framer Motion

Available Scripts

# Start development server
npm run dev

# Build for production
npm run build

# Preview production build
npm run preview

# Lint code
npm run lint

Building for Production

npm run build

This creates an optimized production build in the dist/ folder.

Environment Variables

The app doesn't use .env files. All settings, including API keys, live in your browser's localStorage, so they stay on your machine and never end up in the codebase. Note that localStorage is not encrypted, so avoid saving keys on shared computers.


🎨 Customization

Changing Theme Colors

Edit src/index.css to customize the color scheme:

:root {
  --accent: #d97757;        /* Primary accent color */
  --bg-primary: #FFFFFF;    /* Main background */
  --text-primary: #2F2F2F;  /* Main text color */
  /* ... more variables */
}

Adding New LLM Providers

  1. Create a new client class in src/lib/llm/clients.js
  2. Extend BaseClient and implement streamChat()
  3. Add to LLMFactory.getClient() switch case
  4. Update ModelService.js to fetch available models
  5. Add UI elements in SettingsModal.jsx
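
Steps 1–3 might look like the sketch below. The real BaseClient and factory live in src/lib/llm/ and their exact signatures may differ; MyProviderClient and its echo behavior are purely illustrative:

```javascript
// Hypothetical base class mirroring step 2's contract: subclasses
// implement streamChat() as an async generator of text chunks.
class BaseClient {
  constructor(apiKey) {
    this.apiKey = apiKey;
  }
  async *streamChat(_messages) {
    throw new Error("not implemented");
  }
}

// Illustrative provider client: a real one would fetch the provider's
// streaming endpoint; this just echoes the last user message word by word.
class MyProviderClient extends BaseClient {
  async *streamChat(messages) {
    const last = messages[messages.length - 1].content;
    for (const token of last.split(" ")) yield token + " ";
  }
}

// Factory selection, mirroring step 3's switch case.
function getClient(provider, apiKey) {
  switch (provider) {
    case "myprovider":
      return new MyProviderClient(apiKey);
    default:
      throw new Error(`Unknown provider: ${provider}`);
  }
}
```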

🤝 Contributing

Contributions are welcome! Here's how you can help:

Reporting Bugs

  1. Check if the bug has already been reported in Issues
  2. Create a new issue with:
    • Clear title and description
    • Steps to reproduce
    • Expected vs actual behavior
    • Screenshots if applicable

Suggesting Features

  1. Open an issue with the enhancement label
  2. Describe the feature and why it would be useful
  3. Provide examples or mockups if possible

Pull Requests

  1. Fork the repository
  2. Create a new branch: git checkout -b feature/your-feature-name
  3. Make your changes
  4. Test thoroughly
  5. Commit: git commit -m "Add: your feature description"
  6. Push: git push origin feature/your-feature-name
  7. Open a Pull Request

📝 License

This project is licensed under the MIT License - see the LICENSE file for details.


🙏 Acknowledgments

  • Inspired by Anthropic's Claude interface
  • Built with modern web technologies
  • Community-driven development

📞 Support


⭐ Star History

If you find this project useful, please consider giving it a star! ⭐

