A full-featured, client-side AI chatbot application built with React 19 and TypeScript. Access multiple AI models through the OpenRouter API, with streaming responses, multi-project management, and persistent context files.
- Client-side authentication with SHA-256 password hashing
- User registration and login
- Session management with localStorage
- User-scoped data isolation
- Create unlimited chat projects
- Custom instructions per project
- Project-specific settings (model, temperature, max tokens)
- Persistent context files
- Independent chat histories
- Real-time streaming responses (SSE)
- Markdown rendering with syntax highlighting
- File attachments (images and text files)
- Message history with timestamps
- Token counting and usage tracking
- Add persistent files to project context
- Automatic token counting
- Support for various text file formats
- Visual file management interface
- OpenRouter API key management
- Model selection (Claude, GPT-4, Gemini, Llama, etc.)
- Theme control (Light/Dark/Auto)
- Font size adjustment
- Streaming toggle
- Token warning thresholds
- Clean, responsive design
- Dark mode support
- Smooth animations and transitions
- Mobile-friendly interface
- Beautiful gradient authentication screens
- Frontend: React 19 with TypeScript
- Build Tool: Vite
- Styling: CSS with CSS Variables
- API: OpenRouter (streaming SSE)
- Storage: localStorage (client-side only)
- Markdown: react-markdown with remark-gfm
- Syntax Highlighting: react-syntax-highlighter
- Token Counting: js-tiktoken
- Icons: lucide-react
- Animations: framer-motion
- Date Formatting: date-fns
- Node.js 18+ and npm
- OpenRouter API key (available from openrouter.ai)
- Clone the repository: `git clone <repository-url>` then `cd AI-Chatbot`
- Install dependencies: `npm install`
- Start the development server: `npm run dev`
- Open your browser and navigate to `http://localhost:5173`
- Push to GitHub: `git push origin main`
- Import to Vercel:
  - Go to vercel.com
  - Click "New Project"
  - Import your repository
  - Vercel will auto-detect Vite settings
  - Click "Deploy"
Configuration:
- Framework Preset: Vite
- Build Command: `npm run build`
- Output Directory: `dist`
- Node.js Version: 18.x (auto-detected from `.nvmrc`)
The app uses lazy loading and code splitting:
- Initial bundle: ~66 KB gzipped (loads instantly)
- Chat components: ~230 KB gzipped (loads when chatting)
- Token counter: ~2.5 MB gzipped (loads when needed)
Heavy dependencies (syntax highlighter, tiktoken) are lazy-loaded only when the user starts chatting, ensuring fast initial page load.
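The load-on-demand pattern behind this can be sketched as a cached dynamic import: the heavy module is fetched only on first use, and every later call reuses the same promise. `loadOnce` is a hypothetical helper, not the app's actual code.

```typescript
// Wraps a dynamic-import loader so it runs at most once; subsequent calls
// return the cached promise instead of re-fetching the heavy module.
function loadOnce<T>(loader: () => Promise<T>): () => Promise<T> {
  let cached: Promise<T> | undefined;
  return () => (cached ??= loader());
}

// Usage sketch: the tokenizer bundle loads only when first requested, e.g.
// const getTokenizer = loadOnce(() => import("js-tiktoken"));
```

React components follow the same idea via `React.lazy` and `<Suspense>`, which defer a component's chunk until it is first rendered.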
- Click "Sign up" on the authentication screen
- Enter a username (minimum 3 characters)
- Enter a password (minimum 6 characters)
- Confirm your password
- Click "Create Account"
You'll be automatically logged in after registration.
- Click the Settings icon (⚙️) in the top right
- Enter your OpenRouter API key
- Select your preferred default model
- Adjust other settings as needed
- Click "Save Settings"
- Click the "+" button in the sidebar
- Enter a project name
- (Optional) Add a description
- (Optional) Add custom instructions for the AI
- Click "Create Project"
- Select your project from the sidebar
- Type your message in the input box
- (Optional) Attach files using the paperclip icon
- Press Enter or click Send
- Watch as the AI responds in real-time!
```
AI-Chatbot/
├── src/
│   ├── components/          # React components
│   │   ├── Auth/            # Authentication components
│   │   ├── Chat/            # Chat interface components
│   │   ├── Common/          # Reusable UI components
│   │   ├── Files/           # Context files management
│   │   ├── Layout/          # App layout components
│   │   ├── Project/         # Project management components
│   │   └── Settings/        # Settings panel
│   ├── contexts/            # React context providers
│   │   ├── AuthContext.tsx
│   │   ├── ProjectContext.tsx
│   │   └── SettingsContext.tsx
│   ├── services/            # Business logic services
│   │   ├── AuthService.ts
│   │   ├── StorageService.ts
│   │   ├── OpenRouterAPI.ts
│   │   ├── FileProcessor.ts
│   │   └── TokenCounter.ts
│   ├── types/               # TypeScript type definitions
│   ├── styles/              # Global styles
│   ├── App.tsx              # Main app component
│   └── main.tsx             # Application entry point
├── index.html
├── vite.config.ts
├── tsconfig.json
└── package.json
```
- `npm run dev` - Start development server
- `npm run build` - Build for production
- `npm run preview` - Preview production build
- `npm run lint` - Run ESLint
All user data is stored locally in the browser using localStorage. Passwords are hashed using SHA-256 with a random salt. Note: This is for demonstration purposes only. For production use, implement proper backend authentication.
Each project maintains:
- Separate chat history
- Custom instructions (system prompt)
- Context files (persistent across conversations)
- Project-specific model settings
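Conceptually, these per-project pieces combine into one request payload. The sketch below uses the OpenAI-style chat format that OpenRouter accepts; the helper and its types are illustrative, not the app's actual code.

```typescript
interface ContextFile { name: string; content: string; }
interface ChatMessage { role: "system" | "user" | "assistant"; content: string; }

// Assembles the system prompt from custom instructions plus context files,
// then prepends it to the project's chat history.
function buildMessages(
  instructions: string,
  files: ContextFile[],
  history: ChatMessage[],
): ChatMessage[] {
  const fileBlock = files
    .map((f) => `--- ${f.name} ---\n${f.content}`)
    .join("\n\n");
  const system = [instructions, fileBlock].filter(Boolean).join("\n\n");
  return [{ role: "system", content: system }, ...history];
}
```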
Add text files to your project that will be included in every conversation:
- Supports: .txt, .md, .js, .ts, .jsx, .tsx, .json, .py, .java, .html, .css, and more
- Automatic token counting
- Visual file size and token display
- Easy removal of files
Real-time response streaming using Server-Sent Events (SSE):
- See responses as they're generated
- Stop generation at any time
- Smooth typing animation
- Error handling and recovery
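At the parsing level, each SSE chunk is a series of `data:` lines ending in a `[DONE]` sentinel. A minimal sketch, assuming the OpenAI-style chunk shape (`choices[0].delta.content`) that OpenRouter streams; the function is illustrative, not the app's actual code:

```typescript
// Extracts text deltas from a raw SSE chunk of an OpenRouter-style stream.
function parseSSEChunk(raw: string): string[] {
  const deltas: string[] = [];
  for (const line of raw.split("\n")) {
    if (!line.startsWith("data: ")) continue;
    const payload = line.slice(6).trim();
    if (payload === "[DONE]") break; // end-of-stream sentinel
    try {
      const json = JSON.parse(payload);
      const text = json.choices?.[0]?.delta?.content;
      if (typeof text === "string") deltas.push(text);
    } catch {
      // Ignore keep-alive comments and partially received lines.
    }
  }
  return deltas;
}
```

In the browser, chunks like these arrive from `fetch`'s `ReadableStream` body and are appended to the message as they parse, which is what produces the typing effect.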
Track your API usage with built-in token counting:
- Message-level token counts
- Context file token counts
- Total conversation tokens
- Warning when approaching limits
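Exact counts come from js-tiktoken, but since that bundle is lazy-loaded, a cheap stand-in such as the common ~4-characters-per-token heuristic can bridge the gap until it arrives. Both helpers below are hypothetical, not the app's actual TokenCounter API.

```typescript
// Rough token estimate (~4 chars/token) used until the real tokenizer loads.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Fires the warning once usage crosses a fraction of the model's limit.
function isOverThreshold(totalTokens: number, limit: number, warnRatio = 0.8): boolean {
  return totalTokens >= limit * warnRatio;
}
```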
Three theme options:
- Light: Clean, bright interface
- Dark: Easy on the eyes
- Auto: Follows system preference
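Resolving "Auto" comes down to checking the system preference, which in the browser is `matchMedia("(prefers-color-scheme: dark)")`. A sketch with a hypothetical resolver (the system check is passed in so the logic stays testable):

```typescript
type Theme = "light" | "dark" | "auto";

// Maps the stored setting to the effective theme; "auto" defers to the
// system preference supplied by the caller.
function resolveTheme(setting: Theme, systemPrefersDark: boolean): "light" | "dark" {
  if (setting === "auto") return systemPrefersDark ? "dark" : "light";
  return setting;
}

// In the browser, the preference would come from:
// window.matchMedia("(prefers-color-scheme: dark)").matches
```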
Via OpenRouter, you can access:
- Anthropic: Claude 3.5 Sonnet, Claude 3 Opus, Claude 3 Haiku
- OpenAI: GPT-4 Turbo, GPT-4, GPT-3.5 Turbo
- Google: Gemini Pro
- Meta: Llama 3.1 70B Instruct
- And many more!
- This is a client-side only application
- All data is stored in browser localStorage
- API keys are stored in plain text in localStorage
- Password hashing is client-side only
- Not suitable for production without a proper backend
For production use, you should:
- Implement server-side authentication
- Store API keys securely on the backend
- Use HTTPS
- Implement rate limiting
- Add proper error logging
- Use environment variables
- Chrome/Edge 90+
- Firefox 88+
- Safari 14+
The build is optimized with:
- Manual chunk splitting (React, markdown, syntax highlighter)
- Code splitting for better load times
- CSS minification
- Tree shaking
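The manual chunk splitting above might look roughly like this in `vite.config.ts` (a sketch; the chunk names and groupings are illustrative, not necessarily the project's exact config):

```typescript
import { defineConfig } from "vite";

export default defineConfig({
  build: {
    rollupOptions: {
      output: {
        // Group heavy dependencies into named chunks so they load separately.
        manualChunks: {
          react: ["react", "react-dom"],
          markdown: ["react-markdown", "remark-gfm"],
          highlighter: ["react-syntax-highlighter"],
        },
      },
    },
  },
});
```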
All data is stored in localStorage under the following keys:
- `chatbot_users` - User accounts
- `chatbot_session` - Current session
- `chatbot_{userId}_projects` - User's projects
- `chatbot_{userId}_messages_{projectId}` - Project messages
- `chatbot_{userId}_settings` - User settings
- `chatbot_{userId}_activeProjectId` - Active project
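The per-user scoping in that scheme can be captured with small key builders. These are hypothetical helpers mirroring the key layout above, not the app's actual StorageService API:

```typescript
// Builds a user-scoped localStorage key, e.g. "chatbot_u1_settings".
const userKey = (userId: string, suffix: string): string =>
  `chatbot_${userId}_${suffix}`;

// Builds the key for one project's message history.
const messagesKey = (userId: string, projectId: string): string =>
  userKey(userId, `messages_${projectId}`);
```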
API errors:
- Verify your API key is correct
- Check you have credits on OpenRouter
- Try a different model
- Check your internet connection

Streaming not working:
- Some networks block SSE
- Try disabling streaming in settings
- Check browser compatibility

Theme not applying:
- Clear localStorage and refresh
- Verify theme setting in Settings panel
Contributions are welcome! Please feel free to submit a Pull Request.
MIT License - feel free to use this project for learning or commercial purposes.
- Built with Vite
- Powered by OpenRouter
- Icons by Lucide
- Markdown rendering by react-markdown
For issues and questions:
- Check existing issues on GitHub
- Create a new issue with detailed information
- Include browser console errors if applicable
Happy Chatting! 🚀