A powerful local AI assistant that runs entirely offline on your computer using Ollama as the LLM backend. LIA understands natural language commands and can safely execute file operations, folder management, and more.
- 100% Offline - Runs entirely on your local machine
- Natural Language Interface - Communicate in plain English
- Multi-Interface - Web UI, CLI, and REST API
- Secure Command Execution - Safe, sandboxed file operations
- Modern UI - Beautiful dark theme React interface
- Powered by Ollama - Uses local LLMs (llama3) for intelligent parsing
- File Operations - Open files/folders, list, create, delete, copy, move, rename
- File Management - Copy, move, rename files and directories
- AI Content Generation - Generate file contents using LLM
- Safe Shell Commands - Execute whitelisted system commands
- CLI Interface - Use from terminal with natural language
- 🐍 Python Mode - Intelligent Python code generation and execution for complex tasks
- Automatic fallback when native commands can't handle requests
- Handles calculations, data analysis, pattern extraction, and more
- Sandboxed execution with security safeguards
- Subtle UI indication with code visibility
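The "safe shell commands" feature above relies on a whitelist. A minimal sketch of the idea (LIA's actual whitelist and validation logic are not shown here; `SAFE_COMMANDS` below is an assumption):

```python
# Illustrative whitelist check; LIA's actual set of allowed commands
# and validation logic may differ.
SAFE_COMMANDS = {"ls", "date", "pwd", "whoami", "df", "uptime"}

def is_command_allowed(command: str) -> bool:
    """Allow a command only if its first word is in the whitelist."""
    parts = command.strip().split()
    return bool(parts) and parts[0] in SAFE_COMMANDS
```

Anything not explicitly whitelisted is rejected, so new commands are opt-in rather than opt-out.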
Before running LIA, ensure you have:

- Python 3.8 or higher (Python 3.11-3.13 recommended)

  ```bash
  python3 --version
  ```

- Node.js 16 or higher

  ```bash
  node --version
  ```

- Ollama installed and running

  Install Ollama from: https://ollama.ai

  Then pull a model (e.g., llama3):

  ```bash
  ollama pull llama3
  ```

  Start the Ollama server:

  ```bash
  ollama serve
  ```
```bash
git clone https://github.com/Yusiko99/LIA
cd LIA
chmod +x setup.sh
./setup.sh
cd frontend
npm install
./run.sh
```

What this does:

- Checks for Ollama, installs it if missing, and starts the server
- Guides you through selecting a local model, including `hf.co/Yusiko/LIA` (Llama 3.1 fine-tuned for LIA)
- Optionally pre-downloads the chosen model so the first run is smooth
```bash
cd LIA
cd backend

# Create virtual environment
python3 -m venv venv

# Activate virtual environment
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt
```

Create a .env file in the backend directory:
```env
OLLAMA_HOST=http://localhost:11434
OLLAMA_MODEL=llama3
HOST=0.0.0.0
PORT=8000
ALLOWED_DIRECTORIES=/home/$USERNAMEHERE$
```

```bash
cd ../frontend
```
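In LIA these variables are handled by `config.py` (not reproduced here). As an illustration, a stdlib-only sketch of loading them might look like this; the real implementation may use a library such as python-dotenv or pydantic and differ in detail:

```python
import os

# Defaults mirror the values documented above; the real config.py may differ.
DEFAULTS = {
    "OLLAMA_HOST": "http://localhost:11434",
    "OLLAMA_MODEL": "llama3",
    "HOST": "0.0.0.0",
    "PORT": "8000",
}

def load_env(path=".env"):
    """Read KEY=VALUE pairs from a .env file, falling back to defaults."""
    cfg = dict(DEFAULTS)
    if os.path.exists(path):
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#") and "=" in line:
                    key, _, value = line.partition("=")
                    cfg[key.strip()] = value.strip()
    # Real environment variables take precedence over file values
    for key in cfg:
        cfg[key] = os.environ.get(key, cfg[key])
    return cfg
```

Letting real environment variables override the file makes it easy to switch models per-shell without editing .env.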
```bash
# Install dependencies
npm install
```

Terminal 1 - Backend:
```bash
cd backend
source venv/bin/activate
python main.py
```

Terminal 2 - Frontend:
```bash
cd frontend
npm run dev
```

Navigate to: http://localhost:5173
```bash
# Add to ~/.bashrc or ~/.zshrc
export PATH="/home/$USERNAMEHERE$/LIA:$PATH"

# Then use from anywhere
cd ~/Documents
lia "List all PDF files"
```

Full CLI Guide: See CLI_GUIDE.md for complete documentation.
"Open the Pictures folder"
"Open image.jpg"
"Open document.pdf"
"List all PDF files in Downloads"
"Show me all images in Pictures"
"Search for Python files"
"Create a file named notes.txt"
"Create report.txt and write 3 pages about AI"
"Write 'Hello World' to test.txt"
"Delete old.txt"
"Copy file.txt to backup.txt"
"Move image.jpg to Pictures"
"Rename old.txt to new.txt"
"Get info about document.pdf"
"Show me my system information"
"Run ls"
"Run date"
"Calculate the factorial of 10"
"Find the average of numbers in data.txt"
"Count how many times 'error' appears in system.log"
"Extract all email addresses from contacts.txt"
"List all files larger than 10MB in Downloads"
Python Mode automatically activates for complex tasks that native commands can't handle!
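As an illustration, for a request like "Calculate the factorial of 10", Python Mode might generate and sandbox-run a snippet along these lines (hypothetical; the actual LLM-generated code will vary):

```python
import math

# Hypothetical code Python Mode could generate for
# "Calculate the factorial of 10"; actual model output will vary.
result = math.factorial(10)
print(result)  # → 3628800
```

The generated code runs in an isolated subprocess, and its printed output is returned to the chat as the answer.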
```
backend/
├── main.py              # FastAPI application entry point
├── lia_cli.py           # Command-line interface
├── config.py            # Configuration management
├── models.py            # Pydantic models and schemas
├── ollama_service.py    # Ollama LLM integration
├── command_executor.py  # Safe command execution
├── python_executor.py   # 🐍 Python Mode: Code generation & execution
└── requirements.txt     # Python dependencies
```
Key Components:
- FastAPI Server - REST API for frontend communication
- Ollama Service - Natural language parsing and content generation
- Command Executor - Secure, sandboxed command execution
- 🐍 Python Executor - Intelligent Python code generation and execution for complex tasks
- CLI Interface - Terminal-based interaction
- Path Validation - Ensures operations stay within allowed directories
Supported Operations:

- Open files/folders (`open_file`, `open_folder`)
- List & search files (`list_files`, `search_files`)
- Create, read, write, delete files
- Copy, move, rename files (`copy_file`, `move_file`, `rename_file`)
- Get file/system info (`get_info`)
- Execute safe shell commands (`execute_command`)
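Routing a parsed operation name to its handler can be sketched roughly as follows (a minimal illustration; the real `command_executor.py` may differ in names and structure, and the handler below is a placeholder):

```python
# Sketch of dispatching parsed commands to handlers; LIA's actual
# command_executor.py may be structured differently.
from typing import Any, Callable, Dict

def list_files(params: Dict[str, Any]) -> Dict[str, Any]:
    # Placeholder handler; a real one would validate the path first.
    return {"success": True, "files": []}

HANDLERS: Dict[str, Callable[[Dict[str, Any]], Dict[str, Any]]] = {
    "list_files": list_files,
    # "open_file": ..., "copy_file": ..., one entry per supported operation
}

def execute(command_type: str, params: Dict[str, Any]) -> Dict[str, Any]:
    handler = HANDLERS.get(command_type)
    if handler is None:
        return {"success": False, "error": f"Unknown command: {command_type}"}
    return handler(params)
```

Unknown operation names fail closed with an error result instead of reaching the filesystem.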
```
frontend/
├── src/
│   ├── App.jsx    # Main application component
│   ├── App.css    # Component styles
│   ├── index.css  # Global styles
│   └── main.jsx   # React entry point
├── index.html
├── package.json
└── vite.config.js
```
Key Features:
- Modern chat interface
- Real-time command execution feedback
- File list visualization
- Connection status monitoring
- Dark theme with smooth animations
LIA implements multiple security layers:
- Path Validation - All file paths are validated and normalized
- Directory Whitelisting - Operations restricted to allowed directories
- Command Sandboxing - Only safe, predefined operations allowed
- 🐍 Python Sandboxing - Python code runs in isolated subprocess with timeout (5s) and restricted environment
- Code Safety Checks - Validates generated Python code for dangerous patterns
- Local Execution - Everything runs on your machine, no data sent externally
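The path-validation layer can be illustrated with a short sketch (assumed logic, not LIA's actual implementation): resolve the requested path and only accept it if it stays inside an allowed base directory.

```python
import os
from typing import List

def is_path_allowed(requested: str, allowed_dirs: List[str]) -> bool:
    """Return True only if the resolved path stays inside an allowed base.

    os.path.realpath() collapses '..' segments and resolves symlinks, so
    traversal tricks like '/home/user/../../etc/passwd' are rejected.
    """
    resolved = os.path.realpath(requested)
    for base in allowed_dirs:
        base = os.path.realpath(base)
        if resolved == base or resolved.startswith(base + os.sep):
            return True
    return False
```

Resolving both sides before comparing is the key detail; comparing raw strings would let `..` or symlinks escape the sandbox.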
Edit `backend/.env`:

- `OLLAMA_HOST` - Ollama server URL (default: http://localhost:11434)
- `OLLAMA_MODEL` - Model to use (default: llama3). You can also set `hf.co/Yusiko/LIA`.
- `PORT` - Backend server port (default: 8000)
- `ALLOWED_DIRECTORIES` - Base directory for file operations
Edit `frontend/src/App.jsx`:

- `API_URL` - Backend API URL (default: http://localhost:8000)
- `GET /health`
- `POST /api/chat` - Body: `{ "message": "your command here" }`
- `POST /api/generate` - Body: `{ "message": "content prompt" }`
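The chat endpoint can be called from any HTTP client. A stdlib-only Python sketch of building the request (the response shape depends on the backend and is not shown; sending requires the backend to be running):

```python
import json
import urllib.request

API_URL = "http://localhost:8000"  # adjust if your backend runs elsewhere

def build_chat_request(message: str) -> urllib.request.Request:
    """Build a POST /api/chat request with a JSON body."""
    body = json.dumps({"message": message}).encode("utf-8")
    return urllib.request.Request(
        API_URL + "/api/chat",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# To actually send it (backend must be running):
# with urllib.request.urlopen(build_chat_request("List all PDF files")) as resp:
#     print(json.loads(resp.read()))
```

The same pattern works for `/api/generate` by changing the path.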
- Check if Ollama is running:

  ```bash
  curl http://localhost:11434/api/tags
  ```

- Verify Python dependencies:

  ```bash
  pip install -r requirements.txt
  ```
- Ensure backend is running on port 8000
- Check CORS settings in `backend/main.py`
- Verify the frontend API_URL matches the backend address
- Start Ollama:

  ```bash
  ollama serve
  ```

- Pull the required model:

  ```bash
  ollama pull llama3
  ```
- Add the command type to `models.py`:

  ```python
  class CommandType(str, Enum):
      YOUR_COMMAND = "your_command"
  ```

- Implement the handler in `command_executor.py`:

  ```python
  async def _your_command(self, params: Dict[str, Any]) -> CommandResult:
      ...  # Implementation
  ```

- Update the LLM prompt in `ollama_service.py`
MIT License - Feel free to use and modify as needed.
This is a personal project, but suggestions and improvements are welcome!
Future enhancements:
- WebSocket for real-time updates
- Streaming LLM responses
- File preview in UI
- Multi-model support (switch models on-the-fly)
- Voice input/output
- Command history persistence
- Scheduled tasks
- Plugin system
- Docker deployment
- Mobile app
- Model Selection - Larger models (such as llama3:70b) provide better understanding but require more resources
- Performance - First command may be slower as Ollama loads the model
- Safety - Always review what LIA will do before confirming sensitive operations
- Custom Paths - Use absolute paths for operations outside your home directory
For issues or questions:
- Check the troubleshooting section
- Review Ollama documentation: https://ollama.ai/docs
- Check FastAPI docs: https://fastapi.tiangolo.com
- Contact me via Instagram: https://instagram.com/yusikome