LIA - Local Intelligent Agent

A powerful local AI assistant that runs entirely offline on your computer using Ollama as the LLM backend. LIA understands natural language commands and can safely execute file operations, folder management, and more.

🌟 Features

  • 100% Offline - Runs entirely on your local machine
  • Natural Language Interface - Communicate in plain English
  • Multi-Interface - Web UI, CLI, and REST API
  • Secure Command Execution - Safe, sandboxed file operations
  • Modern UI - Beautiful dark theme React interface
  • Powered by Ollama - Uses local LLMs (llama3) for intelligent parsing
  • File Operations - Open, list, search, create, read, write, delete, copy, move, and rename files and folders
  • AI Content Generation - Generate file contents using LLM
  • Safe Shell Commands - Execute whitelisted system commands
  • CLI Interface - Use from terminal with natural language
  • 🐍 Python Mode - Intelligent Python code generation and execution for complex tasks
    • Automatic fallback when native commands can't handle requests
    • Handles calculations, data analysis, pattern extraction, and more
    • Sandboxed execution with security safeguards
    • Subtle UI indication with code visibility

πŸ“‹ Prerequisites

Before running LIA, ensure you have:

  1. Python 3.8 or higher (Python 3.11-3.13 recommended)

    python3 --version
  2. Node.js 16 or higher

    node --version
  3. Ollama installed and running

    Install Ollama from: https://ollama.ai

    Then pull a model (e.g., llama3):

    ollama pull llama3

    Start Ollama server:

    ollama serve

πŸš€ Quick Start

0. One-Command Setup (installs Ollama if missing)

git clone https://github.com/Yusiko99/LIA
cd LIA
chmod +x setup.sh
./setup.sh
cd frontend
npm install
./run.sh

What this does:

  • Checks for Ollama, installs it if missing, and starts the server
  • Guides you to select a local model, including hf.co/Yusiko/LIA (Llama 3.1 finetuned for LIA)
  • Optionally pre-downloads the chosen model so the first run is smooth


1. Clone or Navigate to Project

cd LIA

2. Setup Backend

cd backend

# Create virtual environment
python3 -m venv venv

# Activate virtual environment
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

3. Configure Backend (Optional)

Create a .env file in the backend directory:

OLLAMA_HOST=http://localhost:11434
OLLAMA_MODEL=llama3
HOST=0.0.0.0
PORT=8000
ALLOWED_DIRECTORIES=/home/$USERNAMEHERE$
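
The backend presumably reads these values in config.py; a minimal sketch of such a loader using only the standard library (the load_config function itself is illustrative, not LIA's actual code — only the variable names and defaults come from the .env example above):

```python
import os

def load_config() -> dict:
    """Read LIA settings from environment variables, with the documented defaults."""
    home = os.path.expanduser("~")
    return {
        "ollama_host": os.getenv("OLLAMA_HOST", "http://localhost:11434"),
        "ollama_model": os.getenv("OLLAMA_MODEL", "llama3"),
        "host": os.getenv("HOST", "0.0.0.0"),
        "port": int(os.getenv("PORT", "8000")),
        # colon-separated list of directories LIA may touch
        "allowed_directories": os.getenv("ALLOWED_DIRECTORIES", home).split(":"),
    }

config = load_config()
```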

4. Setup Frontend

cd ../frontend

# Install dependencies
npm install

5. Run the Application

Terminal 1 - Backend:

cd backend
source venv/bin/activate
python main.py

Terminal 2 - Frontend:

cd frontend
npm run dev

6. Open Browser

Navigate to: http://localhost:5173


Add to PATH (Optional)

# Add to ~/.bashrc or ~/.zshrc
export PATH="/home/$USERNAMEHERE$/LIA:$PATH"

# Then use from anywhere
cd ~/Documents
lia "List all PDF files"

πŸ“– Full CLI Guide: See CLI_GUIDE.md for complete documentation.


πŸ’¬ Example Commands

Open Files & Folders

"Open the Pictures folder"
"Open image.jpg"
"Open document.pdf"

List & Search Files

"List all PDF files in Downloads"
"Show me all images in Pictures"
"Search for Python files"

Create & Edit Files

"Create a file named notes.txt"
"Create report.txt and write 3 pages about AI"
"Write 'Hello World' to test.txt"

File Management

"Delete old.txt"
"Copy file.txt to backup.txt"
"Move image.jpg to Pictures"
"Rename old.txt to new.txt"

Information & System

"Get info about document.pdf"
"Show me my system information"
"Run ls"
"Run date"

🐍 Python Mode (Computational Tasks)

"Calculate the factorial of 10"
"Find the average of numbers in data.txt"
"Count how many times 'error' appears in system.log"
"Extract all email addresses from contacts.txt"
"List all files larger than 10MB in Downloads"

Python Mode automatically activates for complex tasks that native commands can't handle!
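
The sandboxing described above (isolated subprocess, 5-second timeout, restricted environment) could be sketched roughly like this; the function name and details are illustrative, not LIA's actual python_executor.py:

```python
import subprocess
import sys

def run_sandboxed(code: str, timeout: float = 5.0):
    """Run generated Python code in a separate process with a hard timeout.

    Returns (success, output). The -I flag starts Python in isolated mode,
    and an empty env keeps host variables from leaking into the child.
    """
    try:
        result = subprocess.run(
            [sys.executable, "-I", "-c", code],
            capture_output=True,
            text=True,
            timeout=timeout,
            env={},  # restricted environment
        )
        ok = result.returncode == 0
        return ok, result.stdout if ok else result.stderr
    except subprocess.TimeoutExpired:
        return False, f"Timed out after {timeout}s"
```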

πŸ—οΈ Architecture

Backend (Python/FastAPI)

backend/
β”œβ”€β”€ main.py              # FastAPI application entry point
β”œβ”€β”€ lia_cli.py           # Command-line interface
β”œβ”€β”€ config.py            # Configuration management
β”œβ”€β”€ models.py            # Pydantic models and schemas
β”œβ”€β”€ ollama_service.py    # Ollama LLM integration
β”œβ”€β”€ command_executor.py  # Safe command execution
β”œβ”€β”€ python_executor.py   # 🐍 Python Mode: Code generation & execution
└── requirements.txt     # Python dependencies

Key Components:

  • FastAPI Server - REST API for frontend communication
  • Ollama Service - Natural language parsing and content generation
  • Command Executor - Secure, sandboxed command execution
  • 🐍 Python Executor - Intelligent Python code generation and execution for complex tasks
  • CLI Interface - Terminal-based interaction
  • Path Validation - Ensures operations stay within allowed directories

Supported Operations:

  • Open files/folders (open_file, open_folder)
  • List & search files (list_files, search_files)
  • Create, read, write, delete files
  • Copy, move, rename files (copy_file, move_file, rename_file)
  • Get file/system info (get_info)
  • Execute safe shell commands (execute_command)

Frontend (React/Vite)

frontend/
β”œβ”€β”€ src/
β”‚   β”œβ”€β”€ App.jsx         # Main application component
β”‚   β”œβ”€β”€ App.css         # Component styles
β”‚   β”œβ”€β”€ index.css       # Global styles
β”‚   └── main.jsx        # React entry point
β”œβ”€β”€ index.html
β”œβ”€β”€ package.json
└── vite.config.js

Key Features:

  • Modern chat interface
  • Real-time command execution feedback
  • File list visualization
  • Connection status monitoring
  • Dark theme with smooth animations

πŸ”’ Security

LIA implements multiple security layers:

  1. Path Validation - All file paths are validated and normalized
  2. Directory Whitelisting - Operations restricted to allowed directories
  3. Command Sandboxing - Only safe, predefined operations allowed
  4. 🐍 Python Sandboxing - Python code runs in isolated subprocess with timeout (5s) and restricted environment
  5. Code Safety Checks - Validates generated Python code for dangerous patterns
  6. Local Execution - Everything runs on your machine, no data sent externally
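
Layers 1 and 2 amount to resolving every user-supplied path and checking it against the allowed directories. A minimal sketch of the idea (the function name is illustrative; LIA's actual validation lives in command_executor.py):

```python
from pathlib import Path

ALLOWED_DIRECTORIES = [Path.home()]  # mirrors the ALLOWED_DIRECTORIES setting

def validate_path(user_path: str) -> Path:
    """Resolve a path and ensure it stays inside an allowed directory.

    resolve() normalizes '..' segments and symlinks, so traversal tricks
    like '~/Docs/../../etc/passwd' are caught after normalization.
    """
    path = Path(user_path).expanduser().resolve()
    for base in ALLOWED_DIRECTORIES:
        try:
            path.relative_to(base.resolve())  # raises ValueError if outside
            return path
        except ValueError:
            continue
    raise PermissionError(f"{path} is outside the allowed directories")
```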

πŸ› οΈ Configuration

Backend Configuration

Edit backend/.env:

  • OLLAMA_HOST - Ollama server URL (default: http://localhost:11434)
  • OLLAMA_MODEL - Model to use (default: llama3). You can also set hf.co/Yusiko/LIA.
  • PORT - Backend server port (default: 8000)
  • ALLOWED_DIRECTORIES - Base directory for file operations

Frontend Configuration

Edit frontend/src/App.jsx and set API_URL to the backend address (default: http://localhost:8000) if the backend runs on a different host or port.

πŸ“Š API Endpoints

Health Check

GET /health

Process Chat Message

POST /api/chat
Body: { "message": "your command here" }

Generate Content

POST /api/generate
Body: { "message": "content prompt" }
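
With the backend running, the chat endpoint can be exercised from Python using only the standard library. A hedged sketch (the request body format comes from the endpoint description above; the helper names are illustrative, and send_chat requires the backend and Ollama to be up):

```python
import json
import urllib.request

API_URL = "http://localhost:8000"  # must match the backend HOST/PORT settings

def build_chat_request(message: str) -> urllib.request.Request:
    """Build a POST /api/chat request with the documented JSON body."""
    body = json.dumps({"message": message}).encode("utf-8")
    return urllib.request.Request(
        f"{API_URL}/api/chat",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def send_chat(message: str) -> dict:
    """Send a command to LIA and return the parsed JSON response."""
    with urllib.request.urlopen(build_chat_request(message), timeout=60) as resp:
        return json.loads(resp.read().decode("utf-8"))
```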

πŸ› Troubleshooting

Backend Won't Start

  1. Check if Ollama is running:

    curl http://localhost:11434/api/tags
  2. Verify Python dependencies:

    pip install -r requirements.txt

Frontend Connection Error

  1. Ensure backend is running on port 8000
  2. Check CORS settings in backend/main.py
  3. Verify frontend API_URL matches backend address

Ollama Not Responding

  1. Start Ollama:

    ollama serve
  2. Pull required model:

    ollama pull llama3

πŸ”§ Development

Adding New Commands

  1. Add command type to models.py:

    class CommandType(str, Enum):
        YOUR_COMMAND = "your_command"
  2. Implement handler in command_executor.py:

    async def _your_command(self, params: Dict[str, Any]) -> CommandResult:
        # Implementation
  3. Update LLM prompt in ollama_service.py

πŸ“ License

MIT License - Feel free to use and modify as needed.

🀝 Contributing

This is a personal project, but suggestions and improvements are welcome!

🎯 Roadmap

Future enhancements:

  • WebSocket for real-time updates
  • Streaming LLM responses
  • File preview in UI
  • Multi-model support (switch models on-the-fly)
  • Voice input/output
  • Command history persistence
  • Scheduled tasks
  • Plugin system
  • Docker deployment
  • Mobile app

πŸ’‘ Tips

  1. Model Selection - Larger models (like llama3:70b) provide better understanding but require more resources
  2. Performance - First command may be slower as Ollama loads the model
  3. Safety - Always review what LIA will do before confirming sensitive operations
  4. Custom Paths - Use absolute paths for operations outside your home directory

πŸ“ž Support

For issues or questions, open an issue on the GitHub repository.
