Cosmo1121/plaud-IT
PDAN-GPT

A web and desktop AI agent management platform that allows you to create, configure, and orchestrate multiple AI agents using local Ollama models.

Features

  • Agent Management - Create, edit, clone, and delete AI agents with custom system prompts, models, parameters, and per-agent tool permissions (with select all/deselect all per skill group)
  • Conversation History - Persistent chat with configurable storage (Local Storage or Cloud), plus JSON export/import for backup & restore
  • Multi-Agent Orchestrator - Plan and execute complex multi-step tasks using multiple agents in sequence, with a pre-built demo workflow
  • Background Agents - Schedule and run agents autonomously on recurring tasks
  • Knowledge Base - Upload and manage documents for agent context and reference
  • Skill System - Modular, tiered skill architecture with manifest-driven registration, execution tiers (safe/medium/high), and per-skill enable/disable controls
  • Memory-Aware Skills - Tool results emit structured facts, constraints, and artifacts that persist per agent and are injected into future system prompts
  • Agent Intelligence - Track per-agent tool success rates, skill confidence scores, auto-disable failing tools below 30% success rate, and user-correction metrics
  • Tool Execution - Agents can use tools like filesystem operations, HTTP requests, SQL queries, and safe-tier formatters
  • Tool Test Bench - Manually trigger formatter tools from Admin → Test Tools to validate the full pipeline without an Ollama connection
  • Agent Workspaces - Per-agent virtual filesystem with configurable mounts (read-only/read-write), path scoping, and size limits
  • Ollama Integration - Connect to your local Ollama instance for LLM inference
  • Telemetry & Monitoring - Track agent performance, tool usage, and system health via raw telemetry and the Agent Intelligence dashboard
  • Admin Dashboard - Centralized administration with service health, model management, user management, skills, workspaces, memory viewer, tool testing, and all settings
  • Authentication - Optional multi-user mode with email/password auth, per-user data isolation, and admin roles (first user auto-promoted to admin)
  • Security - Conditional JWT validation on edge functions, auth-aware storage policies, rate limiting, path traversal protection, and dual-mode RLS (open in single-user mode, enforced in multi-user mode)
  • Multi-Language Support - Available in English, Portuguese, and German
  • Desktop & Web Modes - Run as an Electron desktop app (direct Ollama access) or in the browser (requires CORS)
  • Self-Hosting - Fully self-hostable via Docker Compose (see SELF_HOSTING.md)
  • Intranet / Offline Mode - Air-gapped deployment with no external network calls; connect Ollama on any network host

🧠 Agent Intelligence

The Intelligence system tracks how agents improve over time:

| Metric | Description |
| --- | --- |
| Tool Success Rate | Per-tool success/failure ratio for each agent |
| Skill Confidence | success_rate × (1 − correction_rate) — weighted score |
| Auto-Disable | Tools dropping below 30% success after 5+ calls are automatically disabled |
| User Corrections | Tracks when users correct agent output to refine confidence |

View these metrics at Telemetry → Intelligence after selecting an agent.
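As a rough sketch of how the numbers above combine — the names and thresholds below mirror this README's description, not the actual `agentIntelligence.ts` API:

```typescript
// Illustrative sketch of the confidence and auto-disable logic described above.
interface ToolStats {
  calls: number;
  successes: number;
  corrections: number; // times the user corrected output produced via this tool
}

// Skill confidence = success_rate × (1 − correction_rate)
function skillConfidence(s: ToolStats): number {
  if (s.calls === 0) return 0;
  const successRate = s.successes / s.calls;
  const correctionRate = s.corrections / s.calls;
  return successRate * (1 - correctionRate);
}

// Auto-disable: below 30% success after 5 or more calls
function shouldAutoDisable(s: ToolStats): boolean {
  return s.calls >= 5 && s.successes / s.calls < 0.3;
}
```

For example, a tool with 8 successes in 10 calls and 1 user correction scores 0.8 × 0.9 = 0.72 confidence.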


🔧 Skill System

Skills are modular tool bundles registered via JSON manifests:

src/skills/
├── filesystem/       # File read/write/list/delete (medium tier)
├── shell/            # Command execution (high tier)
├── http/             # HTTP requests (high tier, requires approval)
├── database/         # SQL queries (high tier)
├── formatter/        # JSON/Markdown formatting (safe tier, emits memory)
├── process/          # Process listing & system stats
├── search_local/     # File and content search
├── clipboard/        # Clipboard read/write
├── ui_notify/        # Notifications, toasts, modal prompts
├── git/              # Git status, diff, log, commit, branch
├── package_manager/  # npm/pip install, dependency listing
├── task_runner/      # Define and run tasks
├── device/           # Device, GPU, audio, camera info
├── time/             # Current time, scheduling
├── policy/           # Policy listing, checks, denial explanations
└── registry.ts       # Central skill registry with enable/disable
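A manifest-driven registration might look roughly like this — the field names are an illustrative guess based on the descriptions in this README, not the exact schema in `src/types/skill.ts`:

```typescript
// Hypothetical shape of a manifest-registered skill with per-skill enable/disable.
type Tier = "safe" | "medium" | "high";

interface SkillManifest {
  id: string;
  tier: Tier;
  tools: { name: string; requiresApproval: boolean }[];
  enabled: boolean;
}

const formatterSkill: SkillManifest = {
  id: "formatter",
  tier: "safe",
  tools: [
    { name: "format_json", requiresApproval: false },
    { name: "format_markdown", requiresApproval: false },
    { name: "validate_json", requiresApproval: false },
  ],
  enabled: true,
};

// A central registry keyed by skill id, as registry.ts suggests.
const registry = new Map<string, SkillManifest>([[formatterSkill.id, formatterSkill]]);

function setSkillEnabled(id: string, enabled: boolean): void {
  const skill = registry.get(id);
  if (skill) skill.enabled = enabled;
}
```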

Execution Tiers

| Tier | Where it runs | Example |
| --- | --- | --- |
| Safe | In-browser (no side effects) | format_json, validate_json |
| Medium | Electron main process or edge function | filesystem_read |
| High | Edge function / isolated service | run_command, http_get_localhost |
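At execution time the tier translates into a routing decision. A hedged sketch — the function and target names are invented, the real dispatch lives in the tool-execution path:

```typescript
type Tier = "safe" | "medium" | "high";

// Where each tier is allowed to execute, per the table above.
function executionTarget(tier: Tier, isElectron: boolean): string {
  switch (tier) {
    case "safe":
      return "browser"; // no side effects, runs in-page
    case "medium":
      return isElectron ? "electron-main" : "edge-function";
    case "high":
      return "edge-function"; // isolated service, approval-gated
  }
}
```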

Memory-Aware Results

Skills can emit structured memory that persists per agent:

{
  success: true,
  artifacts: ["report.md"],
  memory: {
    facts: ["Report generated on 2026-02-10"],
    constraints: ["Do not overwrite report.md without approval"]
  }
}

Facts and constraints are automatically injected into the agent's system prompt for future conversations.
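A minimal sketch of that injection step, assuming a simple append-to-prompt strategy — the function and section labels are assumptions, not the real `skillMemoryStore.ts` API:

```typescript
interface SkillMemory {
  facts: string[];
  constraints: string[];
}

// Append persisted facts and constraints to the agent's base system prompt.
function buildSystemPrompt(basePrompt: string, memory: SkillMemory): string {
  const sections: string[] = [basePrompt];
  if (memory.facts.length > 0) {
    sections.push("Known facts:\n" + memory.facts.map((f) => `- ${f}`).join("\n"));
  }
  if (memory.constraints.length > 0) {
    sections.push("Constraints:\n" + memory.constraints.map((c) => `- ${c}`).join("\n"));
  }
  return sections.join("\n\n");
}
```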


🚀 Quick Start

1. Prerequisites

  • Ollama installed locally
  • At least one model pulled (e.g., ollama pull llama3.2)

2. Enable Ollama CORS

⚠️ IMPORTANT: When running in the browser (web mode), you must start Ollama with CORS enabled:

macOS / Linux:

OLLAMA_ORIGINS=* ollama serve

Windows (PowerShell):

$env:OLLAMA_ORIGINS="*"; ollama serve

Windows (CMD):

set OLLAMA_ORIGINS=*
ollama serve

3. Connect in Admin Settings

  1. Open the app and navigate to Admin → Settings
  2. Verify the Ollama Base URL is http://127.0.0.1:11434
  3. Click Test to verify connection
  4. Select your default model from the dropdown
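Under the hood, a connection test like this amounts to hitting Ollama's GET /api/tags endpoint and listing the available model names. A sketch with error handling trimmed:

```typescript
// Relevant part of Ollama's GET /api/tags response shape.
interface TagsResponse {
  models: { name: string }[];
}

function modelNames(tags: TagsResponse): string[] {
  return tags.models.map((m) => m.name);
}

async function testOllamaConnection(baseUrl: string): Promise<string[]> {
  const res = await fetch(`${baseUrl}/api/tags`);
  if (!res.ok) throw new Error(`Ollama unreachable: HTTP ${res.status}`);
  return modelNames((await res.json()) as TagsResponse);
}
```

A non-empty result confirms both connectivity and that at least one model has been pulled.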

🛠️ Tool Execution

Demo Mode (Current)

The tool execution system currently runs in demo mode with simulated responses. This means:

| Tool | Demo Behavior |
| --- | --- |
| filesystem_read | Returns simulated file content |
| filesystem_write | Shows approval dialog, returns success message |
| run_command | Shows approval dialog, returns simulated output |
| http_get_localhost | Shows approval dialog, returns simulated response |
| sqlite_query | Returns sample data rows |
| open_url | Shows approval dialog, returns success message |
| format_json | Formats JSON in-browser (safe tier, fully functional) |
| format_markdown | Normalises Markdown in-browser (safe tier, fully functional) |
| validate_json | Validates JSON in-browser (safe tier, fully functional) |

Tools marked as requires_approval will prompt for user confirmation before "executing."
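The safe-tier formatters are straightforward. Here is a sketch of what a fully in-browser format_json handler can look like — the actual handler lives in the formatter skill and may differ in naming and result shape:

```typescript
interface FormatResult {
  success: boolean;
  output?: string;
  error?: string;
}

// Pretty-print JSON entirely in the browser: no side effects, safe tier.
function formatJson(input: string): FormatResult {
  try {
    return { success: true, output: JSON.stringify(JSON.parse(input), null, 2) };
  } catch (e) {
    return { success: false, error: (e as Error).message };
  }
}
```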

Tool Test Bench

Navigate to Admin → Test Tools to manually trigger formatter tools without needing an Ollama connection. This validates the full pipeline: handler → memory store → intelligence tracking.

Connecting Real Tool Functions

To implement real tool execution, modify the Edge Functions in supabase/functions/:

Example: Real Filesystem Read

Edit supabase/functions/execute-tool/index.ts:

case 'filesystem_read': {
  // Option 1: Read from Supabase Storage
  const { data, error } = await supabaseClient.storage
    .from('workspace')
    .download(args.path as string);

  if (error) throw error;
  const content = await data.text();

  result = { success: true, content };
  break;
}

Example: Real HTTP Request

case 'http_get_localhost': {
  // Only allow localhost URLs for security.
  // Parse the URL so a host like "localhost.evil.com" cannot slip
  // past a naive startsWith() prefix check.
  const url = args.url as string;
  const { hostname } = new URL(url);
  if (hostname !== 'localhost' && hostname !== '127.0.0.1') {
    throw new Error('Only localhost URLs are allowed');
  }

  const response = await fetch(url);
  const responseText = await response.text();

  result = {
    success: true,
    status: response.status,
    body: responseText,
  };
  break;
}

Example: Real Command Execution (Deno)

case 'run_command': {
  // ⚠️ SECURITY WARNING: Command execution is dangerous
  // Only enable this with proper sandboxing and validation
  const command = new Deno.Command('sh', {
    args: ['-c', args.command as string],
    cwd: '/safe/sandbox/directory',
  });

  const { success, stdout, stderr } = await command.output();
  const decoder = new TextDecoder();

  result = {
    success,
    stdout: decoder.decode(stdout),
    stderr: decoder.decode(stderr),
  };
  break;
}

Security Features

The platform includes layered security controls:

  1. Dual-mode RLS - All database tables use is_auth_enabled() checks: open access in single-user mode, strict per-user isolation in multi-user mode
  2. Conditional edge function auth - JWT validation is enforced on execute-tool and approve-tool when authentication is enabled
  3. Auth-aware storage policies - Workspace bucket access requires authentication when multi-user mode is active
  4. Input validation - Path traversal protection, URL allowlisting (localhost only for HTTP tool), file size limits
  5. Rate limiting - Per-IP request throttling on edge functions (100 req/min)
  6. Tool approval workflow - Risky operations require explicit user approval with time-limited, single-use tokens
  7. Audit logging - All tool executions recorded in tool_logs table
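For item 4, a path traversal guard typically normalizes the requested path and verifies it stays inside the workspace root. A minimal sketch, not the project's exact implementation:

```typescript
// Reject paths that escape the workspace root (e.g. "../../etc/passwd").
function resolveWithinWorkspace(root: string, requested: string): string {
  // Normalize "." and ".." segments without touching the real filesystem.
  const parts: string[] = [];
  for (const seg of requested.split("/")) {
    if (seg === "" || seg === ".") continue;
    if (seg === "..") {
      if (parts.length === 0) throw new Error("Path escapes workspace");
      parts.pop();
    } else {
      parts.push(seg);
    }
  }
  return `${root}/${parts.join("/")}`;
}
```

In a Deno edge function the same idea can be expressed with the standard path utilities; the key property is that the resolved path is checked against the root, not the raw string.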

📁 Project Structure

src/
├── components/
│   ├── admin/         # Admin dashboard panels (auth toggle, config, models, users, settings, memory, test tools)
│   ├── auth/          # AuthGuard component
│   ├── background-agents/ # Background agent management
│   ├── chat/          # Chat UI components
│   ├── knowledge/     # Knowledge base management
│   ├── layout/        # App layout and sidebar
│   ├── orchestrator/  # Multi-agent orchestration UI + markdown renderer
│   ├── settings/      # File browser component
│   ├── telemetry/     # Agent Intelligence & raw telemetry
│   └── ui/            # shadcn/ui components
├── hooks/
│   ├── useAgents.ts        # Agent CRUD operations
│   ├── useBackgroundAgents.ts # Background agent management
│   ├── useConversations.ts # Conversation management
│   ├── useKnowledgeBase.ts # Knowledge base operations
│   ├── useOllama.ts        # Ollama API integration
│   ├── useOrchestrator.ts  # Multi-agent orchestration
│   ├── useSettings.ts      # App settings
│   ├── useSystemHealth.ts  # System health monitoring
│   ├── useTelemetry.ts     # Telemetry data
│   └── useTools.ts         # Tool execution with intelligence tracking
├── lib/
│   ├── agentIntelligence.ts  # Per-agent tool success/confidence/auto-disable
│   ├── orchestrator/
│   │   └── demoSeed.ts       # Pre-built demo workflow for orchestrator
│   ├── skillMemoryStore.ts   # Persistent fact/constraint/artifact memory per agent
│   ├── workspace.ts          # Agent workspace configuration & path resolution
│   └── tools.ts              # Tool definitions
├── contexts/
│   └── AuthContext.tsx        # Auth state provider
├── skills/
│   ├── filesystem/     # File operations (medium tier)
│   ├── shell/          # Command execution (high tier)
│   ├── http/           # HTTP requests (high tier)
│   ├── database/       # SQL queries (high tier)
│   ├── formatter/      # JSON/Markdown formatting (safe tier, memory-aware)
│   └── registry.ts     # Central skill registry with enable/disable
├── types/
│   ├── agent.ts            # Agent/message types
│   ├── knowledge.ts        # Knowledge base types
│   ├── orchestrator.ts     # Orchestrator types
│   ├── skill.ts            # Skill manifest, SkillResult, SkillMemory types
│   ├── system-health.ts    # System health types
│   └── telemetry.ts        # Telemetry types
├── i18n/                   # Internationalization (en, pt, de)
├── pages/
│   ├── Index.tsx              # Chat page
│   ├── AgentsPage.tsx         # Agent management
│   ├── AdminDashboardPage.tsx # Admin dashboard (overview, models, users, skills, workspaces, memory, test tools, settings)
│   ├── BackgroundAgentsPage.tsx # Background agents
│   ├── KnowledgeBasePage.tsx  # Knowledge base
│   ├── OrchestratorPage.tsx   # Orchestrator UI
│   └── TelemetryPage.tsx      # Agent Intelligence & raw telemetry
└── electron/               # Electron desktop app (main + preload)

supabase/
└── functions/
    ├── execute-tool/       # Tool execution endpoint
    └── approve-tool/       # Tool approval endpoint

🔧 Technology Stack

  • Frontend: React, TypeScript, Vite, Tailwind CSS, shadcn/ui
  • Backend: Supabase (PostgreSQL, Edge Functions, Storage)
  • AI: Ollama (local LLM inference)
  • State: TanStack Query
  • Desktop: Electron
  • i18n: i18next

🧪 Development

# Install dependencies
npm install

# Start development server
npm run dev

# Run tests
npm run test

📝 Environment Variables

The app uses Supabase environment variables which are automatically configured:

  • VITE_SUPABASE_URL - Supabase project URL
  • VITE_SUPABASE_PUBLISHABLE_KEY - Supabase anon key
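For self-hosted setups these typically live in a .env file at the project root. The values below are placeholders, not real credentials:

```shell
# .env — placeholder values, substitute your own project's
VITE_SUPABASE_URL=https://your-project.supabase.co
VITE_SUPABASE_PUBLISHABLE_KEY=your-anon-key
```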

🚢 Deployment

Web

Click Share → Publish in Lovable to deploy your app.

For custom domains, navigate to Project → Settings → Domains.

Self-Hosted

See SELF_HOSTING.md for Docker Compose deployment with full Supabase stack.

Intranet / Air-Gapped Deployment

For fully offline or intranet-only environments:

  1. Deploy via Docker Compose (see SELF_HOSTING.md) — runs the entire backend stack locally (database, storage, auth, edge functions)
  2. Navigate to Admin → Settings and enable Intranet Mode
  3. This allows Ollama URLs on any network host (e.g. http://ollama-server.local:11434), not just localhost
  4. Set Conversation Storage to Cloud (points to your self-hosted database) or Local Storage (browser-only)

With Intranet Mode + self-hosted backend + local Ollama, the application makes zero external network calls.

Desktop (Electron)

See electron-builder.json for packaging configuration.


🔔 Important Notes

  • Redeploy Edge Function after changes: If you edit the edge function at supabase/functions/get-settings/index.ts, redeploy the function to your Supabase project so production uses the updated sanitization and intranet_mode behavior.
  • Run security audit after install: After running npm install, run npm audit and npm audit fix to review and address reported vulnerabilities.
  • Quick dev commands: Install dependencies and run the app locally:

    npm install
    npm run dev
    # App will be served at http://localhost:8080/
    npm run lint
    npm run test
  • Production note: When promoting fixes (especially to serverless functions), redeploy the Supabase Edge Functions and verify get-settings behavior so toggles (intranet/auth) behave correctly in production.

🖥️ Desktop Functions Setting

  • The Admin → Settings panel includes a Desktop Functions toggle. When enabled, desktop (Electron) builds route Ollama requests and tool execution through the Electron main process, which bypasses browser CORS and allows local filesystem and command access.
  • You can toggle this from the web UI to pre-configure the desktop behavior; the setting only takes effect when running the Electron desktop app.
  • Recommended: enable this only for trusted desktop deployments. Keep intranet_mode and other security settings configured appropriately when enabling local tool execution.

📚 Resources

  • Ollama Documentation
  • Supabase Documentation
  • Lovable Documentation
  • GPU Acceleration Guide
  • Self-Hosting Guide

About

Desktop or Web UI for local LLM Agent Studio
