
Lollms VS Coder: Sovereign Engineering for Everyone


Lollms VS Coder is your local-first AI partner. Whether you are a Senior Architect who wants to validate every byte of memory or a Vibe Coder who wants to summon a digital genie to manifest an app from pure thought, Lollms provides the tools to build without boundaries.

Now available in English, French, Spanish, German, Arabic, and Chinese (Simplified)! 🌍


πŸ—οΈ Choose Your Pathway

Lollms is designed to adapt to your style, not force you into ours.

πŸ“ The Architect (Control & Precision)

For those who know exactly what they want and need to understand the why.

  • Surgical HUD: Analyze individual functions for architectural risks.
  • Context Pinning: Tell the AI exactly which files to look at and which to ignore.
  • Guardian Protocol: Watch the AI work and verify every self-healing step it takes.

🧞 The Genie (Vibe Coding & Autonomy)

For those who want to build at the speed of thought without being a "specialist."

  • Summon the Agent: Describe a goal in plain English ("Build me a weather dashboard with a retro UI") and watch the Genie create the files, install dependencies, and run the server.
  • Autonomous Repair: If the Genie makes a mistake, it sees the error and fixes it before you even notice.
  • Zero-Config Environments: Let the AI manage your Python virtual environments and MCP tools.

βš”οΈ The 2026 Landscape: Lollms vs. The World

In an era of generic AI "Copilots," Lollms provides Sovereignty, Structural Intelligence, and Verifiable Autonomy.

| Feature | Lollms VS Coder | Cursor / Windsurf | GitHub Copilot |
|---|---|---|---|
| Philosophy | Operator (Verify & Fix) | Assistant (Suggest) | Assistant (Suggest) |
| Compute | 🏠/☁️ Hybrid (Ollama/Groq) | ☁️ Cloud (MANDATORY) | ☁️ Cloud (MANDATORY) |
| Vision | 📊 Interactive Call Graphs | ❌ Text only | ❌ Text only |
| Data Privacy | 🔒 Zero Telemetry | ⚠️ "Privacy Mode" opt-in | ❌ High Telemetry |
| Protocol | 🛡️ Guardian (Self-Healing) | ❌ Manual Fixes | ❌ Manual Fixes |
| Memory | 🧠 Infinite Project DNA | ⚠️ RAG Indexing only | ⚠️ Org-level only |

🚀 The Lollms Edge: Why choose us?

1. 🎭 Expert Personalities (Your Team of Specialists)

Lollms doesn't just "chat." It allows you to summon a specific Expert Persona.

  • Vibe Coder? Summon the Pragmatist to get things running fast.
  • Security Pro? Switch to the Security Auditor for a deep vulnerability scan.
  • Embedded Expert? Invoke the STM32 Specialist to handle register-level logic.

You can even build your own custom personas to fit your unique project vibe.

2. 💎 Modular Skills (The Source of Truth)

Stop relying on the AI's "hallucination-prone" general knowledge. Lollms Skills are atomic, verified knowledge capsules.

  • Diamond Protocol: When a skill is active (e.g., "FastAPI 2026 standards"), the AI prioritizes the skill's documentation over its internal training data.
  • Global & Local: Keep project-specific protocols (e.g., "Our Team's naming convention") in your local library, and share core coding patterns across all your projects via the global library.
  • Agent Integration: When the Lead Architect delegates tasks, it can explicitly "equip" sub-agents with specific skills to ensure perfect compliance.

3. 🛡️ The "Guardian" Protocol (Self-Healing Code)

Lollms doesn't just write code and hope it works. When the Architect applies changes, the Guardian immediately scans for functional errors using the VS Code engine. If an error is found, the AI spawns a Repair Mission autonomously, fixing logic or "ghost" imports before you even review the result.

4. 📊 Structural Intelligence (The HUD & The Graph)

Stop navigating blindly. Lollms provides two layers of structural vision:

  • The Visual Graph: A full-project map with SPARQL query support to analyze deep dependencies.
  • The Surgical HUD: A high-speed, inline analyzer. Click the ✨ Lollms HUD button above any function to instantly see its architectural risks and potential bugs without leaving the code.
  • SPARQL Queries: Run queries like SELECT ?x WHERE { ?x imports 'auth.ts' } to understand impact.
  • Isolate View: One click to hide everything except the file you're refactoring and its direct neighbors.
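To make the graph-query idea concrete, here is a toy pure-Python triple store answering the "who imports auth.ts" question from the SPARQL example above. This is purely illustrative: the file names and the extension's actual graph schema and query engine are assumptions, not its real internals.

```python
# Toy triple store of (subject, predicate, object) facts about a
# hypothetical project dependency graph.
TRIPLES = {
    ("login.ts", "imports", "auth.ts"),
    ("profile.ts", "imports", "auth.ts"),
    ("auth.ts", "imports", "crypto.ts"),
}

def who(predicate: str, obj: str) -> set[str]:
    """Rough equivalent of SELECT ?x WHERE { ?x <predicate> <obj> }."""
    return {s for (s, p, o) in TRIPLES if p == predicate and o == obj}

# Which files would be impacted by a change to auth.ts?
print(sorted(who("imports", "auth.ts")))  # ['login.ts', 'profile.ts']
```

The same pattern generalizes to any predicate (calls, inherits, exports), which is what makes graph queries useful for impact analysis.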

5. 🎯 Mission Briefing (Prime Directive)

Standard AI chat suffers from "context drift." Lollms introduces the Mission Briefing. Pin specific constraints (e.g., "Must use Python 3.12, No external libraries") to a dedicated briefing zone. These rules are treated as the Prime Directive, remaining the AI's highest priority regardless of how long the chat becomes.

6. 🧬 Project DNA (Automated Standards)

Lollms can extract a "DNA" profile of your project (naming conventions, folder patterns, tech stack). It saves this to Project Memory, ensuring the AI understands your project's unique identity across all discussions.
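As an illustration of what such a profile could capture (the field names below are hypothetical, not the extension's actual schema):

```json
{
  "naming": { "functions": "snake_case", "classes": "PascalCase" },
  "layout": { "source": "src/", "tests": "tests/" },
  "stack": ["Python 3.12", "FastAPI", "SQLite"]
}
```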


🌟 Key Features

| Feature | Description |
|---|---|
| 🎭 Expert Personas | More than ten built-in professional roles (Architect, Embedded Expert, Security Lead, etc.) that change the AI's logic, tone, and technical priorities. |
| 💎 Skills Library | A modular library of verified code patterns, API docs, and standards. Acts as a "Source of Truth" that overrides generic model behavior. |
| 🤖 Autonomous Agent | Give the AI a complex objective and it will generate and execute a multi-step plan: creating files, writing code, running commands, and self-correcting. |
| 📊 Architecture Graph | Visualize your project structure with interactive call graphs and class diagrams. Supports SPARQL queries for deep dependency analysis. |
| 🛡️ Guardian Audit | Background self-healing loop. The AI automatically detects and repairs linting or import errors in generated code before finalizing tasks. |
| ⚡ Quick Edit Companion | A lightweight, floating window for fast code edits, explanations, or questions without leaving your current context (Ctrl+Shift+L). |
| 🧠 Smart Context | A sidebar file tree lets you precisely control which files the AI can "see." Includes a 🔍 Definitions-Only mode to save tokens while keeping API visibility. |
| 📝 Smart Edits | Apply AI-generated code directly to your files with a single click, supporting both full-file updates and Aider-style SEARCH/REPLACE patching. |
| 🕵️ Commit Inspector | Analyze git commits for security vulnerabilities, bugs, and code quality issues with a single click. |
| 📓 Jupyter Integration | Enhance your data science workflow with tools to generate, explain, visualize, and fix notebook cells. |

🚀 Installation & Setup

  1. Install lollms-vs-coder from the Visual Studio Marketplace.
  2. Open the Lollms sidebar in VS Code (click the Lollms icon in the activity bar).
  3. In the Navigation view, click the Settings item to open the configuration panel.
  4. Enter your Lollms API Host (e.g., http://localhost:9642 or your local Ollama address) and select your desired model.

💬 Standard Discussions

The Lollms Chat is your central hub for interacting with the AI.

  • Start a Chat: Click the + icon in the Discussions sidebar view.
  • Manage Context: Use the AI Context Files view to control what the AI sees:
    • ✅ Included: The AI reads the full file content.
    • 🔍 Definitions: The AI sees the file structure (classes/functions) but not implementation details.
    • 📄 Tree-Only: The AI sees the file path but not the content (saves tokens).
    • 🚫 Excluded: The file is hidden from the AI.
  • Mission Briefing: Use the 🛡️ Briefing button to set task-specific constraints that stay at the top of the AI's memory.
  • Attach Files: Click the paperclip icon or drag & drop images and documents directly into the chat area.
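To see what a definitions-only view buys you, here is a minimal sketch of the idea using Python's standard `ast` module. The extension's own extractor (and its handling of other languages) may work differently; this just shows how a file can be reduced to its class and function signatures to save tokens.

```python
import ast

def definitions_only(source: str) -> list[str]:
    """Reduce Python source to class/function signatures (including
    methods), mimicking a 'definitions-only' context view."""
    defs = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            defs.append(f"def {node.name}({args})")
        elif isinstance(node, ast.ClassDef):
            defs.append(f"class {node.name}")
    return defs

source = '''
class Greeter:
    def greet(self, name):
        return "hi " + name

def main():
    pass
'''
print(definitions_only(source))
```

The AI still sees every API entry point, but none of the implementation bodies.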

🛠️ Discussion Tools & Thinking Mode

🧞 Summoning the Genie (Agent Mode)

Simply toggle the Agent badge in your chat. The AI is no longer just talking to you; it is an operator with hands. It can:

  • Search the Web for the latest documentation.
  • Read & Write files in your workspace.
  • Execute Commands in your terminal to test its own work.

🧠 Activate Thinking Mode

For complex tasks that require deep logic, enable Thinking Mode.

  1. Open Discussion Settings (⚙️).
  2. Select your reasoning budget and watch the AI contemplate the best path before writing a single line.

🧬 Evolving Intelligence (The Memory Layer)

Lollms introduces a human-like memory architecture to ensure the Genie gets smarter with every interaction without bloating your context.

  • Dual-Layer Context:
    • Limbic System: Recent successes and critical failures are injected directly into every prompt for immediate recall.
    • Neocortex: An organized hierarchical library of thousands of project facts. The Genie can "browse" its own deep memory if it hits a wall.
  • Importance Weighting: High-use facts grow stronger. Obsolete information "decays" and moves to deep storage, keeping the context lean and focused.
  • Memory-to-Skill Promotion: Found a technical truth that applies to multiple projects? Use the Promote to Skill button in the Memory Manager to turn a project fact into a permanent Global Skill.
  • Error Awareness: When the Genie makes a mistake, it records the failure. If it tries to do the same thing again, a "Reflexive Guard" blocks the action and forces a new strategy.
  • Success Reflection: Every time the Genie fixes a hard bug, it reflects on why the fix worked. It then saves this "Technical Lesson" to your Project Memory automatically.
  • Internal REPL (The Brain): The Genie has access to an internal Python lab where it can search through its own Skills and Memories using code to find the perfect solution for your current task.
  • Project Evolution: Over time, your digital Genie becomes a specialist in your specific codebase, avoiding your project's unique pitfalls.
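The importance-weighting idea above can be sketched as a simple usage-boosted, exponentially decaying score. This is purely illustrative; Lollms's actual heuristics, half-life, and storage thresholds are internal and not documented here.

```python
import math

def importance(use_count: int, days_since_last_use: float,
               half_life_days: float = 30.0) -> float:
    """Toy importance score: each use boosts the fact, and the score
    halves every `half_life_days` of disuse."""
    decay = math.exp(-math.log(2) * days_since_last_use / half_life_days)
    return (1 + use_count) * decay

# A recently used fact outranks an equally popular but stale one,
# so the stale one is the candidate for demotion to deep storage.
hot = importance(use_count=10, days_since_last_use=1)
stale = importance(use_count=10, days_since_last_use=120)
assert hot > stale
```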

🌍 Web Search & Research Agent

Toggle Web Search in the settings to allow the AI to browse the internet. When enabled, Lollms spawns a specialized Research Librarian to verify facts or read library documentation.


πŸ“ Applying Code Changes

When the AI generates code, it provides interactive buttons to apply the changes directly to your project.

1. Full File Updates

If the AI generates a File: path/to/file.ext block:

  • Click ⚙️ Apply to File.
  • A diff view will open, allowing you to review the changes before saving.

2. Diff / Patching (Aider Format)

Lollms excels at SEARCH/REPLACE blocks. This is the safest way to modify existing files without losing your local changes.

  • Click ⚙️ Apply Patch.
  • The Guardian Protocol will automatically verify the change doesn't introduce syntax errors.
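For reference, an Aider-style block pairs an exact snippet to find with its replacement. A minimal sketch of applying one follows; the extension's real patcher is assumed to handle edge cases (whitespace drift, multiple blocks, fuzzy matching) that this toy version does not.

```python
PATCH = """\
<<<<<<< SEARCH
def greet(name):
    return "Hello " + name
=======
def greet(name: str) -> str:
    return f"Hello, {name}!"
>>>>>>> REPLACE
"""

def apply_search_replace(original: str, patch: str) -> str:
    """Apply a single SEARCH/REPLACE block to a file's contents.

    Raises ValueError if the SEARCH text is not found verbatim,
    rather than guessing where the edit belongs."""
    _, _, rest = patch.partition("<<<<<<< SEARCH\n")
    search, _, tail = rest.partition("\n=======\n")
    replace, _, _ = tail.partition("\n>>>>>>> REPLACE")
    if search not in original:
        raise ValueError("SEARCH block not found in file")
    return original.replace(search, replace, 1)

original = 'def greet(name):\n    return "Hello " + name\n'
print(apply_search_replace(original, PATCH))
```

Because the SEARCH text must match verbatim, a stale or mis-targeted patch fails loudly instead of silently corrupting the file, which is why this format is safer than full-file rewrites.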

3. Insert / Replace

  • Insert: Inserts the code block at your current cursor position in the active editor.
  • Replace: Replaces your current selection with the generated code.

⚡ The Companion Panel (Quick Edit)

Press Ctrl+Shift+L (or Cmd+Shift+L on Mac) to open the Companion Panel. This is a persistent, floating window designed for rapid iteration.

  • Context Aware: Automatically tracks your active editor selection.
  • Attach/Detach: Pin the companion to a specific file or selection to keep the context fixed while you navigate other files.

📓 Jupyter Notebook Integration

Lollms VS Coder supercharges your .ipynb notebooks with context-aware AI tools found in the cell toolbar:

  • Educative Notebook: Generates a comprehensive, step-by-step notebook on a topic.
  • Enhance: Refactors and improves the code in the current cell.
  • Visualize: Generates code to visualize the data in the cell's output.
  • Fix Error: If a cell execution fails, a "Fix with Lollms" button appears to analyze and fix the error.

🤖 Agent Tools

When in Agent Mode, the AI can autonomously use tools to complete complex objectives:

| Tool Category | Tools |
|---|---|
| File Operations | `read_file`, `generate_code`, `delete_file`, `move_file` |
| Execution | `execute_command`, `run_file`, `run_tests_and_fix` |
| Architecture | `update_code_graph`, `read_code_graph` |
| Knowledge | `store_knowledge` (RLM), `extract_project_dna` |
| Research | `search_web`, `search_arxiv`, `search_wikipedia`, `scrape_website` |
| Communication | `moltbook_action` (Agent Social Network), `submit_response` |
| Planning | `edit_plan` (Self-Correction), `wait` |

Contributing

Contributions, issues, and feature requests are welcome! Feel free to check the issues page.


License

This project is licensed under the Apache-2.0 License - see the LICENSE file for details.

About

AI-powered Visual Studio Code extension using Lollms's OpenAI-compatible interface for smart code assistance, debugging, real-time enhancement, context management, and automated commit message generation with full versioning support
