Agent Skills-compatible LLM wiki for Claude Code, Cursor, and Codex. Build a Karpathy-style knowledge base from raw sources, citations, and linting.
Karpathy’s LLM Wiki, 100% local with Ollama. Drop Markdown notes → AI extracts concepts → your Obsidian wiki auto-links and grows. Zero sharing. Your notes stay yours.
Memoriki - LLM Wiki + MemPalace. Personal knowledge base with real memory.
Compile documents into a living Obsidian wiki. Any AI agent. Based on Karpathy's LLM Wiki pattern.
Give every AI agent persistent memory of your team's knowledge. No vector DB, no RAG — just Git + BM25 + 114 tokens per session.
Muscle memory for Claude, OpenClaw, and AI agents. Zero-cost Hebbian memory system — learns which files matter through co-access patterns, predicts what you need next.
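The "Hebbian memory through co-access patterns" idea above can be sketched in a few lines: files opened in the same session strengthen a pairwise link, and prediction ranks neighbors by accumulated link strength. This is a minimal illustration of the general technique, not the repository's actual implementation; the class and method names are hypothetical.

```python
from collections import defaultdict
from itertools import combinations

class CoAccessMemory:
    """Toy Hebbian memory: files accessed together strengthen their link."""

    def __init__(self):
        # (file_a, file_b) -> link strength, pair stored in sorted order
        self.weights = defaultdict(float)

    def observe(self, files):
        # Hebbian update: every co-accessed pair gets a stronger link
        for a, b in combinations(sorted(set(files)), 2):
            self.weights[(a, b)] += 1.0

    def predict(self, file, top_k=3):
        # Rank the files most strongly linked to `file`
        neighbors = defaultdict(float)
        for (a, b), w in self.weights.items():
            if a == file:
                neighbors[b] += w
            elif b == file:
                neighbors[a] += w
        return sorted(neighbors, key=neighbors.get, reverse=True)[:top_k]

mem = CoAccessMemory()
mem.observe(["api.py", "models.py", "tests.py"])
mem.observe(["api.py", "models.py"])
mem.observe(["docs.md"])
```

After these observations, `mem.predict("api.py")` ranks `models.py` first (linked twice) ahead of `tests.py` (linked once), which is the "predicts what you need next" behavior in miniature.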
Andrej Karpathy's LLM Wiki pattern as a Claude Code plugin — turn accumulated sources into a self-maintaining, scalable markdown knowledge base.
Synapse Context Engine (SCE) is a brain-inspired hypergraph-based AI memory architecture for persistent context, coherent reasoning, and long-term memory, designed for transparency and safety.
Let your AI agent read, search, and build on your Obsidian notes. MCP + WebSocket + filesystem fallback. Inspired by Karpathy's LLM Wiki.
KnoLo Core is a local-first knowledge base engine built for small language models (SLMs). It packages your documents into a compact .knolo file and enables fully deterministic querying — no embeddings, no vector databases, no cloud services required. Designed for on-device and edge LLM deployments.
Code that gives any AI unlimited-context persistent memory. The included example is software that lets any AI read the Uniform Commercial Code of Michigan, a document of 220,000 tokens.
Middleware memory engine for LLMs — replaces linear chat history with three parallel structures (Entity Graph, Semantic Tree, Focus Buffer) to retrieve only the most relevant context within a token budget. Works as a Python library or MCP server for Claude Code, Copilot CLI, and other AI tools.
The core belief layer for AI agents. MnemeBrain Lite stores structured beliefs instead of text memories — enabling contradiction detection, belief revision, and explainable reasoning.
Git-backed multi-user wiki MCP server — long-term memory for any AI. Inspired by Karpathy's LLM Wiki pattern.
Drop files. Get a knowledge graph. No database required. Claude Code skill that compiles raw sources into interconnected Obsidian-compatible markdown wikis with auto-ingest, concept extraction, BM25 search, and drift detection.
Three-state persistent memory for AI agents. F1=0.945 without embeddings. Patent pending.
Pack 40+ files at 5 depth levels into any LLM context window. Keyword, semantic, and graph resolution. 100% recall at 1% of repo. Drop-in for any AI agent.
Self-maintaining engineering wiki. Feed it Requests for Comments (RFCs), Architecture Decision Records (ADRs), and post-incident reviews — Claude Code builds structured, interlinked pages. Ingest, query, lint. Based on Andrej Karpathy's LLM Wiki pattern.
The production-ready implementation of Karpathy's LLM Wiki pattern. Markdown + BM25 + MCP. No embeddings, no vector DB.
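Several of the tools above advertise "BM25 instead of embeddings." The core of that claim is just Okapi BM25 keyword scoring, which fits in a short self-contained function. This is a generic sketch of the standard formula, not code from any of the listed repositories; the parameter defaults k1=1.5, b=0.75 are the commonly cited ones.

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each document against the query with Okapi BM25."""
    tokenized = [d.lower().split() for d in docs]
    n = len(tokenized)
    avgdl = sum(len(d) for d in tokenized) / n
    # Document frequency: how many docs contain each term
    df = Counter()
    for d in tokenized:
        for t in set(d):
            df[t] += 1
    scores = []
    for d in tokenized:
        tf = Counter(d)
        score = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue
            idf = math.log((n - df[term] + 0.5) / (df[term] + 0.5) + 1)
            score += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(score)
    return scores

docs = [
    "bm25 keyword search over markdown notes",
    "vector embeddings and semantic search",
    "git backed wiki with markdown pages",
]
scores = bm25_scores("markdown bm25 search", docs)
best = max(range(len(docs)), key=scores.__getitem__)
```

Because scoring is pure term statistics, results are deterministic and need no model weights or vector index — which is why the "no embeddings, no vector DB" pattern works for markdown wikis of this size.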