Become a sponsor to Christopher Karani
On-device AI shouldn't be this hard.
I'm Christopher — an iOS engineer with 12+ years of Swift experience, building open source tools that make on-device AI practical for Apple developers.
What I maintain:
Wax — Metal-accelerated RAG & memory layer for on-device AI agents
Swarm — AI agent orchestration framework for Swift
Hive — Deterministic graph runtime for agent workflows
Conduit — Unified LLM inference SDK across providers
Espresso — Swift inference runtime that bypasses Core ML to dispatch directly to Apple's Neural Engine (4.76x speedup)
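As a taste of what a deterministic graph runtime for agent workflows involves, here is an illustrative toy in plain Swift: nodes declare dependencies and run in a stable topological order, so the same graph always executes the same way. All names below are made up for the sketch; this is not Hive's actual API.

```swift
// Toy deterministic workflow graph: each node names its dependencies,
// and nodes run in a stable topological order over a shared state dict.
// All identifiers here are hypothetical; this is NOT Hive's real API.
struct Node {
    let name: String
    let dependsOn: [String]
    let run: (inout [String: String]) -> Void
}

func execute(_ nodes: [Node]) -> [String] {
    var done = Set<String>()
    var order: [String] = []
    var state: [String: String] = [:]
    var remaining = nodes
    while !remaining.isEmpty {
        // Always pick the first ready node in declaration order,
        // which keeps execution deterministic across runs.
        guard let idx = remaining.firstIndex(where: { $0.dependsOn.allSatisfy(done.contains) })
        else { fatalError("cycle detected in workflow graph") }
        let node = remaining.remove(at: idx)
        node.run(&state)
        done.insert(node.name)
        order.append(node.name)
    }
    return order
}

let order = execute([
    Node(name: "summarize", dependsOn: ["fetch"]) { $0["summary"] = "short: " + ($0["doc"] ?? "") },
    Node(name: "fetch", dependsOn: []) { $0["doc"] = "raw text" },
])
print(order) // "fetch" runs before "summarize" despite being declared second
```

Determinism here comes from resolving ready nodes in declaration order rather than, say, iterating an unordered set.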
Why sponsor?
Every dollar goes directly toward keeping these libraries maintained, documented, and pushed forward. I'm a solo founder doing this full-time — your sponsorship means I can keep shipping instead of context-switching.
If you're building with on-device AI on Apple platforms, you're probably already using (or about to use) something from this stack.
Featured work
christopherkarani/Swarm
🐦🔥 A Swifty agent orchestration framework, purpose-built for on-device models
Swift · 456 stars
christopherkarani/Wax
Single-file memory layer for AI agents: sub-millisecond RAG on Apple Silicon. Metal-optimized, on-device. No server. No API. One file. Pure Swift.
Swift · 713 stars
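For context, the core operation behind a RAG memory layer is nearest-neighbor search over embedding vectors. A minimal pure-Swift sketch of that idea (purely illustrative; Wax's real API and its Metal acceleration are not shown here):

```swift
// Minimal cosine-similarity retrieval: the essence of RAG memory lookup.
// Illustrative only; bears no relation to Wax's actual API.
func cosine(_ a: [Float], _ b: [Float]) -> Float {
    let dot = zip(a, b).map(*).reduce(0, +)
    let na = a.map { $0 * $0 }.reduce(0, +).squareRoot()
    let nb = b.map { $0 * $0 }.reduce(0, +).squareRoot()
    return dot / (na * nb)
}

// Return the k stored texts whose embeddings are closest to the query.
func topK(query: [Float], memory: [(text: String, vec: [Float])], k: Int) -> [String] {
    memory
        .sorted { cosine(query, $0.vec) > cosine(query, $1.vec) }
        .prefix(k)
        .map(\.text)
}

let memory: [(text: String, vec: [Float])] = [
    ("apples are red", [1, 0, 0]),
    ("the sky is blue", [0, 1, 0]),
    ("grass is green", [0, 0, 1]),
]
print(topK(query: [0.9, 0.1, 0], memory: memory, k: 1)) // ["apples are red"]
```

A production layer replaces the brute-force sort with an index and runs the dot products on the GPU, which is where the sub-millisecond claim comes from.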
christopherkarani/Espresso
Train and run transformers directly on Apple's Neural Engine in Swift, bypassing Core ML entirely
Swift · 96 stars
christopherkarani/Conduit
🦑 Unified Swift SDK for LLM inference across local and cloud providers
Swift · 87 stars
christopherkarani/ContextCore
Ultra-fast Metal context engine for on-device AI. Builds optimized context windows in <5 ms with perfect recall on Apple Silicon. 🧠⚡️🚀
Swift · 21 stars
christopherkarani/EdgeRunner
🚀 LLM inference engine in Swift/Metal. Loads GGUF and safetensors models with no conversion, no C++, pure Swift.
Swift · 31 stars