Local AI runtime for training & running small LLMs directly on Apple Neural Engine (ANE). No CoreML. No Metal. Offline, on-device fine-tuning & inference on M-series silicon.
Updated Mar 6, 2026 - Objective-C