Local AI anywhere, for everyone — LLM inference, chat UI, voice, agents, workflows, RAG, and image generation. No cloud, no subscriptions.
This is a mirror of the Strix Halo HomeLab wiki; to browse the wiki, click the link below.
Sixunited AXB35 EC control & monitoring for Windows
Whisper + Piper + Wyoming for Strix Halo (ROCm 7.1.1+)
A comprehensive guide to running Linux (Omarchy/Arch) on the 2025 ASUS ROG Flow Z13 (AMD Strix Halo). Includes CachyOS Kernel setup, Tablet Mode fixes, and Power Management for the Ryzen AI Max
The definitive Strix Halo LLM guide — 65 t/s on a $2,999 mini PC. Live benchmarks, tested optimizations, and everything that doesn't work.
bare-metal ai stack for amd strix halo — 91 tok/s, 42 services, 17 agents, compiled from source, zero cloud
Tools and documentation related to the AMD Strix Halo APU family (Ryzen AI Max 395) of systems. Tested on GMKtec EVO-2
Claude Code skill for AMD Strix Halo (Ryzen AI MAX+ 395) ML setup. Handles PyTorch installation (official wheels don't work with gfx1151), GTT memory config, and environment setup. Enables 30B parameter models.
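The GTT memory configuration mentioned above usually means raising the amdgpu GTT limit via kernel parameters so the iGPU can map more of the unified system RAM (needed to fit ~30B-parameter models). A minimal sketch, assuming a GRUB-based distro; the sizes shown are illustrative, not a recommendation:

```shell
# Illustrative only: raise the amdgpu GTT limit on a Strix Halo box.
# amdgpu.gttsize is in MiB; ttm.pages_limit is in 4 KiB pages.
# Example below targets ~96 GiB of GTT on a 128 GiB machine.
sudo sed -i 's/GRUB_CMDLINE_LINUX_DEFAULT="/&amdgpu.gttsize=98304 ttm.pages_limit=25165824 /' /etc/default/grub
sudo update-grub   # or: grub2-mkconfig -o /boot/grub2/grub.cfg on Fedora
# Reboot, then verify the new GTT size:
sudo dmesg | grep -i "amdgpu.*GTT"
```

After a reboot, llama.cpp and PyTorch can allocate model weights from GTT rather than the small carve-out set in firmware.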
Simple installer script which takes a download (if newer) and installs it globally. Sets up Vulkan support
Ansible playbook to configure AMD Strix Halo machines (e.g. Framework Desktop or GMKtec EVO-X2) as local AI inference servers running Fedora 43. Sets up llama.cpp with llama-swap and Open WebUI and downloads GGUF models. With NGINX reverse proxy and TLS via ACME or self-signed certificate.
Talos-O (Omni): A sovereign, embodied agentic organism forged on AMD Strix Halo. Integrating the Chimera Kernel (Linux 6.18), Zero-Copy Introspection, and the Phronesis Engine. Built from First Principles.
llama.cpp setup on dedicated AMD Strix Halo machine
Monitoring app showing important ROCm-related metrics in a browser window. Provides a /metrics endpoint
Sample application showing use of Farscape-generated binding libraries for Linux/AMD
Self-hosted Fish Audio S2-Pro running on AMD Strix Halo via ROCm
Local LLM benchmarks on AMD Strix Halo — 26+ models tested across RADV, AMDVLK, and ROCm with llama.cpp