v4.0.0 #9006
mudler announced in Announcements
🎉 LocalAI 4.0.0 Release! 🚀
LocalAI 4.0.0 is out!
This major release transforms LocalAI into a complete AI orchestration platform. We've embedded agentic and hybrid search capabilities directly into the core, completely overhauled the user interface with React for a modern experience, and are thrilled to introduce Agenthub (link), a brand-new community hub for easily sharing and importing agents. Alongside these massive updates, we've introduced powerful new features like Canvas mode for code artifacts, MCP apps, and full MCP client-side support.
🚀 Key Features
🤖 Native Agentic Orchestration & Agenthub
LocalAI now includes agentic capabilities embedded directly in the core. You can manage, import, start, and stop agents via the new UI.
[video: agents.mp4]
🎨 Revamped UI & Canvas Mode
The Web interface has been completely migrated to React, bringing a smoother experience and powerful new capabilities:
[video: model-fit-canvas-mode.mp4]
🔌 MCP Apps & Client-Side Support
We’ve expanded support for the Model Context Protocol (MCP):
- `LOCALAI_DISABLE_MCP` to completely disable MCP support for security.

🎵 New Backends, Audio & Video Enhancements
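The `sample_rate` support noted below is exposed through the TTS API. A minimal sketch of what a request body might look like, assuming an OpenAI-style speech endpoint — the model and voice names are placeholders, not names confirmed by these notes:

```python
import json

# Sketch of a TTS request body carrying the new sample_rate option.
# "qwen-tts" and "default" are placeholder model/voice names.
payload = {
    "model": "qwen-tts",
    "input": "Hello from LocalAI 4.0.0",
    "voice": "default",      # multi-voice selection is new for Qwen TTS
    "sample_rate": 16000,    # applied server-side via post-processing resampling
}
body = json.dumps(payload)
```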
- `sample_rate` support via post-processing and multi-voice support for Qwen TTS.
- `vllm-omni` backend detection.

🛠️ Infrastructure & Developer Experience
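The new data-path options described below can be combined. A minimal sketch of the precedence an application might apply — flag over environment variable over default — where the fallback location is an assumption, not documented behavior:

```python
import os
from pathlib import Path

# Sketch: resolve where persistent data (agents, skills) lives.
# An explicit --data-path value wins, then LOCALAI_DATA_PATH, then a
# default location (the default used here is an assumption).
def resolve_data_path(cli_value=None):
    if cli_value:
        return Path(cli_value)
    env_value = os.environ.get("LOCALAI_DATA_PATH")
    if env_value:
        return Path(env_value)
    return Path.home() / ".local" / "share" / "localai"
```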
- `--data-path` CLI flag and `LOCALAI_DATA_PATH` env var to separate persistent data (agents, skills) from configuration.

🐞 Fixes & Improvements
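For the response-format rename below, a minimal client-side shim, assuming callers may still send the old spelling — `normalize_response_format` is a hypothetical helper for illustration, not part of LocalAI:

```python
# Sketch of the transcription response_format rename: LocalAI 4.0.0 follows
# the OpenAI spec value "verbose_json", so clients still sending the old
# "json_verbose" spelling should migrate. The model name is a placeholder.
LEGACY_TO_SPEC = {"json_verbose": "verbose_json"}

def normalize_response_format(value):
    # Map the pre-4.0.0 spelling to the spec-compliant one; pass others through.
    return LEGACY_TO_SPEC.get(value, value)

form_fields = {
    "model": "whisper-1",
    "response_format": normalize_response_format("json_verbose"),
}
```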
- Renamed `json_verbose` to `verbose_json` for OpenAI spec compliance (fixes Nextcloud integration).
- (`context_size`).

Known issues
- The `diffusers` backend currently fails to build (due to CI limit exhaustion) and is not part of this release (the previous version is still available). We are looking into it, but if you want to help and know someone at GitHub who could support us with better ARM runners, please reach out!

❤️ Thank You
LocalAI is a true FOSS movement — built by contributors, powered by community.
If you believe in privacy-first AI, your support keeps this stack alive.
✅ Full Changelog
What's Changed
Breaking Changes 🛠
Bug fixes 🐛
Exciting New Features 🎉
- Add `sample_rate` support to TTS API via post-processing resampling by @Copilot in #8650

🧠 Models
📖 Documentation and examples
👒 Dependencies
Other Changes
- chore: ⬆️ Update ggml-org/llama.cpp to `f75c4e8bf52ea480ece07fd3d9a292f1d7f04bc5` by @localai-bot in #8619
- chore: ⬆️ Update ggml-org/llama.cpp to `2b6dfe824de8600c061ef91ce5cc5c307f97112c` by @localai-bot in #8622
- chore: ⬆️ Update ggml-org/llama.cpp to `b68a83e641b3ebe6465970b34e99f3f0e0a0b21a` by @localai-bot in #8628
- chore: ⬆️ Update ggml-org/llama.cpp to `418dea39cea85d3496c8b04a118c3b17f3940ad8` by @localai-bot in #8649
- chore: ⬆️ Update ggml-org/llama.cpp to `3769fe6eb70b0a0fbb30b80917f1caae68c902f7` by @localai-bot in #8655
- chore: ⬆️ Update ggml-org/llama.cpp to `723c71064da0908c19683f8c344715fbf6d986fd` by @localai-bot in #8660
- chore: ⬆️ Update ggml-org/whisper.cpp to `9453b4b9be9b73adfc35051083f37cefa039acee` by @localai-bot in #8671
- chore: ⬆️ Update ggml-org/llama.cpp to `05728db18eea59de81ee3a7699739daaf015206b` by @localai-bot in #8683
- chore: ⬆️ Update ggml-org/llama.cpp to `319146247e643695f94a558e8ae686277dd4f8da` by @localai-bot in #8707
- chore: ⬆️ Update ggml-org/llama.cpp to `4d828bd1ab52773ba9570cc008cf209eb4a8b2f5` by @localai-bot in #8727
- chore: ⬆️ Update ggml-org/llama.cpp to `ecd99d6a9acbc436bad085783bcd5d0b9ae9e9e9` by @localai-bot in #8762
- chore: ⬆️ Update ggml-org/llama.cpp to `24d2ee052795063afffc9732465ca1b1c65f4a28` by @localai-bot in #8777
- chore: ⬆️ Update ggml-org/whisper.cpp to `30c5194c9691e4e9a98b3dea9f19727397d3f46e` by @localai-bot in #8796
- chore: ⬆️ Update ggml-org/llama.cpp to `a0ed91a442ea6b013bd42ebc3887a81792eaefa1` by @localai-bot in #8797
- chore: ⬆️ Update ggml-org/llama.cpp to `566059a26b0ce8faec4ea053605719d399c64cc5` by @localai-bot in #8822
- chore: ⬆️ Update ggml-org/llama.cpp to `c5a778891ba0ddbd4cbb507c823f970595b1adc2` by @localai-bot in #8837
- chore: ⬆️ Update ggml-org/llama.cpp to `35bee031e17ed2b2e8e7278b284a6c8cd120d9f8` by @localai-bot in #8872
- chore: ⬆️ Update leejet/stable-diffusion.cpp to `d6dd6d7b555c233bb9bc9f20b4751eb8c9269743` by @localai-bot in #8925
- chore: ⬆️ Update ggml-org/llama.cpp to `23fbfcb1ad6c6f76b230e8895254de785000be46` by @localai-bot in #8921
- chore: ⬆️ Update ggml-org/llama.cpp to `10e5b148b061569aaee8ae0cf72a703129df0eab` by @localai-bot in #8946
- chore: ⬆️ Update ggml-org/llama.cpp to `57819b8d4b39d893408e51520dff3d47d1ebb757` by @localai-bot in #8983
- chore: ⬆️ Update ace-step/acestep.cpp to `5aa065445541094cba934299cd498bbb9fa5c434` by @localai-bot in #8984
- chore: ⬆️ Update ggml-org/llama.cpp to `e30f1fdf74ea9238ff562901aa974c75aab6619b` by @localai-bot in #8997

New Contributors
Full Changelog: v3.12.1...v4.0.0
This discussion was created from the release v4.0.0.