v1.0.0 (2025-12-24)
Highlights
Chat / Multimodal (local inference): Text and multimodal (text+image) conversations; supports RAG (knowledge-base Q&A), prompt optimization, inference parameter configuration, and session/message management. Chat/Multimodal is built on llama.cpp and works without installing a separate environment. Vulkan acceleration is the default, CUDA acceleration is optional, and CPU-only inference is also supported.
SD / TTS (multimedia generation): Stable Diffusion image/video generation workflows; local Kokoro text-to-speech (TTS), currently supporting Chinese and English.
Model Library (full model lifecycle): Import, download, load, and manage models, covering Chat, Multimodal, Embedding, Rerank, SD, and TTS.
Dataset (data processing pipeline): Standardization and generation of SFT / RAG / audio-to-text datasets; supports pipeline step configuration, AI-node augmentation, and resuming from checkpoints.
Training (training loop): Full workflow from environment setup → training tasks → monitoring → evaluation → testing → packaging; supports LoRA and QLoRA fine-tuning.
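The RAG (knowledge-base Q&A) flow mentioned above follows the usual embed-then-retrieve pattern: knowledge-base chunks and the query are embedded, and the most similar chunks are passed to the chat model as context. The following is a minimal sketch with toy hand-written vectors; the function names and the 3-dimensional "embeddings" are illustrative only (in the app, real embeddings would come from the loaded Embedding model):

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_vec, kb, top_k=2):
    """Return the top_k knowledge-base chunks most similar to the query.

    kb is a list of (text, embedding) pairs. This is a toy in-memory
    store; a real app would use an embedding model plus a vector index.
    """
    ranked = sorted(kb, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:top_k]]

# Toy 3-dimensional "embeddings" for illustration only.
kb = [
    ("GPU setup guide", [0.9, 0.1, 0.0]),
    ("TTS usage notes", [0.0, 0.2, 0.9]),
    ("CUDA install steps", [0.8, 0.2, 0.1]),
]
print(retrieve([1.0, 0.0, 0.0], kb))  # the two GPU/CUDA-related chunks rank first
```

The retrieved chunks would then be prepended to the user's question as context before local inference.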
Notes (Environment & Limits)
One-click CPU/GPU environment installation in Settings is required for some features: Embedding / Rerank / SD / TTS rely on Python environment capabilities. When you trigger a feature that needs the environment, the app will prompt and guide you through the installation.
Kokoro TTS environment: Due to network restrictions in some regions, installing the misaki G2P library may be slow, so it is installed asynchronously to avoid blocking the overall setup. Keep the application running until the background installation completes; you can refresh manually to check the readiness status of the TTS capability icon.
SD limitation: SD inference currently supports NVIDIA CUDA only; AMD GPUs and the Vulkan backend are not supported for SD inference.
For more details and special notices, refer to each feature's documentation.
If you encounter any issues or would like to suggest a feature, please feel free to submit an Issue, ideally including your operating environment, model type, reproduction steps, and any logs/screenshots.
Acknowledgments
This is the initial release, so features and overall experience may not yet be polished. We look forward to your feedback and suggestions so we can improve together!