fix(ui): enable rocm-smi support by correcting flags and parsing #580
base: main
Conversation
Forgive me... I had only installed ROCm to the extent needed for ComfyUI with ZLUDA, and then checked whether ROCm was properly installed. Now I'll make a separate PyTorch (Python 3.12) folder, install ROCm there, and try it out.
I'm sorry, I didn't notice you're testing on a Windows system. I don't currently have a way of testing this, but for Windows, querying dynamic performance counters with "Get-Counter" could be the way to go.
The error you are getting should no longer exist, as I replaced it with a vendor-neutral "GPUs not detected"-style message. Can you run: $ git rev-parse HEAD
Ugh, sorry, a bit shameful, but I naively cloned your fork without checking the branch, so I was just on the default.
As long as I'm not the only one making those sorts of mistakes! :D
Works for me now, thanks a lot! Training Z Image Turbo right now at around 3.5 sec/it. My setup: AMD RX 7700 XT with 12 GB VRAM on Ubuntu 24.04 LTS. What I did:
ai-toolkit runs on systems with AMD GPUs, but displays an error about 'nvidia-smi' in the dashboard when doing so.
This patch removes the hard-coded dependency on 'nvidia-smi', allowing ai-toolkit to operate with either 'nvidia-smi' or 'rocm-smi'. It checks for 'nvidia-smi' first and falls back to 'rocm-smi', which may cause an issue if both are installed, but it solves a real need today.