Description
Describe the bug
Hi Livepeer team,
First of all, thanks for the great work on ai-runner and related projects. 🙌
I'm reaching out to request support for CUDA 12.8 in the Docker images provided by ai-runner, to enable compatibility with NVIDIA Blackwell GPUs (e.g. RTX 5090).
Why this is important:
- Blackwell GPUs require at least CUDA 12.8 to unlock their full capabilities, including proper TensorRT engine generation and acceleration.
- The current ai-runner images appear to be based on an earlier CUDA version (12.6), which means Blackwell GPUs can't be fully utilized.
Additional note on PyTorch:
Since PyTorch does not yet provide official stable builds with CUDA 12.8 (as of June 2025), supporting Blackwell likely means:
- using a PyTorch nightly build with CUDA 12.8, or
- providing documentation / guidance for users to build ai-runner images with their own PyTorch + CUDA 12.8 setup.
Suggested solution:
- Provide a Dockerfile or image variant with a CUDA 12.8 base (e.g. nvidia/cuda:12.8.0-runtime-ubuntu22.04 or similar); a rough sketch follows below.
- Document how to combine this with a suitable PyTorch nightly build until official stable support arrives.
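To illustrate, here is a minimal, untested sketch of what such an image variant could look like. It assumes the nvidia/cuda:12.8.0-runtime-ubuntu22.04 base mentioned above and the PyTorch nightly cu128 wheel index; the Python setup and the way ai-runner's own dependencies are installed are placeholders and would need to be adapted to the actual ai-runner Dockerfile.

```dockerfile
# Sketch only: CUDA 12.8 base to target Blackwell (e.g. RTX 5090).
# Base image tag and PyTorch index URL are assumptions, not the official ai-runner setup.
FROM nvidia/cuda:12.8.0-runtime-ubuntu22.04

# Install Python (version/packages are placeholders; match whatever ai-runner currently uses).
RUN apt-get update && \
    apt-get install -y --no-install-recommends python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*

# PyTorch nightly built against CUDA 12.8, since stable cu128 wheels are not yet
# available (per the note above).
RUN pip3 install --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/cu128

# ai-runner's own dependencies would be installed here, as in the existing Dockerfile, e.g.:
# COPY . /app
# RUN pip3 install -r /app/requirements.txt
```

Once stable cu128 wheels ship, the nightly index line could simply be swapped for the regular PyTorch install step.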
Related context:
- Blackwell RTX 5090 hardware
- StreamDiffusion / ComfyUIStream engine generation via TensorRT
- Need to generate engines compatible with CUDA 12.8
If you'd like, I'm happy to help test builds on RTX 5090 hardware.
Thanks again!
Reproduction steps
- Go to '...'
- Click on '....'
- Scroll down to '....'
- See error
Expected behaviour
No response
Severity
None
Screenshots / Live demo link
No response
OS
Linux
Running on
Docker
AI-worker version
No response
Additional context
No response