
StreamDiffusion runner tensor location errors #807

@ad-astra-video

Description


Describe the bug

Looks like the runner did not start up within the timeout, and then these errors started appearing.

https://eu-metrics-monitoring.livepeer.live/grafana/explore?schemaVersion=1&panes=%7B%22wov%22:%7B%22datasource%22:%22cemke7qcimq68d%22,%22queries%22:%5B%7B%22refId%22:%22A%22,%22expr%22:%22%7Bnode_name%3D%5C%22adastra-live-ai-4090x8-1%5C%22,%20container%3D%5C%22live-video-to-video_streamdiffusion_8901%5C%22%7D%22,%22queryType%22:%22range%22,%22datasource%22:%7B%22type%22:%22loki%22,%22uid%22:%22cemke7qcimq68d%22%7D,%22editorMode%22:%22code%22,%22direction%22:%22backward%22%7D%5D,%22range%22:%7B%22from%22:%221758410963997%22,%22to%22:%221758583932144%22%7D%7D%7D&orgId=1

  File "/workspace/miniconda3/envs/comfystream/lib/python3.11/site-packages/streamdiffusion/modules/controlnet_module.py", line 515, in _unet_hook
    down_samples, mid_sample = cn(
                               ^^^
  File "/workspace/miniconda3/envs/comfystream/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/miniconda3/envs/comfystream/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/miniconda3/envs/comfystream/lib/python3.11/site-packages/diffusers/models/controlnets/controlnet.py", line 756, in forward
    emb = self.time_embedding(t_emb, timestep_cond)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/miniconda3/envs/comfystream/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/miniconda3/envs/comfystream/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/miniconda3/envs/comfystream/lib/python3.11/site-packages/diffusers/models/embeddings.py", line 1290, in forward
    sample = self.linear_1(sample)
             ^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/miniconda3/envs/comfystream/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1751, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/miniconda3/envs/comfystream/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1762, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/miniconda3/envs/comfystream/lib/python3.11/site-packages/torch/nn/modules/linear.py", line 125, in forward
    return F.linear(input, self.weight, self.bias)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)

timestamp=2025-09-22 23:30:15 level=ERROR location=controlnet_module.py:525:_unet_hook gateway_request_id=7232abf9 manifest_id=0b913654 stream_id=str_fy2HwBRT78PJwJXg message=ControlNetModule: controlnet forward failed: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)
```
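
For reference, here is a minimal sketch (not the runner's code) of the class of failure the traceback shows: a CPU-resident timestep embedding passed into a CUDA-resident linear layer, which is what `F.linear` rejects with the `wrapper_CUDA_addmm` device check. The `linear` and `t_emb` names below are illustrative stand-ins for `time_embedding.linear_1` and the timestep embedding, and the sketch assumes a CUDA device is available.

```python
# Illustrative repro of the "Expected all tensors to be on the same device" error.
# Not the runner's code; names and shapes are assumptions for demonstration only.
import torch
import torch.nn as nn

linear = nn.Linear(320, 1280).to("cuda:0")  # stands in for time_embedding.linear_1 on the GPU
t_emb = torch.randn(1, 320)                 # timestep embedding left on the CPU

try:
    linear(t_emb)  # F.linear(cpu input, cuda weight) raises the addmm device-mismatch error
except RuntimeError as e:
    print(e)  # "Expected all tensors to be on the same device, ... cpu and cuda:0!"

# The usual remedy is to move the input to the module's device before the call:
out = linear(t_emb.to(next(linear.parameters()).device))
print(out.shape)  # torch.Size([1, 1280])
```

This suggests the ControlNet's timestep/conditioning tensors end up on the CPU while the ControlNet weights are on cuda:0, possibly because the failed startup left device placement incomplete.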

Labels: bug (Something isn't working)
