Commit b25c84a

supersergiy and claude committed
Fix torch_tensorrt import crash on non-GPU nodes
torch_tensorrt now raises RuntimeError (not ImportError/OSError) when no CUDA device is available. Add RuntimeError to the except clause so convnet loads gracefully, allowing mazepa_layer_processing and other modules to register their builders on non-GPU headnodes.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
1 parent e34462a commit b25c84a

1 file changed: 1 addition & 1 deletion

zetta_utils/convnet/utils.py
@@ -24,7 +24,7 @@
     import torch_tensorrt  # pylint: disable=import-error

     TENSORRT_AVAILABLE = True  # pragma: no cover
-except (ImportError, OSError) as e:
+except (ImportError, OSError, RuntimeError) as e:
     logger.info(f"torch_tensorrt is not available: {e}")
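For context, the patched hunk sits inside a guarded-import block. The following is a minimal, self-contained sketch of that pattern (the surrounding module structure and logger setup are assumptions; the real code lives in zetta_utils/convnet/utils.py):

```python
# Sketch of the guarded-import pattern the commit fixes.
# If torch_tensorrt is missing, or it raises at import time because no
# CUDA device is present, the module still loads and downstream builders
# (e.g. in mazepa_layer_processing) can register normally.
import logging

logger = logging.getLogger(__name__)

TENSORRT_AVAILABLE = False
try:
    import torch_tensorrt  # pylint: disable=import-error

    TENSORRT_AVAILABLE = True
except (ImportError, OSError, RuntimeError) as e:
    # RuntimeError is the new addition: torch_tensorrt raises it (rather
    # than ImportError/OSError) when no CUDA device is available.
    logger.info(f"torch_tensorrt is not available: {e}")
```

Callers can then branch on `TENSORRT_AVAILABLE` instead of re-attempting the import.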
