
CUDA_ERROR_NO_DEVICE: no CUDA capable device is detected #2

Description

@MSajek

Hi @stefanprodic,
When I run
python ${EXE}/SWARM_read_level.py -m ${MOD} --sam ${SAM} --fasta ${FASTA} --raw ${BLOW5} -o ${OUTPUT_DIR}/${OUTPUT_FILE}
I get the following messages on stderr:

2025-11-24 10:25:19.180331: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable TF_ENABLE_ONEDNN_OPTS=0.
2025-11-24 10:25:19.640229: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2025-11-24 10:25:19.640291: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2025-11-24 10:25:19.717590: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2025-11-24 10:25:19.869597: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2025-11-24 10:25:23.825152: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2025-11-24 10:25:31.913383: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:274] failed call to cuInit: CUDA_ERROR_NO_DEVICE: no CUDA-capable device is detected
2025-11-24 10:25:31.913414: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:129] retrieving CUDA diagnostic information for host: gpu07
2025-11-24 10:25:31.913419: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:136] hostname: gpu07
2025-11-24 10:25:31.913496: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:159] libcuda reported version is: 570.86.15
2025-11-24 10:25:31.913512: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:163] kernel reported version is: 570.86.15
2025-11-24 10:25:31.913517: I external/local_xla/xla/stream_executor/cuda/cuda_diagnostics.cc:241] kernel version seems to match DSO: 570.86.15
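The failing call in the log is cuInit. A minimal sketch to reproduce that same check outside TensorFlow (assuming a Linux node; libcuda.so.1 is the driver library the log refers to, and status code 100 is CUDA_ERROR_NO_DEVICE):

```python
import ctypes
import os


def cuda_device_check():
    """Return (CUDA_VISIBLE_DEVICES value, cuInit status) for this process.

    An empty CUDA_VISIBLE_DEVICES hides all GPUs from the process, which is a
    common cause of CUDA_ERROR_NO_DEVICE on clusters. The status is None when
    libcuda.so.1 is not present (no NVIDIA driver on the node).
    """
    visible = os.environ.get("CUDA_VISIBLE_DEVICES", "<unset>")
    try:
        libcuda = ctypes.CDLL("libcuda.so.1")
        # cuInit(0) mirrors TensorFlow's call: 0 == CUDA_SUCCESS,
        # 100 == CUDA_ERROR_NO_DEVICE.
        status = libcuda.cuInit(0)
    except OSError:
        status = None
    return visible, status


visible, status = cuda_device_check()
print("CUDA_VISIBLE_DEVICES =", visible)
print("cuInit status:", "libcuda not found" if status is None else status)
```

If this prints an empty CUDA_VISIBLE_DEVICES or status 100, the GPU is being hidden at the allocation/environment level rather than by TensorFlow.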

I used CUDA 11.8.
The output file is created, so I assume the job ran on the CPU instead of the GPU, right?
What can I do to make the GPU visible?
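For reference, a quick sketch to confirm whether TensorFlow itself sees any GPU (an empty list means it silently falls back to the CPU):

```python
def tf_gpu_check():
    """Report the GPUs TensorFlow can see, or None if TF is not installed."""
    try:
        import tensorflow as tf
    except ImportError:
        return None  # TensorFlow not available in this environment
    # Empty list -> TensorFlow will run on CPU only.
    return tf.config.list_physical_devices("GPU")


gpus = tf_gpu_check()
print("GPUs visible to TensorFlow:", gpus)
```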
Thanks in advance.
Best,
Marcin
