CUDA out of memory #3

@Sycamorers

Description

Hello,

When I ran the code, I hit this error:

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 MiB (GPU 0; 5.80 GiB total capacity; 4.91 GiB already allocated; 51.38 MiB free; 4.93 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

From nvidia-smi I saw there was still memory available (867 MiB / 6138 MiB used; my GPU is an RTX A2000). I also checked the float size (I believe you use float32 by default) and, following the error message's suggestion, tried setting max_split_size_mb as well as adding torch.cuda.empty_cache() in run_nerf.py.

None of these worked.
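For reference, this is roughly how I applied those two mitigations (the value 128 for max_split_size_mb is just what I tried, not a recommendation from your repo):

```python
import os

# Must be set before PyTorch initializes CUDA: cap the allocator's
# split size (in MiB) to reduce fragmentation, as the error message suggests.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

# Then, inside the training loop in run_nerf.py, I also tried releasing
# cached allocator blocks between iterations:
#   import torch
#   torch.cuda.empty_cache()
```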

Could you please help me figure out what is going wrong when I run your code? Or do I simply need a GPU with more memory?

I'd appreciate your help!
Thanks so much! I look forward to your reply.
