Description
Hello,
When I ran the code, I hit this error:

```
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 64.00 MiB (GPU 0; 5.80 GiB total capacity; 4.91 GiB already allocated; 51.38 MiB free; 4.93 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
From nvidia-smi I saw there was still memory available (867 MiB / 6138 MiB used; my GPU is an RTX A2000). I also tried checking the float size (I believe the default is float32), setting max_split_size_mb as the error message suggests, and adding torch.cuda.empty_cache() in run_nerf.py.
None of these worked.
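In case it matters, here is how I set max_split_size_mb (the value 64 is just what I tried, and I understand the setting only takes effect if it is defined before torch is imported):

```python
import os

# Set the allocator config *before* importing torch; PyTorch reads
# PYTORCH_CUDA_ALLOC_CONF once at import time, so setting it later
# has no effect. 64 MiB is just the example value I tried.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:64"

# import torch  # must happen after the environment variable is set
```

I also tried exporting the same variable in the shell before launching run_nerf.py, with the same result.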
Could you please help me identify what I am doing wrong when running your code? Or do I simply need a GPU with more memory?
I'd appreciate your help!
Thanks so much! I look forward to your reply.