Description
Hello,
I have a (perhaps conceptual) issue regarding the training of a model with the UNET-3D notebook, and I can't find a solution ...
I'm running the notebook on Windows, with an RTX 4090 and 512 GB of RAM. I also tried on an Ubuntu machine with weaker specs, but the same behaviour occurs.
I am trying to segment a confocal microscopy image of synaptic boutons (spherical objects, quite easily separable from the background).
Each image is 2224×2224 px with 60 z-slices. The issue is that training takes ages, around 20 hours per epoch... The GPU is functional and in use (the VRAM fills up to 21 GB), but GPU utilisation is irregular, jumping from 1% to 98% to 2%. The CPU and RAM are barely used. I tried many different patch sizes and batch sizes; nothing changes.
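For context, here is a minimal sketch of the kind of data-pipeline check I mean: timing how long it takes to slice random patches out of volumes of these sizes. It uses plain NumPy on dummy zero-filled arrays, not the notebook's actual data loader, so it is purely illustrative.

```python
import time
import numpy as np

# Dummy volumes with the shapes mentioned above (Z, Y, X); zeros are enough
# to time the memory traffic of slicing out patches.
full = np.zeros((60, 2224, 2224), dtype=np.float32)
crop = np.zeros((40, 512, 512), dtype=np.float32)

def mean_patch_time(volume, patch=(8, 256, 256), n=200):
    """Average time to slice and copy n random patches out of a volume."""
    rng = np.random.default_rng(0)
    pz, py, px = patch
    start = time.perf_counter()
    for _ in range(n):
        z0 = rng.integers(0, volume.shape[0] - pz + 1)
        y0 = rng.integers(0, volume.shape[1] - py + 1)
        x0 = rng.integers(0, volume.shape[2] - px + 1)
        # .copy() forces the bytes to actually move, like a data loader would.
        _ = volume[z0:z0 + pz, y0:y0 + py, x0:x0 + px].copy()
    return (time.perf_counter() - start) / n

print(f"full volume : {mean_patch_time(full) * 1e3:.2f} ms per patch")
print(f"pre-cropped : {mean_patch_time(crop) * 1e3:.2f} ms per patch")
```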
However, if I pre-crop each big image into 4 images of 512×512×40 px, with the same batch size / patch size, suddenly GPU utilisation reaches a stable 98% and an epoch takes about 20 minutes, even though the total amount of data analysed is basically the same and the patch size is the same... I thought it could be a bottleneck during the patching of the images, but 512×512×8 or 256×256×8 patches for an image of 2224×2224×60 px doesn't seem too extreme to me (and neither the CPU nor the RAM seems to be heavily used...).
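For clarity, this is roughly what I mean by pre-cropping (the file name and the tifffile-based tiling below are just an example sketch, not the exact script I use):

```python
import tifffile

# "image_full.tif" is a placeholder name for one full stack.
stack = tifffile.imread("image_full.tif")  # expected shape (Z, Y, X), e.g. (60, 2224, 2224)

# Cut non-overlapping 40x512x512 sub-volumes and write each one out as its
# own TIFF, so the notebook trains on several small images instead of one
# huge one. The exact tiling is only an example.
dz, dy, dx = 40, 512, 512
count = 0
for y0 in range(0, stack.shape[1] - dy + 1, dy):
    for x0 in range(0, stack.shape[2] - dx + 1, dx):
        sub = stack[:dz, y0:y0 + dy, x0:x0 + dx]
        tifffile.imwrite(f"image_crop_{count:02d}.tif", sub)
        count += 1
```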
Is there anything I'm doing wrong, or that I could optimize?
Thanks a lot in advance!