Open
Description
Thank you for developing such a useful tool.
I have a cell–gene expression matrix with ~30,000 genes and ~60,000 cells. I attempted to run DoubletDetection on this dataset, but even on a node with 1 TB of RAM the job exceeded the memory limit and failed.
Could you please advise on how to reduce the memory usage or handle such large datasets with DoubletDetection? Any recommendations or best practices would be greatly appreciated.
Thank you very much!
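For context, here is a back-of-envelope estimate of the matrix footprint. A dense float64 copy of a 30,000 × 60,000 matrix is ~14.4 GB, and pipelines often hold several intermediate copies, which can plausibly exhaust even large nodes. Keeping the counts in a sparse format is a common mitigation; the ~5% nonzero fraction below is an assumption for illustration, since actual scRNA-seq sparsity varies by dataset.

```python
import scipy.sparse as sp  # only needed if you build an actual sparse matrix

n_genes, n_cells = 30_000, 60_000

# Dense float64 storage for one full copy of the matrix.
dense_bytes = n_genes * n_cells * 8  # 8 bytes per float64

# Assume ~5% of entries are nonzero (illustrative, not measured).
nnz = int(0.05 * n_genes * n_cells)

# CSR storage: 8 B per nonzero value + 4 B per column index,
# plus one 4 B row pointer per row (int32 indices assumed).
sparse_bytes = nnz * (8 + 4) + (n_genes + 1) * 4

print(f"dense:  {dense_bytes / 1e9:.1f} GB")   # ~14.4 GB per copy
print(f"sparse: {sparse_bytes / 1e9:.1f} GB")  # roughly an order of magnitude smaller
```

If the input is currently loaded as a dense array, converting it with `scipy.sparse.csr_matrix(counts)` before running the classifier (assuming the tool accepts sparse input) may avoid the blow-up; downsampling cells or running on subsets are other common workarounds.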