runtime error applying RandCropByPosNegLabeld in some samples when using PersistentDataset #5330
Hi @AKdeeplearner, from the information you posted, it looks like there is no foreground in the failing case. Did you check this data ("/home/dev/verse/images/sub-verse763_ct.nii.gz")?
And regarding "I gathered that this is supposed to happen when pos=0 and neg=0, however, why is this happening?": when you specify pos=0 and neg=0, it will raise a ValueError (see MONAI/monai/transforms/croppad/array.py, Line 1010 in 8700fee).
Hope this helps, feel free to ask if you have any further questions, thanks!
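A minimal sketch of the kind of foreground check suggested above, assuming nibabel is available; the label path is a placeholder, not a file named in the thread:

```python
import nibabel as nib
import numpy as np

# Hypothetical label path -- substitute the segmentation that pairs with
# /home/dev/verse/images/sub-verse763_ct.nii.gz.
label = nib.load("/home/dev/verse/labels/sub-verse763_seg.nii.gz").get_fdata()

# A label with no foreground voxels would explain why a pos/neg crop
# cannot find any positive sampling locations.
print("unique values:", np.unique(label))
print("foreground voxel count:", int(np.count_nonzero(label)))
```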
Hi @KumoLiu, all the labelmaps are binary masks, this one included and double-checked; the unique values are [0 1].
Hi @AKdeeplearner, did you check the data after the deterministic transforms?
@KumoLiu Before. Since that one is a deterministic transform, we expect the same output every time. No issue is raised with CacheDataset, for instance, so it should not be raised with PersistentDataset either.
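For context, a minimal sketch of the two dataset variants being compared, assuming `train_files` and `train_transforms` stand in for the data list and Compose pipeline from the original post (names and cache directory are illustrative):

```python
from monai.data import CacheDataset, PersistentDataset

# CacheDataset precomputes the deterministic (non-random) transforms and
# keeps the results in RAM.
cache_ds = CacheDataset(data=train_files, transform=train_transforms, cache_rate=1.0)

# PersistentDataset applies the same deterministic chain once, writes the
# intermediate results to cache_dir, and reloads them in later epochs;
# random transforms still run per iteration in both cases.
persistent_ds = PersistentDataset(
    data=train_files,
    transform=train_transforms,
    cache_dir="./persistent_cache",
)
```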
@KumoLiu, any suggestions?
Hi @AKdeeplearner, I have three suggestions.
Were you able to look into this further, @AKdeeplearner? When fixing this bug I came across the same error message, so it might be useful to try this fix: #5415
Greetings. I've been using CacheDataset for some time, but I currently don't have the resources to cache the whole dataset since the number of samples has increased. To work around this I've switched to PersistentDataset, though some strange things are happening.
Essentially, some samples raise this error during training:
I'm using UNETR with a (96, 96, 96) input size, and all the samples are larger than 96 px along every axis.
Oddly enough, when using CacheDataset or SmartCacheDataset this issue was never raised.
The training transform pipeline is the following:
I gathered that this is supposed to happen when pos=0 and neg=0; however, why is it happening here? Is it because some of those 4 crops sometimes result in samples that contain no foreground voxels? From my understanding of the differences in how the two dataset types load and feed data, if this happens it should happen for both of them. Why does it only happen with PersistentDataset, since I've tested the others considerably, and what is actually happening during this transform?
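For reference, a sketch of the crop transform under discussion with illustrative values (not the original pipeline's settings), assuming the standard MONAI signature: pos and neg set the ratio of crop centres drawn from foreground versus background voxels of the label, and num_samples=4 yields the four patches per volume mentioned above.

```python
from monai.transforms import RandCropByPosNegLabeld

# Illustrative configuration: with pos=1, neg=1 roughly half of the crop
# centres are sampled from foreground voxels and half from background;
# num_samples=4 returns four (96, 96, 96) patches per input volume.
crop = RandCropByPosNegLabeld(
    keys=["image", "label"],
    label_key="label",
    spatial_size=(96, 96, 96),
    pos=1,
    neg=1,
    num_samples=4,
    image_key="image",
    image_threshold=0,
)
```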
Thanks
Setup: