box_iou doesn't work with degenerate boxes #2582
Hi, thanks for opening the issue! Sorry for the delay in replying; I just came back from holidays. For my understanding, what do you mean by "unaligned coordinates"? Do you mean something like rotated boxes? And if that's the case, how would you plan to solve this?
Sorry if the title isn't relevant; I might have misstated the issue. I've tried to explain the scenario with an example in the bug report. For given bounding box coordinates (x1,y1,x2,y2), with (x1,y1) being the top-left corner and (x2,y2) the bottom-right, box_iou assumes x1 < x2 and y1 < y2. However, I think the order shouldn't affect the calculation, and we should bring the coordinates into the appropriate order before computing the IoU. So, if a box has x1 > x2 or y1 > y2, we can simply swap these values before any calculation to ensure correct results all the time.
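The swap suggested above could be sketched as follows. This is a hypothetical helper (`reorder_boxes` is not part of torchvision), shown only to illustrate the idea of normalizing coordinates before calling `box_iou`:

```python
import torch

def reorder_boxes(boxes: torch.Tensor) -> torch.Tensor:
    """Swap coordinates so that x1 <= x2 and y1 <= y2 for every box.

    `boxes` is an (N, 4) tensor in (x1, y1, x2, y2) format; degenerate
    boxes (x1 > x2 or y1 > y2) are fixed by taking the per-axis min/max.
    """
    x1 = torch.minimum(boxes[:, 0], boxes[:, 2])
    y1 = torch.minimum(boxes[:, 1], boxes[:, 3])
    x2 = torch.maximum(boxes[:, 0], boxes[:, 2])
    y2 = torch.maximum(boxes[:, 1], boxes[:, 3])
    return torch.stack([x1, y1, x2, y2], dim=1)
```

A box given as (10, 10, 0, 0) would come out as (0, 0, 10, 10), after which the usual IoU computation gives the expected result.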
Oh, now I get your point. I'm not sure if this is something that should be handled by box_iou. In my view, this is something that should be handled on the user side. Indeed, those degenerate boxes generally indicate that there are issues with the user's modeling that should be handled somehow: either by clamping to a valid size (like 0), always reordering the coordinates if they are degenerate, or something else. What we could potentially do is raise an error if the boxes do not satisfy the "non-degenerate" criteria, if that makes it easier for users to spot this situation. Thoughts?
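The error-raising option mentioned above could look roughly like this. This is a sketch of a hypothetical validation helper, not existing torchvision code:

```python
import torch

def check_non_degenerate(boxes: torch.Tensor) -> None:
    # Hypothetical check: fail early if any box has x2 < x1 or y2 < y1,
    # instead of silently producing IoU == 0 later on.
    bad = (boxes[:, 2:] < boxes[:, :2]).any(dim=1)
    if bad.any():
        raise ValueError(
            "All boxes must satisfy x1 <= x2 and y1 <= y2; found degenerate "
            f"boxes at indices {bad.nonzero().flatten().tolist()}"
        )
```

The error message pointing at the offending row indices would make the failure easy to trace back to the user's data.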
Yes, "degenerate boxes" is what this situation is also commonly referred to as. Providing an op to fix the degenerate boxes is also possible, but I'm not sure if that's necessary.
(Issue retitled from "box_iou doesn't work with unaligned coordinates" to "box_iou doesn't work with degenerated boxes".)
Yes, and there is a PR that already factors out some checks (see #2295), although it doesn't factor out the particular part you are referring to.
I don't think we should be handling degenerate boxes in the op, as the meaning of handling them is undefined, I believe (as I mentioned before). Can you explain the situation that made you open this issue? Would raising an error at the beginning have made it easier for you to identify the issue in your code?
Yeah. I was actually comparing two implementations of mAP calculation and completely overlooked the fact that torchvision's box_iou returns zero IoU for degenerate boxes. I agree we should respect the explicit ordering of box coordinates, but a gentle warning could also save someone a debugging cycle 😅
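A gentle warning, as opposed to a hard error, could be sketched like this. This is a hypothetical, backward-compatible variant (the helper name is illustrative, not torchvision API): callers relying on the current "degenerate box gives zero IoU" behavior would keep working, but would be told about it once:

```python
import warnings
import torch

def warn_if_degenerate(boxes: torch.Tensor) -> None:
    # Hypothetical alternative to an assert: emit a warning instead of
    # raising, so downstream code that depends on the current behavior
    # (degenerate boxes -> IoU 0) is not broken.
    if (boxes[:, 2:] < boxes[:, :2]).any():
        warnings.warn(
            "Some boxes are degenerate (x1 > x2 or y1 > y2); "
            "box_iou will report zero IoU for them.",
            stacklevel=2,
        )
```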
I would be happy to accept a PR adding some checks to box_iou. My only concern is that there might be downstream applications/users leveraging the current behavior (which sets the IoU to zero if the boxes are degenerate), so adding an assert could break downstream users' code. cc @ppwwyyxx for thoughts on whether we should add an assert on degenerate boxes
Adding the check sounds reasonable to me. However, I often tend not to add a lot of them, because Python-side asserts may affect CUDA performance (pytorch/pytorch#36853). Addressing that issue would allow robust code without a perf hit.
I opened #3425 to explicitly document the expected coordinate constraints. I would suggest waiting for pytorch/pytorch#36853 before introducing the checks.
🐛 Bug
torchvision.ops.boxes.box_iou works correctly only if (x1,y1) < (x2,y2) for every box, and computes zero IoU otherwise
To Reproduce
Steps to reproduce the behavior:
When all the (x1,y1) are less than (x2,y2), the box_iou output of a set of boxes with itself is a perfect identity matrix.
When some coordinates are reversed (x1 > x2 or y1 > y2), like the 2nd and 4th rows in this case, the box_iou output is 0.
Expected behavior
The coordinates should be handled beforehand and output for the 2nd case mentioned above should also be an Identity Matrix
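The zero IoU can be reproduced with a minimal reimplementation mirroring box_iou's logic (signed areas plus a clamped intersection); this sketch is illustrative and is not the actual torchvision source:

```python
import torch

def box_iou_sketch(boxes1: torch.Tensor, boxes2: torch.Tensor) -> torch.Tensor:
    # Areas are signed: (x2 - x1) * (y2 - y1), no reordering of coordinates.
    area1 = (boxes1[:, 2] - boxes1[:, 0]) * (boxes1[:, 3] - boxes1[:, 1])
    area2 = (boxes2[:, 2] - boxes2[:, 0]) * (boxes2[:, 3] - boxes2[:, 1])
    lt = torch.max(boxes1[:, None, :2], boxes2[:, :2])  # top-left of overlap
    rb = torch.min(boxes1[:, None, 2:], boxes2[:, 2:])  # bottom-right of overlap
    wh = (rb - lt).clamp(min=0)  # for degenerate boxes this clamps to zero
    inter = wh[:, :, 0] * wh[:, :, 1]
    return inter / (area1[:, None] + area2 - inter)

good = torch.tensor([[0., 0., 10., 10.]])
bad = torch.tensor([[10., 10., 0., 0.]])  # the same box, coordinates reversed

print(box_iou_sketch(good, good))  # tensor([[1.]])
print(box_iou_sketch(bad, bad))    # tensor([[0.]])
```

For the reversed box, the "top-left" of the overlap (10, 10) lies below and to the right of the "bottom-right" (0, 0), so the clamped width and height are 0 and the IoU comes out as 0 instead of 1.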
Environment
PyTorch version: 1.6.0
Is debug build: No
CUDA used to build PyTorch: 10.2
OS: Manjaro Linux
GCC version: (GCC) 10.1.0
Clang version: Could not collect
CMake version: version 3.17.3
Python version: 3.8 (64-bit runtime)
Is CUDA available: Yes
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: GeForce 940MX
Nvidia driver version: 440.100
cuDNN version: Could not collect
Versions of relevant libraries:
[pip3] numpy==1.19.1
[pip3] pytorch-lightning==0.7.6
[pip3] torch==1.6.0
[pip3] torchvision==0.7.0
[conda] blas 1.0 mkl
[conda] cudatoolkit 10.2.89 hfd86e86_1
[conda] mkl 2020.1 217
[conda] mkl-service 2.3.0 py38he904b0f_0
[conda] mkl_fft 1.1.0 py38h23d657b_0
[conda] mkl_random 1.1.1 py38h0573a6f_0
[conda] numpy 1.19.1 py38hbc911f0_0
[conda] numpy-base 1.19.1 py38hfa32c7d_0
[conda] pytorch 1.6.0 py3.8_cuda10.2.89_cudnn7.6.5_0 pytorch
[conda] pytorch-lightning 0.7.6 pypi_0 pypi
[conda] torchvision 0.7.0 py38_cu102 pytorch
Willing to work on this
I'd be glad to fix this behaviour 🙂