Add typing annotations to detection/roi_heads #4612


Open · wants to merge 5 commits into main

Conversation

@oke-aditya (Contributor) commented on Oct 13, 2021:

Helps #4582

Current status:

- Opened PR to check if CI passes after adding types.
- Fixing mypy errors.

```diff
@@ -382,13 +401,11 @@ def expand_boxes(boxes, scale):
 @torch.jit.unused
-def expand_masks_tracing_scale(M, padding):
-    # type: (int, int) -> float
+def expand_masks_tracing_scale(M: int, padding: int) -> Tensor:
```

@oke-aditya (Contributor, Author) left a comment on Oct 14, 2021:

Would love to hear your thoughts. I think CI failures are unrelated.

cc @NicolasHug @datumbox

```python
widths_i: Tensor,
heights_i: Tensor,
offset_x_i: Tensor,
offset_y_i: Tensor,
```
@oke-aditya commented:

I hope the types here are correct.

```python
y_0 = int(max(box[1].item(), 0))
y_1 = int(min(box[3].item() + 1, im_h))

im_mask[y_0:y_1, x_0:x_1] = mask[int(y_0 - box[1]) : int(y_1 - box[1]), int(x_0 - box[0]) : int(x_1 - box[0])]
```
@oke-aditya commented:

Slicing needs integers; mypy complains that these slice bounds might not be integers. A safe way is to typecast :)
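
As an aside, a minimal self-contained sketch of the pattern (shapes and values are made up for illustration): `Tensor.item()` is typed as returning a plain Python number, so mypy can't prove the slice bounds are `int` without the explicit cast.

```python
import torch

# Illustrative only: item() is typed as returning a Python number
# (int or float depending on dtype), so mypy cannot prove these slice
# bounds are ints without the explicit int() casts.
im_h, im_w = 28, 28
mask = torch.rand(im_h, im_w)
box = torch.tensor([3.0, 5.0, 20.0, 25.0])

y_0 = int(max(box[1].item(), 0))
y_1 = int(min(box[3].item() + 1, im_h))
cropped = mask[y_0:y_1]  # slice indices are now plain ints
print(cropped.shape)     # torch.Size([21, 28])
```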

```diff
@@ -444,33 +464,32 @@ def _onnx_paste_mask_in_image(mask, box, im_h, im_w):
 y_0 = torch.max(torch.cat((box[1].unsqueeze(0), zero)))
 y_1 = torch.min(torch.cat((box[3].unsqueeze(0) + one, im_h.unsqueeze(0))))

-unpaded_im_mask = mask[(y_0 - box[1]) : (y_1 - box[1]), (x_0 - box[0]) : (x_1 - box[0])]
+unpaded_im_mask = mask[int(y_0 - box[1]) : int(y_1 - box[1]), int(x_0 - box[0]) : int(x_1 - box[0])]
```
@oke-aditya commented:

Same as above.

```python
mask_predictor: Optional[nn.Module] = None,
keypoint_roi_pool: Optional[nn.Module] = None,
keypoint_head: Optional[nn.Module] = None,
keypoint_predictor: Optional[nn.Module] = None,
```
@oke-aditya commented:

All are annotated as nn.Module, since we want the types to be as loose as possible.
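
For illustration, a hedged sketch of what this loose annotation looks like in practice (the class name and cut-down signature below are hypothetical, not the real RoIHeads constructor):

```python
from typing import Optional
from torch import nn

class HeadsSketch(nn.Module):
    # Hypothetical stand-in for the real constructor: each optional head is
    # annotated with the loosest useful type, Optional[nn.Module], rather
    # than a concrete head class.
    def __init__(
        self,
        mask_predictor: Optional[nn.Module] = None,
        keypoint_roi_pool: Optional[nn.Module] = None,
        keypoint_head: Optional[nn.Module] = None,
        keypoint_predictor: Optional[nn.Module] = None,
    ) -> None:
        super().__init__()
        self.mask_predictor = mask_predictor
        self.keypoint_roi_pool = keypoint_roi_pool
        self.keypoint_head = keypoint_head
        self.keypoint_predictor = keypoint_predictor
```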

```python
bg_iou_thresh: float,
batch_size_per_image: int,
positive_fraction: float,
bbox_reg_weights: Tuple[float, float, float, float],
```
@oke-aditya commented:

This could be List[float], but that would need a refactor. Let me know.
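
To make the trade-off concrete, a quick sketch of the two annotations (both function names here are hypothetical):

```python
from typing import List, Tuple

# As annotated in this PR: a fixed-length tuple encodes that exactly four
# box-regression weights (wx, wy, ww, wh) are expected.
def use_weights_tuple(bbox_reg_weights: Tuple[float, float, float, float]) -> float:
    wx, wy, ww, wh = bbox_reg_weights
    return wx + wy + ww + wh

# The List[float] alternative is looser but would require refactoring
# callers, since mypy does not treat tuples and lists as interchangeable.
def use_weights_list(bbox_reg_weights: List[float]) -> float:
    return sum(bbox_reg_weights)

print(use_weights_tuple((10.0, 10.0, 5.0, 5.0)))
print(use_weights_list([10.0, 10.0, 5.0, 5.0]))
```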

```diff
-matched_idxs = None
 labels = None  # type: ignore[assignment]
 regression_targets = None  # type: ignore[assignment]
+matched_idxs = None  # type: ignore[assignment]
```
@oke-aditya commented:

Assigning None here needs an ignore; we can't use Optional[Tensor], as that isn't supported by JIT.
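
A minimal reproduction of the situation, assuming a simplified control flow (the function below is hypothetical):

```python
import torch
from torch import Tensor

# Hypothetical reduction of the roi_heads control flow: in one branch the
# variable is a Tensor, in the other it is None.
def select_samples(training: bool, proposals: Tensor) -> None:
    if training:
        matched_idxs = torch.zeros_like(proposals)  # inferred as Tensor
    else:
        # mypy pins matched_idxs to Tensor from the branch above, so the
        # None assignment needs a targeted ignore; widening the annotation
        # to Optional[Tensor] was ruled out for TorchScript compatibility.
        matched_idxs = None  # type: ignore[assignment]
```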

```diff
-mask_features = self.mask_head(mask_features)
-mask_logits = self.mask_predictor(mask_features)
+mask_features = self.mask_head(mask_features)  # type: ignore[misc]
+mask_logits = self.mask_predictor(mask_features)  # type: ignore[misc]
```
@oke-aditya commented:

mask_head and mask_predictor are None by default, so there may be a case where None flows down to here; hence mypy complains.
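
A hedged reproduction of why mypy flags these call sites (the class name below is hypothetical):

```python
from typing import Optional
from torch import Tensor, nn

class MaskBranchSketch(nn.Module):
    # Hypothetical reduction: mask_head defaults to None, as in the real
    # constructor.
    def __init__(self, mask_head: Optional[nn.Module] = None) -> None:
        super().__init__()
        self.mask_head = mask_head

    def forward(self, mask_features: Tensor) -> Tensor:
        # Because mask_head is Optional, mypy reports '"None" not callable'
        # ([misc]) here, even though the surrounding detection code only
        # reaches this branch when the head is actually set.
        return self.mask_head(mask_features)  # type: ignore[misc]
```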

```python
x_0 = int(max(box[0].item(), 0))
x_1 = int(min(box[2].item() + 1, im_w))
y_0 = int(max(box[1].item(), 0))
y_1 = int(min(box[3].item() + 1, im_h))
```
@oke-aditya commented:

box is a single box Tensor, e.g.

```python
box = torch.tensor([10, 20, 40, 60])
```

I'm not sure whether it is float or int.
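
For what it's worth, a quick check shows that .item() follows the tensor dtype, which is why the int() casts above are safe either way:

```python
import torch

# .item() returns a plain Python number whose type tracks the tensor dtype,
# so the explicit int() cast normalizes both cases.
int_box = torch.tensor([10, 20, 40, 60])
float_box = torch.tensor([10.0, 20.0, 40.0, 60.0])

print(type(int_box[0].item()))    # <class 'int'>
print(type(float_box[0].item()))  # <class 'float'>
```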

```diff
-    # type: (int, int) -> float
-    return torch.tensor(M + 2 * padding).to(torch.float32) / torch.tensor(M).to(torch.float32)
+def expand_masks_tracing_scale(M: int, padding: int) -> float:
+    return torch.tensor(M + 2 * padding).to(torch.float32) / torch.tensor(M).to(torch.float32)  # type: ignore[return-value]
```
@oke-aditya commented:

The other way is to use cast() from typing; I'm not sure whether it would work.
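
A sketch of that cast() alternative, assuming mypy is the only audience (untested here against TorchScript tracing; the function name is a hypothetical variant):

```python
from typing import cast
import torch

def expand_masks_tracing_scale_variant(M: int, padding: int) -> float:
    scale = torch.tensor(M + 2 * padding).to(torch.float32) / torch.tensor(M).to(torch.float32)
    # cast() silences mypy with no runtime effect: the value is still a
    # 0-dim Tensor at runtime, exactly as with the ignore[return-value] route.
    return cast(float, scale)

print(expand_masks_tracing_scale_variant(28, 1))  # tensor(1.0714)
```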

```diff
@@ -202,7 +214,7 @@ def _onnx_heatmaps_to_keypoints(
 # TODO: simplify when indexing without rank will be supported by ONNX
 base = num_keypoints * num_keypoints + num_keypoints + 1
-ind = torch.arange(num_keypoints)
+ind = torch.arange(int(num_keypoints))
```
@oke-aditya commented:

num_keypoints is a scalar tensor, so it should be safe to use int() here, I think.
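
A tiny check of that assumption (values made up):

```python
import torch

# int() on a 0-dim (scalar) tensor extracts the Python integer, so the
# torch.arange() call sees a plain int.
num_keypoints = torch.tensor(17)
ind = torch.arange(int(num_keypoints))
print(ind.shape)  # torch.Size([17])
```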

@oke-aditya oke-aditya marked this pull request as ready for review October 14, 2021 16:26
@oke-aditya changed the title from "[WIP] Add typing annotations to detection/roi_align" to "Add typing annotations to detection/roi_align" on Oct 14, 2021
@datumbox (Contributor) left a comment:

Though I'm supportive of the initiative to cover the code with annotations, in this specific example mypy requires casts that, in my opinion, make the code messier. I don't suppose we can turn it off for the specific methods that cause issues?

@oke-aditya commented:
We can turn it off in the places where it is messier :) Can you point out such places? It is slightly subjective where they are messier.

My 2 cents: there will probably be very few such cases (fewer than 10), and this file is probably the largest (~800 lines). So instead of ignoring the whole file, I think it is better to just ignore the issues at those specific places.

@oke-aditya oke-aditya requested a review from datumbox October 14, 2021 16:47
@datumbox commented:

Sounds good, I'll have a closer look next week if nobody else reviews this. (I'm going to deassign myself from the review to allow others to pick it up, but feel free to ping me if it's not reviewed by next week.)

You are right that what counts as "messy" is subjective. For me, ideally, annotation PRs should introduce minimal code modifications in the bodies of the methods. The more we need, the more it hints that we are not just annotating types but changing the functionality. A good example of this is #4599, where we modify the type of the parameters we pass. Though such modifications seem innocent initially, they might change how the method operates. In a perfect world this wouldn't be an issue, because the tests should catch it; unfortunately, over time we've seen quite a few gaps in our tests, which makes me worried. Again, I'm not saying this to stop the initiative to cover the codebase with annotations; I still fully support it. I'm just advocating testing potential changes more thoroughly (maybe with the use of a debugger) to be 100% certain we are doing the right thing.

@datumbox datumbox removed their request for review October 14, 2021 16:57
@oke-aditya commented:

> Though such modifications seem innocent initially, they might change how the method operates. In a perfect world this wouldn't be an issue, because the tests should catch it; unfortunately, over time we've seen quite a few gaps in our tests, which makes me worried. Again, I'm not saying this to stop the initiative to cover the codebase with annotations; I still fully support it. I'm just advocating testing potential changes more thoroughly (maybe with the use of a debugger) to be 100% certain we are doing the right thing.

Completely agree with you, Vasilis. Really glad to hear such thoughts.

I must appreciate your depth of thinking and experience in software development; there is a lot to learn from you and all the maintainers.

@oke-aditya changed the title from "Add typing annotations to detection/roi_align" to "Add typing annotations to detection/roi_heads" on Oct 18, 2021
@oke-aditya oke-aditya requested a review from datumbox November 1, 2021 15:02
@oke-aditya oke-aditya mentioned this pull request Oct 27, 2022