Add raft builders and presets in prototypes #5043
💊 CI status (Dr. CI): as of commit 6a89b6d, no failures so far.
```python
    return img1, img2, flow, valid_flow_mask

def _pil_or_numpy_to_tensor(self, img1, img2, flow, valid_flow_mask):
    if not isinstance(img1, Tensor):
```
I find it a bit confusing that we type the input as `Tensor` and then actually handle the case where it's not a tensor. I saw that in the other presets, so I did the same. I assume this is only temporary, to check these presets against the current transforms (which return PIL images), and that we will remove the conversions eventually?
Good point. It's also because users are supposed to use these transforms during inference; at that point, you don't know whether they chose to read the image with PIL or with TorchVision's io, so here we support both.
This is also done because the reference scripts for other tasks only support PIL. BTW, now that you've added a prototype section for your model, you should add support for it in your reference scripts on the other PR.
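The conversion under discussion could be sketched roughly as follows. This is a minimal illustration, not the actual preset code: the standalone helper name and the PIL-to-CHW handling are assumptions.

```python
import numpy as np
import torch
from torch import Tensor

def pil_or_numpy_to_tensor(img):
    """Hypothetical sketch: accept a Tensor, a numpy array, or a PIL
    image and always return a Tensor, so the preset works regardless of
    how the user read the image."""
    if isinstance(img, Tensor):
        return img  # already a tensor, e.g. read via torchvision.io
    if isinstance(img, np.ndarray):
        return torch.from_numpy(img)
    # otherwise assume a PIL image: go through numpy, then to CHW layout
    arr = np.asarray(img)
    return torch.from_numpy(arr).permute(2, 0, 1)
```

The real preset would apply this to `img1`, `img2`, and the flow tensors in one pass.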
```python
url="",
transforms=RaftEval,
meta={
    "recipe": "",
    "epe": -1234,
},
```
There is currently no schema enforcement on the `meta` dict, so in theory you can add whatever you want. Having some common names across tasks would be useful for us when we tackle the documentation improvements (for example, we currently use the following: recipe, size, interpolation). Within the same task, I think it makes sense to introduce task-specific fields (for example: detection uses map, quantized models support backend, etc.).
Your use case is quite unique; so far we've never had a situation where we train a model on one dataset and provide validation stats for many. We also tend to report stats on the validation set, not the training set. Perhaps in your case it's worth providing a single metric under `epe`, giving the value on the same dataset(s) used for fitting (or eval?), and then having another field `other_epe` (better name required) that's a dictionary mapping the various datasets to their metrics.
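A hedged sketch of what such a `meta` entry could look like. The field name `other_epe` is the placeholder suggested above, and `-1234` is the same dummy value used in this PR, not a published metric.

```python
# Placeholder meta dict following the naming suggested above; all values
# are dummies (-1234, as in the PR), not real benchmark numbers.
meta = {
    "recipe": "",              # URL of the training recipe, still empty here
    "epe": -1234,              # end-point-error on the fitting dataset(s)
    "other_epe": {             # tentative name: per-dataset metrics
        "Sintel": -1234,
        "Kitti": -1234,
    },
}
```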
Thanks for the PR @NicolasHug. I've added a few comments to discuss with you, happy to chat offline if that's easier and summarize afterwards.
Co-authored-by: Vasilis Vryniotis <[email protected]>
LGTM, only a couple of minor nits and you are ready to merge on green CI.
@NicolasHug I just realized there are a few other things that we need to add in this PR, just because you introduce a new family of models (optical flow).
Co-authored-by: Vasilis Vryniotis <[email protected]>
Thanks a lot for the reviews! I'll merge now, and will update these bits as we publish the pretrained weights - PR coming later today.
Summary: Co-authored-by: Vasilis Vryniotis <[email protected]> Reviewed By: NicolasHug Differential Revision: D32950935 fbshipit-source-id: b936d2879243f7d5b8e05f329ebdfd074f5af05b
Towards #4644

This PR adds the `raft_large()` and `raft_small()` builders, following the new model prototype API.

cc @datumbox @bjuncek
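As a rough, self-contained illustration of the prototype builder pattern discussed in this PR. This is a sketch under assumptions, not the actual torchvision code: the `Raft_Large_Weights` enum name, its `C_T_V1` entry, and the stub model object are all hypothetical.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Any, Callable, Optional

@dataclass
class Weights:
    url: str
    transforms: Optional[Callable]
    meta: dict

class Raft_Large_Weights(Enum):
    # Placeholder entry mirroring the PR's empty url and dummy epe value.
    C_T_V1 = Weights(url="", transforms=None, meta={"recipe": "", "epe": -1234})

def raft_large(weights: Optional[Raft_Large_Weights] = None,
               progress: bool = True, **kwargs: Any) -> object:
    """Sketch of a prototype-API builder: construct the model, then (in
    the real code) load a checkpoint from weights.value.url if given."""
    model = object()  # stand-in for the actual RAFT nn.Module
    if weights is not None:
        # real code would call load_state_dict_from_url(weights.value.url, ...)
        pass
    return model
```

The key design point being exercised is that weights carry their own preprocessing (`transforms`) and metadata (`meta`), so callers no longer hard-code those per model.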