Restructuring metrics meta-data #5859
Conversation
Adding some highlights to assist review:
"kitti_train_per_image_epe": 5.0172, | ||
"kitti_train_f1-all": 17.4506, | ||
"metrics": { | ||
"sintel_train_cleanpass_epe": 1.4411, |
I'm not terribly happy with this approach, but I left it as-is to get your input @NicolasHug. Happy to leave this for a follow-up PR, but here are some thoughts.
The need to provide accuracy metrics on multiple datasets is common and something we might face with other models (for example, in Classification we could give metrics for the ImageNet1K V1/V2/Real datasets). In such cases, we might want to provide a map of accuracies, one for each dataset. For example:
"metrics": {
# single metrics on the "best" available dataset (whichever this is for each weight) used for summarization
"epe": 1.4411,
"fl_all": 17.4506,
# metrics across different datasets. Naming TBD
"epe_breakdown": {
"sintel_train_cleanpass": 1.4411,
"sintel_train_finalpass": 2.7894,
"kitti_train_per_image_epe": 5.0172,
},
"fl_all_breakdown": {
"kitti_train": 17.4506,
},
}
We need to figure out this detail before the release, because making changes to the meta-data will be considered BC-breaking under our current definition.
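A minimal sketch of how a summarization tool could consume the proposed layout, assuming the keys shown above (epe, fl_all and the *_breakdown maps are part of the proposal, not of this PR):

def summarize(meta):
    metrics = meta["metrics"]
    # Headline numbers come from the single "best"-dataset metrics.
    print(f"epe={metrics['epe']}, fl_all={metrics['fl_all']}")
    # Per-dataset numbers come from the breakdown maps, when present.
    for dataset, value in metrics.get("epe_breakdown", {}).items():
        print(f"  {dataset}: epe={value}")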
I agree, this is very ad hoc for now. Happy to discuss this in the near future.
w.meta["acc@1"], | ||
w.meta["acc@5"], | ||
w.meta["metrics"]["acc@1"], | ||
w.meta["metrics"]["acc@5"], |
Fixing documentation.
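In case it helps review, a hedged sketch of the kind of change this implies on the docs side; build_table_rows is a hypothetical helper, and only the w.meta["metrics"][...] access pattern comes from the diff above:

def build_table_rows(weight_enum):
    # One row per weight entry, reading accuracies from the nested
    # "metrics" dict instead of the old top-level acc@1/acc@5 keys.
    rows = []
    for w in weight_enum:
        rows.append((str(w), w.meta["metrics"]["acc@1"], w.meta["metrics"]["acc@5"]))
    return rows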
Thanks @datumbox, LGTM.
The rendering of the docs is a bit strange now, but this is an easy fix that we can leave for later.
@NicolasHug Thanks for the review. You are right, it looks ugly; let's follow up in the next PR. Any pointers on where this should be fixed?
Summary:
* Restructuring metrics meta-data for detection, segmentation and optical flow.
* Renaming acc to pixel_acc for segmentation.
* Restructure video meta-data.
* Restructure classification and quantization meta-data.
* Fix tests.
* Fix documentation.

Reviewed By: jdsgomes, NicolasHug
Differential Revision: D36095663
fbshipit-source-id: a1261efe016026406aef8a3b4dd571fde7600c28
This PR restructures the way we store metrics in the meta-data: they are now nested under a dedicated "metrics" key on the meta-data.
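For downstream users, the practical effect is that metrics are read through the nested key. A hedged example, assuming the new multi-weight API's classification enums (e.g. ResNet50_Weights), which expose acc@1/acc@5 as in the diff above:

from torchvision.models import ResNet50_Weights

weights = ResNet50_Weights.IMAGENET1K_V1
# Old access (pre-PR): weights.meta["acc@1"]
# New access: metrics live under the dedicated "metrics" key.
top1 = weights.meta["metrics"]["acc@1"]
top5 = weights.meta["metrics"]["acc@5"]
print(top1, top5)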