
Conversation

Contributor

@datumbox datumbox commented Apr 21, 2022

This PR restructures how we store metrics in the meta-data:

  • Moves all model accuracy metrics into a `metrics` key in the meta-data.
  • Renames several metrics to improve naming consistency.
  • Adds new tests to enforce schema fields and keep the number of fields low.
  • Fixes the documentation.
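The restructure described above can be sketched as follows. This is a minimal illustration, not the actual torchvision code; the key names and values are assumptions based on the PR description:

```python
# Hedged sketch: illustrative only; key names/values are assumptions
# based on the PR description, not the real torchvision metadata.
old_meta = {
    "num_params": 25_557_032,
    "acc@1": 80.858,
    "acc@5": 95.434,
}

# After this PR, accuracy metrics live under a nested "metrics" key:
new_meta = {
    "num_params": 25_557_032,
    "metrics": {
        "acc@1": 80.858,
        "acc@5": 95.434,
    },
}

METRIC_KEYS = {"acc@1", "acc@5"}  # hypothetical set of metric field names

def restructure(meta):
    """Move known metric keys under a nested 'metrics' dict."""
    metrics = {k: v for k, v in meta.items() if k in METRIC_KEYS}
    rest = {k: v for k, v in meta.items() if k not in METRIC_KEYS}
    return {**rest, "metrics": metrics}

assert restructure(old_meta) == new_meta
```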

@datumbox datumbox marked this pull request as draft April 21, 2022 17:32
Contributor Author

@datumbox datumbox left a comment


Adding some highlights to assist review:

"kitti_train_per_image_epe": 5.0172,
"kitti_train_f1-all": 17.4506,
"metrics": {
"sintel_train_cleanpass_epe": 1.4411,
Contributor Author


I'm not terribly happy with this approach, but I left it as-is to get your input @NicolasHug. Happy to leave this for a follow-up PR, but here are some thoughts.

The need to provide accuracy metrics on multiple datasets is common and something we might face on other models (for example, in classification, giving metrics for the ImageNet1K V1/V2/Real datasets). In such cases, we might want to provide a map of accuracies, one per dataset. For example:

"metrics": {
   # single metrics on the "best" available dataset (whichever this is for each weight) used for summarization
  "epe": 1.4411,
  "fl_all": 17.4506,
  # metrics across different datasets. Naming TBD
  "epe_breakdown": {
    "sintel_train_cleanpass": 1.4411,
    "sintel_train_finalpass": 2.7894,
    "kitti_train_per_image_epe": 5.0172,
  },
  "fl_all_breakdown": {
    "kitti_train": 17.4506,
   },
}

We need to figure out this detail before the release, because making changes to the meta-data will be considered BC-breaking under our current definition.
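One way a consumer might handle such a schema is to separate the scalar summary metrics from the per-dataset breakdowns. This is a hedged sketch of the idea only; the `epe_breakdown` key is the tentative naming from the comment above, not a settled API:

```python
# Sketch of consuming the proposed nested schema. Key names such as
# "epe_breakdown" are tentative naming from the PR discussion, not final.
meta = {
    "metrics": {
        "epe": 1.4411,
        "epe_breakdown": {
            "sintel_train_cleanpass": 1.4411,
            "sintel_train_finalpass": 2.7894,
        },
    }
}

def headline_metrics(meta):
    """Return only scalar summary metrics, skipping per-dataset breakdowns."""
    return {k: v for k, v in meta["metrics"].items() if not isinstance(v, dict)}

assert headline_metrics(meta) == {"epe": 1.4411}
```

A summarization tool (e.g. a docs table generator) could then rely on `headline_metrics` without knowing which datasets each weight was evaluated on.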

Member


I agree, this is very ad hoc for now. Happy to discuss this in the near future.

w.meta["acc@1"],
w.meta["acc@5"],
w.meta["metrics"]["acc@1"],
w.meta["metrics"]["acc@5"],
Contributor Author


Fixing the documentation.

@datumbox datumbox requested a review from NicolasHug April 22, 2022 11:49
@datumbox datumbox marked this pull request as ready for review April 22, 2022 11:49
@datumbox datumbox changed the title [WIP] Restructuring metrics meta-data Restructuring metrics meta-data Apr 22, 2022
Member

@NicolasHug NicolasHug left a comment


Thanks @datumbox , LGTM.

The rendering of the docs is a bit strange now:

[screenshot of the docs rendering]

But this is an easy fix that we can leave for later.

@datumbox datumbox merged commit 6f016dd into pytorch:main Apr 22, 2022
@datumbox datumbox deleted the models/meta_metrics branch April 22, 2022 15:24
@datumbox
Contributor Author

@NicolasHug Thanks for the review. You're right, it looks ugly; let's follow up in the next PR. Any pointers on where this should be fixed?

facebook-github-bot pushed a commit that referenced this pull request May 6, 2022
Summary:
* Restructuring metrics meta-data for detection, segmentation and optical flow.

* Renaming acc to pixel_acc for segmentation

* Restructure video meta-data.

* Restructure classification and quantization meta-data.

* Fix tests.

* Fix documentation

Reviewed By: jdsgomes, NicolasHug

Differential Revision: D36095663

fbshipit-source-id: a1261efe016026406aef8a3b4dd571fde7600c28