WIP Add table summarizing classification weights accuracies in docs #5741


Closed
wants to merge 4 commits into from

Conversation

NicolasHug
Member

@NicolasHug NicolasHug commented Apr 5, 2022

Rendered docs: https://output.circle-artifacts.com/output/job/e411a4b2-f155-422e-bd08-e03f854aecc6/artifacts/0/docs/models.html

Table looks like this

[screenshot of the rendered weights table]

I'll make sure to add links from the weight names to their documentation pages, as in #5577.
Ideally the rows should be narrower, but I don't know how to do that yet. Not too big of a deal anyway.

@datumbox
Contributor

datumbox commented Apr 5, 2022

@NicolasHug It looks pretty good. I assume we can fix the parameter counts so they don't use scientific notation, right?

@NicolasHug
Member Author

I did it on purpose to directly give an idea of the order of magnitude, but I don't have a strong opinion; I'll revert.

Comment on lines +301 to +302
# TODO: this is ugly af and incorrect. We'll need an automatic way to
# retrieve weight enums for each section, or manually list them.
Member Author

Any thoughts on that, @datumbox?

Contributor

Yes, you are right. We need a registration mechanism for the models. The new Datasets API has one; part of the reason I didn't want to invest time creating a new one is that we could potentially adopt/extend the one from Datasets. Thoughts?

Member Author

The registration of the datasets is very basic: it's just a decorator that adds the callable/object to a private dict. It would probably make sense to use something similar for the models/weights. Whether we should rely on the same utils, though, is up for discussion; as a first version I'd suggest not merging the two, and having a separate implementation for the models. The code is really basic anyway.

Contributor

I'm happy with what you propose and with the technical details you mentioned. I had a similarly simple approach in the original proposal for Multi-weight support, but I didn't port it, hoping to adopt a solution in common with Datasets. The code doesn't have to be the same, but I think the interface can be basic and similar. As discussed offline, the only thing different for the models is that there is a hierarchy (Detection, Optical Flow, Classification, etc.), and this needs to be taken into account because names conflict across modules (for example, resnet50 exists in both the Classification and Quantization submodules).
