Update timm library to 0.4.12 #429


Merged: 1 commit into qubvel-org:master on Jul 4, 2021

Conversation

zurk (Contributor) commented on Jul 2, 2021

Closes issues #417 #424
Closes PR #418

Hi @qubvel!

I would like to update the timm library to make it compatible with the latest torch versions.
All code changes were needed to fix test errors; I explain each change in the self-review below to make it easier to follow what I did.

Please, if you are OK with this PR, merge it and publish a new version (I believe v0.2.0) of the segmentation-models-pytorch package.

@@ -1,4 +1,4 @@
-torchvision>=0.3.0
+torchvision>=0.5.0

 act_layer=Swish,
-norm_kwargs={},  # TODO: check

The norm_kwargs and channel_multiplier options no longer exist.
norm_kwargs was not used, so I just deleted it; channel_multiplier, however, can still be passed through the new round_chs_fn argument (sketched below).

See huggingface/pytorch-image-models@c4f482a#diff-27c2bbd967991cbb5264f93cb5da34895fdab02424b2cc8c63d3d0768e65d47aR490
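A minimal sketch of that workaround, assuming timm 0.4.12 exposes round_channels (with a multiplier keyword) in timm.models.efficientnet_builder and that the EfficientNet kwargs accept round_chs_fn; the variable names are illustrative:

```python
# Hypothetical sketch: forward the old channel_multiplier value via round_chs_fn.
from functools import partial

from timm.models.efficientnet_builder import round_channels  # assumed import path

channel_multiplier = 1.0  # the value previously passed as channel_multiplier

model_kwargs = dict(
    # ... other EfficientNet kwargs (block_args, stem_size, act_layer, ...)
    round_chs_fn=partial(round_channels, multiplier=channel_multiplier),
)
```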

@@ -74,7 +74,7 @@ def load_state_dict(self, state_dict, **kwargs):
 'block': SelectiveKernelBasic,
 'layers': [2, 2, 2, 2],
 'zero_init_last_bn': False,
-'block_args': {'sk_kwargs': {'min_attn_channels': 16, 'attn_reduction': 8, 'split_input': True}}
+'block_args': {'sk_kwargs': {'rd_ratio': 1/8, 'split_input': True}}

1. attn_reduction was renamed to rd_ratio and the value is inverted, so the old attn_reduction equals the new 1 / rd_ratio (see the sketch below).

   See huggingface/pytorch-image-models@307a935#diff-69a503e8326443f3698de117aa9ac39e8fbb9ce52dfa7783c6708fe6a38f6e87R52

2. min_attn_channels no longer exists. I did not find an easy way to keep it here, and I think it is fine to just delete it.

   See huggingface/pytorch-image-models@bda8ab0#diff-69a503e8326443f3698de117aa9ac39e8fbb9ce52dfa7783c6708fe6a38f6e87R52
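To make the mapping concrete, here is a small hypothetical helper (the function name and the assert are illustrative, not part of this PR):

```python
# Hypothetical helper: translate old-style sk_kwargs to the new timm naming.
def translate_sk_kwargs(old_kwargs):
    new_kwargs = dict(old_kwargs)
    if 'attn_reduction' in new_kwargs:
        # attn_reduction was a divisor; rd_ratio is its reciprocal.
        new_kwargs['rd_ratio'] = 1 / new_kwargs.pop('attn_reduction')
    # min_attn_channels has no direct equivalent anymore, so drop it.
    new_kwargs.pop('min_attn_channels', None)
    return new_kwargs

# The old config in this PR maps to the new one used above.
assert translate_sk_kwargs(
    {'min_attn_channels': 16, 'attn_reduction': 8, 'split_input': True}
) == {'rd_ratio': 1 / 8, 'split_input': True}
```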

qubvel merged commit ae4c0f8 into qubvel-org:master on Jul 4, 2021
xuzhuang1996 commented:

It works for me, timm 0.4.12.
