make_module: First version #23288


Closed · wants to merge 16 commits

Conversation

@z-a-f commented Jul 24, 2019

Stack from ghstack:

Differential Revision: D16455390

z-a-f pushed a commit that referenced this pull request Jul 24, 2019
ghstack-source-id: 77a318d
Pull Request resolved: #23288
@z-a-f (Author) commented Jul 24, 2019

Example:

import torch

# make_module is the helper introduced by this PR
mod = make_module(torch.add)
mod_quant = make_module(torch.add, quantized=True)

# Quantize two float scalars (quantize_linear was the quantization API at the time)
a = torch.tensor(1, dtype=torch.float)
qa = torch.quantize_linear(a, 1.0, 127, torch.quint8)
b = torch.tensor(2, dtype=torch.float)
qb = torch.quantize_linear(b, 1.0, 127, torch.quint8)

print("Types\t\t:", type(mod), type(mod_quant))
print("__repr__\t:", mod, mod_quant)
print("forward call\t:", mod(a, b))
print("forward call\t:", mod_quant(qa, qb, 1.0, 0))

Prints:

Types		: <class '__main__._WrapModule'> <class '__main__._WrapModule'>
__repr__	: Add() Add_quantized()
forward call	: tensor(3.)
forward call	: tensor(3., size=(), dtype=torch.quint8, scale=1.0, zero_point=0)

I can make the returned types be <class '__main__.Add'> and <class '__main__.Add_quantized'>, but that would require a slightly less readable version of the code :)
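
For reference, a minimal sketch of the "less readable" approach being alluded to (an assumption for illustration, not the code in this PR): type() can build a class with a dynamic name at runtime, so type(mod) reports __main__.Add instead of __main__._WrapModule.

import torch

def make_named_module(op, quantized=False):
    # Derive the class name from the wrapped op, e.g. torch.add -> "Add"
    name = op.__name__.capitalize() + ("_quantized" if quantized else "")

    def forward(self, *args, **kwargs):
        return op(*args, **kwargs)

    # type() creates a new class object with the given name at runtime
    cls = type(name, (torch.nn.Module,), {"forward": forward})
    return cls()

mod = make_named_module(torch.add)
print(type(mod))  # <class '__main__.Add'>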

@raghuramank100 (Contributor) left a comment

Please see suggestions

z-a-f pushed a commit that referenced this pull request Jul 24, 2019
ghstack-source-id: c23e019
Pull Request resolved: #23288
@z-a-f requested a review from raghuramank100 July 25, 2019 23:41
z-a-f pushed a commit that referenced this pull request Jul 25, 2019
ghstack-source-id: d7ae4f1
Pull Request resolved: #23288
@raghuramank100 (Contributor) left a comment

First draft, will add more comments later.

@dzhulgakov (Collaborator) left a comment

My biggest comment is that we need to figure out how to make this work with TorchScript, since you want the resulting model to be scriptable and runnable without Python. Given that, codegen (plus scripting the generated code) might be a more robust approach, if you have a closed set of modules to cover, which appears to be the case.
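
To make the concern concrete, here is a small hedged illustration (hypothetical, not this PR's code): an explicitly written module class, of the kind a codegen step could emit for each op in the closed set, compiles cleanly with torch.jit.script.

import torch

# An explicit class such as codegen could emit for torch.add
class Add(torch.nn.Module):
    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        return torch.add(a, b)

# torch.jit.script can compile this module, unlike a generic wrapper
# whose forward dispatches through an arbitrary Python callable
scripted = torch.jit.script(Add())
print(scripted(torch.tensor(1.0), torch.tensor(2.0)))  # tensor(3.)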

@z-a-f requested a review from dzhulgakov July 30, 2019 18:37
@pytorchbot added the module: nn (Related to torch.nn) label Jul 30, 2019
@raghuramank100 (Contributor) left a comment

Please see comments

@z-a-f requested a review from jerryzh168 July 30, 2019 22:50
@z-a-f requested a review from raghuramank100 July 30, 2019 23:03
z-a-f pushed a commit that referenced this pull request Jul 30, 2019
ghstack-source-id: 47712f4
Pull Request resolved: #23288
@jerryzh168 (Contributor) left a comment

LGTM, we can change to the base class solution in future PRs.
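
One possible reading of "the base class solution" (an assumption for illustration; not taken from this PR) is a shared wrapper base with small, explicitly named subclasses, which also yields readable types without runtime class creation:

import torch

# Shared wrapper base; subclasses pin the op and carry the readable name
class _WrapModule(torch.nn.Module):
    op = None  # overridden by subclasses

    def forward(self, *args, **kwargs):
        return type(self).op(*args, **kwargs)

class Add(_WrapModule):
    op = staticmethod(torch.add)

print(type(Add()))  # <class '__main__.Add'>
print(Add()(torch.tensor(1.0), torch.tensor(2.0)))  # tensor(3.)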

@zou3519 deleted the gh/zafartahirov/16/head branch July 31, 2019 05:17
@facebook-github-bot (Contributor)

@zafartahirov merged this pull request in 9c549df.

Labels: Merged, module: nn (Related to torch.nn) · 7 participants