How to perform quantization of my ONNX or PyTorch model #3570
Hi @WilliamZhaoz, for ONNX models you should be able to use the Loader or any of the example binaries that use the Loader (see https://github.com/pytorch/glow/blob/master/docs/Quantization.md#how-to-perform-nn-conversion). For PyTorch models this isn't currently available in torch_glow, but we have plans to add this feature. If you are interested in implementing it, please go for it; otherwise I will probably be able to do it next week.
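For reference, the quantization doc linked above describes a two-step, profile-guided flow with the Loader-based binaries. Below is a minimal sketch using the image-classifier binary; the model path, input name, and image files are placeholders to replace with your own, and the exact flags may vary between Glow versions.

```
# Step 1: run the model on representative inputs and record tensor ranges.
./bin/image-classifier calib_image.png -image-mode=0to1 \
    -m=my_model.onnx -model-input-name=data \
    -dump-profile="profile.yaml"

# Step 2: reload the model with the captured profile so Glow quantizes the
# graph before running inference.
./bin/image-classifier test_image.png -image-mode=0to1 \
    -m=my_model.onnx -model-input-name=data \
    -load-profile="profile.yaml"
```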
Thanks, Jack.
I have an ONNX model converted from a PyTorch model, but my task is not an image classification task. In that situation, how can I run my model, given that I only find an image classification interface in Glow?
@WilliamZhaoz apologies for the delayed response. You should be able to use ImageClassifier or TextTranslator as an example of using the Loader to load a model from file and run it.
Thanks, Jack.
So you mean that I can run my ONNX model by writing my own data interface and only rewriting the Loader, and that this will work even for a model that is not an ImageClassifier or a TextTranslator?
I'm not sure exactly what you mean. What I meant was that you can probably create a binary similar to ImageClassifier or TextTranslator for your task, and probably even reuse Loader.cpp to create the Glow graph for you.
@WilliamZhaoz You need to generate a bundle from that .onnx model using model-compiler; after that you can run the bundle using the main.cpp API.
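To make the bundle flow concrete, here is a hedged sketch; the file names are placeholders, and the model-compiler options shown (including passing a profile to get a quantized bundle) should be checked against your Glow build.

```
# Compile the ONNX model ahead of time into a bundle (object file + weights).
./bin/model-compiler -model=my_model.onnx -backend=CPU -emit-bundle=./bundle

# If your version supports it, pass a previously captured profile so the
# emitted bundle is quantized.
./bin/model-compiler -model=my_model.onnx -backend=CPU \
    -load-profile=profile.yaml -emit-bundle=./bundle_quantized

# Link the generated object file and weights into your own main.cpp, which
# calls the bundle's generated inference entry point (see the examples under
# examples/bundles/ in the Glow repo).
```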
Now I have my own PyTorch and ONNX model.
How can I quantize it with Glow from a Python API, and then how can I run inference on it in Glow?
Is there any clear doc?
Thanks.