Export TorchTune llama3_2_vision in ET #5911
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/5911
Note: Links to docs will display an error until the docs builds have been completed.
❗ 1 Active SEV: there is 1 currently active SEV. If your PR is affected, please view it below.
✅ No Failures as of commit 9777e23 with merge base 4e83f24.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Force-pushed from 07b8a4c to 20355d7
Force-pushed from 20355d7 to 850e7c8
Force-pushed from 850e7c8 to e850430
Force-pushed from e850430 to af45b02
Force-pushed from af45b02 to f9c001a
Force-pushed from 03779eb to 4a09ff1
Summary: For situations where the forward has non-positional arguments, such as https://github.com/pytorch/torchtune/blob/3c450ef5f1fbe8237f899e942fd5222491a47ca7/torchtune/modules/transformer.py#L519

PR chain:
- **YOU ARE HERE ~>** [Add kwarg example inputs to eager model base](#5765)
- [Llama2 model cleanup](#5859)
- [Accept model type parameter in export_llama](#5910)
- [Export TorchTune llama3_2_vision in ET](#5911)
- [Add et version of TorchTune MHA for swapping with custom op](#5912)

Pull Request resolved: #5765

Test Plan: Exported the Stories110M model.

```
wget "https://huggingface.co/karpathy/tinyllamas/resolve/main/stories110M.pt"
echo '{"dim": 768, "multiple_of": 32, "n_heads": 12, "n_layers": 12, "norm_eps": 1e-05, "vocab_size": 32000}' > params.json
python -m examples.models.llama2.export_llama -c stories110M.pt -p params.json -X -kv
```

Reviewed By: tarun292

Differential Revision: D64027696

Pulled By: dvorjackz

fbshipit-source-id: 15ecfb458c6194159140d4c601e5443a2e524fdc
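The change described above can be sketched in plain Python. This is a minimal illustration with assumed names (`EagerModelBase`, `get_example_kwarg_inputs`, `trace_for_export` are illustrative, not the actual ExecuTorch API): the eager model base gains a kwarg-inputs hook alongside the positional one, so models whose `forward` takes keyword-only arguments can still be fed example inputs at export time.

```python
# Sketch with assumed names, not the actual ExecuTorch API: the eager
# model base exposes keyword example inputs next to positional ones.
class EagerModelBase:
    def get_eager_model(self):
        raise NotImplementedError

    def get_example_inputs(self):
        """Positional example inputs, as a tuple."""
        raise NotImplementedError

    def get_example_kwarg_inputs(self):
        """Keyword example inputs; empty default keeps old models working."""
        return {}


class ToyModel(EagerModelBase):
    def get_eager_model(self):
        # forward(tokens, *, input_pos=0) — keyword-only argument
        return lambda tokens, *, input_pos=0: [t + input_pos for t in tokens]

    def get_example_inputs(self):
        return ([1, 2, 3],)

    def get_example_kwarg_inputs(self):
        return {"input_pos": 10}


def trace_for_export(wrapper):
    """Stand-in for the export entry point: invokes forward with both
    positional and keyword example inputs."""
    model = wrapper.get_eager_model()
    return model(*wrapper.get_example_inputs(),
                 **wrapper.get_example_kwarg_inputs())
```

In the real flow, the export machinery would pass the kwargs dict through to `torch.export` rather than calling the model directly.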
Summary:
- Removes redundant steps in the Llama2 export
- Factors out checkpointing to be shared with future Llama models (namely 3.2 multimodal)
- Comments and orders code more clearly

PR chain:
- [Add kwarg example inputs to eager model base](#5765)
- **YOU ARE HERE ~>** [Llama2 model cleanup](#5859)
- [Accept model type parameter in export_llama](#5910)
- [Export TorchTune llama3_2_vision in ET](#5911)
- [Add et version of TorchTune MHA for swapping with custom op](#5912)

Test Plan: Ensure export + eval is similar before and after for Stories 110M:

```
python -m examples.models.llama2.eval_llama -c <checkpoint.pth> -p <params.json> -t <tokenizer.model/bin> -d fp32 --max_seq_len 2048 --limit 1000
```

Before:

```
wikitext: {'word_perplexity,none': 14464.645927166595, 'word_perplexity_stderr,none': 'N/A', 'byte_perplexity,none': 5.99788806086652, 'byte_perplexity_stderr,none': 'N/A', 'bits_per_byte,none': 2.5844545973083983, 'bits_per_byte_stderr,none': 'N/A', 'alias': 'wikitext'}
```

After:

```
wikitext: {'word_perplexity,none': 14464.299192404438, 'word_perplexity_stderr,none': 'N/A', 'byte_perplexity,none': 5.997861173678705, 'byte_perplexity_stderr,none': 'N/A', 'bits_per_byte,none': 2.584448130015399, 'bits_per_byte_stderr,none': 'N/A', 'alias': 'wikitext'}
```

Reviewed By: dbort

Differential Revision: D64145852

Pulled By: dvorjackz
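As a sanity check on the eval output above: in the lm-eval harness, `bits_per_byte` is the base-2 log of `byte_perplexity`, so the two reported runs can be verified to be internally consistent:

```python
import math

# bits_per_byte = log2(byte_perplexity); check both eval runs reported above.
before = {"byte_perplexity": 5.99788806086652,
          "bits_per_byte": 2.5844545973083983}
after = {"byte_perplexity": 5.997861173678705,
         "bits_per_byte": 2.584448130015399}

for run in (before, after):
    derived = math.log2(run["byte_perplexity"])
    # Agreement to ~1e-6 confirms the metrics come from the same run.
    assert abs(derived - run["bits_per_byte"]) < 1e-6, (derived, run)
```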
Force-pushed from dc889b9 to eb49bcf
Force-pushed from eb49bcf to b1d2327
Summary: Specify the model to export in the CLI.

Test Plan: Exported the stories 110M model.

```
python -m examples.models.llama.export_llama -c stories110M/stories110M.pt -p stories110M/params.json -X -kv
```

PR chain:
- [Add kwarg example inputs to eager model base](#5765)
- [Llama2 model cleanup](#5859)
- **YOU ARE HERE ~>** [Accept model type parameter in export_llama](#5910)
- [Export TorchTune llama3_2_vision in ET](#5911)
- [Runner changes for TorchTune Llama3.2 vision text decoder](#6610)
- [Add et version of TorchTune MHA for swapping with custom op](#5912)

Reviewed By: helunwencser

Differential Revision: D65612837

Pulled By: dvorjackz
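Accepting a model type in the CLI can be sketched with `argparse`. The flag name and the list of choices here are assumptions for illustration; the actual `export_llama` argument names may differ:

```python
import argparse

# Hypothetical sketch of a model-type flag for an export CLI; the real
# export_llama flag name and model registry may differ.
EXPORTABLE_MODELS = ["llama2", "llama3", "llama3_2_vision"]

def build_parser():
    parser = argparse.ArgumentParser(description="Export a llama model")
    parser.add_argument("--model", choices=EXPORTABLE_MODELS,
                        default="llama2",
                        help="which model definition to export")
    parser.add_argument("-c", "--checkpoint",
                        help="path to the model checkpoint (.pt)")
    parser.add_argument("-p", "--params",
                        help="path to params.json")
    return parser

# Selecting the vision model explicitly:
args = build_parser().parse_args(["--model", "llama3_2_vision"])
```

With a `default` set, existing invocations that omit the flag keep their old behavior.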
Force-pushed from b1d2327 to d8bfa6f
Force-pushed from d8bfa6f to dbd9139
Force-pushed from c79b773 to 2fe7bd8
Review thread on the line `self.model_ = prune_output_vocab(self.model_, output_prune_map)` and the commented-out `# if self.use_kv_cache:` below it:
Remove?
Will be uncommented in #6643
@@ -951,7 +954,9 @@ def _load_llama_model(
use_kv_cache,
use_sdpa_with_kv_cache,
enable_dynamic_shape,
model.params,
model.max_seq_len,
Previously all the models were getting it from `model.params`. Is it guaranteed that all models will have these available on them as attributes directly? Hopefully CI catches it if they don't.
Yeah, hopefully CI catches it, but it should be okay since there are only two `model.py`s using this `export_llama_lib` atm and both have it defined.
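The concern in this thread can be made concrete with a small illustration. This is not the actual `_load_llama_model` code; the helper name and classes are hypothetical. A defensive loader could read the attribute off the model directly and fall back to a `params` object when a model hasn't defined the flat attribute yet:

```python
# Illustrative only (hypothetical helper, not ExecuTorch code): read
# config directly off the model, falling back to a `params` object.
def get_model_attr(model, name):
    if hasattr(model, name):
        return getattr(model, name)
    params = getattr(model, "params", None)
    if params is not None and hasattr(params, name):
        return getattr(params, name)
    raise AttributeError(f"{type(model).__name__} defines neither "
                         f"'{name}' nor 'params.{name}'")

class OldStyleParams:
    max_seq_len = 2048

class OldStyleModel:
    # Older models keep configuration on a params struct.
    params = OldStyleParams()

class NewStyleModel:
    # Newer models expose the value as a flat attribute.
    max_seq_len = 4096
```

The PR instead relies on CI to flag models that lack the flat attribute, which is simpler as long as only a couple of `model.py`s use the library.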
Also can you add it to
@dvorjackz has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Summary
Add `llama3_2_vision`'s text decoder as a TorchTune-exportable model. KV cache (commented out) and quantization are not supported yet.
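Conceptually, making the text decoder "TorchTune-exportable" amounts to registering it under a model name so the export CLI can look it up and instantiate it. The registry mechanism and class name below are hypothetical, sketched only to show the shape of the change; the real ExecuTorch registration differs:

```python
# Hypothetical registry sketch — the real ExecuTorch model registration
# mechanism differs; this only illustrates the shape of the change.
MODEL_REGISTRY = {}

def register_model(name):
    """Decorator that maps a CLI model name to its wrapper class."""
    def wrap(cls):
        MODEL_REGISTRY[name] = cls
        return cls
    return wrap

@register_model("llama3_2_vision")
class Llama3_2VisionTextDecoder:
    # Per the summary above: KV cache support is commented out and
    # quantization is not supported yet in this PR.
    kv_cache_supported = False
    quantization_supported = False

def load_model(name):
    """Instantiate the model registered under `name`."""
    return MODEL_REGISTRY[name]()
```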
Test plan
Tested with the next PR in this chain and the instructions in its description: #6610
PR chain: