convert : fix vocab size when not defined in hparams #3421
Merged
If `vocab_size` is somehow missing from `config.json`, or (as in the previous GPT-NeoX script) ignored entirely, we can end up with `vocab_size` less than `len(reverse_vocab)`, even though the purpose of `vocab_size` is to enlarge the vocabulary with padding tokens.

Use `len(tokenizer.vocab)` instead of attempting to interpret the JSON directly, so that added tokens are accounted for. Also, add the missing hparams check to the GPT-NeoX script.

With this change, GPT-NeoX now attempts to use added tokens, though it fails for the reasons described in PR #3405. Before this change, it wasn't even trying.
cc @goerch (yes, I know this conflicts with your PR)