examples/models/llama2/README.md: 24 additions & 4 deletions
```diff
@@ -49,7 +49,7 @@ We employed 4-bit groupwise per token dynamic quantization of all the linear layers
 We evaluated WikiText perplexity using [LM Eval](https://github.com/EleutherAI/lm-evaluation-harness). Please note that LM Eval reports perplexity normalized by word count instead of token count. You may see a different perplexity for WikiText from other sources if they implement it differently. More details can be found [here](https://github.com/EleutherAI/lm-evaluation-harness/issues/2301).
-Below are the results for two different groupsizes, with max_seq_len 2048, and 1000 samples.
+Below are the results for two different groupsizes, with max_seq_length 2048, and limit 1000.
```
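For reference, a minimal sketch of how comparable WikiText perplexity numbers could be collected with the lm-evaluation-harness CLI. The pretrained checkpoint name and the `max_length` model argument below are illustrative assumptions; this repo's own eval script may use a different entry point and flags.

```bash
# Sketch only: an lm-evaluation-harness run approximating the settings above
# (max_seq_length 2048, limit 1000). The checkpoint and max_length model_arg
# are illustrative assumptions, not this repo's exact evaluation flow.
pip install lm-eval

lm_eval --model hf \
  --model_args pretrained=meta-llama/Llama-2-7b-hf,max_length=2048 \
  --tasks wikitext \
  --limit 1000
```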