From 80f57a552859ea0164489abd49d543a920ab8d65 Mon Sep 17 00:00:00 2001
From: Mengtao Yuan
Date: Thu, 18 Apr 2024 13:09:33 -0700
Subject: [PATCH] Update README.md on the evaluation parameters

---
 examples/models/llama2/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/examples/models/llama2/README.md b/examples/models/llama2/README.md
index 4448f1cba42..fdc7832dbcb 100644
--- a/examples/models/llama2/README.md
+++ b/examples/models/llama2/README.md
@@ -22,7 +22,7 @@ Since 7B Llama2 model needs at least 4-bit quantization to fit even within some
 ## Quantization:
 We employed 4-bit groupwise per-token dynamic quantization of all the linear layers of the model. Dynamic quantization refers to quantizing activations dynamically, such that quantization parameters for activations are calculated, from min/max range, at runtime. Here we quantized activations with 8 bits (signed integer). Furthermore, weights are statically quantized. In our case weights were per-channel groupwise quantized with a 4-bit signed integer. For more information refer to this [page](https://github.com/pytorch-labs/ao/).

-We evaluated WikiText perplexity using [LM Eval](https://github.com/EleutherAI/lm-evaluation-harness). Below are the results for two different groupsizes.
+We evaluated WikiText perplexity using [LM Eval](https://github.com/EleutherAI/lm-evaluation-harness). Below are the results for two different groupsizes, with max_seq_len 2048, and 1000 samples:

 |Llama 2 | Baseline (FP32) | Groupwise 4-bit (128) | Groupwise 4-bit (256)
 |--------|-----------------| ---------------------- | ---------------
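The groupwise weight quantization that the patched README describes (each group of consecutive weights sharing one scale, values stored as 4-bit signed integers) can be sketched in plain Python. This is a hypothetical illustration, not the actual implementation, which lives in pytorch-labs/ao; function names and the symmetric-scale choice here are assumptions.

```python
# Sketch of symmetric per-group 4-bit weight quantization (illustrative only;
# the real kernels are in https://github.com/pytorch-labs/ao/).

def quantize_groupwise_4bit(weights, group_size=128):
    """Each group of `group_size` consecutive weights shares one scale;
    quantized values are clamped to the 4-bit signed range [-8, 7]."""
    qweights, scales = [], []
    for start in range(0, len(weights), group_size):
        group = weights[start:start + group_size]
        max_abs = max(abs(w) for w in group) or 1e-12  # avoid divide-by-zero
        scale = max_abs / 7.0  # symmetric: map max |w| to 7
        scales.append(scale)
        qweights.extend(max(-8, min(7, round(w / scale))) for w in group)
    return qweights, scales

def dequantize_groupwise(qweights, scales, group_size=128):
    """Recover approximate weights: each int is rescaled by its group's scale."""
    return [q * scales[i // group_size] for i, q in enumerate(qweights)]

# Toy example: 256 weights -> two groups at the README's groupsize of 128.
weights = [0.05, -0.7, 0.3, 0.02] * 64
q, s = quantize_groupwise_4bit(weights)
recon = dequantize_groupwise(q, s)
err = max(abs(a - b) for a, b in zip(weights, recon))
print(f"groups: {len(s)}, max reconstruction error: {err:.4f}")
```

A larger groupsize (e.g. 256) means fewer scales to store, but each scale must cover a wider range of weights, which is why the README reports perplexity at both groupsizes.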