diff --git a/README.md b/README.md
index c4d050d..aed1cfe 100644
--- a/README.md
+++ b/README.md
@@ -23,7 +23,7 @@ If you're planning on using any API-based models, make sure you define your rele
 The images and text are stored on the HuggingFace hub, as a .zip. You may download it directly from there, using `huggingface-cli` (recommended):
 
 ```bash
-huggingface-cli download answerdotai/ReadBench readbench.zip --repo-type dataset
+huggingface-cli download answerdotai/ReadBench readbench.zip --repo-type dataset --local-dir .
 ```
 
 Alternatively, if you are unable to use `huggingface-cli`, you may use the direct download URL, as provided by HuggingFace:
@@ -42,15 +42,16 @@ unzip readbench.zip
 The authors of GPQA have requested that the dataset should not be reshared as-is, to minimise model contamination. We follow their wishes, which means you need to generate the GPQA images yourself, based on the original GPQA dataset. You can do so by running the following command:
 
 ```bash
-python data_prep.py --datasets gpqa
+python datagen.py --datasets gpqa
 ```
+You might get an error that the dataset is gated and that you need to accept its terms on the HF hub. To resolve this, follow the link, accept the terms, and try again.
 
 5. **Prepare the benchmark**
 
 You may now run the following command to prepare the metadata file which will be used to run the benchmark:
 
 ```bash
-python downsampler.py --root rendered_images_ft12 --split standard
+python data_prep.py --root rendered_images_ft12 --split standard
 ```
 
 #### tl;dr
@@ -58,10 +59,10 @@ Running the commands below will download and prepare the full ReadBench benchmark, as used in the paper:
 
 ```bash
-huggingface-cli download answerdotai/ReadBench readbench.zip --type dataset
+huggingface-cli download answerdotai/ReadBench readbench.zip --repo-type dataset --local-dir .
 unzip readbench.zip
-python data_prep.py --datasets gpqa
-python downsampler.py --root rendered_images_ft12 --split standard
+python datagen.py --datasets gpqa
+python data_prep.py --root rendered_images_ft12 --split standard
 ```
 
 
diff --git a/eval_config/dataset2prompt.json b/eval_config/dataset2prompt.json
index 39073a2..df9f7cb 100644
--- a/eval_config/dataset2prompt.json
+++ b/eval_config/dataset2prompt.json
@@ -2,5 +2,5 @@
     "narrativeqa": "You are given a story, which can be either a novel or a movie script, and a question. Answer the question as concisely as you can, using a single phrase if possible. Do not provide any explanation.\n\nStory: {context}\n\nNow, answer the question based on the story as concisely as you can, using a single phrase if possible. Do not provide any explanation.\n\nQuestion: {input}\n\nAnswer:",
     "hotpotqa": "Answer the question based on the given passages. Only give me the answer and do not output any other words.\n\nThe following are given passages.\n{context}\n\nAnswer the question based on the given passages. Only give me the answer and do not output any other words.\n\nQuestion: {input}\nAnswer:",
     "2wikimqa": "Answer the question based on the given passages. Only give me the answer and do not output any other words.\n\nThe following are given passages.\n{context}\n\nAnswer the question based on the given passages. Only give me the answer and do not output any other words.\n\nQuestion: {input}\nAnswer:",
-    "triviaqa": "Answer the question based on the given passage. Only give me the answer and do not output any other words. The following are some examples.\n\n{context}\n\n{input}",
+    "triviaqa": "Answer the question based on the given passage. Only give me the answer and do not output any other words. The following are some examples.\n\n{context}\n\n{input}"
 }
\ No newline at end of file
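Reviewer note on the `dataset2prompt.json` change: standard JSON (RFC 8259) forbids a trailing comma after the last object member, so the file as it stood would be rejected by strict parsers such as Python's `json` module. A minimal sketch of the failure mode and the fixed behaviour (prompt strings abbreviated here for illustration; the real ones are in the diff above):

```python
import json

# Before the change: trailing comma after the last member, as in
#   "triviaqa": "...",
# }
# Strict JSON parsers reject this.
invalid = '{\n  "triviaqa": "Answer: {input}",\n}'
try:
    json.loads(invalid)
except json.JSONDecodeError as e:
    print(f"rejected: {e.msg}")

# After the change: the file parses, and the prompt template can be
# filled in with str.format, matching the {context}/{input} placeholders.
valid = '{\n  "triviaqa": "Answer the question.\\n\\n{context}\\n\\n{input}"\n}'
prompts = json.loads(valid)
print(prompts["triviaqa"].format(context="PASSAGE", input="QUESTION"))
```

Note that `{context}` and `{input}` survive JSON decoding as literal braces, which is why a plain `format` call works on the decoded string.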