Hi,
I am trying to fine-tune a Llama model with a large context size, and I found that to efficiently shard activations across multiple GPUs, I need to use Torchtitan. Here are some questions related to my setup:
See related issue: meta-llama/llama-recipes#785
- **Custom Dataset Usage**: I created a custom dataset using parquet files and a `custom_dataset.py` file, which is compatible with `llama-recipes`. I'm also using the `DEFAULT_CHATML_CHAT_TEMPLATE`. Could you please provide guidance on how to integrate and use this custom dataset effectively with Torchtitan? (A sketch of my current pipeline is below.)
- **Fine-Tuning with a Pretrained Model**: Is it possible to fine-tune the model starting from a pretrained checkpoint? If so, are there specific steps or configurations needed to achieve this with Torchtitan? (I've sketched what I imagine the checkpoint conversion looks like below.)
- **Model Support (Llama-3.2-1B)**: I noticed that Torchtitan currently supports training Llama 3 models (8B, 70B) out of the box. What steps would I need to take if I wanted to train `meta-llama/Llama-3.2-1B` specifically? (My guess at the required config entry is sketched below.)
- **Large Context and FSDP Limitation**: I am unable to use FSDP on its own because of the large context sizes I'm working with. Any additional guidance on handling large contexts effectively with Torchtitan would be appreciated. (A rough sketch of the parallelism layout I have in mind is below.)
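
For reference on the first point, here is a minimal sketch of the pipeline I currently use with `llama-recipes`. The parquet path, the `messages` column, the tokenizer name, and the spelled-out ChatML template are placeholders/paraphrases of my actual setup; what I'd like to know is what the equivalent dataset hook looks like in Torchtitan.

```python
# Minimal sketch of my current llama-recipes-style dataset (paths, column names,
# and the template string are placeholders for my real setup).
from datasets import load_dataset
from transformers import AutoTokenizer

# Paraphrase of the ChatML template I use (DEFAULT_CHATML_CHAT_TEMPLATE in llama-recipes).
CHATML_TEMPLATE = (
    "{% for message in messages %}"
    "{{ '<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>\n' }}"
    "{% endfor %}"
)

def get_custom_dataset(split="train"):
    tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-1B")
    tokenizer.chat_template = CHATML_TEMPLATE

    # Each parquet row has a "messages" column: a list of {"role", "content"} dicts.
    dataset = load_dataset("parquet", data_files={"train": "data/train.parquet"}, split=split)

    def tokenize(example):
        text = tokenizer.apply_chat_template(example["messages"], tokenize=False)
        ids = tokenizer(text, add_special_tokens=False)["input_ids"]
        return {"input_ids": ids, "labels": list(ids)}

    return dataset.map(tokenize, remove_columns=dataset.column_names)
```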
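
On the pretrained-checkpoint question, my current (possibly wrong) understanding is that the Meta/HF weights would first need to be converted into a `torch.distributed.checkpoint` (DCP) layout before a DCP-based trainer can resume from them. Something like the sketch below; the paths are placeholders, and whether Torchtitan expects this exact state-dict key layout is precisely what I'm unsure about.

```python
# Sketch: convert a single-file torch.save checkpoint of the pretrained weights
# into a DCP directory. Paths are placeholders; whether the resulting key layout
# matches what Torchtitan expects when resuming is my open question.
from torch.distributed.checkpoint.format_utils import torch_save_to_dcp

torch_save_to_dcp(
    "Llama-3.2-1B/consolidated.00.pth",  # original Meta checkpoint (placeholder path)
    "outputs/checkpoint/step-0",         # DCP folder to resume from (placeholder path)
)
```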
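
For the 1B model, my guess is that it mainly needs a new entry in the model-args registry, mirroring the existing 8B entry. The sketch below uses the published Llama-3.2-1B hyperparameters; the `ModelArgs` / `llama3_configs` names are just how I read the repo, so please correct me if the registration point is different.

```python
# Guess at a Llama-3.2-1B entry, mirroring the existing "8B" config.
# Hyperparameters are the published 3.2-1B ones; the ModelArgs/llama3_configs
# names reflect my reading of the repo and may not match exactly.
from torchtitan.models.llama import ModelArgs, llama3_configs

llama3_configs["3.2-1B"] = ModelArgs(
    dim=2048,
    n_layers=16,
    n_heads=32,
    n_kv_heads=8,
    ffn_dim_multiplier=1.5,
    multiple_of=256,
    rope_theta=500000,
)
```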
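
To make the last point concrete, what I have in mind is a 2D layout where parameters are sharded along a data-parallel dimension while the long sequences (activations) are split along a context-parallel dimension. Below is a rough sketch of the mesh setup I'm imagining, using the FSDP2 `fully_shard` API as I understand it and a toy module in place of the Llama model; how the `cp` dimension should actually be wired into attention in Torchtitan is exactly what I'm asking about.

```python
# Rough sketch: 8 GPUs split into 2-way data parallel x 4-way context parallel
# (numbers are illustrative; launch with torchrun --nproc_per_node=8).
# Parameters get sharded over "dp" only; sharding the sequence dimension over
# "cp" inside attention is the part I don't know how to do in Torchtitan.
import os

import torch
import torch.nn as nn
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed._composable.fsdp import fully_shard

torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))
mesh = init_device_mesh("cuda", (2, 4), mesh_dim_names=("dp", "cp"))

# Toy stand-in for the Llama transformer blocks.
model = nn.TransformerEncoderLayer(d_model=2048, nhead=32, batch_first=True).cuda()

# Shard parameters across the data-parallel dimension only (FSDP2 fully_shard).
fully_shard(model, mesh=mesh["dp"])
```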
Thank you for your help!