Fine-Tuning a Llama Model with a Large Context and a Custom Dataset Using Torchtitan #677

@Amerehei

Description

Hi,

I am trying to fine-tune a Llama model with a large context size, and it looks like I need Torchtitan to shard activations efficiently across multiple GPUs. Here are some questions about my setup:

See related issue: meta-llama/llama-recipes#785

  1. Custom Dataset Usage
    I created a custom dataset from parquet files with a custom_dataset.py file that is compatible with llama-recipes, and I am using the DEFAULT_CHATML_CHAT_TEMPLATE. Could you provide guidance on how to integrate this custom dataset with Torchtitan? (A sketch of my current preprocessing follows this list.)

  2. Fine-Tuning with Pretrained Model
    Is it possible to fine-tune starting from a pretrained checkpoint? If so, what steps or configurations are needed to achieve this with Torchtitan? (After this list I sketch what I would naively try.)

  3. Model Support (Llama-3.2-1B)
    I noticed that Torchtitan currently supports training Llama 3 models (8B, 70B) out of the box. What steps would I need to take to train meta-llama/Llama-3.2-1B specifically? (My guess at the required config entry is also sketched after this list.)

  4. Large Context and FSDP Limitation
    FSDP alone is not enough for the context sizes I am working with, since it is the activations rather than the parameters that need to be sharded. Any additional guidance on handling long contexts effectively with Torchtitan would be appreciated.
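
To make question 1 concrete, here is a minimal sketch of the preprocessing I currently run on the llama-recipes side. It assumes a Hugging Face datasets/transformers stack, a "messages" column of role/content dicts in the parquet files, and a stand-in for DEFAULT_CHATML_CHAT_TEMPLATE; I understand it would still need to be adapted to Torchtitan's own dataset wrapper.

```python
# Sketch of my current parquet + ChatML preprocessing. The schema ("messages"
# column) and the template are specific to my setup; the template below is a
# stand-in for DEFAULT_CHATML_CHAT_TEMPLATE from llama-recipes.
from datasets import load_dataset
from transformers import AutoTokenizer

CHATML_TEMPLATE = (
    "{% for message in messages %}"
    "{{ '<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>\n' }}"
    "{% endfor %}"
)

def build_tokenized_dataset(parquet_files, model_name, max_seq_len):
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    tokenizer.chat_template = CHATML_TEMPLATE
    # Depending on the tokenizer, <|im_start|>/<|im_end|> may need to be
    # registered as additional special tokens.

    ds = load_dataset("parquet", data_files=parquet_files, split="train")

    def tokenize(sample):
        text = tokenizer.apply_chat_template(sample["messages"], tokenize=False)
        ids = tokenizer(text, truncation=True, max_length=max_seq_len)["input_ids"]
        return {"input_ids": ids, "labels": list(ids)}

    return ds.map(tokenize, remove_columns=ds.column_names)
```

What I am unsure about is whether something like this should be wired into Torchtitan's Hugging Face dataset wrapper, and how samples should be packed or collated for very long sequences.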
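
For question 2, this is roughly what I would try naively with plain PyTorch to warm-start the model before the training loop. The checkpoint path, the prefix remap, and strict=False are my assumptions; I expect the Meta/HF parameter names need a proper mapping onto Torchtitan's module names.

```python
# Naive warm-start sketch (my assumption, not an established Torchtitan flow):
# load a pretrained state dict on CPU and copy whatever matches into the model.
import torch

def load_pretrained(model: torch.nn.Module, ckpt_path: str) -> None:
    state_dict = torch.load(ckpt_path, map_location="cpu", weights_only=True)

    # Illustration only: real Meta/HF checkpoints use different parameter names
    # (e.g. q_proj vs. wq), so a real mapping table is needed here.
    remapped = {k.removeprefix("model."): v for k, v in state_dict.items()}

    missing, unexpected = model.load_state_dict(remapped, strict=False)
    if missing or unexpected:
        print(f"missing: {missing}\nunexpected: {unexpected}")
```

If Torchtitan already has a supported way to start from a pretrained checkpoint, I would much rather use that.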
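
For question 3, my guess is that supporting the 1B variant mostly comes down to registering a new entry next to the existing 8B/70B configs. The import path and field names below are assumptions about Torchtitan's Llama model args; the values are copied from the published Llama-3.2-1B params.json, so please correct me if any of this is off.

```python
# My guess at a "1B" entry alongside the existing 8B/70B configs. Import path
# and field names are assumptions about Torchtitan's internals; the values
# mirror Llama-3.2-1B's params.json and should be double-checked.
from torchtitan.models.llama import ModelArgs, llama3_configs  # assumed location

llama3_configs["1B"] = ModelArgs(
    dim=2048,                # hidden size
    n_layers=16,
    n_heads=32,
    n_kv_heads=8,            # grouped-query attention
    ffn_dim_multiplier=1.5,  # together with multiple_of, gives the 8192-wide FFN
    multiple_of=256,
    rope_theta=500_000,
    # vocab_size is presumably taken from the tokenizer at runtime
)
```

I am also not sure how to handle the tied input/output embeddings and the scaled RoPE that Llama 3.2 uses, and I assume the Llama-3.2-1B tokenizer and weights would then have to be pointed at in the training config, which ties back to question 2.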

Thank you for your help!

Labels: enhancement (New feature or request), question (Further information is requested)
