Conversation

baoleai (Collaborator) commented Sep 25, 2024

PR type

- [x] Bug Fix
- [ ] New Feature
- [ ] Document Updates
- [ ] More Models or Datasets Support

PR information

Set find_labels and can_return_loss after ta_accelerate, because ta_accelerate wraps the original model into a DistributedParallel module in TorchAcc.
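
A minimal sketch of why the ordering matters, assuming HF-Trainer-style helpers that read label names from the model's `forward()` signature. `ToyModel`, the simplified `DistributedParallel` stand-in, and both helper functions below are illustrative, not TorchAcc's actual code:

```python
import inspect

import torch.nn as nn


def find_labels(model_class):
    # Mirror HF-style signature inspection: label names are the
    # forward() parameters whose names contain "label".
    signature = inspect.signature(model_class.forward)
    return [name for name in signature.parameters if "label" in name]


def can_return_loss(model_class):
    # A model can compute a loss only if forward() accepts labels.
    return len(find_labels(model_class)) > 0


class ToyModel(nn.Module):
    def forward(self, input_ids, labels=None):
        ...


class DistributedParallel(nn.Module):
    # Stand-in for the wrapper produced by ta_accelerate; its generic
    # forward(*args, **kwargs) hides the wrapped model's signature.
    def __init__(self, module):
        super().__init__()
        self.module = module

    def forward(self, *args, **kwargs):
        return self.module(*args, **kwargs)


model = DistributedParallel(ToyModel())

print(find_labels(ToyModel))     # ['labels']
print(find_labels(type(model)))  # []  -- signature info lost after wrapping
```

Since inspecting the wrapper class yields nothing, the trainer's label names and can_return_loss flag have to be derived from the original model class, which is presumably why this PR moves their assignment to after the ta_accelerate call rather than recomputing them from the wrapped model.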

baoleai changed the title "[TorchAcc] fix: fix find_labels and can_return_loss." to "[TorchAcc] fix: fix find_labels and can_return_loss" on Sep 25, 2024
baoleai merged commit cd03fee into modelscope:main on Sep 25, 2024 (2 checks passed)
baoleai deleted the features/fix_find_labels branch on September 25, 2024 08:30
tastelikefeet added a commit to tastelikefeet/swift that referenced this pull request Sep 26, 2024
* commit '57b3b9e46aa01bdc5c29b5e3d1e2da0582c9b282': (23 commits)
  fix not impl bug (modelscope#2134)
  Support fine-tuning MLLama. (modelscope#2132)
  Support for fine-tuning and deployment of the Llama 3.2 series models. (modelscope#2130)
  support got-ocr2 (modelscope#2123)
  [TorchAcc] fix: fix find_labels and can_return_loss (modelscope#2120)
  fix qwen2-audio (modelscope#2116)
  Fix qwen2-vl zero2/3 (modelscope#2114)
  support vllm & qwen2-vl video (modelscope#2110)
  Support for fine-tuning Llama 3.1 Omni. (modelscope#2106)
  fix infer device_map (modelscope#2105)
  fix cpu infer device_map (modelscope#2103)
  fix dataset preprocess (modelscope#2102)
  fix deploy openai compat (modelscope#2101)
  Fix the issue with media_offset in owl3 when batch_size > 1. (modelscope#2100)
  fix vllm tokenizer (modelscope#2099)
  Support for fine-tuning Pixtral-12B. (modelscope#2090)
  fix multiprocess remove_columns (modelscope#2088)
  fix qwen2.5 template (modelscope#2081)
  dynamic vit gradient_checkpointing (modelscope#2071)
  Support Mistral-small-inst-2409 (modelscope#2077)
  ...