Gemma 3 #2018
-
Great! Let's push more changes!
-
Hi @danielhanchen, thanks for this great update! I only have one question about the point "Vision models now auto resize images": how is the automatic image size adjustment handled? Previously I resized my images myself whenever they exceeded a certain size, using `…`. Can I now remove that and let Unsloth pick the best size during processing? And how would it work for a model like Qwen, where a minimum and maximum image size are defined? Can I disable this?
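For context, here is a minimal sketch of the kind of manual pre-resizing the question describes, using PIL - the `MAX_SIDE` limit and helper name are illustrative, not the asker's actual code:

```python
from PIL import Image

MAX_SIDE = 1024  # illustrative limit, not Unsloth's actual default

def resize_if_needed(path: str) -> Image.Image:
    """Downscale an image so its longest side is at most MAX_SIDE."""
    img = Image.open(path)
    if max(img.size) > MAX_SIDE:
        # thumbnail() resizes in place and preserves the aspect ratio
        img.thumbnail((MAX_SIDE, MAX_SIDE))
    return img
```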
-
Nice job!
-
March Release 🦥
Get the latest stable Unsloth via:

```bash
pip install --upgrade unsloth unsloth_zoo
```

The March release should be stable - you can force the version via:

```bash
pip install "unsloth==2025.3.13" "unsloth_zoo==2025.3.11"
```

Gemma 3 also needs the latest transformers:

```bash
pip install --no-deps git+https://github.com/huggingface/transformers.git
```
New Features
Read all details here: https://unsloth.ai/blog/gemma3
Gemma 3 1B, 4B, 12B and 27B finetuning all work now! We fixed several issues that caused Gemma 3 training loss to be very high, including some tokenization issues, so fine-tuning Gemma 3 will now work correctly if you use Unsloth.

Preliminary support for full finetuning and 8-bit finetuning - set `full_finetuning = True` or `load_in_8bit = True`. Both will be optimized further in the future! A reminder: you will need more powerful GPUs! (A loading sketch follows below.)

Vision models now auto-resize images, which stops OOMs and also allows truncating sequence lengths!
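A minimal loading sketch for the new flags, assuming the standard `FastLanguageModel.from_pretrained` entry point; the model name and sequence length are placeholders:

```python
from unsloth import FastLanguageModel

# Placeholder model name - pick any supported model.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Llama-3.2-1B-Instruct",
    max_seq_length = 2048,
    load_in_4bit = False,     # full finetuning trains the full-precision weights
    full_finetuning = True,   # preliminary full finetuning support
    # load_in_8bit = True,    # or use this instead for preliminary 8-bit finetuning
)
```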
Multiple optimizations in Unsloth allow a further 10% reduction in VRAM usage and a >10% speedup for 4-bit (on top of our original 2x faster, 70% less memory usage). 8-bit and full finetuning also benefit!

GRPO in Unsloth now allows models not uploaded by Unsloth to be loaded in 4-bit as well (e.g. your own finetune of Llama) - this reduces VRAM usage a lot! (See the sketch below.)

New training logs and info - trainable parameter counts and the total batch size are now shown.
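As referenced above, a sketch of running GRPO on your own (non-Unsloth) finetune loaded in 4-bit, assuming TRL's `GRPOTrainer` as used in the existing Unsloth GRPO notebooks; the model id, dataset, and reward function are hypothetical toys:

```python
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer
from unsloth import FastLanguageModel

# Hypothetical model id - GRPO can now load any Hugging Face model in 4-bit,
# not just Unsloth-uploaded checkpoints (e.g. your own Llama finetune).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "your-username/my-llama-finetune",  # hypothetical
    max_seq_length = 1024,
    load_in_4bit = True,
)
# Attach LoRA adapters, since the quantized base weights are frozen.
model = FastLanguageModel.get_peft_model(model, r = 16, lora_alpha = 16)

# Tiny toy prompt dataset so the sketch is self-contained.
dataset = Dataset.from_dict({"prompt": ["Write a haiku about the moon."] * 8})

def reward_len(completions, **kwargs):
    # Toy reward: prefer shorter completions (illustrative only).
    return [-float(len(c)) for c in completions]

trainer = GRPOTrainer(
    model = model,
    reward_funcs = reward_len,
    args = GRPOConfig(output_dir = "grpo-out", max_steps = 10),
    train_dataset = dataset,
)
trainer.train()
```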

Vision models now also work for normal text training! This means non-vision notebooks can work with vision models! (See the sketch below.)
Complete gradient accumulation bug fix coverage for all models!
GRPO notebook for Gemma 3 coming soon with Hugging Face's reasoning course!
DoRA, Dropout, and other PEFT methods should just work!
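A sketch of the text-only use of a vision model, assuming `FastVisionModel` loads it as usual; the model name is illustrative:

```python
from unsloth import FastVisionModel

# Illustrative vision model - it can now also be finetuned on plain text.
model, tokenizer = FastVisionModel.from_pretrained(
    "unsloth/Qwen2-VL-7B-Instruct",
    load_in_4bit = True,
)
# From here, a text-only dataset and SFT setup work as in the non-vision notebooks.
```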
Bug fixes

Gemma 3 tokenization and high training loss issues are fixed (see above), and gradient accumulation bug fixes now cover all models.
Other items

[Windows Support] Add latest `xformers` wheels to pyproject.toml by @versipellis in #1753

New Contributors

@versipellis made their first contribution in #1753

Full Changelog: 2025-02...2025-03
This discussion was created from the release Gemma 3.