
Conversation

@Isotr0py Isotr0py commented Aug 11, 2025

Essential Elements of an Effective PR Description Checklist

  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing a test command.
  • The test results, such as pasting a before/after results comparison or e2e results.
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.

Purpose

Temporarily skip the Mllama tests when Transformers v4.55.0 is installed. That release has a regression on mllama (see https://github.com/huggingface/transformers/pull/40083) that causes the Mllama TP=2 results divergence and deadlock failure in CI.

Test Plan

pytest -s -v tests/models/multimodal/generation/test_mllama.py -k test_models_distributed

Test Result

The affected Mllama tests should now be skipped on CI.

(Optional) Documentation Update

Signed-off-by: Isotr0py <[email protected]>

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run the fastcheck CI, which runs a small, essential subset of CI tests to quickly catch errors. You can run the other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@mergify (mergify bot) added the llama (Related to Llama models) and multi-modality (Related to multi-modality, #4194) labels on Aug 11, 2025.

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a temporary fix to skip Mllama tests on Transformers v4.55.0 due to a regression, which is a good approach to keep the CI green. My main feedback is to refactor the duplicated pytest.mark.skipif condition into a shared constant to improve code maintainability. This will make it easier to remove the skip condition once the upstream issue is resolved.

Comment on lines +289 to +292
@pytest.mark.skipif(
TRANSFORMERS_VERSION == "4.55.0",
reason="Transformers v4.55.0 has a regression issue on mllama, "
"see: https://github.com/huggingface/transformers/pull/40083")

Severity: high

To improve maintainability and reduce code duplication, it's better to define this skipif condition as a constant and reuse it for all affected tests. This will make it easier to remove the skip condition in the future when the upstream issue is resolved.

Please define the following constant at the top of the file, for example after the imports:

MMLAMA_REGRESSION_TRANSFORMERS_4_55_0 = pytest.mark.skipif(
    TRANSFORMERS_VERSION == "4.55.0",
    reason="Transformers v4.55.0 has a regression issue on mllama, "
    "see: https://github.com/huggingface/transformers/pull/40083")

Then, you can apply it to this test and the others (test_models_multi_leading_images, test_models_interleaved_images, test_models_distributed) like this:

@MMLAMA_REGRESSION_TRANSFORMERS_4_55_0


@DarkLight1337 DarkLight1337 left a comment


Thanks for identifying the root cause!

@vllm-bot vllm-bot merged commit c90fb03 into vllm-project:main Aug 11, 2025
9 of 13 checks passed
@Isotr0py Isotr0py deleted the skip-mllama-tp branch August 12, 2025 06:25
paulpak58 pushed a commit to paulpak58/vllm that referenced this pull request Aug 13, 2025
diegocastanibm pushed a commit to diegocastanibm/vllm that referenced this pull request Aug 15, 2025
yiliu30 pushed a commit to yiliu30/vllm-fork that referenced this pull request Aug 19, 2025
epwalsh pushed a commit to epwalsh/vllm that referenced this pull request Aug 28, 2025
xiao-llm pushed a commit to xiao-llm/vllm that referenced this pull request Aug 28, 2025
zhewenl pushed a commit to zhewenl/vllm that referenced this pull request Aug 28, 2025

Labels

llama (Related to Llama models), multi-modality (Related to multi-modality, #4194)

Development

Successfully merging this pull request may close these issues.

[CI Failure]: Distributed Tests (2 GPUs) - Mllama TP=2 results divergence and deadlock issue
