
Conversation

ptarasiewiczNV
Contributor

@ptarasiewiczNV ptarasiewiczNV commented Jul 21, 2025

Overview:

This flag sets the torch distribution to a specific CUDA version (12.8 in our case), which should fix the PyTorch errors on B200.
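
A rough sketch of the intended behavior, assuming the script installs through uv's pip interface (which is what provides --torch-backend); the exact invocations live in container/deps/vllm/install_vllm.sh and may differ:

```bash
# Sketch only. With "auto", the installer detects the CUDA version available on
# the build host and selects the matching torch wheel index (cu128 on B200 images).
uv pip install vllm --torch-backend=auto

# For comparison, an explicit value would pin the wheel index regardless of the host:
uv pip install vllm --torch-backend=cu128
```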

Details:

Where should the reviewer start?

Related Issues: (use one of the action keywords Closes / Fixes / Resolves / Relates to)

  • closes GitHub issue: #xxx

Summary by CodeRabbit

  • Chores
    • Updated installation process for vllm to improve compatibility on AMD64 systems.


copy-pr-bot bot commented Jul 21, 2025

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@ptarasiewiczNV
Contributor Author

/ok to test 3300d1d

Contributor

coderabbitai bot commented Jul 21, 2025

Walkthrough

The installation script for vllm on AMD64 architecture was updated to add the --torch-backend=auto option to pip install commands, both in editable and non-editable modes. No other logic or flow in the script was changed.

Changes

  • container/deps/vllm/install_vllm.sh: Added --torch-backend=auto to pip install commands (editable and non-editable modes).
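
Illustrative form of the updated commands, assuming uv's pip interface; VLLM_REF is a placeholder for the version the script pins, and the real commands may carry additional flags:

```bash
# Editable install (development mode):
uv pip install -e . --torch-backend=auto

# Non-editable install of the pinned release (VLLM_REF is a placeholder):
uv pip install "vllm==${VLLM_REF}" --torch-backend=auto
```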

Estimated code review effort

1 (<10 minutes)

Poem

A hop, a skip, a backend tweak,
Now vllm installs are less oblique.
With torch set to auto, the script’s up to date,
AMD64 bunnies can now celebrate!
Pip commands improved, so neat and so small—
This rabbit approves, and that's all! 🐇✨


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between cb6de94 and 3300d1d.

📒 Files selected for processing (1)
  • container/deps/vllm/install_vllm.sh (1 hunks)
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: zaristei
PR: ai-dynamo/dynamo#2020
File: container/deps/vllm/install_vllm.sh:115-118
Timestamp: 2025-07-21T00:10:56.919Z
Learning: Graceful fallback for PyTorch wheel installation is broken on ARM architecture, so immediate exit on pinned version failure is preferred over fallback mechanisms in container/deps/vllm/install_vllm.sh for ARM64.
container/deps/vllm/install_vllm.sh (1)

Learnt from: zaristei
PR: #2020
File: container/deps/vllm/install_vllm.sh:115-118
Timestamp: 2025-07-21T00:10:56.919Z
Learning: Graceful fallback for PyTorch wheel installation is broken on ARM architecture, so immediate exit on pinned version failure is preferred over fallback mechanisms in container/deps/vllm/install_vllm.sh for ARM64.
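
A minimal sketch of the immediate-exit behavior this learning describes, with TORCH_PIN as a placeholder; the actual ARM64 branch of install_vllm.sh may be structured differently:

```bash
# Sketch only: fail fast if the pinned torch wheel is unavailable on ARM64,
# rather than falling back to another wheel (fallback is known to be broken there).
if ! uv pip install "torch==${TORCH_PIN}"; then
    echo "Pinned torch==${TORCH_PIN} not installable on ARM64; aborting." >&2
    exit 1
fi
```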

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Build and Test - vllm

@ptarasiewiczNV ptarasiewiczNV marked this pull request as draft July 21, 2025 12:22
@ptarasiewiczNV
Contributor Author

/ok to test 3300d1d


copy-pr-bot bot commented Jul 21, 2025

/ok to test 3300d1d

@ptarasiewiczNV, there was an error processing your request: E2

See the following link for more information: https://docs.gha-runners.nvidia.com/cpr/e/2/

@ptarasiewiczNV
Contributor Author

/ok to test a5e6fe4

@ptarasiewiczNV ptarasiewiczNV self-assigned this Jul 21, 2025
@ptarasiewiczNV
Contributor Author

/ok to test b84d2a4

@ptarasiewiczNV ptarasiewiczNV marked this pull request as ready for review July 21, 2025 14:16
@ptarasiewiczNV
Contributor Author

The GitLab error doesn't seem to be related to this PR.

@ptarasiewiczNV
Contributor Author

@zaristei I see the ARM build uses nightly torch, but maybe this backend path would also work for ARM?

@zaristei
Contributor

When I tried with 2.7.1 cu128, I was still hitting failures on ARM, unfortunately.

@ptarasiewiczNV ptarasiewiczNV enabled auto-merge (squash) July 21, 2025 21:32
@zaristei
Contributor

I did a sanity check where I built and ran agg.sh from this branch on B200. Strangely, the example works, but when I check the torch version I get 2.7.0 cu128. If I'm not mistaken, Blackwell is only supported on 2.7.1?
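
An illustrative way to check what the built container actually ships; sm_100 is the compute architecture for Blackwell/B200:

```bash
# Print the installed torch version and the CUDA version it was built against:
python -c "import torch; print(torch.__version__, torch.version.cuda)"
# The wheel's supported architectures should include sm_100 for B200:
python -c "import torch; print(torch.cuda.get_arch_list())"
```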
