docs: Quickstart with new UX #2005
Conversation
Force-pushed from 17abadc to fcee50d
Walkthrough

A new Quickstart guide in Markdown format has been added to the documentation. The guide provides step-by-step instructions for deploying Large Language Models using Dynamo, including setup prerequisites, component descriptions, request flow, and example usage. No code or exported entities were modified.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant Frontend
    participant NATS
    participant vLLM Backend
    participant etcd
    User->>Frontend: Send chat completion request (HTTP)
    Frontend->>etcd: Discover available vLLM models
    Frontend->>NATS: Package & route request
    NATS->>vLLM Backend: Deliver request
    vLLM Backend->>NATS: Send response
    NATS->>Frontend: Relay response
    Frontend->>User: Stream response (HTTP)
```
Estimated code review effort: 1 (~2 minutes)
Actionable comments posted: 0
🧹 Nitpick comments (1)
examples/basics/quickstart/README.md (1)
`69-80`: **curl example: quoting & JSON formatting brittle**

The long, single-quoted payload breaks on shells that honour single quotes literally across newlines and prevents variable interpolation. For portability:
```bash
curl -X POST http://localhost:8000/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "Qwen/Qwen3-0.6B",
    "messages": [
      {
        "role": "user",
        "content": "Tell me a story about a brave cat"
      }
    ],
    "stream": false,
    "max_tokens": 1028
  }'
```

Changes:
- Added `-X POST` to be explicit.
- Kept JSON in one literal block without trailing spaces after backslashes.
- Ensured a space after `"stream":`.

Helps avoid copy-paste glitches and clarifies the request method.
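Not part of the suggestion above, but a common companion tweak: pipe the response through `jq` for readable output (assumes `jq` is installed):

```bash
# Same request as above, with the JSON response pretty-printed by jq.
curl -s -X POST http://localhost:8000/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{"model": "Qwen/Qwen3-0.6B", "messages": [{"role": "user", "content": "Tell me a story about a brave cat"}], "stream": false, "max_tokens": 1028}' \
  | jq .
```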
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
examples/basics/quickstart/README.md (1 hunks)
🧠 Learnings (1)
examples/basics/quickstart/README.md (4)
Learnt from: ptarasiewiczNV
PR: #2027
File: container/deps/vllm/install_vllm.sh:0-0
Timestamp: 2025-07-22T10:22:28.951Z
Learning: The `--torch-backend=auto` flag works with vLLM installations via `uv pip install`, even though it's not a standard pip option. This flag is processed by vLLM's build system during installation to automatically match the PyTorch distribution with container CUDA versions. (An illustrative invocation follows this list.)
Learnt from: GuanLuo
PR: #1371
File: examples/llm/benchmarks/vllm_multinode_setup.sh:18-25
Timestamp: 2025-06-05T01:46:15.509Z
Learning: In multi-node setups with head/worker architecture, the head node typically doesn't need environment variables pointing to its own services (like NATS_SERVER, ETCD_ENDPOINTS) because local processes can access them via localhost. Only worker nodes need these environment variables to connect to the head node's external IP address.
Learnt from: fsaady
PR: #1730
File: examples/sglang/slurm_jobs/scripts/worker_setup.py:230-244
Timestamp: 2025-07-03T10:14:30.570Z
Learning: In examples/sglang/slurm_jobs/scripts/worker_setup.py, background processes (like nats-server, etcd) are intentionally left running even if later processes fail. This design choice allows users to manually connect to nodes and debug issues without having to restart the entire SLURM job from scratch, providing operational flexibility for troubleshooting in cluster environments.
Learnt from: PeaBrane
PR: #1409
File: examples/router_standalone/worker.py:171-186
Timestamp: 2025-06-08T08:30:45.126Z
Learning: Example code in the `examples/` directory may intentionally use hard-coded values or simplified implementations that wouldn't be appropriate for production code, but are acceptable for demonstration and testing purposes.
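For context, a minimal sketch of the install pattern the first learning refers to (illustrative only; the repo's `install_vllm.sh` likely pins versions and passes additional flags):

```bash
# Illustrative: install vLLM via uv, letting the torch backend
# be matched to the container's CUDA version automatically.
uv pip install vllm --torch-backend=auto
```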
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Mirror Repository to GitLab
- GitHub Check: Build and Test - vllm
🔇 Additional comments (2)
examples/basics/quickstart/README.md (2)
`14-16`: **Verify Docker Compose file path and service coverage**

`deploy/metrics/docker-compose.yml` sounds metrics-specific and may not spin up etcd + NATS. If the compose file under that path only exposes Prometheus/Grafana stacks, the quickstart will fail at runtime.
Double-check that the referenced compose file:
- Exists in the repo at that exact path.
- Declares `etcd` and `nats` (with JetStream enabled).

If not, point to the correct compose file or provide an inline snippet with the required services (a fallback sketch follows this list).
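A minimal fallback sketch, assuming Docker is available; the image names, ports, and flags here are illustrative assumptions, not taken from the repo:

```bash
# Illustrative only: start etcd and NATS (with JetStream enabled) directly,
# in case the referenced compose file does not declare them.
docker run -d --name etcd -p 2379:2379 \
  -e ALLOW_NONE_AUTHENTICATION=yes \
  bitnami/etcd:latest

# The official nats image runs nats-server; -js turns on JetStream.
docker run -d --name nats -p 4222:4222 nats:latest -js
```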
`42-44`: **CLI may be ahead of implementation status**

Past discussion (see PR #2046) notes that the v2 UX entry point `python -m dynamo.vllm` is planned but not yet wired for vLLM. If users run this today they may hit `ModuleNotFoundError`.

Consider temporarily documenting the currently working module path (e.g. `python -m dynamo.components.backends.vllm …`) or adding a note that the short form will land once UX v2 is fully merged. Both invocations are sketched below.
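For reference, the two entry points under discussion, side by side (the longer module path comes from this review comment and should be verified against the current tree; the `--model` flag is assumed to be accepted by both):

```bash
# Short form documented in the quickstart (v2 UX; may not be wired up yet):
python -m dynamo.vllm --model Qwen/Qwen3-0.6B

# Longer path suggested above as the currently working one (verify before documenting):
python -m dynamo.components.backends.vllm --model Qwen/Qwen3-0.6B
```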
Approving to unblock but please address changes before merge
Force-pushed from 03771f8 to 19b0022
LGTM
```bash
python -m dynamo.vllm --model Qwen/Qwen3-0.6B
```

Leave this terminal running - it will show vLLM Backend logs.
> it will show vLLM Backend logs

Nice. SDK/serve mixed the logs of all the components, and people were asking to change that, so great to highlight it.
**Open a new terminal** and run:

```bash
python -m dynamo.vllm --model Qwen/Qwen3-0.6B
```
We should mention that the model will be downloaded from Hugging Face.
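One way the docs could surface that (a sketch; assumes the `huggingface_hub` CLI is available, which the quickstart does not currently state):

```bash
# Optional: fetch the weights up front so the first run doesn't block on the download.
pip install -U huggingface_hub
huggingface-cli download Qwen/Qwen3-0.6B
```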
Overview:
New hello world example that focuses on LLM deployment. Also includes some explanation of what's happening under the hood.