
@WoosukKwon commented Mar 28, 2023

This PR implements a new preemption (eviction) mechanism, recomputation. In our benchmarks, recomputation is more efficient than swapping, because swapping incurs significant overhead from numerous small data transfers between CPU and GPU. Thus, we use recomputation as our default preemption mechanism.

However, currently we do not support recomputation for sequence groups with multiple sequences. This is because when token blocks are shared, the recomputation logic becomes very complex and we do not have CUDA kernels to efficiently support it. We will use swapping for this case despite its overheads.

In addition, this PR refactors the scheduling logic to be easier to understand.

@WoosukKwon requested a review from zhuohan123 on March 30, 2023 00:04
@zhuohan123 (Member) left a comment


LGTM! Left some small comments.

# sequences, we only support swapping.
# TODO(woosuk): Support recomputation for sequence groups with multiple
# sequences.

zhuohan123 (Member):

Should we add the different preemption methods as options? For example, add a preempt_method argument so the caller can pick between swapping and recomputation.

WoosukKwon (Collaborator, author):

I added PreemptionMode and allowed the caller of _preempt to specify the mode. If the mode is not specified, we use recomputation for single-output requests and swapping for multi-output requests.
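The described default can be sketched as follows. This is a minimal illustration of the mode-selection rule, not the actual vLLM scheduler code; `choose_preemption_mode` is a hypothetical helper name.

```python
import enum
from typing import Optional


class PreemptionMode(enum.Enum):
    SWAP = enum.auto()        # move KV-cache blocks to CPU memory
    RECOMPUTE = enum.auto()   # drop blocks and recompute the prefix later


def choose_preemption_mode(num_seqs: int,
                           mode: Optional[PreemptionMode] = None) -> PreemptionMode:
    # If the caller did not specify a mode, default to recomputation for
    # single-sequence groups; multi-sequence groups may share token blocks,
    # for which recomputation is unsupported, so they fall back to swapping.
    if mode is not None:
        return mode
    return PreemptionMode.RECOMPUTE if num_seqs == 1 else PreemptionMode.SWAP
```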

class PolicyFactory:

_POLICY_REGISTRY = {
'fcfs': FCFS,
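The registry pattern excerpted above can be sketched end-to-end. This is a hedged reconstruction assuming a `Policy` base class that sorts sequence groups by a priority key (for FCFS, earlier arrival means higher priority); the exact class bodies are not shown in the diff.

```python
from typing import Dict, List, Type


class Policy:
    def get_priority(self, now: float, seq_group) -> float:
        raise NotImplementedError

    def sort_by_priority(self, now: float, seq_groups: List) -> List:
        # Highest-priority groups first.
        return sorted(seq_groups,
                      key=lambda group: self.get_priority(now, group),
                      reverse=True)


class FCFS(Policy):
    def get_priority(self, now: float, seq_group) -> float:
        # Older requests have waited longer and thus get higher priority.
        return now - seq_group.arrival_time


class PolicyFactory:

    _POLICY_REGISTRY: Dict[str, Type[Policy]] = {
        'fcfs': FCFS,
    }

    @classmethod
    def get_policy(cls, policy_name: str, **kwargs) -> Policy:
        return cls._POLICY_REGISTRY[policy_name](**kwargs)
```

New policies (such as the SSF policy discussed below) would be added by registering another entry in `_POLICY_REGISTRY`.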
zhuohan123 (Member):

Will we add SSF in another PR?

WoosukKwon (Collaborator, author):

Yes. In this PR, I tried to make minimal changes.

Comment on lines 60 to +147
# Blocks that need to be swapped or copied before model execution.
blocks_to_swap_in: Dict[int, int] = {}
blocks_to_swap_out: Dict[int, int] = {}
blocks_to_copy: Dict[int, List[int]] = {}

# 1. Reserve new slots for the running sequences.
# NOTE: Here we implicitly assume FCFS scheduling.
# That is, the most recently added sequence group is the first
# to be swapped out.
victim_idx = len(self.running) - 1
for i, seq_group in enumerate(self.running):
if i > victim_idx:
# The i-th sequence group has already been swapped out.
break
# OOM. Swap out the victim sequence groups.
# Fix the current time.
now = time.time()

# NOTE(woosuk): We prioritize the sequence groups in the RUNNING state
# in order to minimize the preemption overheads.
# Preemption happens only when there is no available slot to keep all
# the sequence groups in the RUNNING state.
# In this case, the policy is responsible for deciding which sequence
# groups to preempt.
self.running = self.policy.sort_by_priority(now, self.running)

# Reserve new token slots for the running sequence groups.
running: List[SequenceGroup] = []
preempted: List[SequenceGroup] = []
while self.running:
seq_group = self.running.pop(0)
while not self.block_manager.can_append(seq_group):
victim_seq_group = self.running[victim_idx]
self._swap_out(victim_seq_group, blocks_to_swap_out)
victim_idx -= 1
if i > victim_idx:
# No other sequence groups can be swapped out.
if self.running:
# Preempt the lowest-priority sequence groups.
victim_seq_group = self.running.pop(-1)
self._preempt(victim_seq_group, blocks_to_swap_out)
preempted.append(victim_seq_group)
else:
# No other sequence groups can be preempted.
# Preempt the current sequence group.
self._preempt(seq_group, blocks_to_swap_out)
preempted.append(seq_group)
break
else:
# Append new slots to the sequence group.
self._append(seq_group, blocks_to_copy)
self.running = self.running[:victim_idx + 1]

# 2. Swap in the swapped sequences if possible.
# NOTE: Here we implicitly assume FCFS scheduling.
# The swapped sequences are in LIFO order.
for i, seq_group in enumerate(reversed(self.swapped)):
if self.block_manager.can_swap_in(seq_group):
self._swap_in(seq_group, blocks_to_swap_in)
self._append(seq_group, blocks_to_copy)
else:
# OOM. Stop swapping.
self.swapped = self.swapped[:len(self.swapped) - i]
running.append(seq_group)
self.running = running

# Swap in the sequence groups in the SWAPPED state if possible.
self.swapped = self.policy.sort_by_priority(now, self.swapped)
while self.swapped:
seq_group = self.swapped[0]
# If the sequence group has been preempted in this step, stop.
if seq_group in preempted:
break
# If the sequence group cannot be swapped in, stop.
if not self.block_manager.can_swap_in(seq_group):
break
else:
# All swapped sequences are swapped in.
self.swapped.clear()

# Ensure that swap-in and swap-out never happen at the same timestep.
if blocks_to_swap_in:
assert not blocks_to_swap_out
seq_group = self.swapped.pop(0)
self._swap_in(seq_group, blocks_to_swap_in)
self._append(seq_group, blocks_to_copy)
self.running.append(seq_group)

num_batched_tokens = sum(
seq_group.num_seqs(status=SequenceStatus.RUNNING)
for seq_group in self.running
)

# 3. Join new sequences if possible.
# NOTE: Here we implicitly assume FCFS scheduling.
# TODO(woosuk): Add a batching policy to control the batch size.
# Join waiting sequences if possible.
prompt_group_ids: List[int] = []
# NOTE(woosuk): The sequence groups in the SWAPPED state are strictly
# prioritized over the sequence groups in the WAITING state.
# This is because we want to bound the amount of CPU memory taken by
# the swapped sequence groups.
if not self.swapped:
for i, seq_group in enumerate(self.pending):
self.waiting = self.policy.sort_by_priority(now, self.waiting)
while self.waiting:
seq_group = self.waiting[0]
# If the sequence group has been preempted in this step, stop.
if seq_group in preempted:
break
# If the sequence group cannot be allocated, stop.
if not self.block_manager.can_allocate(seq_group):
break

# If the number of batched tokens exceeds the limit, stop.
num_prompt_tokens = seq_group.seqs[0].get_len()
if self.block_manager.can_allocate(seq_group):
if (num_batched_tokens + num_prompt_tokens
<= self.max_num_batched_tokens):
self._allocate(seq_group)
num_batched_tokens += num_prompt_tokens
continue

self.pending = self.pending[i:]
break
else:
self.pending.clear()
if (num_batched_tokens + num_prompt_tokens
> self.max_num_batched_tokens):
break

seq_group = self.waiting.pop(0)
self._allocate(seq_group)
self.running.append(seq_group)
num_batched_tokens += num_prompt_tokens
prompt_group_ids.append(seq_group.group_id)
zhuohan123 (Member):

Maybe move this part to a new function dedicated to swapping and finding which sequences to run?

WoosukKwon (Collaborator, author):

Good point. I moved the scheduling logic to a new function _schedule.

def can_allocate(self, seq_group: SequenceGroup) -> bool:
# NOTE: Here we assume that all sequences in the group have the same prompt.
# FIXME(woosuk): Here we assume that all sequences in the group share
# the same prompt. This may not be true for preempted sequences.
zhuohan123 (Member):

If I understand correctly, is this function only wrong when we use recomputation preemption for parallel decoding?

WoosukKwon (Collaborator, author):

Yes, and for beam search as well.
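The `can_allocate` assumption being discussed can be illustrated with simple block arithmetic. This is an illustrative sketch only (the block size and helper names are hypothetical): if all sequences in a group share the same prompt, the prompt's blocks are allocated once; after recomputation-preemption of a beam-search or parallel-decoding group, the sequences may have diverged, so each needs its own blocks and the shared-prompt estimate undercounts.

```python
import math

BLOCK_SIZE = 16  # illustrative number of tokens per KV-cache block


def blocks_needed_shared_prompt(prompt_len: int) -> int:
    # All sequences share the prompt, so its blocks are allocated once.
    return math.ceil(prompt_len / BLOCK_SIZE)


def blocks_needed_diverged(seq_lens) -> int:
    # Diverged sequences cannot share blocks: each one is counted in full.
    return sum(math.ceil(length / BLOCK_SIZE) for length in seq_lens)
```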

@WoosukKwon merged commit 7a7929a into main on Mar 30, 2023
@WoosukKwon deleted the recomp+sched branch on March 30, 2023 21:51
bigPYJ1151 pushed a commit to bigPYJ1151/vllm that referenced this pull request Sep 12, 2023
xiangyuT added a commit to xiangyuT/vllm that referenced this pull request Oct 25, 2023
slyalin pushed a commit to slyalin/vllm that referenced this pull request Apr 19, 2024
sfc-gh-hazhang added a commit to sfc-gh-hazhang/vllm that referenced this pull request May 7, 2024
* sharded prequantized checkpoints

* update

---------

Co-authored-by: Hao Zhang <[email protected]>
fxmarty pushed a commit to fxmarty/vllm-public that referenced this pull request May 31, 2024
…ble_ROCm6.1

Bump Docker to ROCm 6.1, add gradlib for tuned gemm, include RCCL fixes
ykim362 pushed a commit to ykim362/vllm that referenced this pull request Jun 17, 2024
yukavio pushed a commit to yukavio/vllm that referenced this pull request Jul 3, 2024
Summary:

Initial integration for the sparse-fused gemm. To achieve this, we need
to ensure that we compress the weight matrix only once and never
decompress it, as decompression is currently unsupported.

Before this change, using `SparseParameter(SparseTensor)` meant that in
`MergedColumnParallelLinear` and `QKVParallelLinear` every time a new
shard was loaded by the `weight_loader` (e.g., the "q" portion of
`QKVParallelLinear`), we would decompress the tensor in order to use
`narrow` to update the appropriate section of the weight tensor. With this
change, `SparseParameter(SparseTensor)` is replaced with
`LazyCompressedParameter`, which allows us to operate on
`uncompressed_data` until we explicitly compress it. At that point, the
`uncompressed_data` is compressed into `compressed_data` and freed.
Currently, the detection of when to call compress is somewhat hacky. For
`QKVParallelLinear`, we compress only after inserting "q", "k", and "v"
shard ids, and for `MergedColumnParallelLinear`, we compress once we've
inserted the same number of shards as outputs (determined by
`len(output_sizes)`), which implicitly assumes one shard per output.

Moving away from `SparseParameter(SparseTensor)` means that
`SparseTensor` no longer handles dispatching to the custom ops; instead,
this is handled by `SparseW16A16LinearMethod`. I believe this is a
positive change overall. `SparseTensor` was an unnecessary extra layer
of abstraction/indirection originally designed for the SLoRA work, not
vLLM.

This did result in the 2:4 sparse implementation breaking. However, it
turns out it was already broken (i.e., it was decompressing and running
dense within `SparseTensor`), so we "disable" it for now ("disable"
meaning decompress and run dense instead).

We should revisit all of this infrastructure post-MVP.

---------

Co-authored-by: Andrew Feldman <[email protected]>
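The compress-once, never-decompress pattern described in this commit message can be sketched as a minimal stand-in. This is not the real `LazyCompressedParameter` (which compresses sparse weight tensors); it only demonstrates the lifecycle: shards are written into `uncompressed_data`, and an explicit `compress()` produces `compressed_data` and frees the uncompressed buffer, after which further updates are rejected.

```python
import zlib
from typing import Optional


class LazyCompressedParameter:
    """Toy model of a lazily compressed parameter: mutable until compressed."""

    def __init__(self, data: bytes):
        self.uncompressed_data: Optional[bytes] = data
        self.compressed_data: Optional[bytes] = None

    def update(self, offset: int, chunk: bytes) -> None:
        # Analogous to loading one shard (e.g., the "q" part of QKV):
        # writes are only legal while the uncompressed buffer still exists.
        if self.uncompressed_data is None:
            raise RuntimeError("cannot update after compression")
        buf = bytearray(self.uncompressed_data)
        buf[offset:offset + len(chunk)] = chunk
        self.uncompressed_data = bytes(buf)

    def compress(self) -> None:
        # Compress exactly once and free the uncompressed buffer.
        self.compressed_data = zlib.compress(self.uncompressed_data)
        self.uncompressed_data = None
```

The hacky part the message alludes to is deciding *when* to call `compress()`, e.g., only after all expected shards ("q", "k", "v") have been written.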
yukavio pushed a commit to yukavio/vllm that referenced this pull request Jul 3, 2024
@alixiaodi mentioned this pull request Aug 2, 2024
Xaenalt added a commit to Xaenalt/vllm that referenced this pull request Dec 9, 2024
…x-633313fb5af9953589a88bc244a2a983

[Snyk] Security upgrade starlette from 0.38.6 to 0.40.0
garg-amit added a commit to garg-amit/vllm that referenced this pull request Dec 11, 2024
Xaenalt pushed a commit to Xaenalt/vllm that referenced this pull request Jan 15, 2025
…_version

Update Habana UBI image to fix CVE, GRPC issue and WARMUP issue
robertgshaw2-redhat referenced this pull request in robertgshaw2-redhat/vllm Apr 30, 2025
robertgshaw2-redhat referenced this pull request in robertgshaw2-redhat/vllm May 3, 2025
* [Update] LMcache connector v1 implementation

Signed-off-by: ApostaC <[email protected]>

* [Add] examples for disaggregated prefill

Signed-off-by: ApostaC <[email protected]>

* [add] extra information about evns

Signed-off-by: ApostaC <[email protected]>

* Initial stubs for P/D scheduling changes

Signed-off-by: Tyler Michael Smith <[email protected]>

* Updates

Signed-off-by: Tyler Michael Smith <[email protected]>

* Rs branch (#3)

* updated

Signed-off-by: [email protected] <[email protected]>

* Rs branch (#5)

Signed-off-by: [email protected] <[email protected]>

* Remove Unneeded Arguments (#7)

* updated

Signed-off-by: [email protected] <[email protected]>

* stash

Signed-off-by: [email protected] <[email protected]>

* cleanup

Signed-off-by: [email protected] <[email protected]>

---------

Signed-off-by: [email protected] <[email protected]>

* Improve disagg-example.sh (#8)

- fix spelling
- CUDA_VISIBLE_DEVICES should be set externally

Signed-off-by: Tyler Michael Smith <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* added connector

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* update

Signed-off-by: [email protected] <[email protected]>

* remove

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* seems to load properly

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* Revert "updated"

This reverts commit 97316d9.

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* stash

Signed-off-by: [email protected] <[email protected]>

* added

Signed-off-by: [email protected] <[email protected]>

* diffs for local dev on macos

Signed-off-by: Robert Shaw <[email protected]>

* updated

Signed-off-by: Robert Shaw <[email protected]>

* update

Signed-off-by: Robert Shaw <[email protected]>

* updaed

Signed-off-by: Robert Shaw <[email protected]>

* updated

Signed-off-by: Robert Shaw <[email protected]>

* updated

Signed-off-by: Robert Shaw <[email protected]>

* Checkpoint.

Signed-off-by: Tyler Michael Smith <[email protected]>

* updated

Signed-off-by: Robert Shaw <[email protected]>

* Cleanup

Signed-off-by: Tyler Michael Smith <[email protected]>

* WIP

Signed-off-by: Tyler Michael Smith <[email protected]>

* updated

Signed-off-by: Robert Shaw <[email protected]>

* updated

Signed-off-by: Robert Shaw <[email protected]>

* updated on scheduler side

Signed-off-by: Robert Shaw <[email protected]>

* updated

Signed-off-by: Robert Shaw <[email protected]>

* updated

Signed-off-by: Robert Shaw <[email protected]>

* updated

Signed-off-by: Robert Shaw <[email protected]>

* updated

Signed-off-by: Robert Shaw <[email protected]>

* updated

Signed-off-by: Robert Shaw <[email protected]>

* updated

Signed-off-by: Robert Shaw <[email protected]>

* Hacking away

Signed-off-by: Tyler Michael Smith <[email protected]>

* cleanup

Signed-off-by: Robert Shaw <[email protected]>

* ensure request removed from running list

Signed-off-by: Robert Shaw <[email protected]>

* Runs E2E. Garbage output. Crashes on 2nd request

Signed-off-by: Tyler Michael Smith <[email protected]>

* update

Signed-off-by: Tyler Michael Smith <[email protected]>

* updated

Signed-off-by: Robert Shaw <[email protected]>

* updated

Signed-off-by: Robert Shaw <[email protected]>

* rename files

Signed-off-by: Robert Shaw <[email protected]>

* updated

Signed-off-by: Robert Shaw <[email protected]>

* updated

Signed-off-by: Robert Shaw <[email protected]>

* updated

Signed-off-by: Robert Shaw <[email protected]>

* updated

Signed-off-by: Robert Shaw <[email protected]>

* updated

Signed-off-by: Robert Shaw <[email protected]>

* update

Signed-off-by: Robert Shaw <[email protected]>

* Second request no longer crashes

Signed-off-by: Tyler Michael Smith <[email protected]>

* Remove gpu_model_runner hacks

Signed-off-by: Tyler Michael Smith <[email protected]>

* Clean up Justfile

Signed-off-by: Tyler Michael Smith <[email protected]>

* [Bugfix] Stale finished requests in EMPTY_MODEL_RUNNER_OUTPUT

Signed-off-by: Tyler Michael Smith <[email protected]>

* update

Signed-off-by: Tyler Michael Smith <[email protected]>

* justfile edits

Signed-off-by: Tyler Michael Smith <[email protected]>

* Update

Signed-off-by: Tyler Michael Smith <[email protected]>

* Fixes - lm_eval gsm8k has correctness

Signed-off-by: Tyler Michael Smith <[email protected]>

* "just delete the assert"

Signed-off-by: Tyler Michael Smith <[email protected]>

* fixup precommit issues

Signed-off-by: Tyler Michael Smith <[email protected]>

* Fixes

Signed-off-by: Tyler Michael Smith <[email protected]>

* updated (#12)

Signed-off-by: [email protected] <[email protected]>

* Add Accuracy Test (#13)

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

---------

Signed-off-by: [email protected] <[email protected]>

* Preemption Bugfixes (#15)

* stash fixed double free issue

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* fixed issue

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updatrd

Signed-off-by: [email protected] <[email protected]>

* updatrd

Signed-off-by: [email protected] <[email protected]>

* updatrd

Signed-off-by: [email protected] <[email protected]>

* updatrd

Signed-off-by: [email protected] <[email protected]>

* updatrd

Signed-off-by: [email protected] <[email protected]>

* updatrd

Signed-off-by: [email protected] <[email protected]>

---------

Signed-off-by: [email protected] <[email protected]>

* updated (#16)

Signed-off-by: [email protected] <[email protected]>

* Fix Bad Merge | Fix Memory Leak in Upstream (#18)

* updated

Signed-off-by: [email protected] <[email protected]>

* fix merge

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

---------

Signed-off-by: [email protected] <[email protected]>

* clean up justfile, examples

Signed-off-by: Tyler Michael Smith <[email protected]>

* more cleanup

Signed-off-by: Tyler Michael Smith <[email protected]>

* more cleanup

Signed-off-by: Tyler Michael Smith <[email protected]>

* more cleanup

Signed-off-by: Tyler Michael Smith <[email protected]>

* more cleanup

Signed-off-by: Tyler Michael Smith <[email protected]>

* More cleanup

Signed-off-by: Tyler Michael Smith <[email protected]>

* more cleanup

Signed-off-by: Tyler Michael Smith <[email protected]>

* more cleanup, precommit fixes

Signed-off-by: Tyler Michael Smith <[email protected]>

* More cleanup

Signed-off-by: Tyler Michael Smith <[email protected]>

* run_accuracy_test.sh UX

Signed-off-by: Tyler Michael Smith <[email protected]>

* squash warnings

Signed-off-by: Tyler Michael Smith <[email protected]>

* pre-commit

Signed-off-by: Tyler Michael Smith <[email protected]>

* update

Signed-off-by: Tyler Michael Smith <[email protected]>

* Add get_finished to base kv connector

Signed-off-by: mgoin <[email protected]>

* revert test.txt

Signed-off-by: Tyler Michael Smith <[email protected]>

* cleanup

Signed-off-by: Tyler Michael Smith <[email protected]>

* Cleanup

Signed-off-by: Tyler Michael Smith <[email protected]>

* review comments

Signed-off-by: Tyler Michael Smith <[email protected]>

---------

Signed-off-by: ApostaC <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Signed-off-by: [email protected] <[email protected]>
Signed-off-by: Robert Shaw <[email protected]>
Signed-off-by: mgoin <[email protected]>
Co-authored-by: ApostaC <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: [email protected] <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: mgoin <[email protected]>
Co-authored-by: mgoin <[email protected]>
robertgshaw2-redhat referenced this pull request in robertgshaw2-redhat/vllm May 4, 2025

* updated

Signed-off-by: [email protected] <[email protected]>

* updatted

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* revert

Signed-off-by: [email protected] <[email protected]>

* more spurious changes

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* updated

Signed-off-by: [email protected] <[email protected]>

* Update vllm/distributed/kv_transfer/kv_connector/v1/nixl_connector.py

Co-authored-by: Tyler Michael Smith <[email protected]>

* Update vllm/distributed/kv_transfer/kv_connector/v1/nixl_connector.py

Co-authored-by: Tyler Michael Smith <[email protected]>

---------

Signed-off-by: ApostaC <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Signed-off-by: [email protected] <[email protected]>
Signed-off-by: Robert Shaw <[email protected]>
Co-authored-by: ApostaC <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
robertgshaw2-redhat referenced this pull request in robertgshaw2-redhat/vllm May 6, 2025
zyongye pushed a commit to zyongye/vllm that referenced this pull request Aug 5, 2025
zyongye pushed a commit to zyongye/vllm that referenced this pull request Aug 6, 2025
zyongye pushed a commit to zyongye/vllm that referenced this pull request Aug 7, 2025
845473182 referenced this pull request in 845473182/vllm Aug 29, 2025
"""
Expert parallelism load balancer (EPLB) for vLLM.

This module implements the core rearrangement algorithm.

The rearrangement algorithm is adapted from
[DeepSeek EPLB](https://github.com/deepseek-ai/eplb).

See deepseek-ai/EPLB#12 for an example of how the EPLB algorithm works.
"""

import torch

def balanced_packing(weight: torch.Tensor,
                     num_packs: int) -> tuple[torch.Tensor, torch.Tensor]:
    """
    Pack n weighted objects into m packs such that each pack contains exactly
    n/m objects and the total weights of all packs are as balanced as possible.

    Parameters:
        weight: [X, n], the weight of each item
        num_packs: number of packs

    Returns:
        pack_index: [X, n], the pack index of each item
        rank_in_pack: [X, n], the rank of the item in the pack
    """
    num_layers, num_groups = weight.shape
    assert num_groups % num_packs == 0
    groups_per_pack = num_groups // num_packs

    if groups_per_pack == 1:
        pack_index = torch.arange(weight.size(-1),
                                  dtype=torch.int64,
                                  device=weight.device).expand(weight.shape)
        rank_in_pack = torch.zeros_like(weight, dtype=torch.int64)
        return pack_index, rank_in_pack

    indices = weight.float().sort(-1, descending=True).indices.cpu()
    pack_index = torch.full_like(weight,
                                 fill_value=-1,
                                 dtype=torch.int64,
                                 device="cpu")
    rank_in_pack = torch.full_like(pack_index, fill_value=-1)
    for i in range(num_layers):
        pack_weights = [0] * num_packs
        pack_items = [0] * num_packs
        for group in indices[i]:
            pack = min(
                (i
                 for i in range(num_packs) if pack_items[i] < groups_per_pack),
                key=pack_weights.__getitem__,
            )
            assert pack_items[pack] < groups_per_pack
            pack_index[i, group] = pack
            rank_in_pack[i, group] = pack_items[pack]
            pack_weights[pack] += weight[i, group]
            pack_items[pack] += 1
    return pack_index, rank_in_pack

def replicate_experts(
        weight: torch.Tensor,
        num_phy: int) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
    """
    Replicate `num_log` experts to `num_phy` replicas, such that the maximum
    load of all replicas is minimized.

    Parameters:
        weight: [X, num_log]
        num_phy: total number of experts after replication

    Returns:
        phy2log: [X, num_phy], logical expert id of each physical expert
        rank: [X, num_phy], the replica rank
        logcnt: [X, num_log], number of replicas for each logical expert
    """
    n, num_log = weight.shape
    num_redundant = num_phy - num_log
    assert num_redundant >= 0
    device = weight.device
    phy2log = torch.arange(num_phy, dtype=torch.int64,
                           device=device).repeat(n, 1)
    rank = torch.zeros(n, num_phy, dtype=torch.int64, device=device)
    logcnt = torch.ones(n, num_log, dtype=torch.int64, device=device)
    arangen = torch.arange(n, dtype=torch.int64, device=device)
    for i in range(num_log, num_phy):
        redundant_indices = (weight / logcnt).max(dim=-1).indices
        phy2log[:, i] = redundant_indices
        rank[:, i] = logcnt[arangen, redundant_indices]
        logcnt[arangen, redundant_indices] += 1
    return phy2log, rank, logcnt

def rebalance_experts_hierarchical(
    weight: torch.Tensor,
    num_physical_experts: int,
    num_groups: int,
    num_nodes: int,
    num_gpus: int,
):
    """
    Parameters:
        weight: [num_moe_layers, num_logical_experts]
        num_physical_experts: number of physical experts after replication
        num_groups: number of expert groups
        num_nodes: number of server nodes, where the intra-node network
        (e.g., NVLink) is faster
        num_gpus: number of GPUs, must be a multiple of `num_nodes`

    Returns:
        physical_to_logical_map: [num_moe_layers, num_physical_experts]
        logical_to_physical_map: [num_moe_layers, num_logical_experts, X]
        logical_count: [num_moe_layers, num_logical_experts]
    """
    num_layers, num_logical_experts = weight.shape
    assert num_logical_experts % num_groups == 0
    group_size = num_logical_experts // num_groups
    assert num_groups % num_nodes == 0
    groups_per_node = num_groups // num_nodes
    assert num_gpus % num_nodes == 0
    assert num_physical_experts % num_gpus == 0
    phy_experts_per_gpu = num_physical_experts // num_gpus

    def inverse(perm: torch.Tensor) -> torch.Tensor:
        inv = torch.empty_like(perm)
        inv.scatter_(
            1,
            perm,
            torch.arange(perm.size(1), dtype=torch.int64,
                         device=perm.device).expand(perm.shape),
        )
        return inv

    # Step 1: pack groups to nodes
    tokens_per_group = weight.unflatten(-1, (num_groups, group_size)).sum(-1)
    group_pack_index, group_rank_in_pack = balanced_packing(
        tokens_per_group, num_nodes)
    log2mlog = (((group_pack_index * groups_per_node + group_rank_in_pack) *
                 group_size).unsqueeze(-1) +
                torch.arange(group_size,
                             dtype=torch.int64,
                             device=group_pack_index.device)).flatten(-2)
    mlog2log = inverse(log2mlog)

    # Step 2: construct redundant experts within nodes
    # [num_layers * num_nodes, num_logical_experts // num_nodes]
    tokens_per_mlog = weight.gather(-1, mlog2log).view(
        -1, num_logical_experts // num_nodes)
    phy2mlog, phyrank, mlogcnt = replicate_experts(
        tokens_per_mlog, num_physical_experts // num_nodes)

    # Step 3: pack physical_experts to GPUs
    # [num_layers * num_nodes, num_physical_experts // num_nodes]
    tokens_per_phy = (tokens_per_mlog / mlogcnt).gather(-1, phy2mlog)
    pack_index, rank_in_pack = balanced_packing(tokens_per_phy,
                                                num_gpus // num_nodes)
    phy2pphy = pack_index * phy_experts_per_gpu + rank_in_pack
    pphy2phy = inverse(phy2pphy)

    pphy2mlog = phy2mlog.gather(
        -1, pphy2phy)  # [num_layers * num_nodes, num_log_per_nodes]
    pphy2mlog = (pphy2mlog.view(num_layers, num_nodes, -1) + torch.arange(
        0,
        num_logical_experts,
        num_logical_experts // num_nodes,
        device=group_pack_index.device,
    ).view(1, -1, 1)).flatten(-2)
    pphy2log = mlog2log.gather(-1, pphy2mlog)
    pphyrank = phyrank.gather(-1, pphy2phy).view(num_layers, -1)
    logcnt = mlogcnt.view(num_layers, -1).gather(-1, log2mlog)
    return pphy2log, pphyrank, logcnt

def rebalance_experts(
    weight: torch.Tensor,
    num_replicas: int,
    num_groups: int,
    num_nodes: int,
    num_gpus: int,
) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
    """
    Entry point for expert-parallelism load balancer.

    Parameters:
        weight: [layers, num_logical_experts], the load statistics for all
            logical experts
        num_replicas: number of physical experts, must be a multiple of
            `num_gpus`
        num_groups: number of expert groups
        num_nodes: number of server nodes, where the intra-node network
            (e.g., NVLink) is faster
        num_gpus: number of GPUs, must be a multiple of `num_nodes`

    Returns:
        physical_to_logical_map: [layers, num_replicas], the expert index of
            each replica
        logical_to_physical_map: [layers, num_logical_experts, X], the replica
            indices for each expert
        expert_count: [layers, num_logical_experts], number of physical
            replicas for each logical expert
    """
    num_layers, num_logical_experts = weight.shape
    weight = weight.float().cpu()
    if num_groups % num_nodes == 0:
        # use hierarchical load-balance policy
        phy2log, phyrank, logcnt = rebalance_experts_hierarchical(
            weight, num_replicas, num_groups, num_nodes, num_gpus)
    else:
        # use global load-balance policy
        phy2log, phyrank, logcnt = rebalance_experts_hierarchical(
            weight, num_replicas, 1, 1, num_gpus)
    num_redundant_experts = num_replicas - num_logical_experts
    maxlogcnt = num_redundant_experts + 1
    log2phy: torch.Tensor = torch.full(
        (num_layers, num_logical_experts, maxlogcnt),
        -1,
        dtype=torch.int64,
        device=logcnt.device,
    )
    log2phy.view(num_layers, -1).scatter_(
        -1,
        phy2log * maxlogcnt + phyrank,
        torch.arange(num_replicas, dtype=torch.int64,
                     device=log2phy.device).expand(num_layers, -1),
    )
    return phy2log, log2phy, logcnt

__all__ = ["rebalance_experts"]
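The greedy packing step at the heart of `balanced_packing` can be sketched without torch. This is an illustrative, single-layer, pure-Python version (the function name and inputs here are made up for the example): visit groups from heaviest to lightest and place each one in the lightest pack that still has room, so every pack ends up with exactly `n / num_packs` items.

```python
def balanced_packing_1d(weights, num_packs):
    """Greedy LPT-style packing: each pack receives exactly
    len(weights) // num_packs items, balancing total pack weight."""
    n = len(weights)
    assert n % num_packs == 0
    cap = n // num_packs
    pack_weight = [0.0] * num_packs   # running weight per pack
    pack_items = [0] * num_packs      # item count per pack
    pack_index = [-1] * n             # which pack each group lands in
    rank_in_pack = [-1] * n           # position of the group within its pack
    # Heaviest groups first; ties broken by the lowest-index lightest pack.
    for g in sorted(range(n), key=lambda g: -weights[g]):
        p = min((p for p in range(num_packs) if pack_items[p] < cap),
                key=pack_weight.__getitem__)
        pack_index[g] = p
        rank_in_pack[g] = pack_items[p]
        pack_weight[p] += weights[g]
        pack_items[p] += 1
    return pack_index, rank_in_pack

pack_index, rank = balanced_packing_1d([9, 1, 8, 2, 7, 3], num_packs=3)
# pack_index -> [0, 0, 1, 1, 2, 2]; every pack totals 10
```

With the sample weights above, the greedy pass pairs 9+1, 8+2, and 7+3, so all three packs carry equal load — the same balancing objective the torch implementation pursues per layer.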
Pradyun92 pushed a commit to Pradyun92/vllm that referenced this pull request Sep 12, 2025
ISSUE: Non-harmony models with tool calling immediately terminate streaming with finish_reason: "stop" after role assignment
ROOT CAUSE: Upstream merge accidentally removed "and not self.use_harmony" condition, causing non-harmony models to enter wrong code path
SOLUTION: Restore original condition to properly gate non-harmony vs harmony streaming logic paths

The merge changed:
  if ((tool_choice_auto or self.reasoning_parser) and not self.use_harmony):
to:
  if tool_choice_auto or self.reasoning_parser:

This caused non-harmony models (e.g., Qwen) with tools to enter the harmony-designed
code path, setting up tracking variables but never entering any of the subsequent
tool-handling blocks, resulting in immediate stream termination.

Fixed by restoring the "and not self.use_harmony" condition to ensure proper
separation between harmony and non-harmony streaming logic paths.

Updated MANTLE registry with new entry vllm-project#12 as CRITICAL category fix.

🤖 Generated with [Claude Code](https://claude.ai/code)

Signed-off-by: Pradyun Ramadorai <[email protected]>
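The effect of the restored guard can be sketched in isolation. This is not vLLM's actual code — the function and return labels below are invented for illustration — but it shows why dropping `and not self.use_harmony` misroutes harmony models into the tool-streaming setup path:

```python
def select_streaming_path(tool_choice_auto: bool,
                          has_reasoning_parser: bool,
                          use_harmony: bool) -> str:
    """Illustrative sketch of the restored gating condition: the
    tool/reasoning streaming setup must run only for non-harmony models."""
    if (tool_choice_auto or has_reasoning_parser) and not use_harmony:
        return "non_harmony_tool_streaming"
    if use_harmony:
        return "harmony_streaming"
    return "plain_streaming"
```

With the guard in place, a harmony model with `tool_choice_auto=True` is routed to the harmony path instead of half-initializing the non-harmony one; removing `and not use_harmony` is exactly the regression the commit describes.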
heheda12345 added a commit to heheda12345/vllm that referenced this pull request Sep 29, 2025