Conversation

@WoosukKwon (Collaborator) commented Apr 5, 2023

Related to #22

This PR uses CUDA graphs to reduce the CPU overhead of the NCCL all-reduce operation.
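
For readers skimming the thread, a minimal sketch of the technique (not this PR's actual code; the buffer shape, names, and plain `torch.distributed` setup are illustrative assumptions): the all-reduce is captured into a CUDA graph once, and every later call only replays the graph, so NCCL's per-call CPU launch work is paid once at capture time.

```python
import torch
import torch.distributed as dist

dist.init_process_group(backend="nccl")
# Assumes one process per GPU on a single node.
torch.cuda.set_device(dist.get_rank())

# Graph replay reuses fixed memory addresses, so the input lives in a
# static buffer that is captured along with the collective.
buffer = torch.empty(4096, 8192, dtype=torch.half, device="cuda")

# Warm up the collective on a side stream before capture, as the PyTorch
# CUDA-graphs docs recommend. Requires a graph-capture-capable NCCL.
s = torch.cuda.Stream()
s.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(s):
    dist.all_reduce(buffer)
torch.cuda.current_stream().wait_stream(s)

graph = torch.cuda.CUDAGraph()
with torch.cuda.graph(graph):
    dist.all_reduce(buffer)

def graphed_all_reduce(x: torch.Tensor) -> torch.Tensor:
    # Copy into the captured buffer, replay the graph (which reduces the
    # whole buffer), and read back only the valid slice.
    n = x.shape[0]
    buffer[:n].copy_(x)
    graph.replay()
    return buffer[:n]
```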

@WoosukKwon requested a review from zhuohan123 on April 5, 2023 at 09:31
@zhuohan123 (Member) left a comment:

LGTM!

self.group = get_tensor_model_parallel_group()
self.buffer = torch.empty(
    size=(max_num_tokens, hidden_size),
    dtype=torch.half,  # FIXME: hardcoded dtype

@zhuohan123 (Member) commented:

Add a dtype argument for this class?

@WoosukKwon (Collaborator, Author) replied:

Fixed!
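
For context, a sketch of what the suggested fix might look like (the class name is hypothetical, and the exact import for `get_tensor_model_parallel_group` depends on the vLLM version, so it is only referenced here):

```python
import torch

class AllReduceBuffer:  # hypothetical name for the class under review
    def __init__(self, max_num_tokens: int, hidden_size: int,
                 dtype: torch.dtype) -> None:
        # Helper from the snippet above; its import path is version-dependent.
        self.group = get_tensor_model_parallel_group()
        # dtype is now a constructor argument instead of the hardcoded
        # torch.half flagged by the FIXME.
        self.buffer = torch.empty(
            size=(max_num_tokens, hidden_size),
            dtype=dtype,
            device="cuda",
        )
```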

@WoosukKwon merged commit 12659a0 into main on Apr 5, 2023
@WoosukKwon deleted the graph branch on Apr 5, 2023 at 18:17
hongxiayang pushed a commit to hongxiayang/vllm that referenced this pull request Feb 13, 2024
slyalin pushed a commit to slyalin/vllm that referenced this pull request Apr 4, 2024
Disable NPU merged to OV master recently
z103cb referenced this pull request in z103cb/opendatahub_vllm May 16, 2024
Install and configure the NCCL version recommended by vLLM via the
[vllm-nccl](https://github.com/vllm-project/vllm-nccl) package. The
install is a little wonky... but this set of changes should work.

Signed-off-by: Travis Johnson <[email protected]>
dtrifiro pushed a commit to dtrifiro/vllm that referenced this pull request May 21, 2024
fxmarty pushed a commit to fxmarty/vllm-public that referenced this pull request May 31, 2024
Update max_context_len for custom paged attention.
tianyil1 pushed a commit to tianyil1/vllm that referenced this pull request Jun 5, 2024
bigPYJ1151 pushed a commit to bigPYJ1151/vllm that referenced this pull request Jun 25, 2024
…inear_fusion_and_prepack

Enable linear fusion/prepack and MOE AWQ fusion
@alixiaodi alixiaodi mentioned this pull request Aug 2, 2024
zyongye pushed a commit to zyongye/vllm that referenced this pull request Aug 5, 2025
* add tool server

Signed-off-by: Chen Zhang <[email protected]>

* add back demo tool server

Signed-off-by: Chen Zhang <[email protected]>

* update

Signed-off-by: Chen Zhang <[email protected]>

* update

Signed-off-by: Chen Zhang <[email protected]>

* update disallow cases

Signed-off-by: Chen Zhang <[email protected]>

* fix

Signed-off-by: Chen Zhang <[email protected]>

* fix some type

Signed-off-by: Chen Zhang <[email protected]>

---------

Signed-off-by: Chen Zhang <[email protected]>
zyongye pushed a commit to zyongye/vllm that referenced this pull request Aug 6, 2025
heheda12345 added a commit to heheda12345/vllm that referenced this pull request Sep 29, 2025
…oject#26)

* indexer metadata to separate prefill and decode

* deep_gemm prefill kernel

* decode kernel, can run for single batch

* bug fix: insert decode k into kv before gemm

* don't use tilelang quant function

* faster non-looping torch for kv cache insertion (see the sketch after this commit message)

* add chunked prefill impl

* change quant kernel back to tilelang for promotion

* fix format (vllm-project#31)

Signed-off-by: Chen Zhang <[email protected]>

* update unit tests

* Fp8 indexer prefill (vllm-project#33)

* init

Signed-off-by: Chen Zhang <[email protected]>

* can run

---------

Signed-off-by: Chen Zhang <[email protected]>

* remove debug comment

Signed-off-by: Chen Zhang <[email protected]>

* cleanup

* further cleanup

---------

Signed-off-by: Chen Zhang <[email protected]>
Co-authored-by: mgoin <[email protected]>
Co-authored-by: Chen Zhang <[email protected]>
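
As an aside on the "faster non-looping torch for kv cache insertion" item above, a hedged sketch of the general pattern (shapes and names are illustrative, not the actual kernel): replacing a per-token Python loop with a single indexed copy turns the scatter into one vectorized op.

```python
import torch

# Illustrative shapes: a flat KV cache of [num_slots, head_dim] and a batch
# of new keys to scatter into arbitrary slot indices.
num_slots, head_dim = 1024, 128
kv_cache = torch.zeros(num_slots, head_dim)
new_k = torch.randn(4, head_dim)             # 4 new tokens this step
slot_ids = torch.tensor([7, 42, 311, 990])   # destination slot per token

# Looping version: one small, CPU-driven copy per token.
for i in range(new_k.shape[0]):
    kv_cache[slot_ids[i]] = new_k[i]

# Non-looping version: a single vectorized scatter along dim 0.
kv_cache.index_copy_(0, slot_ids, new_k)
```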