
Conversation

WoosukKwon (Collaborator)
@WoosukKwon WoosukKwon requested a review from zhuohan123 May 8, 2023 02:03
@WoosukKwon WoosukKwon merged commit 8917782 into main May 9, 2023
@WoosukKwon WoosukKwon deleted the logger branch May 9, 2023 06:03
hongxiayang pushed a commit to hongxiayang/vllm that referenced this pull request Feb 13, 2024
yukavio pushed a commit to yukavio/vllm that referenced this pull request Jul 3, 2024
SUMMARY:
Fix benchmarking imports by converting absolute imports to relative imports.
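A minimal sketch of why the absolute-to-relative change matters: absolute imports resolve against `sys.path`, so they break when the repository is invoked from outside its root, while relative imports resolve against the package itself. The package and module names below (`mypkg`, `helper`, `runner`) are illustrative stand-ins, not the repository's actual layout.

```python
import os
import sys
import tempfile

# Build a throwaway package on disk to demonstrate the pattern.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "mypkg")
os.makedirs(pkg)
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("")
with open(os.path.join(pkg, "helper.py"), "w") as f:
    f.write("VALUE = 42\n")
with open(os.path.join(pkg, "runner.py"), "w") as f:
    # Relative import: resolved against mypkg, regardless of the caller's CWD.
    f.write("from .helper import VALUE\n")

sys.path.insert(0, root)
import mypkg.runner  # works no matter where the interpreter was started

print(mypkg.runner.VALUE)  # → 42
```

The same code with `from mypkg.helper import VALUE` would only work if the directory containing `mypkg` happened to be on `sys.path`, which is exactly the failure mode the fix addresses.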

TEST PLAN:
- Local testing: invoke the benchmark script from outside the
repository root.
```
vllm-test) varun@floppy-fan:~/code$ python3 -m  neuralmagic-vllm.neuralmagic.benchmarks.run_benchmarks --help
usage: run_benchmarks.py [-h] -i INPUT_CONFIG_FILE -o OUTPUT_DIRECTORY

Runs benchmark-scripts as a subprocess

options:
  -h, --help            show this help message and exit
  -i INPUT_CONFIG_FILE, --input-config-file INPUT_CONFIG_FILE
                        Path to the input config file describing the benchmarks to run
  -o OUTPUT_DIRECTORY, --output-directory OUTPUT_DIRECTORY
                        Path to a directory that is the output store

```
 - Run the nm-benchmark GHA job.
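The CLI surface shown in the `--help` output above can be sketched with `argparse`; this is a hedged reconstruction from that output alone, not the repository's actual source, and any behavior beyond the two flags shown is assumed.

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Mirrors the help text captured in the test plan: two required options.
    parser = argparse.ArgumentParser(
        prog="run_benchmarks.py",
        description="Runs benchmark-scripts as a subprocess")
    parser.add_argument(
        "-i", "--input-config-file", required=True,
        help="Path to the input config file describing the benchmarks to run")
    parser.add_argument(
        "-o", "--output-directory", required=True,
        help="Path to a directory that is the output store")
    return parser

# Example invocation with placeholder paths.
args = build_parser().parse_args(["-i", "cfg.json", "-o", "out"])
print(args.input_config_file, args.output_directory)  # → cfg.json out
```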

---------

Co-authored-by: Varun Sundar Rabindranath <[email protected]>
dllehr-amd pushed a commit to dllehr-amd/vllm that referenced this pull request Jul 22, 2024
* Add Mixtral-8x22B tuning support
* Add MoE configs for Mixtral-8x22B at TP=2,4,8
JHLEE17 pushed a commit to JHLEE17/vllm that referenced this pull request Aug 1, 2024
dtrifiro pushed a commit to dtrifiro/vllm that referenced this pull request Mar 26, 2025
…lm-project#85)

* Update Dockerfile.ubi to install vllm-cuda using wheel from RHEL AI team

The install script is located in payload/run.sh. An args file with the
custom parameters was also added and is referenced in the Tekton
pipeline.

* Update payload/run.sh to use the bot token

* Add a trap to guarantee run.sh deletion
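A hedged sketch of the trap-based cleanup described above: a `trap` on `EXIT` guarantees the script is deleted on every exit path, including failures. The file path and script body here are illustrative stand-ins, not the PR's actual payload/run.sh.

```shell
#!/usr/bin/env sh
# Stand-in for the fetched run.sh (the real one lives at payload/run.sh).
RUN_SH=$(mktemp)
printf 'echo installing\n' > "$RUN_SH"

# The trap fires on normal exit, errors, and interrupts alike,
# so run.sh is removed no matter how the script terminates.
trap 'rm -f "$RUN_SH"' EXIT INT TERM

sh "$RUN_SH" || echo "install step failed"
```

Without the trap, an early failure in the install step would leave the (potentially credential-bearing) script on disk.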
wuhuikx pushed a commit to wuhuikx/vllm that referenced this pull request Mar 27, 2025
1. Update the CANN image name
2. Add the pta install step
3. Update the vllm-ascend Docker image name to ghcr
4. Update quick_start to use the vllm-ascend image directly
5. Fix `note` style

Signed-off-by: wangxiyuan <[email protected]>
Successfully merging this pull request may close these issues.

Implement a system logger to print system status and warnings
1 participant