Your current environment
The output of `python collect_env.py`
$ python collect_env.py
/workspace/my-vllm/lib64/python3.12/site-packages/transformers/utils/hub.py:128: FutureWarning: Using TRANSFORMERS_CACHE
is deprecated and will be removed in v5 of Transformers. Use HF_HOME
instead.
warnings.warn(
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Red Hat Enterprise Linux 9.5 (Plow) (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-2)
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.34
Python version: 3.12.5 (main, Sep 11 2024, 00:00:00) [GCC 11.5.0 20240719 (Red Hat 11.5.0-2)] (64-bit runtime)
Python platform: Linux-5.14.0-284.88.1.el9_2.x86_64-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB
GPU 2: NVIDIA A100-SXM4-80GB
GPU 3: NVIDIA A100-SXM4-80GB
Nvidia driver version: 535.104.12
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: GenuineIntel
Model name: Intel Xeon Processor (Icelake)
CPU family: 6
Model: 134
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
Stepping: 0
BogoMIPS: 5600.02
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves wbnoinvd arat avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid fsrm md_clear arch_capabilities
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 2.5 MiB (80 instances)
L1i cache: 2.5 MiB (80 instances)
L2 cache: 160 MiB (40 instances)
L3 cache: 32 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-39
NUMA node1 CPU(s): 40-79
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Reg file data sampling: Vulnerable: No microcode
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] flashinfer==0.1.6+cu124torch2.4
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-ml-py==12.560.30
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pyzmq==26.2.0
[pip3] torch==2.5.1
[pip3] torchvision==0.20.1
[pip3] transformers==4.46.3
[pip3] triton==3.1.0
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.6.5
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0 GPU1 GPU2 GPU3 NIC0 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X NV12 NV12 NV12 PIX 0-39 0 N/A
GPU1 NV12 X NV12 NV12 PIX 0-39 0 N/A
GPU2 NV12 NV12 X NV12 NODE 0-39 0 N/A
GPU3 NV12 NV12 NV12 X SYS 40-79 1 N/A
NIC0 PIX PIX NODE SYS X
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
NIC Legend:
NIC0: mlx5_0
NVIDIA_VISIBLE_DEVICES=GPU-d02eacbf-0d93-7141-2f45-650de9016f82,GPU-6169d05e-1d51-dfee-bbe5-1fe42096e35b,GPU-1dd95362-3e2e-8afc-f4c8-5d28663c3c73,GPU-7be30919-545a-54b7-a882-b565d1c0a133
VLLM_ALLOW_LONG_MAX_MODEL_LEN=1
VLLM_CACHE_ROOT=/tmp
VLLM_CONFIG_ROOT=/tmp
VLLM_WORKER_MULTIPROC_METHOD=fork
VLLM_USAGE_SOURCE=production-docker-image
CUDA_VISIBLE_DEVICES=0,1,2,3
LD_LIBRARY_PATH=/workspace/my-vllm/lib/python3.12/site-packages/cv2/../../lib64:/opt/vllm/lib/python3.12/site-packages/nvidia/nvtx/lib:/opt/vllm/lib/python3.12/site-packages/nvidia/cuda_runtime/lib:/opt/vllm/lib/python3.12/site-packages/nvidia/cuda_nvrtc/lib:
VLLM_NO_USAGE_STATS=1
CUDA_MODULE_LOADING=LAZY
Model Input Dumps
No response
🐛 Describe the bug
On v0.6.5, making a tool call with n>=2 breaks guided decoding with the xgrammar guided decoding backend.
Booting the server with:
$ vllm serve mistralai/Mistral-7B-Instruct-v0.3 --tool-call-parser mistral --enable-auto-tool-choice --compilation-config 3 --chat-template examples/tool_chat_template_mistral_parallel.jinja
And then sending this request:
curl -X 'POST' \
  'http://localhost:8000/v1/chat/completions' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "mistralai/Mistral-7B-Instruct-v0.3",
    "messages": [
      {
        "content": "What is the temperature in SF?",
        "role": "user"
      }
    ],
    "tool_choice": {
      "type": "function",
      "function": {
        "name": "get_current_weather"
      }
    },
    "tools": [{
      "type": "function",
      "function": {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
          "type": "object",
          "properties": {
            "city": {
              "type": "string",
              "description": "The city to find the weather for, e.g. '\''San Francisco'\''"
            },
            "state": {
              "type": "string",
              "description": "the two-letter abbreviation for the state that the city is in, e.g. '\''CA'\'' which would mean '\''California'\''"
            },
            "unit": {
              "type": "string",
              "description": "The unit to fetch the temperature in",
              "enum": ["celsius", "fahrenheit"]
            }
          },
          "required": ["city", "state", "unit"]
        }
      }
    }],
    "repetition_penalty": 1.0,
    "top_k": -1,
    "n": 2
  }'
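For reference, the same request expressed with the OpenAI Python client; this is a sketch assuming the openai package and the server started above, with the vLLM-specific repetition_penalty and top_k sampling parameters passed through extra_body:

from openai import OpenAI

# Point the client at the vLLM server started above; vLLM ignores the API key.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

tools = [{
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string",
                         "description": "The city to find the weather for, e.g. 'San Francisco'"},
                "state": {"type": "string",
                          "description": "the two-letter abbreviation for the state that the city is in, e.g. 'CA' which would mean 'California'"},
                "unit": {"type": "string",
                         "description": "The unit to fetch the temperature in",
                         "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city", "state", "unit"],
        },
    },
}]

# n=2 reproduces the 500 below; the request is otherwise identical to the curl above.
completion = client.chat.completions.create(
    model="mistralai/Mistral-7B-Instruct-v0.3",
    messages=[{"role": "user", "content": "What is the temperature in SF?"}],
    tools=tools,
    tool_choice={"type": "function", "function": {"name": "get_current_weather"}},
    n=2,
    extra_body={"repetition_penalty": 1.0, "top_k": -1},  # vLLM extensions
)
print(completion)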
Either way, the request results in a 500 with this stack trace:
INFO: ::1:55612 - "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error
ERROR 12-18 22:57:54 engine.py:135] IndexError('Error in model execution (input dumped to /tmp/err_execute_model_input_20241218-225754.pkl): tuple index out of range')
ERROR 12-18 22:57:54 engine.py:135] Traceback (most recent call last):
ERROR 12-18 22:57:54 engine.py:135] File "/workspace/my-vllm/lib64/python3.12/site-packages/vllm/worker/model_runner_base.py", line 116, in _wrapper
ERROR 12-18 22:57:54 engine.py:135] return func(*args, **kwargs)
ERROR 12-18 22:57:54 engine.py:135] ^^^^^^^^^^^^^^^^^^^^^
ERROR 12-18 22:57:54 engine.py:135] File "/workspace/my-vllm/lib64/python3.12/site-packages/vllm/worker/model_runner.py", line 1729, in execute_model
ERROR 12-18 22:57:54 engine.py:135] logits = self.model.compute_logits(hidden_or_intermediate_states,
ERROR 12-18 22:57:54 engine.py:135] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 12-18 22:57:54 engine.py:135] File "/workspace/my-vllm/lib64/python3.12/site-packages/vllm/model_executor/models/llama.py", line 578, in compute_logits
ERROR 12-18 22:57:54 engine.py:135] logits = self.logits_processor(self.lm_head, hidden_states,
ERROR 12-18 22:57:54 engine.py:135] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 12-18 22:57:54 engine.py:135] File "/workspace/my-vllm/lib64/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
ERROR 12-18 22:57:54 engine.py:135] return self._call_impl(*args, **kwargs)
ERROR 12-18 22:57:54 engine.py:135] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 12-18 22:57:54 engine.py:135] File "/workspace/my-vllm/lib64/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
ERROR 12-18 22:57:54 engine.py:135] return forward_call(*args, **kwargs)
ERROR 12-18 22:57:54 engine.py:135] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 12-18 22:57:54 engine.py:135] File "/workspace/my-vllm/lib64/python3.12/site-packages/vllm/model_executor/layers/logits_processor.py", line 77, in forward
ERROR 12-18 22:57:54 engine.py:135] logits = _apply_logits_processors(logits, sampling_metadata)
ERROR 12-18 22:57:54 engine.py:135] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 12-18 22:57:54 engine.py:135] File "/workspace/my-vllm/lib64/python3.12/site-packages/vllm/model_executor/layers/logits_processor.py", line 153, in _apply_logits_processors
ERROR 12-18 22:57:54 engine.py:135] logits_row = logits_processor(past_tokens_ids,
ERROR 12-18 22:57:54 engine.py:135] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 12-18 22:57:54 engine.py:135] File "/workspace/my-vllm/lib64/python3.12/site-packages/vllm/model_executor/guided_decoding/xgrammar_decoding.py", line 258, in __call__
ERROR 12-18 22:57:54 engine.py:135] sampled_token = input_ids[-1]
ERROR 12-18 22:57:54 engine.py:135] ~~~~~~~~~^^^^
ERROR 12-18 22:57:54 engine.py:135] IndexError: tuple index out of range
ERROR 12-18 22:57:54 engine.py:135]
ERROR 12-18 22:57:54 engine.py:135] The above exception was the direct cause of the following exception:
ERROR 12-18 22:57:54 engine.py:135]
ERROR 12-18 22:57:54 engine.py:135] Traceback (most recent call last):
ERROR 12-18 22:57:54 engine.py:135] File "/workspace/my-vllm/lib64/python3.12/site-packages/vllm/engine/multiprocessing/engine.py", line 133, in start
ERROR 12-18 22:57:54 engine.py:135] self.run_engine_loop()
ERROR 12-18 22:57:54 engine.py:135] File "/workspace/my-vllm/lib64/python3.12/site-packages/vllm/engine/multiprocessing/engine.py", line 196, in run_engine_loop
ERROR 12-18 22:57:54 engine.py:135] request_outputs = self.engine_step()
ERROR 12-18 22:57:54 engine.py:135] ^^^^^^^^^^^^^^^^^^
ERROR 12-18 22:57:54 engine.py:135] File "/workspace/my-vllm/lib64/python3.12/site-packages/vllm/engine/multiprocessing/engine.py", line 214, in engine_step
ERROR 12-18 22:57:54 engine.py:135] raise e
ERROR 12-18 22:57:54 engine.py:135] File "/workspace/my-vllm/lib64/python3.12/site-packages/vllm/engine/multiprocessing/engine.py", line 205, in engine_step
ERROR 12-18 22:57:54 engine.py:135] return self.engine.step()
ERROR 12-18 22:57:54 engine.py:135] ^^^^^^^^^^^^^^^^^^
ERROR 12-18 22:57:54 engine.py:135] File "/workspace/my-vllm/lib64/python3.12/site-packages/vllm/engine/llm_engine.py", line 1405, in step
ERROR 12-18 22:57:54 engine.py:135] outputs = self.model_executor.execute_model(
ERROR 12-18 22:57:54 engine.py:135] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 12-18 22:57:54 engine.py:135] File "/workspace/my-vllm/lib64/python3.12/site-packages/vllm/executor/gpu_executor.py", line 88, in execute_model
ERROR 12-18 22:57:54 engine.py:135] output = self.driver_worker.execute_model(execute_model_req)
ERROR 12-18 22:57:54 engine.py:135] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 12-18 22:57:54 engine.py:135] File "/workspace/my-vllm/lib64/python3.12/site-packages/vllm/worker/worker_base.py", line 343, in execute_model
ERROR 12-18 22:57:54 engine.py:135] output = self.model_runner.execute_model(
ERROR 12-18 22:57:54 engine.py:135] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 12-18 22:57:54 engine.py:135] File "/workspace/my-vllm/lib64/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
ERROR 12-18 22:57:54 engine.py:135] return func(*args, **kwargs)
ERROR 12-18 22:57:54 engine.py:135] ^^^^^^^^^^^^^^^^^^^^^
ERROR 12-18 22:57:54 engine.py:135] File "/workspace/my-vllm/lib64/python3.12/site-packages/vllm/worker/model_runner_base.py", line 152, in _wrapper
ERROR 12-18 22:57:54 engine.py:135] raise type(err)(
ERROR 12-18 22:57:54 engine.py:135] IndexError: Error in model execution (input dumped to /tmp/err_execute_model_input_20241218-225754.pkl): tuple index out of range
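The failing line in the trace is sampled_token = input_ids[-1] in xgrammar_decoding.py, and the IndexError means input_ids is an empty tuple when the processor is invoked. A plausible reading (my assumption from the trace, not a confirmed root cause): with n>=2 the xgrammar logits processor runs once per sequence in the group, and the first invocation for the extra sequences happens before any token has been sampled for them. A minimal, self-contained sketch of the failure mode and a hypothetical guard (not the actual upstream fix):

# Illustration only: process_logits stands in for the __call__ at
# xgrammar_decoding.py:258 in the trace, which reads input_ids[-1]
# unconditionally.
def process_logits(input_ids: tuple[int, ...]) -> None:
    if input_ids:  # hypothetical guard; the real fix may look different
        sampled_token = input_ids[-1]
        print(f"advance grammar matcher with token {sampled_token}")
    else:
        # Empty tuple: no token has been sampled for this sequence yet, so
        # there is nothing to feed the matcher. Unguarded indexing raises
        # IndexError: tuple index out of range, as in the trace above.
        print("no token sampled yet; skipping matcher advance")

process_logits((1, 2, 3))  # ordinary decode step
process_logits(())         # first step of an extra sequence when n >= 2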
Before submitting a new issue...
- Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.