Your current environment
The output of python collect_env.py
Collecting environment information...
==============================
System Info
==============================
OS : Debian GNU/Linux 11 (bullseye) (x86_64)
GCC version : (Debian 10.2.1-6) 10.2.1 20210110
Clang version : Could not collect
CMake version : version 4.1.0
Libc version : glibc-2.31
==============================
PyTorch Info
==============================
PyTorch version : 2.9.0.dev20250730+cpu
Is debug build : False
CUDA used to build PyTorch : None
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.12.11 (main, Jul 22 2025, 04:27:29) [GCC 10.2.1 20210110] (64-bit runtime)
Python platform : Linux-6.8.0-1015-gcp-x86_64-with-glibc2.31
==============================
CUDA / GPU Info
==============================
Is CUDA available : False
CUDA runtime version : No CUDA
CUDA_MODULE_LOADING set to : N/A
GPU models and configuration : No CUDA
Nvidia driver version : No CUDA
cuDNN version : No CUDA
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 52 bits physical, 57 bits virtual
CPU(s): 180
On-line CPU(s) list: 0-179
Thread(s) per core: 2
Core(s) per socket: 90
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 25
Model: 17
Model name: AMD EPYC 9B14
Stepping: 1
CPU MHz: 2599.998
BogoMIPS: 5199.99
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 2.8 MiB
L1i cache: 2.8 MiB
L2 cache: 90 MiB
L3 cache: 352 MiB
NUMA node0 CPU(s): 0-179
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; Safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 clzero xsaveerptr wbnoinvd arat avx512vbmi umip avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm
==============================
Versions of relevant libraries
==============================
[pip3] numpy==2.2.6
[pip3] pyzmq==27.0.1
[pip3] torch==2.9.0.dev20250730+cpu
[pip3] torch-xla==2.9.0+git199a9bd
[pip3] torchvision==0.24.0.dev20250730+cpu
[pip3] transformers==4.55.2
[pip3] triton==3.4.0
[conda] Could not collect
==============================
vLLM Info
==============================
ROCM Version : Could not collect
Neuron SDK Version : N/A
vLLM Version : 0.10.1rc2.dev44+g24f4d1a22 (git sha: 24f4d1a22)
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
Could not collect
==============================
Environment Variables
==============================
VLLM_TARGET_DEVICE=tpu
NCCL_CUMEM_ENABLE=0
PYTORCH_NVML_BASED_CUDA_CHECK=1
TORCHINDUCTOR_COMPILE_THREADS=1
🐛 Describe the bug
Running vllm/examples/offline_inference/profiling_tpu/profiling.py on a v6e-4 TPU (software version v2-alpha-v6e) inside the vllm_tpu Docker container causes the program to crash.
export XLA_HLO_DEBUG=1
export MODEL=Qwen/Qwen2.5-7B-Instruct
export VLLM_TPU_PROFILE_DURATION_MS=3000
export VLLM_TPU_PROFILE_DELAY_MS=0
python3 profiling.py \
--model $MODEL \
--input-len 1024 --output-len 1 \
--batch-size 1 --enforce-eager \
--max-model-len 2048 \
--tensor-parallel-size 1 \
--profile-result-dir profiles
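For context, the crash comes from the profiler-server setup in profiling.py. A simplified sketch of the relevant call sequence (not the exact script; the model and engine arguments are taken from the command above):

import torch_xla.debug.profiler as xp
from vllm import LLM

# The LLM constructor spawns the EngineCore_0 process, which initializes the
# TPU and opens /dev/vfio/0.
llm = LLM(
    model="Qwen/Qwen2.5-7B-Instruct",
    max_model_len=2048,
    enforce_eager=True,
    tensor_parallel_size=1,
)

# Starting the profiler server afterwards touches the torch_xla runtime in the
# front-end process; this is the call that raises the error shown below.
server = xp.start_server(9012)  # noqa: F841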
Error output log
root@t1v-n-e177828d-w-0:/workspace/vllm/examples/offline_inference/profiling_tpu# python3 profiling.py --model $MODEL --input-len 1024 --output-len 1 --batch-size 1 --enforce-eager --max-model-len 2048 --tensor-parallel-size 1 --profile-result-dir profiles
WARNING:root:libtpu.so and TPU device found. Setting PJRT_DEVICE=TPU.
INFO 08-19 17:27:53 [__init__.py:241] Automatically detected platform tpu.
INFO 08-19 17:27:53 [tpu.py:208] tpu_commons not found, using vLLM's TpuPlatform
Namespace(input_len=1024, output_len=1, batch_size=1, num_iters_warmup=5, num_iters=1, profile_result_dir='profiles', model='Qwen/Qwen2.5-7B-Instruct', runner='auto', convert='auto', task=None, tokenizer=None, tokenizer_mode='auto', trust_remote_code=False, dtype='auto', seed=None, hf_config_path=None, allowed_local_media_path='', revision=None, code_revision=None, rope_scaling={}, rope_theta=None, tokenizer_revision=None, max_model_len=2048, quantization=None, enforce_eager=True, max_seq_len_to_capture=8192, max_logprobs=20, logprobs_mode='raw_logprobs', disable_sliding_window=False, disable_cascade_attn=False, skip_tokenizer_init=False, enable_prompt_embeds=False, served_model_name=None, disable_async_output_proc=False, config_format='auto', hf_token=None, hf_overrides={}, override_neuron_config={}, override_pooler_config=None, logits_processor_pattern=None, generation_config='auto', override_generation_config={}, enable_sleep_mode=False, model_impl='auto', override_attention_dtype=None, logits_processors=None, load_format='auto', download_dir=None, model_loader_extra_config={}, ignore_patterns=None, use_tqdm_on_load=True, pt_load_map_location='cpu', guided_decoding_backend='auto', guided_decoding_disable_fallback=False, guided_decoding_disable_any_whitespace=False, guided_decoding_disable_additional_properties=False, reasoning_parser='', distributed_executor_backend=None, pipeline_parallel_size=1, tensor_parallel_size=1, data_parallel_size=1, data_parallel_rank=None, data_parallel_start_rank=None, data_parallel_size_local=None, data_parallel_address=None, data_parallel_rpc_port=None, data_parallel_backend='mp', data_parallel_hybrid_lb=False, enable_expert_parallel=False, enable_eplb=False, num_redundant_experts=0, eplb_window_size=1000, eplb_step_interval=3000, eplb_log_balancedness=False, max_parallel_loading_workers=None, ray_workers_use_nsight=False, disable_custom_all_reduce=False, worker_cls='auto', worker_extension_cls='', enable_multimodal_encoder_data_parallel=False, block_size=None, gpu_memory_utilization=0.9, swap_space=4, kv_cache_dtype='auto', num_gpu_blocks_override=None, enable_prefix_caching=None, prefix_caching_hash_algo='builtin', cpu_offload_gb=0, calculate_kv_scales=False, kv_sharing_fast_prefill=False, mamba_cache_dtype='auto', mamba_ssm_cache_dtype='auto', limit_mm_per_prompt={}, media_io_kwargs={}, mm_processor_kwargs=None, mm_processor_cache_gb=4, disable_mm_preprocessor_cache=False, interleave_mm_strings=False, skip_mm_profiling=False, enable_lora=None, enable_lora_bias=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', max_cpu_loras=None, fully_sharded_loras=False, default_mm_loras=None, show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None, max_num_batched_tokens=None, max_num_seqs=None, max_num_partial_prefills=1, max_long_partial_prefills=1, cuda_graph_sizes=[], long_prefill_token_threshold=0, num_lookahead_slots=0, scheduler_delay_factor=0.0, preemption_mode=None, scheduling_policy='fcfs', enable_chunked_prefill=None, disable_chunked_mm_input=False, scheduler_cls='vllm.core.scheduler.Scheduler', disable_hybrid_kv_cache_manager=False, async_scheduling=False, speculative_config=None, kv_transfer_config=None, kv_events_config=None, 
compilation_config={"level":null,"debug_dump_path":"","cache_dir":"","backend":"","custom_ops":[],"splitting_ops":null,"use_inductor":true,"compile_sizes":null,"inductor_compile_config":{"enable_auto_functionalized_v2":false},"inductor_passes":{},"cudagraph_mode":null,"use_cudagraph":true,"cudagraph_num_of_warmups":0,"cudagraph_capture_sizes":null,"cudagraph_copy_inputs":false,"full_cuda_graph":false,"pass_config":{},"max_capture_size":null,"local_cache_dir":null}, additional_config={}, disable_log_stats=False, enable_prompt_adapter=False)
INFO 08-19 17:27:54 [utils.py:326] non-default args: {'model': 'Qwen/Qwen2.5-7B-Instruct', 'max_model_len': 2048, 'enforce_eager': True, 'enable_lora': None}
INFO 08-19 17:28:01 [__init__.py:711] Resolved architecture: Qwen2ForCausalLM
INFO 08-19 17:28:01 [__init__.py:1750] Using max model len 2048
INFO 08-19 17:28:01 [scheduler.py:222] Chunked prefill is enabled with max_num_batched_tokens=8192.
INFO 08-19 17:28:01 [tpu.py:114] [TPU] Forcing DYNAMO_ONCE compilation level, and disabling cudagraph.
(EngineCore_0 pid=4451) INFO 08-19 17:28:02 [core.py:644] Waiting for init message from front-end.
(EngineCore_0 pid=4451) INFO 08-19 17:28:02 [core.py:74] Initializing a V1 LLM engine (v0.10.1rc2.dev44+g24f4d1a22) with config: model='Qwen/Qwen2.5-7B-Instruct', speculative_config=None, tokenizer='Qwen/Qwen2.5-7B-Instruct', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config={}, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=2048, download_dir=None, load_format=auto, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=True, quantization=None, enforce_eager=True, kv_cache_dtype=auto, device_config=None, decoding_config=DecodingConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_backend=''), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None), seed=0, served_model_name=Qwen/Qwen2.5-7B-Instruct, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=False, pooler_config=None, compilation_config={"level":2,"debug_dump_path":"","cache_dir":"","backend":"openxla","custom_ops":[],"splitting_ops":null,"use_inductor":true,"compile_sizes":null,"inductor_compile_config":{"enable_auto_functionalized_v2":false},"inductor_passes":{},"cudagraph_mode":0,"use_cudagraph":true,"cudagraph_num_of_warmups":0,"cudagraph_capture_sizes":null,"cudagraph_copy_inputs":false,"full_cuda_graph":false,"pass_config":{},"max_capture_size":null,"local_cache_dir":null}
(EngineCore_0 pid=4451) INFO 08-19 17:28:02 [importing.py:43] Triton is installed but 0 active driver(s) found (expected 1). Disabling Triton to prevent runtime errors.
(EngineCore_0 pid=4451) INFO 08-19 17:28:02 [importing.py:63] Triton not installed or not compatible; certain GPU-related functions will not be available.
(EngineCore_0 pid=4451) INFO 08-19 17:28:02 [tpu_worker.py:35] tpu_commons not found, using vLLM's TPUWorker.
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
[Gloo] Rank 0 is connected to 0 peer ranks. Expected number of connected peer ranks is : 0
(EngineCore_0 pid=4451) INFO 08-19 17:28:02 [parallel_state.py:1134] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, TP rank 0, EP rank 0
(EngineCore_0 pid=4451) WARNING 08-19 17:28:08 [tpu.py:170] Pin memory is not supported on TPU.
(EngineCore_0 pid=4451) INFO 08-19 17:28:08 [tpu_model_runner.py:1872] Using exponential token paddings:
(EngineCore_0 pid=4451) INFO 08-19 17:28:08 [tpu_model_runner.py:1874] 16
(EngineCore_0 pid=4451) INFO 08-19 17:28:08 [tpu_model_runner.py:1874] 32
(EngineCore_0 pid=4451) INFO 08-19 17:28:08 [tpu_model_runner.py:1874] 64
(EngineCore_0 pid=4451) INFO 08-19 17:28:08 [tpu_model_runner.py:1874] 128
(EngineCore_0 pid=4451) INFO 08-19 17:28:08 [tpu_model_runner.py:1874] 256
(EngineCore_0 pid=4451) INFO 08-19 17:28:08 [tpu_model_runner.py:1874] 512
(EngineCore_0 pid=4451) INFO 08-19 17:28:08 [tpu_model_runner.py:1874] 1024
(EngineCore_0 pid=4451) INFO 08-19 17:28:08 [tpu_model_runner.py:1874] 2048
(EngineCore_0 pid=4451) INFO 08-19 17:28:08 [tpu_model_runner.py:1874] 4096
(EngineCore_0 pid=4451) INFO 08-19 17:28:08 [tpu_model_runner.py:1874] 8192
(EngineCore_0 pid=4451) INFO 08-19 17:28:08 [tpu_model_runner.py:1838] Preparing request paddings:
(EngineCore_0 pid=4451) INFO 08-19 17:28:08 [tpu_model_runner.py:1845] 8
(EngineCore_0 pid=4451) INFO 08-19 17:28:08 [tpu_model_runner.py:1845] 16
(EngineCore_0 pid=4451) INFO 08-19 17:28:08 [tpu_model_runner.py:1845] 32
(EngineCore_0 pid=4451) INFO 08-19 17:28:08 [tpu_model_runner.py:1845] 64
(EngineCore_0 pid=4451) INFO 08-19 17:28:08 [tpu_model_runner.py:1845] 128
(EngineCore_0 pid=4451) INFO 08-19 17:28:08 [tpu_model_runner.py:1845] 256
(EngineCore_0 pid=4451) INFO 08-19 17:28:08 [tpu_model_runner.py:1199] Loading model from scratch...
(EngineCore_0 pid=4451) INFO 08-19 17:28:08 [tpu.py:55] Cannot use None backend on TPU.
(EngineCore_0 pid=4451) INFO 08-19 17:28:08 [tpu.py:59] Using Pallas V1 backend.
(EngineCore_0 pid=4451) INFO 08-19 17:28:09 [weight_utils.py:296] Using model weights format ['*.safetensors']
Loading safetensors checkpoint shards: 0% Completed | 0/4 [00:00<?, ?it/s]
2025-08-19 17:28:09.628755: W torch_xla/csrc/xla_graph_executor.cpp:107] Using persistent compilation cache with XLA_HLO_DEBUG=1 or XLA_IR_DEBUG=1 is not recommended. Changes to the HLO metadata will not be reflected in loaded executables.
2025-08-19 17:28:09.632572: W torch_xla/csrc/runtime/pjrt_computation_client.cpp:691] Failed to deserialize executable: UNIMPLEMENTED: Deserializing serialized executable not supported.
2025-08-19 17:28:10.498230: W torch_xla/csrc/runtime/pjrt_computation_client.cpp:691] Failed to deserialize executable: UNIMPLEMENTED: Deserializing serialized executable not supported.
2025-08-19 17:28:10.737001: W torch_xla/csrc/runtime/pjrt_computation_client.cpp:691] Failed to deserialize executable: UNIMPLEMENTED: Deserializing serialized executable not supported.
2025-08-19 17:28:10.973143: W torch_xla/csrc/runtime/pjrt_computation_client.cpp:691] Failed to deserialize executable: UNIMPLEMENTED: Deserializing serialized executable not supported.
2025-08-19 17:28:10.990855: W torch_xla/csrc/runtime/pjrt_computation_client.cpp:691] Failed to deserialize executable: UNIMPLEMENTED: Deserializing serialized executable not supported.
2025-08-19 17:28:11.212710: W torch_xla/csrc/runtime/pjrt_computation_client.cpp:691] Failed to deserialize executable: UNIMPLEMENTED: Deserializing serialized executable not supported.
2025-08-19 17:28:11.230755: W torch_xla/csrc/runtime/pjrt_computation_client.cpp:691] Failed to deserialize executable: UNIMPLEMENTED: Deserializing serialized executable not supported.
2025-08-19 17:28:11.475966: W torch_xla/csrc/runtime/pjrt_computation_client.cpp:691] Failed to deserialize executable: UNIMPLEMENTED: Deserializing serialized executable not supported.
2025-08-19 17:28:11.494094: W torch_xla/csrc/runtime/pjrt_computation_client.cpp:691] Failed to deserialize executable: UNIMPLEMENTED: Deserializing serialized executable not supported.
Loading safetensors checkpoint shards: 25% Completed | 1/4 [00:02<00:06, 2.21s/it]
Loading safetensors checkpoint shards: 50% Completed | 2/4 [00:02<00:02, 1.08s/it]
Loading safetensors checkpoint shards: 75% Completed | 3/4 [00:02<00:00, 1.40it/s]
2025-08-19 17:28:12.514549: W torch_xla/csrc/runtime/pjrt_computation_client.cpp:691] Failed to deserialize executable: UNIMPLEMENTED: Deserializing serialized executable not supported.
Loading safetensors checkpoint shards: 100% Completed | 4/4 [00:03<00:00, 1.71it/s]
Loading safetensors checkpoint shards: 100% Completed | 4/4 [00:03<00:00, 1.26it/s]
(EngineCore_0 pid=4451)
(EngineCore_0 pid=4451) INFO 08-19 17:28:12 [default_loader.py:267] Loading weights took 3.28 seconds
2025-08-19 17:28:25.861653: W torch_xla/csrc/runtime/pjrt_computation_client.cpp:691] Failed to deserialize executable: UNIMPLEMENTED: Deserializing serialized executable not supported.
(EngineCore_0 pid=4451) INFO 08-19 17:28:30 [tpu_model_runner.py:1702] Clear dynamo cache and cached dynamo bytecode.
(EngineCore_0 pid=4451) INFO 08-19 17:28:30 [kv_cache_utils.py:849] GPU KV cache size: 255,360 tokens
(EngineCore_0 pid=4451) INFO 08-19 17:28:30 [kv_cache_utils.py:853] Maximum concurrency for 2,048 tokens per request: 124.69x
(EngineCore_0 pid=4451) INFO 08-19 17:28:31 [core.py:215] init engine (profile, create kv cache, warmup model) took 18.32 seconds
INFO 08-19 17:28:31 [llm.py:298] Supported_tasks: ['generate']
Traceback (most recent call last):
File "/workspace/vllm/examples/offline_inference/profiling_tpu/profiling.py", line 110, in <module>
main(args)
File "/workspace/vllm/examples/offline_inference/profiling_tpu/profiling.py", line 27, in main
server = xp.start_server(9012) # noqa: F841
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/torch_xla/debug/profiler.py", line 41, in start_server
if not only_on_master or xm.is_master_ordinal():
^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/torch_xla/core/xla_model.py", line 109, in is_master_ordinal
ordinal = get_local_ordinal() if local else runtime.global_ordinal()
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/torch_xla/core/xla_model.py", line 94, in get_local_ordinal
return runtime.local_ordinal()
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/torch_xla/runtime.py", line 158, in local_ordinal
devices_per_process = addressable_device_count()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/torch_xla/runtime.py", line 138, in addressable_device_count
return torch_xla._XLAC._xla_num_devices()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: TPU initialization failed: open(/dev/vfio/0): Device or resource busy: Device or resource busy; Couldn't open iommu group /dev/vfio/0
Exception raised from MaybeThrow at torch_xla/csrc/status.cpp:123 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x80 (0x73c626d932c0 in /usr/local/lib/python3.12/site-packages/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) + 0x65 (0x73c626d2f0b9 in /usr/local/lib/python3.12/site-packages/torch/lib/libc10.so)
frame #2: torch_xla::MaybeThrow(absl::lts_20230802::Status const&) + 0x4d8 (0x73c5e9468ab8 in /usr/local/lib/python3.12/site-packages/_XLAC.cpython-312-x86_64-linux-gnu.so)
frame #3: torch_xla::runtime::GetComputationClientOrDie() + 0x1d (0x73c5dfe0e8ad in /usr/local/lib/python3.12/site-packages/_XLAC.cpython-312-x86_64-linux-gnu.so)
frame #4: <unknown function> + 0x6bc100d (0x73c5df89200d in /usr/local/lib/python3.12/site-packages/_XLAC.cpython-312-x86_64-linux-gnu.so)
frame #5: <unknown function> + 0x6c063e1 (0x73c5df8d73e1 in /usr/local/lib/python3.12/site-packages/_XLAC.cpython-312-x86_64-linux-gnu.so)
<omitting python frames>
frame #17: __libc_start_main + 0xea (0x73c6adaa9d7a in /lib/x86_64-linux-gnu/libc.so.6)
ERROR 08-19 17:28:35 [core_client.py:562] Engine core proc EngineCore_0 died unexpectedly, shutting down client.
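Based on the traceback, the failure looks like a device-ownership conflict rather than something specific to the profiler: EngineCore_0 (pid 4451) already holds the TPU, and xp.start_server() then forces torch_xla runtime initialization in the front-end process, whose open of /dev/vfio/0 fails with "Device or resource busy". A hypothetical minimal isolation of that pattern (not part of the original report, just a sketch of the suspected interaction):

import torch_xla.runtime as xr
from vllm import LLM

# EngineCore claims the TPU in a child process.
llm = LLM(model="Qwen/Qwen2.5-7B-Instruct", enforce_eager=True, max_model_len=2048)

# Any call that initializes the torch_xla runtime in the parent process should
# now hit the same "Device or resource busy" error.
print(xr.addressable_device_count())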