
Conversation

youkaichao (Member) commented:

Caused by #7109. Adding msgspec should solve this.

cc @simon-mo @khluu @ywang96 @DarkLight1337 , let's find a permanent solution.


👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs don't trigger a full CI run by default. Instead, only fastcheck CI runs, which consists of a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of the default ones by unblocking the steps in your fast-check build in the Buildkite UI.

Once the PR is approved and ready to go, please make sure to run full CI as it is required to merge (or just use auto-merge).

To run full CI, you can do one of these:

  • Comment /ready on the PR
  • Add ready label to the PR
  • Enable auto-merge.

🚀

@youkaichao mentioned this pull request Aug 19, 2024
@mgoin (Member) left a comment:


LGTM if sufficient. It seems there is now a strange NVML error in the doc build:


[2024-08-19T18:38:40Z] Exception occurred:
[2024-08-19T18:38:40Z]   File "/usr/local/lib/python3.10/dist-packages/pynvml.py", line 979, in _nvmlCheckReturn
[2024-08-19T18:38:40Z]     raise NVMLError(ret)
[2024-08-19T18:38:40Z] pynvml.NVMLError_LibraryNotFound: NVML Shared Library Not Found
[2024-08-19T18:38:40Z] The full traceback has been saved in /tmp/sphinx-err-jlr90kw2.log, if you want to report the issue to the developers.
[2024-08-19T18:38:40Z] Please also report this if it was a user error, so that a better error message can be provided next time.
[2024-08-19T18:38:40Z] A bug report can be filed in the tracker at <https://github.com/sphinx-doc/sphinx/issues>. Thanks!
[2024-08-19T18:38:40Z] make: *** [Makefile:20: html] Error 2
[2024-08-19T18:38:41Z] 🚨 Error: The command exited with status 2
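For context, pynvml raises NVMLError_LibraryNotFound when the NVIDIA driver's NVML shared library is missing, which is expected on a GPU-less doc-build machine. A minimal sketch of a probe-and-catch guard for this failure mode (the helper name nvml_available is hypothetical, not vLLM's actual API):

import pynvml

def nvml_available() -> bool:
    # Hypothetical helper: trust NVML only after nvmlInit() actually
    # succeeds; catching NVMLError covers GPU-less machines such as
    # the doc builder, where the shared library is not installed.
    try:
        pynvml.nvmlInit()
        pynvml.nvmlShutdown()
        return True
    except pynvml.NVMLError:
        return False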


Comment on lines +5 to +15
# NOTE: we don't use `torch.version.cuda` / `torch.version.hip` because
# they only indicate the build configuration, not the runtime environment.
# For example, people can install a cuda build of pytorch but run on tpu.

is_tpu = False
try:
    import torch_xla.core.xla_model as xm
    xm.xla_device(devkind="TPU")
    is_tpu = True
except Exception:
    pass
youkaichao (Member, Author) replied:


cc @WoosukKwon

We don't treat a successful import as a flag. Instead, we only trust that we are on a given platform when some device code actually executes successfully there.

Technically, the libtpu Python package can be installed on any platform, so import success alone proves nothing.
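As an illustration of that principle, here is a minimal sketch (an assumption for illustration, not code from this PR) applying the same execute-and-trust probe to CUDA:

is_cuda = False
try:
    import torch
    # Run a tiny operation on the device; mere import success or a
    # CUDA build of PyTorch on a CPU-only box is not enough.
    torch.zeros(1, device="cuda")
    is_cuda = True
except Exception:
    pass

The key design choice is that the probe must exercise the device, so detection reflects the runtime environment rather than the build configuration.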

@youkaichao added the ready label (ONLY add when PR is ready to merge/full CI is needed) Aug 20, 2024
@youkaichao merged commit e54ebc2 into vllm-project:main Aug 20, 2024
@youkaichao deleted the fix_doc_build branch August 20, 2024 00:51
Alvant pushed a commit to compressa-ai/vllm that referenced this pull request Oct 26, 2024
LeiWang1999 pushed a commit to LeiWang1999/vllm-bitblas that referenced this pull request Mar 26, 2025