fix: Do not send an empty 'tools' list to remote vllm #1957
Conversation
Hi @danalsan! Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged. If you have received this in error or have any questions, please contact us at [email protected]. Thanks!
See pre-commit failure
```python
input_dict: dict[str, Any] = {}
if isinstance(request, ChatCompletionRequest) and request.tools is not None:
    # Only include the 'tools' param if there is any. It can break things if an empty list is sent to the vlllm.
```
Suggested change:

```diff
- # Only include the 'tools' param if there is any. It can break things if an empty list is sent to the vlllm.
+ # Only include the 'tools' param if there is any. It can break things if an empty list is sent to the vLLM.
```
Just this one and we should be good to go!
Fixes: llamastack#1955

Since 0.2.0, the vLLM gets an empty list (vs `None` in 0.1.9 and before) when there are no tools configured, which causes the issue described in llamastack#1955. This patch avoids sending the 'tools' param to the vLLM altogether instead of an empty list. It also adds a small unit test to avoid regressions.

Signed-off-by: Daniel Alvarez <[email protected]>
Thank you.
```python
input_dict: dict[str, Any] = {}
if isinstance(request, ChatCompletionRequest) and request.tools is not None:
    # Only include the 'tools' param if there is any. It can break things if an empty list is sent to the vLLM.
```
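For reference, here is a minimal, self-contained sketch of the pattern this hunk applies. `build_request_params` and the stripped-down `ChatCompletionRequest` dataclass are illustrative stand-ins, not the actual llama-stack identifiers, and the real adapter builds more parameters than just `tools`:

```python
from dataclasses import dataclass, field
from typing import Any


@dataclass
class ChatCompletionRequest:
    """Minimal stand-in for the real request type (illustrative only)."""

    model: str
    messages: list[Any] = field(default_factory=list)
    tools: list[Any] | None = None


def build_request_params(request: ChatCompletionRequest) -> dict[str, Any]:
    """Build the kwargs forwarded to the remote vLLM server."""
    input_dict: dict[str, Any] = {}
    # A truthiness check skips both None (0.1.9 and before) and [] (0.2.0+),
    # so an empty 'tools' list is never sent to the remote vLLM.
    if request.tools:
        input_dict["tools"] = request.tools
    return input_dict
```

With this guard, a request configured with no tools produces `{}` rather than `{"tools": []}`.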
Apparently it depends on the vLLM version, but it's good that we are going to handle this gracefully again here.
FYI the CI failure is not related to this PR (the job doesn't even use vLLM). I reported #1959 since I saw the failure a number of times before.
Thanks!
Fixes: #1955

Since 0.2.0, the vLLM gets an empty list (vs `None` in 0.1.9 and before) when there are no tools configured, which causes the issue described in #1955. This patch avoids sending the 'tools' param to the vLLM altogether instead of an empty list. It also adds a small unit test to avoid regressions.

The OpenAI specification does not explicitly state that the list cannot be empty, but I found this out through experimentation and it might depend on the actual remote vLLM. In any case, as this parameter is optional, it's best to skip it altogether if there are no tools configured.
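To illustrate the kind of regression test the description mentions, here is a hedged sketch reusing the hypothetical `build_request_params` and `ChatCompletionRequest` stand-ins from the snippet above; the actual test added by this PR may differ:

```python
def test_empty_tools_list_is_not_sent_to_vllm():
    # Since 0.2.0 a request with no tools carries [] rather than None;
    # neither value should produce a 'tools' key in the outgoing params.
    for tools in ([], None):
        request = ChatCompletionRequest(model="test-model", tools=tools)
        assert "tools" not in build_request_params(request)


def test_tools_list_is_forwarded_when_present():
    # A non-empty tools list should still be passed through unchanged.
    request = ChatCompletionRequest(model="test-model", tools=[{"name": "get_weather"}])
    assert build_request_params(request)["tools"] == [{"name": "get_weather"}]
```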