🚀 The feature, motivation and pitch
Title: Conditional Prompt Inclusion in the `generate` Function for Streaming Efficiency

Feature Proposal:
This feature introduces a new parameter, `is_return_prompt`, to the `generate` function in `vllm/entrypoints/api_server.py`. The parameter allows users to conditionally include the prompt in the generated response, addressing inefficiencies observed in streaming scenarios.
Motivation and Pitch:
In the current implementation, the `generate` function always includes the prompt in its response, whether or not streaming is enabled. This is inefficient, especially in streaming mode, where the prompt is re-sent with every token update. The repetition is redundant and slows processing, since users typically do not need to see the prompt again after providing it to the LLM.
Proposal:
The proposed feature adds an `is_return_prompt` parameter to the `generate` function. When `is_return_prompt` is `False` (the default), the prompt is not included in the response; when `True`, the prompt is included as part of the output. This makes streaming more efficient and reduces redundancy.
Details:
- New parameter: `is_return_prompt` (default: `False`)
- Effect: When `is_return_prompt` is `True`, the prompt is included in the response; otherwise, the prompt is omitted.
- Use case: Improves performance in streaming scenarios by avoiding repeated prompt inclusion, which is particularly useful when processing large amounts of generated text.
Alternatives
No response
Additional context
This feature is particularly relevant for users working with streaming responses, where including the prompt with each token update can hinder performance. The new parameter will provide greater control over the response format, making it more suitable for various use cases and improving overall efficiency.
By incorporating this feature, users can benefit from more streamlined and performant interactions with the `generate` function, especially in scenarios involving continuous or large-scale text generation.