Client generator streaming support #34
@davidmigloz funny you say that. I actually did look at your implementation earlier and wondered why you subclassed, then I realized it was for the streaming part. Makes sense, and I immediately started thinking about a way to add support for this, because it did look like you ended up copying a lot of the request-building logic.
Thoughts on how this would be possible? How does the spec provide insight as to which endpoints can be streamed? Take `/completions`:

```yaml
/completions:
  post:
    operationId: createCompletion
    tags:
      - Completions
    summary: Creates a completion for the provided prompt and parameters.
    requestBody:
      required: true
      content:
        application/json:
          schema:
            $ref: "#/components/schemas/CreateCompletionRequest"
    responses:
      "200":
        description: OK
        content:
          application/json:
            schema:
              $ref: "#/components/schemas/CreateCompletionResponse"
```

Any thoughts on what that interface would look like?
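One possible shape for that interface, sketched in Dart (the names below are illustrative placeholders, not actual generator output):

```dart
import 'dart:async';

// Stub models standing in for the generated schema classes.
class CreateCompletionRequest {}

class CreateCompletionResponse {}

// Hypothetical generated client: one method per response content type.
abstract class CompletionsClient {
  // Generated from the `application/json` response.
  Future<CreateCompletionResponse> createCompletion(
      CreateCompletionRequest request);

  // Could be generated if the spec also declared a `text/event-stream`
  // response for the same operation.
  Stream<CreateCompletionResponse> createCompletionStream(
      CreateCompletionRequest request);
}
```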
I've refactored it a bit.
The official OpenAI spec doesn't provide any info to differentiate streaming vs non-streaming endpoints. But it could, if the streaming response were defined with a `text/event-stream` content type. What I had in mind was something like this:

```yaml
/completions:
  post:
    operationId: createCompletion
    tags:
      - Completions
    summary: Creates a completion for the provided prompt and parameters.
    requestBody:
      required: true
      content:
        application/json:
          schema:
            $ref: "#/components/schemas/CreateCompletionRequest"
    responses:
      "200":
        description: OK
        content:
          application/json:
            schema:
              $ref: "#/components/schemas/CreateCompletionResponse"
  post:
    operationId: createCompletionStream
    tags:
      - Completions
    summary: Creates a completion for the provided prompt and parameters.
    requestBody:
      required: true
      content:
        application/json:
          schema:
            $ref: "#/components/schemas/CreateCompletionRequest"
    responses:
      "200":
        description: OK
        content:
          text/event-stream:
            schema:
              $ref: "#/components/schemas/CreateCompletionResponse"
```

However, I'm realising now that you cannot define the endpoint twice, nor have two `post` operations under the same path. There doesn't seem to be any convention for SSE in the OpenAPI spec yet, so I guess the only option would be to use OpenAPI extensions (which is something we want to avoid for this package, as discussed). Anyway, the amount of code required to support streaming in the subclass is quite minimal, so I'm not really annoyed by having to maintain it manually. I think we can close this issue until the OpenAPI spec adds support for streaming.
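For reference, the manual streaming piece is roughly of this shape: a hedged sketch, assuming SSE frames like `data: {...}` terminated by a `data: [DONE]` sentinel; the actual subclass may differ.

```dart
import 'dart:async';
import 'dart:convert';

/// Illustrative SSE decoder: extracts the JSON payload from `data:` lines
/// and drops the `[DONE]` sentinel.
final StreamTransformer<String, Map<String, dynamic>> sseTransformer =
    StreamTransformer.fromHandlers(
  handleData: (String line, EventSink<Map<String, dynamic>> sink) {
    if (line.startsWith('data: ')) {
      final data = line.substring('data: '.length);
      if (data != '[DONE]') {
        sink.add(json.decode(data) as Map<String, dynamic>);
      }
    }
  },
);

// Usage sketch, given a byte stream from the HTTP response:
// responseByteStream
//     .transform(utf8.decoder)
//     .transform(const LineSplitter())
//     .transform(sseTransformer)
//     .listen(print);
```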
Sounds good! The refactor in #38 is exactly what I was thinking as well. Yeah, the manual code you have to maintain should be super minimal.
Streaming support: if you check the implementation, I'm subclassing the generated client to add streaming support manually. It may be possible to generate that code as well. The only piece that cannot be generated automatically from the spec is the `StreamTransformer`, which could be provided via `ClientGeneratorOptions`.
Originally posted by @davidmigloz in #32 (comment)
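If the generator were to take the transformer as input, the option surface might look like this. This is purely hypothetical: `ClientGeneratorOptions` is only named in the comment above, and the field shown here is invented for illustration.

```dart
import 'dart:async';

// Hypothetical generator options: map each streaming operationId to the
// transformer that decodes its `text/event-stream` response body.
class ClientGeneratorOptions {
  const ClientGeneratorOptions({
    this.streamTransformers = const {},
  });

  final Map<String, StreamTransformer<List<int>, Object?>> streamTransformers;
}
```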