
Client generator streaming support #34


Closed
walsha2 opened this issue Nov 2, 2023 · 3 comments


walsha2 (Contributor) commented Nov 2, 2023

Streaming support: if you check the implementation I'm subclassing the generated client to add streaming support manually. It may be possible to generate that code as well. The only piece that you cannot generate automatically from the spec is the StreamTransformer which could be provided as a ClientGeneratorOptions.

Originally posted by @davidmigloz in #32 (comment)
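
For context, the manual approach being quoted looks roughly like the sketch below. This is illustrative only: it assumes package:http and invents the names (CompletionsClient, createCompletionStream, the SSE data: parsing); it is not the actual implementation being referenced.

import 'dart:async';
import 'dart:convert';

import 'package:http/http.dart' as http;

// Hypothetical stand-in for a generated client; only the pieces needed
// for the sketch are shown.
class CompletionsClient {
  CompletionsClient({required this.baseUrl, http.Client? client})
      : client = client ?? http.Client();

  final String baseUrl;
  final http.Client client;
}

// Hand-written subclass that adds streaming on top of the generated client.
class CompletionsStreamClient extends CompletionsClient {
  CompletionsStreamClient({required super.baseUrl, super.client});

  // The one piece the spec cannot describe: splitting the
  // text/event-stream body into SSE `data:` payloads.
  static final StreamTransformer<String, String> _sseTransformer =
      StreamTransformer.fromBind(
    (lines) => lines
        .where((line) => line.startsWith('data: '))
        .map((line) => line.substring('data: '.length))
        .where((data) => data != '[DONE]'),
  );

  Stream<String> createCompletionStream(Map<String, dynamic> request) async* {
    final req = http.Request('POST', Uri.parse('$baseUrl/completions'))
      ..headers['Content-Type'] = 'application/json'
      ..body = json.encode({...request, 'stream': true});
    final res = await client.send(req);
    yield* res.stream
        .transform(utf8.decoder)
        .transform(const LineSplitter())
        .transform(_sseTransformer);
  }
}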

walsha2 (Contributor, Author) commented Nov 2, 2023

@davidmigloz funny you say that. I actually did look at your implementation earlier and wondered why you subclassed; then I realized it was for the streaming part. Makes sense, and I immediately thought about a way to add support for this, because it did look like you ended up copying a lot of the request-building logic in _requestStream. I think I have a solution for at least this first part (to reduce code duplication).

It may be possible to generate that code as well

Thoughts on how this would be possible? How does the spec provide insight into which endpoints can be streamed? Take /completions as an example:

/completions:
  post:
    operationId: createCompletion
    tags:
      - Completions
    summary: Creates a completion for the provided prompt and parameters.
    requestBody:
      required: true
      content:
        application/json:
          schema:
            $ref: "#/components/schemas/CreateCompletionRequest"
    responses:
      "200":
        description: OK
        content:
          application/json:
            schema:
              $ref: "#/components/schemas/CreateCompletionResponse"

The only piece that you cannot generate automatically from the spec is the StreamTransformer which could be provided as a ClientGeneratorOptions

Any thoughts on what that interface would look like?
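
Maybe something like this (purely illustrative; these names are invented for the sake of discussion and are not an existing API in this package):

import 'dart:async';

// Hypothetical shape: everything else about a streaming endpoint could be
// derived from the spec, so only the transformer has to be supplied by hand.
class ClientGeneratorOptions {
  const ClientGeneratorOptions({this.streamTransformers = const {}});

  // Keyed by operationId. For each entry the generator would emit a
  // Stream-returning method and pipe the raw response bytes through
  // the corresponding transformer.
  final Map<String, StreamTransformer<List<int>, Object?>> streamTransformers;
}

// Usage sketch:
// final options = ClientGeneratorOptions(
//   streamTransformers: {'createCompletion': mySseTransformer},
// );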

davidmigloz (Contributor) commented

@davidmigloz funny you say that. I actually did look at your implementation earlier and wondered why you subclassed; then I realized it was for the streaming part. Makes sense, and I immediately thought about a way to add support for this, because it did look like you ended up copying a lot of the request-building logic in _requestStream. I think I have a solution for at least this first part (to reduce code duplication).

I've refactored the _request method a bit in https://github.com/tazatechnology/openapi_spec/pull/38/files; now the subclass can easily add streaming support without duplicating any request code.
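
Roughly the idea (illustrative names only, not the exact code in the PR): the generated base client exposes a single request-building step that the streaming subclass can reuse, so only the response handling differs.

import 'dart:convert';

import 'package:http/http.dart' as http;

// Sketch of the general idea; names and shapes are illustrative.
class GeneratedClient {
  GeneratedClient({required this.baseUrl, http.Client? client})
      : client = client ?? http.Client();

  final String baseUrl;
  final http.Client client;

  // Single place where requests are built, shared by all generated methods.
  http.Request buildRequest({
    required String method,
    required String path,
    Object? body,
  }) {
    final req = http.Request(method, Uri.parse('$baseUrl$path'))
      ..headers['Content-Type'] = 'application/json';
    if (body != null) req.body = json.encode(body);
    return req;
  }
}

// The streaming subclass reuses the shared request building and only
// changes how the response body is consumed.
class StreamingClient extends GeneratedClient {
  StreamingClient({required super.baseUrl, super.client});

  Stream<List<int>> postStream(String path, Object? body) async* {
    final req = buildRequest(method: 'POST', path: path, body: body);
    final res = await client.send(req);
    yield* res.stream;
  }
}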

Thoughts on how this would be possible? How does the spec provide insight into which endpoints can be streamed? Take /completions as an example:

The official OpenAI spec doesn't provide any info to differentiate streaming from non-streaming. But it could, if the streaming response were defined as text/event-stream instead of application/json.

What I had in mind was something like this:

/completions:
  post:
    operationId: createCompletion
    tags:
      - Completions
    summary: Creates a completion for the provided prompt and parameters.
    requestBody:
      required: true
      content:
        application/json:
          schema:
            $ref: "#/components/schemas/CreateCompletionRequest"
    responses:
      "200":
        description: OK
        content:
          application/json:
            schema:
              $ref: "#/components/schemas/CreateCompletionResponse"
  post:
    operationId: createCompletionStream
    tags:
      - Completions
    summary: Creates a completion for the provided prompt and parameters.
    requestBody:
      required: true
      content:
        application/json:
          schema:
            $ref: "#/components/schemas/CreateCompletionRequest"
    responses:
      "200":
        description: OK
        content:
          text/event-stream:
            schema:
              $ref: "#/components/schemas/CreateCompletionResponse"

However, I'm realising now that you cannot define the same endpoint twice, nor two post operations for the same endpoint, so it is not feasible.

There doesn't seem to be any convention for SSE in the OpenAPI spec yet, so I guess the only option would be to use OpenAPI extensions (but that is something we want to avoid for this package, as discussed).

Anyway, the amount of code required to support streaming in the subclass is quite minimal, so I'm not really annoyed by having to maintain it manually.

I think we can close this issue until the OpenAPI spec adds support for it.

walsha2 (Contributor, Author) commented Nov 13, 2023

Anyway, the amount of code required to support streaming in the subclass is quite minimal, so I'm not really annoyed by having to maintain it manually.
I think we can close this issue until the OpenAPI spec adds support for it.

Sounds good! The refactor in #38 is exactly what I was thinking as well. Yeah, the manually maintained code should be super minimal.
