
Added helpful error message telling user to use ChatCompletion #258


Closed

Conversation


@Filimoa Filimoa commented Mar 2, 2023

Problem

Many users (including me) are trying to call the "gpt-3.5-turbo" model from the Completion class. See #250.

response = openai.Completion.create(
    model="gpt-3.5-turbo",
    prompt="Who won the world series in 2020?",
)

This returns a confusing error message telling the user to use a different endpoint:

    677         stream_error = stream and "error" in resp.data
    678         if stream_error or not 200 <= rcode < 300:
--> 679             raise self.handle_error_response(
    680                 rbody, rcode, resp.data, rheaders, stream_error=stream_error
    681             )

InvalidRequestError: This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?
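
For reference, the endpoint the error points at is exposed by this SDK as the ChatCompletion class. The same request looks like this (chat models take a messages list rather than a prompt string):

import openai

# The chat endpoint expects a list of role/content messages instead of a
# single prompt string.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Who won the world series in 2020?"}],
)
print(response["choices"][0]["message"]["content"])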

Proposal

Given that this is a library that wraps the API, it would be more helpful to tell the user to use the ChatCompletion class instead. My pull request wraps the error and displays the following:

        stream_error = stream and "error" in resp.data
        if stream_error or not 200 <= rcode < 300:
>           raise self.handle_error_response(
                rbody, rcode, resp.data, rheaders, stream_error=stream_error
            )
E           openai.error.InvalidRequestError: Please use ChatCompletion instead of Completion when using the 'gpt-3.5-turbo' models. For more information see https://platform.openai.com/docs/guides/chat/introduction.

Approach

My initial thought was to add some sort of kwarg checking in Completion, but that turns out to be a lean wrapper class. Searching further, handle_error_response looks like a good spot for this, since it already does something similar for internal errors.
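
Roughly, the change amounts to something like the following sketch (the matched substring and the helper are illustrative assumptions, not the exact diff):

import openai

# Illustrative sketch only: intercept the API's "chat model" error and
# re-raise it with a pointer to the ChatCompletion class. The substring
# being matched is an assumption about the API's current wording.
CHAT_MODEL_HINT = "This is a chat model"

def friendlier_invalid_request(rbody: str, original_error: Exception) -> Exception:
    if CHAT_MODEL_HINT in rbody:
        return openai.error.InvalidRequestError(
            "Please use ChatCompletion instead of Completion when using "
            "chat models such as 'gpt-3.5-turbo'. For more information see "
            "https://platform.openai.com/docs/guides/chat/introduction.",
            param=None,
        )
    return original_error

Since handle_error_response already inspects the response body to build the exception, a hook like this slots in just before the raise.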

Caveats

This approach is fairly tightly coupled to the OpenAI API's error response message, and I'm not sure how stable these messages are. That said, if the error message does change, this will not affect any other functionality.

@logankilpatrick
Contributor

Thanks for this! I know the team was also working on some ways to improve this; I will let other folks follow up on whether this is something we want to do in the Python SDK.

@hallacy hallacy requested a review from athyuttamre March 30, 2023 04:38
@hallacy
Collaborator

hallacy commented Apr 8, 2023

I see the idea, but I'm a little nervous about the dependency on the API error message. I'd rather have this covered in something like an FAQ (which might be a good idea to start at this point) or with better documentation in the README. That said, I get the appeal. Given that this won't be the only model with this issue, can we make the error more generic?

@athyuttamre

One option is for the API to return an error code for this case, and we could depend on that to change the error message instead, e.g. code: "chat_model_unsupported".
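
If such a code existed, the SDK could dispatch on it rather than on prose, along these lines ("chat_model_unsupported" is hypothetical, per the suggestion above):

# Hypothetical sketch: map machine-readable error codes to friendlier
# SDK-level messages; "chat_model_unsupported" does not exist yet.
FRIENDLY_MESSAGES = {
    "chat_model_unsupported": "Please use ChatCompletion instead of Completion for chat models.",
}

def remap_error_message(error_data: dict) -> str:
    # Fall back to the API's own message when no mapping applies.
    return FRIENDLY_MESSAGES.get(error_data.get("code"), error_data.get("message", ""))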

However, I'm leaning towards updating our README / quickstart to use chat completions by default instead. PR here: #441. Hopefully that will reduce the confusion for new users. Open to reconsidering if this continues to be a problem!

@athyuttamre athyuttamre closed this May 8, 2023