Conversation

dexhunter

  • fix unable to run o-series model
  • remove extra line

@dexhunter
Author

dexhunter commented Feb 27, 2025

friendly request for review @simonguozirui @alexzhang13

openai.BadRequestError: Error code: 400 - {'error': {'message': "Unsupported parameter: 'max_tokens' is not supported with this model. Use 'max_completion_tokens' instead.", 'type': 'invalid_request_error', 'param': 'max_tokens', 'code': 'unsupported_parameter'}}

Alternatively, we could change `max_tokens` to `max_completion_tokens` where needed, though I understand the two parameters have different semantics.
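One way to handle the split without pulling in another dependency is to pick the kwarg from the model name. A minimal sketch — the helper name and the prefix list are assumptions for illustration, not part of this PR or the OpenAI API:

```python
# Hypothetical helper: choose the token-limit kwarg by model family.
# o-series reasoning models reject `max_tokens` and require
# `max_completion_tokens`; older chat models accept `max_tokens`.
# The prefix list below is a heuristic assumption, not an API contract.
O_SERIES_PREFIXES = ("o1", "o3")

def completion_token_param(model: str, limit: int) -> dict:
    """Return the kwarg dict to splice into client.chat.completions.create()."""
    if model.startswith(O_SERIES_PREFIXES):
        return {"max_completion_tokens": limit}
    return {"max_tokens": limit}
```

The returned dict can then be unpacked into the existing call site, e.g. `client.chat.completions.create(model=model, messages=messages, **completion_token_param(model, 4096))`.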

@simonguozirui
Collaborator

@dexhunter thanks for highlighting this and for your contribution. Generally, I am going to replace the LLM client call with litellm, as described in #70, since model interfaces keep changing.

