Extended thinking support for bedrock models #56


Open · wants to merge 2 commits into base: labs-release
Conversation

sarisha19 (Collaborator)
Description of changes:

  • Additional field in ModelParams containing thinking_params, consumed by Bedrock models that support extended thinking (currently only Claude 3.7).
  • Validation of model parameters before creating a Bedrock Converse client with them.
    - Tested with non-thinking models, thinking models (with both valid and invalid parameters), and with a custom model.

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.
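The shape of the change described above might look roughly like the following sketch. The field and class names here are assumptions for illustration, not the PR's actual definitions, and the model ID in EXTENDED_THINKING_MODELS is an example:

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical allow-list of Bedrock models supporting extended thinking;
# the real PR presumably maintains its own list.
EXTENDED_THINKING_MODELS = {"anthropic.claude-3-7-sonnet-20250219-v1:0"}


@dataclass
class ModelParams:
    model_id: str
    temperature: float = 0.7
    top_p: Optional[float] = None
    # e.g. {"type": "enabled", "budget_tokens": 1024}
    thinking_params: Optional[dict] = None
    additional_model_request_fields: dict = field(default_factory=dict)


def supports_extended_thinking(params: ModelParams) -> bool:
    # Gate thinking-specific handling on the allow-list.
    return params.model_id in EXTENDED_THINKING_MODELS
```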

@sarisha19 sarisha19 marked this pull request as ready for review April 22, 2025 17:06
```python
if "type" in thinking_params:
    if thinking_params["type"] == "enabled":
        # Fix temperature and unset top p for thinking mode
        model_params.temperature = 1.0
```

This transformation can be skipped: it silently updates the values for compliance, which can lead to unexpected results. If the input config is not supported, the user should be the one to fix it.
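The alternative the reviewer suggests, rejecting a non-compliant config instead of rewriting it, could be sketched like this. The function name, the exact field names, and the choice of ValueError are assumptions:

```python
def validate_thinking_params(model_params) -> None:
    """Raise instead of silently coercing a non-compliant thinking config."""
    thinking = model_params.thinking_params or {}
    if thinking.get("type") == "enabled":
        # Claude's extended thinking requires temperature = 1.0 and no top_p;
        # surface the mismatch to the user rather than patching it over.
        if model_params.temperature != 1.0:
            raise ValueError(
                "Extended thinking requires temperature=1.0; "
                f"got {model_params.temperature}. Please fix the llm-config."
            )
        if model_params.top_p is not None:
            raise ValueError("Extended thinking does not support top_p.")
```

This keeps behavior predictable: the request either runs with exactly the values the user supplied or fails loudly with a message naming the offending parameter.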

```python
elif "budget_tokens" in thinking_params:
    # Remove budget tokens if thinking is disabled
    model_params.additional_model_request_fields["thinking"].pop("budget_tokens")
else:
```

Should we avoid this else block? If a new Bedrock model that supports extended thinking is launched, you'd be removing the thinking params passed in the llm-config until EXTENDED_THINKING_MODELS is updated in this code.

Not having this allows the values to pass through to Converse-client creation, and if it's an error on the application side (sending these params when they were not supposed to), then the client creation will simply fail, as expected.
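The pass-through behavior the reviewer prefers could be sketched as building the Converse request without stripping anything, so the service itself validates the thinking parameters. The function name is an assumption; the request keys match the Bedrock Converse API (an unsupported "thinking" block would then surface as a ValidationException from `client.converse()` rather than being silently dropped):

```python
def build_converse_request(model_params, messages) -> dict:
    """Assemble a Converse request, forwarding thinking params unmodified."""
    request = {
        "modelId": model_params.model_id,
        "messages": messages,
    }
    if model_params.additional_model_request_fields:
        # No allow-list filtering here: whatever the llm-config supplies
        # (including "thinking") goes to the service, which rejects
        # unsupported combinations at call time.
        request["additionalModelRequestFields"] = (
            model_params.additional_model_request_fields
        )
    return request
```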
