Panic when trying to use gemini models #35
Update: setting max tokens to 50000 fixed the problem. Edit: it in fact did not work, it's still crashing.
Another update: model gemini-2.5 doesn't seem to support tool calling. When I try to use it I get "tool XYZ not found" errors in the logs, but flash does seem to work 🤷♂️. Didn't really look at the Gemini docs to confirm this, too late for me rn, will check after I wake up.
Yet another update: gemini 2.5 flash doesn't seem to work with tool calling either.
I'm experiencing the same issue.
Yeah, I have not tested the gemini provider enough. I will need to do a thorough review of it and fix this.
I got something similar with openai (gpt-4.1).
That looks like it's a different issue. Still an out of bounds error, but the stack trace feels weird.
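For anyone poking at this before a fix lands: an index-out-of-range panic like this usually comes from indexing into a slice (e.g. streamed response parts or tool-call chunks) without checking its length first. A minimal sketch of the defensive pattern, purely illustrative and not the actual OpenCode code (`appendDelta`, `parts`, and `idx` are assumed names):

```go
package main

import "fmt"

// appendDelta shows the defensive pattern: grow the slice instead of
// indexing past its end when a streamed chunk references an index we
// have not seen yet. All names here are illustrative, not OpenCode's.
func appendDelta(parts []string, idx int, delta string) []string {
	// Grow the slice so parts[idx] is always in range.
	for len(parts) <= idx {
		parts = append(parts, "")
	}
	parts[idx] += delta
	return parts
}

func main() {
	var parts []string
	// Chunks may arrive for indexes we have not allocated yet.
	parts = appendDelta(parts, 2, "hello")
	fmt.Printf("%q\n", parts) // ["" "" "hello"]
}
```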
Also running into similar issues using gemini-2.5.
Have tried adding an OpenRouter provider which will allow calls to Gemini (with tools) #92. It's not working consistently on larger repos (maybe due to more tokens being used? need to check whether it's inconsistent for other providers/models), but I got some good results with it in the OpenCode repo.
Gemini should be fixed here #96 |
I have no idea what is going on, haven't checked the code (yet), but it seems to be an out of bounds exception? (probably a different thing going on under the hood)
Here's my config
Sidenote: should also handle panics gracefully, exit raw mode / alternate screen (or deinit the charm stuff if you're using it, I guess) and then print the error message.
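On that sidenote, the usual shape of graceful panic handling is to restore the terminal before printing anything. A minimal sketch of the idea, assuming plain golang.org/x/term for raw mode and raw ANSI escapes for the alternate screen rather than whatever charm/Bubble Tea does internally:

```go
package main

import (
	"fmt"
	"os"
	"runtime/debug"

	"golang.org/x/term"
)

func main() {
	fd := int(os.Stdin.Fd())

	// Enter raw mode and the alternate screen, as a TUI would.
	oldState, err := term.MakeRaw(fd)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Print("\x1b[?1049h") // switch to alternate screen

	// On panic: restore the terminal first, then print the error,
	// so the message is readable instead of garbling the TTY.
	defer func() {
		if r := recover(); r != nil {
			fmt.Print("\x1b[?1049l")   // leave alternate screen
			term.Restore(fd, oldState) // exit raw mode
			fmt.Fprintf(os.Stderr, "panic: %v\n\n%s", r, debug.Stack())
			os.Exit(1)
		}
	}()

	// ... run the TUI; any panic below is caught by the deferred handler.
	panic("index out of range [3] with length 2") // simulate the crash
}
```

A deferred recover like this only catches panics on the goroutine it is deferred on, so a real TUI would need the same guard around any worker goroutines as well.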