feat: Add support for OpenRouter #92
Conversation
@isaac-scarrott thanks for the PR. I am looking into ways to make it a little simpler to add support for providers that support the OpenAI API. Currently, adding models and providers requires code changes; I am trying to figure out a way to do this with configuration. I will let you know what that would look like and what the changes to this PR would be.
@kujtimiihoxha Okay, great, thanks. I guess one of the main issues is the whole model alias concept.
Yep, exactly @isaac-scarrott. I am trying to work on an implementation where we can still ship some models so users don't have to configure everything to calculate cost all the time, BUT they can add custom models/providers when they need them.
@isaac-scarrott I think we still want to get this to a place where we can merge it. Can you look at #74 and follow the same prefix for models? I am working on #98, which will allow using models and providers that are not configured in the codebase but support the OpenAI API. We still want the more popular models/providers/routers in the codebase so that users don't have to configure model pricing and the like each time.
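The in-codebase registry the comment above describes might look something like this minimal sketch. The struct fields, map keys, and pricing values here are illustrative assumptions, not the actual opencode definitions:

```go
package main

import "fmt"

// Model is a hypothetical sketch of a statically registered model:
// a provider-prefixed ID maps to the provider-specific API model name
// and pricing, so users don't have to configure cost themselves.
type Model struct {
	ID           string  // e.g. "openrouter.gpt-4.1"
	Provider     string  // e.g. "openrouter"
	APIModel     string  // name sent to the provider, e.g. "openai/gpt-4.1"
	CostPer1MIn  float64 // USD per 1M input tokens
	CostPer1MOut float64 // USD per 1M output tokens
}

// SupportedModels mirrors the idea of a built-in registry that custom
// providers could later extend via configuration.
var SupportedModels = map[string]Model{
	"openrouter.gpt-4.1": {
		ID:           "openrouter.gpt-4.1",
		Provider:     "openrouter",
		APIModel:     "openai/gpt-4.1",
		CostPer1MIn:  2.00,
		CostPer1MOut: 8.00,
	},
}

func main() {
	// Look up a model by its provider-prefixed ID.
	m := SupportedModels["openrouter.gpt-4.1"]
	fmt.Println(m.APIModel)
}
```

A configuration-driven approach would populate the same map from user config instead of Go source, which is what #98 aims at.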
Force-pushed from 8d5a86f to 86d0d1f
@kujtimiihoxha Sounds great, I have implemented these changes. I've got a couple of questions, but I appreciate you're busy, so feel free to ignore them:
@isaac-scarrott thanks for the awesome PR, and yeah so
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="<OPENROUTER_API_KEY>",
)
completion = client.chat.completions.create(
    extra_headers={
        "HTTP-Referer": "<YOUR_SITE_URL>",  # Optional. Site URL for rankings on openrouter.ai.
        "X-Title": "<YOUR_SITE_NAME>",  # Optional. Site title for rankings on openrouter.ai.
    },
    model="openai/gpt-4o",  # <-------- This here is not the same
    messages=[
        {
            "role": "user",
            "content": "What is the meaning of life?"
        }
    ]
)
print(completion.choices[0].message.content)
```

So in the model config you will have to update that also (or maybe I don't completely understand how OpenRouter works yet).
@kujtimiihoxha Makes sense. If you got rid of the concept of provider and model and just had model as an input, this would work. OpenRouter is just a gateway through which you can access many other models. I see OpenRouter as a provider that gives you a single endpoint, a single API key (you don't need separate API keys for Anthropic, Gemini, OpenAI, DeepSeek, etc.), a single invoice, and access to all the popular models. Yep, I could create a script that generates a Go or JSON file? Could all the
With JSON files we would need to create an embedded FS to include them in the binary. In addition, for providers that don't have an endpoint to retrieve model details, like OpenRouter, it's easy to reuse the default provider configs, e.g. the Azure one I shared used the OpenAI configs for pricing. I don't love the current setup that much and I might rework how this all works, but it seems to work OK for now and makes it easy to add models and providers, even if it is a bit boring and repetitive.
Yep, not worth spending time on this now. I think we could just add the top 15 most popular models manually for now?
Yep, that was my thought also, and then my next PR will allow one-off models if you want to try new models.
For now, probably just update the
Force-pushed from ff4552a to 0bb9c80
```go
finishReason := o.finishReason(string(acc.ChatCompletion.Choices[0].FinishReason))

if len(toolCalls) > 0 {
	finishReason = message.FinishReasonToolUse
}
```
Didn't think this was worth extracting to a `getFinishReason` helper function... yet?
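If it were extracted, such a helper might look like this sketch: map the provider's finish reason, but override to tool use whenever tool calls are present. The names follow the thread's `FinishReasonToolUse` convention, but the exact signature and reason values are assumptions:

```go
package main

import "fmt"

// FinishReason is a simplified stand-in for the message package's type.
type FinishReason string

const (
	FinishReasonEndTurn FinishReason = "end_turn"
	FinishReasonToolUse FinishReason = "tool_use"
)

// getFinishReason is a hypothetical helper: the presence of tool calls
// takes priority over whatever the API reported, matching the override
// logic in the diff above.
func getFinishReason(apiReason string, toolCallCount int) FinishReason {
	if toolCallCount > 0 {
		return FinishReasonToolUse
	}
	if apiReason == "stop" {
		return FinishReasonEndTurn
	}
	return FinishReason(apiReason)
}

func main() {
	fmt.Println(getFinishReason("stop", 2))
}
```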
internal/config/config.go
Outdated
```go
viper.SetDefault("agents.coder.model", models.OpenRouterGPT41)
viper.SetDefault("agents.task.model", models.OpenRouterGPT41Mini)
viper.SetDefault("agents.title.model", models.OpenRouterGPT41Mini)
```
I think Gemini 2.5 Pro and Gemini 2.5 Flash might be better defaults because they're cheaper and score better on the Aider polyglot benchmark.
Good point, Gemini 2.5 Pro is very good in the context of Aider.
However, I'm going to do some more testing at some point as to why this is happening, but Gemini 2.5 Pro seems to be lazy, and Claude performs a lot better currently.
We need to come up with a better system prompt for Gemini 2.5. The issue is that these models are trained differently, so we need to prompt them differently. We can update the prompt function to take into account the model as well, not just the provider, and then based on some conditions send a different prompt to Gemini models. What is Aider's system prompt?
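The per-model prompt dispatch suggested above could be sketched like this. The function name, the model-ID matching, and the prompt text are all hypothetical placeholders, not opencode's actual prompts:

```go
package main

import (
	"fmt"
	"strings"
)

// coderPrompt is a hypothetical prompt selector keyed on the model ID
// rather than only the provider, so gateway providers like OpenRouter
// can still get model-specific prompts.
func coderPrompt(modelID string) string {
	switch {
	case strings.Contains(modelID, "gemini"):
		// Assumption based on the discussion: Gemini models under-act in
		// agentic loops, so the prompt pushes harder on tool use.
		return "You are an agentic coding assistant. Keep working until the task is fully complete; prefer calling tools over asking the user."
	default:
		return "You are a helpful coding assistant."
	}
}

func main() {
	fmt.Println(coderPrompt("openrouter.gemini-2.5-pro"))
}
```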
Yeah agreed, it would be ideal to have a prompt per model as they do have their own nuances.
I can find the system prompt for Aider, but the difference between Aider and OpenCode is that OpenCode is more agentic, whereas Aider feels more input/output. In addition, Aider doesn't support calling tools or MCPs. Here are some of Aider's prompts:
https://github.com/Aider-AI/aider/blob/7d185bb710927609e060c1e691d9ce231dcafdb5/aider/coders/architect_prompts.py#L6
https://github.com/Aider-AI/aider/tree/7d185bb710927609e060c1e691d9ce231dcafdb5/aider/coders
I do know that Aider uses a different diff/edit format depending on the model, as they found different formats work better for different models.
@isaac-scarrott do you think we can add these headers so that we show up on their site? You can use opencode.ai (we will have a site soon).
@kujtimiihoxha Yeah, of course, good idea. Will add it later today.
```go
WithOpenAIExtraHeaders(map[string]string{
	"HTTP-Referer": "opencode.ai",
	"X-Title":      "OpenCode",
}),
```
Seems to work, but my activity isn't updating in OpenRouter, so I can't be sure. Will check again later.
Not sure if we want these to come from a constant somewhere like internal/config/config.go.
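For context, the `WithOpenAIExtraHeaders` option above follows the functional-options pattern. This is a self-contained sketch of how such an option plausibly plugs into the client options; the struct layout is an assumption about the actual implementation:

```go
package main

import "fmt"

// openaiOptions mirrors the thread's struct: extraHeaders was added so
// callers can attach provider-specific HTTP headers (e.g. OpenRouter's
// attribution headers) to every request.
type openaiOptions struct {
	extraHeaders map[string]string
}

// OpenAIOption is the usual functional-option type.
type OpenAIOption func(*openaiOptions)

// WithOpenAIExtraHeaders sets custom headers on the client options.
func WithOpenAIExtraHeaders(headers map[string]string) OpenAIOption {
	return func(o *openaiOptions) {
		o.extraHeaders = headers
	}
}

func main() {
	opts := openaiOptions{}
	WithOpenAIExtraHeaders(map[string]string{
		"HTTP-Referer": "opencode.ai",
		"X-Title":      "OpenCode",
	})(&opts)
	fmt.Println(opts.extraHeaders["X-Title"])
}
```

The client constructor would then iterate over `extraHeaders` and apply each one to outgoing requests.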
@isaac-scarrott sorry to do this to you, but I merged the Azure provider and now this seems to have conflicts.
@kujtimiihoxha All good, will fix now.
- Introduced `ProviderOpenRouter` in the `models` package.
- Added OpenRouter-specific models, including `GPT41`, `GPT41Mini`, `GPT4o`, and others, with their configurations and costs.
- Updated `generateSchema` to include OpenRouter as a provider.
- Added OpenRouter-specific environment variable handling (`OPENROUTER_API_KEY`) in `config.go`.
- Implemented default model settings for OpenRouter agents in `setDefaultModelForAgent`.
- Updated `getProviderAPIKey` to retrieve the OpenRouter API key.
- Extended `SupportedModels` to include OpenRouter models.
- Added OpenRouter client initialization in the `provider` package.
- Modified `processGeneration` to handle `FinishReasonUnknown` in addition to `FinishReasonToolUse`.
- Added "deepseek-chat-free" and "deepseek-r1-free" to the list of supported models in `opencode-schema.json`.
Add OpenRouter provider support and integrate new models

- Updated README.md to include OpenRouter as a supported provider and its configuration details.
- Added `OPENROUTER_API_KEY` to environment variable configuration.
- Introduced OpenRouter-specific models in `internal/llm/models/openrouter.go` with mappings to existing cost and token configurations.
- Updated `internal/config/config.go` to set default models for OpenRouter agents.
- Extended `opencode-schema.json` to include OpenRouter models in the schema definitions.
- Refactored model IDs and names to align with OpenRouter naming conventions.
Refactor finish reason handling and tool call logic in agent and OpenAI provider

- Simplified finish reason check in `agent.go` by removing redundant variable assignment.
- Updated `openai.go` to override the finish reason to `FinishReasonToolUse` when tool calls are present.
- Ensured consistent finish reason handling in both `send` and `stream` methods of the OpenAI provider.
Add support for custom headers in OpenAI client configuration

- Introduced a new `extraHeaders` field in the `openaiOptions` struct to allow specifying additional HTTP headers.
- Added logic in `newOpenAIClient` to apply `extraHeaders` to the OpenAI client configuration.
- Implemented a new option function `WithOpenAIExtraHeaders` to set custom headers in `openaiOptions`.
- Updated the OpenRouter provider configuration in `NewProvider` to include default headers (`HTTP-Referer` and `X-Title`) for OpenRouter API requests.
Force-pushed from 9da8992 to fd487d5
Rebased, just fixing the headers as they don't seem to be used by OpenRouter. Then I will try and test all the models I added.
Update OpenRouter models and default configurations

- Added new OpenRouter models: `claude-3.5-sonnet`, `claude-3-haiku`, `claude-3.7-sonnet`, `claude-3.5-haiku`, and `claude-3-opus` in `openrouter.go`.
- Updated default agent models in `config.go`:
  - `agents.coder.model` now uses `claude-3.7-sonnet`.
  - `agents.task.model` now uses `claude-3.7-sonnet`.
  - `agents.title.model` now uses `claude-3.5-haiku`.
- Updated `opencode-schema.json` to include the new models in the allowed list for schema validation.
- Adjusted logic in `setDefaultModelForAgent` to reflect the new default models.
All seems to be working. The only issue is that I don't see the application showing up in my OpenRouter usage dashboard. After placing a log of the options after this line, they seem to be set successfully. Not sure if something needs to be set up in OpenRouter; I'm a little lost on what I'm missing to get the app to show up in OpenRouter.
Just a small thing, otherwise it seems good to go
internal/llm/provider/openai.go
Outdated
```go
err := openaiStream.Err()
if err == nil || errors.Is(err, io.EOF) {
	// Stream completed successfully
	eventChan <- ProviderEvent{
```
I think this was added by accident?
Yeah my bad, have removed. Bad copy and paste
Remove unused ProviderEvent emission in stream function

The changes remove the emission of a `ProviderEvent` with type `EventContentStop` in the `stream` function of the `openaiClient` implementation. This event was sent upon successful stream completion but is no longer used.
Thanks @isaac-scarrott
Seems to work well
- Modified `processGeneration` to handle `FinishReasonUnknown` in addition to `FinishReasonToolUse` - unsure this is good.

Fixes #78 (interesting title btw)
Fixes #68