feat: Add support for OpenRouter #92


Merged

Conversation

Contributor

@isaac-scarrott isaac-scarrott commented Apr 27, 2025

Seems to work well

  • Introduced ProviderOpenRouter in the models package.
  • Added OpenRouter-specific models, including GPT41, GPT41Mini, GPT4o, and others, with their configurations and costs.
  • Updated generateSchema to include OpenRouter as a provider.
  • Added OpenRouter-specific environment variable handling (OPENROUTER_API_KEY) in config.go.
  • Implemented default model settings for OpenRouter agents in setDefaultModelForAgent.
  • Updated getProviderAPIKey to retrieve the OpenRouter API key.
  • Extended SupportedModels to include OpenRouter models.
  • Added OpenRouter client initialization in the provider package.
  • Modified processGeneration to handle FinishReasonUnknown in addition to FinishReasonToolUse (unsure whether this is a good idea).

Fixes #78 (interesting title btw)
Fixes #68

@isaac-scarrott isaac-scarrott marked this pull request as ready for review April 27, 2025 09:57
@isaac-scarrott isaac-scarrott changed the title Feature - Add support for OpenRouter feat: Add support for OpenRouter Apr 27, 2025
@kujtimiihoxha
Collaborator

@isaac-scarrott thanks for the PR. I am looking into ways to make it a little simpler to add support for providers that implement the OpenAI API.

Currently, adding models and providers requires code changes; I am trying to figure out a way to do this with configuration instead.

I will let you know what that would look like and what the changes to this PR would be.

@isaac-scarrott
Contributor Author

@kujtimiihoxha Okay, great, thanks. I guess one of the main issues is the whole model alias concept.

@kujtimiihoxha
Collaborator

Yep, exactly @isaac-scarrott. I am trying to work on an implementation where we can still add some models, so users don't have to configure everything to calculate cost all the time, but they can add custom models/providers when they need them.

@kujtimiihoxha
Collaborator

@isaac-scarrott I think we still want to get this to a place where we can merge it. Can you look at #74 and follow the same prefix for models (openrouter.*)? That way we make sure we do not have clashes. Also, if there is a model with the same pricing, use it for the configuration, similar to #74.

I am working on #98, which will allow using models and providers that are not configured in the codebase but support the OpenAI API. We still want to have the more popular models/providers/routers in the codebase, though, so that users don't have to configure model pricing and the like each time.
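A minimal sketch of how the openrouter.* prefixing might map internal model IDs to OpenRouter's provider-qualified API model IDs (the map entries, names, and helper function here are illustrative, not the PR's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// Hypothetical mapping from internal model IDs (prefixed to avoid
// clashes, per #74) to the model strings OpenRouter expects on the
// wire. The openrouter.* prefix is an internal convention; the API
// model keeps the upstream provider's namespace (e.g. openai/gpt-4o).
var openRouterModels = map[string]string{
	"openrouter.gpt-4o":            "openai/gpt-4o",
	"openrouter.gpt-4.1":           "openai/gpt-4.1",
	"openrouter.claude-3.7-sonnet": "anthropic/claude-3.7-sonnet",
}

// resolveAPIModel checks the internal prefix and looks up the
// provider-qualified model ID to send in the request body.
func resolveAPIModel(internalID string) (string, bool) {
	if !strings.HasPrefix(internalID, "openrouter.") {
		return "", false
	}
	apiModel, ok := openRouterModels[internalID]
	return apiModel, ok
}

func main() {
	if m, ok := resolveAPIModel("openrouter.gpt-4o"); ok {
		fmt.Println(m) // openai/gpt-4o
	}
}
```

This keeps the internal namespace collision-free while preserving the upstream IDs the gateway actually routes on.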

@isaac-scarrott isaac-scarrott force-pushed the feature/openrouter-provider branch 2 times, most recently from 8d5a86f to 86d0d1f Compare April 27, 2025 18:52
@isaac-scarrott
Contributor Author

@kujtimiihoxha Sounds great, I have implemented these changes.

Got a couple of questions, but I appreciate you're busy, so feel free to ignore:
  1. Why redefine each model with the openrouter. prefix? It seems to be just an alias, and what matters is the APIModel value; keeping this consistent would make it easier to switch providers. I have updated the PR with the prefix anyway, but I'm just wondering.
  2. Someone mentioned there was an API endpoint to get models. Is it worth using it to generate the OpenRouter model config at release/build time, or caching it at runtime? Maybe an afterthought, though, as it would add some complexity that probably isn't needed right now.

@kujtimiihoxha
Collaborator

@isaac-scarrott thanks for the awesome PR. And yeah, so:

  1. The main thing I want with this is that users can just specify a model name and I can auto-configure the provider for them. In addition, specifically for OpenRouter, the APIModel is not the same; for example, from their docs:
```python
from openai import OpenAI

client = OpenAI(
  base_url="https://openrouter.ai/api/v1",
  api_key="<OPENROUTER_API_KEY>",
)

completion = client.chat.completions.create(
  extra_headers={
    "HTTP-Referer": "<YOUR_SITE_URL>", # Optional. Site URL for rankings on openrouter.ai.
    "X-Title": "<YOUR_SITE_NAME>", # Optional. Site title for rankings on openrouter.ai.
  },
  model="openai/gpt-4o",  # <-- this model ID is not the same as OpenAI's own
  messages=[
    {
      "role": "user",
      "content": "What is the meaning of life?"
    }
  ]
)

print(completion.choices[0].message.content)
```

So in the model config you will have to update that also (or maybe I don't completely understand how OpenRouter works yet).

  2. I think that would be great. Maybe just a script to automatically generate the models/openrouter.go file? Unfortunately, I think not all providers give us that endpoint.
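OpenRouter does expose a model-listing endpoint (GET https://openrouter.ai/api/v1/models). A generator script along the lines discussed might look like the sketch below; note that the response schema shown (an id, name, and per-token pricing strings under data) is an assumption based on OpenRouter's public docs, and the emitted Go is simplified, not the project's real models/openrouter.go shape:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// modelsResponse is a subset of the payload returned by
// GET https://openrouter.ai/api/v1/models (fields assumed from
// OpenRouter's docs; pricing values are USD-per-token strings).
type modelsResponse struct {
	Data []struct {
		ID      string `json:"id"`
		Name    string `json:"name"`
		Pricing struct {
			Prompt     string `json:"prompt"`
			Completion string `json:"completion"`
		} `json:"pricing"`
	} `json:"data"`
}

// generateGo renders a simplified model-config snippet from the raw
// JSON payload. A real script would fetch the payload over HTTP at
// release/build time and write the generated file to disk.
func generateGo(raw []byte) (string, error) {
	var resp modelsResponse
	if err := json.Unmarshal(raw, &resp); err != nil {
		return "", err
	}
	out := "// Code generated from the OpenRouter models endpoint. DO NOT EDIT.\n"
	for _, m := range resp.Data {
		out += fmt.Sprintf("// %s\n{APIModel: %q, CostPrompt: %s, CostCompletion: %s}\n",
			m.Name, m.ID, m.Pricing.Prompt, m.Pricing.Completion)
	}
	return out, nil
}

func main() {
	// Hardcoded sample standing in for the live endpoint response.
	sample := []byte(`{"data":[{"id":"openai/gpt-4o","name":"OpenAI: GPT-4o",` +
		`"pricing":{"prompt":"0.0000025","completion":"0.00001"}}]}`)
	code, err := generateGo(sample)
	if err != nil {
		panic(err)
	}
	fmt.Print(code)
}
```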

@isaac-scarrott
Contributor Author

isaac-scarrott commented Apr 27, 2025

@kujtimiihoxha Makes sense. If you got rid of the concept of provider and model and just had a model as input, this would work; OpenRouter is just a gateway through which you can access many other models. I see OpenRouter as a provider that gives you a single endpoint, a single API key (you don't need separate API keys for Anthropic, Gemini, OpenAI, DeepSeek, etc.), a single invoice, and access to all the popular models.

Yep, I could create a script that generates a Go or JSON file. Could all the internal/llm/models Go files be converted to JSON, since they seem to be config?

@kujtimiihoxha
Collaborator

With JSON files we would need to create an embedded FS to include them in the binary. In addition, for providers that don't have an endpoint to retrieve model details like OpenRouter's, it's easy to reuse the default provider configs; e.g. the Azure one I shared used the OpenAI configs for pricing. I don't love the current setup that much and I might rework how this all works, but it seems to work OK for now and makes it easy to add models and providers, even if it is a bit boring and repetitive.

@isaac-scarrott
Contributor Author

Yep, not worth spending time on this now. I think we could just add the top 15 most popular models manually for now?

@kujtimiihoxha
Collaborator

Yep, that was my thought also, and then my next PR will allow one-off models if you want to try new ones.

@kujtimiihoxha
Collaborator

For now, probably just update the APIModel to be correct for the OpenAI models, and I will review the PR tomorrow.

@isaac-scarrott isaac-scarrott force-pushed the feature/openrouter-provider branch 2 times, most recently from ff4552a to 0bb9c80 Compare April 27, 2025 20:21
Comment on lines +281 to +293
```go
finishReason := o.finishReason(string(acc.ChatCompletion.Choices[0].FinishReason))

if len(toolCalls) > 0 {
	finishReason = message.FinishReasonToolUse
}
```

Contributor Author

Didn't think this was worth extracting into a getFinishReason helper function... yet?
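For reference, a hypothetical getFinishReason helper could encapsulate the override above; the constant names and raw-string mapping here are illustrative, not taken from the PR:

```go
package main

import "fmt"

// FinishReason mirrors the kind of enum the message package might define.
type FinishReason string

const (
	FinishReasonToolUse FinishReason = "tool_use"
	FinishReasonEndTurn FinishReason = "end_turn"
	FinishReasonUnknown FinishReason = "unknown"
)

// getFinishReason maps the raw API finish reason, overriding to
// tool-use whenever tool calls are present (some gateways report
// an unknown finish reason even when tools were invoked).
func getFinishReason(raw string, toolCallCount int) FinishReason {
	if toolCallCount > 0 {
		return FinishReasonToolUse
	}
	switch raw {
	case "tool_calls":
		return FinishReasonToolUse
	case "stop":
		return FinishReasonEndTurn
	default:
		return FinishReasonUnknown
	}
}

func main() {
	fmt.Println(getFinishReason("unknown", 2)) // tool calls present, so tool_use
	fmt.Println(getFinishReason("stop", 0))
}
```

Extracting it would let both `send` and `stream` share one mapping instead of duplicating the override.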

Comment on lines 271 to 275
```go
viper.SetDefault("agents.coder.model", models.OpenRouterGPT41)
viper.SetDefault("agents.task.model", models.OpenRouterGPT41Mini)
viper.SetDefault("agents.title.model", models.OpenRouterGPT41Mini)
```
Contributor

I think Gemini 2.5 Pro and Gemini 2.5 Flash might be better defaults because they're cheaper and score better on the Aider polyglot benchmark.

Contributor Author

Good point; Gemini 2.5 Pro is very good in the context of Aider.

However, I'm going to do some more testing at some point into why this is happening, but Gemini 2.5 Pro seems to be lazy, and Claude performs a lot better currently.

Collaborator

@kujtimiihoxha kujtimiihoxha Apr 28, 2025

We need to come up with a better system prompt for Gemini 2.5. The issue is that these models are trained differently, so we need to prompt them differently. We can update the prompt function to take the model into account, not just the provider, and then send a different prompt to Gemini models based on some conditions. What is Aider's system prompt?

Contributor Author

Yeah agreed, would be ideal to have a prompt per model as they do have their own nuances.

I can find the system prompt for Aider, but the difference between Aider and OpenCode is that OpenCode is more agentic, whereas Aider feels more input/output. In addition, Aider doesn't support calling tools or MCPs. Here are some of Aider's prompts:
https://github.com/Aider-AI/aider/blob/7d185bb710927609e060c1e691d9ce231dcafdb5/aider/coders/architect_prompts.py#L6
https://github.com/Aider-AI/aider/tree/7d185bb710927609e060c1e691d9ce231dcafdb5/aider/coders

I do know that Aider uses a different diff/edit format depending on the model, as they found different formats work better for different models.

@kujtimiihoxha
Collaborator

@isaac-scarrott do you think we can add

```python
extra_headers={
    "HTTP-Referer": "<YOUR_SITE_URL>", # Optional. Site URL for rankings on openrouter.ai.
    "X-Title": "<YOUR_SITE_NAME>", # Optional. Site title for rankings on openrouter.ai.
},
```

so that we show up on their site? You can use opencode.ai (we will have a site soon).

@isaac-scarrott
Contributor Author

isaac-scarrott commented Apr 28, 2025

@kujtimiihoxha Yeah, of course; good idea. Will add it later today.

Comment on lines +121 to +128
```go
WithOpenAIExtraHeaders(map[string]string{
	"HTTP-Referer": "opencode.ai",
	"X-Title":      "OpenCode",
```
Contributor Author

@isaac-scarrott isaac-scarrott Apr 28, 2025

Seems to work, but my activity isn't updating in OpenRouter, so I can't be sure. Will check again later.

Not sure if we want these as constants somewhere like internal/config/config.go.

@kujtimiihoxha
Collaborator

@isaac-scarrott sorry to do this to you, but I merged the Azure provider and now this PR seems to have conflicts.

@isaac-scarrott
Contributor Author

@kujtimiihoxha All good, will fix now

Add support for OpenRouter as a new model provider

- Introduced `ProviderOpenRouter` in the `models` package.
- Added OpenRouter-specific models, including `GPT41`, `GPT41Mini`, `GPT4o`, and others, with their configurations and costs.
- Updated `generateSchema` to include OpenRouter as a provider.
- Added OpenRouter-specific environment variable handling (`OPENROUTER_API_KEY`) in `config.go`.
- Implemented default model settings for OpenRouter agents in `setDefaultModelForAgent`.
- Updated `getProviderAPIKey` to retrieve the OpenRouter API key.
- Extended `SupportedModels` to include OpenRouter models.
- Added OpenRouter client initialization in the `provider` package.
- Modified `processGeneration` to handle `FinishReasonUnknown` in addition to `FinishReasonToolUse`.

[feature/openrouter-provider] Add new models and provider to schema

- Added "deepseek-chat-free" and "deepseek-r1-free" to the list of supported models in `opencode-schema.json`.

[feature/openrouter-provider] Add OpenRouter provider support and integrate new models

- Updated README.md to include OpenRouter as a supported provider and its configuration details.
- Added `OPENROUTER_API_KEY` to environment variable configuration.
- Introduced OpenRouter-specific models in `internal/llm/models/openrouter.go` with mappings to existing cost and token configurations.
- Updated `internal/config/config.go` to set default models for OpenRouter agents.
- Extended `opencode-schema.json` to include OpenRouter models in the schema definitions.
- Refactored model IDs and names to align with OpenRouter naming conventions.

[feature/openrouter-provider] Refactor finish reason handling and tool call logic in agent and OpenAI provider

- Simplified finish reason check in `agent.go` by removing redundant variable assignment.
- Updated `openai.go` to override the finish reason to `FinishReasonToolUse` when tool calls are present.
- Ensured consistent finish reason handling in both `send` and `stream` methods of the OpenAI provider.

[feature/openrouter-provider] Add support for custom headers in OpenAI client configuration

- Introduced a new `extraHeaders` field in the `openaiOptions` struct to allow specifying additional HTTP headers.
- Added logic in `newOpenAIClient` to apply `extraHeaders` to the OpenAI client configuration.
- Implemented a new option function `WithOpenAIExtraHeaders` to set custom headers in `openaiOptions`.
- Updated the OpenRouter provider configuration in `NewProvider` to include default headers (`HTTP-Referer` and `X-Title`) for OpenRouter API requests.
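The functional-options change described in that commit might be sketched as follows; the struct and function names follow the commit message, but the surrounding wiring is simplified and assumed rather than the repository's actual code:

```go
package main

import "fmt"

// openaiOptions holds client configuration; extraHeaders carries
// additional HTTP headers applied to every request (hypothetical
// simplification of the real struct).
type openaiOptions struct {
	extraHeaders map[string]string
}

// OpenAIOption is a functional option mutating openaiOptions.
type OpenAIOption func(*openaiOptions)

// WithOpenAIExtraHeaders merges the given headers into the options.
func WithOpenAIExtraHeaders(headers map[string]string) OpenAIOption {
	return func(o *openaiOptions) {
		if o.extraHeaders == nil {
			o.extraHeaders = map[string]string{}
		}
		for k, v := range headers {
			o.extraHeaders[k] = v
		}
	}
}

// newOpenAIOptions applies the options in order, as a constructor
// like newOpenAIClient would before building the HTTP client.
func newOpenAIOptions(opts ...OpenAIOption) openaiOptions {
	var o openaiOptions
	for _, opt := range opts {
		opt(&o)
	}
	return o
}

func main() {
	o := newOpenAIOptions(WithOpenAIExtraHeaders(map[string]string{
		"HTTP-Referer": "opencode.ai",
		"X-Title":      "OpenCode",
	}))
	fmt.Println(o.extraHeaders["X-Title"]) // OpenCode
}
```

This pattern lets the OpenRouter provider pass its attribution headers without changing the constructor signature for other providers.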
@isaac-scarrott isaac-scarrott force-pushed the feature/openrouter-provider branch from 9da8992 to fd487d5 Compare April 28, 2025 19:27
@isaac-scarrott
Contributor Author

Rebased; just fixing the headers, as they don't seem to be used by OpenRouter.

Then I will try and test all the models I added.

[feature/openrouter-provider] Update OpenRouter models and default configurations

- Added new OpenRouter models: `claude-3.5-sonnet`, `claude-3-haiku`, `claude-3.7-sonnet`, `claude-3.5-haiku`, and `claude-3-opus` in `openrouter.go`.
- Updated default agent models in `config.go`:
  - `agents.coder.model` now uses `claude-3.7-sonnet`.
  - `agents.task.model` now uses `claude-3.7-sonnet`.
  - `agents.title.model` now uses `claude-3.5-haiku`.
- Updated `opencode-schema.json` to include the new models in the allowed list for schema validation.
- Adjusted logic in `setDefaultModelForAgent` to reflect the new default models.
@isaac-scarrott
Contributor Author

All seems to be working.

The only issue is that I don't see the application showing up in my OpenRouter usage dashboard. After logging the options after this line, they do seem to be set successfully. Not sure if something needs to be set up in OpenRouter; I'm a little lost on what I'm missing to get the app to show up there.

Collaborator

@kujtimiihoxha kujtimiihoxha left a comment

Just a small thing; otherwise it seems good to go.

@@ -267,13 +281,23 @@ func (o *openaiClient) stream(ctx context.Context, messages []message.Message, t

```go
err := openaiStream.Err()
if err == nil || errors.Is(err, io.EOF) {
	// Stream completed successfully
	eventChan <- ProviderEvent{
```
Collaborator

I think this was added by accident?

Contributor Author

Yeah, my bad; I've removed it. Bad copy-and-paste.

[feature/openrouter-provider] Remove unused ProviderEvent emission in stream function

The changes remove the emission of a `ProviderEvent` with type `EventContentStop` in the `stream` function of the `openaiClient` implementation. This event was sent upon successful stream completion but is no longer used.
@kujtimiihoxha
Collaborator

Thanks @isaac-scarrott

@kujtimiihoxha kujtimiihoxha merged commit 98e2910 into opencode-ai:main Apr 29, 2025
kujtimiihoxha pushed a commit to PhantomReactor/opencode that referenced this pull request May 2, 2025
Successfully merging this pull request may close these issues.

Too many APIs = Too Much Moneyzz - Consider - https://openrouter.ai/ OpenRouter support?
3 participants