
correct provider field in continue config.json for VScode #5630

Closed
3 tasks done
ltolstoy opened this issue May 12, 2025 · 2 comments · May be fixed by #5854
Labels
area:configuration (Relates to configuration options)
ide:vscode (Relates specifically to VS Code extension)
kind:bug (Indicates an unexpected problem or unintended behavior)

Comments

@ltolstoy

Before submitting your bug report

Relevant environment info

- OS: macos
- Continue version: 1.0.9
- IDE version: VSCode 1.100.0
- Model: Inception mercury-coder-small
- config:
  {
    "model": "mercury-coder-small",
    "title": "Mercury Coder Small cw 32k",
    "provider": "inception",
    "apiBase": "https://api.inceptionlabs.ai/v1/",
    "apikey": "sk_xxxxxxxxxxxxxxxxxx"
  },...

Description

The documentation at https://docs.continue.dev/customize/model-providers/inception says the correct value is "provider": "inception", but when I try to use it in my VSCode config.json for the Continue plugin, it does not work: "inception" is not in the list of allowed values for the "provider" field in config.json. VSCode shows the allowed values as:
Value is not accepted. Valid values: "openai", "free-trial", "anthropic", "anthropic-vertexai", "cohere", "bedrock", "bedrockimport", "sagemaker", "together", "novita", "ollama", "huggingface-tgi", "huggingface-inference-api", "llama.cpp", "replicate", "gemini", "gemini-vertexai", "lmstudio", "llamafile", "mistral", "mistral-vertexai", "deepinfra", "groq", "fireworks", "ncompass", "cloudflare", "deepseek", "azure", "msty", "watsonx", "openrouter", "sambanova", "nvidia", "vllm", "cerebras", "askSage", "nebius", "vertexai", "xAI", "kindo", "moonshot", "siliconflow", "function-network", "scaleway", "relace", "morph", "ovhcloud", "venice".
If I use "provider": "openai", API requests fail with error 401 "Incorrect API key provided".
How do I correctly set up the mercury-coder-small model from Inception Labs in the Continue plugin in VSCode?
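
One detail worth flagging (a hedged observation, not confirmed anywhere in this thread): Continue's config.json spells the key field "apiKey" with a capital K, so the lowercase "apikey" in the config above would likely be ignored, which by itself could explain the 401 when falling back to "provider": "openai". Assuming the Inception endpoint is OpenAI-compatible, a workaround entry inside the "models" array of config.json might look like:

  {
    "model": "mercury-coder-small",
    "title": "Mercury Coder Small cw 32k",
    "provider": "openai",
    "apiBase": "https://api.inceptionlabs.ai/v1/",
    "apiKey": "sk_xxxxxxxxxxxxxxxxxx"
  }

This is a sketch, not a verified fix: "provider": "openai" is used here only on the assumption that the Inception API at that apiBase accepts OpenAI-style requests, and the apiKey value is the same placeholder as in the report.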

To reproduce

No response

Log output

@dosubot bot added the area:configuration, ide:vscode, and kind:bug labels on May 12, 2025
@RomneyDa
Collaborator

Fixed by #5854

@github-project-automation bot moved this from Todo to Done in Issues and PRs on May 26, 2025
@RomneyDa
Collaborator

@ltolstoy note that it will still work even if the value is marked as not supported.
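
In other words (restating the collaborator's point as a sketch, with the "apiKey" capitalization corrected as an assumption on my part), the entry from the original report should still be used at runtime even while the editor flags it:

  {
    "model": "mercury-coder-small",
    "title": "Mercury Coder Small cw 32k",
    "provider": "inception",
    "apiBase": "https://api.inceptionlabs.ai/v1/",
    "apiKey": "sk_xxxxxxxxxxxxxxxxxx"
  }

The "Value is not accepted" message presumably comes from the editor's JSON schema validation for config.json rather than from the extension's runtime, so it can be ignored until the schema is updated by #5854.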
