Description
I am running Ollama on my local network on a dedicated M4 with 24 GB of RAM, using a Qwen3:14b quant with a 65k context window.

I had it create a slightly-more-than-basic React app. It created things, wrote some of them in the dialog, and wrote about 50% of them to disk; the other 50% it didn't. I gave it a command to write the rest of the items, and it did some, but not all of it. I issued another command to finish, and now it just times out.

Reading the ollama serve logs, I noticed that opencode is sending a timeout signal at around 4 minutes each time.

So the issue is opencode.

I didn't set a timeout in the config at first. Then I tried setting a timeout of 600 seconds in the opencode config, with the same results:
```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "ollama-local-remote": {
      "npm": "@ai-sdk/openai-compatible",
      "options": {
        "baseURL": "http://192.168.XX.XXX:11434/v1",
        "timeout": 600000
      },
      "models": {
        "llama3:8b": { "name": "Ollama Llama3" },
        "qwen3:14b-65k": {
          "name": "qwen3:14b 65k (remote local)",
          "tools": true,
          "reasoning": true,
          "options": { "num_ctx": 65536 }
        },
        "qwen3:14b-q4_K_M": {
          "name": "qwen3:14b-q4_K_M (remote local)",
          "tools": true,
          "reasoning": true
        }
      }
    }
  }
}
```
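For what it's worth, one way to check whether the ~4-minute cutoff is opencode's client timeout or the Ollama server itself is to hit the same OpenAI-compatible endpoint directly with a long client-side timeout and measure how long the server actually takes. This is just a sketch using the Python standard library; `BASE_URL` is the placeholder address from my config above, and the helper names (`build_chat_request`, `probe`) are mine, not part of opencode or Ollama.

```python
import json
import time
import urllib.request

# Placeholder from the opencode config above -- substitute your real host.
BASE_URL = "http://192.168.XX.XXX:11434/v1"


def build_chat_request(model, prompt, num_ctx=65536):
    """Build the JSON body for an OpenAI-compatible /chat/completions call.

    Ollama's OpenAI-compatible endpoint accepts the standard model/messages
    fields; num_ctx is passed through as an option here as an assumption.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "options": {"num_ctx": num_ctx},
    }


def probe(model, prompt, timeout_s=600):
    """Send one request with a generous client timeout and return elapsed seconds.

    If this succeeds in, say, 6 minutes while opencode gives up at 4,
    the timeout is on the opencode side, not the server's.
    """
    body = json.dumps(build_chat_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    start = time.time()
    with urllib.request.urlopen(req, timeout=timeout_s) as resp:
        resp.read()
    return time.time() - start
```

Running `probe("qwen3:14b-65k", "write a small react component")` against the box and comparing the elapsed time to where opencode cuts off would at least confirm where the 4-minute limit lives.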
Is this the right way to do it, or is this something that can't be changed? Thanks! OpenCode.ai's special sauce is going to be homebuilds, which will have fewer resources, and as a result opencode needs to be able to wait longer. I'd love the community's thoughts, or is this coming soon? I think there were three releases yesterday, so I know things are happening rapidly.