[FIX] Unable to connect to local Ollama using self-hosted Docker #1100
Comments
Having a similar problem.
Hi,
I thought it might just have been Windows being Windows with WSL, but I see the exact same problem on Linux Mint. There has got to be something we're just doing wrong.
@raphaelventura I've struggled a while but found what wasn't working on my setup (Ubuntu Linux): make sure the AI model API set up in the Admin Web Interface points to http://host.docker.internal:11434/v1/. So based on your config shown in the screenshots above, prefer your "ollama" API config rather than "ollama-local".
Thanks @infocillasas, but I've tried both URLs (one after the other) in the yaml config file as well as on the model API config page. My ollama instance isn't running inside a container; it's my distro's packaged application.
This is a very open issue that I have been fighting for days. I can run this fine on my Mac in docker, but when I install it on an Ubuntu desktop instance with a GPU I see the same issues and error message. I am hosting ollama on the same machine and also trying with docker (just like OP did). I can easily connect to my local instance with curl http://192.168.6.241:11434, but have no idea what this "/v1" is.
The /v1/ endpoint is Ollama's OpenAI-compatible API and is the correct base address to set up Khoj to connect to Ollama.
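For anyone wondering what lives under /v1/, here is a minimal sketch to confirm the OpenAI-compatible API responds from the host, assuming the default port and using an example model name:

```
# List models through the OpenAI-compatible API (default ollama address assumed)
curl http://localhost:11434/v1/models

# Minimal chat completion against the same API (model name is just an example)
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3.1:8b", "messages": [{"role": "user", "content": "hello"}]}'
```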
Same issue here with docker and a system ollama installation (Khoj v1.36.6).

[Screenshots: chat model and model api configuration]

ollama logs: the ollama server logs show nothing, meaning there is no HTTP access.
That's how the API is designed:

try with
Sadly, this didn't help:
There is still no access in the ollama logs...
BTW, I've tried to set OLLAMA_ORIGINS in the systemd service (restarted, option applied):

I still get "Connection refused" in the docker instance and no access in the logs of ollama.
It's now working here. In short, the ollama service and network configuration did not match.

Detailed explanation (on Linux)

The docker config
The docker instance must be able to reach the ollama service running directly on the host. The docker installation of khoj with compose uses docker's bridge mode via the "docker0" bridge. The docker instance's hosts file has the host.docker.internal entry for that purpose.

The ollama config
Because of the bridge mode on Linux, the ollama service MUST be configured to listen on the bridge address, since the container is expected to reach it through host.docker.internal.

Possible solutions

a) ollama listening on 172.17.0.1
For this to work, configure your service to listen on the docker0 bridge address (172.17.0.1 by default). For systemd, use an environment override on the service (a sketch follows).
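A minimal sketch of such an override, assuming a standard ollama.service unit and the default docker0 gateway address (adjust paths and values to your setup):

```
# Create a systemd drop-in so ollama binds to the docker0 gateway address
sudo mkdir -p /etc/systemd/system/ollama.service.d
sudo tee /etc/systemd/system/ollama.service.d/override.conf <<'EOF'
[Service]
Environment="OLLAMA_HOST=172.17.0.1"
EOF
```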
Restart ollama:
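For a systemd-managed install, that would be something like:

```
sudo systemctl daemon-reload
sudo systemctl restart ollama.service
```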
Downsides:
b) ollama listening on all interfaces
Configure ollama to listen on all interfaces (`OLLAMA_HOST=0.0.0.0`) in the same way.
Downsides:
c) Redirect the traffic
The idea is to bind on the docker0 address and redirect the traffic to where ollama already listens. With socat:
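A sketch of what such a redirect could look like, assuming ollama stays on 127.0.0.1:11434 and docker0 has the default 172.17.0.1 address:

```
# Listen on the docker0 gateway and forward every connection to the local ollama
socat TCP-LISTEN:11434,bind=172.17.0.1,fork,reuseaddr TCP:127.0.0.1:11434
```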
NOTICE: something similar should be possible with nat PREROUTING and POSTROUTING iptables rules.
Downsides:
Testing
The docker instance must be able to reach the ollama service like this:
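Assuming the compose setup resolves host.docker.internal to the host gateway, the check from inside the khoj server container looks like this:

```
# Run from inside the khoj server container
curl http://host.docker.internal:11434
# Expected response: Ollama is running
```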
Notice the "Ollama is running". Hope this helps. |
Thanks very much for this detailed explanation. I modified my ollama service with a new Environment entry. Unfortunately, I still have a connection error. It may be trivial to troubleshoot but I'm quite unfamiliar with network stuff. I tried the following from the directory containing the docker-compose file:

docker-compose start server
docker-compose exec -it server /bin/bash
root@df6c8002d319:/app# curl http://host.docker.internal:11434
curl: (6) Could not resolve host: host.docker.internal

From within the container, I can see that /etc/hosts contains:

root@df6c8002d319:/app# cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.18.0.3 df6c8002d319

Seems like I'm missing some alias for host.docker.internal.
I guess you didn't start all the docker stuff with docker-compose up. What's the output of the following commands?
Here are the 3 outputs, in order, after launching docker-compose up:
The outputs look good and the docker0 bridge with the correct IPs and binding is there. Do you have the host.docker.internal line in /etc/hosts with the gateway address?
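A quick way to check, from inside the container (hostname as used elsewhere in this thread):

```
# Inside the khoj server container: is host.docker.internal mapped at all?
grep host.docker.internal /etc/hosts
```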
No, I didn't! I tried to add

extra_hosts:
  - "host.docker.internal:host-gateway"

inside the server service of the docker-compose file.
I'll leave this issue open for now since we had to go through some configuration that is not documented, so that may be of interest for contributors to look into.
Using Linux. Tried a) and b) and unfortunately still getting:

Pretty sure my config is right:

I'm performing the following.

After editing said file above and running:
If the ollama service is running, you do not need to run `ollama serve` (check `systemctl status ollama.service`) nor `ollama run` afterwards. Khoj will call what it needs to.
Maybe run the checks suggested above?
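For example, assuming a systemd-managed ollama:

```
# If this shows "active (running)", khoj can use it directly; no manual `ollama serve` needed
systemctl status ollama.service
# Optionally confirm something is listening on the default ollama port
ss -ltnp | grep 11434
```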
…On February 20, 2025 12:27:44 PM GMT+01:00, Chris Franco ***@***.***> wrote:
Francommit left a comment (khoj-ai/khoj#1100)
Using Linux.
Tried a) and b) and still unfortunately still getting;
`APIConnectionError: Connection error.`
Pretty sure my config is right;
```
- OPENAI_BASE_URL=http://host.docker.internal:11434/v1/
```



I'm performing the following.
```
docker compose up
ollama serve
ollama run deepseek-r1:32b
```
After editing said file above and running;
```
➜ .khoj systemctl daemon-reload
systemctl restart ollama.service
```
I faced exactly the same issues described here. Environment:
I did something similar to @Francommit and do not get connection errors anymore. Steps:
So the issue is solved with that.

Apart from that, the ollama integration doesn't seem to respect my local models. I have the following models downloaded:

❯ ollama list
NAME                         ID              SIZE     MODIFIED
llama3.1:8b-instruct-fp16    4aacac419454    16 GB    10 hours ago
llama3.1:8b                  46e0c10c039e    4.9 GB   11 hours ago

I successfully integrated ollama into Khoj, but can't use these models directly. If I'm not missing something (please let me know), I can create another issue for that.
@btschwertfeger
Oh yes, that was the trick. Thank you!
Describe the bug
I'm trying to connect to my local ollama server but am getting the following connection_exceptions (too large to fit inside the issue) after getting these DEBUG messages inside the server logs:
I've tried to set the OPENAI_API_BASE variable to the two following values inside the docker-compose file (and set the same URLs for the AI model API later in the server admin panel):
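One way to see what the container actually resolves and reaches (the variable and host names are the ones mentioned elsewhere in this thread, not a confirmed fix):

```
# Open a shell command inside the khoj server container and probe the ollama URL
docker-compose exec server /bin/bash -c 'env | grep -i openai; curl -sS http://host.docker.internal:11434'
```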
To Reproduce
Steps to reproduce the behavior:
1. ollama serve
2. ollama pull llama3.1
3. docker-compose up, then configure the AI model API and Chat Model as stated in the doc.

Screenshots
API config (screenshot)

Models config (screenshot)
I tried with the names `8b` and `latest`, since the output of `ollama list` yields:

Platform

If self-hosted