From 359b84de7aa3dc12814b4ec1b71b053b0bdba885 Mon Sep 17 00:00:00 2001
From: Ralf Schmid
Date: Mon, 19 May 2025 13:58:26 +0200
Subject: [PATCH 1/3] Small changes

---
 .../docker-compose-scenarios.rst | 21 +++++++++++++++++++
 1 file changed, 21 insertions(+)

diff --git a/install/docker-compose/docker-compose-scenarios.rst b/install/docker-compose/docker-compose-scenarios.rst
index de537ed8..8e8a72c4 100644
--- a/install/docker-compose/docker-compose-scenarios.rst
+++ b/install/docker-compose/docker-compose-scenarios.rst
@@ -30,6 +30,7 @@ The following scenarios are supported and explained further below:
 - :ref:`Additional scenarios `
 
   - Disable the backup service
+  - Add an Ollama instance to the stack
 
 You can find the files in the `Zammad-Docker-Compose repository `_.
 
@@ -202,6 +203,26 @@ built in backup service in the stack to save resources.
 You can do so by just using the scenario file
 ``scenarios/disable-backup-service.yml`` for deployment.
 
+Add Ollama
+^^^^^^^^^^
+
+You can spin up an additional `Ollama `_ container to use
+:admin-docs:`Zammad's AI features ` on your machine.
+
+.. warning:: You should use this only for development or testing purposes and not
+   in production unless you have a good understanding about how to bridge your
+   GPU into a docker container (and even have a GPU in your server) and have a
+   basic understanding about the differences of LLMs and their sizes.
+
+To deploy an Ollama container inside the Zammad stack, use the scenario file
+``scenarios/add-ollama.yml``. This creates an Ollama container which
+automatically pulls and serves ``Llama3.2`` to be ready to use/test AI features
+out of the box.
+
+Make sure to use the container's IP or name with the port ``11434`` appended or,
+in the case of a reverse proxy, the URL for the
+:admin-docs:`provider configuration ` in Zammad.
+
 Other Use Cases
 ^^^^^^^^^^^^^^^
 

From 7f379758286771971c915f4831fb6c8370593e8e Mon Sep 17 00:00:00 2001
From: Ralf Schmid
Date: Tue, 20 May 2025 16:17:53 +0200
Subject: [PATCH 2/3] Changes based on review

---
 install/docker-compose/docker-compose-scenarios.rst | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/install/docker-compose/docker-compose-scenarios.rst b/install/docker-compose/docker-compose-scenarios.rst
index 8e8a72c4..4b0fc2ed 100644
--- a/install/docker-compose/docker-compose-scenarios.rst
+++ b/install/docker-compose/docker-compose-scenarios.rst
@@ -209,18 +209,16 @@ Add Ollama
 You can spin up an additional `Ollama `_ container to use
 :admin-docs:`Zammad's AI features ` on your machine.
 
-.. warning:: You should use this only for development or testing purposes and not
-   in production unless you have a good understanding about how to bridge your
-   GPU into a docker container (and even have a GPU in your server) and have a
-   basic understanding about the differences of LLMs and their sizes.
+.. hint:: This is intended for development or testing purposes as running a
+   productive LLM stack is complex.
 
 To deploy an Ollama container inside the Zammad stack, use the scenario file
 ``scenarios/add-ollama.yml``. This creates an Ollama container which
 automatically pulls and serves ``Llama3.2`` to be ready to use/test AI features
 out of the box.
 
-Make sure to use the container's IP or name with the port ``11434`` appended or,
-in the case of a reverse proxy, the URL for the
+Add the container name and port (``http://ollama:11434``) or, in case of using
+a reverse proxy, the URL to the
 :admin-docs:`provider configuration ` in Zammad.
 
 Other Use Cases

From 4f9a130d939c1eed1e287706cef40cfb1c14b412 Mon Sep 17 00:00:00 2001
From: Ralf Schmid
Date: Wed, 21 May 2025 09:27:11 +0200
Subject: [PATCH 3/3] Removed reverse proxy mentioning

---
 install/docker-compose/docker-compose-scenarios.rst | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/install/docker-compose/docker-compose-scenarios.rst b/install/docker-compose/docker-compose-scenarios.rst
index 4b0fc2ed..751e23f2 100644
--- a/install/docker-compose/docker-compose-scenarios.rst
+++ b/install/docker-compose/docker-compose-scenarios.rst
@@ -217,9 +217,8 @@ To deploy an Ollama container inside the Zammad stack, use the scenario file
 automatically pulls and serves ``Llama3.2`` to be ready to use/test AI features
 out of the box.
 
-Add the container name and port (``http://ollama:11434``) or, in case of using
-a reverse proxy, the URL to the
-:admin-docs:`provider configuration ` in Zammad.
+To use it in Zammad, add the service name and port (``http://ollama:11434``) to
+the :admin-docs:`provider configuration `.
 
 Other Use Cases
 ^^^^^^^^^^^^^^^
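For reference, scenario files like the one added above are applied as additional Compose files on top of the base stack. Below is a minimal sketch of how such a deployment and a quick check might look; the base file name ``docker-compose.yml``, the service name ``ollama``, and running the check from another container on the Compose network are assumptions, not taken from the patches themselves.

.. code-block:: console

   # Start the stack together with the add-ollama scenario file
   # (assumed base file name; adjust to the repository's actual layout).
   docker compose -f docker-compose.yml -f scenarios/add-ollama.yml up -d

   # From a container on the same Compose network, the Ollama API should
   # respond on port 11434 and list the automatically pulled model (Llama3.2).
   curl http://ollama:11434/api/tags

The same base URL (``http://ollama:11434``) is what the patches tell you to enter in Zammad's provider configuration.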