[Bug]: The container becomes unhealthy and fails to start #1831

Open

camucamulemon7 opened this issue Apr 11, 2025 · 7 comments

Comments

@camucamulemon7
What component(s) are affected?

  • Python SDK
  • Opik UI
  • Opik Server
  • Documentation

Opik version

  • Opik version: 1.7.1
  • Opik version: 1.6.12

Describe the problem

Expected Behavior

  • When proxy settings are configured in each container's Dockerfile and the containers are started with docker compose up --build --detach, everything starts up correctly.

Actual Behavior

  • Even when proxy settings are configured in each container's Dockerfile, starting the containers with docker compose up --build --detach results in the containers becoming unhealthy.

Reproduction steps and code snippets

We are using Ubuntu 24.04 on AWS EC2.
Docker Version: 28.0.4
Docker Compose Version: 2.34.0

Error logs or stack trace

  • opik.sh --verify
Verifying container health...
opik-clickhouse-1 is not running (status: exited)
opik-mysql-1 is running and healthy
opik-python-backend-1 is running but not healthy (health: unhealthy)
opik-redis-1 is running and healthy
opik-frontend-1 is not running (status: created)
opik-backend-1 is not running (status: created)
opik-minio-1 is running and healthy
  • docker compose logs -f
zookeeper-1 2025-04-11 10:20:06,919 [myid:1] - INFO [main:o.a.z.s.ContainerManager@83] - Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0
zookeeper-1 2025-04-11 10:20:06,922 [myid:1] - INFO [main:o.a.z.a.ZKAuditProvider@42] - Zookeeper audit is disabled.
clickhouse-1 /entrypoint.sh: create new user 'opik' instead 'default'
clickhouse-1 Processing configuration file '/etc/clickhouse-server/config.xml'.
clickhouse-1 Merging configuration file '/etc/clickhouse-server/config.d/additional_config.xml'.
clickhouse-1 Merging configuration file '/etc/clickhouse-server/config.d/docker_related_config.xml'.
clickhouse-1 Logging trace to /var/log/clickhouse-server/clickhouse-server.log
clickhouse-1 Logging errors to /var/log/clickhouse-server/clickhouse-server.err.log
zookeeper-1 2025-04-11 10:20:08,246 [myid:] - INFO [SyncThread:0:o.a.z.s.p.FileTxnLog@284] - Creating new log file: log.1
mysql-1 2025-04-11T10:20:09.858234Z 6 [Warning] [MY-010453] [Server] root@localhost is created with an empty password! Please consider switching off the --initialize-insecure option.
mysql-1 2025-04-11T10:20:12.774564Z 0 [System] [MY-015018] [Server] MySQL Server Initialization - end.
mysql-1 2025-04-11 10:20:12+00:00 [Note] [Entrypoint]: Database files initialized
mysql-1 2025-04-11 10:20:12+00:00 [Note] [Entrypoint]: Starting temporary server
mysql-1 2025-04-11T10:20:12.827652Z 0 [System] [MY-015015] [Server] MySQL Server - start.
mysql-1 2025-04-11T10:20:13.151822Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.4.2) starting as process 161
mysql-1 2025-04-11T10:20:13.178642Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
mysql-1 2025-04-11T10:20:13.653011Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
mysql-1 2025-04-11T10:20:14.194431Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
mysql-1 2025-04-11T10:20:14.194487Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel.
mysql-1 2025-04-11T10:20:14.205708Z 0 [Warning] [MY-011810] [Server] Insecure configuration for --pid-file: Location '/var/run/mysqld' in the path is accessible to all OS users. Consider choosing a different directory.
mysql-1 2025-04-11T10:20:14.253392Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Socket: /var/run/mysqld/mysqlx.sock
mysql-1 2025-04-11T10:20:14.253471Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.4.2' socket: '/var/run/mysqld/mysqld.sock' port: 0 MySQL Community Server - GPL.
mysql-1 2025-04-11 10:20:14+00:00 [Note] [Entrypoint]: Temporary server started.
mysql-1 '/var/lib/mysql/mysql.sock' -> '/var/run/mysqld/mysqld.sock'
mc-1 Added "s3" successfully.
mc-1 Bucket created successfully "s3/public"
mc-1 Access permission for "s3/public" is set to "download"
mc-1 exited with code 0
mysql-1 Warning: Unable to load '/usr/share/zoneinfo/iso3166.tab' as time zone. Skipping it.
mysql-1 Warning: Unable to load '/usr/share/zoneinfo/leap-seconds.list' as time zone. Skipping it.
mysql-1 Warning: Unable to load '/usr/share/zoneinfo/leapseconds' as time zone. Skipping it.
mysql-1 Warning: Unable to load '/usr/share/zoneinfo/tzdata.zi' as time zone. Skipping it.
mysql-1 Warning: Unable to load '/usr/share/zoneinfo/zone.tab' as time zone. Skipping it.
mysql-1 Warning: Unable to load '/usr/share/zoneinfo/zone1970.tab' as time zone. Skipping it.
mysql-1 2025-04-11 10:20:18+00:00 [Note] [Entrypoint]: Creating database opik
mysql-1 2025-04-11 10:20:18+00:00 [Note] [Entrypoint]: Creating user opik
mysql-1 2025-04-11 10:20:18+00:00 [Note] [Entrypoint]: Giving user opik access to schema opik
mysql-1 2025-04-11 10:20:18+00:00 [Note] [Entrypoint]: Stopping temporary server
mysql-1 2025-04-11T10:20:18.131402Z 13 [System] [MY-013172] [Server] Received SHUTDOWN from user root. Shutting down mysqld (Version: 8.4.2).
mysql-1 2025-04-11T10:20:19.505792Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.4.2) MySQL Community Server - GPL.
mysql-1 2025-04-11T10:20:19.505829Z 0 [System] [MY-015016] [Server] MySQL Server - end.
mysql-1 2025-04-11 10:20:20+00:00 [Note] [Entrypoint]: Temporary server stopped
mysql-1 2025-04-11 10:20:20+00:00 [Note] [Entrypoint]: MySQL init process done. Ready for start up.
mysql-1 2025-04-11T10:20:20.159012Z 0 [System] [MY-015015] [Server] MySQL Server - start.
mysql-1 2025-04-11T10:20:20.529645Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.4.2) starting as process 1
mysql-1 2025-04-11T10:20:20.537875Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
mysql-1 2025-04-11T10:20:21.071511Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
mysql-1 2025-04-11T10:20:21.514931Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
mysql-1 2025-04-11T10:20:21.514998Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel.
mysql-1 2025-04-11T10:20:21.536763Z 0 [Warning] [MY-011810] [Server] Insecure configuration for --pid-file: Location '/var/run/mysqld' in the path is accessible to all OS users. Consider choosing a different directory.
mysql-1 2025-04-11T10:20:21.584688Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Bind-address: '::' port: 33060, socket: /var/run/mysqld/mysqlx.sock
mysql-1 2025-04-11T10:20:21.584756Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.4.2' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server - GPL.

Gracefully stopping... (press Ctrl+C again to force) 
dependency failed to start: container opik-clickhouse-1 is unhealthy
  • /var/log/clickhouse-server/clickhouse-server.err.log
2025.04.11 10:20:08.114899 [ 81 ] <Warning> Access(local directory): File /var/lib/clickhouse/access/users.list doesn't exist
2025.04.11 10:20:08.114937 [ 81 ] <Warning> Access(local directory): Recovering lists in directory /var/lib/clickhouse/access/

Healthcheck results

I will post these later.
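
For reference, the per-container healthcheck details can be pulled with docker inspect while the stack is in this state; for example, for the unhealthy ClickHouse container:

docker inspect --format '{{json .State.Health}}' opik-clickhouse-1
docker inspect --format '{{.State.Status}} {{.State.ExitCode}}' opik-clickhouse-1

The first command prints the recent probe attempts and their output, which usually shows exactly why the healthcheck failed.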

@camucamulemon7 changed the title from "[Bug]: containe is unhealthy" to "[Bug]: The container becomes unhealthy and fails to start" on Apr 11, 2025
@BorisTkachenko
Contributor

BorisTkachenko commented Apr 11, 2025

Hi @camucamulemon7 !
Just tried locally with 1.7.1 and all seems ok.
Steps:

  • navigate to a folder with cloned opik repo
  • git checkout 1.7.1
  • docker rm -f <existing_containers>
  • docker system prune
  • navigate to OpikRepoFolder/deployment/docker-compose
  • docker compose up --detach --build --force-recreate

After that, all containers started and were healthy.
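
The same steps as a single shell session (the clone path is a placeholder; the name filter assumes the default opik- container prefix seen in the verify output above):

cd /path/to/opik
git checkout 1.7.1
# remove any previous opik containers (errors harmlessly if none exist)
docker rm -f $(docker ps -aq --filter "name=opik")
docker system prune
cd deployment/docker-compose
docker compose up --detach --build --force-recreate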

@camucamulemon7
Author

Hi @BorisTkachenko
I’m not sure what the exact issue is either.
The logs mention that ClickHouse is unhealthy, but looking at the timeline, python-backend-1 becomes unhealthy first, so I suspect that might be the root cause.
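
One way to confirm that ordering is to replay Docker's timestamped health events, e.g.:

docker events --since 1h --until now --filter 'event=health_status' --format '{{.Time}} {{.Actor.Attributes.name}} {{.Status}}'

This lists, in order, which container flipped to unhealthy first.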

@camucamulemon7
Author

I compared the docker compose logs on my PC (WSL2) and found several differences:

  1. Logs for demo-data-generator-1

    • In the logs where the system starts correctly, the following messages appear at the start:
    demo-data-generator-1  | Starting the Demo Data creation task
    demo-data-generator-1  | ✅ Demo data created
    
    • In the logs where it fails to start, there are no logs from demo-data-generator-1.
  2. Logs for redis-1

    • In the successful logs, the following messages appear:
    redis-1                | 1:C 09 Apr 2025 14:44:35.089 # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can also cause failures without low memory condition, see https://github.com/jemalloc/jemalloc/issues/1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
    redis-1                | 1:C 09 Apr 2025 14:44:35.089 * oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
    redis-1                | 1:C 09 Apr 2025 14:44:35.089 * Redis version=7.2.4, bits=64, commit=00000000, modified=0, pid=1, just started
    redis-1                | 1:C 09 Apr 2025 14:44:35.089 * Configuration loaded
    redis-1                | 1:M 09 Apr 2025 14:44:35.090 * monotonic clock: POSIX clock_gettime
    redis-1                | 1:M 09 Apr 2025 14:44:35.090 * Running mode=standalone, port=6379.
    redis-1                | 1:M 09 Apr 2025 14:44:35.091 * Server initialized
    redis-1                | 1:M 09 Apr 2025 14:44:35.091 * Ready to accept connections tcp
    redis-1                | 1:signal-handler (1744209973) Received SIGTERM scheduling shutdown...
    redis-1                | 1:M 09 Apr 2025 14:46:13.592 * User requested shutdown...
    redis-1                | 1:M 09 Apr 2025 14:46:13.592 * Saving the final RDB snapshot before exiting.
    redis-1                | 1:M 09 Apr 2025 14:46:13.598 * DB saved on disk
    redis-1                | 1:M 09 Apr 2025 14:46:13.598 # Redis is now ready to exit, bye bye...
    
    • In the failed logs, everything up to and including the message "Ready to accept connections tcp" is missing; only the shutdown sequence appears:
    redis-1                | 1:signal-handler (1744209973) Received SIGTERM scheduling shutdown...
    redis-1                | 1:M 09 Apr 2025 14:46:13.592 * User requested shutdown...
    redis-1                | 1:M 09 Apr 2025 14:46:13.592 * Saving the final RDB snapshot before exiting.
    redis-1                | 1:M 09 Apr 2025 14:46:13.598 * DB saved on disk
    redis-1                | 1:M 09 Apr 2025 14:46:13.598 # Redis is now ready to exit, bye bye...
    
  3. Logs for frontend-1

    • In the successful logs, the following messages appear:
    frontend-1             | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
    frontend-1             | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
    frontend-1             | /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
    frontend-1             | 10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
    frontend-1             | 10-listen-on-ipv6-by-default.sh: info: /etc/nginx/conf.d/default.conf differs from the packaged version
    frontend-1             | /docker-entrypoint.sh: Sourcing /docker-entrypoint.d/15-local-resolvers.envsh
    frontend-1             | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
    frontend-1             | /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
    frontend-1             | /docker-entrypoint.sh: Configuration complete; ready for start up
    frontend-1             | 2025/04/09 14:45:02 [notice] 1#1: using the "epoll" event method
    frontend-1             | 2025/04/09 14:45:02 [notice] 1#1: nginx/1.27.4
    frontend-1             | 2025/04/09 14:45:02 [notice] 1#1: built by gcc 14.2.0 (Alpine 14.2.0) 
    frontend-1             | 2025/04/09 14:45:02 [notice] 1#1: OS: Linux 5.15.0-131-generic
    frontend-1             | 2025/04/09 14:45:02 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
    frontend-1             | 2025/04/09 14:45:02 [notice] 1#1: start worker processes
    frontend-1             | 2025/04/09 14:45:02 [notice] 1#1: start worker process 29
    frontend-1             | 2025/04/09 14:45:02 [notice] 1#1: start worker process 30
    frontend-1             | 2025/04/09 14:45:02 [notice] 1#1: start worker process 31
    frontend-1             | 2025/04/09 14:45:02 [notice] 1#1: start worker process 32
    frontend-1             | 2025/04/09 14:45:02 [notice] 1#1: start worker process 33
    frontend-1             | 2025/04/09 14:45:02 [notice] 1#1: start worker process 34
    frontend-1             | 127.0.0.1 - - [09/Apr/2025:14:45:07 +0000] "GET / HTTP/1.1" 200 514 "-" "curl/8.12.1"
    frontend-1             | 172.20.0.6 - - [09/Apr/2025:14:45:07 +0000] "POST /api/v1/private/projects/retrieve HTTP/1.1" 404 32 "-" "Python-urllib/3.12"
    frontend-1             | 172.20.0.6 - - [09/Apr/2025:14:45:07 +0000] "POST /api/v1/private/prompts/versions/retrieve HTTP/1.1" 404 41 "-" "python-httpx/0.28.1"
    frontend-1             | 127.0.0.1 - - [09/Apr/2025:14:45:08 +0000] "GET / HTTP/1.1" 200 514 "-" "curl/8.12.1"
    frontend-1             | 172.20.0.6 - - [09/Apr/2025:14:45:08 +0000] "POST /api/v1/private/prompts/versions HTTP/1.1" 200 257 "-" "python-httpx/0.28.1"
    frontend-1             | 172.20.0.6 - - [09/Apr/2025:14:45:08 +0000] "POST /api/v1/private/prompts/versions/retrieve HTTP/1.1" 200 272 "-" "python-httpx/0.28.1"
    frontend-1             | 172.20.0.6 - - [09/Apr/2025:14:45:08 +0000] "POST /api/v1/private/prompts/versions HTTP/1.1" 200 304 "-" "python-httpx/0.28.1"
    frontend-1             | 172.20.0.6 - - [09/Apr/2025:14:45:08 +0000] "POST /api/v1/private/prompts/versions/retrieve HTTP/1.1" 200 320 "-" "python-httpx/0.28.1"
    frontend-1             | 172.20.0.6 - - [09/Apr/2025:14:45:08 +0000] "POST /api/v1/private/prompts/versions HTTP/1.1" 200 483 "-" "python-httpx/0.28.1"
    frontend-1             | 172.20.0.6 - - [09/Apr/2025:14:45:08 +0000] "POST /api/v1/private/datasets/retrieve HTTP/1.1" 404 32 "-" "python-httpx/0.28.1"
    frontend-1             | 172.20.0.6 - - [09/Apr/2025:14:45:08 +0000] "POST /api/v1/private/datasets HTTP/1.1" 201 0 "-" "python-httpx/0.28.1"
    frontend-1             | 172.20.0.6 - - [09/Apr/2025:14:45:08 +0000] "POST /api/v1/private/datasets/retrieve HTTP/1.1" 200 205 "-" "python-httpx/0.28.1"
    frontend-1             | 172.20.0.6 - - [09/Apr/2025:14:45:08 +0000] "PUT /api/v1/private/datasets/items HTTP/1.1" 204 0 "-" "python-httpx/0.28.1"
    frontend-1             | 172.20.0.6 - - [09/Apr/2025:14:45:08 +0000] "POST /api/v1/private/datasets/items/stream HTTP/1.1" 200 311 "-" "python-httpx/0.28.1"
    frontend-1             | 172.20.0.6 - - [09/Apr/2025:14:45:08 +0000] "POST /api/v1/private/datasets/items/stream HTTP/1.1" 200 31 "-" "python-httpx/0.28.1"
    frontend-1             | 172.20.0.6 - - [09/Apr/2025:14:45:08 +0000] "POST /api/v1/private/experiments HTTP/1.1" 201 0 "-" "python-httpx/0.28.1"
    
    • In the failed logs, there are no logs from frontend-1.

Based on this information, are you able to identify the root cause? Thank you for your help.
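
For reference, the per-service logs compared above can be captured with timestamps on both machines with:

docker compose logs --timestamps demo-data-generator redis frontend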

@andrescrz
Collaborator

andrescrz commented Apr 14, 2025

Hi @camucamulemon7,

Thank you for your message! I wanted to address a few points regarding your setup and the issues you mentioned.

First, I’d like to clarify that we natively support Windows. You can find the instructions for setting up the platform in our README.md:

# Start the Opik platform
powershell -ExecutionPolicy ByPass -c ".\opik.ps1"

There’s no need to use WSL2 for this setup, as we haven’t tested or validated it.

Second, regarding your original logs where you mentioned:

Even when proxy settings are configured in each container's Dockerfile

Could you clarify what you mean by "proxy settings are configured"? Specifically, did you make any changes to the docker-compose.yml or the Dockerfiles? This would help us better understand your setup.

Finally, based on your error logs, it seems the issue stems from the clickhouse container not starting properly. Since the backend and frontend containers depend on a healthy ClickHouse instance, they fail to start as a result.
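
For context, that gating is standard Compose healthcheck behavior; as an illustration (a sketch, not necessarily the exact opik docker-compose.yaml), it looks like:

services:
  backend:
    depends_on:
      clickhouse:
        condition: service_healthy

With this condition, Compose keeps backend in the created state until the clickhouse healthcheck passes, which matches the (status: created) entries in the verify output above.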

It would be very helpful if you could share the logs from the clickhouse container. This will allow us to analyze the root cause and provide more targeted assistance.
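
For example (container names as in your opik.sh --verify output), the following work even after the container has exited:

docker compose logs clickhouse
docker cp opik-clickhouse-1:/var/log/clickhouse-server/clickhouse-server.err.log .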

Looking forward to your response!

Best regards,
Andrés

@camucamulemon7
Author

Hi @andrescrz
I have added the following configuration to docker-compose.override.yaml.
As I’m still learning, there may be some mistakes—any corrections or suggestions would be greatly appreciated.

x-proxy-environment: &proxy-env
  HTTP_PROXY: "http://xxx.xxx.xxx.xxx:xxx"
  HTTPS_PROXY: "http://xxx.xxx.xxx.xxx:xxx"
  NO_PROXY: localhost,mysql,redis,clickhouse-init,clickhouse,zookeeper,minio,mc,backend,python-backend,demo-data-generator,frontend
  http_proxy: "http://xxx.xxx.xxx.xxx:xxx"
  https_proxy: "http://xxx.xxx.xxx.xxx:xxx"
  no_proxy: localhost,mysql,redis,clickhouse-init,clickhouse,zookeeper,minio,mc,backend,python-backend,demo-data-generator,frontend
  SSL_CERT_FILE: /usr/local/share/ca-certificates/zscaler.crt
  CURL_CA_BUNDLE: /usr/local/share/ca-certificates/zscaler.crt
  REQUESTS_CA_BUNDLE: /usr/local/share/ca-certificates/zscaler.crt

x-zscaler-volumes: &zscaler-volumes
  - ./zscaler/zscaler.crt:/usr/local/share/ca-certificates/zscaler.crt

services:
  mysql:
    environment: *proxy-env
    volumes: *zscaler-volumes
    ports:
      - "3306:3306" # Exposing MySQL port to host

  redis:
    environment: *proxy-env
    volumes: *zscaler-volumes
    ports:
      - "6379:6379" # Exposing Redis port to host

  clickhouse-init:
    environment: *proxy-env
    volumes: *zscaler-volumes

  clickhouse:
    environment: *proxy-env
    volumes: *zscaler-volumes
    ports:
      - "8123:8123" # Exposing ClickHouse HTTP port to host
      - "9000:9000" # Exposing ClickHouse Native Protocol port to host

  zookeeper:
    environment: *proxy-env
    volumes: *zscaler-volumes
    ports:
      - "2181:2181"

  minio:
    environment: *proxy-env
    volumes: *zscaler-volumes
    ports:
      - "9001:9000"   # MinIO API Port
      - "9090:9090"   # MinIO Web UI Console

  mc:
    environment: *proxy-env
    volumes: *zscaler-volumes

  backend:
    environment: *proxy-env
    volumes: *zscaler-volumes
    ports:
      - "8080:8080" # Exposing backend HTTP port to host
      - "3003:3003" # Exposing backend OpenAPI specification port to host

  python-backend:
    environment: *proxy-env
    volumes: *zscaler-volumes
    ports:
      - "8000:8000" # Exposing Python backend HTTP port to host

  demo-data-generator:
    environment: *proxy-env
    volumes: *zscaler-volumes

  frontend:
    environment: *proxy-env
    volumes: *zscaler-volumes
    ports:
      - "5173:5173" # Exposing frontend server port to host

Proxy settings and SSL certificates (Zscaler) have been configured for all containers.
For services like python-backend and frontend that are built from Dockerfiles, these settings have also been added directly to their respective Dockerfiles.
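
For illustration, the Dockerfile additions are roughly of this shape (a hypothetical sketch assuming a Debian-based image; the base image, proxy address, and paths are placeholders, not the actual Opik Dockerfiles):

FROM python:3.12-slim
# Route build-time and run-time traffic through the corporate proxy (placeholder address)
ENV HTTP_PROXY=http://xxx.xxx.xxx.xxx:xxx \
    HTTPS_PROXY=http://xxx.xxx.xxx.xxx:xxx \
    NO_PROXY=localhost,mysql,redis,clickhouse,zookeeper,minio,backend,python-backend,frontend
# Trust the Zscaler root CA so TLS-intercepted connections verify
COPY zscaler/zscaler.crt /usr/local/share/ca-certificates/zscaler.crt
RUN apt-get update \
    && apt-get install -y --no-install-recommends ca-certificates \
    && update-ca-certificates \
    && rm -rf /var/lib/apt/lists/*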

Let me know if I’m missing something or if there’s a better approach.
Thank you for taking the time to read through this.

@andrescrz
Collaborator

Hi @camucamulemon7

Could you please provide the ClickHouse container logs? This will help us debug the startup error more effectively.

Additionally, please also provide the Dockerfile changes for the python-backend, frontend, or any other services you modified.

Best,
Andrés

@camucamulemon7
Author

camucamulemon7 commented Apr 17, 2025

Hi @andrescrz
After updating wget, the clickhouse container is no longer marked as unhealthy.
Currently, the backend-1 container is unhealthy. I suspect the cause is that port 8080 is not open and the connection is being blocked by SSL.
Could you please let me know where I should set the root CA certificate?
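
If the backend container runs on a JVM, the OS certificate bundle alone is usually not enough, because the JVM reads its own truststore. A hypothetical way to import the Zscaler root CA into a derived backend image (JDK 9+ keytool, default cacerts password):

keytool -importcert -cacerts -storepass changeit -noprompt \
  -alias zscaler -file /usr/local/share/ca-certificates/zscaler.crt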
