BUG: Fix healthcheck port for kubectl deployments #715


Open · wants to merge 1 commit into master

Conversation

@Nielio commented Apr 10, 2025

Changed the livenessProbe port from 1042 to 1024 for kubectl deployments.

The pod did not listen on port 1042, so the health-check connection was refused and the probe failed. Changing it to the stats port 1024 fixes the health check.
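For reference, the proposed edit is a one-line change to the livenessProbe in the controller manifest (a sketch; the exact file in the repo is not quoted in this thread):

    livenessProbe:
      httpGet:
        path: /healthz
        port: 1024 # was 1042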

stale bot commented May 10, 2025

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale label May 10, 2025
@Nielio Nielio changed the title Fix healthcheck port for kubectl deployments BUG Fix healthcheck port for kubectl deployments May 12, 2025
@Nielio Nielio changed the title BUG Fix healthcheck port for kubectl deployments BUG: Fix healthcheck port for kubectl deployments May 12, 2025
@stale stale bot removed the stale label May 12, 2025
@ivanmatmati (Collaborator)

Hi @Nielio, currently everything is aligned on port 1042. Do you see any reason to change that?

@Nielio (Author) commented May 23, 2025

Hi @ivanmatmati, I just deployed it in the Open Telekom Cloud as a DaemonSet and the health checks kept failing.
I inspected the running pods and nothing was listening on port 1042.
After changing the healthz port to 1024, the health checks succeed.

So I assume there is simply a typo in this yml, 1042 instead of 1024. Or am I wrong?
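One way to reproduce this observation (a sketch; the pod name is a placeholder, and ss/netstat may not be present in slim controller images):

kubectl get pods -n haproxy-controller
# hypothetical pod name below; substitute one from the output above
kubectl exec -n haproxy-controller haproxy-kubernetes-ingress-xxxxx -- sh -c 'ss -lnt || netstat -lnt'
# or probe the healthz port from outside the pod
kubectl port-forward -n haproxy-controller haproxy-kubernetes-ingress-xxxxx 1042:1042 &
curl -i http://127.0.0.1:1042/healthz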

@ivanmatmati (Collaborator)

Can you check the value in the generated configuration file when running with defaults? It's in the frontend healthz section. What version of the charts are you using?
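One way to dump that section from a running controller pod (a sketch; the pod name is a placeholder, and the config path /etc/haproxy/haproxy.cfg is an assumption that may differ between versions):

kubectl exec -n haproxy-controller haproxy-kubernetes-ingress-xxxxx -- \
  sed -n '/^frontend healthz/,/^$/p' /etc/haproxy/haproxy.cfg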

@ivanmatmati (Collaborator)

@Nielio, can you check the version of the Helm chart you're using? I've just tried with 1.44.3 and everything works fine.
In HAProxy configuration:

frontend healthz
  mode http
  bind 0.0.0.0:1042 name v4
  bind :::1042 name v6 v4v6
  monitor-uri /healthz
  option dontlog-normal

and in Ingress Controller yaml:

    livenessProbe:
      failureThreshold: 3
      httpGet:
        path: /healthz
        port: 1042
        scheme: HTTP
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1

Port 1024 is dedicated to the stats frontend:

frontend stats
  mode http
  bind :::1024 name v6
  bind *:1024 name stats
  stats enable
  stats uri /
  stats refresh 10s
  http-request set-var(txn.base) base
  http-request use-service prometheus-exporter if { path /metrics }
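
To confirm which frontend answers on which port, a quick local probe along these lines can help (a sketch; the pod name is a placeholder):

kubectl port-forward -n haproxy-controller haproxy-kubernetes-ingress-xxxxx 1042:1042 1024:1024 &
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:1042/healthz   # healthz frontend
curl -s http://127.0.0.1:1024/metrics | head                             # prometheus-exporter on the stats frontend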

@Nielio (Author) commented May 27, 2025

@ivanmatmati I didn't use the Helm chart. I downloaded the yml files from this repo and applied them manually. The version is the one from my pull request.

I didn't make many changes beyond what is shown in this pull request.

apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-kubernetes-ingress
  namespace: haproxy-controller
data:
  syslog-server: "address:stdout, format: raw, facility:local0, level:debug"
  ssl-redirect: "true"
  ssl-redirect-port: "443"

---

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: haproxy
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: haproxy.org/ingress-controller/haproxy-controller

DaemonSet changes

spec:
  # ...
  template:
    # ...
    spec:
      # ...
      containers:
        - name: haproxy-ingress
          image: haproxytech/kubernetes-ingress
          args:
            - --configmap=haproxy-controller/haproxy-kubernetes-ingress
            - --ingress.class=haproxy-controller # <--
            - --publish-service=haproxy-controller/haproxy-kubernetes-ingress # <--
          # ...
          livenessProbe:
            httpGet:
              path: /healthz
              port: 1024 # <--
          ports:
            - name: http
              containerPort: 8080
              hostPort: 8080
            - name: https
              containerPort: 8443
              hostPort: 8443
            - name: stat
              containerPort: 1024
              # hostPort: 1024 # <-- removed this one
          env:
            - name: TZ # <--
              value: "Europe/Berlin"
          # ...

It seems like my Service yaml is different from the one I downloaded on 2025-04-04

apiVersion: v1
kind: Service
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-kubernetes-ingress
  namespace: haproxy-controller
  annotations:
    # This field is mandatory for kws provider to update routes
    kubernetes.io/elb.id: xxxxxxxxxxxxxxxxx
    kubernetes.io/elb.class: performance
    kubernetes.io/elb.health-check-flag: 'on'
    kubernetes.io/elb.health-check-option: '{"protocol":"TCP","delay":"5","timeout":"10","max_retries":"3"}'
    kubernetes.io/elb.lb-algorithm: ROUND_ROBIN
    kubernetes.io/elb.pass-through: onlyLocal
spec:
  selector:
    run: haproxy-ingress
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 8080
    - name: https
      port: 443
      protocol: TCP
      targetPort: 8443
    - name: mariadb
      nodePort: 30101
      port: 2000
      protocol: TCP
      targetPort: 2000

---

apiVersion: v1
kind: Service
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-stats-clusterip
  namespace: haproxy-controller
spec:
  type: ClusterIP
  ports:
    - name: stat
      port: 1024
      protocol: TCP
  selector:
    run: haproxy-ingress

@Nielio (Author) commented May 27, 2025

I will try the current Helm chart instead of the yaml files on another cluster next week.
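
For comparison, the chart-based install is roughly the following (a sketch based on the haproxytech Helm repository; release name, chart values, and flags should be checked against the chart documentation):

helm repo add haproxytech https://haproxytech.github.io/helm-charts
helm repo update
helm install haproxy-kubernetes-ingress haproxytech/kubernetes-ingress \
  --create-namespace --namespace haproxy-controller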
