BUG: Fix healthcheck port for kubectl deployments #715
base: master
Conversation
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Hi @Nielio , currently everything is aligned on port 1042. Do you see any reason to change that?
Hi @ivanmatmati , I just deployed it in the Open Telekom Cloud as a DaemonSet and the health checks kept failing. So I assume there is just a typo in this yml, 1042 instead of 1024. Or am I wrong?
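Probe failures like this surface as Unhealthy events on the pod. A quick way to confirm (a sketch; the pod name is a placeholder):

kubectl -n haproxy-controller get pods -l run=haproxy-ingress
# Show the probe definition and recent failures for one pod
kubectl -n haproxy-controller describe pod <pod-name> | grep -A5 Liveness
# List probe-failure events across the namespace
kubectl -n haproxy-controller get events --field-selector reason=Unhealthy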
Can you check the value inside the configuration file with the default values? It's in the
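Whichever file holds the defaults, the ports the controller actually binds can also be checked from inside the pod (a sketch; assumes netstat or ss is available in the image, which may not be the case):

kubectl -n haproxy-controller exec <pod-name> -- sh -c 'netstat -ltn || ss -ltn'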
@Nielio , can you check the version of the Helm chart you're using? I've just tried with 1.44.3 and everything works fine. Port 1024 is dedicated to the stats frontend, both in the chart defaults and in the Ingress Controller yaml.
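The chart's published defaults can be inspected directly (a sketch; the repo URL is the one published by HAProxy Technologies, and the version matches the one mentioned above):

helm repo add haproxytech https://haproxytech.github.io/helm-charts
# Dump the default values and look for the probe/stats port settings
helm show values haproxytech/kubernetes-ingress --version 1.44.3 | grep -i -B2 -A2 port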
@ivanmatmati I didn't use the Helm chart. I downloaded the yml files from this repo and applied them manually. The version is as in my pull request. I didn't make many changes besides those in this pull request.

apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-kubernetes-ingress
  namespace: haproxy-controller
data:
  syslog-server: "address:stdout, format: raw, facility:local0, level:debug"
  ssl-redirect: "true"
  ssl-redirect-port: "443"
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: haproxy
  annotations:
    ingressclass.kubernetes.io/is-default-class: "true"
spec:
  controller: haproxy.org/ingress-controller/haproxy-controller
DaemonSet changes:

spec:
  # ...
  template:
    # ...
    spec:
      # ...
      containers:
        - name: haproxy-ingress
          image: haproxytech/kubernetes-ingress
          args:
            - --configmap=haproxy-controller/haproxy-kubernetes-ingress
            - --ingress.class=haproxy-controller # <--
            - --publish-service=haproxy-controller/haproxy-kubernetes-ingress # <--
          # ...
          livenessProbe:
            httpGet:
              path: /healthz
              port: 1024 # <--
          ports:
            - name: http
              containerPort: 8080
              hostPort: 8080
            - name: https
              containerPort: 8443
              hostPort: 8443
            - name: stat
              containerPort: 1024
              # hostPort: 1024 # <-- removed this one
          env:
            - name: TZ # <--
              value: "Europe/Berlin"
          # ...
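With this manifest applied, the probe target can be exercised by hand (a sketch; the pod name is a placeholder):

# Forward the stats port and hit the health endpoint the probe uses
kubectl -n haproxy-controller port-forward <pod-name> 1024:1024 &
curl -i http://localhost:1024/healthz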
It seems like my Service yaml is different from the one I downloaded on 2025-04-04:

apiVersion: v1
kind: Service
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-kubernetes-ingress
  namespace: haproxy-controller
  annotations:
    # This field is mandatory for kws provider to update routes
    kubernetes.io/elb.id: xxxxxxxxxxxxxxxxx
    kubernetes.io/elb.class: performance
    kubernetes.io/elb.health-check-flag: 'on'
    kubernetes.io/elb.health-check-option: '{"protocol":"TCP","delay":"5","timeout":"10","max_retries":"3"}'
    kubernetes.io/elb.lb-algorithm: ROUND_ROBIN
    kubernetes.io/elb.pass-through: onlyLocal
spec:
  selector:
    run: haproxy-ingress
  type: LoadBalancer
  externalTrafficPolicy: Local
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 8080
    - name: https
      port: 443
      protocol: TCP
      targetPort: 8443
    - name: mariadb
      nodePort: 30101
      port: 2000
      protocol: TCP
      targetPort: 2000
---
apiVersion: v1
kind: Service
metadata:
  labels:
    run: haproxy-ingress
  name: haproxy-stats-clusterip
  namespace: haproxy-controller
spec:
  type: ClusterIP
  ports:
    - name: stat
      port: 1024
      protocol: TCP
  selector:
    run: haproxy-ingress
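As a cross-check, the stats Service above can be probed from inside the cluster (a sketch; the curl image and the throwaway pod name are arbitrary choices):

kubectl -n haproxy-controller run probe --rm -it --restart=Never \
  --image=curlimages/curl -- curl -i http://haproxy-stats-clusterip:1024/healthz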
I will try using the current Helm chart instead of the yaml files on another cluster next week.
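For reference, a minimal chart-based install would look roughly like this (assuming the haproxytech chart repo from above; the release name is arbitrary):

helm install haproxy-kubernetes-ingress haproxytech/kubernetes-ingress \
  --namespace haproxy-controller --create-namespace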
Changed the livenessProbe port from 1042 to 1024 for kubectl deployments. The pod did not listen on port 1042, so the connection for the health check was refused and the check failed. Changing it to the stats port 1024 fixes the health check.
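Expressed as a diff, the change amounts to the following (context taken from the DaemonSet manifest quoted above; the exact file path in the repo is not shown here):

 livenessProbe:
   httpGet:
     path: /healthz
-    port: 1042
+    port: 1024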