What keywords did you search in kubeadm issues before filing this one?
127.0.0.1 preflight
Is this a BUG REPORT or FEATURE REQUEST?
FEATURE REQUEST
Versions
kubeadm version (use `kubeadm version`): all
Environment:
- Kubernetes version (use `kubectl version`): all
- Cloud provider or hardware configuration: all
- OS (e.g. from /etc/os-release): Ubuntu and many other distros that use systemd-resolved
What happened?
When the underlying node is configured to use the systemd stub resolver (or really any local DNS caching solution), the node's /etc/resolv.conf usually points at 127.0.0.X.
When this happens, the kubelet by default still uses that resolv.conf as the basis for the resolv.conf it generates for pods with `dnsPolicy: Default`. This has consequences for cluster DNS, because the kube-dns/CoreDNS pods use this mechanism to derive their default upstream servers and search paths.
If CoreDNS ends up configured with 127.0.0.X as its upstream resolver, nothing will resolve, because we now have a resolution loop.
What you expected to happen?
I think kubeadm's preflight checks should detect this and fail if the local /etc/resolv.conf includes a 127.0.0.X address on a nameserver line. 127.0.0.X, because depending on the implementation it can be 127.0.0.53 (systemd stub resolver) or 127.0.0.1 (dnsmasq by default).
Should the preflight check fail, the user can point the kubelet at a different resolv.conf with the `--resolv-conf` flag.
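The proposed check is small. A minimal sketch in Go of what such a preflight check could look like (the function name and structure are my own illustration, not kubeadm's actual code):

```go
package main

import (
	"bufio"
	"fmt"
	"net"
	"strings"
)

// hasLoopbackNameserver reports whether any "nameserver" entry in the
// given resolv.conf content points at a loopback address (127.0.0.0/8).
// Using IsLoopback catches 127.0.0.53 (systemd stub resolver) and
// 127.0.0.1 (dnsmasq default) alike.
func hasLoopbackNameserver(resolvConf string) bool {
	scanner := bufio.NewScanner(strings.NewReader(resolvConf))
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) < 2 || fields[0] != "nameserver" {
			continue
		}
		if ip := net.ParseIP(fields[1]); ip != nil && ip.IsLoopback() {
			return true
		}
	}
	return false
}

func main() {
	// systemd-resolved's stub file looks roughly like this:
	stub := "nameserver 127.0.0.53\noptions edns0\n"
	fmt.Println(hasLoopbackNameserver(stub))                   // true
	fmt.Println(hasLoopbackNameserver("nameserver 8.8.8.8\n")) // false
}
```

A real preflight check would read /etc/resolv.conf (or the path given via the kubelet's `--resolv-conf`) and emit a failure with a pointer to the workaround.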
How to reproduce it (as minimally and precisely as possible)?
Bring up an Ubuntu system with the systemd stub resolver configured (the default on Ubuntu 18.04 and later).
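To see the condition without a full cluster, the check boils down to grepping resolv.conf for a loopback nameserver. A sketch using a simulated stub file (the /tmp path is just for illustration; on a real node the file is /etc/resolv.conf):

```shell
# Simulate the resolv.conf that systemd-resolved installs on Ubuntu.
printf 'nameserver 127.0.0.53\noptions edns0\n' > /tmp/stub-resolv.conf

# The proposed preflight condition: fail if a loopback nameserver is present.
if grep -Eq '^nameserver 127\.' /tmp/stub-resolv.conf; then
  echo "loopback resolver detected; pass --resolv-conf=/run/systemd/resolve/resolv.conf to the kubelet"
fi
```

On systemd-resolved hosts, /run/systemd/resolve/resolv.conf contains the real upstream servers, which is why it is the usual value for the kubelet's `--resolv-conf` workaround.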
Anything else we need to know?
This problem comes up fairly often and seems like something kubeadm can check for and help users avoid.