I have tried to build a Kubernetes cluster using kubeadm on my bare-metal server with containerd as the CRI, but CoreDNS fails to start after installing the CNI (weave-net).

Two CoreDNS pods are now in the "CrashLoopBackOff" state, and their logs show:

plugin/forward: no nameservers found

The output of "kubectl describe pod" shows the following events:

Events:
  Type     Reason            Age                    From               Message
  ----     ------            ----                   ----               -------
  Warning  FailedScheduling  4m52s (x9 over 13m)    default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
  Normal   Scheduled         4m7s                   default-scheduler  Successfully assigned kube-system/coredns-58cf647449-8pq7k to k8s
  Normal   Pulled            3m13s (x4 over 4m6s)   kubelet            Container image "localhost:5000/coredns:v1.8.4" already present on machine
  Normal   Created           3m13s (x4 over 4m6s)   kubelet            Created container coredns
  Normal   Started           3m13s (x4 over 4m6s)   kubelet            Started container coredns
  Warning  Unhealthy         3m13s                  kubelet            Readiness probe failed: Get "http://10.32.0.3:8181/ready": dial tcp 10.32.0.3:8181: connect: connection refused
  Warning  BackOff           2m54s (x12 over 4m5s)  kubelet            Back-off restarting failed container

If I add a setting like "nameserver 8.8.8.8" to /etc/resolv.conf, the CoreDNS pods start running. However, I currently don't use any external DNS at all, and with Docker as the CRI, CoreDNS worked fine even though /etc/resolv.conf had no settings.
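
For reference, the whole workaround was just a single upstream resolver line in /etc/resolv.conf, something like:

    # /etc/resolv.conf (workaround I would rather avoid)
    nameserver 8.8.8.8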

Is it possible to deal with this problem without configuring an upstream DNS server in resolv.conf?

Server information:

OS: Red Hat Enterprise Linux 8.4
CRI: containerd 1.4.11
CNI: weave-net 1.16
Tools: kubeadm, kubectl, kubelet 1.22.1

I have also tried Calico as the CNI, but the result was the same.


2 Answers


The cause was that CoreDNS has a forwarding setting in its ConfigMap by default: the forward plugin takes its upstream servers from /etc/resolv.conf, and since that file contained no nameserver entries, CoreDNS failed with "plugin/forward: no nameservers found".

# kubectl edit configmap coredns -n kube-system

After deleting the following section, CoreDNS started and worked properly.

    forward . /etc/resolv.conf {
       max_concurrent 1000
    }
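
For context, the surrounding Corefile in the coredns ConfigMap (as generated by kubeadm; the exact contents may differ slightly between versions) looks roughly like this, with the forward block above being the part I removed:

    .:53 {
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }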

Alternatively, I changed the following section in the coredns ConfigMap:

forward . /etc/resolv.conf {
   max_concurrent 1000
}

to this:

forward . 8.8.8.8 {
   max_concurrent 1000
}

And it works.
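
The reload plugin in the default Corefile should pick up the edited ConfigMap on its own, but if it doesn't, restarting the deployment and checking the pods is a quick way to confirm (standard kubectl commands, nothing specific to this setup):

# kubectl -n kube-system rollout restart deployment coredns
# kubectl -n kube-system get pods -l k8s-app=kube-dns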