
I am trying to find out why my kube-dns does not resolve external URLs, and it seems to be caused by missing endpoints, as described in:

https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/ (section "Are DNS endpoints exposed?")

(I am using Google Kubernetes Engine and the cluster was created with the Google Cloud console.)

I can see that the endpoints are not exposed, which is probably causing my issue.
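
For reference, this is the check from that documentation page; in my cluster the ENDPOINTS column comes back empty:

kubectl get endpoints kube-dns --namespace=kube-system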

However, the documentation here is not very clear:

If you do not see the endpoints, see the endpoints section in the debugging Services documentation.

I tried to study it, but I am not sure about the following:

  1. Why were the endpoints not created automatically? We have kube-dns working in a different cluster, and the endpoints exist there by default.
  2. Do I need to add the endpoints manually (see the sketch after this list), or can I recreate the whole kube-dns service with these endpoints?
  3. When adding them manually, how do I choose the correct IP addresses?
  4. Do I need to tell other Kubernetes services to use these endpoints?
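
Regarding question 2, this is roughly what I understand a manually created Endpoints object would have to look like. This is only a sketch; the IP is a placeholder for a kube-dns pod IP, which is exactly the part I am unsure how to pick:

apiVersion: v1
kind: Endpoints
metadata:
  name: kube-dns          # must match the Service name
  namespace: kube-system
subsets:
  - addresses:
      - ip: 10.0.0.99     # placeholder: the IP of a running kube-dns pod
    ports:
      - name: dns
        port: 53
        protocol: UDP
      - name: dns-tcp
        port: 53
        protocol: TCP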

This is the service definition:

apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kube-dns","kubernetes.io/cluster-service":"true","kubernetes.io/name":"KubeDNS"},"name":"kube-dns","namespace":"kube-system"},"spec":{"clusterIP":"10.87.0.10","ports":[{"name":"dns","port":53,"protocol":"UDP"},{"name":"dns-tcp","port":53,"protocol":"TCP"}],"selector":{"k8s-app":"kube-dns"}}}
  creationTimestamp: "2019-09-16T13:49:57Z"
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: KubeDNS
  name: kube-dns
  namespace: kube-system
  resourceVersion: "468357079"
  uid: deea9ec5-d888-11e9-9024-42010a840025
spec:
  clusterIP: 10.87.0.10
  clusterIPs:
  - 10.87.0.10
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: dns
    port: 53
    protocol: UDP
    targetPort: 53
  - name: dns-tcp
    port: 53
    protocol: TCP
    targetPort: 53
  selector:
    k8s-app: kube-dns
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
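
If I understand the documentation correctly, the endpoints for this Service should be populated automatically from pods matching the selector k8s-app: kube-dns, and their pod IPs (visible with -o wide) are what should end up in the Endpoints object:

kubectl get pods -n kube-system -l k8s-app=kube-dns -o wide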
    When a Service object has a spec.selector, an Endpoints object should be created by the Kubernetes controllers. Can you share the result of `kubectl get pods -n kube-system -l k8s-app=kube-dns`? If empty, do you see another label that would match your dns pods (`kubectl get pods --show-labels -n kube-system`)? – SYN Jul 31 '22 at 10:14
    Okay, thanks to you, I understood the problem – for some reason, the pods of kube-dns were scaled to zero (probably my predecessor changed this). The service existed, but I didn't realize the pods were not actually running. Scaling the pods up immediately solved the issue. Thanks, and sorry for a stupid issue. – Vojtěch Jul 31 '22 at 21:07
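
    For anyone hitting the same thing, a minimal sketch of the check and fix, assuming kube-dns runs as the standard kube-dns Deployment in kube-system as it does on GKE (the replica count below is a placeholder; on GKE the kube-dns-autoscaler normally manages it):

    # Confirm the kube-dns Deployment actually has ready replicas
    kubectl -n kube-system get deployment kube-dns

    # Scale it back up if it was set to zero (replica count is a placeholder)
    kubectl -n kube-system scale deployment kube-dns --replicas=2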

0 Answers