
I've got a Kubernetes installation on CoreOS with DNS addon running in a pod.

My problem is this: kube2sky cannot reach the api-server. By default it uses 127.0.0.1:8080, which is not reachable from inside Docker containers. Because the api-server only listens on localhost, I switched kube_master_url to the service address (https://10.3.0.1:443), but that endpoint requires authentication.

This is the log from the kube2sky pod:

I1119 02:01:48.603839       1 kube2sky.go:389] Etcd server found: http://127.0.0.1:4001
I1119 02:01:49.604512       1 kube2sky.go:455] Using https://10.3.0.1:443 for kubernetes master
I1119 02:01:49.604524       1 kube2sky.go:456] Using kubernetes API v1
E1119 02:01:49.616085       1 reflector.go:136] Failed to list *api.Service: Get https://10.3.0.1:443/api/v1/services: x509: failed to load system roots and no roots provided
E1119 02:01:49.616142       1 reflector.go:136] Failed to list *api.Endpoints: Get https://10.3.0.1:443/api/v1/endpoints: x509: failed to load system roots and no roots provided

How do I get the authentication credentials into the pod so that kube2sky uses them?

This is the pod declaration:

apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-dns-v9
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    version: v9
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  selector:
    k8s-app: kube-dns
    version: v9
  template:
    metadata:
      labels:
        k8s-app: kube-dns
        version: v9
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: etcd
        image: gcr.io/google_containers/etcd:2.0.9
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
        command:
        - /usr/local/bin/etcd
        - -data-dir
        - /var/etcd/data
        - -listen-client-urls
        - http://127.0.0.1:2379,http://127.0.0.1:4001
        - -advertise-client-urls
        - http://127.0.0.1:2379,http://127.0.0.1:4001
        - -initial-cluster-token
        - skydns-etcd
        volumeMounts:
        - name: etcd-storage
          mountPath: /var/etcd/data
      - name: kube2sky
        image: gcr.io/google_containers/kube2sky:1.11
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
        args:
        # command = "/kube2sky"
        - -domain=cluster.local
        - -kube_master_url=https://10.3.0.1:443
      - name: skydns
        image: gcr.io/google_containers/skydns:2015-10-13-8c72f8c
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
        args:
        # command = "/skydns"
        - -machines=http://127.0.0.1:4001
        - -addr=0.0.0.0:53
        - -ns-rotate=false
        - -domain=cluster.local.
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 1
          timeoutSeconds: 5
      - name: healthz
        image: gcr.io/google_containers/exechealthz:1.0
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
        args:
        - -cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
        - -port=8080
        ports:
        - containerPort: 8080
          protocol: TCP
      volumes:
      - name: etcd-storage
        emptyDir: {}
      dnsPolicy: Default  # Don't use cluster DNS.
Wienczny

2 Answers


I think you have not properly set up the ServiceAccount admission controller, or the service account and token controllers.

If you do: kubectl get pods --all-namespaces -l k8s-app=kube-dns -o yaml, do you see any mention of mountPath: /var/run/secrets/kubernetes.io/serviceaccount?
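For reference, when service accounts are working, the pod spec printed by that command should contain an automatically injected token volume along these lines (a sketch; the default-token-abc12 secret name is illustrative):

```yaml
# Excerpt of what the ServiceAccount admission controller injects into each
# container of a pod; the token secret name below is illustrative.
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-abc12
      readOnly: true
  volumes:
  - name: default-token-abc12
    secret:
      secretName: default-token-abc12
```

If that volume is absent, kube2sky has no token to present to the api-server.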

Or maybe you changed your apiserver's private key or cert?

Eric Tune

The SSL certificate was missing the IP of my kubernetes api-server service. After rotating the certificates I had to delete the old service account secret and restart the kubelet, which created a new service account token. The service account credentials are automatically mounted into the pod at /var/run/secrets/kubernetes.io/serviceaccount; this location is used by kubectl as a fallback when looking for credentials.
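A sketch of the certificate step, assuming the 10.3.0.1 service IP from this setup; file names are illustrative, and in a real cluster you would sign with your existing cluster CA rather than the throwaway one generated here:

```shell
# The apiserver serving cert must list the kubernetes service IP (10.3.0.1
# here) in its subjectAltName, or in-cluster clients like kube2sky fail TLS.
cat > openssl.cnf <<'EOF'
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[v3_req]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
IP.1 = 10.3.0.1
EOF

# Throwaway CA for illustration only; use your real cluster CA in practice.
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca-key.pem -out ca.pem \
  -days 365 -subj "/CN=kube-ca"

# New apiserver key and cert carrying the SANs defined above.
openssl genrsa -out apiserver-key.pem 2048
openssl req -new -key apiserver-key.pem -out apiserver.csr \
  -subj "/CN=kube-apiserver" -config openssl.cnf
openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca-key.pem \
  -CAcreateserial -out apiserver.pem -days 365 \
  -extensions v3_req -extfile openssl.cnf

# Confirm the service IP made it into the SAN list.
openssl x509 -in apiserver.pem -noout -text | grep -A1 'Subject Alternative Name'

# Afterwards, delete the stale token secret so a new one is minted against
# the new cert, and restart kubelet (secret name is illustrative):
#   kubectl --namespace=kube-system delete secret default-token-<hash>
#   sudo systemctl restart kubelet
```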

Wienczny