
I have set up a cluster of 1 master and 1 node by following this guide: CentOS Manual Install

After that, when I try to deploy the dashboard (or anything else, for that matter), I get the following errors in

kubectl get events

25m        1h          23        10.3.0.5                 Node                                                     Warning   MissingClusterDNS   {kubelet 10.3.0.5}          (events with common reason combined)
30m        1h          16        10.3.0.5                 Node                                                     Warning   MissingClusterDNS   {kubelet 10.3.0.5}          kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. pod: "kubernetes-dashboard-1975554030-cc9n1_kube-system(ebab5633-c9d1-11e6-a741-000d3af22f09)". Falling back to DNSDefault policy.
56m        56m         1         10.3.0.5                 Node                                                     Warning   MissingClusterDNS   {kubelet 10.3.0.5}          kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. pod: "busybox_default(9634cf12-c9d7-11e6-a741-000d3af22f09)". Falling back to DNSDefault policy.
26m        26m         2         10.3.0.5                 Node                                                     Warning   MissingClusterDNS   {kubelet 10.3.0.5}          kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. pod: "kubernetes-dashboard-1975554030-31rnp_kube-system(bdce120a-c9db-11e6-a741-000d3af22f09)". Falling back to DNSDefault policy.
...

Also, when I try to reach the dashboard, I get this:

curl http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "no endpoints available for service \"kubernetes-dashboard\"",
  "reason": "ServiceUnavailable",
  "code": 503
}

Other logs and info:

kubectl cluster-info
Kubernetes master is running at http://localhost:8080

kubectl version
Client Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.0", GitCommit:"86dc49aa137175378ac7fba7751c3d3e7f18e5fc", GitTreeState:"clean", BuildDate:"2016-12-15T16:57:18Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.0", GitCommit:"86dc49aa137175378ac7fba7751c3d3e7f18e5fc", GitTreeState:"clean", BuildDate:"2016-12-15T16:57:18Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}

kubectl get nodes
NAME       STATUS    AGE
10.3.0.5   Ready     3h

kubectl get services --all-namespaces
NAMESPACE     NAME                   CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
default       kubernetes             10.254.0.1       <none>        443/TCP   1h
kube-system   kubernetes-dashboard   10.254.155.149   <nodes>       80/TCP    31m

kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY     STATUS             RESTARTS   AGE
kube-system   kubernetes-dashboard-1975554030-1ramq   0/1       CrashLoopBackOff   10         31m
Rohit Hazra
  • Looks like the pod is crashing, you should be able to do a 'kubectl --namespace=kube-system describe pod kubernetes-dashboard-1975554030-1ramq' or add a --all-namespaces to your get events command to get the events from kube-system. It could be anything really. – David Houde Jan 27 '17 at 05:17

1 Answer


The Kubernetes dashboard requires a working cluster DNS service.

Here's a manifest you can use to deploy CoreDNS, which will give your cluster working DNS.

I'm guessing your cluster DNS IP is 10.254.0.10, based on the service CIDR (10.254.0.0/16) visible in the output above.
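Note that the MissingClusterDNS warnings also mean the kubelet itself needs to be told that IP; deploying the DNS service alone won't silence them. A sketch of the fix on each node, assuming the file layout from the CentOS manual install guide (the path and variable name are assumptions, not something I can see from your output):

```shell
# Assumed layout: kubelet flags live in /etc/kubernetes/kubelet under KUBELET_ARGS.
# Add the cluster DNS IP and domain so pods can use the "ClusterFirst" policy:
#
#   KUBELET_ARGS="--cluster-dns=10.254.0.10 --cluster-domain=cluster.local"
#
# Then restart the kubelet for the change to take effect:
systemctl restart kubelet
```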

You should be able to save this manifest in a text file (e.g. k8s-dns.yaml) and then install it with kubectl create -f k8s-dns.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        log stdout
        health
        # Replace cluster.local with your cluster domain
        kubernetes cluster.local
        proxy . /etc/resolv.conf
        cache 30
    }
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: coredns
  template:
    metadata:
      labels:
        k8s-app: coredns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
    spec:
      containers:
      - name: coredns
        image: rothgar/coredns:004
        imagePullPolicy: Always
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: coredns
  # Replace with your cluster DNS IP
  clusterIP: 10.254.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
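After applying the manifest, you can sanity-check that DNS actually works. Something like the following, reusing the busybox pod from your events output as a test client (pod name and lookup target are just illustrative):

```shell
# Apply the manifest, then confirm the CoreDNS pod comes up.
kubectl create -f k8s-dns.yaml
kubectl --namespace=kube-system get pods -l k8s-app=coredns

# From an existing pod, verify that service names resolve.
kubectl exec busybox -- nslookup kubernetes.default
```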

As an aside, I would suggest following the kubeadm documentation instead, as it is much more up to date and will give you a fully working cluster.

Justin Garrison