I'm trying to get the Dashboard UI working in a kubeadm cluster, using kubectl proxy for remote access. I am getting

Error: 'dial tcp 192.168.2.3:8443: connect: connection refused'
Trying to reach: 'https://192.168.2.3:8443/'

when accessing http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ from a remote browser.
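
For reference, the proxy itself is started with no special flags:

kubectl proxy

This serves on 127.0.0.1:8001 by default, hence the localhost URL; reaching that port from the remote browser is not the problem here, since /api responds fine (see below).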

Looking at the API server logs, I see the following errors:

I1215 20:18:46.601151       1 log.go:172] http: TLS handshake error from 10.21.72.28:50268: remote error: tls: unknown certificate authority
I1215 20:19:15.444580       1 log.go:172] http: TLS handshake error from 10.21.72.28:50271: remote error: tls: unknown certificate authority
I1215 20:19:31.850501       1 log.go:172] http: TLS handshake error from 10.21.72.28:50275: remote error: tls: unknown certificate authority
I1215 20:55:55.574729       1 log.go:172] http: TLS handshake error from 10.21.72.28:50860: remote error: tls: unknown certificate authority
E1215 21:19:47.246642       1 watch.go:233] unable to encode watch object *v1.WatchEvent: write tcp 134.84.53.162:6443->134.84.53.163:38894: write: connection timed out (&streaming.encoder{writer:(*metrics.fancyResponseWriterDelegator)(0xc42d6fecb0), encoder:(*versioning.codec)(0xc429276990), buf:(*bytes.Buffer)(0xc42cae68c0)})

I presume this is related to not being able to get the Dashboard working, and if so, I am wondering what the issue with the API server is. Everything else in the cluster appears to be working.
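
One data point that may help narrow this down: the 'connection refused' above is the API server dialing the dashboard pod's IP directly, so a quick sanity check is whether that pod IP is reachable from the master at all (IP and port taken from the error; -k because the dashboard auto-generates its own certificate):

curl -k https://192.168.2.3:8443/

If this also fails from the master, the problem would be pod-network (Calico) reachability rather than the dashboard itself.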

NB: I have admin.conf set up locally and am able to access the cluster via kubectl with no issue.
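
Concretely, that just means the kubeadm-generated kubeconfig was copied off the master and exported before running kubectl (paths here are illustrative):

scp gms@thalia0:/etc/kubernetes/admin.conf ~/.kube/admin.conf
export KUBECONFIG=~/.kube/admin.conf
kubectl get nodes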

Also of note: this had been working when I first got the cluster up. However, I was having networking issues and had to apply the fix from "Coredns service do not work,but endpoint is ok the other SVCs are normal only except dns" in order to get CoreDNS working, so I am wondering if that fix may have broken the proxy service.

* EDIT *

Here is the output for the dashboard pod:

[gms@thalia0 ~]$ kubectl describe pod kubernetes-dashboard-77fd78f978-tjzxt --namespace=kube-system
Name:               kubernetes-dashboard-77fd78f978-tjzxt
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               thalia2.hostdoman/hostip<redacted>
Start Time:         Sat, 15 Dec 2018 15:17:57 -0600
Labels:             k8s-app=kubernetes-dashboard
                    pod-template-hash=77fd78f978
Annotations:        cni.projectcalico.org/podIP: 192.168.2.3/32
Status:             Running
IP:                 192.168.2.3
Controlled By:      ReplicaSet/kubernetes-dashboard-77fd78f978
Containers:
  kubernetes-dashboard:
    Container ID:  docker://ed5ff580fb7d7b649d2bd1734e5fd80f97c80dec5c8e3b2808d33b8f92e7b472
    Image:         k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
    Image ID:      docker-pullable://k8s.gcr.io/kubernetes-dashboard-amd64@sha256:1d2e1229a918f4bc38b5a3f9f5f11302b3e71f8397b492afac7f273a0008776a
    Port:          8443/TCP
    Host Port:     0/TCP
    Args:
      --auto-generate-certificates
    State:          Running
      Started:      Sat, 15 Dec 2018 15:18:04 -0600
    Ready:          True
    Restart Count:  0
    Liveness:       http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /certs from kubernetes-dashboard-certs (rw)
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-mrd9k (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kubernetes-dashboard-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-certs
    Optional:    false
  tmp-volume:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  kubernetes-dashboard-token-mrd9k:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-token-mrd9k
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>
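
Nothing in the pod description jumps out (Running, Ready, zero restarts, no events), and for anyone wanting to dig further, the container's own logs can be pulled with:

kubectl -n kube-system logs kubernetes-dashboard-77fd78f978-tjzxt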

I checked the service:

[gms@thalia0 ~]$ kubectl -n kube-system get service kubernetes-dashboard
NAME                   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes-dashboard   ClusterIP   10.103.93.93   <none>        443/TCP   4d23h
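
As far as I understand, the API server's proxy path dials the service's pod endpoint directly rather than the ClusterIP, so it is also worth confirming that the endpoints line up with the pod IP from the error:

kubectl -n kube-system get endpoints kubernetes-dashboard

This should list 192.168.2.3:8443, matching the pod described above.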

Also of note: if I curl http://localhost:8001/api from the master node, I do get a valid response.
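
The same proxied path the browser uses can be exercised with curl from the master as well, which takes the browser and its network out of the equation entirely:

curl http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

If this returns the same dial tcp ... connection refused error, the problem sits between the API server and the pod, not in front of the proxy.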

So, in summary, I'm not sure which of these errors, if any, is the source of my being unable to access the dashboard.

1 Answer

I upgraded all nodes in the cluster to version 1.13.1 and voilà, the dashboard now works. So far, I have not had to reapply the CoreDNS fix noted above.
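
For anyone following the same path, the upgrade was the standard kubeadm procedure, sketched roughly below; exact package commands depend on your distro, and on worker nodes it is kubeadm upgrade node config rather than apply:

# on the control-plane node:
kubeadm upgrade plan
kubeadm upgrade apply v1.13.1

# then on each node, upgrade the packages and restart the kubelet,
# e.g. on an RPM-based system:
yum install -y kubelet-1.13.1 kubeadm-1.13.1 kubectl-1.13.1
systemctl restart kubelet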
