
On my ESXi hypervisor I installed two PhotonOS VMs and made the first one a Kubernetes master and the second one a Kubernetes node according to these instructions from VMware and the following two sites.

Both servers

The /etc/kubernetes/config file on both:

KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://k8s-master:8080"

On the master

/etc/kubernetes/apiserver:

KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_API_ARGS=""

node.json:

{
     "apiVersion": "v1",
     "kind": "Node",
     "metadata": {
         "name": "k8s-worker-1",
         "labels":{ "name": "k8s-worker"}
     },
     "spec": {
         "externalID": "k8s-worker-1"
     }
}

On the node

/etc/kubernetes/kubelet:

KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_HOSTNAME="--hostname-override=k8s-worker-1"
KUBELET_API_SERVER="--kubeconfig=/etc/kubernetes/kubeconfig"
KUBELET_ARGS=""

/etc/kubernetes/kubeconfig:

apiVersion: v1
clusters:
- cluster:
    server: http://k8s-master:8080

Problem

So, kubectl get pods -A returns No resources found and kubectl get rs -A returns

NAMESPACE              NAME                                   DESIRED   CURRENT   READY   AGE
kubernetes-dashboard   dashboard-metrics-scraper-79c5968bdc   1         0         0       106m
kubernetes-dashboard   kubernetes-dashboard-658485d5c7        1         0         0       106m

kubectl describe deployment -A returns

Name:                   dashboard-metrics-scraper
Namespace:              kubernetes-dashboard
CreationTimestamp:      Sat, 21 Aug 2021 02:44:38 +0000
Labels:                 k8s-app=dashboard-metrics-scraper
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               k8s-app=dashboard-metrics-scraper
Replicas:               1 desired | 0 updated | 0 total | 0 available | 1 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           k8s-app=dashboard-metrics-scraper
  Annotations:      seccomp.security.alpha.kubernetes.io/pod: runtime/default
  Service Account:  kubernetes-dashboard
  Containers:
   dashboard-metrics-scraper:
    Image:        kubernetesui/metrics-scraper:v1.0.6
    Port:         8000/TCP
    Host Port:    0/TCP
    Liveness:     http-get http://:8000/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /tmp from tmp-volume (rw)
  Volumes:
   tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
Conditions:
  Type             Status  Reason
  ----             ------  ------
  Available        False   MinimumReplicasUnavailable
  ReplicaFailure   True    FailedCreate
  Progressing      False   ProgressDeadlineExceeded
OldReplicaSets:    <none>
NewReplicaSet:     dashboard-metrics-scraper-79c5968bdc (0/1 replicas created)
Events:            <none>


Name:                   kubernetes-dashboard
Namespace:              kubernetes-dashboard
CreationTimestamp:      Sat, 21 Aug 2021 02:44:38 +0000
Labels:                 k8s-app=kubernetes-dashboard
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               k8s-app=kubernetes-dashboard
Replicas:               1 desired | 0 updated | 0 total | 0 available | 1 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           k8s-app=kubernetes-dashboard
  Service Account:  kubernetes-dashboard
  Containers:
   kubernetes-dashboard:
    Image:      kubernetesui/dashboard:v2.3.1
    Port:       8443/TCP
    Host Port:  0/TCP
    Args:
      --auto-generate-certificates
      --namespace=kubernetes-dashboard
    Liveness:     http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /certs from kubernetes-dashboard-certs (rw)
      /tmp from tmp-volume (rw)
  Volumes:
   kubernetes-dashboard-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-certs
    Optional:    false
   tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
Conditions:
  Type             Status  Reason
  ----             ------  ------
  Available        False   MinimumReplicasUnavailable
  ReplicaFailure   True    FailedCreate
  Progressing      False   ProgressDeadlineExceeded
OldReplicaSets:    <none>
NewReplicaSet:     kubernetes-dashboard-658485d5c7 (0/1 replicas created)
Events:            <none>

So, as you see, I cannot get the Kubernetes dashboard up and running because the pods are never created. What can I do to solve this? Thanks in advance.

RUFmord

2 Answers


I have had a problem with the exact same symptoms. Perhaps it also has the same root cause. (Just for the record, I am using a cluster in VMware Tanzu.)

Run kubectl get events --namespace kubernetes-dashboard to see the recorded events even if no pod could be started yet.

For me, the following was logged in the K8s events:

Error creating: pods "kubernetes-dashboard-5c4b99db7-" is forbidden: PodSecurityPolicy: unable to admit pod: []
Error creating: pods "dashboard-metrics-scraper-66dd8bdd86-" is forbidden: PodSecurityPolicy: unable to admit pod: []

If this is also your error, you should look into PodSecurityPolicy (https://kubernetes.io/docs/concepts/security/pod-security-policy/).

For me it helped to create a ClusterRoleBinding and RoleBinding for my user.

This article explained everything: https://www.unknownfault.com/posts/podsecuritypolicy-unable-to-admit-pod/
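
The gist is roughly the following. This is only a sketch: the role and binding names and the "privileged" PodSecurityPolicy are placeholders, so check which policies your cluster actually has and adapt the subject to your own user or, for the dashboard pods, to the dashboard's service account.

# Check existing policies first, e.g.: kubectl get podsecuritypolicies
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-privileged-user          # placeholder name
rules:
- apiGroups: ["policy"]
  resources: ["podsecuritypolicies"]
  verbs: ["use"]
  resourceNames: ["privileged"]      # placeholder PSP name
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: psp-kubernetes-dashboard     # placeholder name
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-privileged-user
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

After applying something like this, the failed ReplicaSets should be able to create their pods on the next retry (or delete and re-apply the dashboard manifests).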

  • While this link may answer the question, it is better to include the essential parts of the answer here and provide the link for reference. Link-only answers can become invalid if the linked page changes. - [From Review](/review/late-answers/523539) – Dave M Jun 25 '22 at 11:45

To start the troubleshooting, there is a clue in the Conditions section of your kubectl describe deployment output. You can see that a new ReplicaSet (kubernetes-dashboard-658485d5c7) was created; however, it couldn't create the pods. Usually this happens because a resource quota has been exceeded. We can check this with the JSON output format and this command:

kubectl get rs kubernetes-dashboard-658485d5c7 -n kubernetes-dashboard -o json | jq .status.conditions

Then, you will see a message like this:

"message": "pods \"kubernetes-dashboard-658485d5c7-\" is forbidden: failed quota: ..."

To fix this issue, it is necessary to specify resource requests and limits for your containers. We can check the current default limits with this command:

kubectl describe limits --namespace kubernetes-dashboard

Once we have these values, we can set requests and limits that do not exceed them in our deployment YAML file, as in the following example:

spec:
  replicas: 1
  template:
    spec:
      containers:
      - name: kubernetes-dashboard
        resources:
          requests:
            cpu: 400m
            memory: 6Mi

Please note that these values are only an example; you will need to set values that fit within your compute resource quota. Additionally, in this link you will find more information about how quota limits work.
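
For reference, the defaults that kubectl describe limits reports come from LimitRange objects in the namespace. A minimal sketch of one follows; the name and all values below are made up and only illustrate the shape of the object:

apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits               # example name
  namespace: kubernetes-dashboard
spec:
  limits:
  - type: Container
    defaultRequest:                  # applied when a container specifies no requests
      cpu: 100m
      memory: 64Mi
    default:                         # applied when a container specifies no limits
      cpu: 500m
      memory: 256Mi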

Leo