
I am learning Kubernetes by walking through a Kubernetes tutorial. While working through the exercises in module 4, I observed an odd behavior from Kubernetes when overwriting a label. I could use some explanation, because what I see doesn't match the tutorial, the documentation, or my expectations.

For reference, my runtime environment is as follows:

dfvsmdev@gotham:~/workspace/kube$ minikube version
minikube version: v1.10.0
commit: f318680e7e5bf539f7fadeaaf198f4e468393fb9
dfvsmdev@gotham:~/workspace/kube$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:56:40Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.1", GitCommit:"7879fc12a63337efff607952a323df90cdc7a335", GitTreeState:"clean", BuildDate:"2020-04-08T17:30:47Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
dfvsmdev@gotham:~/workspace/kube$ uname -a
Linux gotham 4.15.0-72-generic #81~16.04.1-Ubuntu SMP Tue Nov 26 16:34:21 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
dfvsmdev@gotham:~/workspace/kube$

Also, for convenience I've written a little script called get-pod-labels:

#!/bin/sh
# Print each pod's Name line plus any key=value label lines
# from the output of kubectl describe pods.
REGEX='[-A-Za-z0-9._/]+'
kubectl describe pods | egrep "(^Name: .+$|^.+ ${REGEX}=${REGEX}$)"

When I run this script it produces the following output:

dfvsmdev@gotham:~/workspace/kube$ ./get-pod-labels
Name:         hello-node-7bf657c596-xr8dk
Labels:       app=hello-node
              pod-template-hash=7bf657c596
Name:         kubernetes-bootcamp-86656bc875-nz9f6
Labels:       app=kubernetes-bootcamp
              kb=true
              pod-template-hash=86656bc875
Name:         kubernetes-bootcamp-86656bc875-wlqnw
Labels:       app=kubernetes-bootcamp
              kb=true
              pod-template-hash=86656bc875

Now I'm at the part of the tutorial where they want us to execute the command kubectl label pod POD_NAME app=v1. The point of this step in the tutorial is to modify an existing label. As written, the command gives an error, error: 'app' already has a value (kubernetes-bootcamp), and --overwrite is false. So I added the --overwrite flag and tried again.

dfvsmdev@gotham:~/workspace/kube$ kubectl label pods --overwrite kubernetes-bootcamp-86656bc875-nz9f6 app=v1
pod/kubernetes-bootcamp-86656bc875-nz9f6 labeled

But when I checked the pod labels to verify the modification, I saw something odd:

dfvsmdev@gotham:~/workspace/kube$ ./get-pod-labels
Name:         hello-node-7bf657c596-xr8dk
Labels:       app=hello-node
              pod-template-hash=7bf657c596
Name:         kubernetes-bootcamp-86656bc875-48frs
Labels:       app=kubernetes-bootcamp
              pod-template-hash=86656bc875
Name:         kubernetes-bootcamp-86656bc875-nz9f6
Labels:       app=v1
              kb=true
              pod-template-hash=86656bc875
Name:         kubernetes-bootcamp-86656bc875-wlqnw
Labels:       app=kubernetes-bootcamp
              kb=true
              pod-template-hash=86656bc875
dfvsmdev@gotham:~/workspace/kube$

As expected, Kubernetes changed the label to app=v1 on the pod I named. But it also created a new pod instance, which I did not expect, and which the tutorial doesn't mention either. The new pod appears to be a brand-new instance, because it does not have the custom label (kb=true) that I added to the other kubernetes-bootcamp pods.
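To double-check that the new pod really is brand new, and not an existing pod that merely lost its labels, I believe a creation-timestamp query along these lines would work (a sketch I haven't run; the pod name is just the new one from the output above):

# Print only the creation timestamp of the suspect pod
kubectl get pod kubernetes-bootcamp-86656bc875-48frs -o jsonpath='{.metadata.creationTimestamp}'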

So let's put the tutorial aside, because frankly it has lots of typos, and just focus on the expected behavior of modifying a label. The man page of kubectl label explains the --overwrite flag like this:

--overwrite=false
    If true, allow labels to be overwritten, otherwise reject label updates that overwrite existing labels.

This documentation says nothing about creating a new instance of the pod. What is the rationale here? Is the creation of an unmodified pod somehow a feature of Kubernetes? Or is it a bug? I am astonished at this behavior. What if I needed to modify the labels of hundreds of pods? Would Kubernetes make hundreds of new pods?
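To make that concern concrete: if I really did have to relabel hundreds of pods, I assume I would use a label selector instead of naming each pod, roughly like this sketch (which I have not run):

# Hypothetical bulk relabel: select every pod currently labeled
# app=kubernetes-bootcamp and overwrite that label in one command.
kubectl label pods -l app=kubernetes-bootcamp --overwrite app=v1

If each overwrite behaved the way my single overwrite did, that one command would leave the cluster with roughly twice as many pods.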

Please explain this behavior for me.

ADDENDUM

In response to KoopaKiller's answer below, here is a clarification. The following trace log follows his steps. Note that at the beginning there are two "kubernetes-bootcamp" pods.

dfvsmdev@gotham:~/workspace/kube$ kubectl get pods --show-labels
NAME                                   READY   STATUS    RESTARTS   AGE     LABELS
hello-node-7bf657c596-xr8dk            1/1     Running   1          3d18h   app=hello-node,pod-template-hash=7bf657c596
kubernetes-bootcamp-86656bc875-nz9f6   1/1     Running   0          2d22h   app=kubernetes-bootcamp,kb=true,pod-template-hash=86656bc875
kubernetes-bootcamp-86656bc875-wlqnw   1/1     Running   0          2d22h   app=kubernetes-bootcamp,kb=true,pod-template-hash=86656bc875

dfvsmdev@gotham:~/workspace/kube$ kubectl label pods --overwrite kubernetes-bootcamp-86656bc875-nz9f6 app=v1
pod/kubernetes-bootcamp-86656bc875-nz9f6 labeled

dfvsmdev@gotham:~/workspace/kube$ kubectl get pods --show-labels
NAME                                   READY   STATUS    RESTARTS   AGE     LABELS
hello-node-7bf657c596-xr8dk            1/1     Running   1          3d18h   app=hello-node,pod-template-hash=7bf657c596
kubernetes-bootcamp-86656bc875-jh9ml   1/1     Running   0          2m9s    app=kubernetes-bootcamp,pod-template-hash=86656bc875
kubernetes-bootcamp-86656bc875-nz9f6   1/1     Running   0          2d22h   app=v1,kb=true,pod-template-hash=86656bc875
kubernetes-bootcamp-86656bc875-wlqnw   1/1     Running   0          2d22h   app=kubernetes-bootcamp,kb=true,pod-template-hash=86656bc875

As you can see, a new, third "kubernetes-bootcamp" pod was indeed created after the execution of the label/overwrite command.

Given that the same result did not occur in KoopaKiller's counter-example, I suppose that what I'm seeing could be a bug in the minikube implementation. That may be the best explanation.

Lee Jenkins
1 Answer


I've tested this in my lab and concluded that the pod is not recreated after applying or modifying a label.

A bare kind: Pod is not recreated after any such modification.

You can test this by running a simple pod, applying a label, changing that label with the --overwrite flag, and then checking the pod's AGE with the command kubectl get pods.

Example:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    command:
      - sleep
      - "3600"
EOF

  1. Apply a label to the pod and check the AGE:
$ kubectl label pod nginx app=nginx
pod/nginx labeled

$ kubectl get pods --show-labels
NAME    READY   STATUS    RESTARTS   AGE    LABELS
nginx   1/1     Running   0          2m4s   app=nginx

As you can see, the pod's age is 2m4s.

  2. Modify the label and check the age again:
$ kubectl label pod nginx app=my-nginx --overwrite
pod/nginx labeled

$ kubectl get pods --show-labels
NAME    READY   STATUS    RESTARTS   AGE     LABELS
nginx   1/1     Running   0          3m28s   app=my-nginx

Other considerations:

  • In a Kubernetes environment, a bare pod is rarely created for production use, because once its task is done the pod will be terminated, as mentioned in this link:

    Pods do not, by themselves, self-heal. If a Pod is scheduled to a Node that fails, or if the scheduling operation itself fails, the Pod is deleted; likewise, a Pod won’t survive an eviction due to a lack of resources or Node maintenance.

You need to use, for example, a Deployment, which will ensure your pod is resilient and highly available.
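For example, a minimal Deployment wrapping the same nginx container could look like the sketch below (the name, replica count, and label are just placeholders):

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  # The Deployment's ReplicaSet keeps 2 pods matching app=nginx running
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
EOF

With a Deployment, pods that are deleted or evicted are recreated automatically, which a standalone kind: Pod will not do.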