
While testing a two-worker-node Kubernetes cluster set up with kind (https://kind.sigs.k8s.io/docs/user/quick-start), I ran into the following behavior and cannot find information about it elsewhere.
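(For reference, the cluster was created with a kind config along these lines; kind names workers <cluster-name>-worker and <cluster-name>-worker2, so I'm assuming a cluster named fvi, which yields the fvi-worker and fvi-worker2 seen below:)

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker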

I created a folder at /var/testpv on each of my worker nodes and then created the following PV:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pg-pv
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: fast-disks
  local:
    path: /var/testpv
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - fvi-worker2
          - fvi-worker
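
(For context, fast-disks is a no-provisioner StorageClass; a minimal version, following the example in the Kubernetes local-volume docs, looks like this:)

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-disks
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer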

It worked fine, so I created a second one:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pg-pv2
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: fast-disks
  local:
    path: /var/testpv
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - fvi-worker
          - fvi-worker2

(Identical except for the name.)

Then I created two PVCs using this storage class:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: example-local-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: fast-disks

and

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: example-local-claim2
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: fast-disks

Finally, I created two pods using those PVCs:

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
      - mountPath: "/var/www/html"
        name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: example-local-claim

and

apiVersion: v1
kind: Pod
metadata:
  name: mypod2
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
      - mountPath: "/var/www/html"
        name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: example-local-claim2

It all works fine: I can see files created in each pod going to the correct storage, with each pod on a different worker node.

However, if I create a third PV, PVC, and pod on top of these, there is no error whatsoever!
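
For example, a third PV like this one (hypothetical pg-pv3; identical to the others except for the name) is accepted without complaint:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pg-pv3
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: fast-disks
  local:
    path: /var/testpv   # same directory pg-pv and pg-pv2 already use
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - fvi-worker
          - fvi-worker2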

Even worse, the new PV points at the same location, so from the new pod I can actually see mypod's or mypod2's files (depending on which node it lands on)!

I would have assumed that Kubernetes would check whether a PV with the same path already exists for the same host, but apparently it does not.

Am I missing something? Am I doing something wrong? Or is it simply necessary to be very careful when creating PVs?

Thanks for any insight,

vfrans

1 Answer


Yes, this is expected behavior: Kubernetes does not check whether two local PVs point at the same path on the same node. It works because pointing multiple local PVs at one path is the only way to share a disk between pods when using local PVs. In some cases you do want to share files between pods, and this is one way of achieving it.
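
If you want isolation rather than sharing, the usual convention (it is what the local volume static provisioner automates) is to give every PV its own directory on the node. A minimal sketch, assuming hypothetical per-volume subdirectories such as /var/testpv/vol1 that you create on the workers beforehand:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pg-pv
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: fast-disks
  local:
    path: /var/testpv/vol1   # unique per PV instead of the shared /var/testpv
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - fvi-worker
          - fvi-worker2

A second PV would then point at its own directory (for example /var/testpv/vol2), so two claims can never end up on the same files.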

Matt