I have an existing NFS server containing my media library (videos, music, ...).

I have a working pod using that folder, installed this way:

sudo helm install --generate-name --set nfs.server=192.168.1.2 --set nfs.path=/volume1/medias --set image.repository=quay.io/external_storage/nfs-client-provisioner-arm stable/nfs-client-provisioner
...
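
For reference, the chart also creates the StorageClass that the manifests below refer to. If I read the chart defaults correctly, it looks roughly like this (the provisioner string depends on the generated release name):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
provisioner: cluster.local/nfs-client-provisioner-XXXX  # actual value depends on the release name
parameters:
  archiveOnDelete: "true"   # chart default, as far as I can tell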

mypod.yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  # note: PersistentVolumes are cluster-scoped, so this namespace field is ignored
  namespace: mediasgrabbing-ns
  name: nas-medias-nfs-pv
  labels:
    type: remote
spec:
  capacity:
    storage: 2Ti
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs-client
  nfs:
    path: /volume1/medias
    server: 192.168.1.2
    readOnly: false
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: mediasgrabbing-ns
  name: nas-medias-nfs-pvc
  labels:
    app.kubernetes.io/name: nas-medias-nfs-pv
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Ti
---

This works well, and I can see my media from the various pods defined in that namespace.
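
Those pods consume the claim in the usual way; a minimal sketch (the pod name and image are placeholders):

apiVersion: v1
kind: Pod
metadata:
  namespace: mediasgrabbing-ns
  name: medias-reader        # placeholder name
spec:
  containers:
    - name: app
      image: busybox         # placeholder image
      command: ["sleep", "3600"]
      volumeMounts:
        - name: medias
          mountPath: /medias
  volumes:
    - name: medias
      persistentVolumeClaim:
        claimName: nas-medias-nfs-pvc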

Now, in a different namespace, I need to access the media library for another kind of processing.

I naively declared the PV/PVC the same way, with a different name and a different namespace.
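
Concretely, only the metadata changed; the PV name below is hypothetical, while the namespace and PVC name match the subfolder shown next:

apiVersion: v1
kind: PersistentVolume
metadata:
  namespace: mediascenter-ns    # ignored, PVs are cluster-scoped
  name: nas-medias-nfs-pv2      # hypothetical name
spec:
  # same spec as above
  ...
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: mediascenter-ns
  name: nas-medias-nfs-pvc
spec:
  # same spec as above
  ...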

But when applying the file, I can see that the provisioner has created a subfolder '/volume1/medias/mediascenter-ns-nas-medias-nfs-pvc-pvc-91947c29-c87e-43ba-93af-f5ce147fb32f'. This is obviously not what I want: I need to access the existing media, not a separate empty volume inside the share.
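
If that subfolder follows the provisioner's ${namespace}-${pvcName}-${pvName} naming pattern, the claim was served by the dynamic provisioner and bound to a freshly created PV instead of my static one. This can be checked with:

kubectl get pvc nas-medias-nfs-pvc -n mediascenter-ns
kubectl get pv pvc-91947c29-c87e-43ba-93af-f5ce147fb32f -o yaml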

Questions:

  1. Why did the provisioner create a subfolder on the second use of the NFS share, when it didn't on the first?
  2. How do I correctly use the same NFS data volume from several pods in several namespaces?

PS: declaring the volumes in the "default" namespace and trying to use them twice didn't work either.
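
As far as I understand, a pod can only reference a claim in its own namespace (claimName is always resolved there), which may be why that attempt failed. It looked roughly like this (sketch):

# pod in mediasgrabbing-ns; the claim exists only in "default",
# so this reference finds nothing and the pod stays Pending
volumes:
  - name: medias
    persistentVolumeClaim:
      claimName: nas-medias-nfs-pvc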

  • To sum up: you want a few pods in different namespaces connected to one NFS share. Are you using K3s or kubeadm? How many pods do you have in your cluster? – PjoterS Nov 13 '20 at 11:35
  • @PjoterS Yes, that's it. k3s on RPi4 (1 master with 2 GB, 2 workers with 8 GB). Currently about 80 pods are running, but the NFS share will be used by at most 5 pods from different namespaces. – spi Nov 13 '20 at 18:38
