
We have been running Kubernetes on CentOS 7 on-premises for the past 3 years. Recently our NFS storage device was migrated to a different VLAN and its IP address changed. Now none of the pods are functioning properly; they are stuck waiting for their PVs.

My question is: what is the best way to replace the old NFS server IP with the new NFS server IP in the PV and all the PVCs without losing any data?

2 Answers


First, find the name of your PV:

kubectl get pv

Then get the YAML for your PV:

kubectl get pv <name> -o yaml > pv.yaml

Now edit the NFS server address:

  nfs:
    server: new.server.address.example
    path: "/exported/path/example"

Finally, apply your changes:

kubectl apply -f pv.yaml

Assuming the new NFS server is reachable and Kubernetes can talk to it, your pods should begin starting up.
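To confirm the change took effect, you can check that the PV and the PVCs bound to it report a Bound status, and that the described source shows the new server. (A quick sanity check, assuming your kubectl context points at the affected cluster; <name> is the PV name from the first step.)

kubectl get pv <name>
kubectl get pvc --all-namespaces
kubectl describe pv <name>

The output of kubectl describe pv includes a Source section listing the NFS server and path the volume actually points at.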

Michael Hampton
  • 1) I have tried applying the same pv.yaml after changing the IP address, but got a "name already exists" error. 2) I took a backup of etcd, deleted the old PV, and recreated it with the new IP in pv.yaml. 3) Now the new PV is provisioned, but all the PVCs are still pointing to the old NFS IP. Is there any way we can re-map all the PVCs to use the new PV IP? – Nitin Mestry Sep 10 '20 at 12:37
  • That's interesting. I'll have to play around a bit more to see what it is actually doing. – Michael Hampton Sep 10 '20 at 12:40
  • Hm, I don't have a good solution right now. I suspect you might have to recreate all of the PVCs. In future you should always refer to resources (such as your NFS server) by hostname, not IP address, so you can avoid problems like this. – Michael Hampton Sep 10 '20 at 13:05
  • I have a workaround for this issue, but it requires a lot of manual effort. The same issue was raised by someone else at https://discuss.kubernetes.io/t/update-persistentvolume-nfs-ip/9633 – Nitin Mestry Sep 10 '20 at 13:09
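The delete-and-recreate workaround discussed in the comments above can be sketched end to end. This is an unverified outline, not a tested procedure: <pv-name> is a placeholder, and the claimRef-clearing step is only needed if the recreated PV comes up as Released rather than Bound.

# Make sure Kubernetes never tries to reclaim the volume's data,
# no matter what happens to the PV object
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'

# Delete the old PV object and recreate it with the new server address
kubectl delete pv <pv-name>
kubectl apply -f pv.yaml

# If the recreated PV shows Released (a stale claimRef), clear the
# claimRef so the existing PVC can bind to it again
kubectl patch pv <pv-name> --type json -p '[{"op":"remove","path":"/spec/claimRef"}]'

Deleting the PV object does not touch the files on the NFS export itself; the data lives on the server, and only the Kubernetes-side reference is recreated.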

If your pv.yaml used a hostname, what about just hand-jamming the IP/hostname combo into your /etc/hosts file?
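For example (both the IP and the hostname here are made up), on every node that mounts the share:

# /etc/hosts — point the NFS hostname at the new IP
10.20.30.40   nfs.example.internal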

Another workaround might be to create a new PV & PVC with new names, and change your pod config to request the new PVC that matches your new PV. Just make sure you retain the data: spec.persistentVolumeReclaimPolicy: Retain
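A minimal sketch of such a pair — the object names, capacity, and access mode are made up and should mirror whatever your old objects used; the server and path reuse the example from the first answer:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-new
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: new.server.address.example
    path: "/exported/path/example"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc-new
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  volumeName: nfs-pv-new
  resources:
    requests:
      storage: 10Gi

Setting storageClassName: "" together with volumeName pins the claim to this exact PV instead of handing it to a dynamic provisioner.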

Especially since the pods are down anyway... essentially you're just creating new pods (or editing the current ones) to access the new PV, which still holds the old data.

I have not verified this process won't delete your data...

EbolaWare
  • In my pv.yaml I used the IP address instead of the NFS server hostname. For now the workaround will be to create a new PV and PVC and manually copy all the data into the newly created PVC – Nitin Mestry Sep 11 '20 at 04:06
  • I would at least attempt to copy the old configs to a newly named PV & PVC, then change the pod storage config to use them... instead of doing anything manually... But I'm lazy... If you're going to manually copy the data anyway, why not give it a try once you copy the data from the first one? – EbolaWare Sep 11 '20 at 04:10