
In Kubernetes, you can create a volume to mount into a pod with type "HostPath" to specify that the storage should be provided by a directory on the node running the pod.
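For reference, a minimal pod using such a volume looks roughly like this (the pod name, image, and path are just illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-example        # illustrative name
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: local-data
      mountPath: /data          # where the volume appears inside the container
  volumes:
  - name: local-data
    hostPath:
      path: /mnt/data           # directory on the node's own filesystem
```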

The documentation specifies "single node testing only – local storage is not supported in any way and WILL NOT WORK in a multi-node cluster", but I can't find any documentation as to why that is.

One reason that occurs to me is that the path would need to exist on every node in the cluster, and its contents would need to be consistent, so that the pod could be moved seamlessly from node to node. But it would be easy enough to mount an NFS disk onto a consistent mount point on all of the nodes to satisfy that constraint.

Are there any other reasons that anyone knows of? Or is the "HostPath" plugin simply not intended for production use, with the development effort going into the other, more generally useful volume types instead?

Giles Thomas

1 Answer


As you said, it's because the data will not be synced across multiple nodes. You could probably get away with some ad-hoc syncing solution across nodes if only one pod will ever access the data at a time, but this quickly becomes a headache with multiple pods.

Instead of syncing a directory onto every node with something like NFS, just use NFS itself as your volume instead of a HostPath volume. Kubernetes already supports NFS as a volume type, along with many other storage backends.
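A minimal example of an NFS-backed volume in a pod spec, assuming a hypothetical server address and export path, would look something like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-example             # illustrative name
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /data
  volumes:
  - name: shared-data
    nfs:
      server: nfs.example.com   # hypothetical NFS server
      path: /exports/shared     # hypothetical exported directory
      readOnly: false
```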

ConnorJC
  • Sorry for the slow reply! Thanks -- one concern I have about that, though, is the sheer number of NFS mounts that might be required. Each of the pods I'm deploying needs five external directories mapped onto its storage, four from one server and one from another. At 100 pods/node, that's 500 NFS mounts if I use the built-in NFS volume -- but only two if I use HostPath. – Giles Thomas Jan 05 '18 at 15:43
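
As a sketch of the approach described in that comment, assuming each node already mounts the two NFS servers outside Kubernetes at fixed, hypothetical paths such as /mnt/nfs1 and /mnt/nfs2, the pod would then only reference those host directories:

```yaml
# Sketch only: assumes the nodes mount the two NFS servers
# at /mnt/nfs1 and /mnt/nfs2 (hypothetical paths) outside Kubernetes.
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-over-nfs-example
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: server-one-data
      mountPath: /data/one
    - name: server-two-data
      mountPath: /data/two
  volumes:
  - name: server-one-data
    hostPath:
      path: /mnt/nfs1           # node-level mount of the first NFS server
  - name: server-two-data
    hostPath:
      path: /mnt/nfs2           # node-level mount of the second NFS server
```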