
Absolute minikube/kubernetes neophyte here.

I'm using minikube with vm-driver = none (in case that matters) to deploy an application that gives me the option to specify a storageClass for provisioning volumes, and it uses the "standard" storageClass by default.

This WORKS, but it means that all of the data created in the application ends up in /tmp/ (the /tmp/hostpath_pv folder, I believe).

This makes me itch. I realize minikube itself will persist this data on a minikube restart, but I'm afraid of losing the data on a normal linux cleanup of the /tmp folder.

I'd like to be able to create a new storageClass (since that's the only easily configurable option I have to work with in the application's configuration YAML) that makes minikube stick persistent volumes in the /data directory (or anywhere else that won't be auto-cleaned by the OS).

Is there a simple way to do this? Or is it even a problem at all to have my volumes in /tmp?

Sorry for the complete noobery. I appreciate your help.

JFitzDela

2 Answers


With stateful applications like databases, we want volumes that persist data beyond the pod's lifecycle. Kubernetes solves this problem with the PersistentVolume and PersistentVolumeClaim resources, which enable native and external persistent storage in your Kubernetes clusters. With minikube, you can achieve this by using hostPath as a PersistentVolume. Let's assume you want a MongoDB backend: you create a PersistentVolume backed by a host directory, plus a volume claim to make this volume available to MongoDB.

mongodb-pv.yml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: data
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 5Gi
  hostPath:
    path: /data/mongoData

Now you have to create a local directory for the storage:

$ mkdir -p /data/mongoData
$ chmod 777 /data/mongoData

Then you have to create a PersistentVolumeClaim. With a persistent volume claim, you issue a request to bind a matching persistent volume.

mongodb-pv-claim.yml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: data-claim
spec:
  storageClassName: ""
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi

mongodb-deploy.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      restartPolicy: Always
      volumes:
        - name: data-storage
          persistentVolumeClaim:
            claimName: data-claim
      containers:
        - name: mongodb-container
          image: "de13/mongo-myapp"
          volumeMounts:
            - name: data-storage
              mountPath: /var/lib/mongo
          ports:
            - containerPort: 27017

Apply the files:

$ kubectl apply -f mongodb-pv.yml
$ kubectl apply -f mongodb-pv-claim.yml
$ kubectl apply -f mongodb-deploy.yml

$ kubectl get pvc
NAME         STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-claim   Bound    data     5Gi        RWO                           20s

We can already see that the claim is fulfilled (Bound) and that it is connected to the recently created volume named "data".
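
You can also verify the binding from the volume side (exact output will depend on your cluster):

$ kubectl get pv
$ kubectl describe pv data   # the Claim field should show default/data-claim (assuming the default namespace)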

You can check this tutorial and the Kubernetes documentation if you would like to know more about persistent storage.

aga
  • Thank you so much for the detailed answer! The problem is that I can't change the deployments for the software I'm installing to change the persistentVolumeClaim to use my new definitions that point to /data - is there a way to accomplish the same thing by creating a storageClass since that's all I can configure on the final deployment? – JFitzDela Sep 02 '19 at 16:32
  • Please check https://vocon-it.com/2018/12/20/kubernetes-local-persistent-volumes/. First you need to create a StorageClass with the WaitForFirstConsumer binding mode. I hope it helps you! – aga Sep 04 '19 at 08:46
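
To illustrate the suggestion from that last comment: below is a minimal sketch of a no-provisioner StorageClass plus a hostPath PersistentVolume under /data, assuming the application's config only lets you set a storageClassName. The class name, size and path are placeholders, not taken from either answer; any PVC the application creates with this storageClassName should bind to a matching PV once a pod consumes it.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-data                  # hypothetical name; reference it in the app's configuration YAML
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-data-pv
spec:
  storageClassName: local-data
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 5Gi                    # must cover whatever the application's PVC requests
  hostPath:
    path: /data/app                 # survives reboots, unlike /tmp
    type: DirectoryOrCreate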

Another solution is to first create the StorageClass (with no provisioner), then the PV, the PVC and finally the Pod. Below is how I was able to make it work in Minikube:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: datacache-sc 
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

Then create PV:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: datacachepv-1
  labels:
   name: datacache
spec:
  storageClassName: datacache-sc
  persistentVolumeReclaimPolicy: Delete
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 100Mi
  hostPath:
    path: /var/Cachebackup 
    type: DirectoryOrCreate

Then PVC:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datacachepvc-1
spec:
  storageClassName: datacache-sc
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
  selector:
    matchLabels:
      name: "datacache"

And finally mount this PVC in the Pod:

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: testrunner
  name: datacache
spec:
  containers:
  - image: abc.se/sandbox/xyz/myimage:0.1.2
    name: datacache
    ports:
    - containerPort: 3645
      hostPort: 3645
    resources: {}
    volumeMounts:
    - name: datacache-conf
      mountPath: /bin/datacache.conf
      subPath: datacache.conf
      readOnly: true
    - name: datacache-volume
      mountPath: /DBbackup
  volumes:
  - name: datacache-conf
    configMap:
      name: datacache-conf
      items:
      - key: datacache.conf
        path: datacache.conf
  - name: datacache-volume
    persistentVolumeClaim:
      claimName: datacachepvc-1
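
For completeness, a possible way to apply and verify these (the file names here are just assumptions about how you save the manifests above). With WaitForFirstConsumer the claim stays Pending until the Pod is actually scheduled:

$ kubectl apply -f datacache-sc.yml
$ kubectl apply -f datacache-pv.yml
$ kubectl apply -f datacache-pvc.yml
$ kubectl get pvc datacachepvc-1      # expect STATUS Pending - no consumer yet
$ kubectl apply -f datacache-pod.yml
$ kubectl get pvc datacachepvc-1      # should turn Bound once the Pod is scheduled
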
Deb