
I am new to working with Kubernetes and am trying to run a Kafka pod with a persistent volume, so that if the pod goes down the data is not lost and I can spin up a new cluster using the persisted data.

Here is the manifest I tried:

apiVersion: v1
kind: Service
metadata:
  name: kafka
  labels:
    app: kafka
spec:
  type: NodePort
  ports:
   - port: 9092
  selector:
   app: kafka
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
 name: kafka
spec:
 selector:
   matchLabels:
     app: kafka
 serviceName: "kafka"
 template:
   metadata:
     labels:
       app: kafka
   spec:
     terminationGracePeriodSeconds: 10
     containers:
     - name: kafka
       image: bitnami/kafka:latest
       # readinessProbe:
       #   httpGet:
       #     port: 7070
       #     path: /readiness
       #   initialDelaySeconds: 120
       #   periodSeconds: 15
       #   failureThreshold: 1
       # livenessProbe:
       #   httpGet:
       #     port: 7070
       #     path: /liveness
       #   initialDelaySeconds: 360
       #   periodSeconds: 15
       #   failureThreshold: 3
       ports:
       - containerPort: 9092
       volumeMounts:
       - name: kafka
         mountPath: /binami/kafka
 volumeClaimTemplates:
 - metadata:
     name: datadir
   spec:
     accessModes: [ "ReadWriteOnce" ]
     resources:
       requests:
         storage: 1Gi

but this does not seem to work; for some reason the pod goes into an error state with the following message:

pod has unbound immediate PersistentVolumeClaims

I am not sure I understand this: the volume is only being used by this pod, and the pod has not restarted, so I am quite confused about why this is not working.

PVC-config:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: datadir-kafka-0
  namespace: default
  selfLink: /api/v1/namespaces/default/persistentvolumeclaims/datadir-kafka-0
  uid: 264204f8-21cc-11ea-8f02-00155de9e001
  resourceVersion: '149105'
  creationTimestamp: '2019-12-18T19:25:28Z'
  labels:
    app: kafka
  annotations:
    control-plane.alpha.kubernetes.io/leader: >-
      {"holderIdentity":"640e4416-2192-11ea-978b-8c1645373373","leaseDurationSeconds":15,"acquireTime":"2019-12-18T19:25:28Z","renewTime":"2019-12-18T19:25:30Z","leaderTransitions":0}
    pv.kubernetes.io/bind-completed: 'yes'
    pv.kubernetes.io/bound-by-controller: 'yes'
    volume.beta.kubernetes.io/storage-provisioner: docker.io/hostpath
  finalizers:
    - kubernetes.io/pvc-protection
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  volumeName: pvc-264204f8-21cc-11ea-8f02-00155de9e001
  storageClassName: hostpath
  volumeMode: Filesystem
status:
  phase: Bound
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 1Gi

PV-config:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: pvc-264204f8-21cc-11ea-8f02-00155de9e001
  selfLink: /api/v1/persistentvolumes/pvc-264204f8-21cc-11ea-8f02-00155de9e001
  uid: 264dab2e-21cc-11ea-8f02-00155de9e001
  resourceVersion: '149091'
  creationTimestamp: '2019-12-18T19:25:28Z'
  annotations:
    docker.io/hostpath: >-
      C:\Users\kube\.docker\Volumes\datadir-kafka-0\pvc-264204f8-21cc-11ea-8f02-00155de9e001
    pv.kubernetes.io/provisioned-by: docker.io/hostpath
  finalizers:
    - kubernetes.io/pv-protection
spec:
  capacity:
    storage: 1Gi
  hostPath:
    path: >-
      /host_mnt/c/Users/kube/.docker/Volumes/datadir-kafka-0/pvc-264204f8-21cc-11ea-8f02-00155de9e001
    type: ''
  accessModes:
    - ReadWriteOnce
  claimRef:
    kind: PersistentVolumeClaim
    namespace: default
    name: datadir-kafka-0
    uid: 264204f8-21cc-11ea-8f02-00155de9e001
    apiVersion: v1
    resourceVersion: '149073'
  persistentVolumeReclaimPolicy: Delete
  storageClassName: hostpath
  volumeMode: Filesystem
status:
  phase: Bound

1 Answer


It seems that the dynamic PersistentVolume provisioner does not support Local storage as stated here.
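
For reference, a StorageClass meant for local volumes has no dynamic provisioner behind it, so the matching PersistentVolumes have to be created up front; a minimal sketch of such a class (the name local-storage is only an example, not something taken from your cluster):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner   # local volumes are not dynamically provisioned
volumeBindingMode: WaitForFirstConsumer     # delay binding until a pod is scheduled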

You need to specify a storageClassName pointing to the local volume you created, within the datadir volumeClaimTemplates of your kafka StatefulSet. You can read an example here.

Try something like this:

 volumeClaimTemplates:
 - metadata:
     name: datadir
   spec:
     accessModes: [ "ReadWriteOnce" ]
     storageClassName: hostpath
     resources:
       requests:
         storage: 1Gi
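
If the hostpath class on your cluster does not dynamically provision the volume, you would also need a PersistentVolume carrying the same storageClassName for the claim to bind to. A rough sketch, assuming a hostPath volume; the name and path below are purely illustrative:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: kafka-pv-0              # illustrative name
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: hostpath    # must match the storageClassName in the claim template
  hostPath:
    path: /mnt/data/kafka-0     # illustrative path on the node

Once the claim template and a volume share the same storage class, the pod should be scheduled and the PVC should end up in the Bound phase.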