
I am using EKS with Kubernetes version 1.15. When I create a StorageClass, PersistentVolume, PersistentVolumeClaim, and Deployment, the pod fails with:

Warning  FailedAttachVolume  71s (x2 over 3m11s)  attachdetach-controller              AttachVolume.Attach failed for volume "efs-pv" : attachment timeout for volume fs-<volume>
Warning  FailedMount         53s (x2 over 3m8s)   kubelet, ip-<ip-address>.ec2.internal  Unable to mount volumes for pod "influxdb-deployment-555f4c8b94-mldfs_default(2525d10b-e30b-4c4c-893e-10971e0c683e)": timeout expired waiting for volumes to attach or mount for pod "default"/"influxdb-deployment-555f4c8b94-mldfs". list of unmounted volumes=[persistent-storage]. list of unattached volumes=[persistent-storage]
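For completeness, the attach operation the controller is timing out on is tracked as a VolumeAttachment object for CSI volumes, so the stuck state can also be inspected directly with standard kubectl commands (label taken from the Deployment below):

kubectl get volumeattachment
kubectl describe pod -l app=influxdb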

However, when I try the same thing without creating the PersistentVolume, it succeeds and creates its own volume that seemingly skips CSI. This is what I am working with:

deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: influxdb-deployment
spec:
  selector:
    matchLabels:
      app: influxdb
  replicas: 1
  template:
    metadata:
      labels:
        app: influxdb
    spec:
      containers:
      - name: influxdb
        image: influxdb:1.7.10-alpine
        ports:
        - containerPort: 8086
        volumeMounts:
        - name: persistent-storage
          mountPath: /var/lib/influx
      volumes:
      - name: persistent-storage
        persistentVolumeClaim:
          claimName: efs-claim

storageclass.yaml:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
reclaimPolicy: Retain

persistent-volume.yaml:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-<volume-id>

persistent-volume-claim.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi

Any idea what is happening?


1 Answer


Would you mind trying to create the CSIDriver object and the related DaemonSet as well? With attachRequired: false, the attach/detach controller stops waiting for a volume attachment that the EFS CSI driver never performs, which is exactly what the "attachment timeout" in your events points to.

apiVersion: storage.k8s.io/v1beta1
kind: CSIDriver
metadata:
  name: efs.csi.aws.com
spec:
  attachRequired: false
---
# Node Service
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: efs-csi-node
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: efs-csi-node
  template:
    metadata:
      labels:
        app: efs-csi-node
    spec:
      nodeSelector:
        kubernetes.io/os: linux
        kubernetes.io/arch: amd64
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
        - operator: Exists
      containers:
        - name: efs-plugin
          securityContext:
            privileged: true
          image: amazon/aws-efs-csi-driver:latest
          args:
            - --endpoint=$(CSI_ENDPOINT)
            - --logtostderr
            - --v=5
          env:
            - name: CSI_ENDPOINT
              value: unix:/csi/csi.sock
          volumeMounts:
            - name: kubelet-dir
              mountPath: /var/lib/kubelet
              mountPropagation: "Bidirectional"
            - name: plugin-dir
              mountPath: /csi
            - name: efs-state-dir
              mountPath: /var/run/efs
          ports:
            - containerPort: 9809
              name: healthz
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /healthz
              port: healthz
            initialDelaySeconds: 10
            timeoutSeconds: 3
            periodSeconds: 2
            failureThreshold: 5
        - name: csi-driver-registrar
          image: quay.io/k8scsi/csi-node-driver-registrar:v1.3.0
          args:
            - --csi-address=$(ADDRESS)
            - --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)
            - --v=5
          env:
            - name: ADDRESS
              value: /csi/csi.sock
            - name: DRIVER_REG_SOCK_PATH
              value: /var/lib/kubelet/plugins/efs.csi.aws.com/csi.sock
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          volumeMounts:
            - name: plugin-dir
              mountPath: /csi
            - name: registration-dir
              mountPath: /registration
            - name: efs-utils-config
              mountPath: /etc/amazon/efs
        - name: liveness-probe
          image: quay.io/k8scsi/livenessprobe:v2.0.0
          args:
            - --csi-address=/csi/csi.sock
            - --health-port=9809
          volumeMounts:
            - mountPath: /csi
              name: plugin-dir
      volumes:
        - name: kubelet-dir
          hostPath:
            path: /var/lib/kubelet
            type: Directory
        - name: registration-dir
          hostPath:
            path: /var/lib/kubelet/plugins_registry/
            type: Directory
        - name: plugin-dir
          hostPath:
            path: /var/lib/kubelet/plugins/efs.csi.aws.com/
            type: DirectoryOrCreate
        - name: efs-state-dir
          hostPath:
            path: /var/run/efs
            type: DirectoryOrCreate
        - name: efs-utils-config
          hostPath:
            path: /etc/amazon/efs
            type: DirectoryOrCreate
---

Or skip the manual process above and just execute:

kubectl apply -k "github.com/kubernetes-sigs/aws-efs-csi-driver/deploy/kubernetes/overlays/stable/ecr/?ref=release-1.0"

Then try to spin up your stuff again.
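Before re-creating the PV/PVC, it may be worth confirming that the driver actually registered; a minimal check, using standard kubectl commands and the names from the manifest above:

kubectl get csidriver efs.csi.aws.com
kubectl get daemonset efs-csi-node -n kube-system
kubectl get pods -n kube-system -l app=efs-csi-node

All efs-csi-node pods should be Running on the worker nodes before the volume can be mounted.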

Hope it helps.

See the aws-efs-csi-driver releases page for possible updates.
