
Our goal is to use gcsfuse to mount the contents of a Google Cloud Storage bucket at some path and share that path with the rest of the pod. So I tried running our initContainer in privileged mode and using gcsfuse to mount the bucket at path1; inside the initContainer, I can see the content when doing ls -l on path1.

However, if I declare path1 as a volume and volumeMount, the other containers cannot see the content under path1.

That is, unless I follow the example at https://github.com/ageapps/k8s-storage-buckets/tree/master/gcsfuse-init, copy the content to another folder, and mount that folder as a volumeMount.

But copying is not our preference; is there a better way?
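For reference, the attempted setup looks roughly like this; all names (images, bucket, volume) are placeholders, not from the original post, and this is the variant that fails as described:

```yaml
# Hypothetical sketch of the failing setup; all names are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: gcsfuse-demo
spec:
  volumes:
  - name: bucket-data
    emptyDir: {}
  initContainers:
  - name: mounter
    image: my-gcsfuse-image          # assumed image with gcsfuse installed
    securityContext:
      privileged: true
    command: ["gcsfuse", "my-bucket", "/path1"]
    volumeMounts:
    - name: bucket-data
      mountPath: /path1
  containers:
  - name: app
    image: my-app-image              # assumed consumer image
    volumeMounts:
    - name: bucket-data
      mountPath: /path1              # appears empty here, as described above
```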

Mia

2 Answers


I am not running it in Kubernetes, just plain containers on a GCP VM. To see the mounts from the gcsfuse container, I needed to run the second container with the --privileged and --cap-add SYS_ADMIN options as well.

Also, when launching my FUSE container, I shared the mount back to the host by adding the flags --device /dev/fuse -v /data:/data:shared.
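The two docker run invocations described above can be sketched as follows; the image and bucket names are assumptions, not from the original answer:

```shell
# Hypothetical sketch of the two-container setup; image and bucket
# names are assumptions.

# FUSE container: mounts the bucket and shares the mount with the host
# via the :shared bind-mount propagation flag.
docker run -d --name gcsfuse-mounter \
  --privileged --cap-add SYS_ADMIN \
  --device /dev/fuse \
  -v /data:/data:shared \
  my-gcsfuse-image \
  gcsfuse --foreground my-bucket /data/bucket

# Second container: needs the same privilege options to see the FUSE mount.
docker run -d --name consumer \
  --privileged --cap-add SYS_ADMIN \
  -v /data:/data:shared \
  my-app-image
```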

Hopefully that helps, as there is not much information about running gcsfuse in containers.


So, after searching for different solutions on the web, I believe this is the best way to do it. There is a GitHub issue about implementing PersistentVolume FUSE mounts later, but we don't know when that will be possible.

Basically, the solution at the link describes a workaround that uses the Kubernetes lifecycle events postStart and preStop to perform the mount and unmount for us.

The first step is to make sure the gcsfuse binary is installed in your container.

To do this, first create a gcsfuse.repo file:

[gcsfuse]
name=gcsfuse (packages.cloud.google.com)
baseurl=https://packages.cloud.google.com/yum/repos/gcsfuse-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0

Then, in your Dockerfile:

COPY gcsfuse.repo /etc/yum.repos.d/
RUN dnf -y install gcsfuse
RUN mkdir -p /etc/letsencrypt 

To perform the mount command on Kubernetes, the container needs to run as privileged and have the SYS_ADMIN capability added:

spec:
  ...
  template:
    ...
    spec:
      ...
      containers:
      - name: my-container
        securityContext:
          privileged: true
          capabilities:
            add:
              - SYS_ADMIN
        lifecycle:
          postStart:
            exec:
              command: ["gcsfuse", "-o", "nonempty", "your-bucket-name", "/etc/letsencrypt"]
          preStop:
            exec:
              command: ["fusermount", "-u", "/etc/letsencrypt"]
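Once the pod is running, one way to confirm that the postStart mount succeeded is to exec into the container; the pod and container names here are assumptions:

```shell
# Hypothetical verification commands; pod/container names are assumptions.
# List mounted gcsfuse filesystems inside the container.
kubectl exec my-pod -c my-container -- mount -t fuse.gcsfuse
# Confirm the bucket contents are visible at the mount point.
kubectl exec my-pod -c my-container -- ls -l /etc/letsencrypt
```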

To set up authentication, you just need to ensure your GKE cluster is created with the OAuth scope https://www.googleapis.com/auth/devstorage.read_write; everything else is handled automatically.
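As a rough sketch, a cluster with that scope might be created like this; the cluster name and zone are assumptions:

```shell
# Hypothetical cluster creation with the storage read/write scope;
# cluster name and zone are assumptions.
gcloud container clusters create my-cluster \
  --zone us-central1-a \
  --scopes https://www.googleapis.com/auth/devstorage.read_write
```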

Your GCS bucket will be mounted in all instances of your pod as ReadWriteMany shared storage via FUSE, but keep in mind that writing to the bucket this way will be slow.

Luis Manuel