I've heard that in later versions of Kubernetes (1.9 onwards if I'm not mistaken; I have 1.10), it's possible to expand a PersistentVolume as long as allowVolumeExpansion: true is set in the StorageClass configuration.

In my case, on GCP, the StorageClass my PVC uses does not have that line, and I cannot add it either.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
  creationTimestamp: 2018-05-30T17:07:33Z
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
    kubernetes.io/cluster-service: "true"
  name: standard
  resourceVersion: "8741704"
  selfLink: /apis/storage.k8s.io/v1/storageclasses/standard
  uid: f1bd0421-642b-11e8-bb11-42010a9a00b5
parameters:
  type: pd-standard
provisioner: kubernetes.io/gce-pd
reclaimPolicy: Delete
volumeBindingMode: Immediate
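
For reference, on a cluster where the StorageClass is editable, my understanding is that expansion support is just one extra top-level field on the class; something like this (a sketch with a hypothetical class name, not something I can apply to the managed standard class above):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard-expandable      # hypothetical name, for illustration only
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true       # the field my default class is missing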

So, I want to increase a PV and its corresponding PVC from 8Gi to 100Gi. What's the best way of doing this? Is there a way to do it while preserving data, or must the current PV be deleted before making a new one?
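
For what it's worth, if expansion were enabled, my understanding is that the resize itself would just be a patch of the claim's requested size, something like this (a sketch; I haven't been able to test it on this cluster):

# patch the claim's requested size to 100Gi (only works if the class allows expansion)
kubectl patch pvc production-postgres -n neserver-6540663 \
  -p '{"spec":{"resources":{"requests":{"storage":"100Gi"}}}}'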

Here's the PV YAML:

apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    kubernetes.io/createdby: gce-pd-dynamic-provisioner
    pv.kubernetes.io/bound-by-controller: "yes"
    pv.kubernetes.io/provisioned-by: kubernetes.io/gce-pd
  creationTimestamp: 2018-05-31T10:30:39Z
  finalizers:
  - kubernetes.io/pv-protection
  labels:
    failure-domain.beta.kubernetes.io/region: europe-west2
    failure-domain.beta.kubernetes.io/zone: europe-west2-c
  name: pvc-a70ca000-64bd-11e8-bb11-42010a9a00b5
  resourceVersion: "8728415"
  selfLink: /api/v1/persistentvolumes/pvc-a70ca000-64bd-11e8-bb11-42010a9a00b5
  uid: a9e8c071-64bd-11e8-bb11-42010a9a00b5
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 8Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: production-postgres
    namespace: neserver-6540663
    resourceVersion: "85487"
    uid: a70ca000-64bd-11e8-bb11-42010a9a00b5
  gcePersistentDisk:
    fsType: ext4
    pdName: gke-cluster-1-320626e3-pvc-a70ca000-64bd-11e8-bb11-42010a9a00b5
  persistentVolumeReclaimPolicy: Delete
  storageClassName: standard
status:
  phase: Bound
Chris Watts

1 Answer

Unfortunately, this feature is not yet supported in GKE. You can open a feature request in the Public Issue Tracker to have it enabled in a future version of GKE.

Patrick W
  • Ironically, it seems I cannot create an issue on the issue tracker (button is greyed out). Does this mean I'm stuck with an 8Gi database until they fix this? Might just move to a regular monolithic server where I can actually control things. – Chris Watts Aug 20 '18 at 08:58
  • You should be able to create a new feature request as per [this doc](https://developers.google.com/issue-tracker/guides/access-ui) – Patrick W Aug 21 '18 at 15:36
  • Otherwise, yes, the disk size can't be changed. But you can create a new PVC with a new disk, transfer your data from the old one to the new one, and delete the old one. This is more work, but it gives you a larger disk without losing data or paying for additional disks. – Patrick W Aug 21 '18 at 15:37
  • Yeah, this is the only other option I can conceive, but it brings another question: how do I copy the data to the new disk? – Chris Watts Aug 22 '18 at 12:00
  • The PVC creates a persistent disk. You can attach that PD to another instance to copy data over. You can do the same thing with the new PVC; the disk can be created before you actually mount it to a pod. – Patrick W Aug 22 '18 at 16:16
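
To round this off, here is a minimal sketch of the in-cluster version of that copy step, assuming a new 100Gi claim named production-postgres-new (a hypothetical name) has already been created and the database pod has been scaled down so both ReadWriteOnce volumes are free to attach:

apiVersion: v1
kind: Pod
metadata:
  name: pv-data-copy             # hypothetical one-off helper pod
  namespace: neserver-6540663
spec:
  restartPolicy: Never
  containers:
  - name: copy
    image: busybox
    # copy everything from the old volume to the new one, preserving
    # ownership and permissions, then flush to disk and exit
    command: ["sh", "-c", "cp -a /old/. /new/ && sync"]
    volumeMounts:
    - name: old
      mountPath: /old
    - name: new
      mountPath: /new
  volumes:
  - name: old
    persistentVolumeClaim:
      claimName: production-postgres
  - name: new
    persistentVolumeClaim:
      claimName: production-postgres-new   # hypothetical new 100Gi claim

Once the copy completes, the workload can be repointed at the new claim and the old PVC deleted; with reclaimPolicy: Delete, removing the old claim also removes the old disk.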