
I have a sporadic issue, specifically with big volumes (~2TB), where a pod in my Kubernetes cluster is stuck in "ContainerCreating" with the reason:

failed to mount the volume as "xfs", it already contains unknown data, probably partitions. Mount error: mount failed: exit status 32

mount: /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/aws/eu-central-1a/vol-03717f362cd8d0611: wrong fs type, bad option, bad superblock on /dev/xvdcs, missing codepage or helper program, or other error.

I checked the events and the output of kubectl describe pod but didn't find much info. I resolved the issue by manually reformatting the volume, but that's not the solution I'm looking for. Any help would be appreciated.
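Before reformatting, it can help to inspect what is actually on the device. A rough sketch, assuming the device name /dev/xvdcs from the mount error above (adjust for your node; wipefs is destructive, so it is left commented out):

```shell
# Sketch: inspect the device the kubelet failed to mount before reformatting.
# DEV is taken from the mount error; it is an assumption, adjust for your node.
DEV=/dev/xvdcs
if [ -b "$DEV" ]; then
  file -s "$DEV"     # report any filesystem/partition signature found
  lsblk -f "$DEV"    # list partitions and detected filesystems
  # If stale signatures are what the kubelet is tripping over, they can be
  # cleared so the volume gets formatted cleanly (DESTRUCTIVE, verify first):
  # wipefs -a "$DEV"
fi
```

If file -s reports "data" (no signature) yet the kubelet still refuses to format, that points at the random-data symptom described in the answers below rather than real leftover partitions.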

Wael Gabsi

2 Answers


How are you doing the volume creation?

It seems this issue has already been reported on GitHub; you could follow it there, since it looks related to AWS EBS:

https://github.com/kubernetes/kubernetes/issues/86064

Edgar Gore
  • Yes, we already reported it on GitHub, but unfortunately we haven't received any feedback in a while, so I gave it a try here. – Wael Gabsi Jan 10 '20 at 12:13

As per the GitHub thread, this issue was experienced with AWS EBS:

Observed when an application reads unwritten blocks on an encrypted EBS volume. These unwritten blocks return random data.

This seems to be fixed now: EBS deployed a fix for the latest-generation Nitro instances on July 6th, so unwritten blocks on an encrypted EBS volume will no longer return random data. A fix for Xen instances is expected later this year.

A workaround has been to delete the PVC and let Kubernetes recreate the PV.
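That workaround might look like the following sketch. The claim name my-claim, the namespace, and pvc.yaml are hypothetical placeholders; note that with a "Delete" reclaim policy, deleting the PVC also deletes the backing PV and EBS volume, so back up any data first:

```shell
# Sketch of the delete-and-recreate workaround; names are placeholders.
PVC=my-claim
NS=default
if command -v kubectl >/dev/null 2>&1; then
  # Delete the claim. With reclaimPolicy "Delete", the bound PV and the
  # underlying EBS volume are removed as well, destroying any data on them.
  kubectl -n "$NS" delete pvc "$PVC"
  # Re-apply the claim manifest so the provisioner creates a fresh volume.
  kubectl -n "$NS" apply -f pvc.yaml
fi
```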

Toni