
I have a local Kubernetes cluster running with k3s and want to access the filesystem of a stopped Pod. The Pod originates from a CronJob and I want to investigate further why the Job failed.

For a "regular" Kubernetes setup, I would have tried to access the file system via the docker cli. With k3s, however, docker ps on the machine returns an empty list. From what I understand, k3s uses containerd, but I could not figure out how to inspect "containerd"-containers. My Google-fu missed me. :/


1 Answer


I am not sure why you want to enter the filesystem to check why the Job failed.

When you create a Job, it automatically creates a Pod. Example based on the official docs:

$ sudo k3s kubectl apply -f https://k8s.io/examples/controllers/job.yaml
job.batch/pi created

In the meantime it created a pod. (I have also created the alias kk="sudo k3s kubectl".)

$ kk get pods
NAME                     READY   STATUS      RESTARTS   AGE
pi-796ng                 0/1     Completed   0          55s

$ kk get jobs
NAME      COMPLETIONS   DURATION   AGE
pi        1/1           7s         30s

1. To check what happened inside the pod, check its logs:

$ sudo k3s kubectl logs <pod_name> -c <container_name>

$ kk logs pi-796ng -c pi
3.141592653589793238462643383279502884197169399375105820974944592307816406286208998628034825342117067982148086513282306647...
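As a side note, if a container has crashed and been restarted, the logs of the previous instance may still be retrievable with the --previous flag, roughly like this:

$ sudo k3s kubectl logs <pod_name> -c <container_name> --previous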

2. Describe pod / describe job

$ sudo k3s kubectl describe pod <pod_name>

$ sudo k3s kubectl describe job <job_name>

If they are in a different namespace than the default, you need to add the -n <namespace> flag to the query.
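For example, with a placeholder namespace:

$ sudo k3s kubectl describe pod <pod_name> -n <namespace>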

3. Kubernetes events

Execute the command:

$ sudo k3s kubectl get events

It will show you all events from your Kubernetes cluster.
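To narrow the output down to a single pod, a field selector should help; a sketch (the pod name comes from kk get pods above):

$ sudo k3s kubectl get events --field-selector involvedObject.name=<pod_name>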

Many troubleshooting factors may also depend on your Job spec, for example .spec.activeDeadlineSeconds or .spec.backoffLimit. More info here.
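To check what those fields are set to on an existing Job, dumping its full manifest works, for example:

$ sudo k3s kubectl get job <job_name> -o yaml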

  • Thanks @PjoterS for the detailed explanations. I already have access to the Pod's log, description and k8s events. However, the Job stores files in the file system that are not visible in the log. In my case: I'm running browser UI-tests using cypress.io, which stores videos of the test run which I'd like to see. – Daniel Albuschat Nov 22 '19 at 15:08
  • (Addendum: I will store the videos on storage in the future, but I'd like to look at the video of a past failed job in particular) – Daniel Albuschat Nov 22 '19 at 15:09
  • I think the only option here is to use a Persistent Volume to save these videos: https://kubernetes.io/docs/concepts/storage/persistent-volumes/ – PjoterS Nov 28 '19 at 13:03
  • Thanks PjoterS! I was afraid that it is not possible to access the files post-mortem and will put them on a Persistent Volume in the future. – Daniel Albuschat Dec 02 '19 at 11:30
  • What if the pod does not write logs to stdout but writes them to the filesystem instead? (the pod is third-party) – Andrew Savinykh May 14 '21 at 07:37
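Following up on the Persistent Volume suggestion in the comments: a minimal sketch of a PersistentVolumeClaim plus a Job that mounts it, so files written by the test run survive the Pod. All names, the size, the image tag and the mount path below are hypothetical placeholders; on k3s the bundled local-path StorageClass can typically back such a claim.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-videos              # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi               # hypothetical size
---
apiVersion: batch/v1
kind: Job
metadata:
  name: ui-tests                 # hypothetical name
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: cypress
          image: cypress/included:3.8.3        # hypothetical image/tag
          volumeMounts:
            - name: videos
              mountPath: /e2e/cypress/videos   # hypothetical path where the videos land
      volumes:
        - name: videos
          persistentVolumeClaim:
            claimName: test-videos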