
I'm setting up Google Container Engine and have created pods, a replication controller, and a service. However, the pods never get ready and restart many times, as shown below (restartPolicy is Always):

$ kubectl get pods
NAME                   READY     STATUS       RESTARTS   AGE
app-production-acg4r   0/1       ExitCode:0   8          5m
app-production-p7njh   0/1       ExitCode:0   8          5m

I followed the Kubernetes Application Troubleshooting Guide, but had no luck.

First, I tried kubectl logs, but there was no output:

$ kubectl logs app-production-acg4r app-production
$ kubectl logs app-production-p7njh app-production
$ kubectl logs --previous app-production-acg4r app-production
$ kubectl logs --previous app-production-p7njh app-production

I also tried to run a command inside the container with kubectl exec. It sometimes returns an error:

$ kubectl exec notel-production-uz29p -c notel-production -- ls /var/log
error: Error executing remote command: Error executing command in container: container not found ("notel-production")

and sometimes gives no response:

$ kubectl exec notel-production-uz29p -c notel-production -- ls /var/log
(No response)

I also went through the Cluster Troubleshooting Guide:

  • I logged in to the cluster and looked through /var/log/kubelet.log and /var/log/kube-proxy.log, but I couldn't find anything useful.
  • Restarting the cluster changed nothing.
  • At least the GCE persistent disk exists.
  • I'm using a replication controller and a service.

I have no idea what else I can do. How can I investigate this problem? Or is this a Google Container Engine issue?

Jumpei Ogawa

1 Answer


It looks like your container is starting and then quickly exiting. I'm guessing that from the STATUS, which is ExitCode:0.

For debugging I would check the following:

  • If you run the same container directly with Docker on your local machine, does it also exit immediately?
  • If not, are you overriding any environment variables, arguments, or the command line in your Pod Template in a way that would make it exit immediately?
  • Try setting the .spec.containers[].command for your Pod Template to something like ["sleep", "10000"], so that the container stays alive long enough that you can use kubectl exec ... to debug.
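
As a concrete illustration of the last step, the sleep override could look like this in the replication controller's pod template. This is a hedged sketch: the controller name, labels, and image here are placeholders based on the pod names in the question, not the asker's actual manifest.

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: app-production          # hypothetical name, matching the question's pods
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: app-production
    spec:
      containers:
      - name: app-production
        image: gcr.io/my-project/app:latest   # placeholder image
        command: ["sleep", "10000"]           # keep the container alive for debugging
```

With the container kept alive this way, kubectl exec app-production-xxxxx -c app-production -- ls /var/log should succeed instead of reporting "container not found".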
Eric Tune
    That's it! The image immediately shuts down on local Docker too. I didn't know that I had to write CMD /bin/bash in the Dockerfile, or the container immediately shuts down. – Jumpei Ogawa Nov 05 '15 at 03:47
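
For reference, the fix mentioned in the comment amounts to giving the image a long-running default command in its Dockerfile. A minimal sketch, assuming an Ubuntu base image (a bare CMD /bin/bash exits immediately without an attached TTY, so a loop is a safer placeholder):

```dockerfile
FROM ubuntu:14.04

# Without a long-running CMD (or ENTRYPOINT), the container's main process
# exits right away, Kubernetes restarts it, and the pod shows ExitCode:0.
CMD ["/bin/bash", "-c", "while true; do sleep 60; done"]
```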