
I used the Helm stable charts to install MongoDB in my AWS Kubernetes cluster. When I ran helm install for MongoDB the first time, there were no issues: all pods ran and I was able to access the database too.

However, when I ran helm install for MongoDB a second time with a new release name, the pod logs showed MongoDB running successfully, but the pod status says otherwise:
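The install commands were along these lines (release names inferred from the pod names below, using the Helm 2 --name syntax, so treat them as approximations):

    # first release - worked fine
    helm install --name request-form-mongo stable/mongodb

    # second release with a new name - pod ends up in CrashLoopBackOff
    helm install --name scheduled-task-mongo stable/mongodb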

request-form-mongo-mongodb-7f8478854-t2g8z                        1/1       Running            0          3m
scheduled-task-mongo-mongodb-8689677f67-tzhr9                     0/1       CrashLoopBackOff

When I checked the kubectl describe output for the failing pod, everything seemed fine, but the last events show these warnings:
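The describe command used (pod name copied from the listing above, assuming the default namespace):

    kubectl describe pod scheduled-task-mongo-mongodb-8689677f67-tzhr9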

  Normal   Created                7m (x4 over 8m)   kubelet, ip-172-20-38-19.us-west-2.compute.internal  Created container
  Normal   Started                7m (x4 over 8m)   kubelet, ip-172-20-38-19.us-west-2.compute.internal  Started container
  Warning  FailedSync             7m (x6 over 8m)   kubelet, ip-172-20-38-19.us-west-2.compute.internal  Error syncing pod
  Warning  BackOff                2m (x26 over 8m)  kubelet, ip-172-20-38-19.us-west-2.compute.internal  Back-off restarting failed container

What could be the problem, and how can I resolve it?

  • Can you please post here a log of this pod for further analysis? – d0bry Jun 06 '18 at 11:59
  • Thank you for responding, but I resolved it myself. The node did not have enough memory for the container to run, so it kept erroring out. I increased the node size, adding enough memory for the PV. It works fine now. – Shruthi Bhaskar Jun 08 '18 at 09:00
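For anyone hitting the same symptom, two quick checks that would surface this kind of memory pressure (generic commands, not taken from the original post; pod and node names copied from the output above):

    # was the container killed for running out of memory?
    kubectl describe pod scheduled-task-mongo-mongodb-8689677f67-tzhr9 | grep -i -A 2 'Last State'

    # how much memory has the node already committed vs. what it has available?
    kubectl describe node ip-172-20-38-19.us-west-2.compute.internal | grep -A 8 'Allocated resources'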

0 Answers