
We use Kubernetes (specifically OpenShift) to run our team's infrastructure.

One of the DaemonSets, fluentd, is currently causing a lot of trouble, frequently taking entire nodes down with its huge CPU, memory, and disk I/O consumption (really, it's absolutely stupid!).

We've set the following resource limits and requests on the pod in the DaemonSet definition:

resources:
  limits:
    memory: 512Mi
  requests:
    cpu: 100m
    memory: 512Mi

I was expecting k8s to kill the pod when its memory consumption exceeds 512Mi. Yet those pods are freely allowed to consume 1000+% of CPU and all of the available RAM on the machine (far above 512Mi).

I've done some research, and it seems the world is quite divided on what happens when the memory limits are exceeded.

  1. Some say the pod will keep running until the system is OOM and decides to kill something.
  2. Others say the pod will instantly be killed when it exceeds the allotted memory.

We would very much like the 2nd option to happen to that pesky pod! What are we missing?

aspyct
  • the best thing you can do is follow the first link and verify that your pods are actually using those limits – c4f4t0r Sep 11 '19 at 17:57

2 Answers


Did you configure limits/requests at the cluster level so Kubernetes kills pods when they overconsume CPU/RAM, or did you only apply those limits in your Helm chart or K8s config file?

A great link about your question: HERE
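
In case it helps, here is a minimal sketch of where that block has to live in the DaemonSet manifest (the names and image tag below are made up, not your actual values). The resources block must sit under spec.template.spec.containers; note also that a cpu request alone never throttles CPU — only a cpu limit does, which would explain the 1000+% CPU usage:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
spec:
  selector:
    matchLabels:
      name: fluentd
  template:
    metadata:
      labels:
        name: fluentd
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd:v1.7   # hypothetical image tag
        resources:
          limits:
            cpu: 500m      # without a cpu limit, CPU is never throttled
            memory: 512Mi  # exceeding this normally gets the container OOM-killed
          requests:
            cpu: 100m
            memory: 512Mi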

Have a good day; I hope this helps, even a little.


Apart from setting the above-mentioned limits, in this situation I would also recommend setting up a resource quota for the namespace, as well as default limits for all containers within the namespace, which is described here. A sketch of both objects follows below.
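
A rough sketch of both objects, assuming the DaemonSet lives in a namespace called logging (the names, namespace, and numbers here are placeholders, not recommendations):

apiVersion: v1
kind: LimitRange
metadata:
  name: default-container-limits  # hypothetical name
  namespace: logging              # hypothetical namespace
spec:
  limits:
  - type: Container
    default:            # applied as limits to containers that declare none
      cpu: 500m
      memory: 512Mi
    defaultRequest:     # applied as requests to containers that declare none
      cpu: 100m
      memory: 256Mi
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: logging-quota             # hypothetical name
  namespace: logging
spec:
  hard:                 # caps the namespace's total requests and limits
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi

The quota caps what the whole namespace can ask for, while the LimitRange makes sure no container slips in without limits at all.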

In addition to that, you may configure Out Of Resource Handling, so the kubelet evicts pods before the node itself becomes starved (see the sketch below). I hope it will help you.
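
For example, on a plain kubelet the eviction thresholds live in the KubeletConfiguration; the values below are placeholders to illustrate the shape:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:                  # evict immediately when a node drops below these
  memory.available: "500Mi"
  nodefs.available: "10%"
evictionSoft:                  # evict only after the grace period below
  memory.available: "1Gi"
evictionSoftGracePeriod:
  memory.available: "1m30s"

On OpenShift you don't edit the kubelet config directly; the same settings are typically wrapped in the platform's own kubelet configuration mechanism (a KubeletConfig custom resource on OpenShift 4).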

mario