
I am running RabbitMQ inside Kubernetes using the rabbitmq:3.6.11-management Docker image (https://hub.docker.com/_/rabbitmq/). Looking at the management dashboard, a lot of memory is attributed to the "Processes / other" category (more than any other category). Several times I have seen a node exceed the high memory watermark; even after all messages are drained, the "Processes / other" memory does not shrink, so the broker stays blocked until the pod is manually restarted. At the time of failure, the memory chart looks like this:

[Screenshot: memory chart at the time of the out-of-memory condition]

I am using RabbitMQ as the broker for a cluster of about 30 Celery workers, with no special modifications to the default Celery configuration. Does anyone have any suggestions for better understanding what is using this memory? The documentation is very vague about what falls into this particular category.
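For reference, the threshold that trips the memory alarm mentioned above is configurable. A minimal sketch in the classic Erlang-term `rabbitmq.config` format used by the 3.6.x line (the `0.4` fraction shown is the shipped default, i.e. 40% of detected RAM):

```erlang
%% rabbitmq.config — Erlang-term format used by RabbitMQ 3.6.x.
%% When total memory use crosses this fraction of available RAM,
%% the node raises a memory alarm and blocks publishing connections.
[
  {rabbit, [
    {vm_memory_high_watermark, 0.4}
  ]}
].
```

Running `rabbitmqctl status` inside the pod (e.g. via `kubectl exec`) prints a per-category memory breakdown; the `other_proc` entry there appears to correspond to the dashboard's "Processes / other" figure.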

JBBreeman

0 Answers