
I created a container in which I've started ~10 processes, and I want to analyze how much memory they consume. To do that, I ran top inside the container and docker stats outside.

In top, I see 10 processes, each taking 50 MB of resident memory. So I would expect docker stats to show at least 500 MB of memory used by the container, but it shows only 140 MB.

Where does this discrepancy come from? What is the real memory consumption?

htop output: [screenshot]

docker stats output: [screenshot]

speller
1 Answer


On Linux, a fork()ed process initially references the same memory pages as its parent, under a copy-on-write scheme. Each process reports those shared pages in its own resident set size, so summing RSS across processes counts them multiple times. Running multiple copies of the same program keeps the deduplication ratio very good.
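To see copy-on-write semantics in action, here is a small sketch (assuming Linux, where `os.fork` is available): the child initially sees the parent's buffer without a second physical copy existing, and only a write triggers duplication of the touched page, leaving the parent's data untouched.

```python
import os

# Parent allocates a buffer; after fork() the child shares these pages
# copy-on-write, so no extra physical memory is used until a write.
buf = bytearray(b"A" * 1_000_000)

r, w = os.pipe()
pid = os.fork()
if pid == 0:                       # child
    os.close(r)
    first = bytes(buf[0:1])        # reading shared pages copies nothing
    buf[0:1] = b"B"                # writing triggers copy-on-write
    os.write(w, first + bytes(buf[0:1]))  # report what the child saw
    os._exit(0)
else:                              # parent
    os.close(w)
    seen = os.read(r, 2)
    os.waitpid(pid, 0)
    print(seen)                    # b'AB': child saw parent's data, then its own copy
    print(bytes(buf[0:1]))         # b'A': the child's write never reached the parent
```

Both copies report the full buffer in their RSS, yet only the written page actually got duplicated.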

The container figure is the actual consumption: Docker reads it from the kernel's cgroup memory accounting, which counts each physical page once, no matter how many processes map it. (The same applies to other cgroup users, such as systemd slices.) Note that if you set a memory limit, hitting it will by default invoke the OOM killer.
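You can measure the overcounting yourself from procfs. As an illustration (assuming a Linux kernel new enough to provide `/proc/PID/smaps_rollup`): RSS charges every shared page to every process that maps it, while PSS (proportional set size) splits each shared page evenly among its sharers, so summing PSS over the container's processes lands much closer to the cgroup figure that docker stats reports.

```python
def read_smaps_rollup(pid="self"):
    """Parse the kB-valued fields (Rss, Pss, ...) for one process."""
    stats = {}
    with open(f"/proc/{pid}/smaps_rollup") as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 3 and parts[2] == "kB":
                stats[parts[0].rstrip(":")] = int(parts[1])
    return stats

s = read_smaps_rollup()
print(f"Rss: {s['Rss']} kB, Pss: {s['Pss']} kB")
# Shared pages inflate RSS but are split across sharers in PSS,
# so PSS can never exceed RSS.
assert s["Pss"] <= s["Rss"]
```

Run this for each PID inside the container and compare the RSS sum against the PSS sum to see where your missing ~360 MB went.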

A practical memory limit lies somewhere between the observed container utilization (140 MB) and the sum of the resident set sizes (500 MB). Conservatively, you could start at 500 MB. That is a lot better than the uncapped default of all your memory, 62,000 MB in your case.

John Mahowald