I am running Docker Swarm across two Debian 8 nodes, each with 1GB of RAM.
I wanted to see how much memory is available to my application process (which will be a Solr server), so I did:
docker-compose run solr bash
In other words, instead of the solr server process I am just running a bash shell, inside a container otherwise identical to the one which will run my server.
If I run top, this is what I see:
top - 16:23:57 up 6:09, 0 users, load average: 0.00, 0.01, 0.05
Tasks: 2 total, 1 running, 1 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.0 us, 0.0 sy, 0.0 ni, 99.3 id, 0.0 wa, 0.0 hi, 0.3 si, 0.3 st
KiB Mem: 1024468 total, 634016 used, 390452 free, 93124 buffers
KiB Swap: 0 total, 0 used, 0 free. 375932 cached Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
11 solr 20 0 23640 2704 2320 R 0.3 0.3 0:00.01 top
1 solr 20 0 21940 3540 3044 S 0.0 0.3 0:00.12 bash
Apparently, out of the 1GB allocated to the container I am using 634MB, even though the only two processes running in the container are top and bash.
How much memory should I normally expect the OS to consume?
I assume some of this is consumed by the Docker daemon and swarm too, outside the container on the host node.
Hmm, I've just read http://www.linuxatemyram.com/ ... should I interpret top to mean that, of the 634MB 'used', about 375MB is just disk cache and is still available to my app process?
So I really have about 766MB free (390MB free + 376MB cached) - is that right?
The point of all this is so I can start my Solr server process with a realistic memory limit, to make the best use of the 1GB node without it being killed by an out-of-memory error.
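For illustration, this is roughly the kind of limit I mean - compose v2 syntax, and the numbers and the SOLR_HEAP variable are placeholders rather than values I've settled on (an older 4.10.x image may need the heap passed as a JVM -Xmx option instead):

# docker-compose.yml - illustrative values only
version: '2'
services:
  solr:
    image: solr:4.10.4        # whichever Solr image is in use
    mem_limit: 900m           # hard memory cap for the container
    environment:
      - SOLR_HEAP=512m        # Solr/JVM heap, leaving headroom under the cap

(With a v3 file deployed to swarm, the equivalent would be deploy.resources.limits.memory.)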
UPDATE
OK, this has been marked as a duplicate, so it is the 'Linux ate my RAM' thing. From that page, the answer is not to use top but rather free:
$ dc run solr bash
solr@59cb7fafca0e:/opt/solr-4.10.4$ free -m
             total       used       free     shared    buffers     cached
Mem:          1000        628        371          8         94        372
-/+ buffers/cache:        161        838
Swap:            0          0          0
...and from this the 'true' free memory figure is 838MB, which seems to derive from free + buffers + cached (371 + 94 + 372).
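If it's useful, here is a rough way to pull that figure out in a startup script and turn it into a heap size. This is only a sketch: it assumes the old free output format shown above (newer procps versions print an 'available' column instead), and the 300MB headroom is an arbitrary placeholder.

#!/bin/sh
# Sketch: derive a Solr heap size from the memory that is really available.
avail_mb=$(free -m | awk '/buffers\/cache/ {print $4}')   # 838 in the example above
heap_mb=$(( avail_mb - 300 ))                             # leave headroom for non-heap JVM memory and the OS
echo "Would start Solr with -Xmx${heap_mb}m"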