1

The memory consumption of my Elasticsearch instance is too high.

I set ES_HEAP_SIZE=4g.

The command that starts ES begins with: /usr/lib/jvm/java-8-oracle/jre/bin/java -Xms4g -Xmx4g

So far so good.

But I am seeing more than 7 GB of RSS memory consumption.

Here is the /proc/<pid>/status output: http://pastebin.com/mXW6Vnfc

But when I run jstat -gc, everything looks normal: I see around 3.7 GB in OC (old generation capacity) and 270 MB in EC (Eden capacity) (http://pastebin.com/c84urvSM).

This is the sorted pmap output: http://pastebin.com/GG92Ercr
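For reference, the numbers above were collected with commands roughly like these (`<pid>` stands for the Elasticsearch process ID; exact invocations may differ):

```
# Resident and virtual memory as seen by the kernel
grep -E 'VmRSS|VmSize' /proc/<pid>/status

# Heap generation sizes as seen by the JVM (OC = old gen capacity, EC = Eden capacity)
jstat -gc <pid>

# Per-mapping memory usage, sorted by RSS
pmap -x <pid> | sort -n -k3
```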

Do you have any idea why the memory consumption is so high?

Also, ES runs on a virtual server under OpenVZ.

usamec
  • usually this is a garbage collection issue https://www.elastic.co/guide/en/elasticsearch/guide/current/_don_8217_t_touch_these_settings.html – Sum1sAdmin Jul 15 '16 at 10:30
  • `The memory consumption of my Elasticsearch instance is too high` Why is it too high? What is normal? How do you know it is too high? Based on what documentation have you concluded that? – 030 Jul 17 '16 at 14:58
  • If I set the maximum heap to 4 GB, I would expect real memory usage of at most 5 GB. In reality it is 7 GB and still growing. – usamec Jul 20 '16 at 11:40

3 Answers

0

The issue is probably in the JVM. If you launch multiple instances of ES, look into it with Java Mission Control (http://www.oracle.com/technetwork/java/javaseproducts/mission-control/java-mission-control-1998576.html) or jconsole (https://docs.oracle.com/javase/8/docs/technotes/guides/management/jconsole.html). This will give you an idea of why it's consuming so much memory.

Please keep in mind that the JVM needs memory for GC and other things outside of the heap. Start ES with a smaller -Xms (e.g. -Xms1024m) and let the heap grow. Also note that -Xms/-Xmx take a plain number as bytes and accept k/m/g suffixes; to be on the safe side, spell the unit out, e.g. -Xms1024m -Xmx4096m.
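To see what the JVM allocates outside the Java heap (GC structures, code cache, thread stacks, ...), Native Memory Tracking can help. This is only a sketch, assuming a HotSpot JDK 8 and a start script that honours `ES_JAVA_OPTS`; verify the flag and variable names for your setup:

```
# Start ES with native memory tracking enabled (adds a small overhead)
ES_JAVA_OPTS="-XX:NativeMemoryTracking=summary" ./bin/elasticsearch

# Then ask the running JVM for a heap vs. non-heap breakdown
jcmd <pid> VM.native_memory summary
```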

Alex H
0

Use jvisualvm to see heap consumption (and much more); it's the official debugging and profiling tool for Java and ships with the JDK.

Attach it to the elasticsearch process, check that the -Xmx and -Xms flags have the right value and look at the graphs. It's very straightforward.

The Java process should have 4 GB of heap at all times because you set both the min and max heap. It should consume 4-4.5 GB on the system because there is some overhead for managing the heap that is not accounted for as part of the heap.

Anyway, Java doesn't allow the process to use more heap than configured (it throws an OutOfMemoryError if it runs out). It's likely that something else is using the memory on your machine. Use top or htop to have a look at the other running processes.
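To double-check which heap flags the running process actually got, and how much resident memory the system attributes to each process, something along these lines works (`<pid>` is a placeholder):

```
# Confirm the -Xms/-Xmx values the JVM is really running with
jcmd <pid> VM.flags          # or: jinfo -flags <pid>

# Show all processes sorted by resident memory, largest first
ps aux --sort=-rss | head
```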

user5994461
  • When I look into jvisualvm, I only see that the max heap size is 4 GB and 3.5 GB of it is used, and that's all. There is no explanation for the other 3 GB used on top of that. – usamec Jul 20 '16 at 11:47
  • Also, it looks like MBeans might have some more info; I will check into that when the leak happens again. – usamec Jul 20 '16 at 12:39
  • Then it's not elasticsearch that is using the rest of the memory. There is something else. – user5994461 Jul 20 '16 at 13:36
  • The Elasticsearch process is using 7 GB; that's what every system tool says. – usamec Jul 21 '16 at 06:19
0

What you set with ES_HEAP_SIZE is the Java heap size available to the program running on the JVM. On top of that, the JVM has its own overhead: there's overhead from the GC (its data is not accounted for as Java heap), and each running thread needs memory for its stack (how many threads do you have, and what's the stack size?). Elasticsearch probably doesn't use off-heap storage itself, but it's always something to look out for.
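As a rough sanity check on the thread-stack part, count the threads and multiply by the stack size (the default `-Xss` on 64-bit Linux is around 1 MB unless overridden). A sketch only; the `pgrep` pattern is an assumption and may need adjusting:

```
# Find the ES process and count its native threads
PID=$(pgrep -f org.elasticsearch)      # adjust the pattern to your setup
grep Threads /proc/$PID/status

# Each thread reserves up to -Xss of stack (default ~1 MB), so several hundred
# threads can add several hundred MB of address space on top of the heap.
```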

I've heard of cases where changing the memory allocator helped reduce the total JVM footprint (jemalloc vs. glibc malloc). Finally, I'd try a different (more recent?) JVM version or a different GC algorithm, if possible.
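If you want to experiment with the allocator, it can be swapped in via LD_PRELOAD, or glibc's per-thread malloc arenas can be capped, which sometimes shrinks RSS on machines with many cores. Illustrative only; the library path varies by distro and none of this is an official Elasticsearch setting:

```
# Option 1: preload jemalloc (path is an example; check your distro)
export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1

# Option 2: limit the number of glibc malloc arenas instead
export MALLOC_ARENA_MAX=4

./bin/elasticsearch
```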

Karol Nowak