
I use Logstash and Elasticsearch to store and analyze our Squid logs. The logs grow by about 40 GB per day on our FreeBSD ZFS storage system. Elasticsearch fails roughly every five days, after which no further logs can be written.

I have tried setting:

index.number_of_shards: 1
index.number_of_replicas: 0

But it doesn't seem to help.
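For reference, a sketch of where such node-level index defaults normally live and how to check what an existing index actually uses (assuming elasticsearch.yml and a single node on localhost:9200):

# elasticsearch.yml -- these defaults only apply to indices created afterwards
index.number_of_shards: 1
index.number_of_replicas: 0

# check the settings of one of the daily Logstash indices
curl -XGET 'http://localhost:9200/logstash-2013.12.08/_settings?pretty'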

Attached is a screenshot of the cluster status from the elasticsearch-head plugin.

Can anyone explain what I'm doing wrong, and which configuration I should modify?

UPDATE

The log shows:

[2013-12-08 19:51:16,916][WARN ][index.engine.robin ] [David Cannon] [logstash-2013.12.08][0] failed engine
java.lang.OutOfMemoryError: Java heap space

[2013-12-09 17:03:07,500][DEBUG][action.admin.cluster.node.info] [David Cannon] failed to execute on node [sSoHeIz5TSG8fR3IRHj_Pg]
org.elasticsearch.transport.RemoteTransportException: Failed to deserialize response of type [org.elasticsearch.action.admin.cluster.node.info.NodeInfo]
Caused by: org.elasticsearch.transport.TransportSerializationException: Failed to deserialize response of type [org.elasticsearch.action.admin.cluster.node.info.NodeInfo]

Kindule

1 Answer


I've seen this before, and it's all to do with the amount of heap space allocated to Logstash by the JVM.

You can try increasing this by passing a flag like -XX:MaxHeapSize=256m to the JVM when you start it, though you should probably set MaxHeapSize to something like 512m or even bigger.
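For example, something along these lines (a sketch; the flatjar name below is just illustrative of a 1.x install, and LS_HEAP_SIZE only helps if your version's startup script reads it):

# start Logstash with a bigger heap directly on the java command line
java -XX:MaxHeapSize=512m -jar logstash-1.3.2-flatjar.jar agent -f logstash.conf

# or, with the bundled startup script, set the heap size via the environment
export LS_HEAP_SIZE=512m
bin/logstash agent -f logstash.conf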

Elasticsearch comes with some pretty sane defaults, but it's possible to tune it further, setting the size of its heap and so on for searching.
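For Elasticsearch of this vintage, the usual knob is the ES_HEAP_SIZE environment variable read by the bundled startup script (a sketch; 4g is just an example, and the common advice is to give it no more than about half the machine's RAM):

# give the Elasticsearch node its own, larger heap before starting it
export ES_HEAP_SIZE=4g
bin/elasticsearch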

ES and Logstash can scale to billions of events, and many terabytes of log data, with careful configuration.

Tom O'Connor