
I'm using Elasticsearch in conjunction with Graylog.

Is there a way to limit the size of the Elasticsearch database, possibly using a round-robin approach for the logs? My setup is relatively small (~100 GiB database), and I'm aware that Elasticsearch needs a lot of space to store all its indices, but I need to cap its size, even if that means old data gets deleted.

What's the best practice approach here? How do you limit the amount of stored and indexed logs in your setup?

watain

1 Answer


Graylog comes with a highly-configurable index rotation and retention system out-of-the-box.

Simply configure the strategy that best matches your requirements on the System / Indices page.
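As a rough sketch of how this bounds disk usage: with size-based rotation and the delete retention strategy, total log storage is capped at roughly (max size per index) × (number of retained indices). The settings below are an illustration based on the options Graylog exposed in its `graylog.conf` around the 2.x era; depending on your Graylog version they may only serve as initial defaults, with the values configured in the web interface taking precedence.

```ini
# graylog.conf (illustrative values, not authoritative)
# Rotate the active write index once it reaches a size threshold.
rotation_strategy = size

# ~5 GiB per index (value is in bytes).
elasticsearch_max_size_per_index = 5368709120

# Keep at most 20 indices; 20 x 5 GiB = ~100 GiB total.
elasticsearch_max_number_of_indices = 20

# When the limit is exceeded, delete the oldest index outright
# (alternatives such as "close" or "archive" keep the data around).
retention_strategy = delete
```

The same rotation/retention parameters can be set interactively on the System / Indices page, which is the recommended place to manage them.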

Screenshot: System / Indices page

joschi
  • Thanks a lot for your answer! This seems to be it. However, to me it seems that this will only clean up indices. What happens with the effective log data? Will old log data be removed from the disk together with its index or only the index? – watain Dec 06 '16 at 13:21
  • What exactly do you mean with "effective log data"? Graylog stores ingested logs exclusively in Elasticsearch. – joschi Dec 07 '16 at 14:15
  • I was confused about what an index really is. Now that I know that the indices actually contain all the data stored in Elasticsearch, it makes perfect sense. I thought that the indices only contained header data / the index of the stored data. Never mind. – watain Jan 24 '17 at 09:16