
I have a setup with several Logstash nodes sending input to Elasticsearch, and a Kibana server which lets me visualize the data.

The current infrastructure is fairly simple and runs on single-node machines. We are looking to scale it out to a larger testbed. However, before investing in a large ELK deployment, I want a better understanding of how well it scales and of its performance characteristics.

I have not been able to find numbers on the Elasticsearch website or in their case studies.

The questions are these:

  1. How well does Elasticsearch scale? How many log entries per second can it ingest, and how many nodes are required for that? Any numbers or insight would help.

  2. How well does it perform with time-based indices? We envision the use case as being mostly structured queries; in particular, how does it compare to SQL-style databases? One concern raised was whether it would be better to use a SQL database since we know the log structure beforehand. We do not necessarily need search-engine functionality if it turns out to be a big performance bottleneck (see the sketch below).
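
For reference, the kind of structured query we expect to run looks roughly like the following. This is only a sketch: it assumes the default Logstash daily index naming (logstash-YYYY.MM.dd), a node on localhost:9200, and a hypothetical level field produced by our filters.

    import datetime

    import requests

    ES_URL = "http://localhost:9200"  # assumed; adjust for your node

    # Logstash's default output writes one index per day, named
    # logstash-YYYY.MM.dd, so a query over a known time window only
    # has to touch the matching daily indices.
    day = datetime.date.today() - datetime.timedelta(days=1)
    index = "logstash-" + day.strftime("%Y.%m.%d")

    # "level" is a placeholder for whatever field our filters extract.
    query = {
        "query": {
            "bool": {
                "must": [{"match": {"level": "ERROR"}}],
                "filter": [{"range": {"@timestamp": {"gte": "now-24h"}}}],
            }
        }
    }

    resp = requests.post(ES_URL + "/" + index + "/_search", json=query)
    resp.raise_for_status()
    print(resp.json()["hits"]["total"])  # response shape varies by ES version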

I am a newbie in ELK/SQL server management, so please excuse me if the questions are not well formed.

tsar2512

1 Answer


The case studies on Elastic's site do have some numbers; for example, from the Datadog case study:

[image: excerpt from the Datadog case study]

At Stack Exchange we have found Elasticsearch scales very well (we use it for Logstash, HAProxy logs (~150 million log entries a day), and syslog/eventlog, as well as the search for this site), but the first thing you need to do is quantify your load. With Elasticsearch that would likely be something like:

  • Document (log entry) ingest rate
  • Query rate
  • Data size

etc...
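
As a starting point, something like the following pulls document counts and on-disk size off an existing node via the _cat/indices API. It is a rough sketch: the localhost:9200 URL is an assumption, so adjust for your cluster.

    import requests

    ES_URL = "http://localhost:9200"  # assumed; adjust for your cluster

    # _cat/indices with format=json returns machine-readable rows;
    # bytes=b reports store sizes as plain byte counts.
    rows = requests.get(
        ES_URL + "/_cat/indices",
        params={"format": "json", "bytes": "b"},
    ).json()

    docs = sum(int(r["docs.count"]) for r in rows if r.get("docs.count"))
    size = sum(int(r["store.size"]) for r in rows if r.get("store.size"))

    print("indices:   %d" % len(rows))
    print("documents: %d" % docs)
    print("data size: %.1f GB" % (size / 1e9))

    # With daily Logstash indices, documents / days-retained approximates
    # your ingest rate; divide by 86400 for documents per second.

Once you know your documents per second and total data size under the current load, you can extrapolate how many shards and nodes the larger testbed will need.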

Kyle Brandt