I have a setup with several Logstash nodes sending input to Elasticsearch, and a Kibana server that lets me visualize the data.
The current infrastructure is fairly simple and runs on single-node machines. We are looking to scale it out to a larger testbed. However, before investing in a large ELK deployment, I want a better understanding of how well it scales and what its performance parameters are.
I have not been able to find numbers on the Elasticsearch website or in their case studies.
The questions are these:
How well does Elasticsearch scale? How many logs per second can it ingest, and how many nodes are required to sustain that? Any numbers or insight would help.
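To make the ingestion question concrete: when we benchmark, we expect to batch events through Elasticsearch's `_bulk` API rather than indexing one document at a time. A minimal sketch of building such a payload (the index name `logs-test` and the event fields are hypothetical, just for illustration):

```python
import json

def build_bulk_payload(events, index="logs-test"):
    """Build an NDJSON body for Elasticsearch's _bulk API.

    The _bulk API expects alternating action and document lines,
    each JSON-encoded and terminated by a newline.
    """
    lines = []
    for event in events:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(event))
    return "\n".join(lines) + "\n"

# Hypothetical log events standing in for Logstash output.
events = [{"message": f"log line {i}", "level": "INFO"} for i in range(3)]
payload = build_bulk_payload(events)
print(payload)
```

Timing how long a node takes to accept payloads like this is one way to put a logs/sec number on a given cluster size.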
How well does it perform with time-based indices? We envision the use case being mostly structured queries. In particular, how does it compare to SQL-like databases? One concern raised is whether it would be better to use a SQL database, since we know the log structure beforehand. We do not necessarily need search-engine functionality if performance is a big bottleneck.
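For context, here is the kind of structured query we have in mind, written both ways. The log schema (a `level` field plus a timestamp) is a hypothetical example of what we would index:

```python
import json

# SQL-style query against a relational log table (hypothetical schema):
sql = (
    "SELECT count(*) FROM logs "
    "WHERE level = 'ERROR' "
    "AND ts >= '2024-05-01' AND ts < '2024-05-02'"
)

# The equivalent Elasticsearch query DSL: a bool query with
# filter clauses, which skips relevance scoring entirely since
# we only need exact matches, not full-text search.
es_query = {
    "query": {
        "bool": {
            "filter": [
                {"term": {"level": "ERROR"}},
                {"range": {"@timestamp": {"gte": "2024-05-01",
                                          "lt": "2024-05-02"}}},
            ]
        }
    }
}

print(json.dumps(es_query, indent=2))
```

Since every clause sits under `filter`, this is exactly the "structured query, no search scoring" pattern we expect to run most of the time.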
I am a newbie at ELK and SQL server management, so please excuse me if the questions are not well formed.