I set up an Elasticsearch cluster with one dedicated master node, two master-eligible data nodes and one coordinating node. The number of replicas is set to one.
There are two pipelines in Logstash, each receiving syslog messages from a firewall, converting them to JSON, and feeding them into one of the two data nodes. I don't explicitly generate a UUID for the documents.
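For reference, here is a simplified sketch of what each pipeline looks like (hostnames, ports, and the index name are placeholders, not my real values):

```conf
# Pipeline for firewall A; pipeline B is identical except for port and target host.
input {
  syslog { port => 5514 }
}
filter {
  # Parse the syslog payload as JSON.
  json { source => "message" }
}
output {
  elasticsearch {
    hosts => ["https://data-node-1:9200"]
    index => "firewall-%{+YYYY.MM.dd}"
    # No document_id is set, so Elasticsearch auto-generates an _id per event.
  }
}
```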
Grafana is connected to the coordinating node to pull data from the cluster.
So far so good. But I noticed that in Grafana every document appears twice. That can't be right, but I have no idea what the issue might be.
I checked the output from Logstash and found no duplicates there, so I suspect the duplication happens inside the cluster. Can anybody give me a hint here? Do I have to set an explicit ID on the documents prior to indexing?
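If an explicit ID is the fix, I assume it would look roughly like this: an untested sketch using the fingerprint filter to derive a deterministic document_id from the message, so re-indexing the same event overwrites instead of duplicating?

```conf
filter {
  # Hash the raw message into a deterministic ID (stored in @metadata
  # so it isn't indexed as a field itself).
  fingerprint {
    source => "message"
    target => "[@metadata][fingerprint]"
    method => "SHA256"
  }
}
output {
  elasticsearch {
    hosts       => ["https://data-node-1:9200"]
    index       => "firewall-%{+YYYY.MM.dd}"
    # Use the fingerprint as the document _id instead of an auto-generated one.
    document_id => "%{[@metadata][fingerprint]}"
  }
}
```

Is that the right approach, or is the duplication coming from somewhere else entirely (e.g. the replica setting or the two pipelines)?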
Thanks, Henry