
I use elasticsearch as part of a Logstash stack, in which all of the components of the stack are installed on the same server.

The purpose of this is to expose application logs to developers for debugging. I don't need to keep the indices that are created; I have a cron job that removes indices older than 7 days.
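For reference, a minimal sketch of that kind of cron job, assuming the default logstash-YYYY.MM.dd index naming and elasticsearch listening on localhost:9200 (both assumptions on my part). The % signs are escaped because cron treats a bare % as a newline:

# delete the logstash index from 7 days ago, every day at 01:00
0 1 * * * curl -s -XDELETE "http://localhost:9200/logstash-$(date -d '7 days ago' +\%Y.\%m.\%d)"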

The raw logs are preserved elsewhere in case we need historical analysis.

The problem I have is that elasticsearch keeps entering a Red health state due to unassigned shards. I've researched various ways to recover this, but inevitably, I end up deleting the raw index files and restarting the service.
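For context, when it goes red I check it with the standard cluster health and cat shards APIs, something like this (assuming elasticsearch is listening on localhost:9200):

curl 'http://localhost:9200/_cluster/health?pretty'
curl 'http://localhost:9200/_cat/shards?v' | grep UNASSIGNED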

This is a real pain, as it's always when the developers need access that elasticsearch is borked.

It seems odd to me that there isn't an easier way to recover elasticsearch than deleting the offending indices. I've configured elasticsearch to use a single node, no replicas, and no network discovery, but every couple of days it falls over.

Am I wasting my time trying to run elasticsearch on a single server? Is it always going to keep falling over due to unassigned shards? Given what I use it for, deploying a full cluster seems like overkill.

Note: I am running this stack in Amazon EC2

Garreth McDaid

2 Answers


I've discovered after much suffering that the best way to run elasticsearch on a single server is to change the default setting of:

index.number_of_replicas: 1

to

index.number_of_replicas: 0

With 0 replicas, elasticsearch never tries to assign replica shards to another node, which removes the issue of unassigned shards and corrupted indices.
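One thing to be aware of: index.number_of_replicas in elasticsearch.yml only applies to indices created after the change. For indices that already exist with a replica configured, the replica count can also be dropped to 0 through the index settings API, roughly like this (assuming elasticsearch on localhost:9200):

curl -XPUT 'http://localhost:9200/_all/_settings' -H 'Content-Type: application/json' -d '{"index": {"number_of_replicas": 0}}'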

My full (stable) standalone, non-default elasticsearch config is:

node.max_local_storage_nodes: 1
index.number_of_replicas: 0

Note: this is a config for a log-reader setup only, not a full-scale production setup.

Garreth McDaid

Not sure why you're getting unassigned shards, especially with Logstash. I use curator to manage elasticsearch. My ELK stack runs in a single VM (for now), so it's plenty starved for power, but it still runs. I had to tweak the hell out of elasticsearch itself to optimize it for the VM. Key components for me were ES_HEAP_SIZE & MAX_OPEN_FILES.
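As a rough illustration (the exact file and values depend on the distro and packaging, so treat these as assumptions), those settings go in the service defaults file, e.g. /etc/default/elasticsearch on Debian/Ubuntu or /etc/sysconfig/elasticsearch on RHEL/CentOS:

# give elasticsearch roughly half of the VM's RAM, e.g. on a 2 GB instance
ES_HEAP_SIZE=1g
# raise the open-file limit so shard files don't exhaust it
MAX_OPEN_FILES=65535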

churnd