
I run a number of standalone Logstash servers to allow review of log files from web application servers.

One of these recently reported a Yellow cluster state due to unassigned shards. This is a common enough occurrence, which I usually deal with by deleting the most recent index and restarting Elasticsearch.

In this case, it didn't work. When I delete the indices (either via the API or simply by deleting the files from the file system) and restart Elasticsearch, the cluster state is initially green, but as soon as the first index is created it turns yellow, with precisely 5 unassigned shards.
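For reference, this is roughly what I'm doing to delete an index and check the cluster state afterwards (localhost:9200 and the index name are just placeholders for my setup):

curl -XDELETE 'http://localhost:9200/logstash-2014.01.01'
curl 'http://localhost:9200/_cluster/health?pretty'
curl 'http://localhost:9200/_cat/shards?v'

The last command is how I can see which shards end up unassigned once the first new index is created.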

This server had been working fine for several weeks and is not at all loaded. I've also checked that there are no other Elasticsearch servers in the CIDR block (it's in a VPC in Amazon AWS anyway).

I've turned on debug logging, but it's double Dutch to me. There are no references to shards not being able to be assigned.

Garreth McDaid

1 Answer


The easiest fix for this is to configure Elasticsearch so that it doesn't use any replicas:

index.number_of_replicas: 0

If Elasticsearch isn't trying to allocate replica shards to other nodes, it won't have any unassigned shards.
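As far as I know, index.number_of_replicas in elasticsearch.yml only acts as a default for newly created indices, so an index that is already yellow keeps its existing replica setting. As a rough sketch (localhost:9200 is a placeholder, and on newer versions you may also need -H 'Content-Type: application/json'), you can drop the replicas on all existing indices through the settings API, after which the cluster should go green immediately:

curl -XPUT 'http://localhost:9200/_settings' -d '{
  "index": { "number_of_replicas": 0 }
}'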

I'm not sure why the default configuration in Elasticsearch is

index.number_of_replicas: 1

Most people who are trying it out for the first time will run it on a single server, and then spend days trying to figure out why the cluster health goes yellow due to unassigned shards.

Garreth McDaid