
I am currently running Elasticsearch 7.3.2 on a Raspbian (Buster) instance on a Raspberry Pi 4+. It was running with a green status for a few days, processing files, but suddenly I noticed a yellow status. I looked into gc.log and the file was showing entries like:

Entering Safepoint region: GenCollectForAllocation
Pause Young (Allocation Failure)
Using 4 workers of 4 for Evaluation
Desired survivor size 3342336 bytes, new threshold 6 (max threshold 6)

I am trying to get this back to green, but I am not sure whether this is going to be a major issue or not. It does look like it leaves the safepoint region and then re-enters it within a fraction of a second.

Looking at my mounted NAS, I noticed that it is only 34% full, but I'm not sure of the best approach to solve this.

When I query Elasticsearch with `curl localhost:9200/_cat/nodes?pretty`, it returns:

{
  "error": {
    "root_cause": [{
      "type": "circuit_breaking_exception",
      "reason": "[parent] Data too large, data for [<http request>] would be [1059250992/1010.1mb], which is larger than the limit of [1013704294/966.7mb], real usage: [1059250992/1010.1mb], new bytes reserved: [0/0b], usages [request=0/0b, fielddata=0/0b, in_flight_requests=0/0b, accounting=818200108/780.2mb]",
      "bytes_wanted": 1059250992,
      "bytes_limit": 1013704294,
      "durability": "PERMANENT"
    }],
    ...
    "status": 429
  }
}

Is this something I can easily resolve with a limit setting in the Elasticsearch YAML file?
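On the circuit-breaker error itself: in 7.x the parent breaker limit defaults to 95% of the JVM heap (the ~966 MB limit in the error message implies a heap of roughly 1 GB), so the usual fix is to give the JVM more heap rather than tuning breaker limits in `elasticsearch.yml`. A sketch, assuming the Pi has RAM to spare; the 2 GB value is purely illustrative:

```
# config/jvm.options — keep Xms and Xmx equal
-Xms2g
-Xmx2g
```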


1 Answer


Yellow usually means you have all your primary shards, but some replica shards are unallocated. On a cluster with a single node, I wouldn't expect you to have replica shards at all. I would recommend looking at `_cat/shards` to make sure you only have primary shards and no replicas (it's in column 3: `p` or `r`).
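A quick way to do that check; against a live node you would pipe `_cat/shards` output through a filter, but the sketch below uses sample output (index name and values are illustrative) so the filtering step is reproducible:

```shell
# On a live cluster you would run:
#   curl -s 'localhost:9200/_cat/shards'
# Sample output stands in for it here.
cat <<'EOF' > shards.txt
my-index 0 p STARTED    1200 4.1mb 127.0.0.1 pi-node
my-index 0 r UNASSIGNED
EOF

# Column 3 is prirep (p = primary, r = replica), column 4 is the state;
# print only the unassigned replicas
awk '$3 == "r" && $4 == "UNASSIGNED" {print $1, $2, $3, $4}' shards.txt
```

If that prints any rows, those replicas are what is holding the cluster at yellow.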

  • I see 2 entries: a `p` STARTED and an `r` UNASSIGNED. The unassigned one has no additional information, though. – Fallenreaper Jan 03 '20 at 14:41
  • Not sure why there is an unassigned replica shard though. – Fallenreaper Jan 03 '20 at 14:51
  • You can just get rid of it by telling Elasticsearch you want 0 replicas on that index. [docs here](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-update-settings.html) – 9072997 Jan 05 '20 at 01:31
  • I see it in the docs you linked as the example, but there are no entries for the key `number_of_replicas` — do you have something better as proof? Right now I'm running a vanilla YAML file that I can change, which has no references to additional nodes. I can show you the file if you like as proof, but I'm trying to figure out not only how to resolve this, but how to prevent it from cropping up again later. – Fallenreaper Jan 06 '20 at 14:35
  • Different indices can have different replication configurations, so it is not a cluster-wide setting, and therefore does not reside in the YAML. If you are trying to prevent it from happening again, I would recommend looking at how that index was created. There is an `index.number_of_replicas` option, but this is only the default value for new indices, and can be overridden when creating an index. – 9072997 Jan 06 '20 at 14:50
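Putting the comment thread together: since replication is a per-index setting, the fix suggested above goes through the update-settings API, not the YAML file. A sketch, assuming the index is named `my-index` (substitute your own); the `curl` step needs a running node, so it is guarded here:

```shell
# Settings payload: drop the index to zero replicas so a single-node
# cluster can report green
cat <<'EOF' > settings.json
{"index": {"number_of_replicas": 0}}
EOF

# Apply it to the (hypothetical) index my-index; requires a live node
curl -s -X PUT 'localhost:9200/my-index/_settings' \
  -H 'Content-Type: application/json' \
  --data-binary @settings.json \
  || echo 'request failed (is Elasticsearch running on localhost:9200?)'
```

After the update, `_cat/shards` should no longer show an `r` row for that index, and cluster health should return to green.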