
I've got a Raspberry Pi 2 (latest Raspbian as of Apr 2015) setup that last week was running both ElasticSearch and Logstash on a test network (not a straightforward setup, but it was stable for over a week!). I rebooted the machine today and have been having a really hard time getting things running again; ES and LS will both run independently, but when I try to push LS output into ES, the ES instance dies without explanation. My goal is to get both running, with LS pumping data into ES via the standard elasticsearch output plugin.

ElasticSearch [v1.5.0]

I believe this is where the core problem is. ES can start up via service elasticsearch start and remains running, is accessible via HTTP requests to port 9200, and all signs of life seem healthy. As soon as something (anything, as far as I can tell) tries to write data to an index, the process dies and debug logs @ /var/log/elasticsearch/* don't contain anything related to service failure. I've tried inserting via logstash (see below) as well as with curl, both of which terminate the ES process. The curl command I'm running is curl -XPOST "http://localhost:9200/logstash-2015.04.05/records/" -d "{ \"type\" : \"specialRecord\" }".
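In case it helps with reproducing, a minimal way to catch anything the JVM prints on a fatal exit is to run ES in the foreground and trigger the failing write from a second shell. This is only a sketch; the binary path assumes the standard Debian package layout and may need adjusting:

# stop the service and run ES in the foreground so any fatal output hits the console
sudo service elasticsearch stop
sudo -u elasticsearch /usr/share/elasticsearch/bin/elasticsearch

# in a second shell, trigger the write that kills the process
curl -XPOST "http://localhost:9200/logstash-2015.04.05/records/" -d '{ "type" : "specialRecord" }'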

Logstash [v1.4.2]

I'm currently running with this simple config:

input {
    stdin { }
}

output {
    stdout { codec => rubydebug }
    elasticsearch {
        host    => '127.0.0.1'
        cluster => 'elasticsearch'
    }
}
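For completeness, this is how I exercise the config (a sketch; it assumes the package install under /opt/logstash and that the config above is saved as a hypothetical simple.conf):

# feed one line through stdin; it should appear on stdout via rubydebug
# and then be sent to ES, which is the point at which ES dies
echo 'hello from the pi' | /opt/logstash/bin/logstash -f simple.conf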

Other notes

Some things I've tried:

  • I've tried cranking up logging levels for ElasticSearch to DEBUG / TRACE and the output is remarkably uninteresting. Happy to provide logs if it'd be helpful.

  • I've tried giving ES 256MB and 512MB of heap space, which doesn't seem to affect anything (see the snippets after this list for where I'm setting it). I've also watched memory utilization throughout, and running out of memory doesn't appear to be the problem.

  • I've tried disabling multicast to weed out a bunch of networking variables, but that didn't seem to make a difference (again, see the snippets after this list).

  • I've ensured that the data directory for ES has plenty of space, write permissions, etc. ES creates subdirectories in the path.data directory when it loads, but I don't believe anything is ever written there, since after restarting the ES process the index stats report zero documents.
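For reference, these are roughly the settings I'm touching for the heap and multicast points above (file locations are the standard ones for the ES 1.5 Debian package; yours may differ):

# /etc/default/elasticsearch -- heap size used by the Debian init script
ES_HEAP_SIZE=256m

# /etc/elasticsearch/elasticsearch.yml -- disable multicast discovery (ES 1.x)
discovery.zen.ping.multicast.enabled: false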

I'm pretty stumped now and disappointed that nothing useful (or at least nothing I'm able to find) is being logged. Any ideas on what might be going on here?

anyweez
  • If you're not getting anything useful from the logs then the only option (other than compiling from source and adding more debug statements) seems to be using strace to watch system calls. That might give you a hint as to why elasticsearch is dying. To reduce volume, start as normal and then strace the running process just before you initiate the write (a sketch of this follows the comments). – Paul Haldane Apr 06 '15 at 11:28
  • Having a crash without any logs reminds me of JNI problems; isn't there a JVM process dump (`hs_err_PID.log`)? ES 1.5 uses a native library called Sigar for monitoring, and it may have problems with the Raspberry's ARM. Could you try running Sigar by itself? I would try upgrading to ES 1.5.2 or ES 2.0, which doesn't use Sigar anymore. – G Quintana Nov 03 '15 at 10:48
  • Have you turned off swap? – Rumbles Feb 13 '17 at 22:47
  • Elasticsearch recommends 8G ram to start with. I once ran it on a Raspberry Pi 3. It works, but you need to be a bit careful with the speed you send data in and also queries can take some time. – webwurst Mar 09 '19 at 10:48
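Following up on the strace suggestion above, a minimal sketch of attaching to the already-running process (the pgrep pattern assumes the stock ES 1.x bootstrap class name on the java command line; adjust it if it doesn't match):

# attach to the running ES JVM (follow threads, timestamp each call),
# then trigger the failing write from another shell
ES_PID=$(pgrep -f org.elasticsearch.bootstrap)
sudo strace -f -tt -o /tmp/es-strace.log -p "$ES_PID"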

1 Answer


You need more hardware

Your Raspberry Pi may be (considerably) underpowered for this workload.

I'm in no way an Elastic Stack expert, but I have set it up in several test scenarios and for limited/light production use. In my experience, while the initial setup requires relatively few resources, as the number of indexes grows the system generates significantly more disk IO and CPU load.

This is especially apparent after a restart while the system is recovering the shards. If your indexes are not too big, you could consider monthly buckets instead of the default daily buckets, which seems to help in this regard.
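For example, switching to monthly buckets is a one-line change in the Logstash elasticsearch output (a sketch based on the config in the question; "logstash-%{+YYYY.MM.dd}" is the plugin's default daily pattern):

output {
    elasticsearch {
        host    => '127.0.0.1'
        cluster => 'elasticsearch'
        # default is "logstash-%{+YYYY.MM.dd}" (daily); this rolls indexes monthly instead
        index   => "logstash-%{+YYYY.MM}"
    }
}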

Jens Ehrich