
I'm currently having an issue getting backups for ElasticSearch working correctly via its snapshot system. Per the documentation, a snapshot repository has to be registered first, using this command:

curl -XPUT 'http://$server_IP:9200/_snapshot/backup' -d '{
    "type": "fs",
    "settings": {
        "location": "/data/backup/elasticsearch/snapshots",
        "compress": true
    }
}'
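
(For clarity, $server_IP above is just a placeholder for the node's real address. Note that the shell does not expand variables inside single quotes, so if $server_IP were an actual shell variable, the URL would need double quotes, along the lines of:)

curl -XPUT "http://$server_IP:9200/_snapshot/backup" -d '{
    "type": "fs",
    "settings": {
        "location": "/data/backup/elasticsearch/snapshots",
        "compress": true
    }
}'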

However, when I run that command, I'm met with this error:

No handler found for uri [/_snapshot/backup] and method [PUT]

Searching for that error online, in any similar form, turns up little to no help. If I swap $server_IP for 127.0.0.1 or just 'localhost', the error changes to:

curl: (7) Failed to connect to 127.0.0.1 port 9200: Connection refused

OK, seems easy enough. Except that if I simply curl 127.0.0.1:9200 on its own, it responds fine. The error only appears when I reference the _snapshot endpoint while trying to create the repository.

I have a Samba directory set up and working, and the system is listening on :9200 (checked as shown below). The Samba directory has the correct permissions when tested from other systems. I'm running out of ideas as to what the real error is.
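
For reference, this is roughly how I'm confirming the listener (assuming ss is available; netstat -lntp shows the same thing):

ss -lntp | grep 9200
curl http://127.0.0.1:9200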

This is not a clustered system; it's acting as a single ElasticSearch master node/shard. I set up Samba more as a troubleshooting step, and because the ES docs say a shared storage directory is needed. Would it be possible to back up the ElasticSearch data directory simply using tar/gzip, along the lines of the sketch below? This is for a Graylog installation, so I need rolling backups of the inbound logs. If I could back up and restore those with a standard tar/gzip, I'd be a happy camper and could avoid the calls above. My only concern is how that data is treated when ES is initialized, loads its indexes, etc.
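
To make the tar/gzip idea concrete, this is the kind of thing I have in mind (a rough sketch; /var/lib/elasticsearch is an assumed data path for my install, and I'd stop ES first so the files don't change mid-archive):

# Flush in-memory segments so the on-disk files are current
# (assumes the node answers on $server_IP)
curl -XPOST "http://$server_IP:9200/_flush"

# Stop the node so nothing changes while the archive is written
sudo service elasticsearch stop

# Archive the data directory; /var/lib/elasticsearch is a guess at path.data
sudo tar -czf /data/backup/elasticsearch-$(date +%F).tar.gz /var/lib/elasticsearch

sudo service elasticsearch start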
