
I'm using Synology Hyper Backup to back up my NAS to AWS S3. To reduce cost I added a lifecycle rule to the S3 bucket that moves the data to AWS Glacier after a couple of days.
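For reference, a lifecycle rule of the kind described above looks roughly like this (bucket name, rule ID, and the number of days are placeholders, not my actual configuration):

```shell
# Sketch: transition all objects to Deep Archive after a few days.
# "my-bucket", "to-deep-archive" and "Days": 3 are placeholder values.
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-bucket \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "to-deep-archive",
      "Status": "Enabled",
      "Filter": {},
      "Transitions": [{"Days": 3, "StorageClass": "DEEP_ARCHIVE"}]
    }]
  }'
```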

Now I want to restore the data, so I need to reverse that step and bring everything back to S3 so that Synology's Hyper Backup can retrieve it.

I already clicked on the respective bucket -> initiate restore

It says that the restoration could take 12-24 hours; however, it has been days now and I see that the respective data still has storage class "Glacier Deep Archive".

Any idea what is going wrong?

This is a snapshot of the respective bucket. As one can see, two files are still marked as "Glacier Deep Archive" although I initiated the restore action multiple times for them. (screenshot)
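For reference, the restore state of a single object can be checked from the CLI: `head-object` returns a `Restore` field while a restore is running or after it completes. This is a sketch assuming a configured AWS CLI; the bucket and key names are placeholders:

```shell
# Interpret the "Restore" header returned by head-object:
#   ongoing-request="true"                      -> restore still running
#   ongoing-request="false", expiry-date="..."  -> restored copy available
#   (field absent)                              -> no restore requested
restore_status() {
  case "$1" in
    *'ongoing-request="true"'*)  echo "in-progress" ;;
    *'ongoing-request="false"'*) echo "restored" ;;
    *)                           echo "not-requested" ;;
  esac
}

# Illustrative usage (placeholder names, not run here):
#   aws s3api head-object --bucket my-bucket --key path/to/file \
#     --query Restore --output text
```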

Update
Here is a related question / answer on Stack Overflow (which seems to be less esoteric than Server Fault...)

Update 2
It seems the problem was that there were many more files in subfolders which I had overlooked. I'm currently trying to restore everything in the bucket recursively. Will update when finished.

mcExchange
  • I haven't actually ever had to do this, but this article may be useful. I _suspect_ based on what it's saying (in this case the documentation isn't as good as it could be) the object is restored in the same place / object, but you can see a "restoration expiry date" in the properties of the object. A screenshot from AWS would be quite helpful. https://aws.amazon.com/premiumsupport/knowledge-center/restore-glacier-tiers/ – Tim Sep 14 '21 at 08:35

2 Answers


So the problem was that there were numerous files in subfolders which I had overlooked. Using the AWS CLI I could finally initiate a restore on all of them, and afterwards Synology's Hyper Backup restoration worked normally. Here are the commands to restore all files from Glacier back to S3 using the AWS CLI:

# Create a text file listing all objects still in Deep Archive.
# Selecting .[Key] prints one key per line, which is more robust
# than awk field-splitting for keys containing spaces:
aws s3api list-objects-v2 \
  --bucket my-bucket \
  --query "Contents[?StorageClass=='DEEP_ARCHIVE'].[Key]" \
  --output text > filelist_of_glacier_files.txt

# Initiate a restore for every file in that list:
while IFS= read -r filename; do
  aws s3api restore-object \
    --bucket my-bucket --key "$filename" \
    --restore-request '{"Days":25,"GlacierJobParameters":{"Tier":"Standard"}}'
done < filelist_of_glacier_files.txt

Afterwards, Synology Hyper Backup's restore worked normally (after waiting roughly 24 hours for the Glacier restore to complete).
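To know when that wait is over, one can poll the objects from the filelist and count how many are still mid-restore. This is a sketch with placeholder names; only the small counting helper is concrete, the AWS calls are shown as illustrative usage:

```shell
# Helper: count in-progress restores among "Restore" header values
# piped on stdin. grep -c counts matching lines; "|| true" keeps the
# function from failing when the count is zero.
count_pending() { grep -c 'ongoing-request="true"' || true; }

# Illustrative usage (placeholder bucket name, not run here):
#   while IFS= read -r key; do
#     aws s3api head-object --bucket my-bucket --key "$key" \
#       --query Restore --output text
#   done < filelist_of_glacier_files.txt | count_pending
```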

mcExchange

That is a bit odd. Thank you for the workaround.

I've also had some issues previously backing up to Glacier. The backup seemed to get stuck after a while, once the size of the backup (even though incremental) got too large.

Eventually I just stopped backing up to Glacier; it simply did not make sense.
