
I'm trying to automate an archiving function: if a file is older than 30 days and is encrypted, it gets moved to S3 to free up disk space on the server. The issue I'm having now is that the AWS CLI does not include the directory path.

find /mount/path/ -atime +30 -name '*gpg*' -o -name '*pgp*' -exec aws2 s3 cp {} s3://bucket-for-archive/ --acl bucket-owner-full-control --sse --dryrun \;

(dryrun) upload: ../../mount/path/more/path/file0001 to s3://bucket-for-archive/file0001.pgp
(dryrun) upload: ../../mount/path/more/path/file0002 to s3://bucket-for-archive/file0002.pgp

Is it possible to upload each file to the bucket with its full path? The desired output would look something like this:

(dryrun) upload: ../../mount/path/more/path/file0001 to s3://bucket-for-archive/mount/path/more/path/file0001.pgp
(dryrun) upload: ../../mount/path/more/path/file0002 to s3://bucket-for-archive/mount/path/more/path/file0002.pgp
RunThor

1 Answer


This appears to work: using {} a second time appends the directory and file path to the S3 destination as well.

find /mount/path/ -atime +30 -name '*gpg*' -o -name '*pgp*' -exec aws2 s3 cp {} s3://bucket-for-archive{} --acl bucket-owner-full-control --sse --dryrun \;
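One caveat: in find, -o binds more loosely than the implicit -and, so without parentheses the -atime test and the -exec action apply only to the *pgp* branch; files matching only *gpg* would be matched by name but never uploaded. The globs should also be quoted so the shell cannot expand them before find sees them. Here is a minimal self-contained sketch of the grouping behavior, using echo in place of aws2 s3 cp and a made-up temp directory, since the real mount point and bucket are specific to the question:

```shell
# Demo of -name grouping: without \( \), -exec would fire only for
# the *pgp* branch, because -o binds more loosely than the implicit -and.
demo=$(mktemp -d)
mkdir -p "$demo/more/path"
touch "$demo/more/path/file0001.gpg" "$demo/more/path/file0002.pgp"

# echo stands in for: aws2 s3 cp {} s3://bucket-for-archive{} ...
# Both the .gpg and the .pgp file are printed; ungrouped, only .pgp would be.
find "$demo" \( -name '*gpg*' -o -name '*pgp*' \) -exec echo upload: {} \;
```

Applied to the real command, the grouped form would read (you may also want -mtime rather than -atime if "older than 30 days" should mean last modified rather than last accessed):

find /mount/path/ -atime +30 \( -name '*gpg*' -o -name '*pgp*' \) -exec aws2 s3 cp {} s3://bucket-for-archive{} --acl bucket-owner-full-control --sse --dryrun \;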
RunThor