
On our various servers that run a product that includes a database, I have a scheduled job that runs a small script to back up the database, compress the file, and send it to S3. I have one version of the script for Windows / SQL Server and a second for Ubuntu / Cassandra, but both use the aws s3 cp command to ship the file. I noticed today that the Windows / SQL Server backups have not happened since mid-February. There is no error; the file simply never shows up in the bucket.
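
For reference, the Windows script boils down to the steps below (a simplified PowerShell sketch; sqlcmd and Compress-Archive stand in for our actual backup and compression steps, and all names are placeholders):

# 1. Back up the database to a local .bak file (placeholder names throughout)
sqlcmd -S localhost -Q "BACKUP DATABASE [DatabaseName] TO DISK = 'C:\Backups\DatabaseName.bak'"

# 2. Compress the backup before shipping it
Compress-Archive -Path 'C:\Backups\DatabaseName.bak' -DestinationPath 'C:\Backups\DatabaseName.zip' -Force

# 3. Ship the compressed file to S3 -- this is the step that silently stopped producing objects
aws s3 cp 'C:\Backups\DatabaseName.zip' "s3://[prefix]-database-backups/[ServerName]/[DatabaseName]/DatabaseName.zip"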

I was able to reproduce this locally on my Windows laptop by running the following command:

aws s3 cp .\EmptyFile "s3://[prefix]-database-backups/[ServerName]/[DatabaseName]/emptyfile"
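
One way I confirm the object is missing is by listing the target prefix afterwards (same placeholders); nothing new appears:

aws s3 ls "s3://[prefix]-database-backups/[ServerName]/[DatabaseName]/"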

By chance, I noticed that the Ubuntu / Cassandra path does not include the database name. Running that command works:

aws s3 cp .\EmptyFile "s3://[prefix]-database-backups/[ServerName]/emptyfile"

Running with --debug did not indicate an error. In fact, the debug logs for the failing and the working commands are suspiciously similar.
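
Since nothing is printed, a quick way to confirm the CLI itself believes it succeeded is to check the exit code (PowerShell sketch; $LASTEXITCODE holds the exit code of the last native command):

# Run the failing copy, then report what the CLI returned
aws s3 cp .\EmptyFile "s3://[prefix]-database-backups/[ServerName]/[DatabaseName]/emptyfile"
Write-Host "aws exited with code $LASTEXITCODE"  # 0 means the CLI reported success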

Why would the backups to a path with an extra directory level suddenly start failing?

Phillip
  • Maybe the directory must be explicitly created before adding files to it? Maybe the script has S3 permission to add files but not to create directories? – user5994461 Apr 22 '20 at 19:14
  • The directory already has over 300 backups, all of them created by this script / job before February. – Phillip Apr 22 '20 at 19:15
  • S3 does not have real directories, so your issue is likely not that. Please check your error messages. – M. Glatki Apr 23 '20 at 09:59
  • No error messages are returned when running manually or through the Windows scheduler, even when running the commands with --debug. – Phillip Apr 23 '20 at 12:17

0 Answers