2

I'm unsure what's going on here:

I've got a backup script which runs fine under root. It produces a >300kb database dump in the proper directory.

When running it as a cron job with exactly the same command however, an empty gzip file appears with nothing in it.

The cron log shows no error, just that the command has been run.

This is the script:

#! /bin/bash

DIR="/opt/backup"
YMD=$(date "+%Y-%m-%d")
su -c "pg_dump -U postgres mydatabasename | gzip -6 > "$DIR/database_backup.$YMD.gz" " postgres

# delete backup files older than 60 days
OLD=$(find $DIR -type d -mtime +60)
if [ -n "$OLD" ] ; then
    echo deleting old backup files: $OLD
    echo $OLD | xargs rm -rfv
fi

When changed to:

 pg_dump -U postgres mydatabasename | gzip -6 > "$DIR/database_backup.$YMD.gz"

The same thing happens.

And the cron job:

01 10 * * * root sh /opt/daily_backup_script.sh

It produces a database_backup file, just an empty one. Anyone know what's going on here?

edit:

Ok, simplified to this but it's still not working via cron

#! /bin/bash

DIR="/opt/backup"
YMD=$(date "+%Y-%m-%d")

pg_dumpall -U postgres > "$DIR/database_backup.$YMD"

And

01 10 * * * root /opt/daily_backup_script.sh
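Since the cron log only records that the job ran, redirecting the job's own stdout/stderr to a file is one way to surface errors cron would otherwise swallow (the log path here is just an illustration):

```
01 10 * * * root /opt/daily_backup_script.sh >> /var/log/daily_backup.log 2>&1
```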
user705142
  • Try changing the double quotes like this: su -c 'pg_dump -U postgres mydatabasename | gzip -6 > "$DIR/database_backup.$YMD.gz" ' postgres – Alan Kuras Apr 04 '12 at 05:45
  • You don't need to issue the "sh" command in the cron job, because the shebang already declares that the script should be run in bash. I don't know if it wasn't supposed to be #!/bin/bash without a space. – Alan Kuras Apr 04 '12 at 05:45
  • The last thing that comes to mind is that you don't need to issue the su command in the cron job file, as you are already running this as root – Alan Kuras Apr 04 '12 at 05:50
  • Ok - done both of those things, still the same I'm afraid. Also removed the su.. still nothing, argh! – user705142 Apr 04 '12 at 05:51
  • Possible duplicate of [Why is my crontab not working, and how can I troubleshoot it?](https://serverfault.com/questions/449651/why-is-my-crontab-not-working-and-how-can-i-troubleshoot-it) – Jenny D Jun 09 '17 at 15:56
  • This can happen if the permissions on the target file (if it exists) are not properly set. My `/etc/crontab` entry `5 1 * * * postgres /usr/bin/pg_dumpall > /mnt/Vancouver/programming/rdb/postgres/bak/pg_dumpall_dump` -- which had been running fine previously -- was failing. I had updated Postgres and manually backed up my database, hence my `pg_dumpall_dump` file permissions were `root:root`. The folder permissions and file ownership needed to be `postgres:victoria`: on the parent folder, `sudo chown -R postgres:victoria /mnt/Vancouver/programming/rdb/postgres/bak/` – Victoria Stuart Oct 29 '20 at 20:58
  • ... I meant to add: before correcting those permissions, the files were being created / timestamped by cron, but were empty (zero byte) -- analogous to what the OP reported. – Victoria Stuart Oct 29 '20 at 21:10

2 Answers

6

You need to specify the full path to pg_dump -- cron runs its jobs with a very restrictive PATH by default.
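A sketch of that fix -- note the pg_dump location below is an assumption; check the real path on your system first with `command -v pg_dump`:

```shell
#!/bin/bash
# Either give the script a sane PATH so bare command names resolve...
export PATH=/usr/local/bin:/usr/bin:/bin
# ...or hard-code the absolute path so cron's PATH doesn't matter.
# /usr/bin/pg_dump is a hypothetical location -- verify it on your machine.
PG_DUMP=/usr/bin/pg_dump
echo "would run: $PG_DUMP -U postgres mydatabasename"
```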

womble
0

Assumptions:

I assume that your credentials for the Postgres database are stored in ~/.pgpass, that the file's permissions are set to 0600, and that you are using the PG* environment variables.
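For reference, a minimal ~/.pgpass setup looks like this. The host, database, and password are placeholders; the example uses the PGPASSFILE variable (which libpq honors) so it can point anywhere:

```shell
# Each .pgpass line is hostname:port:database:username:password
export PGPASSFILE="${PGPASSFILE:-$HOME/.pgpass}"
echo 'localhost:5432:mydatabasename:postgres:secret' > "$PGPASSFILE"
chmod 0600 "$PGPASSFILE"   # libpq ignores the file unless it is 0600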

Reason:

The reason your backup command works (creates a backup of the database) when run from the terminal, but not when run from crontab (creates an empty file), is the environment.

Cause:

When you run a command from crontab, even when you force it to run as the desired user (e.g. su - root -c 'my_awesome_command'), the environment variables for that user are different from the environment variables set when the user is logged in at a terminal.

Solution:

I fixed this problem by adding the specific user's environment variables to the /etc/environment file, by running env >> /etc/environment when my backup script loads (actually when my container loads, since I am working with Docker).
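A gentler variant of the same idea, if you would rather not append to /etc/environment: dump the relevant variables to a file once from an interactive shell, then source that file at the top of the cron script. The file path and the variable filter below are assumptions, and this simple approach presumes the values contain no spaces or newlines:

```shell
#!/bin/bash
# One-time step, run as the backup user in a normal login shell:
env | grep -E '^(PATH|PG)' > /tmp/backup_env   # keep PATH and PG* variables only

# Then, at the top of the cron script, load them back:
set -a            # auto-export everything sourced below
. /tmp/backup_env
set +a
echo "PATH restored: $PATH"
```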