
I have a script, run as a cron job on my server, that backs up system files with tar. Every time the script fires, it completes the task successfully. All the backups were tested and appear to be intact, not corrupted.

But while it runs, the system stalls and freezes for a while, and my commands become laggy and slow. It's the same behaviour I see when running the graphical desktop. So, why does the system stall after tar runs?

This is the command my crontab script runs for the tar backup:

tar -zcvf tarbackup.tar.gz --one-file-system \
--exclude=/run \
--exclude=/tmp \
--exclude=/home \
--exclude='*.system.tar.gz' \
--exclude='*.home.tar.gz' \
/

EDIT:

To clear the issue, I had to reboot every time. I'd like to understand why it happens, so I can find a way to run the tar backup without having to reboot the system afterwards.

Faron

2 Answers


The stall occurs while dirty pages are being written to disk; you can get a quick introduction to the problem in this LWN article. Basically, the default limit on the amount of memory used to cache writes is far too high. Try setting /proc/sys/vm/dirty_background_bytes to 104857600 and /proc/sys/vm/dirty_bytes to 209715200. You can apply this for the current boot by running sysctl, or permanently by editing /etc/sysctl.conf.
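For example, a minimal sketch using the values suggested above (run as root; sysctl -w takes effect immediately, for the current boot only):

sysctl -w vm.dirty_background_bytes=104857600   # kernel flusher threads start writeback at ~100 MiB of dirty data
sysctl -w vm.dirty_bytes=209715200              # writing processes are throttled into writeback at ~200 MiB

To persist the change across reboots, add the equivalent lines to /etc/sysctl.conf:

vm.dirty_background_bytes = 104857600
vm.dirty_bytes = 209715200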

sciurus

This answer avoids having to change the dirty_bytes/dirty_background_bytes system-wide settings, which could affect other applications even when no backup is running.

It's a bit of a hack, to be honest, but I'll leave it here in case it's useful to you.

tar -zcv -f - --one-file-system \
--exclude=/run \
--exclude=/tmp \
--exclude=/home \
--exclude='*.system.tar.gz' \
--exclude='*.home.tar.gz' \
/ | \
dd bs=2048k oflag=sync of=tarbackup.tar.gz
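
The design choice here: oflag=sync makes dd open the output with synchronous writes, so each 2 MiB block is flushed to disk before the next one is written. Dirty pages therefore never pile up much beyond the block size, and the rest of the system stays responsive, at the cost of a somewhat slower backup.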
Matthew Ife