
I am currently using a duplicity script to back up my 110G/2T CentOS server to a 2T SFTP server.

In the 4 days since it started, duplicity has backed up just 90G. That is not a problem. The main problem is that duplicity has generated nearly 600G of cache at "/user/.cache/duplicity". This size is not normal, so what should I do? Will duplicity shrink or remove these cache files and folders when it finishes the task? Will duplicity back up its own cache too (I did not exclude the /user folder)?

Additional info: I am using a Hetzner server, and this is the backup script I am using: https://wiki.hetzner.de/index.php/Duplicity_Script/en

In addition, I excluded only the proc, sys and dev directories from the root (and backed up everything else starting from the root, because I wanted a full server backup).
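
For reference, the invocation is roughly equivalent to the following (only a sketch; the user, host and target path are placeholders, and the real options are in the linked Hetzner script):

    export PASSPHRASE='...'   # GPG passphrase used by the script
    duplicity \
        --exclude /proc --exclude /sys --exclude /dev \
        / sftp://backupuser@backup-host/server-backup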

Bahadir Tasdemir

2 Answers


According to the mailing list

you will have to manually exclude it ..

it holds your backup chains' index files (the table of contents of the backup repository). Caching them locally speeds up operations like status, incremental backup and others. These operations need to know what is already backed up in order to work. If the index files are cached, they do not need to be transferred and decrypted again and again every time.

.. ede
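
In other words, add the cache directory to the backup job's exclude list, for example (a sketch only; adjust the path to match your actual "/user/.cache/duplicity" and keep the rest of your options):

    duplicity \
        --exclude /user/.cache/duplicity \
        --exclude /proc --exclude /sys --exclude /dev \
        / sftp://backupuser@backup-host/server-backup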

Apart from that, it seems to be a long-standing bug.

On the Debian bug tracker, they recommend running:

    duplicity cleanup --extra-clean --force ....

Warning: The suggested --extra-clean option is dangerous and can bite you very hard. It makes backups unrestorable by usual means.
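
If you do decide to run it (on a duplicity version that still offers --extra-clean), it takes the same target URL as your backup, for example (hypothetical target):

    duplicity cleanup --extra-clean --force sftp://backupuser@backup-host/server-backup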

Andrew Schulman
Marki
  • Thank you very much for your answer, so I must exclude those folders. What I must ask now is about the command you gave me, "duplicity cleanup --extra-clean --force ....": must I run it after the backup process ends, or every time I need it? – Bahadir Tasdemir Apr 07 '16 at 06:58
  • About that recommendation: be VERY sure about how you handle backups older than the current active backup chain (1 week?) before PURGING metadata with --extra-clean. You will need that old metadata to restore the old backups. – user18099 Jun 26 '18 at 12:34

When we started cleaning up very old backups on the remote side (S3), the duply commands started to delete the very old local metadata automatically as well.

That is, we now keep backups for only x months, and the local metadata cache shrank accordingly.
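
With plain duplicity the equivalent is the remove-older-than command (a sketch with a hypothetical 6-month retention and the placeholder target from above; duply's purge command wraps the same operation using the MAX_AGE setting in its profile):

    # Keep only the last ~6 months of backup chains; older sets are deleted
    # remotely, and their cached metadata disappears locally on the next run.
    duplicity remove-older-than 6M --force sftp://backupuser@backup-host/server-backup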

user18099