31

Maybe this will sound like a dumb question, but the way I'm trying to do it doesn't work.

I'm on a live CD, the drive is unmounted, etc.

When I do the backup this way:

sudo dd if=/dev/sda2 of=/media/disk/sda2-backup-10august09.ext3 bs=64k

...normally it would work, but I don't have enough space on the external HD I'm copying to (it ALMOST fits). So I wanted to compress, like this:

sudo dd if=/dev/sda2 | gzip > /media/disk/sda2-backup-10august09.gz

...but I got "Permission denied". I don't understand.

chris
  • 11,784
  • 6
  • 41
  • 51
Phil
  • 1,839
  • 6
  • 27
  • 33
  • 2
    Don't. This is not a backup. Check the 'dump' and 'restore' commands. – Juliano Aug 10 '09 at 16:07
  • Or tar or cpio.... – chris Aug 10 '09 at 17:06
  • 2
    Juliano, what do you mean by 'this is not backup'? – Phil Aug 10 '09 at 21:26
  • 5
    This is not a backup because backups are serious, well-structured, and use proper tools intended for creating backups. You are just making a copy of the raw data of a partition. To restore it, you will need another partition with the same geometry, which is not guaranteed. Also, if you damage a single block of your archive (superblock, inode tables, root directory, etc.), you risk losing all your data. With a proper backup this wouldn't happen. – Juliano Aug 11 '09 at 02:09
  • 10
    "To restore this data, you will need another partition with the same geometry, which is not guaranteed" Why would he need that, can't he mount the partition image on a loopback device? – Kyle Brandt Aug 12 '09 at 13:54
  • ...what would you recommend instead? – Phil Aug 13 '09 at 17:16
  • This is not a backup for another reason: data integrity. Some of the data in open files is in cache, and some is on disk. If Linux writes to a file after you have read part of it, you end up with a corrupt copy: the first part is the old version and the rest is the new version. You can only do this kind of backup safely if the partition is not mounted, or is mounted read-only. – ThoriumBR Sep 03 '14 at 21:02

4 Answers

47

Do you have write access to the sda2-backup...gz file? sudo applies only to the command that immediately follows it, not to the redirection, which is performed by your (non-root) shell. If you want the redirection to happen as root too, run a shell as root so all of its child processes are root as well:

sudo bash -c "dd if=/dev/sda2 | gzip > /media/disk/sda2-backup-10august09.gz"

Alternatively, you could mount the external disk with the uid/gid mount options so you have write permission as your ordinary user (those options exist only for filesystems such as FAT or NTFS that don't store Unix ownership). Or, for a filesystem like ext3, use root to create a folder in /media/disk that you have permission to write to.
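Another sudo-friendly pattern (a sketch, reusing the question's paths) is to let a root-owned tee do the writing, so no shell redirection ever happens as your ordinary user. The rehearsal below shows the same pipeline shape on a scratch file, with no root required:

```shell
# Real command (needs root and the real device; shown for shape only):
#   sudo dd if=/dev/sda2 bs=64k | gzip | sudo tee /media/disk/sda2-backup-10august09.gz > /dev/null
# tee runs under sudo, so it opens the destination file as root;
# the `> /dev/null` merely silences tee's copy to stdout.

# Rehearsal on a scratch file (no root needed):
printf 'hello from sda2\n' > /tmp/fake-partition
dd if=/tmp/fake-partition bs=64k 2>/dev/null | gzip | tee /tmp/sda2-backup.gz > /dev/null
gzip -t /tmp/sda2-backup.gz && echo "archive OK"
```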

Other information that might help:

  • The block size mostly matters for speed. The default is 512 bytes, which you want to keep for the MBR and for floppy disks. Larger sizes, up to a point, should speed up the operation; think of it as analogous to a buffer. Here is a link to someone who did speed benchmarks with different block sizes, but do your own testing, as performance depends on many factors. Also take a look at the other answer by andreas.
  • If you want to accomplish this over the network with ssh and netcat so space may not be as big of an issue, see this serverfault question.
  • Do you really need an image of the partition, there might be better backup strategies?
  • dd is a very dangerous command: swap of for if and you end up overwriting what you are trying to back up! Notice that the o and i keys are next to each other, so be very, very careful.
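To see the block-size effect yourself, here is a rough timing sketch against a scratch file (paths are made up for illustration; substitute your own device for a realistic test):

```shell
# Create a 64 MiB scratch file, then read it back with different block
# sizes; dd's final status line (on stderr) reports the throughput.
dd if=/dev/zero of=/tmp/bs-test bs=1M count=64 2>/dev/null
for bs in 512 64k 1M; do
  printf 'bs=%s: ' "$bs"
  dd if=/tmp/bs-test of=/dev/null bs="$bs" 2>&1 | tail -n 1
done
# keep one status line around for inspection
result=$(dd if=/tmp/bs-test of=/dev/null bs=1M 2>&1 | tail -n 1)
rm -f /tmp/bs-test
```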
Kyle Brandt
  • 82,107
  • 71
  • 302
  • 444
  • i'll try this. how do i also make it bs=64k? (and do i have to?) – Phil Aug 10 '09 at 15:40
  • The bs=64k only makes the transfer go faster because dd will be reading blocks of 64k each instead of the default block size of (I don't remember). – chris Aug 10 '09 at 16:44
  • What chris said, and if you want to include it put it after dd and before the pipe symbol ( | ) as it is an argument to dd. – Kyle Brandt Aug 10 '09 at 16:49
  • 1
    I also occasionally will use "sudo tee $file > /dev/null" in a pipeline to allow writing to a file that my user account doesn't have access too. – Rik Schneider Sep 30 '15 at 04:25
8

In the first case, dd is running as root. In the second case, dd is running as root but gzip is running as you.

Change the permissions on /media/disk, give yourself a root shell, or run the gzip as root too.

chris
  • 11,784
  • 6
  • 41
  • 51
7

In addition, you can replace gzip with bzip2 --best for much better compression:

sudo dd if=/dev/sda2 | bzip2 --best > /media/disk/$(date +%Y%m%d_%H%M%S)_sda2-backup.bz2
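Restoring such an image later is just the reverse pipeline (the filename below is an assumption); the rehearsal does a round trip on a scratch file to confirm the compressed copy decompresses back bit-for-bit:

```shell
# Restore (needs root and the real device; filename is hypothetical):
#   bzip2 -dc /media/disk/sda2-backup.bz2 | sudo dd of=/dev/sda2 bs=64k

# Round-trip rehearsal on a scratch file:
printf 'partition payload\n' > /tmp/part.img
dd if=/tmp/part.img 2>/dev/null | bzip2 --best > /tmp/part.img.bz2
bzip2 -dc /tmp/part.img.bz2 > /tmp/part.restored
cmp /tmp/part.img /tmp/part.restored && echo "round trip OK"
```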
Andrew Schulman
  • 8,561
  • 21
  • 31
  • 47
dmityugov
  • 756
  • 4
  • 5
  • 7
    At a cost of lots of time. See http://changelog.complete.org/archives/910-how-to-think-about-compression "How to think about compression" for more details. – Bill Weiss Aug 10 '09 at 21:03
  • @BillWeiss: Thanks for your comment, very interesting read! – andreas Oct 18 '13 at 09:06
  • compression : lzma > bzip2 > gzip .. speed: gzip > bzip2 > lzma . Unless you are publishing the disk image on internet, there is not much benefit for the time , CPU power and memory you are spending for a better compression. –  Oct 19 '15 at 07:23
  • 1
    And nowadays zstd is pretty good option. – Smar Mar 04 '21 at 08:12
  • You can use status=progress after "if=" to add some progress tracking – William Desportes Aug 04 '21 at 10:46
4
sudo dd if=/dev/sda1 bs=32M | 7z a -si  /data/$(date +%Y%m%d_%H%M%S)_sda1-backup.tar.7z

7z utilizes all CPU cores. Also, setting bs=32M (or some other non-default value) may significantly speed up the process.

Test results:

root@pentagon:~# dd if=/dev/sda1 | bzip2 -c > /data/$(date +%Y%m%d_%H%M%S)_pentagon-backup-sda1.bz2
12288000+0 records in
12288000+0 records out
6291456000 bytes (6.3 GB) copied, 2033.77 s, 3.1 MB/s
root@pentagon:~# dd if=/dev/sda1 bs=32M | 7z a -si  /data/$(date +%Y%m%d_%H%M%S)_pentagon-backup-sda1.tar.7z

7-Zip (a) [64] 16.02 : Copyright (c) 1999-2016 Igor Pavlov : 2016-05-21
p7zip Version 16.02 (locale=C,Utf16=off,HugeFiles=on,64 bits,4 CPUs x64)

Creating archive: /data/20210818_104748_pentagon-backup-sda1.tar.7z

Items to compress: 1

5917M + [Content]187+1 records in
187+1 records out
6291456000 bytes (6.3 GB) copied, 1393.34 s, 4.5 MB/s
                   
Files read from disk: 1
Archive size: 818956969 bytes (782 MiB)
Everything is Ok

Almost 2 times faster.

root@pentagon:~# ls -Alh /data
....
-rw-r--r-- 1 root root            1.2G Aug 18 10:40 20210818_100651_pentagon-backup-sda1.bz2
-rw-r--r-- 1 root root            782M Aug 18 11:11 20210818_104748_pentagon-backup-sda1.tar.7z
....

And, almost 2 times smaller.

Credits to Igor Pavlov for that.
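For completeness, restoring the 7z image is the reverse pipeline: despite the .tar.7z name the archive holds a single raw stream, so you extract to stdout and dd it back. The rehearsal uses xz (same LZMA family and usually preinstalled; substitute 7z a -si / 7z x -so where p7zip is available):

```shell
# Restore (needs root and the real device; filename from the test run above):
#   7z x -so /data/20210818_104748_pentagon-backup-sda1.tar.7z | dd of=/dev/sda1 bs=32M

# Round-trip rehearsal with xz on a scratch file:
printf 'raw sda1 bytes\n' > /tmp/sda1.img
dd if=/tmp/sda1.img bs=1M 2>/dev/null | xz -c > /tmp/sda1.img.xz
xz -dc /tmp/sda1.img.xz > /tmp/sda1.out
cmp /tmp/sda1.img /tmp/sda1.out && echo "restore OK"
```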