dd conv=sync,noerror (or conv=noerror,sync) corrupts your data.
Depending on the I/O error encountered and the blocksize used (is it larger than the physical sector size?), input and output do not actually stay in sync: data ends up at the wrong offsets, which makes the copy useless for filesystem images and anything else where offsets matter.
A lot of places recommend conv=noerror,sync when dealing with bad disks. I used to make the same recommendation myself, and it did work for me when I had to recover a bad disk some time ago. However, testing suggests that it is not actually reliable at all.
Use losetup and dmsetup to create an A-error-B device:
truncate -s 1M a.img b.img
A=$(losetup --find --show a.img)
B=$(losetup --find --show b.img)
i=0 ; while printf "A%06d\n" $i ; do i=$((i+1)) ; done > $A
i=0 ; while printf "B%06d\n" $i ; do i=$((i+1)) ; done > $B
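Since each line is exactly 8 bytes and the images are 1 MiB, the expected contents can be computed up front (a quick sanity check with shell arithmetic; it assumes the writes stop exactly at the end of the device):

```shell
# 1 MiB / 8 bytes per line = 131072 lines per image
echo $(( 1048576 / 8 ))
# so the last full line written to a.img should be:
printf "A%06d\n" $(( 1048576 / 8 - 1 ))
```

This prints 131072 and A131071.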
The A, B loop devices look like this:
# head -n 3 $A $B
==> /dev/loop0 <==
A000000
A000001
A000002
==> /dev/loop1 <==
B000000
B000001
B000002
So it's A, B with incrementing numbers which will help us to verify offsets later.
Now to put a read error in between the two, courtesy of Linux device mapper:
# dmsetup create AerrorB << EOF
0 2000 linear $A 0
2000 96 error
2096 2000 linear $B 48
EOF
This example creates AerrorB as 2000 sectors of A, followed by 96 (= 2*48) sectors of error, followed by 2000 sectors of B. The error zone swallows the last 48 sectors of A and the first 48 sectors of B, which is why the B segment starts at B's sector 48.
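The table's sector counts should add up to 2 MiB; plain shell arithmetic confirms that before even asking the kernel:

```shell
echo $(( 2000 + 96 + 2000 ))           # total sectors: 4096
echo $(( (2000 + 96 + 2000) * 512 ))   # total bytes: 2097152 (2 MiB)
```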
Just to verify:
# blockdev --getsz /dev/mapper/AerrorB
4096
# hexdump -C /dev/mapper/AerrorB
00000000 41 30 30 30 30 30 30 0a 41 30 30 30 30 30 31 0a |A000000.A000001.|
00000010 41 30 30 30 30 30 32 0a 41 30 30 30 30 30 33 0a |A000002.A000003.|
[...]
000f9fe0 41 31 32 37 39 39 36 0a 41 31 32 37 39 39 37 0a |A127996.A127997.|
000f9ff0 41 31 32 37 39 39 38 0a 41 31 32 37 39 39 39 0a |A127998.A127999.|
000fa000
hexdump: /dev/mapper/AerrorB: Input/output error
So it reads up to A127999: each line is 8 bytes, so that totals 1024000 bytes, which is exactly our 2000 sectors of 512 bytes. Everything seems to be in order...
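That boundary line can be double-checked with arithmetic alone (no devices required):

```shell
echo $(( 2000 * 512 ))                       # readable bytes before the error: 1024000
printf "A%06d\n" $(( 2000 * 512 / 8 - 1 ))   # last complete 8-byte line: A127999
```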
Will it blend?
for bs in 1M 64K 16K 4K 512 42
do
dd bs=$bs conv=noerror,sync if=/dev/mapper/AerrorB of=AerrorB.$bs.gnu-dd
busybox dd bs=$bs conv=noerror,sync if=/dev/mapper/AerrorB of=AerrorB.$bs.bb-dd
done
ddrescue /dev/mapper/AerrorB AerrorB.ddrescue
Results:
# ls -l
-rw-r--r-- 1 root root 2113536 May 11 23:54 AerrorB.16K.bb-dd
-rw-r--r-- 1 root root 2064384 May 11 23:54 AerrorB.16K.gnu-dd
-rw-r--r-- 1 root root 3145728 May 11 23:54 AerrorB.1M.bb-dd
-rw-r--r-- 1 root root 2097152 May 11 23:54 AerrorB.1M.gnu-dd
-rw-r--r-- 1 root root 2097186 May 11 23:54 AerrorB.42.bb-dd
-rw-r--r-- 1 root root 2048004 May 11 23:54 AerrorB.42.gnu-dd
-rw-r--r-- 1 root root 2097152 May 11 23:54 AerrorB.4K.bb-dd
-rw-r--r-- 1 root root 2097152 May 11 23:54 AerrorB.4K.gnu-dd
-rw-r--r-- 1 root root 2097152 May 11 23:54 AerrorB.512.bb-dd
-rw-r--r-- 1 root root 2097152 May 11 23:54 AerrorB.512.gnu-dd
-rw-r--r-- 1 root root 2162688 May 11 23:54 AerrorB.64K.bb-dd
-rw-r--r-- 1 root root 2097152 May 11 23:54 AerrorB.64K.gnu-dd
-rw-r--r-- 1 root root 2097152 May 11 23:54 AerrorB.ddrescue
From the file sizes alone you can tell things are wrong for some blocksizes.
Checksums:
# md5sum *
8972776e4bd29eb5a55aa4d1eb3b8a43 AerrorB.16K.bb-dd
4ee0b656ff9be862a7e96d37a2ebdeb0 AerrorB.16K.gnu-dd
7874ef3fe3426436f19ffa0635a53f63 AerrorB.1M.bb-dd
6f60e9d5ec06eb7721dbfddaaa625473 AerrorB.1M.gnu-dd
94abec9a526553c5aa063b0c917d8e8f AerrorB.42.bb-dd
1413c824cd090cba5c33b2d7de330339 AerrorB.42.gnu-dd
b381efd87f17354cfb121dae49e3487a AerrorB.4K.bb-dd
b381efd87f17354cfb121dae49e3487a AerrorB.4K.gnu-dd
b381efd87f17354cfb121dae49e3487a AerrorB.512.bb-dd
b381efd87f17354cfb121dae49e3487a AerrorB.512.gnu-dd
3c101af5623fe8c6f3d764631582a18e AerrorB.64K.bb-dd
6f60e9d5ec06eb7721dbfddaaa625473 AerrorB.64K.gnu-dd
b381efd87f17354cfb121dae49e3487a AerrorB.ddrescue
dd agrees with ddrescue only for block sizes that happen to be aligned to our error zone (512, 4K).
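That alignment can be checked numerically. My reading is that a blocksize only keeps offsets intact here when it divides both the error zone's start offset (2000*512 = 1024000 bytes) and its length (96*512 = 49152 bytes):

```shell
# remainder 0 for both values means the blocksize is aligned to the error zone
for bs in 1048576 65536 16384 4096 512 42; do
  echo "bs=$bs start%bs=$(( 1024000 % bs )) len%bs=$(( 49152 % bs ))"
done
```

Only 4096 and 512 give 0 for both remainders, matching the checksum results above.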
Let's check raw data.
# grep -a -b --only-matching B130000 *
AerrorB.16K.bb-dd: 2096768:B130000
AerrorB.16K.gnu-dd: 2047616:B130000
AerrorB.1M.bb-dd: 2113152:B130000
AerrorB.1M.gnu-dd: 2064000:B130000
AerrorB.42.bb-dd: 2088578:B130000
AerrorB.42.gnu-dd: 2039426:B130000
AerrorB.4K.bb-dd: 2088576:B130000
AerrorB.4K.gnu-dd: 2088576:B130000
AerrorB.512.bb-dd: 2088576:B130000
AerrorB.512.gnu-dd: 2088576:B130000
AerrorB.64K.bb-dd: 2113152:B130000
AerrorB.64K.gnu-dd: 2064000:B130000
AerrorB.ddrescue: 2088576:B130000
While the data itself seems to be present, it is obviously not in sync; the offsets are completely out of whack for bs=16K, 1M, 64K, and 42. Only the files with B130000 at offset 2088576 are correct, as can be verified against the original device:
# dd bs=1 skip=2088576 count=8 if=/dev/mapper/AerrorB
B130000
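That offset can also be derived by hand: B130000 sits at byte 130000*8 inside B, and the mapping shifts B's sector 48 to device sector 2096, so everything in B moves forward by (2096-48)*512 bytes:

```shell
# byte offset of B130000 within B, plus the device-mapper shift
echo $(( 130000 * 8 + (2096 - 48) * 512 ))
```

This prints 2088576, the offset found in the correct images.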
Is this the expected behaviour of dd conv=noerror,sync? I do not know, and the two implementations of dd I had available don't even agree with each other. Either way, the result is very much useless if you used dd with a performant blocksize choice.
The above was produced using dd (coreutils) 8.25, BusyBox v1.24.2, and GNU ddrescue 1.21.