Using dd with bad blocks on a harddrive?

4

1

I'm looking to use dd to make a clone of a failing hard drive. My concern is that there will surely be bad blocks. So my question is: with dd, will a bad block leave a gap the size of the selected block size (bs), or will it only be as big as the sector on the hard drive?

Earlz

Posted 2011-07-11T16:56:45.387

Reputation: 3 966

I don't know, but your hard drive should be getting requests to return a byte or several bytes at a time. I'd expect that dd would gracefully handle whatever the hard drive gives back. So if the drive actually signals that there's a complete failure to read that data (not sure they have a signal for that), then dd should handle that gracefully. Hard drives have brains in them to mitigate damage and deal with bad blocks. That mitigation has limits, but it's possible, with a failing drive, to use it normally just a little longer before it completely morphs into a paperweight. Good luck. – James T Snell – 2011-07-11T17:12:34.327

Answers

4

I'm pretty sure it'll be the larger of the two.

Let's say you're using a 512-byte block size in dd, but your disk uses 4K sectors, and one of them is bad. All eight 512-byte reads that dd tries to make of that 4K sector will fail, resulting in a 4K gap.

Now let's say you're using an 8K dd block-size but your disk uses 4K sectors. When dd attempts to do that 8K read, it will fail because one of the sectors in the read failed, resulting in an 8K gap.

Now is probably a good time to mention GNU ddrescue (not to be confused with the non-GNU software of the same name) which basically automates using dd to rescue a failing drive, with several efficiency tricks. It starts with a large block size for speed, but it keeps track of where it saw bad blocks and then goes back to try to read different parts of them with smaller read sizes, until it gets down to a list of absolutely unreadable 512-byte blocks. It took me a while to make sense of the documentation but once I figured it out, I found it to be a very useful tool and very preferable to using dd directly myself for this kind of task.
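A typical two-pass GNU ddrescue run looks like the sketch below. The device and file names are only illustrative (substitute your actual failing drive), and the drive should be unmounted first; the commands are not something a casual reader should run blindly against a live disk:

```shell
# Pass 1: quickly copy everything readable, skipping the slow
# "scraping" of bad areas (-n); rescue.map records what failed
ddrescue -n /dev/sdb disk.img rescue.map

# Pass 2: retry only the bad areas, up to 3 times each (-r3);
# the mapfile lets ddrescue resume exactly where it left off
ddrescue -r3 /dev/sdb disk.img rescue.map
```

The mapfile is what makes ddrescue preferable to plain dd here: you can interrupt and resume the rescue at any point without re-reading (and further stressing) the areas already recovered.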

Spiff

Posted 2011-07-11T16:56:45.387

Reputation: 84 656

-1

This is what I did today to recover my data. I was having problems duplicating (backing up) a disk with about 30 bad blocks. The first thing I did was back up the files using regular FileZilla, to save all the good data. I noticed that one big file was not copying correctly (stopping in the middle and restarting the transfer). Luckily, I had a previous backup of the same file. To duplicate the disk, I then had to find the bad blocks using this procedure:

1st, identify the problem disk using fdisk -l

2nd, if your disk is, say, /dev/sdb, then run the command badblocks -v /dev/sdb; it will list all the bad blocks on the drive. Hopefully there will only be a few. If no bad blocks are found, then your drive's blocks are OK and you need to figure out something else. My block size is 512, so I used that default number to run dd.

3rd, each block is 512 bytes, so I set bs=512
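The first two steps above can be sketched together as follows; /dev/sdb is only an illustrative device name, so substitute your actual drive:

```shell
# List all disks and partitions to identify the failing drive
fdisk -l

# Scan the suspect drive (read-only) and print the bad block numbers.
# Note: badblocks defaults to a 1024-byte test block; pass -b 512 if
# you want it to report in 512-byte sector-sized units.
badblocks -v -b 512 /dev/sdb
```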

Each time I ran dd the regular way, as I always do, my data after the errors would come out corrupted. So I used the parameters explained at https://www.gnu.org/software/coreutils/manual/html_node/dd-invocation.html (search for the "For failing disks" part).

dd if=/dev/sdb of=/dev/sda bs=512 conv=noerror,sync iflag=fullblock 

It took a while. Each bad block encountered sounded like a banging noise from the faulty drive. dd copies block by block, and it made the same noise at every one of my bad blocks. Each time it made a noise, it had found another bad block and reported it with an error message on screen. What 'conv=noerror,sync' does is pad out bad reads with NULs, while 'iflag=fullblock' caters for short reads and keeps your data in sync up to the end. No corruption at all; it just skips the faulty blocks and fills them with NULs.
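The padding behaviour of conv=sync can be demonstrated on an ordinary file, with no failing drive needed: any final short read is padded with NULs up to the full block size. The file names here are only illustrative:

```shell
# Create a 1000-byte input file
head -c 1000 /dev/urandom > in.bin

# Copy with a 512-byte block size; conv=sync pads the final short
# 488-byte read with NULs up to a full 512-byte block
dd if=in.bin of=out.bin bs=512 conv=sync 2>/dev/null

# The output is 1024 bytes: two full 512-byte blocks
stat -c %s out.bin   # → 1024
```

On a failing drive, conv=noerror additionally tells dd to keep going after a read error, so each unreadable block becomes one of these NUL-filled blocks instead of aborting the whole copy.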

After the copy with dd was done, I just replaced that one bad file by restoring it with FileZilla from a past backup, and everything worked OK. I hope this will be useful for others trying to back up faulty drives.

NOTE: My bad blocks were pretty close to each other; they were detected in groups of about four at a time. If your bad blocks are scattered all over the disk, several files could be affected. Luckily, in my case, only one big 4 GB database file was affected.

Luis H Cabrejo

Posted 2011-07-11T16:56:45.387

Reputation: 27

If the same answer address multiple questions, there's a good chance the questions are duplicates. If so, it's better to answer one and flag the other as a possible duplicate. That avoids bloat of repetitious answers, and linking the questions makes it easier for readers to find all the answers. – fixer1234 – 2019-07-25T22:35:38.880

Again, let me know if it's not proper and I will delete it, as I mentioned before. That solution fixed my problem completely. – Luis H Cabrejo – 2019-07-25T23:32:08.130