I have a hard drive running Debian that appears to be failing according to its SMART data. I do have backups, and I could reinstall the OS on a new drive; however, my first preference at the moment is to clone the drive, and my only available tool is System Rescue CD 5.0.3 on a bootable CD.
I would like to know the best way to clone the drive so that I can simply insert the cloned drive into my PC and boot from it as seamlessly as I do from the existing one.
The drive does not have much on it--probably under 10 GB of used space--so I'm not expecting this to take an extraordinary amount of time.
If I recall correctly, I chose the encrypted-drive options when installing Debian, so /dev/sda has an unencrypted boot partition and the rest is encrypted; within that encrypted area is a small 10 GB root partition, and the remainder is currently unused.
I'm also dealing with older PATA drives (no SATA drives available). The computer has a single PATA connector on the motherboard, and the ribbon cable attached to it is fully occupied by the CD-ROM drive (used for booting) and the near-failing hard drive, so there is no room to attach a second PATA drive for a local transfer.
To get around this, I have a second computer, also with a single PATA connector on the motherboard, to which I've attached another CD-ROM drive for booting and the destination hard drive.
I have booted both computers via the CD-ROM drive to bring up System Rescue CD 5.0.3, and I'm considering my options to get the failing drive cloned as best as possible.
The computers are available over the LAN, and I'm connecting to both of them remotely over SSH via a terminal with no graphical interface.
I'm not too sure about the sizes of the source drive and the destination drive. It's possible that the source drive has a larger capacity than the destination drive, so ideally I would want to transfer only the used space rather than run through the entire empty drive.
I was considering using ddrescue as described here; however, it only describes transferring the data locally.
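For what it's worth, one way to avoid paying for empty space without any special tooling is to compress the stream in transit: free space that reads as zeros costs almost nothing on the wire. (Two caveats, both assumptions about this setup: the LUKS area looks like random data and will not compress, and free space on a used drive only reads as zeros if it was zeroed at some point.) A sketch on a scratch file; the hypothetical network pipeline, using this setup's hostnames, is in the comment:

```shell
# Demo on a scratch file. The real transfer (hypothetical, with this
# setup's hostnames and devices) would be something like:
#   ssh root@src 'dd if=/dev/sda bs=1M | gzip -1' | gunzip | dd of=/dev/sda bs=1M
dd if=/dev/zero of=/tmp/mostly-empty.img bs=1M count=16 2>/dev/null
printf 'real data' | dd of=/tmp/mostly-empty.img conv=notrunc 2>/dev/null
gzip -1 < /tmp/mostly-empty.img > /tmp/mostly-empty.img.gz
wc -c /tmp/mostly-empty.img.gz    # a tiny fraction of the 16 MiB input
```

The 16 MiB of zeros compresses down to a few tens of kilobytes, which is the whole point: only the used blocks cost real transfer time.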
UPDATE: I'm looking at how the Debian installer set up the source drive. It appears I have three partitions and only the last one is encrypted:
src# fdisk -l /dev/sda
Disk /dev/sda: 37.3 GiB, 40027029504 bytes, 78177792 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x332e4146
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 247807 245760 120M 83 Linux
/dev/sda2 247808 8060927 7813120 3.7G 82 Linux swap / Solaris
/dev/sda3 8060928 78176255 70115328 33.4G 83 Linux
src# cryptsetup --verbose isLuks /dev/sda1
Device /dev/sda1 is not a valid LUKS device.
Command failed with code 22: Invalid argument
src# cryptsetup --verbose isLuks /dev/sda2
Device /dev/sda2 is not a valid LUKS device.
Command failed with code 22: Invalid argument
src# cryptsetup --verbose isLuks /dev/sda3
Command successful.
I believe I'm also trying to transfer between drives of like capacity: a 40 GB PATA drive to another 40 GB PATA drive.
Here is the destination:
dest# fdisk -l /dev/sda
Disk /dev/sda: 37.3 GiB, 40027029504 bytes, 78177792 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
UPDATE: I'm considering using NBD to expose the source drive's partitions over the LAN in order to use ddrescue from the destination.
Here is what I tried so far to expose the source drive...
src# nbd-server -d 8000 /dev/sda
...and mount locally on the destination computer:
dest# nbd-client src 8000 /mnt/nbd-sda
Unfortunately, I'm getting an error when trying this; I can't even attach the remote device. (In hindsight, nbd-client's last argument should be a local block device such as /dev/nbd0, provided by the nbd kernel module, rather than a mount point.)
Warning: the oldstyle protocol is no longer supported.
This method now uses the newstyle protocol with a default export
Error: Cannot open NBD: No such file or directory
Please ensure the 'nbd' module is loaded.
Exiting.
UPDATE: The next thing I'm trying is simply recreating the partitions on the destination drive by hand.
I began by copying the MBR over:
src# dd if=/dev/sda of=/tmp/sda-mbr.dat bs=512 count=1
dest# scp root@src:/tmp/sda-mbr.dat /tmp
dest# dd if=/tmp/sda-mbr.dat of=/dev/sda
dest# sync
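That single 512-byte sector carries the boot code, the primary partition table, and the 0x55AA boot signature; after writing it, the destination kernel may need partprobe (or a reboot) to pick up the new table. A sketch on a scratch file showing where the signature sits:

```shell
# Build a fake "disk" containing just a boot signature, then extract its
# MBR the same way as above and check the last two bytes (0x55 0xAA):
dd if=/dev/zero of=/tmp/fakedisk.img bs=1M count=4 2>/dev/null
printf '\125\252' | dd of=/tmp/fakedisk.img bs=1 seek=510 conv=notrunc 2>/dev/null
dd if=/tmp/fakedisk.img of=/tmp/mbr.dat bs=512 count=1 2>/dev/null
od -An -tx1 -j 510 -N 2 /tmp/mbr.dat    # prints: 55 aa
```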
Before proceeding, I thought it would help at least to make a recovery partition this time.
dest# fdisk /dev/sda
I'm deleting the last partition and giving myself about 15 GB of space for a final partition.
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x332e4146
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 247807 245760 120M 83 Linux
/dev/sda2 247808 8060927 7813120 3.7G 82 Linux swap / Solaris
/dev/sda3 8060928 45809663 37748736 18G 83 Linux
/dev/sda4 45809664 78177791 32368128 15.4G 83 Linux
I need to create the same encrypted partition on the destination as /dev/sda3 on the source; I might as well do the same for this recovery partition:
dest# cryptsetup luksFormat /dev/sda3 --verify-passphrase
dest# cryptsetup luksFormat /dev/sda4 --verify-passphrase
Next, open the encrypted recovery partition:
dest# cryptsetup open /dev/sda4 sda4-opened
dest# mkdir /mnt/sda4-open
dest# mke2fs -j /dev/mapper/sda4-opened
dest# mount /dev/mapper/sda4-opened /mnt/sda4-open
At least now I can mount this recovery partition remotely and transfer the data to the better drive.
First, I opened the encrypted partition on the source drive:
src# cryptsetup open /dev/sda3 sda3-opened
src# mkdir /mnt/sda3-open
src# mount /dev/mapper/sda3-opened /mnt/sda3-open
Now with df, I can see I'm only using 12 GB of disk space here.
Let's unmount but keep it mapped:
src# umount /mnt/sda3-open
src# rmdir /mnt/sda3-open
Now I wanted to mount the recovery partition on the source drive:
src# mkdir /mnt/dest-sda4
src# sshfs root@dest:/mnt/sda4-open /mnt/dest-sda4
With this mounted, I could now run ddrescue:
src# ddrescue -f -n /dev/sda1 /mnt/dest-sda4/sda1.ddrescue.img /mnt/dest-sda4/sda1.ddrescue.log
This produced an image of the same size as the original partition, so it looks like this isn't excluding unused space.
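If I remember right, ddrescue always walks every sector of the input, but its -S/--sparse option at least keeps the output file from allocating blocks for regions that read as zeros (worth double-checking in ddrescue's own documentation). The underlying mechanism is ordinary file sparseness:

```shell
# Sparse-file demo: write one byte at the 10 MiB mark. The apparent size
# is 10 MiB, but almost nothing is actually allocated on disk.
dd if=/dev/zero of=/tmp/sparse.img bs=1 count=1 seek=10485759 2>/dev/null
wc -c /tmp/sparse.img     # apparent size: 10485760 bytes
du -k /tmp/sparse.img     # allocated size: a few KiB at most
```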
I'm trying fsarchiver instead now:
src# fsarchiver savefs /mnt/dest-sda4/sda1.fsarchiver.img.fsa /dev/sda1
Statistics for filesystem 0
* files successfully processed:....regfiles=314, directories=6, symlinks=0, hardlinks=0, specials=0
* files with errors:...............regfiles=0, directories=0, symlinks=0, hardlinks=0, specials=0
Mounting /dev/sda1 and running df shows only 33 MB in use, and the .fsa file is only 24 MB; fsarchiver compresses its archives by default. Either way, it's better than the partition's full 120 MB.
Now let's try with the root partition sda3 to see how this goes:
src# fsarchiver savefs /mnt/dest-sda4/sda3.fsarchiver.img.fsa /dev/mapper/sda3-opened
This will probably take a while so I'm saving this update for now.
UPDATE: This went faster than I expected. Here's what I ended up getting:
dest# ls -lh
total 7.7G
drwx------ 2 root root 16K Apr 8 01:49 lost+found
-rw-r--r-- 1 root root 24M Apr 8 02:04 sda1.fsarchiver.img.fsa
-rw-r--r-- 1 root root 7.7G Apr 8 02:43 sda3.fsarchiver.img.fsa
Here's the even more interesting part looking at the output from the command above:
src# fsarchiver savefs /mnt/dest-sda4/sda3.fsarchiver.img.fsa /dev/mapper/sda3-opened
Statistics for filesystem 0
* files successfully processed:....regfiles=149025, directories=84796, symlinks=20559, hardlinks=127551, specials=1269
* files with errors:...............regfiles=0, directories=0, symlinks=0, hardlinks=0, specials=0
If I'm reading this correctly, this is encouraging because it didn't have any difficulty reading data off of the source drive.
Let's clean up some:
src# umount /mnt/dest-sda4
src# rmdir /mnt/dest-sda4
Next, I'm restoring the archived files onto /dev/sda1 and /dev/sda3 of the destination. But first, let's look at what's already on the destination drive, because I forgot where I left off setting it up.
First, is there any filesystem on /dev/sda1?
dest# mkdir /mnt/sda1
dest# mount /dev/sda1 /mnt/sda1
NTFS signature is missing.
Failed to mount '/dev/sda1': Invalid argument
The device '/dev/sda1' doesn't seem to have a valid NTFS.
Maybe the wrong device is used? Or the whole disk instead of a
partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?
OK. I expected no filesystem, but I didn't expect an NTFS message; presumably the mount helper fell back to trying NTFS when it found no recognizable signature. Either way, there's nothing there.
Let's restore the first partition image:
dest# fsarchiver restfs /mnt/sda4-open/sda1.fsarchiver.img.fsa id=0,dest=/dev/sda1
Statistics for filesystem 0
* files successfully processed:....regfiles=314, directories=6, symlinks=0, hardlinks=0, specials=0
* files with errors:...............regfiles=0, directories=0, symlinks=0, hardlinks=0, specials=0
Let's mount now:
dest# mount /dev/sda1 /mnt/sda1
dest# ls -l /mnt/sda1
...
dest# df -h | grep sda1
...
Things look good so far.
Let's do the root partition now.
dest# cryptsetup open /dev/sda3 sda3-opened
dest# mkdir /mnt/sda3-open
dest# mount /dev/mapper/sda3-opened /mnt/sda3-open
NTFS signature is missing.
Failed to mount '/dev/mapper/sda3-opened': Invalid argument
The device '/dev/mapper/sda3-opened' doesn't seem to have a valid NTFS.
Maybe the wrong device is used? Or the whole disk instead of a
partition (e.g. /dev/sda, not /dev/sda1)? Or the other way around?
Same as before--there's nothing there.
Let's restore the partition image:
dest# fsarchiver restfs /mnt/sda4-open/sda3.fsarchiver.img.fsa id=0,dest=/dev/mapper/sda3-opened
Statistics for filesystem 0
* files successfully processed:....regfiles=149025, directories=84796, symlinks=20559, hardlinks=127551, specials=1269
* files with errors:...............regfiles=0, directories=0, symlinks=0, hardlinks=0, specials=0
Let's mount now:
dest# mount /dev/mapper/sda3-opened /mnt/sda3-open
dest# ls -l /mnt/sda3-open
...
dest# df -h | grep sda3
...
Things look good so far.
I ran the following as well on both:
# fsarchiver probe simple
Things look as expected.
One thing I believe I'm still missing: I think this will mess up GRUB. I seem to recall Stage 1 booting fine from the MBR but then failing to find Stage 2 on the /boot partition the last time I tried something like this.
This page led to this one, which describes how to repair GRUB:
dest# mount -o bind /proc /mnt/sda3-open/proc
dest# mount -o bind /dev /mnt/sda3-open/dev
dest# mount -o bind /sys /mnt/sda3-open/sys
dest# chroot /mnt/sda3-open /bin/bash
(dest) chroot# mount /dev/sda1 /boot/
(dest) chroot# grub-install /dev/sda
Installing for i386-pc platform.
Installation finished. No error reported.
(dest) chroot# umount /boot
(dest) chroot# exit
dest# umount /mnt/sda3-open/{sys,dev,proc}
When I reboot, the drive should boot properly; however, it's late now and I don't want to get into it just yet.
I'm also not yet convinced this will have a happy ending. The grub-install command above said it's installing for the i386-pc platform, but I believe I want 64-bit. (If I understand correctly, i386-pc is simply GRUB's name for the BIOS boot target and is used on 64-bit systems too, so this may be a non-issue.)
I might have to redo this portion by rebooting System Rescue CD via rescue64; I'm not sure whether the default boot brought up a 32-bit environment.
Again, I'm going to deal with the rest tomorrow.
UPDATE: So the good news is that the default booting for System Rescue CD was rescue64, so that wouldn't have been any problem.
It turns out I completely forgot about LVM, and the drive's UUIDs obviously don't match.
...
cryptsetup: lvm is not available
cryptsetup: lvm is not available
cryptsetup: lvm is not available
cryptsetup: lvm is not available
ALERT! /dev/disk/by-uuid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx does not exist.
Check cryptopts=source= bootarg: cat /proc/cmdline
or missing modules, devices: cat /proc/modules; ls /dev
-r Dropping to a shell. Will skip /dev/disk/by-uuid/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx if you can't fix.
modprobe: module ehci-orion not found in modules.dep
BusyBox vx.xx.x (Debian x:x.xx.x-x+xxxxxx) built-in shell (ash)
Enter 'help' for a list of built-in commands.
/bin/sh: can't access tty: job control turned off
(initramfs)
I suppose I could fight with these, but I won't bother. Instead, I'm going to try what dirkt suggested and clone the full /dev/sda--UUIDs and everything--since 40 GB is only four times the roughly 10 GB I transferred last night, and that didn't take too long over the LAN.
I couldn't do this last night because I couldn't get NBD working, so I resorted to saving to image files. I can't do that if I'm doing a complete disk clone, so let's see if pipes or named pipes work any better.
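Before reaching for named pipes, it's worth noting that a plain anonymous pipe over SSH is usually enough for a whole-disk copy, since dd happily reads and writes streams. A sketch on local files; the hypothetical network version, with this setup's hostnames and devices, is in the comment:

```shell
# Local stand-in for a piped network clone. The real run (hypothetical)
# would be:
#   ssh root@src 'dd if=/dev/sda bs=1M' | dd of=/dev/sda bs=1M
dd if=/dev/urandom of=/tmp/pipe-src.img bs=1M count=4 2>/dev/null
dd if=/tmp/pipe-src.img bs=1M 2>/dev/null | dd of=/tmp/pipe-dest.img bs=1M 2>/dev/null
cmp /tmp/pipe-src.img /tmp/pipe-dest.img && echo "identical"
```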
So back to the beginning: both computers have now booted from the System Rescue CD bootable CD, both are available over the network via their respective DHCP-assigned IP addresses, and both have had their root password set via the passwd command.
Before doing this with the real drives, I want to practice with a tiny fake one, so I'm going to begin by setting that up.
src# dd if=/dev/zero of=/root/tempsrc.dat bs=1M count=128
...
src# fdisk -l /root/tempsrc.dat
Disk /root/tempsrc.dat: 128 MiB, 134217728 bytes, 262144 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x8b8647e7
Device Boot Start End Sectors Size Id Type
/root/tempsrc.dat1 * 2048 34815 32768 16M 83 Linux
/root/tempsrc.dat2 34816 100351 65536 32M 82 Linux swap / Solaris
/root/tempsrc.dat3 100352 262143 161792 79M 83 Linux
src# mkdir /mnt/tempsrc
src# mkdir /mnt/tempsrc-mounted
src# losetup /dev/loop1 /root/tempsrc.dat -o $(expr 2048 \* 512)
src# mke2fs /dev/loop1
src# mount /dev/loop1 /mnt/tempsrc-mounted
src# echo 'This is partition 1' > /mnt/tempsrc-mounted/note1.txt
src# umount /mnt/tempsrc-mounted
src# losetup -d /dev/loop1
src# losetup /dev/loop1 /root/tempsrc.dat -o $(expr 100352 \* 512)
src# cryptsetup luksFormat /dev/loop1 --verify-passphrase
src# cryptsetup open /dev/loop1 loop1-opened
src# mke2fs -j /dev/mapper/loop1-opened
src# mount /dev/mapper/loop1-opened /mnt/tempsrc-mounted
src# echo 'This is partition 3' > /mnt/tempsrc-mounted/note3.txt
src# umount /mnt/tempsrc-mounted
src# cryptsetup close loop1-opened
src# losetup -d /dev/loop1
src# rmdir /mnt/tempsrc-mounted
src# rmdir /mnt/tempsrc
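The -o offsets passed to losetup above are just the partition's start sector times the 512-byte sector size. (Newer util-linux can skip the arithmetic entirely with losetup -P, which scans the image and exposes /dev/loop1p1, /dev/loop1p3, and so on; whether the System Rescue CD's losetup supports that is worth checking.) The arithmetic itself:

```shell
# Byte offsets fed to `losetup -o`, from the fdisk sector numbers above:
p1_offset=$(( 2048 * 512 ))       # partition 1 start
p3_offset=$(( 100352 * 512 ))     # partition 3 (encrypted) start
echo "p1 at byte $p1_offset, p3 at byte $p3_offset"
```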
I know. I didn't deal with LVM again. Oh well.
I now have a /root/tempsrc.dat that contains an image of a disk, like an SD card image, that I want to transfer over to the remote destination. On the first partition is a file called note1.txt, and the third partition is encrypted and has a note3.txt with different contents. I would like to make sure I can get to all of this after running fsarchiver and transferring it over.
Let's get something ready on the destination:
dest# dd if=/dev/zero of=/root/tempdest.dat bs=1M count=128
dest# fdisk -l /root/tempdest.dat
Disk /root/tempdest.dat: 128 MiB, 134217728 bytes, 262144 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Let's also create loopback devices for these:
src# losetup /dev/loop1 /root/tempsrc.dat
dest# losetup /dev/loop2 /root/tempdest.dat
Now as I was getting ready to perform the transfer, I found out fsarchiver can't handle it as stated here and here.
I was hoping to do something like the following:
src# fsarchiver savefs /tmp/fifo1 /dev/loop1
dest# fsarchiver restfs /tmp/fifo2 id=0,dest=/dev/loop2
UPDATE: I replaced my destination 40 GB drive with a temporary third drive and powered on the destination PC.
Let's begin by setting up this new drive:
dest# mkdir /mnt/sda-open
dest# mount /dev/sda1 /mnt/sda-open
Trying to transfer again, except this time operating on the entire /dev/sda at once:
src# mkdir /mnt/dest-sda
src# sshfs root@dest:/mnt/sda-open /mnt/dest-sda
src# fsarchiver savefs /mnt/dest-sda/src-sda.fsarchiver.img.fsa /dev/sda
filesys.c#317,generic_mount(): partition [/dev/sda] cannot be mounted on [/tmp/fsa/20180408-222928-xxxxxxxx-00] as [vfat] with options []
oper_save.c#1032,filesystem_mount_partition(): cannot mount partition [/dev/sda]: filesystem may not be supported by either fsarchiver or the kernel.
removed /mnt/dest-sda/src-sda.fsarchiver.img.fsa
Well, so much for that idea. I guess they call it "FS" archiver for a reason. Let's try partimage.
src# partimage --compress=1 save /dev/sda /mnt/dest-sda/src-sda.partimg.bz2
This didn't work either; like fsarchiver, partimage apparently deals with individual filesystems rather than whole disks.
Since we're operating on the disk as a whole, let's see if ddrescue would work now.
src# ddrescue --no-scrape /dev/sda /mnt/dest-sda/src-sda.ddrescue.img /mnt/dest-sda/src-sda.ddrescue.img.log
GNU ddrescue 1.21
Press Ctrl-C to interrupt
ipos: 785580 kB, non-trimmed: 0 B, current rate: 12320 kB/s
opos: 785580 kB, non-scraped: 0 B, average rate: 10615 kB/s
non-tried: 39241 MB, errsize: 0 B, run time: 1m 14s
rescued: 785580 kB, errors: 0, remaining time: 1h
percent rescued: 1.96% time since last successful read: 0s
Copying non-tried blocks... Pass 1 (forwards)
I started this at 5:41 p.m. for a 40 GB drive over what I think is a 100 Mbps LAN. At the moment, the output claims it will be done in about an hour.
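That ETA is consistent with a quick back-of-envelope check against the drive size reported by fdisk (ddrescue's "kB" means 1000 bytes):

```shell
# Estimate: disk size divided by ddrescue's reported average rate.
total_bytes=40027029504            # from fdisk: 40027029504 bytes
rate=10615000                      # "average rate: 10615 kB/s" -> bytes/s
echo "roughly $(( total_bytes / rate / 60 )) minutes"
```

That works out to about an hour, matching what ddrescue itself is predicting.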
Any reason you couldn't just clone the complete drive instead of looking at MBR and partitions etc.? BTW, you can also answer your own questions, and accept the answer. – dirkt – 2018-04-08T08:04:18.297
I haven't reached a satisfactory answer yet; these are just notes on what I've been trying so far and nothing more. After sleeping on it, I'm thinking of doing just that now--cloning the whole drive. My concern was the time required to transfer mostly-unused 40 GB over the LAN when I have less than 10 GB used that needs to be transferred; however, since it didn't take very long to transfer 10 GB, I think I'll give it a shot to redo the full drive. – jia103 – 2018-04-08T16:52:38.663