
Here's a little background on the situation at hand. I have a DELL Precision T7600 at work that I'm responsible for maintaining. It just lost a hard drive; thankfully only the /home directory was on it, and it has now been recovered. Now I've been tasked with making a RAID 1 of the OS drive so that our downtime is kept to a minimum.

I've read about hard-drive cloning on the Arch Linux wiki, but I could not wrap my head around the process. Perhaps I'm overcomplicating this and it is as simple as dd if=/dev/sdc of=/dev/md126, but I just want to make sure before I go solo on this.

I'm currently waiting for the array to finish resyncing the two blank new disks (see my other question if you're interested). I suppose this is necessary. What would happen if I decided to dd to the array right now? Would things just crash? And while I wait, is dding to an array from a device file even possible or recommended? I'm not sure what best practice is here.
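
For reference, this is how I'm keeping an eye on the resync while I wait (the array name may differ on a fake-RAID setup like mine):

# show the state and rebuild progress of all md arrays
cat /proc/mdstat
# or query one array directly
sudo mdadm --detail /dev/md126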

Thank you for your time and input!

UPDATE 1

I tried dding to the /dev/md0 device, but it was slightly smaller than the original, so dd reported an error about not being able to copy everything to /dev/md0. I also tried to boot off of this array, but ran into error: file '/grub/i386-pc/normal.mod' not found. and was dropped into a grub rescue> prompt, which I don't know what to do with. So I tried to mount the array in order to do a grub-install on it, but was met with failure, as mount told me: unknown filesystem type 'linux_raid_member'
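
From what I gather, the 'linux_raid_member' error means mount was pointed at a raw member disk rather than the assembled array device; something like this should be the right order of operations (assuming the filesystem sits directly on the array, which may not match my layout):

# assemble the array from its members if it isn't running yet
sudo mdadm --assemble --scan
# then mount the md device itself, not /dev/sda or /dev/sdb
sudo mount /dev/md0 /mnt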

UPDATE 2

I decided against a RAID1 array, and ran the following command to clone my OS drive to the two blank drives:

sudo pv /dev/sdc | tee >(dd of=/dev/sda) >(dd of=/dev/sdb) | dd of=/dev/null

This cloned my OS drive successfully, without the grub errors of the first attempt. Grub loaded but would not boot the OS, and I was thrown into dracut emergency mode. I got out of this by issuing sfdisk -d /dev/sdc | sfdisk /dev/sda from my LiveUSB, and ditto for sdb.

Fedora loaded this time, but I was thrown into emergency mode, which, at least in my case (I've dealt with it before), is caused by nonexistent /etc/fstab entries. So I pruned the fstab to mount only the / partition.
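
For what it's worth, the pruned fstab ended up with just the root entry, roughly like this (the UUID and filesystem type below are placeholders, not my real values):

# /etc/fstab – everything except / removed
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  ext4  defaults  1 1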

Now I will endeavor to create the RAID array, knowing full well that this will destroy the partition table, so what I will do is back it up first (sketched below), and hopefully after running:

sudo mdadm --create --verbose /dev/md0 --level=mirror --raid-devices=2 /dev/sda /dev/sdb

from my Live USB stick, I will have a RAID 1 array. Or it could turn out that I destroy the partition table and need to reload it again. Or I might have to reissue the dd command and wait another 20 hrs; we'll see :)!
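
For the partition-table backup mentioned above, the plan is roughly this (the backup file names are just examples):

# save the partition tables before mdadm wipes them
sudo sfdisk -d /dev/sda > sda-table.bak
sudo sfdisk -d /dev/sdb > sdb-table.bak
# and, if needed, put them back afterwards
sudo sfdisk /dev/sda < sda-table.bak
sudo sfdisk /dev/sdb < sdb-table.bak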

rivanov
  • Is the RAID environment physically attached? Or is it attached via iSCSI or Fibre Channel? – CIA Nov 03 '15 at 15:56
  • It is an integrated RAID controller. Seems like it's fake RAID according to the answer I got to the ['other question'](http://serverfault.com/questions/733413/why-does-linux-automatically-start-a-hardware-raid1-resync-on-two-new-and-blank) referenced above. – rivanov Nov 03 '15 at 15:58
  • I recommend getting a real RAID system in place first. Then, it should be as simple as using `dd` to do a bit-by-bit copy from the single disk to the RAID partition. – CIA Nov 04 '15 at 20:05
  • Seems like it's not as easy as `dd`. See Update 1. – rivanov Nov 05 '15 at 01:52
  • What was the error when you tried to `dd` to `/dev/md0`? Try option 3 from http://linas.org/linux/Software-RAID/Software-RAID-4.html – CIA Nov 05 '15 at 02:20
  • Yes, the error was after `dd`ing to `/dev/md0`. I will try it, thanks for the feedback. – rivanov Nov 05 '15 at 15:19
  • This method uses `mkraid`, which is deprecated now in favor of `mdadm`. I cannot get access to this package. – rivanov Nov 11 '15 at 23:34

1 Answer


There is some missing information, like what the partition structure is and how full sdc is. Assuming sufficient available space somewhere:

First, the simple way is to create partitions /dev/sda1 and /dev/sdb1 to hold the /boot directory outside of the RAID array. Once these are created, you can copy the contents of the active /boot directory into the new /boot partitions. Assuming you have space somewhere to save sdc: there is a package, fsarchiver, which will do this for you. There is a howto here:

The first step is to save your current system:

fsarchiver savefs filename1.fsa /dev/sdc1

Repeat this for all the partitions.

Second, create the partition structure on sda and sdb. sda1 and sdb1 are the /boot partitions. Then create an LVM partition with the remaining disk space as sda2 and sdb2. This can be done with gparted.
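
If you prefer the command line over gparted, a rough sketch of that layout could look like this (partition sizes, the volume group name, and the assumption that LVM sits on top of a RAID 1 of sda2/sdb2 are all examples, adjust to your setup):

# small /boot partition plus a large second partition on each disk
sudo parted /dev/sda --script mklabel msdos mkpart primary ext4 1MiB 512MiB mkpart primary 512MiB 100% set 2 raid on
sudo parted /dev/sdb --script mklabel msdos mkpart primary ext4 1MiB 512MiB mkpart primary 512MiB 100% set 2 raid on

# mirror the large partitions and build LVM on top of the array
sudo mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sda2 /dev/sdb2
sudo pvcreate /dev/md0
sudo vgcreate vg_os /dev/md0
sudo lvcreate -n lv_root -l 100%FREE vg_os

# format the /boot partitions and copy the current /boot into each of them
sudo mkfs.ext4 /dev/sda1
sudo mkfs.ext4 /dev/sdb1
sudo mkdir -p /mnt/newboot
sudo mount /dev/sda1 /mnt/newboot && sudo cp -a /boot/. /mnt/newboot/ && sudo umount /mnt/newboot
sudo mount /dev/sdb1 /mnt/newboot && sudo cp -a /boot/. /mnt/newboot/ && sudo umount /mnt/newboot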

Third, the saved sdc partitions can be restored:

fsarchiver restfs filename1.fsa id=0,dest=/dev/md0/partition_id1

repeat for other partitions.
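
If an archive holds more than one filesystem, fsarchiver can list its contents so you know which id= values to use (file name as above, just an example):

fsarchiver archinfo filename1.fsa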

Fourth, create /mnt/root on the currently running sdc system and mount the new LVM root partition there.

Since the /boot directory is now in /dev/sda1 and /dev/sdb1, you will have to remove that content from the new root partition and create an entry in the new /etc/fstab to mount the /dev/sda1 partition on /boot. Then all the remaining partitions, plus /dev, /proc, and /sys, need to be mounted under /mnt/root (see a chroot tutorial). Now you can chroot to /mnt/root, verify the environment, and run grub2-mkconfig and grub2-install. Once this is in place you can boot to the new sda-sdb device pair.
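
A rough outline of that last part, borrowing the example names from the sketch above (your device names, volume group, and distro paths may differ):

# mount the new root, its /boot, and the pseudo-filesystems, then chroot
sudo mkdir -p /mnt/root
sudo mount /dev/vg_os/lv_root /mnt/root
sudo mount /dev/sda1 /mnt/root/boot
for fs in dev proc sys; do sudo mount --bind /$fs /mnt/root/$fs; done
sudo chroot /mnt/root

# inside the chroot: regenerate the grub config and install grub on both new disks
grub2-mkconfig -o /boot/grub2/grub.cfg
grub2-install /dev/sda
grub2-install /dev/sdb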

dan sawyer