
I am cloning what appears to be a Red Hat 4 (possibly 5?) server to newer hardware, as the original has a failing board. The DBA would rather not reconfigure a new installation, so they want me to clone if possible. I used Clonezilla stable release 2.5.0-25 and did the second option, disk-to-remote-disk copy over the network via static IPs. I followed this tutorial: https://www.youtube.com/watch?v=8UGR_RLCptQ

Redhat version info:

[root@original_server ~]# cat /etc/redhat-release 
redhat-4
#Enterprise Linux Enterprise Linux Server release 5 (Carthage)

Old hardware: Asus RS260 / 2x Xeon E5420 / 12 GB DDR3 ECC FB RAM (24 GB prior to hardware issues) / ICP ICP5085BL RAID controller / RAID 10, 8 drives, Optimal

New hardware: Asus RS720 / 2x Xeon 2620 / 48 GB DDR3 ECC FB RAM / Asus PIKE 2308 RAID controller / RAID 10, 8 drives, Optimal

During the process I was not asked to clone the boot loader, though the sda1 partition mounted at /boot appeared to have been cloned afterward.

Long story short, the clone appears to have been successful and the old data is on the new server in the correct partitions, but when I try to boot I get "Unable to access resume device (LABEL=SWAP-sda5)" and "mount: could not find filesystem '/dev/root'", followed by a few more "No such file or directory" errors and then a kernel panic.
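
The resume warning in particular suggests the swap label did not survive the clone. If that turns out to be the case, the label can be re-created from a rescue shell; a minimal sketch, assuming swap is still /dev/sda5 on the new array:

swapoff -a 2>/dev/null          # make sure nothing is using the partition
mkswap -L SWAP-sda5 /dev/sda5   # re-initialize swap with the label fstab expects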

So far I've tried:

  • Rebuilding the initrd using a CentOS 5.11 64-bit DVD and following these instructions: https://wiki.centos.org/TipsAndTricks/CreateNewInitrd. When I used the $(uname -r) value as specified, the command returned "No modules available for kernel 2.6.18-398.el5" (the DVD's kernel, not the installed one). I reran the command with the kernel version from the existing initrd file (2.6.18-8.el5) and it worked. The resulting file was exactly the same size.

  • Installing the LSI Fusion-MPT SAS2 driver for el5_3 (for the RAID controller) via RPM from the Asus site.

  • Deleting the original initrd and rebuilding it after installing the RAID controller driver. The new initrd file was only very slightly smaller (one or two bytes), which makes me doubt the driver was actually included; see the sketch after this list.

  • Getting the UUIDs from GParted for sda1, sda2, sda3, and sda6 and putting them in /etc/fstab in place of the labels.

  • Uncommenting #boot=/dev/sda in grub.conf and modifying it to boot=/dev/sda1.

  • Modifying the kernel command line in the boot sequence (changing ro to rw, and changing root= to point to /dev/sda, to /dev/sda3, and to UUID=<the UUID of /dev/sda3>), none of which worked.
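
If the driver never made it into the image, mkinitrd can be told to include it explicitly. A sketch of the command as I understand it, assuming the Asus RPM installed the module as mpt2sas for the installed kernel (the actual module name can be checked with find /lib/modules/2.6.18-8.el5 -name '*mpt*'):

cp /boot/initrd-2.6.18-8.el5.img /boot/initrd-2.6.18-8.el5.img.bak   # keep a fallback copy
mkinitrd -f --with=mpt2sas /boot/initrd-2.6.18-8.el5.img 2.6.18-8.el5

-f overwrites the existing image, and --with= forces the named module in even when mkinitrd does not detect the hardware itself (e.g. when run chrooted from the rescue DVD).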

Things I haven't tried yet that I'm aware are options:

  • Reinstalling GRUB, but do I reinstall it to /dev/sda1 (where it originally was) or to /dev/sda? And how do I back up the original GRUB settings beforehand?

  • Installing the RAID controller driver from source (another thing I'm not very familiar with).

  • Running fsck: I'm not too familiar with it; I have run it with the -f -y options in the past, but apparently you want to run it interactively so as not to break the system (see the sketch just below).
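
For the fsck option, this is the interactive form as I understand it, run from the rescue environment with the filesystem unmounted (/dev/sda3 assumed to be the root partition, per the example in grub.conf below):

e2fsck -f /dev/sda3   # -f forces a full check; without -y it asks before each repair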

I'm guessing it's a RAID driver issue, but I'm not sure how to get the driver included in the initrd. If there is a better option for Linux system cloning, I am open to it (Partimage would not load when I tried it, but I can attempt it again). I've already spent three days on this, so hopefully I've done my due diligence before asking.

Original /etc/fstab:

[root@original_server ~]# cat /etc/fstab
LABEL=/                 /                       ext3    defaults        1 1
LABEL=/boot             /boot                   ext3    defaults        1 2
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
tmpfs                   /dev/shm                tmpfs   defaults        0 0
LABEL=/main             /main                   ext3    defaults        1 2
LABEL=/opt              /opt                    ext3    defaults        1 2
proc                    /proc                   proc    defaults        0 0
sysfs                   /sys                    sysfs   defaults        0 0
LABEL=SWAP-sda5         swap                    swap    defaults        0 0
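
Since everything except swap mounts by label, it may be worth verifying that the labels survived the clone. A quick check from a rescue shell (the partition numbers are assumed from the fstab and the GParted step above):

for p in /dev/sda1 /dev/sda2 /dev/sda3 /dev/sda6; do
    echo -n "$p: "; e2label $p   # prints the ext3 label, if any
done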

Original /boot/grub/grub.conf:

[root@original_server ~]# cat /boot/grub/grub.conf 
# grub.conf generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/sda3
#          initrd /initrd-version.img
#boot=/dev/sda
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Enterprise Linux (2.6.18-8.el5)
	root (hd0,0)
	kernel /vmlinuz-2.6.18-8.el5 ro root=LABEL=/ rhgb quiet
	initrd /initrd-2.6.18-8.el5.img
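
Since the kernel line uses root=LABEL=/, the initrd has to load the SAS driver before it can even search for that label. One way to check whether the rebuilt initrd actually contains the driver (a sketch, based on EL5 initrds being gzip-compressed cpio archives):

mkdir /tmp/initrd-check && cd /tmp/initrd-check
zcat /boot/initrd-2.6.18-8.el5.img | cpio -idv   # unpack the image
grep -i mpt init                                 # the init script insmods each bundled driver
ls lib/*.ko                                      # kernel modules packed into the image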

TL;DR: Attempted to clone a Red Hat 4 machine to newer hardware over the network using Clonezilla and got "could not find filesystem '/dev/root'". Modified fstab and grub.conf, installed the RAID driver, changed boot options, and recreated the initrd; same result.

I can provide screenshots or more info if needed. Any help is appreciated, thank you.

1 Answer


The issue here is that the root= option on the GRUB kernel line is incorrect. You need to update grub.conf and then re-install GRUB to the boot device.

Now, I am not sure where you should install it. Usually it should go on the actual disk device, that is, /dev/sdX, not on a partition (/dev/sdXN). However, installing it to the partition should cause no issues.

I am not familiar with how one updates an existing GRUB installation on Red Hat. I searched for instructions and found this: https://unix.stackexchange.com/questions/152222/equivalent-of-update-grub-for-rhel-fedora-centos-systems
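
On GRUB legacy the classic re-install goes through the grub shell. A sketch, assuming the cloned /boot is the first partition of the first BIOS disk and the system has been entered with chroot /mnt/sysimage from rescue mode:

cp -a /boot/grub /boot/grub.bak   # back up the existing GRUB files and config first
grub
grub> root (hd0,0)                # the partition holding /boot
grub> setup (hd0)                 # write stage 1 to the MBR of the whole disk
grub> quit

Alternatively, grub-install /dev/sda should do the same in one step.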

Tero Kilkanen