
I have: CentOS 6.7

grub-install -v

grub-install (GNU GRUB 0.97)

lsblk

(screenshot of lsblk output)

two new 128 GB SSDs

a live USB with Parted_Magic_2015.03.06

/boot/grub/device.map

# this device map was generated by anaconda
(hd0)     /dev/sda
(hd1)     /dev/sdb

/boot/grub/grub.conf

default=1
timeout=5
splashimage=(hd0,2)/grub/splash.xpm.gz
hiddenmenu
title CentOS (4.1.10-1.el6.elrepo.x86_64)
    root (hd0,2)
    kernel /vmlinuz-4.1.10-1.el6.elrepo.x86_64 ro root=/dev/mapper/VolGroup-LogVol02 LANG=uk_UA.UTF-8 rd_NO_LUKS  KEYBOARDTYPE=pc KEYTABLE=us rd_LVM_LV=VolGroup/LogVol02 SYSFONT=latarcyrheb-sun16 rhgb crashkernel=128M quiet rd_MD_UUID=88b7c4d8:48557d19:3018c405:b427edf6 rd_LVM_LV=VolGroup/LogVol00 rd_NO_DM
    initrd /initramfs-4.1.10-1.el6.elrepo.x86_64.img

I want:

1) create a new mdadm RAID 1 array with a single partition, using the two unformatted new 128 GB SSDs

2) copy md0 (boot) and VolGroup-LogVol01 (dm-2, home) to VolGroup-LogVol02 (dm-1)

3) swap will be mounted via /etc/fstab from a swap file

4) clone the current RAID layout to the new array; the result should look something like this:

(screenshot of the target lsblk layout)

5) make the corresponding changes in the boot files

6) restart the server and boot from the new md222

Please tell me how to do all of this so that no data is damaged and so that file permissions and SELinux settings are not changed.

I would be very grateful if someone would share their experience and write mini step-by-step instructions for making these changes!


1 Answer

1) Make a RAID 1 array with the two SSDs
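The answer shows the finished array below but not the creation step itself. A minimal sketch, assuming the SSDs appear as /dev/sdf and /dev/sdg (as in the mdadm -D output further down) and using a 1.0 superblock, which sits at the end of the device so GRUB legacy can still read /boot from the start of the partition:

```shell
# Device names are assumptions -- verify against lsblk before running.
# Put a single RAID-type partition on each new SSD:
parted -s /dev/sdf mklabel msdos mkpart primary 1MiB 100% set 1 raid on
parted -s /dev/sdg mklabel msdos mkpart primary 1MiB 100% set 1 raid on

# metadata 1.0 keeps the md superblock at the end of the device,
# which GRUB legacy needs in order to boot from the array
mdadm --create /dev/md127 --level=1 --raid-devices=2 \
      --metadata=1.0 --name=ssdraid /dev/sdg1 /dev/sdf1

# Record the array so it assembles automatically on boot
mdadm --detail --scan >> /etc/mdadm.conf
```

These commands are destructive to the target disks, so double-check the device names first.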

mdadm -D /dev/md127

/dev/md127:
        Version : 1.0
  Creation Time : Mon Dec 14 12:11:26 2015
     Raid Level : raid1
     Array Size : 125033344 (119.24 GiB 128.03 GB)
  Used Dev Size : 125033344 (119.24 GiB 128.03 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Wed Dec 16 11:10:25 2015
          State : active 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

           Name : localhost.localdomain:ssdraid
           UUID : f3q92q3f:6afff489:1fc15ss0:e38rr7fc
         Events : 2673

    Number   Major   Minor   RaidDevice State
       2       8       97        0      active sync   /dev/sdg1
       1       8       81        1      active sync   /dev/sdf1

parted -l

(screenshot of parted -l output)

2) Copy all files (file owners and permissions are preserved)

rsync -avxHAX --progress / /ssdsys/
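The rsync flags are what address the asker's permissions and SELinux concern. The same step with the flags spelled out, plus an optional safety net; the /ssdsys mount point comes from the command above, while the /dev/md127 device name is an assumption and the target filesystem must already exist:

```shell
mkdir -p /ssdsys
mount /dev/md127 /ssdsys   # device name is an assumption -- check lsblk

# -a  archive mode: owners, groups, permissions, timestamps, symlinks
# -v  verbose; -x stay on one filesystem (rerun per mount if needed)
# -H  preserve hard links
# -A  preserve ACLs
# -X  preserve extended attributes, which carry SELinux contexts
rsync -avxHAX --progress / /ssdsys/

# Optional safety net: force a full SELinux relabel on first boot
# from the copy, in case any context was not carried over
touch /ssdsys/.autorelabel
```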

3) Edit these files: /boot/grub/device.map, /boot/grub/grub.conf, /etc/fstab
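Exactly what to change depends on your device names; UUIDs are the safest handles. A sketch for collecting them, with the array name taken from the steps above:

```shell
# Array UUID -- usable in mdadm.conf and for the rd_MD_UUID= kernel argument
mdadm --detail /dev/md127 | grep -i uuid

# Filesystem UUID -- usable in /etc/fstab instead of a bare device path
blkid /dev/md127

# In device.map, map (hd0) to the new disk; in grub.conf, point
# root (hdX,Y) at the new /boot partition.
```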

4) Move the SSDs to the top of the BIOS boot order

5) Install GRUB (the commands below are entered at the grub> prompt)

grub

grub> find /boot/grub/stage1
grub> root (hd1,0)
grub> setup (hd1)
grub> root (hd0,0)
grub> setup (hd0)

Update 29-12-2015:

After you first boot the system from the new drive:

service named stop

delete, if they exist, all files and directories under /var/named/chroot/var/named/

delete, if they exist, all files under /var/named/chroot/etc/

service named start

(This is needed because I copied the system while the named service was running. If you use BIND, these directories are used as mount points for its chroot. To find out exactly which files need to be deleted, rename /var/named/chroot/ to /var/named/chroot-copy/, start BIND, and the "mount point not exist" error will show the paths of the files you need to remove.)
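The cleanup described above can be sketched as follows (paths taken from the answer; run it only on the new system, and stop BIND first so the chroot bind mounts are released):

```shell
service named stop

# These directories are bind-mount targets for the chrooted named;
# stale copies left behind by rsync block the mounts on the next start
rm -rf /var/named/chroot/var/named/*
rm -rf /var/named/chroot/etc/*

service named start
```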
