2

OK so I am a freelance sysadmin. I was asked to resize the root partition (/) because it was 20 GB while /home was 3 TB.

What I wasn't told is that the server uses RAID and GPT, so I can't use fdisk and will have to use parted, and I don't know whether the RAID will come into play.

Here is all of the parted, df -h, and fstab output: http://pastebin.com/RFbQL0qV

Can anyone help?

thms0
  • 71
  • 9

3 Answers

2

As you are using ext4, it should be possible to shrink the /home partition. Let's walk through an example, shrinking it to about 2 TB:

  1. Unmount the filesystem with umount /dev/md3
  2. Check the filesystem with fsck -f /dev/md3 (resize2fs requires a fresh forced check before shrinking)
  3. Shrink the filesystem with resize2fs /dev/md3 1800G
  4. Shrink the RAID device with mdadm --grow /dev/md3 --size=1900G. Note that I left the array larger than the underlying filesystem by a wide margin: the last thing you want is to shrink the array so much that the filesystem no longer fits inside it. That scenario almost guarantees data loss.
  5. Re-check the filesystem with fsck /dev/md3
  6. Remount the filesystem and try reading from and writing to it.
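The steps above can be sketched as a dry-run shell script. Every command is only echoed, so running it touches nothing; the device name /dev/md3 comes from the question, and the 1800G/1900G figures are the example sizes, not values to copy blindly. Remove the echoes only from a rescue environment, after a verified backup.

```shell
#!/bin/sh
# Dry-run sketch of the shrink procedure: echo each command instead of
# executing it, so the sequence can be reviewed safely beforehand.
DEV=/dev/md3
FS_SIZE=1800G   # new ext4 size (example value)
MD_SIZE=1900G   # new array size, deliberately larger than the filesystem
echo "umount $DEV"
echo "fsck -f $DEV"                          # forced check before resize2fs
echo "resize2fs $DEV $FS_SIZE"               # shrink the filesystem first
echo "mdadm --grow $DEV --size=$MD_SIZE"     # then shrink the array, with margin
echo "fsck -f $DEV"                          # re-check after the resize
echo "mount $DEV /home"                      # remount and test read/write
```

The ordering matters: shrink the filesystem first and grow it last, so the array is never smaller than the filesystem it contains.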

Here you can find some other information.

Anyway, if your /dev/md3 device is almost empty, destroying and recreating the array/partitions may be both easier and faster.

shodanshok
  • 44,038
  • 6
  • 98
  • 162
  • Yes, it needed rescue mode. I am backing up all files and trying it; will update later. I prefer this solution to the other one since it doesn't require me to use parted, just to resize the partition. I'll follow the same workflow for md2 and it should work (well, I hope so). – thms0 Apr 23 '15 at 02:48
  • Things didn't work out the way I wanted. I resized one partition and now I get "mdadm: Cannot set device size for /dev/md2: No space left on device". – thms0 Apr 23 '15 at 08:01
  • What is your current situation? Can you post your partition table setup and the likes? – shodanshok Apr 23 '15 at 08:13
  • It worked! After a reboot: /dev/md3 493G 1.5G 466G 1% /home. I will boot into rescue mode and do the same thing for md2. Full df: /dev/root 20G 18G 219M 99% / devtmpfs 32G 4.0K 32G 1% /dev none 4.0K 0 4.0K 0% /sys/fs/cgroup none 6.3G 828K 6.3G 1% /run none 5.0M 0 5.0M 0% /run/lock none 32G 0 32G 0% /run/shm none 100M 0 100M 0% /run/user /dev/md3 493G 1.5G 466G 1% /home. I am going to boot into rescue and try again for /. – thms0 Apr 23 '15 at 08:32
  • Update 3: md3 has the right size of 600 GB, per cat /proc/mdstat: 629145600 blocks [2/2] [UU]. I now want to extend md2, which is 20478912 blocks, to 1887436800, but --grow --size=1887436800 shows "no space left on device". I really don't understand. Why is there no space left?! – thms0 Apr 23 '15 at 10:11
  • Please give me more details. Can you repost something similar to your first post? – shodanshok Apr 23 '15 at 13:59
  • Here you go: http://pastebin.com/g416ss7q Thanks a lot for your help. – thms0 Apr 23 '15 at 14:34
  • You have two problems: **1)** even though you resized /dev/md3, its underlying partitions are still 3000 GB (see the parted output, third partition). **2)** Even after you resize md3's partitions, the space will be freed at their ends. This means the partitions underlying /dev/md2 have no room to grow, because md3's partitions sit immediately after them. You would have to physically move md3's partitions (using parted/gparted) to create space into which md2's partitions can grow. This is exactly the problem that LVM solves. You should seriously consider a server reinstallation. – shodanshok Apr 23 '15 at 14:55
  • Server reinstallation is exactly what I am considering. I did not do this setup. I am working remotely, so I can't solve this without physical access, right? Can't I just resize the md3 partition using parted, remotely in rescue mode? – thms0 Apr 23 '15 at 18:04
  • You not only need to resize the partitions but also to _move_ them, and moving partitions that big will be a long and tedious (albeit possible) process. – shodanshok Apr 23 '15 at 18:48
  • Can't you guide me through doing it ? I am lost :/ – thms0 Apr 23 '15 at 21:10
  • It is difficult to guide you exactly. The slightest misunderstanding (on my side or yours) or error will result in data loss. Are you _really_ sure you want to continue, and ready to face potential data loss? If not, stop here. – shodanshok Apr 24 '15 at 06:49
  • Yes. I have the same task again. I could pay you; I'm a freelancer and I'm being paid for it. – thms0 May 06 '15 at 09:22
  • Teach me parted; that would be your task, and I'd pay you, but you need the skills to do it without data loss. It's for a really important client of mine. – thms0 May 06 '15 at 09:23
0

This should be fun. Boot from a rescue disk, then:

  1. Use resize2fs to shrink the /home filesystem inside the RAID array.
  2. Use mdadm -z (--size) to shrink the RAID array itself.
  3. Use mdadm -f and -r to fail and remove one of the two drives from the RAID arrays.
  4. Use parted on the drive you removed to delete the partitions and recreate them at the sizes you want.
  5. Use mdadm --add to add that drive's partitions back into the arrays, and wait for the resync.
  6. Fail and remove the first drive from the arrays, repartition it the same way you did the second drive, re-add it, and wait for the resync again.
  7. Finally, use mdadm -z again to increase the usable size of the root RAID array, and resize2fs on it to expand the filesystem into that space.
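The mirror-break part of this procedure can be sketched as a dry-run script. The array name /dev/md3 is from the question, but the member names sda3/sdb3 are assumptions; check /proc/mdstat for the real ones. Commands are echoed only, never executed.

```shell
#!/bin/sh
# Dry-run sketch of failing one half of a two-disk RAID1 mirror,
# repartitioning it, and re-adding it. Member/disk names are assumed.
MD=/dev/md3
MEMBER=/dev/sdb3   # array member to fail first (assumed name)
DISK=/dev/sdb      # disk holding that member (assumed name)
echo "mdadm $MD --fail $MEMBER --remove $MEMBER"   # drop one half of the mirror
echo "parted $DISK"                                # delete/recreate partitions at new sizes
echo "mdadm $MD --add $MEMBER"                     # re-add the rebuilt partition
echo "cat /proc/mdstat"                            # watch until the resync completes
```

The same fail/repartition/re-add cycle is then repeated for the other disk, so the array is degraded (single-copy) twice during the process; a backup beforehand is essential.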

psusi
  • 3,247
  • 1
  • 16
  • 9
  • Won't that make me lose all my data? – thms0 Apr 22 '15 at 23:12
  • @thms0, no... your data is mirrored, so the idea is to break the mirror, partition the new disk the way you want it, clone the mirror to the new disk, then clone back. The shrinking of the volume is done offline and safely using resize2fs and mdadm to shrink the fs and raid array. – psusi Apr 23 '15 at 01:49
0

OK, just saw your answers; thanks everyone for your input. Right now I am going to set up an NFS mount and run rsync -avPH to take a full backup of the system. I have also saved the full list of installed packages. I will try your solution, and if it fails I will reinstall and just rsync back.
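That backup plan might look roughly like the following dry-run sketch. The NFS server name and the /mnt/backup mount point are assumptions for illustration, and the package-list command assumes a Debian-style system; the rsync flags (-avPH) are the ones from the plan above. Commands are echoed only.

```shell
#!/bin/sh
# Dry-run sketch of a full-system backup over NFS before risky resizing.
DEST=/mnt/backup   # assumed NFS mount point
echo "mount -t nfs backupserver:/export $DEST"   # server/export names are assumptions
# -a archive, -v verbose, -P progress/partial, -H preserve hard links;
# pseudo-filesystems are excluded so only real data is copied.
echo "rsync -avPH --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/run / $DEST"
echo "dpkg --get-selections > $DEST/packages.txt"   # installed-package list (Debian-style, assumed)
```

The -H flag matters for a restorable system backup: without it, hard-linked files (common under /usr) are duplicated instead of relinked on restore.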

I'll update later.

Thanks a lot for answering, anyway :).

thms0
  • 71
  • 9