
This Ubuntu Server 16.04 machine has these disks:

sudo fdisk -l
Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x7ac0eeb9

Device     Boot      Start        End    Sectors  Size Id Type
/dev/sda1             2048 3886718975 3886716928  1.8T fd Linux raid autodetect
/dev/sda2       3886721022 3907028991   20307970  9.7G  5 Extended
/dev/sda5       3886721024 3907028991   20307968  9.7G fd Linux raid autodetect

Partition 2 does not start on physical sector boundary.

Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0xc9b50d2d

Device     Boot      Start        End    Sectors  Size Id Type
/dev/sdb1  *          2048 3886718975 3886716928  1.8T fd Linux raid autodetect
/dev/sdb2       3886721022 3907028991   20307970  9.7G  5 Extended
/dev/sdb5       3886721024 3907028991   20307968  9.7G fd Linux raid autodetect

Partition 2 does not start on physical sector boundary.

Disk /dev/md1: 9.7 GiB, 10389291008 bytes, 20291584 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/md0: 1.8 TiB, 1989864849408 bytes, 3886454784 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

So, I have two physical 1.8 TB drives with three partitions each, and two RAID-1 arrays (/dev/md0 and /dev/md1).

If I do a cat /proc/mdstat I get:

Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sda1[0]
      1943227392 blocks super 1.2 [2/1] [U_]
      bitmap: 10/15 pages [40KB], 65536KB chunk

md1 : active raid1 sda5[0]
      10145792 blocks super 1.2 [2/1] [U_]

And, if I look inside each RAID I have:

sudo mdadm -D /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Mar 20 06:41:14 2018
     Raid Level : raid1
     Array Size : 1943227392 (1853.21 GiB 1989.86 GB)
  Used Dev Size : 1943227392 (1853.21 GiB 1989.86 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Wed Dec  5 19:38:00 2018
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : impacs:0
           UUID : 619c5551:3e475969:80882df7:7da3f864
         Events : 166143

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       2       0        0        2      removed

And

sudo mdadm -D /dev/md1
/dev/md1:
        Version : 1.2
  Creation Time : Tue Mar 20 06:41:40 2018
     Raid Level : raid1
     Array Size : 10145792 (9.68 GiB 10.39 GB)
  Used Dev Size : 10145792 (9.68 GiB 10.39 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

    Update Time : Sun Dec  2 00:57:07 2018
          State : clean, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : impacs:1
           UUID : 1b9a0dc4:cc30cd7e:274fefd9:55266436
         Events : 81

    Number   Major   Minor   RaidDevice State
       0       8        5        0      active sync   /dev/sda5
       2       0        0        2      removed

It looks like /dev/sdb1 is not part of /dev/md0 (and likewise /dev/sdb5 is missing from /dev/md1). How can I safely add it back to that RAID?

Edit: I should add that this RAID was created at install time using the Ubuntu Server installer, and I'm pretty sure I selected both 1.8 TB disks to be part of the array.

Edit: The failing drive was finally replaced, the RAID rebuilt without issues, and everything is OK now.

leonardorame

1 Answer


Your drive was taken offline from the RAID because it is faulty and throwing errors. Have the drive replaced. You can then create new partitions on the new drive and add them back to the arrays.
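Assuming the failed member is /dev/sdb and its replacement shows up under the same device name (verify with lsblk before running anything), the replacement could look roughly like this. This is a sketch, not an exact recipe; all commands are destructive to /dev/sdb and need root.

```shell
# 0. Confirm the old drive really is failing before swapping it
#    (reallocated/pending sectors and kernel I/O errors are the usual signs):
sudo smartctl -a /dev/sdb | grep -iE 'reallocated|pending|uncorrectable'
sudo dmesg | grep -i 'sdb'

# 1. If mdadm still lists the old partitions as faulty members, clear them out
#    (in this case mdstat already shows them as removed, so this may do nothing):
sudo mdadm /dev/md0 --remove detached
sudo mdadm /dev/md1 --remove detached

# 2. After physically swapping the drive, copy sda's MBR partition table
#    to the new disk so the partitions line up exactly:
sudo sfdisk -d /dev/sda | sudo sfdisk /dev/sdb

# 3. Add the new partitions back to their arrays; md will start resyncing:
sudo mdadm --manage /dev/md0 --add /dev/sdb1
sudo mdadm --manage /dev/md1 --add /dev/sdb5

# 4. Watch the rebuild until both arrays show [UU]:
watch cat /proc/mdstat

# 5. Since sdb carried the boot flag, reinstall GRUB on the new disk
#    so the machine can still boot if sda dies later:
sudo grub-install /dev/sdb
```

The sfdisk dump-and-restore step is what makes the new partitions byte-identical in size and position to sda's, which is what mdadm needs before it will accept them as replacement members.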

Michael Hampton