My current mdstat:
$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md0 : active raid6 sde[8] sdh[4] sdg[1] sdd[6] sdb[5] sdc[7]
      9766914560 blocks super 1.2 level 6, 512k chunk, algorithm 2 [7/6] [UUUUU_U]
unused devices: <none>
Here is mdadm --detail:
$ sudo mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Fri Apr 26 21:52:21 2013
Raid Level : raid6
Array Size : 9766914560 (9314.46 GiB 10001.32 GB)
Used Dev Size : 1953382912 (1862.89 GiB 2000.26 GB)
Raid Devices : 7
Total Devices : 6
Persistence : Superblock is persistent
Update Time : Tue Mar 28 15:19:34 2017
State : clean, degraded
Active Devices : 6
Working Devices : 6
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Name : server:0 (local to host server)
UUID : 7dfb32ef:8454e49b:ec03ac98:cdb2e691
Events : 34230
    Number   Major   Minor   RaidDevice State
       8       8       64        0      active sync   /dev/sde
       1       8       96        1      active sync   /dev/sdg
       4       8      112        2      active sync   /dev/sdh
       5       8       16        3      active sync   /dev/sdb
       6       8       48        4      active sync   /dev/sdd
      10       0        0       10      removed
       7       8       32        6      active sync   /dev/sdc
My questions are:
- How am I supposed to figure out which HDD was removed, without tricks and guesswork like subtracting the set of disks shown in the mdadm output from the set of all HDDs in the system (ls /dev/sd*)?
- Why might mdadm have removed the disk? Is it OK to re-add it if I run smartctl tests and they finish successfully? (See the sketch right after this list for what I have in mind.)
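For context, this is what I am considering for the second question once the removed disk is identified. It is only a sketch: I am assuming the disk turns out to be /dev/sdf and that the SMART tests actually come back clean.

# See why the kernel / md dropped the disk in the first place
$ dmesg | grep -iE 'sdf|md0'

# Run a long SMART self-test, then review the results once it has finished
$ sudo smartctl -t long /dev/sdf
$ sudo smartctl -a /dev/sdf

# If the drive looks healthy, try to re-add it; --re-add only works while the
# old superblock / event count still allows it, otherwise --add will accept
# the disk and trigger a full resync
$ sudo mdadm /dev/md0 --re-add /dev/sdf
# $ sudo mdadm /dev/md0 --add /dev/sdf

# Watch the rebuild
$ cat /proc/mdstat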
UPDATE: The correct answer is sdf. I found it by comparing the set of disks shown in the mdadm output with all the disks in the system (sda is the boot disk with the OS), but I still find that procedure too cumbersome.
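For reference, this is roughly that comparison, scripted. Again only a sketch: it assumes every array member is a whole /dev/sd* disk rather than a partition, that md0 is the only array, and the /tmp file names are arbitrary.

# Whole disks present in the system
$ lsblk -dn -o NAME | grep '^sd' | sort > /tmp/all-disks
# Disks the kernel currently counts as members of md0
$ ls /sys/block/md0/slaves | sort > /tmp/md0-members
# Disks in the system but not in the array; on this system that leaves
# sda (the boot disk, to be ignored) and sdf
$ comm -23 /tmp/all-disks /tmp/md0-members

# Alternative: ask every disk whether it carries this array's superblock;
# the dropped member still reports the Array UUID and its old device role
$ for d in /dev/sd?; do echo "== $d"; sudo mdadm --examine "$d" 2>/dev/null | grep -E 'Array UUID|Device Role'; done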