
I got a brand new (same model) WD Caviar Green drive to replace the faulty one in my 5x2TB RAID 5 array. However, the new disk appears to be slightly different. According to the internet, Western Digital changed the sector size on these drives?
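
One way to double-check what the kernel reports for each drive's sector sizes (a quick sysfs check, assuming the old and new disks really are /dev/sda and /dev/sdd as in the fdisk output below):

[root]# cat /sys/block/sda/queue/logical_block_size /sys/block/sda/queue/physical_block_size
512
512
[root]# cat /sys/block/sdd/queue/logical_block_size /sys/block/sdd/queue/physical_block_size
512
4096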

Here's one of the original disks:

Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
81 heads, 63 sectors/track, 765633 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xfdeee051

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1              63  3907025198  1953512568   83  Linux

And the new one:

Disk /dev/sdd: 2000.4 GB, 2000398934016 bytes
81 heads, 63 sectors/track, 765633 cylinders, total 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x11e82af4

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1            2048  3907029167  1953513560   83  Linux

Note the different physical sector and I/O sizes. When I try to copy the partition table over with sfdisk...

[root]# sfdisk -d /dev/sda | sfdisk /dev/sdd
Checking that no-one is using this disk right now ...
OK

Disk /dev/sdd: 243201 cylinders, 255 heads, 63 sectors/track
Old situation:
Warning: The partition table looks like it was made
  for C/H/S=*/81/63 (instead of 243201/255/63).
For this listing I'll assume that geometry.
Units = cylinders of 2612736 bytes, blocks of 1024 bytes, counting from 0

   Device Boot Start     End   #cyls    #blocks   Id  System
/dev/sdd1          0+ 765633- 765634- 1953513560   83  Linux
        end: (c,h,s) expected (1023,80,63) found (513,80,63)
/dev/sdd2          0       -       0          0    0  Empty
/dev/sdd3          0       -       0          0    0  Empty
/dev/sdd4          0       -       0          0    0  Empty
New situation:
Units = sectors of 512 bytes, counting from 0

   Device Boot    Start       End   #sectors  Id  System
/dev/sdd1            63 3907025198 3907025136  83  Linux
/dev/sdd2             0         -          0   0  Empty
/dev/sdd3             0         -          0   0  Empty
/dev/sdd4             0         -          0   0  Empty
Warning: partition 1 does not end at a cylinder boundary

sfdisk: I don't like these partitions - nothing changed.
(If you really want this, use the --force option.)

Ehhhh I don't want to force it. When I try to do it manually...

[root]# fdisk /dev/sdd

The device presents a logical sector size that is smaller than
the physical sector size. Aligning to a physical sector (or optimal
I/O) size boundary is recommended, or performance may be impacted.

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4, default 1): 
Using default value 1
First sector (2048-3907029167, default 2048):

It doesn't let me start the partition at 63, like the other disks. Help! Should I force it?
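
For reference, the 2048 default is about alignment: sector 63 * 512 bytes is not a multiple of the 4096-byte physical sector, while sector 2048 * 512 bytes is. A quick check of the remainders in the shell:

[root]# echo $((63 * 512 % 4096)) $((2048 * 512 % 4096))
3584 0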

colinmarc
  • The drives are the same size: compare the first two lines of the fdisk output and notice they are identical. But the new drive is a 4k drive. If you did force it, that partition would not be aligned, and the drive would not perform as well as the others. – Zoredache Oct 20 '11 at 22:05
  • so what should I do? – colinmarc Oct 20 '11 at 22:06
  • Back up all your data to some other location, partition the new drive to match, rebuild the RAID volume, and restore? Or possibly try to buy another drive that matches your current ones. – Zoredache Oct 20 '11 at 22:08
  • hm. both those options suck... – colinmarc Oct 20 '11 at 22:11
  • In the future you'll leave a bit of room at the end of the drives just in case. And never need it :) – MikeyB Oct 21 '11 at 13:03
  • so... I partitioned it starting at 2048, and tried just re-adding it to the array, expecting it to fail. It's rebuilding the array now, and seems happy. What's the downside? – colinmarc Oct 21 '11 at 18:06
  • it seems like the new disk's partition is actually bigger (as you can see above), so the array is just not using part of it. I could use some help understanding how that happened. Maybe I did @MikeyB's suggestion and forgot about it? =P – colinmarc Oct 21 '11 at 18:09
  • Figure out what the actual size of each of the raid partitions is: `blockdev --getsize64 /dev/sdX1` – MikeyB Oct 21 '11 at 18:35
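
Following up on that last suggestion, a minimal loop to compare the usable size of every RAID member (assuming the members are /dev/sda1 through /dev/sde1, which is just a guess at the device names for a 5-disk array):

[root]# for p in /dev/sd[abcde]1; do echo -n "$p: "; blockdev --getsize64 "$p"; done

md only needs the new member to be at least as large as the space it uses on the existing ones; per the comments above, the 2048-start partition runs to the end of the disk and is slightly larger than the 63-start partitions, so the rebuild can proceed with a bit of the new partition left unused.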

1 Answer


Three options:

  • Bite the bullet and remake your MD.

  • Force the partition table and suffer from the performance hit

  • Use sg_format to resize the device slightly larger so that there's enough room for an aligned partition (if you're LUCKY)

I suspect you'll do (2) until you find time to do (1).
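
If you do end up redoing the new disk instead of forcing the old table onto it, here's a rough sketch of the aligned-partition route the comments describe, assuming the array is /dev/md0 and the failed member has already been removed from it (both assumptions, adjust for your setup):

[root]# parted -s -a optimal /dev/sdd mklabel msdos mkpart primary 1MiB 100%
[root]# parted /dev/sdd align-check optimal 1
1 aligned
[root]# mdadm --manage /dev/md0 --add /dev/sdd1

The partition starts at sector 2048 and runs to the end of the disk, so it ends up slightly larger than the 63-start members and md can rebuild onto it, leaving a little space unused.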

MikeyB