
I am trying to set up DRBD on a raw disk device, /dev/sdb, with no partition table and no LVM stack (PV/VG/LV).

As this disk is virtual and the hypervisor I use allows on-the-fly disk extension, I do not want to bother with LVM operations or re-partitioning when the time comes to extend my DRBD file system.

My resource definition could not be simpler:

resource data {
  device  /dev/drbd1;
  meta-disk internal;
  disk    /dev/sdb;
  on node1 {
    address 10.10.10.16:7789;
  }
  on node2 {
    address 10.10.10.17:7789;
  }
}

Creating the metadata works:

# drbdadm create-md data
initializing activity log
NOT initializing bitmap
Writing meta data...
New drbd meta data block successfully created.

But the attach operation fails:

 # drbdadm attach data
 1: Failure: (127) Device minor not allocated
 additional info from kernel:
 unknown minor
 Command 'drbdsetup-84 attach 1 /dev/sdb /dev/sdb internal' terminated with exit code 10

The error message really sounds as if the command expects a partition table index as the device minor number.

How should I attach a raw device to a DRBD resource?

Yves Martin

2 Answers


drbdadm attach data isn't the only command you need to run after creating the metadata.

One of the following procedures should work for getting your device up:

drbdadm create-md data
drbdadm up data

-- or --

drbdadm create-md data
drbdsetup-84 new-resource data
drbdsetup-84 new-minor data 1 0 
drbdmeta 1 v08 /dev/sdb internal apply-al 
drbdsetup-84 attach 1 /dev/sdb /dev/sdb internal
drbdsetup-84 connect data ipv4:10.10.10.16:7789 ipv4:10.10.10.17:7789 --protocol=C

Once you've done that, you'll have a device with a connection state of "Connected" and a disk state of "Inconsistent/Inconsistent"; this will always, and only, be the case after you create brand-new metadata on both nodes.
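
To sanity-check the states on DRBD 8.4, look at /proc/drbd; the resource line should look roughly like this (the exact flags may vary on your build):

# cat /proc/drbd
 1: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----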

From there, simply pick one node to promote to Primary, which will cause DRBD to sync from Primary => Secondary:

# drbdadm primary data --force 

You should never, under normal circumstances, need to use --force to promote your DRBD device from here on out.
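
You can follow the initial synchronization in /proc/drbd until both sides report ds:UpToDate/UpToDate, for example:

# watch -n1 cat /proc/drbd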

However, you also said:

As this disk is virtual and the hypervisor I use allows on-the-fly disk extension, I do not want to bother with LVM operations or re-partitioning when the time comes to extend my DRBD file system.

That probably isn't going to work with DRBD. DRBD puts its metadata at the end of the block device, and in that metadata the number of blocks (among other things) is tracked. Dynamically extending the backing block device is likely going to cause problems for you.
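
If you want to see what DRBD actually records there, one way (a sketch for DRBD 8.4, run with the resource down; resource name from the question) is to dump the metadata as text:

# drbdadm down data
# drbdadm dump-md data
# drbdadm up data

The dump contains fields such as la-size-sect, the last agreed device size in sectors, which is why a backing device that grows behind DRBD's back needs an explicit drbdadm resize before the extra space becomes usable.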

Matt Kereczman
  • All these steps are processed by the Debian drbd.service, so only "create-md" is really required when the situation is clear... But mine was inconsistent at that step, so everything just failed. – Yves Martin Sep 22 '16 at 20:43
  • @YvesMartin, I'll update my answer with what to do after you start the services or otherwise up your device for the first time (and only the first time). – Matt Kereczman Sep 22 '16 at 22:23
  • About DRBD disk extension: everything works as expected, live and without downtime; tested and applied in production too. First extend the virtual disk, trigger `/sys/block/sdX/device/rescan`, run `drbdadm -- --assume-clean resize data`, and last but not least extend the FS with `resize2fs`. – Yves Martin Jul 19 '17 at 07:19
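
Consolidating that comment into one sequence (a sketch; sdb, the data resource, and an ext4 file system on /dev/drbd1 are assumed, as in this question; grow the virtual disk and rescan on both nodes, then resize on the primary):

# echo 1 > /sys/block/sdb/device/rescan
# drbdadm -- --assume-clean resize data
# resize2fs /dev/drbd1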

In the very specific case of the Debian DRBD package, there is no need to run "attach data" yourself.

Here is the minimal sequence to get DRBD up and running with Debian (a consolidated transcript follows the list):

  • Create your resource file /etc/drbd.d/data.res on both nodes, typically defining /dev/drbd1 (remember this volume number, 1, for the clear-bitmap operation below!)
  • Invoke drbdadm create-md data on both nodes
  • Start the service on both nodes; they should wait for each other to be ready: systemctl start drbd.service
  • Confirm the Connected state with drbdadm cstate data. If not, do not go further until any service-startup or network-connectivity issue is resolved.
  • On the primary node only, clear the bitmap to skip the useless initial synchronization: drbdadm -- --clear-bitmap new-current-uuid data/1 (mind the last parameter: resourceName/volumeNumber)
  • On the primary node only, promote the node to primary: drbdadm primary data
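
Put together, the transcript looks like this (node1 as the future primary; hostnames and resource name as in the question):

# drbdadm create-md data                                 (both nodes)
# systemctl start drbd.service                           (both nodes)
# drbdadm cstate data                                    (both nodes; expect Connected)
# drbdadm -- --clear-bitmap new-current-uuid data/1      (primary node only)
# drbdadm primary data                                   (primary node only)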

From that point, the /dev/drbd1 device is available on the primary node for any regular block operations such as blockdev or mkfs.
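
For example, a minimal sketch to format and mount it (ext4 and the mount point are illustrative choices, not requirements):

# mkfs.ext4 /dev/drbd1
# mount /dev/drbd1 /mnt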

Trigger the clear-bitmap operation with care: it makes any data on the secondary node unrecoverable. That said, it is really convenient for an initial setup, as it prevents the secondary node's storage from being fully written for hours, which would force your virtualization layer to allocate blocks on storage, an annoyance with thin provisioning.

Yves Martin
  • The `new-current-uuid --clear-bitmap` does not discard data on either node. That command clears out DRBD's bitmap, effectively telling it to skip the initial sync. – Matt Kereczman Jul 02 '19 at 16:34
  • OK, I agree... I have to find a better wording to express my idea that any data on the secondary node will no longer be accessible... – Yves Martin Jul 04 '19 at 10:02
  • Hi, is there any way to force the other replicas to perform the initial synchronization after executing `create-md` and `new-current-uuid` with the `--clear-bitmap` and `--force-resync` options on the first one? – kvaps Jul 06 '21 at 10:34
  • I do not understand your question: what are "other replicas"? There is no need for an initial synchronization, as the just-created block device is supposed to be "empty"... – Yves Martin Jul 22 '21 at 20:57