Does Ceph have high availability? I configured 2 nodes like this:

  cluster:
    id:     07df97db-f315-4c78-9d2a-ab85007a1856
    health: HEALTH_WARN
            Reduced data availability: 32 pgs inactive
            Degraded data redundancy: 374/590 objects degraded (63.390%), 18 pgs degraded, 32 pgs undersized

  services:
    mon: 2 daemons, quorum ceph1,ceph2
    mgr: ceph1(active), standbys: ceph2
    mds: mycephfs-1/1/1 up  {0=ceph1=up:active}, 1 up:standby
    osd: 2 osds: 1 up, 1 in

  data:
    pools:   6 pools, 96 pgs
    objects: 216  objects, 12 MiB
    usage:   75 MiB used, 945 MiB / 1020 MiB avail
    pgs:     33.333% pgs not active
             374/590 objects degraded (63.390%)
             64 active+clean
             18 undersized+degraded+peered
             14 undersized+peered

As you can see, I set up 2 MONs called ceph1 and ceph2, but when I stop ceph1, ceph2 is unable to write files to the CephFS-mounted storage on the VM.

So how do I make Ceph highly available? Does it require more nodes or something like that?

1 Answer

You need at least 3 MONs to achieve HA because of the quorum. With only 2 nodes, your storage will be stopped by a problem on one node.
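For example, assuming a third host named ceph3 is reachable and the cluster is managed by the cephadm orchestrator (the hostname and IP below are placeholders), adding a third monitor could look roughly like this:

    # register the new host with the orchestrator (IP is a placeholder)
    ceph orch host add ceph3 192.168.1.13

    # run monitors on all three hosts
    ceph orch apply mon --placement="ceph1,ceph2,ceph3"

    # verify that all three monitors are in quorum
    ceph mon stat

If the cluster was set up with ceph-deploy instead, `ceph-deploy mon add ceph3` achieves the same thing.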

"When a Ceph Storage Cluster runs multiple Ceph Monitors for high availability, Ceph Monitors use Paxos to establish consensus about the master cluster map. A consensus requires a majority of monitors running to establish a quorum for consensus about the cluster map (e.g., 1; 2 out of 3; 3 out of 5; 4 out of 6; etc.)." link

If you need HA storage with 2 nodes, take a look at solutions that have been designed for this. I can recommend StarWind VSAN Free, which uses active-active replication and has no quorum requirement.

batistuta09