I have a 5-node Proxmox cluster using Ceph as the primary VM storage backend. The Ceph pool is currently configured with a size of 5 (one replica on each node) and a min_size of 1. Due to the high size setting, much of the available space in the pool is being used to store unnecessary replicas (a 5-node Proxmox cluster can only sustain 2 simultaneous node failures anyway), so my goal is to reduce the size parameter to 3 by altering the setting on the pool itself, thereby increasing the available pool space.
I've gone through the Proxmox and Ceph documentation but couldn't find information on reducing the size parameter on a live pool. I did find the command to set the size parameter (included below), but I'm not sure what potential issues I may encounter, or whether reducing the size on a live pool is even possible. Unfortunately I can't run any tests either, since the pool is in production.
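For reference, this is the command I found (pool name taken from the ls detail output in the EDIT below); I haven't run it against the live pool:

root@node1:~# ceph osd pool set ceph-5 size 3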
I've already considered simply creating a new pool with the appropriate parameters, but I would prefer to avoid the time spent migrating data from one pool to another if I can.
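If it comes to that, the fallback I had in mind is roughly the following sketch (ceph-3 and vm-100-disk-0 are just example names, pg_num 128 is a guess, and it assumes a Ceph release new enough to have rbd migration):

# create the new pool with the desired replication
ceph osd pool create ceph-3 128
ceph osd pool set ceph-3 size 3
ceph osd pool application enable ceph-3 rbd
# live-migrate one RBD image from the old pool to the new one
rbd migration prepare ceph-5/vm-100-disk-0 ceph-3/vm-100-disk-0
rbd migration execute ceph-3/vm-100-disk-0
rbd migration commit ceph-3/vm-100-disk-0

Repeating that for every disk image is exactly the time sink I'd like to avoid.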
Thanks in advance.
EDIT:
root@node1:~# ceph osd pool ls detail
pool 4 'ceph-5' replicated size 5 min_size 1 crush_rule 0 object_hash rjenkins pg_num 512 pgp_num 512 autoscale_mode warn last_change 42673 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd