
Here's what my Ceph situation looks like (from ceph df):

GLOBAL:
    SIZE     AVAIL     RAW USED     %RAW USED 
    596G      593G        3633M          0.59 
POOLS:
    NAME                          ID     USED     %USED     MAX AVAIL     OBJECTS 
    rbd                           0         0         0          296G           0 
    .rgw.root                     1      1636         0          296G           4 
    default.rgw.control           2         0         0          296G           8 
    default.rgw.data.root         3      1214         0          296G           4 
    default.rgw.gc                4         0         0          296G          32 
    default.rgw.log               5         0         0          296G         127 
    default.rgw.users.uid         6       327         0          296G           2 
    default.rgw.users.keys        7        12         0          296G           1 
    default.rgw.meta              8      3281         0          296G          10 
    default.rgw.buckets.index     9         0         0          296G           2 
    default.rgw.buckets.data      12        0         0          197G           0 

I notice that my global size is 596G, but default.rgw.buckets.data, the pool where all the data I send to the RADOS Gateway ends up, shows only 197G MAX AVAIL. Why is this? How can I use all my available space with that pool?

1 Answer

The 'MAX AVAIL' column represents the amount of data that can be written before the first OSD becomes full. It takes the projected distribution of data across OSDs from the CRUSH map into account, so the first OSD expected to fill up sets the limit.
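
If your OSDs have uneven weights or uneven utilization, that projection can come out noticeably lower than simple raw capacity divided by the replica count. You can see each OSD's size, use, and variance with:

ceph osd df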

It also factors in the pool's replication size. If your data pool has a larger replication size than the other pools, that would explain the difference. In your output, 296G is roughly half of the 593G raw space available and 197G is roughly a third of it, which suggests most of your pools keep 2 copies of each object while default.rgw.buckets.data keeps 3.
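
If you want to compare all pools at once, the pool detail listing shows each pool's replicated size on one line:

ceph osd pool ls detail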

You can also check an individual pool's replication size directly:

ceph osd pool get default.rgw.buckets.data size
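
If you decide you'd rather trade redundancy for capacity in that pool, the replication size can be lowered to match the other pools:

ceph osd pool set default.rgw.buckets.data size 2

Keep in mind that fewer replicas means less protection against disk failure, and after changing size it's worth checking the pool's min_size (ceph osd pool get default.rgw.buckets.data min_size) so I/O doesn't block unexpectedly when an OSD goes down.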