
Since CephFS does not support snapshots yet, could we use Ceph pool snapshots for backup purposes, to protect against accidental deletion of files inside CephFS?

ceph osd pool mksnap {pool-name} {snap-name}
ceph osd pool rmsnap {pool-name} {snap-name}
rados -p {pool-name} lssnap
rados -p {pool-name} rollback {obj-name} {snap-name}
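
For example, I imagine the workflow would be something like this, assuming the CephFS data pool is named cephfs_data (the snapshot and object names are just placeholders; note that rados rollback works per object, not per pool):

ceph osd pool mksnap cephfs_data before-cleanup
rados -p cephfs_data lssnap
rados -p cephfs_data rollback {object-name} before-cleanup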

If it's possible, I would like to use Ceph pool snapshots to back up files inside CephFS and roll back the CephFS pools should there be an accidental file deletion.

I know we could always use offsite backup, but I cannot afford another 200+ TB of storage at the moment, and I hope the snapshots will only use a small amount of space inside the Ceph cluster.

chrone

1 Answer


Well, I was wondering about the usage and mechanism of CephFS snapshots, and the search results brought me here.

Firstly, snapshots in CephFS are available, but not yet stable. With allow_new_snaps set, snapshots are enabled in CephFS, and taking one is as easy as creating a directory. Besides the instability, what I've found is that files in snapshots still seem to change as the files in the filesystem change, but I haven't figured out why.
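To try it, something along these lines should work (the exact flag syntax varies by Ceph release, and /mnt/cephfs and mydir are just placeholder paths):

ceph mds set allow_new_snaps true --yes-i-really-mean-it
mkdir /mnt/cephfs/mydir/.snap/my-first-snapshot    # take a snapshot of mydir
rmdir /mnt/cephfs/mydir/.snap/my-first-snapshot    # remove that snapshot again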

Snapshotting the pools seems to be a reliable way to do backups, but keep in mind that you have to snapshot both the data pool and the metadata pool, and both snapshots need to be taken at the same time in order to get a consistent snapshot of the filesystem. What's worse, to retrieve a single file or directory from the snapshot you would need to combine both snapshots and build a new filesystem from them, but multiple filesystems are not yet implemented in Ceph, AFAIK. So your only way to recover may be to overwrite the current filesystem with the snapshot entirely.
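For example, something like this, assuming the default pool names cephfs_data and cephfs_metadata (yours may differ), run back to back since there is no way to take the two snapshots atomically:

ceph osd pool mksnap cephfs_data fs-backup-20170109
ceph osd pool mksnap cephfs_metadata fs-backup-20170109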

I'm using the allow_new_snaps way, which seems more promising.

  • Thanks @wangguoqin1001! I guess we have to wait for this to be stable in a future version of Ceph. :) – chrone Jan 09 '17 at 02:18
  • I found https://docs.ceph.com/en/mimic/dev/cephfs-snapshots/ and there is no hint that this feature is unstable. Has it become stable in the meantime? Is the current answer outdated? – user643011 Sep 18 '21 at 13:09
  • @user643011 CephFS has advanced a lot over these years, and sorry, I'm not actively using Ceph these days, at least not this year. I believe things are much better now, and I will get back to you when I'm back to work later this year. – wangguoqin1001 Sep 18 '21 at 19:30