Questions tagged [ceph]

Ceph is a free software storage platform designed to present object, block, and file storage from a single distributed computer cluster. Ceph's main goals are to be completely distributed without a single point of failure, scalable to the exabyte level, and freely available.

156 questions
2 votes, 4 answers

Scale-out distributed storage with snapshots

I'm aware there are many similar questions around, several with great answers. I still haven't quite found what I'm looking for though: a distributed, scale-out FS that supports snapshots. Gluster with finished snapshot support would be great, but…
user206444
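
CephFS itself is one answer to this: it is distributed, scales out, and supports snapshots. A minimal sketch, assuming a CephFS mount at /mnt/cephfs and a directory mydata (both hypothetical names): a snapshot is just a mkdir inside the hidden .snap directory.

    # On older releases, snapshots may need to be enabled once per filesystem:
    ceph fs set cephfs allow_new_snaps true

    # Create a snapshot of /mnt/cephfs/mydata:
    mkdir /mnt/cephfs/mydata/.snap/before-upgrade

    # Removing the snapshot is just removing that directory:
    rmdir /mnt/cephfs/mydata/.snap/before-upgrade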
2 votes, 1 answer

How do I mount an RBD device from fstab?

According to this link: http://docs.ceph.com/docs/master/start/quick-rbd/ I can mount an rbd, which works perfectly. The question I have is how do I do this from fstab? The end goal is to mount it to /var/lib/mysql. I've only found examples for…
hookenz • 14,132
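
One common recipe uses the rbdmap service that ships with Ceph: list the image in /etc/ceph/rbdmap so it is mapped at boot, then mount the resulting block device from fstab with _netdev. A sketch, assuming a pool rbd and an image mysql (both placeholder names):

    # /etc/ceph/rbdmap — image to map at boot:
    rbd/mysql id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

    # /etc/fstab — mount the mapped device:
    /dev/rbd/rbd/mysql  /var/lib/mysql  ext4  defaults,noatime,_netdev  0 0

    # Enable the mapping service:
    systemctl enable rbdmap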
2 votes, 1 answer

What kind of "volume/storage management" has the largest support/feature set these days?

(DISCLAIMER: This is "time bound", as is probably every other question; I just point it out now instead of implying it.) Part of reviewing a few basic decisions for our infrastructure is once again the topic of volume/storage management. I'm mainly…
Martin M. • 6,428
2 votes, 1 answer

Copy KVM image from Ceph to other storage

A while ago I created a Ceph storage cluster and connected it to proxmox2 according to the method described in http://pve.proxmox.com/wiki/Storage:_Ceph. We have some KVM images running on the Ceph storage that were originally just for testing…
Vincent • 191
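
For reference, two standard ways to get an image out of RBD, assuming a pool rbd and an image vm-100-disk-1 (placeholder names):

    # Export to a local raw file:
    rbd export rbd/vm-100-disk-1 /mnt/backup/vm-100-disk-1.raw

    # Or convert straight to qcow2 with qemu-img, which speaks rbd natively:
    qemu-img convert -p -f raw -O qcow2 \
        rbd:rbd/vm-100-disk-1 /mnt/backup/vm-100-disk-1.qcow2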
2 votes, 0 answers

Proxmox on Ceph performance & stability issues / Configuration doubts

We have just installed a cluster of 6 Proxmox servers, using 3 nodes as Ceph storage and 3 nodes as compute nodes. We are experiencing strange and critical issues with the performance and stability of our cluster. VMs and Proxmox web access tend…
Danyright • 163
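
A few standard commands for narrowing this kind of problem down (the pool name below is a placeholder):

    # Overall health, including slow/blocked requests:
    ceph -s
    ceph health detail

    # Per-OSD commit/apply latency, to spot one misbehaving disk:
    ceph osd perf

    # Raw write-throughput baseline, 4 MB objects for 10 seconds:
    rados bench -p testpool 10 write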
2 votes, 0 answers

Ceph/RGW encryption with multiple tenants

I'm currently planning a small OpenStack deployment and I want to use Ceph for object storage (via RADOS Gateway) and block storage. Ceph supports multitenancy. It also supports encryption using key management services like Vault or Barbican. Since…
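
As a sketch of the Vault integration, the RGW SSE-KMS options look roughly like this in ceph.conf (the section name, address, and token path are assumptions):

    [client.rgw.gateway1]
    rgw crypt s3 kms backend = vault
    rgw crypt vault auth = token
    rgw crypt vault addr = http://vault.example.com:8200
    rgw crypt vault token file = /etc/ceph/vault.token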
2 votes, 1 answer

bluestore(/var/lib/ceph/osd/ceph-2/block) _read_bdev_label failed to open /var/lib/ceph/osd/ceph-2/block: (1) Operation not permitted

When I upgraded from Octopus 15.2.10 to Pacific 16.2.0, the mon nodes started successfully using the manual upgrade process (installing the packages with no orch); however, when I upgraded the OSDs, the ceph-osd service does not…
Behzad • 37
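
A frequent cause of this after a package upgrade is that the OSD's block device ends up owned by root instead of the ceph user. A hedged check, assuming OSD 2 from the error message:

    # See who owns the device behind the block symlink:
    ls -lL /var/lib/ceph/osd/ceph-2/block

    # If it is root:root, hand it back to ceph and restart the OSD:
    chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
    chown ceph:ceph "$(readlink -f /var/lib/ceph/osd/ceph-2/block)"
    systemctl restart ceph-osd@2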
2 votes, 1 answer

Orchestrator is not available with fresh Rook instance

I'm trying to set up a Rook Ceph cluster on my Kubernetes cluster. Topology: 3 Kubernetes nodes (all are master/worker), each node has /dev/vdX on it for Ceph, and each node is intended to work as part of the Ceph cluster. I deployed the Rook operator…
cclloyd • 583
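
For what it's worth, the orchestrator CLI has to be pointed at the Rook backend before ceph orch commands work; a minimal sketch, run from the rook-ceph toolbox pod:

    ceph mgr module enable rook
    ceph orch set backend rook
    ceph orch status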
1 vote, 0 answers

Ceph Clock Skew

I want to know where I can configure the limit of skew time for my Ceph monitors. Also, how does Ceph throw this clock skew error? Specifically, from which file, and where can I find that file so that I can edit it? I am already running NTP and…
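
The threshold is a monitor option rather than a file to edit by hand: mon_clock_drift_allowed, which defaults to 0.05 seconds. A sketch of raising it (the 0.1 value is only an example):

    # Releases with the central config database:
    ceph config set mon mon_clock_drift_allowed 0.1

    # Older releases take it in ceph.conf instead:
    # [mon]
    # mon clock drift allowed = 0.1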
1 vote, 0 answers

Ceph netfilter connection tracking issues

We have a Ceph cluster (Ubuntu 18.04, Luminous) for OpenStack images and volumes. As I was taking it into production I found many performance issues: slow OSDs and throughput down to a trickle; this turned out to be due to the iptables rules. As is…
Dennis • 11
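
Two hedged mitigations, assuming the default OSD port range of 6800-7300: exempt Ceph traffic from connection tracking entirely, or grow the conntrack table.

    # Skip conntrack for OSD traffic in the raw table:
    iptables -t raw -A PREROUTING -p tcp --dport 6800:7300 -j NOTRACK
    iptables -t raw -A OUTPUT -p tcp --sport 6800:7300 -j NOTRACK

    # Or simply raise the table size:
    sysctl -w net.netfilter.nf_conntrack_max=1048576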
1 vote, 2 answers

GlusterFS or Ceph RBD for storing Virtual Machine image

I am using GlusterFS 5.3 for storing images of virtual machines in a CloudStack/KVM environment; the majority of VMs are DB servers (SQL Server & MariaDB). But I am facing performance issues on VMs, specifically on database servers. I get lots of time-out…
Prateek • 11
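
When comparing the two backends for a database workload, a small-block random-write fio run inside a guest gives a like-for-like number (the file path, size, and runtime here are placeholders):

    fio --name=dbtest --filename=/var/lib/mysql/fio.tmp --size=4G \
        --rw=randwrite --bs=8k --iodepth=32 --ioengine=libaio \
        --direct=1 --runtime=60 --time_based --group_reporting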
1 vote, 0 answers

ceph-deploy osd create --data /dev/vdb fails

From my Ceph (Mimic release) administrative node I ran ceph-deploy osd create --data /dev/vdb ceph0 against a bare-metal Ceph node and it worked without error. Now I run the same command against a virtual Ceph node (with an unpartitioned qcow2…
mr.zog • 902
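
ceph-deploy often balks at a disk carrying leftover partition or LVM signatures; zapping the virtual disk first and retrying is a common remedy (the hostname below is a placeholder):

    ceph-deploy disk zap ceph-virt0 /dev/vdb
    ceph-deploy osd create --data /dev/vdb ceph-virt0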
1 vote, 0 answers

Hung kernel tasks after unclean shutdown of ceph cluster

I am running Ceph (created by the rook-ceph operator v0.9.3) on Kubernetes v1.13. After an unclean shutdown of our cluster, some processes randomly go into uninterruptible sleep. After some time, the Kubernetes cluster fails to schedule new Pods…
strangedev • 11
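
Some standard diagnostics for D-state processes, to at least see what they are blocked on:

    # List uninterruptible tasks and their wait channel:
    ps -eo pid,stat,wchan:32,comm | awk '$2 ~ /D/'

    # Kernel complaints about tasks blocked too long:
    dmesg | grep "blocked for more than"

    # Dump kernel stacks of all blocked tasks to the kernel log:
    echo w > /proc/sysrq-trigger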
1 vote, 1 answer

Ceph iSCSI Gateway installation on Ubuntu 18.04

I'm installing Ceph using Ansible with the ceph-ansible project, branch static-3.2. There is a problem with the iSCSI Gateway installation. If you use the iscsigws name in the inventory file, it shows that this is available only for RHEL. (Still doesn't work…
Lisek • 199
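
For reference, a skeletal ceph-ansible inventory using the group name the question mentions; the host names are placeholders:

    [mons]
    mon1

    [osds]
    osd1

    [iscsigws]
    gw1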
1 vote, 0 answers

How wide (data + parity) would you go for erasure coded pools / raid 6?

I am currently doing k=8 and m=2, or in other words data = 8 + parity = 2 shards, on an all-HDD Ceph cluster. I have thought about going data = 11 + parity = 2, which would save a lot of space. Theoretically, you can keep going up, but what…
Vish • 176
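
The raw-space overhead of a k+m profile is (k+m)/k, so k=8,m=2 costs 1.25x while k=11,m=2 costs about 1.18x; the price is that every I/O and every recovery touches more OSDs. A sketch of creating the wider profile (profile and pool names are placeholders):

    # Overhead: (k+m)/k
    #   k=8,  m=2 -> 10/8  = 1.25x
    #   k=11, m=2 -> 13/11 ~ 1.18x
    ceph osd erasure-code-profile set wide_hdd k=11 m=2 crush-failure-domain=host
    ceph osd pool create ecpool 128 128 erasure wide_hdd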