
I've spent quite some time now attempting to configure Puppet so that it will set up Cinder to use GlusterFS as the backend rather than the LVMISCSI backend, but I haven't had any luck.

Versions:

  • Puppet 3.7.3
  • Cinder 1.0.8
  • Gluster 3.4.2
  • Ubuntu 14.10 Server
  • puppetlabs-openstack 4.2.0

Configs:

In addition to the Puppet Openstack configs, which work just fine, I have the following for my storage node manifest:

class { 'cinder::backends':
  enabled_backends => ['glusterfs']
}
class { 'cinder::volume::glusterfs':
  glusterfs_shares => ['192.168.2.5:/cinder-volumes'],
  glusterfs_mount_point_base => '/var/lib/cinder/mnt'
}

resulting in a cinder.conf on my storage node that looks like this:

[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_confg = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
verbose = no
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
rabbit_host=192.168.2.1
use_syslog=False
api_paste_config=/etc/cinder/api-paste.ini
glance_num_retries=0
volume_driver=cinder.volume.drivers.glusterfs.GlusterfsDriver
enabled_backends=glusterfs
debug=no
glance_api_ssl_compression=False
glance_api_insecure=False
rabbit_userid=openstack
rabbit_use_ssl=False
log_dir=/var/log/cinder
glance_api_servers=192.168.1.5:9292
volume_backend_name=DEFAULT
rabbit_virtual_host=/
rabbit_hosts=192.168.2.1:5672
glusterfs_shares_config=/etc/cinder/shares.conf
control_exchange=openstack
rabbit_ha_queues=False
glance_api_version=2
amqp_durable_queues=False
rabbit_password=**redacted**
rabbit_port=5672
rpc_backend=cinder.openstack.common.rpc.impl_kombu
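
For reference, the glusterfs_shares_config option above points at /etc/cinder/shares.conf, which, as far as I understand, the Puppet module populates from the glusterfs_shares parameter, one Gluster volume per line. On my node it should therefore contain something like:

192.168.2.5:/cinder-volumes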

However, despite the Gluster configuration, I still get errors pertaining to LVMISCSI:

2015-01-02 10:22:49.488 1005 WARNING cinder.volume.manager [-] Unable to update stats, LVMISCSIDriver -2.0.0 (config name glusterfs) driver is uninitialized. (ad infinitum)

as well as a stacktrace:

WARNING cinder.context [-] Arguments dropped when creating context: {'user': u'203e3f206c5445beac797c1bcaecea8e', 'tenant': u'd588bf47f01349f39fd609440ca1d97a', 'user_identity': u'203e3f206c5445beac797c1bcaecea8e d588bf47f01349f39fd609440ca1d97a - - -'}
ERROR cinder.utils [req-6bcc2064-aa07-47ac-809e-cbb46c53b245 203e3f206c5445beac797c1bcaecea8e d588bf47f01349f39fd609440ca1d97a - - -] Volume driver LVMISCSIDriver not initialized
ERROR oslo.messaging.rpc.dispatcher [req-6bcc2064-aa07-47ac-809e-cbb46c53b245 203e3f206c5445beac797c1bcaecea8e d588bf47f01349f39fd609440ca1d97a - - -] Exception during message handling: Volume driver not ready.
TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 133, in _dispatch_and_reply
TRACE oslo.messaging.rpc.dispatcher     incoming.message))
TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 176, in _dispatch
TRACE oslo.messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/dispatcher.py", line 122, in _do_dispatch
TRACE oslo.messaging.rpc.dispatcher     result = getattr(endpoint, method)(ctxt, **new_args)
TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 144, in lvo_inner1
TRACE oslo.messaging.rpc.dispatcher     return lvo_inner2(inst, context, volume_id, **kwargs)
TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/cinder/openstack/common/lockutils.py", line 233, in inner
TRACE oslo.messaging.rpc.dispatcher     retval = f(*args, **kwargs)
TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 143, in lvo_inner2
TRACE oslo.messaging.rpc.dispatcher     return f(*_args, **_kwargs)
TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 416, in delete_volume
TRACE oslo.messaging.rpc.dispatcher     {'status': 'error_deleting'})
TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/cinder/openstack/common/excutils.py", line 68, in __exit__
TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 397, in delete_volume
TRACE oslo.messaging.rpc.dispatcher     utils.require_driver_initialized(self.driver)
TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.7/dist-packages/cinder/utils.py", line 761, in require_driver_initialized
TRACE oslo.messaging.rpc.dispatcher     raise exception.DriverNotInitialized()
TRACE oslo.messaging.rpc.dispatcher DriverNotInitialized: Volume driver not ready.
TRACE oslo.messaging.rpc.dispatcher 
ERROR oslo.messaging._drivers.common [req-6bcc2064-aa07-47ac-809e-cbb46c53b245 203e3f206c5445beac797c1bcaecea8e d588bf47f01349f39fd609440ca1d97a - - -] Returning exception Volume driver not ready. to caller

I don't want to clutter the question, but if you need any other information, let me know. Thanks in advance!

Edit

I changed my storage node manifest so that it contains:

class { 'cinder::backends':
  enabled_backends => ['gluster']
}
#  class { '::cinder::volume::glusterfs':    
#    glusterfs_shares => ['192.168.2.5:/cinder-volumes'],
#    glusterfs_mount_point_base => '/var/lib/cinder/mnt'
#  }
cinder::backend::glusterfs { 'gluster':
  glusterfs_shares           => ['192.168.2.5:/cinder-volumes'],
  glusterfs_mount_point_base => '/var/lib/cinder/mnt'
}
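
As I understand it, the title of the cinder::backend::glusterfs define ('gluster' here) becomes a section name in cinder.conf, which is why it has to match the entry in enabled_backends. Roughly, the define should render a per-backend section like this:

[gluster]
volume_backend_name=gluster
volume_driver=cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config=/etc/cinder/shares.conf
glusterfs_mount_point_base=/var/lib/cinder/mnt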

Now Cinder and Gluster actually work, though not perfectly. You'll notice that I removed the cinder::volume::glusterfs class and went directly to the backend define. This seems to have done the trick in getting Gluster and Cinder to play nicely, but I'm still getting the following in cinder-scheduler.log:

WARNING cinder.context [-] Arguments dropped when creating context: {'user': u'203e3f206c5445beac797c1bcaecea8e', 'tenant': u'd588bf47f01349f39fd609440ca1d97a', 'user_identity': u'203e3f206c5445beac797c1bcaecea8e d588bf47f01349f39fd609440ca1d97a - - -'}
WARNING cinder.scheduler.host_manager [req-15855071-6eaa-41a2-87a9-70be47767f28 203e3f206c5445beac797c1bcaecea8e d588bf47f01349f39fd609440ca1d97a - - -] volume service is down or disabled. (host: media)
WARNING cinder.scheduler.host_manager [req-15855071-6eaa-41a2-87a9-70be47767f28 203e3f206c5445beac797c1bcaecea8e d588bf47f01349f39fd609440ca1d97a - - -] volume service is down or disabled. (host: media@glusterfs)

1 Answer


In pursuit of a couple of other issues, I determined that this problem was caused by some of the leftover configuration bits in the cinder.conf above. Specifically, I removed all iSCSI references and moved the Gluster bits out into a separate section, resulting in:

[DEFAULT]
rabbit_host=192.168.2.1
use_syslog=False
api_paste_config=/etc/cinder/api-paste.ini
glance_num_retries=0
enabled_backends=gluster
debug=True
storage_availability_zone=nova
glance_api_ssl_compression=False
glance_api_insecure=False
rabbit_userid=openstack
rabbit_use_ssl=False
log_dir=/var/log/cinder
glance_api_servers=192.168.1.5:9292
rabbit_virtual_host=/
default_availability_zone=nova
rabbit_hosts=192.168.2.1:5672
verbose=True
control_exchange=openstack
rabbit_ha_queues=False
glance_api_version=2
amqp_durable_queues=False
rabbit_password=**redacted**
rabbit_port=5672
rpc_backend=cinder.openstack.common.rpc.impl_kombu

[database]
idle_timeout=3600
max_retries=10
retry_interval=10
min_pool_size=1
connection=mysql://cinder:**redacted**@192.168.2.1/cinder

[gluster]
volume_driver=cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config=/etc/cinder/shares.conf
volume_backend_name=gluster
glusterfs_mount_point_base=/var/lib/cinder/mnt

You'll also notice the [database] section, which may or may not have contributed to the fix. I'll update this answer if I find any additional specifics.
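
If you are managing this with the same Puppet modules, one way to keep those stale [DEFAULT] keys from creeping back in is to remove them explicitly with the module's cinder_config resource. A rough sketch, assuming your puppet-cinder version's cinder_config supports ensure => absent; the option list below is just what was left over in my case:

# Hypothetical cleanup: drop the leftover LVM/iSCSI options and the misplaced
# Gluster options from [DEFAULT], so the [gluster] backend section is the only
# driver configuration Cinder sees.
cinder_config {
  'DEFAULT/iscsi_helper':            ensure => absent;
  'DEFAULT/volume_group':            ensure => absent;
  'DEFAULT/volume_driver':           ensure => absent;
  'DEFAULT/volume_backend_name':     ensure => absent;
  'DEFAULT/glusterfs_shares_config': ensure => absent;
}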
