
I am trying to attach a volume to a server as an additional device on OpenStack (version: Wallaby), but it fails.

The volume backend is Ceph, and all of the OSDs are up and healthy:

ceph-osd/38*                   active    idle   0         172.16.6.64                        Unit is ready (1 OSD)
    ntp/149                      active    idle             172.16.6.64     123/udp            chrony: Ready
ceph-osd/39                    active    idle   1         172.16.6.65                        Unit is ready (1 OSD)
    ntp/147                      active    idle             172.16.6.65     123/udp            chrony: Ready
ceph-osd/40                    active    idle   2         172.16.6.66                        Unit is ready (1 OSD)
    ntp/146*                     active    idle             172.16.6.66     123/udp            chrony: Ready
ceph-osd/41                    active    idle   3         172.16.6.67                        Unit is ready (1 OSD)
    ntp/148                      active    idle             172.16.6.67     123/udp            chrony: Ready
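
For completeness, the health claim above is based on checks like the following, run against a Ceph monitor unit (ceph-mon/0 is just an example unit name, adjust to your deployment):

# Overall cluster health and OSD summary from a monitor unit
juju ssh ceph-mon/0 sudo ceph -s
juju ssh ceph-mon/0 sudo ceph osd stat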

The servers themselves are managed by Nova.

The volume attachment works on some of the servers: I can attach the created volumes to those Nova instances as /dev/vdb and /dev/vdc. However, on some other servers it fails. I checked /var/log/nova/nova-compute.log and found the message below:

 ERROR oslo_messaging.rpc.server libvirt.libvirtError: internal error: unable to execute QEMU command 'blockdev-add': error connecting: Invalid argument

Additional info: the volume can be attached while the server is in the shutoff state, but the server then cannot be powered up with the volume attached. In that case /var/log/nova/nova-compute.log shows the following ERROR message:

ERROR oslo_messaging.rpc.server libvirt.libvirtError: internal error: process exited while connecting to monitor: 2021-11-01T16:34:08.889402Z qemu-system-x86_64: -blockdev {"driver":"rbd","pool":"cinder-ceph","image":"volume-c41ce9db-e375-4b21-920f-e815035b51ed","server":[{"host":"172.16.6.104","port":"6789"},{"host":"172.16.6.106","port":"6789"},{"host":"172.16.6.105","port":"6789"}],"user":"cinder-ceph","auth-client-required":["cephx","none"],"key-secret":"libvirt-1-storage-secret0","node-name":"libvirt-1-storage","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"}: error connecting: Invalid argument

I am fairly sure the volume itself is fine, as I can attach the same volume to other Nova instances without any problem.
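
For what it's worth, this is roughly how I sanity-check RBD connectivity from a compute node with the same Ceph user QEMU uses (the keyring path and file name are assumptions based on a typical charmed deployment, so adjust them as needed):

# On the affected nova-compute node: list the pool and inspect the volume
# image with the cinder-ceph client (keyring path/name are assumptions)
sudo rbd --id cinder-ceph --keyring /etc/ceph/ceph.client.cinder-ceph.keyring -p cinder-ceph ls
sudo rbd --id cinder-ceph --keyring /etc/ceph/ceph.client.cinder-ceph.keyring info cinder-ceph/volume-c41ce9db-e375-4b21-920f-e815035b51ed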

I have tried the measures below but still have no luck:

(1) I recreated the Nova instance, so it has a different instance ID, but the attachment still fails.

(2) I checked the virsh secret and configuration that connect Nova to Ceph, in /etc/nova/nova.conf and /etc/libvirt/secrets, using virsh secret-list. They are the same as on the compute nodes where the volume attachment succeeds (see the commands sketched just after this list).
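
The comparison in (2) was roughly the following, run on both a working and a failing compute node (standard paths for a charmed Wallaby deployment, but treat them as assumptions):

# Secret UUID that Nova hands to libvirt vs. the secrets libvirt actually knows
sudo grep rbd_secret_uuid /etc/nova/nova.conf
sudo virsh secret-list
sudo ls /etc/libvirt/secrets/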

All of the OpenStack services are up and running without errors. Could anyone give me a clue about the ERROR message?

  • If you can attach the volume to a different instance then it's not the volume. What are the differences between the instances? Compare the xml definitions of the instances. Does cinder-volume.log reveal anything? – eblock Nov 04 '21 at 08:27
  • hi @eblock, thanks for the suggestion. I finally found out that only the cinder-ceph virsh secret key was installed. I searched through /etc/nova/nova.conf to get the rbd_secret_uuid, then created the XML and base64 value for virsh secret-set-value using the secret files I had backed up earlier. After installing the nova-ceph key, I can mount the volume successfully. (put here as a record: virsh secret-set-value --secret --base64 ; the rough sequence is sketched below) – ony4869 Nov 05 '21 at 03:45
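
For the record, the fix described in the last comment boils down to re-installing the missing libvirt secret on the affected compute node. A minimal sketch, assuming the UUID comes from rbd_secret_uuid in /etc/nova/nova.conf and the base64 key comes from a backed-up Ceph keyring (file names and the key value below are placeholders):

# UUID that nova.conf tells libvirt to use (option lives in the [libvirt] section)
SECRET_UUID=$(sudo awk -F' *= *' '/^rbd_secret_uuid/ {print $2}' /etc/nova/nova.conf)

# Define the secret for libvirt; nova-ceph-secret.xml is a placeholder file name
cat > nova-ceph-secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>${SECRET_UUID}</uuid>
  <usage type='ceph'>
    <name>client.nova-compute secret</name>
  </usage>
</secret>
EOF
sudo virsh secret-define --file nova-ceph-secret.xml

# Set the value to the base64 key taken from the backed-up keyring (placeholder value)
sudo virsh secret-set-value --secret "${SECRET_UUID}" --base64 "AQBxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx=="

Nova references the secret by UUID, so the name in the usage block is mainly descriptive; what matters is that the UUID matches rbd_secret_uuid.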

0 Answers