1

We're having some trouble deploying Kilo on a system with 3 controllers and 3 computes, through Mirantis Fuel 7.0.

The problems involve creating and attaching volumes, especially the ones stored on a NetApp SAN. As a result, I had to delete some stuck volumes and instances by going into the cinder and nova databases and deleting rows from the instances, volumes, volume_admin_metadata, volume_attachment and volume_glance_metadata tables.
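Roughly, the manual cleanup looked like this (a sketch with placeholder IDs, run on a controller with MySQL root access; only some of the tables I touched are shown):

# rough sketch of the manual cleanup described above; <volume-uuid> and
# <instance-uuid> are placeholders, and this kind of direct deletion is
# exactly what leaves the usage counters out of sync
mysql cinder -e "DELETE FROM volume_attachment WHERE volume_id = '<volume-uuid>';"
mysql cinder -e "DELETE FROM volumes WHERE id = '<volume-uuid>';"
mysql nova -e "DELETE FROM instances WHERE uuid = '<instance-uuid>';"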

The problem is, the volume count on the project's "Overview" page still includes those deleted volumes and instances, so I'd like to know which part of the database that information is read from and how to correct/synchronize it.

I'd also like to know how to remove the physical LVM volumes corresponding to those deleted volumes, since they still show up when I run "lsblk" on the controller that was storing them.

Thanks

  • 1
    when you deployed the stack did you install the fuel plugin for NetApp? - what is the output of "cinder volume list" and why is there an LVM driver and a NetApp driver? – Sum1sAdmin Apr 28 '16 at 11:25
  • Yes, we installed and checked the NetApp plugin, and it can create volumes both in cinder (LVM) and NetApp. We use cinder's multi-backend feature so we can create volumes on cinder or NetApp depending on the volume type users select. – animaletdesequia Apr 28 '16 at 12:34

1 Answer

1

I think you are using a multi-backend cinder that can create volumes using both the NetApp and LVM drivers. Sometimes volumes can become stuck in any kind of status (create, extend, snapshot, delete, etc.), and since you can't delete a volume that is stuck in a transitional status, there is already a CLI and Horizon tool for resetting the status of stuck volumes:

cinder reset-state --state available uuid

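For example, the full sequence looks roughly like this (a sketch; force-delete is an admin-only fallback if the reset alone doesn't help):

# find the UUID of the stuck volume
cinder list
# force its status back to available so it can be deleted normally
cinder reset-state --state available <volume-uuid>
cinder delete <volume-uuid>
# if it still refuses, force-delete is an admin-only fallback
cinder force-delete <volume-uuid>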

As for where the LVM volumes are: they will be on the server on which you installed the cinder role. From the Fuel server:

fuel role list

and then ssh onto the cinder node and look at lvm -v
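On that node, something like this shows what the LVM backend has created (a sketch; cinder-volumes is the default volume group name, adjust if your deployment uses a different one):

# volume groups and logical volumes on the cinder node
vgs
lvs cinder-volumes
# cinder's LVM-backed volumes show up as volume-<uuid>
lvscan | grep cinder-volumes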

If you don't intend to use the LVM driver (it's a reference driver so you can see how storage-as-a-service works), then make sure to remove the reference to the LVM driver in your cinder.conf.
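As a sketch, a multi-backend cinder.conf trimmed down to the NetApp backend only might look like this (the section and backend names here are examples, not necessarily what Fuel generated):

[DEFAULT]
# only the NetApp backend stays enabled; the old LVM section is removed entirely
enabled_backends = netapp_backend

[netapp_backend]
volume_backend_name = netapp
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_storage_family = ontap_cluster
netapp_storage_protocol = iscsi
netapp_server_hostname = <netapp-management-ip>
netapp_login = <user>
netapp_password = <password>

Restart cinder-volume after editing, and delete any remaining LVM-backed volumes first so nothing is left orphaned.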

You shouldn't have to go into the database to remove infrastructure, but it is sometimes necessary.
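If you do end up in the database, marking the row as deleted is usually safer than removing it outright; roughly (a sketch against the Kilo schema, with a placeholder UUID):

# soft-delete a stuck volume instead of removing its row
mysql cinder -e "UPDATE volumes SET deleted = 1, status = 'deleted', attach_status = 'detached', deleted_at = NOW() WHERE id = '<volume-uuid>';"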

Sum1sAdmin
  • I used the reset-state feature; the problem is that since it didn't work with some volumes due to misconfiguration, I was forced to delete them from the cinder database itself, so now I'm not able to see them or find their UUIDs. I assume that at some point in the database those volumes are still considered "in use". – animaletdesequia Apr 28 '16 at 12:36
  • 1
    do you know if they are NetApp or LVM? If cinder created the LVs, it locks them and they can't be manually removed while cinder is running. Have a look at lvscan and in /var/lib/cinder/volumes; when cinder is stopped you might be able to lvremove. – Sum1sAdmin Apr 28 '16 at 13:11
  • The volumes being created are all LVM (we're having problems connecting to the NFS shares, but that's another matter). I think I have sorted out the "usage" part by manually setting the correct values in the cinder database, in the quota_usages table. The same goes for nova. Now I can create/delete volumes and it shows the right count. The only thing I need now is how to remove the LVM volumes from the system itself. – animaletdesequia Apr 28 '16 at 13:42
  • 1
    ah, tgt, the virtual iSCSI target: if I remember correctly it locks the volumes, so stop tgt and stop cinder-volume and then lvremove /dev/cinder-volumes/uuid (see the sketch after these comments). – Sum1sAdmin Apr 28 '16 at 13:58
  • It worked! I think you just saved my job hehe. Now I need to find out the NFS/NetApp thing, but I'm being told it's a network problem so the data guys would have to eat that part... – animaletdesequia Apr 28 '16 at 15:15
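For reference, the sequence that resolved this (per the comments above) looks roughly like the following sketch; the cinder-volumes volume group name and the service names are assumptions based on a stock Fuel 7.0 node:

# check (and, if needed, correct) what cinder thinks is in use for the project
mysql cinder -e "SELECT project_id, resource, in_use FROM quota_usages;"
mysql cinder -e "UPDATE quota_usages SET in_use = 0 WHERE project_id = '<project-id>' AND resource = 'volumes';"

# tgt holds the iSCSI targets open, so stop it and cinder-volume before removing the LVs
service tgt stop
service cinder-volume stop
lvremove /dev/cinder-volumes/volume-<uuid>
service cinder-volume start
service tgt start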