23

When I try to remove a logical volume I get the message

#lvremove /dev/my-volumes/volume-1 
Can't remove open logical volume "volume-1"

#lvchange -an -v /dev/my-volumes/volume-1 
Using logical volume(s) on command line
/dev/dm-1: read failed after 0 of 4096 at 0: Input/output error
Deactivating logical volume "volume-1"
Found volume group "my-volumes"
LV my-volumes/volume-1 in use: not deactivating

#lvremove -vf /dev/my-volumes/volume-1 
Using logical volume(s) on command line
/dev/dm-1: read failed after 0 of 4096 at 0: Input/output error
Can't remove open logical volume "volume-1"

#lvs
/dev/dm-1: read failed after 0 of 4096 at 0: Input/output error
LV              VG           Attr   LSize   Origin Snap%  Move Log Copy%  Convert
volume-1        my-volumes   -wi-ao  50.00g  

How can I force the removal of this volume?

Thanks, Everett

Everett Toews

13 Answers

15

What does the logical volume contain? Is it a filesystem (I originally asked about a partition)? Could it be that it's mounted? In that case:

umount /dev/my-volumes/volume-1

Does it have any active snapshots?
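
A quick way to check both, as a minimal sketch (paths follow the question; adjust to your setup):

grep volume-1 /proc/mounts   # is it still mounted anywhere?
lvs my-volumes               # snapshots list volume-1 in their Origin column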

Edit: try lvchange -an -v /dev/my-volumes/volume-1 and lvremove -vf /dev/my-volumes/volume-1.

Edit 2: please post the output of lvs.

Edit 3: Try this with some other problematic volume. It's not the cleanest option but according to this site it may work, and it's less problematic than rebooting anyway.

dmsetup remove my--volumes-volume--number
lvremove /dev/my-volumes/volume-number
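
Note that device-mapper doubles every hyphen inside VG and LV names; if you are unsure of the exact name to pass, a quick way to find it (a sketch, names assumed from the question):

dmsetup ls | grep volume                 # shows names such as my--volumes-volume--1
dmsetup info -c my--volumes-volume--1    # the Open column shows whether it is still held
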
Eduardo Ivanec
  • It doesn't contain anything. It's not a partition. It's not mounted (any longer). No active snapshots. – Everett Toews May 05 '11 at 17:28
  • Well, what did you use it for? It may give us a clue as to what may be wrong. – Eduardo Ivanec May 05 '11 at 17:42
  • Added the info you requested to the question. It was being used as a volume for OpenStack Compute (aka Nova). I actually managed to remove it by going nuclear and rebooting the machine and then doing an lvremove. Way more drastic than I wanted to be. I still have some other volumes hanging around that I would like to get rid of without having to reboot so any help you can provide is appreciated. – Everett Toews May 05 '11 at 20:13
  • I've added something for you to try with some other problematic volume. – Eduardo Ivanec May 05 '11 at 20:48
  • Tried it. No luck. I also tried everything from http://wiki.davidjb.com/blog:unix:removing-open-logical-volumes-in-centos-rhl (hence the "/dev/dm-1: read failed..." in some of my output) but that didn't work either. – Everett Toews May 05 '11 at 21:09
  • What about running `udevadm control --stop-exec-queue` before trying the `lvremove`? https://bugzilla.redhat.com/show_bug.cgi?id=577798 – Eduardo Ivanec May 05 '11 at 21:21
13

If you are unable to unmount or lvremove a logical volume, verify that there are no processes holding the LV open:

Locate the major/minor numbers for the logical volume you're trying to remove, e.g. vol0:

# dmsetup info -c | grep vol0

Take note of the 5th column, which indicates if a volume is “open,” and the 2nd and 3rd columns, which are the major and minor IDs, respectively.

Find any process attached to this volume by searching on the major and minor IDs discovered above:

# lsof | grep "major,minor"

Shut down or kill any process still accessing the volume to continue unmounting and removal.

Then try lvremove again.
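
Put together, a worked sketch with a hypothetical VG/LV named vg0/vol0 on device 253:4 (your numbers will differ):

dmsetup info -c | grep vol0   # note the Maj, Min and Open columns
lsof | grep "253,4"           # processes holding the device open
fuser -k /dev/vg0/vol0        # or kill the holders directly (use with care)
lvremove /dev/vg0/vol0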

pruthvi
  • In my case no help. lsof, fuser do not show any usage. dmsetup says device is busy. device is unmounted (can be mounted at any time). only a hardware reboot helps :/ – John Mar 06 '19 at 16:01
6

I got into a similar situation: removal of the LV was blocked because I was using mount -o bind.

The article below helped a lot: using lsof with the major/minor numbers of the LV showed the process holding the LV open, in my case smbd.

Then a simple cat /proc/mounts | grep LV_name showed me why lvremove and dmsetup remove refused to get rid of the supposedly unmounted LV.
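
As a quick sketch of that check (the bind-mount target is hypothetical):

grep volume-1 /proc/mounts      # a bind mount will still show up here
umount /srv/samba/share         # unmount the bind target, then retry lvremove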

http://kb.eclipseinc.com/kb/why-cant-i-remove-a-linux-logical-volume/

janfai
5

You probably have iet or tgt running (which one depends on the iscsi_helper value in /etc/nova/nova.conf; it defaults to iet), and the service has an open file handle. You can check which one by doing something like this (in my case it's tgt):

# fuser /dev/nova-volumes/volume-00000001
/dev/dm-5:           19155

# lsof | grep /dev/dm-5
tgtd      19155            root   12u      BLK              252,5         0t0    2531554 /dev/dm-5

If it's iet, stop the service by doing:

service iscsitarget stop

If it's tgt, stop the service by doing:

service tgt stop

You should then be able to delete your volumes.
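
To confirm which helper is configured before stopping anything, a quick check (assuming the nova.conf location mentioned above):

grep iscsi_helper /etc/nova/nova.conf   # the value tells you whether iet or tgt is in use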

Lorin Hochstein
2

Shut down the LXC containers that use the filesystem via their config (lxc.mount.entry).
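
As a minimal sketch, assuming the usual /var/lib/lxc layout and a hypothetical container name:

grep -r lxc.mount.entry /var/lib/lxc/*/config   # find containers that mount the LV
lxc-stop -n mycontainer                         # stop the container holding it
lvremove /dev/my-volumes/volume-1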

Tonny
1

If you are unable to remove the LV, follow these steps:

  1. Unmount the filesystem:

    # umount /dev/sda8

    (e.g. my filesystem was on /dev/sda8)

  2. Try to remove the LV like so:

    # lvremove /dev/vgname/lvname

If you get an error like "Can't remove open logical volume", deactivate the LV with the commands below and then remove it:

 # lvchange -an  /dev/vgname/lvname

 # lvremove /dev/vgname/lvname

Let us know if you face any issue.

Pierre.Vriens
1

I had this problem with an LV built from 3 PVs on an iSCSI device (with multipathing).

Nothing worked except a simple reboot! (Comment the LV out in fstab first, so it doesn't get mounted again.)

Maybe that helps someone.

davidak
1

In my case I was running cAdvisor in a container, and this seems to prevent the removal of any block devices which were mounted when it started. My fix was:

  1. Unmount the LVM volume
  2. Restart the cAdvisor container (docker restart $CONTAINER_ID)
  3. Attempt removal again
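
A sketch of those steps, assuming the container was started with the name cadvisor:

umount /dev/my-volumes/volume-1
docker restart $(docker ps -q --filter name=cadvisor)
lvremove /dev/my-volumes/volume-1
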
RobM
0

I've had this same problem and none of the solutions here worked. It might indeed be NFS holding up your LV; in that case fuser and lsof won't show any open files on your mount point. Just check whether your NFS server is running with

# systemctl | grep -i nfs

or

# ps -aef | grep -i nfs

and check whether the NFS service is exporting the mount point that was backed by your device with

# exportfs -v

If so, remove it or comment it out in /etc/exports.
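
Alternatively, exportfs can unexport it on the fly without editing the file; a sketch with a hypothetical export path:

exportfs -u '*:/mnt/volume-1'   # unexport the share backed by the LV
exportfs -v                     # confirm it is gone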

Restart or stop the NFS server with

# systemctl restart nfs
# systemctl stop nfs

or

# service nfs restart
# service nfs stop

You'll be able to issue your lvremove command after this.

0

This might also be locked by the nfslock service on RHEL; just stop that service and you are good to go.
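
A sketch; the service name varies by RHEL release:

service nfslock stop        # RHEL 6 and earlier
systemctl stop nfs-lock     # RHEL 7
lvremove /dev/my-volumes/volume-1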

0

I had a similar problem. The LV I was trying to remove was a VM block device that itself held a volume group. That volume group was filtered out in lvm.conf, but some device-mapper entries had been created for it earlier.

To figure out what is holding a device, look at its minor number (253:??); ll /dev/<vg>/<lv> should point to ../dm-??

Then ls -la /sys/dev/block/253:??/holders will show links to the devices (e.g. -> ../../dm-xx) relying on your device (as a PV).

Remove them with dmsetup remove /dev/dm-xx (make sure those dm devices are not in use). Then you should be able to remove /dev/<vg>/<lv>, which is no longer a PV for anything.
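
A worked sketch with hypothetical numbers (minor 5 for the LV, dm-12 as the holder):

ls -l /dev/<vg>/<lv>                    # -> ../dm-5, so the device is 253:5
ls -la /sys/dev/block/253:5/holders     # -> ../../dm-12 is holding it
dmsetup remove /dev/dm-12               # drop the stale mapping (make sure it is unused)
lvremove /dev/<vg>/<lv>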

  • Im My case this is `dmsetup remove /dev/dm-36` but i get the error `device-mapper: remove ioctl on vg0-snap--tmp--vm06.docker--disk failed: – Device or resource busy – Command failed` see: https://serverfault.com/questions/926681/kill-a-process-inside-an-lvm-snapshot – rubo77 Aug 16 '18 at 12:20
0

You may unlink your LV from the DM device:

fuser -kuc /dev/my-sample-volumes/volume-sample-1

/dev/dm-21: 2400ce(root) 2739ce(root) 4793ce(root)

ls -l /dev/my-sample-volumes/volume-sample-1

lrwxrwxrwx 1 root root 8 Aug 15 02:53 /dev/my-sample-volumes/volume-sample-1 -> ../dm-21

unlink /dev/my-sample-volumes/volume-sample-1

lvremove /dev/my-sample-volumes/volume-sample-1

  • Hi and welcome. I don't think `lvremove` is going to work after `unlink`, could you clarify that part? Just edit your answer. – kubanczyk Sep 07 '19 at 21:56
  • It should work. I've tested it. It just removes the symlink; the LV itself still exists and you can remove it with lvremove. – dayzero Sep 09 '19 at 02:54
-1

I had the same problem. I tried the following command and it resolved the issue: swapoff -a

lvremove ...
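
If the LV itself is the swap device, a more targeted variant (as the comment below suggests) would be:

swapoff /dev/my-volumes/volume-1   # disable only the swap on this LV
lvremove /dev/my-volumes/volume-1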

zhaorufei
  • that could lead someone to blindly try your command. Better to just point out that if the LV is used for swap, you should swapoff that LV first, and only that one, not just -a – sgohl Nov 13 '17 at 14:47