We were doing a storage migration in the office using pvmove. Then this happened:

Mar  8 12:26:51 v1 kernel: [ 5798.100321] BUG: kernel NULL pointer dereference, address: 0000000000000140
Mar  8 12:26:51 v1 kernel: [ 5798.101099] #PF: supervisor read access in kernel mode
Mar  8 12:26:51 v1 kernel: [ 5798.101716] #PF: error_code(0x0000) - not-present page
Mar  8 12:26:51 v1 kernel: [ 5798.102310] PGD 0 P4D 0 
Mar  8 12:26:51 v1 kernel: [ 5798.102904] Oops: 0000 [#1] SMP NOPTI
Mar  8 12:26:51 v1 kernel: [ 5798.103465] CPU: 48 PID: 1190 Comm: kworker/48:1H Not tainted 5.5.8-050508-generic #202003051633
Mar  8 12:26:51 v1 kernel: [ 5798.104071] Hardware name: ASUSTeK COMPUTER INC. RS700A-E9-RS12/KNPP-D32 Series, BIOS 1301 06/17/2019
Mar  8 12:26:51 v1 kernel: [ 5798.104693] Workqueue: kblockd blk_mq_run_work_fn
Mar  8 12:26:51 v1 kernel: [ 5798.105315] RIP: 0010:blk_mq_get_driver_tag+0x61/0x100
Mar  8 12:26:51 v1 kernel: [ 5798.105931] Code: 00 00 48 89 45 c0 8b 47 18 48 8b 7f 10 48 c7 45 d8 00 00 00 00 89 45 d0 b8 01 00 00 00 c7 45 c8 01 00 00 00 48 89 7d e0 75 50 <48> 8b 87 40 01 00 00 8b 40 04 39 43 24 73 07 c7 45 c8 03 00 00 00
Mar  8 12:26:51 v1 kernel: [ 5798.106653] RSP: 0018:ffffa92b9c59bcc0 EFLAGS: 00010246
Mar  8 12:26:51 v1 kernel: [ 5798.107371] RAX: 0000000000000001 RBX: ffff8d9b04805a00 RCX: ffffa92b9c59bda0
Mar  8 12:26:51 v1 kernel: [ 5798.108146] RDX: 0000000000000001 RSI: ffffa92b9c59bda0 RDI: 0000000000000000
Mar  8 12:26:51 v1 kernel: [ 5798.108881] RBP: ffffa92b9c59bd00 R08: 0000000000000000 R09: ffff8d9b04805ee8
Mar  8 12:26:51 v1 kernel: [ 5798.109613] R10: 0000000000000000 R11: 0000000000000800 R12: ffff8d9b04805a00
Mar  8 12:26:51 v1 kernel: [ 5798.110397] R13: ffffa92b9c59bda0 R14: ffff8d9b04805a48 R15: 0000000000000000
Mar  8 12:26:51 v1 kernel: [ 5798.111167] FS:  0000000000000000(0000) GS:ffff8d9b1ef80000(0000) knlGS:0000000000000000
Mar  8 12:26:51 v1 kernel: [ 5798.111938] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Mar  8 12:26:51 v1 kernel: [ 5798.112693] CR2: 0000000000000140 CR3: 000000a7875fc000 CR4: 00000000003406e0
Mar  8 12:26:51 v1 kernel: [ 5798.113553] Call Trace:
Mar  8 12:26:51 v1 kernel: [ 5798.114351]  blk_mq_dispatch_rq_list+0xf9/0x550
Mar  8 12:26:51 v1 kernel: [ 5798.115121]  ? deadline_remove_request+0x4e/0xb0
Mar  8 12:26:51 v1 kernel: [ 5798.115862]  ? dd_dispatch_request+0x63/0x1f0
Mar  8 12:26:51 v1 kernel: [ 5798.116637]  blk_mq_do_dispatch_sched+0x67/0x100
Mar  8 12:26:51 v1 kernel: [ 5798.117404]  blk_mq_sched_dispatch_requests+0x12d/0x180
Mar  8 12:26:51 v1 kernel: [ 5798.118178]  __blk_mq_run_hw_queue+0x5a/0x110
Mar  8 12:26:51 v1 kernel: [ 5798.118944]  blk_mq_run_work_fn+0x1b/0x20
Mar  8 12:26:51 v1 kernel: [ 5798.119741]  process_one_work+0x1eb/0x3b0
Mar  8 12:26:51 v1 kernel: [ 5798.120534]  worker_thread+0x4d/0x400
Mar  8 12:26:51 v1 kernel: [ 5798.121369]  kthread+0x104/0x140
Mar  8 12:26:51 v1 kernel: [ 5798.122156]  ? process_one_work+0x3b0/0x3b0
Mar  8 12:26:51 v1 kernel: [ 5798.122960]  ? kthread_park+0x90/0x90
Mar  8 12:26:51 v1 kernel: [ 5798.123741]  ret_from_fork+0x22/0x40
Mar  8 12:26:51 v1 kernel: [ 5798.124524] Modules linked in: act_police cls_u32 sch_ingress sch_sfq sch_htb nls_utf8 isofs uas usb_storage xt_socket nf_socket_ipv4 nf_socket_ipv6 nf_defrag_ipv6 nf_defrag_ipv4 xt_mark iptable_mangle ebt_ip6 ebt_arp ebt_ip ebtable_broute ebtable_nat ebtable_filter ebtables ip6table_filter ip6_tables iptable_filter ip_tables x_tables bpfilter binfmt_misc dm_mirror dm_region_hash dm_log dm_thin_pool dm_persistent_data dm_bio_prison dm_bufio input_leds ipmi_ssif amd64_edac_mod edac_mce_amd i2c_piix4 k10temp ipmi_si ipmi_devintf ipmi_msghandler mac_hid kvm_amd ccp kvm ib_iser rdma_cm iw_cm ib_cm ib_core iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi vhost_net vhost tap bonding lp parport br_netfilter bridge stp llc autofs4 btrfs blake2b_generic zstd_compress raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx xor raid6_pq multipath linear ast drm_vram_helper drm_ttm_helper ttm raid1 hid_generic raid0 drm_kms_helper usbhid crct10dif_pclmul syscopyarea crc32_pclmul
Mar  8 12:26:51 v1 kernel: [ 5798.124577]  bnx2x sysfillrect ghash_clmulni_intel sysimgblt fb_sys_fops aesni_intel crypto_simd mdio cryptd igb ahci hid glue_helper nvme libcrc32c drm dca libahci nvme_core i2c_algo_bit
Mar  8 12:26:51 v1 kernel: [ 5798.130380] CR2: 0000000000000140
Mar  8 12:26:51 v1 kernel: [ 5798.131449] ---[ end trace 2451c5dc4d61723b ]---
Mar  8 12:26:51 v1 kernel: [ 5798.246646] RIP: 0010:blk_mq_get_driver_tag+0x61/0x100
Mar  8 12:26:51 v1 kernel: [ 5798.248626] Code: 00 00 48 89 45 c0 8b 47 18 48 8b 7f 10 48 c7 45 d8 00 00 00 00 89 45 d0 b8 01 00 00 00 c7 45 c8 01 00 00 00 48 89 7d e0 75 50 <48> 8b 87 40 01 00 00 8b 40 04 39 43 24 73 07 c7 45 c8 03 00 00 00
Mar  8 12:26:51 v1 kernel: [ 5798.250301] RSP: 0018:ffffa92b9c59bcc0 EFLAGS: 00010246
Mar  8 12:26:51 v1 kernel: [ 5798.251725] RAX: 0000000000000001 RBX: ffff8d9b04805a00 RCX: ffffa92b9c59bda0
Mar  8 12:26:51 v1 kernel: [ 5798.253111] RDX: 0000000000000001 RSI: ffffa92b9c59bda0 RDI: 0000000000000000
Mar  8 12:26:51 v1 kernel: [ 5798.254411] RBP: ffffa92b9c59bd00 R08: 0000000000000000 R09: ffff8d9b04805ee8
Mar  8 12:26:51 v1 kernel: [ 5798.255695] R10: 0000000000000000 R11: 0000000000000800 R12: ffff8d9b04805a00
Mar  8 12:26:51 v1 kernel: [ 5798.256925] R13: ffffa92b9c59bda0 R14: ffff8d9b04805a48 R15: 0000000000000000
Mar  8 12:26:51 v1 kernel: [ 5798.258145] FS:  0000000000000000(0000) GS:ffff8d9b1ef80000(0000) knlGS:0000000000000000
Mar  8 12:26:51 v1 kernel: [ 5798.259333] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Mar  8 12:26:51 v1 kernel: [ 5798.260587] CR2: 0000000000000140 CR3: 000000a7875fc000 CR4: 00000000003406e0

Currently, neither aborting nor continuing works:

# pvmove --abort
    Failed to copy one or more poll_operation_id members.

# pvmove --atomic /dev/nvme3n1 /dev/md127
 Detected pvmove in progress for /dev/nvme3n1
 Ignoring remaining command line arguments
 ABORTING: Mirror percentage check failed.
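
For reference, a commonly suggested last resort for a wedged pvmove is rolling the VG metadata back to the archive LVM wrote before the move started. A rough sketch (the archive file name is illustrative; with an active thin pool vgcfgrestore refuses without --force, and we were not confident forcing it on a live system was safe):

vgcfgrestore --list vgnvme0        # list the metadata archives LVM kept
# CAUTION: with an active thin pool this refuses without --force,
# and forcing it on a live system may not be safe
vgcfgrestore -f /etc/lvm/archive/vgnvme0_00042-1234567890.vg vgnvme0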

Trying to move forward, we successfully removed the "pvmove0_mimage_0" LV, but it still doesn't work:

  LV                                        VG      Attr       LSize   Pool  Origin Data%  Meta%        Move Log Cpy%Sync Convert Devices          
    Intel                                     vgnvme0 twi-aotz--   3.64t              90.90  12.86                                  Intel_tdata(0)   
    [Intel_tdata]                             vgnvme0 TwI-ao----   3.64t                                                            /dev/md127(78)   
    [Intel_tdata]                             vgnvme0 TwI-ao----   3.64t                                                            pvmove0(0)       
    [Intel_tmeta]                             vgnvme0 ewI-ao---- 900.00m                                                            /dev/md127(0)    
    [Intel_tmeta]                             vgnvme0 ewI-ao---- 900.00m                                                            pvmove0(0)       
    [Intel_tmeta]                             vgnvme0 ewI-ao---- 900.00m                                                            pvmove0(0)       
    [lvol0_pmspare]                           vgnvme0 ewI-a----- 324.00m                                                            pvmove0(0)       
    [pvmove0]                                 vgnvme0 p-C-aom---   1.82t                                                            /dev/nvme3n1(0)  
    [pvmove0]                                 vgnvme0 p-C-aom---   1.82t                                                            /dev/nvme3n1(84) 
    [pvmove0]                                 vgnvme0 p-C-aom---   1.82t                                                            /dev/nvme3n1(228)
    [pvmove0]                                 vgnvme0 p-C-aom---   1.82t                                                            /dev/nvme3n1(3) 

The system is up and we see some activity on vgnvme0-pvmove0 (most likely because it is a mirror), but how do we abort a pvmove in a situation like this? This seems to be completely undocumented. Restoring from backups is not an option, because new data was already written during the three hours the migration ran.
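
The activity on the stuck mirror can at least be inspected through device-mapper; the commands below are illustrative (standard dmsetup tooling, not output from our system):

dmsetup status vgnvme0-pvmove0   # sync progress of the stuck pvmove mirror
dmsetup table vgnvme0-pvmove0    # which underlying devices the mirror maps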

The current suggestion for recovery is to create a new thin pool and migrate onto it volume by volume: stop each running VM, copy its volume, update the software DB to accommodate the new location, and restart the VM. After the whole migration succeeds, the old thin pool gets removed (a sketch of the setup step follows below). I can't find a way to mirror an individual thin LV, though. Is that possible? If we could mirror individual LVs, we could migrate all the thins without much trouble. Our LVs are one per VM: vm1, vm2, vm3 and so on.
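
A hedged sketch of that setup step (newpool, vm1_new and the sizes are placeholders, not our real names):

# New thin pool on the already-migrated PV
lvcreate --type thin-pool -L 1.8T -n newpool vgnvme0 /dev/md127
# One target thin volume per VM, sized like the original
lvcreate -V 100G --thinpool vgnvme0/newpool -n vm1_new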

Ajay

1 Answer

What we ended up doing: for safety we stopped each VM with managedsave, moved its volume to a new thin pool using dd with sparse copying, and then removed the old thin pool from the LVM configuration.
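
Per VM the move was along these lines (a hedged sketch; vm1 and newpool are placeholder names, and the domain definition still has to be repointed at the new volume before starting it again):

virsh managedsave vm1                        # save the guest's state and stop it
lvcreate -V 100G --thinpool vgnvme0/newpool -n vm1_new
dd if=/dev/vgnvme0/vm1 of=/dev/vgnvme0/vm1_new \
   bs=1M conv=sparse status=progress         # sparse copy keeps the target thin
# repoint the domain at vm1_new (virsh edit / management DB), then:
virsh start vm1                              # resumes from the managed save image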

The main issue is that pvmove ended up in a kind of "split brain": the system booted from the moved data, but because of --atomic that data still resided on the previous disk as well. In that state there is no easy way to continue, because pvmove apparently has a bug that blocks further copying, presumably as a safeguard against overwriting the already-copied data (which, as mentioned above, came into use for some reason after the reboot).

Ajay