
I had the idea of building a Raspberry Pi file server using multiple external disk drives in a RAID array. For starters I only had a single 1 TB disk, so I set up a RAID 1 array on that one disk with the intent of adding more drives later.
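
Judging by the `mdadm --detail` output below, what I ended up with is a one-device RAID 1, so I assume I created it with something along these lines (reconstructed, I no longer have the exact command; `--force` is needed because mdadm otherwise refuses a single-device mirror):

$ sudo mdadm --create --verbose /dev/md0 --level=1 --raid-devices=1 --force /dev/sda1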

In the end this turned out to be a bad idea, and not a great fit for my use case, so now I want to convert the disk back into a "normal" one. I followed https://superuser.com/questions/971549/how-to-convert-a-software-raid-1-partition-to-non-raid-partition thinking it was exactly what I wanted. I guess it wasn't, because now my disk seems to be unmountable.

What are my options now? Is there a way to recover my data?

I'll paste here the output of the various commands I ran, up to and including zeroing the superblock.

$ sudo cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sda1[0]
      976595904 blocks super 1.2 [1/1] [U]
      bitmap: 0/8 pages [0KB], 65536KB chunk

unused devices: <none>
$ sudo mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Sat Sep  5 14:02:15 2020
        Raid Level : raid1
        Array Size : 976595904 (931.35 GiB 1000.03 GB)
     Used Dev Size : 976595904 (931.35 GiB 1000.03 GB)
      Raid Devices : 1
     Total Devices : 1
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Mon Apr 19 11:46:30 2021
             State : clean
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : raspberrypi:0  (local to host raspberrypi)
              UUID : a1cd3f87:6165ec4a:d68c7589:708a0fe3
            Events : 74

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
$ sudo fdisk -l
...
Disk /dev/sda: 931.48 GiB, 1000170586112 bytes, 1953458176 sectors
Disk model: Elements 2621
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: C70BB0D7-644F-4911-B379-A3729BB61C3D

Device     Start        End    Sectors   Size Type
/dev/sda1   2048 1953458142 1953456095 931.5G Linux filesystem


Disk /dev/md0: 931.35 GiB, 1000034205696 bytes, 1953191808 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
$ sudo mdadm --stop /dev/md0
mdadm: stopped /dev/md0

$ sudo mdadm --zero-superblock /dev/sda1

$ sudo mount /dev/sda1 /mnt/ext
mount: /mnt/ext: wrong fs type, bad option, bad superblock on /dev/sda1, missing codepage or helper program, or other error.
Darwin
  • I'm pretty sure I followed this guide when setting the array up: https://magpi.raspberrypi.org/articles/build-a-raspberry-pi-nas. Could running `sudo mdadm --create --verbose /dev/md0 --level=mirror --raid-devices=2 /dev/sda1 missing` be enough to save me here? – Darwin Apr 19 '21 at 10:41
  • Two remarks: 1. If you have a backup, then just reformat and use that. 2. If you do not have a backup and the data is important, then stop using the disk. First recreate the setup on a second disk and test. Or even better, make a copy of the disk's contents (e.g. using dd) before making more changes. – Hennes Apr 19 '21 at 11:10
  • Also, the question seems to be a better fit on Super User than on Server Fault. Let me try to get it migrated. – Hennes Apr 19 '21 at 11:10
  • @Hennes Sure, go ahead if that's possible. – Darwin Apr 19 '21 at 11:41

2 Answers


Since the RAID superblock was created at the beginning of the disk, the real data begins after it.

Usually that is what you want, because it ensures that any access to the disk goes through the RAID and the set remains consistent, but it also means that block 0 of the RAID is not block 0 of the disk.
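
You can even read the size of that offset region off the fdisk output in the question: /dev/sda1 has 1953456095 sectors but /dev/md0 only exposed 1953191808, so about 264287 sectors (roughly 129 MiB) went to the superblock, bitmap and reserved space. That is consistent with the 128 MiB (262144-sector) data offset that recent mdadm versions use by default for 1.2 metadata, plus a bit of rounding at the end. The exact value would have been reported as "Data Offset" by `mdadm --examine /dev/sda1`, but that information is gone now that the superblock has been zeroed.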

So the data needs to be shifted. This can be done in place with dd, since you are moving blocks from the back towards the front, but if that process is interrupted you are likely to lose data, and it is effectively irreversible: dd cannot work in reverse order, which is what you would need in order to move blocks back towards the end of the disk.
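
A minimal sketch of that shift, assuming the default 128 MiB (262144-sector) data offset discussed above; since that offset is an educated guess here, practice on a dd image of the disk rather than the disk itself:

$ # ASSUMES a 128 MiB data offset; a wrong value destroys the filesystem
$ sudo dd if=/dev/sda1 of=/dev/sda1 bs=1M skip=128 conv=notrunc status=progress

Because skip is ahead of the (implicit) seek=0, every block is read before the write position reaches it, which is exactly why the front-to-back direction is safe in place and the reverse is not. Afterwards, run `e2fsck -f /dev/sda1` before mounting.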

At this point I'd ask myself whether the effort is worth it. The overhead of a one-disk RAID 1 is just the kernel adding an offset to every request, in a well-optimized code path. The easy way out for you is to simply recreate the RAID with the --assume-clean option.
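
A sketch of that recreate, assuming the array really was a plain one-device mirror with 1.2 metadata as the --detail output suggests (the metadata version matters, and if your mdadm uses a different default data offset than the original did, you would also need --data-offset to match):

$ # ASSUMES metadata 1.2 with the original default data offset;
$ # --assume-clean skips the initial sync, --force allows a 1-device mirror
$ sudo mdadm --create --verbose /dev/md0 --level=1 --raid-devices=1 --force --metadata=1.2 --assume-clean /dev/sda1

Then verify with a read-only mount (`mount -o ro /dev/md0 /mnt/ext`) before writing anything.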

Simon Richter
  • Will recreating the array keep my data? Is there a way to recreate it without having to guess what my initial parameters were when creating it? – Darwin Apr 19 '21 at 11:49
  • I did try running `sudo mdadm --create --verbose /dev/md0 --level=mirror --raid-devices=2 /dev/sda1 missing` and it said `mdadm: partition table exists on /dev/sda1 but will be lost or meaningless after creating array` and asked if I wanted to proceed, but I got scared and declined. – Darwin Apr 19 '21 at 11:50
  • @Darwin, recreating with `--assume-clean` keeps the data, yes. The partition table worries me though. – Simon Richter Apr 19 '21 at 12:07
  • @Darwin, can you check where the partition tables are? `dd if=/dev/sda bs=1M count=10 | hexdump -C | grep ' 55 aa '` should give you good candidates. – Simon Richter Apr 19 '21 at 12:14
  • Ok. First it's probably best that I copy the contents of the disk somewhere before I do anything else. I'll do what you ask next. – Darwin Apr 19 '21 at 12:22
  • That is a read-only test. It should give you two candidates, most likely `000001f0` and `002001f0`; at least that's what I get when I experiment with loop devices. You might see a few false positives since you have actual data on there: a valid partition table signature sits at the end of a 512-byte sector, so the `f0` part is fixed and the digit before it is odd. – Simon Richter Apr 19 '21 at 12:43
  • I get a bunch of stuff, but I'm guessing the first two entries are relevant: `000001f0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 55 aa |..............U.|` `001001f0 73 74 61 72 74 0d 0a 00 8c a9 be d6 00 00 55 aa |start.........U.|` – Darwin Apr 19 '21 at 12:48
  • @Darwin, ah, that is different than expected. I can replicate this by creating the array with `-e 0.9`, i.e. the old metadata format -- it seems alignment requirements are different then. – Simon Richter Apr 19 '21 at 12:54
  • Let us [continue this discussion in chat](https://chat.stackexchange.com/rooms/123139/discussion-between-darwin-and-simon-richter). – Darwin Apr 19 '21 at 12:58

In the end, I managed to recover my ext4 partition using the TestDisk program: https://www.cgsecurity.org/wiki/TestDisk

I didn't recover the RAID setup, but all the data was saved.
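
For anyone in the same situation, the rough sequence was (menu names from memory, so treat this as approximate):

$ sudo testdisk /dev/sda

then select the disk, the EFI GPT partition table type, and Analyse followed by Quick Search (and Deeper Search if that finds nothing). TestDisk located the ext4 filesystem at its real offset and offered to write a corrected partition table.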

Darwin