24

Some background

  • The disk itself was "worked on" by a friend and is said to be intact, undamaged, and still mountable/recoverable
  • The disk was part of a software raid 1 on Ubuntu 12.04
  • The other disk in the original raid 1 was formatted and used for another purpose, leaving the current disk (the one in question) still technically part of a raid that no longer exists

What I have tried already

  • Basic mounting

    • I added an entry to fstab (roughly as sketched below, after this list), marked the disk as ext3/ext4 and tried to mount.
    • Upon mounting the following error appears

      wrong fs type, bad option, bad superblock on

    • And in dmesg

      EXT4-fs (sdc1): VFS: Can't find ext4 filesystem

  • I have tried to find the file system type of the disk and have come up with

    $sudo file -s /dev/sdc
    /dev/sdc: x86 boot sector; partition 1: ID=0x83, starthead 254, startsector 63, 1953520002 sectors, code offset 0xb8

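For reference, roughly what the attempted fstab entry and mount looked like (the mount point is an assumption on my part; the filesystem type was only a guess at the time):

    # sketch of the attempted /etc/fstab entry; /mnt/olddisk is an assumed mount point
    /dev/sdc1   /mnt/olddisk   ext4   defaults   0   2

    $sudo mount /mnt/olddisk
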
Where I need some help / My Questions

  • Is there a way to convert the disk to ext4 without damaging the data?
  • Is there a simple way to mount the partition of type 83 (Linux) and recover the data?
  • I have another disk currently free in case it is a possibility to somehow rebuild the raid
  • My main goal is to recover the data from the disk. I am open to all options.

Update

Some commands' output

  • fdisk -l /dev/sdc

    $fdisk -l /dev/sdc

    Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
    255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x0005ed9c

       Device Boot      Start         End      Blocks   Id  System
    /dev/sdc1              63  1953520064   976760001   83  Linux

  • file -s /dev/sdc1

    $file -s /dev/sdc1
    /dev/sdc1: data

  • hexdump -C -n 32256 /dev/sdc (Not sure if this could help or not)

    $hexdump -C -n 32256 /dev/sdc
    00000000  fa b8 00 10 8e d0 bc 00  b0 b8 00 00 8e d8 8e c0  |................|
    00000010  fb be 00 7c bf 00 06 b9  00 02 f3 a4 ea 21 06 00  |...|.........!..|
    00000020  00 be be 07 38 04 75 0b  83 c6 10 81 fe fe 07 75  |....8.u........u|
    00000030  f3 eb 16 b4 02 b0 01 bb  00 7c b2 80 8a 74 01 8b  |.........|...t..|
    00000040  4c 02 cd 13 ea 00 7c 00  00 eb fe 00 00 00 00 00  |L.....|.........|
    00000050  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
    *
    000001b0  00 00 00 00 00 00 00 00  9c ed 05 00 00 00 00 fe  |................|
    000001c0  ff ff 83 fe ff ff 3f 00  00 00 82 59 70 74 00 00  |......?....Ypt..|
    000001d0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
    *
    000001f0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 55 aa  |..............U.|
    00000200  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
    *
    00007e00
    
Adam
  • The problem is that the partition thinks it has some raid volume on it and not an ext4fs. And the kernel is right. However as it was a raid 1 it happens to be an ext4fs. a mount -f ext4 /dev/sdc1 /mountpoint should do the trick. To force mount to assume ext4 instead of looking for a file system is what -f does – Bananguin Feb 15 '13 at 18:21
  • The force mount doesn't give any errors, but the mount point is blank. Either the data is gone, or the mount didn't work as expected. Doing a df shows me that the newly mounted disk is 2% in use which is significantly lower than expected. – Adam Feb 15 '13 at 21:22
  • @user1129682, if mount says it isn't ext4, then it isn't... trying to force it isn't going to help. – psusi Feb 16 '13 at 00:45
  • @psusi: worked for me. Gilles' answer explains why it works under some circumstances – Bananguin Feb 16 '13 at 18:44
  • @Bananguin Don't you mean mount -t ext4? The -f flag is for 'fake' mounting (ubuntu 14.04). – Quantum7 Sep 08 '14 at 10:41
  • @Quantum7: yes, you are right, but I can't edit the comment anymore – Bananguin Sep 08 '14 at 15:17
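(For later readers, the corrected form of the command discussed in these comments, with an assumed mount point:)

    sudo mount -t ext4 /dev/sdc1 /mnt/olddisk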

4 Answers

22

This is working excellently in Ubuntu 14.04:

sudo -i
mdadm --assemble --scan

You will get:

mdadm: /dev/md/1 has been started with 1 drive (out of 2)

Then mount and see your files:

cd /mnt && mkdir to-restore-md1 && mount /dev/md1 to-restore-md1
ls -la to-restore-md1
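
If the array comes up, a quick sanity check before copying anything off (the md device name comes from the mdadm message above):

cat /proc/mdstat            # the degraded array should appear as md1, active with 1 of 2 drives
mdadm --detail /dev/md1     # shows the array state and which member disk was found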
Jeff Schaller
  • Was getting "exists but is not an md array" on a failed hard drive that was part of an array... and this worked better than all the other suggestions. Mounted successfully, busy copying data off right now. – Zayne S Halsall Dec 28 '16 at 11:14
  • this option worked well for me. 2 Intel SSDs in RAID1. Pulled one and slaved it off a SATA port to a PC running SUSE Linux. Initially it shows up only as /dev/sdc and as /dev/md127. Then did mdadm --assemble --scan which resulted in /dev/md/Volume0_0p1 and /dev/md/Volume0_0p2 and so on, corresponding to the 4 partitions that were on the disk. P2 was the one I needed so: mkdir /p2 followed by mount /dev/md/Volume0_0p2 /p2 mounted it, which was EXT3, and I can easily access and copy the data. It also mounted it as read-write. – ron May 16 '17 at 16:10
  • sometimes the --scan mode does not start arrays with missing disks, in my case here I had to stop the auto-assembled array and start it again with mdadm --assemble --force /dev/md/1 /dev/sdc1 – Bruno Medeiros Mar 30 '19 at 02:27
  • mdadm --assemble --scan will give you: mdadm: No arrays found in config file or automatically – theking2 Mar 16 '21 at 18:26
7

Linux mdraid has several metadata formats. Formats 0.9 and 1.0 put the metadata at the end of the containing device, and the payload (the filesystem) starts at the beginning of the device and can be accessed directly without going through the raid layer. Formats 1.1 and 1.2 put the metadata at the beginning of the containing device (1.1 at the very start, 1.2 at 4 KiB from the start), so the payload is at an offset.

The Ubuntu installer creates volumes with the 1.2 metadata format, so your data starts after the metadata instead of at the beginning of the device.

The simplest way to access that data is to assemble the raid device. In a RAID-1 volume, a single device is sufficient.

mdadm -A /dev/sdc1
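
A more explicit form names the target array, starts it even though the second mirror is missing, and mounts it read-only (the md device name and mount point are assumed):

mdadm --assemble --run /dev/md1 /dev/sdc1   # --run starts the array despite the missing second disk
mount -o ro /dev/md1 /mnt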

(Stop here unless you like pain.)

You can also access the data at an offset. The only point I can see to doing this is if you have to work in a very old kernel that doesn't support 1.x mdraid formats. First, determine the offset with mdadm -E /dev/sdc1: look for the line Data Offset : SSS sectors. An mdadm sector is 512 bytes.

sectors=$(mdadm -E /dev/sdc1 | awk '/Data Offset/ {print $4}')
bytes=$(($sectors * 512))
losetup -f -o $bytes /dev/sdc1
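
Once the loop device exists, it can be mounted read-only (the device name below is an assumption; losetup -a shows which one was actually allocated):

losetup -a                     # find the loop device that was just set up
mount -o ro /dev/loop0 /mnt    # /dev/loop0 is assumed; use whatever losetup reports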

In desperation, with 1.x formats, the data offset is stored in bytes 128–135 of the metadata, little-endian¹. 1.2 metadata is 4096 bytes after the beginning of the device.
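
A rough way to peek at those bytes, assuming the 1.2 superblock really is at 4 KiB and reading the value in the host's byte order (little-endian on x86):

# bytes 4224-4231 of the partition = bytes 128-135 of the 1.2 superblock;
# the value printed should be the data offset in 512-byte sectors
dd if=/dev/sdc1 bs=1 skip=4224 count=8 2>/dev/null | od -An -tu8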

You can also change the partition table to make it start further. Be very careful with your arithmetic. Only do that if you want to keep using the disk on a long-term basis in an old system that can't access the raid device.

¹ Or with platform endianness? I'm not sure.

  • the data can start at different offsets (see mdadm -E /dev/sdc1 for where exactly) but certainly not at 4k for 1.2 metadata, since 4k is precisely where the metadata is stored. See also http://unix.stackexchange.com/q/57477/22565 – Stéphane Chazelas Feb 15 '13 at 23:41
  • @StephaneChazelas Oops, yes, brain fart. Thanks. – Gilles 'SO- stop being evil' Feb 15 '13 at 23:50
  • 3
    mdadm -A /dev/sdc1 outputs mdadm: device /dev/sdc1 exists but is not an md array. I've gone a bit further to use mdadm and see if there is any additional info... mdadm --misc --examine /dev/sdc1 outputs mdadm: No md superblock detected on /dev/sdc1.. Is there a way that I can re-write the superblocks on this disk to mark it as an available disk for RAID assembly? – Adam Feb 16 '13 at 17:12
  • @Gilles A mdadm -E /dev/sdc returns the following for me: /dev/sdc: MBR Magic : aa55 Partition[0] : 1953520002 sectors at 63 (type 83) but no information for /dev/sdc1 though – Adam Feb 16 '13 at 19:05
  • @Adam Are you sure the disk wasn't written to since it last worked? Please post the output of fdisk -l /dev/sdc and file -s /dev/sdc1. – Gilles 'SO- stop being evil' Feb 17 '13 at 09:50
  • I cannot be completely sure the disk wasn't written to since it last worked - it was in the hands of someone else and I'm taking their word. See the update in the original question for the output of the commands. – Adam Feb 17 '13 at 10:12
  • @Adam Either you're running an old system that doesn't support mdraid 1.x formats (if so, try another Ubuntu 12.04, or a recent Sysrescuecd), or the disk has been messed up in some way. Normally /dev/sdc1 should have type fd and not 83, and file should recognize the raid volume. Your problem has been upgraded from a simple setup problem to a much harder data recovery problem. Might the disk have been encrypted, or part of a fakeraid (BIOS RAID)? – Gilles 'SO- stop being evil' Feb 17 '13 at 11:24
  • @Gilles I'm currently trying to mount the disk on Ubuntu 12.04. It is a fresh install as of 2 weeks ago and I only use it for coding. I think it is more likely that the disk is messed up somehow. I can vouch that the disk is not encrypted since I had originally setup the software raid. Is it possible to force mdadm to pick up this disk and rebuild a RAID array? Or does it seem that it is too much trouble at this point? – Adam Feb 17 '13 at 11:53
  • 1
    @Adam If mdadm can't find its metadata there's nothing you can do there: you can't force it to do something since it doesn't know what to do. You need to look for a filesystem, and if psusi's advice doesn't lead anywhere, the outlook is bleak. Maybe a hexdump of the first few kilobytes of the disk could inspire someone (beware that it could expose some confidential data). – Gilles 'SO- stop being evil' Feb 17 '13 at 11:57
  • @Gilles Seems like I'm stuck for the moment. I've posted a hexdump if it may help anyone. I'll continue looking into this on my own as well to see if anything comes up. I'm still open to suggestions too! – Adam Feb 17 '13 at 12:32
6

To my surprise, I was/am able to recover the data by simply using foremost.

The help received here was invaluable. After trying a variety of suggested combinations, as well as my own mix-ins, the ideal method (to mount and use the disk as normal) didn't seem like an option any more. Resorting to data recovery is my solution in this case.
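
A minimal sketch of such a foremost run (the output directory is an assumed path; it must live on a different disk than the one being recovered):

sudo foremost -v -i /dev/sdc1 -o /recovered    # -i input device, -o output directory, -v verbose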

Adam
5

It seems that you have already zapped the mdadm superblock. If it used to be there and was format 1.1 or 1.2, then most likely the filesystem is at offset 2048 sectors. You can run e2fsck /dev/sdc1?offset=2048 to force it to look for the filesystem starting at that offset. If it finds it then you can modify your partition table to point to where the filesystem actually starts. You can use parted /dev/sdc and the unit s command to use units of sectors. print the table, note the start and end sector, then rm the partition, then recreate it with mkpart and use the same end sector, but add the offset to the start sector.

If 2048 does not work, you might also try 1985.
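
If e2fsck does find a filesystem, here is a sketch of the parted session described above, assuming the offset that worked was 2048 sectors and using the start/end sectors from the fdisk output in the question (double-check the arithmetic before writing anything):

parted /dev/sdc
(parted) unit s
(parted) print                                   # current partition: start 63, end 1953520064
(parted) rm 1
(parted) mkpart primary ext4 2111s 1953520064s   # 2111 = 63 + 2048; the end sector stays the same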

psusi
  • Running e2fsck /dev/sdc1?offset=2048 (I also ran offset=1985) outputs Bad magic number..Superblock invalid... as well as suggesting that the superblock is corrupt and trying to run e2fsck with an alternative superblock. Seems like I should provide it an alternative superblock to move forward. – Adam Feb 16 '13 at 20:16
  • @Adam, no, you just need to get the correct offset. testdisk should be able to do a detailed scan and fix the partition table for you. – psusi Feb 16 '13 at 20:44
  • testdisk is completely new territory for me. A basic run (Analyse) shows No ext2, JFS, Reiser.. marker. Bad relative sector. No partition is bootable. It also provides the following: 1 P Linux 0 1 1 121600 254 63 1953520002 How can I make sense of that in order to help the situation? – Adam Feb 16 '13 at 21:30
  • @Adam, I've never used it myself, I just know it is supposed to be able to scan for and find the superblock. You did run it on the whole disk, not the partition right? – psusi Feb 16 '13 at 21:46
  • After running an analysis on the full disk, it turned up no partitions. Currently running a deep scan now. If this doesn't turn up anything, I'm not sure where to go from here. – Adam Feb 16 '13 at 22:12
  • The deep scan turned up the same results as the quick scan: No partitions found. I can't see anything to go on as far as testdisk is concerned. Not sure what to do at this point. All suggestions so far have led to a giant wall >.< – Adam Feb 16 '13 at 22:41
  • @Adam, then it seems the disk was NOT part of a raid1. Maybe it was part of some other raid level, in which case, you can not use it alone. – psusi Feb 17 '13 at 04:13
  • Wait, I think the offset actually is in bytes, and the numbers I listed were sectors, so you will need to multiply them by 512. – psusi Feb 17 '13 at 17:53
  • I tried 63*512 without any luck. I tried a variety of other offsets as well. 4096 gave the best results. I used fsck and it told me that it had reverted the disk to an ext2 system since the ext3/4 was corrupted. It then went on to attempt to convert the entire disk to ext2 (I believe so) since it was saying inode # has illegal blocks.... I aborted and switched over to foremost after that. I may have lost some of the data at the start of the disk - I'll find out soon. >.< – Adam Feb 17 '13 at 21:09