
I have mistakenly installed Fedora Server on my RAID 5 array rather than on the SSD. Now the RAID will not mount:

# mount /dev/md0 /mnt/raid
mount: /mnt/raid: wrong fs type, bad option, bad superblock on /dev/md0, missing codepage or helper program, or other error.
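
For reference, a few read-only checks that can show what is actually on the array now; none of these write to the disks (wipefs -n is the no-act mode that only lists what it finds):

blkid -p /dev/md0     # probe for filesystem/partition-table signatures
wipefs -n /dev/md0    # list all signatures found, change nothing
file -s /dev/md0      # classify the first bytes of the device
dmesg | tail -n 20    # the kernel log usually names the real mount error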

lsblk

NAME          MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda             8:0    0   1.8T  0 disk
└─md0           9:0    0   5.5T  0 raid5
  ├─md0p1     259:14   0 199.5M  0 md
  ├─md0p2     259:15   0     1G  0 md
  ├─md0p3     259:16   0   7.8G  0 md
  └─md0p4     259:17   0   5.5T  0 md
sdb             8:16   0   1.8T  0 disk
└─md0           9:0    0   5.5T  0 raid5
  ├─md0p1     259:14   0 199.5M  0 md
  ├─md0p2     259:15   0     1G  0 md
  ├─md0p3     259:16   0   7.8G  0 md
  └─md0p4     259:17   0   5.5T  0 md
sdc             8:32   0   1.8T  0 disk
└─md0           9:0    0   5.5T  0 raid5
  ├─md0p1     259:14   0 199.5M  0 md
  ├─md0p2     259:15   0     1G  0 md
  ├─md0p3     259:16   0   7.8G  0 md
  └─md0p4     259:17   0   5.5T  0 md
sdd             8:48   0   1.8T  0 disk
└─md0           9:0    0   5.5T  0 raid5
  ├─md0p1     259:14   0 199.5M  0 md
  ├─md0p2     259:15   0     1G  0 md
  ├─md0p3     259:16   0   7.8G  0 md
  └─md0p4     259:17   0   5.5T  0 md
nvme0n1       259:0    0 238.5G  0 disk
├─nvme0n1p1   259:1    0   200M  0 part  /boot/efi
├─nvme0n1p2   259:2    0     1G  0 part  /boot
├─nvme0n1p3   259:3    0   7.8G  0 part  [SWAP]
├─nvme0n1p4   259:4    0    50G  0 part  /
└─nvme0n1p5   259:5    0 179.5G  0 part  /home

fdisk -l

Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0xa3cc0309

Device     Boot Start        End    Sectors  Size Id Type
/dev/sda1        2048 3907029167 3907027120  1.8T fd Linux raid autodetect

Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0xed8ce2fd

Device     Boot Start        End    Sectors  Size Id Type
/dev/sdb1        2048 3907029167 3907027120  1.8T fd Linux raid autodetect

Disk /dev/sdc: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x3d7e7ce7

Device     Boot Start        End    Sectors  Size Id Type
/dev/sdc1        2048 3907029167 3907027120  1.8T fd Linux raid autodetect

Disk /dev/sdd: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x105b1153

Device     Boot Start        End    Sectors  Size Id Type
/dev/sdd1        2048 3907029167 3907027120  1.8T fd Linux raid autodetect

Disk /dev/nvme0n1: 238.5 GiB, 256060514304 bytes, 500118192 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 9B5B3FC7-C545-4AE7-94A4-D1B2BFCB006D

Device             Start       End   Sectors   Size Type
/dev/nvme0n1p1      2048    411647    409600   200M EFI System
/dev/nvme0n1p2    411648   2508799   2097152     1G Linux filesystem
/dev/nvme0n1p3   2508800  18849791  16340992   7.8G Linux swap
/dev/nvme0n1p4  18849792 123707391 104857600    50G Linux filesystem
/dev/nvme0n1p5 123707392 500117503 376410112 179.5G Linux filesystem

Disk /dev/md0: 5.5 TiB, 6000793878528 bytes, 11720300544 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 524288 bytes / 1572864 bytes
Disklabel type: gpt
Disk identifier: E95D6E40-653D-4E3B-AD23-473266AB7890

Device          Start         End     Sectors   Size Type
/dev/md0p1       3072      411647      408576 199.5M EFI System
/dev/md0p2     411648     2509823     2098176     1G Linux filesystem
/dev/md0p3    2509824    18849791    16339968   7.8G Linux swap
/dev/md0p4   18849792 11720294399 11701444608   5.5T Linux filesystem

cat /proc/mdstat

Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sdd[4] sdb[0] sdc[2] sda[1]
      5860150272 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
      bitmap: 0/15 pages [0KB], 65536KB chunk

unused devices: <none>
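
Note that /proc/mdstat shows the array itself assembled and healthy ([4/4] [UUUU], all four members present), so the md layer is intact; the damage is inside the array, at the partition/filesystem level. Before any recovery attempt it is safer to work against a copy-on-write overlay so nothing further gets written to /dev/md0. A minimal sketch, assuming ~10 GiB of free space in /tmp is enough to absorb the writes of the recovery tools:

truncate -s 10G /tmp/md0-overlay.img               # sparse file to hold copy-on-write data
loopdev=$(losetup -f --show /tmp/md0-overlay.img)  # attach it to a free loop device

# device-mapper snapshot: reads come from /dev/md0, writes land in the overlay
dmsetup create md0-overlay --table \
  "0 $(blockdev --getsz /dev/md0) snapshot /dev/md0 $loopdev P 8"

Recovery tools can then be pointed at /dev/mapper/md0-overlay instead of the real array.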

mdadm --examine /dev/sda

/dev/sda:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 6f619c94:6b307950:3fdba8c6:4f42b906
           Name : localhost.localdomain:0
  Creation Time : Wed Jul 20 12:14:37 2016
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 3906767024 (1862.89 GiB 2000.26 GB)
     Array Size : 5860150272 (5588.67 GiB 6000.79 GB)
  Used Dev Size : 3906766848 (1862.89 GiB 2000.26 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262056 sectors, after=176 sectors
          State : clean
    Device UUID : a1d9b5ed:86165a6d:d89c58e5:eb920479

Internal Bitmap : 8 sectors from superblock
    Update Time : Mon Jul 13 15:53:28 2020
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : afaddf01 - correct
         Events : 65852

         Layout : left-symmetric
     Chunk Size : 512K

    Device Role : Active device 1
    Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)

mdadm --examine /dev/sdb

/dev/sdb:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 6f619c94:6b307950:3fdba8c6:4f42b906
           Name : localhost.localdomain:0
  Creation Time : Wed Jul 20 12:14:37 2016
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 3906767024 (1862.89 GiB 2000.26 GB)
     Array Size : 5860150272 (5588.67 GiB 6000.79 GB)
  Used Dev Size : 3906766848 (1862.89 GiB 2000.26 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262056 sectors, after=176 sectors
          State : clean
    Device UUID : d3a9e0cc:a3d59a4f:25bba234:b9d61ef3

Internal Bitmap : 8 sectors from superblock
    Update Time : Mon Jul 13 15:53:28 2020
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 3a7dd06a - correct
         Events : 65852

         Layout : left-symmetric
     Chunk Size : 512K

    Device Role : Active device 0
    Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)

mdadm --examine /dev/sdc

/dev/sdc:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 6f619c94:6b307950:3fdba8c6:4f42b906
           Name : localhost.localdomain:0
  Creation Time : Wed Jul 20 12:14:37 2016
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 3906767024 (1862.89 GiB 2000.26 GB)
     Array Size : 5860150272 (5588.67 GiB 6000.79 GB)
  Used Dev Size : 3906766848 (1862.89 GiB 2000.26 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262056 sectors, after=176 sectors
          State : clean
    Device UUID : 6bda75d1:a7960438:b0b18851:bdc1e1d8

Internal Bitmap : 8 sectors from superblock
    Update Time : Mon Jul 13 15:53:28 2020
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 2a25a397 - correct
         Events : 65852

         Layout : left-symmetric
     Chunk Size : 512K

    Device Role : Active device 2
    Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)

mdadm --examine /dev/sdd

/dev/sdd:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 6f619c94:6b307950:3fdba8c6:4f42b906
           Name : localhost.localdomain:0
  Creation Time : Wed Jul 20 12:14:37 2016
     Raid Level : raid5
   Raid Devices : 4

 Avail Dev Size : 3906767024 (1862.89 GiB 2000.26 GB)
     Array Size : 5860150272 (5588.67 GiB 6000.79 GB)
  Used Dev Size : 3906766848 (1862.89 GiB 2000.26 GB)
    Data Offset : 262144 sectors
   Super Offset : 8 sectors
   Unused Space : before=262056 sectors, after=176 sectors
          State : clean
    Device UUID : 0f8d3e2f:409ba082:7369a795:614a1a07

Internal Bitmap : 8 sectors from superblock
    Update Time : Mon Jul 13 15:53:28 2020
  Bad Block Log : 512 entries available at offset 72 sectors
       Checksum : 44e19b3c - correct
         Events : 65852

         Layout : left-symmetric
     Chunk Size : 512K

    Device Role : Active device 3
    Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
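
All four members report State: clean with the same event count (65852) and the original 2016 creation time, so the array metadata survived; what the installer overwrote is the beginning of the array's contents: the new GPT plus the EFI, /boot and swap areas, roughly the first 9 GiB judging by the md0 partition table above. What is recoverable depends entirely on what was on the array before. As one illustrative sketch, assuming the array previously held a single ext4 filesystem created directly on /dev/md0 (an assumption; the question does not say, and LVM or LUKS layouts need different tools), ext4 keeps backup superblocks further into the device, and their expected locations can be listed without writing anything:

mke2fs -n /dev/md0    # -n: simulate only, print where superblocks would be placed

# then try a read-only fsck against one of the printed backup locations
# (32768 is illustrative; use a block number from the output above)
e2fsck -n -b 32768 -B 4096 /dev/md0

One caveat: mke2fs -n only predicts the right locations when run with the same parameters as the original mkfs. If this approach fails, signature scanners such as testdisk or photorec are the usual fallback, ideally run against the overlay device sketched earlier.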

Is there any chance to recover the data from the RAID?

  • If sd{a,b,c,d} are part of the MD device (please add the output of /proc/mdstat to your question) then fdisk -l on these devices does not make sense. You see old data (before the RAID creation). – Hauke Laging Jul 13 '20 at 23:18
  • Posted my /proc/mdstat – Avi Jul 14 '20 at 03:57
  • It's absolutely not clear which partitions your Fedora installation overwrote, so I'd just recreate RAID because there's little chance you'll be able to recover it. If you had data you need to restore, please try using R-Studio Undelete. – Artem S. Tashkinov Jul 14 '20 at 05:35
  • That's a lot of damage... what was on the RAID before? Just a filesystem, which one, or other things like LVM, LUKS, partitions, ...? Check mdadm --examine for the creation date, is it still the old RAID or did it get re-created as a whole too? – frostschutz Jul 14 '20 at 08:07
  • @roaima What makes you think the RAID has not started...? – Hauke Laging Jul 14 '20 at 08:32
  • @HaukeLaging Ah. I missed the md0 line from /proc/mdstat entirely. I'll delete and rethink. Thanks – Chris Davies Jul 14 '20 at 11:09
  • @frostschutz I just added the mdadm --examine data. As you can see, the creation time is still the original one; the update time is when I upgraded the OS. – Avi Jul 14 '20 at 13:32
