
I have a 3TB hard disk pulled out of a WD MyBook Live NAS. The partition table (as printed by parted) is as follows:

Model: ATA WDC WD30EZRS-11J (scsi)
Disk /dev/sdb: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt

Number  Start   End     Size    File system  Name     Flags
 3      15.7MB  528MB   513MB                primary
 1      528MB   2576MB  2048MB  ext3         primary  raid
 2      2576MB  4624MB  2048MB  ext3         primary  raid
 4      4624MB  3001GB  2996GB  ext4         primary

So I'm trying to access partition 4 (the big one!):

root@john-desktop:~/linux-3.9-rc8# mount -t ext4 /dev/sdb4 /mnt/
mount: wrong fs type, bad option, bad superblock on /dev/sdb4,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail  or so

Dmesg output:

[ 2058.085881] EXT4-fs (sdb4): bad block size 65536

This is fair enough; as far as I can tell, my kernel isn't configured to support block sizes over 4K.
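A quick sanity check: the kernel's page cache limits the filesystem block size to the CPU's page size, and getconf confirms that is 4K here:

getconf PAGESIZE
4096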

My question is: what symbol should I be searching for in my kernel config to allow partitions with bigger block sizes to be mounted? I've scoured Google for this; I thought I saw the option once before, but I can't find any mention of it in the latest stable kernel source.

Edit: Full hard disk info from hdparm here: http://pastebin.com/hDdbUzjd

Edit: dumpe2fs output (excerpt):

Mount count:              0
Maximum mount count:      30
Last checked:             Wed May 30 15:22:14 2012
Check interval:           15552000 (6 months)
Next check after:         Mon Nov 26 14:22:14 2012
Lifetime writes:          319 GB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      cd7a28a0-714c-9942-29f4-83bca1209130
Journal backup:           inode blocks
Journal features:         journal_incompat_revoke
Journal size:             2048M
Journal length:           32768
Journal sequence:         0x00010146
Journal start:            0
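The dump above is an excerpt; the block size itself can be pulled straight out of the header, and it should show the 65536 that dmesg complained about:

dumpe2fs -h /dev/sdb4 | grep 'Block size'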
John Hunt
  • Please note, passing a block size to the mount command does nothing: mount -t ext4 -o bs=65536 /dev/sdb4 /mnt ... the bs option is only supported by a handful of filesystems (and not the ext ones). – John Hunt Apr 24 '13 at 13:09
  • What is the dumpe2fs -h output for that volume? My 2012 man page for mke2fs says: "Valid block-size values are 1024, 2048 and 4096 bytes per block." Quite a jump from 4096 to 65536. – Hauke Laging Apr 24 '13 at 13:22
  • @HaukeLaging - Have added that output, thanks :) – John Hunt Apr 24 '13 at 13:31
  • See http://superuser.com/a/246756/116326 – jofel Apr 24 '13 at 13:48
  • @jofel that doesn't actually offer a solution, just vague reasoning. – John Hunt Apr 24 '13 at 13:52
  • I'm currently trying a new kernel with 64k support by editing arch/x86/include/asm/page_types.h (#define PAGE_SHIFT 16 instead of #define PAGE_SHIFT 12) – John Hunt Apr 24 '13 at 16:02
  • Ok, that isn't going to work (it fails to compile all over the place if you change that number). Maybe I can alter the block size on the filesystem using a tool of some kind? – John Hunt Apr 24 '13 at 16:19

2 Answers


Woohoo, I solved it :)

The short answer is that you can't mount devices with a block size over 4K on x86 Linux machines, as far as I can tell, without some serious kernel hacking.

However, there is a workaround: using fuse-ext2 to mount the disk:

fuseext2 -o ro -o sync_read /dev/sdb4 /mnt/

(you'll probably need to apt-get install fuseext2 first.)

works perfectly first time!
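Once it's mounted (read-only), it's just a matter of copying everything somewhere safe and detaching. fusermount -u is the FUSE equivalent of umount; the backup path below is just a placeholder:

cp -a /mnt/. /path/to/backup/
fusermount -u /mnt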

This is basically going to be how people can recover their MyBook Live disks.

Howto blog post here: http://john-hunt.com/2013/04/25/recovering-data-from-a-wd-mybook-live-2tb-3tbor-similar/

John Hunt
  • Yea, the kernel page cache limits block size to the CPU's page size, so for i386/amd64, you can't mount a fs with more than 4k block size. Another alternative is to use e2tools. – psusi Apr 25 '13 at 15:03
  • Thank you! I was able to access a Seagate Central NAS drive this way. – Tobia Sep 02 '15 at 20:45
  • I tried this on 2 machines (Kubuntu 14.04 x86 and 15.10 x64), but I get freezing and 100% CPU usage from fuseext2 when opening the mounted folder. Can anybody tell me why? – Yura Shinkarev Feb 11 '16 at 13:29
  • I'm getting a similar experience to YShinkarev - hanging when trying to access the fuseext2 mount, also freezes when I try to umount it. Ctrl+c has no effect... – Adam Griffiths Jul 26 '17 at 17:34
  • While this does not answer the question per se, you can recover the data using debugfs /dev/sdXX, where you can run basic commands such as ls and rdump to copy the files to a safe location. Source: http://n-dimensional.de/blog/2012/05/01/wd-mybook-live-data-rescue/ I used this method because fuseext2 freezes (https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=776248) – Alecz Oct 06 '17 at 15:04
  • The version from the Ubuntu repository was very unstable for me until I built and installed it from source: https://github.com/alperakcan/fuse-ext2 – Igor Mikushkin Feb 04 '23 at 21:50
  • Apologies, the link is currently dead. Hopefully I'll recover it over the next couple of months. – John Hunt Feb 09 '23 at 10:04

Some folks have experienced lockups with fuseext2, so here's an alternative:

debugfs /dev/sdb4

debugfs opens a CLI. rdump <directory> <target> will recursively copy an entire directory from the disk's filesystem to the host filesystem. For example, rdump home /tmp will copy the disk's /home directory to /tmp/home.
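debugfs can also take a single command from the shell with -R, so the same copy can be done non-interactively (same example paths as above):

debugfs -R 'rdump home /tmp' /dev/sdb4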

thiton
  • I did not get lockups with fuse-ext2; I was unable to mount at all, with no error/debug output or logging either. I tried the latest release from 2018 and also built from source using the latest git branch. Thankfully, debugfs works perfectly fine, which is to be expected since it was created by one of the ext4 developers. – Snake Jun 29 '21 at 03:25