
I have a volume group of approximately 30 TByte, with an ext3 file system on it:

SERVER:/home/usfman # dumpe2fs -h /dev/mapper/datavg-foolv
dumpe2fs 1.41.9 (22-Aug-2009)
Filesystem volume name:   <none>
Last mounted on:          <not available>
Filesystem UUID:          censored
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery sparse_super large_file
Filesystem flags:         signed_directory_hash 
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              289669120
Block count:              1158676480
Reserved block count:     57920704
Free blocks:              216859296
Free inodes:              289592213
First block:              0
Block size:               4096
Fragment size:            4096
Reserved GDT blocks:      747
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
Filesystem created:       censored
Last mount time:          censored
Last write time:          censored
Mount count:              5
Maximum mount count:      33
Last checked:             censored
Check interval:           15552000 (6 months)
Next check after:         censored
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               256
Required extra isize:     28
Desired extra isize:      28
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      censored
Journal backup:           inode blocks
Journal size:             128M

SERVER:/home/usfman # 

Question: Can we increase this FS from 4.3TB to ~13 TByte?

Or will there be some limitations on the maximum size? The ext3 Wikipedia page suggests the maximum size is between 4 and 32 TByte, depending on the block size. Can someone clarify?

https://en.wikipedia.org/wiki/Ext3

UPDATE: The block size is 4096, so 16 TByte is the maximum FS size with 4K blocks. I just need someone to confirm the figures on that wiki page.
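A quick sanity check of that figure (a sketch, using only the block size from the dumpe2fs output above): ext3 addresses blocks with 32-bit numbers, so the ceiling is 2^32 blocks times the block size.

```shell
# ext3 uses 32-bit block numbers, so the maximum filesystem size is
# 2^32 blocks * block size. With the 4096-byte blocks shown above:
block_size=4096
max_bytes=$(( (2**32) * block_size ))
echo "$(( max_bytes / 1024**4 )) TiB"   # prints: 16 TiB
```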

SLES 11, 64 bit kernel

  • If you are using a 64-bit kernel, 13TB is technically possible, but anecdotal evidence points to problems with gigantic filesystem sizes like this. I read this somewhere not too long ago, but right now I cannot find the link to it. Sorry. – MelBurslan Jun 15 '16 at 13:25

1 Answer


This gives you an overview of the maximum file and filesystem sizes: https://access.redhat.com/solutions/1532 . You may also want to think about upgrading ext3 to ext4: https://docs.fedoraproject.org/en-US/Fedora/14/html/Storage_Administration_Guide/ext4converting.html . Growing to 13 TB should go fine as long as your kernel is not too old, e.g. if you are running SLES 11 SP4.
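Assuming the volume group has enough free extents, the resize itself is the standard two-step LVM procedure. A minimal sketch, using the device name from the question and a hypothetical 13 TB target; the script only echoes the commands by default, so nothing is touched until you clear `DRYRUN`:

```shell
#!/bin/bash
# Sketch only -- device name taken from the question; verify sizes and
# take a backup before running for real. Set DRYRUN="" to execute.
DRYRUN=echo

# 1. Grow the logical volume to 13 TB (needs free extents in the VG):
$DRYRUN lvextend -L 13T /dev/mapper/datavg-foolv

# 2. Grow the ext3 filesystem to fill the LV. With the resize_inode
#    feature present (see the dumpe2fs output), this works online on
#    a mounted filesystem:
$DRYRUN resize2fs /dev/mapper/datavg-foolv
```

Note that a forced `e2fsck -f` beforehand is only required for an offline (unmounted) resize; online growth with resize2fs does not need it.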