
I have just created a new RAID 5 array using three 4 TB drives (aiming for 8 TB of space) on an Ubuntu system. While I had a few issues getting started, I believe I have set it up correctly, and I have created an ext4 filesystem on it as a single partition using the whole array. When I look at it in GParted, though, it reports:

Size: 7.28 TiB (this is correct - I know the difference between TB and TiB)
Used: 117 GiB
Unused: 7.16 TiB

If I run sudo df -h I get

Filesystem      Size  Used Avail Use% Mounted on
/dev/md0        7.2T   51M  6.8T   1% /home/brad/raid

which is a different size again. The available space is 400G less than the size, yet the used figure is only 51M here!

My question is: is this the expected output at this point in time, or is it an indication that something has gone awry? If it is expected, then what is using the space that GParted reports as used?

In case anyone wants to see it, here is the output from cat /proc/mdstat:

md0 : active raid5 sdb1[0] sdd1[3] sdc1[1]
      7813772288 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
      [=>...................]  recovery =  7.2% (283321088/3906886144) finish=2181.8min speed=27679K/sec

unused devices: <none>
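As a sanity check on those mdstat numbers: with n drives, RAID 5 gives (n - 1) drives' worth of usable capacity. A quick sketch of the arithmetic, taking the 3906886144 KiB per-device figure from the recovery line above:

```shell
# RAID 5 across n drives stores (n - 1) drives' worth of data;
# the remaining one drive's worth of capacity holds parity.
per_drive_kib=3906886144   # per-device size from /proc/mdstat (1 KiB blocks)
n=3
array_kib=$(( (n - 1) * per_drive_kib ))
echo "array size: ${array_kib} KiB"                 # 7813772288, matching the mdstat total
echo "which is $(( array_kib / 1024 / 1024 )) GiB"  # 7451 GiB, i.e. ~7.28 TiB
```

So the 7813772288-block total and GParted's 7.28 TiB are both exactly what a healthy 3-drive RAID 5 of these disks should report.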

and from sudo fdisk -l:

Disk /dev/sda: 250.1 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000ac78f

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048   472330239   236164096   83  Linux
/dev/sda2       472332286   488396799     8032257    5  Extended
/dev/sda5       472332288   488396799     8032256   82  Linux swap / Solaris

WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.


Disk /dev/sdb: 4000.8 GB, 4000787030016 bytes
255 heads, 63 sectors/track, 486401 cylinders, total 7814037168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1  4294967295  2147483647+  ee  GPT
Partition 1 does not start on physical sector boundary.

WARNING: GPT (GUID Partition Table) detected on '/dev/sdc'! The util fdisk doesn't support GPT. Use GNU Parted.


Disk /dev/sdc: 4000.8 GB, 4000787030016 bytes
255 heads, 63 sectors/track, 486401 cylinders, total 7814037168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1               1  4294967295  2147483647+  ee  GPT
Partition 1 does not start on physical sector boundary.

WARNING: GPT (GUID Partition Table) detected on '/dev/sdd'! The util fdisk doesn't support GPT. Use GNU Parted.


Disk /dev/sdd: 4000.8 GB, 4000787030016 bytes
255 heads, 63 sectors/track, 486401 cylinders, total 7814037168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1               1  4294967295  2147483647+  ee  GPT
Partition 1 does not start on physical sector boundary.

WARNING: GPT (GUID Partition Table) detected on '/dev/sde'! The util fdisk doesn't support GPT. Use GNU Parted.


Disk /dev/sde: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1               1  1953525167   976762583+  ee  GPT

Disk /dev/md0: 8001.3 GB, 8001302822912 bytes
2 heads, 4 sectors/track, 1953443072 cylinders, total 15627544576 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 524288 bytes / 1048576 bytes
Disk identifier: 0x00000000

Disk /dev/md0 doesn't contain a valid partition table

Hmmmm, the bit at the end about /dev/md0 not containing a valid partition table is interesting.

brad

1 Answer


The fact that /dev/md0 doesn't have a partition table is not relevant to your problem. You plainly stated that you created the filesystem on the raw device:

I have created an ext4 filesystem on it as a single partition using the whole array

so it makes sense that you have no partition table, as you did not partition the space. It's not an issue, but be sure that you never write a partition table to that device, as you would clobber the beginning of the filesystem and lose access to your data.

As for the other disks, fdisk is complaining that they use GPT and advising you to use GNU Parted (or another GPT-aware tool such as gdisk) instead. fdisk's output for GPT disks is meaningless, so nothing can be concluded from it.

Now to your primary question: where did the space go?

/dev/md0        7.2T   51M  6.8T   1% /home/brad/raid

Where did your ~400 GB go? It went into filesystem overhead. ext4 preallocates all of the metadata it needs to track every inode on the filesystem; additionally, on a volume this big, you will have many backup copies of the superblock, and a large number of blocks are allocated to the filesystem journal. There is nothing to fix or change here, and the filesystem metadata will not grow over time on an ext2/3/4 filesystem. If you don't like that amount of overhead, your only real options are to tune the inode ratio at mkfs time or to use a different filesystem.
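One plausible accounting for the df numbers, assuming the mkfs defaults were used (a sketch, not verified against your volume): ext4 reserves 5% of blocks for root by default (mke2fs -m 5), and that reserve is counted in df's Size but not in Avail, which lines up with the ~400G gap. GParted's 117 GiB "used", by contrast, is the preallocated metadata (inode tables, journal, backup superblocks), which df's Size already excludes; hence the two tools report different numbers.

```shell
# Estimate the root-reserved space on a 7.2T ext4 filesystem,
# assuming the default 5% reserve (mke2fs -m 5).
awk 'BEGIN { printf "reserved ~= %.0f GiB\n", 0.05 * 7.2 * 1024 }'
# prints: reserved ~= 369 GiB
```

If the reserve is unwanted on a pure data volume, `sudo tune2fs -m 1 /dev/md0` lowers it to 1%; you can inspect the current reserved block count (and the rest of the superblock) with `sudo tune2fs -l /dev/md0`.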

casey
  • I was hoping it was just filesystem overhead, but wow! 400G! That's close to 6% of my RAID gone (or about $35 worth, not so bad in monetary terms I guess). Is there a more suitable filesystem I should be considering here? Also, why the discrepancy between gparted and df? – brad Sep 04 '14 at 03:40