
The default reserved-blocks percentage for ext filesystems is 5%. On a 4 TB data drive that is 200 GB, which seems excessive to me.

Obviously this can be adjusted with tune2fs:

 tune2fs -m <reserved percentage> <device>

However, the man page for tune2fs states that one of the reasons for these reserved blocks is to avoid fragmentation.
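
For example, a minimal sketch of lowering the percentage and verifying the result (assuming an ext4 filesystem on /dev/sdb1; substitute your actual device):

 # Lower the reserved-blocks percentage to 0.1%; recent e2fsprogs
 # accepts fractional values for -m. /dev/sdb1 is a placeholder.
 tune2fs -m 0.1 /dev/sdb1

 # Alternatively, -r sets an absolute reserved block count
 # (e.g. 262144 blocks of 4 KiB = 1 GiB).
 tune2fs -r 262144 /dev/sdb1

 # Read the superblock to confirm the new reserved block count.
 tune2fs -l /dev/sdb1 | grep -i 'reserved block'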

So given the following (I have tried to be specific to avoid wildly varying opinions):

  • ~4 TB HDD
  • Used to store large files (all >500 MB)
  • Once full, very few writes (maybe 1-5 files replaced per month)
  • Data only (no OS or applications running from the drive)
  • Moderate reads (approx. 20 TB a week, with the whole volume read every 3 months)
  • HDD wear is a concern, and killing an HDD for the sake of saving 20 GB is not the desired outcome (is this even a concern?)

What is the maximum percentage the drive can be filled to without causing fragmentation that is noticeable from a performance and/or HDD-wear perspective?

Are there any other concerns with filling a large data HDD to a high percentage and/or setting the reserved blocks count to, say, 0.1%?

maloo

1 Answer


The biggest problem with fragmentation is free space fragmentation: once your filesystem gets full and there are no longer big chunks of free space left, performance falls off a cliff. Each new file can only be allocated small chunks of space at a time, so it ends up heavily fragmented. Even after other files are deleted, the files written during that period remain splattered all over the disk, and the space they eventually free is itself scattered, so new files become fragmented again.
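
If you want to check how fragmented the free space on a filesystem actually is, e2fsprogs ships a tool for that. A quick sketch, again with /dev/sdb1 as a placeholder device:

 # Print a histogram of free extent sizes for the filesystem.
 # Plenty of large free extents means new large files can still
 # be allocated contiguously.
 e2freefrag /dev/sdb1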

In the usage case you describe above (~500 MB files, relatively few overwrites or new files being written, old ~500 MB files being deleted periodically; I'm assuming some kind of video storage system) you will get relatively little fragmentation, assuming your file size remains fairly constant. This is especially true if your writes are single-threaded, since multiple writer threads would otherwise compete for the small amount of free space and interleave their block allocations. Every old file deleted from the disk frees a few hundred MB of contiguous space (assuming the file was not fragmented to begin with), which the next file can fill again.
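
To see whether the files you have already written are fragmented, filefrag (also from e2fsprogs) reports the extent count per file. The path below is just an example:

 # A 500 MB file in a handful of extents is effectively unfragmented.
 filefrag /data/video-0001.mkv

 # -v additionally lists the physical location of every extent.
 filefrag -v /data/video-0001.mkv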

If you do have multiple concurrent writers, then using fallocate() to reserve a large chunk of space for each file (and truncate() at the end to free any remaining space) will also avoid fragmentation. Even without this, ext4 tries to reserve (in memory) about 8 MB of space for a file while it is being written, to avoid the worst fragmentation.
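
From a script, the command-line wrappers for those calls work the same way. A sketch, with the file name and sizes as assumptions:

 # Preallocate 600 MB of (ideally contiguous) space up front;
 # the file name is a placeholder.
 fallocate -l 600M /data/incoming.mkv

 # ... write the actual data into the file ...

 # Trim the file back to its real size (523 MB here, as an example),
 # releasing the unused preallocated blocks.
 truncate -s 523M /data/incoming.mkv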

I'd recommend that you keep free space of at least a decent multiple of your file size (e.g. 16 GB or more), so that you never get to the point of consuming all the dribs and drabs of free blocks and introducing permanent free-space fragmentation.

LustreOne