
I wondered about some missing space on my ext3 partition and, after some googling, found that Debian-based Ubuntu reserves 5% of the filesystem's capacity for root.

I also found posts describing how to change that size with the tune2fs utility.
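For reference, tune2fs can also report the current setting (the device name here is a placeholder for your partition):

$ sudo tune2fs -l /dev/sdaX | grep -i 'reserved block count'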

Now I've got two questions that I didn't find clear answers for:

  • Should I unmount the partition before changing the reserved space? What could happen if I don't?

  • How much space should I reserve for the filesystem so that it can operate efficiently?

Thank you!

udo
    I think this --> http://unix.stackexchange.com/questions/7950/reserved-space-for-root-on-a-filesystem-why is very similar to what you ask. – boehj Apr 19 '11 at 14:41

1 Answer


You don't need to unmount the partition prior to doing this. Regarding question two, it depends. As HDDs have grown in size, so has the total amount of disk space that's reserved for root. If you have a 2 TB HDD and it's totally used for /, then I would say you could quite safely tune it down to 1% by doing this:

$ sudo tune2fs -m 1 /dev/sdaX
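For a sense of scale (simple arithmetic, assuming the filesystem spans the full 2 TB): the default 5% reserves about 100 GB, while 1% reserves about 20 GB, handing roughly 80 GB back to ordinary users. You can let the shell do the sums:

$ echo "5%: $((2000 * 5 / 100)) GB, 1%: $((2000 * 1 / 100)) GB"
5%: 100 GB, 1%: 20 GB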

I'd probably leave a smaller drive, in the region of 320 GB, as is.

Keep in mind that drives used purely for data storage don't really need all this space reserved for root. In that case you can set the number of reserved blocks directly, like this:

$ sudo tune2fs -r 20000 /dev/sdbX
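To see what that amounts to (assuming the common 4 KiB block size; check yours with tune2fs -l), 20000 blocks works out to only about 78 MiB:

$ sudo tune2fs -l /dev/sdbX | grep 'Block size'
$ echo "$((20000 * 4096 / 1024 / 1024)) MiB"
78 MiB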

Hope that helps.

EDIT: Regarding fragmentation issues, ext filesystems are largely resistant to fragmentation. To quote Theodore Ts'o:

If you set the reserved block count to zero, it won't affect performance much except if you run for long periods of time (with lots of file creates and deletes) while the filesystem is almost full (i.e., say above 95%), at which point you'll be subject to fragmentation problems. Ext4's multi-block allocator is much more fragmentation resistant, because it tries much harder to find contiguous blocks, so even if you don't enable the other ext4 features, you'll see better results simply mounting an ext3 filesystem using ext4 before the filesystem gets completely full.
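If you want to try that last suggestion, the ext4 driver can mount an existing ext3 filesystem without any on-disk conversion (the device and mount point below are placeholders):

$ sudo mount -t ext4 /dev/sdaX /mnt/data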

boehj
  • OK, that clears it up pretty much. Thank you! A statement like

    if you run for long periods of time (with lots of file creates and deletes) while the filesystem is almost full (i.e., say above 95%), at which point you'll be subject to fragmentation problems.

    is exactly what I was looking for.

    – udo Apr 19 '11 at 14:52
  • But let's be clear: ext3 fragmentation isn't at all the same as NTFS fragmentation. ext3 allocates disk blocks to files as individual blocks. NTFS allocates disk blocks in extents. A "fragmented" NTFS file can have multiple inode-equivalents full of very small extents of disk blocks. –  Apr 19 '11 at 16:18
  • @boehj Can you explain a little on how you decided on the amount of blocks reserved (20000) for root when the drive is primarily used for storage? Is that something that is relative to the size of the drive, amount of files, file sizes, etc? – SaultDon Aug 21 '12 at 21:44
  • Actually I can't really. I used that example because when I learnt about the process, that was the example quoted in the guide. And subsequent to that I've used this number on quite a few systems and had no problems. Perhaps someone else can add some info? – boehj Aug 22 '12 at 12:24
  • @boehj Dang, I was trying to figure it out because I'm not sure if you need to change the size relative to the HD size; does a 1TB HD need more space than a 100GB HD/Partition or does it depend on the disk usage (storage, root fs, home, boot, etc...) – SaultDon Aug 27 '12 at 18:18
  • To be honest I think it depends solely on usage. On a pure storage partition you could go with none. On a regular system partition you could use 20000. On some production server partitions it may still be useful to use 5%, as it will allow root a lot of room to write to. – boehj Aug 30 '12 at 10:36
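If you do want the reservation pinned to an absolute size rather than a percentage, one possible approach (a sketch only; the device name is a placeholder) is to convert a byte target, here roughly 1 GiB, into filesystem blocks first:

$ BLOCK_SIZE=$(sudo tune2fs -l /dev/sdbX | awk '/^Block size:/ {print $3}')
$ sudo tune2fs -r $((1024 * 1024 * 1024 / BLOCK_SIZE)) /dev/sdbX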