We have a large file system on which a full du (disk usage) summary takes over two minutes. I'd like to find a way to speed up a disk usage summary for arbitrary directories on that file system.

For small branches I've noticed that du results seem to be cached somehow, as repeat requests are much faster, but on large branches the speed-up becomes negligible.

Is there a simple way of speeding up du, or of more aggressively caching results for branches that haven't been modified since the previous search? Or is there an alternative command that can deliver disk usage summaries faster?
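A minimal sketch of the kind of caching wrapper being asked about, assuming GNU find; the script name, cache layout and invalidation rule are made-up placeholders, not an existing tool:

    #!/bin/sh
    # du-cached.sh -- illustrative sketch only.
    # Caches the output of "du -s DIR" and reuses it while nothing under DIR
    # has a modification time newer than the cached entry.
    dir=${1:?usage: du-cached.sh DIRECTORY}

    cache_root=${XDG_CACHE_HOME:-$HOME/.cache}/du-cached
    mkdir -p "$cache_root"

    # One cache file per directory, keyed on a checksum of its absolute path
    # (collisions are ignored in this sketch).
    key=$(printf '%s' "$(cd "$dir" && pwd)" | cksum | cut -d ' ' -f 1)
    cache_file=$cache_root/$key

    if [ -f "$cache_file" ]; then
        # Reuse the cache only if no file or directory under $dir is newer
        # than the cache entry.  -quit is a GNU find extension that stops
        # at the first match.
        if [ -z "$(find "$dir" -newer "$cache_file" -print -quit)" ]; then
            cat "$cache_file"
            exit 0
        fi
    fi

    # Cache miss or stale entry: run du and record the result.
    du -s "$dir" | tee "$cache_file"

Because the staleness check itself still walks the tree, the win over plain du is modest; real speed-ups generally require relaxing the check or maintaining an index out of band. The sketch only shows the shape of the wrapper interface the question asks for.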
Comments:

… du would be bad, but a faster wrapper script with an identical interface would be very useful for us. Further, I would expect that caching results dependent on last-modified time (and assuming no disk-wide operations, e.g. defragmentation) would give exact size results: am I missing something? – Ian Mackinnon Mar 02 '11 at 18:11

… find. But then there's locate. – Yuval Jul 19 '13 at 12:16

… StatFs for a super fast estimate of directory sizes. It was nearly 1000x faster for large, complex directories, compared to du. – Joshua Pinter Oct 16 '19 at 17:10

… df command is enough (and fast). – Michel de Ruiter Apr 26 '21 at 09:36
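As a footnote to the last comment: df answers from filesystem-level counters instead of walking the tree, so it returns immediately, but it only reports usage for whole filesystems, not arbitrary directories. For example (the path is illustrative):

    # Human-readable usage of the filesystem containing /srv/data.
    # Instant, because no per-file metadata is read -- but it cannot report
    # the size of an individual directory the way du does.
    df -h /srv/data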