5

I was wondering if there is any caching done with these utilities. I assume not, but not positive. Are there any typical similar utilities that do use caching to speed up results on subsequent runs?

derobert
  • 109,670

3 Answers

2

There is no need for caching in df because df makes a single statfs() call (per filesystem). Obviously this call does not read files on disk and sum up their sizes... the filesystem (i.e. the kernel) already keeps track of the free space.

du uses (without being aware of it) the page cache that all applications use. To cache results explicitly between runs, there would have to be a "du daemon" anyway.

Hauke Laging
  • 90,279
  • du could have a cache without a daemon. All it has to do is store the directory sizes in a file somewhere, and then read back that file on next run. (not saying it does, just saying a daemon is not needed) – phemmer May 04 '14 at 21:56
  • @Patrick But without something like inotify this would make caching a probability game. – Hauke Laging May 04 '14 at 21:59
  • Ah, that's why you said daemon: so it could monitor in real time. Yes, agreed :-) – phemmer May 04 '14 at 22:00
0

There is caching at some level (in the command itself, or more likely at the OS or filesystem-driver level), though forgive me for not knowing the details:

d@s7/mp3Ϡϡ time du -sh /mp3/    
27G     /mp3/
du -sh /mp3/  0.01s user 0.03s system 32% cpu 0.112 total

d@s7/mp3Ϡϡ time du -sh /mp3/
27G     /mp3/
du -sh /mp3/  0.00s user 0.01s system 82% cpu 0.015 total

d@s7/mp3Ϡϡ time du -sh /mp3/
27G     /mp3/
du -sh /mp3/  0.00s user 0.01s system 86% cpu 0.014 total

d@s7/mp3Ϡϡ time du -sh /mp3/
27G     /mp3/
du -sh /mp3/  0.01s user 0.01s system 78% cpu 0.020 total

Results obtained on Ubuntu 15.04 with ext4 filesystem, kernel version 3.19.0-15-generic.

djvs
  • 101
  • those margins are too small to meaningfully demonstrate caching performance. I would use a much larger directory – axolotl Apr 18 '22 at 19:56
-2

I can confirm that it caches.

see this:

$ du -sh testUpload.txt
104M    testUpload.txt

$ dd if=/dev/zero of=testUpload.txt bs=1M count=50
50+0 records in
50+0 records out
52428800 bytes (52 MB, 50 MiB) copied, 0.0248501 s, 2.1 GB/s

$ du -sh testUpload.txt
104M    testUpload.txt


$ ls -al testUpload.txt
-rw-rw-rw- 1 alfred alfred 52428800 Jul  4 11:50 testUpload.txt