
df -h prints:

Filesystem      Size  Used Avail Use% Mounted on
/dev/root        59G  6.6G   50G  12% /
devtmpfs        1.8G     0  1.8G   0% /dev
tmpfs           2.0G     0  2.0G   0% /dev/shm
tmpfs           2.0G  9.0M  1.9G   1% /run
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/mmcblk0p1  253M   54M  199M  22% /boot
tmpfs           391M     0  391M   0% /run/user/1000

while ncdu / prints:

Total disk usage:   1.8 GiB  Apparent size:   1.8 GiB  Items: 176500

Why is one reporting 6.6+ GiB used while the other reports only 1.8 GiB?

1 Answer


The output of df is based on filesystem-level statistics reported by the filesystem driver, whereas ncdu (like a regular du) generates its results by scanning directories, reading the sizes of individual files, and summing them up.

If the du-style commands are run as a non-root user, they won't necessarily have full access to all directories, and so may not be able to see everything (use sudo ncdu in this case).

Try running du -hs / and comparing its result to the "Total disk usage" reported by ncdu /. You may find the results are similar... and with du, you may also see error messages about a number of directories the command cannot access and therefore fails to take into account. The same is probably true of ncdu; it just hides the error messages.
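You can see this under-counting with a small demo tree containing one unreadable subdirectory (the /tmp paths here are just for illustration):

```shell
# Set up a demo tree with 512 KiB inside a directory we then make unreadable.
mkdir -p /tmp/du_demo/secret
dd if=/dev/zero of=/tmp/du_demo/secret/file bs=1K count=512 2>/dev/null
chmod 000 /tmp/du_demo/secret

# As a non-root user, du prints a "Permission denied" error for 'secret'
# and its total silently misses those 512 KiB; run as root, it sees it all.
du -sk /tmp/du_demo

# Clean up.
chmod 755 /tmp/du_demo/secret
rm -r /tmp/du_demo
```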

Also, it seems to me that ncdu might not have been updated to understand the virtual filesystems of modern Linux distributions, and may become confused by them. On my Debian 10 system, ncdu / reports:

Total disk usage:  75.6 GiB  Apparent size: 128.1 TiB  Items: 469143

In my case, the "Apparent size" is clearly nonsensical and useless. The "Total disk usage" is about the same as what I get with du -hs /... but because this includes a number of RAM-based virtual filesystems (devtmpfs, tmpfs), this number is also unlikely to be very useful.
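The huge "Apparent size" is likely dominated by sparse pseudo-files such as /proc/kcore, which on 64-bit Linux claims an apparent size on the order of 128 TiB while occupying no disk space at all. A sparse file demonstrates the difference between the two numbers (the /tmp path is hypothetical):

```shell
# Create a sparse file: 100 MiB apparent size, almost no blocks allocated.
truncate -s 100M /tmp/sparse_demo

du -h --apparent-size /tmp/sparse_demo   # the 100M the file claims to be
du -h /tmp/sparse_demo                   # the blocks actually allocated

rm /tmp/sparse_demo
```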

But if I restrict the command to a single filesystem (e.g. ncdu -x /), I get more reasonable results, which match the output of du -shx / to within a rounding error, and also the output of df -h once you remember that a filesystem may need some space for its internal metadata.
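In command form, the single-filesystem comparison looks like this (sudo is needed for full coverage):

```shell
sudo du -shx /   # -x: stay on the root filesystem, don't cross mount points
sudo ncdu -x /   # same restriction, interactively
df -h /          # the filesystem driver's own accounting for /
```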

Another possible source of discrepancy is advanced filesystem features such as Btrfs snapshots. You might have only 1.8 GiB of files on your root filesystem, but if the filesystem also contains two snapshots of its previous state, the total amount of disk space used might be up to 3x what the sum of the file sizes would lead you (and any du-like command) to expect.

Since the df command gets its information by asking the filesystem driver, its larger "used" value may include those snapshots, which could otherwise be invisible until accessed through the proper mechanism.
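If the root filesystem happens to be Btrfs, its own tools can show the allocation that df sees, including space held by snapshots (a sketch; these commands only produce output on Btrfs, and listing subvolumes may require root):

```shell
# Only meaningful on Btrfs; on other filesystems the btrfs commands fail.
if btrfs filesystem df / >/dev/null 2>&1; then
    btrfs filesystem df /     # filesystem-level allocation, as df sees it
    btrfs subvolume list /    # snapshots appear here as subvolumes
else
    echo "/ is not a Btrfs filesystem"
fi
```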

telcoM