
My root filesystem is running out of inodes. If this were an issue of disk space, I'd use du -s to get a top-level overview of where the space is going, then head down the directory tree to find particular offenders. Is there an equivalent option for inodes?

The answers to this question point out individual directories with high usage, but in my case that's no good: the Linux source tree, for example, shows up as 3000+ directories each with a low inode count, rather than as a single line like /usr/src/linux-4.0.5 52183.

Mark
  • Running something like find dir/ -xdev | wc -l will give you the number of inodes in dir/. You could probably use this with the -exec option of find to run this command for each directory. – saiarcot895 Jul 14 '15 at 04:38
  • Since you mention Linux: du -s --inodes – lcd047 Jul 14 '15 at 05:04
  • @lcd047: for the record: --inodes is not yet present in GNU coreutils 8.4 (for example, RH/CentOS/SL 6.6). – Ulrich Schwarz Jul 14 '15 at 05:19
  • @UlrichSchwarz du --version - du (GNU coreutils) 8.23 – lcd047 Jul 14 '15 at 05:21
  • In addition to the answers below, to find out who is creating many files, you could maybe get some results by using lsof and counting the processes with many occurrences of file descriptors... absolutely not guaranteed (the program most likely only creates them one by one...), but this may help to catch a nasty program that is massively creating files in parallel. (Actually, for those, top and similar programs can also help find who is massively using I/O.) – Olivier Dulac Jul 15 '15 at 00:15
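
The first comment's suggestion, spelled out (a rough sketch; dir/ is a placeholder, and the loop over /*/ is just one way to apply it per directory):

# Number of inodes (directory entries) used under dir/, staying on one filesystem.
find dir/ -xdev | wc -l

# The same count for every top-level directory under /, largest first.
for d in /*/; do printf '%s\t%s\n' "$(find "$d" -xdev | wc -l)" "$d"; done | sort -rn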

1 Answer


With GNU coreutils (Linux, Cygwin) since version 8.22, you can use du --inodes, as pointed out by lcd047.
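
For example, to rank every directory on the root filesystem by its cumulative inode count (a minimal sketch, not part of the original answer; -x keeps du on one filesystem and the stderr redirect just hides permission errors):

# Cumulative inode count per directory under /, largest first (needs coreutils >= 8.22).
du --inodes -x / 2>/dev/null | sort -rn | head -n 20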

If you don't have a recent GNU coreutils, and there are no hard links in the tree (or you don't care if hard-linked files are counted once per link), you can get the same numbers by filtering the output of find. If you want the equivalent of du -s, i.e. only top-level directories, all you need to do is count the number of lines that start with each top-level directory name. Assuming that there are no newlines in file names and that you only want non-dot directories in the current directory:

find */ | sed 's!/.*!!' | uniq -c
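
To see the biggest offenders first, the counts can simply be piped through sort (a small addition, not part of the original answer):

find */ | sed 's!/.*!!' | uniq -c | sort -rn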

If you want to show output for all directories, with the count for each directory including its subdirectories, you need to perform some arithmetic.

find . -depth | awk '{
    # Count the current depth in the directory tree
    slashes = $0; gsub("[^/]", "", slashes); current_depth = length(slashes);
    # Zero out counts for directories we exited
    for (i = previous_depth; i <= current_depth; i++) counts[i] = 0;
    # Count 1 for us and all our parents
    for (i = 0; i <= current_depth; i++) ++counts[i];
    # We do not know which are regular files and which are directories.
    # Non-directories will have a count of 1, and directories with a
    # count of 1 are boring, so print only counts above 1.
    if (counts[current_depth] > 1) printf "%d\t%s\n", counts[current_depth], $0;
    # Get ready for the next round
    previous_depth = current_depth;
}'
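
If the awk program above is saved to a file, it can be combined with the -xdev option mentioned in the comments and sorted to surface the worst offenders; a sketch, where inodes.awk is an illustrative file name, not something from the original answer:

# Largest cumulative inode counts on the root filesystem, biggest first.
find / -xdev -depth | awk -f inodes.awk | sort -rn | head -n 20
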
  • Gilles, I usually love your answers, but here I think you can get better results with a ... -printf "x" | wc -c, avoiding the need to sed it down, and avoiding counting a file with embedded newlines as multiple inodes? And the 2nd way should be find .*/ */ instead of find *? – Olivier Dulac Jul 15 '15 at 00:06
  • @OlivierDulac You mean call find separately for each directory? That's another possibility, indeed. It does require GNU find (but, unlike du --inodes, not a recent version). find .* would include ., so it's either find */ (exclude dot files) or find . (and then tweak the text processing to match). – Gilles 'SO- stop being evil' Jul 15 '15 at 00:08
  • Now I see why you do it that way, my mistake, sorry! It indeed allows only one find, which is more efficient, and the sed is needed to get rid of the subdir+filenames. Sorry, I misunderstood (read too quickly ^^). – Olivier Dulac Jul 15 '15 at 00:10
  • Your du -s equivalent worked: turns out a filesystem wasn't mounted, meaning a directory tree with a zillion small files got put on the root SSD rather than the RAID array. – Mark Jul 15 '15 at 02:03
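
Olivier's -printf suggestion above can be spelled out as follows (a sketch, not from the original answer; it requires GNU find, and the loop over */ is just one way to apply it per directory):

# -printf x emits one character per entry, so names containing newlines are still
# counted as a single inode; wc -c counts the characters.
for d in */; do printf '%s\t%s\n' "$(find "$d" -printf x | wc -c)" "$d"; done | sort -rn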