244

So I received a warning from our monitoring system on one of our boxes that the number of free inodes on a filesystem was getting low.

df -i output shows this:

Filesystem       Inodes  IUsed    IFree IUse% Mounted on
/dev/xvda1       524288 422613   101675   81% /

As you can see, the root partition has 81% of its inodes used.
I suspect they're all being used in a single directory. But how can I find out which directory that is?

phemmer
  • 71,831

9 Answers

272

I saw this question over on Stack Overflow, but I didn't like any of the answers, and it really is a question that should be here on U&L anyway.

Basically an inode is used for each file on the filesystem. So running out of inodes generally means you've got a lot of small files lying around. So the question really becomes, "what directory has a large number of files in it?"

In this case, the filesystem we care about is the root filesystem /, so we can use the following command:

{ find / -xdev -printf '%h\n' | sort | uniq -c | sort -k 1 -n; } 2>/dev/null

This will dump a list of every directory on the filesystem prefixed with the number of files (and subdirectories) in that directory. Thus the directory with the largest number of files will be at the bottom.
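Piece by piece, here is the same pipeline annotated (a comment after a pipe is legal shell, so this runs as written):

{ find / -xdev \
    -printf '%h\n' |  # print each file's parent directory (GNU find's %h); -xdev stays on this filesystem
    sort |            # bring identical directory names together
    uniq -c |         # count how many times each directory appears
    sort -k 1 -n      # order by that count, ascending
} 2>/dev/null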

In my case, this turns up the following:

   1202 /usr/share/man/man1
   2714 /usr/share/man/man3
   2826 /var/lib/dpkg/info
 306588 /var/spool/postfix/maildrop

So basically /var/spool/postfix/maildrop is consuming all the inodes.

*Note, this answer has three caveats that I can think of. First, it does not properly handle anything with newlines in a path. I know my filesystem has no files with newlines, and since this is only being used for human consumption, the potential issue isn't worth solving; still, one can always replace the \n with \0 and use the -z options for the sort and uniq commands above, as follows:

{ find / -xdev -printf '%h\0' | sort -z | uniq -zc | sort -zk1rn; } 2>/dev/null

Optionally you can append head -zn10 to that pipeline to see the 10 directories using the most inodes (the reversed sort puts the biggest counts first).
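For example (head's -z option needs a reasonably recent GNU coreutils; tr just converts the NUL delimiters back into newlines for display):

{ find / -xdev -printf '%h\0' | sort -z | uniq -zc | sort -zk1rn | head -zn10; } 2>/dev/null | tr '\0' '\n'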

Second, it does not handle the case where the files are spread out among a large number of directories. This isn't likely though, so I consider the risk acceptable. Third, it will count hard links to the same file (which consume only one inode) several times. Again, that's unlikely to give false positives.*
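If the hard-link caveat ever does matter, here is a rough sketch that de-duplicates on inode number first (GNU find assumed; %i prints each file's inode number, and the newline caveat above still applies):

{ find / -xdev -printf '%i %h\n' | sort -k1,1 -un | cut -d' ' -f2- | sort | uniq -c | sort -n; } 2>/dev/null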


The key reason I didn't like any of the answers on the Stack Overflow question is that they all cross filesystem boundaries. Since my issue was on the root filesystem, this means they would traverse every single mounted filesystem. Throwing -xdev on the find commands wouldn't even work properly.
For example, the most upvoted answer is this one:

for i in `find . -type d `; do echo `ls -a $i | wc -l` $i; done | sort -n

If we change this instead to

for i in `find . -xdev -type d `; do echo `ls -a $i | wc -l` $i; done | sort -n

even though /mnt/foo is a mount, it is also a directory on the root filesystem, so it'll turn up in find . -xdev -type d, and then it'll get passed to the ls -a $i, which will dive into the mount.

The find in my answer instead lists the directory of every single file on the mount. So basically with a file structure such as:

/foo/bar
/foo/baz
/pop/tart

we end up with

/foo
/foo
/pop

So we just have to count the number of duplicate lines.
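Counting those duplicates is exactly what the sort | uniq -c stage does. For instance:

$ printf '/foo\n/foo\n/pop\n' | sort | uniq -c
      2 /foo
      1 /pop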

phemmer
  • 71,831
  • ls -a is a bad choice for recursive scripting, because it shows . and .. — you'll get duplicated data. You can use -A instead of -a – PersianGulf Feb 26 '14 at 18:13
  • 4
    @MohsenPahlevanzadeh that isn't part of my answer, I was commenting on why I dislike the solution as it's a common answer to this question. – phemmer Feb 26 '14 at 18:23
  • 7
Using a bind mount is a more robust way to avoid searching other file systems, as it allows access to files under mount points. E.g., imagine I create 300,000 files under /tmp and then later the system is configured to mount a tmpfs on /tmp. Then you won't be able to find the files with find alone. Unlikely scenario, but worth noting (see the sketch after these comments). – Graeme Feb 26 '14 at 18:25
  • @Graeme good point, I forgot about that one. – phemmer Feb 26 '14 at 18:27
  • @StephaneChazelas Why did you put an intermediate sort in the command? That should not be necessary. The entries will already be grouped. – phemmer Feb 26 '14 at 20:38
  • find may output a/b, a/b/c, a/b (try find . -printf '%h\n' | uniq | sort | uniq -d) – Stéphane Chazelas Feb 26 '14 at 20:39
  • ah, good catch. I forgot about directories in the middle of the files. – phemmer Feb 26 '14 at 20:41
  • @Patrick, I recently encountered a similar sort of issue. However, in my case I knew the directory responsible for the large inode count. I could verify it by using ls -l | wc -l. But if I had seen this post earlier, I could have checked the file system before backing up. Nevertheless, +1 for a great answer and the explanation :) – Ramesh Jul 03 '14 at 23:55
  • 2
Both work; I just had to remove sort, because sort needs to create a temporary file when the output is big enough, which wasn't possible since I had hit 100% inode usage. – qwertzguy Aug 06 '15 at 02:55
  • 2
    Note that -printf appears to be a GNU extension to find, as the BSD version available in OS X does not support it. – Xiong Chiamiov Jun 11 '16 at 19:19
  • @Graeme are bind-mounts posix (contrasted to linux-only)? Patrick: best workaround ever (out of total 1 I care about)! – n611x007 Jan 25 '17 at 01:44
  • @XiongChiamiov seems like you're right http://pubs.opengroup.org/onlinepubs/009695399/utilities/find.html – n611x007 Jan 25 '17 at 01:48
  • 1
    The assumption that all files are in a single directory is a difficult one. A lot of programs know that many files in a single directory has bad performance and thus hash one or two levels of directories – PlasmaHH Jul 10 '18 at 07:21
  • @PlasmaHH du --inodes -x / | sort -n – OrangeDog Oct 02 '19 at 10:56
  • is there a way to limit the depth, like e.g. --max-depth=1 in du? – ᴍᴇʜᴏᴠ Dec 03 '19 at 13:36
  • This lists any directory that contains more than 1000 inodes (files, directories, or other): sudo find / -xdev -printf "%h\n" | gawk '{a[$1]++}; END{for (n in a){ if (a[n]>1000){ print a[n],n } } }' | sort -nr | less – Aaron_H Dec 16 '19 at 08:33
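A sketch of the bind-mount trick Graeme mentions in the comments above (the mount point path is illustrative and must be an existing empty directory):

# expose the root filesystem itself, ignoring anything mounted over its directories
mount --bind / /mnt/rootfs
find /mnt/rootfs -xdev -printf '%h\n' | sort | uniq -c | sort -k 1 -n
umount /mnt/rootfs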
38

This is reposted from here at the asker's behest:

du --inodes --separate-dirs | sort -rh | sed -n \
        '1,50{/^.\{71\}/s/^\(.\{30\}\).*\(.\{37\}\)$/\1...\2/;p}'
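In case that sed looks opaque: with -n it prints only the first 50 lines, and it shortens any line of 71 or more characters to its first 30 and last 37 characters with an ellipsis in between. An equivalent, spelled out:

du --inodes --separate-dirs | sort -rh | head -n 50 |
    sed '/^.\{71\}/s/^\(.\{30\}\).*\(.\{37\}\)$/\1...\2/'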

And if you want to stay in the same filesystem you do:

du --inodes --one-file-system --separate-dirs

Here's some example output:

15K     /usr/share/man/man3
4.0K    /usr/lib
3.6K    /usr/bin
2.4K    /usr/share/man/man1
1.9K    /usr/share/fonts/75dpi
...
519     /usr/lib/python2.7/site-packages/bzrlib
516     /usr/include/KDE
498     /usr/include/qt/QtCore
487     /usr/lib/modules/3.13.6-2-MANJARO/build/include/config
484     /usr/src/linux-3.12.14-2-MANJARO/include/config

NOW WITH LS:

Note that the above require GNU du (i.e., from GNU coreutils), because POSIX du does not support --inodes, --one-file-system or --separate-dirs.  (If you have Linux, you probably have GNU coreutils.  And if you have GNU du, you can abbreviate --one-file-system to -x (lower case) and --separate-dirs to -S (upper case).  POSIX du recognizes -x, but not -S or any long options.)  Several people mentioned they do not have up-to-date coreutils and the --inodes option is not available to them.  (But it was present in GNU coreutils version 8.22; if you have a version older than that, you should probably upgrade.)  So, here's ls:

ls ~/test -AiR1U |
    sed -rn '/^[./]/{h;n;}; G;
        s|^ *([0-9][0-9]*)[^0-9][^/]*([~./].*):|\1:\2|p' |
    sort -t : -uk1.1,1n |
    cut -d: -f2 | sort -V |
    uniq -c | sort -rn | head -n10

If you're curious, the heart-and-soul of that tedious bit of regex there is replacing the filename in each of ls's recursive search results with the directory name in which it was found.  From there it's just a matter of squeezing repeated inode numbers, then counting repeated directory names and sorting accordingly.
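To see what that sed has to work with, this is the shape of ls's raw recursive output (inode numbers here are illustrative):

% ls ~/test -AiR1U
/home/mikeserv/test:
1310722 realdir
1310723 linkdir

/home/mikeserv/test/realdir:
1310724 file1
1310725 file2
...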

The -U option is especially helpful with the sorting in that it specifically does not sort, and instead presents the directory list in original order – or, in other words, by inode number.

And of course -A for (almost) all, -i for inode and -R for recursive and that's the long and short of it.  The -1 (one) option was included out of force of habit.

The underlying method to this is that I replace every one of ls's filenames with its containing directory name in sed. Following on from that... Well, I'm a little fuzzy myself. I'm fairly certain it's accurately counting the files, as you can see here:

% _ls_i ~/test
  100 /home/mikeserv/test/realdir
    2 /home/mikeserv/test
    1 /home/mikeserv/test/linkdir

(where _ls_i represents the above ls-sed-... pipeline, defined as an alias or a script).

This is providing me pretty much identical results to the du command:

DU:

15K     /usr/share/man/man3
4.0K    /usr/lib
3.6K    /usr/bin
2.4K    /usr/share/man/man1
1.9K    /usr/share/fonts/75dpi
1.9K    /usr/share/fonts/100dpi
1.9K    /usr/share/doc/arch-wiki-markdown
1.6K    /usr/share/fonts/TTF
1.6K    /usr/share/dolphin-emu/sys/GameSettings
1.6K    /usr/share/doc/efl/html

LS:

14686   /usr/share/man/man3:
4322    /usr/lib:
3653    /usr/bin:
2457    /usr/share/man/man1:
1897    /usr/share/fonts/100dpi:
1897    /usr/share/fonts/75dpi:
1890    /usr/share/doc/arch-wiki-markdown:
1613    /usr/include:
1575    /usr/share/doc/efl/html:
1556    /usr/share/dolphin-emu/sys/GameSettings:

If you tediously compare the above, line by line, you'll notice that the 8th line of the du output is /usr/share/fonts/TTF (1.6K) while the 8th line of the ls output is /usr/include (1613).  I think the include thing just depends on which directory the program looks at first – because they're the same files and hardlinked.  Kinda like the thing above.  I could be wrong about that though – and I welcome correction....

DU DEMO

% du --version
du (GNU coreutils) 8.22

Make a test directory:

% mkdir ~/test ; cd ~/test
% du --inodes --separate-dirs
1       .

Some children directories:

% mkdir ./realdir ./linkdir
% du --inodes --separate-dirs
1       ./realdir
1       ./linkdir
1       .

Make some files:

% printf 'touch ./realdir/file%s\n' `seq 1 100` | . /dev/stdin
% du --inodes --separate-dirs
101     ./realdir
1       ./linkdir
1       .

Some hard links:

% printf 'n="%s" ; ln ./realdir/file$n ./linkdir/link$n\n' `seq 1 100` | 
    . /dev/stdin
% du --inodes --separate-dirs
101     ./realdir
1       ./linkdir
1       .

Look at the hard links:

% cd ./linkdir
% du --inodes --separate-dirs
101     .

% cd ../realdir
% du --inodes --separate-dirs
101     .

They're counted alone, but go one directory up...

% cd ..
% du --inodes --separate-dirs
101     ./realdir
1       ./linkdir
1       .

Then I ran my script (the ls pipeline above) and got:

100     /home/mikeserv/test/realdir
100     /home/mikeserv/test/linkdir
2       /home/mikeserv/test

And output from Graeme's answer to a similar question:

101 ./realdir
101 ./linkdir
3 ./

So I think this shows that the only way to count files accurately is by inode: counting files really means counting inodes, and no inode may be counted more than once.

mikeserv
  • 58,310
  • 2
    which version added --inodes? which "variants"/"flavors"/"posix-wannabes"/"implementations"/whatever have it? – n611x007 Jan 25 '17 at 01:42
  • 1
    Ubuntu 14.04.5: du: unrecognized option '--inodes' – Putnik Jul 18 '17 at 10:22
  • du (GNU coreutils) 8.23 from 2014 has it (it's in my outdated Debian Jessie). Debian > Ubuntu sorry for that pun :P Ubuntu has so old packages... – Daniel W. Apr 18 '19 at 11:43
8

I find it quicker and easier to drill down using the following command:

$ sudo du -s --inodes * | sort -rn

170202  var
157325  opt
103134  usr
53383   tmp
<snip>

You can then go into var, for example, and see what the big inode-using directories are in there.
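For example, drilling one level down (paths are illustrative):

$ cd /var
$ sudo du -s --inodes * | sort -rn

Repeat in the biggest directory at each level until you find the culprit.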

JonoB
  • 181
  • 3
  • Was looking for something like this. Thank you – Anwar Apr 03 '20 at 11:35
  • 1
    If you remove the -s it will "drill down" for you and save a lot of time. – OrangeDog May 26 '20 at 23:05
  • @OrangeDog that will list every file recursively, which makes it a bit hard to see which directories are contributing the most. You normally don't have to go too many directory levels deep until you find the culprit in my experience (so it doesn't take long). – JonoB May 28 '20 at 00:44
  • @JonoB that's what the | sort is for. You get exactly the information as manually repeating it at every step, except much quicker. – OrangeDog May 28 '20 at 09:24
  • @OrangeDog each leaf directory will be listed when -s is not specified. With -s, on /var:

        34106 folders
        3643 db
        124 log
        32 run
        19 spool
        11 rpc
        7 at

    Without:

        34110 folders
        34014 folders/s0
        34013 folders/s0/y6dlxn_x0wsfb5yv_fpglzlc0000gn
        28592 folders/s0/y6dlxn_x0wsfb5yv_fpglzlc0000gn/T
        17369 folders/s0/y6dlxn_x0wsfb5yv_fpglzlc0000gn/T/broccoli-10284w53i1ho5mEPf
        <snip>

    It makes it pretty hard to know which directories are contributing the most, IMO. – JonoB May 28 '20 at 23:30
  • @JonoB your first way all you know is it's in folders, the second way you already know ~ two thirds is in folders/s0/y6dlxn_x0wsfb5yv_fpglzlc0000gn/T/broccoli- 10284w53i1ho5mEPf, which would've taken at least 5 runs to work out your way. And you can drill deeper simply by reading further, instead of waiting to sum them all up yet again. – OrangeDog May 29 '20 at 09:07
7

I used this answer from the SO Q&A titled Where are all my inodes being used? when our NAS ran out about 2 years ago:

$ find . -type d -print0 \
    | while IFS= read -rd '' i; do echo $(ls -a "$i" | wc -l) "$i"; done \
    | sort -n

Example

$ find . -type d -print0 \
    | while IFS= read -rd '' i; do echo $(ls -a "$i" | wc -l) "$i"; done \
    | sort -n
...
110 ./MISC/nodejs/node-v0.8.12/out/Release/obj.target/v8_base/deps/v8/src
120 ./MISC/nodejs/node-v0.8.12/doc/api
123 ./apps_archive/monitoring/nagios/nagios-check_sip-1.3/usr/lib64/nagios
208 ./MISC/nodejs/node-v0.8.12/deps/openssl/openssl/doc/crypto
328 ./MISC/nodejs/node-v0.8.12/deps/v8/src
453 ./MISC/nodejs/node-v0.8.12/test/simple

Checking a device's inodes

Depending on your NAS it may not offer a fully featured df command. So in these cases you can resort to using tune2fs instead:

$ sudo tune2fs -l /dev/sda1 |grep -i inode
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super huge_file uninit_bg dir_nlink extra_isize
Inode count:              128016
Free inodes:              127696
Inodes per group:         2032
Inode blocks per group:   254
First inode:              11
Inode size:           128
Journal inode:            8
Journal backup:           inode blocks

Crossing filesystem boundaries

You can use the -xdev switch to direct find to narrow its search to only the device where you're initiating the search.

Example

Say I have my /home directory automounting via NFS shares from my NAS, whose name is mulder.

$ df -h /home/sam 
Filesystem            Size  Used Avail Use% Mounted on
mulder:/export/raid1/home/sam
                      917G  572G  299G  66% /home/sam

Notice that the mount point is still considered local to the system.

$ df -h /home/ .
Filesystem            Size  Used Avail Use% Mounted on
-                        0     0     0   -  /home
/dev/mapper/VolGroup00-LogVol00
                      222G  159G   52G  76% /

Now when I initiate find:

$ find / -xdev  | grep '^/home'
/home

It found /home but none of the automounted contents because they're on a different device!

Filesystem types

You can use find's -fstype switch to control which types of filesystems find will look into.

   -fstype type
          File is on a filesystem of type type.  The valid filesystem types 
          vary among different versions of Unix; an incomplete list of 
          filesystem  types that are accepted on some version of Unix or 
          another is: ufs, 4.2, 4.3, nfs, tmp, mfs, S51K, S52K.  You can use 
          -printf with the %F directive to see the types of your
          filesystems.

Example

What filesystems do I have?

$ find . -printf "%F\n" | sort -u
ext3

So you can use this to control the crossing:

only ext3

$ find . -fstype ext3 | head -5
.
./gdcm
./gdcm/gdcm-2.0.16
./gdcm/gdcm-2.0.16/Wrapping
./gdcm/gdcm-2.0.16/Wrapping/CMakeLists.txt

only nfs

$ find . -fstype nfs | head -5
$ 

ext3 & ext4

$ find . -fstype ext3 -o -fstype ext4 | head -5
.
./gdcm
./gdcm/gdcm-2.0.16
./gdcm/gdcm-2.0.16/Wrapping
./gdcm/gdcm-2.0.16/Wrapping/CMakeLists.txt
slm
  • 369,824
  • What would be your solution to prevent it from crossing filesystem boundaries? Like if / is what's full, and you have network filesystems mounted, you don't want to go diving into the network filesystems. – phemmer Feb 26 '14 at 18:34
  • @Patrick - see updates, you can control it using -fstype to find. – slm Feb 26 '14 at 18:55
  • 1
    @Gilles - simple answer...didn't page down all the way in find's man page 8-) – slm Feb 27 '14 at 00:33
  • @Gilles - the man page doesn't seem to indicate that -xtype excludes filesystems; it appears to look at the type of the file. I'm only finding examples like this: find . \( -fstype nfs -prune \) – slm Feb 27 '14 at 00:39
  • @Gilles - I was addressing Patrick's Q in the comments about how to keep find from crossing filesystem boundaries. In his ex. he mentions "Like if / is whats full, and you have network filesystems mounted, you don't want to go diving into the network filesystems". – slm Feb 27 '14 at 01:51
  • @Gilles - yes that was only meant to address Patrick's comment, I'll relax the heading since it's over selling it. – slm Feb 27 '14 at 01:56
  • @Gilles - actually in looking through the man page can I not use -xdev to do the boundaries? I'm testing on a system that has a mounted NFS and it seems to be staying on the local disk. – slm Feb 27 '14 at 02:01
  • @Gilles - I've brought back the cross boundaries but utilized -xdev now. – slm Feb 27 '14 at 02:07
  • Gah, yes, I meant -xdev, not -xtype (which happens to exist but is unrelated). – Gilles 'SO- stop being evil' Feb 27 '14 at 02:39
  • @Gilles - OK that makes a lot more sense. 8-) – slm Feb 27 '14 at 02:51
5

Command to count the inodes used under each top-level directory:

for i in /*; do echo "$(find "$i" | wc -l)" "$i"; done | sort -n
TPS
  • 2,481
3

To list the detailed inode usage for /, use the following command:

echo "Detailed Inode usage for: $(pwd)" ; for d in `find -maxdepth 1 -type d |cut -d\/ -f2 |grep -xv . |sort`; do c=$(find $d |wc -l) ; printf "$c\t\t- $d\n" ; done ; printf "Total: \t\t$(find $(pwd) | wc -l)\n" 
raylu
  • 103
3

The answer with the most upvotes definitely helps in understanding how inodes work on Linux and Unix, but it doesn't really help with the actual problem of freeing inodes on disk. On Ubuntu-based systems, a simpler way to do this is to remove unwanted Linux kernel headers and images.

sudo apt-get autoremove

will do that for you. In my case, inode usage was at 78% when I received the alert:

$ df -i
Filesystem     Inodes  IUsed  IFree IUse% Mounted on
/dev/xvda1     524288 407957 116331   78% /
none           957443      2 957441    1% /sys/fs/cgroup
udev           956205    388 955817    1% /dev
tmpfs          957443    320 957123    1% /run
none           957443      1 957442    1% /run/lock
none           957443      1 957442    1% /run/shm
none           957443      5 957438    1% /run/user

After running the sudo apt-get autoremove command, usage had gone down to 29%:

$ df -i
Filesystem     Inodes  IUsed  IFree IUse% Mounted on
/dev/xvda1     524288 150472 373816   29% /
none           957443      2 957441    1% /sys/fs/cgroup
udev           956205    388 955817    1% /dev
tmpfs          957443    320 957123    1% /run
none           957443      1 957442    1% /run/lock
none           957443      1 957442    1% /run/shm
none           957443      5 957438    1% /run/user

This is just an observation that saved me time; others may find a better solution.

3

Every answer so far assumes the problem is many files in a single directory, rather than many subdirectories all contributing to the problem. Fortunately, the solution is simply to use fewer flags.

# du --inodes --one-file-system /var | sort --numeric-sort
...
2265    /var/cache/salt/minion
3818    /var/lib/dpkg/info
3910    /var/lib/dpkg
4000    /var/cache/salt/master/gitfs/refs
4489    /var/lib
5709    /var/cache/salt/master/gitfs/hash
12954   /var/cache/salt/master/gitfs
225058  /var/cache/salt/master/jobs
241678  /var/cache/salt/master
243944  /var/cache/salt
244078  /var/cache
248949  /var

Or with shorter options: du --inodes -x /var | sort -n. Unfortunately, not all versions of du have the --inodes option.
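If your du predates --inodes (it arrived in GNU coreutils 8.22), the find pipeline from the top answer is a rough stand-in; note that it counts files per individual directory rather than cumulative totals per subtree:

find /var -xdev -printf '%h\n' | sort | uniq -c | sort -n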

OrangeDog
  • 1,015
  • This solution worked way better for me than any of the fancier ones. I ran sudo du --inodes | sort -k1,2 -n > /home/mas/inodes.txt and it worked perfectly where the others did not. – mas Aug 02 '21 at 15:10
0

First of all, if you encounter this situation when the system has already run out of inodes, as I did, all of the above solutions that actually address the question will fail, for the simple reason that they all pipe output to other commands, and setting up those pipes requires allocating inodes when there are none left to allocate. You can work around this by manually hunting through the system and deleting files in any of a variety of ways, but estimating how many inodes you would need to free in order to run the other answers is problematic.

To get around this, you can log in remotely, set the terminal program you're using to unlimited scrollback, and run the initial du --inodes command on its own. Then you can save the scrollback to a file, as in this answer, and pipe a cat of the saved terminal log to whatever commands you need to find where the inodes are being hogged.
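A sketch of that workaround (the file name is illustrative; the point is that nothing gets piped or written on the full filesystem):

# on the box that is out of inodes: no pipes, no redirects
du --inodes -x /
# save the terminal's scrollback as saved-du.txt on the machine you
# logged in from, then analyze it there:
sort -rn saved-du.txt | head -n 20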

That said, secondly, answers that use du's -s (--summarize) option, such as this answer, failed to expose the inode hog on my system:

$ sudo du -s --inodes * | sort -rn

I had to remove the -s option:

$ sudo du --inodes * | sort -rn

The failure occurred because the inode hog was two directory levels deep: the first level contained just a relatively small number of directories, each of which held many files, but not enough in any single directory to show up in the sorted output produced with the -s option.

  • I think there might be some confusion. Piping between commands does not require creation of inodes. The only reason this might be the case is if you have shell hooks or whatnot that are creating files. But a clean shell environment will not require inode creation. If it did, you wouldn't be able to pipe commands on a system mounted read-only. – phemmer Nov 14 '23 at 21:10
  • In my case /tmp was the location of the inode failure. – James Bowery Nov 15 '23 at 22:19