78

I ran into a problem (new to me) last week. I have an ext4 (Fedora 15) filesystem. The application that runs on the server suddenly stopped, and I couldn't find the problem at first glance.

df showed 50% available space. After searching for about an hour I saw a forum post where the author used df -i, which reports inode usage. The system was out of inodes, a simple problem I hadn't considered. The partition had only 3.2M inodes.
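For reference, df -i reports inode usage per filesystem and produces output along these lines (device name and numbers below are only illustrative):

$ df -i
Filesystem     Inodes   IUsed IFree IUse% Mounted on
/dev/sda2     3276800 3276800     0  100% /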

Now, my questions are: Can I make the system have more inodes? Should/can it be set when formatting the disk? With the 3.2M inodes, how many files could I have?

piovisqui
  • Every file or directory uses one inode. A hard link to a file does not create an inode. http://en.wikipedia.org/wiki/Inode – Paul Tomblin Dec 12 '11 at 01:59
  • Related: http://stackoverflow.com/questions/21397110/how-to-store-one-billion-files-on-ext4 & http://stackoverflow.com/questions/3618820/how-many-bytes-per-inodes?rq=1 – lepe Aug 20 '15 at 03:38

6 Answers

43

It seems that you have a lot more files than would normally be expected.

I don't know of a way to change the inode table size dynamically. I'm afraid you will need to back up your data, create a new filesystem, and restore your data.

To create a new filesystem with such a huge inode table, you need to use the '-N' option of mke2fs(8).

I'd recommend using the '-n' option first (which does not create the fs, but displays useful information) so that you can see the estimated number of inodes. Then, if you need to, use '-N' to create your filesystem with a specific number of inodes.
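For example (device name and inode count below are placeholders, not values from the question), a minimal sketch might be:

$ mke2fs -n /dev/sdb1                      # dry run: prints what would be created, including the inode count
$ mke2fs -t ext4 -N 20000000 /dev/sdb1     # actually create the filesystem with ~20 million inodes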

Mat
cinsk
  • You can use mke2fs -i to specify the number of inodes. Its documentation indicates that “it is not possible to expand the number of inodes on a filesystem after it is created”. – Gilles 'SO- stop being evil' Dec 12 '11 at 00:35
  • Can't change the number of inodes, but we can change their size with tune2fs (which is quite useless here). Your answers were helpful, but the question now is: what's the relation between inodes and the total number of files? – piovisqui Dec 12 '11 at 01:35
  • @piovisqui: each file consumes one inode, which is a pointer in the filesystem. If the file is a hard link to another file, it shares the same inode. – Hanan Dec 12 '11 at 04:47
  • @Gilles The -i option specifies the bytes-per-inode ratio, which sets the inode count only indirectly; the -N option sets the number of inodes directly. – theillien Oct 23 '14 at 20:12
  • The relation between inodes and file numbers isn't necessarily 1:1. The first inode contains a list of pointers to blocks where the file is stored. If the list of blocks can't fit within one inode, then the inode contains a list of pointers to inodes which list the blocks where the file is stored. If it doesn't fit there then it goes 3 sets of inodes deep for that list of blocks etc. – StuWhitby Apr 14 '15 at 14:02
  • @StuWhitby That's not quite right. A single inode has several direct pointers, and a single, double, and triple indirect pointer. If the list of blocks can't fit in the direct pointers, then the single indirect pointer will point at a block of data (NOT another inode) that contains more pointers. If more pointers than that are needed, the double indirect pointer points at a block that contains single indirect pointers, and the triple indirect at a block with double indirect pointers. So a file does, in fact, just use one inode, regardless of size. – user125355 Nov 30 '17 at 14:13
22

With 3.2 million inodes, you can have 3.2 million files and directories, total (but multiple hardlinks to a file only use one inode).

Yes, it can be set when creating a filesystem on the partition. The options -T usage-type, -N number-of-inodes, or -i bytes-per-inode can all set the number of inodes. I generally use -i, after comparing the output of du -s and find | wc -l for a similar collection of files and allowing for some slack.
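As a rough illustration of that estimate (paths and numbers below are placeholders), the comparison and the resulting mke2fs call might look like:

$ find /srv/data | wc -l        # number of files and directories
5000000
$ du -s /srv/data               # total size in 1 KiB blocks (GNU du)
104857600       /srv/data
# ~100 GiB for ~5 million objects is about 21 KiB per object, so a
# bytes-per-inode ratio of 16 KiB leaves some slack:
$ mke2fs -t ext4 -i 16384 /dev/sdb1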

No, it can't be changed in-place on an existing filesystem. However:

  • If you're running LVM or the filesystem is on a SAN's LUN (either directly on the LUN, or as the last partition on the LUN), or you have empty space on the disk after the partition, you can grow the partition and then use resize2fs to expand the filesystem. This adds more inodes roughly in proportion to the added space. If you want to avoid running out of inodes before space again, assuming that future files will on average have about the same size, set a high enough reserved block percentage with tune2fs -m (see the first sketch after this list).
  • If you have enough space and can take the filesystem offline, then take it offline, create a new filesystem with more inodes, and copy all the files over.
  • If just a subset of the files is using a lot of the inodes and you have enough free space, set up a loop device backed by a file on the main filesystem, create a filesystem with more inodes (and maybe smaller blocks as well) on it, and move the offending directories into it. That's probably a performance hit and a maintenance hassle, but it is an alternative (see the second sketch after this list).
  • And of course, if you can delete a lot of unneeded files, that should help too.
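A sketch of the grow-in-place route from the first bullet, assuming the filesystem sits on an LVM logical volume called /dev/vg0/data (names and sizes are illustrative):

$ lvextend -L +50G /dev/vg0/data   # grow the logical volume
$ resize2fs /dev/vg0/data          # grow the ext4 filesystem; inodes grow roughly in proportion
$ tune2fs -m 5 /dev/vg0/data       # optionally set the reserved block percentage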
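And a sketch of the loop-device workaround from the third bullet (file name, size, and mount point are illustrative):

$ dd if=/dev/zero of=/srv/inodes.img bs=1M count=4096        # 4 GiB backing file
$ mkfs.ext4 -F -N 2000000 -b 1024 /srv/inodes.img            # many inodes, small blocks
$ mkdir /srv/many-small-files
$ mount -o loop /srv/inodes.img /srv/many-small-files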
david
18

As another workaround, consider packing huge collections of files into an uncompressed(!) tar archive and then mounting it as a filesystem with archivemount. A tar archive is better for sharing than a filesystem image and provides similar performance when backing up to a cloud or other storage.
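A minimal sketch, assuming the collection lives in ./photos and archivemount (a FUSE tool) is installed:

$ tar -cf photos.tar photos/       # uncompressed archive: one inode on the host filesystem
$ mkdir /mnt/photos
$ archivemount photos.tar /mnt/photos
$ ls /mnt/photos                   # browse the files as a normal directory tree
$ fusermount -u /mnt/photos        # unmount when done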


If the collection is supposed to be read-only, squashfs may be an option, but it requires certain options to be enabled in the kernel; xz compression is available for tar as well, with similar performance.
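For the read-only case, a squashfs sketch might look like this (paths are illustrative; requires squashfs-tools and kernel support for squashfs):

$ mksquashfs photos/ photos.sqsh -comp xz
$ mkdir /mnt/photos-ro
$ mount -t squashfs -o loop photos.sqsh /mnt/photos-ro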

tijagi
9

I have an alternative solution for this situation. Let's say you have 1,000 inodes in a 10G partition. Because of the inode limit you cannot use all the space in the partition, but with this solution you can use the remaining space without reformatting it.

$ df -i   # check inode usage (we need one free inode on /data for the image file; if there is none, move one file to another partition first)
/dev/part1  1000 999 1 99.9%     /data

$ dd if=/dev/zero of=/data/new_data bs=1M count=9216   # create a fixed-size image (~9 GiB here); size it so /data keeps some free space
$ mkfs.ext4 /data/new_data
$ mkdir /data1
$ mount -o loop /data/new_data /data1

For permanent mounting:

$ echo "/data/new_data /data1 ext4 loop,defaults 0 0" >> /etc/fstab
Anthon
SANJEET
  • Welcome to U&L. I took the liberty of reformatting your answer into the more usual representation of code here, inserting a prompt ($) to clearly distinguish between commands and output (if there are only commands, the prompt is normally left out). I also changed the SHOUTING into bold-phrase emphasis, which is what I think you intended. You can roll back the changes if I misrepresented things. – Anthon May 25 '15 at 05:26
  • I think this solution has logic, but you need to manage the size when running dd. – piovisqui May 26 '15 at 14:51
  • The details are wrong; you'd need to use a loop device, and maybe even unionfs depending on the application, but this is the only solution that avoids reformatting and restoring from backup, which is no fun when in a hurry with millions of files. There are circumstances where this could save the day! – medoc Jul 29 '15 at 20:43
8

Try du -s --inodes * 2>/dev/null | sort -g, then cd into the last directory in the output and repeat.

Full disclosure: not every OS supports the --inodes flag for du (my macOS does not), but many Linux distributions do.
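An illustrative drill-down session (the directory name is hypothetical):

$ cd /
$ du -s --inodes * 2>/dev/null | sort -g    # the biggest inode consumers end up last
$ cd usr                                    # suppose 'usr' came out on top; descend and repeat
$ du -s --inodes * 2>/dev/null | sort -g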

Siva
6

Recently ran into this issue when using apt or aptitude upgrade.

df -h

Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      7.8G  5.1G  2.3G  70% /

df -i

Filesystem     Inodes  IUsed  IFree IUse% Mounted on
/dev/xvda1     524288 521497   2791  100% /

Issued command:

du /|sort -k1 -n

This revealed that most of the files were in subdirectories for several kernel versions within:

/usr/src/linux-headers

Removing those subdirectories fixed the inode problem.

df -i

Filesystem     Inodes  IUsed  IFree IUse% Mounted on
/dev/xvda1     524288 104986 419302   21% /
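On Debian/Ubuntu-style systems, the stale kernel header packages can usually be removed through the package manager rather than by deleting the directories by hand (a common approach, not necessarily the exact step used above):

$ sudo apt-get autoremove --purge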
kph0x1
  • does "du /|sort -k1 -n" show inodes? – Orphans Jan 03 '17 at 14:13
  • No. That was to sort the directories, showing which ones had the most files in them: folders that were consuming lots of inodes but comparatively little space, i.e. the situation of 30% free disk space yet 100% inode usage shown above. – kph0x1 Jan 03 '17 at 17:42
  • I honestly don't get how du shows how many files there are with any flag. Could you please explain in more detail? – Orphans Jan 04 '17 at 17:57
  • No flags for the du command. Usage is for the root of the filesystem in the example above, looking at space only. The output is piped to sort in order to show which directories contained the most files. There isn't a count of files in the example above, the 'how many files' portion of your question. The kernel source directories were the culprit shown in the du output, though: many small files and sub-folders from past compilations, the very thing ideally suited for removal in order to free up inodes. There still remains a manual, human review of the du output; /usr/src/linux-headers was then obvious. – kph0x1 Jan 09 '17 at 15:52
  • du shows ONLY bytes, not files. And you are piping the output only from the du command into sort, so how does sort -k1 -n sort the output in the way you proposed? The only thing I can see is that "du /|sort -k1 -n" just sorts every row based on the size in bytes, nothing else. – Orphans Jan 09 '17 at 16:49
  • It was helpful for me, showing directories which shouldn't normally have such size usage. It produced the listing that revealed /usr/src/linux-headers. Your system and experience may vary. If you need an individual file count by folder, a summary for the filesystem can be done with gawk; I'd be glad to help. – kph0x1 Jan 10 '17 at 15:24
  • Alright, so I can basically type du -h / | sort -k1 -n and get the same output but more readable. So it doesn't really show inodes, just size in bytes. – Orphans Jan 11 '17 at 06:59
  • This advice was really helpful. I ran out of inodes on a virtual machine, and clearing out /usr/src got me back like 800,000 inodes (on a system with about a million of them). – nomen Jul 03 '18 at 19:26