I want to know the total number of files on my server. Can this be done?
4 Answers
Depending on what exactly you want to count, you are better off doing this per filesystem rather than counting all files under root. Counting everything under root would also include files in /proc and /sys, which you may not want.
To count everything on the root filesystem using GNU find, you could do:
find / -xdev -type f -printf '\n' | wc -l
The -printf '\n' will just print a newline for every file found, instead of the filename. This way there are no problems with filenames that contain newlines, which would otherwise be counted as multiple files.
With a POSIX find you could simply do:
find / -xdev -type f | wc -l
Or, POSIXly, preventing any file containing newlines from being counted twice:
{ printf 0; find / -xdev -type f -exec sh -c 'printf "+ $#"' sh {} +; echo; } | bc
Here each file becomes a separate argument to sh, which then prints the total number of arguments. If more than one sh process is invoked, as will be the case for many files, the output of each sh is summed by bc.
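For illustration, if find ends up starting three sh processes that handle 2000, 2000 and 517 files respectively (numbers made up), the stream reaching bc is just an arithmetic expression:
0+ 2000+ 2000+ 517
which bc evaluates and prints as 4517.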
Update
A simpler (but slower) POSIX solution, in which printf null-terminates every filename and tr turns each NUL into a newline (and any embedded newlines into ?) so that wc -l counts one line per file:
find / -xdev -type f -exec printf '%s\0' {} + | tr '\n\0' '?\n' | wc -l
Update 2
As noted by @Gilles, using -type f with find only counts regular files. To also include device files, you could use -type f -o -type b -o -type c. To count directories as well, don't use any -type option.
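Note that when you combine tests with -o, the whole expression needs parentheses so that an action such as -printf applies to every branch; with the GNU variant above that would look like:
find / -xdev \( -type f -o -type b -o -type c \) -printf '\n' | wc -l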
Another point by Gilles was that files with multiple hard links will be counted as different files. This may not be desirable on a filesystem where, for example, incremental backup trees have been created by hard-linking unchanged files in a newer tree to those in an older one. To overcome this with GNU tools you could do:
find / -xdev -type f -printf '%i\n' | sort -u | wc -l
Using POSIX tools:
find / -xdev -type f -exec ls -iq {} + | sort -buk 1,1 | wc -l
No problems with newlines in filenames here, since the -q option to ls means that it will replace them with ?.
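To see why sorting on the first field is enough, this is roughly what the ls -iq output looks like (paths and inode numbers are made up); the two hard links to the same file share an inode number, so the sort keeps only one of them:
 1835041 /home/user/report.txt
 1835041 /home/user/backup/report.txt
 1922307 /home/user/notes?with?newline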
- @jordanm, thanks, fixed. Also missed the dummy $0 argument for sh; without it the count will be one short for each sh run. – Graeme Apr 07 '14 at 14:29
- Why -type f? The question asked for a count of files, not of regular files. Note that all of these methods count files multiple times if they have multiple hard links (excluding . and .. links to directories). – Gilles 'SO- stop being evil' Apr 07 '14 at 20:06
- @Graeme, instead of sort -buk 1,1, can we do the same thing with just sort -buk 1? man sort says KEYDEF is F[.C][OPTS][,F[.C][OPTS]]. Because we sort by inode number, field 1 alone seems to be enough. – MS.Kim Apr 08 '14 at 01:10
- @MS.Kim, have you tried it? The first 1 is the start field and the second is the end field. Without that you just get the same result as without the -k option, which is the same as the solutions that don't count individual inodes. – Graeme Apr 08 '14 at 08:56
- @Graeme I didn't remove anything, I just added the paragraph about what the -printf '\n' was for. Probably we just had an overlap in editing. Sorry for that. – Dubu Apr 08 '14 at 09:28
- @Graeme, I had misunderstood the man page. I assumed that we could omit the end field and sort by field 1 by specifying only the first field (-k 1). Thank you for correcting my mistake. – MS.Kim Apr 08 '14 at 21:29
UPDATE
This is the fastest fully portable method I can imagine. I use tar below because it will automatically add hard-linked files only once:
find /./ -xdev ! -type d | tar -cf - -T - | tar -t | sed -n '\|^/\./|c.' | wc -l
Portable and very fast:
find / -xdev | sed -n '/^[./]/c\.' | wc -l
I don't believe you need all of the rest - though @Graeme was correct about the possible misses below. This, however, does not have the same shortcomings:
find /./ -xdev | sed -n '\|^/\./|c.' | wc -l
All you need to do is hand find a full path that starts with /./ and you don't have to jump through all of the other hoops.
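Here is a minimal sketch of why the /./ marker is enough, using a throwaway directory and made-up file names. find prints /./ exactly once per path, so the continuation line of a name containing a newline never matches the sed address and each file is counted once:
d=/tmp/count-demo; mkdir -p "$d"
: > "$d/plain"
: > "$d/bad
name"
find "$d/." -xdev ! -type d | sed -n '\|^'"$d"'/\./|c.' | wc -l    # prints 2, not 3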
NOTE: As Gilles points out, using -type f is an egregious error. I have updated this post to reflect this.
Also for a more accurate count, you need only do:
du / --inodes -xs
Provided your tools are up to date, that will give you the exact number of files in your root filesystem. This can be used for any directory as well.
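A usage sketch for an arbitrary directory (the path and the count are made up); du prints the inode count followed by the directory name:
du --inodes -xs /some/directory
37294   /some/directory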
Here's a means of getting an exact count of all files in any filesystem or subdirectory, excluding hard links, with only very commonly available tools. First isolate the target root with a mount --bind:
mkdir /tmp/ls ; sudo mount -B / $_ ; cd $_
Next, count your files:
sudo ls -ARUi1 ./ |
grep -o '^ *[0-9]*' |
sort -un |
wc -l
I did something very similar to this for another answer a few days ago - though that was a little more complicated because the goal was to sort by subdirectories containing the highest file counts - so the command there looks a little different than the one here. But this is still very fast, by the way.
Anyway, the heart of that is the ls command. It -Recursively searches the entire tree, listing -Almost-all -inodes -Unsorted at -1 file per line. Then we grep -only the inode [0-9] numbers, sort on -unique -numbers, and wc counts the -lines.
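When you are done counting, the scratch mount from above can be torn down again (assuming the same /tmp/ls path):
cd / ; sudo umount /tmp/ls ; rmdir /tmp/ls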
- Still doesn't give any guarantees for awkwardly named files, although it does reduce the range of names that will fail. The overhead of counting the files should always be minimal compared to the find itself anyway. – Graeme Apr 07 '14 at 15:20
- @Graeme It completely erases the filenames. In what case would it miss? As far as the overhead goes, it looks like your answer is invoking a shell per file? I need to look closer... – mikeserv Apr 07 '14 at 15:26
- Filenames containing a newline before a dot would be counted twice. Files beneath a directory with a name ending in a newline would also be counted twice. The only robust way to exclude all edge cases with multiple filenames is to somehow pass each one as an argument or to have the names null separated (but POSIX tools rarely support this). – Graeme Apr 07 '14 at 15:36
- Ah, very good, find won't produce the /./ sequence anywhere else in the path. I will remember this trick. I think you can drop the backslash after the c since this is not part of a regex. – Graeme Apr 07 '14 at 16:36
- @Graeme Can I drop the backslash? I'll check it, but you're probably right - it makes sense. I've never used c before. You're definitely right. – mikeserv Apr 07 '14 at 16:38
- @Graeme and by the way - thanks man. I'll remember this trick too, but I wouldn't have learned it if you hadn't corrected me in the first place. – mikeserv Apr 07 '14 at 16:56
- I updated my answer with an alternative POSIX solution which I thought might be faster; turns out this one is about twice as fast for a cached tree. – Graeme Apr 07 '14 at 17:03
- @Graeme It is very fast but it depends on printf being a pathed app - not a shell builtin. I don't know if that's all that reliable these days - I did think of the same and was actually doing exactly that - tr and all, before sed. Is printf slower or faster - maybe I misunderstood? – mikeserv Apr 07 '14 at 17:06
- printf '%s\0' is the slowest out of all three. This one is about twice as fast on a cached tree and the sh + bc approach is in the middle. I believe POSIX requires shell builtins to be available as binaries, but some systems do shirk this (not usually with printf though). – Graeme Apr 07 '14 at 17:15
- @Graeme Yeah, printf isn't required as a builtin at all - but it's pretty much a de facto standard. But that's fuzzy to me. My goal was still to get a single stream - I wish I could figure out how to do it with tr. What happens if you do yours but on the other side of a pipe? Like . source the pipe itself? I don't know if that would make a difference. But I think you could at least keep it down to one printf that way maybe... – mikeserv Apr 07 '14 at 17:19
- Nah, as soon as you pipe you lose the separation that find can do with -exec; this is what null separation avoids, since null is the only character not allowed in a Unix filename. Since POSIX find can't do this you have to go the -exec way or find a longer sequence of characters as a separator as above. – Graeme Apr 07 '14 at 17:37
df -i / gives you the number of used inodes on the root filesystem (on filesystems that use inodes, which includes the ext2/ext3/ext4 family but not btrfs). This is the number of files on that filesystem, plus a few inodes that are preallocated for the use of fsck.
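For example, the IUsed column in the df -i output is the figure in question (the numbers below are made up):
df -i /
Filesystem      Inodes  IUsed   IFree IUse% Mounted on
/dev/sda1      1310720 271804 1038916   21% /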
If you want the number of files in a directory tree, then you can use the following command:
du -a /some/directory | wc -l
Add the option -x to du if you don't want to traverse mount points, e.g. du -ax / | wc -l. Note that this will return a larger count if you have file names containing newlines (a bad idea, but not impossible).
Another way to count is
find /some/directory | wc -l
or, to cope with file names containing newlines, with GNU find (non-embedded Linux or Cygwin):
find /some/directory -printf . | wc -c
Add -xdev (e.g. find /some/directory -xdev -printf .) to skip mount points. Note that this counts directory entries, not files: if a file has multiple hard links, then each link is counted (whereas the df and du methods count files, i.e. multiple hard links to the same file are only counted once).

- This is a very good point. I don't know why I didn't make it, except that I must be getting senile, as I made the very same point only a few days ago. And thank you for pointing out the -f thing - that's an egregious error that I also made. You can handle all filenames with a little use of /./ though, newlines or no. http://unix.stackexchange.com/a/122871/52934 – mikeserv Apr 07 '14 at 23:59
From your root directory:
find . -type f | wc -l
You can change the path (here .) to whichever directory you want to count the files in. If you don't want to descend into subdirectories, add the option -maxdepth 1.
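For example, to count only the regular files directly inside the current directory, without descending into subdirectories:
find . -maxdepth 1 -type f | wc -l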

- @fraxture If a file name contains a newline character, wc -l will count the name as two lines. You could get around it using find / -type f -printf "%i\n", which will print out the inode instead of the file name. – Jenny D Apr 07 '14 at 14:39
- df -i. – Simon Richter Apr 07 '14 at 17:21