14

If parsing the output of ls is dangerous because it can break on some funky characters (spaces, \n, ... ), what's the best way to know the number of files in a directory?

I usually rely on find to avoid this parsing, but a similar pipeline, find mydir | wc -l, will break for the same reasons.

I'm working on Solaris right now, but I'm looking for an answer that is as portable as possible across different unices and different shells.

rahmu
  • 20,023

10 Answers

21

How about this trick?

find . -maxdepth 1 -exec echo \; | wc -l

As portable as find and wc.
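A quick way to see why odd names can't break this: every entry contributes exactly one empty line, whatever its name. A throwaway check (directory and file names invented for the demo):

mkdir /tmp/count-demo && cd /tmp/count-demo
touch 'a b' 'c
d'
find . -maxdepth 1 -exec echo \; | wc -l    # prints 3: the two files plus . itself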

  • 5
    This doesn't work (it displays n+1 files on my Debian system). It also doesn't filter for regular files. – Chris Down Dec 23 '11 at 13:35
  • 5
    I just gave a generic example. It does work, but how it works depends on how you adapt the find command to your specific needs. Yes, this one includes all the directories, including . (which might be why you see the result as n+1). – rozcietrzewiacz Dec 23 '11 at 15:35
  • I like this trick, very clever; but I'm surprised there's no simple straightforward way to do that! – rahmu Dec 23 '11 at 16:11
  • 4
    @ChrisDown the OP doesn't specify filtering for regular files, asks for number of files in a directory. To get rid of the n+1 issue, use find . -maxdepth 1 ! -name . -exec echo \; | wc -l; some older versions of find do not have -not. – Arcege Dec 23 '11 at 16:50
  • 4
    Note that -maxdepth is not standard (a GNU extension now also supported by a few other implementations). – Stéphane Chazelas May 09 '15 at 15:10
13

In bash, without external utilities or loops:

shopt -s dotglob
files=(*)
echo "${#files[@]}"

In ksh, replace shopt -s dotglob by FIGNORE=.?(.).
In zsh, replace it by setopt glob_dots, or remove the shopt call and use files=(*(D)). (Or just drop the line if you don't want to include dot files.)
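If the directory may be empty, also enable nullglob in bash so a pattern with no matches expands to nothing instead of staying as a literal *; a minimal sketch (in zsh, the N qualifier as in files=(*(DN)) has the same effect, as a comment below notes):

shopt -s dotglob nullglob
files=(*)
echo "${#files[@]}"    # prints 0 for an empty directory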

Portably, if you don't care about dot files:

set -- *
echo "$#"

If you do want to include dot files, count three patterns that together match every name except . and .. (* for names not starting with a dot, .[!.]* for dot files whose second character is not a dot, and ..?* for names starting with .. that have at least one more character):

set -- *
if [ -e "$1" ]; then c=$#; else c=0; fi
set .[!.]*
if [ -e "$1" ]; then c=$((c+$#)); fi
set ..?*
if [ -e "$1" ]; then c=$((c+$#)); fi
echo "$c"
enzotib
  • 51,661
  • 3
    The first example prints 1 for an empty directory when nullglob is not enabled. In zsh, a=(*(DN));echo ${#a} with the N (nullglob) qualifier does not result in an error for an empty directory. – nisetama May 11 '16 at 23:17
10
find . ! -name . -prune -print | grep -c /

Should be fairly portable to post-80s systems.

That counts all the directory entries except . and .. in the current directory.

To count files in subdirectories as well:

find .//. ! -name . | grep -c //

(that one should be portable even to Unix V6 (1975), since it doesn't need -prune)
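To count the entries of a directory other than the current one, point the same trick at it; a sketch (mydir is a placeholder, and the trailing /. sidesteps clashes with the directory's own name, as discussed in the comments below):

find mydir/. ! -name . -prune -print | grep -c /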

  • 1
    One of the rare portable answers on this page, if not the only one. – xhienne Aug 15 '17 at 18:11
  • I upvoted this answer yesterday as I found it also works well for directories other than the current directory (find dirname ! -name dirname -prune -print). I have since been wondering if there's any particular reason to use grep -c / instead of wc -l (which is probably more commonly used for counting). – Anthony Geoghegan Nov 01 '18 at 15:11
  • 2
find dirname ! -name dirname doesn't work if there are other directories within that are named dirname. It's better to use find dirname/. ! -name . instead. wc -l counts the number of lines, but file names can be made of several lines, as the newline character is as valid as any in a file name. – Stéphane Chazelas Nov 01 '18 at 15:14
8

Try:

ls -b1A | wc -l

The -b escapes non-printable characters, -A shows all files except . and .., and -1 lists one entry per line (the default when output goes to a pipe, but good to be explicit).

As long as we're including higher-level scripting languages, here's a one-liner in Python:

python -c 'import os; print(len(os.listdir(os.curdir)))'

(os.listdir includes dot files but not the . and .. entries.)

Or, to count recursively like find:

python -c 'import os; print(len([j for i in os.walk(os.curdir) for j in i[1] + i[2]]))'
Arcege
  • 22,536
2

The simplest version I use all the time and have never had problems with is: ls -b1 | wc -l

Peter
  • 121
  • You might run into problems if the file name contains a \n or other funky chars (yeah, certain unices allow this). – rahmu Aug 09 '17 at 14:43
  • 1
    I tried this explicitly before posting my answer and had no problems with it. I used nautilus file manager to rename a file to contain \n to try this. – Peter Aug 15 '17 at 14:32
You're right, it doesn't work like that. I don't know what I did when I tested this first. Tried again and updated my answer. – Peter Aug 15 '17 at 14:39
  • No, the command is OK, but there is already a similar solution and hidden files are not counted. – xhienne Aug 15 '17 at 17:54
  • ls -1 | wc -l fine on OpenBSD ksh – Lee May 20 '23 at 20:03
1

You can use a construction like this:

I=0; for i in * ; do ((I++)); done ; echo $I

But I'm afraid you can get an error like Argument list too long in case you have too many files in the directory. However, I tested it on a directory with 10 billion files, and it worked well.
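Note that ((I++)) is bash/ksh arithmetic. A plain POSIX sh spelling of the same loop, with the same caveat that an empty directory yields 1 because * stays unexpanded:

I=0; for i in *; do I=$((I+1)); done; echo "$I"

(The Argument list too long error only applies to argument lists passed to external commands, which is why an in-shell loop like this survives huge directories.)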

rush
  • 27,403
1

Have you considered perl, which should be relatively portable?

Something like this (with @directories_to_search initialized to a placeholder; adjust to taste):

use File::Find;

# placeholder: the directories to count in
my @directories_to_search = ('.');

my $counter = 0;

sub wanted {
    # count regular files only
    -f && ++$counter;
}

find(\&wanted, @directories_to_search);
print "$counter\n";
cjc
  • 2,837
0

Try this, using ls with the -i (inode number) and -F (append '/' to directory names) options:

ls -ilF | egrep -v '/' | wc -l
Saumil
  • 31
0

With a perl one-liner (reformatted for readability):

perl -e 'opendir($dh, ".");
         while ( readdir($dh) ) {$count++};   # counts every entry, including . and ..
         closedir $dh;
         print "$count\n";'

or

perl -e 'opendir($dh, ".");
         @files = readdir($dh);               # also includes . and ..
         closedir $dh;
         print $#files+1,"\n";'

With the second version you can use perl functions that operate on lists, like grep or map. See perldoc -f readdir for an example using grep.
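For instance, a sketch counting everything except . and .. with grep in scalar context:

perl -e 'opendir($dh, ".");
         print scalar(grep { $_ ne "." && $_ ne ".." } readdir($dh)), "\n";'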

cas
  • 78,579
0

In addition to the find-based answer proposed by Stéphane, here is a POSIX-compliant answer based on ls:

ls -qf | tail -n +3 | wc -l

Here -q replaces non-printable characters (newlines included) with ?, so each entry occupies exactly one line, and -f disables sorting while implying -a, so . and .. are listed (typically as the first two entries, which is what tail -n +3 skips).
xhienne
  • 17,793