It depends mainly on your version of the Linux kernel.
You should be able to see the limit for your system by running
getconf ARG_MAX
which tells you the maximum number of bytes a command line can have after being expanded by the shell.
In Linux < 2.6.23, the limit is usually 128 KB.
In Linux >= 2.6.25, the limit is either 128 KB or 1/4 of your stack size (see ulimit -s), whichever is larger.
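You can check both numbers on your own system; a quick sketch (the values printed will vary by kernel and configuration):

```shell
# Print the effective argument-size limit and the stack limit it is
# derived from. On Linux >= 2.6.25, ARG_MAX is typically
# max(128 KB, stack size / 4).
getconf ARG_MAX   # maximum bytes for arguments + environment to execve()
ulimit -s         # stack soft limit in KiB ("unlimited" is possible)
```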
See the execve(2) man page for all the details.
Unfortunately, piping the output of ls *.txt
isn't going to fix the problem, because the limit is enforced by the operating system, not the shell.
The shell expands *.txt, then tries to call
exec("ls", "a.txt", "b.txt", ...)
and there are so many files matching *.txt that the argument list exceeds the 128 KB limit.
You'll have to do something like
find . -maxdepth 1 -name "*.txt" | wc -l
instead.
(And see Shawn J. Goff's comments below about file names that contain newlines.)
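If you need a count that is robust against newlines in file names, one option (assuming GNU find and tr) is to count NUL separators instead of lines:

```shell
# Each matching name contributes exactly one trailing NUL byte;
# deleting everything except NULs and counting bytes gives the
# number of files, regardless of newlines in the names.
find . -maxdepth 1 -name '*.txt' -print0 | tr -cd '\0' | wc -c
```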
… ls's output, which is a bad idea, so better avoid it. For counting see What's the best way to count the number of files in a directory?, for a tricky workaround see why for loop doesn't raise "argument too long" error?. – manatwork May 18 '12 at 16:00

Use
set -- *.txt; printf 'There are %s .txt files\n' "$#"
instead. This would give you the correct count regardless of whether a name contains newlines. – Kusalananda Dec 11 '19 at 08:05
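That shell-only approach works because the glob expands inside the shell itself, so no execve() call is made and ARG_MAX never applies:

```shell
# Set the positional parameters to the glob matches and count them.
# Caveat: with no matches, the unexpanded pattern itself counts as
# one argument unless nullglob (bash) or an explicit test is used.
set -- *.txt
printf 'There are %s .txt files\n' "$#"
```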