
Is there a maximum to bash file name expansion (globbing) and if so, what is it? See globbing on tldp.org.

Let's say I want to run a command against a subset of files:

grep -e bar foo*
rm -f bar*

Is there a limit to how many files bash will expand to, and if so what is it?

I am not looking for alternative ways to perform those operations (e.g. by using find).

2 Answers


There is no limit (other than available memory) to the number of files that may be expanded by a bash glob.

However, when those files are passed as arguments to a command that is executed (as opposed to a shell builtin or function), you may run into a limit of the execve() system call on some systems. On most systems, that system call has a limit on the cumulative size of the arguments and environment passed to it, and on Linux there is also a separate limit on the size of a single argument.
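For instance (a rough sketch; whether the second command actually fails, and at what point, depends on your system's ARG_MAX and stack size limit):

echo {1..500000} > /dev/null        # builtin echo: no execve(), so no such limit
/bin/echo {1..500000} > /dev/null   # external command: typically fails with "Argument list too long"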

For more details, see:

To work around that limit, you can use (assuming GNU xargs or compatible):

printf '%s\0' foo* | xargs -r0 rm -f

Above, since printf is built in to bash (and most Bourne-like shells), we don't hit the execve() limit, and xargs splits the argument list across as many rm invocations as needed to stay under it.
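The same pattern works for the grep example from the question (a sketch, assuming GNU xargs; the extra /dev/null argument makes grep print file names even when a batch happens to contain only one file):

printf '%s\0' foo* | xargs -r0 grep -e bar /dev/null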

With zsh:

autoload zargs
zargs foo* -- rm -f

With ksh93:

command -x rm -f foo*

You can see the limit for the total size of the arguments with:

getconf ARG_MAX

This limit is generally determined not by the shell, but by the underlying operating system, as noted in this answer.
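GNU xargs can also report the limits it will actually work within, including the space taken up by your environment (a sketch, assuming GNU xargs; the output varies by system):

xargs --show-limits < /dev/null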

DopeGhoti

  • I think it's the total length, not the number? – ilkkachu Apr 06 '17 at 15:36
  • You are of course correct; I have updated my answer to reflect this. Because of this, the limit on the number of arguments will be a function of the length of the arguments. – DopeGhoti Apr 06 '17 at 15:40