10

I found that this would raise the "Argument list too long" error:

ls *.*

And this would not raise it:

for file in *.*
do
    echo "$file"
done

Why?

3 Answers

15

The “Argument list too long” error is E2BIG, raised by the execve system call when the total size of the arguments (plus the environment, on some systems) is too large. The execve call is the one that starts external programs by loading a different executable file (there's a separate call, fork, for creating a new process that keeps running code from the same executable). The for loop is an internal shell construct, so it never involves calling execve. The command ls *.* raises the error not when the glob is expanded but when ls is executed.
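A quick way to see where the failure happens (a sketch, assuming the current directory holds enough files for the expanded glob to overflow the limit): give the same expansion to a builtin, which needs no execve, and to an external command, which does.

echo *.* > /dev/null    # builtin echo: the expansion happens, but there is no execve, so no error
ls *.* > /dev/null      # external ls: execve receives the whole list and fails with E2BIG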

execve fails with the error E2BIG when the total size of the arguments to the command is larger than the ARG_MAX limit. You can see the value of this limit on your system with the command getconf ARG_MAX. (It's possible that you can go over this limit if you have enough memory; keeping under ARG_MAX guarantees that execve will work as long as no unrelated error occurs.)
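For instance (a sketch; the numbers are system-dependent, and on Linux some per-argument bookkeeping overhead counts against the limit as well):

getconf ARG_MAX            # e.g. 2097152 on a typical Linux system
printf '%s\0' *.* | wc -c  # rough size the expanded glob would occupy as arguments
env | wc -c                # rough size of the environment, which also counts on some systems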

  • and why doesn't the shell have a limit? – lamwaiman1988 Sep 15 '11 at 01:16
  • 1
    @gunbuster363 The execve limit is enforced by the kernel, it puts limits because the arguments need to be copied via kernel memory at one point and user processes can't be allowed to request an arbitrary amount of shell memory. Inside the shell, there's no reason to have any limit, anything that fits in virtual memory is fine. – Gilles 'SO- stop being evil' Sep 15 '11 at 07:21
5

I suppose that in the first example ls is executed from bash through a fork/exec pair of system calls, while in the second one all the work is internal to bash.

The exec call has limits; the internal workings of bash do not (or rather, they have different limits that have nothing to do with exec, such as the amount of available memory).
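One way to check this guess is to trace the system calls (a sketch for a Linux machine with strace installed; output details vary):

strace -f -e trace=execve bash -c 'ls *.*' > /dev/null
# the trace shows an execve for bash itself and a second one for ls;
# if the expanded list is too big, that second execve fails with E2BIG
strace -f -e trace=execve bash -c 'for f in *.*; do echo "$f"; done' > /dev/null
# only the execve for bash itself appears: the loop never leaves the shell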

enzotib
5

Because in the case of ls the expansion is passed as the command's arguments, and the size of a command's argument list is limited.

In the case of the for loop, it is just a list of items inside the shell. There are no limits (as far as I'm aware) for that.
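A sketch of the difference, using a deliberately long brace expansion (the threshold is system-dependent; on a Linux box with the common 2 MiB limit, half a million words is usually enough to overflow it, and /bin/echo is used instead of the builtin precisely to force an execve):

for w in {1..500000}; do : ; done    # fine: the list only ever exists inside the shell
/bin/echo {1..500000} > /dev/null    # likely fails: the same list must be handed to execve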

Šimon Tóth
  • There is definitely a limit for the shell expansion. It is very much related to how much RAM you have available. Try this; my 4GB RAM system blows a gasket at about 15.2 million 8-byte args: for i in {00000001..20000000} ;do ((10#$i==1)) && break; done – Peter.O Sep 14 '11 at 12:48
  • 4
    @fred I didn't really think that mentioning RAM as a limit is needed. – Šimon Tóth Sep 14 '11 at 12:49
  • 2
    It may not be needed, but that is the nature of comments.. someone may find it interesting, or even of value. – Peter.O Sep 14 '11 at 13:01
  • @fred: actually yes; if expanding very large arguments were a common problem, it would be possible to implement it without keeping everything in memory. – Matteo Sep 15 '11 at 05:35