
I'm using a script to delete old files in a directory, but the script kept failing with

/usr/bin : Argument list too long

It turned out there were more than 40,000 files in the directory. So I want to know: what is the maximum number of files that can be listed on a command line? Is there any way to find it? Is it system-specific?

4 Answers


This message is not from ls, but from execve(2). You can see the maximum size of the argument list by running getconf ARG_MAX.

The limit is not on how many files an application can handle, but on the total size of the arguments that can be passed via exec to the operating system, which returns E2BIG if the arguments exceed the acceptable range.
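
As a quick illustration (a sketch assuming an ARG_MAX of about 2 MB, a common Linux value; the exact threshold and the shell's error wording vary by system), generating a few hundred thousand arguments is enough to make execve fail:

$ /bin/true $(seq 1 400000)    # expands to roughly 2.7 MB of arguments
bash: /bin/true: Argument list too long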

This limit was traditionally (until Linux 2.6.23) enforced using the kernel define ARG_MAX, found in linux/limits.h. Nowadays, however, it is specific to the environment you are running in: the maximum length of the arguments can typically be as large as a quarter of the user-space stack size.
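
You can check that relationship yourself. The figures below assume the common 8 MiB default stack, so yours may differ:

$ ulimit -s           # user-space stack size, in KiB
8192
$ getconf ARG_MAX     # in bytes: 8192 KiB / 4 = 2 MiB
2097152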

Chris Down
  • What? ARG_MAX is no longer relevant? Really? Could you expand on that a little? Which environment do you mean? The shell? Where is the limit set now? – terdon Mar 05 '14 at 14:34
  • @terdon Where did I say it was no longer relevant? The limit is still enforced by the kernel, it's just not hardcoded (it's dynamic based upon the size of the stack) -- see man 2 execve and exec.c, especially the RLIMIT_STACK checks. – Chris Down Mar 05 '14 at 14:40
  • Oh, I read your answer to mean that ARG_MAX is no longer used. What do you mean by "is specific to the environment you are running on" then? Note that I'm not saying you're mistaken, I'm sure you're not, I'm just wondering about the details. – terdon Mar 05 '14 at 14:43
  • @terdon The size of the userspace stack varies depending on the environment (and user settings). Since the size of ARG_MAX is typically one quarter of the userspace stack size, it varies depending on the size of the userspace stack allocated (which may differ from computer to computer, or it may also be changed with ulimit -s in bash). – Chris Down Mar 06 '14 at 01:37
  • Ah, OK, so that's why I have 131072 in my /usr/include/linux/limits.h (Debian) and getconf ARG_MAX returns 2097152, which is 16 × 131072. Thanks! – terdon Mar 06 '14 at 01:42

The limit varies between operating systems and versions. The limit is not on the number of files, but on the number of bytes. You can (usually) get your local limit like this:

$ getconf ARG_MAX
2097152

See also: BASH FAQ 095

grebneke

The situation is not the same if you do

ls /usr/bin

or

ls /usr/bin/*

Because in the second case the shell's glob expansion builds a list of arguments that is passed to ls, and the size of that list is limited (which is among the reasons why xargs exists). In the first case, ls reads the directory itself, so no long argument list is ever built. I even think the shell's limit may be lower than the exec limit.
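
A sketch of the xargs approach (the path /some/dir is a placeholder; -maxdepth, -print0 and -0 assume GNU or BSD find and xargs):

# find writes file names to a pipe instead of a command line;
# xargs splits them into as many rm invocations as needed to
# stay under the argument-size limit
find /some/dir -maxdepth 1 -type f -print0 | xargs -0 rm --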

orion

I ran into this situation when working with 4.2 BSD long ago. Under the covers, the shell has a memory space it uses to pass the environment and command-line arguments to invoked programs. As has been mentioned, it varies from Unix to Unix, but is often a tunable parameter in the OS.

Expanding this space bumps up the memory requirement for the whole system, so that may not be optimal for you. What would be more useful for your purpose is find's -exec option, as in this command, which recursively finds and deletes regular files (-type f) in the current directory and below that are older than 30 days (-mtime +30):

find . -type f -mtime +30 -exec rm {} \;

It's very powerful, and since find itself invokes the removal, one file at a time, there is no limit on the number of files removed. If your removal needs to match files with certain name patterns, find can generate that for you as well.
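
If one rm per file proves slow, a variant worth knowing batches arguments the way xargs does (the + terminator is standard POSIX find; the '*.log' pattern is just an illustration):

# '+' instead of '\;' packs as many file names as fit under the
# argument-size limit into each rm invocation
find . -type f -mtime +30 -name '*.log' -exec rm {} +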

terdon