49

When I run a command like ls */*/*/*/*.jpg, I get the error

-bash: /bin/ls: Argument list too long

I know why this happens: it is because there is a kernel limit on the amount of space for arguments to a command. The standard advice is to change the command I use, to avoid requiring so much space for arguments (e.g., use find and xargs).

What if I don't want to change the command? What if I want to keep using the same command? How can I make things "just work", without getting this error? What solutions are available?
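For reference, the limit itself can be inspected like this (the value shown is just an example from one system, and --show-limits is a GNU xargs option):

$ getconf ARG_MAX    # bytes available for arguments + environment
2097152
$ xargs --show-limits < /dev/null    # GNU xargs reports the same limits in more detail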

D.W.
  • Useful reading: Bash FAQ 95. Without changing your command, there's not much you can do besides recompiling to increase your argument-list maximum size or changing your directory structure so that there are fewer files. – jw013 Aug 15 '12 at 23:12
  • @jw013 Depending on the Linux kernel version, it may be possible to increase the argument-list limit – see http://unix.stackexchange.com/a/45161/8979 for details about the change in recent kernels. – Ulrich Dangel Aug 16 '12 at 00:39
  • @UlrichDangel, yup, it is possible! See my answer, which shows how to do it (on Linux, with a recent enough kernel). – D.W. Aug 16 '12 at 06:06

4 Answers

61

On Linux, the maximum amount of space for command arguments is 1/4th of the amount of available stack space. So, a solution is to increase the amount of space available for the stack.

Short version: run something like

ulimit -s 65536

Longer version: The default amount of space available for the stack is something like 8192 KB. You can see the amount of space available, as follows:

$ ulimit -s
8192

Choose a larger number, and set the amount of space available for the stack. For instance, if you want to try allowing up to 65536 KB for the stack, run this:

$ ulimit -s 65536

You may need to experiment with how large this needs to be. In many cases, this is a quick-and-dirty solution that eliminates the need to modify the command and work out the syntax of find, xargs, etc. (though I realize there are other benefits to doing so).
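If you would rather not change the limit for your whole login session, one variation (a sketch; it assumes your hard stack limit allows the increase) is to raise it inside a subshell, so only that one command sees the larger stack:

# The raised limit applies only inside the subshell; 65536 KB of stack
# gives roughly 65536 * 1024 / 4 = 16 MiB of space for arguments + environment.
( ulimit -s 65536 && ls */*/*/*/*.jpg )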

I believe this is Linux-specific. I suspect it won't help on any other Unix operating system (not tested).

D.W.
  • You can verify like this that it worked:

    $ getconf ARG_MAX
    2097152
    $ ulimit -s 65535
    $ getconf ARG_MAX
    16776960

    – Alex Jun 13 '19 at 07:42
  • Does this mean that if I make the stack size unlimited with ulimit -s unlimited, the command-line size will be unlimited, too? –  Jan 25 '21 at 14:22
  • @UncleBilly I tried that and it didn't work. – Code42 Aug 31 '21 at 09:37
  • Didn't work for me either. getconf ARG_MAX does show an increase but the command line still fails. In my case the argument list is 258k, which should fit comfortably in ulimit -s 60000 (which is in k). There's some other limit kicking in that gives this message. – Britton Kerin Jan 07 '22 at 23:04
  • I suspect this solution is not working for some people because of how make ends up passing commands to the shell as discussed here: https://stackoverflow.com/questions/11475221/why-do-i-get-bin-sh-argument-list-too-long-when-passing-quoted-arguments. I could easily be wrong though – Britton Kerin Jan 07 '22 at 23:21
  • This answer does not seem to work as consistently as some of the others, but it is by far the simplest to use when it does work, and I think it may also be faster than options that use for loops or the find command. – Danny Mar 08 '22 at 16:13
11

Instead of ls */*/*/*/*.jpg, try:

echo */*/*/*/*.jpg | xargs ls

xargs(1) knows what the maximum number of arguments is on the system and will break up its standard input, calling the specified command line multiple times with no more arguments per invocation than that limit, whatever it is (you can also set it lower than the OS maximum using the -n option).

For example, suppose the limit is 3 arguments and you have five files. In that case, xargs will execute ls twice:

  1. ls 1.jpg 2.jpg 3.jpg
  2. ls 4.jpg 5.jpg

Often this is perfectly suitable, but not always -- for example, you cannot rely on ls(1) sorting all of the entries for you properly, because each separate ls invocation will sort only the subset of entries given to it by xargs.
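If you do need one globally sorted listing, one workaround (a sketch assuming GNU tools: printf is a shell built-in, sort and xargs accept NUL separators, and GNU ls -U means "do not sort") is to sort the names yourself and tell ls to keep the order it is given:

# Sort the NUL-separated names once; ls -U prints each xargs batch in the
# order given, so the batches concatenate into a single sorted listing.
printf '%s\0' */*/*/*/*.jpg | sort -z | xargs -0 ls -U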

Though you can bump the limit as suggested by others, there will still be a limit -- and some day your JPG collection will outgrow it again. You should prepare your script(s) to deal with an arbitrary number of files...
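A common way to do that, which also survives whitespace and other odd characters in the names (a sketch; -mindepth/-maxdepth and -print0/-0 are GNU and BSD extensions, and the depth of 5 is meant to mirror the original */*/*/*/*.jpg glob):

# NUL-separate the names so spaces and newlines in filenames are safe,
# and let xargs split them into as many ls invocations as necessary.
find . -mindepth 5 -maxdepth 5 -name '*.jpg' -print0 | xargs -0 ls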

  • Thanks for the idea! This is not a bad workaround. Two caveats: 1. This breaks on directories and filenames that have spaces in their name, so it's not a perfect substitute. 2. Is that going to run into the same issue with Argument list too long, but for echo instead of ls, on shells where echo is not a shell built-in command? (Maybe that's not an issue in most shells, so maybe that's irrelevant.) – D.W. Jul 05 '17 at 16:51
  • Yes, special characters in filenames are a problem. Your best bet is to use find with the -print0 predicate -- and pipe its output into xargs with the -0 option.

    echo is a shell built-in and does not suffer from the command-line limitation of exec(3).

    – Mikhail T. Jul 05 '17 at 19:50
  • This works with commands that expect the variable arguments as the last parameters, such as ls, but what do I do when I want to mv many files to a single dir, e.g. mv * destdir? If * gives the "too many args" error, I can't make xargs pass the paths as the first arguments to mv somehow, or can I? – Thomas Tempelmann Feb 01 '21 at 16:14
  • Check the man page for xargs on your system -- on FreeBSD, for example, the -J option will help you with this task. If there is no way to do this on your OS, you'll need to write a custom script to reorder the arguments, something like destdir=$1; shift; mv "$@" "$destdir", and then give this new script to xargs: .... | xargs newscript $destdir (a fuller sketch of that wrapper appears below). – Mikhail T. Feb 01 '21 at 22:06
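A runnable version of the wrapper script sketched in the comment above might look like this (the name reorder-mv is made up for illustration):

#!/bin/sh
# reorder-mv: lets xargs append the file names last, even though mv
# expects the destination directory last.
destdir=$1; shift        # first argument is the destination directory
mv "$@" "$destdir"       # remaining arguments are the files to move

It could then be invoked as, for example, printf '%s\0' */*/*/*/*.jpg | xargs -0 ./reorder-mv /path/to/destdir (printf being a shell built-in, it is not itself subject to the limit).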
6

This Linux Journal article gives 4 solutions. Only the fourth solution does not involve changing the command:

Method #4 involves manually increasing the number of pages that are allocated within the kernel for command-line arguments. If you look at the include/linux/binfmts.h file, you will find the following near the top:

/*
 * MAX_ARG_PAGES defines the number of pages allocated for arguments
 * and envelope for the new program. 32 should suffice, this gives
 * a maximum env+arg of 128kB w/4KB pages!
 */
#define MAX_ARG_PAGES 32

In order to increase the amount of memory dedicated to the command-line arguments, you simply need to provide the MAX_ARG_PAGES value with a higher number. Once this edit is saved, simply recompile, install and reboot into the new kernel as you would do normally.

On my own test system I managed to solve all my problems by raising this value to 64. After extensive testing, I have not experienced a single problem since the switch. This is entirely expected since even with MAX_ARG_PAGES set to 64, the longest possible command line I could produce would only occupy 256KB of system memory--not very much by today's system hardware standards.

The advantages of Method #4 are clear. You are now able to simply run the command as you would normally, and it completes successfully. The disadvantages are equally clear. If you raise the amount of memory available to the command line beyond the amount of available system memory, you can create a D.O.S. attack on your own system and cause it to crash. On multiuser systems in particular, even a small increase can have a significant impact because every user is then allocated the additional memory. Therefore always test extensively in your own environment, as this is the safest way to determine if Method #4 is a viable option for you.

I agree that the limitation is seriously annoying.
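For reference, the arithmetic behind the 128 KB and 256 KB figures quoted above (assuming the usual 4 KB page size, which getconf can confirm):

$ getconf PAGESIZE
4096
$ echo $(( 32 * 4096 )) $(( 64 * 4096 ))    # MAX_ARG_PAGES of 32 vs. 64, in bytes
131072 262144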

0

The exec() family of system calls, and hence any external command the shell runs, will fail with "Argument list too long" if the combined size of the arguments and environment variables exceeds the available pool space.

I found a way to avoid this limit, while also handling special characters in the file names (e.g. spaces), by using printf with xargs:

printf '%s\0' */*/*/*/*.jpg | xargs -0 ls
Noam Manos
  • This seems to be changing the command, and it would also generate a different result, since each batch of files would be sorted by ls individually. – Kusalananda Jun 22 '21 at 14:15
  • @Kusalananda when I used echo instead of printf, file paths with spaces failed with "ls: cannot access ... No such file or directory". What is the error you're getting? – Noam Manos Jun 22 '21 at 15:12
  • Yes, why would you change printf to echo? What I said was that the output of your command may be ordered differently from that of the command the user in the question is trying to execute, had it been possible to run it at all. – Kusalananda Jun 22 '21 at 15:42
  • It's running what the OP wanted, ls */*/*/*/*.jpg, but in a safer way: it splits the expanded */*/*/*/*.jpg arguments into batches with xargs, which passes each batch to the ls command. (Regarding the echo, I thought you were referring to another answer here.) – Noam Manos Jun 23 '21 at 20:10