
The other question asks about the limit on building up commands by find's -exec ... {} +. Here I'd like to know how those limits compare to shells' inner limits. Do they mimic system limits or are they independent? What are they?

I'm a Bash user, but will learn of any Unix and Linux shells if only out of curiosity.

1 Answer


Does the system-wide limit on argument count apply in shell functions?

No. That limit applies to the execve() system call, which a process uses to replace itself with a different executable. It does not apply to functions, which are interpreted by the current shell interpreter in the same process, nor does it apply to built-in utilities.

execve() wipes the memory of the process before loading and starting the new executable. The whole point of functions and builtins is to avoid that, so that they can modify the variables and other parameters of the shell; they will therefore typically not use execve() at all.
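
A quick way to see this (a sketch, assuming a Linux system with seq(1) available):

```shell
# Shell functions are called in-process, so execve()'s ARG_MAX limit
# does not apply to their argument lists.
count_args() { echo "$#"; }

# 200000 arguments would exceed execve()'s limit on many systems if
# passed to an external command, but a function handles them in-process:
count_args $(seq 1 200000)   # prints 200000
```

By contrast, `/bin/echo $(seq 1 200000)` would typically fail with "Argument list too long" (E2BIG), because it does go through execve().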

Do they mimic system limits

No.

or are they independent?

Yes.

What are they?

As much as the resource limits for the current shell process allow.
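
For comparison (a sketch; `getconf ARG_MAX` reports the kernel's execve() limit, and the exact value is system-dependent):

```shell
# The kernel's limit, which applies only to execve():
getconf ARG_MAX

# In-process, only the shell's own memory and resource limits matter;
# here 100000 positional parameters are set without any execve() call:
set -- $(seq 1 100000)
echo "$#"   # prints 100000
```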

The bash manual says:

There is no maximum limit on the size of an array, nor any requirement that members be indexed or assigned contiguously.

This seems to apply here, since function arguments are stored in an internal shell array (not passed to the execve() kernel function).

Historically, ksh88 and pdksh had a low limit on array indices, but not on the number of function arguments. In the Bourne shell you could only access $1 through $9 directly, but you could still pass as many arguments as you liked to functions and, for instance, loop over all of them with for arg do... or pass them along to another function or builtin with "$@".
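
Those Bourne-era idioms can be sketched like this (POSIX sh; the function names are purely illustrative):

```shell
# "for arg do" iterates over all the positional parameters ("$@")
# by default, however many there are:
print_each() {
    for arg do
        printf '%s\n' "$arg"
    done
}

# "$@" passes every argument along intact, including any with spaces:
forward_all() {
    print_each "$@"
}

forward_all one two three   # prints one, two and three on separate lines
```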