There isn't really; as long as your computer's memory can handle the queue, the shell should do its best. According to POSIX:
> The shell shall read its input in terms of lines from a file, from a terminal in the case of an interactive shell, or from a string in the case of `sh -c` or `system()`. The input lines can be of unlimited length. These lines shall be parsed using two major modes: ordinary token recognition and processing of here-documents.
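To make that concrete, here is a small sketch that builds one long AND-list and hands it to `sh -c` as a single input line (the count of 10000 is arbitrary, chosen only to make the line noticeably long):

```sh
# Build one long AND-list as a single string - roughly 80 KB of
# "true && true && ..." - then hand it to the parser as one line.
list=true
i=0
while [ "$i" -lt 10000 ]; do
    list="$list && true"
    i=$((i + 1))
done
sh -c "$list" && echo "one ~80000-character input line parsed and ran"
```

The practical ceiling is memory - and, when the line arrives via `sh -c`, the kernel's ARG_MAX limit on argument length - not anything in the shell's grammar.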
Basically, all of those `||` and `&&` operators strung together amount to a single input line for the shell's parser, because it has to parse the tokens for the whole command list before evaluating and executing the list's constituent simple commands.
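One quick way to see that the whole list is parsed before anything runs is to leave a dangling operator at the end; the exact error text varies by shell, but "first" is never printed:

```sh
# The trailing && makes the entire list a syntax error, so not even
# the first command executes - parsing happens before evaluation.
sh -c 'echo first && echo second &&'
# e.g. dash prints: Syntax error: end of file unexpected
```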
I once covered something like this here, and there are a lot of command examples there detailing how the parser works (or at least how I understand it to work).
`dash` handles here-documents - it uses anonymous pipes and can be very handy - and very fast. Shells usually exhaust the stack or otherwise go out of band long before my 24GB of RAM is consumed. – mikeserv Aug 06 '14 at 01:53

`nc` is pretty handy for that too. – mikeserv Aug 06 '14 at 01:59

"If you're comparing longer pipelines of simpler tools with a single invocation of a more complex tool" - from Gilles' answer to my question I thought it is bad to use a lot of piping, and the same for Avinash Raj's answer. – Nidal Aug 06 '14 at 02:10

`awk`. He definitely does make a good case about why it may be hard to use a lot of piping - as in, the different commands talk different languages - but no case for or against performance or security. – mikeserv Aug 06 '14 at 02:30
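On the here-document point above: a minimal way to peek at the plumbing on Linux (assuming `/proc` is available) is to ask what the here-document's stdin actually is. `dash` is said to back it with an anonymous pipe, while `bash` (at least older versions) uses a deleted temporary file:

```sh
# Under dash, the here-document's stdin is an anonymous pipe...
dash -c 'readlink /proc/self/fd/0 <<EOF
hello
EOF'
# prints something like: pipe:[123456]

# ...while older bash backs it with a deleted temporary file.
bash -c 'readlink /proc/self/fd/0 <<EOF
hello
EOF'
# prints something like: /tmp/sh-thd-... (deleted)
```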