I often see tutorials online that connect various commands with different symbols. For example:

command1 |  command2
command1 &  command2
command1 || command2    
command1 && command2

Others seem to be connecting commands to files:

command1  > file1
command1  >> file1

What are these things? What are they called? What do they do? Are there more of them?



Jeff Schaller
terdon

3 Answers


These are called shell operators and yes, there are more of them. I will give a brief overview of the most common among the two major classes, control operators and redirection operators, and how they work with respect to the bash shell.

A. Control operators

POSIX definition

In the shell command language, a token that performs a control function.
It is one of the following symbols:

&   &&   (   )   ;   ;;   <newline>   |   ||

And |& in bash.

A ! is not a control operator but a Reserved Word. It becomes a logical NOT [negation operator] inside Arithmetic Expressions and inside test constructs (while still requiring a space delimiter).

A.1 List terminators

  • ; : Will run one command after another has finished, irrespective of the outcome of the first.

      command1 ; command2
    

First command1 is run, in the foreground, and once it has finished, command2 will be run.

A newline that isn't in a string literal or after certain keywords is not equivalent to the semicolon operator, and the difference, while subtle, matters to the parser. A list of ; delimited simple commands is still a single list: the shell's parser must continue reading the simple commands that follow a ; delimited simple command before executing anything, whereas a newline can delimit an entire command list - or list of lists. Because the shell has no prior obligation to read data following a newline, the newline marks a point where it can begin to evaluate the simple commands it has already read in, whereas a ; semicolon does not.

  • & : This will run a command in the background, allowing you to continue working in the same shell.

       command1 & command2
    

Here, command1 is launched in the background and command2 starts running in the foreground immediately, without waiting for command1 to exit.

A newline after command1 is optional.
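As a sketch, with sleep standing in for any long-running command:

```shell
# sleep runs in the background; echo runs immediately, without waiting for it
sleep 1 &
echo "the shell is free again"
wait    # in a script, wait blocks until all background jobs have finished
```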

A.2 Logical operators

  • && : Used to build AND lists, it allows you to run one command only if another exited successfully.

       command1 && command2
    

Here, command2 will run after command1 has finished and only if command1 was successful (if its exit code was 0). Both commands are run in the foreground.

This command can also be written

    if command1
    then command2
    else false
    fi

or simply if command1; then command2; fi if the return status is ignored.
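A concrete illustration (the /tmp path is just an example):

```shell
printf 'alpha\nbeta\n' > /tmp/words.txt
# grep -q exits 0 on a match and stays silent, which pairs naturally with &&
grep -q alpha /tmp/words.txt && echo 'alpha found'   # prints: alpha found
grep -q gamma /tmp/words.txt && echo 'gamma found'   # grep fails, echo is skipped
```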

  • || : Used to build OR lists, it allows you to run one command only if another exited unsuccessfully.

       command1 || command2
    

Here, command2 will only run if command1 failed (if it returned an exit status other than 0). Both commands are run in the foreground.

This command can also be written

    if command1
    then true
    else command2
    fi

or in a shorter way if ! command1; then command2; fi.
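Again as an illustration (paths are examples):

```shell
printf 'alpha\n' > /tmp/demo.txt
# grep finds nothing and exits 1, so the right-hand side runs
grep -q gamma /tmp/demo.txt || echo 'no match'   # prints: no match
# a common idiom: report when a required step fails
cd /tmp || echo 'cd failed'                      # cd succeeds, echo is skipped
```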

Note that && and || are left-associative; see Precedence of the shell logical operators &&, || for more information.

  • !: This is a reserved word which acts as the “not” operator (but must have a delimiter). It negates the return status of a command, returning 0 if the command returns a nonzero status and 1 if it returns the status 0. It is also a logical NOT for the test utility.

      ! command1
    

    [ ! a = a ]

And a true NOT operator inside Arithmetic Expressions:

    $ echo $((!0)) $((!23))
    1 0

A.3 Pipe operator

  • | : The pipe operator, it passes the output of one command as input to another. A command built from the pipe operator is called a pipeline.

       command1 | command2
    

    Any output printed by command1 is passed as input to command2.
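A minimal pipeline, using printf output as sample data:

```shell
# printf's output becomes sort's input; pipelines can be chained further
printf 'banana\napple\ncherry\n' | sort
printf 'banana\napple\ncherry\n' | sort | head -n 1   # prints: apple
```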

  • |& : This is a shorthand for 2>&1 | in bash and zsh. It passes both standard output and standard error of one command as input to another.

      command1 |& command2
    

A.4 Other list punctuation

;; is used solely to mark the end of an individual clause within a case statement. Ksh, bash and zsh also support ;& to fall through to the next case and ;;& (not in ATT ksh) to go on and test subsequent cases.
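For example, a minimal case statement, where each branch ends with ;; :

```shell
fruit=apple
case $fruit in
    apple)  echo 'a pome'  ;;   # matched: this branch runs and the case ends
    cherry) echo 'a drupe' ;;
    *)      echo 'unknown' ;;
esac
```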

( and ) are used to group commands and launch them in a subshell. { and } also group commands, but do not launch them in a subshell. See this answer for a discussion of the various types of parentheses, brackets and braces in shell syntax.
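The difference is easy to see with a variable assignment:

```shell
x=outer
( x=inner )    # the assignment happens in a subshell...
echo "$x"      # ...so this prints: outer
{ x=inner; }   # a brace group runs in the current shell (note the spaces and final ;)
echo "$x"      # prints: inner
```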

B. Redirection Operators

POSIX definition of Redirection Operator

In the shell command language, a token that performs a redirection function. It is one of the following symbols:

<     >     >|     <<     >>     <&     >&     <<-     <>

These allow you to control the input and output of your commands. They can appear anywhere within a simple command or may follow a command. Redirections are processed in the order they appear, from left to right.

  • < : Gives input to a command.

      command < file.txt
    

The above will execute command on the contents of file.txt.
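For example, using wc to show that the command reads its standard input rather than a named file (the path is illustrative):

```shell
printf 'one\ntwo\nthree\n' > /tmp/input.txt
# wc reads stdin, so it prints only the line count (3), with no file name attached;
# compare: wc -l /tmp/input.txt would print the count followed by the name
wc -l < /tmp/input.txt
```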

  • <> : same as above, but the file is open in read+write mode instead of read-only:

      command <> file.txt
    

If the file doesn't exist, it will be created.

That operator is rarely used because commands generally only read from their stdin, though it can come in handy in a number of specific situations.

  • > : Directs the output of a command into a file.

      command > out.txt
    

The above will save the output of command as out.txt. If the file exists, its contents will be overwritten and if it does not exist it will be created.

This operator is also often used to choose whether something should be printed to standard error or standard output:

    command >out.txt 2>error.txt

In the example above, > will redirect standard output and 2> redirects standard error. Output can also be redirected using 1> but, since this is the default, the 1 is usually omitted and it's written simply as >.

So, to run command on file.txt and save its output in out.txt and any error messages in error.txt you would run:

    command < file.txt > out.txt 2> error.txt

  • >| : Does the same as >, but will overwrite the target, even if the shell has been configured to refuse overwriting (with set -C or set -o noclobber).

      command >| out.txt
    

If out.txt exists, the output of command will replace its content. If it does not exist it will be created.
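A sketch of how >| interacts with noclobber (using a throwaway file):

```shell
rm -f /tmp/guarded.txt
set -C                               # noclobber: plain > refuses to overwrite
echo first > /tmp/guarded.txt        # fine: the file did not exist yet
echo second > /tmp/guarded.txt 2>/dev/null || echo 'refused by noclobber'
echo second >| /tmp/guarded.txt      # >| overrides noclobber
set +C
cat /tmp/guarded.txt                 # prints: second
```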

  • >> : Does the same as >, except that if the target file exists, the new data are appended.

      command >> out.txt
    

If out.txt exists, the output of command will be appended to it, after whatever is already in it. If it does not exist it will be created.
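For example:

```shell
rm -f /tmp/log.txt
echo 'first run'  >> /tmp/log.txt   # file did not exist, so it is created
echo 'second run' >> /tmp/log.txt   # the new line is appended, nothing is lost
cat /tmp/log.txt                    # prints both lines, in order
```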

  • >& : (per the POSIX spec) with a file descriptor number on the right-hand side (as in 1>&2) it redirects one file descriptor to wherever another already points; with - on the right-hand side (as in 1>&- or >&-) it closes the descriptor.

A >& followed by a file descriptor number is a portable way to redirect a file descriptor, and >&- is a portable way to close a file descriptor.

If the right side of this redirection is a file please read the next entry.
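Two common uses of file-descriptor duplication, sketched (the path is deliberately nonexistent):

```shell
# send a message to standard error: fd 1 is made a copy of fd 2
echo 'warning: something looks off' 1>&2
# duplicate stderr onto stdout so the pipe sees both streams
# (this is the portable spelling of bash's |&); grep -c prints: 1
ls /no/such/path-12345 2>&1 | grep -c 'path-12345'
```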

  • >&, &>, >>& and &>> : (read above also) Redirect both standard error and standard output, replacing or appending, respectively.

      command &> out.txt
    

Both standard error and standard output of command will be saved in out.txt, overwriting its contents or creating it if it doesn't exist.

    command &>> out.txt

As above, except that if out.txt exists, the output and error of command will be appended to it.

The &> variant originates in bash, while the >& variant comes from csh (decades earlier). They both conflict with other POSIX shell operators and should not be used in portable sh scripts.

  • << : A here document. It is often used to feed multi-line input to a command.

       command << WORD
           Text
       WORD
    

    Here, command will take everything until it finds the next occurrence of WORD (Text in the example above) as input. While WORD is often EOF or a variation thereof, it can be any string you like (not only alphanumeric). When any part of WORD is quoted or escaped, the text in the here document is treated literally and no expansions are performed (on variables, for example). If it is unquoted, variables will be expanded. For more details, see the bash manual.

    If you want to pipe the output of command << WORD ... WORD directly into another command or commands, you have to put the pipe on the same line as << WORD, you can't put it after the terminating WORD or on the line following. For example:

       command << WORD | command2 | command3...
           Text
       WORD
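The quoting rule can be seen directly (name is just an example variable):

```shell
name=world
# unquoted delimiter: $name is expanded inside the document; prints: hello world
cat << EOF
hello $name
EOF
# quoted delimiter: the text is taken literally; prints: hello $name
cat << 'EOF'
hello $name
EOF
```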
    
  • <<< : Here strings, similar to here documents, but intended for a single line. These exist only in the Unix port of rc (where the construct originated), zsh, some implementations of ksh, yash and bash.

      command <<< WORD
    

Whatever is given as WORD is expanded and its value is passed as input to command. This is often used to pass the content of variables as input to a command. For example:

     $ foo="bar"
     $ sed 's/a/A/' <<< "$foo"
     bAr
     # as a short-cut for the standard:
     $ printf '%s\n' "$foo" | sed 's/a/A/'
     bAr
     # or
     sed 's/a/A/' << EOF
     $foo
     EOF

A few other operators (>&-, x>&y, x<&y) can be used to close or duplicate file descriptors. For details on them, please see the relevant section of your shell's manual (here for instance for bash).

That only covers the most common operators of Bourne-like shells. Some shells have a few additional redirection operators of their own.

Ksh, bash and zsh also have the constructs <(…), >(…) and =(…) (the latter in zsh only). These are not redirections, but process substitution.

ilkkachu
terdon
  • It would probably be worthwhile noting that not all shells are equal, and specifically highlighting the bash-specific features. – Greg Hewgill Oct 06 '14 at 02:34
  • @GregHewgill yeah, I weaseled out of it by saying that I am discussing with respect to bash. This is being groomed as a canonical Q&A to close the various "What does this weird thingy do" questions and most of them are from users of bash. I'm hoping someone else will pitch in and answer for non bash shells, but highlighting the bash-specific ones makes a lot of sense. I'll have to check though, I don't know which they are off the top of my head. – terdon Oct 06 '14 at 02:51
  • &>, >>>, and <<< are all non-posix as is the reference to not-only non-alphanum chars in a here-doc's name. This answer also discusses very little about how they work - for example, it is almost worse than useless to talk about a simple command and a command without explaining what these are and how the shell decides. – mikeserv Oct 06 '14 at 03:23
  • @mikeserv thanks. They work on bash and zsh though. I don't know what, if anything, is truly bash-specific in that list. I should go through this and add the shells each works in but that would involve finding out first. – terdon Oct 06 '14 at 03:26
  • And ksh93, yash, mksh, and many others. But they will all likely operate slightly differently in edge-cases between shells - thats why these types of things are iffy. – mikeserv Oct 06 '14 at 03:29
  • This may help with the comparative angle. –  Oct 06 '14 at 04:52
  • @mikeserv >>> ? what does that do? where can I find out more about it? I know that you are not talking about >> which is standard. – hildred Mar 11 '15 at 05:06
  • @hildred - I dunno. Maybe I was just trying to be thorough...? More likely it was a typo. Sorry to disappoint... – mikeserv Mar 11 '15 at 07:29
  • The && and || uses seem counterintuitive. Doesn't this mean that exit code 0 evaluates to True and non-zero evaluates to False? – Arc676 Sep 15 '17 at 14:42
  • @Arc676 No, they don't evaluate to true or false, that's a completely different context. This just means that an exit value of non-0 indicates a problem (not false) and an exit code of 0 indicates success (not true). That's always been the way and is quite standard. A non-0 exit code indicates an error in every environment I know of. – terdon Sep 15 '17 at 14:54
  • It should be noted that these symbols lose their special meaning and become ordinary textual characters when they are quoted (with '' or "") or escaped (with \); e.g., echo 'Tom & Jerry' or grep \< prog.c. Bash, specifically, also supports a $'' syntax. These quoting methods — '' and "" (and $'') — differ in ways that are discussed elsewhere. – G-Man Says 'Reinstate Monica' Dec 11 '17 at 22:06
  • There's also the >! analogue of >| to be aware of, in some non-bash shells. – Amit Naidu May 22 '18 at 19:12
  • Can this excellent answer be edited to clarify one small aspect of redirects: Is there actually any difference between cmd1 | cmd2 and cmd2 < $( cmd1 )? Why are there two ways, syntactically, to direct stdin from one process to another, and when do shell scripts tend to use one or the other? Are there any commonly-encountered limitations in other usual syntax + operators one can use, or any common confusions, caused by the "<" appearing after the command it's directing into? – Stilez Mar 08 '19 at 04:40
  • So command < file.txt is the same as cat file.txt | command? – felwithe Jun 24 '20 at 15:54
  • @felwithe essentially, yes. In fact, you very rarely (if ever) need cat file | command, this is a classic example of UUoC (useless use of cat). – terdon Jun 24 '20 at 16:13
  • When you're using &, you should be aware that ctrl + c only stops last command, other commands run in the background and you've to manually find and kill the processes. – Imran Mar 15 '23 at 11:53

Warning regarding ‘>’

Unix beginners who have just learned about I/O redirection (< and >) often try things like

command … input_file > the_same_file

or

command … < file     > the_same_file

or, almost equivalently,

cat file | command … > the_same_file

(grep, sed, cut, sort, and spell are examples of commands that people are tempted to use in constructs like these.)  Users are surprised to discover that these scenarios result in the file becoming empty.

A nuance that doesn’t seem to be mentioned in the other answer can be found lurking in the first sentence of the Redirection section of bash(1):

Before a command is executed, its input and output may be redirected using a special notation interpreted by the shell.

The first five words should be bold, italic, underlined, enlarged, blinking, colored red, and marked with an exclamation mark in a red triangle icon, to emphasize the fact that the shell performs the requested redirection(s) before the command is executed.  And remember also

Redirection of output causes the file … to be opened for writing ….  If the file does not exist it is created; if it does exist it is truncated to zero size.

  1. So, in this example:

    sort roster > roster
    

    the shell opens the roster file for writing, truncating it (i.e., discarding all its contents), before the sort program starts running.  Naturally, nothing can be done to recover the data.

  2. One might naïvely expect that

    tr "[:upper:]" "[:lower:]" < poem > poem
    

    might be better.  Because the shell handles redirections from left to right, it opens poem for reading (for tr’s standard input) before it opens it for writing (for standard output).  But it doesn’t help.  Even though this sequence of operations yields two file handles, they both point to the same file.  When the shell opens the file for reading, the contents are still there, but they still get clobbered before the program is executed. 
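The truncation is easy to reproduce safely on a throwaway file:

```shell
printf 'zebra\napple\n' > /tmp/roster
sort /tmp/roster > /tmp/roster   # the shell truncates the file before sort even starts
wc -c < /tmp/roster              # 0 bytes: the data are gone
```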

So, what to do about it?

Solutions include:

  • Check whether the program you’re running has its own, internal, capability to specify where the output goes.  This is often indicated by a -o (or --output=) token.  In particular,

    sort -o roster roster
    

    is roughly equivalent to

    sort roster > roster
    

    except, in the first case, the sort program opens the output file.  And it’s smart enough not to open the output file until after it has read all of the input file(s).

    Similarly, at least some versions of sed have a -i (edit in place) option that can be used to write the output back out to the input file (again, after all the input has been read).  Editors like ed/ex, emacs, pico, and vi/vim allow the user to edit a text file and save the edited text in the original file.  Note that ed (at least) can be used non-interactively.

    • vi has a related feature.  If you type :%!command and press Enter, it will write the contents of the edit buffer out to command, read the output, and insert it into the buffer (replacing the original contents).
  • Simple but effective:

    command … input_file > temp_file  &&  mv temp_file input_file

    This has the drawback that, if input_file is a link, it will (probably) be replaced by a separate file.  Also, the new file will be owned by you, with default protections.  In particular, this carries the risk that the file will end up being world-readable, even if the original input_file wasn’t.

    Variations:

    • command … input_file > temp_file && cp temp_file input_file && rm temp_file
      which will still (potentially) leave the temp_file world-readable.  Even better:
    • cp input_file temp_file && command … temp_file > input_file && rm temp_file
      These preserve the link status, owner, and mode (protection) of the file, potentially at the cost of twice as much I/O.  (You may need to use an option like -a or -p on cp to tell it to preserve attributes.)
    • command … input_file > temp_file &&
      cp --attributes-only --preserve=all input_file temp_file &&
      mv temp_file input_file
      (broken into separate lines only for readability)  This preserves the mode of the file (and, if you’re root, the owner), but makes it owned by you (if you’re not root), and makes it a new, separate file.
  • This blog (“In-place” editing of files) suggests and explains

    { rm input_file  &&  command … > input_file; } < input_file

    This requires that the command be able to process standard input (but almost all filters can).  The blog itself calls this a risky kludge and discourages its use.  And this will also create a new, separate file (not linked to anything), owned by you and with default permissions.

  • The moreutils package has a command called sponge:

    command … input_file | sponge the_same_file

    See this answer for more information.

Here’s something that came as a complete surprise to me: syntaxerror says:

[Most of these solutions] will fail on a read-only file system, where “read-only” means that your $HOME will be writable, but /tmp will be read-only (by default).  For instance, if you have Ubuntu, and you’ve booted into the Recovery Console, this is commonly the case.  Also, the here-string operator <<< will not work there either, as it requires /tmp to be read/write because it writes a temporary file there as well.
(cf. this question, which includes strace output)

The following may work in that case:

  • For advanced users only: If your command is guaranteed to produce the same amount of output data as there is input (e.g., sort, or tr without the -d or -s option), you can try
    command … input_file | dd of=the_same_file conv=notrunc

    See this answer and this answer for more information, including an explanation of the above, and alternatives that work if your command is guaranteed to produce the same amount of output data as there is input or less (e.g., grep, or cut).  These answers have the advantage that they do not require any free space (or they require very little).  The answers above of the form command … input_file > temp_file && … clearly require that there be enough free space for the system to be able to hold the entire input (old) file and output (new) file simultaneously; this is non-obviously true for most of the other solutions (e.g., sed -i and sponge) as well.  Exception: sort … | dd … will probably require lots of free space, because sort needs to read all of its input before it can write any output, and it probably buffers most if not all of that data in a temporary file.

  • For advanced users only:
    command … input_file 1<> the_same_file

    may be equivalent to the dd answer, above.  The n<> file syntax opens the named file on file descriptor n for both input and output, without truncating it – sort of a combination of n< and n>.  Note: Some programs (e.g., cat and grep) may refuse to run in this scenario because they can detect that the input and the output are the same file.  See this answer for a discussion of the above, and a script that makes this answer work if your command is guaranteed to produce the same amount of output data as there is input or less.
    Warning: I haven’t tested Peter’s script, so I don’t vouch for it.

So, what was the question?

This has been a popular topic on U&L; it is addressed in the following questions:

… and that’s not counting Super User or Ask Ubuntu.  I have incorporated a lot of the information from the answers to the above questions here in this answer, but not all.  (I.e., for more information, read the above-listed questions and their answers.)

P.S. I have no affiliation with the blog that I cited, above.

Kusalananda
  • Since this question keeps on coming up, I thought I’d try my hand at writing a “canonical answer”. Should I post it here (and maybe link to it from some of the other more heavily trafficked questions), or should I move it to one of the questions that actually raises this issue? Also, is this perhaps a situation where questions should be merged? – Scott - Слава Україні Feb 21 '15 at 20:43
  • */tmp* A directory made available for applications that need a place to create temporary files. Applications shall be allowed to create files in this directory, but shall not assume that such files are preserved between invocations of the application. – mikeserv Feb 21 '15 at 22:05
  • @mikeserv: Yeah, (1) I'm quoting syntaxerror, and (2) I said I was surprised. I thought that, if anything would be read-write, it would be /tmp. – Scott - Слава Україні Feb 21 '15 at 22:09
  • Well, the thing @syntaxerror said is doubly strange because, as I think, dash would be the default recovery shell on Ubuntu and it not only does not understand a <<< herestring, but it also gets anonymous pipes for << heredocuments and doesn't mess with ${TMPDIR:-/tmp} for that purpose at all. See this or this for demos on here-document handling. Also why the same amount of output or less** warning? – mikeserv Feb 21 '15 at 22:15
  • @mikeserv: Well, the dd … conv=notrunc and the 1<> answers never truncate the output file, so, if the output of the command is less than the input (e.g., grep), there will be some bytes of the original left over at the end of the file. And, if the output is larger than the input (e.g., cat -n, nl, or (potentially) grep -n), there's a risk of overwriting old data before you've read it. – Scott - Слава Україні Feb 21 '15 at 23:32
  • Well, because most commands will buffer anyway, you run that risk regardless, or, if not that, then a neverending loop of reading the output as it is written. The dd conv=notrunc is more risky where the pipe is concerned - it isn't reading the input file, but the pipe, and, in that scenario, not even common input file is same as output file safeguards will prohibit the loop mentioned. In general, unless you are completely certain of how a filesystem and a command's i/o lib will behave together, you can never safely overwrite a file in place without first assuring your own buffer. – mikeserv Feb 21 '15 at 23:45
  • I don't understand. If a program is buffering reads, then, at any given time, it has probably read more data than it has processed; i.e., the read pointer (at the kernel level) is leading the read pointer at the stdio level, where the processing is occurring. And, conversely, it has written less data than it has processed; i.e., the kernel-level write pointer is trailing the stdio write pointer. Therefore, unless you're doing something that bloats the data (e.g., cat -n), the write pointer will always lag behind the read pointer. – Scott - Слава Україні Feb 21 '15 at 23:58
  • Commands also buffer writes, and sometimes do so by line rather than by byte count (though admittedly, usually only when stdout is a tty - but that is not always the case). In those cases, given variable line lengths, the buffers could overrun one another. It's just not a safe way to approach writing to a file - not without (at least) an intermediate ring-buffer and/or guaranteed safe blocking factor of some kind. And there's the race involved between the shell's opening 1<> and the command's opening the named file. In one of the links I used deleted/here-doc files as ring-buffers. – mikeserv Feb 22 '15 at 00:09
  • In the same vein as the dd, and probably not handling as many caveats, but conceptually simpler - cat a | wc | tee a. – Manav Nov 14 '19 at 08:19
  • @Manav: I don’t understand what you’re saying — how is your command “in the same vein as the dd”? — and I wonder how well you understand my answer.  But I see now that my answer was incomplete — it doesn’t mention that some commands that attempt to work around the problem will work sometimes, at random.  Your command is an example of that — it will work sometimes, but not always. – Scott - Слава Україні Nov 14 '19 at 18:48
  • Is this true also on copy on write file systems like zfs? (Asking just for completeness sake) – user482745 Nov 27 '20 at 11:29

More observations on ;, &, ( and )

  • Note that some of the commands in terdon’s answer may be null.  For example, you can say

    command1 ;
    

    (with no command2).  This is equivalent to

    command1
    

    (i.e., it simply runs command1 in the foreground and waits for it to complete).  Comparably,

    command1 &
    

    (with no command2) will launch command1 in the background and then issue another shell prompt immediately.

  • By contrast, command1 &&, command1 ||, and command1 | don’t make any sense.  If you type one of these, the shell will (probably) assume that the command is continued onto another line.  It will display the secondary (continuation) shell prompt, which is normally set to >, and keep on reading.  In a shell script, it will just read the next line and append it to what it has already read.  (Beware: this might not be what you want to happen.)

    Note: some versions of some shells may treat such incomplete commands as errors.  In such cases (or, in fact, in any case where you have a long command), you can put a backslash (\) at the end of a line to tell the shell to continue reading the command on another line:

    command1  &&  \
    command2
    

    or

    find starting-directory -mindepth 3 -maxdepth 5 -iname "*.some_extension" -type f \
                            -newer some_existing_file -user fred -readable -print
    
  • As terdon says, ( and ) can be used to group commands.  The statement that they are “not really relevant” to that discussion is debatable.  Some of the commands in terdon’s answer may be command groups.  For example,

    ( command1 ; command2 )  &&  ( command3; command4 )
    

    does this:

    • Run command1 and wait for it to finish.
    • Then, regardless of the result of running that first command, run command2 and wait for it to finish.
    • Then, if command2 succeeded,

      • Run command3 and wait for it to finish.
      • Then, regardless of the result of running that command, run command4 and wait for it to finish.

      If command2 failed, stop processing the command line.

  • Outside parentheses, | binds very tightly, so

    command1 | command2 || command3
    

    is equivalent to

    ( command1 | command2 )  ||  command3
    

    and && and || bind tighter than ;, so

    command1 && command2 ; command3
    

    is equivalent to

    ( command1 && command2 ) ;  command3
    

    i.e., command3 will be executed regardless of the exit status of command1 and/or command2.
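These precedence rules can be checked directly at the prompt:

```shell
# | binds tighter than ||: the || tests the whole pipeline's status
false | true || echo 'pipeline failed'   # prints nothing; the pipeline succeeded
# && binds tighter than ; : the final echo always runs
false && echo 'skipped' ; echo 'always runs'
```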

  • Perfect, +1! I said they were not relevant because I did not want to go into that much detail. I wanted an answer that could work as a quick cheatsheet for newbies who're wondering what all the weird squiggles at the end of the various commands are. I did not mean to imply they're not useful. Thanks for adding all this. – terdon Oct 06 '14 at 18:07
  • I'm concerned about the "critical mass" problem -- if we post everything that we could possibly say about shells, we'll end up with our own TL;DR version of the Bash Reference Manual. – G-Man Says 'Reinstate Monica' Oct 06 '14 at 18:09
  • Also worth mentioning: Unlike in languages of the C family, ; by itself (or without a command preceding it) is a syntax error, and not an empty statement. Thus ; ; is an error. (A common pitfall for new users, IMHO). Also: ;; is a special delimiter, for case statements. – muru Oct 06 '14 at 18:10
  • @G-Man exactly, which is why I tried to keep mine more specific. The idea is to have a general Q&A against which we can close things like this or this as duplicates. – terdon Oct 06 '14 at 18:16
  • @muru: Good point, but let's generalize it. *Any* of the control operators that can appear between commands: ;, &&, ||, &, and |, are errors if they appear with nothing preceding them. Also, terdon addressed ;; (briefly) in his answer. – G-Man Says 'Reinstate Monica' Oct 06 '14 at 18:31
  • @G-Man, is there any POSIX-compliant shell (or even commonly used shell) you can point to that will reject a line that ends in a non-escaped newline after && (providing that the next line is a valid command)? I've never seen that behavior (a shell failing on that) and according to Gilles such constructs are standard POSIX grammar.... – Wildcard Mar 29 '16 at 23:43
  • @Wildcard: I'm obviously not understanding you.  It seems like you're asking whether I know of any POSIX-compliant shell that rejects standard POSIX grammar, and that would be an oxymoron. – G-Man Says 'Reinstate Monica' Mar 30 '16 at 03:04
  • @G-Man, well, your answer includes the sentence "Note: some versions of some shells may treat such incomplete commands as errors." So my question is, which versions of what shells? (Because, as I linked above, I thought that all POSIX shells will accept such.) – Wildcard Mar 30 '16 at 05:02
  • @Wildcard: OK, I see where you're coming from.  The key word is "may"; all I was saying was that I don't guarantee that all shells will accept such constructs (i.e., YMMV).  Obviously I wrote that before I knew about the use of the linebreak token in the POSIX shell grammar.  So perhaps it's safe to say that all POSIX-compliant shells will accept them.  I stand by my statement as a general disclaimer; if you find an old enough pre-POSIX shell, such as an actual Bourne shell or older, all bets are off. – G-Man Says 'Reinstate Monica' Mar 30 '16 at 05:35
  • Further, you might consider the following point to be off-topic, since this page is focused on bash.  But a lot of the information here applies to a wide variety of shells, including exotic ones.  The C shell (which was very popular in the 1980s, as it was the first shell to offer command history) recognizes |, &&, and ||, but it does not recognize them at the ends of lines (at least not tcsh version 6.18.01, dated 2012-02-14, which comes with Cygwin) — it gives the error Invalid null command. – G-Man Says 'Reinstate Monica' Mar 30 '16 at 05:37
  • Admittedly nitpicking, but curly braces would probably be a better choice when it comes to explaining command grouping because of the effects grouped commands may have on the current shell: compare e.g. true && foo=bar : ; echo "$foo" with ( true && foo=bar : ) ; echo "$foo" and { true && foo=bar : ; } ; echo "$foo" in dash, ksh93, or Bash/zsh in POSIX mode. – fra-san Apr 21 '20 at 11:15
  • ...but now I see that the question had the [tag:bash] tag, making any point about special builtins less relevant. It still makes sense if we allow command to be a compound one. – fra-san Apr 21 '20 at 11:30