
I understand how redirecting output with > /dev/null causes it not to print to screen.

But for some reason this is not always enough, and some things still do get printed.

In those cases > /dev/null 2>&1 will achieve the desired result.

However this is a little confusing to me. Can someone break down exactly how this works?

What is particularly confusing is the &1 part. If I see & I think "run in background." Don't know what the 1 is for, what if it was a 2?


3 Answers


Programs run by the shell get three streams:

0 - standard input [stdin]
1 - standard output [stdout]
2 - standard error (output) [stderr]

You can think of stdin as your keyboard (without pipes or redirection, it's a simplification).

Then, to print things on screen, each program can write to either standard output or standard error; usually normal output goes to stdout and errors to stderr.

When you redirect with > you are redirecting just stdout. You could use 1>.

When you redirect with 2> you are redirecting just stderr.

So if your program prints something on stderr and you only did:

program > /dev/null

You will still see it.
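To see this concretely, here is a small stand-in for such a program (print_both is an illustrative name made up for the demo, not a real command):

```shell
# Writes one line to stdout (fd 1) and one line to stderr (fd 2).
print_both() {
  echo "normal output"
  echo "error output" >&2
}

# Only stdout is redirected; the stderr line still reaches the screen.
print_both > /dev/null
```

Running this prints only "error output", because the redirection touched fd 1 and left fd 2 alone.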

You have at least two solutions to avoid seeing the stderr output. The first is to redirect stdout and stderr to /dev/null separately:

program > /dev/null 2>/dev/null

Or, and this is the answer to your question, redirect stderr to stdout, which was already redirected to /dev/null:

program > /dev/null 2>&1

That's what 2>&1 does: it redirects stderr to wherever stdout currently points. Bash processes redirections from left to right, which is why 2>&1 comes at the end: by the time it is applied, stdout already points at /dev/null.
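A quick way to see that the order of the redirections matters, using a stand-in helper (say_both is made up for this demo):

```shell
# Writes one line to stdout and one line to stderr.
say_both() {
  echo "to stdout"
  echo "to stderr" >&2
}

# Redirections apply left to right:
# first 2>&1 copies stdout's CURRENT target (the terminal) into fd 2,
# then > /dev/null re-points fd 1 only.
# Result: the stderr line still reaches the terminal.
say_both 2>&1 > /dev/null

# Reversed: fd 1 goes to /dev/null first, then fd 2 copies it.
# Result: nothing is printed at all.
say_both > /dev/null 2>&1
```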

  • Good & clear explanation I think – Seamus Jun 09 '20 at 22:57
  • Are you saying there at the end that if you swapped the order and did "program 2>&1 1>/dev/null" it would only send standard output to /dev/null while standard error would still make it to the screen as standard output? Also am I correct in surmising that it's impossible to swap the data which is fed to standard error and standard output (not to imply that would make any sense)? – M.J. Rayburn Oct 28 '22 at 23:34

It takes a while to wrap your head around the complexities of i/o redirection: I think the major points are:

  1. The file descriptor numbers, 0, 1, 2, etc., are variables like x, y, z, etc., or maybe indices into an array, e.g., fd[0], fd[1], fd[2], ...
  2. The redirection operators <, >, >>, <&, and >& are operators that assign values to these variables.
  3. These assignments are executed strictly left to right across the command line, before the command executes, regardless of where they occur in the command line.
  4. If the file descriptor number is omitted on the left, there is a default, either 0 or 1, at least for the first three operators.

This probably isn't enough to clarify things entirely so let's work through a few examples.


cmd arg arg arg. Here we have no explicit redirections but this simple example helps establish the basic process of what the shell does. So, what does happen here?

  1. The shell forks a new process, still executing code in the shell, that will eventually run cmd, but not yet. Everything from here on happens in the new process.
  2. Set up default i/o redirection: these are roughly equivalent to the explicit redirections 0</dev/tty 1>/dev/tty 2>/dev/tty.
  3. In the middle here, any explicit redirection from the command line would be processed, possibly modifying the default assignments already established.
  4. Run cmd arg arg arg in the already-forked process, keeping the file descriptor associations that have been set up. (This is done with some variant of the exec system call.)

So, what about cmd arg arg arg >file?

  1. Fork a new process, and set up the default file descriptor assignments.
  2. Find the first explicit redirection, >file.
    • Convert this to the equivalent but rarely-seen form 1>file.
    • Open file for output and attach it to file descriptor 1. Note that this overwrites the previous default assignment of stdout done in step (1).
  3. Run cmd arg arg arg in the same process. File descriptor 1 is modified, so stdout goes to file instead of the terminal.
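A minimal concrete run of those steps (the file path is just an example):

```shell
# fd 1 is re-pointed at /tmp/out.txt before echo runs,
# so the text lands in the file instead of on the terminal.
echo "hello" > /tmp/out.txt

cat /tmp/out.txt   # prints: hello
```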

But now, let's say we're getting a lot of output, and we just want to throw it away, not save it in a file. Well, we want cmd arg arg arg >/dev/null.

  • This is exactly the same as the example above in all respects, except that /dev/null is a magic file that always exists, accepts all the bytes you want to give it, and throws them away.

Ok, so a bunch of output is still showing up. Why didn't the redirection above fix it? Well, especially historically, a lot of unix commands separate normal output and error output. The former goes to stdout (file descriptor 1) and the latter goes to stderr (file descriptor 2). This is often convenient, but gosh darn it, you just want the output to go away.

What do you do? Well, somehow, you need to also reassign (re-direct) any bytes going to stderr to /dev/null. Well, the following works:

cmd arg arg arg >/dev/null 2>/dev/null

(Note that this isn't quite the question you started with. That's up soon. Also, be careful of this form: it only does what you want because of the magic properties of /dev/null.)

All right. What's going on here?

  1. As always, fork a new process, and set up the default file descriptor assignments.
  2. Find the first explicit redirection, >/dev/null.
    • Convert this to the equivalent but rarely-seen form 1>/dev/null.
    • Open /dev/null for output and attach it to file descriptor 1. Note that this overwrites the previous default assignment of stdout done in step (1).
  3. Look at the next explicit redirection, 2>/dev/null.
    • Open /dev/null for output and attach that open file to file descriptor 2. This overrides the original file attached to descriptor 2.
  4. Run cmd arg arg arg in the same process. Both file descriptor 1 and 2 have been modified from the default setup.

It is important to realize that there are two separate openings of /dev/null, one attached to file descriptor 1 and one to file descriptor 2. This fact causes trouble if you replace /dev/null with a regular filename in an attempt to catch all the output in a file.


And finally, after all this, the form you actually asked about.

cmd arg arg arg >/dev/null 2>&1

  1. As always, fork a new process, and set up the default file descriptor assignments.
  2. Find the first explicit redirection, >/dev/null.
    • Convert this to the equivalent but rarely-seen form 1>/dev/null.
    • Open /dev/null for output and attach it to file descriptor 1. Note that this overwrites the previous default assignment of stdout done in step (1).
  3. Look at the next explicit redirection, 2>&1.
    • 2>&1 means: assign to file descriptor 2 the same open file currently assigned to file descriptor 1. I.e., it's like y = x or fd[2] = fd[1].
  4. Run cmd arg arg arg in the same process. Both file descriptor 1 and 2 have been modified from the default setup, but now both fd 1 and fd 2 point to the same open file.

In this case, we have opened /dev/null exactly once, not twice as in the previous example. Because /dev/null just discards output, the difference doesn't matter here.

So when does it matter?


Hooray, you've gotten rid of all the output. But after a while, you sort of wish you had saved all the output in a single file in the order you would have seen it on the screen.

Ignoring my earlier warnings, you start out with the seemingly simple

cmd arg arg arg >file 2>file

This works just as I described before, but when you look inside file you'll find stdout and stderr interleaved weirdly, probably with bits of stderr having overwritten the front of the file. Wtf?

By using this form, you opened file twice. Well, duh... But, each open instance of file has an independent output position, and both of them start at zero. So, as stdout races ahead producing megabytes of output, the output position of stderr stays at zero, and only moves ahead when the occasional error message is generated.
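A sketch of that clobbering effect, with an illustrative helper (mixed is made up for the demo):

```shell
# Both redirections open the file independently, each with its own
# file position starting at offset 0.
mixed() {
  echo "AAAAAAAAAAAAAAAAAAAA"   # 21 bytes (incl. newline) via fd 1
  echo "err" >&2                # 4 bytes via fd 2, written at offset 0
}

mixed > /tmp/both.txt 2> /tmp/both.txt
cat /tmp/both.txt
# The stderr write has overwritten the first few bytes of the stdout
# line, because its own position was still at the start of the file.
```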


And finally, the right way to capture both stderr and stdout in a single file.

cmd arg arg arg >file 2>&1

This works exactly the same as explained a couple examples back. The important difference from the incorrect previous form is that both stdout and stderr reference the same open instance of file, and so share an output position.
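The same illustrative helper with the correct form: both streams now share a single open file and therefore a single output position, so nothing is overwritten.

```shell
mixed() {
  echo "first (stdout)"
  echo "second (stderr)" >&2
}

# 2>&1 duplicates fd 1 into fd 2; both writes advance the same offset.
mixed > /tmp/log.txt 2>&1

cat /tmp/log.txt
# first (stdout)
# second (stderr)
```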


In a nutshell:

  • > /dev/null means redirect stdout to /dev/null (don't print it).
  • 2>&1 means redirect stderr to stdout (which is already redirected to /dev/null).

But for some reason this is not always enough, and some things still do get printed.

Just in case you aren't familiar, things can get sent to one of two places:

  • File descriptor 1 is stdout. That's what is normally used when an application outputs content.
  • File descriptor 2 is stderr. It's less common, but applications will generally print to stderr whenever an error occurs.

In most shells both stdout and stderr are printed in the shell by default. When you used > /dev/null you directed stdout to a dummy sink so the shell didn't print stdout anymore. But, stderr was untouched. The "for some reason" you were talking about are the occurrences where applications write to stderr.
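A quick demonstration using ls on a path assumed not to exist; its complaint goes to stderr, so > /dev/null alone does not silence it:

```shell
ls /no/such/path > /dev/null        # error message still appears
ls /no/such/path > /dev/null 2>&1   # fully silent
```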
