6

In my script, I currently have:

exec > >(tee -a /tmp/history.log) 2>&1

This writes both the stderr and stdout of all commands to both the log file and the tty. Unfortunately, this makes the tty very noisy, so I'd rather have only stdout on the terminal and both stdout and stderr in the file (in correct order, so opening the file twice for append won't work). For the life of me, I can't figure out the magic exec invocation (even using tee /dev/tty) necessary to get this to work.

David

4 Answers

3

tee can't directly output to a file descriptor, but you can use process substitution with cat to solve it:

exec 3>&1 &>log 1> >(tee >(cat >&3))

So stdout reaches the terminal through fd 3, while both stdout and stderr go to the log.
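
Step by step, the same redirections do roughly this (a sketch, keeping the answer's log file name):

exec 3>&1                   # fd 3: keep a copy of the original stdout (the terminal)
exec &>log                  # send both stdout and stderr to the log
exec 1> >(tee >(cat >&3))   # route stdout through tee: tee's stdout still points at the
                            # log, while cat copies the stream back to the terminal via fd 3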

choroba
1

exec 2>> /tmp/history.log | tee -a /tmp/history.log

This redirects standard error before kicking standard output to tee, which means that your errors are not being kicked into the standard output pipeline (and thereby the terminal).

DopeGhoti
  • This doesn’t work, or at least not in bash. Bash runs both (all) components of a pipeline in subshells, so this redirects stderr in a subshell that then immediately evaporates (so its environment is lost), and pipes the (vacuous) output of that ephemeral subshell (and nothing else) into tee.  This could work for a command list; e.g., { cmd₁; cmd₂;} 2>> /tmp/history.log | tee -a /tmp/history.log. Does the version you posted work in some shell? – G-Man Says 'Reinstate Monica' Jan 29 '18 at 19:09
  • It worked when I tested it before posting, using bash. As this was a year and a half ago, I couldn't testify as to which version it was, though. – DopeGhoti Jan 29 '18 at 19:16
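
For reference, here is a sketch of the command-list form suggested in the comment above, with placeholder echo commands standing in for the script body:

{
    echo "to stdout"        # shows up on the terminal and, via tee, in the log
    echo "to stderr" >&2    # goes only to the log
} 2>> /tmp/history.log | tee -a /tmp/history.log
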
1

exec 2>>/tmp/history.log 1> >(tee -a /tmp/history.log >&1) may work for you, but there's no guarantee the order will be correct. This ordering issue appears to be a well-known problem.

This command redirects stderr to the history file with 2>>/tmp/history.log, then tees stdout to the same file with 1> >(tee -a /tmp/history.log >&1); the trailing >&1 sends tee's copy of stdout back to standard output (the terminal).
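
Spelled out as two separate exec calls, the same setup looks roughly like this (a sketch):

exec 2>>/tmp/history.log                  # stderr: append straight to the history file
exec 1> >(tee -a /tmp/history.log >&1)    # stdout: tee appends one copy to the file and
                                          # writes the other copy to its stdout, the terminal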

There's a caveat with this method: in a few tests I've done, the output occasionally comes out misordered or even interleaved mid-line.

For instance, I used find /etc/ -name interfaces as a test. The output of this command alone is:

$ find /etc/ -name interfaces
/etc/network/interfaces
find: `/etc/lvm/backup': Permission denied
find: `/etc/lvm/archive': Permission denied
find: `/etc/cups/ssl': Permission denied
/etc/cups/interfaces
find: `/etc/ssl/private': Permission denied
find: `/etc/polkit-1/localauthority': Permission denied

When I run find /etc/ -name interfaces 2>>output 1> >(tee -a output >&1) in a script, the output file contains this:

+ find /etc/ -name interfaces
++ tee -a output
find: `/etc/lvm/backup'/etc/network/interfaces
: Permission denied
find: `/etc/lvm/archive': Permission denied
find: `/etc/cups/ssl': Permission denied
find: `/etc/ssl/private': Permission denied
/etc/cups/interfaces
find: `/etc/polkit-1/localauthority': Permission denied

Notice that this stderr message has been split across two lines, with a line of stdout spliced into the middle:

find: `/etc/lvm/backup': Permission denied

This doesn't happen in every instance, but it's something to be aware of. Also, as mentioned above, the ordering is inconsistent.

clk
1

You want to duplicate stdout, and merge one of the copies with stderr. So only stdout must go through tee, and you will have two different processes writing to the log file. The simplest way is to open the log file twice:

exec > >(tee -a /tmp/history.log) 2>>/tmp/history.log
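
A quick way to check what ends up where (the echo commands are just placeholders):

exec > >(tee -a /tmp/history.log) 2>>/tmp/history.log
echo "to stdout"         # appears on the terminal and is appended to the log
echo "to stderr" >&2     # is appended to the log only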

Alternatively, you can make tee write to the same file descriptor that's used for the direct output. To do this, first redirect stderr to the log file, then call tee to duplicate stdout to stderr.

exec 2>>/tmp/history.log
exec > >(tee -a /dev/fd/2)

This may result in data being out of order, because the program may keep writing data to stderr (going directly to the log file) while tee is busy copying some previous stdout output to the log file.

To avoid this reordering, both streams need to go through a single program that reads from two input pipes. There's no standard command-line tool that takes two inputs and processes them in order. Even with such a program, there's still potential for reordering due to buffering in the pipes themselves; I can't think of a solution for that.

In addition, the program is likely to turn on size-based output buffering at the library level (stdio buffering) because its output is not going to a terminal anymore. This results in a lot more reordering. To avoid this (at a possible performance penalty), you can turn off buffering with stdbuf or unbuffer.
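
For example, with GNU coreutils' stdbuf (a sketch; my_program is a placeholder for whatever produces the noisy output):

exec 2>>/tmp/history.log
exec > >(tee -a /dev/fd/2)
stdbuf -o0 my_program    # disable stdio output buffering so each write reaches tee (and the log) immediately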