
How to redirect standard output to multiple log files? The following does not work:

some_command 1> output_log_1 output_log_2 2>&1

7 Answers


See man tee:

NAME: tee - read from standard input and write to standard output and files

SYNOPSIS: tee [OPTION]... [FILE]...

Accordingly:

echo test | tee file1 file2 file3
  • Can stderr also be redirected to more than one file? – fromnaboo Jun 23 '12 at 09:07
  • Yes, it can be done by virtue of redirection: find / -name test 2>&1 | tee file1 file2 file3 – akond Jun 23 '12 at 18:27
  • @akond, cmd 2>&1 | tee log1 log2

    I tried executing it as above, but I need to press Ctrl-C before it redirects to the second log file, and the output is also printed on the console. I want the command output redirected to the logs but not to the console. Any help is appreciated.

    – doubledecker Jun 25 '12 at 10:34
  • 1
    @doubledecker The tee command writes stdin to file(s) and also to stdout. If you don't want the output to appear on the terminal, you have to redirect to /dev/null like you normally would. – Minix Dec 16 '14 at 08:28
  • Just a side note: tee has a very useful -a switch, which allows you to append to multiple files, just like >> would. – Erathiel Oct 02 '15 at 10:11
  • 9
    It is also possible to append to multiple files: echo test | tee --append file1 file2 – user1364368 Jul 29 '16 at 17:53
  • The problem with tee, I believe, is that it truncates existing output and overwrites it. Is there any other solution than tee? – RajSanpui Aug 10 '16 at 08:43
  • tee -a will append – slashdottir Jun 03 '19 at 17:28
  • To avoid output on stdout you want to redirect to one of your files: ... | tee file1 file2 > file3. Now the output tee sends to stdout gets written to file3 – Alexis Wilke Jan 18 '20 at 00:49
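Pulling the comments above together, here is a minimal sketch: merge stderr into stdout, fan the combined stream out with tee, and discard the terminal copy. The file names log_a and log_b are placeholders.

```shell
# Merge stderr into stdout, copy the combined stream to two log files,
# and send tee's own stdout to /dev/null so nothing reaches the terminal.
# log_a and log_b are example names.
{ echo "to stdout"; echo "to stderr" >&2; } 2>&1 | tee log_a log_b > /dev/null
```

Both files end up with both lines, and the console stays quiet.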

It's an old post but I just found it now...

Instead of redirecting the output to > /dev/null you can redirect it to the last file:

echo "foobarbaz" | tee file1 > file2

Or for appending the output:

echo "foobarbaz" | tee -a file1 >> file2
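To make the truncate-versus-append behavior concrete, a small sketch (file1 and file2 are example names):

```shell
# The first pipeline creates/truncates both files; the second appends
# to both: tee -a appends to file1, and >> appends to file2.
echo "first"  | tee    file1 >  file2
echo "second" | tee -a file1 >> file2
# file1 and file2 each now contain two lines: first, second
```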

Let's say your output is generated from a function, cmd() :

cmd() {
    echo hello world!
}

To redirect the output from cmd to two files, but not to the console, you can use:

cmd | tee file1 file2 >/dev/null

This will work for multiple files, given any data source piping to tee:

echo "foobarbaz" | tee file1 file2 file3 file4 > /dev/null

This will also work:

echo "$(cmd)" | tee file1 file2 >/dev/null

Without the /dev/null redirection, tee will send output to stdout in addition to the files specified.

For example, if this is run from the console, you'll see the output there. Run from a crontab, the output will appear in the status message that is mailed to you (see also Gilles' answer at https://unix.stackexchange.com/a/100833/3998).

This worked for me in bash on Ubuntu 12.04, and has been verified in Ubuntu 14.04 using GNU bash 4.3.11(1), so it should work on any recent GNU bash version.

  • @doubledecker -- this looks like it satisfies your conditions, so can be accepted as the answer. Also, +1 as I've tested this under GNU bash (version 4.3.11(1)-release (i686-pc-linux-gnu)) in Ubuntu 14.04. – belacqua Jun 20 '14 at 19:47
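If you want stdout and stderr in separate sets of files (rather than merged, as in the comments further up), a bash-specific sketch using process substitution; out1/out2/err1/err2 are placeholder names:

```shell
# stdout fans out to out1 and out2, stderr to err1 and err2; nothing
# reaches the terminal. Requires bash: >( ... ) is process substitution.
cmd() { echo normal; echo oops >&2; }   # stand-in for your command

cmd > >(tee out1 out2 > /dev/null) 2> >(tee err1 err2 > /dev/null)
```

Note that the tee processes run asynchronously, so a script that reads the files immediately afterwards may need to wait for them to finish.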

@strugee's answer for zsh is not safe to use. The safe way to do this is to enclose some_command in braces, like:

{ some_command } >output_log_1 >output_log_2

and not like this:

some_command >output_log_1 >output_log_2

The zsh manual's Multios section explains the reason with an example:

There is a problem when an output multio is attached to an external program. A simple example shows this:

cat file >file1 >file2
cat file1 file2

Here, it is possible that the second ‘cat’ will not display the full contents of file1 and file2 (i.e. the original contents of file repeated twice).

The reason for this is that the multios are spawned after the cat process is forked from the parent shell, so the parent shell does not wait for the multios to finish writing data. This means the command as shown can exit before file1 and file2 are completely written. As a workaround, it is possible to run the cat process as part of a job in the current shell:

{ cat file } >file1 >file2

Here, the {...} job will pause to wait for both files to be written.

So the safe way is to always enclose the command in braces if you want to be sure the files are fully written before later commands consume them.


As @jofel mentioned in a comment under the answer, this can be done natively in zsh:

echo foobar >file1 >file2 >file3

or, with brace expansion:

echo foobar >file{1..3}

Internally this works very similarly to the tee answers provided above. The shell connects the command's stdout to a process that pipes to multiple files; therefore, there isn't any compelling technical advantage to doing it this way (but it does look real good). See the zsh manual for more.


I'm unable to comment, but when dealing with many files,

echo "foobarbaz" | tee file1 file2 file3 file4 file5 file6 file7 file8 > /dev/null

can be shortened with brace expansion:

echo "foobarbaz" | tee file{1..8} > /dev/null
  • 2
    How is this really different from the other answers already given? Especially since few people likely want literal file1 through file8 as their names and those are likely just example placeholders for the names of the files – Eric Renouf Dec 28 '15 at 14:56
  • 1
    Likely or not, this is exactly the solution I needed, and thought it could help someone else. – user149146 Dec 28 '15 at 19:32
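For what it's worth, the expansion happens in the shell before tee runs, so tee simply receives the expanded list of names. A quick sketch (the file names are examples):

```shell
# file{1..3} expands to "file1 file2 file3" before tee is invoked,
# so this writes the same line to three files.
echo test | tee file{1..3} > /dev/null
```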

This is the millennial way, using JSON and splitting on newlines. It requires that the producer process in the pipeline writes newline-delimited JSON to stdout; each record is then parsed and fanned out to the file it names.

#!/usr/bin/env bash

# Read newline-delimited JSON records from stdin and write each record
# to the file named by its .file_path field (requires jq).
function fan_out_to_multiple_files() {(

  set -eo pipefail
  local line file_path

  while IFS= read -r line; do

    # Extract the target path; empty if the field is missing.
    file_path="$(jq -r '.file_path // empty' <<<"$line")"

    if [[ -n "$file_path" ]]; then
      printf '%s\n' "$line" > "$file_path"
    fi

  done

)}

You run it like this:

echo -e '{"file_path":"bar123.json"}\n{"file_path":"foo123.json"}' | fan_out_to_multiple_files

This will yield two files on your filesystem, foo123.json and bar123.json, each containing its JSON record.

Note: I write a lot of my bash functions like this:

function has_embedded_subshell {(
   set -eo pipefail; ## now I can set flags without ever affecting the current shell
)}
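A quick sketch of why the subshell body helps: set -e inside the ( ... ) aborts only the subshell, so the calling shell keeps its own options and keeps running. The function name here is illustrative.

```shell
# The body runs in a subshell, so set -e applies only inside it.
strict_fn() {(
  set -eo pipefail
  false                 # aborts the subshell here...
  echo "never printed"
)}

strict_fn || true       # ...but the caller carries on
echo "caller unaffected"
```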