How to redirect standard output to multiple log files? The following does not work:
some_command 1> output_log_1 output_log_2 2>&1
See man tee:
NAME: tee - read from standard input and write to standard output and files
SYNOPSIS: tee [OPTION]... [FILE]...
Accordingly:
echo test | tee file1 file2 file3
cmd 2>&1 | tee log1 log2
I tried executing it as above, but I need to press Ctrl-C to redirect it to the second log file. Also, the output is printed on the console. I want the command output to be redirected to the logs, but not to the console. Any help is appreciated. – doubledecker Jun 25 '12 at 10:34
tee writes stdin to the file(s) and also to stdout. If you don't want the output to appear on the terminal, you have to redirect to /dev/null like you normally would. – Minix Dec 16 '14 at 08:28
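Applied to the command from the question, that looks like this (a sketch; some_command and the log names are the question's placeholders):
some_command 2>&1 | tee output_log_1 output_log_2 >/dev/null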
tee has a very useful -a switch, which allows you to append to multiple files, just like >> would. – Erathiel Oct 02 '15 at 10:11
echo test | tee --append file1 file2 – user1364368 Jul 29 '16 at 17:53
I believe tee truncates existing output and overwrites. Is there any other solution than tee? – RajSanpui Aug 10 '16 at 08:43
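As the comment above notes, tee's -a switch avoids the truncation; a quick illustrative sketch (file names are made up):
echo first | tee log1 log2 >/dev/null      # creates (or truncates) both files
echo second | tee -a log1 log2 >/dev/null  # appends to both instead
cat log1                                   # prints "first" then "second"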
Instead of discarding stdout, you can use a redirect to one of your files: ... | tee file1 file2 >file3. Now output sent to stdout by tee gets written to file3. – Alexis Wilke Jan 18 '20 at 00:49
It's an old post but I just found it now...
Instead of redirecting the output to > /dev/null
you can redirect it to the last file:
echo "foobarbaz" | tee file1 > file2
Or for appending the output:
echo "foobarbaz" | tee -a file1 >> file2
Let's say your output is generated from a function, cmd():
cmd() {
  echo hello world!
}
To redirect the output from cmd to two files, but not to the console, you can use:
cmd | tee file1 file2 >/dev/null
This will work for multiple files, given any data source piping to tee:
echo "foobarbaz" | tee file1 file2 file3 file4 > /dev/null
This will also work:
echo $(cmd) | tee file1 file2 >/dev/null
Without the /dev/null redirection, tee will send output to stdout in addition to the files specified. For example, if this is run from the console, you'll see the output there. Run from a crontab, the output will appear in the status message that is mailed to you (also see Gilles' answer here: https://unix.stackexchange.com/a/100833/3998).
This worked for me in bash on Ubuntu 12.04, and has been verified in Ubuntu 14.04 using GNU bash 4.3.11(1), so it should work on any recent GNU bash version.
Verified in GNU bash, version 4.3.11(1)-release (i686-pc-linux-gnu), in Ubuntu 14.04. – belacqua Jun 20 '14 at 19:47
@strugee's answer for zsh is not safe to use. The safe way to do this would be to enclose some_command in braces, like:
{ some_command } >output_log_1 >output_log_2
and not like this:
some_command >output_log_1 >output_log_2
The zsh manual's Multios section explains the reason with an example.
There is a problem when an output multio is attached to an external program. A simple example shows this:
cat file >file1 >file2
cat file1 file2
Here, it is possible that the second 'cat' will not display the full contents of file1 and file2 (i.e. the original contents of file repeated twice).
The reason for this is that the multios are spawned after the cat process is forked from the parent shell, so the parent shell does not wait for the multios to finish writing data. This means the command as shown can exit before file1 and file2 are completely written. As a workaround, it is possible to run the cat process as part of a job in the current shell:
{ cat file } >file1 >file2
Here, the {...} job will pause to wait for both files to be written.
So the safe way is to always enclose the command in braces if one wants to be sure that the copies are finished before consuming them in subsequent commands.
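A minimal zsh sketch of the safe pattern (some_command and the file names are placeholders):
{ some_command } >output_log_1 >output_log_2  # the shell waits for the multios to finish
diff output_log_1 output_log_2                # safe: both copies are complete by now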
As @jofel mentioned in a comment under the answer, this can be done natively in zsh:
echo foobar >file1 >file2 >file3
or, with brace expansion:
echo foobar >file{1..3}
Internally this works very similarly to the tee answers provided above: the shell connects the command's stdout to a process that pipes to multiple files, so there isn't any compelling technical advantage to doing it this way (but it does look real good). See the zsh manual for more.
Unable to comment, but another way to express
echo "foobarbaz" | tee file1 file2 file3 file4 file5 file6 file7 file8 > /dev/null
when dealing with many files is brace expansion:
echo "foobarbaz" | tee file{1..8} > /dev/null
Note that this only works if the files really have file1 through file8 as their names, and those are likely just example placeholders for the names of the files. – Eric Renouf Dec 28 '15 at 14:56
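For what it's worth, brace expansion is not limited to numeric ranges; an explicit list works for arbitrary names too (these names are illustrative):
echo "foobarbaz" | tee {access,error,debug}.log > /dev/null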
This is the millennial way, using JSON and splitting on newlines. It requires that the producer process in the pipeline write JSON to stdout, which then gets parsed and multiplexed.
#!/usr/bin/env bash

function fan_out_to_multiple_files() {(
  set -eo pipefail  # the body is a subshell, so these flags never leak to the caller
  local line file_path
  # Read one JSON object per line from stdin.
  while IFS= read -r line; do
    # Extract the target path; ignore lines that are not valid JSON or lack the key.
    file_path="$(echo "$line" | jq -r '.file_path // empty' 2>/dev/null || true)"
    if [[ -n "$file_path" ]]; then
      # Write each object to its own file, in the background.
      echo "$line" > "$file_path" &
    fi
  done
  wait  # let every background write finish before returning
)}
You run it like this:
echo -e '{"file_path":"bar123.json"}\n{"file_path":"foo123.json"}' | fan_out_to_multiple_files
This will yield two files on your filesystem, named foo123.json and bar123.json.
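A quick check of the result, assuming the invocation above:
cat bar123.json    # should print {"file_path":"bar123.json"}
cat foo123.json    # should print {"file_path":"foo123.json"}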
Note that I write a lot of my bash functions like this:
function has_embedded_subshell {(
  set -eo pipefail  # now I can set flags without ever affecting the current shell
)}
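A small sketch of why the embedded subshell matters (the function name here is made up for illustration): set -e inside the body aborts only the function, never the calling shell.
function fails_safely {(
  set -e
  false                  # aborts only this subshell
  echo "never reached"
)}
fails_safely || echo "function failed"  # the caller keeps running
echo "caller is unaffected by the function's set -e"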
In zsh, you can use some_command >output_log_1 >output_log_2. – jofel Jun 21 '12 at 08:36