285

Some commands filter or act on input and then pass it along as output, usually to stdout - but other commands just take stdin, do whatever they do with it, and output nothing.

I'm most familiar with OS X, so the two that come to mind immediately are pbcopy and pbpaste, which are means of accessing the system clipboard.

Anyhow, I know that if I want to take stdout and send it to both stdout and a file, then I can use the tee command. And I know a little about xargs, but I don't think that's what I'm looking for.

I want to know how I can split stdout to go between two (or more) commands. For example:

cat file.txt | stdout-split -c1 pbcopy -c2 grep -i errors

There is probably a better example than that one, but I really am interested in knowing how I can send stdout to a command that does not relay it, while keeping stdout from being "muted" - I'm not asking how to cat a file, grep part of it, and copy it to the clipboard - the specific commands are not that important.

Also - I'm not asking how to send the output to a file and stdout - this may be a "duplicate" question (sorry), but the similar questions I found were all asking how to split between stdout and a file, and the answer to those seemed to be tee, which I don't think will work for me.

Finally, you may ask "why not just make pbcopy the last thing in the pipe chain?" and my response is: 1) what if I want to use it and still see the output in the console? 2) what if I want to use two commands which do not relay stdout after they process the input?

Oh, and one more thing - I realize I could use tee and a named pipe (mkfifo) but I was hoping for a way this could be done inline, concisely, without a prior setup :)

cwd
  • Possible Duplicate of http://unix.stackexchange.com/questions/26964/evaluate-multiple-patterns-from-program-output-and-write-into-pattern-specific-f/26966#26966 – Nikhil Mulley Jan 07 '12 at 12:58

10 Answers

341

You can use tee and process substitution for this:

cat file.txt | tee >(pbcopy) | grep errors

This will send all the output of cat file.txt to pbcopy, and you'll only get the result of grep on your console.

You can put multiple processes in the tee part:

cat file.txt | tee >(pbcopy) >(do_stuff) >(do_more_stuff) | grep errors
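
If you only care about what the process substitutions receive and don't want tee's own copy of the input echoed to the console, you can discard tee's stdout (a minor variant of the above):

cat file.txt | tee >(pbcopy) >(do_stuff) >/dev/null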
Mat
  • tee but with two (or more) stdout options - sweet! – cwd Jan 07 '12 at 15:08
  • 38
    Not a concern with pbcopy, but worth mentioning in general: whatever the process substitution outputs is also seen by the next pipe segment, after the original input; e.g.: seq 3 | tee >(cat -n) | cat -e (cat -n numbers the input lines, cat -e marks newlines with $; you'll see that cat -e is applied to both the original input (first) and (then) the output from cat -n). Output from multiple process substitutions will arrive in non-deterministic order. – mklement0 Dec 09 '14 at 04:47
  • 74
    The >( only works in bash. If you try it using, for instance, sh, it won't work. It's important to note this. – AAlvz Dec 16 '14 at 16:31
  • 13
    @AAlvz: Good point: process substitution is not a POSIX feature; dash, which acts as sh on Ubuntu, doesn't support it, and even Bash itself deactivates the feature when invoked as sh or when set -o posix is in effect. However, it's not just Bash that supports process substitution: ksh and zsh support it too (not sure about others). – mklement0 Apr 15 '15 at 02:59
  • 3
    @mklement0 that doesn't appear to be true. On zsh (Ubuntu 14.04) your line prints:
     1 1
     2 2
     3 3
    1$ 2$ 3$
    Which is sad, because I really wanted the functionality to be as you describe. – Aktau Oct 21 '16 at 09:56
  • Follow-up: just tested it on bash; there it works. It would seem that binding to stdout is done differently. – Aktau Oct 21 '16 at 09:57
  • 2
    @Aktau: Indeed, my sample command only works as described in bash and ksh - zsh apparently doesn't send output from output process substitutions through the pipeline (arguably, that's preferable, because it doesn't pollute what is sent to the next pipeline segment - though it still prints). In all shells mentioned, however, it's generally not a good idea to have a single pipeline in which regular stdout output and output from process substitutions are mixed - the output ordering will not be predictable, in a manner that may only surface infrequently or with large output data sets. – mklement0 Oct 22 '16 at 05:39
  • 2
    The tee trick works (thanks!). But it sends everything to standard output as well, which prints loads of garbage to the console, so I added >/dev/null at the end of the command. – Rolf May 21 '17 at 16:21
  • When using tee with pipes (as is the case with >()), it makes sense to take a look at the -p option to keep writing to the other pipes even if one of them closes early. – Tilman Vogel May 16 '23 at 09:25
182

You can specify multiple file names to tee, and in addition the standard output can be piped into one command. To dispatch the output to multiple commands, you need to create multiple pipes and specify each of them as one output of tee. There are several ways to do this.

Process substitution

If your shell is ksh93, bash or zsh, you can use process substitution. This is a way to pass a pipe to a command that expects a file name. The shell creates the pipe and passes a file name like /dev/fd/3 to the command. The number is the file descriptor that the pipe is connected to. Some unix variants do not support /dev/fd; on these, a named pipe is used instead (see below).

tee >(command1) >(command2) | command3
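
You can see the file name the shell passes by handing a process substitution to echo (a quick illustration; in bash this typically prints something like /dev/fd/63, though the exact number varies):

echo >(true)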

File descriptors

In any POSIX shell, you can use multiple file descriptors explicitly. This requires a unix variant that supports /dev/fd, since all but one of the outputs of tee must be specified by name.

{ { { tee /dev/fd/3 /dev/fd/4 | command1 >&9;
    } 3>&1 | command2 >&9;
  } 4>&1 | command3 >&9;
} 9>&1
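
For instance, here is a minimal sketch of the same plumbing with just two commands, reusing the question's pbcopy and grep as stand-ins for command1 and command2:

{ { tee /dev/fd/3 | pbcopy >&9;
  } 3>&1 | grep -i errors >&9;
} 9>&1 <file.txt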

Named pipes

The most basic and portable method is to use named pipes. The downside is that you need to find a writable directory, create the pipes, and clean up afterwards.

tmp_dir=$(mktemp -d)
mkfifo "$tmp_dir/f1" "$tmp_dir/f2"
command1 <"$tmp_dir/f1" & pid1=$!
command2 <"$tmp_dir/f2" & pid2=$!
tee "$tmp_dir/f1" "$tmp_dir/f2" | command3
rm -rf "$tmp_dir"
wait $pid1 $pid2
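
Applied to the question's example, a sketch with a single named pipe might look like this (pbcopy and grep are again stand-ins for arbitrary consumers):

tmp_dir=$(mktemp -d)
mkfifo "$tmp_dir/clip"
pbcopy <"$tmp_dir/clip" & pid1=$!
tee "$tmp_dir/clip" <file.txt | grep -i errors
rm -rf "$tmp_dir"
wait $pid1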
  • 15
    Thanks so much for providing the two alternative versions for those who don't want to rely on bash or a certain ksh. – trr Jul 02 '13 at 04:08
  • tee "$tmp_dir/f1" "$tmp_dir/f2" | command3 should surely be command3 | tee "$tmp_dir/f1" "$tmp_dir/f2", as you want stdout of command3 piped to tee, no? I tested your version under dash and tee blocks indefinitely waiting for input, but switching the order produced the expected result. – Adrian Günter Apr 10 '18 at 19:21
  • 3
    @AdrianGünter No. All three examples read data from standard input and send it to each of command1, command2 and command3. – Gilles 'SO- stop being evil' Apr 10 '18 at 22:04
  • @Gilles I see, I misinterpreted the intent and tried to use the snippet incorrectly. Thanks for the clarification! – Adrian Günter Apr 10 '18 at 23:18
  • If you have no control on the shell used, but you can use bash explicitly, you can do <command> | bash -c 'tee >(command1) >(command2) | command3'. It helped in my case. – gc5 Oct 13 '18 at 18:09
27

Just play with process substitution.

mycommand_exec |tee >(grep ook > ook.txt) >(grep eek > eek.txt)

Here the two grep invocations are separate processes that each receive the same output from mycommand_exec as their process-specific input.

  • Thanks, this was a pretty straightforward response for how to split a pipe to two processes! However it should be noted that the output of mycommand_exec will still be passed UNFILTERED to stdout! – Reu Mar 31 '22 at 17:15
22

If you are using zsh then you can take advantage of the MULTIOS feature, i.e. get rid of the tee command completely:

uname >file1 >file2

will just write the output of uname to two different files: file1 and file2, which is equivalent to uname | tee file1 >file2

Similarly, redirection of standard input

wc -l <file1 <file2

is equivalent to cat file1 file2 | wc -l (please note that this is not the same as wc -l file1 file2; the latter counts the number of lines in each file separately).

Of course you can also use MULTIOS to redirect output not to files but to other processes, using process substitution, e.g.:

echo abc > >(grep -o a) > >(tr b x) > >(sed 's/c/y/')
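
Applied to the question's example, the same MULTIOS trick could look like this (a sketch assuming zsh with its default options; pbcopy and grep stand in for arbitrary consumers):

cat file.txt > >(pbcopy) > >(grep -i errors)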
jimmij
12

There is also pee from the moreutils package. It is designed for exactly this:

pee 'command1' 'command2' 'cat -'
Xorax
  • Should be best answer!! fortune | pee cowsay espeak – zzapper Dec 02 '21 at 15:32
  • pee 'command1' 'command2' 'cat -' doesn't make sense, because pee expects the input on its own stdin, right? Instead, and to reuse the question's example: cat file.txt | pee 'pbcopy' 'grep -i errors' – Abdull Aug 25 '23 at 20:33
6

Capture the command's STDOUT in a variable and re-use it as many times as you like:

commandoutput="$(command-to-run)"
echo "$commandoutput" | grep -i errors
echo "$commandoutput" | pbcopy

If you need to capture STDERR too, then use 2>&1 at the end of the command, like so:

commandoutput="$(command-to-run 2>&1)"
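
Note that echo can mangle output containing backslashes or starting with -, depending on the shell; printf is a safer way to replay the captured output (same idea, just a more robust form):

printf '%s\n' "$commandoutput" | grep -i errors
printf '%s\n' "$commandoutput" | pbcopy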
phemmer
laebshade
  • 4
    Where are variables stored? If you were dealing with a large file or something of that sort, wouldn't this hog up a lot of memory? Are variables limited in size? – cwd Jan 07 '12 at 04:09
  • 1
    what if $commandoutput is huge? It's better to use pipes and process substitution. – Nikhil Mulley Jan 07 '12 at 13:00
  • 4
    Obviously this solution is possible only when you know the size of the output will easily fit in memory, and you're OK with buffering the entire output before running the next commands on it. Pipes solve these two problems by allowing arbitrary length data and streaming it in real time to the receiver as it's generated. – trr Jun 23 '13 at 12:15
  • 2
    This is a good solution if you have small output, and you know that the output will be text and not binary. (shell variables often aren't binary safe) – Rucent88 Jul 20 '14 at 04:48
  • 1
    I can't get this to work with binary data. I think it's something with echo trying to interpret null bytes or some other noncharacter data. – Rolf May 21 '17 at 16:13
6

For reasonably small output produced by a command, we can redirect the output to a temporary file, and then send that temporary file to each command in a loop. This can be useful when the order of the executed commands matters.

The following script, for example, does that:

#!/bin/sh

# Save the script's stdin to a temporary file
temp=$( mktemp )
cat /dev/stdin > "$temp"

# Run each command passed as an argument, feeding it the saved input
for arg
do
    eval "$arg" < "$temp"
done
rm "$temp"

Test run on Ubuntu 16.04 with /bin/sh as dash shell:

$ cat /etc/passwd | ./multiple_pipes.sh  'wc -l'  'grep "root"'                                                          
48
root:x:0:0:root:/root:/bin/bash
terdon
3

Another take on this:

$ cat file.txt | tee >(head -1 1>&2) | grep foo

It works by redirecting tee's file argument to bash's process substitution; that process is head, which prints only the first line (the header) and redirects its own output to stderr (so that it remains visible).

Anthony
1

This may be of use: http://www.spinellis.gr/sw/dgsh/ (directed graph shell). It seems to be a bash replacement supporting an easier syntax for "multipipe" commands.

sivann
1

Here's a quick-and-dirty partial solution, compatible with any shell including busybox.

The narrower problem it solves is: print the complete stdout to one console, and filter it on another one, without temporary files or named pipes.

  • Start another session to the same host. To find out its TTY name, type tty. Let's assume it's /dev/pts/2.
  • In the first session, run the_program | tee /dev/pts/2 | grep ImportantLog:

You get one complete log, and a filtered one.