
It seems that the PIPESTATUS variable is unavailable in dash. Simply running the commands separately does not work either, because the left command produces very large output. I used a fifo for this task:

#!/bin/dash
mkfifo command1 command2
dash -c "cat ./content;code=\${?};echo \${code} > command1 &" | dash -c "md5sum;code=\${?};echo \${code} > command2 &"
echo "$(cat ./command1)" "$(cat ./command2)"

but it hangs, and I don't know why.

Jeff Schaller

2 Answers


You can use a named pipe and connect the two processes manually. Start them in the opposite order, so the left-hand side runs in the foreground and you get its exit status from $? as usual.

#!/bin/sh
dir=$(mktemp -d)
mkfifo "$dir/p"
cat < "$dir/p" > /dev/null &
( echo foo; exit 12; ) > "$dir/p"
echo "exit status: left: $?"
rm -r "$dir"

Or if you want both, get the PID of the background process from $! and wait to get the exit status.

#!/bin/sh
dir=$(mktemp -d)
mkfifo "$dir/p"
( echo foo; exit 12; ) > "$dir/p" &          # left-hand side
pidleft=$!
( cat; exit 34; ) < "$dir/p" > /dev/null &   # right-hand side
pidright=$!
wait "$pidleft"; exitleft=$?
wait "$pidright"; exitright=$?
echo "exit status: left: $exitleft right: $exitright"
rm -r "$dir"

You could still leave the second part of the pipe in the foreground; I just wanted to do it symmetrically.
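A sketch of that variant (same made-up fifo path as above): with the right-hand side in the foreground, its status comes from $? directly and only the backgrounded left-hand side needs a wait.

```shell
#!/bin/sh
# Variant of the script above: the right-hand side stays in the
# foreground, so only the backgrounded left-hand side needs wait(1).
dir=$(mktemp -d)
mkfifo "$dir/p"
( echo foo; exit 12; ) > "$dir/p" &        # left-hand side, backgrounded
pidleft=$!
( cat; exit 34; ) < "$dir/p" > /dev/null   # right-hand side, foreground
exitright=$?
wait "$pidleft"; exitleft=$?
echo "exit status: left: $exitleft right: $exitright"
rm -r "$dir"
```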


You could also e.g. store the exit status to files, and fetch them from there:

#!/bin/sh
( somecmd; echo "$?" > exit1 ) | ( cat; echo "$?" > exit2)
echo "exit status: left: $(cat exit1) right: $(cat exit2)"

I don't think a named pipe will be of much use here, since the exit status is only a couple of bytes. The shell will wait for the pipeline to complete before trying to read exit1 and exit2 on the second line.
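A minimal demonstration of that ordering (file names exit1/exit2 in a temporary directory are made up here): the shell only reaches the final echo after both sides of the foreground pipeline have exited, so the files are guaranteed to exist by then.

```shell
#!/bin/sh
# The pipeline runs in the foreground, so by the time the shell reaches
# the last lines, both exit-status files have already been written.
dir=$(mktemp -d)
( false; echo "$?" > "$dir/exit1" ) | ( cat; echo "$?" > "$dir/exit2" )
left=$(cat "$dir/exit1")
right=$(cat "$dir/exit2")
echo "exit status: left: $left right: $right"
rm -r "$dir"
```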

If you want to use named pipes instead, you'll need to put the pipeline in the background, since the writes to the pipes block until the reading side is opened.

#!/bin/sh
mkfifo exit1 exit2
( somecmd; echo "$?" > exit1 ) | ( cat; echo "$?" > exit2) &
echo "exit status: left: $(cat exit1) right: $(cat exit2)"

However, if the cats reading the pipes don't run for some reason, the subshells writing to them will block indefinitely in the background.
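One defensive sketch (names and layout assumed, not from the answer above): read both fifos from the main shell in the order the writers open them, then wait for the background pipeline, so neither writer is left blocked behind.

```shell
#!/bin/sh
# Background the pipeline, then read both fifos; each blocked echo is
# released as soon as the main shell opens its fifo for reading.
dir=$(mktemp -d)
mkfifo "$dir/exit1" "$dir/exit2"
( true; echo "$?" > "$dir/exit1" ) | ( cat; echo "$?" > "$dir/exit2" ) &
pid=$!
left=$(cat "$dir/exit1")       # unblocks the left-hand writer
right=$(cat "$dir/exit2")      # unblocks the right-hand writer
wait "$pid"                    # reap the background pipeline
echo "exit status: left: $left right: $right"
rm -r "$dir"
```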

ilkkachu
  • I think it is possible that a slow writer on the left and a quick reader on the right could cause data loss. I think a fifo can avoid such data loss. – illiterate Sep 23 '18 at 19:53
  • @illiterate, I'm not sure what you mean. If the right-hand side of the pipe reads until EOF, it shouldn't lose anything written to the pipe. – ilkkachu Sep 23 '18 at 20:16
  • "it shouldn't lose anything written to the pipe." Yes, but you use regular files instead of fifos (You could also e.g. store the exit status to files, and fetch them from there:...) in your answer. – illiterate Sep 24 '18 at 05:06
  • 1
    @illiterate, you mean the script might read the exit1 and exit2 files before they're written? The pipeline runs in the foreground, so the shell waits until both parts have completed. Try something like ( echo foo; exec >&-; sleep 4; false; echo "$?" > exit1 ) | ( cat; echo "$?" > exit2); cat exit1, you'll get the exit status of 1 (from false) four seconds later, when the sleep is over. The right-hand side of the pipeline is long finished by then, you can see that from the timestamps of exit1 and exit2. – ilkkachu Sep 24 '18 at 09:28
  • 1
    If you change exit1 and exit2 to named pipes, then you need to run the pipeline in the background, since the writes to the pipes will block until they're opened for reading. – ilkkachu Sep 24 '18 at 09:30

You can stop your script from hanging by closing stdout on the left-hand side of the pipe and stdin on the right-hand side, just after the actual commands have exited:

Example:

#! /bin/dash
rm -f /tmp/s1 /tmp/s2
mkfifo /tmp/s1 /tmp/s2
{ (echo yes; exit 13); s1=$?; exec >&-; echo $s1 >/tmp/s1 & } | { (cat; exit 17); s2=$?; exec <&-; echo $s2 >/tmp/s2 & }
echo "`cat /tmp/s1` `cat /tmp/s2`"

Replace the (...; exit ..) with your respective commands.
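For instance, plugging the asker's cat/md5sum pair into this pattern might look like the following (the ./content file and the fifo paths are made up for the demonstration, and md5sum's hash output is discarded so only the statuses are printed):

```shell
#!/bin/sh
# The pattern above applied to a cat | md5sum pipeline; works in dash
# as well as other POSIX shells.
dir=$(mktemp -d)
printf 'hello\n' > "$dir/content"
mkfifo "$dir/s1" "$dir/s2"
{ cat "$dir/content"; s1=$?; exec >&-; echo "$s1" > "$dir/s1" & } \
  | { md5sum > /dev/null; s2=$?; exec <&-; echo "$s2" > "$dir/s2" & }
left=$(cat "$dir/s1")
right=$(cat "$dir/s2")
echo "exit status: left: $left right: $right"
rm -r "$dir"
```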

Closing stdin on the right-hand side of the pipe after the actual command has exited causes a write() on the left-hand side to receive a SIGPIPE or EPIPE, instead of blocking, when it tries to pipe to the echo ... >fifo & command on the right (which is itself blocked in an open()). Likewise, closing stdout on the left-hand side causes a read() on the right-hand side to receive an EOF instead of trying to read from the blocked echo ... >fifo & on the left.
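The EOF half of this can be seen in isolation (a timing-based sketch; the 3-second sleep stands in for any lingering background job): once the left-hand side closes its stdout, cat sees EOF immediately instead of waiting for the background job.

```shell
#!/bin/sh
# Without exec >&-, cat would wait for the backgrounded sleep, which
# would still hold the write end of the pipe; with it, cat sees EOF
# at once and the pipeline returns well before the sleep finishes.
start=$(date +%s)
( echo yes; exec >&-; sleep 3 & ) | cat > /dev/null
end=$(date +%s)
elapsed=$((end - start))
echo "pipeline returned after ${elapsed}s"
```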

Thanks to @ilkkachu for the correction.

  • is the wait command necessary? It seems the fifo is blocking and acts as a natural "wait" – illiterate Sep 24 '18 at 06:12
  • It still hangs when the pipe breaks or with some other unexpected write order. The & in the command list ({ ...command... }) is necessary. – illiterate Sep 24 '18 at 08:58
  • not really. 2. try closing the stdin in the second part of the pipe (md5sum; exec 0<&-) to force a SIGPIPE in the cat from the first. – Sep 24 '18 at 09:27
  • @mosvy, it's actually the write end of the pipe staying open that keeps the right-hand side of the pipe and so the whole pipeline waiting. This waits for the sleep since it holds a copy of the pipe fd, so cat has to wait for it: ( echo yes; sleep 4 & ) | cat; echo done. However, this returns immediately (but leaves a backgrounded sleep running): ( echo yes; exec >&-; sleep 9999 & ) | cat; echo done. The same thing if you replace sleep with an echo to a named pipe without a reader, the echo has an fd open to the main pipeline so the cat/md5sum on the RHS doesn't see an EOF. – ilkkachu Sep 24 '18 at 09:43
  • @ilkkachu my bad, I'll update the answer. –  Sep 24 '18 at 09:49
  • Let me take more time to think, but I simulated a broken pipe with: mkfifo command1 command2 ; { dd if=/dev/zero bs=1M count=64 ;echo ${?} > command1 ;} | { true; echo ${?} > command2 ;} & echo "$(cat ./command1)" "$(cat ./command2)" – illiterate Sep 24 '18 at 10:22
  • 1
    @illiterate this should work: mkfifo command1 command2 ; { dd if=/dev/zero bs=1M count=64 ; s=$?; exec >&-; echo $s > command1; } | { true; s=$?; exec <&-; echo $s > command2;} & echo "$(cat ./command1)" "$(cat ./command2)". Sorry for the bogus explanation in the first version of the answer. –  Sep 24 '18 at 10:33
  • It seems one should close the pipe (stdout) on the left (write end) only? Closing stdout on the left is enough and is the normal approach; closing the pipe (stdin) on the right (read end) will produce an unnecessary broken pipe. – illiterate Sep 26 '18 at 05:30
  • 1
    If the reading command exits before the writing one, the latter will receive a pipe broken anyway; it's the same as in cat ... | head -1. Also, without an exec <&- on the right (before echo $s > command2), the following will block reliably on my machine: rm -f command1 command2; mkfifo command1 command2; { dd if=/dev/zero bs=1M count=64; s=$?; exec >&-; echo $s > command1 &} | { false; s=$?; echo $s > command2 &}; echo "$(cat ./command1)" "$(cat ./command2)" –  Sep 29 '18 at 01:16