
I am writing a script that executes a bunch of commands in the background all at once, then waits for them all to finish:

#!/bin/sh -fx
./p1 &
./p2 &
./p3 &
./p4 &
./p5 &
wait

The problem is that if one or more of these processes fail, the script just keeps going. How can I execute all of these commands at the same time and check whether one or more of them fail?

1 Answer


You can wait for individual pids to retrieve their exit status:

#! /bin/sh -
set -o xtrace -o noglob

./p1 & p1=$!
./p2 & p2=$!
./p3 & p3=$!
./p4 & p4=$!
./p5 & p5=$!

fail=0
for pid in "$p1" "$p2" "$p3" "$p4" "$p5"; do
  wait "$pid" || fail=$(( fail + 1 ))
done

echo>&2 "$fail failed"
exit "$(( fail > 0 ))"

Note that we don't necessarily wait for those processes in the order they terminate, but wait pid still retrieves that pid's exit status even if wait is invoked after the process has already exited.
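
For illustration, a minimal sketch of that behaviour, with false standing in for a failing ./pN:

false & pid=$!
sleep 2        # by now the background job has almost certainly exited
wait "$pid"    # the shell still remembers and returns its exit status
echo "$?"      # prints 1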

Another approach is to use the pipefail option which is supported by most sh implementations¹ and will be in the next version of the POSIX standard:
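
Stripped of the fd juggling used below, pipefail on its own behaves like this (a minimal sketch, with false and true standing in for real commands; dash would reject the option):

set -o pipefail
false | true
echo "$?"    # prints 1: the pipeline reports failure because one component failed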

#! /bin/sh -
set -o xtrace -o noglob -o pipefail
alias r='<&3 3<&- >&4 4>&-'

die() { printf>&2 '%s\n' "$@"; exit 1; }

{
  r ./p1 | r ./p2 | r ./p3 | r ./p4 | r ./p5
} 3<&0 4>&1 || die "At least one of them failed"

We start them in a pipeline, but don't actually use the pipe. With the help of that r alias, we restore the stdin and stdout of those processes to the original ones, saved on file descriptors 3 and 4.
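
To make the alias concrete, a command such as r ./p1 is read, after alias expansion, as the redirections followed by the command, i.e. it is equivalent to:

./p1 <&3 3<&- >&4 4>&-
# <&3  : make stdin a copy of fd 3 (the script's original stdin)
# 3<&- : close fd 3 for this command
# >&4  : make stdout a copy of fd 4 (the script's original stdout)
# 4>&- : close fd 4 for this command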


¹ though unfortunately not dash yet, even though most other ash-based shells already support it.