
I want to be able to do the following:

command 2>&1 | shell_script.sh "subject line"

Here the stdout and stderr of running command are piped to shell_script.sh to be sent as the body of an email. What I also want is the exit status of command so that I can append the proper job status to the email subject. With the current implementation, when command 2>&1 is piped, ${PIPESTATUS[0]} is always set to 0. If I remove the 2>&1, however, then the stderr output doesn't get piped into the email body. I've been playing with several different variations but still can't figure it out.

The following is my shell_script.sh:

#!/usr/bin/env bash
SUBJ="$@"
read EMAIL_BODY
jobsuccess=${PIPESTATUS[0]}

if [ $jobsuccess -eq 0 ]  # Job succeeded
then
        echo $EMAIL_BODY | mail -s "$SUBJ Success" my@email.com
else
        echo $EMAIL_BODY | mail -s "$SUBJ Fail" my@email.com
fi

1 Answer

#!/bin/bash

subject="subject line"

tmpfile=$(mktemp)
trap 'rm -f "$tmpfile"' EXIT

if command >"$tmpfile" 2>&1; then
    subject+=" Success"
else
    subject+=" Failure"
fi

mail -s "$subject" my@email.com <"$tmpfile"

So, save the output to a temporary file, then mail the file. Set the subject line according to the exit status of the command before sending. No need for two scripts.

You can't access PIPESTATUS inside the pipeline as the pipeline hasn't yet finished executing. Also, the external script would not have access to it in any case as it's running in its own environment.

It is also not possible to be piped the output of a generic command and then act on the exit status of that command. There is no way to access the exit status of a previous command in a pipeline from within the pipeline itself.
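
For illustration (a minimal sketch, not part of either script), PIPESTATUS is only populated in the shell that ran the pipeline, once the whole pipeline has finished:

false | true
echo "${PIPESTATUS[@]}"    # prints "1 0" here, in the invoking shell, after the pipeline ends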

What you could do is to wrap the command in code that outputs a piece of text, signifying success or failure. This text would be piped together with the rest of the data (by necessity, as an extra piece of info at the end):

{ if command 2>&1; then echo SUCCESS; else echo FAILURE; fi; } | shell_script.sh

... with shell_script.sh being

#!/bin/bash

subject="subject line"

tmpfile=$(mktemp)
trap 'rm -f "$tmpfile"' EXIT

cat >"$tmpfile"

if [[ $(tail -n 1 "$tmpfile") == *SUCCESS* ]]; then
    subject+=" Success"
else
    subject+=" Failure"
fi

sed '$d' "$tmpfile" | mail -s "$subject" my@email.com

This still saves the data to a temporary file (done here with cat, which reads standard input by default) so that its last line can be examined. The script then decides whether the command succeeded and sets the subject accordingly. The data, with the last line removed by sed '$d', is then mailed.

Another option would be to run the mailer script after running the command (commands in a pipeline execute concurrently), and then simply pass the exit status, which by then is available, as a command-line argument:

command >command.log 2>&1
shell_script.sh "$?" <command.log
rm -f command.log

and the script would look like

#!/bin/bash

subject="subject line"

if [[ $1 == 0 ]]; then
    subject+=" Success"
else
    subject+=" Failure"
fi

mail -s "$subject" my@email.com

In this case, the data to be mailed is passed on standard input, which the mail utility will read by default (just like cat did in the previous example).
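
If the subject should still be given on the command line as in the original script, a variation (a sketch only; the argument order is just one possible choice) is to pass the exit status as the first argument and treat the remaining arguments as the subject:

#!/bin/bash

# Sketch: the first argument is the exit status of the command,
# the remaining arguments form the subject line.
status=$1
shift
subject="$*"

if [[ $status == 0 ]]; then
    subject+=" Success"
else
    subject+=" Failure"
fi

mail -s "$subject" my@email.com

which would then be invoked as

command >command.log 2>&1
shell_script.sh "$?" "subject line" <command.log
rm -f command.log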

  • Thanks for the quick response. I'm writing it as an external script because command is meant to be any generic command/job I feed in. How would I change your code so that it's compatible to be run in the command | shell_script.sh "subject line" framework? – nwly Nov 09 '19 at 23:47
  • @nwly See updated answer. You simply can't have the exact pipeline that you propose as there is absolutely no way for the shell script to know whether the command failed or not. – Kusalananda Nov 09 '19 at 23:57
  • Thanks for the detailed explanation. – nwly Nov 10 '19 at 00:35