
Somewhat re-asking my question here, since I mistakenly asked it on SO with a bash/shell tag and figured this is a more appropriate place.

I'm writing a shell script within AWS DataPipeline that connects to a relational database. The exit code is 0 even when a SQL error occurs, so I need to capture and redirect stderr.

I'm redirecting stderr to another file so I can check its contents, but I still want stderr itself to be populated. The reason is that DataPipeline captures all of stderr and stdout and puts them into one log, and because of the redirection it's not capturing the stderr from the failed SQL command. Is there a way to still have stderr populated? What I currently have is:

/bin/snowsql -f /home/scripts/dev/dev.sql 2> /home/scripts/dev/stderr.txt

The following worked with Bash:

/bin/snowsql -f /home/scripts/dev/dev.sql 2> >(tee /home/scripts/dev/stderr.txt >&2)

I can't find the proper syntax for process substitution in sh. What's the equivalent of the above in sh?

1 Answer


In sh, you can only pipe stdout to another command, so you have to swap stdout and stderr first:

/bin/snowsql -f /home/scripts/dev/dev.sql 3>&2 2>&1 >&3 3>&- | tee /home/scripts/dev/stderr.txt

Redirections are processed from left to right, except for the pipe, which is set up before anything else. So here is what the command does:

  1. open the pipe and connect stdout to it (|)
  2. make file descriptor 3 (an arbitrary choice) a copy of stderr, so the original stderr can be reused later (3>&2)
  3. make stderr a copy of stdout, which at this point is connected to the pipe (2>&1)
  4. make stdout a copy of file descriptor 3, i.e. the saved original stderr (>&3)
  5. close file descriptor 3, which is no longer needed (3>&-)

In this state, stdout and stderr are swapped, and the pipe receives what was stderr. We then let tee do its job: it writes that stream to the file and emits it again on its own stdout. (Note that the two streams stay swapped on the far side of the pipe: the original stderr ends up on stdout and vice versa. That is harmless here, since DataPipeline merges both streams into a single log anyway.)
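
To see the swap in action, here is a minimal sketch for a POSIX sh, where emit is a hypothetical stand-in for snowsql that writes one line to each stream:

emit() {
    echo "to stdout"
    echo "to stderr" >&2
}

# same redirections as above: fd 3 saves the original stderr,
# the streams are swapped, and what was stderr flows into the pipe
emit 3>&2 2>&1 >&3 3>&- | tee /tmp/stderr.txt

cat /tmp/stderr.txt    # prints "to stderr"

If you redirect the script's own streams, you can confirm that "to stdout" now travels on stderr and "to stderr" on stdout.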

Thanks to this StackOverflow answer for the tip: https://stackoverflow.com/a/2381643
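
With stderr captured in a file, the original goal of failing on SQL errors despite snowsql's zero exit status can be handled with a follow-up test. A minimal sketch, assuming snowsql writes to stderr only when something goes wrong:

/bin/snowsql -f /home/scripts/dev/dev.sql 3>&2 2>&1 >&3 3>&- | tee /home/scripts/dev/stderr.txt

# fail the script if anything was written to stderr
if [ -s /home/scripts/dev/stderr.txt ]; then
    exit 1
fi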
