
I am debugging a program that has a tendency to deadlock. On the occasions when it runs fine, ./program > log or ./program | tee log saves a copy of the output, but with either of these, if the program deadlocks and I kill it with ^C (SIGINT), the log is always empty. I know the program has written something to stdout, because that happens before the part where anything can get stuck, and indeed when I just run ./program I see the output in the terminal.

I'd like a way to tee the output so that if I have to kill the program, the output so far is still saved in a log file. My shell is bash.

  • When you kill the program, any output might be getting sent to STDERR instead of STDOUT; what happens when you do ./program > log 2>&1? – HBruijn Mar 03 '17 at 15:00
  • The lines that went to stderr in the first place are now in the log, but not the stdout ones. –  Mar 03 '17 at 15:08
  • What you are looking for is to enable job control using set -m in your script and using trap to catch the signal (see the sketch after these comments). http://mywiki.wooledge.org/ProcessManagement – Valentin Bajrami Mar 03 '17 at 15:17
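
A minimal sketch of what this last comment suggests (hypothetical; it manages the job and the signal but does not by itself change the child's stdio buffering, which the answer below addresses):

    #!/bin/bash
    # Enable job control so the pipeline runs as a job in its own process group,
    # then trap SIGINT and forward it to that job before exiting.
    set -m
    ./program | tee log &
    trap 'kill -INT %1 2>/dev/null' INT
    wait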

1 Answer


Many (most?) programs, and practically all C or C++ programs, 'fully' buffer stdout when it is a pipe or disk file, or in general whenever it is not a tty (isatty() is false), and any output still sitting in that buffer, not yet flushed to the OS, is lost when you kill the program. Programs normally do NOT buffer stderr this way, which is why redirecting stderr works.
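
An easy way to see the effect (illustrative only, assuming a file like /var/log/syslog exists; grep, like most C programs, line-buffers its output to a terminal but fully buffers it to a pipe):

    # Matching lines appear as they arrive: stdout is a tty, so stdio line-buffers.
    tail -f /var/log/syslog | grep something

    # Matching lines sit in grep's stdio buffer (typically 4-8 KiB) until it fills
    # or grep exits; kill the pipeline early and those lines never reach tee.
    tail -f /var/log/syslog | grep something | tee log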

In general, use stdbuf -oL program ... or possibly -o0. Some programs also have their own options for this, e.g. GNU sed has -u/--unbuffered.
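
Applied to the command in the question (a sketch; stdbuf only affects programs that use C stdio and do not set their own buffering):

    stdbuf -oL ./program | tee log   # line-buffered: each completed line reaches tee/log
    stdbuf -o0 ./program > log       # unbuffered: every write goes straight to the file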

Alternatively, use something that runs the program under a pty, so that stdio thinks it is talking to a terminal and stays line-buffered: script, screen, expect (or its simplified wrapper unbuffer), or ssh -t.
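
For example (assuming util-linux script and the expect package are available):

    script -q -c './program' log    # runs ./program on a pty, saves a transcript to "log"
    unbuffer ./program | tee log    # expect's unbuffer gives ./program a pty for stdout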

For more details and options, see Turn off buffering in pipe.