
I am using daemontools to monitor a process and its output, and multilog to write the logs to disk.

The run script for the log is:

#!/bin/bash
PATH=/usr/local/bin:/usr/bin:/bin
cd /usr/local/script_dir
# t: prefix each line with a tai64n timestamp; s16777215: rotate at ~16 MB;
# n50: keep at most 50 rotated files; '!processor': filter rotated files
exec multilog t s16777215 n50 '!tai64nlocal' '!/bin/gzip' /var/log/script_log

The monitored process also writes output to stderr, so the run script for the process contains the following lines to redirect stderr to stdout:

exec 2>&1
exec ./my_process

However, while tailing the log file, I see hundreds of lines arriving in bursts (the monitored process writes output every few seconds), and the timestamps on the lines within a burst differ only at the sub-microsecond level. I know from the nature of the process that the real time difference between the log lines is not that small. Clearly multilog is buffering the output and adding the timestamps only when it is ready to write to the file. I would like the timestamps to more closely reflect the time at which each line was produced. How can this be fixed?

donatello
  • Pretty sure it's not multilog that's buffering the data. Rather, I suspect that it's the program being monitored that's buffering, since buffering is the default behavior of standard output unless you do something special. What is the process? Do you have its source? – rra Mar 18 '13 at 06:07
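rra's diagnosis is easy to reproduce in isolation. The following one-liners are an illustrative sketch, not taken from the original question: when a Python program's stdout is a pipe rather than a terminal, it is block-buffered by default, so short lines accumulate and arrive in a single burst, while the -u flag makes them appear as they are written.

# Without -u: all five lines arrive in one burst after ~5 seconds,
# because stdout is block-buffered when it is not a terminal.
python3 -c '
import time
for i in range(5):
    print("line", i)
    time.sleep(1)
' | cat

# With -u: each line is delivered to the pipe as soon as it is printed.
python3 -u -c '
import time
for i in range(5):
    print("line", i)
    time.sleep(1)
' | cat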

1 Answer


The script being monitored was a Python script. To make all of the interpreter's standard streams unbuffered, one can simply pass the -u option to the interpreter. This solved the problem in my case.
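For completeness, here is a sketch of what the corrected run script for the process could look like; my_process.py is a placeholder, since the original post refers to the program only as ./my_process:

#!/bin/bash
PATH=/usr/local/bin:/usr/bin:/bin
cd /usr/local/script_dir
# Merge stderr into stdout so multilog captures both streams
exec 2>&1
# -u turns off buffering on the interpreter's standard streams
exec python -u ./my_process.py

Setting PYTHONUNBUFFERED=1 in the environment has the same effect as -u, which is handy when the interpreter invocation itself cannot be changed (for example, when the script is started via its shebang line).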

donatello