
I am starting the following command from a bash script (arguments omitted for simplicity):

avconv | sox | nc

I am starting about 150 such commands on the same box at the same time.

The last nc command sends the stream to another host. When that host dies, nc dies, but avconv and sox can stay alive. When I then run killall sox in this situation, avconv stays alive.

Shouldn't there be a SIGPIPE?

When I execute the bash script manually and nc dies, the other two processes die too. But not when I start many such scripts at once.

Is it possible that SIGPIPE does not work when pipe buffers are full or the system is otherwise highly contended? How can I work around it?

  • If the nc program goes away, sox will only get a SIGPIPE when it tries to write some data to the pipe. So sox needs to be outputting data. sox probably buffers output, so it might need to produce several thousand characters before it actually tries to push the data through the pipe. – icarus Nov 13 '16 at 15:26
  • http://unix.stackexchange.com/questions/29964/are-linux-utilities-smart-when-running-piped-commands/29970 is related – icarus Nov 13 '16 at 15:33
  • Is there a way to bind their lifetimes together? I want all processes to terminate if one terminates. – Tynix Nov 13 '16 at 17:05
  • or is there a way to kill the parent process when no data flows through the pipe for some time? Example usage: avconv | sox | check-data-flow-or-kill-parent | nc – Tynix Nov 13 '16 at 17:11
  • Of course you can bind the lifetimes together. The question is do you want to? For example if sox exits before nc has sent out its data do you want it to exit, dropping the unsent information? Certainly one could write a check-data-flow-or-kill-parent, it is not hard. One caveat is that the parent may not be what you think it is, it depends on which shell creates the pipeline. What is the problem you are trying to solve? Would doing killall avconv solve your problem? If avconv goes away then sox will get eof on its input, and probably will exit, then nc will get eof... – icarus Nov 13 '16 at 17:24
  • This is for a system that records a stream from the internet, converts it and sends it off to another host in a "best effort" way. It's totally okay for the whole chain to terminate if one process terminates and there is unprocessed data. If there is an issue with either conversion, the destination host or otherwise, some loss is unavoidable and acceptable. I just need the chain to fail fast when there is an issue.

    killall is not an option because I have 150 parallel instances of it. If one instance of nc has an issue, the other instances don't necessarily need to have an issue.

    – Tynix Nov 13 '16 at 19:08
  • The problem I am trying to solve: I have a retry mechanism for the avconv | sox | nc chain. I want the chain to restart doing its work whenever there is any problem with any of the 3 processes. Unfortunately when the destination host goes down, I see many instances of avconv and sox still up, although nc is terminated. For that reason, my retry mechanism has no effect because some child processes are still running. – Tynix Nov 13 '16 at 19:19
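The write-triggered nature of SIGPIPE described in the comments can be seen with a tiny pipeline. This is a sketch using `yes` and `head` as stand-ins for the writer and the dying reader; `PIPESTATUS` is bash-specific:

```shell
#!/bin/bash
# head exits after reading one line; yes keeps running and is only
# killed by SIGPIPE when it next writes into the dead pipe, not at
# the moment head exits.
yes | head -n 1 >/dev/null
echo "yes exited with status ${PIPESTATUS[0]}"   # 141 = 128 + SIGPIPE (13)
```

The same mechanism explains the original pipeline: avconv and sox survive until they next write, and a process that is buffering or idle may never get the signal.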

1 Answer


The following will kill the process group when nc exits:

#!/bin/sh
avconv x y z | sox a b c | { nc somewhere port ; pkill -g 0 ; }

Depending on how this gets started, you might need a utility like setsid to restrict which processes are in the group. You could also replace -g 0 with -P $$. This works because the shell runs the pkill command after nc finishes.
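A minimal, runnable sketch of the same pattern, with `sleep` commands standing in for the long-running avconv/sox stages (the command names and durations are placeholders):

```shell
#!/bin/sh
# Two long-running stand-ins feed a short-lived final stage. When
# the final stage finishes, pkill -P $$ kills this script's direct
# children still matching "sleep", so the whole pipeline terminates
# after about one second instead of lingering for 100.
sleep 100 | sleep 100 | { sleep 1 ; pkill -P $$ sleep ; }
echo "pipeline finished"
```

Note that $$ expands to the top-level script's PID even inside the pipeline's subshells, which is why -P $$ reaches the sibling stages.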

See also Kill all descendant processes

icarus
  • pkill -g 0 seemed to somehow kill the parent's parent process too. pkill -P $$ worked wonderfully! Thanks! – Tynix Nov 14 '16 at 06:38