
The problem is quite simple: I find it useful to be able to switch the output of a running program on (and off) whenever I need. To be more precise, I want to be free to redirect its standard output (and error) from the current shell to another one, to /dev/null, to a file, or back to the original shell.

I'm looking for something direct (and safe) that takes advantage of the fact that I know in advance I will be switching the output on and off. I post an answer too.


Below is my first attempt. Let's suppose I'm working in terminal number 35.

$ who am I                     # the command to ask which terminal I am on
myuser  pts/35  ...            # the answer

My attempt starts with a symbolic link to that terminal:

ln -s /dev/pts/35  MyOutput  # The command to link

The idea is to launch the program with its redirection set to that link:

./Execute_program > MyOutput

It works: the output is redirected to my terminal. But when I give the command to change the link,

ln -sf  /dev/null  MyOutput

the link changes, but the redirection does not change as I had hoped: it stays the same for the already running process. If I launch a new program in the same way, its redirection follows the new target of the link.
If I start again with the link pointing to /dev/null, the output is suppressed as expected, but again, when I change the link the redirection does not change.
A link to a link gives the same result, and using a hard link does not change the situation either.

Sadly, the symlink is resolved when the shell opens it for the redirection at launch time, so the running process keeps a file descriptor to the original target and never looks at the link again.
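One way to see this (on a Linux system with /proc, assuming the program's PID is at hand) is to look at the open file descriptor of the running process: it points at the resolved target, not at the link itself.

$ ./Execute_program > MyOutput &     # launch through the symlink
$ readlink /proc/$!/fd/1             # stdout of the job just launched
/dev/pts/35
$ ln -sf /dev/null MyOutput          # retarget the link...
$ readlink /proc/$!/fd/1             # ...but fd 1 still points to the old target
/dev/pts/35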

Is there any way to do the job in a similar fashion, or do I have to find out how to tell the process (or its parent) that the link has changed?


Notes:

This is a question in the series "How to redirect the output of a running process?", but it doesn't start from "Oops, I launched the program and forgot to redirect the output; now I want to do it". It starts from the opposite point: "How can I launch a program with the explicit aim of redirecting its output elsewhere at a later stage?".

I intend to use such a procedure when it is not possible (or convenient) to modify the source code of the program.
I have read that it is possible to disown the process, to use screen and transfer from one screen to another, or to attach a debugger and intercept the process. All those solutions can work, given a good amount of experience and effort, but some of them carry a degree of risk too.

Hastur

3 Answers


I found a solution via mkfifo, which creates a named pipe (a FIFO).
It is as simple as creating a symbolic link, and it is possible to use all the redirections allowed by the shell.

mkfifo MyOutput

ls -l gives

0 prw-r--r-- 1 My_username My_username  0 May 11 17:45 MyOutput|

I can launch the program with redirection to that named pipe

./Execute_program > MyOutput  &  cat MyOutput 

and the output starts to flow to the terminal.

If I press Ctrl+C I interrupt the flow but not the running process (I have to use something like kill pid or kill %1 for that).
When, at a later time, I ask the FIFO to dump to the terminal again (with another cat MyOutput), it starts dumping the output from that moment on.
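Recapping the whole workflow with the same names (nothing new, just the commands above in one place):

mkfifo MyOutput                   # create the named pipe once
./Execute_program > MyOutput &    # launch writing into the pipe (it waits until a reader opens it)
cat MyOutput                      # attach: output flows to this terminal; Ctrl+C detaches
cat MyOutput >> NewRealFile       # later: re-attach, this time appending to a file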

Notes and warnings:

  • Until I ask the FIFO for a dump, it will hold the output.
    As soon as I ask the first time, it will flush it all out.
  • I can redirect (or append) to another file: cat MyOutput >> NewRealFile (or to /dev/null; see the sketch after this list).
  • I can use cat MyOutput from another terminal too!
  • Warning: if I ask two different programs (or instances) to redirect their output to the same FIFO, the streams are merged (there is no a priori way to tell which program a given line came from).
  • Warning: if I read the FIFO from two or more places at once (maybe from different terminals), each reader gets some of the lines, splitting the output among the requesters. Maybe there's a safe workaround...
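A possible way to "switch off" the output during uninteresting periods (just a sketch; the DRAIN variable name is only for illustration, and keep in mind the pipe-buffer caveat raised in the comments below) is to keep a throw-away reader draining the pipe into /dev/null and kill it when it is time to watch again:

cat MyOutput > /dev/null &  DRAIN=$!   # discard the output while nobody is watching
kill "$DRAIN"                          # stop discarding when interested again...
cat MyOutput                           # ...and watch the output live from this point on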
Hastur
    Beware that after you kill cat, the program will hang again when the pipe gets full (64kB on Linux). – Stéphane Chazelas May 12 '14 at 08:46
  • @StephaneChazelas Thanks for the spot (+1), but it didn't happen on my system. I checked up to 30 MB. Between the moment I stop the first cat and the moment I start the new one, the program continues to run normally, keeps filling other files and so on. I only lose the output produced in that period. I think it is redirected to /dev/null, which is what I want. The program does hang if I don't start the first cat when I launch the program and the output is bigger than the buffer. – Hastur May 12 '14 at 11:13

If I understand your question correctly:

script 1>>~/out.fifo 2>>~/error.fifo 

then to monitor you can do something like:

watch cat ~/out.fifo
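Presumably ~/out.fifo and ~/error.fifo are named pipes created beforehand; a minimal setup, assuming those paths:

mkfifo ~/out.fifo ~/error.fifo    # create the FIFOs before launching the script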

You could use real files instead of FIFOs

script 1>/tmp/$0-out 2>/tmp/$0-error

then tail -f them; they will be replaced when you run the script again.
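For example (hypothetical script name, just to make the pattern concrete):

./myscript 1>/tmp/myscript-out 2>/tmp/myscript-error &
tail -f /tmp/myscript-out      # follow stdout; Ctrl+C stops only the tail, not the script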

I prefer the second method, or just using a multiplexer (e.g. screen, or whatever the popular reincarnation is this week)

screen -t "name" bash -c 'script'

then

screen -r

to "monitor" and ctrl+a d to detatch.

Make sure to pause at the end of your script if you want to see the output after it runs.
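For example, one possible end-to-end sketch (the trailing read is just one way to pause; an assumption, not a fixed recipe):

screen -t "name" bash -c './Execute_program; read'   # run in a screen window, pause at the end
screen -r                                            # re-attach later to look at the output
# Ctrl+A then d detaches again, leaving the program running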

coteyr
  • Thanks for the answer. Mainly I want to turn the output off during uninteresting periods and switch it on again later (on request), maybe redirecting it to different files for different cases, without stopping or pausing the program that generates the output. If I keep the whole log it can quickly grow huge. The screen solution is more versatile, but I don't know, e.g., how it interacts with a cluster when the process migrates between nodes. Do you know if there are buffer limits for screen? I mean, if I leave the process running for days/weeks, what happens to screen? – Hastur May 12 '14 at 12:05
  • You can leave screen running for extended periods of time. It's a multiplexer, so it doesn't use buffers in the way you think; it's more like having two xterms open. – coteyr May 12 '14 at 23:44

If I were you, I'd just redirect the output to a file, then tail -f that file when I wanted to see the output. You may also want to limit the size of that file if you think the output is going to be rather long, or have the file written to RAM instead of to disk if you think the output is going to come rather fast; but how to do those things is another question.
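For instance, a sketch of both ideas on Linux (assuming /dev/shm is mounted as a RAM-backed tmpfs; the file name is made up):

./Execute_program >> /dev/shm/prog.log 2>&1 &   # append to a RAM-backed file
tail -f /dev/shm/prog.log                       # follow the output only when interested
truncate -s 0 /dev/shm/prog.log                 # empty the file now and then to bound its size

Appending (>>) matters here: with O_APPEND the writer keeps writing at the current end of the file, so truncating it actually frees the space instead of leaving a sparse gap.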