18

Problem

I execute a command that outputs LOTS of information over SSH. For example, I foolishly add debug output inside a loop that executes a million times, or just run cat /dev/urandom for kicks.

The terminal is flooded with information.

[Example of what I'm talking about]

I want to terminate the command ASAP and fix my program. I don't care what it prints. Now, the thing is that I press Ctrl+C ASAP (in the example above I pressed it immediately after running the command), but it still takes its time to print all the information I don't even need.

What I've tried

I tried pressing Ctrl+C so hard that it had funny results when the terminal finally caught up:

OUTPUT HERE^C
rr-@burza:~/xor$ ^C
rr-@burza:~/xor$ ^C
rr-@burza:~/xor$ ^C
^C^C

^C^C^C^C^C^C^C^C^C^C^C
^C^C^C^C^C^C^C^C^C^C
^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C
^C^C^C^C^C^C^C
^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C
^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C^C
^C^C^C^C^C^C^C^C^C^C^C^C^C
rr-@burza:~/xor$ ^C
rr-@burza:~/xor$ ^C
rr-@burza:~/xor$ ^C
rr-@burza:~/xor$ ^C
rr-@burza:~/xor$ ^C
rr-@burza:~/xor$ ^C
rr-@burza:~/xor$ ^C
rr-@burza:~/xor$ ^C
rr-@burza:~/xor$ ^C

I also read about Ctrl+S, which is apparently used to tell the terminal "stop output, I need to catch up", but it seems to do nothing.
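
For reference, whether Ctrl+S is honored at all depends on the terminal's software flow control setting; a rough way to check it (assuming a GNU stty on the remote side):

stty -a | tr ' ' '\n' | grep ixon    # "ixon" means Ctrl+S/Ctrl+Q are honored, "-ixon" means they are ignored
stty ixon                            # enable software flow control if it was off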

Miscellaneous details

I'd prefer not to alter the command I run, so that I can rescue myself in any situation, even when I don't remember in advance that the program could end up like this.

My SSH client runs on Cygwin (CYGWIN_NT-6.1-WOW64 luna 1.7.30(0.272/5/3) 2014-05-23 10:36 i686 Cygwin) in MinTTY with terminal type set to xterm-256color.

SSH server runs on Debian (Linux burza 3.2.0-4-686-pae #1 SMP Debian 3.2.51-1 i686 i686 i686 GNU/Linux).

rr-
  • On which system are you running the actual programs that produce the output? Linux, Unix, or Windows? Linux and UNIX ought to accept Ctrl-O, which means "discard any output that is written to this terminal". – Mark Plotnick Jun 18 '14 at 18:24
  • The server runs on Debian. Edited the question. Ctrl-O seems to do nothing as well. Perhaps it's the client thing? – rr- Jun 18 '14 at 18:25
  • You might try starting your xterm with the -j option, to enable jump scrolling. The basic problem is that the remote can send data faster than the terminal window can display it - by default, it has to bitblt the contents of the window every time a new line is printed. A whole lot of data can get buffered up by the time your Ctrl-C gets received by the remote system, and your terminal program will try to display all of it. – Mark Plotnick Jun 18 '14 at 18:32
  • Just an idea: if you have some exact commands that you usually accidentally execute and they generate a lot of output, why not just append some aliases to .bashrc? – psimon Jun 18 '14 at 18:36
  • You could use mosh instead of ssh: https://mosh.mit.edu/ – gmatht Jul 27 '16 at 04:46
  • The answers are unsatisfying. The gist is: the data that got queued up between when the command started outputting and when you press ^C is enough to keep your terminal program busy for several seconds. So it seems that the terminal program should be able to simply throw away data for a while...skip some bytes, something? – lmat - Reinstate Monica Feb 24 '17 at 10:48

6 Answers

10

I usually pipe the output into less so that I can kill it from within less using the q key.

$ cmd | less

Example

$ cat /dev/urandom | less


After hitting q it'll quit and return you to your normal terminal, leaving it nice and clean.

Why does that happen?

The problem you're encountering is that there are buffers (for STDOUT) that are being queued up with the output destined for your display. These buffers fill up so quickly that you're unable to interrupt fast enough to stop the flood.


To disable/limit this effect you can turn off STDOUT buffering using stdbuf, which should make things a bit more responsive, though you'll likely have to play with the settings to get things the way you want. To unbuffer STDOUT you can use this command:

$ stdbuf -o0 <cmd>

The man page for stdbuf details the options at your disposal:

    If MODE is 'L' the corresponding stream will be line buffered.  This 
    option is invalid with standard input.

    If MODE is '0' the corresponding stream will be unbuffered.

    Otherwise MODE is a number which may be followed by one of the 
    following: KB 1000, K 1024, MB 1000*1000, M 1024*1024, and so
    on for G, T, P, E, Z, Y.  In this case the corresponding stream will be 
    fully buffered with the  buffer  size  set  to  MODE
    bytes.
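
To make those modes concrete, here are a few illustrative invocations (./noisy_loop is just a stand-in name for whatever chatty command you actually run):

$ stdbuf -o0 ./noisy_loop     # stdout unbuffered: every write reaches the terminal immediately
$ stdbuf -oL ./noisy_loop     # stdout line-buffered: flushed at each newline
$ stdbuf -o1M ./noisy_loop    # stdout fully buffered with a 1 MiB buffer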

For a nice background on how buffering works I highly suggest taking a look at the Pixel Beat article titled buffering in standard streams. It even includes nice pictures.


slm
4

Some of that output will be buffered. You send your Ctrl+C to the remote end, which interrupts the running program. The program exits and the shell sends the characters to show you the prompt again. Before the prompt is shown, your screen will first display all the data that was buffered and already on its way to you.

What you're asking is for the program to stop and the data in transit to somehow disappear. That can't happen, as it's already en route.

The only way you can make sure that you don't see this data is to exit the terminal at your end and then re-connect to your remote - but that's probably far more effort than waiting for the buffered data to display.

TPS
garethTheRed
3

There are several levels of buffering. When you press Ctrl+C, this stops the program from emitting data to the terminal. This doesn't affect data that the terminal emulator hasn't displayed yet.

When you're displaying data at very high speed, the terminal can't keep up and will lag. That's what's going on here: displaying text is a lot more expensive than producing these random numbers. Yes, even with a bitmap font — producing cryptographic-quality random numbers is dirt cheap in comparison. (I just tried on my machine and the X process saturated the CPU, with xterm taking a few % and cat (which the random number generation is accounted against) barely reaching 1%. And that's with a bitmap font.)
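
A rough way to reproduce that observation (assuming a local X session and a standard top):

cat /dev/urandom    # flood a local xterm with output
top                 # in a second terminal, press P to sort by CPU:
                    #   X saturates a core, xterm takes a few %, cat barely reaches 1%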

If you want this to just stop now, kill the terminal emulator. If you don't wish to do that, at least minimize the window; intelligent terminal emulators (such as xterm) will not map the window, which saves the X CPU time, so the garbage will finish displaying quicker. The X server has high priority, so this will make a big difference to your machine's responsiveness while xterm is processing the data in the background.

When all this is going on in a remote shell, the lag is even worse, because the data produced by cat has to first go through the SSH connection. Your press of Ctrl+C also has to go through the SSH connection; it gets somewhat higher priority (it's sent out of band), but that still takes some time during which more output accumulates. There's no way to suppress data in transit short of closing the SSH connection (which you can do by pressing Enter then ~.).
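
For completeness, the client-side escape sequences look like this (this assumes the OpenSSH client with its default ~ escape character; escapes are only recognized at the start of a line, hence pressing Enter first):

~.    # terminate the connection immediately
~?    # list all escape sequences the client supports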

  • The problem is related to SSH alone. The STDOUT buffer should not be used in interactive mode, but SSH cannot handle interactive mode properly. Although a lot of output may hang in transit, it is the SSH process which receives Ctrl+C, so it is its responsibility to kill the output when it is impossible to pass Ctrl+C to the remote. – user4674453 Sep 04 '19 at 16:41
  • @user4674453 Uh? Ctrl+C isn't supposed to kill local output. That's not its job at all. It's supposed to be passed to the remote side, which may or may not kill the remote process. – Gilles 'SO- stop being evil' Sep 04 '19 at 17:15
  • "It's supposed to be passed to the remote side, which may or may not kill the remote process." - it is not supposed to do that either. KILL signals, Ctrl+C is issuing one of them, are for local process only. If it is not used for local process, the notion of "supposed to" is not applicable at all. – user4674453 Sep 04 '19 at 17:29
  • @user4674453 No. Ctrl+C is not a kill signal. It's an interrupt signal. Its role is to return to an interactive prompt. It only kills programs that don't have an interactive prompt to return to. – Gilles 'SO- stop being evil' Sep 04 '19 at 17:39
  • "It's an interrupt signal." It is an argument to kill command, hence it is a kill signal. Sometimes it is being called as POSIX signal if you like. "Its role is to return to an interactive prompt. It only kills programs that don't have an interactive prompt to return to." Exactly!!! And SSH doesn't do that as expected. – user4674453 Sep 04 '19 at 17:46
  • @user4674453 No, there are plenty of signals that are not kill signals, even though they're arguments to kill. For example STOP, CONT and WINCH are not kill signals ever. SSH does treat it as expected: it doesn't have a prompt of its own but it runs a program (on a remote machine) which may or may not have a prompt, so it passes the signal to that program. – Gilles 'SO- stop being evil' Sep 04 '19 at 21:02
  • This answer makes a big assumption, namely that the data resides in the terminal emulator. But this also happened back in the days of hardware terminals, which had hardly any memory and so they couldn't pre-read and buffer the data, but they could be stuck endlessly displaying text under the right circumstances, just as shown in the question. Normally, turning the terminal off and back on would stop this. You'd get the login prompt from getty again. So that suggests the data is buffered in the kernel. – Throw Away Account Mar 02 '22 at 06:54
  • @ThrowAwayAccount With a hardware terminal, the equivalent of the terminal emulator is the terminal hardware plus some driver code in the kernel. Whether the data is buffered by the driver or by the hardware then depends on the hardware and its driver. – Gilles 'SO- stop being evil' Mar 02 '22 at 07:51
1

It should be enough to find a way to kill the cat command.
For the following proposals you may need a second ssh connection open.

  • Sometimes CTRL+Z can be more effective than CTRL+C: it can respond faster. After you suspend the command you can kill it with kill %1 or whatever its job number is.
    This is in the hope that you are still able to read anything on the screen (a flood of random binary text can easily mess up your character set).
    As Gilles pointed out, if you minimize the window the system will probably process the interrupt request faster than you can kill the process. So suspend/break, minimize, wait a little, maximize again can be a solution too.
    Of course, over an ssh connection I expect you will still have to wait some time.

  • In another terminal/session you can run pgrep cat (if cat was the command invoked) and identify which cat process is using the most CPU. You can identify it with more precision with pstree:

    pgrep cat | awk '{print "pstree -sp "$1}' | sh | grep sshd

    which answers with output like

    init(1)───sshd(1062)───sshd(22884)───sshd(22951)───bash(22957)───cat(23131)

    In this case, you then only have to kill the cat PID: kill 23131. (A simpler alternative using pkill is sketched right after this list.)
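
A rough alternative to the pgrep/pstree pipeline above, still run from the second session, is to match the runaway process by its controlling terminal using pkill's -t option (pts/3 below is only an example; check the real tty with who or w):

who                      # list logged-in sessions and their ttys
pkill -t pts/3 cat       # send SIGTERM to the cat attached to that tty
pkill -9 -t pts/3 cat    # or, if it ignores that, SIGKILL it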


Hastur
1

I had the same problem and wasn't satisfied with the answers here, so I dug deeper. Others have already mentioned that your command outputs data faster than your ssh session can take, so the data buffers up, and buffered data can't be stopped.

To fix this, avoid buffering by throttling your command's output to the maximum rate your ssh session can handle; commands already exist to do this.

Setup: first find out your session's maximum rate:

# Get transfer <TIME> of a large file (>10MB preferable)
/usr/bin/time -f "%e" cat <FILENAME>

# Get file <SIZE> in bytes
stat --printf="%s\n" <FILENAME>

# Calculate <RATE>
echo "<SIZE> / <TIME>" | bc

Finally, throttle your real commands accordingly.

<YOUR_COMMAND> | pv -qL <RATE>

Example:

/usr/bin/time -f "%e" cat large_reference_file.txt
31.26

stat --printf="%s\n" large_reference_file.txt
17302734

echo "17302734 / 31.26" | bc
553510

# Throttle my command to 553510B/s
cat some_other_file.txt | pv -qL 553510

You may want to reduce the RATE a little in case your connection speed dips from time to time. If it dips, the behavior from the question returns: a non-responsive Ctrl+C.

Optional throttled cat alias:

# bash
alias tcat='tcat(){ cat "$@" | pv -qL 400k ; }; tcat'

# tcsh
alias tcat 'cat \!* | pv -qL 400k'

# usage: tcat <FILENAME>

Now ctrl-c works as expected, immediately killing the output since very little if any is buffered.

Eric
  • cat output is rarely the problem, unlike other software; the author only used it as an example. The problem is usually due to other software, where it may not be obvious that it will produce a lot of output. Using any sort of prefix or postfix command is no solution, as it takes time to type, so there is no gain in the end. – user4674453 Sep 04 '19 at 17:41
0

There's a piece of software on Linux which solves exactly this problem (and a couple of other things too). You can also invoke it from a terminal emulator on Windows (you seem to be using Windows?).

Try mosh, a replacement for the SSH binary. It works exactly like SSH (you can do mosh user@hostname instead of ssh user@hostname and it will work exactly as you expect; it will even do private key authentication, etc.).
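
A minimal sketch of switching over (assuming Debian's packaged mosh on the server and a mosh client installed locally; mosh also needs a UDP port in the 60000-61000 range to be reachable):

sudo apt-get install mosh    # on the Debian server
mosh rr-@burza               # from the client, wherever you previously ran ssh rr-@burza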

It basically runs a separate process on the server which buffers the packets. So when you press Ctrl+C on mosh, it will convey this to the remote server, which will then stop sending the extra information. In addition, it will also predict the result of keystrokes, saving you a couple of milliseconds every time you press a key.

Downside: Currently it isn't possible to scroll up in history while using mosh.