Problem

Consider a command like this:

<binary_input ssh user@server 'sudo tool' >binary_output 2>error.log

where tool is arbitrary and ssh is a wrapper or some ssh-like contraption that allows the above to work. With regular ssh it doesn't work.

I used sudo here but it's just an example of a command that requires a tty. I'd like a general solution, not one specific to sudo.


Research: the cause

With regular ssh it doesn't work because:

  • sudo needs a tty to ask for the password (or to work at all), so I need ssh -t; actually, because the local stdin is redirected in this case, I need ssh -tt to force tty allocation.
  • On the other hand ssh -tt will make sudo read the password from binary_input. I want to provide the password via my local tty. Even if sudo is configured to work without a password, or if I inject the password into binary_input, ssh -tt will make sudo and tool read from the remote tty and write output, errors and prompts to the remote tty. Not only will I be unable to tell the output and the errors/prompts apart locally; all the streams will be processed by the remote tty, and this will mangle the data (you can see this in some examples in this answer of mine, in the section entitled "Some practice"). A small demonstration follows.
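
For instance (hypothetical user@server), the remote tty's output post-processing alone is enough to corrupt a byte stream:

    ssh -T  user@server 'printf "a\nb\n"' | od -c   # a \n b \n       -- intact
    ssh -tt user@server 'printf "a\nb\n"' | od -c   # a \r \n b \r \n -- mangled by the remote tty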

Research: comparison to commands that work

  • This local command is the reference point. Let's assume it successfully processes some binary data:

    <binary_input tool >binary_output
    
  • If I need to run tool on a server, I can do this. Even if ssh asks for my password, this will work:

    <binary_input ssh user@server tool >binary_output
    

    In this case ssh is transparent for binary data.

  • Similarly local sudo can be transparent. The following command won't mangle the data even if sudo asks for my password (a small demonstration follows this list):

    <binary_input sudo tool >binary_output
    
  • But running tool on the server with sudo is troublesome:

    <binary_input ssh user@server 'sudo tool' >binary_output
    

    In this configuration ssh and sudo together cannot be transparent in general. Finding a way to make them transparent is the gist of this question.
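
A quick way to see the local transparency for yourself (a minimal check; -k forces a fresh password prompt):

    printf 'payload\n' | sudo -k tr a-z A-Z
    # the password prompt appears on the terminal (sudo talks to /dev/tty directly),
    # while stdin and stdout carry only the data: PAYLOAD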


Research: similar questions

I have found a few similar questions:


My explicit question

In the following command:

<binary_input ssh user@server 'requires-tty' >binary_output 2>error.log

requires-tty is a placeholder for code that requires a tty but processes binary data from its stdin to its stdout. It seems I need ssh -tt, otherwise requires-tty will not work; and at the same time I mustn't use ssh -tt, otherwise the binary data will be mangled. How can I solve this problem in a convenient way?

requires-tty can be sudo … but I don't want a solution specific to sudo.

I imagine the ideal(?) solution will be a script/tool that replaces ssh in the above invocation and just works. It should(?) connect the remote stdin, stdout and stderr each to its local counterpart, and the remote tty to the local tty.

If it's possible, I prefer a client-side solution that does not require any server-side companion program.

1 Answer


Script

I'm the author of the question and this is my attempt to build a script that solves the problem. The script is intended to work on the client side; it replaces ssh in the command in question. It's experimental. I call it sshe. This is the script:

#!/bin/sh -

# the name of the script
me="${0##*/}"

# error handling functions
scream() { printf '%s\n' >&2 "$1"; }
die() { scream "$2"; exit "$1"; }

# initialization of variables
redir0='' redir1='' redir2=''
tty="/dev/$(ps -p "$$" -o tty=)"

# edge cases
[ "$tty" = '/dev/?' ] && {
   scream "$me: no tty detected, falling back to regular ssh"
   exec ssh "$@"; }
[ "$#" -lt 2 ] && die 1 "usage: $me [options] [user@]hostname command"

# see what needs to be redirected
exec 7>&1
if [ "$(<&0 tty 2>/dev/null)" != "$tty" ]; then redir0=y; fi
if [ "$(<&7 tty 2>/dev/null)" != "$tty" ]; then redir1=y; fi
if [ "$(<&2 tty 2>/dev/null)" != "$tty" ]; then redir2=y; fi
exec 7>&-

# edge case
[ "$redir0$redir1$redir2" ] || {
   scream "$me: no redirection detected, falling back to ssh -t"
   exec ssh -t "$@"; }

# command line parsing, extract the two last arguments: ... host command
z="$#" n="$z"
for arg do
   if [ "$n" -eq "$z" ]; then set --; fi
   case "$n" in
      1) command="$arg" ;;
      2) host="$arg" ;;
      *) set -- "$@" "$arg"
   esac
   n="$(($n - 1))"
done

# prepare to clean up on exit
trap 'status="$?"; rm -r "$tmpd" 2>/dev/null; trap - EXIT; exit "$status"' EXIT HUP INT QUIT PIPE TERM

# temporary directory and socket
tmpd="$(mktemp -d)"
[ "$?" -eq 0 ] || exit 1
sock="$tmpd/sock"

# main pipe: ssh master connection -> background cat
(
   [ "$redir0" ] || exec 0</dev/null
   # ssh master connection, it will report the remote PID of the remote shell via its stdout
   ssh -M -S "$sock" "$@" -T "$host" '</dev/null echo "$$"; exec sleep 2147483647'
) | {
   # read the remote PID
   IFS= read -r rpid || exit 1
   # background process to pass data
   exec 6<&0
   cat <&6 2>/dev/null &
   # move original descriptors out of the way
   exec </dev/tty >/dev/tty 6>&-
   # prepare remote redirections
   if [ "$redir0" ]; then redir0="<&6"; fi
   if [ "$redir1" ]; then redir1=">&7"; fi
   if [ "$redir2" ]; then redir2="2>&8"; fi
   # ssh to run the command, with remote tty
   ssh -S "$sock" -t "$host" "
      trap 'status=\"\$?\"; kill $rpid 2>/dev/null; trap - EXIT; exit \"\$status\"' EXIT HUP INT QUIT PIPE TERM
      exec 6</proc/$rpid/fd/0 7>/proc/$rpid/fd/1 8>/proc/$rpid/fd/2 9>/dev/tty $redir0 $redir1 $redir2 || exit 3
      $command"
}


General disclaimer

  • The script works well in many cases, but not in all. By "doesn't work well" I don't mean it fails randomly (it doesn't), nor that it cannot handle some specific data (it handles arbitrary data).

    Cases where it "doesn't work well" arise solely from its interaction with the local tty. Data (including arbitrary binary data) that flows via channels not involving any tty is always fine. Please read the rest of this answer, especially the "Obstacles and caveats" section, to understand the problem and to learn what to avoid.

  • A command like this:

    <binary_input sshe user@server 'sudo tool' >binary_output 2>error.log
    

    avoids the problem. It should work fine, as long as the technical requirements are met (see "Requirements" below).

  • The script is experimental and I tried to set traps, to preserve the exit status and to clean up on exit in a sane(?) way. I'm not sure if I succeeded.

  • The script was never intended to be foolproof. Treat it as a proof of concept.


Usage

Use sshe like this:

sshe … [user@]hostname command

where … denotes options you would use if the executable were ssh. There's no need to pass -t or -tt (nor -T) here. The script assumes you want a tty on the remote side (otherwise just use ssh). The script expects at least one of the local stdin, stdout, stderr to be redirected away from the local tty. The script will fall back to ssh -t if everything is connected to the local tty.

Important things:

  • command is the shell code you want to run on the server. It must be a single argument, the very last argument to sshe. It cannot be omitted.
  • hostname or user@hostname must be the second to last argument. It cannot be omitted.

Internally the script needs to know the command to add some code in front. It needs to know [user@]hostname because it uses it twice. The script just picks the last and the second to last argument respectively, hence the above limitations.

Not every valid ssh invocation can be converted to an sshe invocation by just replacing ssh with sshe. But I believe any valid ssh invocation that runs code (as opposed to spawning an interactive shell) can be rearranged into a valid sshe command. Example:

ssh user@server -p 1234 echo foo

should be rearranged to:

sshe -p 1234 user@server 'echo foo'

(except you don't really need sshe in this case; it's just an example of the right syntax). If you used sshe user@server -p 1234 echo foo then the script would take echo as the hostname and foo as the command, because it does not parse its arguments like ssh would.

There are examples down below.


Requirements, portability issues

Local requirements (where sshe runs):

  • /dev/$(ps -p "$$" -o tty=) is assumed to be the "real name" of the controlling terminal (a quick probe follows this list). Compare this question.
  • mktemp -d.
  • ssh supporting -M and -S; the script creates master and slave connections.
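
For reference, the first local requirement boils down to this probe working in your shell:

    ps -p "$$" -o tty=   # prints e.g. "pts/3"; a lone "?" means no controlling terminal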

Remote requirements (on the server):

  • SSH server able to handle master and slave connections.
  • /proc pseudo-filesystem.
  • Ability to use /proc/nnnn/fd/N of another process that belongs to the same user.
  • POSIX-compliant shell.
  • Silent startup scripts (compare SCP doesn't work when echo in .bashrc; with sshe the situation is similar).

During my tests I successfully connected from Kubuntu (18.04.5 LTS) to various Debian or Debian-derivative servers. My ssh and sshd are from OpenSSH.


Operation

sshe (unless it decides to fall back to ssh or to ssh -t) runs ssh twice:

  1. ssh -M … -T … is a master connection that does not allocate tty on the remote side. The shell code it runs there reports its PID via stdout and execs to a long-running sleep (about 68 years). The standard file descriptors of this process will be used by another process(es).

    The PID reported from the master ssh is picked up by read. After this the stdout of the master ssh will go to a background cat whose sole purpose is to relay it to the (local) stdout of sshe.

  2. Later ssh … -t … is a slave connection that does allocate tty on the remote side. Already knowing the remote PID from the master connection, it sets up redirections, so code supplied to sshe as command can use separate stdin, stdout, stderr (via the master ssh connection) and tty (via the slave ssh connection) on the remote side. The slave ssh does not use the original stdin nor stdout of sshe, it uses the local /dev/tty instead.

The idea is similar to what this answer (already linked to in the question) does. The code in the linked answer runs ssh (implicit ssh -T) twice to provide additional descriptors. My script runs ssh -T and ssh -t to provide standard descriptors and tty. And it uses the master-slave functionality of ssh, so it authenticates (e.g. asks for password) once.
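
Stripped of argument parsing, cleanup and the extra channels, the mechanism is roughly this (a hand-simplified sketch, not the actual script; hypothetical user@server and sudo tool, output channel only):

    sock="$(mktemp -d)/sock"
    ssh -M -S "$sock" -T user@server '</dev/null echo "$$"; exec sleep 2147483647' | {
       IFS= read -r rpid   # PID of the remote sleep that holds the plain channel open
       cat &               # relay the rest of the master's stdout to the local stdout
       ssh -S "$sock" -t user@server \
          "exec >/proc/$rpid/fd/1; sudo tool; kill $rpid" </dev/tty >/dev/tty
    }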

If none of the local stdin, stdout, stderr is the local tty then this is how data flows:

  • Local stdin goes to the master ssh, no other local process reads from the stdin of the script. By reading from (remote) /proc/nnnn/fd/0 remote processes can access the local stdin. The slave ssh connection prepends redirections to the command, so the shell on the remote side uses /proc/nnnn/fd/0 as its stdin.

  • Similarly the shell on the remote side uses /proc/nnnn/fd/1 as its stdout. Whatever goes there will come out of the local master ssh. This happens after the master ssh has retrieved the right PID from the (remote) shell code it ran. The PID was consumed by read; any data that follows goes to the original stdout of sshe via the background cat.

  • Similarly the shell on the remote side uses /proc/nnnn/fd/2 as its stderr. The stream will come directly from the local master ssh to the stderr of sshe. Some local processes spawned by the script use the stderr of the script as their stderr, so if you do sshe … 2>error.log then the log will contain their error messages as well. In particular, expect Shared connection to server closed. This is similar to ssh -T … 2>error.log, where the log gathers messages from the remote command(s) and from ssh itself. I think it's possible to make a variant of sshe that passes stderr from remote commands via a channel associated with the stdout of yet another ssh; in this case one would be able to tell the remote stderr apart from diagnostic messages generated by local tools. The script does not do this though.

  • The local tty is available to the master ssh (if it needs to ask for a password) and then to the slave ssh. (Frankly, more of the local tools used by the script have access to the local /dev/tty; they just don't use it.) The slave ssh -t uses /dev/tty as its stdin and stdout. This way it connects the local and the remote /dev/tty despite other redirections (like ssh -t run in a terminal without redirections would). Remote processes reading from their /dev/tty will get what the local slave ssh reads from the local /dev/tty. Remote processes writing to their /dev/tty will make the local slave ssh write to the local /dev/tty.

If the local stdin, stdout or stderr is the local tty then its respective counterpart on the remote side (for command run remotely by the slave ssh) will not be redirected to /proc/nnnn/fd/N and it will stay connected to the remote tty. It would get to the local tty either way. The point is it should not bypass the remote tty. The reason for this will be clear in a moment.

There are a few local and remote redirections not strictly required for sshe to work; they are leftovers from my other experiments. I decided to keep the extra redirections, just in case they are more important to sshe alone than I remember.


Obstacles and caveats

The whole concept is not as easy as it may seem. A tty can process what you type (e.g. translate ^M into ^J) and what is about to be printed (e.g. if I cat a file with *nix line endings to the terminal, each newline character will work like carriage return + newline). Invoke stty -a to see plenty of settings.

This is why you don't want tty when processing arbitrary binary data. And you do want it when interacting.

Processes can configure the tty so it meets their needs. See raw vs cooked.
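
You can get a local taste of the "cooking" by flipping one of those settings (safe to try in an interactive shell):

    stty -onlcr; printf 'one\ntwo\nthree\n'; stty onlcr
    # with output post-processing off, newline no longer implies carriage return,
    # so the lines "staircase" across the screen instead of returning to column 0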

When you ssh in a way so tty is allocated on the server, the processes there will see it as their tty. If they need to configure their tty, they will configure the tty they see on the server. They have no means to directly configure the local tty of ssh. All(?) the "cooking" is done by the remote tty and ssh configures the local tty so it does not interfere.

This is the reason sshe should not bypass the remote tty when redirecting remote descriptors that ultimately should be connected to the local tty. If the remote tty is bypassed then there will be no entity to "cook" the stream. By connecting the stdin, stdout or stderr of sshe to the local tty you indicate you want it "cooked". This makes sshe similar to ssh … command run in an interactive shell, i.e. the case where all the standard streams are "cooked" by the local tty (note this ssh is like ssh -T and it does not place the local terminal in "raw" mode).

So sshe "cooks" what apparently you want to be "cooked". A problem occurs when you do something like this:

sshe … command | whatever

Data will flow from the remote command to the local whatever without being "cooked" (as one would expect from ssh … command | whatever), but the output whatever prints to the terminal will not be "cooked" locally either, because the slave ssh -t has placed the local tty in "raw" mode. sshe could reconfigure the local tty so it "cooks", but if the command happens to print to its tty (i.e. to the remote tty that may or may not "cook", depending on its settings) then the local tty should not "cook".

sshe does not try to solve this. It's basically intended to support cases where the ultimate output goes somewhere other than the terminal (e.g. to a regular file or to a block device). The following code is better in this matter:

sshe … command | whatever >some_file

although stderr from whatever won't be "cooked". Expect diagnostic messages to look weird. Note you can redirect them to a file (or to another local tty that will "cook" them).

It's even worse on the input end. If another local process tries to read from the local tty, it will not only read raw data, it will compete with sshe for the input. This is a general problem of two processes reading from the same terminal.

To summarize: build a local command (pipeline), so only (a single) sshe wants to read from the local tty; do not let tools other than sshe print to the local tty, unless you can stand "raw" output.

I developed sshe to be able to pass or process binary data. In my case there's hardly ever a need to read data from the local tty or to write data to the local tty. I can stand diagnostic messages from local tools not "cooked" enough. In return sshe allows me to use remote sudo as if it was local.


Examples

  • Reading from or writing to a remote block device that needs sudo access.

    • Reading:

      sshe user@server 'sudo cat /dev/sdx1' >local_file
      # or
      sshe user@server 'sudo pv  /dev/sdx1' >local_file
      
    • Writing:

      <local_file sshe user@server 'sudo tee /dev/sdx1 >/dev/null'
      

      In my tests local pv apparently doesn't mind the local tty being "raw"; or rather the configuration it imposes and the configuration sshe imposes are not(?) contradictory and it doesn't(?) matter which tool configures the local tty first. So this seems to work:

      pv local_file | sshe user@server 'sudo tee /dev/sdx1 >/dev/null'
      

      Note if settings from pv disturbed sshe then you might not be able to supply password to the remote sudo. If settings from sshe disturbed pv then what pv prints to the terminal might look mangled. Even in these hypothetical cases the content of local_file will get to the remote /dev/sdx1 verbatim.

  • Local sudo and remote sudo together.

    If local sudo is going to ask for your password, sudo … | sshe … 'sudo …' or sshe … 'sudo …' | sudo … is not a good idea because the local sudo and sshe will both read from the local tty at the same time. Totally local sudo … | sudo … works because sudo implements a locking mechanism, so two local sudos don't interact with the same terminal simultaneously. This won't work with a mixture of local and remote sudos.

    Hopefully your local sudo allows a timeout. If so, invoke sudo -v beforehand to supply the local password (if needed) without interference; then go with a pipe:

    • Copying from a remote device to a local device:

      sudo -v   # input local password if needed
      sshe user@server 'sudo cat /dev/sdx1' | sudo tee /dev/sdy1 >/dev/null
      
    • Copying from a local device to a remote device:

      sudo -v   # input local password if needed
      sudo cat /dev/sdy1 | sshe user@server 'sudo tee /dev/sdx1 >/dev/null'
      
  • Exactly what the question requested.

    • With sudo:

      <binary_input sshe user@server 'sudo tool' >binary_output 2>error.log
      
    • Or more generally:

      <binary_input sshe user@server 'requires-tty' >binary_output 2>error.log
      

Final note

I used to think tunneling stdin to stdin, stdout to stdout, stderr to stderr and /dev/tty to /dev/tty is trivial. I used to wonder why ssh doesn't provide an option (similar to -t) to do it. Now I know it's not that simple; and I suspect maybe I'm still missing something.

  • [ -e /dev/tty ] doesn't test for the presence of a controlling terminal, just whether that device file exists or not. Since you already do ps -p "$$" -o tty=, you can check that against ? to see if there's a controlling terminal. – Stéphane Chazelas Feb 04 '22 at 10:40
  • linked ticket https://bugzilla.mindrot.org/show_bug.cgi?id=3542 – Et7f3XIV Feb 17 '23 at 16:03