
When connecting to a host with SSH, three "pipes" are usually provided between the local and the remote host, for stdin, stdout, and stderr.

Is there a command-line option to create forwards for additional file descriptors (3 and onward)?

For example, I'd like to do

ssh --forwardfd=10:3 remotehost 'echo test >&3'

which would print 'test' to the locally-opened file descriptor 10.

mic_e
  • Probably not without tricky source edits, given the various closefrom(STDERR_FILENO + 1) calls in the OpenSSH source code. What are you trying to do that demands this? – thrig Aug 31 '15 at 17:59
  • The protocol supports tunneling additional streams besides stdin/out/err, but AFAIK no server or client actually implements that feature. – salva Feb 15 '17 at 08:08
  • @thrig Not OP, and it's been a long time, but in case you're still curious what this could be useful for: what I was hoping to find here was a clue on how to pipe both a bash script and that script's stdin through ssh. Something akin to: infinite-output-cmd | ssh user@host bash /proc/self/fd/3 3< local-script-to-execute-remotely.sh – JoL Sep 17 '18 at 22:08
  • @thrig It occurs to me that something like --forwardfd shouldn't even be needed. ssh could check which file descriptors are open before opening anything else, and forward them automatically to the same file descriptors on the remote side. It could be totally transparent, like my example. I wonder how difficult it would be to patch ssh for that. Like you said, it could be tricky depending on the reasons behind those closefrom(STDERR_FILENO + 1) calls. – JoL Sep 17 '18 at 22:23

3 Answers


You can do this using socket forwarding, which has been available since OpenSSH 6.7. It gives you something like a pipe between the two hosts. This technique is described, for example, here: http://www.25thandclement.com/~william/projects/streamlocal.html

You get a bidirectional route for your data. Here is an example with MySQL:

Proxy MySQL client connections on a remote server to your local instance:

ssh -R/var/run/mysql.sock:/var/run/mysql.sock \
    -R127.0.0.1:3306:/var/run/mysql.sock somehost 
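
The same mechanism also works in the other direction with -L. As a minimal sketch with hypothetical paths (both sides need OpenSSH 6.7 or newer for UNIX-socket forwarding), forwarding a local UNIX socket to a remote one:

# hypothetical paths; local clients connecting to /tmp/local.sock
# are forwarded to /var/run/remote.sock on somehost
ssh -L /tmp/local.sock:/var/run/remote.sock somehost
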
Jakuje

The problem with the answer from @jakuje is that it only works with sockets: you cannot use standard UNIX tools that expect files with them:

ssh -R/tmp/sock.remote:/tmp/sock.local "$HOST" 'LANG=C cat >/tmp/sock.remote'

bash: /tmp/sock.remote: No such device or address

There's also the problem that the UNIX socket file is not deleted on the remote host; the next time you run the same command, you get a warning and the socket is not re-created correctly. You can give ssh the option -o StreamLocalBindUnlink=yes to unlink that old socket, but in my tests it was not enough; you also have to set StreamLocalBindUnlink yes in your sshd_config for that option to work.
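As a sketch, the pair of settings looks like this (same socket paths as in the example below):

# on the remote host, in sshd_config (reload sshd afterwards):
StreamLocalBindUnlink yes

# on the client, per connection:
ssh -o StreamLocalBindUnlink=yes -R /tmp/sock.remote:/tmp/sock.local "$HOST"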

But you can use socat, netcat, or any similar tool that supports UNIX local sockets (netcat-traditional is NOT enough!) to use the local socket forwarding for file transfer:

# start local process listening on local socket, which receives the data when ssh opens the connections and saves it to a local file
nc -l -N -U /tmp/sock.local >/tmp/file.local &
# connect to remote $HOST and pipe the remote file into the remote socket end
ssh \
 -o ExitOnForwardFailure=yes \
 -o StreamLocalBindUnlink=yes \
 -R /tmp/sock.remote:/tmp/sock.local \
 "$HOST" \
 'nc -N -U /tmp/sock.remote </tmp/file.remote'

You can also run interactive commands, in which case you should use ssh -t to allocate TTYs.
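For example, a sketch of an interactive variant of the above, with the same hypothetical socket paths and -t added for the TTY:

# keep the socket forward in place while working interactively
ssh -t \
 -o ExitOnForwardFailure=yes \
 -o StreamLocalBindUnlink=yes \
 -R /tmp/sock.remote:/tmp/sock.local \
 "$HOST"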

The problem with this solution is that you must hard-code the paths of the UNIX local sockets. Locally this is not much of a problem, as you can include $$ in the path to make it unique per process, or use a temporary directory; but on the remote end you had better not use the world-writable directory /tmp/ as I do in my example. The directory must also already exist when the ssh session is started. And the socket inode will remain even after the session is closed, so using something like "$HOME/.ssh.$$" will clutter your home directory with dead inodes over time.
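A sketch of the unique-path idea on the local side (note that mktemp, $HOME, and $$ all expand locally; the remote path still has to be agreed on in advance, and its directory must exist):

# local side: private directory, socket name unique per process
sockdir=$(mktemp -d)               # mode 0700, e.g. /tmp/tmp.XXXXXXXXXX
localsock="$sockdir/ssh.$$.sock"
nc -l -N -U "$localsock" >/tmp/file.local &
ssh -R "$HOME/.ssh.$$.sock:$localsock" "$HOST" ...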

You could also use TCP sockets bound to localhost, which saves you from cluttering your file system with dead inodes, but even with those you still have the problem of choosing a (unique) unused port number. So still not ideal. (ssh has code to dynamically allocate ports, but I found no way to retrieve that information on the remote host.)
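A sketch of the same file transfer over a loopback TCP forward, with a hypothetical hard-coded port 12345:

# local side: listen on a loopback TCP port instead of a UNIX socket
nc -l -N 127.0.0.1 12345 >/tmp/file.local &
# remote side: connect to the forwarded port and push the file
ssh -o ExitOnForwardFailure=yes \
 -R 127.0.0.1:12345:127.0.0.1:12345 \
 "$HOST" \
 'nc -N 127.0.0.1 12345 </tmp/file.remote'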

Probably the easiest solution for copying files is to use ssh's built-in connection-sharing functionality and to run an scp or sftp command while your interactive session is still running in parallel. See Copy a file back to local system with ssh.
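A sketch of that approach, with a hypothetical ControlPath:

# terminal 1: interactive session that opens a shared master connection
ssh -o ControlMaster=auto -o ControlPath="$HOME/.ssh/cm-%r@%h:%p" "$HOST"

# terminal 2: reuse the same connection for the copy, no new authentication
scp -o ControlPath="$HOME/.ssh/cm-%r@%h:%p" "$HOST:/tmp/file.remote" /tmp/file.local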

pmhahn

I'm sure it ought to be possible. I can only suggest a hack where you use extra ssh connections to each carry another pair of file descriptors. E.g. the following proof of concept script does a first ssh to run a dummy command (sleep) to connect up local fds 5 and 6 to remote stdin and stdout, presuming these fds are the ones you want to add to the usual 0,1,2.

Then the real ssh is done, and on the remote it connects up remote fds 5 and 6 to the stdin and stdout of the other ssh.

Just as an example, this script passes a gzipped man page through to the remote, which unzips and runs it through man. The stdin and stdout of the real ssh are still available for other things.

#!/bin/bash
# pretend we need extra fds 5 (input) and 6 (output)
exec 5</usr/share/man/man1/ssh.1.gz 6>/tmp/out6

# helper ssh: a dummy sleep whose stdin/stdout carry the extra fds;
# it records its remote pid so the real session can find those fds
ssh remote 'echo $$ >/tmp/pid; exec sleep 99999' <&5 >&6 &
sleep 1 # hack: wait for /tmp/pid to be written

# real ssh: reattach remote fds 5 and 6 to the helper's stdin/stdout via /proc
ssh remote '
  pid=$(</tmp/pid)
  exec 5</proc/$pid/fd/0 6>/proc/$pid/fd/1
  echo start
  gzip -d <&5 | man /dev/stdin >&6
  echo stop
  kill -hup $pid
'
wait
less /tmp/out6
meuh
    +1. I had the same idea, but my goal was to be able to do things like this: <binary_input ssh user@server 'sudo tool' >binary_output. Without ssh -t the remote sudo cannot ask for a password. With -t (or rather -tt) sudo reads from binary_input; additionally, the remote tty mangles the data anyway. It's totally possible to set things up so that remote sudo works and binary data is not corrupted. It's a shame ssh does not provide a way to tunnel file descriptors as easily as TCP ports. – Kamil Maciorowski May 27 '21 at 15:17