The problem with the answer from @jakuje is: it only works with sockets, but you cannot use standard UNIX tools that expect files with them:
ssh -R/tmp/sock.remote:/tmp/sock.local "$HOST" 'LANG=C cat >/tmp/sock.remote'
bash: /tmp/sock.remote: No such device or address
There is also the problem that the socket file is not deleted on the remote host: the next time you run the same command, you get a warning and the socket is not re-created correctly.
You can give the option -o StreamLocalBindUnlink=yes to ssh to unlink that old socket, but in my tests it was not enough; you also have to edit your sshd_config to contain StreamLocalBindUnlink yes for that option to work.
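For reference, a minimal sketch of both sides of that setup (the sshd_config line uses the usual space-separated form; reloading sshd afterwards is assumed):

# client side: ask ssh to unlink a stale remote socket before binding the forward
ssh -o StreamLocalBindUnlink=yes \
    -R /tmp/sock.remote:/tmp/sock.local "$HOST"

# server side: add to /etc/ssh/sshd_config, then reload sshd
StreamLocalBindUnlink yes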
But you can use socat or netcat or any other similar tool supporting UNIX local sockets (netcat-traditional is NOT enough!) to use the local socket forwarding for file transfer:
# start a local process listening on the local socket; it receives the data when ssh opens the connection and saves it to a local file
nc -l -N -U /tmp/sock.local >/tmp/file.local &
# connect to remote $HOST and pipe the remote file into the remote socket end
ssh \
-o ExitOnForwardFailure=yes \
-o StreamLocalBindUnlink=yes \
-R /tmp/sock.remote:/tmp/sock.local \
"$HOST" \
'nc -N -U /tmp/sock.remote </tmp/file.remote'
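If you prefer socat, an equivalent sketch (assuming socat is installed on both hosts) looks like this:

# start a local socat listening on the local socket, writing incoming data to a local file
socat -u UNIX-LISTEN:/tmp/sock.local CREATE:/tmp/file.local &
# connect to the remote $HOST and feed the remote file into the remote socket end
ssh \
    -o ExitOnForwardFailure=yes \
    -o StreamLocalBindUnlink=yes \
    -R /tmp/sock.remote:/tmp/sock.local \
    "$HOST" \
    'socat -u OPEN:/tmp/file.remote UNIX-CONNECT:/tmp/sock.remote'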
You can also run interactive commands, in which case you should use ssh -t to allocate TTYs.
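For example (a sketch; some-interactive-command is a hypothetical placeholder for whatever you want to run on the remote side):

# -t allocates a TTY on $HOST so the remote command behaves as in a terminal
ssh -t \
    -o StreamLocalBindUnlink=yes \
    -R /tmp/sock.remote:/tmp/sock.local \
    "$HOST" \
    'some-interactive-command'   # hypothetical placeholder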
The problem with this solution is that you must hard-code the paths of the UNIX local sockets: locally this is not much of a problem, as you can include $$ in the path to make it unique per process, or use a temporary directory; but on the remote end you had better not use the world-writable directory /tmp/ as I do in my example. The directory must also already exist when the ssh session is started. And the socket inode will remain even after the session is closed, so using something like "$HOME/.ssh.$$" will clutter your directory with dead inodes over time.
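A sketch of making the local side unique with mktemp (mktemp -d creates a private directory, so only the remote path stays hard-coded):

# create a private directory for the local socket; remove it when done
sockdir=$(mktemp -d)
nc -l -N -U "$sockdir/sock" >/tmp/file.local &
ssh \
    -o ExitOnForwardFailure=yes \
    -o StreamLocalBindUnlink=yes \
    -R "/tmp/sock.remote:$sockdir/sock" \
    "$HOST" \
    'nc -N -U /tmp/sock.remote </tmp/file.remote'
rm -r "$sockdir"   # clean up the local socket inode and its directory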
You could also use TCP sockets bound to localhost, which will save you from cluttering your file systems with dead inodes, but even with them you still have the problem of choosing a (unique) unused port number. So still not ideal. (ssh has code to dynamically allocate ports, but I found no way to retrieve that information on the remote host.)
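The same transfer over a localhost TCP forward would look like this (a sketch; port 12345 is an arbitrary choice that must happen to be free on both hosts, which is exactly the weakness described above):

# listen on a localhost TCP port instead of a UNIX socket
nc -l -N 127.0.0.1 12345 >/tmp/file.local &
ssh \
    -o ExitOnForwardFailure=yes \
    -R 127.0.0.1:12345:127.0.0.1:12345 \
    "$HOST" \
    'nc -N 127.0.0.1 12345 </tmp/file.remote'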
Probably the easiest solution to copy files is to use ssh's built-in connection sharing functionality and to run a scp or sftp command while your interactive session is still running in parallel. See Copy a file back to local system with ssh.
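A sketch of that approach using ssh's ControlMaster/ControlPath options (the socket path is an arbitrary choice):

# first terminal: interactive session that also acts as the control master
ssh -o ControlMaster=yes -o ControlPath="$HOME/.ssh/ctl-%r@%h:%p" "$HOST"
# second terminal: reuse the already-authenticated connection for scp
scp -o ControlPath="$HOME/.ssh/ctl-%r@%h:%p" "$HOST":/tmp/file.remote /tmp/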
Comments:

closefrom(STDERR_FILENO + 1) calls under the OpenSSH source code. What are you trying to do that demands this? – thrig Aug 31 '15 at 17:59

stdin/out/err, but AFAIK, no server/client provides support in any way that feature. – salva Feb 15 '17 at 08:08

infinite-output-cmd | ssh user@host bash /proc/self/fd/3 3< local-script-to-execute-remotely.sh – JoL Sep 17 '18 at 22:08

--forwardfd shouldn't even be needed. ssh could check out what the open file descriptors are before opening any other thing, and forward them automatically to the same file descriptors on the remote side. It could be totally transparent like my example. I wonder how difficult it would be to patch ssh for that. Like you said, it could be tricky depending on the reasons behind those closefrom(STDERR_FILENO + 1). – JoL Sep 17 '18 at 22:23