4

I have a script that looks like the following:

ssh host command1
scp file1 host:/tmp
ssh host command2

Is there a way to combine them all in one ssh/scp call? The problem is combining file copies with command execution, which have to be interleaved. I was hoping for something like:

ssh_sometool host -c command1 -cp file1 /tmp -c command2

It's not about the syntax (I could write a function like that); it's about everything happening within the same ssh session.

Chirlo
  • 377
  • What is the problem with the first option? You have to type the password 3 times? – Archemar Nov 23 '17 at 12:26
  • @Archemar, no, the keys are properly configured so no password is needed; it just feels cumbersome to open and close a new ssh session for each line. Imagine there were 100 ssh/scp calls in the script – Chirlo Nov 23 '17 at 12:28
  • @Chirlo If you have to transfer a lot back and forth, I'd recommend using a provisioning tool of some sort. It would make things less cumbersome, as most such tools optimize the number of connections made. Please take a look at my answer. – ddnomad Nov 23 '17 at 13:41

5 Answers

4

First of all, there is no straightforward or standard way to do what you want. To clarify: there is no way to do it with the default ssh client.

There is an extremely old fork, ssh-xfer. Note that even the developers of this tool emphasize that their solution is "hackish" and very old, so you should not use it. The last SSH version patched to work with this tool is OpenSSH-3.8.1p1 (on Arch Linux the current version is OpenSSH_7.6p1).

If it's important for you to keep things quick by minimizing the number of connections/handshakes between hosts, you might look into some kind of provisioning/orchestration tool. Ansible seems to be the easiest way to accomplish the task without much additional configuration, while Salt might be the quickest.

That said, both Ansible and Salt use additional connections under the hood, and I'm not sure you'd gain anything performance-wise by using them instead of an ssh/scp combination.

UPDATE: If you have to push and pull a lot of files, with scripts running between the transfers, almost any provisioning tool (Chef, Puppet, Salt, Ansible) will do the job more quickly than plain ssh/scp.
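
For illustration, an ad-hoc Ansible equivalent of the original script might look like the following (a sketch; it assumes host is listed in your inventory, and relies on Ansible's default ssh connection plugin, which reuses connections via ControlPersist):

ansible host -m command -a "command1"
ansible host -m copy -a "src=file1 dest=/tmp/file1"
ansible host -m command -a "command2"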

ddnomad
  • 1,978
4

SSH can keep a connection alive for a while after you've disconnected, so the next command doesn't have to initiate a new connection. Here are the ssh config options to enable it:

Host *
    ControlMaster auto
    ControlPath ~/.ssh/sockets/%r@%h-%p
    ControlPersist 30

ControlMaster enables the sharing of multiple sessions over a single network connection. When set to auto, each new attempt checks whether a master connection already exists and either reuses it or creates a new one.

ControlPath specifies the path where the connection sockets are stored. Make sure that this path exists! That is, create the directory (sockets here, but name it as you want) manually, as shown below.
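
For example, matching the path above:

mkdir -p ~/.ssh/sockets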

ControlPersist specifies how long the connection stays alive after you have disconnected. It accepts a time in seconds.

For further clarification on the options use man ssh_config.

After you add these to your ~/.ssh/config, only one network connection will be opened for those three commands.

On my computer it gives a 5x speedup: 100 connections in a row to some (nearby) host take 30s without persistent sessions and only 5-6s with them.
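
A rough way to reproduce such a measurement (a sketch; true is just a trivial remote command):

time for i in $(seq 100); do ssh host true; done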

P.S. Sometimes (for example, if you have changed networks) your existing ssh connections may break and new ones won't be established, because the stale socket file still exists. In that case just go to ~/.ssh/sockets and delete the relevant file manually.
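
Alternatively, ssh can manage the master connection for you via its control commands (see the -O option in man ssh):

ssh -O check host   # report whether a master connection is running
ssh -O exit host    # ask the master to exit and remove its socket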

3

You can bundle all your files, pass them through ssh's standard input and unpack them in your remote session. I mean something like this:

tar -cf - file1 file2 | ssh host 'TDIR=$(mktemp -d); tar -xf - -C "$TDIR"; cat "$TDIR/file1"; command1; command2 ...'


  • The cat "$TDIR/file1" command is just an example of how to access the received files;
  • Use tar with the -j or -J switch on both the local and remote side to pass your files compressed;
  • You can put all your commands in a bash script, include it in your archive, and execute it on the remote side like ...tar -xf - -C "$TDIR"; bash "$TDIR/script.sh"' (see the sketch after this list).
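
A minimal end-to-end sketch of that last variant (script.sh is a hypothetical name; command1 and command2 are the placeholders from the question):

cat > script.sh <<'EOF'
#!/bin/sh
# Runs on the remote host inside the extracted temporary directory.
command1
cp file1 /tmp
command2
EOF
tar -cf - file1 script.sh | ssh host 'TDIR=$(mktemp -d); tar -xf - -C "$TDIR"; cd "$TDIR" && sh script.sh; rm -rf "$TDIR"'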
tifssoft
  • 591
  • This is an awesome solution that worked very well! I just try to pass local ENV variables to the ssh commands. – jfk Apr 05 '23 at 16:30
  • @jfk As far as I know it's impossible to pass the local environment to the remote session directly, but you can save the variables of interest to a file, pass it to the remote side and then set the env in a remote script. Or even save them to ~/.ssh/environment each time before connecting. – tifssoft Apr 09 '23 at 14:18
  • Yes, I echo'ed the value into another file for tar and used it via cat; as your answer already showed, tar handles not only a single file or folder. Works well enough. But, as everything my ssh does is one big one-liner concatenation, I might eventually convert it into a (generated?) script that is transferred and executed. – jfk Apr 17 '23 at 10:47
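
A minimal sketch of the environment-file trick from these comments (MYVAR and env.sh are hypothetical names):

echo "export MYVAR='$MYVAR'" > env.sh   # naive quoting; breaks if the value contains single quotes
tar -cf - env.sh file1 | ssh host 'TDIR=$(mktemp -d); tar -xf - -C "$TDIR"; . "$TDIR/env.sh"; command1'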
2

Try this

cat file1 | ssh host 'command1 && cat >> /tmp/file1 && command2'
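
One caveat: if command1 itself reads from stdin, it may consume part of file1 before the remote cat runs. A variant guarding against that (a sketch):

cat file1 | ssh host 'command1 </dev/null && cat >> /tmp/file1 && command2'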
rajaganesh87
  • 1,057
-2

In your script, once the first line runs...

ssh host command1

Execution is handed off to ssh, and your second command (scp) will only run when command1 finishes. One way to have these run in parallel is to background each task, like this...

ssh host command1 &
scp file1 host:/tmp &
ssh host command2 &

So now all three tasks run in the background, concurrently.
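
If the surrounding script should continue only after all three tasks finish, wait blocks until every background job has exited:

ssh host command1 &
scp file1 host:/tmp &
ssh host command2 &
wait   # returns once all three background jobs are done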