You could consider using SSH connection sharing:
Host *.some-domain
    ControlMaster auto
    ControlPath ~/.ssh/master-%r@%h:%p
You connect to your destination only once and put that ssh process in the background; further ssh commands to the same host then reuse the existing connection. When you are done, you kill the first process.
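As a rough sketch of that flow (user and server.some-domain are placeholders; the host must match the Host pattern above for the ControlMaster settings to apply):

# Open the shared master connection in the background (-f) with no remote command (-N)
ssh -fN user@server.some-domain

# These reuse the existing connection, so they do not re-authenticate
ssh user@server.some-domain '/etc/init.d/some_service stop'
scp user@server.some-domain:/var/some_service/events ./events
ssh user@server.some-domain '/etc/init.d/some_service start'

# Tear down the shared connection when you are finished
ssh -O exit user@server.some-domain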
Alternatively, you can encapsulate a bunch of actions within a single SSH connection.
For example, imagine that you have a directory:
mycommands
|
+- run
+- new_data
Then you can pack this directory into a tar archive and send it to your script:
tar cf - -C mycommands . | ssh localhost 'D=`mktemp -d`; tar xf - -C $D; $D/run'
Now your run script can access all of your input data (here we only have new_data, as in your example). To stick with your example, here's the run script:
#!/bin/sh
# Directory where the tar archive was unpacked
BASE=`dirname $0`
/etc/init.d/some_service stop
# Print the events file to stdout so the caller can capture it
cat /var/some_service/events
mv $BASE/new_data /var/some_service/new_data
/etc/init.d/some_service start
# Clean up the temporary directory
rm -rf $BASE
So now you just have to save the events file:
tar cf - -C mycommands . | ssh localhost 'D=`mktemp -d`; tar xf - -C $D; $D/run' >./events
More generally, you can have your run script produce a tar archive on its standard output and pipe the output of ssh into tar to unpack it locally.
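A minimal sketch of that round trip (user@remote.host and the results directory are placeholders): the run script would end with something like tar cf - -C /var/some_service events instead of the cat above, and the caller would unpack the stream:

mkdir -p results
tar cf - -C mycommands . | ssh user@remote.host 'D=`mktemp -d`; tar xf - -C $D; $D/run' | tar xf - -C results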
You can also keep a script, say sync.sh, on your main server. Let's say it contains the commands to stop and start your service, as well as a call to scp for the files you need. You can execute this script on the remote server, without copying it there first, by using:

ssh user@remote.host < sync.sh
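As an illustration only (the service name, paths and main.host are placeholders, and it assumes the remote host can authenticate back to your main server for the scp calls), sync.sh could look like:

#!/bin/sh
# Runs on remote.host, fed to its shell via: ssh user@remote.host < sync.sh
/etc/init.d/some_service stop
# Copy the events file back to the main server, then pull the new data over
scp /var/some_service/events user@main.host:/some/path/events
scp user@main.host:/some/path/new_data /var/some_service/new_data
/etc/init.d/some_service start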
This is handy if you don't want to have to manage the script on multiple remote systems. I use this for [hardware/inventory collection](http://serverfault.com/questions/365238/documenting-server-de) – ewwhite Dec 06 '12 at 13:22