I have a remote server on which I am deploying code. I am currently using scp to upload the code, but scp is very slow and I'd like to switch to rsync (or something else that is more efficient). The bottleneck is making the ssh connection, so I'd like to minimize the number of connections I make to the server.
Locally, my build process outputs a directory out containing bin, lib and etc subdirectories. I am pushing a subset of the output files to matching positions on the remote server. My scp commands look like this:
scp out/bin/$BINFILES remote:/usr/bin
scp out/lib/$LIBFILES remote:/usr/lib
scp out/etc/$ETCFILES remote:/etc
This makes three connections to the remote server, and copies the files in full even if they haven't been modified. This is what I did with rsync:
rsync -e ssh -avz out/bin/$BINFILES remote:/usr/bin
rsync -e ssh -avz out/lib/$LIBFILES remote:/usr/lib
rsync -e ssh -avz out/etc/$ETCFILES remote:/etc
which is faster (thanks to compression and to skipping files that haven't changed), but still makes three connections to the remote server.
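For what it's worth, the one connection-level mitigation I know of is OpenSSH connection multiplexing: with a ControlMaster entry in ~/.ssh/config, the second and third rsync invocations reuse the ssh connection opened by the first instead of setting up new ones. A config sketch (the Host name matches my remote above; the 10-minute persist timeout is just an example):

```
# ~/.ssh/config
Host remote
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m
```

This avoids the repeated connection setup, but it is still three rsync invocations, so I'd prefer a genuine single-command solution if one exists.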
Is there a way to achieve these three copies using a single rsync command? I'm open to e.g. putting all {src,dest} pairs in a temporary file before copying, if that will work.
I'm also open to trying a different tool, but it should be available for OS X (my current development platform), and it should preferably support transfer optimizations like rsync does.