9

I need to recursively copy a folder from an Ubuntu remote server where I have SSH access. I don't want to follow symbolic links or copy permissions/owner/group, because my client system (also Ubuntu) doesn't have the same users as the server.

This rsync solution could be the best one, but the server does not have rsync and I can't install it there, so that command gives me an error.

Is there another way to copy the remote folder?

BowPark
  • tar or cpio ... these do have limitations but they copy symlinks as symlinks – Skaperen May 20 '15 at 11:20
  • or upload your own copy of the rsync executable ... it does not need root permissions to work on your own files – Skaperen May 20 '15 at 11:22
  • I had the same situation at crazydomains - I just downloaded the rsync source from https://download.samba.org/pub/rsync/, then compiled it, placed it in the remote user's ~/bin dir, and used --rsync-path=/home/user/bin/rsync on the local rsync command. Worked a treat. – bhu Boue vidya Feb 05 '17 at 08:40

3 Answers

11

You can use scp -r to copy files recursively between hosts. One caveat worth knowing: per the scp man page, -r follows symbolic links encountered in the tree traversal, so symlinks on the server arrive as copies of their targets.
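For example, with the host and paths from the question:

# recursively fetch the remote directory; copied files are created
# owned by the local user (scp never preserves remote ownership)
scp -r user@Ubuntu-Server:/home/myuser ./from_Ubuntu_server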

Besides, you might be able to upload your local rsync binary to the Ubuntu server using scp, and add --rsync-path=/home/myuser/rsync to your original rsync command so that your client rsync knows which rsync to invoke on the Ubuntu server.
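A rough sketch of that approach, assuming your local binary can run on the server (it must match the server's architecture and libraries; compiling rsync from source on the server, as described in the comments, is more reliable):

# upload the client's own rsync binary (illustrative source path)
scp /usr/bin/rsync user@Ubuntu-Server:/home/myuser/rsync
# -r recurses, -l copies symlinks as symlinks without following them;
# permissions/owner/group are not copied without -p/-o/-g
rsync -rl --rsync-path=/home/myuser/rsync user@Ubuntu-Server:/home/myuser/ ./from_Ubuntu_server/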

Lambert
  • Your tip on using --rsync-path=/... helped me on my remote host, where I was able to easily compile my own copy of rsync from source and point to it from the local rsync command. Worked a treat! Thanks so much. – bhu Boue vidya Feb 05 '17 at 08:42
10

If you have the permission to use FUSE on your local machine, install the sshfs package. SSHFS lets you work with remote files through your normal filesystem: it mounts a directory tree accessed over SFTP. You only need SFTP access on the remote side (which is enabled by default with OpenSSH on Ubuntu). Once the remote directory is mounted, you can use the tools of your choice to manipulate files, without having to care whether they're local or remote.

# create the mount point (-p also creates ~/net if it doesn't exist yet)
mkdir -p ~/net/remote-server
# mount the server's root directory there over SFTP
sshfs remote-server:/ ~/net/remote-server
# copy with the client's rsync; symlinks are copied as links, not followed
rsync -a --no-copy-links ~/net/remote-server/remote/path/ /local/path/
# unmount when finished
fusermount -u ~/net/remote-server
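Note that rsync here runs entirely on the client, so it doesn't matter that the server lacks it; once the directory is mounted, any local tool will do. For instance, plain cp (same illustrative paths):

# GNU cp -r copies symlinks as symlinks, and the new files are owned
# by the local user, so no remote ownership is carried over
cp -r ~/net/remote-server/remote/path /local/path
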
3

You can pipe tar through ssh. This fits your constraints: tar archives symbolic links as links rather than following them, and when you extract as a regular user, file ownership is not restored.

As an example, to upload the contents of a local directory somewhere_local, via ssh, to the remote path /somewhere:

# pack the local directory's contents into a gzipped stream, unpack on the server
tar czf - -C ./somewhere_local . | ssh {yourserver} 'tar xzf - -C /somewhere'

Alternatively, to download the contents of a remote directory /somewhere, via ssh, to the local path ./somewhere_local:

# pack the remote directory's contents on the server, unpack the stream locally
ssh {yourserver} "tar czf - -C /somewhere ." | tar xzf - -C somewhere_local
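As dave_thompson_085 notes in a comment below, compression (the z flag) only pays off for large data or a slow network; the same download without it:

# stream an uncompressed tar from the server and unpack it locally
ssh {yourserver} 'tar cf - -C /somewhere .' | tar xf - -C somewhere_local
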
krt
  • There's nothing in the question which suggests the OP needs to use sudo. – Kenster May 22 '15 at 18:35
  • That copies local to remote; it should be reversed, ssh @remote "tar cf - from" | tar xf -, for this question. Using both tar z and ssh -C is almost certainly a waste of CPU; even one is needed only with large data or a slow network. Using v on both concurrent tars will often produce very confusing output; I would use it only on the destination. – dave_thompson_085 May 23 '15 at 23:25