125

I'm having some trouble uploading directories (which contain other directories a few levels deep) over sftp. I realize I could work around this by gzipping, but I don't see why that's necessary.

Anyway, I try

sftp> put bin/
Uploading bin/ to /home/earlz/blah/bin
bin/ is not a regular file
sftp> put -r bin/
Uploading bin/ to /home/earlz/blah/bin
Couldn't canonicalise: No such file or directory
Unable to canonicalise path "/home/earlz/blah/bin"

I think the last error message is completely stupid. So the directory doesn't exist? Why not create the directory?

Is there any way around this issue with sftp, or should I just use scp?

Earlz

11 Answers

161

I don't know why sftp does this, but you can only copy recursively if the destination directory already exists. So do this:

sftp> mkdir bin
sftp> put -r bin
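Since the two commands are always the same, they can also be scripted instead of typed interactively. A minimal sketch using sftp's batch mode (the hostname below is a placeholder):

```shell
# put -r fails unless the remote directory exists, so the batch file
# creates it first, then uploads recursively:
cat > upload.batch <<'EOF'
mkdir bin
put -r bin
EOF
# Run non-interactively (placeholder host):
#   sftp -b upload.batch user@remote.example.com
```

With `-b`, sftp aborts on the first failing command unless the command is prefixed with `-`, so `-mkdir bin` lets the script keep going if the directory already exists.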
Useful Dude
88

CORRECTED: I initially claimed, wrongly, that OpenSSH did not support put -r. It does, but in a very strange way: it seems to expect the destination directory to already exist, with the same name as the source directory.

sftp> put -r source
 Uploading source/ to /home/myself/source
 Couldn't canonicalize: No such file or directory
 etc.
sftp> mkdir source
sftp> put -r source
 Uploading source/ to /home/myself/source
 Entering source/
 source/file1
 source/file2

What's especially strange is that this even applies if you give a different name for the destination:

sftp> put -r source dest
 Uploading source/ to /home/myself/dest
 Couldn't canonicalize: ...
sftp> mkdir dest
sftp> put -r source dest
 Uploading source/ to /home/myself/dest/source
 Couldn't canonicalize: ...
sftp> mkdir dest/source
sftp> put -r source dest
 Uploading source/ to /home/myself/dest/source
 Entering source/
 source/file1
 source/file2

For a better-implemented recursive put, you could use the PuTTY psftp command line tool instead. It's in the putty-tools package under Debian (and most likely Ubuntu).

Alternatively, FileZilla will do what you want, if you'd rather use a GUI.

Jander
21

You might be interested in using rsync instead. The command for that would be

 rsync --delete --rsh=ssh -av bin/ remote-ip-or-fqdn:/home/earlz/blah/bin/

This will copy everything in bin/ and place it on the remote server in /home/earlz/blah/bin/. As an added benefit, it first checks whether each file has changed on the remote side and skips re-sending any that haven't. Additionally, you can add the -z option and it will compress the transfer for you.

Shawn J. Goff
18

lcd: your local folder (with subfolders)

cd: your remote folder

put -r .

jasonwryan
eliseu
    actually, I think this is the most correct answer... for the purpose of putting my whole folder there – nonopolarity Jan 27 '16 at 14:21
  • sftp complained when I cd'd into the local parent folder and tried to put the directory by name. But cd'ing into the directory I wanted to upload did it. Thank you! – karimkorun Jul 07 '16 at 10:15
8

May I suggest a somewhat complicated answer, without zipping, but including tar?

Here we go:

tar -cf - ./bin | ssh target.org " ( cd /home/earlz/blah ; tar -xf - ) "

This packs the directory ./bin with tar (-c: create an archive, -f -: write it to stdout instead of a file) and pipes it through ssh to target.org (which might as well be an IP address), where the command in quotes is performed: cd to blah, then tar -xf - (extract an archive read from stdin).

It's as if you pack a package at home, bring it to the post, then drive to work, where you expect the package and open it.

Maybe there is a much more elegant solution which just uses sftp.
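The pipe can be dry-run entirely locally by replacing the ssh step with a plain subshell, which is a handy way to convince yourself the tar flags are right before involving a remote machine (directory names are examples):

```shell
# Local stand-in for the remote side: the subshell plays the role of
# the shell that ssh would start on target.org.
mkdir -p bin dest
echo data > bin/file.txt
tar -cf - ./bin | ( cd dest && tar -xf - )
cat dest/bin/file.txt
```

Adding -z on both ends (tar -czf - / tar -xzf -) gets you compression over the wire, similar to rsync's -z.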

user unknown
    A piped tar is a very good solution, however this needs ssh login support (sftp is a different protocol on top of ssh). tar, unlike others, by default, runs recursively, transfers all special files (FIFO, block/character devices etc.), tries to translate the UID/GID mapping from the source to the target system and has a traditional short commandline. (One exception though: "Unix domain sockets" are not transferred. But who needs those?) – Tino May 03 '16 at 12:22
  • I use this method when I need compression between nodes also you can use the pv tool to watch speed in long transfers – Felipe Alcacibar May 03 '17 at 14:40
2

You can use yafc (Yet another FTP/SFTP client). Its -r option works very well there.

sr_
1

You can use rsync, which is a very powerful alternative to scp and sftp, especially when updating copies from machine A to machine B: it doesn't copy files that haven't been altered, and it can also remove files from machine B that have been deleted from machine A (only when it's told to, of course).

for example :

rsync -zrp /home/a/ user@remote.host.com:/home/b/  

The -r option copies files recursively, -z enables compression during the transfer, and -p preserves file permissions when copying, which is something that scp doesn't do AFAIK. Many more options are possible; as usual, read the man pages.
Original answer by Karolos

Sherlock
  • This is a dupe of Shawn J. Goff's answer from Feb 7 '11 at 19:24. Using rsync like this requires SSH access; it doesn't work with an account that only has SFTP access. – GuyPaddock Apr 29 '20 at 17:34
1

Log in to the remote server with ssh, use sftp to connect back to your own box, then use the get -r command to transfer directories to the remote server. Unlike put, get -r does not require the target directory to be created in advance.

ssh remote-ip
sftp local-ip
get -r whichever-dir
Archemar
0

SFTP case:

I needed to copy this structure to my SFTP server:

mainfolder --- folder --- subfolder
                  |           |
              file1.txt   file2.txt

That solved my problem:

cd ./mainfolder
mkdir folder
put -r /from/source/folder/* /mainfolder/folder/
cd ./folder
mkdir subfolder
put -r /from/source/folder/subfolder/* /mainfolder/folder/subfolder/
0

I just learned from the Arch Linux Wiki that it is possible to mount the sftp share using sshfs. I'm running an sftp server with chroot and jail, and sshfs works very well.

  1. Mount: sshfs <sftpuser>@<server>:<read/writable/directory> <your/local/mount/directory>
  2. Unmount: fusermount -u <your/local/mount/directory>
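Once mounted, the share behaves like any local directory, so ordinary recursive tools handle the upload; a sketch, with a plain directory standing in for the mountpoint:

```shell
# cp -r is all that's needed once sshfs has the share mounted;
# mnt/share here stands in for your/local/mount/directory.
mkdir -p bin mnt/share
echo hi > bin/f.txt
cp -r bin mnt/share/
cat mnt/share/bin/f.txt
```

This sidesteps the put -r quirks entirely, since the sftp protocol details are hidden behind the filesystem layer.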
0

Given the following local structure (inspired by @nikita-malovichko):

~ --- mainfolder --- folder --- subfolder
                        |           |
                    file1.txt   file2.txt

I use this one-liner to upload a whole local directory mainfolder:

# PowerShell:
Write-Output 'put mainfolder' | sftp -r -i "~/.ssh/ssh.pem" ubuntu@my-url.com:~/my-remote-target
# Bash:
echo 'put mainfolder' | sftp -r -i "~/.ssh/ssh.pem" ubuntu@my-url.com:~/my-remote-target

If you want to upload everything inside mainfolder without the top-level directory itself (unpacked, so to speak), use put mainfolder/*, or as the whole command:

# PowerShell:
Write-Output 'put mainfolder/*' | sftp -r -i "~/.ssh/ssh.pem" ubuntu@my-url.com:~/my-remote-target
# Bash:
echo 'put mainfolder/*' | sftp -r -i "~/.ssh/ssh.pem" ubuntu@my-url.com:~/my-remote-target