
Our administrator told us to back up our data from one cluster to another, because the first one may go down in the near future.

I ran into a very strange phenomenon with the anaconda3 directory (this data is not essential, but I am very curious to figure it out). It takes up 9.4 G before the transfer and 20 G after it (I checked with du -sh).

I use sftp as sftp -Cpr -o ServerAliveInterval=60 -o ServerAliveCountMax=5 -P [*****] user@host.domain and then the command get -pr ~/anaconda3 ~/backup/.

Why are the sizes so different?

Vovin
  • Lots of hard links, perhaps? Do you get different outputs with du -lsh? – muru Jun 03 '22 at 09:22
  • This could be down to the difference in filesystems. E.g. directories with thousands of files on ext4 take a lot more space than on XFS. – Artem S. Tashkinov Jun 03 '22 at 09:34
  • @muru I got equal sizes, and got acquainted with the term hard link. Is there a way to copy a directory through sftp while preserving hard links as they are, or should I use another tool? – Vovin Jun 03 '22 at 09:44
  • I did not find anything useful in the sftp manual. – Vovin Jun 03 '22 at 09:45
  • @Vovin rsync has a --hard-links/-H option – muru Jun 03 '22 at 09:47
  • Is it possible that one or more files are sparse? find has a %S print option to report sparseness, but you cannot select files based on sparseness. – Paul_Pedant Jun 03 '22 at 10:06
  • @Paul_Pedant I thought about that, but no: muru is right, this turned out to be because sftp copies hard links as whole files (see the sketch after these comments). – Vovin Jun 03 '22 at 10:16
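
For reference, a minimal sketch of the two checks discussed in the comments above. The host, destination path and PORT are placeholders based on the question; the rsync invocation is only an assumption about how the transfer could preserve hard links, since sftp itself has no option for that.

    # du counts hard-linked files once by default; -l counts every link,
    # so the second command should reproduce the inflated 20 G figure on the source side
    du -sh ~/anaconda3
    du -lsh ~/anaconda3

    # rsync with -H preserves hard links during the transfer;
    # -a keeps permissions and times, -e sets a non-default ssh port (PORT is a placeholder)
    rsync -aH -e 'ssh -p PORT' ~/anaconda3/ user@host.domain:~/backup/anaconda3/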

0 Answers