I need to copy a large directory containing all kinds of files onto a drive that already has a lot of important data.
I'd like the symlinks to be copied in some munged form, since the drive is exFAT and symlinks are not supported. Because the drive already has a lot of data on it, I'd like to avoid having to transfer that data off in order to reformat it to something that does support symlinks.
Is there a way that rsync could read a symlink such as this:
/etc/apache2/sites-enabled/mysite.conf -> ../sites-available/mysite.conf
and generate a regular file like
<rsync-destination>/etc/apache2/sites-enabled/mysite.conf.rsync-munged
containing the text
../sites-available/mysite.conf
I would prefer a solution built into rsync or done in a bash pipeline, or at least one using the basic tools found on any Linux system, but I'm willing to accept other software that offers what rsync does plus this functionality.
Edit:
I see that rsync cannot do such a thing. I think I found a way to do it manually:
rsync everything but the symlinks, then separately run a find operation to identify all symlinks in the source and create the munged files with a bash script.
I'd have to extract each symlink's target from the command output and I'm not sure how to do that; ls -l prints a lot of extra data that would get in the way.
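A minimal sketch of that manual approach, assuming GNU findutils and coreutils are available; readlink prints just a symlink's target, which avoids parsing ls -l, and SRC/DEST are hypothetical placeholders for the source directory and the exFAT mount point:

    SRC=/path/to/source        # hypothetical source directory
    DEST=/mnt/exfat/backup     # hypothetical destination on the exFAT drive

    # Copy everything except symlinks (-a implies -l; --no-links turns that back off)
    rsync -a --no-links "$SRC"/ "$DEST"/

    # For every symlink, write its target into a matching ".rsync-munged" regular file
    find "$SRC" -type l -print0 |
    while IFS= read -r -d '' link; do
        rel=${link#"$SRC"/}                    # path relative to the source
        mkdir -p "$DEST/$(dirname "$rel")"     # make sure the parent directory exists
        readlink "$link" > "$DEST/$rel.rsync-munged"
    done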
Could you pack the directory into an archive (tar or zip or other), and then transfer that instead? – Kusalananda Nov 11 '21 at 18:56
Instead of rsync, you could just tar -c -f - directory | ssh remote 'cat >directory.tar', or the other way around, ssh remote 'tar -c -f - directory' >directory.tar. – Kusalananda Nov 11 '21 at 21:07
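Applied locally rather than over ssh, that archive idea would look something like the sketch below; the archive is just a regular file on the exFAT drive, so the symlinks survive inside it without any munging (the mount point /mnt/exfat and the archive name are assumptions):

    tar -cf /mnt/exfat/directory.tar directory   # symlinks are stored as symlinks inside the archive
    tar -tvf /mnt/exfat/directory.tar | less     # inspect without extracting; links show as "name -> target"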