# rclone --checkers=64 sync -P an:xxx xxx
2022-02-10 11:55:09 ERROR : : error reading source directory: directory not found
2022-02-10 11:55:09 ERROR : Local file system at xxx: not deleting files as there were IO errors
2022-02-10 11:55:09 ERROR : Local file system at xxx: not deleting directories as there were IO errors
2022-02-10 11:55:09 ERROR : Attempt 1/3 failed with 2 errors and: directory not found
2022-02-10 11:55:09 ERROR : : error reading source directory: directory not found
2022-02-10 11:55:09 ERROR : Local file system at xxx: not deleting files as there were IO errors
2022-02-10 11:55:09 ERROR : Local file system at xxx: not deleting directories as there were IO errors
2022-02-10 11:55:09 ERROR : Attempt 2/3 failed with 2 errors and: directory not found
2022-02-10 11:55:09 ERROR : : error reading source directory: directory not found
2022-02-10 11:55:09 ERROR : Local file system at xxx: not deleting files as there were IO errors
2022-02-10 11:55:09 ERROR : Local file system at xxx: not deleting directories as there were IO errors
2022-02-10 11:55:09 ERROR : Attempt 3/3 failed with 2 errors and: directory not found
Transferred: 0 / 0 Bytes, -, 0 Bytes/s, ETA -
Errors: 2 (retrying may help)
Checks: 0 / 0, -
Transferred: 0 / 0, -
Elapsed time: 0s
2022/02/10 11:55:09 Failed to sync with 2 errors: last error was: directory not found
#
Well, that's the latest version in Ubuntu 20.04. It's not that I'm unwilling to update to the latest downloadable release, but "somebody" would need to update the Ubuntu packages.
-L does not do what I want. I want to copy the links themselves, not the contents of whatever the links point to.
I also tried -l, but that didn't work either.
I will update to the newest rclone and see if that helps.
Every distro repo has its own maintainer; that packaging isn't done by the rclone project. Feel free to ask them to keep it up to date, as the only install path we support is the one from here.
If you were doing a local -> anything transfer then using this flag would be the right thing to do.
-l, --links Translate symlinks to/from regular files with a '.rclonelink' extension
However the SFTP backend doesn't support that flag at the moment - it is only a local backend flag.
So we could have an equivalent flag for the SFTP backend - say --sftp-links which would cause symlinks to show as link.rclonelink files and that would allow them to be copied to/from local systems or elsewhere.
There is actually an issue about this already
If you want you could help implementing it, or maybe your company would like to sponsor me to implement it?
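The "-l/--links" translation described above can be sketched roughly like this. This is an illustrative sketch only, not rclone's actual implementation; the function names and paths are made up for the demo. The idea is simply that a symlink becomes a plain text file named `<name>.rclonelink` containing the link target, and the reverse translation turns it back into a symlink:

```python
import os
import tempfile

# Illustrative sketch (not rclone's real code) of the "-l/--links"
# translation: a symlink is stored as a text file named
# <name>.rclonelink whose content is the link target, and the reverse
# translation recreates the symlink.

def link_to_rclonelink(link_path, dest_dir):
    """Store a symlink as a .rclonelink text file."""
    target = os.readlink(link_path)  # the path the symlink points at
    out = os.path.join(dest_dir, os.path.basename(link_path) + ".rclonelink")
    with open(out, "w") as f:
        f.write(target)
    return out

def rclonelink_to_link(rclonelink_path, dest_dir):
    """Recreate a symlink from a .rclonelink text file."""
    with open(rclonelink_path) as f:
        target = f.read()
    name = os.path.basename(rclonelink_path)[: -len(".rclonelink")]
    link = os.path.join(dest_dir, name)
    os.symlink(target, link)  # restore the original link
    return link

# Round-trip demo (Unix only, since it creates a real symlink).
src, store, back = (tempfile.mkdtemp() for _ in range(3))
os.symlink("/etc/hosts", os.path.join(src, "blah"))
stored = link_to_rclonelink(os.path.join(src, "blah"), store)
restored = rclonelink_to_link(stored, back)
print(os.readlink(restored))  # -> /etc/hosts
```

Because the stored form is an ordinary file, it can pass through any backend that knows nothing about symlinks, which is the point of the translation.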
Yes, I tried the "-l" flag, too, and it didn't work. I am using sftp -> local (but local -> sftp would also work for me, if I start the job from the remote machine).
You said the SFTP backend does not support that flag. However, since SFTP usually runs between Unix machines, I don't see why the "-l" flag (which creates the .rclonelink files), or any other flag for that matter, should even be needed there.
Looks like the right solution would be for the sftp backend to just do the right thing for symlinks; no additional flags needed.
felix@gemini:~/test$ ls -al
total 4
drwxrwxr-x 1 felix felix 8 Feb 11 12:39 .
drwxr-xr-x 1 felix felix 336 Feb 11 12:40 ..
lrwxrwxrwx 1 felix felix 10 Feb 11 12:39 blah -> /etc/hosts
felix@gemini:~/test$ scp blah home:~/test2
blah 100% 184 744.0KB/s 00:00
felix@gemini:~/test$ cd ..
felix@gemini:~$ cd test2
felix@gemini:~/test2$ ls
blah
felix@gemini:~/test2$ ls -al
total 4
drwxrwxr-x 1 felix felix 8 Feb 11 12:42 .
drwxr-xr-x 1 felix felix 336 Feb 11 12:40 ..
-rw-r--r-- 1 felix felix 184 Feb 11 12:42 blah
I've only ever seen that work with rsync but maybe I'm missing a flag or misunderstanding how you are doing it.
I am familiar with rsync, too, and it works just fine there.
I am trying to duplicate the rsync behavior with rclone and I haven't been able to. So, when you're asking "how it works", I can only answer "it doesn't", and I'm considering this a bug.
Let me reword, hopefully this time it makes sense:
.rclonelink files make sense for cloud stores that don't support symlinks, for example S3 and/or Google Cloud Storage and others. For backends with an underlying system that is unix-based and has symlinks, I don't see any reason why symlinks should not be transferred correctly with no additional flags.
-L and -l have their own meanings, both varying the base behavior of a "clone" when symlinks are encountered. A "clone", however, should be just that. The destination should be an exact copy of the source.
Every other program I've used to copy files follows symlinks and copies the data, like the basic cp command:
felix@gemini:~/test$ cp blah test
felix@gemini:~/test$ ls -al
total 8
drwxrwxr-x 1 felix felix 16 Feb 11 23:35 .
drwxr-xr-x 1 felix felix 336 Feb 11 17:11 ..
lrwxrwxrwx 1 felix felix 10 Feb 11 12:39 blah -> /etc/hosts
-rw-r--r-- 1 felix felix 184 Feb 11 23:35 test
I'm not sure where 'clone' comes in as the commands are 'copy' and 'sync'.
It is how many programs under Linux are designed and operate.
If you'd like it to operate in a different way, feel free to log a feature request and someone can pick it up or develop the solution and submit a pull request.
For copying symlinks and items of that nature, you'd be much better off using tar; or, if you are copying Linux to Linux, use rsync, as it handles links fine and was built specifically for that.
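The claim that tar keeps symlinks as symlinks is easy to check; here is a quick stdlib demonstration using Python's tarfile module (the directory layout and names are just for the demo):

```python
import os
import tarfile
import tempfile

# Quick check that tar archives preserve symlinks as symlinks rather
# than following them (a relative link is used so every tarfile
# extraction policy accepts it).
src = tempfile.mkdtemp()
os.symlink("target.txt", os.path.join(src, "blah"))  # relative symlink

archive = os.path.join(tempfile.mkdtemp(), "demo.tar")
with tarfile.open(archive, "w") as tar:
    tar.add(os.path.join(src, "blah"), arcname="blah")

dst = tempfile.mkdtemp()
with tarfile.open(archive) as tar:
    tar.extractall(dst)  # "blah" comes back out as a symlink

print(os.readlink(os.path.join(dst, "blah")))  # -> target.txt
```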
felix@gemini:~$ rsync -avh /home/felix/test home:/home/felix/test2
sending incremental file list
test/
test/blah -> /etc/hosts
test/test
sent 362 bytes received 42 bytes 269.33 bytes/sec
total size is 194 speedup is 0.48
felix@gemini:~$ cd test2
felix@gemini:~/test2$ ls -al
total 0
drwxrwxr-x 1 felix felix 8 Feb 11 23:41 .
drwxr-xr-x 1 felix felix 336 Feb 11 17:11 ..
drwxrwxr-x 1 felix felix 16 Feb 11 23:35 test
felix@gemini:~/test2$ cd test
felix@gemini:~/test2/test$ ls
blah test
felix@gemini:~/test2/test$ ls -al
total 8
drwxrwxr-x 1 felix felix 16 Feb 11 23:35 .
drwxrwxr-x 1 felix felix 8 Feb 11 23:41 ..
lrwxrwxrwx 1 felix felix 10 Feb 11 12:39 blah -> /etc/hosts
-rw-r--r-- 1 felix felix 184 Feb 11 23:35 test
Rclone deals with cloud storage, and very few cloud storage systems can cope with symlinks, which are a very UNIX concept.
The core of rclone doesn't understand symlinks - that's why the symlinks would have to be translated to .rclonelink files and translated back by the sftp or local backends.
It wouldn't be too hard to add symlink -l support to SFTP. It could possibly be added to FTP but that's about it!
From my experience, there is a reason symlinks generally aren't copied by a lot of tools: you'd end up with broken links everywhere, especially when going from server A to server B.
If I'm copying Linux to Linux, I'm not really using rclone, as rsync handles permissions/links/etc. a bit better. Right tool for the job, rather than trying to shoehorn more local functionality into a tool built for cloud remotes.
The reason I was using rclone to begin with is throughput. I am doing a site-to-site copy of about 60TB, and rsync (at least in our network) tends to be a bit finicky; throughput is not consistent.
With rclone being able to do multiple connections in parallel it's much faster and more consistent than rsync (in terms of total throughput).
I think what will work for us is to do the bulk of the copy with rclone, and then do a final rsync pass to fix symlinks/permissions/etc.
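The parallel-transfer point above is just a concurrency pattern: several streams in flight at once so that no single slow stream caps total throughput. A toy sketch of the idea (this is not rclone code; the file names and worker count are made up, with the worker count playing the role of something like --transfers=4):

```python
import concurrent.futures
import os
import shutil
import tempfile

# Toy illustration of parallel file transfers: copy several files
# concurrently instead of one at a time. Not rclone code, just the
# concurrency pattern behind its --transfers setting.
src = tempfile.mkdtemp()
dst = tempfile.mkdtemp()
for i in range(8):
    with open(os.path.join(src, f"file{i}.bin"), "wb") as f:
        f.write(os.urandom(1024))

def copy_one(name):
    shutil.copy(os.path.join(src, name), os.path.join(dst, name))
    return name

# Up to four copies in flight at once, analogous to --transfers=4.
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    copied = sorted(pool.map(copy_one, os.listdir(src)))

print(len(copied))  # -> 8
```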
@ncw, having the core of rclone not understanding symlinks is the crux of the issue here. Makes sense now.