Problem copying symlink to remote. Just a symlink

What is the problem you are having with rclone?

I am trying to back up a complex filesystem where I have to use a script that runs multiple rclone instances in parallel so that the job finishes in hours rather than days. Thus one rclone instance is trying to copy just a symlink to S3, while others copy the directory trees at the same level.

The problem is that the symlink is not being copied successfully. The exact failure differs depending on the version of rclone and whether copy or sync is used.

Run the command 'rclone version' and share the full output of the command.

rclone v1.48.0

  • os/arch: Linux/amd64
  • go version: go1.11.2

rclone v1.57.0-DEV

  • os/version: redhat 8.5 (64 bit)
  • os/kernel: 4.18.0-348.23.1.el8_5.x86_64 (x86_64)
  • os/type: Linux
  • os/arch: amd64
  • go/version: go1.16.12
  • go/linking: dynamic
  • go/tags: none

No, but I have the latest version packaged for EL8.

Which cloud storage system are you using? (eg Google Drive)

AWS S3

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy /here/there/symlink/ S3:bucket/somewhere/here/there/ --links
rclone sync /here/there/symlink/ S3:bucket/somewhere/here/there/ --links
rclone copy /here/there/symlink S3:bucket/somewhere/here/there/ --links
rclone sync /here/there/symlink S3:bucket/somewhere/here/there/ --links

The rclone config contents with secrets removed.

[s3]
type = s3
env_auth = true
provider = AWS
region = us-east-2
server_side_encryption = aws:kms
sse_kms_key_id = <id>
no_check_bucket = true

A log from the command with the -vv flag

I can't copy and paste from the environment. Everything here is typed by hand, so please forgive any typos.

With rclone 1.57, both sync and copy follow the symlink into another filesystem and start uploading, which I guess is the expected behaviour with the trailing / on the source. If I remove the trailing / from the source, I get the same behaviour as 1.48's copy (which fails the same way with or without the trailing /): rclone fails with "ERROR: Attempt 1/3 failed with 1 errors and: object not found" repeated 3 times, followed by "Failed to {sync/copy}: object not found". I don't think anything in the verbose logging is useful, other than maybe "Transferred: 0 B / 0 B".

With rclone 1.48 and sync, rclone exits with a code of zero, copies nothing to S3, and in the verbose logging I see "DEBUG: symlink.rclonelink: Excluded", alongside exclude messages for all the other files/directories/symlinks in /here/there.

If I sync/copy the whole of /here/there then somewhere/here/there/symlink.rclonelink is created OK; it appears to only be an issue when rclone is asked to copy/sync just a symlink on its own.

I would have thought you could do this in rclone by increasing --checkers and --transfers, but maybe you are running on multiple machines, I don't know.

The problem is that when you supply --links, rclone translates the names for you, from symlink to symlink.rclonelink. That happens before rclone starts trying to decide whether /here/there/symlink is a file or a directory.
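You can see the translation in a local listing (illustrative; the path is the one from this thread):

```shell
# With --links, the local listing shows the symlink under its
# translated name rather than the on-disk name:
rclone lsf /here/there --links
# ...the entry appears as "symlink.rclonelink"
```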

So this should work:

rclone copy /here/there/symlink.rclonelink S3:bucket/somewhere/here/there/ --links

However it doesn't appear to and that looks like a bug!

You can use this as a workaround:

rclone copy /here/there/ --include "/symlink.rclonelink" S3:bucket/somewhere/here/there/ --links --no-traverse

However if you've got lots of symlinks to copy it would be more efficient to stick them in a file and use --files-from than invoke rclone individually on each one.
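A minimal sketch of that approach, assuming GNU find; the temp-dir tree here stands in for /here/there, and the rclone invocation is shown commented since it needs real credentials:

```shell
#!/bin/sh
set -e

# Demo tree standing in for /here/there: one regular file, one symlink.
src=$(mktemp -d)
touch "$src/bigfile"
ln -s /somewhere/else "$src/symlink"

# With --links rclone presents each symlink as <name>.rclonelink, so the
# --files-from list must use that suffix. GNU find's %P prints the path
# relative to the starting point.
list=$(mktemp)
find "$src" -type l -printf '%P.rclonelink\n' > "$list"
cat "$list"    # symlink.rclonelink

# One invocation then copies every listed symlink (bucket path as above):
# rclone copy "$src" S3:bucket/somewhere/here/there/ \
#   --links --no-traverse --files-from "$list"
```

This way the per-symlink rclone startups are replaced by one copy over the whole list.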

I have just tried the workaround and it works perfectly, thanks! Since I already have a script to do the splitting it is actually very easy for me to add special handling of symlinks which are to be synced on their own. So the quick supply of a workaround is awesome!

At first I thought bumping up --checkers and --transfers would help, but the issue is I have 10 million files which can be 250GB each, alongside 10 million symlinks. Hundreds of transfers works well for the symlinks, but not so well for the files (8 is good, with concurrency of 16; I have tens of Gb of bandwidth to S3). I have thought of suggesting transfer bands, so you have one pool transferring small files and another pool for large ones, but I wasn't sure anyone else would find it useful. Also, I have been stuck on 1.48 for a while and am only now moving to 1.57, so if a feature is added I am not sure when I could practically get it, for sad packaging reasons.

hello and welcome to the forum,

the only way to get the latest stable version is the official install script:
https://rclone.org/downloads/#script-download-and-install

Actually we have this already. Check out the docs for --order-by: you can order by size but take a certain percentage of transfers from each end of the pipe, effectively making a small-file pool and a large-file pool.
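For example (the values are illustrative; see the --order-by docs for the exact semantics of the percentage and which end it applies to):

```shell
# Order the queue by size and split the transfer slots between the
# two ends, so small and large files each get their own share:
rclone sync /here/there S3:bucket/somewhere/here/there \
  --links --order-by size,mixed,75 --transfers 16
```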

Can you make an issue on GitHub about the original problem? I think it is a bug which needs fixing.

Thanks

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.