Remember the files that were copied in the session

What is the problem you are having with rclone?

Checks when I copy from local to remote take an age. I just want it to skip existing files that don't change.

Run the command 'rclone version' and share the full output of the command.

rclone v1.65.2

  • os/version: ubuntu 22.04 (64 bit)
  • os/kernel: 5.15.0-100-generic (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.21.6
  • go/linking: static
  • go/tags: snap

Which cloud storage system are you using? (eg Google Drive)

type = protondrive
username = XXX
password = XXX
client_uid = XXX
client_access_token = XXX
client_refresh_token = XXX
client_salted_key_pass = XXX

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy "D:\backup" remote: --multi-thread-streams 6 --transfers 6 --auto-confirm -P -vv

Like quoted from here

So in my case, rclone takes ~45 minutes to go through all the files that have already been copied by the previous line and ignore them; when it comes to new, uncopied content, it takes ~7 minutes to copy it until it hits the limit and moves on to the second line. The same happens with every line.

On some slow machines it runs checks for 10 hours...

Basically, is there a way to make rclone "remember" the files it copied in the past 24 hours and skip checking them every time?

Is this possible? --no-traverse and --files-from are not what I'm looking for. Is it possible to cache what has or hasn't been transferred? I tried some cache flags from the documentation without success.

I also tried listing all transferred paths+files and using --exclude-from=, but I'm getting tons of errors because of symbols in the filenames/paths.
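A possible workaround (my sketch, not from the thread): rclone filter patterns treat *, ?, [, ], {, } and \ as glob metacharacters, so a plain path list only works with --exclude-from if those characters are backslash-escaped first. transferred.txt below is a hypothetical example list with one path per line.

```shell
# Escape rclone's glob metacharacters (* ? [ ] { } \) in a plain list of
# paths so it can be fed to --exclude-from without glob errors.
printf '%s\n' 'photos/trip [2021]/img*.jpg' > transferred.txt
sed -e 's/[][*?{}\\]/\\&/g' transferred.txt > exclude-list.txt
cat exclude-list.txt
```

The escaped list can then be passed as --exclude-from=exclude-list.txt; whether rclone then matches every entry as intended still needs checking against your actual filenames.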

Any solutions?

welcome to the forum,

please post the output of rclone config redacted remote: -vv
i am going to guess that the remote is S3.
if so, before rclone can upload a file, rclone has to calculate the checksum, and that takes time.
it is possible to skip that, but then rclone will not verify transferred files using checksums.
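As a sketch of that tradeoff (the flag names assume an S3 remote, which at this point in the thread is only a guess; paths and remote name are placeholders): --s3-disable-checksum skips the pre-upload checksum pass, and --size-only makes the check compare sizes alone.

```shell
# Skip checksum work at the cost of weaker verification (S3 assumed).
CMD="rclone copy /local/backup remote:backup --s3-disable-checksum --size-only"
echo "$CMD"   # replace echo with the real call once the flags look right
```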

no. there are no cache flags for that.
need to post a debug log, so we can see exactly what is going on.

to check, rclone has to compare the source to dest.
to reduce the total number of checks, might try `--max-age`, `--fast-list`, `--use-server-modtime`
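Those three flags combined into one invocation might look like this (paths and remote name are placeholders; not every remote supports --fast-list or server-side modtimes, so check yours first):

```shell
# Check-reducing flags from above, gathered into one command.
FLAGS="--fast-list --use-server-modtime --max-age 24h"
echo rclone copy /local/backup remote:backup $FLAGS   # drop echo to run it
```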


need to post the exact command and full debug output.

Run once:

rclone copy "D:\backup" remote: 

and then for the daily runs use (it will be much faster):

rclone copy "D:\backup" remote: --no-traverse --max-age 24h

this method is explained in rclone copy docs

It will only process files changed or added in the last 24h. From time to time you can run it without --max-age to catch anything missing.
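The two-run routine above can be sketched as one small script (source path and remote name are placeholders I made up; pass "full" for the occasional catch-up run):

```shell
#!/bin/sh
# Daily fast pass by default; "full" argument does the complete check.
SRC="/local/backup"
DEST="remote:backup"

if [ "$1" = "full" ]; then
    # Full pass: checks everything, catches files the fast pass missed.
    CMD="rclone copy $SRC $DEST"
else
    # Fast pass: only files changed in the last 24h; --no-traverse
    # skips listing the whole destination.
    CMD="rclone copy $SRC $DEST --no-traverse --max-age 24h"
fi
echo "$CMD"   # swap echo for the real call once verified
```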

I added redact.

Why can't rclone export a file that can be used for --exclude-from= without tons of glob errors? When I remove the symbols that cause glob errors, it excludes ALL files. Let's not debug that, please. Why is there no option to ignore symbols?

ProtonDrive. I will try --fast-list. I don't want to use --max-age; it doesn't suit me. Are there really no other options to skip all the file checks?

I have used other remotes too. Much faster. But still slow when it comes to 300GB+ files.

Because nobody needed it enough to feel like implementing it? This is an open source project - feel free to make rclone better.

No idea what you are talking about. If you need some advice please provide some examples.

It is a slow and buggy remote - still in beta, which is clearly stated in the docs. Development is stalled at the moment, IMO. So do not use it for anything important. But beta testers' feedback is always welcome.

Not supported by ProtonDrive.

There is the --no-traverse flag, which does exactly that. Together with all the other options, it gives you all the tools required to use it in the way that suits you best.

perhaps rclone check --exclude-from

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.