Rclone sync only discovering 1/4 of files/folders

What is the problem you are having with rclone?

rclone sync is only listing about a quarter of my files. It shows that it's trying to upload 1.2 TiB of data when the full amount in the folder (no duplicates) is closer to 4.5 TiB. Without the full amount I don't know that everything is getting synced, and I also don't get an accurate ETA.

2021/12/07 07:11:53 NOTICE: 
Transferred:   	   63.612 GiB / 1.206 TiB, 5%, 2.078 MiB/s, ETA 6d16h17m43s
Checks:              2472 / 2472, 100%
Transferred:         1587 / 11651, 14%
Elapsed time:    8h19m0.3s

What is your rclone version (output from rclone version)

rclone v1.57.0

  • os/version: darwin 10.15.7 (64 bit)
  • os/kernel: 19.6.0 (x86_64)
  • os/type: darwin
  • os/arch: amd64
  • go/version: go1.17.2
  • go/linking: dynamic
  • go/tags: cmount

Which cloud storage system are you using? (eg Google Drive)

Backblaze B2

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone sync -i /Volumes/EXTERNAL1/FILES myblaze:FILES --transfers 16 --checkers 16

The rclone config contents with secrets removed.

[myblaze]
type = b2
account = [REMOVED]
key = [REMOVED]

A log from the command with the -vv flag

Using --log-file=mylogfile.txt, no log file is ever generated :(

You have -i, which indicates interactive mode, but you're also logging to a file; the two contradict each other.
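If you want the -vv log, drop -i and run non-interactively, something like this (same paths and flags as your command above):

rclone sync /Volumes/EXTERNAL1/FILES myblaze:FILES --transfers 16 --checkers 16 -vv --log-file=mylogfile.txt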

      --max-backlog int                      Maximum number of objects in sync or check backlog (default 10000)

The run doesn't get a full list up front: it only grabs enough items to fill its backlog and then continues, so what you are seeing is expected. The totals will grow over time as the run continues.
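If you want the counts to be more complete up front, you can raise the backlog, something like this (the exact value is arbitrary; per the flag's docs, a negative value makes the backlog as large as possible):

rclone sync /Volumes/EXTERNAL1/FILES myblaze:FILES --transfers 16 --checkers 16 --max-backlog 200000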

I've used --max-backlog and --check-first, which ups the size count to 1.8 TiB, but that's still far from 4.5 TiB.

I can only see what you've shared, and you haven't shared that.

That's the command you've shared.

If you ran something else, you'd have to run it and share the output.

Thanks for taking the time, Animosity, I do appreciate it.

For anyone who finds this thread later and has a similar problem: using --check-first (which effectively makes the backlog unlimited) has kicked off a checking pass where I can see the total size growing, and it's already at 3.4 TiB. No transfers have begun yet, but I'm optimistic. If you don't see another post from me, that means --check-first fixed it.
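For reference, the run now looks roughly like my original command with --check-first added:

rclone sync -i /Volumes/EXTERNAL1/FILES myblaze:FILES --transfers 16 --checkers 16 --check-first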

It wasn't ever broken.

If you just let it run, it'll eventually list out all the files and complete.

It's just a case of pay now or pay later. If you aren't ordering transfers, you're just wasting cycles enumerating everything up front and making the run start slower and use more memory/CPU, since it has to store all of that before doing anything.

Unless you are ordering your transfers, there's no reason to check first.
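If you did want to order transfers, that's the case where checking first pays off, something like this (the sort key is just one example of what --order-by accepts):

rclone sync /Volumes/EXTERNAL1/FILES myblaze:FILES --transfers 16 --checkers 16 --check-first --order-by size,descending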
