What is the problem you are having with rclone?
When syncing files to Wasabi S3 storage, I see the same files getting synced over and over.
I noticed that the files that get copied multiple times are file numbers 1001 and higher in the directory. So it seems that rclone only fetches the first 1,000 files in the directory when doing the comparison and then assumes everything after that needs to be uploaded.
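For context, S3-compatible APIs return at most 1,000 keys per ListObjectsV2 page, and the client has to pass NextContinuationToken back to get the next page. A rough sketch of what I mean, using the AWS CLI (the bucket and prefix here are just placeholders for my Wasabi layout):

aws s3api list-objects-v2 --bucket my-bucket --prefix "derek/path to/timelapse files/"

That first response carries at most 1,000 keys, plus IsTruncated and NextContinuationToken when there are more. The token has to be passed back to get the rest:

aws s3api list-objects-v2 --bucket my-bucket --prefix "derek/path to/timelapse files/" --continuation-token "<NextContinuationToken from the previous response>"

If that token is getting lost somewhere between rclone and Wasabi, it would explain exactly the 1,000-file cutoff I'm seeing.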
All this re-uploading is causing my storage usage to skyrocket, because rclone actually uploads each of those files again on every sync.
To make things worse, because rclone thinks the file is new, it also overwrites the modification date. There is currently no fix for that (see Rclone sync between two WebDav issues), other than deleting the file and re-uploading it, which would also be very bad for usage charges.
I looked at this same folder in the WebUI and it also seems to only show the first 1,000 files.
rclone ls "wasabi3:/derek/path to/timelapse files" also shows only 1,000 files.
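To double-check the count on the remote, these are the commands I can run (rclone size reports the total object count and size under a path, and wc -l just counts the lines that rclone ls prints):

rclone size "wasabi3:/derek/path to/timelapse files"
rclone ls "wasabi3:/derek/path to/timelapse files" | wc -l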
What is your rclone version (output from rclone version)
Which OS you are using and how many bits (eg Windows 7, 64 bit)
Linux, 64 bit
Which cloud storage system are you using? (eg Google Drive)
Nextcloud (with Local Storage) and Wasabi S3
The command you were trying to run (eg rclone copy /tmp remote:tmp)
rclone sync NC: wasabi3:/derek -P --size-only -v
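For reference, adding --dry-run (a standard rclone flag that only logs what would be transferred, without copying anything) lets me check what rclone plans to upload without incurring more charges:

rclone sync NC: wasabi3:/derek -P --size-only -v --dry-run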
A log from the command with the -vv flag (eg output from rclone -vv copy /tmp remote:tmp)
2020-02-02 21:12:10 INFO : Photos/Timelapse/File1001.JPG: Copied (new)