I have about 500 GB of data in my S3 storage, about 280,000 files in total. I hadn't used encryption until now. When I ran the sync command (no data had changed), the check against the local Synology storage took about 1.5 hours.
Now I have encryption enabled and have re-uploaded all the data to S3 storage (same size and number of files). The same check now takes over 12 hours. At the beginning the check is just as fast: about 70,000 files are checked in the first 15 minutes. It then gradually slows down until the rate is only a few files per second.
Run the command 'rclone version' and share the full output of the command.
1.60.1.0
Which cloud storage system are you using? (eg Google Drive)
Aruba Cloud Object Storage S3
The command you were trying to run (eg rclone copy /tmp remote:tmp)
Just a few lines to illustrate. The full debug log has hundreds of thousands of lines, but they are all of this type.
2023/05/19 18:34:15 DEBUG : folder/211100002 Melting group.pdf: Size and modification time the same (differ by 0s, within tolerance 100ns)
2023/05/19 18:34:16 DEBUG : folder/PolUcDPK.DAT: Unchanged skipping
2023/05/19 18:34:16 DEBUG : folder/RozJUBK.MDT: Unchanged skipping
2023/05/19 18:34:16 DEBUG : folder/201100003 Tesco.pdf: Unchanged skipping
2023/05/19 18:34:16 DEBUG : folder/UcOsnova.DAT: Size and modification time the same (differ by 0s, within tolerance 100ns)
2023/05/19 18:34:16 DEBUG : folder/RozPUKP.DIA: Unchanged skipping
2023/05/19 18:34:16 DEBUG : folder/PVPOJ-29161592-2019-brezen_radne_17.04.2019_Obalka.xml: Size and modification time the same (differ by 0s, within tolerance 100ns)
2023/05/19 18:34:17 DEBUG : folder/PolUcDPK.DIA: Size and modification time the same (differ by 0s, within tolerance 100ns)
2023/05/19 18:34:17 DEBUG : folder/211100002 Melting group.pdf: Unchanged skipping
2023/05/19 18:34:17 DEBUG : folder/RozJUBK.MIX: Size and modification time the same (differ by 0s, within tolerance 100ns)
2023/05/19 18:34:17 DEBUG : folder/RozPUKP.DIX: Size and modification time the same (differ by 0s, within tolerance 100ns)
2023/05/19 18:34:17 DEBUG : folder/UcOsnova.DAT: Unchanged skipping
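The "within tolerance 100ns" lines above reflect the default check a sync performs: a file is skipped when its size matches and its modification time agrees within a small tolerance. A minimal sketch of that decision, assuming integer nanosecond timestamps such as `os.stat().st_mtime_ns` (an illustration of the logic in the log, not rclone's actual code):

```python
TOLERANCE_NS = 100  # the "within tolerance 100ns" from the log lines

def needs_transfer(src_size: int, src_mtime_ns: int,
                   dst_size: int, dst_mtime_ns: int) -> bool:
    """Return True if the file should be re-copied,
    False to skip it ("Unchanged skipping" in the log)."""
    if src_size != dst_size:
        return True  # size differs -> always transfer
    # Same size: skip only if the modification times agree within tolerance
    return abs(src_mtime_ns - dst_mtime_ns) > TOLERANCE_NS

# Same size, identical times -> skipped
print(needs_transfer(1024, 1_000_000_000, 1024, 1_000_000_000))  # False
# Same size, times differ by one second -> re-copied
print(needs_transfer(1024, 1_000_000_000, 1024, 2_000_000_000))  # True
```

Note that this comparison itself is cheap; each log line represents only a size and timestamp check, which is why a slowdown to a few files per second is surprising.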
Unfortunately, the rate at which files are checked gradually decreases until it stops completely and the synchronization never finishes. For example, it reports 257258 of 257258 files checked, the job timer keeps running, but no more files are processed, even though the actual number of files is about 280,000. No activity from any of the threads is visible, so increasing them to 16 would not help in this case.
After about four hours it had slowed down so much that I stopped it manually, because it would have run for more than 12 hours again. Yes, I'm running on an Intel Core i7 with 24 GB RAM, a 60/60 Mbps internet connection, and Windows 10 x64. The command is as I wrote it.
Yes, I write the log file via SMB. However, I restarted the sync completely without logging and the progress is like this:
First ten minutes: 134,000 files checked.
Second ten minutes: 163,000 files checked.
From then on, the check rate gradually decreases until it essentially stops.
There is nothing suspicious in Process Explorer; only rclone's RAM consumption gradually increases, reaching about 600 MB after 15 minutes.
Example: 1/12/123.txt is encrypted to p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0
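One way to sanity-check whether encrypted paths could exceed a provider's key-length limit is to estimate the inflated length. This sketch assumes rclone crypt's documented standard name encryption (PKCS#7 padding to 16-byte blocks, a length-preserving encryption step, then unpadded base32 at 8 characters per 5 bytes); segment lengths are measured in UTF-8 bytes, and the helper names are my own:

```python
import math

def crypt_segment_len(name_bytes: int) -> int:
    """Estimated length in characters of one encrypted path segment.
    PKCS#7 always adds at least one padding byte; unpadded base32
    then emits 8 characters per 5 bytes, rounded up."""
    padded = 16 * math.ceil((name_bytes + 1) / 16)
    return math.ceil(padded * 8 / 5)

def crypt_path_len(path: str) -> int:
    """Estimated total length of an encrypted path, including '/' separators."""
    segments = path.split("/")
    encrypted = sum(crypt_segment_len(len(s.encode("utf-8"))) for s in segments)
    return encrypted + len(segments) - 1

# Matches the example above: each short segment becomes 26 characters,
# so "1/12/123.txt" encrypts to 26 + 1 + 26 + 1 + 26 = 80 characters.
print(crypt_path_len("1/12/123.txt"))  # 80
```

Running this over the longest local paths would show whether any encrypted object key approaches a limit such as AWS S3's documented 1024-byte key maximum (Aruba's limit may differ).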
Could it be that some paths exceed this S3 provider's limit, and that instead of failing, some checks enter a never-ending retry cycle? As retries accumulate over time, they would bring the whole sync to a halt.
But in that case I would expect you to see something in the log file: does the end of the log show any retries or anything unusual?
I use encrypted remotes with an even bigger number of files and a larger amount of data without any issues. That suggests there is no obvious bug in rclone, but that you have hit some edge case.
The existing debug output is not sufficient to point to an obvious issue.