When encryption was enabled, the synchronisation time increased significantly

What is the problem you are having with rclone?

I have about 500 GB of data in my S3 storage, about 280,000 files in total. I hadn't used encryption until now. When I ran the sync command (no data had changed), the check against the local Synology storage took about 1.5 hours.

Now I have encryption enabled and have re-uploaded all the data to the S3 storage (same size and number of files). The same check now takes over 12 hours. At the beginning the check runs just as fast, and in about 15 minutes roughly 70,000 files are checked. The check then gradually slows down until the rate is only a few files per second.

Run the command 'rclone version' and share the full output of the command.

1.60.1.0

Which cloud storage system are you using? (eg Google Drive)

Aruba Cloud Object Storage S3

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone sync -P remote:bucket/ \local\folder --transfers 8

The rclone config contents with secrets removed.

[basic]
type = s3
provider = Other
access_key_id = xxxxxxxxxxxxxxxxxxxxx
secret_access_key = xxxxxxxxxxxxxxxxxxxxx
endpoint = url.s3storage.com

[crypt]
type = crypt
remote = basic:crypted
password = xxxxxxxxxxxxxxxxxx
password2 = xxxxxxxxxxxxxxxxxxxxxx

A log from the command with the -vv flag

Just a few lines to illustrate; the full debug log has hundreds of thousands of lines, but they are all of this type.

2023/05/19 18:34:15 DEBUG : folder/211100002 Melting group.pdf: Size and modification time the same (differ by 0s, within tolerance 100ns)
2023/05/19 18:34:16 DEBUG : folder/PolUcDPK.DAT: Unchanged skipping
2023/05/19 18:34:16 DEBUG : folder/RozJUBK.MDT: Unchanged skipping
2023/05/19 18:34:16 DEBUG : folder/201100003 Tesco.pdf: Unchanged skipping
2023/05/19 18:34:16 DEBUG : folder/UcOsnova.DAT: Size and modification time the same (differ by 0s, within tolerance 100ns)
2023/05/19 18:34:16 DEBUG : folder/RozPUKP.DIA: Unchanged skipping
2023/05/19 18:34:16 DEBUG : folder/PVPOJ-29161592-2019-brezen_radne_17.04.2019_Obalka.xml: Size and modification time the same (differ by 0s, within tolerance 100ns)
2023/05/19 18:34:17 DEBUG : folder/PolUcDPK.DIA: Size and modification time the same (differ by 0s, within tolerance 100ns)
2023/05/19 18:34:17 DEBUG : folder/211100002 Melting group.pdf: Unchanged skipping
2023/05/19 18:34:17 DEBUG : folder/RozJUBK.MIX: Size and modification time the same (differ by 0s, within tolerance 100ns)
2023/05/19 18:34:17 DEBUG : folder/RozPUKP.DIX: Size and modification time the same (differ by 0s, within tolerance 100ns)
2023/05/19 18:34:17 DEBUG : folder/UcOsnova.DAT: Unchanged skipping

First, I would always update rclone to the latest release (v1.62.2 as of now):

rclone selfupdate

There are always fixes and improvements.

Second, you could try increasing the number of checkers:

rclone sync -P remote:bucket/ \local\folder --transfers 8 --checkers 16

Of course encryption adds some overhead, but on modern hardware it is usually barely noticeable. Internet speed is usually the biggest bottleneck.

Unfortunately, it behaves in such a way that the rate at which files are listed gradually decreases until it finally stops completely and the synchronisation never finishes. For example, it will report 257258 of 257258 files checked, the job time keeps running, but no more files are listed, even though the actual number of files is about 280000. No active threads are visible, so increasing checkers to 16 would not help in this case.

I agree with you, but to start with I would like to rule out the obvious. There is no point debugging an old version of rclone, for example.

[screenshot of the rclone command and progress output]

I'll try running it again on the latest version

Can you post the exact command you ran? It is OK to redact bucket/folder/file names.

Does that mean it finished in 4h 15min, versus 12h before and 2.5h without crypt?

Also, what hardware are you running it on? Intel? ARM? What OS?

After about four hours it had slowed down so much that I stopped it manually, because it would have run for more than 12 hours again. Yes, I'm running it on an Intel Core i7 with 24 GB RAM, a 60/60 Mbps internet connection, and Windows 10 x64. The command is as I wrote:

rclone sync -P remote:bucket/ \10.0.0.1\storage --transfers 8

The reason I ask: from the screen snippet it looks like you are using --log-file pointing to an SMB/Samba server, 192.xxx.xxx.xxx.

Have you tried using Resource Monitor or Process Explorer to help narrow down the cause?

Yes, I write the log file via SMB. However, I restarted the sync completely without logging, and the progress looks like this:

First ten minutes: status 134000 files checked
Second ten minutes: status 163000 files checked.
From then on, the check rate gradually decreases until it basically stops.

There is nothing suspicious in Process Explorer, except that rclone's RAM consumption gradually increases; after 15 minutes it is about 600 MB.

OK, that is nice and clear.
Is the destination \local\folder or \10.0.0.1\storage - local or network?

i do not know what is going on but someone else might know the exact reason.

In the meantime, if I were to experiment, I would:

  • increase --checkers, increase --max-backlog, try --check-first, try --fast-list
  • if the source and dest are the same, no files would be transferred; try rclone copy
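Combining those experiments into one command might look like this (the flag values here are illustrative, not tuned recommendations):

rclone sync -P remote:bucket/ \local\folder --transfers 8 --checkers 16 --max-backlog 500000 --check-first --fast-list

--check-first makes rclone complete all the checks before it starts transferring, and --fast-list uses fewer (but larger) listing calls on S3 at the cost of more memory.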

Everything except the slowdown looks perfectly reasonable. I am wondering what has changed with the added crypt layer.

One thing that changes is path length; for example, from the crypt documentation:

Example: 1/12/123.txt is encrypted to p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0

Could it be possible that some paths exceed this S3 provider's limit? And that instead of failing, some checks enter a never-ending retry cycle? As retries accumulate over time, they would bring the whole sync to a halt.

But in that case I would expect to see something in the log file - does the end of the log show any retries or anything unusual?

This is what the end of the log of a running job looks like. Basically, loading more files from the S3 repository has stopped.

Transferred: 22370 / 22370, 100%
Elapsed time: 4h43m36.8s
Transferred: 55.810 GiB / 55.810 GiB, 100%, 6.978 MiB/s, ETA 0s
Checks: 253495 / 253495, 100%
Transferred: 22370 / 22370, 100%
Elapsed time: 4h43m37.3s
Transferred: 55.810 GiB / 55.810 GiB, 100%, 6.978 MiB/s, ETA 0s
Checks: 253495 / 253495, 100%
Transferred: 22370 / 22370, 100%
Elapsed time: 4h43m37.8s
Transferred: 55.810 GiB / 55.810 GiB, 100%, 6.978 MiB/s, ETA 0s
Checks: 253495 / 253495, 100%
Transferred: 22370 / 22370, 100%
Elapsed time: 4h43m38.3s
Transferred: 55.810 GiB / 55.810 GiB, 100%, 6.978 MiB/s, ETA 0s
Checks: 253495 / 253495, 100%
Transferred: 22370 / 22370, 100%
Elapsed time: 4h43m38.8s
Transferred: 55.810 GiB / 55.810 GiB, 100%, 6.978 MiB/s, ETA 0s
Checks: 253495 / 253495, 100%
Transferred: 22370 / 22370, 100%
Elapsed time: 4h43m39.3s
Transferred: 55.810 GiB / 55.810 GiB, 100%, 6.978 MiB/s, ETA 0s
Checks: 253495 / 253495, 100%
Transferred: 22370 / 22370, 100%
Elapsed time: 4h43m39.8s
Transferred: 55.810 GiB / 55.810 GiB, 100%, 6.978 MiB/s, ETA 0s
Checks: 253495 / 253495, 100%
Transferred: 22370 / 22370, 100%
Elapsed time: 4h43m40.3s
Transferred: 55.810 GiB / 55.810 GiB, 100%, 6.978 MiB/s, ETA 0s

This is what the end of the log file looked like before the encryption was enabled.

Elapsed time: 2h26m52.6s
Transferred: 56.915 GiB / 56.915 GiB, 100%, 7.129 MiB/s, ETA 0s
Checks: 278242 / 278242, 100%
Deleted: 10642 (files), 4142 (dirs)
Transferred: 21104 / 21104, 100%
Elapsed time: 2h26m52.6s

I am not going to pretend I know what is going on here.

I would try adding

--low-level-retries 1

which effectively disables retries. This is to rule out that some stuck retries are responsible.
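To check whether retries are accumulating, the debug log can also be searched for retry and error lines, for example (the log file name here is just an example):

grep -E "ERROR|low level retry|Retrying" rclone.log | tail -n 20

If retries are the cause, the count of such lines should grow as the sync slows down.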

Also, is there really no ERROR anywhere in the log file?

I use encrypted remotes with an even bigger number of files and amount of data without any issues. That tells me there is probably no obvious bug in rclone, but that you have hit some edge case.

The existing debug output is not sufficient to point towards an obvious cause.

I think we need @ncw expertise here....

Now I'll try not encrypting file and directory names to see whether the situation improves.
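For reference, disabling name encryption while keeping content encryption is controlled by the crypt remote's filename_encryption and directory_name_encryption options; a sketch based on the config above:

[crypt]
type = crypt
remote = basic:crypted
filename_encryption = off
directory_name_encryption = false
password = xxxxxxxxxxxxxxxxxx
password2 = xxxxxxxxxxxxxxxxxxxxxx

Note that changing these options means the existing encrypted names can no longer be decrypted, so the data would have to be re-uploaded.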

I think it is worth getting to the bottom of this issue. It can definitely benefit others.