What is the problem you are having with rclone?
Rclone is being used on large S3 buckets, ranging from 40M objects / 18TB of data up to 100M objects / 40TB of data.
The cost of the API calls is really high, especially if the pod gets OOMKilled and has to list everything again. But even when it doesn't restart, it is making a lot of API calls; more details below.
Run the command 'rclone version' and share the full output of the command.
1.70.2
Which cloud storage system are you using? (eg Google Drive)
From AWS to rsync.net
The command you were trying to run (eg rclone copy /tmp remote:tmp)
rclone sync --progress --size-only --stats=1m --retries=10 --inplace --sftp-set-modtime=false --transfers=15 --checkers=32 --bwlimit=70M --ignore-checksum aws:source/archive/2025 encrypted:
Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.
[aws]
type = s3
provider = AWS
env_auth = true
[rsync]
shell_type = unix
type = sftp
key_file = /root/.ssh/id_rsa
[encrypted]
type = crypt
filename_encryption = off
directory_name_encryption = false
password =
password2 =
Logs related to listed files and transfers.
This bucket has 60M objects and 22TB of data.
After running for 3 hours it has listed just over 60M objects.
After 7 hours it has listed 146M objects.
After 2 days it is still running and transferring, and it has listed 1.2B objects.
2025-07-30T18:06:44.92615341Z stdout F Elapsed time: 3h40m0.0s Transferred: 88.888 MiB / 88.888 MiB, 100%, 4.886 MiB/s, ETA 0s
2025-07-30T18:06:44.926184951Z stdout F Checks: 21035252 / 21035252, 100%, Listed 60754815
2025-07-30T21:39:44.926761392Z stdout F Elapsed time: 7h13m0.0s Transferred: 88.888 MiB / 88.888 MiB, 100%, 4.886 MiB/s, ETA 0s
2025-07-30T21:38:44.926957971Z stdout F Checks: 53819013 / 53819013, 100%, Listed 146616171
2025-08-01T14:25:09.346996156Z stdout F Elapsed time: 1d23h59m24.4s Transferred: 987.669 MiB / 987.669 MiB, 100%, 4.886 MiB/s, ETA 0s
2025-08-01T14:25:09.34699295Z stdout F Checks: 510681366 / 510681366, 100%, Listed 1246803655
When the job starts again the next day to sync, it takes ~3 hours to check all the files, as expected, but this time it lists 120M objects before it starts transferring anything.
2025-08-01T17:37:11.001953116Z stdout F Elapsed time: 3h11m0.0s Transferred: 53.866 KiB / 53.866 KiB, 100%, 0 B/s, ETA -
2025-08-01T17:37:11.001992826Z stdout F Transferred: 1 / 1, 100%
2025-08-01T17:37:11.001989446Z stdout F Checks: 50505678 / 50505678, 100%, Listed 122492912
2025-08-01T17:36:11.002086257Z stdout F Checks: 50484805 / 50484805, 100%, Listed 122409493
2025-08-01T17:35:44.33132743Z stdout F Transferred: 0 B / 53.819 KiB, 0%, 0 B/s, ETA -
I know the buckets are pretty big, but I want to know if there is a way to reduce the rechecks of the objects. The job seems to be continuously listing objects at a steady pace of ~60M every 3 hours.
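For example, would adding --fast-list to the command reduce the number of LIST calls here, or would keeping the whole listing in memory just make the OOMKill problem worse? Something like the following is what I had in mind (the same command as above with only --fast-list added; I have not tried it yet):
rclone sync --progress --size-only --stats=1m --retries=10 --inplace --sftp-set-modtime=false --transfers=15 --checkers=32 --bwlimit=70M --ignore-checksum --fast-list aws:source/archive/2025 encrypted: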
Thank you