Hello there,
I am trying to back up files from an OVH Object Storage container to an AWS S3 bucket using the rclone copy command.
The first execution worked fine, but if I run it again it slowly consumes all the available memory (28 GB total).
I have around 2M objects totalling 94 GB, and they don't change much.
Most of the files checked by rclone are skipped, but it looks like the checking itself is what consumes the memory.
Here is the command I used:
rclone copy swift:... aws:... \
--transfers=16 \
--checkers=32 \
--update \
--size-only \
--s3-chunk-size=64M \
--s3-upload-concurrency=10 \
--s3-disable-checksum \
--fast-list \
--log-level=ERROR \
-P
I tried with lower values; it is slower but still consumes just as much memory. Same with the default --s3-chunk-size. The flags I lowered were (full command sketched after the list below):
--transfers=4 \
--checkers=8 \
--s3-chunk-size=32M \
--s3-upload-concurrency=4 \
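For completeness, the reduced-settings run looked roughly like this (same elided remotes as above; a sketch from memory, with the other flags kept unchanged):
rclone copy swift:... aws:... \
--transfers=4 \
--checkers=8 \
--update \
--size-only \
--s3-chunk-size=32M \
--s3-upload-concurrency=4 \
--s3-disable-checksum \
--fast-list \
--log-level=ERROR \
-P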
I also tried without --fast-list and with --disable ListR, but the result was the same once again.
According to the documentation, --fast-list could be a reason for memory usage this high, but apparently --disable ListR didn't help either.
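For reference, the variant without ListR was roughly this (again a sketch, same elided remotes):
rclone copy swift:... aws:... \
--transfers=4 \
--checkers=8 \
--update \
--size-only \
--disable ListR \
--log-level=ERROR \
-P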
As for numbers, I forgot to note them for the first run, but with the lower flag values I stopped at around 1M checked files for 1,500 transferred (100 MiB).
I also tried with rclone sync, but it behaved pretty much the same.
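The sync attempt was essentially the same invocation with copy swapped for sync, e.g.:
rclone sync swift:... aws:... --transfers=4 --checkers=8 --update --size-only -P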
Is there a way to limit the memory used, or anything else that could help in this case?
Result of the rclone version command:
rclone v1.69.3
- os/version: ubuntu 22.04 (64 bit)
- os/kernel: 5.15.0-119-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.24.3
- go/linking: static
- go/tags: none
Result of the rclone config redacted command (redactions are mostly credentials):
[aws]
type = s3
provider = AWS
env_auth = true
access_key_id = XXX
secret_access_key = XXX
region = XXX
location_constraint = XXX
acl = private
[swift]
type = swift
user = XXX
key = XXX
user_id = XXX
auth = https://auth.cloud.ovh.net/v3
tenant = XXX
tenant_id = XXX
region = XXX
domain = XXX
auth_version = 3
endpoint_type = admin