What is the problem you are having with rclone?
We are faced with a large data set in a source IBM COS bucket: 1.5 billion objects totalling 4.7 TB of data, which needs to be synced to a destination IBM COS bucket. An rclone instance with 660 GB of RAM does not appear to be sufficient to run the rclone sync.
Two questions:
a) Is there some sizing guidance that helps with estimating the required memory for the rclone server instance in this scenario?
b) Are there any rclone sync settings worth tweaking to reduce the memory footprint, in particular during the initialization (listing) phase before the actual transfers (GET/PUT)?
Thanks a lot!
Run the command 'rclone version' and share the full output of the command.
rclone v1.64.2
- os/version: redhat 9.2 (64 bit)
- os/kernel: 5.14.0-284.30.1.el9_2.x86_64 (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.21.3
- go/linking: static
- go/tags: none
Which cloud storage system are you using? (eg Google Drive)
IBM Cloud Object Storage - Source and Destination.
The command you were trying to run (eg rclone copy /tmp remote:tmp)
rclone --log-file=${log_dir}/rclone_${DATE}.log -P --checkers 500 --max-backlog 50000 --transfers 128 --s3-upload-concurrency 8 -v sync --checksum remote-source:bucket-source remote-destination:bucket-destination
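For rough sizing (question a), a useful back-of-envelope model is that multipart-upload buffers take about --transfers × --s3-upload-concurrency × --s3-chunk-size of RAM, and that each object held in a directory listing costs on the order of 1 KiB of heap. Both figures are assumptions drawn from general rclone guidance, not measurements; the worst case of the entire flat bucket being listed at once is also an assumption:

```python
# Back-of-envelope memory estimate for the sync command above.
# The formula and the per-object figure are assumptions based on
# general rclone guidance, not exact measurements.

MIB = 1024 * 1024
GIB = 1024 * MIB

transfers = 128            # --transfers
upload_concurrency = 8     # --s3-upload-concurrency
chunk_size = 5 * MIB       # --s3-chunk-size default (5 MiB)

# Upload buffers: each transfer can hold `upload_concurrency` chunks in flight.
upload_buffers = transfers * upload_concurrency * chunk_size
print(f"upload buffers: ~{upload_buffers / GIB:.1f} GiB")   # ~5.0 GiB

# Listing memory: assume ~1 KiB of heap per object held in memory,
# and (worst case) the whole flat bucket listed as one "directory".
objects_in_memory = 1_500_000_000
per_object = 1024
listing = objects_in_memory * per_object
print(f"listing (worst case): ~{listing / GIB:.0f} GiB")    # ~1430 GiB
```

If the per-object figure is in the right ballpark, the listing phase alone could exceed the 660 GB available, which would be consistent with the OOM happening before transfers start.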
Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.
[remote-source]
type = s3
provider = IBMCOS
access_key_id = XXX
secret_access_key = XXX
endpoint = s3.direct.eu-de.cloud-object-storage.appdomain.cloud
[remote-destination]
type = s3
provider = IBMCOS
access_key_id = XXX
secret_access_key = XXX
endpoint = s3.direct.eu-de.cloud-object-storage.appdomain.cloud
A log from the command that you were trying to run with the -vv flag
The run fails when the rclone process is killed by the kernel OOM killer, so no complete log is available.