What is the problem you are having with rclone?
I use `rclone copy` to produce daily (top-off) backups of an AWS S3 bucket into another AWS S3 bucket. The source bucket contains 1.8M objects of between 1 KB and 10 MB each (approx. 400 GB in total), and gains around 25K new objects per day. We previously used `rclone copy remote:bucket-01 remote:bucket-02 --checksum`, which, for a top-off backup, terminates successfully after approx. 2 hours on our machine. To reduce the number of List requests (we have one directory per object), we are experimenting with adding `--fast-list` to the command.

Our problem: even after 4 hours of execution, rclone does not start to transfer objects. We tried to debug with `--dump headers`: for the first 5 minutes, rclone sends LIST requests to both bucket-01 and bucket-02, and from the markers in the query strings of those requests I can deduce that both buckets are fully listed within those first 5 minutes. After the listing completes, `--dump headers` shows no further requests at all. Memory usage on my machine grows to 3.3 GB (of 4 GB) during the listing, which is roughly consistent with the expected ~1 KB of memory per listed object when using `--fast-list`. After the 5 minutes, the CPU load stays at a constant 100%, but the logs show no output other than the progress update each minute. I also ran the command with a small number of objects (10 in the source bucket, 5 in the destination bucket), and that run terminated without problems.

What is rclone computing after the listing? What is causing the 100% CPU load after the listing? Is there a way to speed this up, or do I need a better machine?
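For reference, here is the back-of-the-envelope calculation behind the "more or less consistent" claim above, as a small sketch. It assumes the destination bucket holds roughly the same 1.8M objects as the source (plausible, since it is a daily copy of it) and that rclone needs about 1 KiB of memory per listed object with `--fast-list`:

```python
# Rough estimate of --fast-list memory use for this setup.
# Assumptions: ~1.8M objects in each bucket, ~1 KiB per listed object.
objects_per_bucket = 1_800_000
bytes_per_object = 1024       # ~1 KiB per object held in memory
buckets = 2                   # both source and destination are fully listed

total_bytes = objects_per_bucket * bytes_per_object * buckets
print(f"{total_bytes / 2**30:.1f} GiB")  # → 3.4 GiB
```

That lands very close to the 3.3 GB actually observed during the listing phase, which is why the memory growth itself does not look surprising.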
Run the command 'rclone version' and share the full output of the command.
rclone v1.57.0
- os/version: debian 11.2 (64 bit)
- os/kernel: 5.10.0-8-amd64 (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.17.2
- go/linking: static
- go/tags: none
Which cloud storage system are you using? (eg Google Drive)
AWS S3
The command you were trying to run (eg `rclone copy /tmp remote:tmp`)
rclone copy remote:bucket-01 remote:bucket-02 --fast-list --checksum -vv
The rclone config contents with secrets removed.
[remote]
type = s3
provider = AWS
env_auth = false
access_key_id = XXX
secret_access_key = XXX
region = eu-central-1
endpoint =
location_constraint =
acl =
server_side_encryption =
storage_class =
A log from the command with the `-vv` flag
rclone copy remote:bucket-01 remote:bucket-02 --fast-list --checksum -vv
2022/05/20 15:37:47 DEBUG : rclone: Version "v1.57.0" starting with parameters ["rclone" "copy" "remote:bucket-01" "remote:bucket-02" "--fast-list" "--checksum" "-vv"]
2022/05/20 15:37:47 DEBUG : Creating backend with remote "remote:bucket-01"
2022/05/20 15:37:47 DEBUG : Using config file from "/home/XXX/.config/rclone/rclone.conf"
2022/05/20 15:37:47 DEBUG : Creating backend with remote "remote:bucket-02"
2022/05/20 15:38:47 INFO :
Transferred: 0 B / 0 B, -, 0 B/s, ETA -
Elapsed time: 1m0.0s
2022/05/20 15:39:47 INFO :
Transferred: 0 B / 0 B, -, 0 B/s, ETA -
Elapsed time: 2m0.0s
2022/05/20 15:40:47 INFO :
Transferred: 0 B / 0 B, -, 0 B/s, ETA -
Elapsed time: 3m0.0s
2022/05/20 15:41:47 INFO :
Transferred: 0 B / 0 B, -, 0 B/s, ETA -
Elapsed time: 4m0.0s
...
(the status message keeps coming, the longest I have waited is 4 hours)