What is the problem you are having with rclone?
I'm trying to upload a 3 GiB file to S3 with the chunk size set to 500 MiB.
It starts blazing fast, but after the 4th chunk completes it stops for several minutes before uploading the next one. There are more pauses later on as well.
This is fully reproducible on my side; I ran it several times with different params, and it always behaves the same.
If I change the chunk size to another large value, like 800 MiB, it likewise uploads the first 4 chunks fast and then stalls. It does not happen with small chunks (like 5 or 10 MiB), although I didn't wait for the full 3 GiB to upload that way, as it takes a long time.
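One thing I noticed while writing this up (my own back-of-envelope arithmetic, not anything rclone reported): 4 chunks of 500 MiB land almost exactly at the 1.953 GiB offset where the log below stalls, and 4 also happens to match the default --s3-upload-concurrency, if I'm reading the docs right:

# my arithmetic, not rclone output: where the stall point should land
echo $((4 * 500))                # 2000 MiB handed off after 4 chunks
echo "scale=3; 2000/1024" | bc   # 1.953 GiB, the offset where chunk 5 starts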
Run the command 'rclone version' and share the full output of the command.
rclone v1.57.0
- os/version: darwin 12.2 (64 bit)
- os/kernel: 21.3.0 (x86_64)
- os/type: darwin
- os/arch: amd64
- go/version: go1.17.2
- go/linking: dynamic
- go/tags: none
Which cloud storage system are you using? (eg Google Drive)
AWS S3
The command you were trying to run (eg rclone copy /tmp remote:tmp)
I tried to use rcat first:
tar -vzcf - "/my-dir" | rclone rcat -P -vvv --s3-chunk-size 500Mi "backup:my-bucket/my-dir"
I thought the problem might be caused by rcat, so I created an archive /to-upload/my-dir.tar.gz first and ran the upload with copy:
rclone copy -P -vvv --s3-chunk-size 500Mi "/to-upload" "backup:my-bucket/my-dir"
The same behavior with both commands.
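For reference, the small-chunk run mentioned above was essentially the same command with only the chunk size changed, roughly:

# same command, smaller chunks; with this the long pauses did not appear
rclone copy -P -vvv --s3-chunk-size 10Mi "/to-upload" "backup:my-bucket/my-dir"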
The rclone config contents with secrets removed.
[backup]
type = s3
provider = aws
env_auth = true
acl = private
region = eu-west-1
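(In case it matters for reproducing: as far as I can tell from the docs, the same setting could also be pinned in the config rather than passed per command, by adding a line like the one below to the [backup] section. All the runs here used the command-line flag, though.)

# hypothetical config equivalent of --s3-chunk-size 500Mi
chunk_size = 500Mi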
A log from the command with the -vv flag
2022/02/06 22:47:15 DEBUG : rclone: Version "v1.57.0" starting with parameters ["rclone" "copy" "-P" "-vvv" "--s3-chunk-size" "500Mi" "--log-file" "/tmp/log" "/to-upload" "backup:my-bucket/my-dir"]
2022/02/06 22:47:15 DEBUG : Creating backend with remote "/to-upload/"
2022/02/06 22:47:15 DEBUG : Using config file from "/Users/mradzikowski/.config/rclone/rclone.conf"
2022/02/06 22:47:15 DEBUG : Creating backend with remote "backup:my-bucket/my-dir"
2022/02/06 22:47:15 DEBUG : backup: detected overridden config - adding "{E03ez}" suffix to name
2022/02/06 22:47:16 DEBUG : fs cache: renaming cache item "backup:my-bucket/my-dir" to be canonical "backup{E03ez}:my-bucket/my-dir"
2022/02/06 22:47:16 DEBUG : S3 bucket my-bucket path my-dir: Waiting for checks to finish
2022/02/06 22:47:16 DEBUG : S3 bucket my-bucket path my-dir: Waiting for transfers to finish
2022/02/06 22:47:21 DEBUG : my-dir.tar.gz: multipart upload starting chunk 1 size 500Mi offset 0/3.130Gi
2022/02/06 22:47:23 DEBUG : my-dir.tar.gz: multipart upload starting chunk 2 size 500Mi offset 500Mi/3.130Gi
2022/02/06 22:47:24 DEBUG : my-dir.tar.gz: multipart upload starting chunk 3 size 500Mi offset 1000Mi/3.130Gi
2022/02/06 22:47:25 DEBUG : my-dir.tar.gz: multipart upload starting chunk 4 size 500Mi offset 1.465Gi/3.130Gi
2022/02/06 22:50:41 DEBUG : my-dir.tar.gz: multipart upload starting chunk 5 size 500Mi offset 1.953Gi/3.130Gi
2022/02/06 22:52:28 DEBUG : my-dir.tar.gz: multipart upload starting chunk 6 size 500Mi offset 2.441Gi/3.130Gi
2022/02/06 22:54:10 DEBUG : my-dir.tar.gz: multipart upload starting chunk 7 size 204.785Mi offset 2.930Gi/3.130Gi
2022/02/06 22:56:11 DEBUG : my-dir.tar.gz: md5 = f37768d920ee7ed98adc1ca458636a34 OK
2022/02/06 22:56:11 INFO : my-dir.tar.gz: Copied (new)
2022/02/06 22:56:11 INFO :
Transferred: 3.130 GiB / 3.130 GiB, 100%, 5.617 KiB/s, ETA 0s
Transferred: 1 / 1, 100%
Elapsed time: 8m55.6s
2022/02/06 22:56:11 DEBUG : 8 go routines active
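Pulling the timestamps out of the log above, the gaps between chunk starts are (my own arithmetic):

chunk 1 -> 2: 2s
chunk 2 -> 3: 1s
chunk 3 -> 4: 1s
chunk 4 -> 5: 3m16s
chunk 5 -> 6: 1m47s
chunk 6 -> 7: 1m42s
chunk 7 -> md5 OK: 2m1s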
As you can see, the first 4 chunks were uploaded quickly. The progress then showed 1.953 GiB / 1.953 GiB transferred, and it just waited for a few minutes before starting the next chunk.
The logs don't show anything happening during that time.
Any thoughts on this? At the very least, how can I debug it further? Or is there some other config param I should try?
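In case it helps, these are the variations I'm planning to try next (flag names taken from the docs; I haven't verified yet that they actually change anything):

# reduce how many chunks are uploaded simultaneously (the docs say the default is 4)
rclone copy -P -vvv --s3-chunk-size 500Mi --s3-upload-concurrency 1 "/to-upload" "backup:my-bucket/my-dir"

# dump HTTP headers to see whether any requests are in flight during the pauses
rclone copy -P -vvv --dump headers --s3-chunk-size 500Mi "/to-upload" "backup:my-bucket/my-dir"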