Rclone upload very slow on Hetzner cloud storage

What is the problem you are having with rclone?

Rclone upload seems to be exceptionally slow on Hetzner cloud storage. Other upload tools (like the AWS S3 CLI) do not seem to have this issue.

Apart from the upload speed, I do not see any obvious difference in the -vv logs between uploading to Hetzner and uploading to any other cloud provider. I might have to check the rclone source code to see what is happening under the hood.

Run the command 'rclone version' and share the full output of the command

rclone v1.72.0
- os/version: ubuntu 24.04 (64 bit)
- os/kernel: 6.8.0-90-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.25.4
- go/linking: static
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Hetzner, AWS, OVH

The command you were trying to run (eg rclone copy /tmp remote:tmp)

Using the AWS CLI on Hetzner or any other cloud provider, it takes around 30 seconds to upload a 1 GB file:

aws s3 --debug cp --region nbg1 --endpoint-url https://nbg1.your-objectstorage.com/ /tmp/file1.txt s3://blg-backup-unittest/file1.txt

aws-debug-hetzner-upload.txt (1.8 MB)

β†’ time: 0:32.49

Using rclone on any other cloud provider (AWS, OVH), it also takes around 30 seconds to upload a 1GB file:

rclone copy /tmp/file1.txt --s3-no-check-bucket --s3-chunk-size 300M ":s3,provider=OVHcloud,env_auth=true,region=sbg,endpoint='https://s3.sbg.io.cloud.ovh.net/':blg-backup-unittest"

β†’ time: 0:25.30

But using rclone on Hetzner, the upload speed drops and it takes a few minutes to upload the same 1 GB file:

rclone copy /tmp/file1.txt --s3-no-check-bucket --s3-chunk-size 300M ":s3,provider=Hetzner,env_auth=true,region=nbg1,endpoint='https://nbg1.your-objectstorage.com/':blg-backup-unittest"

β†’ time: 2:02.83

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

rclone config redacted
[cryptremote]
type = crypt
remote = s3remote:blg-backup-unittest

[s3remote]
type = s3
provider = Other
env_auth = true
region = sbg
endpoint = https://s3.sbg.io.cloud.ovh.net/
### Double check the config for sensitive info before posting publicly

A log from the command that you were trying to run with the -vv flag

On Hetzner:

2025/12/18 18:31:51 DEBUG : rclone: Version "v1.72.0" starting with parameters ["rclone" "copy" "-vv" "/tmp/file1.txt" "--s3-no-check-bucket" "--s3-chunk-size" "300M" ":s3,provider=Hetzner,env_auth=true,region=nbg1,endpoint='https://nbg1.your-objectstorage.com/':blg-backup-unittest"]
2025/12/18 18:31:51 DEBUG : Creating backend with remote "/tmp/file1.txt"
2025/12/18 18:31:51 DEBUG : Using config file from "/home/francois/.config/rclone/rclone.conf"
2025/12/18 18:31:51 DEBUG : fs cache: renaming child cache item "/tmp/file1.txt" to be canonical for parent "/tmp"
2025/12/18 18:31:51 DEBUG : Creating backend with remote ":s3,provider=Hetzner,env_auth=true,region=nbg1,endpoint='https://nbg1.your-objectstorage.com/':blg-backup-unittest"
2025/12/18 18:31:51 DEBUG : :s3: detected overridden config - adding "{BAmKG}" suffix to name
2025/12/18 18:31:51 DEBUG : fs cache: renaming cache item ":s3,provider=Hetzner,env_auth=true,region=nbg1,endpoint='https://nbg1.your-objectstorage.com/':blg-backup-unittest" to be canonical ":s3{BAmKG}:blg-backup-unittest"
2025/12/18 18:31:51 DEBUG : file1.txt: Need to transfer - File not found at Destination
2025/12/18 18:31:51 DEBUG : file1.txt: multi-thread copy: disabling buffering because source is local disk
2025/12/18 18:31:54 DEBUG : file1.txt: open chunk writer: started multipart upload: 2~ABqKgNg-mvq0G0mVsL_9-jMcZ4lPDrm
2025/12/18 18:31:54 DEBUG : file1.txt: multi-thread copy: using backend concurrency of 4 instead of --multi-thread-streams 4
2025/12/18 18:31:54 DEBUG : file1.txt: Starting multi-thread copy with 4 chunks of size 300Mi with 4 parallel streams
2025/12/18 18:31:54 DEBUG : file1.txt: multi-thread copy: chunk 4/4 (943718400-1000000000) size 53.674Mi starting
2025/12/18 18:31:54 DEBUG : file1.txt: multi-thread copy: chunk 2/4 (314572800-629145600) size 300Mi starting
2025/12/18 18:31:54 DEBUG : file1.txt: multi-thread copy: chunk 1/4 (0-314572800) size 300Mi starting
2025/12/18 18:31:54 DEBUG : file1.txt: multi-thread copy: chunk 3/4 (629145600-943718400) size 300Mi starting
2025/12/18 18:31:55 DEBUG : file1.txt: Seek from 56281600 to 0
2025/12/18 18:31:57 DEBUG : file1.txt: Seek from 314572800 to 0
2025/12/18 18:31:58 DEBUG : file1.txt: Seek from 314572800 to 0
2025/12/18 18:31:58 DEBUG : file1.txt: Seek from 314572800 to 0
2025/12/18 18:32:15 DEBUG : file1.txt: multipart upload wrote chunk 4 with 56281600 bytes and etag "264f266d5f4b19cc776dc440e0e917cf"
2025/12/18 18:32:15 DEBUG : file1.txt: multi-thread copy: chunk 4/4 (943718400-1000000000) size 53.674Mi finished
2025/12/18 18:32:51 INFO  : 
Transferred:   	  486.909 MiB / 953.674 MiB, 51%, 8.139 MiB/s, ETA 57s
Transferred:            0 / 1, 0%
Elapsed time:        59.9s
Transferring:
 *                                     file1.txt: 51% /953.674Mi, 8.238Mi/s, 56s

2025/12/18 18:33:48 DEBUG : file1.txt: multipart upload wrote chunk 3 with 314572800 bytes and etag "d1029ef8e75cc126bee56151d803f64f"
2025/12/18 18:33:48 DEBUG : file1.txt: multi-thread copy: chunk 3/4 (629145600-943718400) size 300Mi finished
2025/12/18 18:33:49 DEBUG : file1.txt: multipart upload wrote chunk 1 with 314572800 bytes and etag "cc52338c454b64d21efd14554839e292"
2025/12/18 18:33:49 DEBUG : file1.txt: multi-thread copy: chunk 1/4 (0-314572800) size 300Mi finished
2025/12/18 18:33:49 DEBUG : file1.txt: multipart upload wrote chunk 2 with 314572800 bytes and etag "6d446590f6c52c865d874e895f844f34"
2025/12/18 18:33:49 DEBUG : file1.txt: multi-thread copy: chunk 2/4 (314572800-629145600) size 300Mi finished
2025/12/18 18:33:49 DEBUG : file1.txt: multipart upload "2~ABqKgNg-mvq0G0mVsL_9-jMcZ4lPDrm" finished
2025/12/18 18:33:49 DEBUG : file1.txt: Finished multi-thread copy with 4 parts of size 300Mi
2025/12/18 18:33:49 DEBUG : file1.txt: size = 1000000000 OK
2025/12/18 18:33:49 DEBUG : file1.txt: md5 = 8bc5f047fb96e30435237160edba5cdf OK
2025/12/18 18:33:49 INFO  : file1.txt: Multi-thread Copied (new)
2025/12/18 18:33:49 INFO  : 
Transferred:   	  953.674 MiB / 953.674 MiB, 100%, 7.689 MiB/s, ETA 0s
Transferred:            1 / 1, 100%
Elapsed time:      1m58.3s

2025/12/18 18:33:49 DEBUG : 5 go routines active

On OVHcloud:

2025/12/19 10:07:43 DEBUG : rclone: Version "v1.72.0" starting with parameters ["rclone" "copy" "-vv" "/tmp/file1.txt" "--s3-no-check-bucket" "--s3-chunk-size" "300M" ":s3,provider=OVHcloud,env_auth=true,region=sbg,endpoint='https://s3.sbg.io.cloud.ovh.net/':blg-backup-unittest"]
2025/12/19 10:07:43 DEBUG : Creating backend with remote "/tmp/file1.txt"
2025/12/19 10:07:43 DEBUG : Using config file from "/home/francois/.config/rclone/rclone.conf"
2025/12/19 10:07:43 DEBUG : fs cache: renaming child cache item "/tmp/file1.txt" to be canonical for parent "/tmp"
2025/12/19 10:07:43 DEBUG : Creating backend with remote ":s3,provider=OVHcloud,env_auth=true,region=sbg,endpoint='https://s3.sbg.io.cloud.ovh.net/':blg-backup-unittest"
2025/12/19 10:07:43 DEBUG : :s3: detected overridden config - adding "{LXLuE}" suffix to name
2025/12/19 10:07:43 DEBUG : fs cache: renaming cache item ":s3,provider=OVHcloud,env_auth=true,region=sbg,endpoint='https://s3.sbg.io.cloud.ovh.net/':blg-backup-unittest" to be canonical ":s3{LXLuE}:blg-backup-unittest"
2025/12/19 10:07:43 DEBUG : file1.txt: Need to transfer - File not found at Destination
2025/12/19 10:07:43 DEBUG : file1.txt: multi-thread copy: disabling buffering because source is local disk
2025/12/19 10:07:45 DEBUG : file1.txt: open chunk writer: started multipart upload: ZWY4Y2RmNDUtZjMzMC00NzYyLTg3MDYtNzk2MDEwOGI2NDNm
2025/12/19 10:07:45 DEBUG : file1.txt: multi-thread copy: using backend concurrency of 4 instead of --multi-thread-streams 4
2025/12/19 10:07:45 DEBUG : file1.txt: Starting multi-thread copy with 4 chunks of size 300Mi with 4 parallel streams
2025/12/19 10:07:45 DEBUG : file1.txt: multi-thread copy: chunk 4/4 (943718400-1000000000) size 53.674Mi starting
2025/12/19 10:07:45 DEBUG : file1.txt: multi-thread copy: chunk 1/4 (0-314572800) size 300Mi starting
2025/12/19 10:07:45 DEBUG : file1.txt: multi-thread copy: chunk 2/4 (314572800-629145600) size 300Mi starting
2025/12/19 10:07:45 DEBUG : file1.txt: multi-thread copy: chunk 3/4 (629145600-943718400) size 300Mi starting
2025/12/19 10:07:46 DEBUG : file1.txt: Seek from 56281600 to 0
2025/12/19 10:07:48 DEBUG : file1.txt: Seek from 314572800 to 0
2025/12/19 10:07:49 DEBUG : file1.txt: Seek from 314572800 to 0
2025/12/19 10:07:49 DEBUG : file1.txt: Seek from 314572800 to 0
2025/12/19 10:07:51 DEBUG : file1.txt: multipart upload wrote chunk 4 with 56281600 bytes and etag "264f266d5f4b19cc776dc440e0e917cf"
2025/12/19 10:07:51 DEBUG : file1.txt: multi-thread copy: chunk 4/4 (943718400-1000000000) size 53.674Mi finished
2025/12/19 10:07:57 DEBUG : file1.txt: multipart upload wrote chunk 2 with 314572800 bytes and etag "6d446590f6c52c865d874e895f844f34"
2025/12/19 10:07:57 DEBUG : file1.txt: multi-thread copy: chunk 2/4 (314572800-629145600) size 300Mi finished
2025/12/19 10:08:05 DEBUG : file1.txt: multipart upload wrote chunk 3 with 314572800 bytes and etag "d1029ef8e75cc126bee56151d803f64f"
2025/12/19 10:08:05 DEBUG : file1.txt: multi-thread copy: chunk 3/4 (629145600-943718400) size 300Mi finished
2025/12/19 10:08:10 DEBUG : file1.txt: multipart upload wrote chunk 1 with 314572800 bytes and etag "cc52338c454b64d21efd14554839e292"
2025/12/19 10:08:10 DEBUG : file1.txt: multi-thread copy: chunk 1/4 (0-314572800) size 300Mi finished
2025/12/19 10:08:10 DEBUG : file1.txt: multipart upload "ZWY4Y2RmNDUtZjMzMC00NzYyLTg3MDYtNzk2MDEwOGI2NDNm" finished
2025/12/19 10:08:10 DEBUG : file1.txt: Finished multi-thread copy with 4 parts of size 300Mi
2025/12/19 10:08:10 DEBUG : file1.txt: size = 1000000000 OK
2025/12/19 10:08:10 DEBUG : file1.txt: md5 = 8bc5f047fb96e30435237160edba5cdf OK
2025/12/19 10:08:10 INFO  : file1.txt: Multi-thread Copied (new)
2025/12/19 10:08:10 INFO  : 
Transferred:   	  953.674 MiB / 953.674 MiB, 100%, 32.293 MiB/s, ETA 0s
Transferred:            1 / 1, 100%
Elapsed time:        27.3s

2025/12/19 10:08:10 DEBUG : 12 go routines active

Try disabling HTTP/2 with --disable-http2 (see the documentation).

Yep, that was it!
Seems like Hetzner has some issues with HTTP/2.

Thank you!

It’s Go that has issues with HTTP/2, and consequently rclone, since it is built on Go’s HTTP stack.

Relevant Links:

I see.

I was too focused on Hetzner (since it worked fine with other providers) and did not look into other known issues that could cause slow rclone uploads. My bad.

Thanks for the links, I will have a look :)

Now I’m curious whether I should disable HTTP/2 for my rclone mount to improve performance. How would one check this?

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.