Hetzner S3 (beta) upload error

What is the problem you are having with rclone?

I set up Hetzner S3 as a new remote (with crypt on top), but upon uploading a file I get an error.

Run the command 'rclone version' and share the full output of the command.

rclone v1.68.0

- os/version: ubuntu 22.04 (64 bit)
- os/kernel: 5.15.0-119-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.23.1
- go/linking: static
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Hetzner S3 (beta)

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy /path/to/directory/ hetzners3enc:path/to/dir -P

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

[hetzners3]
type = s3
provider = Other
access_key_id = XXX
secret_access_key = XXX
endpoint = mybucketname.fsn1.your-objectstorage.com

[hetzners3enc]
type = crypt
remote = hetzners3:
password = XXX
password2 = XXX

A log from the command that you were trying to run with the -vv flag

2024/09/30 18:02:40 DEBUG : rclone: Version "v1.68.0" starting with parameters ["rclone" "copy" "media/storage/Movies/1990 - 1999/movie/" "hetzners3enc:Movies/1990 - 1999/movie/" "-P" "-vv"]
2024/09/30 18:02:40 DEBUG : Creating backend with remote "media/storage/Movies/1990 - 1999/movie/"
2024/09/30 18:02:40 DEBUG : Using config file from "/home/dinosm/.config/rclone/rclone.conf"
2024/09/30 18:02:40 DEBUG : fs cache: renaming cache item "media/storage/Movies/1990 - 1999/movie/" to be canonical "/home/dinosm/media/storage/Movies/1990 - 1999/movie"
2024/09/30 18:02:40 DEBUG : Creating backend with remote "hetzners3enc:Movies/1990 - 1999/movie/"
2024/09/30 18:02:40 DEBUG : Creating backend with remote "hetzners3:jics193k6r3n42g9tis0oqlfvg/9v0itv9k1h920u48ubklikpk7s/6ggqqc1v7sif4qmj4e9r88m6i3rm6c9300b0q31sgbpfd8sitodg"
2024/09/30 18:02:41 DEBUG : movie - 1080p.mp4: Need to transfer - File not found at Destination
2024/09/30 18:02:41 DEBUG : movie - 2160p HDR.mkv: Need to transfer - File not found at Destination
2024/09/30 18:02:41 DEBUG : Encrypted drive 'hetzners3enc:Movies/1990 - 1999/movie/': Waiting for checks to finish
2024/09/30 18:02:41 DEBUG : Encrypted drive 'hetzners3enc:Movies/1990 - 1999/movie/': Waiting for transfers to finish
2024/09/30 18:02:41 DEBUG : movie - 1080p.mp4: Computing md5 hash of encrypted source
2024/09/30 18:02:41 DEBUG : movie - 2160p HDR.mkv: Computing md5 hash of encrypted source
2024/09/30 18:03:30 DEBUG : 5htrof47ea2479ofums852iuskbih4ceqbjrkuf3cvqasa4ai1qct64tb7o554uls9i7emc4bo4hi: open chunk writer: started multipart upload: 2~q1TpaQQT7ao6Hkaay9yBSCoyqJYcnzF
2024/09/30 18:03:30 DEBUG : movie - 1080p.mp4: multipart upload: starting chunk 0 size 5Mi offset 0/3.556Gi
2024/09/30 18:03:30 DEBUG : movie - 1080p.mp4: multipart upload: starting chunk 1 size 5Mi offset 5Mi/3.556Gi
2024/09/30 18:03:30 DEBUG : movie - 1080p.mp4: multipart upload: starting chunk 2 size 5Mi offset 10Mi/3.556Gi
2024/09/30 18:03:30 DEBUG : movie - 1080p.mp4: multipart upload: starting chunk 3 size 5Mi offset 15Mi/3.556Gi
2024/09/30 18:03:30 DEBUG : movie - 1080p.mp4: Cancelling multipart upload
2024/09/30 18:03:30 DEBUG : movie - 1080p.mp4: Failed to cancel multipart upload: failed to abort multipart upload "2~q1TpaQQT7ao6Hkaay9yBSCoyqJYcnzF": operation error S3: AbortMultipartUpload, https response error StatusCode: 404, RequestID: tx0000091f527fd24566763-0066fad9e2-8169a-fsn1-prod1-ceph3, HostID: 8169a-fsn1-prod1-ceph3-fsn1-prod1, NoSuchUpload:
2024/09/30 18:03:30 ERROR : movie - 1080p.mp4: Failed to copy: failed to upload chunk 1 with 5242880 bytes: operation error S3: UploadPart, https response error StatusCode: 404, RequestID: tx00000ddceb2bfe33b5f81-0066fad9e2-3efea-fsn1-prod1-ceph3, HostID: 3efea-fsn1-prod1-ceph3-fsn1-prod1, api error NoSuchUpload: UnknownError

I think, given that you are testing a beta service, you should report it to Hetzner to help them iron out the remaining rough edges.


I would also try to change:

provider = Other

to "Minio". I think it is what powers Hetzner's S3.

Or am I wrong...?

maybe it is Ceph?
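
If you want to experiment, it is a one-line change in the [hetzners3] section (which value is right is a guess until Hetzner confirms what they actually run):

# a guess; try Minio if Ceph does not help
provider = Ceph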

Either way it is fantastic that you are testing it so we can make it ready before it is out of beta. From my experience with Hetzner I would expect them to be supportive here. Unlike services like Proton, they do want people to store data in their system :)

😂

Does a different provider make a big difference?
I've tried Ceph, same error.

I've registered at Hetzner Forum but waiting for approval before I can post there.

Doesn't their beta service have some feedback email etc.? It is not a public beta as I understand it; you have to request access and get approved.

I guess I can email, but the welcome email specifically says here's the forum thread, feel free to come and post feedback and questions 🤷

If you have any feedback let us know. Maybe there are some peculiarities of their setup we have to take into account, and maybe we should add Hetzner as a distinct S3 provider.


i have been testing hetzner s3 and was planning to post about it, but you beat me to it 😉
so far, everything has been working but i have more testing to do...

here are some differences between your setup and my setup

your s3 remote is missing:
a. region
b. acl

your crypt remote is missing:
a. bucket

this is my setup

rclone config redacted

[hetzners3]
type = s3
provider = Other
access_key_id = XXX
secret_access_key = XXX
acl = private
region = fsn1
endpoint = fsn1.your-objectstorage.com

[hetzners3enc]
type = crypt
remote = hetzners3:zork
password = XXX
password2 = XXX

rclone copy d:\files\zork\file.ext hetzners3enc: --s3-upload-cutoff=0 -vv
DEBUG : rclone: Version "v1.68.0" starting with parameters ["c:\\data\\rclone\\rclone_v1.68.0.exe" "copy" "d:\\files\\zork\\file.ext" "hetzners3enc:" "--s3-upload-cutoff=0" "-vv"]
DEBUG : Creating backend with remote "d:\\files\\zork\\file.ext"
DEBUG : Using config file from "c:\\data\\rclone\\rclone.conf"
DEBUG : fs cache: adding new entry for parent of "d:\\files\\zork\\file.ext", "//?/d:/files/zork"
DEBUG : Creating backend with remote "hetzners3enc:"
DEBUG : Creating backend with remote "hetzners3:zork"
DEBUG : hetzners3: detected overridden config - adding "{Wgtlt}" suffix to name
DEBUG : fs cache: renaming cache item "hetzners3:zork" to be canonical "hetzners3{Wgtlt}:zork"
DEBUG : file.ext: Need to transfer - File not found at Destination
DEBUG : file.ext: Computing md5 hash of encrypted source
DEBUG : ahrl0n64llr77b3on1s4o6dgno: open chunk writer: started multipart upload: 2~PQqYVc7MJ82dnQ6VzgoMGbLY_phVnJK
DEBUG : file.ext: multipart upload: starting chunk 0 size 5Mi offset 0/15.949Mi
DEBUG : file.ext: multipart upload: starting chunk 1 size 5Mi offset 5Mi/15.949Mi
DEBUG : file.ext: multipart upload: starting chunk 2 size 5Mi offset 10Mi/15.949Mi
DEBUG : file.ext: multipart upload: starting chunk 3 size 972.032Ki offset 15Mi/15.949Mi
DEBUG : ahrl0n64llr77b3on1s4o6dgno: multipart upload wrote chunk 4 with 995361 bytes and etag "44be7c93b9e014bd42467850d3b88762"
DEBUG : ahrl0n64llr77b3on1s4o6dgno: multipart upload wrote chunk 2 with 5242880 bytes and etag "642b433e5ef02c5cc8dcb6d14f22ea65"
DEBUG : ahrl0n64llr77b3on1s4o6dgno: multipart upload wrote chunk 1 with 5242880 bytes and etag "30bff478863786b152cf8c7d0b9fedde"
DEBUG : ahrl0n64llr77b3on1s4o6dgno: multipart upload wrote chunk 3 with 5242880 bytes and etag "2d3380fa082fa435561658818fef7223"
DEBUG : ahrl0n64llr77b3on1s4o6dgno: multipart upload "2~PQqYVc7MJ82dnQ6VzgoMGbLY_phVnJK" finished
DEBUG : file.ext: md5 = 27918e63d3d6aa465858e3e65e5bec35 OK
INFO  : file.ext: Copied (new)
INFO  : 
Transferred:   	   15.949 MiB / 15.949 MiB, 100%, 1.772 MiB/s, ETA 0s
Transferred:            1 / 1, 100%
Elapsed time:         9.5s
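
fwiw, --s3-upload-cutoff=0 in that command just forces a multipart upload even for a small file, which is the exact code path that failed in your log. something like this (path is a placeholder) is a quick way to test it on your side:

rclone copy /path/to/smallfile.ext hetzners3enc: --s3-upload-cutoff=0 -vv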

so far, i am not impressed at all with hetzner s3 as compared to other S3 providers,
and also as compared to hetzner storagebox.
i have multiple storagebox accounts and, as of now, will not be using hetzner s3.

  1. S3 does not support session tokens
  2. S3 does not support MFA delete
  3. S3 ingress is free, but free egress is limited and can get expensive.
  4. object lock can only be set at bucket creation, not after.
  5. delete a bucket, and you cannot re-use the bucket name for 30 days.
  6. hetzner storagebox offers sftp with checksums, smb with checksums, webdav, borg, rsync and more.
  7. hetzner storagebox runs over zfs and has automatic and manual snapshots.
  8. S3 = $5.60USD/TiB/month, and that price EXCLUDES VAT;
     storagebox = $3.60USD/TiB/month, and that includes VAT.
  9. so far, in limited testing, S3 is super slow.

This is the main reason I want to test it, as for me the Storage Box ticks all the boxes except speed; it's not fast enough. But if S3 is even slower, then it's a non-starter.

i played around with settings such as --s3-upload-concurrency=10 --s3-chunk-size=512M
but still, super duper slow.

Transferred:   	    6.934 GiB / 6.934 GiB, 100%, 1.794 MiB/s, ETA 0s
Transferred:            1 / 1, 100%
Elapsed time:     28m46.6s
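
for reference, the full command was something like this (file name is a placeholder):

rclone copy d:\files\backup.7z hetzners3enc: --s3-upload-concurrency=10 --s3-chunk-size=512M -P -vv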

i mostly copy veeam backup files and .7z archives from place to place.
for me, i can rent the cheapest cloud vm from hetzner, turn it into a veeam backup and replication repository, and use smb on the storagebox to store the files.
the safety of automatic zfs snapshots is very important for backup files.

if you need speed and very low latency with S3: i store recent backup files in wasabi.
for downloads, and for veeam instant recovery, i can saturate a 1Gbps connection.

could be that hetzner s3, whilst in beta, is focused on reliability and features.
maybe when it goes live, the speeds will increase. i hope so, but doubt it.


idrive e2 is a decent overall compromise for speed and price.
plus they sponsor rclone!

however, year over year, there are major price increases without any change in features or speed.
this year it is insane, at 24%, and i wrote about it at

I added the region and acl settings, and a small transfer worked, but I cannot see the file anywhere except in Hetzner's web panel.

Rclone ls, rclone lsd, even mounting the remote shows nothing.

ok, good, making progress...


no idea, as you are not posting any actionable info??

works just fine for me

rclone ls hetzners3: -vv 
DEBUG : rclone: Version "v1.68.1" starting with parameters ["c:\\data\\rclone\\rclone.exe" "ls" "hetzners3:" "-vv"]
DEBUG : Creating backend with remote "hetzners3:"
DEBUG : Using config file from "c:\\data\\rclone\\rclone.conf"
 16724001 zork/u4arutgi5ht4nrrfgpm0qppbng

rclone ls hetzners3enc: -vv
DEBUG : rclone: Version "v1.68.1" starting with parameters ["c:\\data\\rclone\\rclone.exe" "ls" "hetzners3enc:" "-vv"]
DEBUG : Creating backend with remote "hetzners3enc:"
DEBUG : Using config file from "c:\\data\\rclone\\rclone.conf"
DEBUG : Creating backend with remote "hetzners3:zork"
 16719873 file.ext

me@server:~$ rclone ls hetzners3enc: -vv
2024/10/01 11:13:09 DEBUG : rclone: Version "v1.68.0" starting with parameters ["rclone" "ls" "hetzners3enc:" "-vv"]
2024/10/01 11:13:09 DEBUG : Creating backend with remote "hetzners3enc:"
2024/10/01 11:13:09 DEBUG : Using config file from "/home/me/.config/rclone/rclone.conf"
2024/10/01 11:13:09 DEBUG : Creating backend with remote "hetzners3:stuff"
2024/10/01 11:13:09 DEBUG : 5 go routines active

I can post lsd, and also try including the bucket name; it still shows no content.
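
i.e. something like this, with the bucket name from my config:

rclone ls hetzners3:stuff -vv
rclone lsd hetzners3:stuff -vv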

Here's what Hetzner panel shows:

This is the file I uploaded sitting in the directory 'stuff', but ls and lsd show nothing.

post output of rclone ls hetzners3: -vv

me@server:~$ rclone ls hetzners3: -vv
2024/10/01 15:07:35 DEBUG : rclone: Version "v1.68.0" starting with parameters ["rclone" "ls" "hetzners3:" "-vv"]
2024/10/01 15:07:35 DEBUG : Creating backend with remote "hetzners3:"
2024/10/01 15:07:35 DEBUG : Using config file from "/home/me/.config/rclone/rclone.conf"
2024/10/01 15:07:36 DEBUG : 5 go routines active

endpoint = mybucketname.fsn1.your-objectstorage.com
should be
endpoint = fsn1.your-objectstorage.com
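
i.e. the pair of remotes should look something like this (a sketch; the bucket name moves out of the endpoint and into the remote path, here on the crypt remote):

[hetzners3]
type = s3
provider = Other
access_key_id = XXX
secret_access_key = XXX
acl = private
region = fsn1
endpoint = fsn1.your-objectstorage.com

[hetzners3enc]
type = crypt
remote = hetzners3:mybucketname
password = XXX
password2 = XXX

With the bucket baked into the endpoint, the bucket/key paths rclone sends do not line up with what the server expects, which would explain both the NoSuchUpload errors and the empty listings.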