to "MinIO". I think that is what powers Hetzner's S3.
Or am I wrong...?
Maybe it is Ceph?
Either way, it is fantastic that you are testing it so we can make it ready before it is out of beta. From my experience with Hetzner, I would expect them to be supportive here. Unlike services like Proton, they do want people to store data in their system :)
If you have any feedback, let us know. Maybe there are some peculiarities of their setup we have to take into account, and maybe we should add Hetzner as a distinct S3 provider.
i have been testing hetzner s3 and was planning to post about it, but you beat me to it ;)
so far, everything has been working, but i have more testing to do...
here are some differences between your setup and my setup
your s3 remote is missing:
a. region
b. acl
your crypt remote is missing:
a. bucket
this is my setup
rclone config redacted
[hetzners3]
type = s3
provider = Other
access_key_id = XXX
secret_access_key = XXX
acl = private
region = fsn1
endpoint = fsn1.your-objectstorage.com
[hetzners3enc]
type = crypt
remote = hetzners3:zork
password = XXX
password2 = XXX
rclone copy d:\files\zork\file.ext hetzners3enc: --s3-upload-cutoff=0 -vv
DEBUG : rclone: Version "v1.68.0" starting with parameters ["c:\\data\\rclone\\rclone_v1.68.0.exe" "copy" "d:\\files\\zork\\file.ext" "hetzners3enc:" "--s3-upload-cutoff=0" "-vv"]
DEBUG : Creating backend with remote "d:\\files\\zork\\file.ext"
DEBUG : Using config file from "c:\\data\\rclone\\rclone.conf"
DEBUG : fs cache: adding new entry for parent of "d:\\files\\zork\\file.ext", "//?/d:/files/zork"
DEBUG : Creating backend with remote "hetzners3enc:"
DEBUG : Creating backend with remote "hetzners3:zork"
DEBUG : hetzners3: detected overridden config - adding "{Wgtlt}" suffix to name
DEBUG : fs cache: renaming cache item "hetzners3:zork" to be canonical "hetzners3{Wgtlt}:zork"
DEBUG : file.ext: Need to transfer - File not found at Destination
DEBUG : file.ext: Computing md5 hash of encrypted source
DEBUG : ahrl0n64llr77b3on1s4o6dgno: open chunk writer: started multipart upload: 2~PQqYVc7MJ82dnQ6VzgoMGbLY_phVnJK
DEBUG : file.ext: multipart upload: starting chunk 0 size 5Mi offset 0/15.949Mi
DEBUG : file.ext: multipart upload: starting chunk 1 size 5Mi offset 5Mi/15.949Mi
DEBUG : file.ext: multipart upload: starting chunk 2 size 5Mi offset 10Mi/15.949Mi
DEBUG : file.ext: multipart upload: starting chunk 3 size 972.032Ki offset 15Mi/15.949Mi
DEBUG : ahrl0n64llr77b3on1s4o6dgno: multipart upload wrote chunk 4 with 995361 bytes and etag "44be7c93b9e014bd42467850d3b88762"
DEBUG : ahrl0n64llr77b3on1s4o6dgno: multipart upload wrote chunk 2 with 5242880 bytes and etag "642b433e5ef02c5cc8dcb6d14f22ea65"
DEBUG : ahrl0n64llr77b3on1s4o6dgno: multipart upload wrote chunk 1 with 5242880 bytes and etag "30bff478863786b152cf8c7d0b9fedde"
DEBUG : ahrl0n64llr77b3on1s4o6dgno: multipart upload wrote chunk 3 with 5242880 bytes and etag "2d3380fa082fa435561658818fef7223"
DEBUG : ahrl0n64llr77b3on1s4o6dgno: multipart upload "2~PQqYVc7MJ82dnQ6VzgoMGbLY_phVnJK" finished
DEBUG : file.ext: md5 = 27918e63d3d6aa465858e3e65e5bec35 OK
INFO : file.ext: Copied (new)
INFO :
Transferred: 15.949 MiB / 15.949 MiB, 100%, 1.772 MiB/s, ETA 0s
Transferred: 1 / 1, 100%
Elapsed time: 9.5s
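as a side note, a quick way to double-check an upload like this end-to-end is rclone cryptcheck, which compares checksums through the crypt layer without downloading the plaintext (a sketch; the path and remote name are from my config above):

```shell
# verify local files against the encrypted remote
# (remote name from my config above; same test folder)
rclone cryptcheck d:\files\zork hetzners3enc: -vv
```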
so far, i am not impressed at all with hetzner s3 as compared to other S3 providers,
and also as compared to hetzner storagebox.
i have multiple storagebox accounts and, as of now, will not be using hetzner s3.
S3 does not support session tokens
S3 does not support MFA delete
S3 ingress is free, but free egress is limited and can get expensive.
object lock can only be enabled at bucket creation, not afterwards.
delete a bucket, and you cannot re-use the bucket name for 30 days.
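for reference, that object-lock limitation means the flag has to go in at creation time. with the aws cli it would look something like this (a sketch; the bucket name is hypothetical, the endpoint is the one from my config above):

```shell
# object lock must be enabled when the bucket is created;
# there is no api call to turn it on for an existing bucket
aws s3api create-bucket \
  --bucket my-locked-bucket \
  --object-lock-enabled-for-bucket \
  --endpoint-url https://fsn1.your-objectstorage.com
```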
hetzner storagebox offers sftp with checksums, smb with checksums, webdav, borg, rsync and more.
hetzner storagebox runs over zfs and has automatic and manual snapshots.
S3 = $5.60 USD/TiB/month, and that price EXCLUDES VAT.
storagebox = $3.60 USD/TiB/month, and that price includes VAT.
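that gap adds up at scale. a quick back-of-the-envelope for 10 TiB at those quoted rates (ignoring the VAT difference):

```shell
# monthly cost for 10 TiB at the quoted per-TiB rates
awk 'BEGIN {
  tib = 10
  printf "s3:         $%.2f/month\n", tib * 5.60
  printf "storagebox: $%.2f/month\n", tib * 3.60
}'
# s3:         $56.00/month
# storagebox: $36.00/month
```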
This is the main reason I want to test it, as for me the Storage Box ticks all the boxes except speed; it's not fast enough. But if S3 is even slower, then it's a non-starter.
i mostly copy veeam backup files and .7z archives from place to place.
for me, i can rent the cheapest cloud vm from hetzner, turn it into a veeam backup and replication repository, and use smb on the storagebox to store the files.
the safety of automatic zfs snapshots is very important for backup files.
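the smb part of that setup is just a standard cifs mount on the vm. something like this (a sketch; uXXXXXX is a placeholder for the storage box username, and it assumes the default "backup" share):

```shell
# mount the storage box over smb on the hetzner vm
# uXXXXXX is a placeholder for the storage box username
sudo mount -t cifs //uXXXXXX.your-storagebox.de/backup /mnt/storagebox \
  -o user=uXXXXXX,pass=SECRET,seal
```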
for speed and very low latency with S3, i store recent backup files in wasabi.
for downloads, and for veeam instant recovery, i can saturate a 1Gbps connection.
could be that hetzner s3, whilst in beta, is focused on reliability and features.
maybe when it goes live, the speeds will increase. i hope so, but doubt it.
idrive e2 is a decent overall compromise for speed and price.
plus they sponsor rclone!
however, year over year, there are major price increases without any change in features or speed.
this year it is insane, at 24%, and i wrote about it at