Signature doesn't match for minio behind Cloudflare tunnel

What is the problem you are having with rclone?

I am running a MinIO instance with Docker on an x86 machine with Debian 12. Since the machine has no public IP address (only a local one), I used cloudflared to create a tunnel and bind my domain name io.example.dev to the MinIO endpoint. When I configure rclone with http://<local ip>:<port> as the endpoint, everything works (listing, uploading, etc.), but if I use https://io.example.dev as the endpoint, it always fails with a signature mismatch.
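For reference, the tunnel binding looks roughly like this (a sketch of the cloudflared config; the tunnel ID, credentials path and the 9000 port are placeholders, not my exact values):

# ~/.cloudflared/config.yml (placeholder IDs/paths/port)
tunnel: <tunnel-id>
credentials-file: /home/user/.cloudflared/<tunnel-id>.json
ingress:
  - hostname: io.example.dev
    service: http://localhost:9000   # MinIO S3 API port
  - service: http_status:404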
I don't think this is simply a Cloudflare proxy problem that mangles headers, because the MinIO client mc gives correct output against the same endpoint. Please check this (the mc alias one and the rclone remote named minio point to the same endpoint, https://io.example.dev, and use the same access key pair):

[yuki@manjaro ~]$ rclone ls minio:              
2025/04/02 21:47:01 NOTICE: Failed to ls: operation error S3: ListBuckets, https response error StatusCode: 403, RequestID: <requestid>, HostID: <hostid>, api error SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method.                                                                              
[yuki@manjaro ~]$ mc ls one    
[2025-04-02 15:42:43 CST]     0B test/      
[yuki@manjaro ~]$ mc ls one/test                  
[2025-04-02 16:40:17 CST]  33KiB STANDARD <picturename>.jpg  

Changing the endpoint to io.example.dev without https:// doesn't change the output.
By the way, exactly the same problem (ip:port works, the domain doesn't) happens on my Android phone with Round Sync (a GUI for rclone; I only mention it to show that this is probably not a problem with my local environment).

Run the command 'rclone version' and share the full output of the command.

rclone v1.69.1
- os/version: arch 25.0.0 (64 bit)
- os/kernel: 6.12.19-1-MANJARO (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.24.0
- go/linking: static
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

S3 compatible, MinIO

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone ls minio:

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

[hz]
type = webdav
url = https://<myusername>.your-storagebox.de
user = XXX
pass = XXX

[minio]
type = s3
provider = Minio
access_key_id = XXX
secret_access_key = XXX
region = us-east-1
endpoint = https://io.example.dev
disable_http2 = true

A log from the command that you were trying to run with the -vv flag

2025/04/02 22:11:34 DEBUG : rclone: Version "v1.69.1" starting with parameters ["rclone" "ls" "minio:" "-vv"]
2025/04/02 22:11:34 DEBUG : Creating backend with remote "minio:"
2025/04/02 22:11:34 DEBUG : Using config file from "/home/yuki/.config/rclone/rclone.conf"
2025/04/02 22:11:36 DEBUG : 5 go routines active
2025/04/02 22:11:36 NOTICE: Failed to ls: operation error S3: ListBuckets, https response error StatusCode: 403, RequestID: 183285C7252CB364, HostID: dd9025bab4ad464b049177c95eb6ebf374d3b3fd1af9251148b658df7ac2e3e8, api error SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method.

welcome to the forum,

for a deeper look at the api calls, you can use the --dump flags such as --dump=headers

I see, please check this:

[yuki@manjaro ~]$ rclone ls minio: -vv --dump=headers
2025/04/02 22:16:00 NOTICE: Automatically setting -vv as --dump is enabled
2025/04/02 22:16:00 DEBUG : rclone: Version "v1.69.1" starting with parameters ["rclone" "ls" "minio:" "-vv" "--dump=headers"]
2025/04/02 22:16:00 DEBUG : Creating backend with remote "minio:"
2025/04/02 22:16:00 DEBUG : Using config file from "/home/yuki/.config/rclone/rclone.conf"
2025/04/02 22:16:00 DEBUG : You have specified to dump information. Please be noted that the Accept-Encoding as shown may not be correct in the request and the response may not show Content-Encoding if the go standard libraries auto gzip encoding was in effect. In this case the body of the request will be gunzipped before showing it.
2025/04/02 22:16:00 DEBUG : You have specified to dump information. Please be noted that the Accept-Encoding as shown may not be correct in the request and the response may not show Content-Encoding if the go standard libraries auto gzip encoding was in effect. In this case the body of the request will be gunzipped before showing it.
2025/04/02 22:16:00 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2025/04/02 22:16:00 DEBUG : HTTP REQUEST (req 0xc0002f1040)
2025/04/02 22:16:00 DEBUG : GET /?x-id=ListBuckets HTTP/1.1
Host: io.example.dev
User-Agent: rclone/v1.69.1
Accept-Encoding: identity
Amz-Sdk-Invocation-Id: 6dc61547-4555-4d05-ab6e-14b1f926419e
Amz-Sdk-Request: attempt=1; max=10
Authorization: XXXX
X-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
X-Amz-Date: 20250402T141600Z

2025/04/02 22:16:00 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2025/04/02 22:16:02 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2025/04/02 22:16:02 DEBUG : HTTP RESPONSE (req 0xc0002f1040)
2025/04/02 22:16:02 DEBUG : HTTP/1.1 403 Forbidden
Content-Length: 388
Accept-Ranges: bytes
Alt-Svc: h3=":443"; ma=86400
Cf-Cache-Status: DYNAMIC
Cf-Ray: 92a0f1127aba8850-AMS
Connection: keep-alive
Content-Type: application/xml
Date: Wed, 02 Apr 2025 14:16:02 GMT
Nel: {"success_fraction":0,"report_to":"cf-nel","max_age":604800}
Report-To: {"endpoints":[{"url":"https:\/\/a.nel.cloudflare.com\/report\/v4?s=racwma9VeeaaSEKTOHQ9UgtjpgmTH2USkW1WHbe4VQpZ1Wr4GvKbsE1qeYISbW4jk5y58jYrT4lFXU9NoBB0Mkcos3h3BkxPGlvF1VSx%2FE6nvlDIaGT74y3PFBVF"}],"group":"cf-nel","max_age":604800}
Server: cloudflare
Server-Timing: cfL4;desc="?proto=TCP&rtt=4544&min_rtt=4543&rtt_var=1706&sent=5&recv=5&lost=0&retrans=0&sent_bytes=3085&recv_bytes=953&delivery_rate=954096&cwnd=195&unsent_bytes=0&cid=6e389e3f032fb01b&ts=1319&x=0"
Speculation-Rules: "/cdn-cgi/speculation"
Strict-Transport-Security: max-age=31536000; includeSubDomains
Vary: Origin
Vary: Accept-Encoding
X-Amz-Bucket-Region: us-east-1
X-Amz-Id-2: dd9025bab4ad464b049177c95eb6ebf374d3b3fd1af9251148b658df7ac2e3e8
X-Amz-Request-Id: 183286051545E7C1
X-Content-Type-Options: nosniff
X-Ratelimit-Limit: 1658
X-Ratelimit-Remaining: 1658
X-Xss-Protection: 1; mode=block

2025/04/02 22:16:02 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2025/04/02 22:16:02 DEBUG : 5 go routines active
2025/04/02 22:16:02 NOTICE: Failed to ls: operation error S3: ListBuckets, https response error StatusCode: 403, RequestID: 183286051545E7C1, HostID: dd9025bab4ad464b049177c95eb6ebf374d3b3fd1af9251148b658df7ac2e3e8, api error SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method.

There are some suggestions about Cloudflare removing/adding headers, but I am not sure how to act on them. I tried disabling caching in Cloudflare and it didn't help, and I captured traffic with tcpdump and tshark but didn't find anything useful.

might be a permission issue when listing buckets.
maybe test --s3-no flags such as --s3-no-check-bucket
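for example, something like this (other --s3-no flags such as --s3-no-head and --s3-no-head-object exist as well; whether any of them matter for a plain ls is a guess):

rclone ls minio: -vv --s3-no-check-bucket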

I am afraid the output is the same...

But I might have found where the issue is rooted:

I ran mc --debug ls one/test
and got something like this:

X-Amz-Date: 20250402T142607Z  
Authorization: AWS4-HMAC-SHA256 Credential=<accesskey>/20250402/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=xxx  

I ran rclone -vv --dump headers --dump auth ls minio:
and got this

X-Amz-Date: 20250402T142710Z  
Authorization: AWS4-HMAC-SHA256 Credential=<accesskey>/20250402/us-east-1/s3/aws4_request, SignedHeaders=accept-encoding;amz-sdk-invocation-id;amz-sdk-request;host;x-amz-content-sha256;x-amz-date, Signature=xxx  

The Authorization header is slightly different, right? rclone signs extra headers: accept-encoding;amz-sdk-invocation-id;amz-sdk-request

maybe test --s3-list-version
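for example (the flag accepts 0 for auto, 1 or 2):

rclone ls minio: -vv --s3-list-version 1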

that doesn't work either, same output :frowning:

I have the same problem with Ceph RGW and CloudFlare Tunnel.

I think Cloudflare changes the Accept-Encoding header before the request reaches the origin.
In other words, rclone sends Accept-Encoding: identity to Cloudflare, but Cloudflare sends Accept-Encoding: gzip, br to the origin.
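One rough way to confirm what actually reaches the origin (a sketch; port 8080 and the curl call are placeholders, not something I have tested here) is to temporarily point the tunnel's ingress rule at a plain listener and read the forwarded request:

# temporarily change the tunnel's ingress service to http://localhost:8080, then:
nc -l -p 8080        # or `nc -l 8080`, depending on your netcat
# from another machine, send a request through the tunnel with the header rclone uses;
# whatever nc prints is exactly what the origin receives from Cloudflare
curl -s --max-time 5 https://io.example.dev/ -H 'Accept-Encoding: identity'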

Any ideas how to fix that? Maybe we can use some configuration/page/compression rules in CloudFlare settings?


Oh yes, this header thing is very likely to be the problem.

Similar issues happen to Caddy and nginx users when they don't pass headers through properly in their config. There is an official MinIO nginx example, but I don't know how to apply the equivalent fix to a Cloudflare tunnel or to Caddy... My best workaround so far is to use the official MinIO CLI mc instead of rclone on the computer, since it doesn't seem to hit this issue, or to use ip:port directly, because plain HTTP without a reverse proxy doesn't touch the headers.
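For what it's worth, the gist of that nginx example is just to pass the original headers through untouched; a minimal sketch (the server_name and the 9000 port are placeholders) would be roughly:

server {
    listen 80;
    server_name io.example.dev;

    location / {
        # forward the client's Host and related headers unchanged, so the
        # SigV4 signature computed by the client still matches on the MinIO side
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://localhost:9000;
    }
}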

This problem actually has a long history and has been reproduced with many different S3-compatible providers, sometimes even AWS itself. I haven't collected all of those reports, but it does happen, and it shouldn't be a bug in MinIO or Cloudflare; it looks like an rclone issue, because when I use the domain name as the endpoint with the identical key pair in the Obsidian extension Remotely Save and in the official MinIO CLI, both work smoothly.

You can use a beta version of rclone like this:

rclone ls minio:testbucket --s3-sign-accept-encoding=false

or just add sign_accept_encoding = false to the rclone config.
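With that option set, the [minio] section from earlier in the thread would look something like this (same placeholder values as before):

[minio]
type = s3
provider = Minio
access_key_id = XXX
secret_access_key = XXX
region = us-east-1
endpoint = https://io.example.dev
disable_http2 = true
sign_accept_encoding = false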

Beta can be installed like this:

sudo -v ; curl https://rclone.org/install.sh | sudo bash -s beta