R2 auth broken for API keys locked to a bucket

Repro steps

  • Create a Cloudflare R2 account.
  • Create an API keypair with RW access, locked to a particular bucket.
  • Create a config in rclone for R2 (a sketch is included below).
  • Run the following to copy a file to the R2 bucket root:
rclone copy file.txt r2config:
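
For reference, a minimal rclone.conf stanza along these lines (account, keys, and bucket are placeholders, not the real values) looks roughly like:

[r2config]
type = s3
provider = Cloudflare
access_key_id = <key id>
secret_access_key = <secret key>
region = auto
# the endpoint variants discussed below differ only in whether /<bucket> is appended
endpoint = https://<account>.r2.cloudflarestorage.com/<bucket>
acl = private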

Depending on the endpoint you've specified in rclone.conf, you'll get mixed results. If you include in the endpoint the bucket your API keypair is locked to, i.e. endpoint = https://<account>.r2.cloudflarestorage.com/<bucket>, the transfer works as long as you copy to a subdirectory, like this:

rclone copy file.txt r2config:/subdir

However, if you don't specify a subdirectory, you get the following error:

rclone copy file.txt r2config:
- minimum field size of 1, HeadObjectInput.Key.

If you remove the bucket from the endpoint, authorization fails entirely; it doesn't matter whether you append the bucket to the copy destination. This is probably expected S3 behavior, just noting it here for posterity.

2023/08/09 09:19:50 Failed to copy: Forbidden: Forbidden
        status code: 403, request id: , host id:

Are there any workarounds for transferring to the R2 bucket root using API keys locked to a particular bucket? Is this a bug, or am I missing something?

Thanks!

-M

More details:

If I remove the bucket name from the endpoint like this:

#endpoint = https://<account>.r2.cloudflarestorage.com/protected-bucket
endpoint = https://<account>.r2.cloudflarestorage.com/

Read requests work.

And then, if I include the bucket name in a tree request:

rclone tree r2config:protected-bucket
/
└── RELEASES

0 directories, 1 files

Writes, however, do not. The tree request above succeeds, but a copy fails:

rclone copy file.txt r2config:protected-bucket
2023/08/09 09:28:56 Failed to copy: AccessDenied: Access Denied
        status code: 403, request id: , host id:

Welcome to the forum,

Try:

rclone copy file.txt r2config:protected-bucket --s3-no-check-bucket

rclone tries to check that the bucket exists (and create it if missing) before the file copy, and that bucket check is the cause of the error. --s3-no-check-bucket prevents rclone from doing that check.
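
To sanity-check before writing anything, you could also do a dry run first, for example:

rclone copy file.txt r2config:protected-bucket --s3-no-check-bucket -vv --dry-run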

In this example, the locked bucket is zork:
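
A trace like the one below can be captured with rclone's debug flags, for example something like this (the remote name here is a placeholder):

rclone copy file.ext remote:zork -vv --dump headers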

2023/08/09 11:53:24 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2023/08/09 11:53:24 DEBUG : file.ext: Need to transfer - File not found at Destination
2023/08/09 11:53:24 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2023/08/09 11:53:24 DEBUG : HTTP REQUEST (req 0xc000b06a00)
2023/08/09 11:53:24 DEBUG : PUT /zork HTTP/1.1
Host: redacted.r2.cloudflarestorage.com
User-Agent: rclone/v1.63.0
Content-Length: 148
Authorization: XXXX
X-Amz-Acl: private
X-Amz-Content-Sha256: ff3d8073a0d2d5772934f4998496695fa231028bf5de7412e9525b08e4463062
X-Amz-Date: 20230809T155324Z
Accept-Encoding: gzip

2023/08/09 11:53:24 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2023/08/09 11:53:24 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2023/08/09 11:53:24 DEBUG : HTTP RESPONSE (req 0xc000b06a00)
2023/08/09 11:53:24 DEBUG : HTTP/1.1 403 Forbidden
Transfer-Encoding: chunked
Cf-Ray: 7f412bf91bdf42ab-EWR
Connection: keep-alive
Content-Type: application/xml
Date: Wed, 09 Aug 2023 15:53:24 GMT
Server: cloudflare
Vary: Accept-Encoding

Maybe the rclone docs need a tweak?

rclone is designed to work like this: you always need to put the bucket on the command line. You can use the alias backend if you want to use it without.

If that works, then you can hardcode it into the config file with no_check_bucket = true:

[r2config_remote]
type = s3
provider = Cloudflare
access_key_id = redacted
secret_access_key = redacted
region = auto
endpoint = https://redacted.r2.cloudflarestorage.com
acl = private
no_check_bucket = true

And you can create an alias remote:

[r2config]
type = alias
remote = r2config_remote:protected-bucket

Then run commands against that alias remote:

rclone copy file.txt r2config: -vv --dry-run

Excellent, this did the trick! After adding no_check_bucket = true to the config, I can transfer without issue.

This command:

rclone -vv copy myfile.txt r2config:protected-bucket

Results in a successful transfer. Cloudflare has official docs for R2 + rclone; they should probably note this there.

Thanks for the quick reply! :)

Sure, good to know this is not an rclone bug.

Could be that the policy for the locked bucket does not include "Action": "s3:ListAllMyBuckets", which makes sense, and it's good that Cloudflare does that.

FWIW, whenever I create a bucket on an S3 provider, I never add "Action": "s3:ListAllMyBuckets", so I always use no_check_bucket = true.
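
As a rough illustration, on an AWS-style S3 provider (not R2's token UI, and with a made-up bucket name), a bucket-scoped policy without s3:ListAllMyBuckets might look something like:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::protected-bucket",
        "arn:aws:s3:::protected-bucket/*"
      ]
    }
  ]
}

Object reads/writes and in-bucket listing are allowed, but account-level calls such as ListAllMyBuckets and bucket creation are implicitly denied, which is why the pre-copy bucket check fails unless no_check_bucket = true is set.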
