S3: Rclone v1.52.0 or later: permission denied

What is the problem you are having with rclone?

rclone copy to Amazon S3 buckets fails with versions after 1.52.0 ("Failed to copy: AccessDenied: Access Denied"); with earlier versions it works.

What is your rclone version (output from rclone version)

rclone v1.53.4
- os/arch: linux/amd64
- go version: go1.15.6

(I've tried with other versions)

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Linux amd64 (Debian 10.7, but using the rclone binary from the .zip download)

Which cloud storage system are you using? (eg Google Drive)

Amazon S3

The command you were trying to run (eg rclone copy /tmp remote:tmp)

./rclone copy ~/test10.txt carlesprova-2021-01:carlesprova-2021-01

The rclone config contents with secrets removed.

[carlesprova-2021-01]
type = s3
provider = AWS
env_auth = false
access_key_id = xxxx
secret_access_key = xxxxx
region = eu-north-1
acl = private

A log from the command with the -vv flag

carles@pinux:~/Baixades/rclone-v1.53.4-linux-amd64$ ./rclone -vv copy ~/test10.txt carlesprova-2021-01:carlesprova-2021-01
2021/01/29 13:48:37 DEBUG : rclone: Version "v1.53.4" starting with parameters ["./rclone" "-vv" "copy" "/home/carles/test10.txt" "carlesprova-2021-01:carlesprova-2021-01"]
2021/01/29 13:48:37 DEBUG : Creating backend with remote "/home/carles/test10.txt"
2021/01/29 13:48:37 DEBUG : Using config file from "/home/carles/.rclone.conf"
2021/01/29 13:48:37 DEBUG : fs cache: adding new entry for parent of "/home/carles/test10.txt", "/home/carles"
2021/01/29 13:48:37 DEBUG : Creating backend with remote "carlesprova-2021-01:carlesprova-2021-01"
2021/01/29 13:48:38 DEBUG : test10.txt: Need to transfer - File not found at Destination
2021/01/29 13:48:38 ERROR : test10.txt: Failed to copy: AccessDenied: Access Denied
	status code: 403, request id: 562686156C186B53, host id: 89qbqnyK7SugFY/zvplYqIZNfntW96HNyAm6JaWlZu++UBBtfv9VfGgDdcbLfTGpyARQcoNPriI=
2021/01/29 13:48:38 ERROR : Attempt 1/3 failed with 1 errors and: AccessDenied: Access Denied
	status code: 403, request id: 562686156C186B53, host id: 89qbqnyK7SugFY/zvplYqIZNfntW96HNyAm6JaWlZu++UBBtfv9VfGgDdcbLfTGpyARQcoNPriI=
2021/01/29 13:48:38 DEBUG : test10.txt: Need to transfer - File not found at Destination
2021/01/29 13:48:38 ERROR : test10.txt: Failed to copy: AccessDenied: Access Denied
	status code: 403, request id: 7859FBA954F614E1, host id: g7/ReKc+Z/YpkHe94T1VUz6af3cLQCIlgDhLpRit+d3WFFUK52YVwYwmBm1mSAxCABnfslY12G0=
2021/01/29 13:48:38 ERROR : Attempt 2/3 failed with 1 errors and: AccessDenied: Access Denied
	status code: 403, request id: 7859FBA954F614E1, host id: g7/ReKc+Z/YpkHe94T1VUz6af3cLQCIlgDhLpRit+d3WFFUK52YVwYwmBm1mSAxCABnfslY12G0=
2021/01/29 13:48:38 DEBUG : test10.txt: Need to transfer - File not found at Destination
2021/01/29 13:48:39 ERROR : test10.txt: Failed to copy: AccessDenied: Access Denied
	status code: 403, request id: 742F482C936BFEA3, host id: AH/zoC/4Q19P1Sm1xGhZzaeqGafvozCmiVSCaEm1IjjLrN+Fh1tnxO0XFYIC+ta8zNUKljhwewc=
2021/01/29 13:48:39 ERROR : Attempt 3/3 failed with 1 errors and: AccessDenied: Access Denied
	status code: 403, request id: 742F482C936BFEA3, host id: AH/zoC/4Q19P1Sm1xGhZzaeqGafvozCmiVSCaEm1IjjLrN+Fh1tnxO0XFYIC+ta8zNUKljhwewc=
2021/01/29 13:48:39 INFO  : 
Transferred:   	         0 / 0 Bytes, -, 0 Bytes/s, ETA -
Errors:                 1 (retrying may help)
Elapsed time:         1.9s

2021/01/29 13:48:39 DEBUG : 4 go routines active
2021/01/29 13:48:39 Failed to copy: AccessDenied: Access Denied
	status code: 403, request id: 742F482C936BFEA3, host id: AH/zoC/4Q19P1Sm1xGhZzaeqGafvozCmiVSCaEm1IjjLrN+Fh1tnxO0XFYIC+ta8zNUKljhwewc=

But if I use an older rclone:

carles@pinux:~$ rclone --version
rclone v1.49.5
- os/arch: linux/amd64
- go version: go1.12.10
carles@pinux:~$ rclone -vv copy ~/test10.txt carlesprova-2021-01:carlesprova-2021-01
2021/01/29 13:49:11 DEBUG : rclone: Version "v1.49.5" starting with parameters ["rclone" "-vv" "copy" "/home/carles/test10.txt" "carlesprova-2021-01:carlesprova-2021-01"]
2021/01/29 13:49:11 DEBUG : Using config file from "/home/carles/.rclone.conf"
2021/01/29 13:49:12 DEBUG : test10.txt: Couldn't find file - need to transfer
2021/01/29 13:49:13 DEBUG : test10.txt: MD5 = 2b0055084a5941fe0333e9b8b9e67b94 OK
2021/01/29 13:49:13 INFO  : test10.txt: Copied (new)
2021/01/29 13:49:13 INFO  : 
Transferred:   	         7 / 7 Bytes, 100%, 13 Bytes/s, ETA 0s
Errors:                 0
Checks:                 0 / 0, -
Transferred:            1 / 1, 100%
Elapsed time:       500ms

2021/01/29 13:49:13 DEBUG : 5 go routines active
2021/01/29 13:49:13 DEBUG : rclone: Version "v1.49.5" finishing with parameters ["rclone" "-vv" "copy" "/home/carles/test10.txt" "carlesprova-2021-01:carlesprova-2021-01"]

I've checked the headers, and with the older rclone (the one that works) I see differences in the Host: HTTP header and in the PUT path. I'm not sure if this is related to the problem (or whether we have something "wrong" on the Amazon side):

It works:

2021/01/29 12:19:49 DEBUG : PUT /spi-carlesprova/test2.txt?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAWRAEZVVOZDBBAV6Y%2F20210129%2Feu-north-1%2Fs3%2Faws4_request&X-Amz-Date=20210129T121949Z&X-Amz-Expires=900&X-Amz-SignedHeaders=content-md5%3Bcontent-type%3Bhost%3Bx-amz-acl%3Bx-amz-meta-mtime&X-Amz-Signature=777d060abfca8c75dec0c9aa3a70e8bce464224f54f90d214324dd8fbe781d7a HTTP/1.1
Host: s3.eu-north-1.amazonaws.com
User-Agent: rclone/v1.49.5
Content-Length: 6
content-md5: EmqKUbnRu9B/3cZYGaVCww==
content-type: text/plain; charset=utf-8
x-amz-acl: private
x-amz-meta-mtime: 1611922784.352533738
Accept-Encoding: gzip

It fails:

2021/01/29 13:51:58 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2021/01/29 13:51:58 DEBUG : test11.txt: Need to transfer - File not found at Destination
2021/01/29 13:51:58 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2021/01/29 13:51:58 DEBUG : HTTP REQUEST (req 0xc000132100)
2021/01/29 13:51:58 DEBUG : PUT / HTTP/1.1
Host: carlesprova-2021-01.s3.eu-north-1.amazonaws.com
User-Agent: rclone/v1.53.4
Content-Length: 154
Authorization: XXXX
X-Amz-Acl: private
X-Amz-Content-Sha256: d1c1b6593fbf2b2c36a9d8d76d668024b53ffbc58142f15314be66a7dfbd95c5
X-Amz-Date: 20210129T135158Z
Accept-Encoding: gzip

2021/01/29 13:51:58 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2021/01/29 13:51:58 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2021/01/29 13:51:58 DEBUG : HTTP RESPONSE (req 0xc000132100)
2021/01/29 13:51:58 DEBUG : HTTP/1.1 403 Forbidden
Transfer-Encoding: chunked
Content-Type: application/xml
Date: Fri, 29 Jan 2021 13:52:00 GMT
Server: AmazonS3
X-Amz-Id-2: FMJv/J8XnO4Af7KvbwlBbq4ebMpIQHv7+sHnC1c6ADHXdQ9kz8AO0RPEB2mgmaWGqgjjrIilJJk=
X-Amz-Request-Id: BKAQ9Q5HAYBQ5V8W

The Amazon configuration is the same. I created the bucket today. If there is anything on the Amazon side that you would like to know (or that I should include in the policy) please let me know.

I guess it is a problem on my side, not in rclone, or I would have found some information already :slight_smile: Feel free to point me to any resources or other threads/issues.

Thank you!

Try adding no_check_bucket = true to your config, or use the --s3-no-check-bucket flag.
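
For example, with the config from above (only the last line is new):

[carlesprova-2021-01]
type = s3
provider = AWS
env_auth = false
access_key_id = xxxx
secret_access_key = xxxxx
region = eu-north-1
acl = private
no_check_bucket = true

or, equivalently, on the command line:

./rclone copy ~/test10.txt carlesprova-2021-01:carlesprova-2021-01 --s3-no-check-bucket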

Thank you very much, once more! :slight_smile:

It worked! I see that rclone then puts the credentials, etc. in the PUT path instead of in a separate HTTP header.

Are we missing some permissions on the Amazon S3 side that rclone expects? I had the same permissions as on other buckets, but the other buckets were created a long time ago (probably before the Amazon S3 path changes?)

Thank you!

What happened is that I fixed a bug in the s3 backend which wasn't reporting errors on bucket creation properly.

This then surfaced the fact that people like yourself had been depending on that bug!

The no_check_bucket = true means that rclone won't attempt to see if the bucket really exists or try to create it.

Putting the credentials in the PUT path (a pre-signed request) is done to work around a limitation of the SDK, but it shouldn't make any difference.

If you don't want to use that flag, then rclone needs to be able to create buckets; if you don't want to grant that permission, then no_check_bucket = true is the workaround.
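
For reference, granting the creation permission instead would look something like the IAM policy sketch below. This is only an illustration, not an official minimal policy for rclone: the bucket name is taken from the config above, and the exact set of actions needed depends on what else you do with the remote:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ObjectAccessSketch",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::carlesprova-2021-01/*"
    },
    {
      "Sid": "BucketAccessSketch",
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:CreateBucket"],
      "Resource": "arn:aws:s3:::carlesprova-2021-01"
    }
  ]
}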

Do you know what else confused me? (rclone v1.53.3) We have this in a cron job:

rclone copy an_s3_no_aws_configuration://a_bucket_in_this_s3_no_aws/ an_aws_configuration:an_s3_bucket/

This works well. Then I do:

rclone copy test01.txt an_aws_configuration:an_s3_bucket/

It fails with the 403. I somehow assumed that rclone had nothing to do with yesterday's problem because a similar case was working. I guess it doesn't try to create the destination bucket when copying between buckets.

I've read the --dump requests output again and I see what you mean about what rclone is trying to do. I don't know why rclone tries to create the bucket before using it (for us the buckets are very static and should only be created via an admin, with tagging, etc...). I thought it would complain if the bucket did not exist, so the user could create it. I will get used to it, but I thought that bucket creation should be done explicitly (via mkdir). I guess that other workflows would prefer to have bucket creation implicit in the "copy" (but then shouldn't copy-between-buckets do it as well?).

Thanks very much! All solved! :slight_smile:

:slight_smile: I think of that cartoon often when I fix bugs!

If rclone tries to list the bucket and it is successful, then it knows the bucket exists and doesn't need to create it.

That is because rclone didn't need to list the bucket to see if test01.txt existed, so it assumes the bucket may not exist and tries to create it.

In general rclone creates directories as required without asking. So if you do rclone copy files/ googledrive:dir and rclone copy files/ s3:bucket, both will work even if dir and bucket don't exist. Rclone treats buckets as another kind of directory... This is maybe a mistake, but it is probably too late to change now :slight_smile:
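
If you prefer bucket creation to stay explicit, one workable pattern (a sketch with hypothetical remote and bucket names) is to create the bucket once yourself and disable the check for day-to-day copies:

# one-off, explicit bucket creation by an admin
rclone mkdir s3remote:my-bucket
# routine copies that never attempt to create the bucket
rclone copy files/ s3remote:my-bucket --s3-no-check-bucket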

Great!

I am having the same issue. I will try no-check-bucket, but couldn't you check the bucket first before trying to create either a file or a folder in it? Rclone mount works fine; sync works fine.

Just an idea

hello,

you are not providing any details, so it's hard to know what the issue really is.

And yes, that works. Bottom line: it doesn't put the bucket name in the create-backend command. Now that I know this I will re-check the docs. Thanks for the great product. Since I believe all backend copies to S3 will require a bucket name, I assumed it would know that and not try to create a bucket unless it was missing. Just an idea, and in this case an extremely low priority, since the software will be used primarily for sync.

% rclone -vv copy --s3-no-check-bucket Junk/abc.txt Wasabi:als-rclone-testing/Junk
2021/03/22 12:16:58 DEBUG : rclone: Version "v1.54.1" starting with parameters ["rclone" "-vv" "copy" "--s3-no-check-bucket" "Junk/abc.txt" "Wasabi:als-rclone-testing/Junk"]
2021/03/22 12:16:58 DEBUG : Creating backend with remote "Junk/abc.txt"
2021/03/22 12:16:58 DEBUG : Using config file from "/Users/allenstrand/.config/rclone/rclone.conf"
2021/03/22 12:16:58 DEBUG : fs cache: adding new entry for parent of "Junk/abc.txt", "/Users/allenstrand/docker/rc_sync/Main/Junk"
2021/03/22 12:16:58 DEBUG : Creating backend with remote "Wasabi:als-rclone-testing/Junk"
2021/03/22 12:16:59 DEBUG : abc.txt: Need to transfer - File not found at Destination
2021/03/22 12:16:59 DEBUG : abc.txt: MD5 = 04ba1fe3bb08907ed4ff1386e6183399 OK
2021/03/22 12:16:59 INFO : abc.txt: Copied (new)
2021/03/22 12:16:59 INFO :
Transferred: 16 / 16 Bytes, 100%, 56 Bytes/s, ETA 0s
Transferred: 1 / 1, 100%
Elapsed time: 0.6s

If you have a problem,
best to start a new post using the help and support template, and supply all the requested information.

I understand some of the rationale, but honestly, this behavior feels like a bug (maybe a bug-by-design) or at least wrong behavior. I would expect rclone copy SRC DST to behave the same whether SRC is a file or a folder. Why would rclone attempt to create a bucket if it already exists?
When configuring permissions for cloud storage, we control the bucket level and the object level differently. In many cases an account/role/user will have permissions to list, modify, and create objects, but will not have write permissions at the bucket level. Rclone should succeed if it can succeed. So basically now, the following works:

rclone copy RMT1:SRC/a/b/ RMT2:DST/c/d/

but this doesn't work:

rclone copy RMT1:SRC/a/b/bla.txt RMT2:DST/c/d/

hello and welcome to the forum,

  • what version of rclone?
  • why would the second example fail?
  • what is the debug output of that command?

Hi @asdffdsa and thank you for welcoming me!

  • version: rclone v1.54.0
  • it will fail exactly for the reason described in this ticket: apparently rclone tries to create the bucket without listing it first, and RMT2 doesn't have bucket-creation permissions (which aren't really necessary for copying).
  • As I said, this is exactly as described by the OP. The account used does not have CreateBucket permissions, and the copy fails when attempting to copy a single file.

Working Command DEBUG:

╰─$ rclone copy -vv bla/ s3-placer-guy-test:guyarad-test
2021/04/23 09:27:49 DEBUG : rclone: Version "v1.54.0" starting with parameters ["rclone" "copy" "-vv" "bla/" "s3-placer-guy-test:guyarad-test"]
2021/04/23 09:27:49 DEBUG : Creating backend with remote "bla/"
2021/04/23 09:27:49 DEBUG : Using config file from "/Users/guyarad/.config/rclone/rclone.conf"
2021/04/23 09:27:49 DEBUG : fs cache: renaming cache item "bla/" to be canonical "/Users/guyarad/src/placer-django-server/bla"
2021/04/23 09:27:49 DEBUG : Creating backend with remote "s3-placer-guy-test:guyarad-test"
2021/04/23 09:27:50 DEBUG : S3 bucket guyarad-test: Waiting for checks to finish
2021/04/23 09:27:50 DEBUG : S3 bucket guyarad-test: Waiting for transfers to finish
2021/04/23 09:27:51 DEBUG : bla.txt: MD5 = 3cd7a0db76ff9dca48979e24c39b408c OK
2021/04/23 09:27:51 INFO  : bla.txt: Copied (new)
2021/04/23 09:27:51 INFO  :

Failed Command DEBUG:

╰─$ rclone copy -vvv bla/bla.txt s3-placer-guy-test:guyarad-test
2021/04/23 09:23:27 DEBUG : rclone: Version "v1.54.0" starting with parameters ["rclone" "copy" "-vvv" "bla/bla.txt" "s3-placer-guy-test:guyarad-test"]
2021/04/23 09:23:27 DEBUG : Creating backend with remote "bla/bla.txt"
2021/04/23 09:23:27 DEBUG : Using config file from "/Users/guyarad/.config/rclone/rclone.conf"
2021/04/23 09:23:27 DEBUG : fs cache: adding new entry for parent of "bla/bla.txt", "/Users/guyarad/src/placer-django-server/bla"
2021/04/23 09:23:27 DEBUG : Creating backend with remote "s3-placer-guy-test:guyarad-test"
2021/04/23 09:23:27 DEBUG : bla.txt: Need to transfer - File not found at Destination
2021/04/23 09:23:28 ERROR : bla.txt: Failed to copy: AccessDenied: Access Denied
	status code: 403, request id: K64RGDM8SXYWN039, host id: iLEI5TtvmJNOOmTMjhr17Zv7W6H7kc5Oxzbq3V9/072+dOC6XPhzdJqjqM0hQ+sfKr3YS0HDmLk=
2021/04/23 09:23:28 ERROR : Attempt 1/3 failed with 1 errors and: AccessDenied: Access Denied
	status code: 403, request id: K64RGDM8SXYWN039, host id: iLEI5TtvmJNOOmTMjhr17Zv7W6H7kc5Oxzbq3V9/072+dOC6XPhzdJqjqM0hQ+sfKr3YS0HDmLk=
2021/04/23 09:23:28 DEBUG : bla.txt: Need to transfer - File not found at Destination
2021/04/23 09:23:28 ERROR : bla.txt: Failed to copy: AccessDenied: Access Denied

After adding a CreateBucket policy for the user, this is the output (notice the "Bucket ... created" line):

2021/04/23 09:41:00 DEBUG : Creating backend with remote "bla/bla.txt"
2021/04/23 09:41:00 DEBUG : Using config file from "/Users/guyarad/.config/rclone/rclone.conf"
2021/04/23 09:41:00 DEBUG : fs cache: adding new entry for parent of "bla/bla.txt", "/Users/guyarad/src/placer-django-server/bla"
2021/04/23 09:41:00 DEBUG : Creating backend with remote "s3-placer-guy-test:guyarad-test"
2021/04/23 09:41:00 DEBUG : bla.txt: Need to transfer - File not found at Destination
2021/04/23 09:41:01 INFO  : S3 bucket guyarad-test: Bucket "guyarad-test" created with ACL "private"
2021/04/23 09:41:01 DEBUG : bla.txt: MD5 = 3cd7a0db76ff9dca48979e24c39b408c OK
2021/04/23 09:41:01 INFO  : bla.txt: Copied (new)

Rclone does this to minimise the number of transactions: in both the bucket-exists and bucket-does-not-exist cases it takes exactly one transaction.

However, this behavior annoys enough people that it is probably worth changing. I could just make rclone stop auto-creating buckets unless you explicitly rclone mkdir them. That would be a backwards-incompatible change, though, and would undoubtedly break people's workflows.

However, it occurs to me that creating a bucket is not a frequent operation, so if that took two API calls (list -> get an error -> create the bucket) it wouldn't be a big deal. That would be a neat and easy way to fix the problem at minimal cost. Though is listing the bucket the right API call to discover whether a bucket exists? That implies list permissions, which rclone may not have.

Rclone could assume the bucket exists at all times and only create it if it gets an error indicating that it doesn't exist. That would then need a retry on the operation. It would remove an API call from every transaction at the cost of some complexity, so perhaps this avenue is worth exploring more. I'm not sure what error is returned when trying to upload something to a non-existent bucket, though.

A quick test with

echo hello | rclone rcat s3:rclone-does-not-exist/file.txt -vv --dump bodies --low-level-retries 1 --retries 1 --s3-no-check-bucket

Gives this error

<Error><Code>NoSuchBucket</Code><Message>The specified bucket does not exist</Message><BucketName>rclone-does-not-exist</BucketName>

Which seems pretty definitive.
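
In the meantime, the list -> create sequence can be approximated in a script. A rough sketch, assuming a POSIX shell, placeholder remote and bucket names, and that a failed listing exits non-zero:

# create the bucket only if listing it fails
rclone lsf remote:bucket >/dev/null 2>&1 || rclone mkdir remote:bucket
# then copy without the implicit bucket check/create
rclone copy files/ remote:bucket --s3-no-check-bucket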

Other things to consider

  • Is this behavior emulated correctly by the various S3 clones?
  • Should this behavior be ported to the other bucket-based systems (azureblob, gcs, etc.)?

As you can see, @guyarad, it is complicated! However, if you open a new issue on GitHub about this, with a link to the forum and a brief description, then I'll consider it further.

In my opinion: I didn't expect rclone to try to create a bucket unless I explicitly said so. If you had asked me before I'd seen any rclone :slight_smile: I would have said it should only happen via an "rclone createbucket" command (not even via "mkdir").

In one of the systems that I use, the permissions are less granular. The default way is to give "lots of" permissions to an access key, and that includes creating buckets. On that one I once made a typo in the bucket name; I expected rclone to say "no such bucket...", but what happened is that it created a bucket with the mistyped name, with the files inside... which I then had to move.

Alas, rclone doesn't work like that, and I think it is too late to change it now, given that there are tens of thousands of users (at least!), so a workaround is needed.

You can have this behaviour with the flag --s3-no-check-bucket or the equivalent no_check_bucket = true in the config.

Yep, this is what I've been using for a while :slight_smile: and it works for me. I understand that it cannot be easily changed.

Thanks very much!

@ncw Thanks for the detailed analysis! As a software engineer myself, I'm well aware that nothing is as simple as it sounds, and I didn't mean to imply that.
I believe the right approach is to keep the same behavior as when copying a directory, where rclone attempts to list the files first. I wouldn't worry about permissions, as rclone will definitely not have create-bucket permission if it can't even list the bucket. I mean, just try to list the top-level bucket (not all the buckets).
That being said, listing first is also backwards incompatible in the sense that, as you said, you are adding an API call after all. So it won't break correctness, but it might impact performance a little.

Created a ticket. Thank you!

Thanks for making the issue - hopefully we can get this sorted!
