Rclone attempts to create an S3 bucket even though the bucket and path already exist

What is the problem you are having with rclone?

I'm trying to copy files to an existing S3 bucket but I get an error. The bucket already exists and other copy jobs work as expected. In the config below, cryptbucket works fine.

The IAM policy for the credentials rclone uses has PutObject rights on the path in question and intentionally does not have CreateBucket permission; I'd like to keep it that way.
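For context, the policy attached to the rclone user looks roughly like this (the actions and resource ARNs here are illustrative, not my exact policy), with no s3:CreateBucket anywhere:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::<<S3-BUCKET>>",
        "arn:aws:s3:::<<S3-BUCKET>>/archives/*"
      ]
    }
  ]
}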

I've gotten this error a number of times, and often it's not immediately clear why. I'm assuming rclone performs some sort of check and, when that check fails, assumes the bucket doesn't exist and attempts to create it.

(Aside: I've also gotten this error when performing a copy using the cryptbucket remote with the --no-traverse option; removing the option resolved the issue.)

Run the command 'rclone version' and share the full output of the command.

$ rclone version
rclone v1.68.2
- os/version: linuxmint 21.3 (64 bit)
- os/kernel: 6.8.0-50-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.23.3
- go/linking: static
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

AWS S3

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copyto -vv --header-upload "x-amz-tagging: backup-ring=0&backup-home-<<USERNAME>>=2025-01" --interactive /path/to/home-<<USERNAME>>-level0.tar.gz home-<<USERNAME>>:/<<HOSTNAME>>

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

[aws-s3]
type = s3
provider = AWS
env_auth = true
access_key_id = XXX
secret_access_key = XXX
region = us-east-1
server_side_encryption = AES256

[cryptbucket]
type = crypt
remote = aws-s3:<<REDACTED>>/archives/<<HOSTNAME>>/<<PERSONAL_DIRECTORY>>
password = XXX

[home-<<USERNAME>>]
type = alias
remote = aws-s3:<<REDACTED>>/archives/<<USERNAME>>
description = AWS S3 path for <<USERNAME>>

Please note that cryptbucket works fine. home-<<USERNAME>> is NOT a crypt remote; it's just an alias to make typing at the console a bit easier. I have also tried running the copy command directly against aws-s3:<<BUCKET>>/archives/<<USERNAME>> and got the same error.
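That direct command was roughly the following (same flags, just bypassing the alias; paths are redacted/illustrative):

rclone copyto -vv --header-upload "x-amz-tagging: backup-ring=0&backup-home-<<USERNAME>>=2025-01" --interactive /path/to/home-<<USERNAME>>-level0.tar.gz aws-s3:<<BUCKET>>/archives/<<USERNAME>>/<<HOSTNAME>>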

A log from the command that you were trying to run with the -vv flag

$ rclone copyto -vv   --header-upload "x-amz-tagging: backup-ring=0&backup-home-<<USERNAME>>=2025-01"   --interactive   /path/to/home-<<USERNAME>>-level0.tar.gz home-<<USERNAME>>:/<<HOSTNAME>>
2025/01/04 12:40:03 DEBUG : rclone: Version "v1.68.2" starting with parameters ["rclone" "copyto" "-vv" "--header-upload" "x-amz-tagging: backup-ring=0&backup-home-<<USERNAME>>=2025-01" "--interactive" "/path/to/home-<<USERNAME>>-level0.tar.gz" "home-<<USERNAME>>:/<<HOSTNAME>>"]
2025/01/04 12:40:03 DEBUG : Creating backend with remote "/path/to/home-<<USERNAME>>-level0.tar.gz"
2025/01/04 12:40:03 DEBUG : Using config file from "/home/<<USERNAME>>/.config/rclone/rclone.conf"
2025/01/04 12:40:03 DEBUG : fs cache: adding new entry for parent of "/path/to/home-<<USERNAME>>-level0.tar.gz", "/path/to"
2025/01/04 12:40:03 DEBUG : Creating backend with remote "home-<<USERNAME>>:/"
2025/01/04 12:40:03 DEBUG : Creating backend with remote "aws-s3:<<S3-BUCKET>>/archives/michael.soh"
2025/01/04 12:40:03 DEBUG : fs cache: renaming cache item "home-<<USERNAME>>:/" to be canonical "aws-s3:<<S3-BUCKET>>/archives/michael.soh"
2025/01/04 12:40:03 DEBUG : home-<<USERNAME>>-level0.tar.gz: Need to transfer - File not found at Destination

rclone: copy "home-<<USERNAME>>-level0.tar.gz"?
y) Yes, this is OK (default)
n) No, skip this
s) Skip all copy operations with no more questions
!) Do all copy operations with no more questions
q) Exit rclone now.
y/n/s/!/q> y
2025/01/04 12:40:06 DEBUG : home-<<USERNAME>>-level0.tar.gz: multi-thread copy: disabling buffering because source is local disk
2025/01/04 12:40:06 ERROR : home-<<USERNAME>>-level0.tar.gz: Failed to copy: multi-thread copy: failed to open chunk writer: failed to prepare upload: operation error S3: CreateBucket, https response error StatusCode: 403, RequestID: YMTJ56YZVGRDKR97, HostID: U0khycZoH/gRB7NadLa3FZzbcnrvsHQTBKCR7b0I+8pvcX02Mc5qvO/i0HitkmOSLr7Y9xNz3ak=, api error AccessDenied: User: arn:aws:iam::<<AWS-ACCOUNT-ID>>:user/rclone-user is not authorized to perform: s3:CreateBucket on resource: "arn:aws:s3:::<<S3-BUCKET>>" because no identity-based policy allows the s3:CreateBucket action
2025/01/04 12:40:06 ERROR : Attempt 1/3 failed with 1 errors and: multi-thread copy: failed to open chunk writer: failed to prepare upload: operation error S3: CreateBucket, https response error StatusCode: 403, RequestID: YMTJ56YZVGRDKR97, HostID: U0khycZoH/gRB7NadLa3FZzbcnrvsHQTBKCR7b0I+8pvcX02Mc5qvO/i0HitkmOSLr7Y9xNz3ak=, api error AccessDenied: User: arn:aws:iam::<<AWS-ACCOUNT-ID>>:user/rclone-user is not authorized to perform: s3:CreateBucket on resource: "arn:aws:s3:::<<S3-BUCKET>>" because no identity-based policy allows the s3:CreateBucket action
2025/01/04 12:40:06 DEBUG : home-<<USERNAME>>-level0.tar.gz: Need to transfer - File not found at Destination

try --s3-no-check-bucket
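e.g. added to the same command (it can also be made permanent with no_check_bucket = true in the [aws-s3] section of the config):

rclone copyto -vv --s3-no-check-bucket --header-upload "x-amz-tagging: backup-ring=0&backup-home-<<USERNAME>>=2025-01" --interactive /path/to/home-<<USERNAME>>-level0.tar.gz home-<<USERNAME>>:/<<HOSTNAME>>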

Thanks! I vaguely remembered that there was an option to prevent this check but couldn't recall it.

That said, any reason why rclone attempts to create the bucket? The docs say:

If set, don't attempt to check the bucket exists or create it.

If rclone checks whether the bucket exists, that check should pass, so CreateBucket shouldn't need to be called.

I'm still pretty new to rclone so I'm not 100% sure how the workflow works.

before rclone copies a file, it wants to confirm that the bucket exists.

there are subtle differences between rclone copy and rclone copyto.
fwiw, always use rclone copy unless there is a very specific reason to use rclone copyto.
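for example, with the same source file, rclone copy treats the destination as a directory and keeps the source filename, while rclone copyto uploads to exactly the name you give it (paths here are illustrative):

rclone copy /path/to/home-<<USERNAME>>-level0.tar.gz home-<<USERNAME>>:/<<HOSTNAME>>
rclone copyto /path/to/home-<<USERNAME>>-level0.tar.gz home-<<USERNAME>>:/<<HOSTNAME>>/home-<<USERNAME>>-level0.tar.gz

the first ends up as <<HOSTNAME>>/home-<<USERNAME>>-level0.tar.gz; the second lands in the same place only because the full destination name was spelled out.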

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.