Help with AWS IAM policy

What is the problem you are having with rclone?

Unable to upload files using an aws s3 profile.

I can use the same credentials to upload files to the same bucket and region using the AWS CLI. So that leads me to believe that something in my IAM policy is incorrect.
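For reference, the CLI test was roughly of this form (file and bucket names are placeholders):

# illustrative placeholder only - the real test used the same credentials, bucket and region as the rclone remote below
aws s3 cp ./testfile.txt s3://mybucketnamehere/test/testfile.txt --region us-east-2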

Ironically, other S3-compatible backends work (e.g. DigitalOcean Spaces); only AWS S3 itself seems to be giving me fits.

Here is the policy I have in place. It looks like it has all the elements from the rclone docs, but I'm still not having any luck.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation",
                "s3:ListBucketMultipartUploads",
                "s3:DeleteObject",
                "s3:GetObject",
                "s3:PutObject",
                "s3:PutObjectAcl"
            ],
            "Resource": "arn:aws:s3:::mybucketnamehere",
            "Condition": {}
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:AbortMultipartUpload",
                "s3:DeleteObject",
                "s3:DeleteObjectVersion",
                "s3:GetObject",
                "s3:GetObjectAcl",
                "s3:GetObjectVersion",
                "s3:GetObjectVersionAcl",
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:PutObjectVersionAcl"
            ],
            "Resource": "arn:aws:s3:::mybucketnamehere/*",
            "Condition": {}
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListAllMyBuckets",
            "Resource": "*",
            "Condition": {}
        }
    ]
}

Any suggestions to resolve this issue would be greatly appreciated.

Run the command 'rclone version' and share the full output of the command.

rclone v1.58.1

  • os/version: ubuntu 20.04 (64 bit)
  • os/kernel: 5.11.0-38-generic (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.17.9
  • go/linking: static
  • go/tags: none

Are you on the latest version of rclone? You can validate by checking the version listed here: Rclone downloads
Yes.

Which cloud storage system are you using? (eg Google Drive)

AWS S3.

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone mkdir testaws:MYBUCKET/test

The rclone config contents with secrets removed.

[testaws]
type = s3
provider = AWS
access_key_id = XXX
secret_access_key = XXX
region = us-east-2
location_constraint = us-east-2
acl = private
server_side_encryption = AES256
storage_class = STANDARD
env_auth = false

Can you post the errors you are getting please?

Ugh, so sorry. It looks like I accidentally deleted that entire section at the bottom of the original post; not sure how that happened. Any attempt to use the bucket gives the same AccessDenied message.

2022/06/14 08:44:07 DEBUG : rclone: Version "v1.58.1" starting with parameters ["rclone" "-vv" "mkdir" "MYBUCKET/test"]
2022/06/14 08:44:07 DEBUG : Creating backend with remote "MYBUCKET/test"
2022/06/14 08:44:07 DEBUG : Using config file from "/root/.config/rclone/rclone.conf"
2022/06/14 08:44:08 NOTICE: S3 bucket MYBUCKET path test: Warning: running mkdir on a remote which can't have empty directories does nothing
2022/06/14 08:44:08 DEBUG : S3 bucket MYBUCKET path test: Making directory
2022/06/14 08:44:08 ERROR : Attempt 1/3 failed with 1 errors and: AccessDenied: Access Denied
status code: 403, request id: XXX, host id: HOST ID REMOVED
2022/06/14 08:44:08 DEBUG : S3 bucket MYBUCKET path test: Making directory
2022/06/14 08:44:08 ERROR : Attempt 2/3 failed with 1 errors and: AccessDenied: Access Denied
status code: 403, request id: XXX, host id: HOST ID REMOVED
2022/06/14 08:44:08 DEBUG : S3 bucket MYBUCKET path test: Making directory
2022/06/14 08:44:08 ERROR : Attempt 3/3 failed with 1 errors and: AccessDenied: Access Denied
status code: 403, request id: XXX, host id: HOST ID REMOVED
2022/06/14 08:44:08 DEBUG : 4 go routines active
2022/06/14 08:44:08 Failed to mkdir: AccessDenied: Access Denied
status code: 403, request id: XXX, host id: HOST ID REMOVED

Try

  --s3-no-check-bucket     If set, don't attempt to check the bucket exists or create it

You can put this in the config with no_check_bucket = true if it works.
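
For example, rerunning the failing command with the flag added:

rclone mkdir testaws:MYBUCKET/test -vv --s3-no-check-bucket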

hello and welcome to the forum,

i suggest that you start with a policy that is known to work and tweak from there
https://rclone.org/s3/#s3-permissions

i often use this, a more locked down iam bucket policy, which requires the use of --s3-no-check-bucket

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::100000065159:user/user.vserver03.br.vserver03.en08"
      },
      "Action": [
        "s3:ListBucket",
        "s3:GetObject",
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Resource": [
        "arn:aws:s3:::bucketname/*",
        "arn:aws:s3:::bucketname"
      ]
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListAllMyBuckets",
      "Resource": "arn:aws:s3:::bucketname"
    }
  ]
}

i think you need to specify an iam user, like in my example and the rclone doc example.

Thanks for the idea. That gets past the initial mkdir error, but unfortunately the error returns when attempting to upload real files.

Hi:

I did try adding the Principal element to the policy, but AWS reported an error that the Principal element is unsupported.

Unsupported Principal: The policy type IDENTITY_POLICY does not support the Principal element. Remove the Principal element.
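
For reference, a minimal sketch of that statement with the Principal element removed, which is what an identity-based policy attached directly to the IAM user expects (bucket name is a placeholder):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetObject",
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Resource": [
        "arn:aws:s3:::bucketname/*",
        "arn:aws:s3:::bucketname"
      ]
    }
  ]
}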

please understand i cannot see into your computer; each time you change the iam policy, you need to post it.

i can only tell you what i did.

  1. start with the working iam policy from rclone docs and get that working.
  2. tweak as needed.

the aws s3 website has a nice gui interface for creating iam policies, users and buckets.
it checks the syntax and validates the policy.
have you tried that?

and there are two gui programs that can help with creating iam policies:
--- s3browser.
--- cloudberry explorer has a nice gui wizard.

Hi, yes, the "Unsupported Principal" error is from the AWS IAM editor, not from rclone. Sorry, I should have been clearer about who was reporting the error.

cannot help much as i have never used the aws iam editor or any gui iam editor.

I understand. Thanks for the ideas - much appreciated. I'm sure it's something silly and simple since the policy works otherwise, just not with rclone for some reason. If I ever figure it out I'll update this topic.

up above it was suggested to test using --s3-no-check-bucket?
that did not work?

No, unfortunately not. It suppressed the access denied errors for the mkdir command, but attempting to upload files didn't work (same errors).

The way to debug this is to add -vv --dump headers to show the HTTP transactions; you can then see exactly what sort of request is failing, which should give you a good idea of which permission you are missing.
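
For example, using the mkdir test from earlier:

rclone mkdir testaws:MYBUCKET/test -vv --dump headers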

Hi:

Thanks for the suggestion. The output does add a bit more color: the first request returns a 404 (Not Found) and the remaining requests return 403 (Forbidden).

Not sure why the first request would return a 404. The bucket is definitely in region us-east-2.

(If I take the host shown in the debug output and navigate to it in a browser, I get an access denied message, which is expected since I didn't send credentials with the browser request. I don't get a 404 there.)

Any additional ideas would be appreciated.

Thanks!

fwiw, this is what one of my remotes looks like.
i never use location_constraint, acl, or env_auth

[aws_vserver03_veeam_br_en0701_remote]
type = s3
provider = AWS
access_key_id = xxx
secret_access_key = xxx
region = us-east-1
storage_class = DEEP_ARCHIVE

Thanks - I did try stripping the config down to its barest essentials, just like you have in your file, and it didn't make a difference. I think I'll try creating a new bucket in the same region. Maybe it's something on the bucket itself (transition rules or other bucket attributes).

If you post the requests/responses I can help you decode them.

Hi - here are the headers. I created a new bucket and then used the aws iam template from the rclone docs to create a new iam in-line policy attached to a new user.

For some reason AWS doesn't like the Principal statement in the rclone iam template, but other than that the policy is the same as the docs.

So: new bucket, new user, new policy, same region (us-east-2). Same errors and same headers as the original bucket and policy mentioned in this thread. Here is the headers output (the last couple of requests are just repeats, so I removed them for brevity).

2022/06/15 23:05:14 DEBUG : You have specified to dump information. Please be noted that the Accept-Encoding as shown may not be correct in the request and the response may not show Content-Encoding if the go standard libraries auto gzip encoding was in effect. In this case the body of the request will be gunzipped before showing it.
2022/06/15 23:05:14 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2022/06/15 23:05:14 DEBUG : HTTP REQUEST (req 0xc0007d4f00)
2022/06/15 23:05:14 DEBUG : HEAD /test HTTP/1.1
Host: rclone-test-useast-ohio.s3.us-east-2.amazonaws.com
User-Agent: rclone/v1.58.1
Authorization: XXXX
X-Amz-Content-Sha256: xxx
X-Amz-Date: 20220615T230514Z

2022/06/15 23:05:14 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2022/06/15 23:05:15 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2022/06/15 23:05:15 DEBUG : HTTP RESPONSE (req 0xc0007d4f00)
2022/06/15 23:05:15 DEBUG : HTTP/1.1 404 Not Found
Connection: close
Content-Type: application/xml
Date: Wed, 15 Jun 2022 23:05:14 GMT
Server: AmazonS3
X-Amz-Id-2: xxx
X-Amz-Request-Id: xxx

2022/06/15 23:05:15 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2022/06/15 23:05:15 NOTICE: S3 bucket rclone-test-useast-ohio path test: Warning: running mkdir on a remote which can't have empty directories does nothing
2022/06/15 23:05:15 DEBUG : S3 bucket rclone-test-useast-ohio path test: Making directory
2022/06/15 23:05:15 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2022/06/15 23:05:15 DEBUG : HTTP REQUEST (req 0xc0007d4500)
2022/06/15 23:05:15 DEBUG : PUT / HTTP/1.1
Host: rclone-test-useast-ohio.s3.us-east-2.amazonaws.com
User-Agent: rclone/v1.58.1
Content-Length: 153
Authorization: XXXX
X-Amz-Acl: private
X-Amz-Content-Sha256: xxxx
X-Amz-Date: 20220615T230515Z
Accept-Encoding: gzip

2022/06/15 23:05:15 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2022/06/15 23:05:15 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2022/06/15 23:05:15 DEBUG : HTTP RESPONSE (req 0xc0007d4500)
2022/06/15 23:05:15 DEBUG : HTTP/1.1 403 Forbidden
Transfer-Encoding: chunked
Content-Type: application/xml
Date: Wed, 15 Jun 2022 23:05:14 GMT
Server: AmazonS3
X-Amz-Id-2: xxx
X-Amz-Request-Id: xxx

2022/06/15 23:05:15 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2022/06/15 23:05:15 ERROR : Attempt 1/3 failed with 1 errors and: AccessDenied: Access Denied
        status code: 403, request id: ABS02MF1X5QW3DPK, host id: 1pz/z5GSpLmFCGL3j61a8KwifPsRT/25GSXy5ViFz3linXnYGYobphvZssA0cxLvtVZRGKr3TdQ=
2022/06/15 23:05:15 DEBUG : S3 bucket rclone-test-useast-ohio path test: Making directory
2022/06/15 23:05:15 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2022/06/15 23:05:15 DEBUG : HTTP REQUEST (req 0xc0007d5100)
2022/06/15 23:05:15 DEBUG : PUT / HTTP/1.1
Host: rclone-test-useast-ohio.s3.us-east-2.amazonaws.com
User-Agent: rclone/v1.58.1
Content-Length: 153
Authorization: XXXX
X-Amz-Acl: private
X-Amz-Content-Sha256: xxx
X-Amz-Date: 20220615T230515Z
Accept-Encoding: gzip

That looks like rclone trying to create the bucket. Are you using --s3-no-check-bucket?
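
If not, something along these lines (with the remote and bucket substituted as appropriate) should skip the bucket creation attempt while still dumping the transactions:

rclone mkdir testaws:MYBUCKET/test -vv --dump headers --s3-no-check-bucket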