Rclone copy/file failing


What is the problem you are having with rclone?

I'm using the rcd command copyfile to upload a file from the local Windows file system to AWS S3. It works perfectly when full S3 access policies are provided at the user level.
After adding a filter policy at the S3 bucket level as follows, the upload started to break:

~~~
"arn:aws:s3:::bucket/.txt",
"arn:aws:s3:::bucket/.csv"
~~~

The error is as follows:

~~~
2024/01/29 08:21:41 ERROR : ac.txt: Failed to copy: AccessDenied: Access Denied
~~~

From the rclone logs:

~~~
switching user supplied name ":s3,provider='AWS',secret_access_key='',access_key_id='',region='us-east-1',server_side_encryption='aws:kms':bucket\foldername\" for canonical name ":s3{yJtAc}:bucket/foldername"
~~~

There doesn't seem to be a "/" at the end of the canonical name, so it is treated like a file, which throws the access denied error.

Run the command 'rclone version' and share the full output of the command.


~~~
rclone v1.61.1
- os/version: Microsoft Windows 10 Enterprise 21H2 (64 bit)
- os/kernel: 10.0.19044.2846 (x86_64)
- os/type: windows
- os/arch: amd64
- go/version: go1.19.4
- go/linking: static
- go/tags: cmount
~~~

Which cloud storage system are you using? (eg Google Drive)

AWS S3

The command you were trying to run (eg rclone copy /tmp remote:tmp)

~~~
http://localhost:5572/operations/copyfile
~~~

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.


A log from the command that you were trying to run with the -vv flag


hello,

perhaps try arn:aws:s3:::bucket/*.txt
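for example, the Resource block could look something like this (a sketch, assuming your bucket really is named `bucket`; swap in the real name):

~~~
"Resource": [
    "arn:aws:s3:::bucket/*.txt",
    "arn:aws:s3:::bucket/*.csv"
]
~~~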

please post the full bucket policy

complete policy (s3_policy):

~~~
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets",
        "s3:ListBucket"
      ],
      "Resource": ""
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::bucket/.txt",
        "arn:aws:s3:::bucket/*.csv"
      ]
    }
  ]
}
~~~

can you please format the text correctly, using ~~~ before and after the text

here is an example, so it is formatted like so:

~~~text
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets",
        "s3:ListBucket"
      ],
      "Resource": ""
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::bucket/.txt",
        "arn:aws:s3:::bucket/*.csv"
      ]
    }
  ]
}
~~~

- your policy does not look valid, did you try my suggestion from up above, *.txt
- or just start over from scratch, and re-type the policy, not copy-paste.
- or use the rclone official policy, get that working and then tweak it.

It is /*.txt indeed, but that is not working.
Additionally, it works if I provide full S3 access.

(screenshots of the policy)
that is three times now, not correct: no * and some of the forward slashes do not look correct.

i made a number of suggestions???

I tried the following and it still throws the same access denied error:

~~~
"Resource": [
    "arn:aws:s3:::bucket/*.txt",
    "arn:aws:s3:::bucket/*.csv"
]
~~~

@ncw @asdffdsa
Please help

let's do the simple stuff, before invoking ncw...
i made several simple suggestions and repeatedly asked you about them???
please understand, we are just volunteers and you need to work with us, ok?

The only suggestion I see is to try

~~~
bucket/*.txt
~~~

which I did, and it isn't working, as I repeatedly reported in the comments above.
The policy I pasted above is correct; it is copied directly from S3.
Again, I'm pointing out the same thing: this works perfectly if full access is provided in the S3 policy, but fails if the bucket has access restricted to certain file extensions.

i set up a bucket policy to only allow .txt files to be uploaded, and here is the output.

~~~
rclone copy file.txt remote:bucket -v --s3-no-check-bucket
INFO  : file.txt: Copied (new)
INFO  :           1 B / 1 B, 100%, 0 B/s, ETA -

rclone copy file.ext remote:bucket -v --s3-no-check-bucket
ERROR : Attempt 1/1 failed with 1 errors and: Forbidden: Forbidden
	status code: 403, request id: 98ED7BE98CCC6E5C:B, host id: rehm3DYX4AtVYtmGh2OR1k58XE9WHEd2vYrOgVzR5ysH4P0swT6oAN56Jhu8ZJHwaJoE5YLIi75F
~~~
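for reference, a bucket policy along these lines would produce that behavior (a sketch, not necessarily the exact policy used for the test; `bucket` is a placeholder):

~~~
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyNonTxtUploads",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "NotResource": "arn:aws:s3:::bucket/*.txt"
    }
  ]
}
~~~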

I'm using the rcd API to do the same.
This is what I'm trying:

~~~
http://localhost:5572/operations/copyfile
{
    "dstFs": ":s3,provider='AWS',secret_access_key='',access_key_id='',region='us-east-2',server_side_encryption='aws:kms':bucket\\folder1\\folder2\\",
    "dstRemote": "newfile.txt",
    "srcFs": "C:\\folder1\\folder2",
    "srcRemote": "newfile.txt"
}
~~~
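For comparison, here is the same request with forward slashes in dstFs, in case the backslashes in the remote path are part of the problem (rclone remote paths use forward slashes even on Windows; keys are redacted as in the original):

~~~
http://localhost:5572/operations/copyfile
{
    "dstFs": ":s3,provider='AWS',secret_access_key='',access_key_id='',region='us-east-2',server_side_encryption='aws:kms':bucket/folder1/folder2/",
    "dstRemote": "newfile.txt",
    "srcFs": "C:\\folder1\\folder2",
    "srcRemote": "newfile.txt"
}
~~~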

Policy part for the file extensions:

~~~
"Action": [
    "s3:PutObject",
    "s3:GetObject",
    "s3:DeleteObject"
],
"Resource": [
    "arn:aws:s3:::bucket/*.txt",
    "arn:aws:s3:::bucket/*.csv"
]
~~~

Please correct me if I'm doing anything wrong.

Did you try --s3-no-check-bucket?

This worked. Thanks!
Can you provide some insight into this flag? Why was the create-bucket check failing?

about --s3-no-check-bucket, i used that in the command i shared with you.
i should have been more clear about that.

one option, instead of adding the flag on every command, is to save it one time to the config file:

~~~
no_check_bucket = true
~~~
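for example, a named remote in rclone.conf might look like this (a sketch; `myS3` and the key values are placeholders):

~~~
[myS3]
type = s3
provider = AWS
access_key_id = XXX
secret_access_key = XXX
region = us-east-2
server_side_encryption = aws:kms
no_check_bucket = true
~~~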

check out the docs: "If set, don't attempt to check the bucket exists or create it."
by default, rclone does a HEAD on the bucket, to see if the bucket exists.
if you want a deeper look at the API calls made by rclone, add --dump=headers
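for example, the same test command as above, just with the extra flag:

~~~
rclone copy file.txt remote:bucket -vv --dump=headers --s3-no-check-bucket
~~~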

your policy is a bit more locked down than the official rclone policy; it is missing:

~~~
"Effect": "Allow",
"Action": "s3:ListAllMyBuckets",
"Resource": "arn:aws:s3:::*"
~~~

so when rclone tries to HEAD the bucket, the policy does not allow it.
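putting it together, a user policy along these lines should work with --s3-no-check-bucket (a sketch; `bucket` is a placeholder for the real bucket name):

~~~
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListAllMyBuckets"],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::bucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": [
        "arn:aws:s3:::bucket/*.txt",
        "arn:aws:s3:::bucket/*.csv"
      ]
    }
  ]
}
~~~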

fwiw, i never want rclone to create buckets, so i always use a locked-down bucket policy and always use --s3-no-check-bucket
