Rclone check on s3 using GetObjectAttributes (WITHOUT using GetObject permission)?

What is the problem you are having with rclone?

Trying to run rclone check on s3, but can't give GetObject permission. Should rclone be able to run a check using GetObjectAttributes only?

Run the command 'rclone version' and share the full output of the command.

rclone v1.62.2
- os/version: Microsoft Windows Server 2019 Standard 1809 (64 bit)
- os/kernel: 10.0.17763.4131 Build 17763.4131.4131 (x86_64)
- os/type: windows
- os/arch: amd64
- go/version: go1.20.2
- go/linking: static
- go/tags: cmount

Which cloud storage system are you using? (eg Google Drive)

AWS S3 bucket

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone check c:\foldername aws-remote:bucketname/foldername

The rclone config contents with secrets removed.

[aws-remote]
type = s3
provider = AWS
access_key_id = 
secret_access_key = 
region = eu-west-2
location_constraint = eu-west-2
acl = private
server_side_encryption = aws:kms
storage_class = GLACIER_IR

A log from the command with the -vv flag

2023/04/01 17:17:24 ERROR : filename.ppt: Failed to calculate dst hash: Forbidden: Forbidden
        status code: 403, request id: ZNP778DQHKYJJGWR, host id: sxEG6AKKmzC0uqeVd+70ne2OX85Zu1cyLLOcFZIZyBckZlen6GtvbMTnXl0Fa1KAaFuy5oJM6PE=
(Sorry it's not a full log; it would take too long to redact filenames, etc., but this shows the error message.)

Try with

-vv --dump headers

And you'll see exactly which HTTP request is failing.
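For example, with the command from the original post:

rclone check c:\foldername aws-remote:bucketname/foldername -vv --dump headers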

hello and welcome to the forum,

what is the policy for aws-remote:bucketname?

from a quick read of https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectAttributes.html

potential problems:

  1. "If you don’t have the s3:ListBucket permission, Amazon S3 returns an HTTP status code 403 Forbidden (“access denied”) error."

or

  1. rclone check on S3 uses header x-amz-meta-md5chksum
    i think, need to use GetObject to return that value.
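fwiw, one way to look at that metadata is with the AWS CLI. note that HeadObject itself needs the s3:GetObject permission, so under your policy this would also return 403 (bucket and key names below are just the examples from this thread):

aws s3api head-object --bucket bucketname --key foldername/filename.ppt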

might try --size-only and/or one or more of these flags:
--s3-no-check-bucket --s3-no-head-object --s3-no-head
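for example, a sketch reusing the paths from the original command:

rclone check c:\foldername aws-remote:bucketname/foldername --size-only --s3-no-check-bucket --s3-no-head-object --s3-no-head -vv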


ncw, though of note, GetObjectAttributes does return the ETag.


Policy on S3 bucket

"Action": [
				"s3:PutObject",
				"s3:ListBucket",
				"s3:DeleteObject",
				"s3:ListBucketVersions",
				"s3:GetObjectAttributes"
			],

The permissions allow users to back up data to the bucket; if the credentials are compromised, the data can't be downloaded with those same credentials.
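For reference, the full statement looks something like this (bucketname is a placeholder; note that the list actions apply to the bucket ARN, while the object actions apply to bucketname/*):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:ListBucketVersions"
            ],
            "Resource": "arn:aws:s3:::bucketname"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:DeleteObject",
                "s3:GetObjectAttributes"
            ],
            "Resource": "arn:aws:s3:::bucketname/*"
        }
    ]
}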

It would be great for rclone to use GetObjectAttributes for checks instead.

--size-only will do for now. Thanks for the suggestion :+1:

might check out this topic I started, about using a minimal policy:
https://forum.rclone.org/t/in-some-cases-rclone-does-not-use-etag-to-verify-files/36095

but if you want the most minimal policy that can upload a file:

"Action": "s3:PutObject",
rclone copy d:\files\1GiB\1GiB.file zork: --s3-no-check-bucket --s3-no-head --s3-no-head-object --s3-chunk-size=256M -vv 
DEBUG : Setting --config "C:\\data\\rclone\\rclone.conf" from environment variable RCLONE_CONFIG="C:\\data\\rclone\\rclone.conf"
DEBUG : rclone: Version "v1.62.2" starting with parameters ["C:\\data\\rclone\\rclone.exe" "copy" "d:\\files\\1GiB\\1GiB.file" "zork:" "--s3-no-check-bucket" "--s3-no-head" "--s3-no-head-object" "--s3-chunk-size=256M" "-vv" ]
DEBUG : Creating backend with remote "d:\\files\\1GiB\\1GiB.file"
DEBUG : Using config file from "C:\\data\\rclone\\rclone.conf"
DEBUG : fs cache: adding new entry for parent of "d:\\files\\1GiB\\1GiB.file", "//?/d:/files/1GiB"
DEBUG : Creating backend with remote "zork:"
DEBUG : Creating backend with remote "wasabi_zork_remote:zork"
DEBUG : wasabi_zork_remote: detected overridden config - adding "{d6Ro9}" suffix to name
DEBUG : Resolving service "s3" region "us-east-1"
DEBUG : fs cache: renaming cache item "wasabi_zork_remote:zork" to be canonical "wasabi_zork_remote{d6Ro9}:zork"
DEBUG : fs cache: renaming cache item "zork:" to be canonical "wasabi_zork_remote{d6Ro9}:zork"
DEBUG : 1GiB.file: Sizes differ (src 1073741824 vs dst 0)
DEBUG : 1GiB.file: multipart upload starting chunk 1 size 256Mi offset 0/1Gi
DEBUG : 1GiB.file: multipart upload starting chunk 2 size 256Mi offset 256Mi/1Gi
DEBUG : 1GiB.file: multipart upload starting chunk 3 size 256Mi offset 512Mi/1Gi
DEBUG : 1GiB.file: multipart upload starting chunk 4 size 256Mi offset 768Mi/1Gi
DEBUG : 1GiB.file: Multipart upload Etag: fb45c1c8b5eab382b93bec76f28907f2-4 OK
DEBUG : 1GiB.file: md5 = cd573cfaace07e7949bc0c46028904ff OK
INFO  : 1GiB.file: Copied (replaced existing)
INFO  : 
Transferred:   	        1 GiB / 1 GiB, 100%, 42.082 MiB/s, ETA 0s
Transferred:            1 / 1, 100%
Elapsed time:        24.1s
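for completeness, that write-only policy as a full document would look something like this (the bucket name is a placeholder; with only PutObject, rclone needs the --s3-no-check-bucket --s3-no-head --s3-no-head-object flags from the command above, since it cannot list or HEAD anything):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::bucketname/*"
        }
    ]
}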
