Rclone respects S3 policy to restrict dir access but still creates a signed public URL

What is the problem you are having with rclone?

I am using rclone to access S3 storage. A non-admin user is denied access to certain S3 paths, as expected, but if that user knows the full path to a file (say s3:bucket1/root/.ssh/id_rsa), they can still create a public URL for that file, which ideally should not be possible.

I have two S3 policies: one enables read-only access for non-admin users to most S3 paths except a few admin-related paths, and the other disallows creating signed public URLs for S3 objects. From what I have read about S3 policies, I think that if a user has read access to an S3 object, that automatically grants permission to create a public URL, unless a bucket-level policy restricts such an action.

My issue may not be related to rclone at all; it may instead be caused by invalid S3 policies on these buckets.
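
For context, my understanding is that presigning is a client-side signing operation, so any client with the same keys can produce such a URL; a rough equivalent outside rclone (untested sketch, assuming the non-admin keys are set up as an AWS CLI profile I'll call nonadmin, and that the endpoint from my rclone config is reachable over https):

# Sketch only: presign the same object with the AWS CLI using the non-admin keys.
# "nonadmin" is a hypothetical profile name; whether the URL actually works depends on the signer's permissions.
aws --profile nonadmin --endpoint-url https://s3.XXX.com \
    s3 presign s3://bucket1/root/.ssh/id_rsa --expires-in 604800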

Run the command 'rclone version' and share the full output of the command.

rclone v1.67.0
- os/version: redhat 8.8 (64 bit)
- os/kernel: 4.18.0-477.74.1.el8_8.x86_64 (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.22.4
- go/linking: static
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

S3

The command you were trying to run (eg rclone copy /tmp remote:tmp)

# listing of root/* is denied and works as expected
rclone lsl s3nonadmin:bucket1/root/.ssh/id_rsa

# this should be disallowed too, but I see a valid public URL as output
rclone link s3nonadmin:bucket1/root/.ssh/id_rsa

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

[s3nonadmin]
type = s3
provider = XXX
access_key_id = XXX
secret_access_key = XXX
endpoint = s3.XXX.com
acl = private

A log from the command that you were trying to run with the -vv flag

2025/01/05 12:41:13 DEBUG : rclone: Version "v1.67.0" starting with parameters ["rclone" "lsl" "s3nonadmin:bucket1/root/.ssh/id_rsa" "-vv"]
2025/01/05 12:41:13 DEBUG : Creating backend with remote "s3nonadmin:bucket1/root/.ssh/id_rsa"
2025/01/05 12:41:13 DEBUG : Using config file from "/home/foo/.config/rclone/rclone.conf"
2025/01/05 12:41:13 DEBUG : Resolving service "s3" region "XXX"
2025/01/05 12:41:13 DEBUG : fs cache: adding new entry for parent of "s3nonadmin:bucket1/root/.ssh/id_rsa", "s3nonadmin:bucket1/root/.ssh"
2025/01/05 12:41:13 ERROR : : error listing: AccessDenied: Access Denied
        status code: 403, request id: XXXX:A, host id: XXXX
2025/01/05 12:41:13 DEBUG : 6 go routines active
2025/01/05 12:41:13 Failed to lsl with 2 errors: last error was: AccessDenied: Access Denied
        status code: 403, request id: XXXX:A, host id: XXXX
2025/01/05 12:44:06 DEBUG : rclone: Version "v1.67.0" starting with parameters ["rclone" "link" "s3nonadmin:bucket1/root/.ssh/id_rsa" "-vv"]
2025/01/05 12:44:06 DEBUG : Creating backend with remote "s3nonadmin:bucket1/root/.ssh/id_rsa"
2025/01/05 12:44:06 DEBUG : Using config file from "/home/foo/.config/rclone/rclone.conf"
2025/01/05 12:44:06 DEBUG : Resolving service "s3" region "XXX"
2025/01/05 12:44:06 DEBUG : fs cache: adding new entry for parent of "s3nonadmin:bucket1/root/.ssh/id_rsa", "s3nonadmin:bucket1/root/.ssh"
2025/01/05 12:44:06 NOTICE: S3 bucket bucket1 path root/.ssh/id_rsa: Public Link: Reducing expiry to 1w as off is greater than the max time allowed
https://bucket1.<endpoint>.com/root/.ssh/id_rsa?X-Amz-Algorithm=AWS4...

S3 policies for s3nonadmin user

{
  "Id": "readonly",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowUserToSeeBucketListInTheConsole",
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets",
        "s3:GetBucketLocation",
        "s3:GetBucketCompliance"
      ],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Sid": "AllowRootListingOfCompanyBucket",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": [
        "arn:aws:s3:::bucket1",
        "arn:aws:s3:::bucket2",
        "arn:aws:s3:::bucket1/*",
        "arn:aws:s3:::bucket2/*"
      ],
      "Condition": {
        "StringEquals": {
          "s3:delimiter": "/",
          "s3:prefix": ""
        }
      }
    },
    {
      "Sid": "AllowReadOnlyForBucketsExcept",
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Resource": [
        "arn:aws:s3:::bucket1",
        "arn:aws:s3:::bucket2",
        "arn:aws:s3:::bucket1/*",
        "arn:aws:s3:::bucket2/*"
      ],
      "Condition": {
        "StringNotLike": {
          "s3:prefix": [
            "admin1/*",
            "admin2/*",
            "root/*"
          ]
        }
      }
    }
  ]
}
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyACLedits",
      "Effect": "Deny",
      "Action": [
        "s3:GetObjectAcl",
        "s3:PutObjectAcl"
      ],
      "Resource": "arn:aws:s3:::*"
    }
  ]
}

Thanks!

While anyone can create a presigned URL for any path within your bucket, the person using the URL is limited to the permissions of the credentials that were used to create it.

From the docs:

Anyone with valid security credentials can create a presigned URL. But for someone to successfully access an object, the presigned URL must be created by someone who has permission to perform the operation that the presigned URL is based upon.

Have you tried using the generated URL to download the files in question? Based on the documentation, your attempt should result in a 403 error.
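
For example (just a sketch; replace <presigned-url> with the output of rclone link):

# Fetch the presigned URL and print only the HTTP status code.
curl -s -o /dev/null -w "%{http_code}\n" "<presigned-url>"
# 403 means the download was blocked; 200 means it went through.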

You can explicitly deny them access to sensitive paths with a deny policy:

{
    "Sid": "ExplictDenyToSensitivePaths",
    "Effect": "Deny",
    "Action": [
        "s3:Get*",
        "s3:List*"
    ],
    "Resource": [
        "arn:aws:s3:::bucket1/*",
        "arn:aws:s3:::bucket2/*"
    ],
    "Condition": {
        "StringLike": {
            "s3:prefix": [
                "admin1/*",
                "admin2/*",
                "root/*"
            ]
        }
    }
}

Thanks! I tried downloading from the URL and I can successfully download the file, with valid MD5 checksum and content. I replaced DenyACLedits with the statement you suggested that explicitly denies all Get/List requests for those sensitive paths, but I am still able to generate URLs for those paths and download the files. I guess the ordering of the policies is causing this behavior. I will post more if I find a solution.

The following worked: I removed the AllowReadOnlyForBucketsExcept block and added the Deny block, keeping everything in a single policy.

Thanks!

{
  "Id": "readonly",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowUserToSeeBucketListInTheConsole",
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets",
        "s3:GetBucketLocation",
        "s3:GetBucketCompliance"
      ],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Sid": "AllowRootListingOfCompanyBucket",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": [
        "arn:aws:s3:::bucket1",
        "arn:aws:s3:::bucket2",
        "arn:aws:s3:::bucket1/*",
        "arn:aws:s3:::bucket2/*"
      ],
      "Condition": {
        "StringEquals": {
          "s3:delimiter": "/",
          "s3:prefix": ""
        }
      }
    },
    {
      "Sid": "ExplictDenyToSensitivePaths",
      "Effect": "Deny",
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Resource": [
        "arn:aws:s3:::bucket1/*",
        "arn:aws:s3:::bucket2/*"
      ],
      "Condition": {
        "StringLike": {
          "s3:prefix": [
            "admin1/*",
            "admin2/*",
            "root/*"
          ]
        }
      }
    }
  ]
}
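
To double-check the fix, something like the following should work (sketch only; the URL itself may still be generated, since presigning happens client-side, but downloading through it should now be rejected):

# Generate a presigned URL with the non-admin remote, then try to use it.
URL=$(rclone link s3nonadmin:bucket1/root/.ssh/id_rsa)
curl -s -o /dev/null -w "%{http_code}\n" "$URL"
# A 403 here indicates the explicit Deny statement is taking effect.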