List on AWS Access Point does not work, but all other commands do

What is the problem you are having with rclone?

I have an access point set up controlling a sub-folder of an AWS bucket. I’ve granted read/write access via an access point policy, and the appropriate users can write through the access point, but for some reason they cannot list the bucket.

When attempting to list, they get a "directory not found" error. However, the same action works fine using the AWS CLI.

Run the command 'rclone version' and share the full output of the command.

rclone v1.71.0-DEV
- os/version: rocky 8.10 (64 bit)
- os/kernel: 4.18.0-553.56.1.el8_10.x86_64 (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.24.5
- go/linking: static
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

AWS S3
The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone lsd maripen_apex_ut:CMC/

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

[maripen_apex_ut]
type = s3
provider = AWS
access_key_id = XXX
secret_access_key = XXX
region = us-east-2
location_constraint = us-east-2
endpoint = https://cmc-msi-accesspoint-2-254319122668.s3-accesspoint.us-east-2.amazonaws.com
no_check_bucket = false
force_path_style = false
use_arn_region = true
storage_class = STANDARD

A log from the command that you were trying to run with the -vv flag

2025/09/02 10:03:39 DEBUG : rclone: Version "v1.71.0-DEV" starting with parameters ["rclone" "lsd" "maripen_apex_ut:CMC/" "-vv"]
2025/09/02 10:03:39 DEBUG : Creating backend with remote "maripen_apex_ut:CMC/"
2025/09/02 10:03:39 DEBUG : Using config file from "/users/5/huxfo013/.config/rclone/rclone.conf"
2025/09/02 10:03:39 DEBUG : fs cache: renaming cache item "maripen_apex_ut:CMC/" to be canonical "maripen_apex_ut:CMC"
2025/09/02 10:03:39 ERROR : error listing: directory not found
2025/09/02 10:03:39 DEBUG : 6 go routines active
2025/09/02 10:03:39 NOTICE: Failed to lsd with 2 errors: last error was: directory not found

As it might be relevant, I am adding the IAM policy for this user here:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetObject",
                "s3:GetObjectTagging",
                "s3:PutObject",
                "s3:PutObjectTagging",
                "s3:DeleteObject"
            ],
            "Resource": "*"
        }
    ]
}

And the relevant portion of the access point policy:

{
        "Version": "2012-10-17",
        "Statement": [
                {
                        "Sid": "CMCListUsers",
                        "Effect": "Allow",
                        "Principal": {
                                "AWS": [
                                        "arn:aws:iam::376129864689:user/myeatts.psoct.cmc"
                                ]
                        },
                        "Action": [
                                "s3:ListBucket"
                        ],
                        "Resource": "arn:aws:s3:us-east-2:254319122668:accesspoint/cmc-msi-accesspoint-2",
                        "Condition": {
                                "StringLike": {
                                        "s3:prefix": [
                                                "CMC/*",
                                                "CMC/"
                                        ]
                                }
                        }
                },

I did see a similar issue here: Rclone with Amazon S3 access point - #6 by Bjorn_Olsen. But it seems like their issue was with pulling their auth from the environment, whereas I am providing it via the config here.

Thanks!

I thought I answered this already - did you make an issue too? (Found it here! Rclone cannot list or access files via S3 Access Point (cross-account), but AWS CLI works · Issue #8686 · rclone/rclone · GitHub )

I think the problem is here

endpoint = https://cmc-msi-accesspoint-2-254319122668.s3-accesspoint.us-east-2.amazonaws.com

Take the bucket name out of the endpoint and put it in your rclone command

endpoint = https://s3-accesspoint.us-east-2.amazonaws.com

and

rclone lsd maripen_apex_ut:cmc-msi-accesspoint-2-254319122668/CMC/

Ah I apologize! I had commented on the existing GitHub issue first before I realized y’all preferred people post here before heading to the GitHub issues, so I changed course.

The name of the bucket actually doesn’t appear anywhere in the URL for the access point. It’s a bit of an odd access point because it is set up to work for a sub-folder of an existing bucket rather than the entire bucket. cmc-msi-accesspoint-2 is actually just the name of the access point.

So all of the other rclone commands (besides list) work as expected using the endpoint:

endpoint = https://cmc-msi-accesspoint-2-254319122668.s3-accesspoint.us-east-2.amazonaws.com

and commands such as

rclone copy path/to/copy maripen_apex_ut:CMC/path/to/destination

where CMC is the sub-folder of the bucket that the access point is made for.

Can you try what I suggested anyway? Rclone will be adding what it thinks is the bucket to the endpoint.

You could also try --s3-force-path-style=true

And that might help with your config as-is.
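(That flag corresponds to the force_path_style option in the remote's config, so the equivalent permanent setting would be a partial section like this, with the other keys left as they are:)

```
[maripen_apex_ut]
force_path_style = true
```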

Okay, so listing the bucket works with what you suggested, but then writing fails.

So listing works with this endpoint and this command, but writing fails with access denied:

rclone lsd maripen_apex_ut:cmc-msi-accesspoint-2-254319122668/CMC/
endpoint = https://s3-accesspoint.us-east-2.amazonaws.com

And writing works with this endpoint and this command, but listing fails with "directory not found":

rclone copy dummy_file maripen_apex_ut:CMC/
endpoint = https://cmc-msi-accesspoint-2-254319122668.s3-accesspoint.us-east-2.amazonaws.com

But the user has been granted both list and write permissions for those folders.

Is there any way to get both the listing and writing to work with the same configuration? That seems like it might be a bit clunky for some of my less savvy users.

Side note: I also tried with force_path_style=true, but that didn’t seem to make a difference.

Can you try adding -vv --dump headers to the rclone commands to see what rclone is sending? Make a note of the Host: header and the path in the HTTP request, and hopefully we can work out what works and what doesn't.

Ah, I sorted it out. The write just needed the --s3-no-check-bucket flag.

So when the access point name is in the endpoint provided in the config, why is it that listing fails, but writing doesn’t? I’m not sure I understand what’s happening behind the scenes for listing that makes it different from writing in this case.
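For reference, I think the combined setup that works for both listing and writing amounts to this (untested as a single whole, credentials redacted as before):

```
[maripen_apex_ut]
type = s3
provider = AWS
access_key_id = XXX
secret_access_key = XXX
region = us-east-2
endpoint = https://s3-accesspoint.us-east-2.amazonaws.com
no_check_bucket = true
use_arn_region = true
```

with the access point alias in the path of each command:

```
rclone lsd maripen_apex_ut:cmc-msi-accesspoint-2-254319122668/CMC/
rclone copy dummy_file maripen_apex_ut:cmc-msi-accesspoint-2-254319122668/CMC/
```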

Great, well done

The AWS SDK turns endpoint + bucket + path into a URL. Depending on the force_path_style setting, there are two different ways of doing this: the bucket ends up either in the host name or in the URL path.

So if the config has the bucket on the endpoint, this can work, since that is one of the allowed URL forms; however, rclone may then also treat the first directory in your path as the bucket.

The reason it works for some operations and not others is that the URL form used differs depending on what you are doing.

Very confusing! The rule of thumb for rclone is: never put the bucket in the endpoint.
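A rough sketch of the two URL forms (not rclone's actual code, just an illustration of how an endpoint, bucket, and key can combine; the file name is made up):

```python
from urllib.parse import urlparse

def s3_url(endpoint: str, bucket: str, key: str, force_path_style: bool) -> str:
    """Build the request URL in the two styles the SDK can choose between."""
    host = urlparse(endpoint).netloc
    if force_path_style:
        # Path-style: the bucket goes into the URL path.
        return f"https://{host}/{bucket}/{key}"
    # Virtual-hosted style: the bucket becomes part of the hostname.
    return f"https://{bucket}.{host}/{key}"

endpoint = "https://s3-accesspoint.us-east-2.amazonaws.com"
alias = "cmc-msi-accesspoint-2-254319122668"  # access point alias acting as the "bucket"

print(s3_url(endpoint, alias, "CMC/file.txt", force_path_style=True))
# https://s3-accesspoint.us-east-2.amazonaws.com/cmc-msi-accesspoint-2-254319122668/CMC/file.txt
print(s3_url(endpoint, alias, "CMC/file.txt", force_path_style=False))
# https://cmc-msi-accesspoint-2-254319122668.s3-accesspoint.us-east-2.amazonaws.com/CMC/file.txt
```

Note that the second (virtual-hosted) form is exactly the host the original config baked into the endpoint, which is why some operations happened to work with the bucketed endpoint.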

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.