What is the problem you are having with rclone?
I use self-hosted MinIO and issue STS credentials with minimal permissions to end users for file uploads (with a 1-day purge lifetime). There is an `upload` bucket in MinIO, plus a directory for each user (e.g. `upload/user_1`, `upload/user_2`). Each user only has access to their own directory: `user_1` can fully CRUD `upload/user_1` and does not even know that `upload/user_2` exists beside it.
So I came up with the following policy in Python:
```python
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": ["arn:aws:s3:::upload"],
            "Condition": {
                "StringEquals": {
                    "s3:prefix": username,
                }
            },
        },
        {
            "Effect": "Allow",
            "Action": ["s3:*"],
            "Resource": [
                f"arn:aws:s3:::upload/{username}",
                f"arn:aws:s3:::upload/{username}/*",
            ],
        },
    ],
}
```
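To spell out what I expect that `Condition` to do, here is a toy model of the `StringEquals` check (my own sketch for illustration, not MinIO's actual policy evaluator):

```python
def list_prefix_allowed(requested_prefix: str, username: str) -> bool:
    """Toy model of the StringEquals condition above: the s3:prefix
    sent with the ListObjectsV2 request must equal the username exactly."""
    return requested_prefix == username

# Listing with the exact username as the prefix passes the condition...
assert list_prefix_allowed("user_1", "user_1")
# ...but any other value is denied -- even the same name with a
# trailing slash, and of course another user's prefix.
assert not list_prefix_allowed("user_1/", "user_1")
assert not list_prefix_allowed("user_2", "user_1")
```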
And I can successfully list or put objects inside `upload/{username}` using the following code:
```python
from minio import Minio
from minio.credentials import AssumeRoleProvider
import json

# ACCESS_KEY and SECRET_KEY are acquired from the MinIO admin account
credentials_provider = AssumeRoleProvider(
    ENDPOINT,
    access_key=ACCESS_KEY,
    secret_key=SECRET_KEY,
    duration_seconds=3600,
    policy=json.dumps(policy),
)
credentials = credentials_provider.retrieve()

restricted_mc = Minio(
    endpoint=ENDPOINT,
    access_key=credentials.access_key,
    secret_key=credentials.secret_key,
    session_token=credentials.session_token,
    region="us-east-1",
    secure=True,
)

# directory listing
for p in restricted_mc.list_objects("upload", prefix=username, recursive=True):
    print(p.object_name)

# object retrieval
for p in restricted_mc.list_objects("upload", prefix=username, recursive=True):
    resp = restricted_mc.get_object("upload", p.object_name)
    print(len(resp.data))
```
All of the above was tested with the Python SDK, but when I tried to do similar things with rclone, problems appeared.
Rclone config:

```
[minio]
type = s3
provider = Minio
region = us-east-1
endpoint = ENDPOINT
access_key_id = STS_FROM_ABOVE
secret_access_key = STS_FROM_ABOVE
session_token = STS_FROM_ABOVE
```
```
> rclone ls minio:upload/ -vv
2025/06/04 23:37:29 DEBUG : rclone: Version "v1.69.3" starting with parameters ["rclone" "ls" "minio:upload/" "-vv"]
2025/06/04 23:37:29 DEBUG : Creating backend with remote "minio:upload/"
2025/06/04 23:37:29 DEBUG : Using config file from "C:\\Users\\xingl\\AppData\\Roaming\\rclone\\rclone.conf"
2025/06/04 23:37:29 DEBUG : fs cache: renaming cache item "minio:upload/" to be canonical "minio:upload"
2025/06/04 23:37:29 DEBUG : 3 go routines active
2025/06/04 23:37:29 NOTICE: Failed to ls: operation error S3: ListObjectsV2, https response error StatusCode: 403, RequestID: 1845E105594A2AFD, HostID: dd9025bab4ad464b049177c95eb6ebf374d3b3fd1af9251148b658df7ac2e3e8, api error AccessDenied: Access Denied.
```
```
> rclone copy .\test_sync\test2.csv minio:upload/upload-4d6f2706-3836-4567-8893-0c3a484785bd/ -vv
2025/06/04 23:38:15 DEBUG : rclone: Version "v1.69.3" starting with parameters ["rclone" "copy" ".\\test_sync\\test2.csv" "minio:upload/upload-4d6f2706-3836-4567-8893-0c3a484785bd/" "-vv"]
2025/06/04 23:38:15 DEBUG : Creating backend with remote ".\\test_sync\\test2.csv"
2025/06/04 23:38:15 DEBUG : Using config file from "C:\\Users\\xingl\\AppData\\Roaming\\rclone\\rclone.conf"
2025/06/04 23:38:15 DEBUG : fs cache: renaming child cache item ".\\test_sync\\test2.csv" to be canonical for parent "//?/D:/rclone/test_sync"
2025/06/04 23:38:15 DEBUG : Creating backend with remote "minio:upload/upload-4d6f2706-3836-4567-8893-0c3a484785bd/"
2025/06/04 23:38:15 DEBUG : fs cache: renaming cache item "minio:upload/upload-4d6f2706-3836-4567-8893-0c3a484785bd/" to be canonical "minio:upload/upload-4d6f2706-3836-4567-8893-0c3a484785bd"
2025/06/04 23:38:15 DEBUG : test2.csv: Modification times differ by 49m51s: 2025-06-04 22:12:32 +0800 CST, 2025-06-04 15:02:23 +0000 UTC
2025/06/04 23:38:15 DEBUG : test2.csv: md5 = af424f4f867ca19c518b7720e2a3412b OK
2025/06/04 23:38:15 INFO : test2.csv: Updated modification time in destination
2025/06/04 23:38:15 DEBUG : test2.csv: Unchanged skipping
2025/06/04 23:38:15 INFO :
Transferred: 0 B / 0 B, -, 0 B/s, ETA -
Elapsed time: 0.0s
2025/06/04 23:38:15 DEBUG : 3 go routines active
```
```
> rclone sync .\test_sync minio:upload/upload-4d6f2706-3836-4567-8893-0c3a484785bd/ -vv
2025/06/04 23:38:38 DEBUG : rclone: Version "v1.69.3" starting with parameters ["rclone" "sync" ".\\test_sync" "minio:upload/upload-4d6f2706-3836-4567-8893-0c3a484785bd/" "-vv"]
2025/06/04 23:38:38 DEBUG : Creating backend with remote ".\\test_sync"
2025/06/04 23:38:38 DEBUG : Using config file from "C:\\Users\\xingl\\AppData\\Roaming\\rclone\\rclone.conf"
2025/06/04 23:38:38 DEBUG : fs cache: renaming cache item ".\\test_sync" to be canonical "//?/D:/rclone/test_sync"
2025/06/04 23:38:38 DEBUG : Creating backend with remote "minio:upload/upload-4d6f2706-3836-4567-8893-0c3a484785bd/"
2025/06/04 23:38:38 DEBUG : fs cache: renaming cache item "minio:upload/upload-4d6f2706-3836-4567-8893-0c3a484785bd/" to be canonical "minio:upload/upload-4d6f2706-3836-4567-8893-0c3a484785bd"
2025/06/04 23:38:39 ERROR : S3 bucket upload path upload-4d6f2706-3836-4567-8893-0c3a484785bd: error reading destination root directory: operation error S3: ListObjectsV2, https response error StatusCode: 403, RequestID: 1845E115720C8CAB, HostID: dd9025bab4ad464b049177c95eb6ebf374d3b3fd1af9251148b658df7ac2e3e8, api error AccessDenied: Access Denied.
2025/06/04 23:38:39 DEBUG : S3 bucket upload path upload-4d6f2706-3836-4567-8893-0c3a484785bd: Waiting for checks to finish
2025/06/04 23:38:39 DEBUG : S3 bucket upload path upload-4d6f2706-3836-4567-8893-0c3a484785bd: Waiting for transfers to finish
2025/06/04 23:38:39 ERROR : S3 bucket upload path upload-4d6f2706-3836-4567-8893-0c3a484785bd: not deleting files as there were IO errors
2025/06/04 23:38:39 ERROR : S3 bucket upload path upload-4d6f2706-3836-4567-8893-0c3a484785bd: not deleting directories as there were IO errors
2025/06/04 23:38:39 ERROR : Attempt 1/3 failed with 1 errors and: operation error S3: ListObjectsV2, https response error StatusCode: 403, RequestID: 1845E115720C8CAB, HostID: dd9025bab4ad464b049177c95eb6ebf374d3b3fd1af9251148b658df7ac2e3e8, api error AccessDenied: Access Denied.
2025/06/04 23:38:39 ERROR : S3 bucket upload path upload-4d6f2706-3836-4567-8893-0c3a484785bd: error reading destination root directory: operation error S3: ListObjectsV2, https response error StatusCode: 403, RequestID: 1845E1157481B9E3, HostID: dd9025bab4ad464b049177c95eb6ebf374d3b3fd1af9251148b658df7ac2e3e8, api error AccessDenied: Access Denied.
2025/06/04 23:38:39 DEBUG : S3 bucket upload path upload-4d6f2706-3836-4567-8893-0c3a484785bd: Waiting for checks to finish
2025/06/04 23:38:39 DEBUG : S3 bucket upload path upload-4d6f2706-3836-4567-8893-0c3a484785bd: Waiting for transfers to finish
2025/06/04 23:38:39 ERROR : S3 bucket upload path upload-4d6f2706-3836-4567-8893-0c3a484785bd: not deleting files as there were IO errors
2025/06/04 23:38:39 ERROR : S3 bucket upload path upload-4d6f2706-3836-4567-8893-0c3a484785bd: not deleting directories as there were IO errors
2025/06/04 23:38:39 ERROR : Attempt 2/3 failed with 1 errors and: operation error S3: ListObjectsV2, https response error StatusCode: 403, RequestID: 1845E1157481B9E3, HostID: dd9025bab4ad464b049177c95eb6ebf374d3b3fd1af9251148b658df7ac2e3e8, api error AccessDenied: Access Denied.
2025/06/04 23:38:39 ERROR : S3 bucket upload path upload-4d6f2706-3836-4567-8893-0c3a484785bd: error reading destination root directory: operation error S3: ListObjectsV2, https response error StatusCode: 403, RequestID: 1845E11576EB2957, HostID: dd9025bab4ad464b049177c95eb6ebf374d3b3fd1af9251148b658df7ac2e3e8, api error AccessDenied: Access Denied.
2025/06/04 23:38:39 DEBUG : S3 bucket upload path upload-4d6f2706-3836-4567-8893-0c3a484785bd: Waiting for checks to finish
2025/06/04 23:38:39 DEBUG : S3 bucket upload path upload-4d6f2706-3836-4567-8893-0c3a484785bd: Waiting for transfers to finish
2025/06/04 23:38:39 ERROR : S3 bucket upload path upload-4d6f2706-3836-4567-8893-0c3a484785bd: not deleting files as there were IO errors
2025/06/04 23:38:39 ERROR : S3 bucket upload path upload-4d6f2706-3836-4567-8893-0c3a484785bd: not deleting directories as there were IO errors
2025/06/04 23:38:39 ERROR : Attempt 3/3 failed with 1 errors and: operation error S3: ListObjectsV2, https response error StatusCode: 403, RequestID: 1845E11576EB2957, HostID: dd9025bab4ad464b049177c95eb6ebf374d3b3fd1af9251148b658df7ac2e3e8, api error AccessDenied: Access Denied.
2025/06/04 23:38:39 INFO :
Transferred: 0 B / 0 B, -, 0 B/s, ETA -
Errors: 1 (retrying may help)
Elapsed time: 0.2s
2025/06/04 23:38:39 DEBUG : 3 go routines active
2025/06/04 23:38:39 NOTICE: Failed to sync: operation error S3: ListObjectsV2, https response error StatusCode: 403, RequestID: 1845E11576EB2957, HostID: dd9025bab4ad464b049177c95eb6ebf374d3b3fd1af9251148b658df7ac2e3e8, api error AccessDenied: Access Denied.
```
After inspecting the log, I found that if I grant the `s3:ListBucket` permission without the `Condition` limitation, rclone can proceed. But that leaks the full structure of the bucket, so `user_1` can also list `upload/user_2`:
```python
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": ["arn:aws:s3:::upload"],
            # "Condition": {
            #     "StringEquals": {
            #         "s3:prefix": username,
            #     }
            # },
        },
        {
            "Effect": "Allow",
            "Action": ["s3:*"],
            "Resource": [
                f"arn:aws:s3:::upload/{username}",
                f"arn:aws:s3:::upload/{username}/*",
            ],
        },
    ],
}
```
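If the difference is that rclone lists with a slash-terminated prefix (e.g. `user_1/`) while the SDK test used the bare `user_1`, then a `StringLike` condition covering both forms might keep the isolation without opening the whole bucket. Sketched below; this is untested against MinIO, and it assumes MinIO evaluates `StringLike` on `s3:prefix` the same way AWS S3 does:

```python
def make_list_statement(username: str) -> dict:
    """Hypothetical ListBucket statement: StringLike instead of
    StringEquals, matching both "user_1" and "user_1/<anything>".
    Untested; resource names mirror the policy above."""
    return {
        "Effect": "Allow",
        "Action": ["s3:ListBucket"],
        "Resource": ["arn:aws:s3:::upload"],
        "Condition": {
            "StringLike": {
                "s3:prefix": [username, f"{username}/*"],
            }
        },
    }

stmt = make_list_statement("user_1")
# The condition now covers both prefix forms, still for user_1 only.
assert stmt["Condition"]["StringLike"]["s3:prefix"] == ["user_1", "user_1/*"]
```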
Run the command 'rclone version' and share the full output of the command.
```
rclone v1.69.3
- os/version: Microsoft Windows 10 Pro for Workstations 22H2 22H2 (64 bit)
- os/kernel: 10.0.19045.5737 (x86_64)
- os/type: windows
- os/arch: amd64
- go/version: go1.24.3
- go/linking: static
- go/tags: cmount
```
Which cloud storage system are you using? (eg Google Drive)
MinIO (self-hosted)
The command you were trying to run (eg `rclone copy /tmp remote:tmp`)
See above
The rclone config contents with secrets removed.
See above
A log from the command with the `-vv` flag
See above