What is the problem you are having with rclone?
Rclone copy of large files (i.e. files big enough to trigger a multipart upload) to an S3 access point fails, while the same upload via the aws CLI succeeds.
Run the command 'rclone version' and share the full output of the command.
rclone v1.64.1
- os/version: rocky 8.10 (64 bit)
- os/kernel: 4.18.0-553.74.1.el8_10.x86_64 (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.21.3
- go/linking: static
- go/tags: none
Which cloud storage system are you using? (eg Google Drive)
AWS S3
The command you were trying to run (eg rclone copy /tmp remote:tmp)
rclone copy slice_170_tile_037_Cross.mat maripen_apex_ut:cmc-msi-accesspoint-2-254319122668/CMC/Derivatives/Moe/PS-OCT/3DTiles/Cross/ --s3-no-check-bucket --progress -vv
The equivalent aws CLI call, which does work:
aws s3 cp slice_170_tile_037_Cross.mat s3://arn:aws:s3:us-east-2:254319122668:accesspoint/cmc-msi-accesspoint-2/CMC/Derivatives/Moe/PS-OCT/3DTiles/Cross/ --profile myeatts --region us-east-2
And the equivalent rclone call with a smaller file (no multipart upload), which also works:
rclone copy dummy_file.txt maripen_apex_ut:cmc-msi-accesspoint-2-254319122668/CMC/Derivatives/Moe/PS-OCT/3DTiles/Cross/ --s3-no-check-bucket --progress -vv
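Since only the multipart path fails, one diagnostic I could try (untested sketch, same remote and flags as above) is to force a single-part upload by raising the multipart cutoff and disabling the multi-thread copier:

```shell
# If the file is under the 5 GiB single-part limit, this should make
# rclone issue one PutObject instead of a multipart upload.
# --multi-thread-streams 0 disables the multi-thread chunk writer.
rclone copy slice_170_tile_037_Cross.mat \
  maripen_apex_ut:cmc-msi-accesspoint-2-254319122668/CMC/Derivatives/Moe/PS-OCT/3DTiles/Cross/ \
  --s3-upload-cutoff 5G --multi-thread-streams 0 \
  --s3-no-check-bucket --progress -vv
```

If that succeeds, it would confirm the failure is specific to how the multipart/chunked path addresses the access point.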
Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.
[maripen_apex_ut]
type = s3
provider = AWS
access_key_id = XXX
secret_access_key = XXX
region = us-east-2
location_constraint = us-east-2
endpoint = https://s3-accesspoint.us-east-2.amazonaws.com
storage_class = STANDARD
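For completeness, an alternative I have not tried: AWS publishes an alias for each access point that can be used anywhere a bucket name is accepted, against the regular regional S3 endpoint, which would sidestep the s3-accesspoint endpoint entirely. Roughly (the `<access-point-alias>` placeholder is hypothetical; the real alias, of the form `<name>-<random>-s3alias`, is shown in the S3 console):

```
[maripen_apex_ut_alias]
type = s3
provider = AWS
access_key_id = XXX
secret_access_key = XXX
region = us-east-2
storage_class = STANDARD
```

and then `rclone copy file maripen_apex_ut_alias:<access-point-alias>/CMC/...` with no `endpoint` override.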
A log from the command that you were trying to run with the -vv flag
rclone copy slice_170_tile_037_Cross.mat maripen_apex_ut:cmc-msi-accesspoint-2-254319122668/CMC/Derivatives/Moe/PS-OCT/3DTiles/Cross/ --s3-no-check-bucket --progress -vv
2025/10/30 12:09:13 DEBUG : rclone: Version "v1.64.1" starting with parameters ["rclone" "copy" "slice_170_tile_037_Cross.mat" "maripen_apex_ut:cmc-msi-accesspoint-2-254319122668/CMC/Derivatives/Moe/PS-OCT/3DTiles/Cross/" "--s3-no-check-bucket" "--progress" "-vv"]
2025/10/30 12:09:13 DEBUG : Creating backend with remote "slice_170_tile_037_Cross.mat"
2025/10/30 12:09:13 DEBUG : Using config file from "/users/5/huxfo013/.config/rclone/rclone.conf"
2025/10/30 12:09:13 DEBUG : fs cache: adding new entry for parent of "slice_170_tile_037_Cross.mat", "/users/5/huxfo013/cmc/ps-oct/git/midb_cmc_ps-oct/slurm"
2025/10/30 12:09:13 DEBUG : Creating backend with remote "maripen_apex_ut:cmc-msi-accesspoint-2-254319122668/CMC/Derivatives/Moe/PS-OCT/3DTiles/Cross/"
2025/10/30 12:09:13 DEBUG : maripen_apex_ut: detected overridden config - adding "{Dn7qA}" suffix to name
2025/10/30 12:09:13 DEBUG : Resolving service "s3" region "us-east-2"
2025/10/30 12:09:13 DEBUG : fs cache: renaming cache item "maripen_apex_ut:cmc-msi-accesspoint-2-254319122668/CMC/Derivatives/Moe/PS-OCT/3DTiles/Cross/" to be canonical "maripen_apex_ut{Dn7qA}:cmc-msi-accesspoint-2-254319122668/CMC/Derivatives/Moe/PS-OCT/3DTiles/Cross"
2025/10/30 12:09:13 DEBUG : slice_170_tile_037_Cross.mat: Need to transfer - File not found at Destination
2025/10/30 12:09:17 DEBUG : slice_170_tile_037_Cross.mat: open chunk writer: started multipart upload: yAN51qifwglPzghTWyix8NVnPJuiUcdv.XrdwI3wouq6dwD1qZLo1lMZpC0VtrOKwXvbGSHaIWZtO1_MGIbevTB_Vk9ydaQTCS9GmizhX9Nnx7p8HBA6_ADj0CsNqXrV
2025/10/30 12:09:17 DEBUG : slice_170_tile_037_Cross.mat: multi-thread copy: using backend concurrency of 4 instead of --multi-thread-streams 4
2025/10/30 12:09:17 DEBUG : slice_170_tile_037_Cross.mat: Starting multi-thread copy with 300 chunks of size 5Mi with 4 parallel streams
2025/10/30 12:09:17 DEBUG : slice_170_tile_037_Cross.mat: multi-thread copy: chunk 4/300 (15728640-20971520) size 5Mi starting
2025/10/30 12:09:17 DEBUG : slice_170_tile_037_Cross.mat: multi-thread copy: chunk 1/300 (0-5242880) size 5Mi starting
2025/10/30 12:09:17 DEBUG : slice_170_tile_037_Cross.mat: multi-thread copy: chunk 2/300 (5242880-10485760) size 5Mi starting
2025/10/30 12:09:17 DEBUG : slice_170_tile_037_Cross.mat: multi-thread copy: chunk 3/300 (10485760-15728640) size 5Mi starting
2025/10/30 12:09:18 DEBUG : slice_170_tile_037_Cross.mat: multi-thread copy: chunk 3/300 failed: multi-thread copy: failed to write chunk: failed to upload chunk 3 with 5242880 bytes: InvalidAccessPoint: The specified accesspoint name or account is not valid
status code: 400, request id: 87410CC2HF55FZ9A, host id: vt/nRv2YIgO+NMr+FEc+7bDo9QHQGp3aX+xHJVv9ZJ7LZ1VhyjMALWw0M0I8LR9sioKti93fLlIlvhTmlQhJJJzeb5SKIYNQMe9zlhsJ4wE=
2025/10/30 12:09:18 DEBUG : slice_170_tile_037_Cross.mat: multi-thread copy: chunk 5/300 (20971520-26214400) size 5Mi starting
2025/10/30 12:09:18 DEBUG : slice_170_tile_037_Cross.mat: multi-thread copy: chunk 4/300 failed: multi-thread copy: failed to write chunk: failed to upload chunk 4 with 5242880 bytes: RequestCanceled: request context canceled
caused by: context canceled
2025/10/30 12:09:18 DEBUG : slice_170_tile_037_Cross.mat: multi-thread copy: chunk 1/300 failed: multi-thread copy: failed to write chunk: failed to upload chunk 1 with 5242880 bytes: RequestCanceled: request context canceled
caused by: context canceled
2025/10/30 12:09:18 DEBUG : slice_170_tile_037_Cross.mat: multi-thread copy: chunk 2/300 failed: multi-thread copy: failed to write chunk: failed to upload chunk 2 with 5242880 bytes: RequestCanceled: request context canceled
caused by: context canceled
2025/10/30 12:09:18 DEBUG : slice_170_tile_037_Cross.mat: multi-thread copy: chunk 5/300 failed: multi-thread copy: failed to write chunk: failed to upload chunk 5 with 5242880 bytes: RequestCanceled: request context canceled
caused by: context canceled
2025/10/30 12:09:18 DEBUG : slice_170_tile_037_Cross.mat: multi-thread copy: cancelling transfer on exit
2025/10/30 12:09:18 DEBUG : slice_170_tile_037_Cross.mat: multi-thread copy: abort failed: failed to abort multipart upload "yAN51qifwglPzghTWyix8NVnPJuiUcdv.XrdwI3wouq6dwD1qZLo1lMZpC0VtrOKwXvbGSHaIWZtO1_MGIbevTB_Vk9ydaQTCS9GmizhX9Nnx7p8HBA6_ADj0CsNqXrV": InvalidAccessPoint: The specified accesspoint name or account is not valid
status code: 400, request id: 874CXWE30PZEA739, host id: VLyecRnAYJ59Zv6oJzZjHIcyIMK/DvsCHdJTOiNrpExQOf3JHg9+IrrqPXbCa60P/50hpH3kQTbK5xu/K96piuYekJge7aDRzxBz+4Xs1HU=
2025/10/30 12:09:18 ERROR : slice_170_tile_037_Cross.mat: Failed to copy: multi-thread copy: failed to write chunk: failed to upload chunk 3 with 5242880 bytes: InvalidAccessPoint: The specified accesspoint name or account is not valid
status code: 400, request id: 87410CC2HF55FZ9A, host id: vt/nRv2YIgO+NMr+FEc+7bDo9QHQGp3aX+xHJVv9ZJ7LZ1VhyjMALWw0M0I8LR9sioKti93fLlIlvhTmlQhJJJzeb5SKIYNQMe9zlhsJ4wE=
2025/10/30 12:09:18 ERROR : Attempt 1/3 failed with 1 errors and: multi-thread copy: failed to write chunk: failed to upload chunk 3 with 5242880 bytes: InvalidAccessPoint: The specified accesspoint name or account is not valid
status code: 400, request id: 87410CC2HF55FZ9A, host id: vt/nRv2YIgO+NMr+FEc+7bDo9QHQGp3aX+xHJVv9ZJ7LZ1VhyjMALWw0M0I8LR9sioKti93fLlIlvhTmlQhJJJzeb5SKIYNQMe9zlhsJ4wE=
2025/10/30 12:09:18 DEBUG : slice_170_tile_037_Cross.mat: Need to transfer - File not found at Destination
Transferred: 0 B / 0 B, -, 0 B/s, ETA -
Transferred: 0 / 1, 0%
Elapsed time: 9.4s
Transferring:
* slice_170_tile_037_Cross.mat: transferring
^^ I truncated the two identical retries.
Also relevant might be the AWS policies on the bucket and access point.
IAM policy attached to the users for the bucket:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetObject",
"s3:GetObjectTagging",
"s3:PutObject",
"s3:PutObjectTagging",
"s3:DeleteObject",
"s3:AbortMultipartUpload",
"s3:ListMultipartUploadParts",
"s3:ListBucketMultipartUploads"
],
"Resource": "*"
}
]
}
Relevant bit of the AWS access point policy:
{
"Sid": "CMCListMultipartUsers",
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::376129864689:user/myeatts.psoct.cmc",
"arn:aws:iam::376129864689:user/wanqingyu.tracttracing.cmc",
"arn:aws:iam::376129864689:user/ddemeritte.all.cmc",
"arn:aws:iam::376129864689:user/sheilbronner.all.cmc",
"arn:aws:iam::376129864689:user/jzimmerman.mri.cmc",
"arn:aws:iam::376129864689:user/rhuxford.all.cmc",
"arn:aws:iam::376129864689:user/ssotiropoulos.mri.cmc",
"arn:aws:iam::376129864689:user/swarrington.mri.cmc"
]
},
"Action": [
"s3:ListBucketMultipartUploads"
],
"Resource": "arn:aws:s3:us-east-2:254319122668:accesspoint/cmc-msi-accesspoint-2"
},
{
"Sid": "PSOCTRWUsers",
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::376129864689:user/myeatts.psoct.cmc"
]
},
"Action": [
"s3:GetObject",
"s3:PutObject",
"s3:DeleteObject",
"s3:GetObjectTagging",
"s3:PutObjectTagging",
"s3:AbortMultipartUpload",
"s3:ListMultipartUploadParts"
],
"Resource": [
"arn:aws:s3:us-east-2:254319122668:accesspoint/cmc-msi-accesspoint-2/object/CMC/Raw/*/PS-OCT/*",
"arn:aws:s3:us-east-2:254319122668:accesspoint/cmc-msi-accesspoint-2/object/CMC/Derivatives/*/PS-OCT/*",
"arn:aws:s3:us-east-2:254319122668:accesspoint/cmc-msi-accesspoint-2/object/CMC/PS-OCT/*"
]
}
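To check whether the multipart API calls themselves are permitted through the access point, the multipart lifecycle could be exercised directly with the AWS CLI (untested sketch; the test key name is arbitrary):

```shell
AP_ARN='arn:aws:s3:us-east-2:254319122668:accesspoint/cmc-msi-accesspoint-2'
KEY='CMC/Derivatives/Moe/PS-OCT/3DTiles/Cross/mpu-test'

# Start a multipart upload through the access point...
UPLOAD_ID=$(aws s3api create-multipart-upload --bucket "$AP_ARN" \
  --key "$KEY" --profile myeatts --region us-east-2 \
  --query UploadId --output text)

# ...and abort it again. If both calls succeed, the policy covers the
# multipart path and the problem would be on the rclone side.
aws s3api abort-multipart-upload --bucket "$AP_ARN" --key "$KEY" \
  --upload-id "$UPLOAD_ID" --profile myeatts --region us-east-2
```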
I believe the permissions are set correctly, since the aws CLI command works; perhaps rclone is attempting an API call under the hood that the policy does not cover.
Any ideas?