"move" (aws s3) fails on 1.70.3, works on 1.67

What is the problem you are having with rclone?

rclone v1.70.3 fails to move files from one of my buckets to the other. An older installation of v1.67 (even with an identical config file for the AWS account) works perfectly.

Run the command 'rclone version' and share the full output of the command.

Non-working new version:

$ rclone version
rclone v1.70.3
- os/version: clear-linux-os 41970 (64 bit)
- os/kernel: 6.9.7-1445.native (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.24.4
- go/linking: static
- go/tags: none

Working old version:

$ rclone version
rclone v1.67.0
- os/version: clear-linux-os 43630 (64 bit)
- os/kernel: 6.12.33-1498.ltscurrent (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.22.4
- go/linking: static
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

AWS S3

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone -vvv move s3:nexus-upload-cache/VVTL1 s3:nexus-datasets/backup/VVTL1

(identical command works on v1.67!)

A log from the command with the -vv flag

2025/08/04 20:24:29 ERROR : Attempt 3/3 failed with 67 errors and: operation error S3: CreateMultipartUpload, https response error StatusCode: 400, RequestID: XGCQZF4F7RJFXC29, HostID: /eyjaLBBT17c3N46LzJE7EvBFig+kwbaXChCNeBHB0gtzaUU3uBSO4+N/w/sZBW4Tvf+cRpPFisLQO/oeffUWUh0tqhIf1ywMPBCRZhWs48=, api error AccessControlListNotSupported: The bucket does not allow ACLs
2025/08/04 20:24:29 NOTICE: Failed to move with 67 errors: last error was: operation error S3: CreateMultipartUpload, https response error StatusCode: 400, RequestID: XGCQZF4F7RJFXC29, HostID: /eyjaLBBT17c3N46LzJE7EvBFig+kwbaXChCNeBHB0gtzaUU3uBSO4+N/w/sZBW4Tvf+cRpPFisLQO/oeffUWUh0tqhIf1ywMPBCRZhWs48=, api error AccessControlListNotSupported: The bucket does not allow ACLs

I don't care at all about ACLs. They are disabled on the destination bucket for a reason (even though ACLs are enabled on the upload bucket, where fine-grained access is required).
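
For what it's worth, "ACLs disabled" here means the destination bucket's Object Ownership is set to "Bucket owner enforced". A rough way to confirm that (assuming the AWS CLI is configured for the same account, so purely an illustration) is:

$ aws s3api get-bucket-ownership-controls --bucket nexus-datasets
{
    "OwnershipControls": {
        "Rules": [
            { "ObjectOwnership": "BucketOwnerEnforced" }
        ]
    }
}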

v1.67 does this:


2025/08/05 00:32:25 INFO  :
Transferred:      136.976 GiB / 1.105 TiB, 12%, 0 B/s, ETA -
Checks:                10 / 10, 100%
Deleted:               10 (files), 0 (dirs), 136.976 GiB (freed)
Renamed:               10
Transferred:           10 / 67, 15%
Server Side Copies:    10 @ 136.976 GiB
Elapsed time:       7m0.2s
Transferring:
 *   Dat_007.fastq.gz:  0% /18.642Gi, 0/s, -
 *   Dat_009.fastq.gz:  0% /17.626Gi, 0/s, -
 *   Dat_011.fastq.gz:  0% /18.311Gi, 0/s, -
 *   Dat_012.fastq.gz:  0% /14.718Gi, 0/s, -

2025/08/05 00:33:27 DEBUG : Dat_022.fastq.gz: Dst hash empty - aborting Src hash check
2025/08/05 00:33:27 INFO  : Dat_022.fastq.gz: Copied (server-side copy)
2025/08/05 00:33:27 INFO  : Dat_022.fastq.gz: Deleted
2025/08/05 00:33:27 DEBUG : Dat_024.fastq.gz: Starting  multipart copy with 4 parts
2025/08/05 00:33:32 DEBUG : Dat_025.fastq.gz: Dst hash empty - aborting Src hash check
2025/08/05 00:33:32 INFO  : Dat_025.fastq.gz: Copied (server-side copy)
2025/08/05 00:33:32 INFO  : Dat_025.fastq.gz: Deleted
2025/08/05 00:33:32 DEBUG : Dat_026.fastq.gz: Starting  multipart copy with 4 parts

So the old version works well. The new version, v1.70.3, is correct that ACLs are not enabled on the destination bucket, but it should not require them, nor fail when they are absent.
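
To illustrate the point, a CreateMultipartUpload that simply omits the ACL parameter is accepted by a bucket with ACLs disabled; the 400 shows up when the request carries an unexpected X-Amz-Acl header. A rough check with the AWS CLI (the key name below is made up for the test):

$ aws s3api create-multipart-upload --bucket nexus-datasets --key backup/VVTL1/acl-test.bin
# note the UploadId in the response, then abort the upload so nothing is left behind:
$ aws s3api abort-multipart-upload --bucket nexus-datasets --key backup/VVTL1/acl-test.bin --upload-id <UploadId>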

hi,
not sure if it will help, but what about setting the acl to an empty string? the s3 docs say:
If the acl is an empty string then no X-Amz-Acl: header is added and the default (private) will be used.
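
i.e. something like this in the remote's section of rclone.conf, or as a flag on the command line. the remote name matches your command above, the rest is just a sketch, and whether an empty value is honoured here is exactly what --dump=headers would confirm:

[s3]
type = s3
provider = AWS
acl =

or:

$ rclone move s3:nexus-upload-cache/VVTL1 s3:nexus-datasets/backup/VVTL1 --s3-acl "" -vv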


for a look at the api calls, use --dump=headers
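
for example, something along these lines shows whether the CreateMultipartUpload request is being sent with an X-Amz-Acl header (file name taken from your log, purely as an illustration):

$ rclone copy s3:nexus-upload-cache/VVTL1/Dat_007.fastq.gz s3:nexus-datasets/backup/VVTL1 \
      --dump=headers --retries=1 -vv 2>&1 | grep -i -B 5 'x-amz-acl'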

It doesn’t fix the problem. If it just dumps info, what part of it do you need?

This is actually a pretty big problem, as there is never a guarantee, or even an expectation, that both buckets will or won't have ACLs enabled when files are moved.

ACLs should not affect a move. Old versions of rclone were perfectly fine.

correct.


the changelog has this entry:
S3 backend updated to use AWS SDKv2 as v1 is now unsupported.
that might be the issue.


now, the next steps are:

  1. pick just one single file.
  2. for both rclone versions, run rclone copy remote:file --dump=headers --retries=1 and compare the headers (see the sketch below).
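
a rough sketch of that comparison, using a file name from your log and made-up paths for the two rclone binaries; run the failing version first so the successful copy does not turn the second run into a no-op:

$ /path/to/rclone-v1.70.3 copy s3:nexus-upload-cache/VVTL1/Dat_007.fastq.gz s3:nexus-datasets/backup/VVTL1 --dump=headers --retries=1 -vv 2> headers-1.70.3.log
$ /path/to/rclone-v1.67.0 copy s3:nexus-upload-cache/VVTL1/Dat_007.fastq.gz s3:nexus-datasets/backup/VVTL1 --dump=headers --retries=1 -vv 2> headers-1.67.log
$ grep -i 'x-amz-acl' headers-1.70.3.log headers-1.67.log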

did you try that?