Configuring rclone to Disable Multipart Uploads for S3 Storage

What is the problem you are having with rclone?

Hi y'all,

We are using rclone to copy data to cold S3 storage. The challenge is that some of the data fails to copy because it hits a multipart upload error.

Here's a brief overview of the situation:

We are working with Windows 2016 Server.
We have approximately 16TB of data spread across four disks that need to be copied to the S3 storage.
During the copy with rclone, most of the data transferred without a hassle, but a few hundred documents failed with the error:

Failed to copy: multi-thread copy: failed to open chunk writer: create multipart upload failed: AccessDenied: Namespace does not support multipart upload. status code: 403, request id: , host id:

We tried to enable multipart uploads on the S3 side, but that is not supported on our bucket storage. So the only option left is to have rclone copy the data without using multipart uploads.

We checked a few of the document files that failed, but they were only 300 to 500 MB in size. We did try a run with --s3-upload-cutoff=5G --max-size=5G, but we are not sure whether that is the right approach.

Any help with how we can stop the multipart upload?

Run the command 'rclone version' and share the full output of the command.

rclone v1.66.0

  • os/version: Microsoft Windows Server 2016 Standard 1607 (64 bit)
  • os/kernel: 10.0.14393.6800 (x86_64)
  • os/type: windows
  • os/arch: amd64
  • go/version: go1.22.1
  • go/linking: static
  • go/tags: cmount

Which cloud storage system are you using? (eg Google Drive)

S3 storage (HCP)

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy <source> <destination> --no-check-certificate --config "C:\HCP-Drive-Mount\hcp-ac.conf" --log-file "D:\HCP-rcloneLogs\mount.log" --log-level INFO --s3-upload-cutoff=5G --max-size=5G --progress --multi-thread-streams=16

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

[backup]
type = s3
provider = Other
access_key_id = XXX
secret_access_key = XXX
endpoint = xxx
acl = public-read

A log from the command that you were trying to run with the -vv flag

ERROR : Attempt 1/3 failed with 124 errors and: march failed with 2 error(s): first error: directory not found
ERROR : Archived/<source>: Failed to copy: multi-thread copy: failed to open chunk writer: create multipart upload failed: AccessDenied: Namespace does not support multipart upload. 	status code: 403, request id: , host id: 
ERROR : Attempt 3/3 failed with 96 errors and: multi-thread copy: failed to open chunk writer: create multipart upload failed: AccessDenied: Namespace does not support multipart upload.
	status code: 403, request id: , host id: 
INFO  : 
Transferred:   	   15.230 TiB / 15.230 TiB, 100%, 1.210 MiB/s, ETA 0s
Errors:                96 (retrying may help)
Checks:           5325318 / 5325318, 100%
Transferred:      2662659 / 2662659, 100%
Elapsed time:  3d13h23m29.2s

Failed to copy with 96 errors: last error was: multi-thread copy: failed to open chunk writer: create multipart upload failed: AccessDenied: Namespace does not support multipart upload.
	status code: 403, request id: , host id: 

You might try one or more of the following:

  • --multi-thread-cutoff=0
  • --multi-thread-streams=1

and please rerun with --log-level DEBUG.
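Combining those suggestions with the original command, a retry could look like this (a sketch only; the source/destination placeholders and paths are taken from the original post):

```shell
rem Disable multi-thread copies so rclone does not open a multipart chunk writer,
rem and capture a DEBUG-level log for diagnosis.
rclone copy <source> <destination> ^
  --no-check-certificate ^
  --config "C:\HCP-Drive-Mount\hcp-ac.conf" ^
  --log-file "D:\HCP-rcloneLogs\mount.log" ^
  --log-level DEBUG ^
  --multi-thread-cutoff=0 ^
  --multi-thread-streams=1 ^
  --progress
```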

Thanks a lot!!

Worked like a charm.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.