Max_upload_parts in config file not currently working

What is the problem you are having with rclone?

I use iDrive Cloud, one of their two versions of S3 storage. Their limit for multipart uploads is 1,000 parts (not 10,000 like most other providers). A year or two ago I noticed I was having trouble uploading large files, and the issue was corrected by adding "max_upload_parts = 1000" to the config file. Recently, this line of the config file seems to have no effect: chunk sizes are just the default of 5MB regardless of the source file size, causing uploads of files larger than 5GB to fail. As a workaround I used "--s3-chunk-size=15MB" directly in my copy command to force a chunk size that brings the part count below 1,000, but the config line setting the maximum number of parts otherwise has no effect. I suspect a bug.
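For reference, a quick sketch of the arithmetic behind the failure (assuming rclone's default 5 MiB chunk size and iDrive's 1,000-part cap; the rounding rule here is my assumption for illustration, not rclone's exact code):

```python
MiB = 1024 * 1024
GiB = 1024 * MiB

default_chunk = 5 * MiB  # rclone's default S3 chunk size
max_parts = 1000         # iDrive Cloud's multipart limit

# Largest file that fits in 1,000 parts at the default chunk size:
max_file = default_chunk * max_parts
print(max_file / GiB)    # ~4.88 GiB, so uploads over ~5 GB run out of parts

# Minimum chunk size (rounded up to a whole MiB) for a 15 GiB file:
file_size = 15 * GiB
per_part = -(-file_size // max_parts)  # ceiling division: bytes per part
chunk = -(-per_part // MiB) * MiB      # round up to a MiB boundary
print(chunk // MiB)                    # 16 (MiB)
```

This is why the 15MB workaround flag gets large files through: it keeps the part count under 1,000 where the 5MB default cannot.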

Run the command 'rclone version' and share the full output of the command.

rclone v1.62.2

  • os/version: Microsoft Windows 10 Home 21H2 (64 bit)
  • os/kernel: 10.0.19044.2728 Build 19044.2728 (x86_64)
  • os/type: windows
  • os/arch: amd64
  • go/version: go1.20.2
  • go/linking: static
  • go/tags: cmount

Which cloud storage system are you using? (eg Google Drive)

iDrive Cloud (s3)

The command you were trying to run (eg rclone copy /tmp remote:tmp)

.\rclone.exe copy "C:\Users\Wanderer\Downloads\Movie" "wanderer:wandereridrive/Media/Movies/" -v -P

The rclone config contents with secrets removed.

type = s3
provider = Other
access_key_id = 
secret_access_key = 
region = us-east-1
endpoint =
location_constraint = us-east-1
max_upload_parts = 1000

A log from the command with the -vv flag

Paste log here

Can you point to some docs for that?

We can make rclone auto adapt to that using the provider field.

The flag works for me - here I am uploading a 10GB file - note the chunk size has been raised to 10MiB

$ rclone-v1.62.2 -vv --s3-max-upload-parts 1000 --s3-disable-checksum copy /tmp/10GB idrivee2:rclone
2023/03/21 13:04:35 DEBUG : rclone: Version "v1.62.2" starting with parameters ["rclone-v1.62.2" "-vv" "--s3-max-upload-parts" "1000" "--s3-disable-checksum" "copy" "/tmp/10GB" "idrivee2:rclone"]
2023/03/21 13:04:35 DEBUG : Creating backend with remote "/tmp/10GB"
2023/03/21 13:04:35 DEBUG : Using config file from "/home/ncw/.rclone.conf"
2023/03/21 13:04:35 DEBUG : fs cache: adding new entry for parent of "/tmp/10GB", "/tmp"
2023/03/21 13:04:35 DEBUG : Creating backend with remote "idrivee2:rclone"
2023/03/21 13:04:35 DEBUG : idrivee2: detected overridden config - adding "{lk_9r}" suffix to name
2023/03/21 13:04:35 DEBUG : Resolving service "s3" region "us-east-1"
2023/03/21 13:04:35 DEBUG : fs cache: renaming cache item "idrivee2:rclone" to be canonical "idrivee2{lk_9r}:rclone"
2023/03/21 13:04:35 DEBUG : 10GB: Need to transfer - File not found at Destination
2023/03/21 13:04:35 DEBUG : 10GB: size: 9.313Gi, parts: 1000, default: 5Mi, new: 10Mi; default chunk size insufficient, returned new chunk size
2023/03/21 13:04:35 DEBUG : 10GB: multipart upload starting chunk 1 size 10Mi offset 0/9.313Gi
2023/03/21 13:04:35 DEBUG : 10GB: multipart upload starting chunk 2 size 10Mi offset 10Mi/9.313Gi
2023/03/21 13:04:35 DEBUG : 10GB: multipart upload starting chunk 3 size 10Mi offset 20Mi/9.313Gi
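The "new: 10Mi" in that debug line is consistent with dividing the file size by the part limit and rounding up to the next whole MiB; a quick check (the rounding rule is my assumption for illustration, not a quote of rclone's code):

```python
MiB = 1024 * 1024
size = 10_000_000_000  # the 10GB test file (9.313 GiB in the log)
max_parts = 1000       # --s3-max-upload-parts 1000

per_part = -(-size // max_parts)       # ceiling division: bytes per part
new_chunk = -(-per_part // MiB) * MiB  # round up to a whole MiB
print(new_chunk // MiB)                # 10, matching "new: 10Mi"
```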

The config file entry works for me also

type = s3
provider = IDrive
access_key_id = XXX
secret_access_key = XXX
endpoint =
max_upload_parts = 1000

Can you paste a log with -vv please?

Back in August 2021, any file I attempted to upload with more than 1,000 chunks got stuck in a forever loop, restarting the upload from the beginning, so I emailed iDrive while troubleshooting. I then changed the rclone config file as detailed above and had no issues with large files until recently. Now the behavior is the same as before I added the line: chunks are back to 5MB regardless of file size, unless forced with an explicit flag.

Which command would you like me to use the -vv with? This command:
.\rclone.exe copy "C:\Users\Wanderer\Downloads\Movie" "wanderer:wandereridrive/Media/Movies/" -v -P

That would be fine.

Well, I tested it and it worked perfectly. It split an 11.5GB file into 12MB chunks. I did upgrade to the latest rclone recently - maybe an older version had a bug? It's working as advertised. Sorry for the erroneous troubleshooting.

Glad to hear it is working. I think there was a bug in an older rclone, yes.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.