Configure the 10,000 chunks limit?

What is the problem you are having with rclone?

The 10,000 chunks limit mentioned in the --s3-chunk-size documentation does not hold when using Scaleway object storage, because Scaleway's maximum number of parts per multipart upload is 1,000.
Consequently, trying to upload a 45G file with --s3-chunk-size 32M fails, because it would require uploading 1,400+ chunks.
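
To spell out the arithmetic (assuming rclone's binary units, so GiB and MiB):

45 GiB / 32 MiB per part ≈ 1440 parts, which is above Scaleway's 1,000-part cap
45 GiB / 1000 parts ≈ 46 MiB, the smallest chunk size that would fit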
I am looking for a way to change this 10,000 limit to 1000 so rclone will actually

automatically increase the chunk size when uploading a large file of known size to stay below the <configured> chunks limit

but I can't find an option to do this. Did I miss something, or is it not possible (yet)?
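
(For now, my workaround, assuming I understand the chunk-size mechanics correctly, is to raise the chunk size by hand so even my largest file stays under 1,000 parts:

$ rclone sync /volume2/backups encrypted_scaleway:backups --s3-chunk-size 64M

or equivalently chunk_size = 64M in the [scaleway] section. 64M keeps the 45G file at about 720 parts, at the cost of more memory per transfer since chunks are buffered. But I'd still prefer rclone to pick the size automatically.)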

What is your rclone version (output from rclone version)

$ rclone version
rclone v1.51.0
- os/arch: linux/386
- go version: go1.14

Which OS you are using and how many bits (eg Windows 7, 64 bit)

$ uname -a
Linux Syno 3.2.40 #25426 SMP PREEMPT Tue May 12 04:38:00 CST 2020 i686 GNU/Linux synology_evansport_214play

Which cloud storage system are you using? (eg Google Drive)

Scaleway

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone sync /volume2/backups encrypted_scaleway:backups 

The rclone config contents with secrets removed.

[scaleway]
type = s3
provider = Other
env_auth = false
access_key_id = XXXXX
secret_access_key = XXXXX
region = fr-par
endpoint = s3.fr-par.scw.cloud
acl = private
force_path_style = false
storage_class = GLACIER
chunk_size = 32M

[encrypted_scaleway]
type = crypt
remote = scaleway:XXXXX
password = XXXXX
directory_name_encryption = false

A log from the command with the -vv flag

2020/06/10 13:25:36 DEBUG : pacer: low level retry 1/10 (error InvalidArgument: Part number must be an integer between 1 and 1000, inclusive
        status code: 400, request id: tx86ca18460d1249898e660-005ee0c32f, host id: tx86ca18460d1249898e660-005ee0c32f)

I think that doesn't exist. The 10,000 is a hard-coded number, so you need something to override it unless I'm mistaken:

Erf, that's what I was afraid of. Thanks for your response!

Should I create an issue to request this feature, or is it too specific?

No reason not to request it; it "feels" like an easy one to me, but then again, I don't have to code it!

Actually it won't be necessary! I was browsing the code you quoted to see if I would be able to code it myself despite not knowing Go, and I stumbled upon the maxSizeForCopy setting:

... which seems to be exactly what I need... because it has been added to solve this exact problem, raised in issue #4159. So I'll just wait for v1.53!

So you are right and wrong 🙂

It's not in the docs as it hasn't been released yet, so that makes sense.

That was the pull request for it, so you should be good using a beta or waiting for the next release.

Ha, seems we cross-posted/edited.
Yes, I thought it was strange to see it in the code but not elsewhere, so I looked at the history, found the issue and the beta status. Again, thanks for the quick response!

I think your timing is good too, as a point release just dropped, so take a look now 🙂

You should find the max_upload_parts support in the latest beta.
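
Something like this should do it (the option name is max_upload_parts / --s3-max-upload-parts, but do double-check the beta docs):

$ rclone sync /volume2/backups encrypted_scaleway:backups --s3-max-upload-parts 1000

or in the [scaleway] section of the config:

max_upload_parts = 1000

With that set, rclone should increase the chunk size automatically when a file of known size would otherwise need more than 1,000 parts.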

I just tested it and it works perfectly: the chunk size automatically went from 32M to 186M to upload a ~180G file, as expected. I seem to have another problem though, but that will be another post if I can't solve it by myself.
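
For the record, the 186M value makes sense: the chunk size has to be at least the file size divided by 1,000 parts, and for a file a little over 180 GiB that works out to roughly 185-186 MiB (back-of-envelope numbers; the exact figure depends on the real file size and rclone's rounding).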

Thanks again!
