The 10,000-chunk limit mentioned in the --s3-chunk-size documentation is not correct when using Scaleway Object Storage, where the maximum number of parts is 1000.
Consequently, trying to upload a 45G file with --s3-chunk-size 32M fails, as expected, because it would require uploading 1400+ chunks.
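For reference, the arithmetic, assuming binary units (45 GiB file, 32 MiB chunks):

$ echo $(( 45 * 1024 / 32 ))   # parts needed for 45 GiB at 32 MiB per part
1440

1440 parts is well over the 1000-part ceiling.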
I am looking for a way to change this 10,000 limit to 1000, so that rclone will automatically increase the chunk size when uploading a large file of known size and stay below the configured chunk limit, but I can't find an option to do this. Did I miss something, or is it not possible (yet)?
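As the replies below hint, the knob I was after did exist in a beta at the time. Assuming it is --s3-max-upload-parts (config-file key max_upload_parts), a sketch of both forms; "scw" and "mybucket" are placeholder names, not from this report:

$ rclone copy ./bigfile scw:mybucket --s3-max-upload-parts 1000 -vv

or, per remote, in rclone.conf (other keys omitted):

[scw]
type = s3
provider = Scaleway
max_upload_parts = 1000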
What is your rclone version (output from rclone version)
$ rclone version
- os/arch: linux/386
- go version: go1.14
Which OS you are using and how many bits (eg Windows 7, 64 bit)
$ uname -a
Linux Syno 3.2.40 #25426 SMP PREEMPT Tue May 12 04:38:00 CST 2020 i686 GNU/Linux synology_evansport_214play
Which cloud storage system are you using? (eg Google Drive)
Scaleway Object Storage
The command you were trying to run (eg rclone copy /tmp remote:tmp)
2020/06/10 13:25:36 DEBUG : pacer: low level retry 1/10 (error InvalidArgument: Part number must be an integer between 1 and 1000, inclusive
status code: 400, request id: tx86ca18460d1249898e660-005ee0c32f, host id: tx86ca18460d1249898e660-005ee0c32f)
Ha, seems we cross-posted/edited.
Yes, I thought it was strange to see it in the code but not elsewhere, so I looked at the history and found the issue and the beta status. Again, thanks for the quick response!
I just tested it and it works perfectly: it automatically went from 32M to 186M to upload a ~180G file, as expected. I seem to have another problem, though, but that will be another post if I can't solve it myself.
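For the curious: 186M is consistent with rounding size/max_parts up to the next whole MiB, which is my reading of the s3 backend rather than anything stated here. For a file just over 181 GiB, for instance:

$ SIZE=$(( 181 * 1024 * 1024 * 1024 ))   # hypothetical exact size near ~180G
$ echo $(( (SIZE / 1000 >> 20) + 1 ))M   # size / 1000 parts, rounded up to next MiB
186M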