Error on s3 scaleway with large file

Hello

rclone sync fails with a 56G media file on Scaleway S3 after about 15G has been copied.

My version of rclone is rclone v1.54.1 on Ubuntu Xenial.

#### The command you were trying to run

rclone -vv --s3-chunk-size 16M --s3-max-upload-parts 10000 sync --progress file backend:test


#### The rclone config contents with secrets removed.  

[scaleway]
type = s3
provider = Other
env_auth = false


#### A log from the command with the `-vv` flag  

2021/03/23 13:17:00 DEBUG : pacer: low level retry 1/10 (error InvalidArgument: Part number must be an integer between 1 and 1000, inclusive
status code: 400, request id: tx1506bb80c2494a849cbcc-006059dc3c, host id: tx1506bb80c2494a849cbcc-006059dc3c)
2021/03/23 13:17:00 DEBUG : pacer: Rate limited, increasing sleep to 10ms
2021/03/23 13:17:01 DEBUG : pacer: Reducing sleep to 0s

Hello and welcome to the forum,

Part number must be an integer between 1 and 1000, inclusive
That is Scaleway telling you to reduce --s3-max-upload-parts.
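
To put rough numbers on it (simple arithmetic, assuming the 1000-part cap that Scaleway reports in that error):

echo "parts needed for 56G at 16M chunks: $(( 56 * 1024 / 16 ))"     # 3584 parts, so Scaleway rejects part 1001
echo "largest object with 16M chunks:     $(( 16 * 1000 / 1024 ))G"  # ~15G, which is about where your sync stopped
echo "parts needed for 56G at 64M chunks: $(( 56 * 1024 / 64 ))"     # 896 parts, safely under the cap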

Try:
rclone -vv --s3-max-upload-parts 1000 sync --progress file backend:test

OK, I'll try, but I just tried with --s3-chunk-size 64M and the sync worked.

--s3-max-upload-parts 1000 works.
Simple is beautiful.
Thanks :slight_smile:

Good.
To make it even simpler, rclone will set the chunk size for you based on the size of the file.
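
For the 56G file in this thread that works out roughly like this (a sketch of the idea, not rclone's actual code):

FILE_MIB=$(( 56 * 1024 ))   # size of the media file in MiB
MAX_PARTS=1000              # Scaleway's per-upload part limit
echo "chunks must be at least $(( (FILE_MIB + MAX_PARTS - 1) / MAX_PARTS )) MiB"   # ~58 MiB, so 64M was a sensible manual choice too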
https://rclone.org/s3/#s3-max-upload-parts
"Rclone will automatically increase the chunk size when uploading a large file of a known size to stay below this number of chunks limit"

If you use rclone v1.53.0 or later and set provider = Scaleway in the config then rclone will do this for you automatically!
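
For example, the config could then look something like this (the fr-par region and endpoint are just an example; use whichever region your bucket is in):

[scaleway]
type = s3
provider = Scaleway
env_auth = false
region = fr-par
endpoint = s3.fr-par.scw.cloud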


Oh, wonderful!
Thank you for this tip.

