Apparently `--drive-chunk-size SizeSuffix` (upload chunk size, default 8Mi) is a global flag, not just for Google Drive?
Has anyone tested this flag on box.com? I'm getting surprisingly slow speeds right now, and I wonder if this flag has a sweet spot for box.com?
Without the flag I got about 17-20 MiB/s, and with 128M (the same value I used for Google) I got about 22 MiB/s, but for all I know the sweet spot is some other number like 12 or 64. Has anyone played with this? Or heck, does this flag even affect any remotes besides Google Drive?
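For what it's worth, one crude way to hunt for a sweet spot (a sketch, not a definitive method - the remote name `gdrive:`, the destination path `chunktest`, and the local test file are all assumptions you would adapt to your own setup) is to time the same upload at a few different chunk sizes:

```shell
# Sketch: time the same upload at several chunk sizes.
# Assumes a remote named "gdrive:" and a local file ./testfile;
# note --drive-chunk-size is a Google Drive backend flag.
for size in 8M 16M 32M 64M 128M; do
  echo "chunk size: $size"
  time rclone copy ./testfile "gdrive:chunktest-$size" --drive-chunk-size "$size"
done
```

Repeating each size a couple of times helps, since transfer speed to these services varies a lot minute to minute.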
Then why did it make so many 32MB chunks when it was supposed to make only 8MB chunks? Or have I misread the -vv output in some manner?
Also, isn't the goal of larger chunks to reduce API calls? And box.com in theory has even stricter API call limits than Google Drive, right? Or am I again misunderstanding the purpose of chunks entirely?
Why do you say it was supposed to make 8MB chunks? I think 32MB is hardcoded in the Box API implementation. You would have to check their API specification - maybe that is the only size they accept? Or maybe a configurable size was never implemented in rclone.
For files above 50 MiB rclone will use a chunked transfer. Rclone will upload up to --transfers chunks at the same time (shared among all the multipart uploads). Chunks are buffered in memory and are normally 8 MiB so increasing --transfers will increase memory use.
Usually if I claim something it's because I read it on /docs
Of course, I guess it doesn't really matter why it said 32 - it was just so odd. The docs told me it should be 8MiB, I tried to set it to 32MiB with a flag that doesn't apply, and then -vv said 32MiB - that's how I drew the false conclusion.
But then it is even worse, as it is 8MB only, without any option to change it.
Either way, a --box-chunk-size flag is not implemented, so it cannot be changed by the user. Why? I do not know. It would require going through the Box API to understand whether it is possible or not.
In general, larger chunks should improve performance at the cost of RAM usage.
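The RAM cost in that docs quote is easy to estimate: buffered memory is roughly the chunk size times the number of parallel transfers. The numbers below are just example values for illustration, not Box-specific defaults:

```shell
# Rough estimate of upload buffer memory: transfers × chunk size.
transfers=4      # example --transfers value
chunk_mib=32     # example chunk size in MiB
echo "$(( transfers * chunk_mib )) MiB of buffer memory"
```

So bumping the chunk size from 8MiB to 128MiB with 4 transfers takes you from ~32MiB to ~512MiB of buffers, which is why the docs warn about memory use.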