Request: Mix transfers of large and small files

I have just started using rclone 1.35 (seems like a great app, thanks!) to sync my files to Amazon Drive.

I understand that there is huge overhead when copying small files: the average upload speed can drop to around 100 kB/s, whereas I can achieve about 2300 kB/s with large files. Increasing the number of concurrent transfers would help, but perhaps there is a more systematic solution.

When starting rclone on a directory tree, it seems to pick files from various directories by size, biggest first. So the default four transfer threads start with the biggest files, and they all reach the smaller files at the same time, slowing down heavily. I don't know whether this happens coincidentally or whether rclone keeps a list of files sorted by size.

If the latter is the case, what I propose is to take that sorted list and use some of the threads to transfer the biggest files as they do now, while reserving the other threads for the smallest files. The transfers of big and small files would then be "interleaved" and the bandwidth would not be wasted.

So let's say two threads would pick files from the top of the list and the other two from the bottom. The number of threads assigned to each group could be user-settable.
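Until something like this exists, a rough approximation might be possible with rclone's existing size filters by running two sync processes side by side, one restricted to big files and one to small files. This is only a sketch: the 100M cutoff and the paths `/data` and `remote:backup` are made-up examples, and running two rclone instances against the same remote may hit rate limits.

```shell
# Hypothetical workaround: interleave big and small files by running
# two rclone processes concurrently with complementary size filters.
# The 100M threshold and the paths are placeholders - adjust to taste.

# Process 1: only files larger than 100 MB (few, long-running transfers)
rclone sync /data remote:backup --min-size 100M --transfers 2 &

# Process 2: only files up to 100 MB (many, short transfers)
rclone sync /data remote:backup --max-size 100M --transfers 8 &

# Wait for both syncs to finish
wait
```

Note that `--min-size`/`--max-size` both treat the boundary inclusively, so a file of exactly 100M could be picked up by either process; in a sync that is harmless since the second process just sees it as already transferred.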

Thanks for considering this!

I think this is probably a coincidence - rclone picks the files to transfer pretty much at random; it's just that the big files hang around much longer, so you are more likely to see them transferring.

Increasing --transfers is helpful if you have lots of small files.
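For example, raising `--transfers` from the default of 4 keeps more small uploads in flight at once, which hides the per-file overhead. The paths and the value 16 below are just illustrative; the right number depends on the remote and your bandwidth.

```shell
# Sketch: more parallel transfers for a tree with many small files.
# /data and remote:backup are placeholder paths.
rclone sync /data remote:backup --transfers 16
```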

Ok, I see, thanks for the very fast reply!