Could a reverse processing option speed up large jobs?

I just ran my first >1TB jobs between Dropbox and Google Drive, and of course it’s not as fast as we’d like. I understand there are a number of reasons for that, with rate limiting being just one potential issue.

However, could there be a potential speed-up from an option for rclone to process directories and files in reverse of its normal order?

For example, say a job was started and had been running for hours. If another instance of the same job was started on another server, I assume there would be some duplicate overhead: rclone would have to keep fetching and processing file info that had already been handled, and would have to “catch up” to the first instance before any copying actually happened (assuming a setting of skipping files that already exist).

With a reverse option, could time be saved because the second instance would start enumerating and processing different files, so copying could begin as soon as possible?

I’m trying to think through this and make sure it would be helpful, before making feature suggestions.


I see what you mean; however, I think running two transfers at once, even if one is running backwards, will run into trouble…

It sounds like you might want to increase --transfers if you want the transfer to run faster?
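For example, something along these lines — a sketch only, where the remote names `dropbox:` and `gdrive:` and the `backups` path are placeholders for your own configured remotes and directories:

```shell
# --transfers raises the number of files copied in parallel (default 4);
# --checkers raises the number of files checked in parallel (default 8).
rclone copy dropbox:backups gdrive:backups --transfers 16 --checkers 32 --progress
```

Note that with rate-limited providers like Google Drive, raising `--transfers` too far can just trade throughput for API errors and retries, so it's worth increasing it gradually.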