Tried to add the --no-traverse flag, but it still goes through all the files.
Maybe there is a more efficient way of doing it?
My guess is that the filter removes the --fast-list speed boost, but then how do I split the transfer so I can have several instances doing the copy (on several servers)?
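For context, the kind of name-prefix split being described would look something like this (the remote names `src:` and `dst:` and the letter ranges are placeholders, not the actual setup):

```
# One instance per letter range; a pattern without a leading / matches
# file names anywhere in the tree, so every directory must be listed.
rclone copy src:data dst:data --include "[a-m]*" --fast-list
rclone copy src:data dst:data --include "[n-z]*" --fast-list
```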
No, the problem is that it takes several minutes before copying the first file.
If it's marked as skipped in the log, that means Rclone considered it for transfer.
Rclone is clever enough to skip directories it doesn't need to read, but it looks like you are asking for all files starting with certain letters, so it will have to read every directory to find them.
That will be --fast-list traversing the file system first before doing any copies. If you remove --fast-list, rclone will start transferring as soon as it lists a file that it should transfer.
Having a mount only saves a few seconds of startup time. Using the rclone commands directly on Google Drive is more efficient, and I think in this case that is what you want to do.
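A minimal sketch of that, assuming a remote named `gdrive:` and placeholder paths:

```
# No mount, no --fast-list: rclone talks to Google Drive directly and
# starts copying as soon as it lists a matching file.
rclone copy gdrive:source gdrive:backup --include "[a-m]*" --progress
```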
Thank you @asdffdsa and @ncw. Filtering on the top-level folder and removing --fast-list did the trick.
Without listing the 100k files each time, the transfer starts faster and seems to have fewer API errors.
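For anyone finding this thread later, the working setup amounts to something like the following (the remote and folder names are placeholders); rooting the pattern lets rclone skip every other top-level directory:

```
# One instance per top-level folder, no --fast-list.
# The leading / roots the pattern, so rclone only walks FolderA.
rclone copy gdrive:src gdrive:dst --include "/FolderA/**"
```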