I have a volume of about 75,000 files totaling 1.2TB. Only a small number of files change daily, amounting to about 40GB that need to be uploaded each night. The problem I am having is that it takes over 9 hours just to "check" the files. The internet connection is 250Mbps down and 50Mbps up, so bandwidth isn't the issue. I expected the --fast-list option to solve the problem, but with that option rclone doesn't work at all: it just gives constant "Error 403: rate limit exceeded" errors.
What’s the full command you are using?
Are you using your own client ID/API key?
I am using my unique client ID.
rclone -vv --transfers 8 --fast-list --stats 10s sync [SOURCE] [DESTINATION]
It gives errors like these every few seconds and fails to check or upload any files:
2019/02/14 11:58:31 DEBUG : pacer: Rate limited, sleeping for 16.831541233s (9 consecutive low level retries)
2019/02/14 11:58:34 DEBUG : pacer: low level retry 2/10 (error googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded)
So with --transfers 8 and checkers defaulting to 8, you are hitting too many transactions per second, which is what Google is alerting you to. You can only do about 10 per second, so you need to turn down transfers/checkers and find the best numbers for your scenario.
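As a starting point, something like this might help (a sketch, not a definitive fix: rclone's --tpslimit flag caps API transactions per second, and the exact numbers here are assumptions to tune for your setup):

```shell
# Lower parallelism and cap transactions per second to stay under
# Google Drive's ~10 TPS limit. [SOURCE] and [DESTINATION] are the
# same placeholders as in your original command.
rclone sync [SOURCE] [DESTINATION] -vv \
  --transfers 4 \
  --checkers 4 \
  --tpslimit 8 \
  --stats 10s
```

If the pacer messages disappear, you can raise --transfers/--checkers step by step until they start coming back.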
I don’t think fast-list has much to do with your issue.
According to the documentation, --fast-list dramatically reduces file listing times. Is there no way to get it working?
Try my sync command; maybe that works better and gives you a starting point for troubleshooting. You can raise the checkers and transfers once it's running well. I'm not getting rate limit errors with this one, and fast-list seems to be working fine:
rclone sync SOURCE DESTINATION -vv --buffer-size 128M --drive-chunk-size 32M --checkers 3 --fast-list --transfers 3
Why do you think fast-list isn't working? It only affects how directories/files are listed.
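One rough way to check whether --fast-list is actually helping is to time a listing-only operation with and without it (a sketch; "remote:path" is a placeholder for your destination, and rclone size only lists, it transfers nothing):

```shell
# Compare recursive listing speed. --fast-list makes fewer, larger
# API calls but holds the whole listing in memory.
time rclone size remote:path
time rclone size remote:path --fast-list
```

If the second run is much faster and doesn't trigger 403s, the rate limiting is coming from the transfer/checker load rather than from the listing itself.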