Logfile: filter 403 errors

Hi, is it possible to stop Google Drive 403 errors (User Rate Limit Exceeded) from showing up in the log?

My logfile is unreadable because of all these errors…

Regards,
Cadish

Perhaps turning down your transactions per second is the right fix, rather than masking the issue?

Thanks for your answer, @Animosity022, I didn’t know I could turn that down myself. I always thought rclone handled that automatically.

Is setting --tpslimit the best way, or are there better ways to limit it? It’s not really the transfers themselves that are causing the 403s, more the checks for whether a file has been updated or not…

rclone supports many backends, so there isn’t a single setting that would work for all of them.

For Google Drive, as a general rule you need to limit to fewer than 10 transactions per second. I use a mount and a copy.
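For the mount side, a cap on the transaction rate alone goes a long way. A minimal sketch (the remote name gdrive: and the mount point are just placeholders):

rclone mount gdrive: /mnt/gdrive --tpslimit 10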

For my copy, since I’m not worried about speed (it runs overnight), I go very low:

--checkers 3 --fast-list --tpslimit 3 --transfers 3

If you are just doing the copy, you can raise those numbers. Experiment and see what works best for your situation. rclone backs off transfers when you hit 403s, so hitting them makes things slower.
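Putting it together, a full overnight copy would look something like this (the remote name gdrive: and the paths are placeholders, not anything specific to your setup):

rclone copy /local/media gdrive:backup --checkers 3 --fast-list --tpslimit 3 --transfers 3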


Ok, will experiment. Thanks a lot @Animosity022!

Thanks again @Animosity022. I’ve decreased the error rate from around 60% to less than 1% (according to the API dashboard), and there isn’t a single error in my logfile anymore.

These are the settings I’m using now:
--checkers 10 --fast-list --tpslimit 10 --transfers 3


I wonder whether that should be the default for Google Drive - at the moment it is 100, I think. What do you think, @Animosity022?

I think that is a perfect default value. Folks should also try to use their own API keys if at all possible.
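For anyone unsure what that means: once you have a Google Drive client ID and secret of your own, the remote’s entry in rclone.conf ends up looking roughly like this (the remote name and the values are placeholders, and the token line that rclone config writes after authorization is omitted):

[gdrive]
type = drive
client_id = your-client-id.apps.googleusercontent.com
client_secret = your-client-secret
scope = drive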

OK I’ll put that on the todo list!