Is it normal to have almost 10% 403 errors?

So I am using rclone to transfer completed files from a seedbox to my GDrive, and with my own API key I am consistently seeing almost 10% 403 errors. I am nowhere near any quotas and everything seems to transfer over fine.

I remember reading somewhere here on the forums that it is normal to have some 403 errors, but 10% seems like a lot. Is this ok or should I be digging deeper to see what the problem is?

This is my current script, which runs as a crontab job every minute:

#!/bin/bash
# lock file named after this script so only one instance runs at a time
LOCKFILE="/var/lock/$(basename "$0")"

(
flock -n 9 || {
# another copy already holds the lock, so bail out
echo "$0 already running"
exit 1
}

/media/dma/craftyclown/bin/rclone move ~/private/rtorrent/data/Complete "gdrive:/The Skull/Complete" -v --min-age 1m --log-file=/media/dma/craftyclown/rclone-upload.log --fast-list

) 9>"$LOCKFILE"
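
For reference, the crontab entry is just the standard run-every-minute pattern, something like this (the script path here is only a placeholder for wherever the script is saved):

* * * * * /media/dma/craftyclown/bin/move-complete.sh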

403 errors are how Google does its rate limiting.

This is what rclone's own API keys look like:

[screenshots of the error rates on rclone's API keys]

So I’d say you are doing OK with 10%!


Fantastic, so I presume the requests that return 403 errors are just retried until they come back with a 200?

EDIT:

I've just noticed I also have a handful of 404 errors and a single 500 error. Would you happen to know what these codes mean?

I tune down my transfers and API calls so I usually get zero 403s; I only move two files at a time and limit a few other things:

/usr/bin/rclone move /data/local/Movies/ gcrypt:Movies --checkers 2 --fast-list --syslog -v --tpslimit 2 --transfers 2 --exclude "*.mkv"

Thanks, setting a TPS limit of 6 was enough to stop the errors without greatly affecting transfer speeds.
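
In case it helps anyone else, the move line now looks roughly like this, i.e. the same command as in my script above with --tpslimit 6 added:

/media/dma/craftyclown/bin/rclone move ~/private/rtorrent/data/Complete "gdrive:/The Skull/Complete" -v --min-age 1m --tpslimit 6 --log-file=/media/dma/craftyclown/rclone-upload.log --fast-list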

Yes, rclone backs off and retries.

404 is file not found - you might get that in normal operation. 500 is an internal server error - usually some kind of rate limiting too, in my experience.
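
If you want to tune the retry behaviour yourself, it is controlled by flags, roughly along these lines (3 and 10 are the defaults as far as I remember, so you rarely need to touch them):

rclone move /src remote:dst -v --retries 3 --low-level-retries 10

--low-level-retries governs the per-request retries (the 403/500 back-off), while --retries re-runs the whole operation if some files failed.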