I recently switched from the `cache` backend to the VFS cache on a team drive. My mount options are:
```
--allow-other --dir-cache-time 1000h --poll-interval 30s --log-level INFO --umask 002 --rc --rc-addr :5572 --rc-no-auth --cache-dir=/config/cache --vfs-cache-mode full --vfs-cache-max-size 20G --vfs-cache-max-age 24h --uid 1000 --gid 1000 --umask 022 --default-permissions --allow-non-empty --tpslimit 10
```
Today I saw this error in the log file:

```
vfs cache: downloader: error count now 4: vfs reader: failed to write to cache file: open file failed: googleapi: Error 403: The download quota for this file has been exceeded., downloadQuotaExceeded
```
and the same error for every file on the team drive. It's probably Plex accessing the files, because the errors stopped when I restarted Plex. After a few minutes I was able to access the files again without errors, so I guess I hit a limit, but I don't know which one. It's probably not the 10 TB daily download limit, and I did not get a 24-hour ban.
The line speed is 1 Gbps, so even if Plex had been downloading at full speed for 24 hours straight (which I'm quite sure it didn't), it would barely pass the 10 TB mark. The Drive audit log shows that each individual file was only downloaded a few times (< 10) per day.
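For reference, here is the back-of-envelope arithmetic behind the "barely pass 10 TB" claim, assuming decimal units (1 Gbps = 125,000,000 bytes/s):

```shell
# Maximum bytes a saturated 1 Gbps line can move in 24 hours
bytes_per_day=$(( 125000000 * 86400 ))
echo "$(( bytes_per_day / 1000000000 )) GB/day"   # 10800 GB/day, i.e. ~10.8 TB
```

So a full day at line rate lands just above 10 TB, and any realistic Plex workload should be well under it.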
Anyway, my main question is: do you have any tips or tricks to mitigate this on the rclone/Google Drive side, without changing settings on the Plex/Sonarr/Radarr side? For example: storing files on separate Team Drives to balance the load, creating different client IDs/projects and using multiple mounts for different folders instead of one mount at the root, or increasing the chunk size. Have you tried any of these, and do they actually spread out the load, effectively raising the limit?
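To make the "multiple mounts" idea concrete, this is roughly what I have in mind. It is only a sketch: the remote names (`tdrive-media1:`, `tdrive-media2:`), paths, and the assumption that each remote is configured with its own client ID/project are all hypothetical, not something I have tested.

```shell
# Hypothetical: two remotes, each configured in rclone.conf with its own
# client_id/project, mounted over different subtrees instead of one mount
# at the root. The idea is that API traffic is split across projects.
rclone mount tdrive-media1:Movies /mnt/movies \
  --allow-other --vfs-cache-mode full --vfs-cache-max-size 20G \
  --dir-cache-time 1000h --poll-interval 30s --tpslimit 10 &

rclone mount tdrive-media2:TV /mnt/tv \
  --allow-other --vfs-cache-mode full --vfs-cache-max-size 20G \
  --dir-cache-time 1000h --poll-interval 30s --tpslimit 10 &
```

I'm unsure whether the per-file download quota is counted per project or per file regardless of client, which is exactly what I'm hoping someone here can confirm.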