I'm getting banned from Google Drive during the initial Plex scan. I've been running the same rclone command for a couple of years, and all was well until I moved to a new server and reused it. Now it seems to be having trouble scanning my library, and Google is returning 403s as a result of the ban.
How's my command? What should I tweak here to avoid getting banned? I have about 500GB free locally and a library of roughly 150TB on GDrive, with about 5-10 streams running at any given time.
I did make sure things like thumbnail generation, media analysis, intro detection, etc. are all turned off in Plex.
Run the command 'rclone version' and share the full output of the command.
os/version: ubuntu 22.04 (64 bit)
os/kernel: 5.15.0-30-generic (x86_64)
Which cloud storage system are you using? (eg Google Drive)
The command you were trying to run (eg rclone copy /tmp remote:tmp)
If the library paths changed or were adjusted, Plex will reanalyze the files. With a large library, that will probably blow through your download quota. This is why I left Google: those quotas annoy me, as there's no way for Google to tell you what threshold you crossed or what the issue actually is.
They give you a generic answer, and the limits differ for regular drives, shared drives, etc.
That's just the HTTP range request rclone uses when it reads something from a cloud remote.
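For context, a sketch of what the default 128M first request maps to on the wire (this is just the arithmetic, not an actual rclone log line):

```shell
# 128 MiB = 134217728 bytes, so the first default-sized chunk corresponds
# to this HTTP Range header (byte ranges are inclusive, hence the -1):
printf 'Range: bytes=0-%d\n' $((128*1024*1024 - 1))
# → Range: bytes=0-134217727
```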
This would be specific to a mount and the cache mode doesn't matter.
If you set it high, you might get a little bloat / extra data downloaded before the request closes out. How exactly Google counts that range request against your quota is unknown.
So if I have a file and I want to read, say, 1KB of it, and my default range request is 128M, rclone starts a larger chunk download for 128M; once I get my 1KB, it closes out, but I bleed a little extra download in the process.
@VBB was testing with 1MB range requests, and that generally seemed to work; on a large library scan it would have some impact. Without knowing how the quotas work, though, it's a guess whether the impact is material or not.
In the debug logs, you'll see something like the chunk size, as rclone scales it up based on sequential reading: the longer a file is read, the more it doubles the range size to help performance.
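If you want to watch that scaling yourself, a mount along these lines (the remote name and mount point here are made up) turns on debug logging so the chunked-reader messages show up:

```shell
# Hypothetical remote/mount point. -vv enables debug output and
# --log-file captures it; grep the log for the growing chunk sizes.
rclone mount gdrive: /mnt/gdrive \
  --vfs-read-chunk-size 128M \
  --vfs-read-chunk-size-limit off \
  -vv --log-file /tmp/rclone.log
```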
I've been using 1MB for about a year now without any negative impact. Initially, it was for the Plex agent upgrade, but then I enabled "Upgrade media analysis during maintenance", and it's been running nightly ever since (not sure if it's supposed to finish at some point, but mine doesn't).
So, for an initial library scan as well as other, more involved scans, I'd recommend setting --vfs-read-chunk-size 1M.
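Concretely, that would look something like this (the remote name, mount point, and other flags are placeholders for whatever your existing mount already uses; only the chunk-size flag is the point here):

```shell
# Hypothetical mount: 1M keeps each range request small while the
# big initial scan touches every file in the library.
rclone mount gdrive: /mnt/plex-media \
  --vfs-read-chunk-size 1M \
  --allow-other \
  -v
```

Once the initial scan settles down, you can leave it at 1M or bump it back up for a pure streaming workload.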
Yep, this is normal. You'll just need to ride out the temporary 403s until your entire library scan is complete. It's annoying, but it is what it is. Plex's aggressive scanning isn't really designed for cloud storage like Google Drive, but you'll get through it eventually.