I'd like to preface this by saying my knowledge of rclone and Linux is pretty basic; everything I've done so far to set up my server has come from following guides and forum solutions. The past couple of nights I've received download API bans from Google with minimal use, and I'm not sure what's causing them or how to rectify it.
If any files are played by Plex, I just see something like "403 downloadQuotaExceeded" being spammed in PuTTY and nothing plays. I haven't run any scans, though, and there was only one stream active when the jump in error % occurred, so I'm not sure what's happening.
I've been using that mount command for the past 11 months with no issues, but if trying something different with a bunch of flags might work, I'd be willing to try it. I think the ban ended a couple of hours ago, so I have another shot. All I did to get banned the second time was refresh the metadata on a show with 5 seasons, with no active streams, and then I started receiving the messages again.
Just mounted the drive with "rclone mount gcache: ~/mnt/gdrive --cache-db-purge --buffer-size 64M --dir-cache-time 72h --drive-chunk-size 16M --timeout 1h --vfs-cache-mode minimal --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit 1G &", which someone recommended to me on Reddit.
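For readability, here's that same command split one flag per line (the flags are exactly as quoted above; the trailing & just backgrounds the mount):

```shell
rclone mount gcache: ~/mnt/gdrive \
  --cache-db-purge \
  --buffer-size 64M \
  --dir-cache-time 72h \
  --drive-chunk-size 16M \
  --timeout 1h \
  --vfs-cache-mode minimal \
  --vfs-read-chunk-size 128M \
  --vfs-read-chunk-size-limit 1G &
```

Roughly, per the rclone docs: --cache-db-purge clears the cache backend's database on startup, --buffer-size is in-memory read-ahead per open file, --dir-cache-time is how long directory listings are kept, --vfs-read-chunk-size is the initial ranged-download size (doubling per chunk up to --vfs-read-chunk-size-limit), and --drive-chunk-size only affects uploads, so it shouldn't matter for playback.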
Files are playable so far, but I'm not going to try scanning any libraries for fear of getting API banned again.
Edit: After starting up the mount and monitoring the Google APIs with one stream active, calls have been going as high as 6/s, which I don't recall ever seeing unless I was doing a full library scan. Am I doing something wrong? I also don't know what to make of this new orange & blue "compute" bar that's showing 0 for traffic and 100% errors.
Please excuse my ignorance but what do you mean by the cache backend?
No, I've always been the only person using any of my stuff, and Whatbox specifically says not to use "allow_other", so I've never included it in my mount command:
Many guides on the Internet have said to use the "allow_other" parameter, however you should not do this. This is only intended for when Plex runs on its own user account, and on a shared system it would mean other users being able to access your mounted data. We have this module disabled and you will receive errors if trying to use it.
So unless someone somehow got access to my API credentials, I'm at a loss as to why there have been so many calls when there's barely any activity on my server. Would it be beneficial to delete the API project and recreate the gdrive remote with a new set of creds?
Here's an updated screenshot of the API usage. There are currently no streams going; the last stream ended at 11:17, and Plex shows nothing running under Status > Alerts.
I'd probably nuke my client/secret and start over unless you can figure out what is using it.
As for allow_other, what that does is let a different user access your rclone FUSE mount. In most setups, Plex runs as the plex user while the mount belongs to a different user, so you need allow_other for the mounted data to be visible to Plex. On a shared seedbox, though, this would be bad and should not be used. It comes down to use case, as most guides are written for standalone systems.
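On a standalone box where that applies, a minimal sketch (the /mnt/gdrive path is illustrative; FUSE also requires the user_allow_other option in /etc/fuse.conf before a non-root user can pass --allow-other):

```shell
# /etc/fuse.conf must contain the line "user_allow_other"
# for a non-root user to mount with --allow-other
rclone mount gcache: /mnt/gdrive --allow-other &
```

On Whatbox that module is disabled, which is why their docs say you'll get errors if you try it.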
I don't believe anyone is syncing; I just share it with family and a few friends, and I don't have the sync option checked on any of their accounts. Is deep analysis under Scheduled Tasks? Is it possible that Plex was running any of these background tasks when I mounted the drive again? I also had my server offline for the past day and only turned it on just before mounting the drive again.
Hmm, I just unchecked that. Do you think those high calls could have been from Plex running stuff in the background, since I had just booted the server and mounted the drive, on top of the one active stream? Or do you think it still wouldn't be that high?
Hmm... do you have any idea how Plex handles scanning in/analyzing media from gdrives? Does it "download" the entire file or something, so that a show that's over 1TB would make me hit the download limit for the day?
Do I need to do anything differently, or delete the cache-backend files, if I go back to the normal mount command? I don't know if the buffer size or any of those other flags would mess with things.
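If the cache backend gets dropped, the leftover files can be removed once the mount is down. A sketch, assuming rclone's default cache location (~/.cache/rclone/cache-backend; adjust if --cache-db-path or --cache-chunk-path were ever set):

```shell
# Unmount first so nothing holds the cache files open
# (|| true ignores the error if it isn't currently mounted)
fusermount -uz ~/mnt/gdrive || true

# Remove the cache backend's database and chunk files
# (default location; harmless if the directory doesn't exist)
rm -rf ~/.cache/rclone/cache-backend
```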
I'm hoping that being checked is what caused the lockout in the first place. I was in the middle of switching over to my backup when I realized the root path for all the files didn't match my main drive, so I cancelled the scan. My whole M folder for TV shows will still need to be rescanned against the main drive to fix the path, though, and I'm just hoping I don't exceed the download quota, since that folder has 125 shows in it.