Rclone mount library refresh compromise metadata

Nota bene

Please excuse the simplicity of my knowledge with regard to rclone and its related applications. I'm still learning, and I currently use a seedbox provider that automates most of my system, but it has its own limitations.

My Config

  • Plex (Web Player: v4.38.2 / Server: Version v1.19.5.3112)
  • Medusa (v0.3.16) – SickChill fork, a PVR for TV shows; it sends notifications to update Plex's library when new content is processed
  • Sub-Zero – Plex plugin to grab/manage subtitles
  • Other Plex Plugins – Trakt TV, Webtools

My System

  • Rclone GDrive mount (a folder pointing at both my Plex folder in GDrive and my Plex folder on my seedbox; a cron job offloads the Plex folder from the seedbox to GDrive every 12 hours)
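The 12-hour offload described above could look something like this in a crontab (a sketch only; the local path and the `--min-age` value are assumptions, the remote name matches the config shown later in the thread):

```shell
# m h  dom mon dow  command
# Every 12 hours, move finished files from the seedbox Plex folder to
# the Drive remote. --min-age skips files modified in the last 15
# minutes, so anything still being written is left alone.
0 */12 * * * rclone move /home/user/plex plex-gdrive:plex --min-age 15m
```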

The issue

Long story short, Medusa sends library update requests for ALL the libraries, scanning ALL files, not just the new ones. As a result, Sub-Zero (as one of the Plex agents) struggles to keep track of the subtitles it has to grab. Worse, it blocks the metadata search, leaving the libraries in a very disorganised state.

Potential solutions

  • Replace Sub-Zero with Bazarr… but my seedbox provider doesn't allow it yet
  • Use plex_autoscan (https://github.com/l3uddz/plex_autoscan#rclone-remote-control) to scan only new content… but my seedbox has deactivated sudo/root access
  • Set manual Plex scans at a short interval (e.g. 15 min)… assuming all episodes are downloaded, a simple scan will pick up all episodes in order and trigger Sub-Zero the right way. I've disabled Medusa's Plex updates and ran one manually from Plex; it works perfectly.
    But since I use a drive mounted with rclone, I wonder whether I risk a ban from Google Drive's API if I scan content every 15 min. I've heard Plex scans "download" files on remote filesystems and therefore hit the download limit before the API request quota.
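As an aside on the second option: Plex's HTTP API can refresh a single library section, and the refresh endpoint accepts a `path` parameter to limit the scan to one folder. A hedged sketch (the section id, folder path, and token are placeholders for your own values):

```shell
# Refresh only library section 2, limited to one show's folder.
# "2", the path, and PLEX_TOKEN are hypothetical placeholders.
curl "http://localhost:32400/library/sections/2/refresh?path=/mnt/remote/TV/Some%20Show&X-Plex-Token=$PLEX_TOKEN"
```

This is the same partial-scan mechanism tools like plex_autoscan use under the hood, so it may be an option even without root access, as long as you can reach the Plex API.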

I would like to understand better how Plex and rclone work in terms of caching and scans; if I understand them better, I'll be better able to avoid getting banned.

Thank you in advance.

Plex and Rclone work fine together.

Plex scans a new file and pulls its media info. If a file's size/modtime hasn't changed, Plex just checks its attributes.
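To see why those repeat scans are cheap: a size/modtime check is a metadata-only operation, which on an rclone mount is served from the directory cache when it is warm and costs no download quota. A local illustration (using a throwaway file in place of a media file; `stat -c` is GNU coreutils):

```shell
# Create a sample file standing in for a media file on the mount.
printf 'fake media data' > /tmp/sample.mkv

# A size/modtime check reads only file attributes, never file data.
stat -c 'size=%s mtime=%Y' /tmp/sample.mkv
```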

You can't get banned from Google for using anything 3rd party. You only get banned for violating their ToS with content.

Not sure what that means. Check my example above as that's just not true. The download limit is 10TB per day.

Thanks for your comprehensive answer @Animosity022.

I was referring to the post "ncw-really-quick-fix-gdrive-24hr-ban-w-plex-scans-on-r-clone-mount" in the forum (I can't put the link apparently...). What do you think?

I've used rclone's automated GDrive config for the remote (option 13, if I remember correctly), but I don't know the size of the cache, if there is one.

For the moment, library scans every 15 minutes take 20 seconds to complete on a 12 TB GDrive. Am I right to assume that if the cache gets reinitialised, Plex scans may take significantly longer?

Thanks in advance.

That's a 3-year-old post that's no longer valid; the issue it describes doesn't get you banned anymore, so its wording is off.

Rclone 1.42 was released back in June 2018. Prior to that release, rclone didn't use range requests, so every download counted as a full file and you'd hit the 10TB quota. Before that release, the cache backend was also used to avoid that.
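To illustrate what range requests change: instead of fetching a whole file, the client asks for a byte window (an HTTP `Range: bytes=...` request). A rough local analogue with `dd`, reading a 64 KiB window out of a 1 MiB file rather than the entire thing:

```shell
# Make a 1 MiB stand-in file.
dd if=/dev/zero of=/tmp/big.bin bs=1024 count=1024 2>/dev/null

# Read only a 64 KiB window starting 128 KiB in -- roughly what
# "Range: bytes=131072-196607" does over HTTP. Pre-1.42 behaviour was
# closer to reading the whole file for any access, which is why full
# files counted against the daily download quota.
dd if=/tmp/big.bin bs=1024 skip=128 count=64 2>/dev/null | wc -c
```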

I'm not sure what you are using or which cache you are referring to. It's best to share the info in the help template, as that gives a clear picture of your setup.

What is the problem you are having with rclone?

I have to set manual library scans at short intervals, as auto scan isn't compatible with my setup. I need to configure rclone the right way so the scans won't trigger Google Drive API's thresholds.

What is your rclone version (output from rclone version)

rclone v1.43.1

Which OS you are using and how many bits (eg Windows 7, 64 bit)

os/arch: linux/amd64
go version: go1.11

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

exec rclone mount --log-level NOTICE --allow-other --dir-cache-time 72h --drive-chunk-size 32M --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit off --user-agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36" plex-crypt: /mnt/remote
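For readability, the same mount broken out flag by flag (a commented sketch of the command above, not a recommendation):

```shell
# --dir-cache-time 72h: keep the directory tree cached for 72 hours,
#   so repeated Plex scans are served from memory, not fresh listings.
# --vfs-read-chunk-size 64M: the first ranged read per file asks for
#   64M; each subsequent chunk doubles in size.
# --vfs-read-chunk-size-limit off: no upper bound on that doubling.
# --drive-chunk-size 32M: chunk size for uploads to Drive.
# --allow-other: let other users (e.g. the Plex process) read the mount.
exec rclone mount \
  --log-level NOTICE \
  --allow-other \
  --dir-cache-time 72h \
  --drive-chunk-size 32M \
  --vfs-read-chunk-size 64M \
  --vfs-read-chunk-size-limit off \
  --user-agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36" \
  plex-crypt: /mnt/remote
```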

As the dir cache has a 72h duration, the directory tree will be dropped every 72h, right? Then the next scan, with a fresh cache, will take significantly longer, potentially hitting the API's request limit?
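One common way to avoid paying the full relisting cost when the dir cache expires is to pre-warm it with rclone's remote control API. This assumes the mount is started with `--rc` added to the existing command; the cron interval is just an example:

```shell
# Prime the whole directory tree into the mount's dir cache.
rclone rc vfs/refresh recursive=true

# Re-run from cron shortly before --dir-cache-time expires (e.g. every
# 71 hours for a 72h cache) so Plex scans always hit a warm cache:
# 0 */71 * * * rclone rc vfs/refresh recursive=true
```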

The rclone config contents with secrets removed.

Current remotes:

Name                 Type
====                 ====
plex-crypt           crypt
plex-gdrive          drive
plex-gdrive2         drive

A log from the command with the -vv flag

As this is a test to improve my config, please let me know whether I should proceed with this command.

Thanks in advance.

That version is close to two years old, so can you update to the latest stable v1.52.2 and test again?

Thanks, it's done.
But it's not really a question of testing; it's rather about finding the right command/config to make sure my Plex scans (currently every 15 min) won't trigger GDrive API limits.

You have 1 billion API hits per day so you won't hit that limit.

True. But there's a limit of 1,000 requests per 100 seconds. I suspect that when the cache is emptied after 72h, a scan from Plex may hit that threshold (currently I go up to 400).

The point is that you aren't going to hit any API quota issues. You can configure it how you like. If you make too many API calls, Google just tells you to slow down and rclone retries.
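That said, if you ever wanted a hard ceiling comfortably under the 1,000-per-100s quota, rclone can throttle its own API transactions; these are real flags, though the values below are just an example:

```shell
# Cap rclone at ~5 API transactions per second (500 per 100s), with a
# burst allowance of 10, well under the per-user quota.
rclone mount --tpslimit 5 --tpslimit-burst 10 plex-crypt: /mnt/remote
```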

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.