Rclone Mount + Google Drive + Library Scan

What is the problem you are having with rclone?

I'm using rclone mount with Google Drive plus a Python script to detect new files, and then using cURL to send a refresh request to Plex Media Server.

I've heard about API bans from Google and tried researching the topic. I now know that excessive API usage might get you temporarily or permanently banned from Google Drive.

To counter this, I've created a Python script to detect new files in the rclone mount (/mnt/content/Movies, for example). It works fine for now: the script sends the curl request, Plex picks it up, all good. But after a lot of ChatGPT prompting, I realized I was doing the same thing Plex would have done anyway. I wanted to use Partial Media Scan in Plex, but even before that step, my script will be scanning the whole drive and therefore using API calls, which is exactly what I don't want (the ban part).
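For context, the refresh request I send with cURL looks roughly like this (the section ID, path and token below are just placeholders for my real values):

curl "http://127.0.0.1:32400/library/sections/1/refresh?path=/mnt/content/Movies/Some%20Movie&X-Plex-Token=XXX"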

So I came here to ask whether I'm thinking about this the right way, i.e. that the Plex scan and my directory scan end up doing the same thing.

PS:
I haven't used my own client ID and client secret. Does that make any difference?

I'll attach my script here.

Run the command 'rclone version' and share the full output of the command.

rclone v1.65.2

  • os/version: ubuntu 20.04 (64 bit)
  • os/kernel: 5.4.0 (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.21.6
  • go/linking: static
  • go/tags: none

Are you on the latest version of rclone? You can validate by checking the version listed here: Rclone downloads
--> No

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone mount \
--config /root/.config/rclone/rclone.conf \
--use-mmap \
--allow-other \
--vfs-cache-mode minimal \
--vfs-cache-max-age 48h \
--read-only \
--buffer-size 0 \
--vfs-read-chunk-size 16M \
--vfs-read-chunk-size-limit 0 \
--vfs-read-ahead 0 \
--no-modtime \
--drive-pacer-min-sleep 10ms \
--umask 0000 \
drive:/Content /mnt/content

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

[drive]
type = drive
token = XXX

based on other forum posts, that would not be the case.

if you are concerned about api limits, create a client id+secret for your remote.
i believe that gdrive allows for 1,000,000,000 api calls per 24 hours.

you can greatly reduce the number of api calls by capturing the output from subprocess.Popen running
rclone test changenotify
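rough sketch of what i mean, not tested - the plex url, token and section id are placeholders, and the exact changenotify log format can differ between rclone versions, so parse it defensively:

# rough sketch: let rclone's polling tell us what changed on gdrive,
# then ask plex to scan only that folder (partial scan).
import os
import re
import subprocess
import urllib.parse
import urllib.request

PLEX_URL = "http://127.0.0.1:32400"   # placeholder: your plex server
PLEX_TOKEN = "XXX"                    # placeholder: your X-Plex-Token
SECTION_ID = "1"                      # placeholder: plex library section id
MOUNT_ROOT = "/mnt/content"           # where drive:/Content is mounted

# rclone does the polling and prints a line for every change it sees
proc = subprocess.Popen(
    ["rclone", "test", "changenotify", "drive:/Content", "--poll-interval", "30s"],
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
    text=True,
)

for line in proc.stdout:
    # changenotify logs the changed path in double quotes, e.g.
    #   2024/01/01 12:00:00 NOTICE: "Movies/New Movie": directory
    # the exact format may differ between versions, adjust if needed
    m = re.search(r'"([^"]+)"', line)
    if not m:
        continue
    local_path = os.path.join(MOUNT_ROOT, m.group(1))
    scan_dir = local_path if os.path.isdir(local_path) else os.path.dirname(local_path)
    # plex partial scan: refresh only the changed directory
    query = urllib.parse.urlencode({"path": scan_dir, "X-Plex-Token": PLEX_TOKEN})
    url = f"{PLEX_URL}/library/sections/{SECTION_ID}/refresh?{query}"
    try:
        urllib.request.urlopen(url, timeout=10)
    except OSError:
        pass  # add logging/retries to taste

that way rclone only polls gdrive for changes, and plex only scans the one folder that changed.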

Hold up.

Yes, this is not a ban from the service itself, more like a temporary ban from using the APIs. Does that also no longer happen these days? Seems unlikely.

What I'm really wondering is: since I'm using the default client ID and secret, does that help? As in, those API calls shouldn't be counting against my account. That also seems unlikely, but I'm new to this and don't think that's actually how it works.

as far as i know, the only way to get banned from gdrive is to upload copyrighted material.

in your case, not using client id+secret, rclone will get throttled very quickly.
rclone will be forced to slow down the number of api calls as a result.

really, create the client id+secret, use your script and plex.
you can monitor the api calls in the google cloud console.
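once you create the credentials in the google cloud console, the [drive] remote just gains two lines - something like this (values redacted):

[drive]
type = drive
client_id = XXX.apps.googleusercontent.com
client_secret = XXX
token = XXX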

if you ever run into a real issue, post about it.


Wow, amazing.
I'll switch to my own ID + secret right now.
Thanks!

might want to check out my summary of the two rclone caches, vfs file cache and vfs dir cache.
your concern is only about the vfs dir cache.
https://forum.rclone.org/t/status-about-using-rclone-for-music-storage-playback-in-2021-access-times-improved/27648/34

based on that, i do not think you need to worry too much about your script using too many api calls.
in fact, very few api calls are required.

your script is iterating over the mount, which is really asking rclone to iterate over its in-memory list of files.
so after the initial scan, rclone has the full list of files.
after that, rclone should not have to use api calls to re-scan the entire gdrive structure.

  1. a new file is added to gdrive
  2. rclone mount, using polling, will notice the new file and update its in-memory list of files.
    so the next time your script scans, rclone is not even contacting gdrive.

of course, how well that works depends on your rclone mount command.
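for example, the two knobs that matter for this are the dir cache lifetime and the polling interval; something like this on top of the flags you already have (the values are just a starting point, tune to taste):

rclone mount \
--config /root/.config/rclone/rclone.conf \
--dir-cache-time 1000h \
--poll-interval 15s \
--read-only \
--allow-other \
--umask 0000 \
drive:/Content /mnt/content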

Really informative. Thanks again!

Also, I've changed the topic title for SEO; it might help someone find this.