Gdrive ban question

Anyone know if you can use unionfs, mergerfs, or aufs to merge two Google Drive accounts, so that if one is banned it falls back to the backup one?

I think the easier way would be to have 2 Plex servers…

Seems like even with two servers you'll stay banned most of the time, unless there is a way to point one account at scanning and one account at playback.

IMO the best way would be to do the initial Plex scan and eat those bans, but once your whole library is scanned, disable all automatic scans and use the CLI to scan only the folders where you actually added new items.
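For example, something like this minimal Python sketch (the scanner path assumes a standard Linux install; the section ID and directory are placeholders to adapt to your own library):

# Hypothetical one-off targeted scan; section ID and directory are
# placeholders to adapt to your own library.
import os
import subprocess

os.environ['LD_LIBRARY_PATH'] = '/usr/lib/plexmediaserver'
subprocess.call(['/usr/lib/plexmediaserver/Plex Media Scanner',
                 '--scan', '--refresh',
                 '--section', '1',
                 '--directory', '/media/gdrive/TV/Some Show/Season 1'])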

Btw, if you are running Plex and the downloader on the same server, just enable partial scans, i.e. scanning when changes are detected.

I have 2 separate servers, one for Plex and another that downloads/uploads files, and I am thinking about making a script where the download server would make a list of uploaded files and store it on ACD, and the Plex server would read that list and scan only the newly uploaded folders.

That is what I'm doing. On my seedbox I have a script that does all the sorting, moves stuff to my Gdrive, and then writes the path to scan to a file, which is auto-synced via Syncthing to my OVH server running Plex. On the OVH box I have a Python script that processes each file found and launches the Plex scanner against it.

Previously I had little success getting the CLI scanner with the -d flag to scan only the target directory. Would you mind sharing the parameters you are using?

Here is a snippet of my python script that runs on my OVH:

#!/usr/bin/env python
import os
import shutil
import logging
import subprocess
from subprocess import PIPE
from time import sleep

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# Check if the folder has content (ignore Syncthing metadata files)
def get_child(path):
    return [item for item in os.listdir(path) if item not in ('.stignore', '.stfolder')]

# Declare variables
source = "/plexupdate"
backup = "/plextemp"
childcount = len(get_child(source))

os.environ['LD_LIBRARY_PATH'] = "/usr/lib/plexmediaserver"
os.environ['PLEX_MEDIA_SERVER_APPLICATION_SUPPORT_DIR'] = "/plexdata/Library/Application Support"

for p, d, f in os.walk(source):
    if not childcount:
        logger.info('There is nothing to process in {}.'.format(source))
        logger.info('-----------------------------------------------------------------')
        continue
    gen = [file_ for file_ in f if file_ not in ('.stignore', '.stfolder')]
    for file_ in gen:
        logger.info('Files found in {}.'.format(source))
        current_file = os.path.join(p, file_)
        logger.info('Processing file {}.'.format(file_))
        with open(current_file, 'r') as f2:
            for line in f2:
                line_no_return = line.rstrip('\n')
                logger.info('LINE: {}.'.format(line_no_return))
                # Map the path prefix to the matching Plex library section ID
                if 'FrenchTV' in line_no_return:
                    libsection = 4
                elif 'TV' in line_no_return:
                    libsection = 8
                elif 'UFC' in line_no_return:
                    libsection = 3
                elif 'French Movies' in line_no_return:
                    libsection = 2
                elif 'Dual Audio Movies' in line_no_return:
                    libsection = 5
                else:
                    libsection = 7
                logger.info('Section: {}.'.format(libsection))
                # Scan only the folder named on this line
                plexscan = '"/usr/lib/plexmediaserver/Plex Media Scanner" --scan --refresh --section %s --directory "/media/gdrive_encrypted/%s"' % (libsection, line_no_return)
                Plex = subprocess.Popen(plexscan, shell=True, stdout=PIPE).communicate()[0]
                logger.info('Processing ended...')
                logger.info('{}'.format(Plex))
                logger.info('{}'.format(plexscan))
                sleep(60)
        logger.info('Backing up file to {}'.format(backup))
        shutil.move(current_file, backup)
        logger.info('-----------------------------------------------------------------')

Here is the content of my file that comes from my seedbox:

FrenchTV/District 31/Season 1

My script might not be the most pythonic, but it works 🙂

Thanks for sharing! I'm gonna get something set up.

So I finally got banned 🙂 It was only for 4 hours or so. I was manually moving files around and rescanning specific TV shows (seasons). Rclone must be doing something very wrong in Google's eyes, because it really wasn't a lot of API calls. Anyway, I decided to use node-gdrive-fuse for now; I only need it as a read-only mount. And I am still using unionfs-fuse to merge that with my local RW drive. I actually have my local drive as RW, node-gdrive-fuse as RO, and rclone as RO (as a backup in case node-gdrive-fuse goes down). I still use rclone copy/move to get my files up to my Google Drive account. Even though node-gdrive-fuse has some bugs, there are enough workarounds out there to make it work nicely. It even seems to have better scanning performance than rclone, IMO. We will see how stable it is. For now, it's been up all day (about 14 hours).

Which bugs, and where are the workarounds?

What cache length have you configured, and how did you do it? Are these parameters for the unionfs mount?

Thanks in advance!

I followed the info from this post to get node-gdrive-fuse stable:

Though he recommends putting the cache in memory, I elected to keep it on disk. I use it as a read-only share, so caching really doesn't help me much except for the folder cache.

Here is my unionfs-fuse mount config:

#/usr/bin/unionfs -o allow_other,nonempty,cow,statfs_omit_ro,auto_cache,use_ino /media/media/local=RW:/media/media/gfuse/Media=RO:/media/media/acd=RO /media/media/sorted

I use rclone copy via a cronjob to push changes to Google nightly. My local storage is big enough that I can keep the latest files locally for a few days. Right now, I just go in once a week and delete things to free up space locally; unionfs takes over and the content then gets played via node-gdrive-fuse. I'm sure I could create a script that would clear files out at a certain percentage of disk usage, roughly along the lines of the sketch below. I might do that later.
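A minimal hypothetical sketch of that cleanup idea (paths and threshold are placeholders; it assumes the nightly rclone copy has already pushed these files to Google Drive, so deleting the local copy just shifts playback to the gdrive branch):

#!/usr/bin/env python
# Hypothetical cleanup sketch: once the local RW branch passes a
# disk-usage threshold, delete the oldest files first. Assumes they
# were already uploaded by the nightly rclone copy.
import os

LOCAL = '/media/media/local'   # local RW branch of the unionfs mount
THRESHOLD = 0.90               # start deleting at 90% full

def used_fraction(path):
    st = os.statvfs(path)
    return 1.0 - float(st.f_bavail) / st.f_blocks

# Collect files oldest-first by modification time
files = []
for root, dirs, names in os.walk(LOCAL):
    for name in names:
        full = os.path.join(root, name)
        files.append((os.path.getmtime(full), full))
files.sort()

for mtime, full in files:
    if used_fraction(LOCAL) < THRESHOLD:
        break
    os.remove(full)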

You will notice I have a 2nd RO mount in unionfs for ACD. I use this as a failover in case node-gdrive-fuse goes down. I haven't really had to use it yet; things seem pretty stable right now.

Another thing I managed to do was hook into Sonarr and Radarr using custom scripts that force Plex to scan/refresh only the show(s) or movies that have just been added. This way, I can disable scanning in the GUI completely. I wonder whether using this would help with the rclone gdrive mount; I'll have to test that eventually. If someone wants to test this to see if you stop getting banned, that would be awesome. Below is my very "beta" Python script for this…you get the idea.

#!/usr/bin/env python

import os
import logging
import subprocess

logging.basicConfig(filename='/var/lib/plexmediaserver/Src/plex-scripts/sonar.log',
                    filemode='a',
                    format='%(asctime)s,%(msecs)d %(name)s %(levelname)s %(message)s',
                    datefmt='%H:%M:%S',
                    level=logging.DEBUG)

logging.info("Sonarr extra script post processing started.")

# Sonarr exposes the series path to custom scripts via this environment variable
directory = os.environ.get('sonarr_series_path')

logging.info("Directory: %s" % directory)

os.environ['LD_LIBRARY_PATH'] = '/usr/lib/plexmediaserver'
subprocess.call(
    ['/usr/lib/plexmediaserver/Plex Media Scanner',
     '--scan',
     '--refresh',
     '--section', '3',
     '--directory', directory])
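For the Radarr side, the same script should work with Radarr's own path variable instead of Sonarr's (the name below follows Radarr's custom-script conventions; verify it against your version):

directory = os.environ.get('radarr_movie_path')  # Radarr's equivalent of sonarr_series_path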

For this to work, Sonarr and Radarr need to have permission to run the Plex scanner executable. I have both Sonarr and Radarr running as the Plex user, so this works out great. Once a show or movie is downloaded, it fires a scan on just that directory and nothing else. And the best part: it happens immediately. I tried using "Run partial scan" in the GUI settings, but half the time it wouldn't work; it always seemed hit and miss. This way works very well.

Hope this helps.

@jmacul2 if you are running it all on the same server you can use the server setting:
Run a partial scan when changes are detected
When changes to library folders are detected, only scan the folder that changed.

This works pretty well with a unionfs setup; the problem is if you have 2 different servers, one for uploading content and one for Plex.

@Ajki I am running it all on the same server. I've tried "Run a partial scan when changes are detected". At first it worked fine, but then it would stop working. I wonder if it has something to do with the mounts being unmounted and remounted? Or maybe it's my unionfs settings? Because it starts out working for an hour or so and then just stops. Very weird.

Hmm, interesting; it could be something with unionfs.
If you want an overall solution, you could make a simple inotify script watching the RW folder directly (not the unionfs mount) that sends a Plex update scan for the matching folder in unionfs, as sketched below.
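Something like this minimal hypothetical sketch (paths and section ID are placeholders; it assumes inotify-tools is installed for the inotifywait binary):

#!/usr/bin/env python
# Hypothetical sketch: watch the local RW branch with inotifywait and
# fire a targeted Plex CLI scan on the matching unionfs path per file.
import os
import subprocess

RW_ROOT = '/media/media/local'      # local RW branch (adjust to your setup)
UNION_ROOT = '/media/media/sorted'  # merged unionfs mount that Plex sees
SECTION = '3'                       # Plex library section ID (adjust)

os.environ['LD_LIBRARY_PATH'] = '/usr/lib/plexmediaserver'

# -m: keep running, -r: recursive, react only to new or moved-in files
watcher = subprocess.Popen(
    ['inotifywait', '-m', '-r', '-e', 'create', '-e', 'moved_to',
     '--format', '%w%f', RW_ROOT],
    stdout=subprocess.PIPE, universal_newlines=True)

for event in watcher.stdout:
    changed = event.rstrip('\n')
    # Map the RW path onto the unionfs path and scan just that folder;
    # you may want to debounce so partially written files are not scanned
    union_dir = os.path.dirname(changed).replace(RW_ROOT, UNION_ROOT, 1)
    subprocess.call(
        ['/usr/lib/plexmediaserver/Plex Media Scanner',
         '--scan', '--refresh',
         '--section', SECTION,
         '--directory', union_dir])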

I also made myself a script that updates only changed folders, but it works a bit differently since I am uploading from another server.
Basically the upload server generates a list of the folders it's uploading and uploads that file to the crypt; then I have a cron script that reads from that file and sends Plex CLI updates just for those folders, roughly along the lines of the sketch below.
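The upload-server half could look like this hypothetical sketch (remote and path names are placeholders; the Plex-side cron job is essentially the script posted earlier in the thread):

#!/usr/bin/env python
# Hypothetical upload-side sketch: move each folder to the crypt remote,
# record its relative path, then push the list for the Plex server's
# cron job to consume. Remote names and paths are placeholders.
import subprocess

upload_dirs = ['TV/Some Show/Season 1']   # folders this run is uploading
list_file = '/tmp/plexupdate.txt'

with open(list_file, 'w') as fh:
    for folder in upload_dirs:
        subprocess.call(['rclone', 'move',
                         '/local/media/' + folder,
                         'gcrypt:' + folder])
        fh.write(folder + '\n')

# Upload the list itself so the Plex server can pick it up
subprocess.call(['rclone', 'copy', list_file, 'gcrypt:plexupdate'])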

Ha! I figured out the problem!

http://www.monodevelop.com/documentation/inotify-watches-limit/

I was out of watches…obviously, since my library is so big. I increased the number and now it's working as expected.
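For anyone hitting the same thing, the usual fix from that link is to raise the inotify watch limit, e.g. (pick a value that fits your library size):

echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf
sudo sysctl -p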

Nice find 🙂
Yeah, the partial updates are the best: no hassle, no scripting.
I would use them if I had a one-server setup.

Where exactly can I find the API usage info for gdrive on the page?

https://forum.rclone.org/uploads/default/original/1X/cfa63472e2f845455006957216fd9fdee9813e09.png

It's usually easier to start a new thread than to respond to a 2-year-old thread.

You log into your Google Cloud Console:

https://console.cloud.google.com

and you can check under APIs & Services, following through the menus.