Recommended Dropbox (Formerly Google Drive) and Plex Mount Settings

Thanks mate, really appreciate your help.

This morning was the first time in a week I was able to run a Plex scan without running into a ban. Very happy :slight_smile:

I have a feeling the issue was Bazarr on my post-processing server, which was triggering a library scan at 4-5am against an rclone 1.40 mount.

So just to confirm, I have the following config on my post-processing rclone mount, which handles Sonarr/Radarr etc. I've updated it to 1.45, but do I need to do anything else to stop bans going forward?

MOUNT
ExecStart=/usr/bin/rclone mount edrive: /home/media/.media/rclone --allow-other

MOVE FROM LOCAL TO CLOUD
rclone move /home/media/.hardlinks/ edrive: --no-traverse --size-only --exclude-from /home/media/.bin/excludes --transfers 3 --log-file /home/media/.bin/rclone.log


Ah hah! Yes, that version would definitely cause a problem, and I'm very happy you found the issue!

The defaults should be fine in the majority of cases. The only one you may want to raise is:

    --dir-cache-time duration            Time to cache directory entries for. (default 5m0s)

If you want to avoid some excessive API calls, make that something like 24 hours. New files are detected by API polling, so changes should still be picked up about every minute even if that value is high. The other VFS values are the ones I'm using, which are also the defaults now.
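
For example, using the mount line you posted above (a sketch; only the new flag is added):

    ExecStart=/usr/bin/rclone mount edrive: /home/media/.media/rclone --allow-other --dir-cache-time 24h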

Ok great - thanks so much again for your help. So no need to add the VFS lines to the rclone config? They are default now?

Yep. Depending on how you want to do things, you could always spell the defaults out explicitly in case they change in a future version:

    --vfs-cache-max-age duration         Max age of objects in the cache. (default 1h0m0s)
    --vfs-cache-mode string              Cache mode off|minimal|writes|full (default "off")
    --vfs-cache-poll-interval duration   Interval to poll the cache for stale objects. (default 1m0s)
    --vfs-read-chunk-size int            Read the source objects in chunks. (default 128M)
    --vfs-read-chunk-size-limit int      If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)

The values are listed up there if you want to plug 'em in or just leave as the defaults.
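
If you do want to pin them explicitly, a sketch of the full mount line (same remote and path as before; the values are just the current defaults plus the 24h dir cache) might look like:

    ExecStart=/usr/bin/rclone mount edrive: /home/media/.media/rclone --allow-other \
      --dir-cache-time 24h \
      --vfs-cache-mode off \
      --vfs-cache-max-age 1h \
      --vfs-cache-poll-interval 1m \
      --vfs-read-chunk-size 128M \
      --vfs-read-chunk-size-limit off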

Awesome, thanks so much. Learning a lot here.

Have made those changes :slight_smile:

Hey, can I ask why you have --vfs-cache-mode set to off?

I don't write anything to the mount, as I use my local storage for that, so there's no need.

Currently I have set up a cron job at 1 am so my NAS can upload new stuff to gdrive.

If my upload takes longer than 24 hours, how can I prevent another rclone move command from being started?

I use a basic check to see if it's running:

felix@gemini:~/scripts$ cat upload_cloud
#!/bin/bash
# RClone Config file
RCLONE_CONFIG=/data/rclone/rclone.conf
export RCLONE_CONFIG
LOCKFILE="/var/lock/`basename $0`"

(
  # Wait for lock for 5 seconds
  flock -x -w 5 200 || exit 1

  # Move older local files to the cloud
  /usr/bin/rclone move /data/local/ gcrypt: --checkers 3 --log-file /home/felix/logs/upload.log -v --tpslimit 3 --transfers 3 --drive-chunk-size 32M --exclude-from /home/felix/scripts/excludes --delete-empty-src-dirs

) 200> ${LOCKFILE}
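
If you want this to kick off at 1 am like you described, the cron entry just needs to call the wrapper (a sketch; adjust the script path to wherever you keep it):

    # m h dom mon dow  command  (added via crontab -e)
    0 1 * * * /home/felix/scripts/upload_cloud >/dev/null 2>&1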

I haven't used bash much; could you explain this line: LOCKFILE="/var/lock/`basename $0`"?

Could I also solve it like this?

#!/bin/sh
# Skip this run if the previous one is still flagged as in progress
if [ -f /share/CACHEDEV1_DATA/rclone/rclonecopy.pid ]; then
    exit 0
else
    touch /share/CACHEDEV1_DATA/rclone/rclonecopy.pid
    /opt/bin/rclone copy /share/CACHEDEV1_DATA gcrypt:shared --filter-from /share/CACHEDEV1_DATA/rclone/uploadfilter.txt --copy-links --checkers 3 --fast-list --log-file /share/CACHEDEV1_DATA/rclone/backupdata.log -v --tpslimit 3 --transfers 3 --config /share/CACHEDEV1_DATA/rclone/rclonegdrivebackup.conf
    rm /share/CACHEDEV1_DATA/rclone/rclonecopy.pid
fi

Just wanted to say thanks again. With your help, I'm no longer getting API bans every day.

And since moving from Plexdrive, my streaming server can now write directly to the mount (subtitles). Thank you! :grin::+1:

Animosity, with the change to vfs/refresh are you no longer getting an output file?

$0 is the name of the script itself -> upload_cloud, and basename strips any leading directory path, so the lock file ends up as /var/lock/upload_cloud.

For an explanation of how flock works:
https://stackoverflow.com/questions/21991298/flock-without-racing-conditions-in-linux-shell

And here's why your attempt is OK but not 100% safe:
https://stackoverflow.com/questions/185451/quick-and-dirty-way-to-ensure-only-one-instance-of-a-shell-script-is-running-at
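
If you want the same protection without that race, you can wrap your copy command in flock just like the upload script above (a sketch; it assumes bash and the flock utility are available on the NAS, and keeps your paths and flags as they are):

    #!/bin/bash
    # Serialise runs with an exclusive lock instead of a hand-rolled pid file
    LOCKFILE="/var/lock/$(basename "$0")"

    (
      # Give up after 5 seconds if another run still holds the lock
      flock -x -w 5 200 || exit 1

      /opt/bin/rclone copy /share/CACHEDEV1_DATA gcrypt:shared \
        --filter-from /share/CACHEDEV1_DATA/rclone/uploadfilter.txt \
        --copy-links --checkers 3 --fast-list \
        --log-file /share/CACHEDEV1_DATA/rclone/backupdata.log -v \
        --tpslimit 3 --transfers 3 \
        --config /share/CACHEDEV1_DATA/rclone/rclonegdrivebackup.conf

    ) 200> "${LOCKFILE}"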

My backup rclone copy command needs to check 1.5 million files each day, and for about 2 hours it seems to just be checking whether there are any new files. Only after those 2 hours does it start transferring new stuff.

Is there a way to speed this process up?

e.g.:

2018/11/30 16:03:57 INFO  : 
Transferred:   	         0 / 0 Bytes, -, 0 Bytes/s, ETA -
Errors:                 0
Checks:                 0 / 0, -
Transferred:            0 / 0, -
Elapsed time:   2h15m2.2s

What remote are you using? You could try --fast-list.
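
On a drive-type remote, --fast-list does the recursive listing in fewer, larger API calls (at the cost of more memory), e.g. (a sketch; the source path, remote name and log path below are just placeholders):

    # Same copy, but build the whole listing up front with fewer API calls
    rclone copy /data/local/ gcrypt: --fast-list --checkers 3 --transfers 3 \
      --log-file /path/to/backup.log -v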

Yes, the rc command doesn't produce a directory listing. It just outputs a return code, which I didn't see a need to keep.
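
For reference, the refresh is just an rc call against the running mount (a sketch; it assumes the mount was started with --rc enabled):

    # Refresh the whole directory cache; only a small JSON result such as
    # {"result": {"": "OK"}} comes back, rather than a file listing.
    rclone rc vfs/refresh recursive=true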

You could start up a new thread, as it's not related to the mount here, and we can help you out :slight_smile:

Silly question - but does your Rclone VFS config only download files to the local machine when they are played or analyzed?

I'm finding my data usage has skyrocketed since moving from Plexdrive to rclone VFS.

When a file is analyzed does it pull the whole file? Or just a portion?

When it plays from my Google Drive, it grabs chunks of the file rather than the whole thing. The same goes for analysis: it only reads some chunks of the file.

If you changed paths in Plex, it has to re-analyze your files, so that will take some time and data.
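
Those chunks are what the --vfs-read-chunk-size flags listed earlier control; if data usage is a concern you could tune them on the mount, e.g. (a sketch, and the 32M/1G values are just an illustration, not a recommendation):

    # Start reads with smaller 32M range requests and cap the doubling at 1G,
    # so a short scan or analyze pulls less data per file open.
    ExecStart=/usr/bin/rclone mount edrive: /home/media/.media/rclone --allow-other \
      --vfs-read-chunk-size 32M --vfs-read-chunk-size-limit 1G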