Recommended Dropbox (Formerly Google Drive) and Plex Mount Settings

This is my mount command:
/usr/bin/rclone mount --allow-other --dir-cache-time 48h --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit 2G --buffer-size 128M --umask 002 gcrypt: /home/XXXX/media &
I’m using only rclone mount with crypt.

So I followed a guide that had me set up my config as follows:

[Gdrive]
type = drive
client_id =
client_secret =
scope = drive
root_folder_id =
service_account_file =
token = <redacted>

[gcache]
type = cache
remote = Gdrive:/mnt/rclone_cache
plex_url = http://127.0.0.1:32400
plex_username = <redacted>
plex_password = <redacted>
chunk_size = 10M
info_age = 24h
chunk_total_size = 10G
plex_token = <redacted>

[gcrypt]
type = crypt
remote = gcache:/mnt/crypt
filename_encryption = standard
directory_name_encryption = true
password = <redacted>
password2 = <redacted>

Now I realize I screwed up with the buckets, but I've already started to upload a good bit to it, so eh, whatever :).

So if I understand this correctly, you're saying that with VFS I don't need the cache backend at all. If I create a new config for the crypt to point directly at the Drive location instead of going through the cache, is the Plex information no longer relevant? Or should I just leave it as it is?

I'm using an OVH dedicated machine for this, so I'm limited to their bandwidth, which doesn't look terrible. I'm just trying to get this optimized as much as possible before I go letting my users go crazy on this new machine. Thank you @Animosity022 for your guide above; I'm now using your VFS mounts, but I'm mounting through the crypt/cache and not straight-up crypt.

Thank you all for your help.

You’d just make another entry in your rclone.conf, but your rclone.conf looks pretty odd as you have paths in some odd spots.

Yeah, totally realized what I did after the fact. Thought I was doing it properly, then realized I made it all goofy.

So, to confirm: there is no need for the Plex settings to be in there any longer?

If I strip out my cache stuff, I just use:

[GD]
type = drive
client_id = someid.apps.googleusercontent.com
client_secret = supersecret
token = {"access_token":"longtoken","token_type":"Bearer","refresh_token":"refreshtoken","expiry":"2018-08-10T19:21:21.236094471-04:00"}

[gcrypt]
type = crypt
remote = GD:media
filename_encryption = standard
password = somepassword
password2 = somepassword
directory_name_encryption = true

My mount command mounts gcrypt:
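A quick sanity check before pointing the mount at a reworked remote like this (these commands are just a generic check, not part of the original reply) is to list the crypt remote and confirm the existing encrypted data shows up through it:

rclone lsd gcrypt:
rclone ls gcrypt: --max-depth 2

If the listing comes back decrypted and non-empty, the mount can be switched over safely.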


Thank you for the feedback.

@Animosity022, do I get it right that with your mergerfs setup in the OP, I can write files, have them stay local until they're uploaded, and then use them from GDrive once the upload finishes?

I upload them via a cron job with an rclone move script, so it moves the local files to my GD and the mount picks them up automatically via the 1 min polling interval.

Sorry for the late reply.
$ go version
go version go1.6.2 linux/amd64

I'm back. I changed my setup to one big server running Docker containers, so I'm also back to VFS for gdrive, with local storage in front for new media (and to offload the analyzing of new media). I haven't kept up the last 2 weeks, but did you mention a coming patch for buffer-size?

@Animosity022, would you mind sharing your move command, please?

I've added a few bits to mine to stop partial files getting moved, but I can't make my mind up about how many checkers and transfers to use. Is more better, or does that make too many requests? I have a 200Mbps upload.

Thanks

rclone move /mnt/user/rclone_upload/google_vfs/ gdrive_media_vfs: -vv --drive-chunk-size 512M --checkers 10 --fast-list --transfers 4 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --bwlimit 8500k
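As a rough sanity check on that --bwlimit figure (my own arithmetic, assuming Google Drive's 750 GB/day server-side upload quota, which the command itself doesn't mention):

# 750 GB per day spread over 86400 seconds:
echo $(( 750 * 1000 * 1000 / 86400 ))   # ≈ 8680 KB/s (decimal GB)
echo $(( 750 * 1024 * 1024 / 86400 ))   # ≈ 9102 KiB/s (binary GiB)

Either way, a limit around 8500k keeps a move that runs around the clock just under the daily cap.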

Speed really isn't my concern, so I let it run slow and steady since it runs overnight anyways:

felix@gemini:~/scripts$ cat upload_cloud
#!/bin/bash
LOCKFILE="/var/lock/`basename $0`"

(
  # Wait for lock for 5 seconds
  flock -x -w 5 200 || exit 1

  # Move older local files to the cloud
  /usr/bin/rclone move /data/local/ gcrypt: --checkers 3 --fast-list --log-file /home/felix/logs/upload.log -v --tpslimit 3 --transfers 3 --exclude-from /home/felix/scripts/excludes --delete-empty-src-dirs

) 200> ${LOCKFILE}
felix@gemini:~/scripts$ cat excludes
*.srt
*partial~
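For completeness, a crontab entry along these lines (the schedule here is just an example, not necessarily the one in use) is all it takes to have the script run overnight:

# m h dom mon dow  command
0 2 * * * /home/felix/scripts/upload_cloud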

What is the buffer seek patch you're talking about that's incoming? I'm still running this:

[Unit]
Description=rclone cache
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
ExecStart=/usr/bin/rclone mount plexcache: /home/plex/gdrive/plexcache \
   --allow-other \
   --dir-cache-time 48h \
   --vfs-read-chunk-size 48M \
   --vfs-read-chunk-size-limit 2G \
   --buffer-size 128M \
   --syslog \
   --umask 002 \
   --bind 192.168.2.120 \
   --log-level INFO
ExecStop=/bin/fusermount -uz /home/plex/gdrive/plexcache
Restart=on-abort
User=plex
Group=plex

[Install]
WantedBy=default.target
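Whenever a unit file like this changes, systemd needs to be told about it before the new flags take effect (the unit name below is a placeholder for whatever the file is saved as):

sudo systemctl daemon-reload
sudo systemctl restart rclone-plexcache.service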

I seem to be having some issues with buffering. Do you think your latest changes might help? My machine has 16GB of RAM and I have a 300Mbps pipe. I thought it might have been a local Comcast issue, but it hasn't gone away.

The patch I was talking about is a pull request here:

It fixes some stuff with seeking in a file and reusing the buffer instead of discarding it.

Plex is picky at times and will open a file a few times before playing it, so it would be helpful.

I’ve been running with the default buffer for a week or so and haven’t noticed any issues:

/usr/bin/rclone mount gcrypt: /GD --allow-other --dir-cache-time 72h --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off --umask 002 --bind 192.168.1.30 --log-level INFO --log-file /home/felix/logs/rclone.log

My internet and peering to Google are very solid compared to other folks', though. I can routinely max my line and rarely see any timeouts or anything else in my logs.

I just read the pull request. If this is implemented, does it mean the buffer should be bigger than drive-chunk-size to aid seeking, memory permitting?

Animosity, I noticed that you removed the ExecStartPost=~~~~/GD_find from your systemd file. Are you handling the initial cache build a different way now?

Yes. I do a group of systemd services to start up all my stuff now, so the find got moved out.

I have a gmedia service that does the mount, mergerfs and find as a group instead.
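A minimal sketch of that kind of grouping (the unit names here are made up; this isn't necessarily the exact layout described above) is an umbrella service that simply requires the mount, the mergerfs layer and the find step:

[Unit]
Description=gmedia group (sketch)
Requires=gmedia-rclone.service gmedia-mergerfs.service gmedia-find.service
After=gmedia-rclone.service gmedia-mergerfs.service gmedia-find.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/true

[Install]
WantedBy=default.target

Starting the umbrella unit then pulls the whole stack up in order; adding PartOf=gmedia.service to the member units makes stops and restarts propagate down as well.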

@Animosity022, thank you, your configs are working very well for me on the VFS and Plex side. I can't figure out how to make it work with Radarr and Sonarr, though. I use your very same commands, with the following paths:

  • /home/user/{tv,movies} is local storage, where Radarr and Sonarr hardlink to (the download happens somewhere else)
  • /home/user/gdrive/{tv,movies} is the rclone mount vfs path
  • /home/user/plex/{tv,movies} is where I mergerfs for Plex

Plex sees both local and mounted files. All well.

If Radarr and Sonarr link to the local /home/user/{tv,movies}, all is well.

The cron script then rclone moves the files away from here (to appear in /home/user/gdrive/{tv,movies}), so Radarr and Sonarr think the files are now missing.

If Radarr and Sonarr try to link to the mergerfs /home/user/plex/{tv,movies}, nothing works. Also, if I try to manually hardlink to the mergerfs /home/user/plex/{tv,movies}, I receive the error “ln: failed to create hard link ‘file.mkv’ => ‘/home/user/downloads/file.mkv’: Invalid cross-device link”. All folders are on the same disk.

How did you solve this issue?

A hardlink cannot cross filesystems, so everything has to remain on the mergerfs mount.

So my torrents write to /gmedia/torrents and hardlink over to /gmedia/Movies or /gmedia/TV. The first-write area points back to /data/local/TV and /data/local/Movies, and the underlying torrents are in /data/local/torrents; /data is all a single set of mirrored disks for me.

My upload script doesn't upload /data/local/torrents, as that always stays local.
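For anyone hitting the same cross-device error, the fix is to keep both ends of the link inside the pool. A mergerfs invocation along these lines (paths and policies are illustrative, not necessarily the exact options used above) keeps new files, and therefore hardlinks, on the local branch:

/usr/bin/mergerfs -o use_ino,allow_other,category.create=ff /data/local:/GD /gmedia

With /data/local listed first and a first-found create policy, a torrent written under /gmedia/torrents and a hardlink made into /gmedia/Movies both end up on /data/local, i.e. on the same underlying filesystem, so ln succeeds.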

It suddenly works. I don't know whether it was because I added use_ino to my mergerfs options, but it works now :)

Anyway, thanks again for sharing all this info. The rclone cache was working well for me, but VFS is taking it to another level.