Recommended Dropbox (Formerly Google Drive) and Plex Mount Settings

Hmm, you’re correct. I reverted back to cache+vfs last night, upped the time of both to be equal (168h) with a poll interval of 1m and new files do show up on top of my API hits dropping to a record low! I used to cap out at ~300/100s every hour when Kodi was scanning and now it looks like I’m sitting around 15/100s with a few peaks of 75/100s. This is totally sustainable and awesome if it doesn’t error out at some point!

Thank you so much. Now the only thing I need is folder mtime to be updated when there’s a write so that I can turn off deep scanning in Kodi and get faster library updates. That’s the only drag I have now and it’s off topic here.

What is your config for integrating cache & vfs in one mount?

Here’s what I’m currently using, but I plan on tweaking it a little once I understand a few more of the concepts. I don’t think I need cmount locally since I’m only using Kodi, but here it is:

rclone.service

[Unit]
Description=Mount and cache Google drive to /mnt/drive
After=syslog.target local-fs.target network.target

[Service]
Type=notify
ExecStartPre=-/bin/mkdir /mnt/drive
ExecStartPre=/bin/sleep 5
ExecStart=/usr/bin/rclone mount drivecache: /mnt/drive \
  --allow-non-empty \
  --allow-other \
  --config /home/bishop/.config/rclone/rclone.conf \
  --dir-cache-time 168h \
  --poll-interval 1m \
  --vfs-read-chunk-size 64M \
  --vfs-read-chunk-size-limit 2G \
  --buffer-size 128M \
  --syslog \
  --umask 002 \
  --uid 1000 \
  --gid 1000 \
  --log-level INFO
ExecStop=/bin/fusermount -u -z /mnt/drive
Restart=always
[Install]
WantedBy=multi-user.target

rclone.conf

[drive]
type = drive
client_id = removed
client_secret = removed
scope = drive
root_folder_id =
service_account_file =
token = removed

[drivecache]
type = cache
remote = drive:
plex_url =
plex_username =
plex_password =
chunk_size = 10M
info_age = 168h
chunk_total_size = 5G

This has dropped API hits drastically, uploads subtitles that I download locally to the drive, and has improved the experience overall. Other than folder mtime, I’d like to figure out how to switch from a mover script to rclone handling everything by excluding ~partial files, the unionfs folder, and my download folder. I haven’t tested whether --exclude works on the mount command. The only reason I’d still use unionfs is to keep the download directory on the same filesystem as the mount so that hardlinking works in Sonarr/Radarr.
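For what it’s worth, rclone’s standard filter flags are accepted by the mount command, so something along these lines might do it; the patterns below are placeholders for my layout and I haven’t actually tested them yet:

# Example only: keep partial files and local-only folders out of the mounted view.
# Adjust the patterns to your own naming and layout.
/usr/bin/rclone mount drivecache: /mnt/drive \
  --allow-other \
  --umask 002 \
  --exclude "*.~partial" \
  --exclude "/downloads/**" \
  --exclude "/unionfs/**"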

My seedbox downloads and serves Plex to friends and family. Locally I use Kodi because I prefer its interface, but I need the files available on the drive ASAP.

Thanks for sharing. This looks interesting. Never tried using vfs with cache before.

@Animosity022, ever tried something like this?

@Animosity022

I’m trying to follow your instructions and having a small issue. I can’t seem to figure out how to set the GOPATH; I think that’s the problem, anyway. Would you mind taking a look at this pastebin link (it wouldn’t let me post the code because I’m a new user)? https://pastebin.com/BHPgfW1b

Thanks!
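For reference, a minimal Go workspace setup usually looks something like the lines below; the paths are only examples, and the repository path assumes the ncw/rclone location that was current for this rclone release:

# Example workspace; adjust the paths to taste.
export GOPATH=$HOME/go
export PATH=$PATH:$GOPATH/bin
go get -u github.com/ncw/rclone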

What’s your go version?

felix@gemini:~$ go version
go version go1.7.4 linux/amd64

How are you finding the performance of the cache VFS mount vs. a crypt VFS mount? I’m tempted to give this a go to remove the need for unionfs and a background transfer job.

Thanks

I’ve never used crypt, so unfortunately I can’t compare. Once I find time to test it on my seedbox, I plan on continuing to use unionfs to marry my downloads folder to the drive and exclude it. Keeping torrents on a FUSE mount is a terrible idea (rtorrent freaks out), and I couldn’t get the mountpoint to keep a local folder that doesn’t upload; the folder would be wiped each time.
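In case it helps anyone following along, the kind of unionfs-fuse overlay I mean looks roughly like this; the paths are placeholders and I haven’t settled on a final layout:

# Local downloads are the writable branch; the rclone mount is read-only.
# New writes land in /local/downloads and can be uploaded later.
unionfs -o cow,allow_other /local/downloads=RW:/mnt/drive=RO /mnt/union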

I’m going to give a cache VFS mount a go tomorrow and will report back on launch times, API calls, etc.

The VFS read flags do nothing when combined with the cache backend, as the cache backend uses its own chunking method.

@Animosity022 can you confirm that those settings are a complete replacement for plexdrive and allow Plex scans without a ban? I am using them now and it’s working great, better than plexdrive + crypt, but I’m afraid to run a scan. Does the dir-cache-time help avoid the bans? Sorry if this is a dumb question; I read through the thread but couldn’t see anything confirming this.

Thanks!

The dir-cache-time flag has been around for a bit.

The change that stops bans is chunked reading. Both vfs-read-chunk-size and the cache backend do chunked reading, which is what avoids the bans: Plex would otherwise grab a file multiple times when analyzing it, and that is what used to trigger bans.

Pick one or the other and use it. You can scan away; it won’t cause a problem.
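Concretely, the two options look something like this, trimmed to just the relevant flags and using the remote names from the configs above:

# Option 1: VFS chunked reading straight off the drive remote
rclone mount drive: /mnt/drive --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit 2G

# Option 2: the cache backend, which does its own chunking via chunk_size in rclone.conf
rclone mount drivecache: /mnt/drive --dir-cache-time 168h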

Hi,
I’ve just switched from plexdrive to the VFS cache. I followed your suggestions, but I have buffering problems during Plex playback. I’m on a 100 Mbit server with 8 GB of RAM, and at home I have a 1 Gbit connection.
The tests I’ve made are:

  1. 720p TV series (6-8 Mbit bitrate): streams without problems
  2. 1080p series (10-16 Mbit bitrate): buffering problems

This is my rclone mount command:
/usr/bin/rclone mount --allow-other --dir-cache-time 48h --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit 2G --buffer-size 128M --umask 002 gcrypt: /home/XXXX/media &

I’ve also tried with the same result this command:
/usr/bin/rclone mount --allow-other --dir-cache-time 48h --vfs-read-chunk-size 16M --vfs-read-chunk-size-limit 2G --buffer-size 2G --umask 002 gcrypt: /home/XXXX/media &

Can you help me? With plexdrive everything worked fine. My rclone version is the latest stable (1.42). I’m using crypt.

Are you using unionfs?

No. Only rclone mount and crypt

Do you have any logs of when you are trying to do a 1080?

I don’t know if this makes any difference, but in the notes about VFS, it says:
vfs-read-chunk-size should be greater than buffer-size to prevent too many requests from being sent when opening a file.

The note I’m referring to is in the recently closed pull request:

When I first switched from cache to VFS, I too saw buffering when vfs-read-chunk-size was less than buffer-size.
But after I read that note and made the change, it has been working smoothly.
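Applied to the gcrypt mount a few posts up, that would mean keeping --buffer-size below --vfs-read-chunk-size, for example (values purely illustrative):

/usr/bin/rclone mount --allow-other --dir-cache-time 48h --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit 2G --buffer-size 32M --umask 002 gcrypt: /home/XXXX/media &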

I’m also using a fairly recent beta.

Yeah, I’ve been playing with some settings and was going to test a bit more to see how it works over a week or so. Recently, I’ve tuned things further based on his defaults and am seeing how that works before I update my settings.

I’ve been testing this:

felix@gemini:~$ ps -ef | grep rclone
felix     7708     1  2 13:16 ?        00:00:05 /home/felix/go/bin/rclone cmount gcrypt: /GD --allow-other -o auto_cache -o sync_read --dir-cache-time 72h --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off --buffer-size 100M --umask 002 --bind 192.168.1.30 --log-level INFO --log-file /home/felix/logs/rclone.log

Aren’t you worried about running out of memory?

The vfs-read-chunk-size-limit and vfs-read-chunk-size flags don’t have anything to do with memory. They just set the size of the range requests rclone makes to download the file.

The buffer-size flag is what keeps file data in memory.

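To put rough numbers on that: read memory is roughly --buffer-size per open file, so with the 100M buffer above, four concurrent streams would sit around 4 × 100M ≈ 400M, no matter how large --vfs-read-chunk-size-limit is set.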