My VFS SweetSpot - Updated 11-Aug-2018


This is my mount command:
/usr/bin/rclone mount --allow-other --dir-cache-time 48h --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit 2G --buffer-size 128M --umask 002 gcrypt: /home/XXXX/media &
I’m using only rclone mount with crypt.


So I followed a guide that had me set up my config as follows:

[Gdrive]
type = drive
client_id =
client_secret =
scope = drive
root_folder_id =
service_account_file =
token = <redacted>

[gcache]
type = cache
remote = Gdrive:/mnt/rclone_cache
plex_url =
plex_username = <redacted>
plex_password = <redacted>
chunk_size = 10M
info_age = 24h
chunk_total_size = 10G
plex_token = <redacted>

[gcrypt]
type = crypt
remote = gcache:/mnt/crypt
filename_encryption = standard
directory_name_encryption = true
password = <redacted>
password2 = <redacted>

Now I realize I screwed up with the buckets, but I've already started to upload a good bit to it, so eh, whatever :).

So if I understand this correctly, you're saying that with VFS I don't need the cache backend at all. So if I create a new crypt config that points directly at the drive remote instead of going through the cache, is the Plex information no longer relevant? Or should I just leave it as it is?
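In other words, would a new crypt remote that points straight at the drive remote look roughly like this? (Just a sketch - the gcrypt_direct name and the media path are placeholders, not from my actual config.)

[gcrypt_direct]
type = crypt
remote = Gdrive:media
filename_encryption = standard
directory_name_encryption = true
password = <redacted>
password2 = <redacted>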

I’m using an OVH dedicated machine for this, so I’m limited to their bandwidth, which doesn’t look terrible. I’m just trying to get this optimized as much as possible before I let my users go crazy on this new machine. Thank you @Animosity022 for your guide above, as I am now using your VFS mounts, but I’m mounting with the crypt/cache chain and not straight-up crypt.

Thank you all for your help.


You’d just make another entry in your rclone.conf, but yours looks pretty odd, as you have paths in some unusual spots.


Yeah, I totally realized what I did after the fact. I thought I was doing it properly, then realized I made it all goofy.

So, to confirm: there is no need for Plex to be in there any longer?


If I strip out my cache stuff, I just use:

[GD]
type = drive
client_id =
client_secret = supersecret
token = {"access_token":"longtoken","token_type":"Bearer","refresh_token":"refreshtoken","expiry":"2018-08-10T19:21:21.236094471-04:00"}

[gcrypt]
type = crypt
remote = GD:media
filename_encryption = standard
password = somepassword
password2 = somepassword
directory_name_encryption = true

My mount command mounts the gcrypt: remote.


Thank you for the feedback


@Animosity022, do I get it right that with your mergerfs setup in the OP, I can write files, have them stay local until they’ve been uploaded, and then have them served from GDrive once the upload finishes?
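My mental model of that layout is something like the following - the paths and options here are just my guesses, not your exact setup:

# local branch listed first with category.create=ff, so new writes land on local disk;
# reads fall through to the rclone mount once the file has been moved to the cloud
mergerfs -o defaults,allow_other,use_ino,category.create=ff /data/local:/GD /gmedia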


I upload them via a cron job that runs an rclone move script, so it moves the local files to my GD, and the mount picks them up automatically via the 1-minute polling interval.
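As a rough example, the cron entry for that could look something like this (the schedule here is a placeholder; the actual script is further down):

# run the overnight upload script (time is a placeholder)
0 2 * * * /home/felix/scripts/upload_cloud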


Sorry for the late reply.
$ go version
go version go1.6.2 linux/amd64


I’m back. I changed my setup to one big server running Docker containers, so I’m also back to VFS for gdrive with local storage in front for new media (and to offload the analyzing of new media). I haven’t kept up the last two weeks, but you mentioned a coming patch for buffer-size?


@Animosity022, would you mind sharing your move command, please?

I’ve added a few bits to mine to stop partial files getting moved, but I can’t make up my mind about how many checkers and transfers to use - is more better, or is that making too many requests? I have a 200Mbps upload.


rclone move /mnt/user/rclone_upload/google_vfs/ gdrive_media_vfs: -vv --drive-chunk-size 512M --checkers 10 --fast-list --transfers 4 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --bwlimit 8500k


Speed really isn’t my concern so I let it run slow and steady as it runs overnight anyways:

felix@gemini:~/scripts$ cat upload_cloud
LOCKFILE="/var/lock/`basename $0`"

(
  # Wait for lock for 5 seconds
  flock -x -w 5 200 || exit 1

# Move older local files to the cloud
/usr/bin/rclone move /data/local/ gcrypt: --checkers 3 --fast-list --log-file /home/felix/logs/upload.log -v --tpslimit 3 --transfers 3 --exclude-from /home/felix/scripts/excludes --delete-empty-src-dirs

) 200> ${LOCKFILE}
felix@gemini:~/scripts$ cat excludes
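(The contents of that excludes file aren’t shown here - a hypothetical version that mirrors the patterns from the move command earlier in the thread would be:)

.unionfs/**
*fuse_hidden*
*_HIDDEN
.recycle**
*.backup~*
*.partial~*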


What is the incoming buffer seek patch that you’re talking about? I am still running this:

[Unit]
Description=rclone cache

[Service]
ExecStart=/usr/bin/rclone mount plexcache: /home/plex/gdrive/plexcache \
   --allow-other \
   --dir-cache-time 48h \
   --vfs-read-chunk-size 48M \
   --vfs-read-chunk-size-limit 2G \
   --buffer-size 128M \
   --syslog \
   --umask 002 \
   --bind \
   --log-level INFO
ExecStop=/bin/fusermount -uz /home/plex/gdrive/plexcache


I seem to be having some issues with buffering. Do you think your latest changes might help? My machine has 16GB of RAM and I have a 300Mbps pipe. I thought it might have been a local Comcast issue, but it hasn’t gone away.


The patch I was talking about is a pull request here:

It fixes some stuff with seeking in a file and reusing the buffer instead of discarding it.

Plex is picky at times and will open a file a few times before playing it so it would be helpful.

I’ve been running with the default buffer for a week or so and haven’t noticed any issues:

/usr/bin/rclone mount gcrypt: /GD --allow-other --dir-cache-time 72h --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off --umask 002 --bind --log-level INFO --log-file /home/felix/logs/rclone.log

My internet and peering to Google are very solid compared to other folks’, though. I can routinely max out my line and rarely see any timeouts or anything else in my logs.


I just read the pull request. If this is implemented, does it mean the buffer should be bigger than drive-chunk-size to aid seeking, memory permitting?