My VFS SweetSpot - Updated 11-Aug-2018


#101

This is my mount command:
/usr/bin/rclone mount --allow-other --dir-cache-time 48h --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit 2G --buffer-size 128M --umask 002 gcrypt: /home/XXXX/media &
I’m using only rclone mount with crypt.


#102

So I followed a guide that had me set up my config as follows:

[Gdrive]
type = drive
client_id =
client_secret =
scope = drive
root_folder_id =
service_account_file =
token = <redacted>

[gcache]
type = cache
remote = Gdrive:/mnt/rclone_cache
plex_url = http://127.0.0.1:32400
plex_username = <redacted>
plex_password = <redacted>
chunk_size = 10M
info_age = 24h
chunk_total_size = 10G
plex_token = <redacted>

[gcrypt]
type = crypt
remote = gcache:/mnt/crypt
filename_encryption = standard
directory_name_encryption = true
password = <redacted>
password2 = <redacted>

Now I realize I screwed up with the buckets, but I’ve already started to upload a good bit to it, so eh, whatever :).

So if I understand this correctly, you’re saying that with VFS I don’t need the cache backend at all. If I create a new crypt config that points directly at the Drive location instead of going through the cache, is the Plex information no longer relevant? Or should I just leave it as it is?

I’m using an OVH dedicated machine for this, so I’m limited to their bandwidth, which doesn’t look terrible. I’m just trying to get this optimized as much as possible before I let my users go crazy on this new machine. Thank you @Animosity022 for your guide above; I’m now using your VFS mounts, but I’m mounting through crypt/cache rather than straight crypt.

Thank you all for your help.


#103

You’d just make another entry in your rclone.conf, but yours looks pretty odd, as you have paths in some unusual spots.
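For example, if your encrypted data actually lives under Gdrive:/mnt/rclone_cache/mnt/crypt (which is where your gcache + gcrypt paths above end up resolving), a cache-free crypt entry would look something like this. The section name is just a placeholder, and you should double-check the path against what you actually see in Drive:

[gcrypt_direct]
type = crypt
remote = Gdrive:/mnt/rclone_cache/mnt/crypt
filename_encryption = standard
directory_name_encryption = true
password = <redacted>
password2 = <redacted>

Then you’d mount gcrypt_direct: instead of gcrypt: and the cache/Plex settings wouldn’t be used at all.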


#104

Yeah, I totally realized what I did after the fact. I thought I was doing it properly, then realized I made it all goofy.

So, to confirm: there is no need for the Plex settings to be in there any longer?


#105

If I strip out my cache stuff, I just use:

[GD]
type = drive
client_id = someid.apps.googleusercontent.com
client_secret = supersecret
token = {"access_token":"longtoken","token_type":"Bearer","refresh_token":"refreshtoken","expiry":"2018-08-10T19:21:21.236094471-04:00"}

[gcrypt]
type = crypt
remote = GD:media
filename_encryption = standard
password = somepassword
password2 = somepassword
directory_name_encryption = true

My mount command mounts gcrypt:


#106

Thank you for the feedback


#107

@Animosity022 do I understand correctly that with your mergerfs setup in the OP, I can write files, keep them local until they’ve been uploaded, and then use them from GDrive once the upload finishes?
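As I understand the OP, mergerfs overlays a local staging directory on top of the rclone mount, so writes land locally and reads fall through to the cloud copy once it exists there. Roughly something like this (paths and options are my guess, not the OP’s exact command):

# local branch first, rclone mount second; category.create=ff makes new files land on the local branch
mergerfs -o use_ino,allow_other,category.create=ff /data/local:/GD /gmedia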


#108

I upload them via a cron job that runs an rclone move script, so it moves the local files to my GD, and my GD picks them up automatically via the 1 min polling interval.
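For example, a nightly crontab entry along these lines does the job (the 2am schedule is just illustrative; the full script is in #112 below):

# m h dom mon dow   command
0 2 * * * /home/felix/scripts/upload_cloud

The 1 min polling is the mount’s --poll-interval, which defaults to 1m, so moved files show up on the mount within about a minute.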


#109

Sorry for the late reply.
$ go version
go version go1.6.2 linux/amd64


#110

I’m back. I changed my setup to one big server running Docker containers, so I’m also back to VFS for gdrive with local storage in front for new media (and to offload the analysis of new media). I haven’t kept up over the last 2 weeks, but you mentioned an upcoming patch for buffer-size?


#111

@Animosity022 would you mind sharing your move command, please?

I’ve added a few bits to mine to stop partial files from getting moved, but I can’t make up my mind about how many checkers and transfers to use. Is more better, or does that make too many requests? I have a 200Mbps upload.

Thanks

rclone move /mnt/user/rclone_upload/google_vfs/ gdrive_media_vfs: -vv --drive-chunk-size 512M --checkers 10 --fast-list --transfers 4 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --bwlimit 8500k


#112

Speed really isn’t my concern, so I let it run slow and steady since it runs overnight anyway:

felix@gemini:~/scripts$ cat upload_cloud
#!/bin/bash
LOCKFILE="/var/lock/`basename $0`"

(
  # Wait for lock for 5 seconds
  flock -x -w 5 200 || exit 1

# Move older local files to the cloud
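# --tpslimit 3 keeps the API call rate low, --exclude-from reads the patterns listed below, and --delete-empty-src-dirs cleans up the emptied local directories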
/usr/bin/rclone move /data/local/ gcrypt: --checkers 3 --fast-list --log-file /home/felix/logs/upload.log -v --tpslimit 3 --transfers 3 --exclude-from /home/felix/scripts/excludes --delete-empty-src-dirs

) 200> ${LOCKFILE}
felix@gemini:~/scripts$ cat excludes
*.srt
*partial~

#113

What is the buffer seek patch you were talking about that’s incoming? I’m still running this:

[Unit]
Description=rclone cache
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
ExecStart=/usr/bin/rclone mount plexcache: /home/plex/gdrive/plexcache \
   --allow-other \
   --dir-cache-time 48h \
   --vfs-read-chunk-size 48M \
   --vfs-read-chunk-size-limit 2G \
   --buffer-size 128M \
   --syslog \
   --umask 002 \
   --bind 192.168.2.120 \
   --log-level INFO
ExecStop=/bin/fusermount -uz /home/plex/gdrive/plexcache
Restart=on-abort
User=plex
Group=plex

[Install]
WantedBy=default.target
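To reload and (re)start it after edits, something like this works (assuming the unit file is saved as, say, rclone-plexcache.service):

systemctl daemon-reload
systemctl enable --now rclone-plexcache.service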

I seem to be having some issues with buffering. Do you think your latest changes might help? My machine has 16GB of RAM and I have a 300Mbps pipe. I thought it might have been a local Comcast issue, but it hasn’t gone away.


#114

The patch I was talking about is a pull request here:

It fixes some stuff with seeking in a file and reusing the buffer instead of discarding it.

Plex is picky at times and will open a file a few times before playing it, so it should be helpful.

I’ve been running with the default buffer for a week or so and haven’t noticed any issues:

/usr/bin/rclone mount gcrypt: /GD --allow-other --dir-cache-time 72h --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off --umask 002 --bind 192.168.1.30 --log-level INFO --log-file /home/felix/logs/rclone.log

My internet and peering to Google are very solid compared to other folks’, though. I can routinely max out my line and rarely see any timeouts or anything else in my logs.


#115

I just read the pull request. If this is implemented, does it mean the buffer should be bigger than drive-chunk-size to aid seeking, memory permitting?