My VFS SweetSpot - Updated 15-Jul-2018


#1

I was trying to find a way to use the nice fuse auto_cache parameter to deal with Plex repeatedly opening and closing files, so based on @ncw’s other post on how to leverage the native Linux fuse flags by compiling with the cmount option, I did that.

I wanted to remove my mergerfs and just use --vfs-cache-mode writes to deal with uploads, removing a layer from the puzzle. That got me down to my quickest analyze/mediainfo/ffprobe times so far, which leads to my quickest start times.

I build from source so I can use cmount and pass auto_cache to rclone.

Set where the binary will be installed:
export GOBIN=$HOME/go/bin

Build from source:

go get -u -v github.com/ncw/rclone

cd ~/go/src/github.com/ncw/rclone
go install -tags cmount
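
If the build worked, cmount should show up as an rclone command - a quick sanity check (assuming $GOBIN from above is where you run it from):

~/go/bin/rclone version
~/go/bin/rclone cmount --help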

My mount command:

felix@gemini:/etc/systemd/system$ cat rclone.service
[Unit]
Description=RClone Service
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
ExecStart=/home/felix/go/bin/rclone cmount gcrypt: /gmedia \
   --allow-other \
   --dir-cache-time 48h \
   --vfs-read-chunk-size 16M \
   --vfs-read-chunk-size-limit 2G \
   --buffer-size 2G \
   --syslog \
   --umask 002 \
   --bind 192.168.1.30 \
   --cache-dir /data/rclone \
   --vfs-cache-mode writes \
   -o auto_cache \
   -o sync_read \
   --log-level INFO
ExecStop=/bin/fusermount -uz /gmedia
ExecStartPost=/home/felix/scripts/gmedia_find
Restart=on-abort
User=felix
Group=felix

[Install]
WantedBy=default.target
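
With the unit file in place, the usual systemd steps wire it up (standard systemctl usage, nothing rclone-specific):

sudo systemctl daemon-reload
sudo systemctl enable rclone.service
sudo systemctl start rclone.service
systemctl status rclone.service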

If you did not want to compile and would rather use the regular rclone mount:

root@gemini:/etc/systemd/system# cat rclone.service
[Unit]
Description=RClone Service
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
ExecStart=/home/felix/go/bin/rclone mount gcrypt: /GD \
   --allow-other \
   --dir-cache-time 48h \
   --vfs-read-chunk-size 16M \
   --vfs-read-chunk-size-limit 2G \
   --buffer-size 2G \
   --syslog \
   --umask 002 \
   --bind 192.168.1.30 \
   --log-level INFO
ExecStop=/bin/fusermount -uz /GD
ExecStartPost=/home/felix/scripts/GD_find
Restart=on-abort
User=felix
Group=felix

[Install]
WantedBy=default.target
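
The ExecStartPost scripts (gmedia_find / GD_find) aren’t shown anywhere in this thread; a minimal sketch of what such a script could look like, assuming its only job is to pre-warm the directory cache by walking the mount once:

#!/bin/bash
# Hypothetical pre-warm script: walking the mount pulls directory and
# file names into the dir cache, which --dir-cache-time 48h then keeps.
/usr/bin/find /GD -type d > /dev/null 2>&1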

This gives me 1-2s mediainfo times, with the second run returning almost instantly as it comes from cache.

55GB 4K Movie:

felix@gemini:/gmedia/Radarr_Movies/Tomb Raider (2018)$ time mediainfo Tomb\ Raider\ \(2018\).mkv | grep blah

real	0m2.166s
user	0m0.202s
sys	0m0.016s
felix@gemini:/gmedia/Radarr_Movies/Tomb Raider (2018)$ time mediainfo Tomb\ Raider\ \(2018\).mkv | grep blah

real	0m0.138s
user	0m0.125s
sys	0m0.015s
felix@gemini:/gmedia/Radarr_Movies/Tomb Raider (2018)$ du -ms *.mkv
55133	Tomb Raider (2018).mkv

A small TV show is under a second.

felix@gemini:/gmedia/TV/Preacher$ time mediainfo Preacher.S03E01.mkv | grep blah

real	0m0.823s

Quite happy with this config and going to run with it for a bit to test it out. The increased buffer size is for any direct play and seems to cause no issues with the timings.

Also, if you are using unionfs or mergerfs, you must use the -o sync_read option when mounting, as that is used in the default rclone mount and in my options on the cmount. Without it, reads come back out of order and things don’t work.

--buffer-size is your safety net if anything goes wrong. The goal here would be to set it as high as possible. Your free memory divided by your max expected concurrent streams is a good starting point. I have 20GB free and expect no more than 6-7 streams; I added in 3 for headroom, so 20/10 gives 2GB as my max value.
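
As a worked version of that math on this box:

free -g    # ~20GB free here
# 6-7 expected streams + ~3 headroom = budget for 10 streams
# 20GB / 10 streams = 2GB --buffer-size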

11-JUL-2018 - Updated to remove cmount as my buffer size and vfs-chunk wasn’t working properly with cmount.

12-JUL-2018 - Added cmount back as adding in sync_read fixed the issue I was seeing. Works properly with cmount again.

14-JUL-2018 - Updated my buffer size to 2GB, as I have 32GB of RAM with ~20GB of memory free. This will allow me to easily serve 10 streams while my max concurrent is usually 6-7. I matched my buffer-size with my max chunk size.

15-JUL-2018 - Added the non-cmount mount.


#2

This looks very interesting @Animosity022. Thanks for testing it out. Any suggestion for handling failed uploads in this setup?

@ncw Why is the cmount option not included by default in the linux builds? Any specific reasons or issues with using it?


#3

I’ve honestly never hit an issue with failed uploads to this point.

If it became a problem, I guess you could just rclone move it after it finished the download to get past that.
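
Something along these lines would do it (the source path and target directory here are hypothetical - wherever the finished download landed):

rclone move /path/to/finished/download gcrypt:Radarr_Movies --log-level INFO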

If my internet was down and it couldn’t upload, odds are, I’d never download the file anyway.


#4

Very interesting. Where did you read about auto_cache? From looking at the man page, it doesn’t look that useful:

auto_cache
          This option enables automatic flushing of the data cache on open(2). The cache will
          only be flushed if the modification time or the size of the file has changed.

Using -tags cmount means that rclone will link to a C library, which means I’d need to set up a cross-compile toolchain for each supported OS :frowning:

Maybe I should make a linux/amd64 build with cmount - that would be relatively easy to fit into the build process. It can’t be the default, though, as it needs the libfuse library and I don’t want to break the “no dependencies” part of rclone.

It is probably possible to add auto_cache to mount using the https://github.com/bazil/fuse/ library rclone uses.


#5

Yeah, it seems to deal more with the flushing aspect than the actual caching. From looking at the documentation, kernel_cache looks like the better option.

From what I can see in the API docs, there seems to be some sort of cache but no explicit documentation exists regarding the options.


#6

I avoided kernel_cache, as my thought was that if something changed a file via an rclone upload or the like, stale data would get served, which would not be a good use case.

auto_cache would seem to flush it if the mod time changed.

It seems like it uses the OS filesystem cache, which keeps the file in memory a bit longer depending on how you have your OS set up.


#7

I did a bit of digging through the code…

It looks like kernel_cache is the same as auto_cache except auto_cache flushes stuff if it changes.

It looks like kernel_cache is implemented like this in bazil FUSE:

OpenKeepCache   OpenResponseFlags = 1 << 1 // don't invalidate the data cache on open

That would be really easy to try out…
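
In rclone terms, the change would just be setting that flag in the Open response. A minimal sketch of what that might look like with bazil/fuse (illustrative only, not rclone’s actual code; the File type here is a stand-in):

package sketch

import (
    "context"

    "bazil.org/fuse"
    "bazil.org/fuse/fs"
)

// File is a stand-in node type for illustration.
type File struct{}

// Open sets OpenKeepCache so the kernel keeps its page cache for this
// file across opens - the kernel_cache behaviour described above.
func (f *File) Open(ctx context.Context, req *fuse.OpenRequest, resp *fuse.OpenResponse) (fs.Handle, error) {
    resp.Flags |= fuse.OpenKeepCache
    return f, nil
}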

I did that here

https://beta.rclone.org/branch/v1.42-031-g87d64e7f-fuse-auto_cache/

Let me know what you think!

I think auto_cache is kernel_cache applied selectively when the file hasn’t changed (but I need to dig a bit more into the fuse source).


#8

I fixed my mount, as I was having some things not working right with buffer and vfs under cmount, so I removed it for now.


#9

I run a similar setup. If, for example, some uploads fail due to the daily limit being reached, they’ve stayed in the temp writes folder and retried. But a few have failed due to, I guess, a lost connection or certain chunks failing, and those didn’t resume; they had to start over. It only happens sometimes with really large files. I’ve noticed that by forcing IPv4 I get fewer timeouts, which may work better with Google’s servers.
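
For reference, forcing IPv4 comes down to binding an IPv4 address for outgoing connections, which the mounts above already do via --bind; binding 0.0.0.0 does it without pinning a specific interface (trailing flags elided):

rclone mount gcrypt: /gmedia --bind 0.0.0.0 ...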


#10

Is that your plex IP?


#11

I have a Linux box that has a bunch of stuff on it.

The .30 interface is my non-VPN’ed interface. I have a .31 interface that routes all my torrent traffic through a VPN #paranoid.


#12

Excellent - I squeezed out a few more seconds for large files with:

rclone mount \
   --allow-other \
   --dir-cache-time 48h \
   --vfs-read-chunk-size 32M \
   --vfs-read-chunk-size-limit 2G \
   --buffer-size 512M \
   --umask 002 \
   --bind 172.30.12.2 \
   --cache-dir=/mnt/user/rclone/vfs \
   --vfs-cache-mode writes \
   --log-level INFO \
   --stats 1m \
   gdrive_media_vfs: /mnt/disks/rclone_vfs

I couldn’t get these to work (unraid rclone plugin user), maybe because I’m still using a unionfs mount? I think I’m going to stay with my offline rclone upload job anyway, as the one failed/lost upload would probably be my most vital file!


#13

To get those to work, you had to compile from source. I’m not sure how that would work on Unraid, as I’m a Linux user.


#14

What was your thinking behind this? Also, the 16MB chunk size, which I think is new.

Thanks


#15

Yeah, I made a few changes as I was updating, based on a better understanding of the settings.

I have a gigabit pipe, so I was doing a bit more testing, and 16MB chunks seemed to be a better all-around number for folks that might not be lucky enough to have gigabit FIOS to their house :slight_smile:

Since I have plenty of memory, I adjusted the caps upward so the max chunk size and buffer match.


#16

@Animosity022
Sorry, but I am confused: are you mounting the crypt that is cached, but using VFS?


#17

VFS uses a similar chunked download for files, so you should not get banned, but it allows the chunk size to scale, so it can grow.

dir-cache gives you directory/file name caching for however long you configure it. It basically removes the need to use the cache backend.
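
With the numbers used in this thread, that scaling works out as follows (the per-request doubling is how the vfs-read-chunk options are documented to behave):

# --vfs-read-chunk-size 16M --vfs-read-chunk-size-limit 2G
# ranged requests for a single open file grow as:
#   16M -> 32M -> 64M -> 128M -> ... -> 2G, then stay capped at 2G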


#18

Just want to share my setup from Animosity022.

I was running plexdrive with no encryption. Start times took approx 5-10 secs.

My new setup is rclone with all data encrypted on Google Drive - tested this morning after my Plex libraries were done.
A normal movie started instantly :slight_smile:

So nice @Animosity022 . Keep up the good work.

Cheers

Morphy


#19

Happy to hear! Thanks for sharing.


#20

With my setup, I keep files younger than 30 days local until a disk usage threshold is met; then an rclone move script uploads the older files, so I don’t need to write to my gdrive mount. Can this still be used as read-only? If not, what needs to be changed?