My VFS SweetSpot - Updated 11-Aug-2018


#1

My use case:

  • I use a local disk mounted at /data/local for temporary storage
  • I use a /GD for my GD encrypted storage
  • I write everything to a mergerfs mount called /gmedia, which contains Sonarr/Radarr/torrents/all my movies and TV shows. I do that because mergerfs supports hard linking for anything I download, as long as everything is on the same file system
  • I do not sync my torrent folder to the cloud
  • I run a daily script overnight that uploads any files excluding .srt files as I want to keep my subtitles local

This isn’t a step-by-step guide, as certain things like creating directories and such are assumed to be done :smiley:

My mount command:

felix@gemini:~$ cat /etc/systemd/system/gmedia-rclone.service
[Unit]
Description=RClone Service
PartOf=gmedia.service

[Service]
Type=notify
ExecStart=/usr/bin/rclone mount gcrypt: /GD \
   --allow-other \
   --dir-cache-time 72h \
   --vfs-read-chunk-size 128M \
   --vfs-read-chunk-size-limit off \
   --umask 002 \
   --bind 192.168.1.30 \
   --log-level INFO \
   --log-file /home/felix/logs/rclone.log
ExecStop=/bin/fusermount -uz /GD
Restart=on-failure
User=felix
Group=felix

[Install]
WantedBy=gmedia.service
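For anyone copying the unit, wiring it up is the usual systemd dance (a sketch, assuming the file above is saved as /etc/systemd/system/gmedia-rclone.service):

```shell
# Reload systemd so it sees the new unit, then enable and start it
sudo systemctl daemon-reload
sudo systemctl enable gmedia-rclone.service
sudo systemctl start gmedia-rclone.service
# Verify that the mount came up cleanly
systemctl status gmedia-rclone.service
```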

My mergerfs script is straightforward:

felix@gemini:~/scripts$ cat mergerfs_mount
#!/bin/bash
/usr/bin/mergerfs -o direct_io,default_permissions,sync_read,allow_other,category.action=all,category.create=ff /data/local:/GD /gmedia
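The reason /data/local lives inside the pool is that hard links only work within a single filesystem. A quick demonstration of that property (just an illustration with a temp directory, not part of my setup):

```shell
#!/bin/bash
# Hard links share one inode, so no data is duplicated -
# which is why the download folder must be on the same fs as the library.
set -e
tmp=$(mktemp -d)
echo "movie data" > "$tmp/original.mkv"
ln "$tmp/original.mkv" "$tmp/link.mkv"   # hard link, same filesystem
# Both names point at the same inode
[ "$(stat -c %i "$tmp/original.mkv")" = "$(stat -c %i "$tmp/link.mkv")" ] && echo "same inode"
rm -r "$tmp"
```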

My nightly upload runs via cron. It excludes any *.srt because I want to keep subtitles local since they are tiny, and it skips /data/local/torrents, which I don’t want to move because my mergerfs uses it for hard linking.

felix@gemini:~/scripts$ cat upload_cloud
#!/bin/bash
LOCKFILE="/var/lock/$(basename "$0")"

(
  # Wait for lock for 5 seconds
  flock -x -w 5 200 || exit 1

  # Move older local files to the cloud
  /usr/bin/rclone move /data/local/ gcrypt: --checkers 3 --fast-list --log-file /home/felix/logs/upload.log -v --tpslimit 3 --transfers 3 --exclude-from /home/felix/scripts/excludes --delete-empty-src-dirs

) 200> ${LOCKFILE}
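The cron entry for a script like this might look as follows (the 3am time is an example, not my actual crontab):

```shell
# m h dom mon dow  command - run the nightly upload at 3am
0 3 * * * /home/felix/scripts/upload_cloud
```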

With a very simple exclude:

felix@gemini:~/scripts$ cat excludes
*.srt
torrents/**
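One way to sanity-check an exclude file before letting the real job loose is rclone's --dry-run flag, which logs what would move without touching anything (a sketch using the paths above):

```shell
# Preview the upload: nothing is actually moved or deleted
/usr/bin/rclone move /data/local/ gcrypt: --dry-run \
   --exclude-from /home/felix/scripts/excludes -v
```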

11-JUL-2018 - Updated to remove cmount, as my buffer size and vfs-chunk settings weren’t working properly with cmount.

12-JUL-2018 - Added cmount back as adding in sync_read fixed the issue I was seeing. Works properly with cmount again.

14-JUL-2018 - I updated my buffer size to 2GB, as I have 32GB of RAM with ~20GB free. This will allow me to easily serve 10 streams, while my max concurrent is usually 6-7. I matched my buffer-size with my max chunk size.

15-JUL-2018 - Added the non-cmount mount.

21-JUL-2018 - So with my gigabit, I found that 64M was a much better sweet spot for me. I lowered my buffer and increased my plex threshold to 900 seconds. I found that the buffer was just waste for me since everyone was transcoding remotely anyway (for my use-case). By increasing the min chunk size, I was able to reduce my API hits a lot and have noticed no difference in streaming or start times.

7-AUG-2018 - I updated the chunk size since I have a large link and ensured that the buffer size is smaller than the chunk size. Additionally, to make things faster, I started keeping my .srt subtitles local, since they are tiny anyway, by using an --exclude *.srt in my upload script that runs daily. Shared my use case and my thought process in setting up what I have.

11-AUG-2018 - I simplified my setup a bit to remove some of the compiling/other items and removed some items from my mergerfs that I think were causing some issues. For now until the buffer seek patch goes in, I left the buffer back to default.


#2

This looks very interesting @Animosity022. Thanks for testing it out. Any suggestion for handling failed uploads in this setup?

@ncw Why is the cmount option not included by default in the linux builds? Any specific reasons or issues with using it?


#3

I’ve honestly never hit an issue with failed uploads to this point.

If it became a problem, I guess you could just rclone move the file again after the download finished to get past that.
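If someone did want a belt-and-braces approach, a small retry wrapper would do it. This is just a sketch, not my actual script, and the commented rclone call is illustrative:

```shell
#!/bin/bash
# retry: run a command up to N times before giving up (sketch)
retry() {
  local attempts=$1; shift
  local n=0
  until "$@"; do
    n=$((n + 1))
    if [ "$n" -ge "$attempts" ]; then
      return 1   # all attempts failed
    fi
    sleep 5      # brief back-off between attempts
  done
}

# Hypothetical usage: retry the nightly move up to 3 times
# retry 3 /usr/bin/rclone move /data/local/ gcrypt: --exclude-from excludes
retry 3 true && echo "upload ok"
```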

If my internet was down and it couldn’t upload, odds are, I’d never download the file anyway.


#4

Very interesting. Where did you read about auto_cache? From looking at the man page, it doesn’t look that useful:

auto_cache
          This option enables automatic flushing of the data cache on open(2). The cache will
          only be flushed if the modification time or the size of the file has changed.

Using -tags cmount means that rclone will link to a C library and that means I’d need to set up a cross compile tool chain for each supported OS :frowning:

Maybe I should make a linux/amd64 build with cmount - that would be relatively easy to fit in the build process. It can’t be the default though as it needs the libfuse library and I don’t want to break the “no dependencies” part of rclone.

It is probably possible to add auto_cache to mount using the https://github.com/bazil/fuse/ library rclone uses.


#5

Yeah, it seems to deal more with the flushing aspect than the actual caching. From looking at the documentation, kernel_cache looks like a better option:

From what I can see in the API docs, there seems to be some sort of cache but no explicit documentation exists regarding the options.


#6

I avoided kernel_cache because my thought was that if something changes a file via an rclone upload or similar, serving stale data from the cache would not be a good outcome.

auto_cache would seem to flush it if the mod time changed.

It seems like it uses the OS filesystem cache, which keeps the file in memory a bit longer depending on how you have your OS set up.


#7

I did a bit of digging through the code…

It looks like kernel_cache is the same as auto_cache except auto_cache flushes stuff if it changes.

It looks like kernel_cache is implemented like this in bazil FUSE

OpenKeepCache   OpenResponseFlags = 1 << 1 // don't invalidate the data cache on open

That would be really easy to try out…

I did that here

https://beta.rclone.org/branch/v1.42-031-g87d64e7f-fuse-auto_cache/

Let me know what you think!

I think auto_cache is kernel_cache applied selectively when the file hasn’t changed (but I need to dig a bit more in the fuse source)


#8

I fixed my mount, as some things were not working right with buffer and vfs under cmount, so I removed it for now.


#9

I run a similar setup, and if, say, some uploads fail due to the daily limit being reached, they stay in the temp-writes folder and retry. But a few have failed due to, I guess, a lost connection or certain chunks failing, and those didn’t resume; they had to start over. It only happens sometimes, to really large files. I’ve noticed that by forcing IPv4 I get fewer timeouts, which may work better with Google’s servers.


#10

Is that your plex IP?


#11

I have a Linux box that has a bunch of stuff on it.

The .30 interface is my non-VPN’ed interface. I have a .31 interface that routes all my torrent traffic through a VPN #paranoid.


#12

Excellent - I squeezed out a few more seconds for large files with:

rclone mount --allow-other \
   --dir-cache-time 48h \
   --vfs-read-chunk-size 32M \
   --vfs-read-chunk-size-limit 2G \
   --buffer-size 512M \
   --umask 002 \
   --bind 172.30.12.2 \
   --cache-dir=/mnt/user/rclone/vfs \
   --vfs-cache-mode writes \
   --log-level INFO \
   --stats 1m \
   gdrive_media_vfs: /mnt/disks/rclone_vfs

I couldn’t get these to work (unRAID rclone plugin user), maybe because I’m still using a unionfs mount? I think I’m going to stay with my offline rclone upload job anyway, as the one failed/lost upload would probably be my most vital file!


#13

To get those to work, you had to compile from source. I’m not sure how that would work on unRAID, as I’m a Linux user.


#14

What was your thinking behind this? Also, what about the 16MB chunk size, which I think is new?

Thanks


#15

Yeah, I made a few changes as I was updating based on the better clarification.

I have a gigabit pipe, so I was doing a bit more testing, and 16MB chunks seemed to be a better all-around number for folks who might not be lucky enough to have gigabit FIOS to their house :slight_smile:

Since I have plenty of memory, I adjusted the caps out for max size and buffer to match.


#16

@Animosity022
Sorry, but I am confused - are you mounting the crypt with the cache backend, or using VFS?


#17

VFS uses a similar chunked download for files, so you should not get banned, but it also allows the chunk size to scale, so it can grow as you read.

dir-cache gives you directory/file-name caching for however long you configure it. It basically removes the need to use the cache backend.
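As a minimal sketch, a VFS-only mount that replaces the cache backend only needs the chunked-read and dir-cache flags (values here are illustrative, not a recommendation):

```shell
# Chunks start at 64M and double on sequential reads, with no upper cap;
# directory/file names are cached for 72h instead of using the cache backend
rclone mount gcrypt: /GD \
   --dir-cache-time 72h \
   --vfs-read-chunk-size 64M \
   --vfs-read-chunk-size-limit off
```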


#18

Just want to share my setup, based on Animosity022’s.

I was running plexdrive with no encryption. Start times took approx 5-10 secs.

My new setup is rclone with all data encrypted on Google Drive - tested this morning after my Plex libraries were done.
A normal movie started instantly :)

So nice @Animosity022 . Keep up the good work.

Cheers

Morphy


#19

Happy to hear! Thanks for sharing.


#20

With my setup, I keep files younger than 30 days local until a disk-usage threshold is met, then an rclone move script uploads older files so I don’t need to write to my gdrive mount. Can this still be used as read-only? If not, what needs to be changed?