Rclone Cache Is Better/Faster for Plex Than VFS

OK, I think I made a post somewhat related to this before, but I'm genuinely confused here.
There is a lot of pushing for VFS over cache here, but in all my testing, opening files and seeking through files is twice as fast or more when I'm using cache instead of VFS.
32 GB memory, 8 cores on OVH.
I've been using @Animosity022's rclone/mergerfs setup for many months and switched back to cache a couple of days ago, and I must say cache vs. VFS is night and day - blazing fast. People on my Plex server are asking me if I switched servers because videos are just so much more responsive.

Care to share your settings? Could be interesting to try

Yes please share your settings.

I've also read here lately that VFS is great, but I'm currently on cache using the setup recommended in the docs (gdrive -> cache -> crypt) and it's pretty fast. What I still don't get is this: if I mount the cache, can I upload to it with rclone copy from another VPS or not? Copying to the mount works, but I haven't tried many copy operations yet. I'm just wondering whether a mounted cache remote can be used from another location, so I can do rclone copy with a bandwidth limit, which I can't use on the mounted rclone cache remote.
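For reference, a minimal sketch of what that cross-server upload could look like, assuming the same rclone.conf (with the drive/cache/crypt remotes) is copied to the second VPS; the source path, destination path, and 8M limit below are placeholders, not recommendations:

```shell
# Run on the second VPS, which has its own copy of rclone.conf.
# Uploading straight to the crypt remote (not through a FUSE mount)
# allows --bwlimit, which has no effect on writes into a mounted path.
rclone copy /local/media crypt:media --bwlimit 8M --progress
```

One caveat: the server that mounts the cache remote may not see the new files until its cached directory listings age out (info_age), so a very long info_age such as 8760h can delay them appearing in the mount.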

cheers

@St0rm @Mr.M

PERSISTENT RCLONE.CONF

[drive]
type = drive
client_id = xxx
client_secret = xxx
scope = drive
token = xxx

[cache]
type = cache
remote = drive:
plex_url = http://127.0.0.1:32400
plex_username = xxx
plex_password = xxx
plex_token = xxx
chunk_size = 32M
info_age = 8760h
chunk_total_size = 50G
db_path = /data/.cache/rclone
chunk_path = /data/.cache/rclone
chunk_clean_interval = 5m
workers = 4

My rclone.conf has always been the same.


OLD SLOWER MOUNT

[Unit]
Description=drive
Wants=network-online.target
After=network-online.target

[Service]
Type=notify
User=user
Group=user
Environment=RCLONE_CONFIG=/opt/rclone/rclone.conf
KillMode=none
RestartSec=5
ExecStart=/usr/bin/rclone mount drive: /mnt/drive \
--allow-other \
--attr-timeout 8760h \
--dir-cache-time 8760h \
--log-level NOTICE \
--log-file /opt/rclone/logs/rclone.log \
--timeout 1h \
--umask 002 \
--user-agent rclone \
--rc \
--rc-addr 127.0.0.1:5572
ExecStop=/bin/fusermount -uz /mnt/drive
Restart=on-failure

[Install]
WantedBy=default.target

WAY WAY FASTER MOUNT

[Unit]
Description=drive
Wants=network-online.target
After=network-online.target

[Service]
Type=notify
User=user
Group=user
Environment=RCLONE_CONFIG=/opt/rclone/rclone.conf
KillMode=none
RestartSec=5
ExecStart=/usr/bin/rclone mount cache: /mnt/drive \
--allow-other \
--gid=1000 \
--uid=1000 \
--bind xx.xx.xxx.xxx \
--dir-cache-time 6570h \
--fast-list \
--cache-chunk-path=/data/.cache/rclone \
--cache-db-path=/data/.cache/rclone \
--log-file /opt/rclone/logs/rclone.log \
--umask 002 \
--log-level DEBUG \
--user-agent rclone
ExecStop=/bin/fusermount -uz /mnt/drive
Restart=on-failure

[Install]
WantedBy=default.target

It’s very setup dependent, as I have done quite a bit of testing and have the exact opposite results.

You’d have to share some quantitative numbers and more details on your setup.

Cache works much better for certain scenarios.

Also interested in what Mr.M asked. In that setup (drive -> cache -> crypt), when crypt is mounted, is it possible to use rclone copy from another server? I'm missing the bwlimit option when copying directly to the mount. If it's possible, should we be copying directly to the crypt remote? Any help with this setup is appreciated.

Sure, I get it - everyone's setup is different. I have a fresh Rise 2 server on OVH, btw.

However, with that said, why the hatred of the cache backend and the hard push for VFS by the developers, if that is the case (that some people will need the cache backend)?

What do you mean? I’ve personally always said to use what is best. I was asking you to quantify your results to help others make good choices.

There is not a push for either; cache simply does not have a maintainer now, unfortunately, so it’s been left behind a bit.

So if you can share your testing and results, that helps as more information is always better.

Just for some numbers, comparing a mediainfo command against a VFS mount vs Cache:

Cache


real	0m4.389s
user	0m0.056s
sys	0m0.034s

VFS

real	0m1.836s
user	0m0.063s
sys	0m0.020s

I use the same file in both and use the command:

time mediainfo filename.mkv

Once the chunks are 'cached', you get much better results, as expected, since cache gives near-disk speeds:


real	0m0.300s
user	0m0.061s
sys	0m0.007s

What I've generally shared is that the cache backend is a great solution for Plex if you have particularly bad players, built-in apps, and items that tend not to direct play or buffer very well. Music is a great example, as it constantly opens and closes files, so the standard VFS does not work well for that.

My setup is all ATV's with my server local in my basement and I have gigabit internet so my use case is very specific to me and what works for my setup.

When I hit play on something, it starts in 1-2 seconds, so it's almost hard to tell it's even remote media. I use the enhanced player on my ATVs with Plex, so 99.9% of everything direct plays.

Thanks for clarifying in such detail.
I didn't mean just you, though; I've seen @thestigma and @ncw push for VFS over the cache backend at every turn.
I'm not complaining. I'm really just trying to understand why rclone development is moving away from cache when VFS is good for some use cases and cache is good for others.

I noticed your old mount doesn't have bind, while the new one does... I've noticed that OVH defaults to IPv6, and their routing to Google is not the best using IPv6. Try the old mount command with bind to compare IPv4 with IPv4. OVH has direct peering with Google and it is really fast with IPv4.
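As a sketch of that suggestion, one could look up the machine's public IPv4 address and pass it to `--bind` so rclone's traffic to Google goes over IPv4; the address below is a documentation placeholder, not a real one:

```shell
# Find this machine's public IPv4 address:
curl -4 https://ifconfig.me

# Then add --bind with that address to the old mount command, e.g.:
/usr/bin/rclone mount drive: /mnt/drive \
  --allow-other \
  --bind 203.0.113.10 \
  --dir-cache-time 8760h
```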

Yep, Verizon FiOS is only IPv4 currently, so I have no use for it. I wonder if that's a good default to put in, as using IPv4 was just faster in my tests when I tried some Google Compute machines as well.

I think the focus is putting the functionality in the base VFS mount and expanding that rather than maintaining another backend. I'd personally rather see them merge up and have one to maintain to keep things simple and get the functionality that would be more helpful for streaming in there :slight_smile:


Unless I'm misunderstanding, VFS will never be able to do certain things the cache backend can, such as seeking backwards. Seeking backwards on VFS will always just recreate the stream, while the cache backend doesn't and is much faster at seeking backwards.
Doesn't VFS also rely heavily on memory rather than your disk? What about the people who would rather have it rely more on disk?

It just depends on how things are implemented. I don't see any issues moving backwards when I try, as I tested a bit last night. I went back a few minutes and things started right back up for me. Cache would be faster since the chunks are local on disk, though, so that's definitely correct.

Both use memory for reading; it depends on how you configure it. VFS has buffer-size, which is configurable and only used until a file is closed, so it's really only good for sequential reading / playing with direct play, since transcoding reads ahead based on your Plex settings.
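To illustrate, a hypothetical VFS-style mount tuned for sequential playback; the flag values here are illustrative assumptions, not tested recommendations:

```shell
# --buffer-size is allocated per open file, held in RAM, and discarded
# when the file is closed, so it mainly helps sequential reads
# (direct play) rather than heavy seeking.
rclone mount drive: /mnt/drive \
  --allow-other \
  --buffer-size 64M \
  --dir-cache-time 8760h \
  --umask 002
```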

How do I cache it in a manner such that the chunks are only cleared when the limit for chunk_total_size has crossed?

That should be how it is working. Are you seeing something different?

chunk_size = 32M
chunk_total_size = 10G
info_age = 1d
chunk_clean_interval = 5m0s
chunk_path = /home/pi/rclone_temp

Well, I started Analyze in Plex for one particular TV Show, and it ended up storing over 40GB of chunks, averaging about 190MB for each episode (crazy, I know but it's necessary).

The main issue I had before was that scrubbing backwards in the app.plex.tv video player for (transcoded) content would not play; the playback session had to be restarted. With cache, I still faced the same issue, so I reverted back.

I'd start a new post and use the question template and we can help out as that should not be the case.

What should not be the case?

It should not go above your 10G you configured.