Rclone cache or VFS cache pros/cons

I'm in the process of testing a gdrive mount with the cache backend against a gdrive mount using plain VFS reads (both crypted).

From my testing I can see that using VFS speeds up the initial scanning in Plex (28TB new scan). Playing media (streaming) is also noticeably faster to start and keeps the Plex cache filled. Direct streaming is not yet tested due to the low-bandwidth connection on my end.

I’ve read most of the threads regarding VFS and the VFS-read-chunk settings, but some things are not clear yet.

  • Using vfs-read-chunk, will there be more API calls to Gdrive? I'm unable to check this due to a problem on Google's end.
  • Using vfs-read-chunk, are the directory listings etc. all cached? Would this mean a second scan (from Plex / custom Plex scan scripts) hits the cache first and then only retrieves the newly added files/dirs?

I've got a setup with multiple Plex servers and one download server. I was planning to use plex_rcs, but this depends on using the cache feature.

There is a possibility that this will scale to a bigger setup, and if every VPS starts scanning the whole library, that amounts to more API hits and a potential ban.

Directories/files are 'cached' by using --dir-cache-time. That keeps the directory listings for the time you specify.
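
On the mount, that's just a matter of something like the sketch below (remote name and mount point are placeholders, not my actual setup):

rclone mount gdrivecrypt: /mnt/media \
--read-only \
--allow-other \
--dir-cache-time 48h \
--vfs-read-chunk-size 128M \
--vfs-read-chunk-size-limit 2G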

I personally have noticed no difference in API hits from using cache or vfs. I used cache for a few weeks and moved over to vfs over the last few weeks.

I find vfs to be faster at playing and initial library scanning (analyzing the files), and it just tends to play better overall for me. The daily quota is 1 billion queries for the API key, so if you can hit that, power to you. In 30 days I've managed to hit 1.3 million calls, and that's reseeding a library multiple times and seeding Emby as well. I run both Plex and Emby in parallel.

So unless you'll have a few thousand VPSes running daily, I think you're fine and will never hit the daily limit :slight_smile:

Sorted my problems out and got the API overview now; I'm not remotely hitting the quota, so that's one less problem.

Though I just ran into a "Download limit exceeded for this file" error, and that means for the whole library. The strange thing is, this is on my test server; my production runs cache (without a client ID) to the same Gdrive and runs fine. I was analyzing while seeding the test library.

Do you use the built-in scan and analyze options in Plex for initial scanning, or some scripts? Looking at the debug log, it looked like the script tried to open the same file over and over again.

When it goes to analyze a file, it usually opens and closes it 3-4 times from what I’ve noticed.

Some 403s/500s are normal in the process and I usually just ignore those.

I've never hit a "Download limit exceeded" while using vfs-read or cache.

I just let Plex do its thing when I add a library and it will analyze each file as it loads. I recently wiped my DB and reseeded ~45TB of stuff in about 2 1/2 days, so that's analyzing roughly ~21k files.

In Sonarr/Radarr, I have analyze off. In Plex, I have all deep analysis off in the scheduled tasks. I let regular analysis happen as a scheduled task.

Guess it was something with the analyze script I was using; after posting that, it worked again. You like the simple setup (I've read your previous posts, been lurking here for months), so can I presume you don't use extra scripts for scanning and adding media besides plex_autoscan and the built-in scanner?

Yep.

The downside of VFS for me is that I need to use unionfs/mergerfs to combine a local and remote file system to manage the uploads.
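
If you go the mergerfs route, a rough sketch of what the union can look like (paths are made up, and the options are just a common starting point, not a recommendation):

mergerfs -o rw,use_ino,allow_other,func.getattr=newest,category.create=ff \
/local/media:/mnt/gdrive /mnt/union

Writes land in /local/media and reads fall through to the read-only rclone mount underneath, so an uploader script can move finished files to the remote later.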

I like the cache as I can just use --cache-tmp-upload-path to handle that jazz with plex_autoscan.
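
i.e. the cache route is just a couple of extra flags on the mount (remote name, upload path and wait time here are placeholders):

rclone mount gdrivecache: /mnt/media \
--allow-other \
--cache-tmp-upload-path /local/rclone-upload \
--cache-tmp-wait-time 15m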

VFS starts ~3-5 seconds faster than cache for me. I have no buffering or problems streaming with either setup.

With the faster startup also comes faster mediainfo or ffprobe runs with VFS: if you have 21k files to analyze and you add 5 seconds to each file, that's like an extra 29 hours if my math is right (21k * 5 seconds / 60 seconds in a minute / 60 minutes in an hour = 29.16 hours).

Yes, I noticed the speed-up when I changed from cache to VFS; I switched because of the slow initial scanning of my media.

The start-up speed difference between cache and VFS is not that big for me, but that's because I'm on a low-bandwidth line, so it will always transcode for me.

My download server is separate, so I'll use cache there for the uploading. I've decided to implement plex_autoscan on the other VPSes as well; I need to adjust my script on the download unit, but I'm already automating the sh*t out of this whole project, so why not :wink:. In the end it's a cleaner and simpler setup than my current production one.

Sorry to hijack, but any chance you peeps could post your mount commands so we can get a better idea of how others are making use of this feature? I'm currently using cache, not VFS, so any speed improvements are welcome! :slight_smile:

/usr/bin/rclone mount \
--config /home/plex/.config/rclone/rclone.conf \
--read-only \
--allow-other \
--allow-non-empty \
--dir-cache-time=48h \
--vfs-read-chunk-size=128M \
--vfs-read-chunk-size-limit 2G \
--buffer-size=256M \
--attr-timeout=1s \
--umask 002 \
--log-level=INFO \
--log-file=/home/plex/logs/gdrive.log \
gdrivecrypt: /home/plex/media >> $logs 2>&1 &

Direct copy paste from my mount script / .service

If you want to test, don’t forget to change the rclone config so your crypt is NOT looking at the cache but direct to gdrive.
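
Roughly, that's the difference between these two shapes in rclone.conf (remote names are just examples here, and the crypt passwords are left out):

[gdrivecache]
type = cache
remote = gdrive:media

[gdrivecrypt]
type = crypt
remote = gdrivecache:

versus the crypt pointing straight at the drive remote for the VFS-only mount:

[gdrivecrypt]
type = crypt
remote = gdrive:media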

Thanks indeed to @Animosity022 for the testing - I too switched from cache to VFS and it has been faster, more consistent and more stable.

More stable because I suffer from bug #2354, as I upload to the drive from elsewhere; VFS seems to deal with it.

Not bothered about missing the upload feature in my particular case as my mount is read-only.

The only slight negative is that my time-based Plex library scanning / mediainfo checking / etc. of around 400,000 items takes about 300GB/day in traffic, whilst on cache it took about 120GB/day. Naturally this can be cut significantly by just scanning what has updated, but that's for another day.
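
For "just scanning what has updated", a partial scan via the Plex API should do it; something along these lines (the section ID, path and token are placeholders):

curl "http://localhost:32400/library/sections/1/refresh?path=/mnt/media/TV/Some%20Show&X-Plex-Token=YOUR_TOKEN"

That only rescans the given directory inside the library section instead of walking the whole mount.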

You might want to try a smaller chunk size and maybe a smaller limit? I would think the problem you are hitting is the VFS might be too efficient and grabbing too much too fast.

You’d be able to check that by analyzing a few files with debug mode to see what it pulled in the logs. That would be my guess.

--buffer-size is probably pulling in data too fast and therefore increasing the download volume.
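
Something like this might be worth a test (the numbers are just a starting point to experiment with, not a recommendation):

rclone mount gdrivecrypt: /home/plex/media \
--read-only \
--allow-other \
--dir-cache-time 48h \
--vfs-read-chunk-size 32M \
--vfs-read-chunk-size-limit 512M \
--buffer-size 32M \
--log-level DEBUG \
--log-file /home/plex/logs/gdrive-debug.log

Then analyze a handful of files and compare the bytes pulled in the debug log against the 128M/2G/256M setup.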

@Iguana9999 Are you using --buffer-size=0M for your Plex mount? Is this working for "direct play" and transcoded streams without buffering issues?

If so, I can reduce the value for my mounts as well. 64M is working fine for me, but it presumably wastes a lot of traffic during scans.

Actually no, I only set it like that on my VPS while scanning the library. With a buffer set, it will download too much at a time when scanning/analyzing, which makes rclone shoot up in memory usage until it crashes and burns. With --buffer-size at 0 it can scan the whole 28TB on a 2GB VPS.
EDIT: If I scan my library using the built-in scanner it will still fill up the memory and crash rclone. Setting --buffer-size back to 0 solves it for me.

I just set it because Animosity022 had a good explanation in his posts in the other thread. I don't know if it does anything for direct streams, as I'm unable to test. For transcoding it works fine.

@Linhead: How do you see the data it uses? My VPS provider only counts egress data, so I don't have an overview of my ingress from gdrive to my VPS. Is there any Linux program that can count this?

@Animosity022: Was your post directed to Linhead or to me? :wink:

Was a reply to @Linhead

Yeah, if you have a higher buffer size during the scans and not the memory to handle it, rclone will blow.

I had a low buffer-size during my scans, and at times Plex had 35 files open during the scan.

Since my buffer was low, memory really wasn't an issue, so during the initial seed it's probably better to have a lower buffer size if you are memory constrained.

Thanks for clarifying @Animosity022 - totally agree, a smaller chunk size & limit could easily make a huge impact there. I’ll have a play as soon as I am bored with everything working so perfectly with VFS!

@Iguana9999 I use 'vnstat' as I only really care about total box consumption over long periods (and not per-process use), so it's just a guesstimate: the daily tally minus a guesstimate of the file sizes played. I'm sure 'ntop' or similar can give process-level stats if you do not want to fiddle with iptables counters.
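
If you do ever want rough counters without installing anything extra, plain iptables accounting rules work too (eth0 assumed, run as root):

iptables -I INPUT -i eth0
iptables -I OUTPUT -o eth0
iptables -L -v -n -x    # per-rule packet/byte counters

The inserted rules have no target, so they only count traffic and let it continue through the chain.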

@Linhead: Cool, I'll look into those; I'm just interested to know how much incoming traffic there is per month.

@Iguana9999 no problem, very easy with vnstat indeed:

$ vnstat -m | head -8

eth0 / monthly

   month        rx      |     tx      |    total    |   avg. rate
------------------------+-------------+-------------+---------------
  Oct '17     30.91 TiB |  737.67 GiB |   31.63 TiB |  103.86 Mbit/s
  Nov '17     26.42 TiB |    1.06 TiB |   27.49 TiB |   93.28 Mbit/s
  Dec '17     21.60 TiB |  592.15 GiB |   22.18 TiB |   72.83 Mbit/s

@Linhead: that’s a lot of incoming traffic :wink:

Can someone advise me on a monitoring tool which is customizable, is able to show custom logs / text files, and is able to monitor multiple servers?

I'm using Cockpit at the moment but would like to be able to read some log files.

I'd like to implement a max cap for the buffer so rclone limits the amount of memory it uses. As @Animosity022 says above, 35 open files can use a lot of memory, which is unnecessary really.

The --buffer-size flag was put in specifically to speed up transfers on Windows, as Windows IO seems very slow unless you do some form of read-ahead and use big buffers. I haven't really analysed how useful it is in the mount case.

If I just do an ffprobe on a file, it doesn't fill the buffer up from what I can tell from the logs, so it wouldn't waste the memory on those types of operations:

2018/07/11 14:16:13 DEBUG : /Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: >Open: errc=0, fh=0x0
2018/07/11 14:16:13 DEBUG : /Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: Read: ofst=0, fh=0x0
2018/07/11 14:16:13 DEBUG : Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: ChunkedReader.openRange at 0 length 33554432
2018/07/11 14:16:13 DEBUG : Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: ChunkedReader.Read at 0 length 4096 chunkOffset 0 chunkSize 33554432
2018/07/11 14:16:13 DEBUG : Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: ChunkedReader.Read at 4096 length 8192 chunkOffset 0 chunkSize 33554432
2018/07/11 14:16:13 DEBUG : Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: ChunkedReader.Read at 12288 length 16384 chunkOffset 0 chunkSize 33554432
2018/07/11 14:16:13 DEBUG : Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: ChunkedReader.Read at 28672 length 32768 chunkOffset 0 chunkSize 33554432
2018/07/11 14:16:13 DEBUG : Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: ChunkedReader.Read at 61440 length 65536 chunkOffset 0 chunkSize 33554432
2018/07/11 14:16:13 DEBUG : Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: ChunkedReader.Read at 126976 length 131072 chunkOffset 0 chunkSize 33554432
2018/07/11 14:16:13 DEBUG : /Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: >Read: n=65536
2018/07/11 14:16:13 DEBUG : /Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: Getattr: fh=0xFFFFFFFFFFFFFFFF
2018/07/11 14:16:13 DEBUG : /Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: >Getattr: errc=0
2018/07/11 14:16:13 DEBUG : /Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: Read: ofst=65536, fh=0x0
2018/07/11 14:16:13 DEBUG : /Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: Read: ofst=196608, fh=0x0
2018/07/11 14:16:13 DEBUG : Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: ChunkedReader.Read at 258048 length 262144 chunkOffset 0 chunkSize 33554432
2018/07/11 14:16:13 DEBUG : /Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: >Read: n=131072
2018/07/11 14:16:13 DEBUG : /Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: Getattr: fh=0x0
2018/07/11 14:16:13 DEBUG : Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: ChunkedReader.Read at 520192 length 524288 chunkOffset 0 chunkSize 33554432
2018/07/11 14:16:13 DEBUG : /Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: >Read: n=131072
2018/07/11 14:16:13 DEBUG : /Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: >Getattr: errc=0
2018/07/11 14:16:13 DEBUG : /Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: Read: ofst=57810812928, fh=0x0
2018/07/11 14:16:13 DEBUG : Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: ReadFileHandle.seek from 327680 to 57810812928 (fs.RangeSeeker)
2018/07/11 14:16:13 DEBUG : Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: ChunkedReader.RangeSeek from 1044480 to 57810812928 length -1
2018/07/11 14:16:13 DEBUG : Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: ChunkedReader.Read at -1 length 4096 chunkOffset 57810812928 chunkSize 33554432
2018/07/11 14:16:13 DEBUG : Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: ChunkedReader.openRange at 57810812928 length 33554432
2018/07/11 14:16:13 DEBUG : Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: ChunkedReader.Read at 57810817024 length 8192 chunkOffset 57810812928 chunkSize 33554432
2018/07/11 14:16:13 DEBUG : /Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: >Read: n=10533
2018/07/11 14:16:13 DEBUG : /Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: Read: ofst=327680, fh=0x0
2018/07/11 14:16:13 DEBUG : Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: ReadFileHandle.seek from 57810823461 to 327680 (fs.RangeSeeker)
2018/07/11 14:16:13 DEBUG : Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: ChunkedReader.RangeSeek from 57810823461 to 327680 length -1
2018/07/11 14:16:13 DEBUG : Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: ChunkedReader.Read at -1 length 4096 chunkOffset 327680 chunkSize 33554432
2018/07/11 14:16:13 DEBUG : Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: ChunkedReader.openRange at 327680 length 33554432
2018/07/11 14:16:13 DEBUG : /Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: Read: ofst=458752, fh=0x0
2018/07/11 14:16:13 DEBUG : Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: ChunkedReader.Read at 331776 length 8192 chunkOffset 327680 chunkSize 33554432
2018/07/11 14:16:13 DEBUG : Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: ChunkedReader.Read at 339968 length 16384 chunkOffset 327680 chunkSize 33554432
2018/07/11 14:16:13 DEBUG : Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: ChunkedReader.Read at 356352 length 32768 chunkOffset 327680 chunkSize 33554432

It seems to just read what it needs and close out the file, as the ffprobe takes roughly 1.8 seconds and moving 512M would take ~4 seconds.
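
If anyone wants to reproduce that kind of check, something like this is enough (paths and the file name are just examples):

rclone mount gdrivecrypt: /mnt/test --read-only --allow-other \
--log-level DEBUG --log-file /tmp/rclone-debug.log &
sleep 5    # give the mount a moment to come up
time ffprobe -v error -show_format -show_streams "/mnt/test/Some Movie (2018)/Some Movie (2018).mkv" > /dev/null
grep ChunkedReader /tmp/rclone-debug.log | tail -n 20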