Rclone cache or VFS cache pros/cons

When it goes to analyze a file, it usually opens and closes it 3-4 times from what I’ve noticed.

Some 403s/500s are normal in the process and I usually just ignore those.

I’ve never hit a “Download limit exceeded” error while using vfs-read or cache.

I just let Plex do its thing when I add a library and it will analyze each file as it loads. I just recently wiped my DB and reseeded ~45TB of stuff in about 2 1/2 days, so that’s analyzing roughly ~21k files.

In Sonarr/Radarr, I have analyze off. In Plex, I have all deep analysis off in the scheduled tasks. I let regular analysis happen as a scheduled task.

Guess it was something with the analyze script I was using. After posting that, it worked again. You like the simple setup (I read your previous posts, been lurking here for months), so can I presume you don’t use extra scripts for scanning and adding media besides plex_autoscan and the built-in scanner?

Yep.

The downside for me with VFS is I need to use unionfs/mergerfs to combine a local and remote file system to manage the uploads.

I like the cache as I can just use --cache-tmp-upload-path to handle that jazz with plex_autoscan.

VFS starts ~3-5 seconds faster than cache for me. I have no buffering or problems streaming with either setup.

With the faster startup also comes faster mediainfo or ffprobe runs with VFS: if you have 21k files to analyze and you add 5 seconds to each file, that’s like an extra 29 hours if my math is right (21k * 5 seconds / 60 seconds in a minute / 60 minutes in an hour = 29.16 hours).
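The arithmetic above can be checked quickly (the 21k files and 5 seconds per file are the numbers from this post):

```python
# Extra wall-clock time if each of ~21k files takes ~5 seconds longer to analyze.
files = 21_000
extra_seconds_per_file = 5

extra_hours = files * extra_seconds_per_file / 3600  # 3600 seconds per hour
print(f"{extra_hours:.2f} extra hours")  # ~29.17 hours
```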

Yes, I noticed the speed-up when I changed from cache to VFS, due to slow initial scanning of my media.

The speed-up in starting between cache and VFS is not that big for me, but that’s because I’m on a low-bandwidth line, so it will always transcode for me.

My download server is separate, so I’ll use cache there for the uploading. Decided to implement plex_autoscan on the other VPSes too; I need to adjust my script on the download unit, but I’m already automating the Sh*t out of this whole project, so why not :wink:. In the end, a cleaner and simpler setup than my production one now.

Sorry to hijack but any chance you peeps could post your mount commands so we can get a better idea of how others are making use of this feature? I’m currently using cache not VFS cache so any speed improvements are welcome! :slight_smile:

/usr/bin/rclone mount \
--config /home/plex/.config/rclone/rclone.conf \
--read-only \
--allow-other \
--allow-non-empty \
--dir-cache-time=48h \
--vfs-read-chunk-size=128M \
--vfs-read-chunk-size-limit 2G \
--buffer-size=256M \
--attr-timeout=1s \
--umask 002 \
--log-level=INFO \
--log-file=/home/plex/logs/gdrive.log \
gdrivecrypt: /home/plex/media >> $logs 2>&1 &

Direct copy paste from my mount script / .service

If you want to test, don’t forget to change the rclone config so your crypt is NOT looking at the cache but direct to gdrive.
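For anyone running the mount above from a .service file, a minimal systemd sketch might look like this (the unit name, user, and paths are assumptions based on the command above; recent rclone versions support systemd’s notify protocol for mounts, otherwise swap in Type=simple):

```ini
[Unit]
Description=rclone gdrive mount
After=network-online.target

[Service]
Type=notify
User=plex
ExecStart=/usr/bin/rclone mount gdrivecrypt: /home/plex/media \
  --config /home/plex/.config/rclone/rclone.conf \
  --read-only --allow-other \
  --dir-cache-time 48h \
  --vfs-read-chunk-size 128M \
  --vfs-read-chunk-size-limit 2G \
  --buffer-size 256M \
  --umask 002 \
  --log-level INFO --log-file /home/plex/logs/gdrive.log
ExecStop=/bin/fusermount -uz /home/plex/media
Restart=on-failure

[Install]
WantedBy=multi-user.target
```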


Thanks indeed to @Animosity022 for the testing - I too switched from cache to VFS and it has been faster, more consistent and more stable.

More stable, as I suffer from bug #2354 since I upload to the drive from elsewhere; VFS seems to deal with it.

Not bothered about missing the upload feature in my particular case as my mount is read-only.

Only slight negative is that my time-based Plex library scanning / mediainfo checking / etc. of around 400,000 items takes about 300GB/day in traffic, whilst on cache it took about 120GB/day. Naturally this can be cut significantly by just scanning what has updated, but that’s for another day.

You might want to try a smaller chunk size and maybe a smaller limit? I would think the problem you are hitting is the VFS might be too efficient and grabbing too much too fast.

You’d be able to check that by analyzing a few files with debug mode to see what it pulled in the logs. That would be my guess.

--buffer-size is probably pulling in data too fast and thereby increasing the download volume.

@Iguana9999 Are you using --buffer-size=0M for your Plex mount? Is it working for “direct play” and transcoded streams without buffering issues?

If so, I can reduce the value for my mounts as well. 64M is working fine for me, but it presumably wastes a lot of traffic during scans.

Actually no, I set it like that on my VPS while scanning the library. With a set buffer it will download too much at one time when scanning / analyzing, which makes rclone shoot up in memory usage until it crashes and burns. With --buffer-size at 0 it can scan the whole 28TB on a 2GB VPS.
EDIT: If I scan my library using the built-in scanner, it will still fill up the memory and crash rclone. Setting --buffer-size back to 0 solves it for me.

I just set it cause Animosity022 had a good explanation in his posts in the other thread. Don’t know if it does something for direct streams as I’m unable to test. For transcoding it works fine.

@Linhead: How do you see the data it uses? My VPS provider only counts egress data, so I don’t have an overview of my ingress from gdrive to my VPS. Any Linux program that can count this?

@Animosity022: Was your post directed to Linhead or to me? :wink:

Was a reply to @Linhead

Yeah, if you have a higher buffer size during the scans and not the memory to handle it, rclone will blow up.

I had a low buffer-size during my scans, and you can see Plex had at times 35 files open during the scan.

Since my buffer was low, memory really wasn’t an issue, so during the initial seed it’s probably better to have a lower buffer size if you are memory constrained.
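As a rough sketch of why that matters: in the worst case, every open file handle can fill its own read-ahead buffer, so memory scales with open files × --buffer-size (the numbers below just illustrate the 35-open-files case from this thread):

```python
# Worst-case read-ahead memory: open handles * per-handle buffer size.
def worst_case_gib(open_files: int, buffer_mib: int) -> float:
    return open_files * buffer_mib / 1024  # MiB -> GiB

print(worst_case_gib(35, 256))  # 35 files at --buffer-size=256M -> 8.75 GiB
print(worst_case_gib(35, 16))   # a small 16M buffer keeps it well under 1 GiB
```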

Thanks for clarifying @Animosity022 - totally agree, a smaller chunk size & limit could easily make a huge impact there. I’ll have a play as soon as I am bored with everything working so perfectly with VFS!

@Iguana9999 I use ‘vnstat’, as I only really care about total box consumption over long periods (not per-process use), so it’s just a guesstimate: the daily tally versus an estimate of the size of files played. I’m sure ‘ntop’ or similar can give process-level stats if you do not want to fiddle with iptables counters.

@Linhead: Cool, I’ll look into those; just interested to know how much incoming traffic there is per month.

@Iguana9999 no problem, very easy with vnstat indeed,

$ vnstat -m | head -8

eth0 / monthly

   month        rx      |     tx      |    total    |   avg. rate
------------------------+-------------+-------------+---------------
  Oct '17     30.91 TiB |  737.67 GiB |   31.63 TiB |  103.86 Mbit/s
  Nov '17     26.42 TiB |    1.06 TiB |   27.49 TiB |   93.28 Mbit/s
  Dec '17     21.60 TiB |  592.15 GiB |   22.18 TiB |   72.83 Mbit/s

@Linhead: that’s a lot of incoming traffic :wink:

Can someone advise me on a monitoring tool that is customizable, can show custom logs / text files, and can monitor multiple servers?

Using cockpit at the moment but would like to be able to read some log files.

I’d like to implement a max cap for the buffer so rclone limits the amount of memory it uses. As @Animosity022 says above, 35 open files can use a lot of memory, which is unnecessary really.

The --buffer flag was put in specifically to speed up transfers on Windows, as Windows IO seems very slow unless you do some form of read-ahead with big buffers. I haven’t really analysed how useful it is in the mount case.

If I just do an ffprobe on a file, it doesn’t fill the buffer-size up from what I can tell from the logs, so it wouldn’t waste the memory on those types of operations:

2018/07/11 14:16:13 DEBUG : /Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: >Open: errc=0, fh=0x0
2018/07/11 14:16:13 DEBUG : /Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: Read: ofst=0, fh=0x0
2018/07/11 14:16:13 DEBUG : Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: ChunkedReader.openRange at 0 length 33554432
2018/07/11 14:16:13 DEBUG : Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: ChunkedReader.Read at 0 length 4096 chunkOffset 0 chunkSize 33554432
2018/07/11 14:16:13 DEBUG : Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: ChunkedReader.Read at 4096 length 8192 chunkOffset 0 chunkSize 33554432
2018/07/11 14:16:13 DEBUG : Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: ChunkedReader.Read at 12288 length 16384 chunkOffset 0 chunkSize 33554432
2018/07/11 14:16:13 DEBUG : Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: ChunkedReader.Read at 28672 length 32768 chunkOffset 0 chunkSize 33554432
2018/07/11 14:16:13 DEBUG : Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: ChunkedReader.Read at 61440 length 65536 chunkOffset 0 chunkSize 33554432
2018/07/11 14:16:13 DEBUG : Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: ChunkedReader.Read at 126976 length 131072 chunkOffset 0 chunkSize 33554432
2018/07/11 14:16:13 DEBUG : /Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: >Read: n=65536
2018/07/11 14:16:13 DEBUG : /Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: Getattr: fh=0xFFFFFFFFFFFFFFFF
2018/07/11 14:16:13 DEBUG : /Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: >Getattr: errc=0
2018/07/11 14:16:13 DEBUG : /Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: Read: ofst=65536, fh=0x0
2018/07/11 14:16:13 DEBUG : /Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: Read: ofst=196608, fh=0x0
2018/07/11 14:16:13 DEBUG : Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: ChunkedReader.Read at 258048 length 262144 chunkOffset 0 chunkSize 33554432
2018/07/11 14:16:13 DEBUG : /Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: >Read: n=131072
2018/07/11 14:16:13 DEBUG : /Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: Getattr: fh=0x0
2018/07/11 14:16:13 DEBUG : Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: ChunkedReader.Read at 520192 length 524288 chunkOffset 0 chunkSize 33554432
2018/07/11 14:16:13 DEBUG : /Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: >Read: n=131072
2018/07/11 14:16:13 DEBUG : /Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: >Getattr: errc=0
2018/07/11 14:16:13 DEBUG : /Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: Read: ofst=57810812928, fh=0x0
2018/07/11 14:16:13 DEBUG : Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: ReadFileHandle.seek from 327680 to 57810812928 (fs.RangeSeeker)
2018/07/11 14:16:13 DEBUG : Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: ChunkedReader.RangeSeek from 1044480 to 57810812928 length -1
2018/07/11 14:16:13 DEBUG : Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: ChunkedReader.Read at -1 length 4096 chunkOffset 57810812928 chunkSize 33554432
2018/07/11 14:16:13 DEBUG : Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: ChunkedReader.openRange at 57810812928 length 33554432
2018/07/11 14:16:13 DEBUG : Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: ChunkedReader.Read at 57810817024 length 8192 chunkOffset 57810812928 chunkSize 33554432
2018/07/11 14:16:13 DEBUG : /Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: >Read: n=10533
2018/07/11 14:16:13 DEBUG : /Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: Read: ofst=327680, fh=0x0
2018/07/11 14:16:13 DEBUG : Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: ReadFileHandle.seek from 57810823461 to 327680 (fs.RangeSeeker)
2018/07/11 14:16:13 DEBUG : Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: ChunkedReader.RangeSeek from 57810823461 to 327680 length -1
2018/07/11 14:16:13 DEBUG : Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: ChunkedReader.Read at -1 length 4096 chunkOffset 327680 chunkSize 33554432
2018/07/11 14:16:13 DEBUG : Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: ChunkedReader.openRange at 327680 length 33554432
2018/07/11 14:16:13 DEBUG : /Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: Read: ofst=458752, fh=0x0
2018/07/11 14:16:13 DEBUG : Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: ChunkedReader.Read at 331776 length 8192 chunkOffset 327680 chunkSize 33554432
2018/07/11 14:16:13 DEBUG : Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: ChunkedReader.Read at 339968 length 16384 chunkOffset 327680 chunkSize 33554432
2018/07/11 14:16:13 DEBUG : Radarr_Movies/Tomb Raider (2018)/Tomb Raider (2018).mkv: ChunkedReader.Read at 356352 length 32768 chunkOffset 327680 chunkSize 33554432

It seems to just read what it needs and close out the file as the ffprobe takes roughly 1.8 seconds and moving 512M would take ~4 seconds.

Direct streaming was tested yesterday with --buffer-size=0; movies started fast and worked great.

Getting these errors again:

TVShows/Cops/Season 31/Cops - S31E03 - Keys to Success WEB-DL-1080p.mp4: ReadFileHandle.Read error: low level retry 10/10: couldn't reopen file with offset and limit: open file failed: googleapi: Error 403: The download quota for this file has been exceeded., downloadQuotaExceeded

I cannot seem to find out what the download quota for gdrive is. It also says “for this file”, but nothing loads at the moment.

EDIT: After thinking about it: VFS, using chunks, will download more parts of the file while it’s being accessed. Am I correct in thinking this will count toward the 1000-queries-per-100-seconds quota? The API console shows a spike to 1K, and after that I’ve seen the errors.

EDIT2: Reading the other post about --vfs-read-chunk-size, the number of API calls is reduced when using --vfs-read-chunk-size-limit. I’m confused.

Memory usage is currently “fine” for me. I was talking about wasted traffic from filling a buffer that is never used.
I tested ffprobe and mediainfo to see how much data they really read and how much is buffered additionally.
It turns out there is no difference in traffic between --buffer-size 4M and --buffer-size 64M; only --buffer-size 0 makes a real difference.
With --buffer-size >= 4M, a mediainfo call will waste around 7–10 MB per file and ffprobe only 3–5 MB on a 1 Gbit/s connection.
So it’s not worth reducing the buffer size to save on traffic.

Sadly, there is no official download quota. It is possible that there are different kinds of “bans”.
At one point during a downloadQuotaExceeded limit, I was still able to download smaller files and even parts of larger files using the Range header. Since then I have never encountered the downloadQuotaExceeded error again, so I can’t verify this.

Yes these will count towards your quota. They will be listed as drive.files.get in the API console.

When only --vfs-read-chunk-size x is set, the chunks will always have a fixed size, and every x bytes a new drive.files.get request will be sent.
If --vfs-read-chunk-size-limit y is also set, the chunk size is doubled after each chunk until it reaches y bytes. This reduces the number of drive.files.get requests.
You can set --vfs-read-chunk-size-limit off to “disable” the limit, which means unlimited growth.
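That doubling rule can be sketched numerically. This is just an illustration of the behaviour described above, not rclone’s actual code, and the 50 GiB file size is a made-up example:

```python
# Count sequential range requests (drive.files.get) under rclone-style chunk growth.
def requests_needed(total_bytes: int, chunk_size: int, limit: int = None) -> int:
    requests, read, size = 0, 0, chunk_size
    while read < total_bytes:
        requests += 1
        read += size
        size *= 2                      # chunk size doubles after each chunk...
        if limit is not None:
            size = min(size, limit)    # ...but never grows past the limit
    return requests

M, G = 1024**2, 1024**3
print(requests_needed(50 * G, 128 * M, limit=128 * M))  # fixed 128M chunks: 400
print(requests_needed(50 * G, 128 * M, limit=2 * G))    # capped at 2G: 29
print(requests_needed(50 * G, 128 * M))                 # unlimited growth: 9
```

So even a modest limit cuts the request count by an order of magnitude compared to fixed-size chunks.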
