When Plex goes to analyze a file, it usually opens and closes it 3-4 times from what I’ve noticed.
Some 403s/500s are normal in the process and I usually just ignore those.
I’ve never hit a “download quota exceeded” error while using vfs-read or cache.
I just let Plex do its thing when I add a library and it will analyze each file as it loads. I just recently wiped my DB and reseeded ~45TB of stuff in about 2 1/2 days, so that’s analyzing roughly ~21k files.
In Sonarr/Radarr, I have analyze off. In Plex, I have all deep analysis off in the scheduled tasks. I let regular analysis happen as a scheduled task.
Guess it was something with the analyze script I was using. After posting that, it worked again. You like the simple setup (I read your previous posts; I’ve been lurking here for months), so can I presume you don’t use extra scripts for scanning and adding media besides plex_autoscan and the built-in scanner?
The downside of VFS for me is that I need to use unionfs/mergerfs to combine a local and remote file system to manage the uploads.
I like the cache as I can just use its tmp-upload feature (--cache-tmp-upload-path) to handle that jazz with plex_autoscan.
VFS starts ~3-5 seconds faster than cache for me. I have no buffering or problems streaming with either setup.
With the startup also comes faster mediainfo or ffprobe with VFS: if you have 21k files to analyze and you add 5 seconds to each file, that’s an extra ~29 hours if my math is right (21,000 files × 5 seconds ÷ 3,600 seconds per hour ≈ 29.2 hours).
Yes I noticed the speed up when I changed from cache to VFS due to slow initial scanning of my media.
The startup speed-up between cache and VFS is not that big for me, but that’s because I’m on a low-bandwidth line, so it will always transcode for me.
My download server is separate, so I’ll use cache there for the uploading. I decided to implement plex_autoscan on the other VPSes too; I need to adjust my script on the download unit, but I’m already automating the sh*t out of this whole project, so why not. In the end it’s a cleaner and simpler setup than my current production one.
Sorry to hijack but any chance you peeps could post your mount commands so we can get a better idea of how others are making use of this feature? I’m currently using cache not VFS cache so any speed improvements are welcome!
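To kick things off, here’s the general shape of a read-only VFS mount of the sort being discussed — a sketch only, with placeholder remote name, mount point, log path, and values rather than anyone’s real command:

```shell
rclone mount gdrive: /mnt/media \
  --read-only \
  --allow-other \
  --dir-cache-time 96h \
  --buffer-size 64M \
  --vfs-read-chunk-size 32M \
  --vfs-read-chunk-size-limit 2G \
  --log-level INFO --log-file /var/log/rclone.log &
```

The chunk size/limit and buffer size are the knobs discussed in this thread, so tune those to your line and memory.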
Thanks indeed to @Animosity022 for the testing - I too switched from cache to VFS and it has been faster, more consistent and more stable.
More stable as I suffer from bug #2354 as I upload to the drive from elsewhere, VFS seems to deal with it.
Not bothered about missing the upload feature in my particular case as my mount is read-only.
Only slight negative is that my time-based Plex library scanning / mediainfo checking / etc. of around 400,000 items takes about 300 GB/day in traffic, whilst on cache it took about 120 GB/day. Naturally this can be cut significantly by just scanning what has updated, but that’s for another day.
You might want to try a smaller chunk size and maybe a smaller limit? I would think the problem you are hitting is the VFS might be too efficient and grabbing too much too fast.
You’d be able to check that by analyzing a few files with debug mode to see what it pulled in the logs. That would be my guess.
Actually no, I set it like that on my VPS while scanning the library. With a set buffer it will download too much at one time when scanning/analyzing, which makes rclone shoot up in memory usage until it crashes and burns. With --buffer-size set to 0 it can scan the whole 28TB on a 2GB VPS.
EDIT: If I scan my library using the built-in scanner it will still fill up the memory and crash rclone. Setting --buffer-size back to 0 solves it for me.
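For anyone wanting to reproduce this, a low-memory mount sketch might look like the following (remote and mount point are placeholders; the key part is --buffer-size 0):

```shell
rclone mount gdrive: /mnt/media \
  --buffer-size 0 \
  --vfs-read-chunk-size 32M \
  --vfs-read-chunk-size-limit 1G &
```

With no per-file read-ahead buffer, dozens of files open at once during analysis won’t balloon rclone’s memory usage.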
I just set it because Animosity022 had a good explanation in his posts in the other thread. I don’t know if it does something for direct streams as I’m unable to test; for transcoding it works fine.
@Linhead: How do you see the data it uses? My VPS provider only counts egress data, so I don’t have an overview of my ingress from gdrive to my VPS. Is there any Linux program that can count this?
@Animosity022: Was your post directed to Linhead or to me?
Since my buffer was low, memory really wasn’t an issue, so during the initial seed it’s probably better to have a smaller buffer if you are memory-constrained.
Thanks for clarifying @Animosity022 - totally agree, a smaller chunk size & limit could easily make a huge impact there. I’ll have a play as soon as I am bored with everything working so perfectly with VFS!
@Iguana9999 I use ‘vnstat’, as I only really care about total box consumption over long periods (not per-process use), so it’s just a guesstimate: the daily tally minus a guesstimate of the file sizes played. I’m sure ‘ntop’ or similar can give process-level stats if you do not want to fiddle with iptables counters.
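For example, assuming the interface is eth0 (adjust to yours):

```shell
vnstat -i eth0 -d   # daily rx (ingress) / tx (egress) totals
vnstat -i eth0 -l   # live counters while a scan is running
```

The rx column is the ingress from gdrive that the provider’s egress-only counter misses.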
I’d like to implement a max cap for the buffer so rclone limits the amount of memory it uses. As @Animosity022 says above, 35 open files can use a lot of memory, which is really unnecessary.
The --buffer flag was put in specifically to speed up transfers on Windows, as Windows IO seems very slow unless you do some form of read-ahead and use big buffers. I haven’t really analysed how useful it is in the mount case.
If I just do an ffprobe on a file, it doesn’t fill up the buffer from what I can tell from the logs, so it wouldn’t waste the memory on those types of operations:
Getting these errors again:

TVShows/Cops/Season 31/Cops - S31E03 - Keys to Success WEB-DL-1080p.mp4: ReadFileHandle.Read error: low level retry 10/10: couldn’t reopen file with offset and limit: open file failed: googleapi: Error 403: The download quota for this file has been exceeded., downloadQuotaExceeded
I cannot seem to find out what the download quota for gdrive is; also, it says “for this file” but nothing loads at the moment.
EDIT: After thinking about it, VFS using the chunks will download more parts of the file while it’s accessing it; am I correct in thinking this will count toward the 1,000-queries-per-100-seconds quota? The API console shows a spike to 1K, and after that I’ve seen the errors.
EDIT2: Reading the other post about vfs-chunk-size, the number of API calls will be lessened when using vfs-chunk-size-limit. I’m confused.
Memory usage is currently “fine” for me. I was talking about wasted traffic by filling the buffer which is never used.
I tested ffprobe and mediainfo to see how much data they really read and how much was buffered additionally.
It turns out, there is no difference in traffic between --buffer-size 4M and --buffer-size 64M. Only --buffer-size 0 makes a real difference.
With --buffer-size >= 4M, a mediainfo call will waste around 7-10 MB per file and ffprobe only 3-5 MB on a 1 Gbit/s connection.
So it’s not worth reducing the buffer size just to save on traffic.
Sadly, there is no official download quota. It is possible that there are different kinds of “bans”.
At one time during a downloadQuotaExceeded limit, I was still able to download smaller files and even parts of larger files using the Range header. Since then I have never encountered the downloadQuotaExceeded error again, so I can’t verify this.
Yes these will count towards your quota. They will be listed as drive.files.get in the API console.
When only --vfs-read-chunk-size x is set, the chunks will always have a fixed size, and every x bytes a new drive.files.get request will be sent.
If --vfs-read-chunk-size-limit y is also set, the chunk size will be doubled after each chunk until y bytes is reached. This will reduce the number of drive.files.get requests.
You can set --vfs-read-chunk-size-limit off to “disable” the limit, which means unlimited growth.
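To make that concrete, here’s a back-of-the-envelope sketch (plain shell arithmetic with hypothetical sizes, not rclone itself) comparing how many drive.files.get requests a 1 GiB sequential read would take with a fixed 32M chunk versus chunks that double up to a 1G limit:

```shell
# Hypothetical sizes: 1 GiB file, 32M starting chunk, 1G chunk-size limit.
file=$((1024 * 1024 * 1024))   # bytes in the file
chunk=$((32 * 1024 * 1024))    # --vfs-read-chunk-size 32M

# Fixed chunk size: one drive.files.get per chunk.
fixed=$(( (file + chunk - 1) / chunk ))

# Doubling chunks (--vfs-read-chunk-size-limit 1G): 32M, 64M, 128M, ...
limit=$((1024 * 1024 * 1024))
size=$chunk
total=0
requests=0
while [ "$total" -lt "$file" ]; do
  total=$((total + size))
  requests=$((requests + 1))
  next=$((size * 2))
  if [ "$next" -le "$limit" ]; then size=$next; fi
done

echo "fixed chunks: $fixed requests; doubling chunks: $requests requests"
```

With these numbers it comes out to 32 requests with fixed chunks versus 6 with doubling ones, which is why the limit eases pressure on the per-100-seconds query quota.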