Correct way to cache an encrypted remote?

Hmm, I'm on Mac/Linux, so it's not something I can really test myself.

What I notice is that the player seems to open and close the file repeatedly:

egrep 'mkv: OpenFile: flags=O_RDONLY|mkv: Flush: fh=0x0' blah.log
2019/07/25 18:58:12 DEBUG : /Encodes-Film/Step.Into.Liquid.2003.1080p.BluRay.DTS.x264-GOS/StepIntoLiquid.BluRay.x264.OAR.GOS.mkv: OpenFile: flags=O_RDONLY, perm=-rwxrwxrwx
2019/07/25 18:58:12 DEBUG : /Encodes-Film/Step.Into.Liquid.2003.1080p.BluRay.DTS.x264-GOS/StepIntoLiquid.BluRay.x264.OAR.GOS.mkv: OpenFile: flags=O_RDONLY, perm=-rwxrwxrwx
2019/07/25 18:58:12 DEBUG : /Encodes-Film/Step.Into.Liquid.2003.1080p.BluRay.DTS.x264-GOS/StepIntoLiquid.BluRay.x264.OAR.GOS.mkv: OpenFile: flags=O_RDONLY, perm=-rwxrwxrwx
2019/07/25 18:58:12 DEBUG : /Encodes-Film/Step.Into.Liquid.2003.1080p.BluRay.DTS.x264-GOS/StepIntoLiquid.BluRay.x264.OAR.GOS.mkv: OpenFile: flags=O_RDONLY, perm=-rwxrwxrwx
2019/07/25 18:58:12 DEBUG : /Encodes-Film/Step.Into.Liquid.2003.1080p.BluRay.DTS.x264-GOS/StepIntoLiquid.BluRay.x264.OAR.GOS.mkv: Flush: fh=0x0
2019/07/25 18:58:12 DEBUG : /Encodes-Film/Step.Into.Liquid.2003.1080p.BluRay.DTS.x264-GOS/StepIntoLiquid.BluRay.x264.OAR.GOS.mkv: OpenFile: flags=O_RDONLY, perm=-rwxrwxrwx
2019/07/25 18:58:12 DEBUG : /Encodes-Film/Step.Into.Liquid.2003.1080p.BluRay.DTS.x264-GOS/StepIntoLiquid.BluRay.x264.OAR.GOS.mkv: Flush: fh=0x0
2019/07/25 18:58:12 DEBUG : /Encodes-Film/Step.Into.Liquid.2003.1080p.BluRay.DTS.x264-GOS/StepIntoLiquid.BluRay.x264.OAR.GOS.mkv: OpenFile: flags=O_RDONLY, perm=-rwxrwxrwx
2019/07/25 18:58:12 DEBUG : /Encodes-Film/Step.Into.Liquid.2003.1080p.BluRay.DTS.x264-GOS/StepIntoLiquid.BluRay.x264.OAR.GOS.mkv: Flush: fh=0x0
2019/07/25 18:58:12 DEBUG : /Encodes-Film/Step.Into.Liquid.2003.1080p.BluRay.DTS.x264-GOS/StepIntoLiquid.BluRay.x264.OAR.GOS.mkv: OpenFile: flags=O_RDONLY, perm=-rwxrwxrwx
2019/07/25 18:58:12 DEBUG : /Encodes-Film/Step.Into.Liquid.2003.1080p.BluRay.DTS.x264-GOS/StepIntoLiquid.BluRay.x264.OAR.GOS.mkv: Flush: fh=0x0
2019/07/25 18:58:13 DEBUG : /Encodes-Film/Step.Into.Liquid.2003.1080p.BluRay.DTS.x264-GOS/StepIntoLiquid.BluRay.x264.OAR.GOS.mkv: OpenFile: flags=O_RDONLY, perm=-rwxrwxrwx
2019/07/25 18:58:14 DEBUG : /Encodes-Film/Step.Into.Liquid.2003.1080p.BluRay.DTS.x264-GOS/StepIntoLiquid.BluRay.x264.OAR.GOS.mkv: Flush: fh=0x0
2019/07/25 18:58:14 DEBUG : /Encodes-Film/Step.Into.Liquid.2003.1080p.BluRay.DTS.x264-GOS/StepIntoLiquid.BluRay.x264.OAR.GOS.mkv: OpenFile: flags=O_RDONLY, perm=-rwxrwxrwx
2019/07/25 18:59:12 DEBUG : /Encodes-Film/Step.Into.Liquid.2003.1080p.BluRay.DTS.x264-GOS/StepIntoLiquid.BluRay.x264.OAR.GOS.mkv: Flush: fh=0x0
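To quantify that churn, you can count the open and flush events in the debug log. A minimal sketch (a tiny inline sample stands in for the real blah.log, with the path shortened):

```shell
# Count how often the player opens vs. flushes the file in an rclone debug log.
# These sample lines stand in for the real log.
cat > /tmp/sample.log <<'EOF'
2019/07/25 18:58:12 DEBUG : x.mkv: OpenFile: flags=O_RDONLY, perm=-rwxrwxrwx
2019/07/25 18:58:12 DEBUG : x.mkv: Flush: fh=0x0
2019/07/25 18:58:12 DEBUG : x.mkv: OpenFile: flags=O_RDONLY, perm=-rwxrwxrwx
EOF
opens=$(grep -c 'OpenFile: flags=O_RDONLY' /tmp/sample.log)
flushes=$(grep -c 'Flush: fh=0x0' /tmp/sample.log)
echo "opens=$opens flushes=$flushes"
```

A healthy sequential stream should show one open per playback session; dozens of open/flush pairs per second points at the player, not the mount.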

It seems to jump around a lot and isn't really reading sequentially either:

2019/07/25 18:58:19 DEBUG : Encodes-Film/Step.Into.Liquid.2003.1080p.BluRay.DTS.x264-GOS/StepIntoLiquid.BluRay.x264.OAR.GOS.mkv: ChunkedReader.RangeSeek from 10622599 to 134282365 length -1
2019/07/25 18:58:19 DEBUG : Encodes-Film/Step.Into.Liquid.2003.1080p.BluRay.DTS.x264-GOS/StepIntoLiquid.BluRay.x264.OAR.GOS.mkv: ChunkedReader.Read at -1 length 131072 chunkOffset 134282365 chunkSize 33554432
2019/07/25 18:58:19 DEBUG : Encodes-Film/Step.Into.Liquid.2003.1080p.BluRay.DTS.x264-GOS/StepIntoLiquid.BluRay.x264.OAR.GOS.mkv: ChunkedReader.openRange at 134282365 length 33554432
2019/07/25 18:58:19 DEBUG : Encodes-Film/Step.Into.Liquid.2003.1080p.BluRay.DTS.x264-GOS/StepIntoLiquid.BluRay.x264.OAR.GOS.mkv: ChunkedReader.Read at 134413437 length 131072 chunkOffset 134282365 chunkSize 33554432
2019/07/25 18:58:19 DEBUG : Encodes-Film/Step.Into.Liquid.2003.1080p.BluRay.DTS.x264-GOS/StepIntoLiquid.BluRay.x264.OAR.GOS.mkv: ChunkedReader.Read at 134544509 length 131072 chunkOffset 134282365 chunkSize 33554432
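To see just how scattered the access is, the seek targets can be pulled out of the log and eyeballed. A rough sketch (a small inline sample stands in for the real log, path shortened):

```shell
# Extract the destination offset of each RangeSeek to see the access pattern.
# Two sample lines stand in for the real debug log.
cat > /tmp/seeks.log <<'EOF'
2019/07/25 18:58:19 DEBUG : x.mkv: ChunkedReader.RangeSeek from 10622599 to 134282365 length -1
2019/07/25 18:58:20 DEBUG : x.mkv: ChunkedReader.RangeSeek from 167836797 to 9000000 length -1
EOF
# The "to" offset is the third-from-last whitespace field on each RangeSeek line.
seeks=$(awk '/RangeSeek/ {print $(NF-2)}' /tmp/seeks.log)
echo "$seeks"
```

If the printed offsets jump forward and backward by hundreds of megabytes, the player is scanning the file rather than reading it sequentially.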

I wonder if someone else uses the same setup/player and has some information that might be helpful. I don't see a tuning option that would fix it offhand. Let me look through the log a bit and see if anything else jumps out.

Okay, I think I know what the issue is.

MPC-BE-x64 locally is using LAV filters + rarfilesource x64 (to read/play unrarred content).

I have created a second (renamed) MPC-BE-rclone instance without rarfilesource (to solely play rclone/gcrypt streaming content).

It seems to work .... testing now.

Oh, that makes sense on the player as the logs seemed odd.

Yes, that was the problem! Thankfully ... :slight_smile:

So, final nssm mount is as follows:

mount --allow-other --allow-non-empty --buffer-size 0M --dir-cache-time 24h --drive-chunk-size 32M --vfs-read-chunk-size 32M --vfs-read-chunk-size-limit off --vfs-cache-mode writes --cache-workers=8 --cache-dir "C:\Cache" --cache-db-path "C:\Cache\gcache.db" --cache-chunk-path "C:\Cache\gcache" gcrypt:/ X: --config "H:\rclone\rclone.conf" --log-level DEBUG --log-file "C:\Cache\logs\rclone.log"

But I noticed above you wrote:

You can also just remove this: --vfs-read-chunk-size 32M --vfs-read-chunk-size-limit off

Do you think that would improve the response?

I'd definitely remove "allow-non-empty", as it allows over-mounting and running multiple processes on the same mount point, which confuses things.

Removing the extra vfs commands should not change much at all other than simplifying the command to what's needed.

Perfect, thank you

Do you know if there is a limit to streamable file sizes?

I'm streaming Gladiator. The file is 92GB and almost 3 hours long...

It took roughly a minute to open, but it plays perfectly! Amazing :slight_smile:

Are there any ways to optimise the mount for 4K playback, eg by increasing the cache size and/or the buffer size?

There are no file size limits from a rclone perspective.

You'd be limited by bandwidth and the horsepower of your player. I'm not sure how MPC works in terms of resource use for what it plays.

The biggest file I've personally played is about 87GB.


Plays flawlessly. Literally :smiley:

Cache size won't really help optimize for 4K. It will help you keep recently accessed files of all kinds snappy though.

The only thing I would recommend for high-bitrate 4K is to make sure chunk size is sufficient (the 32MB you use now should be fine, and I don't think I'd recommend above 64MB). Just to be clear here - if you are using the cache then the chunk size set in that is what matters and you can remove VFS chunk settings. But if no cache is used then use the VFS chunk settings.

If you end up using VFS chunking then using a small minimum size will make files open quickly, then let it grow up to some reasonable amount - say 64 or 128MB. I don't see a good reason why you should let that grow to an unlimited size. That probably wouldn't be good for seeking responsiveness - and there's not much to gain from it beyond that anyway.
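As an illustrative sketch (the values here are my own example, not something quoted from this thread), a VFS-only mount could cap the growth like this:

```shell
# Start small so files open fast, but cap growth at 128M instead of "off".
--vfs-read-chunk-size 32M --vfs-read-chunk-size-limit 128M
```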

And then finally the workers can be increased a little if you should ever encounter stuttering on especially high bitrate media. It will keep a larger buffer of chunks effectively, so should lessen the chance of running out during playback. I haven't done a lot of intensive 4K streaming testing so I can't say what would be considered the recommended minimums for that sort of workload.

Not a whole lot more to optimize beyond this, I think...

Although on an unrelated note I would highly recommend you increase the
upload_cutoff = 128M
chunk_size = 128M
inside your Gdrive remote, because this is a meager 8MB by default. This will not affect downloads at all, only uploads - but uploading larger files will then utilize your bandwidth far more efficiently since it doesn't have to stop and start the transfer every 8MB. My numbers here are just a rough example. Set what you want - but keep in mind this amount of memory will be used for each upload (times 4 if you use 4 transfer threads). Feel free to go to 256MB if your connection is stable and fast and you have more than enough RAM - but beyond that there are sharply diminishing returns in performance.
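For reference, a sketch of what that could look like in rclone.conf (the remote name and the rest of the section are illustrative, not copied from your config):

```
[gdrive]
type = drive
upload_cutoff = 128M
chunk_size = 128M
```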

Changed both the gdrive parameters to 256M as I'm running 64GB of RAM so I think it can handle it :slight_smile:

Definitely snappier!

The default settings are that way because they work very well; a lot of time was spent testing to see what works best.

The purpose of letting it grow is that it allows for fewer API hits during sequential reading. It doesn't really come into play if you are seeking a lot. Seeking in general is expensive, as it closes the file and opens it back up again, dropping anything you had in the buffer.
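A rough back-of-the-envelope sketch of the API-hit saving (the numbers are illustrative, not measured): reading 1 GiB sequentially in fixed 32 MiB chunks takes 32 range requests, while chunks that start at 32 MiB and double on each request cover it in 6.

```shell
# Fixed 32 MiB chunks over a 1024 MiB file:
fixed=$((1024 / 32))
echo "fixed-size requests: $fixed"

# Doubling chunks starting at 32 MiB: 32 + 64 + 128 + 256 + 512 + 1024 >= 1024
total=0; n=0; size=32
while [ "$total" -lt 1024 ]; do
  total=$((total + size))
  size=$((size * 2))
  n=$((n + 1))
done
echo "growing-chunk requests: $n"
```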

One thing I have noticed is errors after removing --tpslimit 8. The files still send and complete, but I see instances of the error scattered throughout the log:

error uploading: googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and adjust limits in the API Console:

If I re-add --tpslimit 8, the errors disappear.

Any idea why? :slight_smile:

Well apparently you are hitting the API limit, but that seems weird because it really shouldn't be happening under most circumstances unless you go really nuts on using overly high transfers and workers. The API limit isn't that restrictive. On the google API webpage you can see statistics for your user key and see if it actually looks like it's hitting the limit.

You are using your own API key and not the default right? Can't remember if you said earlier, but if not that's a very common cause of that problem since it's effectively shared between all rclone users who aren't using their own keys.

I don't know if the chunk size resets when you seek, but if it does it's probably not so bad as I imagine. If it closes and opens the file again like you say then that sounds like it would be the case. I don't know enough about those sorts of details.

Transfers are limited to 8 and workers are also set to 8: workers = 8

Using my own API key and I'm nowhere near any of the limits.

Do you think I should remove workers = 8 from the rclone.conf?

I removed workers = 8 from the rclone.conf and the errors have disappeared. tpslimit = 8 has also been removed.

One other question: when rclone starts it outputs the following:

2019/07/26 01:27:34 INFO : gcache: Cache DB path: C:\cache_gcrypt\gcache.db
2019/07/26 01:27:34 INFO : gcache: Cache chunk path: C:\cache_gcrypt\gcache
2019/07/26 01:27:34 INFO : gcache: Chunk Memory: true
2019/07/26 01:27:34 INFO : gcache: Chunk Size: 32M
2019/07/26 01:27:34 INFO : gcache: Chunk Total Size: 20G
2019/07/26 01:27:34 INFO : gcache: Chunk Clean Interval: 1m0s
2019/07/26 01:27:34 INFO : gcache: Workers: 4
2019/07/26 01:27:34 INFO : gcache: File Age: 1w
2019/07/26 01:27:36 INFO : gcache: Cache DB path: C:\cache_gcrypt\gcache.db
2019/07/26 01:27:36 INFO : gcache: Cache chunk path: C:\cache_gcrypt\gcache
2019/07/26 01:27:36 INFO : gcache: Chunk Memory: true
2019/07/26 01:27:36 INFO : gcache: Chunk Size: 32M
2019/07/26 01:27:36 INFO : gcache: Chunk Total Size: 20G
2019/07/26 01:27:36 INFO : gcache: Chunk Clean Interval: 1m0s
2019/07/26 01:27:36 INFO : gcache: Workers: 4
2019/07/26 01:27:36 INFO : gcache: File Age: 1w

Is it normal to have the above output? I note it's basically the same info - twice.

Those errors are fine. You can get some 403s rate limiting. Just let it go as a few is no problem.

That's normal output from what I can see.

Great, thank you.

Speeds are superb with workers = 8, so good to know the errors are perfectly normal.

Appreciate all the help today - running wonderfully now! :smile:

workers = 8 is the default by the way