It seems to jump around a lot and isn't really reading sequentially either:
2019/07/25 18:58:19 DEBUG : Encodes-Film/Step.Into.Liquid.2003.1080p.BluRay.DTS.x264-GOS/StepIntoLiquid.BluRay.x264.OAR.GOS.mkv: ChunkedReader.RangeSeek from 10622599 to 134282365 length -1
2019/07/25 18:58:19 DEBUG : Encodes-Film/Step.Into.Liquid.2003.1080p.BluRay.DTS.x264-GOS/StepIntoLiquid.BluRay.x264.OAR.GOS.mkv: ChunkedReader.Read at -1 length 131072 chunkOffset 134282365 chunkSize 33554432
2019/07/25 18:58:19 DEBUG : Encodes-Film/Step.Into.Liquid.2003.1080p.BluRay.DTS.x264-GOS/StepIntoLiquid.BluRay.x264.OAR.GOS.mkv: ChunkedReader.openRange at 134282365 length 33554432
2019/07/25 18:58:19 DEBUG : Encodes-Film/Step.Into.Liquid.2003.1080p.BluRay.DTS.x264-GOS/StepIntoLiquid.BluRay.x264.OAR.GOS.mkv: ChunkedReader.Read at 134413437 length 131072 chunkOffset 134282365 chunkSize 33554432
2019/07/25 18:58:19 DEBUG : Encodes-Film/Step.Into.Liquid.2003.1080p.BluRay.DTS.x264-GOS/StepIntoLiquid.BluRay.x264.OAR.GOS.mkv: ChunkedReader.Read at 134544509 length 131072 chunkOffset 134282365 chunkSize 33554432
I wonder if someone else uses the same setup/player and has some information that might be helpful. I don't see a tuning thing that would fix it offhand. Let me look through the log a bit and see if anything else jumps out.
Cache size won't really help optimize for 4K. It will help you keep recently accessed files of all kinds snappy though.
The only thing I would recommend for high-bitrate 4K is to make sure the chunk size is sufficient (the 32MB you use now should be fine, and I wouldn't recommend going above 64MB). Just to be clear here: if you are using the cache backend, then the chunk size set there is what matters and you can remove the VFS chunk settings. If no cache is used, then use the VFS chunk settings instead.
If you end up using VFS chunking, a small minimum size will make files open quickly; then let it grow up to some reasonable limit - say 64 or 128MB. I don't see a good reason to let it grow to an unlimited size. That would likely hurt seeking responsiveness, and there's not much to gain from it beyond that anyway.
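For reference, a rough sketch of what those VFS settings could look like on the mount command line - the values and the remote name gdrive: here are just illustrative placeholders, not a recommendation for your exact setup:

```
rclone mount gdrive: /mnt/media \
  --vfs-read-chunk-size 32M \
  --vfs-read-chunk-size-limit 128M
```

The first flag sets the small opening request size, and the second caps how far it is allowed to grow.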
And then finally, the workers can be increased a little if you ever encounter stuttering on especially high-bitrate media. That effectively keeps a larger buffer of chunks, so it should lessen the chance of running out during playback. I haven't done much intensive 4K streaming testing, so I can't say what the recommended minimums would be for that sort of workload.
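If you are on the cache backend, the knob I mean is its workers setting - either the --cache-workers flag or workers = in the remote's config section. The value and remote name below are just an example, not a tested recommendation:

```
rclone mount mycache: /mnt/media --cache-workers 6
```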
Not a whole lot more to optimize beyond this, I think...
Although on an unrelated note, I would highly recommend you increase
upload_cutoff = 128M
chunk_size = 128M
inside your Gdrive remote, because these are a meager 8MB by default. This will not affect downloads at all, only uploads - but uploading larger files will then utilize your bandwidth far more efficiently, since the transfer doesn't have to stop and restart every 8MB. My numbers here are just a rough example; set what you want, but keep in mind this amount of memory will be used for each upload (times 4 if you use 4 transfer threads). Feel free to go to 256MB if your connection is stable and fast and you have more than enough RAM - but beyond that the performance benefits drop off sharply.
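Put together, the relevant section of rclone.conf might look something like this - the remote name is a placeholder and I've left out your client_id/client_secret and token lines, which stay as they are:

```
[gdrive]
type = drive
upload_cutoff = 128M
chunk_size = 128M
```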
The default settings are that way because they work very well - a lot of time was spent testing to see what works best.
The purpose of letting it grow is that it allows for fewer API hits during sequential reading. It doesn't really come into play if you are seeking a lot. Seeking in general is expensive, as it closes the file and opens it back up again, dropping anything you had in the buffer.
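To illustrate that growth behaviour, here's a toy model of the idea - this is not rclone's actual code, just a sketch of a chunked reader that doubles its request size on sequential reads and starts over after a seek:

```python
class ChunkedReader:
    """Toy sketch of a growing chunked reader (sizes in MB)."""

    def __init__(self, initial=32, limit=128):
        self.initial = initial        # size of the first range request
        self.limit = limit            # cap on how large requests may grow
        self.chunk_size = initial

    def read_next_chunk(self):
        size = self.chunk_size
        # Double the next request, up to the limit, so long sequential
        # reads need fewer and fewer HTTP range requests (API hits).
        self.chunk_size = min(self.chunk_size * 2, self.limit)
        return size

    def seek(self):
        # A seek closes the stream and reopens it at the new offset,
        # so the growth starts over from the initial size.
        self.chunk_size = self.initial

r = ChunkedReader()
print([r.read_next_chunk() for _ in range(4)])  # [32, 64, 128, 128]
r.seek()
print(r.read_next_chunk())  # 32 - back to the small size after a seek
```

This is why a small initial size plus a modest limit gives you fast opens without losing much on long sequential streams.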
One thing I have noticed is errors after removing --tpslimit 8. The files still send and complete, but I see instances of the error scattered throughout the log:
error uploading: googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and adjust limits in the API Console:
Well, apparently you are hitting the API limit, but that seems weird, because it really shouldn't happen under most circumstances unless you go really overboard with overly high transfers and workers. The API limit isn't that restrictive. In the Google API console you can see statistics for your key and check whether it actually looks like it's hitting the limit.
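For context, --tpslimit just spaces out requests so you stay under the per-user quota. Here's a toy sketch of the idea (not rclone's actual implementation - just a minimal rate limiter to show what the flag buys you):

```python
import time

class TPSLimiter:
    """Toy transactions-per-second limiter, illustrating the idea
    behind rclone's --tpslimit flag (not rclone's real code)."""

    def __init__(self, tps):
        self.interval = 1.0 / tps   # minimum spacing between requests
        self.next_allowed = 0.0     # earliest time the next request may start

    def wait(self, now=None):
        # Block until the next request slot, then reserve the one after it.
        now = time.monotonic() if now is None else now
        if now < self.next_allowed:
            time.sleep(self.next_allowed - now)
            now = self.next_allowed
        self.next_allowed = now + self.interval
        return now

# With --tpslimit 8, requests go out at most 8 per second:
lim = TPSLimiter(8)
starts = [lim.wait(now=0.0) for _ in range(3)]
print(starts)  # each start is 0.125 s after the previous one
```

So with the flag removed, nothing spaces the requests out, and a burst of uploads can trip the 403 rate limit even though every transfer eventually completes.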
You are using your own API key and not the default, right? I can't remember if you said earlier, but if not, that's a very common cause of this problem, since the default key is effectively shared between all rclone users who aren't using their own.
I don't know if the chunk size resets when you seek, but if it does, it's probably not as bad as I imagined. If it closes and reopens the file like you say, then that sounds like it would be the case. I don't know enough about those sorts of details.