How to find ideal chunk size

I'm using a 10 MB chunk size, but looking at my debug logs, I seem to read only 4 KB of the chunk?

ChunkedReader.Read at 1764147200 length 4096 chunkOffset 1761607680 chunkSize 10485760
>Read: read=4096, err=<nil>
ChunkedReader.Read at 82231296 length 4096 chunkOffset 73400320 chunkSize 10485760

If I'm always reading at that size, wouldn't 1 MB be more efficient?

My settings:

  --vfs-read-chunk-size=10M \
  --vfs-read-chunk-size-limit=0 \
  --buffer-size=0K \
  --max-read-ahead=0K \

I don't have issues, but just wondering if I can get even faster start times with a smaller chunk size.

That (--max-read-ahead) does nothing unless you have a custom-compiled kernel, so it can be removed.

Why buffer size 0?

The 'chunked' reading with VFS is related to requesting data from the backend and is really for reducing API calls.

If you have a 128M chunk size (the default), it requests a 128M HTTP range.
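
For illustration, such a request carries a standard HTTP Range header asking for the first 128M of the file (this is just generic HTTP, not an actual rclone log line):

  Range: bytes=0-134217727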

The goal of using this is reducing the API calls and that's pretty much it.

The OS itself reads things in 4KB blocks, and unless you run a custom kernel, that will always be the case.

In the end, a 10MB chunk size makes things much slower, as it generates a lot more API calls until the chunk size ramps up (based on having the limit off).
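
As a rough illustration: reading a 5 GB file in fixed 10M chunks takes around 500 range requests, while starting at the 128M default and letting it double (128M, 256M, 512M, ...) covers the same file in about six requests.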

It's best to use the defaults in the majority of cases.
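
For reference, the defaults look roughly like this (double-check them against the rclone docs for your version):

  --vfs-read-chunk-size=128M \
  --vfs-read-chunk-size-limit=off \
  --buffer-size=16M \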

What is the benefit of buffer-size?

Why should I care about API calls as long as I'm under my quota, which is very generous btw? I'd rather increase performance even if it means more API calls.

You need to balance API calls quota vs performance.

By default, you only get roughly 10 transactions per second via the Google API.

So you really want the sweet spot: making enough calls to get the best performance, but not so many that you get rate limited and forced to back off.

In general, if you are playing a file in Plex, which is better: making 1,000 API calls to play the file, or 10,000 API calls to play the same file? As a general statement, fewer API calls per second to produce the same result is less taxing.
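
To put rough numbers on that: with a quota of roughly 10 transactions per second, ten concurrent streams each making one range request per second are already at the ceiling, while the same streams requesting larger ranges less often leave plenty of headroom before the rate limiting kicks in.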

buffer-size is the amount of memory per file that is used when a file is opened and read sequentially. Once the file is closed, the buffer is dropped and isn't reused. I've toyed around with many buffer sizes as it really only impacts "Direct Playing" in Plex.

In theory, a big buffer size would provide some help if you had some latency in getting the next set of data assuming that Plex keeps the file open and is reading sequentially. I've never really gone down and fully tested this to how effective it is.

I don't use Plex so can't speak for it.

I was looking at my API usage stats.

Is there a way to have some kind of progressive chunk size, so we can start with 10 MB and go bigger over time if it's a sequential read?

I don't have any buffering issues, even with high-bitrate usage with my current settings, and my RAM usage is really minimal with ~40 open files on the mount.

That's how the VFS system works.

If you have an initial read request of 128M and the read continues to go sequentially, it starts to request larger sizes by going to 256M / 512M / etc.
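
As a minimal sketch of that doubling behaviour (illustrative Go, not rclone's actual code; the flag names in the comments are the real flags, the values are just the defaults mentioned above):

  // Minimal sketch (not rclone's actual code) of how the VFS read chunk
  // size grows while reads stay sequential: it doubles after each chunk
  // until the optional --vfs-read-chunk-size-limit is reached.
  package main

  import "fmt"

  func main() {
      chunkSize := int64(128 << 20) // --vfs-read-chunk-size (128M default)
      limit := int64(-1)            // --vfs-read-chunk-size-limit, -1 meaning "off" (no cap)

      for i := 1; i <= 5; i++ {
          fmt.Printf("sequential chunk %d: request a %dM range\n", i, chunkSize>>20)
          if next := chunkSize * 2; limit < 0 || next <= limit {
              chunkSize = next // keep doubling: 128M -> 256M -> 512M -> ...
          }
      }
  }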

There really isn't a concept of a chunk though. It requests a piece of data, and the buffer-size is where the data stays.
