New Feature: vfs-read-chunk-size

I’m trying to wrap my head around some concepts here. Please be patient and follow my reasoning.

Let’s say one is playing back a UHD remux, say a 60GB file. With the default configuration of --vfs-read-chunk-size at 128M, chunks will be downloaded at 128M/256M/512M/1G/2G/4G/8G/16G/32G.

Now the last step at 32G would not be completed, because the file would be “completed” by then (the sum of the above comes to 63.875 GB).
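
To check my arithmetic, here’s a quick sketch of my mental model (an assumption on my part: each request doubles the previous chunk size, and the last request is cut off at the end of the file):

```python
# My assumption of how chunks grow for a 60GB file mounted with
# --vfs-read-chunk-size 128M and no --vfs-read-chunk-size-limit:
# each request doubles the previous one, truncated at end of file.
GiB = 1024**3
file_size = 60 * GiB
chunk = 128 * 1024**2  # starting chunk size, 128M

offset = 0
while offset < file_size:
    length = min(chunk, file_size - offset)
    print(f"request {offset / GiB:6.3f}G .. {(offset + length) / GiB:6.3f}G"
          f"  ({length / GiB:.3f}G)")
    offset += length
    chunk *= 2  # doubles after every request
```

The last line it prints is the 31.875G .. 60.000G request: the ninth call asks for 32G, but only 28.125G of file remains.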

Two questions, then.

  1. Does the --vfs-read-chunk-size-limit argument exist to limit RAM usage, which for huge files could easily escalate out of control?
  2. If so, shouldn’t I be seeing huge RAM loads in Task Manager (Windows user here) once playback proceeds and bigger chunks get downloaded into RAM? Because I’m not seeing that, and I don’t understand why.
    I mount my Google Drive account with:
    rclone mount gDrive: Q: --allow-other --dir-cache-time 96h --vfs-read-chunk-size 128M --buffer-size 64M --timeout 1h
    I start playback of a 32GB remux and my machine has 8.9 of 31.9 GB of RAM used. As playback progresses (and bigger chunks are downloaded) I would expect to see RAM usage increasing sharply, but I’m just not seeing it. rclone.exe is displayed in Task Manager as using about 82MB of RAM, and that figure remains steady (see my sketch of what I suspect is happening, just after this list).
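
If my mental model is right (purely an assumption on my part, not rclone’s actual code), a “chunk” is just an HTTP range request whose body is streamed and consumed in small pieces, so even a huge chunk never has to sit in RAM all at once. Something like:

```python
import urllib.request

# Purely my assumption of the mechanism, not rclone's actual code:
# one "chunk" = one HTTP range request, with the body streamed in
# small pieces, so RAM use stays tiny no matter how big the range is.
url = "https://example.com/remux.mkv"  # hypothetical direct link
first_chunk = 128 * 1024**2            # 128M, as in --vfs-read-chunk-size
req = urllib.request.Request(url, headers={"Range": f"bytes=0-{first_chunk - 1}"})

with urllib.request.urlopen(req) as resp:
    while piece := resp.read(64 * 1024):  # stream 64 KiB at a time
        pass  # hand `piece` to the player; memory stays ~64 KiB here
```

That would square with rclone.exe sitting at a steady 82MB regardless of chunk size.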

Am I missing something? I suppose the memory buffer is involved somehow (at 64MB), constantly being filled and emptied by the ongoing playback process.

If this were the case, playback would be easily disrupted by network “hiccups” (64MB holds a very limited amount of video content, especially in high-bitrate scenarios) and I might want to consider increasing the memory buffer to something like 256 or 512MB. Any cons I should bear in mind, were I to do this?
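
Concretely, I’d be thinking of remounting with the same command as above, only with the buffer raised:

    rclone mount gDrive: Q: --allow-other --dir-cache-time 96h --vfs-read-chunk-size 128M --buffer-size 512M --timeout 1h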

It’s interesting to watch, in the admin console, the download-event count not increasing even though network activity is not constant. It looks like the same “call” is kept active for the full size of the chunk being downloaded, even though network activity starts and stops constantly as the memory buffer is emptied and refilled (assuming I got this right).
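
A toy model of what I think I’m seeing (my assumption only, not rclone internals): one long-lived call fills a bounded buffer while the player drains it, and the downloader blocks whenever the buffer is full, so the network goes quiet without the call ever ending:

```python
import queue
import threading
import time

# Toy model, my assumption only: one long-lived "call" fills a bounded
# buffer and the player drains it. put() blocks while the buffer is
# full, so network activity stops and starts inside a single call.
buf = queue.Queue(maxsize=64)       # stand-in for --buffer-size 64M

def downloader():                   # one call covering a whole chunk
    for _ in range(512):            # 512 x 1 MiB pieces in one request
        buf.put(b"x" * (1 << 20))   # blocks (network idle) when full

def player():
    while True:
        buf.get()                   # playback drains the buffer
        time.sleep(0.01)            # paced by the video bitrate

threading.Thread(target=downloader, daemon=True).start()
threading.Thread(target=player, daemon=True).start()
time.sleep(6)                       # let the toy run to completion
```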

--vfs-read-chunk-size-limit would then exist simply to avoid “wasting” too much download quota (reaching the limit not being a likely scenario would explain why it defaults to off). In the initial example (the UHD remux), one would risk “wasting” a good portion of a 32GB call, correct? Per the chunk arithmetic sketched earlier, the final call starts at the 31.875GB mark, so if playback stopped right after it was issued, up to 28.125GB (the rest of the 60GB file) could be requested without ever being watched, if my reasoning is correct.

Would it make sense, at that point, to set --vfs-read-chunk-size very low for the initial library scan and then raise it once the library has been scanned (since day-to-day additions would be a fraction of the initial scan)?
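
In practice I imagine that would mean scanning with something like (32M is just an illustrative low value):

    rclone mount gDrive: Q: --allow-other --dir-cache-time 96h --vfs-read-chunk-size 32M --buffer-size 64M --timeout 1h

and then remounting with the 128M command above once the scan is done.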

Thanks for any clarification. This is fascinating stuff. :smiley:

Please try not to necrobump 6-month-old topics, as it’s always best to open a new one.

--vfs-read-chunk-size has nothing to do with memory; it’s about how chunks are requested from the backend.

https://rclone.org/commands/rclone_mount/#chunked-reading

Any memory used per file relates back to the --buffer-size flag.

https://rclone.org/commands/rclone_mount/#file-buffering