I have a Google Drive remote mounted with rclone and fronted by an HTTP web server. This in turn is fronted by a CDN with large-file support enabled, so files are read in chunks (byte-range requests) of roughly 2 to 5 megabytes. The mount uses VFS caching, with cached files going to a dedicated NVMe drive.
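To illustrate the shape of the setup (treat this as a hypothetical sketch rather than my actual config; the web server, paths and port are stand-ins), the web-server layer just serves files straight off the mount, with the CDN in front issuing the ranged reads:

```
# Hypothetical nginx-style vhost serving files directly from the rclone mount.
# The CDN sits in front of this and issues the 2-5 MB Range requests.
server {
    listen 8080;
    location /files/ {
        alias /mnt/gdrive/;   # the rclone mount point (placeholder path)
        # static files get byte-range support by default
    }
}
```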
I'm trying to fine-tune the settings to minimize read latency and keep iowait as low as possible. These are the relevant settings the mount is running with:
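Roughly, the invocation looks like the sketch below; treat the paths and most of the values as placeholders rather than my exact settings (the 256M read-ahead is the value I discuss further down):

```
# Illustrative sketch of the mount command; paths and most values are placeholders.
rclone mount gdrive: /mnt/gdrive \
  --vfs-cache-mode full \
  --cache-dir /mnt/nvme/rclone-cache \
  --vfs-cache-max-size 500G \
  --buffer-size 16M \
  --vfs-read-chunk-size 32M \
  --vfs-read-chunk-size-limit 256M \
  --vfs-read-ahead 256M \
  --allow-other
```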
While this seems to work, iowait is a bit higher than I'd hope, without all that much corresponding network activity.
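I'm judging iowait and network activity with the usual tools, roughly along these lines (representative commands rather than my exact ones):

```
iostat -x 5   # per-device utilisation and %iowait over 5-second intervals
vmstat 5      # the 'wa' column shows CPU time spent waiting on I/O
nload         # live throughput on the network interface
```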
Are there any recommended settings I'm missing? Any recommended changes to the settings I already have?
Also, can I get confirmation that, with the current flag choices (especially around buffer-size, vfs-read-chunk-size/limit and vfs-read-ahead), the chunks will most likely be served from disk/memory rather than generating additional network requests back to gdrive?
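For what it's worth, the way I plan to verify this is by watching the VFS cache and transfer counters over the remote control API while the CDN pulls chunks. This assumes the mount is running with --rc enabled and that I'm reading the rc calls right:

```
# Requires the mount to be started with --rc (and possibly rc credentials
# or --rc-no-auth, depending on how the rc server is configured).
rclone rc vfs/stats    # cache counters: how much is being served from the NVMe cache
rclone rc core/stats   # bytes transferred from gdrive, i.e. reads that went back to origin
```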
My rationale for setting vfs-read-ahead to 256M is that it should help ensure enough data has already been cached on disk by the time the CDN asks for the next chunk. I can also see how that might keep iowait high, and that wouldn't necessarily be a bad thing. My concern with iowait is requests getting blocked because of API limits, but I feel like reducing some of these values would increase the number of API calls and make it more likely that I get rate limited.
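If it matters, the way I'd expect to spot actual rate limiting is pacer retries showing up in the mount log when running with -vv; something like the following, where the log path is just a placeholder:

```
# Assumes the mount logs to this file via --log-file and runs with -vv.
grep -ci "pacer" /var/log/rclone-mount.log   # repeated low-level retries suggest 403 rateLimitExceeded
```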
I'll test the current settings a bit more, but I don't believe I've hit any major problems with them. I'd also expect fewer and fewer origin requests as more content gets cached naturally, so blocked requests should become less of a concern. I might play around with the values once I reach a steady state in terms of cache hit ratio at the CDN.