These were my settings before. I don't want to read chunks in advance; I'd like to keep it 1:1 as much as possible: download the requested chunk, send it. With these settings I make sure I don't kill my incoming bandwidth while doing little output...
Also I like that changes are picked up after 1 min
I'm not using Plex. I have a bunch of files I live stream, and I don't want any kind of buffer, just 1:1 bandwidth usage.
If I'm outputting 10 Mbps I want to be downloading 10 Mbps... with plexdrive --chunk-load-ahead=0 I achieve this, since only requested chunks are downloaded and sent to the client. Clients are still allowed to buffer, I guess, by requesting more chunks? Not sure... but server-side there's no buffer; only what is requested is downloaded and sent. There's no waste of bandwidth.
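For reference, a minimal sketch of the kind of mount I mean (the mount point and chunk size here are just placeholders; the relevant part is --chunk-load-ahead=0, which makes plexdrive fetch only the chunks a client actually requests):

```shell
# Sketch only - paths and chunk size are placeholders.
# --chunk-load-ahead=0: fetch nothing preemptively, only requested chunks.
plexdrive mount \
  --chunk-load-ahead=0 \
  --chunk-size=10M \
  /mnt/plexdrive
```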
Before, when I had chunk-load-ahead=1, I'd be sending like 300 Mbps of output but downloading 900 Mbps. Now if I'm sending 300 Mbps of data, I'm only downloading 300 Mbps.
The way I see it: say a file starts with chunks A, B, C, D, E.
Right now, with my settings, a client can request chunks A, B, C, D, E. The server will download each of them, send it to the client and discard it.
If I had --chunk-load-ahead=1, then if the client only requested A and B, the server would download C anyway, even though the client only needs A and B.
That creates unnecessary waste of bandwidth, even if in the end the amount downloaded is the same, assuming you do use all the chunks anyway. Maybe that increases performance in some use cases, but in others it's not good? Not sure.
So I wasn't sure if I was being clear before, but I think I am now. Maybe there is already a way to replicate this behavior in rclone.
It could be similar to ffmpeg reading input at native frame rate with -re... Another way to say it: if a file's stream bitrate is 6 Mbps, I never want to read it faster than 6 Mbps (because that should be fast enough for real-time streaming).
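To make that concrete, here's the arithmetic for turning a stream bitrate into the byte-rate ceiling I'd want per file (the 6 Mbps figure is just my example above):

```shell
# 6 Mbit/s divided by 8 bits/byte = 750000 bytes/s (~0.75 MB/s).
# Reading any faster than this is pure buffering, not playback.
bitrate_mbps=6
bytes_per_sec=$(( bitrate_mbps * 1000000 / 8 ))
echo "$bytes_per_sec"
```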
What if I use --vfs-read-chunk-size-limit=0 with --vfs-read-chunk-size=0K and --buffer-size=0 (because, from my understanding, the buffer is not shared even for the same file)?
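In case it helps, the mount I'm picturing would look something like this ("remote:" and the mount point are made up; whether 0 is a sensible value for the chunk flags is exactly what I'm unsure about):

```shell
# Sketch only - "remote:" and /mnt/rclone are placeholders.
# The idea: no chunked read-ahead, no growing chunk limit, no memory buffer.
rclone mount remote: /mnt/rclone \
  --vfs-read-chunk-size 0 \
  --vfs-read-chunk-size-limit 0 \
  --buffer-size 0
```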
Changing the software is not really an option... and you are still not understanding what I'm saying. Yes, if the client requests A, B, C, D, E it will get them, but with plexdrive I won't download chunks preemptively.
I also tested a mount with --max-read-ahead=0K but it's still using much more bandwidth than plexdrive...
root@server ~ # dd if=/mnt/plexdrive/test.mkv of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 128.232 s, 8.4 MB/s
root@server ~ # dd if="/mnt/rclone/test.mkv" of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 24.814 s, 43.3 MB/s
I want rclone to be as slow as possible while still allowing the file to be read in real time... I don't want/need huge spikes, and I don't need any kind of pre-buffer, read-ahead or anything.
These two probably don't do what you think they do and shouldn't be relevant for your goal. --buffer-size is a read/write buffer in memory to smooth out spikes in transfer and ensure optimal throughput where a stream might otherwise need to wait for an app or a HDD to receive it. It won't read ahead - it just tries to keep already-requested data in memory. Usually a good thing to have at at least the default.
--max-read-ahead is a hint to the OS and is usually limited to 128 KB anyway. I'm not even sure it does anything on Linux at all, as some OSes ignore it.
These I am also pretty sure won't help in this scenario. A non-chunked stream works fine, and rclone won't read more of the stream than is required to fill the requesting application's buffer. There are a few seconds of high bandwidth at the start as the application buffer fills, but after that it will use a steady bandwidth equal to the bitrate of the media - and if you pause playback it will stop.
rclone doesn't read ahead by itself (except if you used the cache backend with large chunks, or VFS cache-mode full - neither of which is needed here). Whatever pre-buffering happens is done by the application that requests the data, so that is really the only place to limit it. It seems to me like you are trying to limit the wrong thing.
You could always hard-cap rclone with --bwlimit to only use a certain bandwidth, but that would be static, and it would be a poor solution applied in the wrong place. Setting a bwlimit a little below your max can be useful just to prevent network congestion or lag for other uses while streaming, however.
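For example, something like this (the remote name, mount point and number are made up; note --bwlimit is a byte rate, so 100M is roughly 800 Mbps, a little under a gigabit link):

```shell
# Global cap, not per-file: all rclone traffic limited to ~100 MB/s.
rclone mount remote: /mnt/rclone --bwlimit 100M
```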
I don't control the clients, so changing the clients is not an option. Besides, you can see from my read-speed tests that rclone is much faster than plexdrive, which is bad in my situation; I don't need to waste my bandwidth reading a file much faster than needed to stream it in real time.
I'd rather have 10 files downloading at 10 Mbps than 1 at 100 Mbps for no reason...
I only need to read a file fast enough for real-time playback; everything above that is just waste...
See a few comments above. The feature you want doesn't exist, as you want bandwidth control per file. Raise an issue and ask for a feature request, or continue with plexdrive if it meets your use case (which it seems to, so why change?).
The dd command you executed is a sequential read piped to /dev/null; it's going to read sequentially as fast as possible.
Plex and Emby both have controls in their software to limit read-ahead/transcoding, which addresses the issue you describe of reading too far ahead. You haven't mentioned what the software is, so it might be worth checking into that.
If you want to limit the read-ahead you could try --vfs-read-chunk-size 0 and --buffer-size 0. This should be reasonably efficient but the OS will still buffer stuff.
You could also try setting --vfs-read-chunk-size 1M --vfs-read-chunk-size-limit 1M, which will limit each transfer to 1M. That will do loads more transactions and be very inefficient, so it probably isn't what you want!
I don't want to hard-cap rclone. It can use 100% of my incoming bandwidth; I just don't want a few files using that much. It's about being efficient: each file can be read in real time at less than 3 Mbps.
Sometimes I have 200 or more files open at the same time on the mount, and I'd rather limit each of them so they aren't fighting for bandwidth, or wasting bandwidth reading one file too fast while others are too slow for real-time streaming. The issue is really the spikes.