Read files as slowly as possible?

How do I replicate these settings with rclone VFS? Also, is VFS the best way to mount nowadays?

/opt/plexdrive/plexdrive mount -v 3 --refresh-interval=1m --chunk-check-threads=8 --chunk-load-threads=8 --chunk-load-ahead=0 --chunk-size=10M --max-chunks=800 --fuse-options=allow_other,read_only --config=/opt/plexdrive --cache-file=/opt/plexdrive/cache.bolt /mnt/plexdrive

These were my settings before. I don't want to read chunks in advance; I'd like to keep bandwidth usage as close to 1:1 as possible: download the requested chunk, send it. With these settings I make sure I don't saturate my incoming bandwidth while producing little output...

I also like that changes are picked up within 1 minute.

Thanks!


plexdrive doesn't compare exactly to rclone, as the two differ in some fundamental design decisions.

I use a pretty simple mount that you can tweak to your needs. I personally just read from my rclone mount, with the occasional delete.

 /usr/bin/rclone mount gcrypt: /GD --allow-other --dir-cache-time 96h --log-level INFO --log-file /opt/rclone/logs/rclone.log --timeout 1h --umask 002

I use a longer dir-cache since changes, as with plexdrive, are picked up via 1-minute polling by default.

And how do you stop it from using 200 Mbps of incoming bandwidth to serve 10 Mbps of output? i.e., not reading chunks ahead.


You can use:

https://rclone.org/docs/#bwlimit-bandwidth-spec
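
For example (values purely illustrative), you can set a flat cap or a timetable on the mount:

 /usr/bin/rclone mount gcrypt: /GD --allow-other --bwlimit 10M
 /usr/bin/rclone mount gcrypt: /GD --allow-other --bwlimit "08:00,5M 23:00,off"

"off" in a timetable entry removes the limit for that window.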

But I think you are really seeing the difference between plexdrive and rclone, as rclone can handle fetching the data much better. Rclone itself isn't using the bandwidth.

Plex normally requests a large part of the file up front so it can buffer, and rclone just serves that.

If you are only putting out 10 Mb/s, you'd only be pulling 10 Mb/s.

Plex's transcoder will buffer up a number of seconds, and that happens as fast as it can grab the data. Once the buffer is full, the transfer drops down to nothing.

In the end, if you play a 2GB file, you'll only grab 2GB of data (plus a small percentage of overhead for TCP, etc.).

I'm not using Plex. I have a bunch of files I live-stream, and I don't want any kind of buffering, just 1:1 bandwidth usage.

If I'm outputting 10 Mb/s, I want to be downloading 10 Mb/s... With plexdrive's --chunk-load-ahead=0 I achieve this, since only requested chunks are downloaded and sent to the client. Clients are still allowed to buffer, I guess, by requesting more chunks? Not sure... but server-side there's no buffer; only what is requested is downloaded and sent. There's no wasted bandwidth.

Before, when I had chunk-load-ahead=1, I'd be sending around 300 Mbps of output but downloading 900 Mbps. Now, if I'm sending 300 Mbps of data, I'm only downloading 300 Mbps.


There's no option on a mount to limit per-file bandwidth.

If your app is requesting a larger set of data, that's the place to limit it.

You can always request a feature for your use case.

I think I'll keep using plexdrive then :slight_smile: rclone wastes too much bandwidth.


Yep, use the best tool for the job.

In this case, the application is requesting a lot of data and rclone is only serving it based on the request coming in.

You found a nice workaround for your challenge by using the limits that plexdrive allows.

I'd just rephrase it as: your app is wasting the bandwidth. Otherwise it would be like saying it's the hammer's fault for hitting your finger :slight_smile:

Could be a neat feature request though, as I get your use case.

I don't want to bother the devs...

The way I see it: say a file consists of chunks A, B, C, D, E.

Right now, with my settings, a client can request chunks A, B, C, D, E. The server will download each of them, send it to the client, and discard it.

If I had --chunk-load-ahead=1, then if the client only requested A and B, the server would download C anyway, even though the client only needs A and B.

It creates unnecessary waste of bandwidth, even if, in the end, the amount downloaded is the same (assuming you use all the chunks anyway). Maybe that increases performance in some use cases, but not in others? Not sure.
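
One way to see the difference (a hypothetical test; the interface name and file path are placeholders): read only part of a file and watch inbound traffic while it runs. With --chunk-load-ahead=0 the mount should pull roughly what was asked for; with read-ahead it pulls extra chunks beyond the requested range.

 # In one terminal, watch inbound traffic:
 ifstat -i eth0 1
 # In another, read only the first 20M of a file on the mount:
 dd if=/mnt/plexdrive/test.mkv of=/dev/null bs=1M count=20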

I wasn't sure if I was being clear before, but I think I am now. Maybe there is already a way to replicate this behavior in rclone.

It could be similar to ffmpeg's "read input at native frame rate" option, -re... Maybe another way to say it: if a file's stream bitrate is 6 Mbps, I never want to read it faster than 6 Mbps (because that should be fast enough for real-time streaming).
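
For reference, that ffmpeg option looks like this (the file path is a placeholder; -re makes ffmpeg consume the input at its native frame rate instead of as fast as it can read):

 ffmpeg -re -i /mnt/rclone/test.mkv -c copy -f null -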

What if I use --vfs-read-chunk-size-limit=0 with --vfs-read-chunk-size=0K and --buffer-size=0 (because, from my understanding, the buffer is not shared even for the same file)?
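
On a mount line, what I'm asking about would look something like this (just a sketch; remote name and mount point are placeholders):

 rclone mount gcrypt: /mnt/rclone --allow-other --buffer-size 0 --vfs-read-chunk-size 0K --vfs-read-chunk-size-limit 0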


vfs-read-chunk-size and buffer-size really would not have any impact.

If your application is requesting A,B,C,D,E from rclone, it's going to get that. Making the read request size or buffer size small wouldn't change that.

buffer-size is just how much of the already-requested data gets stored in memory when reads are more or less sequential.

vfs-read-chunk-size is how the request is sent to the backend remote. I'm not sure what the minimum is, but it really doesn't change much in terms of the request coming in.
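
As a rough illustration (hypothetical URL, and a simplification of what actually happens over the backend's API), chunked reading just means the remote sees a series of range requests instead of one open-ended read, with the chunk size doubling as a sequential read progresses (up to --vfs-read-chunk-size-limit, if set):

 # First 10M chunk of the object:
 curl -s -r 0-10485759 "https://example.com/test.mkv" -o /dev/null
 # Next chunk, doubled to 20M:
 curl -s -r 10485760-31457279 "https://example.com/test.mkv" -o /dev/null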

You'd want to make the fix at the right spot, which would be the app. Anything else would be duct tape around the problem - which, admittedly, seems to work with plexdrive.

Changing the software is not an option, really... and you are still not understanding what I'm saying: yes, if the client requests A, B, C, D, E it will get that, but with plexdrive I won't download chunks preemptively.

I also tested a mount with --max-read-ahead=0K but it's still using much more bandwidth than plexdrive...

See comparisons...

root@server ~ # dd if=/mnt/plexdrive/test.mkv of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 128.232 s, 8.4 MB/s

root@server ~ # dd if="/mnt/rclone/test.mkv" of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 24.814 s, 43.3 MB/s

I want rclone to be as slow as possible while still allowing the file to be read in real time... I don't want/need huge spikes, and I don't need any kind of pre-buffering, read-ahead, or anything like that.

@ncw can you take a look at this please?

My mount settings:

  --vfs-read-chunk-size=10M \
  --vfs-read-chunk-size-limit=0 \
  --buffer-size=0K \
  --max-read-ahead=0K \

  --buffer-size=0K \
  --max-read-ahead=0K \

These two probably don't do what you think they do and should not be relevant for your goal. --buffer-size is a read/write buffer in memory to smooth out spikes in transfer and ensure optimal throughput where otherwise a stream might need to wait for an app or an HDD to receive it. It won't read ahead; it just tries to keep already-requested data in memory. Usually a good thing to have at least at the default.
--max-read-ahead is a hint to the OS and is usually limited to 128 KB anyway. I'm not even sure it does anything on Linux at all, as some OSes ignore it.

  --vfs-read-chunk-size=10M \
  --vfs-read-chunk-size-limit=0 \

These I am also pretty sure won't help in this scenario. A non-chunked stream works fine, and rclone won't read more of the stream than is required to fill the requesting application's buffer. There are a few seconds of high bandwidth at the start as the application's buffer fills, but after that it will use a steady bandwidth equal to the bitrate of the media - and if you pause playback it will stop.

rclone doesn't read ahead itself (except if you use the cache backend with large chunks, or VFS cache-mode full - neither of which is needed here). Whatever pre-buffering happens is done by the application that requests the data, so that is really the only place to limit it. It seems to me like you are trying to limit the wrong thing.

You could always hard-cap rclone with --bwlimit so it only uses a certain bandwidth, but that would be static, and it would be a poor solution applied in the wrong place. Setting a bwlimit a little below your max can be useful, however, just to prevent network congestion or lag for other uses while streaming.

--max-read-ahead does nothing unless you custom-compile a kernel.

I've already shared what you need and it's a bandwidth limit per file, which doesn't exist. The functionality is not there.

The options you are using would just add more API calls and create a lot of overhead, which would slow things down, but the issue is that the client is requesting the data, so rclone will serve it.

The fix is at the client doing the requesting, not in limiting the backend.

I don't control the clients, so changing them is not an option. Besides, you can see from my read-speed tests that rclone is much faster than plexdrive, which is bad in my situation; I don't need to waste my bandwidth reading a file much faster than needed to stream it in real time.

I'd rather have 10 files downloading at 10 Mbps than 1 at 100 Mbps for no reason...

I only need to read a file fast enough to stream it in real time; everything above that is just waste...


See a few comments above. The feature you want doesn't exist, as you want bandwidth control per file. Raise an issue and ask for a feature, or continue with plexdrive if it meets your use case (which it seems to, so why change?).

The dd command you executed is a sequential read, and since you piped it to /dev/null, it's going to read sequentially as fast as possible.

Plex and Emby both have controls in their software to limit read-ahead/transcoding to address the issue you describe, which is reading too far ahead. You haven't mentioned what your software is, so it might be worth checking into that.

If you want to limit the read-ahead, you could try --vfs-read-chunk-size 0 and --buffer-size 0. This should be reasonably efficient, but the OS will still buffer stuff.

You could also try setting --vfs-read-chunk-size 1M --vfs-read-chunk-size-limit 1M, which will limit each transfer to 1M. That will do loads more transactions and be very inefficient, so it probably isn't what you want!
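
Put together on a mount, that second suggestion would look something like this (a sketch only; remote name and mount point are placeholders):

 rclone mount gcrypt: /mnt/rclone --allow-other --buffer-size 0 --vfs-read-chunk-size 1M --vfs-read-chunk-size-limit 1M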

Any chance we could get a bandwidth limit per open file? Or per transfer, or whatever you call it...


You can set a total limit with --bwlimit, and you can set the number of transfers with --transfers - hopefully this provides enough control.
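
For example (a sketch; remote and destination are placeholders):

 rclone copy gcrypt:media /local/media --transfers 4 --bwlimit 40M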

He's using a mount though.

I don't want to hard-cap rclone. It can use 100% of my incoming bandwidth; I just don't want a few files using that much. It's about being efficient: the files can be read in real time at less than 3 Mbps.

Sometimes I have 200 or more files open at the same time on the mount, and I'd rather limit each of them so they aren't fighting for bandwidth, or wasting bandwidth by reading one file too fast while others are too slow for real-time streaming. The issue is really the spikes.
