Slow sometimes .. help please

I've had rclone running for a while now. It worked great so far, but playback in Plex always had very little buffer.
While trying to improve this, I think I broke something.. It's slower now, and every few days it gets completely stuck.

> ExecStart=/usr/bin/rclone mount SecretDrive: /mnt/media \
>   --config /home/snikay/.config/rclone/rclone.conf \
>   --log-file=/home/snikay/logs/rclone-mount.log \
>   --log-level DEBUG \
>   --user-agent "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.77 Safari/537.36" \
>   --allow-other \
>   --allow-non-empty \
>   --log-level INFO \
>   --uid 1000 \
>   --gid 1000 \
>   --umask 002 \
>   --cache-chunk-no-memory \
>   --cache-tmp-upload-path /mnt/PlexLibrary/tmp/ \
>   --cache-db-path /dev/shm/ \
>   --cache-dir /dev/shm/ \
>   --cache-chunk-path /dev/shm/ \
>   --cache-chunk-total-size 30G \
>   --file-perms 0777 \
>   --fast-list \
>   --dir-cache-time=160h \
>   --cache-chunk-size=100M \
>   --cache-info-age=120h \
>   --cache-tmp-wait-time 60m \
>   --buffer-size 5G \
>   --vfs-read-chunk-size 50M \
>   --vfs-read-chunk-size-limit 200M \
>   --vfs-cache-max-age 48h

Playback stops for a short time at the very beginning, and also when seeking too far forward (where it's not buffered). So am I right that the chunk sizes are too big, and playback only starts once the first chunk is complete?

I have 60GB of RAM and an NVMe SSD. I don't know if there is much difference between the two? The problem is that the main system is (sadly) on an HDD and the SSD is a second disk, so I have to redirect everything to either RAM or the SSD. Plus (if it really makes sense) I want to use as much cache as possible, since both options seem fast enough in theory?

The server has a 1Gbit/s connection, but all streaming is remote (it's a hosted server).

Can somebody please help me with the best settings here? I have to confess that I don't really know what I'm doing with these settings :smiley: I wish there were some kind of WebUI for noobs like me :smiley:


Despite good documentation, rclone settings can be quite confusing until you understand the system intimately. I'm not quite there myself, but I should be able to offer some pointers.

--cache-chunk-no-memory - I would disable this unless you have to run it because of very limited RAM. Keeping the active stream in memory definitely helps responsiveness.

Ideal chunk size really depends on your use case and bandwidth. The higher the bitrate of your streams and the higher your bandwidth, the larger the chunk size you want for optimal throughput (but that's probably a non-issue on a 1Gbit connection). Smaller chunks open and seek streams faster, but going too small can make you vulnerable to buffer-stutter unless you also raise the worker count. I would suggest lowering the chunk size a little, maybe to 50MB. If that stutters on 4K video, try adding --cache-workers 6 (default 4) to give you some more leeway. Some tweaking is probably needed to find the perfect values for you, since this isn't a one-size-fits-all setting.
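Concretely, that suggestion would mean changing the cache flags in your mount to something like this (a sketch with the starting values I'd try, not tested on your setup):

```
  --cache-chunk-size 50M \
  --cache-workers 6 \
```

Then adjust up or down from there based on whether you see stutter (chunks too small) or slow opens/seeks (chunks too large).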

I don't think these settings will do anything when you run via a VFS mount due to limitations in translating commands from the OS.

I don't think these do anything when you are using the cache backend in front, because that will do the chunking instead:
--vfs-read-chunk-size 50M
--vfs-read-chunk-size-limit 200M

This is just very silly... what do you think this actually does? This is an in-memory buffer for transfers, there to smooth out bandwidth utilization when local disks are multitasking, for example. Higher than the default (16MB) makes sense for a 1Gbit connection, but try something like 128MB, not 5GB:
--buffer-size 5G

Aside from these notes I see nothing badly wrong, and it should work with some minimal tweaking, I think. I stream 4K easily on a 150Mbit connection, and I also use the cache backend for the job (16 or 32MB chunks typically, but remember my connection is also much slower).


Thank you for taking the time, thestigma, I really appreciate it!
This has already helped me a lot.
I'll try these settings and let you know.
Also, --cache-workers is a very good hint. I didn't know about it.

The --cache-chunk-no-memory was from here, and I thought it might be worth a try..^^


Very happy to help :slight_smile:

Be aware that --cache-workers seems to operate in such a way that if you had 8, for example, as soon as you opened a file the cache would start downloading 8 chunks. From then on it would try to keep 8 ready at all times (i.e. a larger effective buffer).

This also means that you shouldn't set workers overly high, because the first segment (which is needed to start or seek a stream) is probably done a bit faster as, say, 6 consecutive downloads at the start rather than 20. TL;DR: more workers is not necessarily better.

Oh sorry, I'm not a daily Linux user, so I honestly have no idea what /dev/shm is. Is it some sort of RAM-backed virtual location? If so, it would make sense to use --cache-chunk-no-memory in that specific case (no need to cache to RAM something that is already in RAM). It's a fairly niche setup though, and arguably not worth it unless you just have heaps of unused RAM. You definitely don't need it for performance; it just seems like a neat trick (if you have lots of RAM) to avoid any disk writes for the cache. I'd probably verify that you can do what you want on a more typical setup before experimenting with this.

But for the record, Animosity knows his stuff and has been here a lot longer than me. I'm not saying it is a wrong setup, just probably not what you really need.

Ah, okay, I get it.
Oh boy, this is really kinda complex :smiley:
Can't wait for some automatic setup based on a few key inputs (RAM, connection, CPU, a little slider for performance, and done..) :smiley:

Yes, since I have 60GB of RAM (only 15GB used), I thought it would be a good idea to have everything in RAM while I only had an HDD. But I upgraded to an SSD, and now I'm not so sure the RAM-only setting is really an improvement.
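(To answer the question above: yes, /dev/shm is RAM-backed. It is mounted as tmpfs on pretty much every distro, which you can confirm with standard coreutils:)

```shell
# Print the filesystem type of /dev/shm - "tmpfs" means it lives in RAM
stat -f -c %T /dev/shm
# typically prints: tmpfs
```

So anything written there counts against RAM, not the HDD or SSD.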

Oh, plus I don't exactly have a cache mount, or do I?! So while there are settings for "cache" etc. in there, I don't mount it via "rclone cache-mount" or anything. So isn't it a VFS mount? There was a discussion back then about what's best, cache vs. VFS, but I thought that was a different thing from a cache mount. Oh boy, I'm confused :smiley:

Maybe it's best to leave everything at the defaults? :rofl:

It won't matter for performance at all - unless you plan to stream files with a higher bitrate than an HDD (or SSD) can actually keep up with - which don't really exist :slight_smile:

You do avoid any drive wear though, if you care about that, but your cache also becomes non-persistent across system restarts. There are some good benefits to having a fairly large persistent cache, because you don't have to re-download recent files. That's most useful on limited bandwidth though, obviously.

Naah... I have unlimited bandwidth, so I don't care..^^

(as a general rule)
It's always best to leave defaults unless you are sure you need to change them. Defaults are rarely bad for you; mis-set parameters definitely can be.

Yeah, I had a feeling that could be the case.. :smiley:
I guess I'll try what you said and compare it to the plain defaults and Animosity's settings for high RAM.

Thank you for taking the time!

I just thought that my connection plus the server hardware, plus the number of simultaneous connections, are higher than for most people. But of course, if settings work against each other, that can't be good..^^

Sorry, I completely missed this.
If you don't use the cache backend (not the same as the VFS cache used by the mount), then all flags starting with "cache" will do nothing. If you aren't sure, I'd need to see your config file (if you share it, make sure to [REDACT] all sensitive lines - basically all lines that look like passwords and randomly generated keys). But if you don't know whether you use a cache, you probably don't - because you would have to have specifically chosen to set one up and wire it in...

In that case I might need to revise my suggestion, because I assumed you were using the cache backend given all the cache arguments you had in there.

The VFS uses a cache because it HAS to have a write cache to support all the operations the OS expects to be able to do on a file. It only does write caching. It can be disabled, but then you get some very bad limitations on editing files.

The cache backend is a dedicated cache meant to speed things up by keeping recent data locally available. It does read caching by default, and write caching can also be enabled (though I find that too buggy to actually use). The cache backend is a completely optional module.

I use both personally. The VFS handles write caching; the cache backend handles read caching, and both are set to large sizes on a dedicated HDD to really cut down on how much data I actually need to re-request. That's just what I need for my use case though. If I had 1Gbit and primarily streamed video, that would be quite different.
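For reference, the cache backend is set up as its own remote in rclone.conf, wrapping the existing one. A sketch (the "SecretDrive" name is taken from your mount command; "SecretCache" and the tuning values are made up for illustration):

```
[SecretCache]
type = cache
remote = SecretDrive:
chunk_size = 32M
chunk_total_size = 30G
info_age = 2d
```

You would then mount SecretCache: instead of SecretDrive:, and the "--cache-*" flags would actually apply.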

Let me know if I need to elaborate on anything here.

Yeah, no, it's not a cache mount. So I'll remove those particular lines.

So does this look right to you? :smiley: Or still silly?^^

ExecStart=/usr/bin/rclone mount SecretDrive: /mnt/media \
  --config /home/snikay/.config/rclone/rclone.conf \
  --log-file=/home/snikay/logs/rclone-mount.log \
  --log-level DEBUG \
  --user-agent "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.77 Safari/537.36" \
  --allow-other \
  --allow-non-empty \
  --uid 1000 \
  --gid 1000 \
  --umask 002 \
  --file-perms 0777 \
  --buffer-size 128M \
  --vfs-read-chunk-size 50M \
  --vfs-read-chunk-size-limit 200M \
  --vfs-cache-max-age 48h

Or what could be improved? .. It looks.. like.. not really tuned for my system? Kinda basic? ^^

That seems pretty good to me. Yeah, it's not a huge list of flags or anything, but when you only use rclone + mount there aren't all that many more flags relevant to performance.

The only thing is that I'd highly recommend
--vfs-cache-mode writes
Without this you are using a cacheless mount. That is fine for pure read or pure write operations, but it is really bad at any sort of read/write opening, such as editing a document: frustrating for general usage, with many limitations. Also, a cacheless mount doesn't have much error resistance on uploads. If something fails badly during a "move" operation, it might just lose the file. Perhaps not common on fiber - but at some point a cable gets cut, power is lost or a server burps - so if you highly value reliability it's a no-brainer.

You can probably boost your upload throughput quite a bit with larger upload chunks (they are typically quite small by default), but I'd need to know what type of cloud service you use. Also, I don't know if you care enough at 1Gbit speeds =P

I use GDrive unlimited.
So maybe something like this?
--vfs-read-chunk-size 20M
while leaving --vfs-read-chunk-size-limit empty/default? So that the chunks keep doubling (without limit) while watching? Is that right?
Is it a good idea to leave it at the default? Unlimited doubling sounds strange to me..^^
Or something like 500M?

maybe like this?

ExecStart=/usr/bin/rclone mount SecretDrive: /mnt/media \
  --config /home/snikay/.config/rclone/rclone.conf \
  --log-file=/home/snikay/logs/rclone-mount.log \
  --log-level DEBUG \
  --user-agent "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.77 Safari/537.36" \
  --allow-other \
  --allow-non-empty \
  --uid 1000 \
  --gid 1000 \
  --umask 002 \
  --file-perms 0777 \
  --fast-list \
  --buffer-size 128M \
  --vfs-read-chunk-size 20M \
  --vfs-read-chunk-size-limit 500M \
  --vfs-cache-max-age 2h \
  --vfs-cache-max-size 2G \
  --vfs-cache-mode writes

Oh.. vfs-read-chunk-size is 128M by default.. hmm..
I guess I'll leave both of those at the defaults too..? ^^

Yes, that is entirely reasonable. It will enable quick access, but will also quickly ramp up to high throughput efficiency once playback is underway.

I'm not sure I'd leave it at unlimited exactly... it should be fine, but I'd probably use something like 256MB or 512MB myself. Above that there is little efficiency to be gained, and should you ever have a transfer error, it would suck to have to throw out an 8GB chunk and re-download it all :slight_smile:

As for optimizing uploads on Gdrive, use this:
upload_cutoff = 256M
chunk_size = 256M
(or use 512MB or even higher if you have the RAM for it)

This should very nearly double your effective upload throughput. The why of it has to do with how TCP fundamentally operates: it essentially has to "ramp up from 0 speed" at the start of each transferred chunk. Larger segments = less bandwidth lost on the "ramping up" periods overall. The default is a meager 8MB and not at all ideal for a 1Gbit connection.
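Those two lines go in rclone.conf under the Drive remote itself, not on the mount command line - roughly like this (the remote name is taken from your mount command; your existing lines stay as they are):

```
[SecretDrive]
type = drive
# ...existing client_id / token lines unchanged...
upload_cutoff = 256M
chunk_size = 256M
```

Note that each parallel upload buffers up to chunk_size in RAM, which is why the RAM caveat above matters.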

I have to be honest and say I don't know for sure whether VFS chunks work like the cache backend's chunks. The cache backend needs to finish the first chunk before playback starts. I don't know if the VFS has that requirement; it may be able to start before the first chunk is even done. Not sure... I have not experimented much with this, since on Windows I could never get the VFS cache alone to provide smooth playback, so I use the cache backend for that. (This may be a Windows-specific quirk, as Linux users have reported smooth playback via VFS only.)

If in doubt, try testing both a very large initial size and a fairly small one like 16MB. Do they open at the same speed? If so, it probably isn't restricted in this way, and you can feel free to set pretty large chunks. Large chunks may not fetch partial downloads quite as efficiently as smaller ones, though.
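A rough way to compare (just a sketch - the file path is a placeholder for a real file on your mount, and you'd remount with a different --vfs-read-chunk-size between runs):

```shell
# Time how long the first few MB of a file take to arrive from the mount.
# Repeat after remounting with a different --vfs-read-chunk-size and compare.
time dd if=/mnt/media/path/to/movie.mkv of=/dev/null bs=1M count=8
```

If the small and large settings start reading in about the same time, the chunk size isn't gating startup.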

On Linux I haven't really had a problem with VFS. Or maybe it's the fast connection.
But the whole "read from cache, write with VFS" setup seems very advanced to me.
And then getting all of that to work with Plex.. :open_mouth:

So I'll try a small chunk size and a bigger limit for the doubling, since a 40GB 4K movie is still 80 chunks at 500M.
I guess I'm slowly understanding the whole thing better :slight_smile:
But where is the VFS cache stored? Automatically in RAM?
Or does "--cache-dir" apply to the VFS too?

Again, thank you so much for taking the time!