Rclone VFS cache mode full: very high bandwidth usage and server crash

rclone v1.53.1 with --vfs-cache-mode full, read-only, with the following settings:

ExecStart=/usr/bin/rclone mount \
--config=/home/xtreamcodes/.config/rclone/rclone.conf \
--allow-other \
--vfs-read-chunk-size=3M \
--vfs-read-chunk-size-limit=0 \
--vfs-read-ahead=3M \
--buffer-size=0 \
--vfs-cache-max-age=168h \
--vfs-cache-max-size=1T \
--vfs-cache-mode=full \
--cache-dir=/mnt/cache \
--no-modtime \
--no-checksum \
--umask=002 \
--log-level=DEBUG \
--log-file=/opt/rclone.log \
--async-read=false \
--rc \
--rc-addr=localhost:5572 \
--bwlimit-file=1M \
--read-only

My reasoning for the flags:

  • 0 buffer size, so that every read is buffered to disk and then served to the user from disk, without wasting bandwidth filling a memory buffer

  • 3M chunks, for faster file opens (3M should be enough for a 7-second buffer from a single request)

  • 3M read-ahead, so there is always a 7-second buffer on disk

The way I see rclone's workflow when a file is requested:

File is requested > 3M chunk downloaded and served to the user immediately > followed by another 3M chunk downloaded and buffered to disk, so the next read comes from disk.

So for each open file there will be two 3M chunk requests at once.
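As a back-of-envelope check of the numbers above (a sketch; it assumes 3M means 3 MiB and that the stream bitrate is whatever makes one 3M chunk last about 7 seconds — neither figure comes from rclone itself):

```shell
CHUNK=$((3 * 1024 * 1024))        # --vfs-read-chunk-size=3M, in bytes
READ_AHEAD=$((3 * 1024 * 1024))   # --vfs-read-ahead=3M
OPEN_FILES=6

# Implied stream bitrate if one 3M chunk covers ~7 s of playback
echo "implied bitrate: $((CHUNK * 8 / 7 / 1000)) kbit/s"   # 3595 kbit/s, ~3.6 Mbit/s

# Worst-case instantaneous burst: every open file fetches its first
# chunk plus its read-ahead at the same moment
echo "burst for $OPEN_FILES opens: $((OPEN_FILES * (CHUNK + READ_AHEAD) / 1024 / 1024)) MiB"   # 36 MiB
```

So six simultaneous opens should cost roughly a 36 MiB up-front burst, not sustained full-line-rate traffic.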

However, with just 6 files open it uses 100% of my download bandwidth:

[screenshot: bandwidth graph]

Even with --bwlimit-file set to 1M, how is that possible?
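Since the unit enables --rc, rclone's own view of the traffic can be cross-checked over the remote-control API (a sketch; `rclone rc` talks to localhost:5572 by default, which matches the --rc-addr above):

```shell
# Transfer statistics as JSON, including "bytes", "speed" and "transfers"
rclone rc core/stats

# The global bandwidth limit can also be adjusted at runtime,
# without remounting
rclone rc core/bwlimit rate=1M
```

Note that core/bwlimit adjusts the global --bwlimit, which is separate from the per-file --bwlimit-file.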

I need help finding the optimal settings for the new cache mode: minimal read-ahead (bandwidth spent downloading things in advance), with only what is strictly required to work.

With the default settings it behaves much better:

[screenshot: bandwidth graph]

^ this is with the defaults for --buffer-size, --vfs-read-ahead, --vfs-read-chunk-size-limit and --vfs-read-chunk-size (in v1.53 those default to 16M, 0, off and 128M respectively)

What is the explanation behind this?

Can you try your settings, commenting them out one at a time, to see which one is causing the problem?

My guess is --buffer-size=0, so can you try that one first?

How can I delete the VFS cache? I tried rm -rf /mnt/cache* and then restarted the service with the new settings, and now I can't open any file at all.

OK, so I discovered an issue with my CDN: if rclone failed to serve a file, that failure would be cached, and people wouldn't be able to open the file again until the cache expired!

So I can confirm that --buffer-size=0 seems to be causing this issue. Setting it to 3M seems to fix it and even uses less bandwidth than the default value! But more testing would be needed to be 100% sure of that.

I think that should work... I would stop the service first, then delete the cache directory, then start it again.
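A minimal sketch of that sequence, assuming the mount runs as a systemd unit named rclone.service (the actual unit name isn't given in this thread):

```shell
# Stop the mount first so nothing is holding cached files open
sudo systemctl stop rclone.service

# Remove only the VFS cache: rclone keeps downloaded data under vfs/
# and its metadata under vfsMeta/ inside --cache-dir
rm -rf /mnt/cache/vfs /mnt/cache/vfsMeta

# Start again with the new flags; rclone recreates the cache layout
sudo systemctl start rclone.service
```

Deleting the cache subdirectories rather than /mnt/cache itself avoids clobbering anything else that might live in that directory.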

Ah!

Great - that is something for me to investigate.

OK, I'll investigate --buffer-size 0 and see if I can spot anything.

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.