--buffer-size defaults?

Just wondering what the default is for --buffer-size, if unspecified.
Can't find it in the docs.

Although the docs do indicate that --buffer-size applies per file, they don't say anything about the total memory rclone can allocate for buffering overall. Is that right? That seems like a minefield if you don't know how many files you intend to open and have limited memory.

https://rclone.org/flags/

--buffer-size SizeSuffix In memory buffer size when reading files for each --transfer. (default 16M)

Heh. Good work, me.

Still curious about the files * buffer-size issue. I'd set it to 1GB, but am not sure how that would play with my 4GB RAM when opening 3, 4, or 5 files...

As per the docs, it applies for each --transfer:

total RAM used = --buffer-size multiplied by the number of simultaneous --transfers.
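
To put rough numbers on that (assuming the default --transfers 4, purely as an illustration):

--transfers 4 --buffer-size 1G

could use up to 4 x 1GB = 4GB of RAM for buffering alone, which would already be the whole of a 4GB machine.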

I have never found a use for --buffer-size, as I am able to saturate my FIOS gigabit connection using the default setting.

I'm referring to this:

This flag is an upper limit for the used memory per file descriptor. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used. The maximum memory used by rclone for buffering can be up to --buffer-size * open files.

That implies that it will read ahead X megabytes per file, and then (I guess) move the file to the vfs cache once it's fully downloaded/opened?

That behavior doesn't mirror --vfs-cache-max-size, though, which just defines the maximum overall size of the cache. I'm also still unclear on how rclone handles situations where buffer-size * number of open files exceeds the available RAM.

Check out here:

https://rclone.org/commands/rclone_mount/#file-buffering

It explains how the buffer uses memory.

The section under that explains file caching which is different.


It does not answer my questions about the flow of files from buffer to vfs cache, or memory management of the buffer when constrained by onboard memory...

There is no flow. They are separate things.

If you use more memory than you have, you'd generate an out-of-memory error and rclone would crash due to lack of memory.

The VFS layer both buffers files to disk and caches them there (until purged). --buffer-size also buffers files to RAM (apparently).

If I set --buffer-size 1G and then open a 500 MB file, does this fill the RAM buffer and then copy the entire file to the vfs cache directory?

I guess the best way to re-frame this is:
I'm just not seeing the advantage/use case for setting --buffer-size 500M versus --vfs-read-chunk-size 500M --vfs-read-chunk-size-limit 0M
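
To make the comparison concrete, these are the two mounts I'm weighing against each other (remote name and mount point are just placeholders):

rclone mount remote: /mnt/remote --buffer-size 500M

versus

rclone mount remote: /mnt/remote --vfs-read-chunk-size 500M --vfs-read-chunk-size-limit 0M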

If you use more memory than you have, you'd generate an out-of-memory error and rclone would crash due to lack of memory.

Hopefully not? Worst case scenario, I would hope the kernel just pages out the buffer. But I assume rclone will be more graceful than you describe.

The buffer is used to make reading from the network and writing to the output asynchronous. This means that if your network is faster than your disk you'll fill up that 500 MB buffer and if it isn't then it will remain mostly empty. A small buffer is very useful because it means the input task can run independently of the output task. Making the buffer really large probably isn't helpful unless you have big glitches in network connectivity.
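
To put made-up numbers on that: if the remote can deliver 50 MB/s but the player or disk only consumes 30 MB/s, the buffer fills at roughly 20 MB/s; once a 500 MB buffer is full, the reader can ride out about 500 / 30 ≈ 16 seconds of total network stall before it notices anything.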

The --vfs-read-chunk-size and --vfs-read-chunk-size-limit flags are to do with the sizes of chunks rclone requests from the provider. These don't get buffered in RAM.

You are comparing an apple to an orange though.

When I open a file, the vfs-read-chunk-size relates to the range requests rclone sends to the cloud provider to grab parts of a big file.

If I request a 1GB file and use a 512M vfs-read-chunk-size to start, it makes 2 API calls to get the file. If I use 128M, it takes about 8. So that helps with reducing API hits. There is some overhead to using a large size, so you may waste a little bandwidth, but it's negligible.
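
For anyone curious what that looks like on the wire (hypothetical byte offsets, and assuming the chunk size stays fixed rather than growing towards --vfs-read-chunk-size-limit): with a 128M chunk size, rclone issues range requests roughly like

Range: bytes=0-134217727
Range: bytes=134217728-268435455
...

and so on, about 8 requests to cover a 1GB file.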

Now, let's say I open that same 1GB file and have a 256M buffer size. If the file continues to be read sequentially, rclone keeps that 256M in-memory buffer filled, so if you have a light network hiccup, reading of the file stays consistent because you have that memory buffer to fall back on.

So in the Plex world, if you direct play a file, it is usually opened once and read sequentially, so a large buffer may provide some help if there is a bit of latency in reading from your provider. The downside is that when the file is closed, the buffer is dumped, so depending on how the file is being used it could be bad to have it big. If a file is constantly opened and closed, a large buffer has a negative effect since rclone keeps trying to read ahead, but the player keeps closing the file.

If we move to file caching, that's the strawberry, as it is different from the other two items we talked about. If you have a requirement to keep files on disk or are writing files, that is where the file caching layer comes into play. The different options for that are described in the file caching section.
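
A minimal sketch of what turning the file caching layer on might look like (the mount point, cache directory, and sizes here are just examples, not a recommendation):

rclone mount remote: /mnt/remote --vfs-cache-mode writes --vfs-cache-max-size 10G --cache-dir /var/cache/rclone

--vfs-cache-mode writes keeps files that are being written on local disk until they have been uploaded, and --vfs-cache-max-size caps how much the cache directory is allowed to hold.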

Finally, the last option is the cache backend, which is different from the vfs cache layer (very confusing). It allows for chunked reading and keeps chunks on local storage based on the parameters described. It also offers offline uploading.
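
For completeness, a rough sketch of the cache backend in use (gcache: here is a hypothetical remote of type cache wrapping the real remote):

rclone mount gcache: /mnt/media --cache-chunk-size 10M --cache-chunk-total-size 10G --cache-tmp-upload-path /tmp/rclone-uploads

--cache-chunk-size and --cache-chunk-total-size control the chunked reading and how much is kept on local storage, and --cache-tmp-upload-path is what enables the offline uploading.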

My use case is Plex, so I just use the standard VFS with no file caching and a larger buffer size, since I have gigabit FIOS and no bandwidth caps, so a little waste means nothing to me. I also have a 32GB server, so there's plenty of extra memory. Unfortunately, if you set the memory too high and the server doesn't have it, rclone will crash like any other program or get killed off by the OS. There is no graceful way to handle out-of-memory conditions, as those are mainly configuration issues.
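
In practice that ends up as something along these lines (remote name, mount point, and the exact buffer size are illustrative, not a recommendation):

rclone mount gdrive: /mnt/gdrive --buffer-size 512M --allow-other

i.e. the default --vfs-cache-mode off (no file caching), default chunked reading, and just a buffer sized to what the box can comfortably spare.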

