How should --buffer-size be set relative to --drive-chunk-size?

What is the problem you are having with rclone?

I am using rclone mount to sync media files and it works great, but I am looking for further optimisations.

My question is around how --buffer-size should be configured relative to --drive-chunk-size:

  • If buffer-size is less than drive-chunk-size, will the buffer even be used (as each drive chunk cannot fit into the buffer)?
  • Is it beneficial to make the buffer-size at least twice as large as the drive-chunk-size as I have done (see below) so that rclone 'reads ahead' more?
  • Does --vfs-read-chunk-size come into play at all or are its effects orthogonal to --buffer-size and --drive-chunk-size?

What is your rclone version (output from rclone version)

$ rclone version
rclone v1.49.3
- os/arch: linux/amd64
- go version: go1.12.9

Which OS you are using and how many bits (eg Windows 7, 64 bit)

$ uname -a
Linux core1 4.19.68-coreos #1 SMP Wed Sep 4 02:59:18 -00 2019 x86_64 Intel(R) Core(TM) i5-6600K CPU @ 3.50GHz GenuineIntel GNU/Linux

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

/opt/bin/rclone mount gcrypt: /mnt/google-drive \
    --config=/root/.config/rclone/rclone.conf \
    --vfs-read-chunk-size=128M \
    --vfs-read-chunk-size-limit=off \
    --drive-acknowledge-abuse=true \
    --drive-chunk-size=128M \
    --buffer-size=256M \
    --use-mmap \
    --attr-timeout=72h \
    --dir-cache-time=72h \
    --gid=1000 \
    --uid=1000 \
    --modify-window=1s \
    --stats=0 \
    --log-level=INFO \
    --allow-other \
    --fast-list

And my config

[gdrive]
type = drive
client_secret = [REDACTED]
scope = drive
token = [REDACTED]
client_id = [REDACTED]
root_folder_id = [REDACTED]

[gcrypt]
type = crypt
remote = gdrive:
password = [REDACTED]
password2 = [REDACTED]

--fast-list does nothing on a mount and can be removed.

--drive-chunk-size is only used for uploading files; it has no effect when downloading or streaming them.
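Since it only affects uploads, a place where the flag would actually matter is a copy/move command rather than a read-mostly mount. A sketch (the local path and remote name here are illustrative, reusing the gcrypt: remote from the config above):

```shell
# --drive-chunk-size only affects uploads, so pair it with copy/move
# commands; it does nothing for reads through a mount.
rclone copy /local/media gcrypt:media \
    --drive-chunk-size=256M \
    --transfers=4
```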

--vfs-read-chunk-size controls the HTTP range requests rclone makes for a file. When you read a file sequentially, it makes a first request for 128M of the file and then doubles the size of each subsequent range request (up to --vfs-read-chunk-size-limit), reducing the number of API calls needed to read the whole file.
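The doubling behaviour described above can be sketched in shell arithmetic. This is an illustration of the pattern, not rclone's actual implementation; the 1 GiB file size is an example:

```shell
# Sketch of the range requests rclone would issue while sequentially
# reading a 1 GiB file with --vfs-read-chunk-size=128M and
# --vfs-read-chunk-size-limit=off: the request size doubles each time.
chunk=$((128 * 1024 * 1024))    # first request: 128M
total=$((1024 * 1024 * 1024))   # example: a 1 GiB file
offset=0
req=0
while [ "$offset" -lt "$total" ]; do
  end=$((offset + chunk - 1))
  [ "$end" -ge "$total" ] && end=$((total - 1))
  req=$((req + 1))
  echo "request $req: Range: bytes=$offset-$end"
  offset=$((end + 1))
  chunk=$((chunk * 2))          # 128M, 256M, 512M, ...
done
```

With the doubling, reading the whole 1 GiB file takes only four range requests (128M + 256M + 512M + the remainder) instead of eight fixed 128M requests.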

--buffer-size only comes into effect while a file is being read sequentially and is still open; once the file is closed, the buffer is dropped. Rclone tries to keep the buffer filled by reading ahead.

My goal is less config, using the defaults where applicable. I have an obnoxious amount of free memory on my machine, so I run with a 128M buffer size.
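One way to reason about how much memory a given --buffer-size can cost is to multiply it by the number of files likely to be read at once, since the read-ahead buffer is held per open file. A rough sketch (the stream count is an illustrative assumption, not an rclone figure):

```shell
# Rough worst-case estimate of mount memory from read buffers; an
# illustrative sketch, not exact rclone accounting. Each file being
# read sequentially can hold up to --buffer-size in memory at once.
buffer_mb=128     # --buffer-size=128M
streams=4         # e.g. four simultaneous sequential reads
total_mb=$((buffer_mb * streams))
echo "~${total_mb}M of buffer memory"
```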

Through a lot of testing, I have seen very little impact from having a small or a big buffer. With a big buffer you potentially waste some download/API quota as rclone tries to fill it, but if the file closes, it's negligible. I'd argue that with a lot of streams a small buffer is probably better, but it depends heavily on the server and clients on the Plex side, and far too much goes into that.

I do all my uploads overnight with the default buffer size and can saturate my gigabit connection each time with a few transfers.


That's great, thank you for the quick reply. I went with the following (removing defaults):

/opt/bin/rclone mount gcrypt: /mnt/google-drive \
    --config=/root/.config/rclone/rclone.conf \
    --drive-acknowledge-abuse \
    --drive-chunk-size=256M \
    --buffer-size=64M \
    --use-mmap \
    --attr-timeout=72h \
    --dir-cache-time=72h \
    --gid=1000 \
    --uid=1000 \
    --modify-window=1s \
    --log-level=INFO \
    --allow-other
