I have a GlusterFS storage cluster mounted at /mnt/glusterfs, and I wanted to use rclone mount with the VFS cache so I can cache the cluster's contents locally, saving bandwidth on the cluster.
I noticed a huge performance drop in the time it takes to open files.
My flags:
--allow-other \
--vfs-cache-max-age=168h \
--vfs-cache-max-size=3T \
--vfs-cache-mode=full \
--cache-dir=/mnt/cache \
--vfs-read-chunk-size=256K \
--vfs-read-chunk-size-limit=8M \
--buffer-size=256K \
--no-modtime \
--no-checksum \
--dir-cache-time=72h \
--timeout=10m \
--umask=002 \
--log-level=DEBUG \
--log-file=/opt/rclone.log \
--async-read=false \
--rc \
--rc-addr=localhost:5572 \
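For context, those flags are passed to a mount command along these lines (the mount point /mnt/rclone is just my example here; the remote and flags are as above):

```shell
# Sketch of the full mount command the flags above belong to.
# glusterfs: is the local-type remote defined below; /mnt/rclone is a
# hypothetical mount point for this example.
rclone mount glusterfs:/mnt/glusterfs /mnt/rclone \
  --allow-other \
  --vfs-cache-mode=full \
  --cache-dir=/mnt/cache \
  # ...remaining flags as listed above...
```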
I also tested with a starting chunk size of 1 MB and a buffer size of 1 MB, and found no noticeable difference in time to open files.
Compared to reading the same amount of data directly from the cluster:
Any ideas on how to fix this? Could we have a setting that creates all the sparse files in advance, and maybe never clears them from the cache?
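In the meantime, since the rc server is enabled above, I tried warming the directory cache over it; a sketch (I believe this only primes directory listings, not file data, so it doesn't pre-create the sparse cache files I'm asking about):

```shell
# Warm the VFS directory cache via the rc API enabled with --rc above.
# recursive=true walks the whole tree; this fills the dir cache only,
# not the per-file sparse cache files.
rclone rc vfs/refresh recursive=true --rc-addr=localhost:5572
```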
My rclone remote is just:
[glusterfs]
type = local