32K read chunks on local storage with cache and rclone move

Hi,

I started using the cache feature, including the write cache, with a GDRIVE backend. When I copy something to the drive it is first written to the cache and later moved to the backend. I am also using crypt.

What I am currently seeing is that after the copy finishes and the "rclone move" starts, rclone issues a huge number of 32K read requests on the cache drive. This results in a lot of IOPS, which slows the system down far more than I expected.

Is there any way to convince rclone to use larger read chunks to lower the IOPS on the local storage (write cache)?

rclone -v --attr-timeout=60s --cache-chunk-no-memory --user-agent=Something --dir-cache-time=50m --cache-workers=12 --cache-rps=900 --cache-db-path=/data/gdc --cache-writes --rc  --cache-tmp-upload-path=/data/gdwc --cache-info-age=2h --buffer-size=0 mount CACHEDRIVE: /data/external --allow-other
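For what it's worth, this is the variant I was planning to try next. As far as I know --cache-chunk-size, --buffer-size and --drive-chunk-size are existing rclone flags, but the values below are guesses (and I drop --cache-chunk-no-memory and raise --buffer-size from 0); I have not confirmed they change the 32K read pattern:

rclone -v --attr-timeout=60s --user-agent=Something --dir-cache-time=50m --cache-workers=12 --cache-rps=900 --cache-db-path=/data/gdc --cache-writes --rc --cache-tmp-upload-path=/data/gdwc --cache-info-age=2h --cache-chunk-size=32M --buffer-size=32M --drive-chunk-size=64M mount CACHEDRIVE: /data/external --allow-other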

Also, when accessing (reading) files on GDRIVE I can see the cache being used; however, the chunks read from the cache are also 32K, which in my use case (backing up a photo library) does not make much sense.

Both the cache and the write cache are on local ZFS storage.
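If it is relevant, the chunk size the cache remote was created with can be checked like this (CACHEDRIVE is just the remote name from my mount command; rclone config show prints the remote's settings, including chunk_size):

rclone config show CACHEDRIVE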

Regards

Can you share:

rclone version

Debug log with -vv
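For example, something along these lines (the log path is just a placeholder, and keep the rest of your usual flags):

rclone version

rclone mount CACHEDRIVE: /data/external --allow-other -vv --log-file=/tmp/rclone-debug.log ...your other flags...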
