Rclone cryptcheck block size (slow read performance)

Hello,

I'm wondering what read block size rclone uses when reading from a local mount for rclone cryptcheck?

I'm running cryptcheck against a 12 TB local data set (rclone cryptcheck /local remote_crypt:/) and noticed that my ZFS 4k read counters are incrementing like crazy.

The recordsize on the ZFS dataset is 1 MB.

The max speed I'm able to get from disk is 200 MB/s (even though the pool is capable of 1 GB/s sequential reads).

rclone is using about 200% CPU on a 6-core (12 logical) machine, so there's plenty of CPU headroom.

My guess is the reads are sub-optimal.

Thoughts?

For cryptcheck, rclone reads, encrypts, then hashes the files. However, these operations are single-threaded (one CPU core per file).

Actually, until very recently rclone only checked one file at a time, full stop... If you try the latest beta, rclone will check --checkers files at once, which should work your system a bit harder.
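
For example, something like this should run 8 checks in parallel (the paths are just the ones from your original command, and 8 is an illustrative value; it also happens to be the default):

rclone cryptcheck /local remote_crypt:/ --checkers=8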

Thanks for that beta.

I tested the latest beta a few times with 11 files totalling 57 GB.

The best performance was with --checkers=1.

On average:
--checkers=1 took 6 minutes (roughly 160 MB/s over the 57 GB)
--checkers=4 took 16 minutes (roughly 60 MB/s)
--checkers=8 took 13 minutes (roughly 73 MB/s)

Do you have HDDs? Seeking could well be the bottleneck here: with several checkers reading different files at once, the drive heads spend their time seeking between files rather than reading sequentially, which would explain why --checkers=1 came out fastest. I think SSDs would saturate your CPU instead.

Try increasing --buffer-size so each reader does larger sequential reads; that may help.
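
For example (256M is just an illustrative value; the buffer is allocated per file being read, so budget memory accordingly when raising --checkers):

rclone cryptcheck /local remote_crypt:/ --checkers=4 --buffer-size=256M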

I set --buffer-size=1G; no real change.

The OS is Windows Server, the free 2019 Hyper-V edition: command line only, no GUI.
The filesystem is ReFS, Windows's version of ZFS: three HDDs in a software RAID, with built-in checksums for data and metadata.
Great file system.

I will do some more testing on SSD.

Fair enough.

I suspect SSD will be quick.

I'm used to Linux file systems and I believe Windows performs a bit differently.

You were right: the newer version of rclone was much faster. --checkers did seem to queue up additional files (whether or not they were actually being decrypted in parallel). I was able to get 350 MB/s or so without any filesystem tuning. I'm assuming more optimization is possible, but it was enough for me.

Thanks for the response.

