Hasher Bolt Not Written After File Replacement + No Strict Checksum Mode

What is the problem you are having with rclone?

I want every file on my remote to have a verified checksum matching the local source.
Running rclone sync --checksum does not achieve this reliably due to two problems:

Problem 1: When rclone sync replaces an existing file, the hasher bolt is not updated.
The log line where it fails is: "Dst hash empty - aborting Src hash check". New uploads write
to the bolt correctly; replacements do not. On the next run, the stale bolt entry masks the
missing checksum, and the file passes hash validation undetected.
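One way to make the stale entry visible (a diagnostic sketch of my own, not from the issue) is to dump the hasher's cache with rclone backend dump HasherRemote: before and after a replacement and diff the two dumps. The heredocs below are illustrative stand-ins for those two dumps, since the real dump format isn't reproduced here:

```shell
# Sketch: with the bug, the cached entry for a replaced file is identical
# before and after the sync. The heredocs simulate two runs of
# "rclone backend dump HasherRemote:" (illustrative lines, NOT the real
# dump format).
cat > before.txt <<'EOF'
replaced-file.bin: cached blake3 entry A
EOF
cat > after.txt <<'EOF'
replaced-file.bin: cached blake3 entry A
EOF
# An empty diff after a replacement means the bolt was never rewritten.
if diff -q before.txt after.txt >/dev/null; then
  echo "stale: cache entry unchanged after replacement"
fi
```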

Problem 2: There is no strict checksum mode. rclone sync --checksum falls back to size
and modtime when a hash is unavailable, so files with no remote checksum pass undetected.
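As a stopgap, files with no remote hash can at least be detected by listing hashes with rclone lsf (a sketch; the listing below is a made-up sample of its hash;path output):

```shell
# "rclone lsf HasherRemote: -R --files-only --format hp --hash blake3"
# prints one "hash;path" line per file, with an empty hash field when the
# remote has none. The heredoc is a made-up sample of that output.
cat > listing.txt <<'EOF'
d2a84f4b8b650937ec8f73cd8be2c74a;has-hash.txt
;missing-hash.bin
EOF
# Lines starting with ";" are files with no remote hash.
grep '^;' listing.txt | cut -d';' -f2-
```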

The hasher backend was added on top of the crypt layer specifically to capture and cache
checksums locally during transfers, so that rclone has access to remote checksums for
comparison during sync. Without it, the crypt layer does not expose the underlying
remote checksums to rclone sync.

Proposed Fixes:

  1. Add new flag(s) to rclone sync. Example: rclone sync {source} {remote} --checksum-strict --upload-missing-hash

For providers that store checksums natively on upload, this would make the hasher backend
unnecessary.

  2. Alternatively, fix the hasher backend so the bolt is always written after a successful
file replacement, removing the condition that causes "Dst hash empty - aborting Src hash
check" to abort the bolt write.

Happy to discuss here on the forums. More details are in the GitHub issue:

Run the command 'rclone version' and share the full output of the command.

1.73.1 (Windows AMD x64)

rclone config contents with secrets removed.

With the proposed rclone sync flags, the [HasherRemote] can be removed.

[BaseRemote]
type = {type}
email = {redacted}
password = {redacted}
api_key = {redacted}

[CryptRemote]
type = crypt
remote = BaseRemote:
filename_encryption = off
directory_name_encryption = false
password = {redacted}

[HasherRemote]
type = hasher
remote = CryptRemote:
hashes = blake3
max_age = off

I don't think the hasher backend is needed at all in that scenario; the crypt backend has authentication built in.

Hashes are not stored for crypt. However, data integrity is protected by an extremely strong crypto authenticator.

That is 16 bytes of authenticator (Poly1305) for every 64 KiB of data.

Note, however, that this does not help at all if data is corrupted on upload and the corruption is only discovered later on download, with no backup available (a locally stored hash does not help there either).
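For scale, the authenticator overhead is tiny (plain arithmetic, not an rclone command):

```shell
# 16 bytes of Poly1305 MAC per 64 KiB chunk of encrypted data.
FILE_SIZE=$((1024 * 1024 * 1024))   # example: a 1 GiB file
CHUNK=$((64 * 1024))                # crypt's 64 KiB chunk size
CHUNKS=$(( (FILE_SIZE + CHUNK - 1) / CHUNK ))
MAC_BYTES=$(( CHUNKS * 16 ))
echo "chunks=$CHUNKS mac_overhead_bytes=$MAC_BYTES"   # 16384 chunks, 256 KiB of MACs
awk -v c="$CHUNK" 'BEGIN { printf "overhead=%.4f%%\n", 16 / c * 100 }'
```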

Hi @minesheep

Thank you for taking a look at this scenario. That’s much appreciated.

Currently, the reason for using Hasher is to capture hashes during upload so that I can
retrieve them in the same run via rclone hashsum, write them to a file, and compare
them against the locally computed source hashes to verify the remote matches the
source exactly without downloading any files.

Hasher is specifically needed here because files are encrypted via the crypt backend.
The hash stored on the remote is the hash of the encrypted bytes, which will never
match the locally computed hash of the plaintext source file. The hasher captures the
plaintext hash during upload before encryption occurs, which is the only way to get a
hash comparable to the source without downloading and decrypting every file to recompute it.
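The loop above can be sketched end to end like this (remote name and hash type follow the config earlier in the thread; sha256sum over throwaway files stands in for the two blake3 listings so the sketch runs without rclone):

```shell
# Verify-without-download sketch. In the real workflow the two listings
# would come from:
#   rclone hashsum blake3 /local/source --output-file local.sums
#   rclone hashsum blake3 HasherRemote: --output-file remote.sums
# Here both sides are simulated over throwaway files with sha256sum.
work=$(mktemp -d)
mkdir "$work/src"
printf 'hello\n' > "$work/src/a.txt"
printf 'world\n' > "$work/src/b.txt"

(cd "$work/src" && sha256sum a.txt b.txt | sort) > "$work/local.sums"
cp "$work/local.sums" "$work/remote.sums"   # stand-in for the remote listing

# Any difference means a file whose remote hash is missing or wrong.
if diff -u "$work/local.sums" "$work/remote.sums" >/dev/null; then
  echo "verified: remote matches source"
else
  echo "MISMATCH: do not trust the backup yet"
fi
```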
