I would like a second opinion on how I set up my pipeline and whether this is the correct way of avoiding the --size-only fallback. I want my data to be checked for consistency in a reliable way, and this was the only way I could think of.
Run the command 'rclone version' and share the full output of the command.
rclone v1.63.1
os/version: debian 12.1 (64 bit)
os/kernel: 6.1.0-10-amd64 (x86_64)
os/type: linux
os/arch: amd64
go/version: go1.20.6
go/linking: static
go/tags: none
Which cloud storage system are you using? (eg Google Drive)
Dropbox Business
The command you were trying to run (eg rclone copy /tmp remote:tmp)
Would that still do the hash check on the storage system? As Dropbox only supports its own hash type, which chunker doesn't?
EDIT:
Since I am using chunker to make same-size chunks, I am afraid that a remote file would pass as equal when it is in fact a different file of the same size (and Dropbox does not support modtime either).
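Roughly the kind of layering I have in mind (a minimal sketch only; the remote names, paths and options are illustrative, not my exact config):

# rclone.conf sketch, assuming local -> crypt -> chunker -> Dropbox
[dropbox]
type = dropbox

[chunker]
type = chunker
remote = dropbox:chunked
chunk_size = 2G
# sha1all makes chunker store a whole-file SHA-1 in its own metadata
hash_type = sha1all

[crypt]
type = crypt
remote = chunker:
# passwords must be stored obscured (rclone obscure), placeholder here
password = XXX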
You ideally want to check consistency from the source file to the destination file. As I understand it, your suggested configuration would only check the source file against chunker, but would not check the Dropbox checksum?
You have only one destination in the given rclone command, so depending on what your destination is (chunker or dropbox), you only check against that destination.
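For example (remote and path names here are placeholders, not taken from your config):

rclone sync --checksum /local/data chunker:folder
rclone sync --checksum /local/data crypt:folder

The first command compares against whatever hashes chunker: reports, the second against whatever crypt: reports.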
Can you show the command you think will fail when you run it because it is not checking the right checksums?
I do a rclone sync --checksum localfile crypt:folder
But in a chained setup like this one, does the last step (Dropbox) still check the checksums and replace the file when they differ? If yes, consider the question answered by your first response.
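If it helps, a hash-independent way to verify would presumably be rclone check with --download, which downloads the files and compares their actual contents rather than relying on what any layer reports as a hash (paths here are placeholders):

rclone check /local/data crypt:folder --download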