hi all, I'm trying to understand what makes a remote support hashes, but I haven't managed to figure it out yet.
is someone able to explain it to me?
what makes, say, Box support (some) hashes, and another, say, Mega, not?
is it lacking on the cloud storage provider's side? or is it possible but not yet implemented in rclone?
also, how does the hash checking work, specifically?
when a digest is queried from the cloud storage provider, is it effectively computed on-the-fly from there?
or is it retrieved from there, but computed previously, for example upon completion of the last file modification (or whatever the provider's choice is)?
I understand pretty well how hash sums work, but I'd like to understand this even better so as to figure out if and when rclone can help speed up my backups, which rely on daily hashsums!
If you look at the API for some cloud providers, you see that they ask you to submit the hash of the data along with the actual data. I strongly suspect they use that to verify the upload and then store the hash along with the object. The B2 API is pretty readable even if you're not very advanced with this stuff (I'm not either).
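As a concrete illustration (not real client code, just a sketch of the idea): B2's `b2_upload_file` call wants the SHA-1 of the payload in the `X-Bz-Content-Sha1` header, so the client computes it up front and the server can verify and store it. Something like:

```python
import hashlib

def build_b2_upload_headers(data: bytes, filename: str) -> dict:
    # B2's b2_upload_file expects the SHA-1 of the payload in
    # X-Bz-Content-Sha1; the server verifies it on receipt and
    # keeps it as metadata on the object.
    sha1 = hashlib.sha1(data).hexdigest()
    return {
        "X-Bz-File-Name": filename,
        "Content-Length": str(len(data)),
        "X-Bz-Content-Sha1": sha1,
    }

headers = build_b2_upload_headers(b"hello world", "test.txt")
print(headers["X-Bz-Content-Sha1"])
# -> 2aae6c35c94fcfb415dbe95f408b9ce91ee846ed
```

So answering a later `hashsum` query is cheap for such a remote: the digest is just read back from metadata, never recomputed.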
Other remotes such as SFTP do not store it, and it is computed on the fly when requested. These tend to be remotes that are less about immutable object storage and more directly file-based.
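The on-the-fly case boils down to streaming the whole file through a hash function at query time (for SFTP I believe rclone asks the server to run md5sum/sha1sum when it can, so the data doesn't even cross the wire). A rough sketch of the streaming side, with hypothetical names:

```python
import hashlib
import io

def hash_on_request(stream, algo: str = "md5", chunk_size: int = 64 * 1024) -> str:
    # A file-based remote has no stored digest, so the full file
    # must be read every time a hash is asked for. Cost scales
    # with file size, unlike the stored-metadata case.
    h = hashlib.new(algo)
    while chunk := stream.read(chunk_size):
        h.update(chunk)
    return h.hexdigest()

print(hash_on_request(io.BytesIO(b"abc")))
# -> 900150983cd24fb0d6963f7d28e17f72
```

That's why hash listings on such remotes can be slow on big trees: every query re-reads the data.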
Some do, but they generally check the checksum and issue an error if the file they received is corrupt.
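Server-side, that check is conceptually just: recompute the digest of the bytes that actually arrived and reject the upload on mismatch. A toy sketch (hypothetical function, not any provider's real code):

```python
import hashlib

def accept_upload(data: bytes, claimed_sha1: str) -> None:
    # Recompute the digest of what actually arrived; a mismatch
    # means the payload was corrupted in transit, so reject it.
    actual = hashlib.sha1(data).hexdigest()
    if actual != claimed_sha1:
        raise ValueError(f"checksum mismatch: got {actual}, expected {claimed_sha1}")

# Matching hash: upload accepted silently.
accept_upload(b"abc", hashlib.sha1(b"abc").hexdigest())
```

rclone relies on the same idea in the other direction: it compares its locally computed hash with what the remote reports, so a bad transfer surfaces as an error instead of silent corruption.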
Note that most cloud providers check the checksum of each file periodically to make sure it hasn't bit rotted and if it has then they restore from a different copy.
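That periodic scrubbing is essentially: walk the stored objects, recompute each digest, and on mismatch restore from a healthy replica. A toy model of the loop (entirely hypothetical data structures, just to show the shape):

```python
import hashlib

def scrub(objects: dict, replicas: dict) -> list:
    # objects maps name -> (data, stored_digest); replicas maps
    # name -> a known-good copy. Recompute each digest, and repair
    # any object whose bytes no longer match (bit rot).
    repaired = []
    for name, (data, stored_digest) in objects.items():
        if hashlib.sha1(data).hexdigest() != stored_digest:
            good = replicas[name]
            objects[name] = (good, hashlib.sha1(good).hexdigest())
            repaired.append(name)
    return repaired
```

For your daily-hashsum backups this is good news: on remotes that store hashes, the provider is already guarding the data at rest, and `rclone check` against the stored hashes is cheap because nothing gets re-read.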