modtime+size (both combined, with a backend-determined "precision margin" for the modtime component, since backends store timestamps to different levels of precision)
We can't control the server-side generated hash, not on most backends anyway. It simply is whatever the stored file produces. We could store a hash in the file itself though (for that, see further down in the post).
If there are no compatible hashes between the two systems, then it falls back to size+modtime. This is usually pretty accurate, all things considered.
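To make the fallback concrete, here is a minimal sketch of a size+modtime comparison with a precision margin. The function name and the 1-second default are my own illustration, not rclone's actual implementation - the point is only that sizes must match exactly while modtimes only need to agree within the backend's precision:

```python
from datetime import datetime, timedelta

def same_file(size_a, mtime_a, size_b, mtime_b, precision=timedelta(seconds=1)):
    """Fallback comparison when no common hash exists: sizes must match
    exactly, modtimes need only agree within the precision margin."""
    if size_a != size_b:
        return False
    return abs(mtime_a - mtime_b) <= precision

# A backend that only stores whole seconds truncates 12:00:00.4 to 12:00:00;
# the 1-second margin still treats the two timestamps as equal.
a = datetime(2020, 1, 1, 12, 0, 0, 400000)
b = datetime(2020, 1, 1, 12, 0, 0)
print(same_file(1024, a, 1024, b))  # True
print(same_file(1024, a, 2048, b))  # False: size differs
```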
Of course, the core problem here is that the hash of the original file will not be the same as the hash of the encrypted file that is generated server-side. And since the server can't decrypt to look at the underlying file, we are at an impasse.
Not having comparable hashes is not purely a crypt issue per se. If you sync two encrypted volumes (directly, not through the crypt remote) then these can be hash-compared just fine. But any time you compare non-encrypted to encrypted - or two encrypted systems where files aren't necessarily always originating from one source - then you have a problem.
What further complicates things is the nonce, the "random seed" for the encryption that ensures the encrypted output is not the same each time, for security/obfuscation reasons. This means we can't simply encrypt locally and hash that to compare with the encrypted file on the server. The hashes will not match even if the file inside is identical.
What can be done is to download the nonce, encrypt locally using that same nonce, and hash the result. Then the hashes will match (assuming the underlying files are identical, obviously). Having a function that does this automatically has been suggested. It actually already exists in the form of rclone cryptcheck, but there's no flag that lets you use this technique during a copy/move/sync. This is something that probably should be added...
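The nonce effect, and why reusing the downloaded nonce makes the hashes comparable again, can be shown with a toy stream cipher. This is NOT rclone's real crypt format - just a stdlib-only illustration where a keystream is derived from key+nonce and XORed with the data:

```python
import hashlib, os

def toy_encrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Toy stream cipher (illustration only, not rclone's actual cipher):
    derive a SHA-256-based keystream from key+nonce, XOR it with the data."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(x ^ y for x, y in zip(data, stream))

key = b"k" * 32
plaintext = b"identical file contents"

# A fresh random nonce per upload -> different ciphertext, different hash,
# even though the plaintext is identical:
c1 = toy_encrypt(key, os.urandom(24), plaintext)
c2 = toy_encrypt(key, os.urandom(24), plaintext)
print(hashlib.md5(c1).hexdigest() == hashlib.md5(c2).hexdigest())  # False

# Reuse the nonce fetched from the remote -> ciphertexts and hashes match,
# which is essentially the trick cryptcheck relies on:
nonce = os.urandom(24)
c3 = toy_encrypt(key, nonce, plaintext)
c4 = toy_encrypt(key, nonce, plaintext)
print(hashlib.md5(c3).hexdigest() == hashlib.md5(c4).hexdigest())  # True
```

Because XOR is its own inverse, running toy_encrypt again with the same key and nonce also decrypts - a property of this toy sketch, not a claim about the real format.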
Lastly, let me mention that I've had some chats with Nick on this already, and we have agreed that it would be wise to bake the original hash into the crypt format itself (and potentially other metadata too). This would allow easy access to the original file's hash and comparison based on that. It's not going to be quite as fast as grabbing all that info from a listing - but it will surely be a worthwhile compromise in return for the ability to use --checksum, --track-renames and much more between any two remotes - encrypted or not, and even if several different crypt keys are in play.
That hash will have to be generated locally, but because that data will reside within the crypt structure, it will be inherently protected by the data integrity of the format. Thus there shouldn't be any way for the locally calculated hash to "not be true" because it got corrupted in transfer somehow.
An issue has been started on that topic here:
Hope this helped. I'm sure you have follow-ups to this, as usual.
Probably need to wait until tomorrow though...