Crypt hashes are tricky indeed!
One reason not to store hashes of the unencrypted data is that it is an info leak. Let's say you were storing a known file with a known hash. The provider could potentially work out that you were storing that known file by finding its hash as metadata on your encrypted file.
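To make the leak concrete, here is a toy sketch of how a provider holding plaintext hashes could match them against a table of well-known files. The file contents and names are made up for illustration:

```python
import hashlib

# Hypothetical table the provider could keep: hashes of well-known files.
known_files = {
    hashlib.md5(b"contents of a well-known public file").hexdigest(): "famous-file.iso",
}

# The plaintext hash stored as metadata alongside the user's encrypted upload.
stored_metadata_hash = hashlib.md5(b"contents of a well-known public file").hexdigest()

# The provider can now identify what the user stored without decrypting anything.
match = known_files.get(stored_metadata_hash)
print(match)  # -> famous-file.iso
```

Storing only hashes of the *encrypted* data avoids this, since the ciphertext hash reveals nothing about the plaintext.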
Most providers likely just treat a checksum as metadata stored at upload time. Some providers, I'm sure, check that metadata from time to time to find bitrot, but not many recompute the checksum on the fly when you ask for it, as that is expensive. The only backends which do that are the local backend and the sftp backend as far as I know.
This will detect bitrot when you come to download the file - the metadata hash will not match the hash rclone calculates.
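The download-time check boils down to: recompute the hash of what you received and compare it with the metadata recorded at upload. A minimal sketch, with a dict standing in for the remote object:

```python
import hashlib

def upload(data: bytes) -> dict:
    """Simulate a provider storing the data plus an MD5 recorded at upload time."""
    return {"data": bytearray(data), "md5": hashlib.md5(data).hexdigest()}

def download_and_verify(obj: dict) -> bytes:
    """Recompute the hash on download and compare with the stored metadata."""
    got = hashlib.md5(bytes(obj["data"])).hexdigest()
    if got != obj["md5"]:
        raise IOError("corrupted on transfer: hashes differ")
    return bytes(obj["data"])

obj = upload(b"some file contents")
obj["data"][3] ^= 0xFF  # simulate bitrot while the file sat on the remote
try:
    download_and_verify(obj)
except IOError as e:
    print(e)
```

The key point is that the metadata hash is frozen at upload time, so any later corruption of the stored bytes shows up as a mismatch.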
Rclone does this already when uploading and downloading files via crypt. It is a bit behind the scenes, but if your provider supports a hash then when uploading rclone will work out what the hash of the encrypted data should be and check it at the end of the upload. A similar process happens for downloads.
Your point about errors not being caught during encryption is an interesting one.
If your computer had some bad RAM which introduced errors into the data before it was encrypted, then the file would upload apparently all OK, the checksum would not notice, and the file would download OK.
However `rclone cryptcheck` would notice the problem.
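The trick cryptcheck uses is that the encryption is deterministic once the nonce is fixed: it reads the nonce off the encrypted remote file, encrypts the local plaintext with that same nonce, and compares checksums of the two ciphertexts. A toy sketch of that idea, where the keystream function is a made-up stand-in for the real cipher:

```python
import hashlib
import hmac

KEY = b"toy-key"  # stand-in for the key derived from the crypt password

def keystream(nonce: bytes, length: int) -> bytes:
    """Toy deterministic keystream (NOT rclone's real cipher)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(KEY, nonce + counter.to_bytes(8, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt(nonce: bytes, plaintext: bytes) -> bytes:
    """Prepend the nonce, XOR the plaintext with the keystream."""
    ks = keystream(nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, ks))

# Upload: encrypt with a fresh nonce; the provider stores a hash of the ciphertext.
plaintext = b"the local copy of the file"
remote = encrypt(b"nonce123", plaintext)
remote_hash = hashlib.md5(remote).hexdigest()

# cryptcheck: read the nonce off the remote object, re-encrypt the local
# plaintext with it, and compare ciphertext hashes.
nonce = remote[:8]
local_hash = hashlib.md5(encrypt(nonce, plaintext)).hexdigest()
print(local_hash == remote_hash)  # True only if local and remote data agree
```

If bad RAM had corrupted the data before the original upload, the local plaintext would no longer re-encrypt to the same ciphertext, and the hashes would differ.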
I'll just note also that rclone chunks the data into 64 KiB blocks for crypt and each of these blocks carries a very strong authenticator, so if any of them get corrupted you will get an error on download.
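The per-block protection looks roughly like this: split the stream into 64 KiB chunks and attach an authenticator to each, verifying every chunk on the way back out. HMAC-SHA256 here is a stand-in for the tag rclone's cipher actually uses, and the key is made up:

```python
import hashlib
import hmac

KEY = b"toy-key"     # stand-in for the real data key
CHUNK = 64 * 1024    # crypt works on 64 KiB blocks

def seal(data: bytes) -> list:
    """Split into chunks and attach an authenticator tag to each one."""
    return [(c, hmac.new(KEY, c, hashlib.sha256).digest())
            for c in (data[i:i + CHUNK] for i in range(0, len(data), CHUNK))]

def open_chunks(chunks: list) -> bytes:
    """Verify every chunk's tag before returning the data."""
    out = b""
    for c, tag in chunks:
        if not hmac.compare_digest(tag, hmac.new(KEY, c, hashlib.sha256).digest()):
            raise IOError("chunk failed authentication: corrupted data")
        out += c
    return out

blob = seal(b"x" * 200_000)                    # a few chunks' worth of data
blob[1] = (b"y" + blob[1][0][1:], blob[1][1])  # flip a byte in one chunk
try:
    open_chunks(blob)
except IOError as e:
    print(e)
```

Because each chunk is checked independently, even a single flipped bit anywhere in the file is caught as soon as that chunk is read.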