And the header/footer would get extracted when the file is downloaded, to verify the hash? (How will you handle that with mount? There the files mostly don't get downloaded, they only get viewed...)
But even then an additional file with filenames + hashes would be great, so the integrity check can use two hashes.
When you download a file and the head/foot or the encrypted data gets corrupted during the download, you can't be sure whether only the head/foot, only the data, or both got corrupted.
For that you need an additional file containing the hash of the unencrypted data, so you can still verify the hash when:
the data is encrypted and the hash doesn't match the one in the head/foot.
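A minimal sketch of what I mean by checking with two hashes (assuming SHA-256 and that both the head/foot and the extra file store the hash of the unencrypted data; the names here are made up, it's only to show the idea):

```python
import hashlib

def diagnose(plaintext: bytes, header_hash: str, sidecar_hash: str) -> str:
    """Compare the decrypted data against both stored hashes to tell
    which part got corrupted during the download."""
    computed = hashlib.sha256(plaintext).hexdigest()
    if computed == header_hash == sidecar_hash:
        return "ok"
    if computed == sidecar_hash and computed != header_hash:
        return "only the head/foot is corrupted, the data is fine"
    if header_hash == sidecar_hash and computed != header_hash:
        return "the data got corrupted during the download"
    return "cannot tell, re-download and check again"
```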
Because of that, the additional file should be uploaded too. A good question here would be:
- Do you want to save the filename + hash as a single file per uploaded data file? (See the sketch after this list.)
(Easier to find the correct hash and easier to implement, because you don't have to download a file covering the whole data directory.)
(The name of the hash file could be .hashFILENAME (the literal word "hash", not the hash of the file) or something like that, so it stays invisible.)
- Or do you want to save all filenames + hashes in one single file?
(Then you have to search inside that file for the correct name and extract the entry to see whether the hash is correct.)
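For the per-file variant, a rough sketch of how writing such a sidecar could look (the ".hash" + filename naming is only my suggestion, and SHA-256 is just an example):

```python
import hashlib
from pathlib import Path

def write_sidecar(path: Path) -> Path:
    """Write a hidden .hashFILENAME file next to the data file,
    containing the hash of the unencrypted data."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    sidecar = path.with_name(".hash" + path.name)
    sidecar.write_text(digest + "\n")
    return sidecar

# write_sidecar(Path("holiday.jpg")) creates ".hashholiday.jpg"
```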
You could probably also save the hash of the encrypted file as the storage provider reports it, so you can check whether the file got corrupted on the provider's side, and if yes, upload it again.
(It would be great to get a notification about that when you copy or sync files.)
(A dedicated command to check whether the encrypted file hashes are still the same or have changed, so you could upload them again, would probably be worth it? Roughly sketched below.)
I know the chance of a file getting corrupted is really small, but I don't think it's a big effort to add that too, since the technical backend is presumably already finished? (When the storage provider supports it.)
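Roughly what I imagine such a check command doing (the saved map of encrypted-file hashes and the remote_checksum callback are pure assumptions, standing in for whatever hash the provider exposes, e.g. an ETag or MD5):

```python
import json
from pathlib import Path

def check_remote(expected_file: Path, remote_checksum) -> list[str]:
    """Compare the hashes of the encrypted files saved at upload time
    against what the storage provider reports now, and list every file
    that needs to be uploaded again."""
    expected = json.loads(expected_file.read_text())  # {encrypted name: hash}
    needs_reupload = []
    for name, saved_hash in expected.items():
        if remote_checksum(name) != saved_hash:
            needs_reupload.append(name)  # corrupted on the provider's side
    return needs_reupload
```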
What do you want to do with the data that is already uploaded? I think many people have a big amount of data in the cloud... having to download it all and upload it again with the additional head/foot would be far from nice...
(I finished my 5 TB backup a few days ago and the upload took around 4 weeks.)
Here the additional file would be great (download speed is usually much better than upload speed), because you could implement it like this:
- New data gets uploaded with the hash in the head/foot plus a file with filename + hash.
- Old data only gets the additional file with filename + hash.
When you download an old file and it has no hash in the head/foot, you can just use the hash in the .hashFILENAME file that gets downloaded alongside it.
That way you only have one hash to verify the integrity of the file, but it's better than nothing.
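Put together, the download check could fall back like this (only a sketch, assuming SHA-256 and that old files simply have no hash in the head/foot yet):

```python
import hashlib

def verify_download(plaintext: bytes, header_hash: str | None, sidecar_hash: str) -> bool:
    """Verify a downloaded file; fall back to the sidecar hash alone
    when the file is old and carries no hash in its head/foot."""
    computed = hashlib.sha256(plaintext).hexdigest()
    if header_hash is not None:
        # new file: both hashes are available
        return computed == header_hash and computed == sidecar_hash
    # old file: only the .hashFILENAME sidecar to compare against
    return computed == sidecar_hash
```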
An additional command would also be great for people who want to download their old files from the cloud, add the hash to the head/foot, and upload the files again (naturally with a test whether the re-uploaded data arrived correctly or not).
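The flow of that upgrade command could be as simple as this (every function name here is a placeholder, it only shows the order of the steps):

```python
def upgrade_old_file(name: str, download, add_header_hash, upload, verify) -> None:
    """Download an old file, rewrite it with the hash in the head/foot,
    upload it again and test that the new upload is intact."""
    data = download(name)               # fetch the old encrypted file
    upgraded = add_header_hash(data)    # rewrite it with the hash in head/foot
    upload(name, upgraded)              # upload the new version
    if not verify(name, upgraded):      # check the re-uploaded data
        raise RuntimeError(f"re-upload of {name} failed verification")
```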