Hello Nick and everyone,
I’ve been intensively testing rclone with ACD, and during the last 10 or so days I’ve uploaded 1.6TB of varied data using rclone over an encrypted remote on top of my ACD account, and it’s been working great.
I’ve seen some worrisome messages (like “Attempt 1/3 failed with 3 errors and: failed to authenticate decrypted block - bad password?”), but after the transfer finishes I’m able to copy the data back from ACD, and when I check it against my previously (and locally) calculated MD5 checksums it is perfectly OK – so rclone seems to be doing a great job of recovering from ACD troubles.
In a single word, rclone is AWESOME, way better than all other alternatives I’ve checked so far. Many thanks to Nick for creating, releasing and maintaining it.
One thing I’ve been thinking about is how rclone sync (and copy, and check) can determine whether a local and a remote file are equal. As ACD doesn’t support ModTime, it seems we’d be restricted to size and hash… but on an encrypted remote, rclone can’t use the server-side checksum directly, as the hashes of the encrypted files differ from those of the plaintext, so the only thing left to check would be the size.
Perhaps while copying files to the remote, rclone could ask the server for the hash of each (encrypted) file as soon as it’s uploaded, and save it locally along with the hash of the local (unencrypted) file. At the end of the transfer, it could upload those local/remote hash pairs to a special file (say, “.rclone_hashes”) in the same directory on the server. Then, at the start of the next sync/copy/check, rclone could look for such a file and, if present, download it first and use it to correlate the remote (encrypted) hashes reported by the server with the local hashes.
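To make the idea concrete, here’s a rough sketch of what that sidecar mapping and check could look like. This is purely illustrative – the function names, the pairing format, and the toy “encryption” are my own assumptions, not anything rclone actually does:

```python
# Hypothetical sketch of the ".rclone_hashes" idea described above.
# All names and the pairing format are illustrative assumptions.
import hashlib


def md5_hex(data: bytes) -> str:
    """MD5 of a blob, as lowercase hex (the hash type ACD reports)."""
    return hashlib.md5(data).hexdigest()


def build_hash_map(files, encrypt):
    """At upload time: pair each plaintext hash with the hash of its
    encrypted form, i.e. the hash the server would report afterwards.
    `files` maps name -> plaintext bytes; `encrypt` is the crypt layer."""
    return {
        name: {
            "local_md5": md5_hex(data),
            "remote_md5": md5_hex(encrypt(data)),
        }
        for name, data in files.items()
    }


def verify(local_files, remote_hashes, hash_map):
    """On a later check: take the encrypted hash the server reports,
    match it against the recorded remote hash, then compare the recorded
    plaintext hash against the current local file. Returns mismatches."""
    mismatches = []
    for name, data in local_files.items():
        entry = hash_map.get(name)
        if entry is None or remote_hashes.get(name) != entry["remote_md5"]:
            mismatches.append(name)          # missing or changed remotely
        elif md5_hex(data) != entry["local_md5"]:
            mismatches.append(name)          # changed locally since upload
    return mismatches
```

The point is that neither side ever needs to download file contents for the check: the server is only asked for the hashes of the encrypted files, and the sidecar map translates those back into plaintext hashes.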
Just an idea… rclone is awesome as it is, but if it could check the content (i.e., the hash) of local files against their copies on an encrypted remote, it would add a lot of peace of mind.
Cheers,
Durval.