While running `rclone cryptcheck` on a 5 TB directory (over 15,000 files, many larger than 50 GB), my internet connection dropped and I had to run `cryptcheck` again from the beginning (it had already been running for over 28 hours). Is there a way to avoid this? For example, rclone could technically write the names of the files it has successfully checked to a temporary text file, and the user could pass this file to a flag like `--continue-from` the next time they run `cryptcheck`.

I understand that the user can do this programmatically with a bash script in several different ways, e.g. by sending one file at a time to `cryptcheck`. But I would really like to know whether there is already an rclone way to do this, or whether there are any plans to implement such a feature. It would be really useful for extremely large directories with complex nested structures containing large files.
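In case it helps the discussion, here is a minimal sketch of the bash workaround mentioned above: keep a log of files that have already been verified and skip them on the next run. `check_one` is a hypothetical stand-in for a per-file check; a real version might invoke something like `rclone cryptcheck` restricted to a single file via rclone's existing `--files-from` filter flag. File names here are illustrative only.

```shell
#!/bin/sh
# Sketch: resumable checking by logging verified files.
# Assumption: one file name per line in the log, no newlines in names.

DONE_LOG="checked.log"
touch "$DONE_LOG"

check_one() {
    # Placeholder for a real per-file check, e.g. something like:
    #   rclone cryptcheck "$LOCAL_DIR" gcrypt: --files-from "$one_line_list"
    # Here it always "succeeds" so the sketch is self-contained.
    true
}

for f in dir/a.bin dir/b.bin dir/c.bin; do
    # Skip files already verified by a previous (possibly interrupted) run.
    if grep -qxF "$f" "$DONE_LOG"; then
        continue
    fi
    if check_one "$f"; then
        # Record success so a rerun can pick up where this one stopped.
        printf '%s\n' "$f" >> "$DONE_LOG"
    fi
done
```

The obvious downside, as noted above, is that per-file invocations lose rclone's internal parallelism, which is why a native `--continue-from`-style flag would be nicer.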
On another note, what usually speeds up `cryptcheck`? Is the CPU of the local computer the only bottleneck, or does rclone also need to download files from an encrypted GSuite remote, so that internet speed plays a role too? (In other words, does GSuite store the MD5 hashes of files, and does `rclone cryptcheck` read them directly?)
What is your rclone version (output from `rclone version`)
Which OS you are using and how many bits (eg Windows 7, 64 bit)
Windows 10, 64-bit
Which cloud storage system are you using? (eg Google Drive)
The command you were trying to run (eg `rclone copy /tmp remote:tmp`)
`.\rclone.exe cryptcheck H:\backups gcrypt: --progress -vv --config .\custom_path_to_conf_file`