Cryptcheck pause/resume and continue where it left off last time

While running cryptcheck on a 5 TB directory (over 15,000 files, many larger than 50 GB), my internet connection dropped and I had to restart cryptcheck from the beginning (it had already been running for over 28 hours). Is there a way to avoid this? For example, rclone could write the names of the files it has successfully checked to a temporary text file, and on the next run the user could pass that file to a flag like --continue-from.

I understand that a user could do this themselves in several different ways, e.g. with a bash script that sends one file at a time to cryptcheck (roughly like the sketch below). But I am really interested to know whether there is already an rclone way to do this, or whether there are plans to implement such a feature. It would be really useful for extremely large directories with complex nested structures containing large files.
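For illustration, here is a rough sketch of what I mean, assuming a Unix-like shell (WSL or Git Bash on Windows). The checked.txt bookkeeping file and the per-file loop are just my own convention, not rclone features; only lsf, cryptcheck, --files-from and --config are existing rclone functionality:

#!/usr/bin/env bash
# Rough manual-resume sketch (not an rclone feature): keep a list of files
# that have already passed cryptcheck and skip them on later runs.
SRC="H:/backups"                     # local source
DST="gcrypt:"                        # crypted remote
CONF="./custom_path_to_conf_file"    # rclone config file
DONE="checked.txt"                   # files verified on previous runs
touch "$DONE"

# List every file under the source, skip the ones already verified,
# cryptcheck the rest one at a time and record each success.
rclone lsf -R --files-only "$SRC" --config "$CONF" | while IFS= read -r f; do
    grep -Fxq "$f" "$DONE" && continue
    printf '%s\n' "$f" > onefile.txt
    if rclone cryptcheck "$SRC" "$DST" --files-from onefile.txt --config "$CONF"; then
        printf '%s\n' "$f" >> "$DONE"
    fi
done

Checking one file per invocation is obviously slow for 15,000+ files; batching a few hundred paths per --files-from list would amortise the startup cost, but the bookkeeping idea is the same.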

On another note, what would usually speed up cryptcheck? Is the CPU on the local computer the only bottleneck, or does rclone also need to download data from the encrypted GSuite remote, so that internet speed plays a role too? In other words, does GSuite store the MD5 of each file, and does rclone cryptcheck read those hashes directly?

What is your rclone version (output from rclone version)

1.52.0 (64-bit)

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Windows 10, 64-bit

Which cloud storage system are you using? (eg Google Drive)

GSuite

The command you were trying to run (eg rclone copy /tmp remote:tmp)

.\rclone.exe cryptcheck H:\backups gcrypt: --progress -vv --config .\custom_path_to_conf_file

You should have been able to just keep rclone running and it should have sorted itself out... maybe!

We don't have a plan for resuming cryptcheck at the moment.

I do have a plan for output files for check, though, which would make resuming really easy: cryptcheck would write a list of the files it had checked and you could exclude them on the next run.
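Very roughly, resuming would then just be a matter of feeding that list back in; for example (checked.txt is only an illustrative name, and note that --exclude-from entries are filter patterns, so file names containing glob characters would need escaping):

.\rclone.exe cryptcheck H:\backups gcrypt: --exclude-from checked.txt --progress -vv --config .\custom_path_to_conf_file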

Cryptcheck uses local disk reads to get at the local files, CPU to encrypt and checksum them, and network to fetch the remote checksums plus a few bytes from the start of each file. Any one of these could be the bottleneck. OS stats on your computer should give you a clue as to which one it is.
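If none of those looks saturated, the main rclone-side knob is --checkers, which sets how many files are checked in parallel (the default is 8). For example, building on your command (16 is just an example value):

.\rclone.exe cryptcheck H:\backups gcrypt: --checkers 16 --progress -vv --config .\custom_path_to_conf_file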

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.