when running cryptcheck, i am getting the following in the log.
2020/04/07 07:31:23 DEBUG : Aldous.Huxley's.Brave.New.World-ek5vse2_Aq0.mp4: OK - could not check hash
2020/04/07 07:31:24 DEBUG : FutureShock.OrsonWelles-vVJrJk3q3MA.mp4: OK
2020/04/07 07:31:24 DEBUG : FutureShock.OrsonWelles-vVJrJk3q3MA.mp4: OK
what does "OK - could not check hash" mean, and why can rclone not check the hash but still report the file as OK?
also notice that for each file that is OK, there are two entries in the log.
it seems rclone is not actually checking hashes, as the command would otherwise take a long time with hundreds of movie files to hash.
thanks
What is your rclone version (output from rclone version)
v1.51.0
Which OS you are using and how many bits (eg Windows 7, 64 bit)
windows10.64
Which cloud storage system are you using? (eg Google Drive)
wasabi, using crypted remote
The command you were trying to run (eg rclone copy /tmp remote:tmp)
2020/04/07 07:34:59 DEBUG : rclone: Version "v1.51.0" starting with parameters ["c:\\data\\rclone\\scripts\\rclone.exe" "cryptcheck" "s:\\data\\m\\media\\movies\\" "wasabimediacrypt:/m/media/movies/" "--progress" "--log-level=DEBUG" "--log-file=C:\\data\\rclone\\scripts\\rr\\other\\test\\log.include.txt"]
A log from the command with the -vv flag (eg output from rclone -vv copy /tmp remote:tmp)
2020/04/07 07:34:59 DEBUG : rclone: Version "v1.51.0" starting with parameters ["c:\\data\\rclone\\scripts\\rclone.exe" "cryptcheck" "s:\\data\\m\\media\\movies\\" "wasabimediacrypt:/m/media/movies/" "--progress" "--log-level=DEBUG" "--log-file=C:\\data\\rclone\\scripts\\rr\\other\\test\\log.include.txt"]
2020/04/07 07:34:59 DEBUG : Using RCLONE_CONFIG_PASS password.
2020/04/07 07:34:59 DEBUG : Using config file from "c:\data\rclone\scripts\rclone.conf"
2020/04/07 07:34:59 INFO : Using MD5 for hash comparisons
2020/04/07 07:34:59 INFO : Encrypted drive 'wasabimediacrypt:/m/media/movies/': Waiting for checks to finish
2020/04/07 07:34:59 DEBUG : Aldous.Huxley's.Brave.New.World-ek5vse2_Aq0.mp4: OK - could not check hash
2020/04/07 07:35:00 DEBUG : FutureShock.OrsonWelles-vVJrJk3q3MA.mp4: OK
2020/04/07 07:35:00 DEBUG : FutureShock.OrsonWelles-vVJrJk3q3MA.mp4: OK
2020/04/07 07:35:07 NOTICE: Encrypted drive 'wasabimediacrypt:/m/media/movies/': 0 differences found
2020/04/07 07:35:07 NOTICE: Encrypted drive 'wasabimediacrypt:/m/media/movies/': 148 hashes could not be checked
2020/04/07 07:35:07 NOTICE: Encrypted drive 'wasabimediacrypt:/m/media/movies/': 384 matching files
2020/04/07 07:35:07 INFO :
Transferred: 0 / 0 Bytes, -, 0 Bytes/s, ETA -
Checks: 384 / 384, 100%
Elapsed time: 7.1s
OK... Files without hashes will cause the problem.
Did you transfer them from somewhere other than local disk? If you did a cloud -> cloud copy from a source which didn't support md5s that could explain it.
For the small files (below --s3-upload-cutoff) everything is fine.
For the large files above --s3-upload-cutoff what is happening is this:
- When rclone uploads a chunked file to s3, it asks the source backend for the MD5SUM of the file before the transfer
- If the source backend does not have that, then the MD5SUM will be blank
- In the case of crypt, it does not have the MD5SUM - it would need to read the file and then MD5 it first
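You can see that last point directly by asking the crypt remote for md5s - for example, on your remote (purely as an illustration):

rclone md5sum wasabimediacrypt:/m/media/movies/

I'd expect that to come back with blank (or unsupported) hashes for every file, because crypt can't work out an md5 without reading the whole file.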
This could be fixed by:
1. Making a special case in crypt so that if the source file is a local file, we do read it, encrypt the data and md5sum it to produce the hash. This wouldn't be very much more intensive than hashing a local file.
2. If the md5 can't be read from the source backend, calculating it on the fly and adding it to the metadata afterwards. Note that you can only supply the metadata at the start of the process, so supplying it at the end would mean doing an S3 copy operation, which can be expensive for large objects. We did think of doing this before but decided that people wouldn't appreciate the extra costs.
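If you want to see which of your existing files are affected, one way (roughly - adjust to your config) is to list the hashes on the underlying wasabi remote rather than the crypt remote, e.g. assuming the crypt remote wraps something like "wasabi:bucket":

rclone lsjson --hash --files-only --recursive wasabi:bucket

The names will be the encrypted ones, but any object that comes back without an MD5 in its Hashes should be one of the 148 that cryptcheck couldn't check.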
I have had a go with option 1 which I think is quite a good compromise - let me know what you think.
thanks,
yes, i was doing some testing and was thinking it had something to do with multi-part uploads, something to do with --s3-chunk-size.
i am a little confused by all this.
i just want to have a hash for every file in every scenario.
option 1 is good because there will be a hash for all crypted files.
but i think i will have to re-upload the 1TB of data, as i need the hash. no big deal.
for option 2, when uploading a local file, and when uploading to non-crypted remotes like wasabi or another s3 clone, is there any scenario where there will not be a hash?
also, when testing the beta with a crypted remote, how can i trigger the problem for sure?
are there rclone command flags that would make a good test of the new beta?
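for example, i am guessing something like this would force a multi-part upload and then check it (with --s3-upload-cutoff lowered just for the test, and a throwaway folder holding a single 100MB file - paths made up):

rclone copy c:\temp\cryptcheck-test wasabimediacrypt:/test --s3-upload-cutoff=5M --log-level=DEBUG
rclone cryptcheck c:\temp\cryptcheck-test wasabimediacrypt:/test --log-level=DEBUG

with v1.51.0 i would expect the cryptcheck log to show "OK - could not check hash" for that file, and with the beta just "OK".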
i see the beta is up and i am downloading now...
again, thanks
i am lucky to have wasabi and 1Gbps uploads, and of course rclone
when testing the beta with a crypted remote, how can i trigger the problem for sure?
are there rclone command flags that would make a good test of the new beta?
also, the good news is that i see new entries in the log when uploading
2020/04/09 12:18:14 DEBUG : 100MB: Found object directly
2020/04/09 12:18:14 DEBUG : 100MB: Computing MD5 hash on 100MB