Hopefully I'm missing something, and I can just delete this post and pretend it never happened. But I uploaded a file, and now want to check my local version against the uploaded version.
I changed the name after uploading, but checking by size or MD5 should still be okay. Except... I don't even get that far.
@ncw
Yes--this makes perfect sense.
But it leads to a different set of issues: because the file has been renamed at the destination, the "implied" filter (if I can call it that?) causes different errors--filtering the destination directory returns zero matching files, and rightly so.
I do not understand the first part of this, because there are no symlinks anywhere. My local file is not a symlink, and the destination is google drive.
So, it sounds like there actually may not be a way to run rclone check against two files with different names? Or even rclone cryptcheck in such a way as to check only hashes, ignoring names entirely?
You can't use rclone check with an encrypted remote; you have to use cryptcheck.
Indeed, you can use rclone check, but it will not do a checksum--just a "quick check", i.e. size.
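For reference, a minimal sketch of the two invocations (remote and path names here are placeholders, not anything from the thread):

```shell
# cryptcheck verifies checksums by re-encrypting the local data
# with each remote file's nonce and comparing the hashes:
rclone cryptcheck /local/dir cryptremote:dir

# plain check against a crypt remote cannot use hashes, so it
# only compares sizes:
rclone check /local/dir cryptremote:dir
```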
If you want to move a single file, you'd use copyto as copy is expecting a folder/directory:
That's simply not true. rclone copy will work on a file or a directory. copyto would have allowed me to change the name of the file at the time of the copy, so I can see the confusion, but I changed the name after copying, so I used copy instead. I could have been clearer on that; I apologize.
Nah, you misunderstand what that is referring to. The destination is expecting a directory and your command above:
Would have made a directory called "project1module1.py" with a file in that directory.
If you wanted to run your exact command above, using copyto instead of copy works while your command does not.
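To illustrate with the filename from above (the exact destination path is an assumption on my part):

```shell
# With copy, a destination that names a file is treated as a
# directory, so this creates remote:project1module1.py/ with the
# file inside it:
rclone copy project1module1.py remote:project1module1.py

# copyto treats the destination as the target file path, so the
# same arguments copy (and could rename) the single file:
rclone copyto project1module1.py remote:project1module1.py
```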
There isn't much point in having a checksummed remote and doing a size check. While it's possible, it really doesn't make sense. The recommended approach is to use cryptcheck, as that is why it was created.
@Animosity022
You're right--I apologize--I was trying to simplify because I did rename the file, but I did it via rclone mount as part of an attempt to organize all of my stuff... Outcome TBD.
I can think of a couple of situations in which that would be useful:
1. Checking the integrity of a single file that has been renamed either during or after upload--essentially the equivalent of rclone check, for use after rclone copyto.
2. Finding duplicates of a single file across multiple directories in a crypt remote prior to uploading (e.g. if I want to see whether ~/movie.mp4 is anywhere in a mount, regardless of name, I could say rclone checkfile ~/movie.mp4 cryptdrive: and rclone would locate duplicates, even if they were buried somewhere and called movie2.mp4).
#2 might be sort of an outlier case. But rclone copy has rclone check as a counterpart, so it seems consistent to have an rclone check equivalent for rclone copyto--which we currently do not have on a single-file basis, only for directories.
It occurred to me that if I could just run rclone hashsum on each individual file, I could pipe both outputs to diff and achieve the result I'm looking for. But I think we don't currently have anything that will hash a single file on a crypted remote? hashsum and md5sum return errors, and cryptcheck does not give output in a way that I could pipe.
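The idea can be sketched with plain md5sum on local files--on a non-crypt remote, one side would be rclone md5sum remote:file instead (the filenames below are made up for illustration):

```shell
# Sketch of the "hash each side, compare the results" idea using
# two local files with different names but identical content.
echo "same content" > a.txt
echo "same content" > b.txt

# Keep only the hash column so the differing filenames don't
# affect the comparison:
h1=$(md5sum a.txt | awk '{print $1}')
h2=$(md5sum b.txt | awk '{print $1}')

if [ "$h1" = "$h2" ]; then
  echo MATCH
else
  echo DIFFER
fi
```

The awk step matters: md5sum output includes the filename, so diffing the raw output of two differently named files would always report a difference.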
This would be a very expensive operation as it would read the nonces from all the remote files and encrypt and then hash all your local files...
Perhaps adding another flag to rclone check/rclone cryptcheck--maybe --file or something like that--to make it think both the arguments are files might be simplest?
That is essentially what I was proposing with the --crypted and --cryptkey flags above: you could read hashes from crypt remotes, and from the local file system as if encrypted by this crypt. This would work for single files.
This seems straightforward because it would essentially mean "take the full source and destination paths exactly as they are each written". That's a simple behavior to explain to users.
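A hypothetical invocation under that proposal--note that --file is a proposed flag, not something rclone actually has today:

```shell
# Hypothetical: compare one renamed local file against one remote
# file, treating both arguments as exact file paths:
rclone cryptcheck --file ~/movie.mp4 cryptdrive:somedir/movie2.mp4
```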
This would be a very expensive operation as it would read the nonces from all the remote files and encrypt and then hash all your local files...
Wouldn't it just read the nonces, and encrypt and hash a single local file over and over? Still expensive, but the same number of hash operations as checking full directories against each other. (A directory of 20 files checked against a remote of 20 files is 20 hash operations with cryptcheck; 1 local file checked against a directory of 20 files is 20 hash operations with checkfile.)
Either way, de-duping your rclone backups is probably something another utility can do, so I withdraw that example. I was just imagining what such a command might do if one ran it with a source file and a destination directory, instead of a file on each end...
I think it will get less use if it is implemented as a flag to hashsum.
I almost always run check or cryptcheck after a copy operation, just out of an abundance of caution.
Having a checkfile operation seems logical if we already have what could perhaps be considered a checkdir--and, accordingly, it gives users a good way to keep tabs on their data without having to think too much.
This is your baby, and development direction should be what makes sense to you, first and foremost.