Rclone sync/copy not ignoring existing files on GDrive

Edit 2: Solved with --checksum flag

Edit: Now this is interesting. I ran an rclone check on the source and destination paths and only 12 differences came back, which is much more like it, but 96 hashes could not be checked?
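For reference, a check along these lines (using the same local path and crypt remote as my sync command further down) is what produced those numbers; this is a sketch, adjust paths for your own setup:

```shell
# Compare source and destination without copying anything.
# "linux/" and "googlecrypt:Linux/" are the paths from my sync command below.
rclone check linux/ googlecrypt:Linux/ -v
```

On a crypt remote the stored hashes are of the encrypted data, which I believe is why so many hashes "could not be checked".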

I think they are not being ignored because of the modification time, not the file size. I did some -vv logging and got this as an example: Modification times differ by 1640h48m31.484699464s: 2017-01-29 00:22:49.473300536 -0800 PST, 2017-04-07 17:11:20.958 +0000 UTC
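Sanity-checking that log line: the two timestamps really are about 1640 hours apart, so the skip logic is doing exactly what it should with modtime + size; it's the stored modtimes that don't match. A quick sketch with GNU date and awk, using the timestamps copied from the log line above:

```shell
# Parse both timestamps from the -vv log line into epoch seconds (GNU date).
local_ts=$(date -u -d "2017-01-29 00:22:49.473300536 -0800" +%s.%N)
remote_ts=$(date -u -d "2017-04-07 17:11:20.958 +0000" +%s.%N)

# Difference, formatted the same way rclone prints it.
awk -v a="$remote_ts" -v b="$local_ts" \
    'BEGIN { d = a - b; printf "%dh%dm%.3fs\n", d/3600, (d%3600)/60, d%60 }'
# prints 1640h48m31.485s
```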

Originally I was syncing a local folder via rclone crypt to ACD, and then syncing that folder from ACD to GDrive via cloud compute as a second backup. With the recent events I am now attempting to sync or copy that local folder directly to GDrive, but for whatever reason rclone is treating the data GDrive received via ACD as new data, even though it is the exact same data as on my local machine, and it tries to copy everything as new files instead of skipping the files already on GDrive.

I have cross-checked the data on GDrive and the only difference I see is that my folders show a size of 4096 bytes locally and 0 bytes on GDrive. However, I have tried syncing a single file with the exact same file size and still run into this issue. I also ran an rclone mount and compared file sizes, and everything is there.

In further testing I grabbed a single file down from GDrive to my local machine and attempted to copy it back; it was ignored and skipped, as it rightfully should be, because it already exists on GDrive. This leads me to believe the data was somehow changed during the original ACD > GDrive sync/copy, because rclone is seeing it as new data.

These are encrypted files, and this only happens on this one local machine; I have it working fine on some other VMs. I ran a few dry runs with rclone sync -v linux/ googlecrypt:Linux/ --transfers=1 --checkers=1 --stats=30s --dry-run and it's all coming back as Not copying as --dry-run

Errors: 0 Checks: 36 Transferred: 48 Elapsed time: 13.5s

But those 48 files are already there! I hope this makes sense; if anyone can help me out you'd save me a ton of time.

Cheers.

QNAP TS-451+ NAS Firmware 4.3.3.0127 Build 20170320
Linux 4.2.8 #1 SMP x86_64 GNU/Linux
RClone 1.36

Ok, answered my own question: I had to set the --checksum flag. Leaving this here in case anyone else runs into this issue. So I was right in thinking something changed from cloud to cloud :slight_smile: That was a good one!
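For anyone landing here, the fix was just adding --checksum to the same dry run as above (paths and remote name are from my own setup):

```shell
# Same dry run, but comparing by hash + size instead of modtime + size,
# so the files already uploaded via ACD are skipped.
rclone sync -v --checksum --transfers=1 --checkers=1 --stats=30s --dry-run \
    linux/ googlecrypt:Linux/
```

Note that on a crypt remote the plaintext hashes may not be available, in which case I believe --checksum effectively falls back to comparing by size only, which would also explain why the mismatched modtimes stop mattering.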

-c, --checksum
Normally rclone will look at modification time and size of files to see if they are equal. If you set this flag then rclone will check the file hash and size to determine if files are equal.

This is useful when the remote doesn’t support setting modified time and a more accurate sync is desired than just checking the file size.

This is very useful when transferring between remotes which store the same hash type on the object, eg Drive and Swift. For details of which remotes support which hash type see the table in the overview section.

Eg rclone --checksum sync s3:/bucket swift:/bucket would run much quicker than without the --checksum flag.

When using this flag, rclone won’t update mtimes of remote files if they are incorrect as it would normally.