First of all, thank you NCW and team for building this awesome product.
I have a general question regarding backing up my files to an encrypted remote. I have been using sync with the --checksum, --backup-dir and --suffix $(date) flags, running in an infinite loop to back everything up. I have noticed that it runs through the backup and then, on the second pass, moves the files into the backup directory, in turn uploading all the files again.
My question: what is the best way to achieve this? I would like to back up everything in a given folder, have anything I delete moved into the backup directory, and have it run continuously in the background without re-uploading files that are already uploaded, so I don't have to do anything manually.
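For reference, this is roughly the loop I'm running (a minimal sketch; the paths, remote name, and one-hour interval are just my setup, and the space-free date format is my own choice to keep the suffix filename-safe):

```shell
#!/bin/sh
# Sketch of a continuous sync loop. /media/HD and unicrypt: match
# my setup; adjust for yours.
while true; do
  # Space-free timestamp so the suffix is safe inside filenames.
  SUFFIX="-$(date +%Y-%m-%d-%H%M%S)"
  rclone sync /media/HD unicrypt:/Backups/ \
    --backup-dir unicrypt:/old/ \
    --suffix "$SUFFIX" \
    --copy-links --syslog
  sleep 3600   # pause an hour between passes
done
```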
Currently I am using rclone 1.42 in an Ubuntu VM with NFS-mounted shares. I am using the unlimited Google Drive provided by my university. I do not want to store my files unencrypted.
I was using: rclone -c -v sync /media/HD unicrypt:/Backups/ --backup-dir unicrypt:/old/ --suffix="-date-$(date)" --copy-links --syslog
However, it appears the VM must have lost connectivity with the NFS share and ended up copying files to the --backup-dir multiple times, as I can see files there which were never copied to unicrypt:/Backups/. It appears I need to investigate this NFS share issue.
I only discovered this after running:
2018/08/10 21:14:46 DEBUG : rclone: Version "v1.42" starting with parameters ["rclone" "-c" "-vv" "sync" "/media/HD" "unicrypt:/Backups/" "--dry-run" "--backup-dir" "unicrypt:/old/" "--suffix=-date-Fri 10 Aug 21:14:46 AEST 2018" "--copy-links" "--log-file=/media/HD/Stuff/MediaServer/rcloneunicrypt-Testing.log"]
There I could see which files were unchanged and which were going to be copied, and when I checked in the rclone folder, voila, the files weren't there.
Apologies for the trouble, as I should have checked with the -vv and --dry-run flags before posting.
OK, I have confirmed that somehow rclone is not taking the checksum, file dates, and size into consideration when syncing to an encrypted remote.
I have a script that runs an infinite loop syncing my NAS to Google Drive. I had the -vv flag set and found that it is copying files into the backup directory even though the files haven't changed.
2018-08-14T22:14:17+10:00 rclone-xi rclone[3441]: Copy/Photos/Webdav/Photos/20171124_110210.jpg: Moved into backup dir
Action = Moved FileName = Copy/Photos/Webdav/Photos/20171124_110210.jpg host = rclone source = /var/log/rclone-xi/rclone-xi-rclone.log sourcetype = rclone
2018-08-14T22:14:17+10:00 rclone-xi rclone[3441]: Copy/Photos/Webdav/Photos/20171124_110210.jpg: Moved (server side)
Action = Moved FileName = Copy/Photos/Webdav/Photos/20171124_110210.jpg host = rclone source = /var/log/rclone-xi/rclone-xi-rclone.log sourcetype = rclone
These are just a couple of examples out of thousands. Please see the attached screenshot from Splunk.
Adding more to this: should it matter that I originally uploaded with version 1.38 and am now syncing with 1.42? Did 1.38 and 1.42 do their checks differently? I can't really find another reason. I believe this has to do with a change in the --checksum behaviour.
In the above example, PerungoGdrive is my Google Drive crypt folder and NAS is the NAS being synced. The file exists on the NAS but not in the Backups folder; instead it has been moved to the backup directory.
The md5 is exactly the same. Not sure why rclone moved it to the backup directory.
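In case it helps anyone else debugging the same thing: since a plain --checksum comparison can't see through the encryption layer, `rclone cryptcheck` can compare an unencrypted source against a crypt remote directly. A sketch using my paths and remote name (yours will differ):

```shell
# Verify files on the crypt remote against the plain local source;
# /media/HD is the local source, unicrypt: my crypt remote.
rclone cryptcheck /media/HD unicrypt:/Backups/ -v
```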