I was previously using rclone copy but found it was retaining large duplicates I never wanted kept.
So, to reduce the size of my destination, I switched to rclone sync, i.e.:
rclone sync --log-file /CM/data-drive/logs/backup-offsite.log --config /CM/data-drive/cm_backups/offsite/rclone.conf /CM/data-drive data-drive:$bucket_id
However, when checking my destination (Backblaze B2), I expected the duplicates to be gone after the first run, since sync deletes from the destination whatever does not exist on the source.
Instead, it is behaving just as copy did: all the old files are still there, and it keeps adding new revisions of the same named file. In my case the virtual machine qcow2 file is 40 GB, so that results in many copies of a 40 GB file.
What sayeth the group?
sync doesn’t prune the b2 revisions. You’ll need to use
rclone cleanup for that - see the b2 docs.
Hello and thank you for the reply. It really helps that you are weighing in on things.
I am a little uncertain about the correct invocation of cleanup.
Can you confirm that the syntax below is correct?
rclone cleanup --log-file /path/to/cleanup-test.log --config /path/to/rclone.conf data-drive:$bucket_id
I have run it in my terminal but I see no feedback, even after adding --stats=1m, so I am concerned it is not doing anything.
Yes that looks right. You’ll need
-vv to see files being cleaned up.
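For reference, the same call with verbose output would look like this (the log/config paths are the placeholders from the earlier post, and $bucket_id is assumed to be set in the calling shell):

```shell
rclone cleanup -vv --log-file /path/to/cleanup-test.log --config /path/to/rclone.conf "data-drive:$bucket_id"
```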
Once again, grateful for the response. That solves it. Did a perfect job of cleaning up all the excess. Love it.
So: rclone sync, then cleanup whenever I wish.
That unintended but appreciated consequence of rclone sync NOT deleting the older revisions actually has a built-in advantage. If I run cleanup only every 2nd day (or less often), then even if ransomware hits the source and the encrypted files get backed up, we still retain a recent copy of the good data, rather than just syncing the encrypted files and deleting everything good.
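The sync-daily / cleanup-every-2nd-day idea could be sketched as a small wrapper script. This is a hypothetical sketch, not an official rclone recipe: the paths and the $bucket_id variable are taken from the commands earlier in the thread, and the even-day schedule is just one way to express "every 2nd day":

```shell
#!/bin/sh
# Sketch: sync every day, but prune old B2 revisions only on even days
# of the month, so at least one day's worth of pre-ransomware revisions
# survives a sync that uploads encrypted files.

CONF=/CM/data-drive/cm_backups/offsite/rclone.conf
LOG=/CM/data-drive/logs/backup-offsite.log

# Is the given day-of-month a cleanup day? Even days only.
# The leading zero is stripped so "08" is not parsed as octal.
cleanup_due() {
    day=${1#0}
    [ $(( day % 2 )) -eq 0 ]
}

rclone sync --log-file "$LOG" --config "$CONF" /CM/data-drive "data-drive:$bucket_id"

if cleanup_due "$(date +%d)"; then
    rclone cleanup -vv --log-file "$LOG" --config "$CONF" "data-drive:$bucket_id"
fi
```

Run from cron once a day; on odd days only the sync happens, so the previous revisions remain recoverable on B2 for at least a day.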