It’s been a while since I last posted. I’m in the process of refreshing my local infrastructure, and one of the things I did was revisit the scripts I run for offsite backups.
Right to the question:
One of the scripts performs a sync from Google Drive to (the same) Google Drive, using two different remotes with different API credentials.
Would it make more sense to add an extra step and speed things up a little? I was thinking:
- Run `rclone copy myremote:/folder1 myremote:/folder2`
- Run `rclone sync myremote:/folder1 myremote:/folder2 --backup-dir myremote:/folder3`
This would normally run daily and move roughly 10–50 GB. I don’t care about saving bandwidth; I care more about limiting the operations/uploads/downloads seen on the Google Drive side, to avoid getting banned for the day.
As of right now, with just the sync, I’m downloading these files from folder1 and uploading them back to folder2. My guess is that if I add a copy first, that copy would be performed server-side instead, resulting in a quicker job. Or am I wrong?
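For reference, here is a sketch of the job I have in mind. The folder names are placeholders from my setup, and I'm assuming the `--drive-server-side-across-configs` flag is what tells the Drive backend to attempt server-side copies even when the source and destination are configured as different remotes (within a single remote, Drive should already copy server-side by default):

```shell
#!/bin/sh
# Hypothetical daily backup job -- remote and folder names are placeholders.

# Step 1: copy new/changed files; with the flag below, the Drive backend
# is asked to perform the copies server-side instead of download+upload.
rclone copy myremote:/folder1 myremote:/folder2 \
  --drive-server-side-across-configs

# Step 2: full sync to also propagate deletions/renames; files that would
# be overwritten or deleted in folder2 are moved into folder3 instead.
rclone sync myremote:/folder1 myremote:/folder2 \
  --backup-dir myremote:/folder3 \
  --drive-server-side-across-configs
```

Whether this actually cuts down on the quota-relevant operations as much as I hope is exactly what I'm asking about.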