so not thinking it through, I figured an easy way to move a few TBs of stuff to gdrive would be to call sync once a day until everything was copied.
seeing: sync is taking a while to start moving anything, but eh, it's probably doing some calculations to determine what to send
seeing: ah, sync is moving, and then dies when it hits limits
....
seeing: sync finished, why isn't it exiting?
thinking: oh no! what have I done!
realizing: this is google drive: I can restore from trash! (hint: switch to list view, it's easier on Firefox and less likely to hang it).
as an aside, you can supposedly restore files from the admin console too, but if it works, it's much slower than restoring them manually as a user.
learn from my mistakes: make sure you know that sync will delete.
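For anyone else about to try this, a minimal sketch of checking first (the remote name gdrive: and the paths are just placeholders):

```
# preview first: --dry-run shows what sync WOULD copy and delete, without touching anything
rclone sync /data/archive gdrive:archive --dry-run -v

# only run the real thing once the dry-run output looks right;
# sync makes the destination identical to the source, so extra files on gdrive: get deleted
rclone sync /data/archive gdrive:archive -v
```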
I made this mistake with rclone too. Luckily I didn't delete anything important because I was still testing rclone. To me, sync means bi-directional, so that if a file exists on the remote but not locally, it gets copied back to local; but as you mention, rclone sync makes the remote identical to the source, thus deleting the remote-only files. I am glad I know this should I ever need to restore data. I always use the 'copy' command now for my back-up jobs.
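In case it helps, a sketch of what that looks like (remote name and paths are made up):

```
# copy only adds and updates files on the destination; it never deletes anything there
rclone copy /data/photos gdrive:backup/photos -v
```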
well, I didn't lose any data (and also discovered that the supposed 30-day trash removal isn't true; nothing had been deleted from my trash since I opened the account!)
Also, --backup-dir (docs) and/or --suffix (docs) are great to use if you have things running automatically. This way, if you screw up, it is unwindable. It may not be an easy unwinding but it is possible.
I use --backup-dir extensively since I prefer non-interactive calls to rclone, such as from crontab, or just because I don't want to think about it.
If you go the --suffix route, see the admonition about filters.
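Roughly what a scheduled call can look like; the remote names, paths and times below are just examples, not a recipe:

```
# nightly sync; files that would be overwritten or deleted are moved to gdrive:backup
# (--backup-dir has to be on the same remote as the destination)
0 2 * * * /usr/bin/rclone sync /data gdrive:current --backup-dir gdrive:backup --log-file /var/log/rclone.log

# --suffix variant: overwritten/deleted files are renamed in place with .bak;
# the --exclude is the filter admonition: it stops later runs from deleting the .bak copies as stray files
0 3 * * * /usr/bin/rclone sync /data gdrive:current --suffix .bak --exclude "*.bak" --log-file /var/log/rclone.log
```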
you can add a timestamp to the --backup-dir path; then each time it runs, rclone creates a new folder and keeps each old version, a sort of forever forward incremental copy. A rough sketch of that pattern is below.
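Sketch of the timestamped variant as a small script (remote and paths invented for the example; a script avoids crontab's special handling of %):

```
#!/bin/bash
# each run gets its own dated backup dir, so old versions pile up instead of being overwritten
STAMP=$(date +%Y-%m-%d_%H%M)
rclone sync /data gdrive:current --backup-dir "gdrive:old/$STAMP" --log-file /var/log/rclone.log
```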