I'd like to suggest that rclone have a "delete everything" protection. (I know there is already --max-delete, but it isn't very flexible because it takes a static value as an option, so it doesn't distinguish between a sync with many files and one with very few.)
I'll give you an example: I was syncing from a Google Drive, but the owner suddenly moved it to another account, and because the original drive was now empty, rclone wiped out everything locally.
It could be as simple as: if the remote root folder doesn't exist or is empty, don't touch the local files.
Feel free to comment on the existing issue for this:
Thanks, just posted my suggestion there.
I'm a bit confused on your use case.
Sync never touches the source as it only hits the destination.
Do you mean if you have some loop going on?
Rclone doesn't wipe out things locally so I'm not sure what you mean.
My destination is local: I'm syncing from Google Drive (source) to a local drive (destination). Apologies for not making this clear.
I have corrected my suggestion to:
IF the source root folder doesn't exist OR is empty, DON'T touch the destination files.
I handle those types of situations with a check before I do a sync, and I normally don't depend on cp/mv/rsync/rclone to validate the logic of what I want to do.
You can put some logic in whatever you are automating to check and see if it exists prior by checking return codes.
```
felix@gemini:~$ rclone ls GD:blah
2021/01/05 07:28:20 Failed to ls: directory not found
felix@gemini:~$ echo $?
3
felix@gemini:~$ rclone lsf GD:
felix@gemini:~$ echo $?
0
```
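For example, a wrapper along those lines could refuse to sync at all when the source is missing or empty. This is only a sketch: `safe_sync` and the temp-file handling are my own invention; the real pieces are `rclone lsf`, `rclone sync`, and the fact that rclone exits non-zero (documented exit code 3) when a directory is not found.

```shell
# Hypothetical pre-sync guard: list the source first, and only run the
# sync if the source both exists and is non-empty.
safe_sync() {
    src="$1"
    dst="$2"
    listing=$(mktemp)

    # Refuse if the source directory does not exist (rclone exits non-zero).
    if ! rclone lsf "$src" > "$listing" 2>/dev/null; then
        echo "source $src not found - refusing to sync" >&2
        rm -f "$listing"
        return 1
    fi

    # Refuse if the source exists but contains nothing.
    if [ ! -s "$listing" ]; then
        echo "source $src is empty - refusing to sync" >&2
        rm -f "$listing"
        return 1
    fi

    rm -f "$listing"
    rclone sync "$src" "$dst"
}
```

Something like `safe_sync GD:photos /home/felix/photos` would then cover the "owner emptied the drive" case from the original post without any new rclone feature.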
I think every Unix/Linux admin in the world has accidentally run `rm -rf *` in the wrong place before.
I try to be very careful when I am doing anything destructive and put checks in there to validate before it continues.
I could wrap some higher-level logic around it, and that's definitely what users who treat rclone as a backup tool should do, but I think this is a basic value-added feature that anyone could take advantage of.
I'm a registered user of Syncovery, which has this feature, plus another one I really miss in rclone: max-delete by percentage. I know this has already been discussed several times here on the forum, but I find a max-delete with a static number of files inflexible.
I think that's probably the challenge, as many feel rclone is not a backup tool by any means. I personally would not use it as a backup tool, albeit some others do.
To me, it's more like rsync: it can be used to back things up, but that's not its primary purpose.
I use rsnapshot, which wraps rsync, for snapshotting on my Linux machines, and duplicati to move the snapshots offsite.
I do like the percentage abort though, as that feels like a good way to catch a large issue; maybe it could abort if more than 75% or something.
I think the challenge is breaking existing functionality and that's the discussion.
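That percentage abort could be approximated today from the outside. A rough sketch, with made-up names (`guard_percent`, the 75% default) and a deliberately crude heuristic: compare file counts on both sides with `rclone lsf` and treat the difference as the worst-case number of deletions.

```shell
# Hypothetical percentage-based max-delete: abort when a sync would
# remove more than LIMIT percent of what is already at the destination.
guard_percent() {
    src="$1"; dst="$2"; limit="${3:-75}"

    src_count=$(rclone lsf -R --files-only "$src" | wc -l)
    dst_count=$(rclone lsf -R --files-only "$dst" | wc -l)

    # Empty destination: nothing to protect.
    [ "$dst_count" -eq 0 ] && return 0

    # Worst case, every destination file absent from the source is deleted.
    deletes=$((dst_count - src_count))
    [ "$deletes" -lt 0 ] && deletes=0
    percent=$((deletes * 100 / dst_count))

    if [ "$percent" -gt "$limit" ]; then
        echo "sync would delete ~$percent% of $dst - aborting" >&2
        return 1
    fi
}
```

Running `guard_percent GD: /local/mirror 75 && rclone sync GD: /local/mirror` would have caught the emptied-drive scenario, at the cost of two extra listings per run.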
I agree 100%.
Having a snapshot or backup of that local system would be the solution; I never use rclone sync without one.
Having an optional flag would not help, as the user would have to remember to use that flag.
If the user did not remember to keep a backup/snapshot of that local system and did not remember to look at the local folder before the sync, that flag is not going to help either.
I'm not using rclone as a backup tool, and as I mentioned, I don't believe it is a backup tool.
> as the user would have to remember to use that flag
That's true for any flag... in my case, all my rclone executions are inside batch files, so I don't "forget" to use flags.
I keep meaning to make an `rclone backup` command which wraps `rclone sync` with `--backup-dir` and some opinionated defaults like `--track-renames`, to make a backup tool.
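Until such a command exists, that wrapper can be sketched in a few lines of shell. The function name, the `current`/`archive` layout, and the timestamp format below are my own choices; `--backup-dir` and `--track-renames` are the real rclone flags mentioned above.

```shell
# Hypothetical "rclone backup" wrapper: sync into a "current" directory
# and move anything the sync would delete or overwrite into a dated
# archive directory on the same remote, instead of losing it.
rclone_backup() {
    src="$1"; dst="$2"
    stamp=$(date +%Y-%m-%d_%H%M%S)

    rclone sync "$src" "$dst/current" \
        --backup-dir "$dst/archive/$stamp" \
        --track-renames
}
```

So `rclone_backup GD:docs GD:backup` would keep a live mirror in `GD:backup/current` and park every replaced or deleted file under `GD:backup/archive/<timestamp>`.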
This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.