Unexpected deletions in gdrive

Howdy rclone world,

I recently set up rclone for the first time and used it to mount Google Drive on my local Ubuntu machine. The mounted directory contains a set of subdirectories holding git repos, and files in those repos have been disappearing unexpectedly.
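For reference, the setup was along these lines (the remote name and mount point here are illustrative, not my exact invocation):

mkdir -p ~/gdrive
rclone mount gdrive: ~/gdrive &
# rclone mount exposes the remote as a local filesystem; the git repos
# live in subdirectories under the mount point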
It is possible rclone was involved. I say this because the Google Drive UI is showing status messages about permanently deleting items; the attached screenshot shows the messages.

Note that at the times listed in these messages I was nowhere near my source code directories. My machine was on, but the only possible interaction with rclone or gdrive would have been some headless daemon or cron job.

I don’t mean to blame rclone but to look for clues.

Thanks in advance.

Here’s the log for a directory I haven’t visited since May 2:

[Screenshot from 2017-07-31 11-31-35]

Were you by chance using the command sync?

The same thing is happening to me. I’m not sure whether sabnzbd or rclone is doing it. I have it set up so that sab downloads locally and the result is then moved to my mount, which I believe is similar to a sync, though I’m not sure. Some files will appear in my drive and then disappear a couple of minutes later.
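If that post-processing step goes through rclone at all, I assume it is something like the following (a guess at the setup; the paths are made up):

rclone move ~/downloads/complete gdrive:media
# rclone move copies files to the destination and then deletes the
# local source once the transfer succeeds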

I was not using the command sync, no. It was Saturday night and my computer was in another room.

But it’s possible that a sync ran. I’m looking for evidence of that in the system logs but haven’t found any yet. Any idea where rclone would leave traces?

Unless you ran rclone with logging enabled, there won’t be any. How were you executing rclone: a script, cron?
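A log only exists if you asked for one at run time, along these lines (paths illustrative):

rclone sync -v --log-file=/home/user/rclone.log /local/dir remote:dir
# -v (or -vv) raises verbosity; --log-file writes the output to a file
# instead of stderr, which is what you would grep after the fact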

You’ll see deletes even on a copy if the source files changed, but you shouldn’t lose data, because by default the old versions are deleted only after the new ones have been copied. The deletes will only occur on the target, though.
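For what it’s worth, sync also exposes flags to control when those deletes happen (the flags are real; the default behavior has varied across versions, so check the docs for yours):

rclone sync --delete-after /local/dir remote:dir   # delete at dest only after new copies succeed
rclone sync --delete-before /local/dir remote:dir  # delete first; frees space but riskier on failure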

What command did you run?

I used rclone the day before via direct invocation.

This is my entire command history with rclone:

lucasgonze@2017-ubuntu:~/src/cuetagger.com$ history | grep rclone
  166  rclone config
  168  rclone -h
  169  rclone -h | less
  170  rclone ls
  171  rclone ls 1
  172  rclone 1 ls
  173  man rclone
  174  rclone ls remote:
  175  rclone ls remote:1
  176  rclone ls remote
  177  rclone ls remote:
  178  rclone ls gdrive:
  179  rclone ls 1:
  180  rclone -h | less
  181  rclone listremotes
  182  rclone config
  184  rclone c
  185  rclone listremotes
  186  rclone ls gdrive:
  189  rclone ls gdrive:"myproject/src"
  194  rclone sync gdrive:"myproject/src"
  195  rclone sync gdrive:"myproject/src" | less
  196  man rclone
  197  rclone sync . gdrive:"myproject/src"
  212  rclone -h | less
  214  rclone -v sync gclones/ "gdrive:"myproject/src"
  215  rclone sync -v gclones/ "gdrive:"myproject/src"
  216  rclone sync -v gclones/ "gdrive:myproject/src"
  217  rclone sync -v gclones/ "gdrive:myproject/src/legacy-v4-before-refactoring"
  232  history | grep rclone

An important detail is that all files under myproject/src have been deleted, but no directories, which is consistent with a file-level sync.

Hypothesis: this command nuked everything in gdrive:myproject/src:

rclone sync -v gclones/ "gdrive:myproject/src"
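A safe way to test this hypothesis after the fact is a dry run from an empty directory against a scratch path (the paths here are hypothetical):

mkdir -p /tmp/empty-src
rclone sync --dry-run /tmp/empty-src gdrive:scratch
# with an empty source, sync mirrors "nothing" onto the destination;
# --dry-run prints each delete it would perform without doing it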

So, yeah.

The right behavior would be to confirm such a destructive action, and to offer a -f flag to skip the confirmation.

Moving discussion to https://github.com/ncw/rclone/issues/1574

You’ve asked it to sync from gclones to gdrive:myproject/src. If gclones was empty when you ran it, it will delete everything at the destination; that is by design. A ‘-f’ option doesn’t really make sense, as you can just as easily run with --dry-run if you want to test first. Alternatively, if you don’t want the deletes propagated, you can use the ‘copy’ command rather than the ‘sync’ command.
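Concretely, either of these would have been safe (same paths as in your history):

rclone sync --dry-run gclones/ "gdrive:myproject/src"   # report what would change, touch nothing
rclone copy gclones/ "gdrive:myproject/src"             # transfer new/changed files, never delete at the destination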

Yes, gclones was empty at run time. If you read through the bash history you will see that I was just starting with rclone: hitting the help pages, doing initial config. Within about a minute of first touching rclone I had destroyed my gdrive contents irrevocably. My learning process was entirely ordinary.

The difference between a -f option and --dry-run is opt-in versus opt-out safety. This is the same pattern as “rm -f”.

There is a good reason why rm has that flag: the potential data loss is extreme.

I’m fully sympathetic that you lost data here, but even ‘rm’ is the same (as are dd, rsync, cp, etc.). If you run ‘rm file’, it deletes the file. If you run ‘rm -r *’, it recursively deletes things. The ‘-f’ just ignores errors. Even with ‘rm’ you would need to pass ‘-i’ to get a prompted removal. If you ran ‘dd if=asd of=/dev/sda’, that would clobber your sda disk. Most Linux tools do not use an opt-in approach.

That aside, rclone is a powerful tool that can do as much harm as good, and like any such tool it is extremely important to understand it before using it. Sorry you lost your data.