Again...remote folders wiped out, help!

Great testing effort! Your verbose postings helped me understand the issue and your thought process. To be used within reason, of course.

I am very surprised to see deletion of files that pass the check with the message “Unchanged skipping”.

This contradicts my understanding of correct rclone sync behavior, no matter the command flags or errors encountered during the sync.

Here is a simple example extracted from your log:

2021/08/21 00:48:22 DEBUG : Using config file from "N:\\rclone\\rclone.conf"
2021/08/21 00:48:22 INFO : Starting bandwidth limiter at 5.859MBytes/s
2021/08/21 00:48:22 DEBUG : rclone: Version "v1.55.1" starting with parameters ["N:\\rclone\\rclone.exe" "sync" "H:\\Anime" "Media:Anime test" "--backup-dir=Media:00 - Backup/Anime test/20210821_ 04822" "--fast-list" "--delete-during" "--checksum" "-vv" "--transfers" "4" "--checkers" "8" "--bwlimit" "6000" "--contimeout" "60s" "--timeout" "300s" "--retries" "5" "--low-level-retries" "10" "--rc" "--stats" "10s" "--stats-file-name-length" "150" "--exclude" "desktop.ini" "--links" "--ignore-size" "--log-file" "r:\\rclone scripts\\Anime test_20210821_ 04822.log" "--track-renames" "--track-renames-strategy" "modtime" "--check-first"]
2021/08/21 00:48:22 NOTICE: Serving remote control on http://localhost:5572/
2021/08/21 00:48:22 DEBUG : Creating backend with remote "H:\\Anime"
2021/08/21 00:48:22 DEBUG : local: detected overridden config - adding "{b6816}" suffix to name
2021/08/21 00:48:22 DEBUG : fs cache: renaming cache item "H:\\Anime" to be canonical "local{b6816}://?/H:/Anime"
2021/08/21 00:48:22 DEBUG : Creating backend with remote "Media:Anime test"
2021/08/21 00:48:23 DEBUG : Creating backend with remote "gdrive:01/fc72a6d2taars62jv7prm4jro8"
2021/08/21 00:48:23 INFO : Encrypted drive 'Media:Anime test': Running all checks before starting transfers
2021/08/21 00:48:23 DEBUG : Creating backend with remote "Media:00 - Backup/Anime test/20210821_ 04822"
2021/08/21 00:48:23 DEBUG : Creating backend with remote "gdrive:01/kmjdrgnqm9n6mc29qv1vjgr2ks/fc72a6d2taars62jv7prm4jro8/b7ahmdsg71echa31smlveavkbg"
2021/08/21 00:48:28 NOTICE: Encrypted drive 'Media:Anime test': --checksum is in use but the source and destination have no hashes in common; falling back to --size-only
2021/08/21 00:48:28 DEBUG : 7Seeds (2019)/NOTE CONVERSIONE.txt: Size of src and dst objects identical
2021/08/21 00:48:28 DEBUG : 7Seeds (2019)/NOTE CONVERSIONE.txt: Unchanged skipping
2021/08/21 00:48:28 DEBUG : Aggretsuko (2018)/~$Aggretsuko.xlsx: Size of src and dst objects identical
2021/08/21 00:48:28 DEBUG : Aggretsuko (2018)/~$Aggretsuko.xlsx: Unchanged skipping
…
2021/08/21 00:49:06 INFO : Aggretsuko (2018)/~$Aggretsuko.xlsx: Moved (server-side)
2021/08/21 00:49:06 INFO : Aggretsuko (2018)/~$Aggretsuko.xlsx: Moved into backup dir
…

I see similar deletions in your test without --backup-dir, so that isn’t the reason.

You didn’t see the issue when using --retries=1 in the very last test, but that could be a coincidence given the somewhat random nature of the issue. I cannot see any reason why it should influence the deletions that already happened in the first attempt.

This could be a serious issue in rclone, so I would like to find the root cause.

To do this we need to reduce complexity to a minimum of flags (and files). I therefore suggest you try to reproduce the issue using this bare-bones command:

rclone sync "H:/Anime" "Media:Anime test" --size-only --log-file "yourlogname" -vv

Please read the command carefully, verify that it cannot cause loss of important data, and use --dry-run first. You can add --bwlimit if needed in your environment.
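
For example, a first dry-run pass could look something like this (same paths as above; the 6000 limit is simply taken from your earlier runs, adjust or drop it as needed):

rclone sync "H:/Anime" "Media:Anime test" --size-only --dry-run --bwlimit 6000 --log-file "yourlogname" -vv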

You may have to execute the command several times to provoke the issue.
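
If it helps, you can automate the repeats with a small loop. This is only a sketch for an interactive Windows command prompt (the log name pattern is just a placeholder, and in a batch file you would write %%i instead of %i):

for /L %i in (1,1,5) do rclone sync "H:/Anime" "Media:Anime test" --size-only --log-file "test_run_%i.log" -vv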

If the issue doesn’t show up after 5 tries, then try again after adding --fast-list and --delete-during to the command line.
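
In that case the extended command would look something like this (again with a placeholder log name):

rclone sync "H:/Anime" "Media:Anime test" --size-only --fast-list --delete-during --log-file "yourlogname" -vv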

Feel free to give updates along the way, but there is no need to post logs unless one of them shows the issue.

You can stop the tests as soon as you have managed to reproduce the issue.