Bisync max-depth flag incorrectly deletes subdirectories

What is the problem you are having with rclone?

rclone bisync unexpectedly tries to delete all subdirectories when the --max-depth flag is used.

When using rclone sync with a max depth, no data is deleted; rclone bisync, however, attempts to delete all child directories and their contents.

I feel like the expected behavior here is that bisync should just sync the files within the max depth and ignore anything deeper, not attempt to delete everything that falls outside it.

I may just have missed some caveat with bisync, but it'd be really nice to have this functionality, since a full bisync of my entire backup takes about 2 hours.

None of the bug reports I found when searching for max depth seem to relate specifically to bisync: https://forum.rclone.org/search?q=max-depth%20%23bug

Run the command 'rclone version' and share the full output of the command.

rclone v1.67.0
- os/version: debian bookworm/sid (64 bit)
- os/kernel: 6.8.0-76060800daily20240311-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.22.4
- go/linking: static
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Proton drive.

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone bisync ~/Storage/temp/ protonDrive:/temptest/ --max-depth 1 --dry-run --create-empty-src-dirs --compare size,modtime,checksum --slow-hash-sync-only --resilient -MvP --drive-skip-gdocs --fix-case

The rclone config contents with secrets removed.

cat ~/.config/rclone/rclone.conf 
[protonDrive]
type = protondrive
username = <user>@<site>
password = <pass>
2fa = <2fa>
client_uid = <uid>
client_access_token = <tok>
client_refresh_token = <tok>
client_salted_key_pass = <pass>

A log from the command with the -vv flag

For posterity, here is the directory structure I'm using to test:

> find .
./testFile.txt
./testSubDir
./testSubDir/testSubFile.txt
./testSubDir/testSubSubDir
./testSubDir/testSubSubDir/testSubSubFile.txt

First the output from bisync:

rclone bisync ~/Storage/temp/ protonDrive:/temptest/ --max-depth 1 --dry-run --create-empty-src-dirs --compare size,modtime,checksum --slow-hash-sync-only --resilient -vv -MP --drive-skip-gdocs --fix-case
2024/06/17 22:21:23 DEBUG : rclone: Version "v1.67.0" starting with parameters ["rclone" "bisync" "/home/<user>/Storage/temp/" "protonDrive:/temptest/" "--max-depth" "1" "--dry-run" "--create-empty-src-dirs" "--compare" "size,modtime,checksum" "--slow-hash-sync-only" "--resilient" "-vv" "-MP" "--drive-skip-gdocs" "--fix-case"]
2024/06/17 22:21:23 DEBUG : Creating backend with remote "/home/<user>/Storage/temp/"
2024/06/17 22:21:23 DEBUG : Using config file from "/home/<user>/.config/rclone/rclone.conf"
2024/06/17 22:21:23 DEBUG : fs cache: renaming cache item "/home/<user>/Storage/temp/" to be canonical "/home/<user>/Storage/temp"
2024/06/17 22:21:23 DEBUG : Creating backend with remote "protonDrive:/temptest/"
2024/06/17 22:21:23 DEBUG : proton drive root link ID 'temptest': Has cached credentials
2024/06/17 22:21:26 DEBUG : proton drive root link ID 'temptest': Used cached credential to initialize the ProtonDrive API
2024/06/17 22:21:27 DEBUG : fs cache: renaming cache item "protonDrive:/temptest/" to be canonical "protonDrive:temptest"
2024/06/17 22:21:27 NOTICE: bisync is IN BETA. Don't use in production!
2024/06/17 22:21:27 INFO  : Slow hash detected on Path1. Will ignore checksum due to slow-hash settings
2024/06/17 22:21:27 NOTICE: proton drive root link ID 'temptest': will use sha1 for same-side diffs on Path2 only
2024/06/17 22:21:27 INFO  : Bisyncing with Comparison Settings: 
{
	"Modtime": true,
	"Size": true,
	"Checksum": true,
	"HashType1": 0,
	"HashType2": 2,
	"NoSlowHash": false,
	"SlowHashSyncOnly": true,
	"SlowHashDetected": true,
	"DownloadHash": false
}
2024/06/17 22:21:27 INFO  : Synching Path1 "/home/<user>/Storage/temp/" with Path2 "protonDrive:temptest/"
2024/06/17 22:21:27 DEBUG : : updated backup-dir for Path1
2024/06/17 22:21:27 DEBUG : : updated backup-dir for Path2
2024/06/17 22:21:27 INFO  : Building Path1 and Path2 listings
2024/06/17 22:21:27 DEBUG : &{0xc0000fb040 0xc0006b4420 false false false /home/<user>/.cache/rclone/bisync/home_<user>_Storage_temp..protonDrive_temptest /home/<user>/.cache/rclone/bisync /home/<user>/.cache/rclone/bisync/home_<user>_Storage_temp..protonDrive_temptest.path1.lst-dry /home/<user>/.cache/rclone/bisync/home_<user>_Storage_temp..protonDrive_temptest.path2.lst-dry /home/<user>/.cache/rclone/bisync/home_<user>_Storage_temp..protonDrive_temptest.path1.lst-dry-new /home/<user>/.cache/rclone/bisync/home_<user>_Storage_temp..protonDrive_temptest.path2.lst-dry-new map[] 0xc00056fd40 {{}} {{}} false false <nil> <nil>   map[] false}: starting to march!
2024/06/17 22:21:30 DEBUG : testFile.txt: both path1 and path2
2024/06/17 22:21:30 DEBUG : testFile.txt: is Object
2024/06/17 22:21:30 DEBUG : testSubDir: both path1 and path2
2024/06/17 22:21:30 DEBUG : testSubDir: is Dir
2024/06/17 22:21:30 DEBUG : &{0xc0000fb040 0xc0006b4420 false false false /home/<user>/.cache/rclone/bisync/home_<user>_Storage_temp..protonDrive_temptest /home/<user>/.cache/rclone/bisync /home/<user>/.cache/rclone/bisync/home_<user>_Storage_temp..protonDrive_temptest.path1.lst-dry /home/<user>/.cache/rclone/bisync/home_<user>_Storage_temp..protonDrive_temptest.path2.lst-dry /home/<user>/.cache/rclone/bisync/home_<user>_Storage_temp..protonDrive_temptest.path1.lst-dry-new /home/<user>/.cache/rclone/bisync/home_<user>_Storage_temp..protonDrive_temptest.path2.lst-dry-new map[] 0xc00056fd40 {{}} {{}} false false <nil> <nil>   map[] false}: march completed. err: <nil>
2024/06/17 22:21:30 INFO  : Path1 checking for diffs
2024/06/17 22:21:30 DEBUG : 2024-06-18 04:56:04.3555259 +0000 UTC: modification time the same (differ by 0s, within tolerance 1ns)
2024/06/17 22:21:30 NOTICE: - Path1    File was deleted          - testSubDir/testSubFile.txt
2024/06/17 22:21:30 NOTICE: - Path1    File was deleted          - testSubDir/testSubSubDir
2024/06/17 22:21:30 NOTICE: - Path1    File was deleted          - testSubDir/testSubSubDir/testSubSubFile.txt
2024/06/17 22:21:30 INFO  : Path1:    3 changes:    0 new,    0 modified,    3 deleted
2024/06/17 22:21:30 INFO  : Path2 checking for diffs
2024/06/17 22:21:30 DEBUG : 2024-06-18 04:56:04 +0000 UTC: modification time the same (differ by 0s, within tolerance 1s)
2024/06/17 22:21:30 NOTICE: - Path2    File was deleted          - testSubDir/testSubFile.txt
2024/06/17 22:21:30 NOTICE: - Path2    File was deleted          - testSubDir/testSubSubDir
2024/06/17 22:21:30 NOTICE: - Path2    File was deleted          - testSubDir/testSubSubDir/testSubSubFile.txt
2024/06/17 22:21:30 INFO  : Path2:    3 changes:    0 new,    0 modified,    3 deleted
2024/06/17 22:21:30 ERROR : Safety abort: too many deletes (>50%, 3 of 5) on Path1 "/home/<user>/Storage/temp/". Run with --force if desired.
2024/06/17 22:21:30 NOTICE: Bisync aborted. Please try again.
Transferred:              0 B / 0 B, -, 0 B/s, ETA -
Errors:                 1 (retrying may help)
Checks:                 4 / 4, 100%
Elapsed time:         6.8s
2024/06/17 22:21:30 NOTICE: 
Transferred:   	          0 B / 0 B, -, 0 B/s, ETA -
Errors:                 1 (retrying may help)
Checks:                 4 / 4, 100%
Elapsed time:         6.8s

2024/06/17 22:21:30 DEBUG : 7 go routines active
2024/06/17 22:21:30 Failed to bisync: too many deletes

Output from bisync without a max depth specified:

rclone bisync ~/Storage/temp/ protonDrive:/temptest/ --dry-run --create-empty-src-dirs --compare size,modtime,checksum --slow-hash-sync-only --resilient -vv -MP --drive-skip-gdocs --fix-case 
2024/06/17 22:37:40 DEBUG : rclone: Version "v1.67.0" starting with parameters ["rclone" "bisync" "/home/<user>/Storage/temp/" "protonDrive:/temptest/" "--dry-run" "--create-empty-src-dirs" "--compare" "size,modtime,checksum" "--slow-hash-sync-only" "--resilient" "-vv" "-MP" "--drive-skip-gdocs" "--fix-case"]
2024/06/17 22:37:40 DEBUG : Creating backend with remote "/home/<user>/Storage/temp/"
2024/06/17 22:37:40 DEBUG : Using config file from "/home/<user>/.config/rclone/rclone.conf"
2024/06/17 22:37:40 DEBUG : fs cache: renaming cache item "/home/<user>/Storage/temp/" to be canonical "/home/<user>/Storage/temp"
2024/06/17 22:37:40 DEBUG : Creating backend with remote "protonDrive:/temptest/"
2024/06/17 22:37:40 DEBUG : proton drive root link ID 'temptest': Has cached credentials
2024/06/17 22:37:42 DEBUG : proton drive root link ID 'temptest': Used cached credential to initialize the ProtonDrive API
2024/06/17 22:37:44 DEBUG : fs cache: renaming cache item "protonDrive:/temptest/" to be canonical "protonDrive:temptest"
2024/06/17 22:37:44 NOTICE: bisync is IN BETA. Don't use in production!
2024/06/17 22:37:44 INFO  : Slow hash detected on Path1. Will ignore checksum due to slow-hash settings
2024/06/17 22:37:44 NOTICE: proton drive root link ID 'temptest': will use sha1 for same-side diffs on Path2 only
2024/06/17 22:37:44 INFO  : Bisyncing with Comparison Settings: 
{
	"Modtime": true,
	"Size": true,
	"Checksum": true,
	"HashType1": 0,
	"HashType2": 2,
	"NoSlowHash": false,
	"SlowHashSyncOnly": true,
	"SlowHashDetected": true,
	"DownloadHash": false
}
2024/06/17 22:37:44 INFO  : Synching Path1 "/home/<user>/Storage/temp/" with Path2 "protonDrive:temptest/"
2024/06/17 22:37:44 DEBUG : : updated backup-dir for Path1
2024/06/17 22:37:44 DEBUG : : updated backup-dir for Path2
2024/06/17 22:37:44 INFO  : Building Path1 and Path2 listings
2024/06/17 22:37:44 DEBUG : &{0xc0000f8f00 0xc0006722c0 false false false /home/<user>/.cache/rclone/bisync/home_<user>_Storage_temp..protonDrive_temptest /home/<user>/.cache/rclone/bisync /home/<user>/.cache/rclone/bisync/home_<user>_Storage_temp..protonDrive_temptest.path1.lst-dry /home/<user>/.cache/rclone/bisync/home_<user>_Storage_temp..protonDrive_temptest.path2.lst-dry /home/<user>/.cache/rclone/bisync/home_<user>_Storage_temp..protonDrive_temptest.path1.lst-dry-new /home/<user>/.cache/rclone/bisync/home_<user>_Storage_temp..protonDrive_temptest.path2.lst-dry-new map[] 0xc0005a9d40 {{}} {{}} false false <nil> <nil>   map[] false}: starting to march!
2024/06/17 22:37:46 DEBUG : testFile.txt: both path1 and path2
2024/06/17 22:37:46 DEBUG : testFile.txt: is Object
2024/06/17 22:37:46 DEBUG : testSubDir: both path1 and path2
2024/06/17 22:37:46 DEBUG : testSubDir: is Dir
2024/06/17 22:37:48 DEBUG : testSubDir/testSubFile.txt: both path1 and path2
2024/06/17 22:37:48 DEBUG : testSubDir/testSubFile.txt: is Object
2024/06/17 22:37:48 DEBUG : testSubDir/testSubSubDir: both path1 and path2
2024/06/17 22:37:48 DEBUG : testSubDir/testSubSubDir: is Dir
2024/06/17 22:37:50 DEBUG : testSubDir/testSubSubDir/testSubSubFile.txt: both path1 and path2
2024/06/17 22:37:50 DEBUG : testSubDir/testSubSubDir/testSubSubFile.txt: is Object
2024/06/17 22:37:50 DEBUG : &{0xc0000f8f00 0xc0006722c0 false false false /home/<user>/.cache/rclone/bisync/home_<user>_Storage_temp..protonDrive_temptest /home/<user>/.cache/rclone/bisync /home/<user>/.cache/rclone/bisync/home_<user>_Storage_temp..protonDrive_temptest.path1.lst-dry /home/<user>/.cache/rclone/bisync/home_<user>_Storage_temp..protonDrive_temptest.path2.lst-dry /home/<user>/.cache/rclone/bisync/home_<user>_Storage_temp..protonDrive_temptest.path1.lst-dry-new /home/<user>/.cache/rclone/bisync/home_<user>_Storage_temp..protonDrive_temptest.path2.lst-dry-new map[] 0xc0005a9d40 {{}} {{}} false false <nil> <nil>   map[] false}: march completed. err: <nil>
2024/06/17 22:37:50 INFO  : Path1 checking for diffs
2024/06/17 22:37:50 DEBUG : 2024-06-18 04:56:04.3555259 +0000 UTC: modification time the same (differ by 0s, within tolerance 1ns)
2024/06/17 22:37:50 DEBUG : 2024-06-18 04:56:14.0757677 +0000 UTC: modification time the same (differ by 0s, within tolerance 1ns)
2024/06/17 22:37:50 DEBUG : 2024-06-18 04:56:33.1337893 +0000 UTC: modification time the same (differ by 0s, within tolerance 1ns)
2024/06/17 22:37:50 INFO  : Path2 checking for diffs
2024/06/17 22:37:50 DEBUG : 2024-06-18 04:56:04 +0000 UTC: modification time the same (differ by 0s, within tolerance 1s)
2024/06/17 22:37:50 DEBUG : 2024-06-18 04:56:14 +0000 UTC: modification time the same (differ by 0s, within tolerance 1s)
2024/06/17 22:37:50 DEBUG : 2024-06-18 04:56:33 +0000 UTC: modification time the same (differ by 0s, within tolerance 1s)
2024/06/17 22:37:50 INFO  : No changes found
2024/06/17 22:37:50 INFO  : Updating listings
2024/06/17 22:37:50 INFO  : Bisync successful
Transferred:              0 B / 0 B, -, 0 B/s, ETA -
Checks:                10 / 10, 100%
Elapsed time:         9.4s
2024/06/17 22:37:50 NOTICE: 
Transferred:   	          0 B / 0 B, -, 0 B/s, ETA -
Checks:                10 / 10, 100%
Elapsed time:         9.4s

2024/06/17 22:37:50 DEBUG : 7 go routines active

For comparison, using sync with a max depth:

rclone sync ~/Storage/temp/ protonDrive:/temptest/ --dry-run --max-depth 1 --create-empty-src-dirs -MP -vv --drive-skip-gdocs --fix-case
2024/06/17 22:23:24 DEBUG : rclone: Version "v1.67.0" starting with parameters ["rclone" "sync" "/home/<user>/Storage/temp/" "protonDrive:/temptest/" "--dry-run" "--max-depth" "1" "--create-empty-src-dirs" "-MP" "-vv" "--drive-skip-gdocs" "--fix-case"]
2024/06/17 22:23:24 DEBUG : Creating backend with remote "/home/<user>/Storage/temp/"
2024/06/17 22:23:24 DEBUG : Using config file from "/home/<user>/.config/rclone/rclone.conf"
2024/06/17 22:23:24 DEBUG : fs cache: renaming cache item "/home/<user>/Storage/temp/" to be canonical "/home/<user>/Storage/temp"
2024/06/17 22:23:24 DEBUG : Creating backend with remote "protonDrive:/temptest/"
2024/06/17 22:23:24 DEBUG : proton drive root link ID 'temptest': Has cached credentials
2024/06/17 22:23:26 DEBUG : proton drive root link ID 'temptest': Used cached credential to initialize the ProtonDrive API
2024/06/17 22:23:27 DEBUG : fs cache: renaming cache item "protonDrive:/temptest/" to be canonical "protonDrive:temptest"
2024/06/17 22:23:29 DEBUG : proton drive root link ID 'temptest': Waiting for checks to finish
2024/06/17 22:23:29 DEBUG : testFile.txt: Size and modification time the same (differ by -355.5259ms, within tolerance 1s)
2024/06/17 22:23:29 DEBUG : testFile.txt: Unchanged skipping
2024/06/17 22:23:29 DEBUG : proton drive root link ID 'temptest': Waiting for transfers to finish
2024/06/17 22:23:29 DEBUG : Waiting for deletions to finish
2024/06/17 22:23:29 INFO  : There was nothing to transfer
Transferred:              0 B / 0 B, -, 0 B/s, ETA -
Checks:                 1 / 1, 100%
Elapsed time:         5.4s
2024/06/17 22:23:29 NOTICE: 
Transferred:   	          0 B / 0 B, -, 0 B/s, ETA -
Checks:                 1 / 1, 100%
Elapsed time:         5.4s

2024/06/17 22:23:29 DEBUG : 6 go routines active

A --resync is required when changing any filter settings, including --max-depth. From the docs:

If you make changes to your filters file then bisync requires a run with --resync. This is a safety feature, which prevents existing files on the Path1 and/or Path2 side from seeming to disappear from view (since they are excluded in the new listings), which would fool bisync into seeing them as deleted (as compared to the prior run listings), and then bisync would proceed to delete them for real.
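Following the docs quoted above, a minimal sketch of what the first run after adding --max-depth would look like (paths are the ones from this thread; the command is built and echoed rather than executed, and the extra flags from the original invocation are omitted for brevity):

```shell
#!/bin/sh
# After changing any filter setting (here: adding --max-depth), the first
# bisync run must use --resync so the new, shallower listings become the
# baseline instead of making the excluded subdirs look like deletions.
SRC="$HOME/Storage/temp/"
DST="protonDrive:/temptest/"
CMD="rclone bisync $SRC $DST --max-depth 1 --resync --dry-run -vv"
# Preview only; run the command itself (and drop --dry-run) for a real resync.
echo "$CMD"
```

Subsequent runs can then use --max-depth 1 without --resync, as long as the filter settings stay the same.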


Ah, thanks for the heads up. I guess I missed that part.

Interesting, so I guess what I want won’t actually work. I wanted to set up a script that monitors my drive directory with inotifywatch, outputs which subdirs changed, and keeps those directories in sync between local and remote instead of running a full bisync. But having to run with --resync every time would mean deletions would never actually occur.

Because of throttling (I assume) by proton drive, it takes about 2 hours to run a full bisync on ~300GB.

Actually, I just need to modify my thinking a bit. If I use each subdirectory as its own compartmentalized target, it should be good. Maybe. I need to test it, but assuming I iterate over the list of changed directories I get from the watcher instead of passing the file list as an rclone argument, I think that’ll get me the behavior I want.
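The per-subdirectory approach described above can be sketched roughly like this (the remote name matches this thread, but the watcher input, flags, and directory layout are all illustrative assumptions; the commands are echoed rather than executed):

```shell
#!/bin/sh
# Sketch: treat each changed subdirectory as its own bisync pair, so only
# the directories the watcher flags get synced, instead of a full bisync.
REMOTE="protonDrive:/temptest"

# build_cmd SUBDIR -> prints the bisync command for that subdir's own pair.
build_cmd() {
    subdir="$1"
    name=$(basename "$subdir")
    # Each subdir syncs against its own remote counterpart (hypothetical layout).
    echo "rclone bisync $subdir/ $REMOTE/$name/ --compare size,modtime --resilient"
}

# Changed dirs would come from the inotify watcher; hard-coded here as a stand-in.
CHANGED="/home/user/Storage/docs /home/user/Storage/photos"
for d in $CHANGED; do
    build_cmd "$d"    # preview; pipe to sh (or eval) for a real run
done
```

Each pair would still need its own initial --resync run first, which is presumably where the long one-time setup comes from.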

Thanks again for pointing out that bit I overlooked. I was specifically searching the docs for max depth related callouts and didn’t think about the resync flag.

When is proton coming out of beta?

welcome to the forum,

not going to happen any time soon, if ever.
rclone relies on a third party library, which has not been updated in nine months.
the rclone forum member, who wrote the code, has not visited the forum in two months.

much discussed in the forum
https://forum.rclone.org/t/rclone-blocked-by-proton/46203
https://forum.rclone.org/t/i-cant-connect-to-proton-drive/45904

proton is doing a great job keeping users away from its drive offering these days

I recently saw an article that proton users might find interesting:

yeah, i have a paid account, and with protonmail, cannot create more than three folders, unless i upgrade to a more expensive paid account.

oh no, here comes @kapitainsky :wink:


Yeah, we are aware. For this to truly work, they would need to break up their services into separate organizations, which I doubt they will do.

I feel like you're more likely to get a native drive client for Linux before rclone support comes out of beta. You'd also probably have to hope proton is willing to negotiate a bit on how much access they allow rclone to drives.

The solution I've hacked together for maintaining a native-app-ish experience works for me because I'm not usually changing a ton of files at a time. I can write up what I've done once I've tested it to the point where I'm satisfied, but for now it seems to work.

It took quite a long time to do a full resync of every individual folder (around 19 hours of runtime for me), but now that they're all set up, running a bisync on any individual directory I see has changed should be relatively fast.

Making sure everything is up-to-date would probably take about as long as a full resync, but I'm not likely to need that very often since this is my main machine.

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.