Bisync should be considered experimental

I certainly agree with bisync being labeled as experimental.

Full disclosure - I wrote and maintained the rclonesync-V2 origin tool that bisync is a tight re-implementation of. rclonesync-V2 has been in wide use going on 5 years (since 2017), with 337 GitHub stars and certainly many more downloads.

My experience with rclonesync-V2, and more recently with bisync, says that the algorithm and safety features in bisync are pretty robust. If you are looking for a bidirectional sync utility, and understand the limitations, then I'd say give it a go. Start with an isolated directory tree and run with --verbose to view and understand all the transactions.
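For example, a first trial might look like this (paths are placeholders for an isolated test tree, and "remote:" stands in for whatever remote you have configured; the very first run must use --resync to establish the baseline):

```shell
# Initial run: --resync establishes the baseline listings for both paths.
# Assumes rclone is installed and "remote:" is a configured remote.
rclone bisync /home/user/testdir remote:testdir --resync --verbose

# Subsequent runs drop --resync; --verbose shows every transaction so you
# can watch exactly what bisync decides to copy, delete, or rename.
rclone bisync /home/user/testdir remote:testdir --verbose
```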

Some notable aspects of bisync that show the maturity of the tool:

  • An extensive test suite covering lots of corner cases - allowing robust validation after code changes. This test suite was one of the main drivers of rclonesync-V2 being used as the reference implementation of bisync.
  • Safety features in the bisync algorithm include:
    • A lock file prevents multiple simultaneous runs when a sync takes a while.
    • Handles change conflicts non-destructively by creating ..path1 and ..path2 file versions
    • --check-access uses RCLONE_TEST files to detect when a linked/mounted portion of the filesystem is offline
    • The --max-deletes check protects against one path being offline being misinterpreted as all of its files having been deleted, or against a large directory accidentally deleted on one side.
    • If something evil happens, bisync goes into a safe state that blocks damage by later runs, forcing user interaction and a --resync to recover
    • --check-sync checks that both filesystems / paths actually match as a final step in a run. The default file comparison mode is by checksum. **
    • Detects changes to the user's --filters-file, and forces a --resync
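A run that enables the safety features above might be sketched as follows. Note that in rclone's bisync the deletion guard is spelled --max-delete (a percentage); the paths, the 10% threshold, and the filters-file location are all illustrative, and RCLONE_TEST files must already exist in both trees for --check-access to pass:

```shell
# Illustrative guarded run combining bisync's safety features.
rclone bisync /home/user/sync remote:sync \
    --check-access \
    --max-delete 10 \
    --check-sync true \
    --filters-file /home/user/.rclone-filters \
    --verbose
```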

bisync has several notable limitations that should be seriously considered before use. Also read the Troubleshooting section, particularly for the --dry-run oddity and the lack of support for syncing Google Docs (& Photos).

I run several frequent syncs around my environment using these tools, and have for years. The problem I run into periodically comes from trying to sync files that are open and being written to by some other application, which causes bisync to error out and thus forces a --resync. I have not had data loss issues since implementing the --check-access feature back in 2017.

** One personal usage note: I run bisync with --ignore-checksum to greatly speed things up. In this usage, file changes and sync checks are detected by timestamp only, which is fine for my use (and reliable). If you want, you can periodically run a check-sync-only pass (without --ignore-checksum) to force a checksum check.
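That pattern might look roughly like this (paths are placeholders; in rclone's bisync the integrity-only mode is selected with --check-sync=only, which compares the path1 and path2 listings without transferring data):

```shell
# Fast routine run: detect changes by timestamp only.
rclone bisync /home/user/sync remote:sync --ignore-checksum

# Occasional integrity pass: only checks that both sides' listings match.
rclone bisync /home/user/sync remote:sync --check-sync=only
```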

So, in summary, I think this tool is pretty robust. It should work fine on most rclone-supported backends (that support timestamps). bisync's safety features for avoiding data loss have several years of usage testing.

But...
I certainly agree with bisync being labeled as experimental.
