Bisync and backup-dir

I like to sync some of my files to and from Google Drive with bisync. However, to prevent accidental data loss, I also like to use --backup-dir with timestamped folders, which I want to be on the remote (rclone bisync folder remote:storage/folder --backup-dir remote:backups). This works as expected when remote files are overwritten by local ones, but when local files are about to be overwritten, it throws

 Bisync critical error: parameter to --backup-dir has to be on the same remote as destination

I have two questions:

  1. Why is there such a restriction in the first place? It would make sense, at least to me, to simply copy these files to the remote backup dir before overwriting them.
  2. Is there currently any way to automatically back up deleted files with bisync?

This is fixed in v1.66 :slightly_smiling_face:
Bisync: --backup-dir1 and --backup-dir2

It is currently available to try in the latest beta, and should be officially released very soon.

By the way, it is also possible to circumvent the "same remote" limitation by using a Combine remote: if it can't server-side copy, it will fall back to a normal copy. (But I don't think this should be necessary now, at least for Bisync.)

I have tried the latest beta, and it still gives me that error when I only provide a single --backup-dir. Could you explain a little more how one can use the Combine remote for this?

Right, this is as documented:

Because --backup-dir must be a non-overlapping path on the same remote,
Bisync has introduced new --backup-dir1 and --backup-dir2 flags to support
separate backup-dirs for Path1 and Path2 (bisyncing between different
remotes with --backup-dir would not otherwise be possible.) --backup-dir1
and --backup-dir2 can use different remotes from each other, but
--backup-dir1 must use the same remote as Path1, and --backup-dir2 must
use the same remote as Path2. Each backup directory must not overlap its
respective bisync Path without being excluded by a filter rule.

The standard --backup-dir will also work, if both paths use the same remote
(but note that deleted files from both paths would be mixed together in the
same dir). If either --backup-dir1 or --backup-dir2 is set, they will
override --backup-dir.

So if you are bisyncing paths on different remotes, you'd want to make a backup dir on each remote and then use --backup-dir1 and --backup-dir2. If they are on the same remote you can get away with just using one --backup-dir, but even then, I'm not sure you'd want to, as they would overwrite each other's changes.
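
Something like this, for example (gdrive: and onedrive: here are just placeholder remote names, and the timestamped dirs are only one way of laying it out, along the lines of what you described):

    # placeholder example -- adjust remote names and paths to your own setup
    STAMP=$(date +%Y-%m-%d_%H%M%S)
    rclone bisync gdrive:folder onedrive:folder \
        --backup-dir1 "gdrive:backups/$STAMP" \
        --backup-dir2 "onedrive:backups/$STAMP"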

Sure. Basically, the idea is that you set up a Combine remote that includes both Path1 and Path2 as upstreams, and then you run bisync combine:path1 combine:path2 --backup-dir combine:path3. Essentially, you are fooling rclone into thinking that you are syncing two paths on the same remote, and that server-side move should therefore be available (assuming all the upstreams support it individually). Rclone will then try to server-side-move the file into the backup dir, and when this fails (because it's not actually the same remote), it will fall back to moving the file by copying + deleting.
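
As a minimal sketch (gdrive:, onedrive:, and the folder names are placeholders, not anything from your actual setup), the rclone.conf entry would look something like:

    [combine]
    type = combine
    upstreams = path1=gdrive:folder path2=onedrive:folder path3=gdrive:backups

and then:

    rclone bisync combine:path1 combine:path2 --backup-dir combine:path3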

Again though, keep in mind that if you do it this way, files from both paths will end up mixed together in the same backup dir, and files backed up from one path could end up overwriting files backed up from the other path.

Thank you for your reply. I have tried this approach, but rclone complained for some unclear reason. Either way, this setup sounds a little complex to me, so I have settled on just moving the files from --backup-dir1 to the remote after the bisync in my script (they don't overwrite each other's changes, since the folders have timestamps).
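
In case it is useful to anyone else, the relevant part of my script looks roughly like this (paths simplified, and the exact layout of the backup folders is just my own choice):

    STAMP=$(date +%Y-%m-%d_%H%M%S)
    rclone bisync folder remote:storage/folder \
        --backup-dir1 "backups-local/$STAMP" \
        --backup-dir2 "remote:backups/$STAMP/remote"
    # afterwards, move the locally backed-up files to the remote as well
    if [ -d "backups-local/$STAMP" ]; then
        rclone move "backups-local/$STAMP" "remote:backups/$STAMP/local"
    fi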

Maybe a flag could be added to make rclone work with a single --backup-dir, by copying the files to the other remote when it is not the same one?

Sounds like a good idea to me. What this would require is allowing --backup-dir to fall back to regular move when server-side-move is not available. This would also apply to sync, copy, and move, not just bisync.

I'm not sure why the decision was made to limit --backup-dir to only server-side move... server-side is certainly the most efficient method, but I don't see why copy+delete shouldn't be available for users who don't mind the extra bandwidth. I would recommend opening an issue on GitHub to request this feature.

As an aside, another related change I'd love to see is for --suffix to accept dynamic date variables like bisync's --conflict-suffix can now do. It would be quite simple to just reuse some of that existing code.
