Auto-Sync Across Machines

tl;dr: I want to pull from the remote if it’s newer than local before pushing, then keep pushing regularly

I went over the documentation but didn’t see a clear answer or native solution, so I decided to make an account to ask and explain my situation.

I plan to use rclone to sync 2+ machines. My initial thought was to have a crontab run sync to make local match the remote (a ‘fast-forward’), then push to the remote. The problem is the fast-forward step: since it runs automatically, any local changes will be overwritten, because local is always made to look like the remote. Reversing the order doesn’t help either, since then I’d be pushing old files.

The best workaround that came to mind is to only pull on wake-up/login and push every 15-30 minutes while working.
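For concreteness, roughly what I have in mind as a crontab (the remote name gdrive:Work, the paths and the schedule are just placeholders):

```bash
# Pull once at boot so local starts out matching the remote.
@reboot       rclone sync gdrive:Work /home/me/Work

# Push every 20 minutes while the machine is on.
*/20 * * * *  rclone sync /home/me/Work gdrive:Work
```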

Thoughts? Suggestions?

I deliberately haven’t tackled the 2-way sync problem for rclone. I’d like to one day, but I keep hoping someone who has some more ideas than me about how it should work will step up and do it for me!

I wonder if you could achieve what you want by keeping a version number. You could use rclone cat to fetch it and a bit of script to find the most recent number; then you’ll know which side is the master.
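Something like this minimal sketch, assuming you keep a VERSION file holding a counter inside the synced folder (the remote name and paths here are placeholders):

```bash
#!/usr/bin/env bash
set -e

LOCAL_DIR=/home/me/Work
REMOTE_DIR=gdrive:Work

# Read the version counter from each side (treat a missing file as 0).
local_ver=$(cat "$LOCAL_DIR/VERSION" 2>/dev/null || echo 0)
remote_ver=$(rclone cat "$REMOTE_DIR/VERSION" 2>/dev/null || echo 0)

if [ "$remote_ver" -gt "$local_ver" ]; then
    # Remote is the master: bring local up to date.
    rclone sync "$REMOTE_DIR" "$LOCAL_DIR"
else
    # Local is the master: bump the counter and push.
    echo $((local_ver + 1)) > "$LOCAL_DIR/VERSION"
    rclone sync "$LOCAL_DIR" "$REMOTE_DIR"
fi
```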

Thanks for the prompt reply. Your suggestion sounds about right; I’ll work on a personal solution and keep this post updated as things develop.


I wonder if you could use rclone mount + good old normal rsync for this…
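Something along these lines, perhaps (paths are placeholders, and rsync through a mount has to pull whole files over the network, so it probably won’t be fast on big folders):

```bash
# Mount the remote folder locally (needs FUSE).
rclone mount gdrive:Work ~/mnt/work --daemon

# Pull: take anything newer from the mount, but don't clobber newer local files.
rsync -au ~/mnt/work/ ~/Work/

# Push: send local changes back through the mount.
rsync -au ~/Work/ ~/mnt/work/

# Unmount when finished.
fusermount -u ~/mnt/work
```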

Have you come up with a satisfying solution to the problem?

If not, I’ll try and work something out tomorrow. I would just like to know which OS you’re using (although I guess Python - my main language - can be ported across devices).

What I’ve seen so far is the -u (--update) flag, which skips destination files that are newer than the corresponding source files. If I could pipe the --dry-run output to Python and extract the files that shouldn’t be copied, I could use their names in two commands (sketched below):

The first downloads the newer files, overwriting the old ones
The second uploads ALL the files, because after the first command your local machine is guaranteed to have the newest files it could have (we downloaded all the newer files in the first command, remember?)
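In rclone terms those two commands would look roughly like this (remote name and paths are placeholders):

```bash
# 1) Download anything newer from the cloud; -u / --update means local
#    files that are newer than their cloud copies are left alone.
rclone copy gdrive:Work ~/Work --update

# 2) Upload everything back; local now holds the newest version of each
#    file, so this simply brings the cloud up to date.
rclone copy ~/Work gdrive:Work
```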

This might work just like you described it in your post. Right now I’m busy, but I should manage to get to it in about 14-15 hours. I’ll read through the man pages to see which command options might help do this the best way possible.

Cheers!

EDIT: If you run Linux, you could put the individual commands into a simple crontab job. But I’m getting ahead of myself; I haven’t even looked at it in depth yet, still procrastinating.

I’ve come up with a solution involving cron jobs (so for Linux users only, but what do I know, perhaps there are timed jobs in Windows as well).

Now that I’m thinking about it, it’s all a simple shell script away.

The script would trigger on launch of the computer (or whenever you want). Using the -u argument, we sync/copy all files from the cloud to local storage, skipping any files that are newer in our local copy. After this command finishes, we push everything to the cloud again by swapping source and destination, again with -u (although it’s not strictly needed there). This means we first get all the up-to-date files onto our PC and then clone the updated directory back to the cloud.

The magic happens when you download to local, because your computer then holds the most up-to-date version of everything, keeping what you did on it, before pushing it all to the cloud again. The -u flag is not really necessary in the second step, as we will already have downloaded all the newer stuff in the first command, so the computer always uploads an up-to-date version of the folder.
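Put together, it’s only a couple of lines; a minimal sketch, assuming the remote is called gdrive: and the folder is Work (I’m using copy rather than sync here so nothing ever gets deleted):

```bash
#!/usr/bin/env bash
# Hypothetical two-way-ish sync script: pull newer files, then push everything.
set -e

LOCAL=/home/me/Work
REMOTE=gdrive:Work

# Step 1: pull. --update skips local files that are newer than the cloud
# copies, so nothing changed on this machine gets overwritten.
rclone copy "$REMOTE" "$LOCAL" --update

# Step 2: push. Local now holds the newest version of every file, so
# --update is optional here; nothing on the remote should be newer now.
rclone copy "$LOCAL" "$REMOTE" --update
```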

EXAMPLE TIME:

Let’s say I have a folder called Work on my PC (let’s call it A). I clone the folder to the cloud (C) and download from C to my laptop (B). I then set up the system I outlined above.

What happens when I change 1 file in the Work folder on my laptop and trigger the two-way script?

First, my computer looks for any newer files in the cloud to download, but thanks to -u it doesn’t overwrite my updated file. THEN, it takes the whole Work folder and uploads it to the cloud.

If I then go to computer A, change another file and trigger the two-way sync, the computer first downloads the file B updated, doesn’t overwrite the file A changed, and then uploads the up-to-date folder to the cloud again.

Regardless of your OS, you should probably set up a startup script (Windows has something like that; create a .bat file), or use a script that triggers on init. This way, if your computer is connected to the internet on boot, you can rest assured that once you turn it on, it will have the latest and greatest of your files downloaded automatically.
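On Linux that could just be a crontab pointing at the script above (the script name here is made up):

```bash
# Run the two-way sync script once at boot, then every 20 minutes.
@reboot       /home/me/bin/rclone-two-way.sh
*/20 * * * *  /home/me/bin/rclone-two-way.sh
```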

There is a fear in my head that this could lead to a great cock-up, but I don’t see where my logic is flawed. If you come up with a situation where the system won’t work and would potentially damage the files in the cloud, please let me know.