Need tutorial for synchronisation between Ubuntu and OneDrive

Hello everyone!
I'm a beginner rclone user and I'm looking for the best way to do something that is probably simple for you experts, but I don't know where to start.
What I would like is to keep the contents of a local folder synchronised with an MS OneDrive folder.
The folder in question contains a considerable number of files (documents, photos, short videos) that are not changed often.

Here is my rclone configuration:

rclone v1.63.1

  • os/version: ubuntu 22.04 (64 bit)
  • os/kernel: 6.2.16-10-pve (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.20.6
  • go/linking: static
  • go/tags: none

This Ubuntu installation runs in an LXC container on Proxmox and has no GUI (essentially an Ubuntu Server setup).

The cloud storage is MS OneDrive, already correctly configured.

Here are the rclone config contents with secrets removed:

[OD]
type = onedrive
token = {"access_token":"xxxx>
drive_id = yyyy
drive_type = personal

I think a service needs to be created that executes the appropriate rclone command every time a change is made in the local folder.
Can anyone point me to a tutorial on how to do this?

I also have a doubt: should the service to be created be based on rclone's sync command, or is it more efficient to mount the remote OneDrive folder and run the rsync command (or something like that) as if working locally? Which approach do you recommend in my case?

I know my request is a beginner's one, but it is not easy to understand how to proceed.
Thank you!

Here is my take on this:

  1. Configure your OneDrive remote using a dedicated client_id/secret - see the docs. This is needed to avoid OneDrive throttling: without your own client_id you share the default client_id with all other rclone users, and your sync performance will suffer.

  2. rclone is not designed for continuous syncing. The best way to sync a local folder with the cloud is to run a simple rclone sync local remote: periodically - create a systemd service and run it e.g. every hour.

  3. To keep your cloud replica consistent, you should use filesystem snapshots (available on Linux with BTRFS or ZFS). Take a snapshot of your local folder and run rclone sync against it. Ideally you should never sync a live filesystem.
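A minimal sketch of point 3, assuming the folder lives on a BTRFS subvolume. The paths and the OneDrive folder name are assumed examples (only the remote name "OD" comes from the config above), and by default the script only prints each step so it can be reviewed safely; set RUN=1 to execute for real:

```shell
#!/bin/sh
# Sketch only: snapshot the subvolume, sync the snapshot, then delete it.
# Paths and the OneDrive folder name are assumptions, not from the thread.

SRC=/data/documents          # local BTRFS subvolume to back up (assumed path)
SNAP=/data/.sync-snapshot    # temporary read-only snapshot (assumed path)
DEST=OD:backup/documents     # OneDrive target folder (assumed name)

# Dry-run wrapper: print the command unless RUN=1 is set in the environment.
run() {
    if [ "${RUN:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi
}

run btrfs subvolume snapshot -r "$SRC" "$SNAP"    # freeze a point-in-time copy
run rclone sync "$SNAP" "$DEST" --log-level INFO  # sync the frozen view, not live data
run btrfs subvolume delete "$SNAP"                # drop the snapshot afterwards
```

This way the live folder is never read mid-change; rclone only ever sees the frozen snapshot.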

Thank you very much for your reply!

I modified my config file: thank you for the suggestion!

Since the files do not change very often, it's no problem if the sync command is launched once every hour.
I was actually hoping it would be possible to create a service that runs each time a file is created/changed/deleted in the local folder: I also found a script online that uses inotifywait (and rclone), but it is not suitable for directories containing many files.

If it is not too much to ask, can you give me a link to a page that shows what the sync script should look like and how it can be run every hour?

I am not in a 'critical' environment, so I have no particular need for consistency.

I think you have given the full answer here yourself - AFAIK there is no good method to monitor a huge number of files and directories in a consumer OS. For limited cases, solutions like inotifywait can be used. In general it is not really an rclone problem but an OS one. There might be some specialised OSes and filesystems suited to such a task.

Google is your friend. E.g. the first hit for "run rclone sync using systemd".
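For reference, that approach boils down to a oneshot service plus a timer along these lines. The unit names, paths and OneDrive folder are assumed examples, not taken from the linked page:

```
# /etc/systemd/system/rclone-sync.service  (assumed name and paths)
[Unit]
Description=Sync local folder to OneDrive with rclone

[Service]
Type=oneshot
ExecStart=/usr/bin/rclone sync /data/documents OD:backup/documents --log-level INFO

# /etc/systemd/system/rclone-sync.timer  (assumed name)
[Unit]
Description=Run rclone-sync.service every hour

[Timer]
OnCalendar=hourly
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with systemctl enable --now rclone-sync.timer; the timer then triggers the service hourly, and Persistent=true catches up on runs missed while the container was down.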

Sorry for the late reply.
Thank you very much for the link! I usually do my own research before asking in a forum, but I didn't get anything useful. I certainly hadn't used the best keywords...

I'm left with several questions that I've only just started reading up on.

  1. For rclone's sync command: in my case (a lot of not-too-large files split across several folders), are there any recommended options to use?
  2. If I wanted to use the bisync command instead of sync, are there any special tips?
  3. The script the timer calls can contain several consecutive commands (in the example on the linked site, sync is called twice on different folders). I will have to do something similar (I have several folders to back up to several OneDrive accounts belonging to different users). The first time I run the script, a lot of files will be copied to the remote folders, but from the second time onwards there will generally be few changes.
    However, I wanted to find out whether these calls will have a heavy impact on the system, or whether it will remain usable without any particular slowdown. (I ask in order to understand how often to have the timer call the script.)
  4. In the situation illustrated by the article, if I want to change the script or the timer, do I have to stop the timer first, or can the changes be made 'on the fly'?

Thank you very much for your help!

No. Use the defaults. If you have problems, then think about tuning.

bisync is still experimental, with a lot of ongoing work happening. Read the manual carefully and do some testing before risking your real data. Follow the latest development - Pull requests · rclone/rclone · GitHub. You might be better off using the latest beta instead of the stable rclone release.
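As a hedged sketch of how such testing might look (the paths are assumed examples, not from the thread): bisync needs an initial --resync run to establish its baseline, and --dry-run lets you preview what would happen before touching real data:

```
# First run only: preview, then build the baseline listings
rclone bisync /data/documents OD:backup/documents --resync --dry-run
rclone bisync /data/documents OD:backup/documents --resync

# Subsequent scheduled runs use plain bisync
rclone bisync /data/documents OD:backup/documents
```

Check the bisync section of the rclone manual for the current behaviour before relying on this, since the flags and semantics are still evolving.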

Try and see. There are too many variables to guess. And all terms are relative...

You can make all changes on the fly.
