Set-up (Local computer / Synology / Google Drive / OneDrive / Dropbox)

What is the problem you are having with rclone?

I do not have a problem as such; I am just looking for some guidance if possible. I am new to rclone (I learned about it two weeks ago) and I have been trying to set up the following configuration by reading the documentation and the forum. This is a diagram of what I would ultimately like to set up.


  1. Two "computers" (one Mac and a Synology Drive)
  2. Three cloud storages (Google Drive, OneDrive, DropBox)
  3. Two folders (maybe more in the future) with important docs and media

Run the command 'rclone version' and share the full output of the command.

rclone v1.62.2

  • os/version: darwin 13.4 (64 bit)
  • os/kernel: 22.5.0 (x86_64)
  • os/type: darwin
  • os/arch: amd64
  • go/version: go1.20.2
  • go/linking: dynamic
  • go/tags: cmount

rclone v1.62.2

  • os/version: unknown
  • os/kernel: 4.4.59+ (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.20.2
  • go/linking: static
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

  • Google Drive (configured)
  • OneDrive (configured)
  • DropBox (still not configured)

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone sync gdrive:/path/important_docs onedrive:/path/important_docs --progress --check-first --backup-dir onedrive:rclone_backups

The rclone config contents with secrets removed.

type = drive
client_id = xxx
client_secret = xxx
scope = drive
token = {"access_token":"xxx","token_type":"Bearer","refresh_token":"xxx","expiry":"2023-06-12T20:31:17.900552-04:00"}
team_drive =

type = onedrive
client_id = xxx
client_secret = xxx
token = {"access_token":"xxx","token_type":"Bearer","refresh_token":"xxx","expiry":"2023-06-12T20:31:20.30853-04:00"}
drive_id = fbf51fef41b55155
drive_type = personal

2023/06/09 08:34:29 DEBUG : rclone: Version "v1.62.2" finishing with parameters ["rclone" "version" "--log-level=DEBUG" "--log-file=rclone.conf"]


  1. Before I start meddling with my wife's Dropbox, are the command and logic correct?
  2. Since I only want a one-directional sync I am using sync, but I do not fully understand the difference between copy, bisync, and mount. My understanding is that copy keeps destination files intact even if they are no longer in the source, while sync removes them; bisync is, as the name says, a sync but two-directional. Mount I was not able to understand. Should I be using one of the others, or is sync fine?
  3. There are many flags; am I using the right ones? I was not sure whether I had to use --compare-dest.
  4. I reran sync multiple times, deleting some documents in a test run just to understand it a bit better. Is it correct to say that each run copies only the files that have changed, not everything? I am asking because the "important documents" sync took 25 minutes; the Media directory is larger, so will it take that long every time it runs?
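(For question 4, a safe way to preview what a rerun would actually transfer is rclone's --dry-run flag, which prints planned copies/deletions without changing either remote; a sketch using the same remotes as the command above:)

```shell
# Preview the sync without modifying anything; by default only files whose
# size/modtime differ between source and destination would be transferred.
rclone sync gdrive:/path/important_docs onedrive:/path/important_docs \
  --dry-run --check-first --progress
```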

  1. Crontab command. I am running this; is it correct?
0 22 * * * sudo /path_to_script/ > /tmp/stdout.log 2> /tmp/stderr.log
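(Since the crontab entry points at a script, here is a minimal sketch of what such a wrapper could contain; all paths are placeholders, the rclone command is the one from above, and the lock file is an addition to avoid overlapping runs:)

```shell
#!/bin/sh
# Hypothetical nightly sync wrapper (paths are placeholders).
# A lock file avoids two syncs overlapping if one run takes longer than expected.
LOCK=/tmp/rclone_sync.lock
if [ -e "$LOCK" ]; then
  echo "previous sync still running, skipping" >&2
  exit 0
fi
trap 'rm -f "$LOCK"' EXIT
touch "$LOCK"

rclone sync gdrive:/path/important_docs onedrive:/path/important_docs \
  --check-first --backup-dir onedrive:rclone_backups \
  --log-level INFO --log-file /tmp/rclone_sync.log
```

Note that --progress is mainly useful interactively; for cron a log file is usually more practical, and a cron job normally does not need sudo if the script and the rclone config belong to your user.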

I take the opportunity to thank all the team behind this software and those that have been answering the questions on the forum. Great work!

That is a lot of questions for one post:) But let's cover some.

Cloud storages were not created equal, so there is no one-size-fits-all command or set of flags you can use. They have to be approached one by one: read the docs carefully to understand the requirements and limitations of each (things like maximum path length or illegal characters are not the same everywhere).

Create specific appID registrations, e.g. for gdrive. A similar step applies for dropbox and onedrive; all the details are in the docs. This is important to keep things working smoothly.

Both gdrive and dropbox are famous for very aggressive throttling policies, so you will need extra flags for these remotes, e.g.:

gdrive: --fast-list
dropbox: --tpslimit 12
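Put into full commands, that could look roughly like this (remote and folder names are only examples):

```shell
# gdrive: --fast-list batches directory listings into fewer API calls.
rclone sync /local/important_docs gdrive:important_docs --fast-list

# dropbox: --tpslimit caps transactions per second to stay under throttling.
rclone sync /local/important_docs dropbox:important_docs --tpslimit 12
```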

I would work through this one remote at a time, and if you hit problems, create separate forum threads.

If I were you I would use a crypt remote everywhere; otherwise your important_docs are all over the place in unencrypted form. It is your decision whether to place 100% trust in all these cloud companies, but you also increase your attack surface. What if one day data is stolen from one of them, or your password is compromised? The best approach IMHO is TNO (trust no one): encrypt/decrypt data locally and keep only encrypted data in the cloud. This is what the crypt remote provides; please note that once configured it is transparent. You can forget it exists, and all your data is secure.
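For reference, a crypt remote wrapping an existing remote looks roughly like this in rclone.conf (names are just examples; the passwords are stored obscured when you create the remote via rclone config):

```
[gcrypt]
type = crypt
remote = gdrive:encrypted
password = xxx
password2 = xxx
```

After that you sync to gcrypt: instead of gdrive:, and rclone encrypts/decrypts transparently on the way in and out.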

In terms of sync speed, if everything is configured correctly your initial sync will be limited by your Internet speed, and later runs should be very fast - unless your data falls into some edge category like millions of small files.

As for the overall setup, I am sure you have thought a lot about what you need - there is no single best solution here.

Personally I would use something different, e.g.:

  • Synology probably already runs Time Machine backups of your local computer, so you have a local backup
  • I would add a TM-style backup to the cloud (let's say to OneDrive) - I use a program called Arq for this, but there are other options like restic/kopia/borg
  • to share data with your wife I would use a program called Syncthing - it quietly runs in the background and immediately syncs all changes between your computers
  • I would use rclone (run from the Synology) to replicate my OneDrive cloud backup to another cloud provider, or maybe also to the Synology itself
  • I would use rclone to sync some important data to google drive (but with crypt)

Why do you want to use this one? What is the reason? Please check the docs. It needs a DIR as a parameter.

From what you described, you pretty much just need a basic sync with --backup-dir and flags helping with throughput.

bisync - forget it - it is for two-way sync
sync sounds like a good option to me for feeding data from your source to the cloud

mount is handy if you want to access data in the cloud: your cloud storage will simply appear on your computer as an additional disk.
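A minimal mount sketch (the mount point name is arbitrary; on macOS this needs macFUSE installed):

```shell
# Expose the whole gdrive remote as a browsable local folder.
mkdir -p ~/gdrive
rclone mount gdrive: ~/gdrive --vfs-cache-mode writes
# Stop with Ctrl-C, or unmount with: umount ~/gdrive
# (on Linux: fusermount -u ~/gdrive)
```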

Hey @kapitainsky, I appreciate you taking the time to answer everything and provide deeper context. Tonight I will look at everything in detail and try to make it work. A quick response to your suggestions:

  1. Creating specific appID registrations: I have done it for gdrive and onedrive. I still have dropbox pending, but these are indeed very useful.

  2. Thanks for the --fast-list and --tpslimit 12 recommendations. I will read about them and include them.

  3. Good call on using crypt. Better safe than sorry.

  4. Thanks for the recommendation on how you would set up everything (Time Machine, arq, SyncThing). I will look into those as well.

  5. With regards to --compare-dest: the docs say "If a file identical to the source is found that file is NOT copied from source." I thought this might improve the sync speed and total time, but I was not sure.

Overall I have a lot of work ahead going over what you mentioned; I appreciate the time you took here.


This is absolutely my personal take:) But I often see people using rclone as a backup to the cloud. It is not one. Think about what happens with your solution when you delete some file on your computer by mistake. After some time the deletion will be propagated to all your cloud storages. OK, you use --backup-dir, but that is a very primitive safeguard against only the most obvious problems. If you lose/corrupt multiple files in multiple directories (maybe files with the same name), good luck "going back in time", which is what every good backup solution is for. Rclone is a fantastic tool, but not for everything.

It is a weird option IMHO - there are many like this in rclone (it is nowadays a big project with a lot of legacy things) - I think this one makes sense if you want to create some sort of crude backup. So for example I run rclone sync source: dest:DAY0; then the next day I can create a "delta" backup with rclone sync source: dest:DAY1 --compare-dest DAY0. You think you want to use it:) But at the moment, without providing a DIR, it does nothing. Even worse, who knows what it does when DIR is empty. It is quite a common source of software problems: users use options outside of the design scope. You would hope the software checks for that and ignores it... but that does not have to be the case. I always try to use only what is 100% required.
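The crude-backup pattern described above, written out as commands (remote names are just a sketch):

```shell
# Day 0: full baseline copy.
rclone sync source: dest:DAY0
# Day 1: only files changed since DAY0 end up in DAY1;
# files identical to DAY0 are skipped.
rclone sync source: dest:DAY1 --compare-dest DAY0
```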


When using sync, copy or move DIR is checked in addition to the destination for files. If a file identical to the source is found that file is NOT copied from source. This is useful to copy just files that have changed since the last backup.

You must use the same remote as the destination of the sync. The compare directory must not overlap the destination directory.


FWIW, I would start with just one case, local -> remote, and make all your mistakes experimenting on that.

And as a safety net, create an alias remote, which locks rclone to a single folder.

type = drive
client_id = xxx
client_secret = xxx
scope = drive
token = {"access_token":"xxx","token_type":"Bearer","refresh_token":"xxx","expiry":"2023-06-12T20:31:17.900552-04:00"}
team_drive =

type = alias
remote = gdrive:testfolder

rclone ls test:

rclone sync /home/username/source test:files/current --backup-dir=test:files/archive/`date +%Y%m%d.%I%M%S` -vv
