Dropbox sync is very slow

With rclone 1.64, a sync between Google Drive and Dropbox takes 16 min just to check the files, with no transfers at all.

Is there anything I can do to speed this up? I think the issue is with Dropbox, because the same check against other backends like Cloudflare R2 takes ~1 min.

rclone sync 1:folder dropbox:folder --transfers 25 -vvP --stats 15s --fast-list --checkers 15 --multi-thread-streams 0 --dropbox-chunk-size 150M --drive-fast-list-bug-fix=false -c

The folder has ~120K objects.

I use sync even though the files almost never change; most of the time it's just content being added, but I like sync because sometimes changes can still happen.
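(As an aside: when data is mostly append-only, one documented pattern is a frequent top-up copy limited to recent files, plus an occasional full sync to catch edits and deletions. Using the remote names from the command below, that top-up would look something like:

rclone copy 1:folder dropbox:folder --max-age 48h --no-traverse

--no-traverse skips listing the destination entirely, so it avoids the slow full check; the periodic full sync still does the complete pass.)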

Any ideas why Dropbox is so slow and how to make it faster?

hi,
that is an old version of rclone, might want to rclone selfupdate and test again.

afaik, --fast-list is not supported on dropbox remotes; there is no support for ListR.
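you can verify that with the features command (swap in your own remote name):

rclone backend features remote:

if "ListR": false shows up in the output, rclone simply ignores --fast-list for that backend.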


and can you post the output of rclone config redacted?

I just tried with 1.69.1 and it's been 6 min to check 52k objects, and now it's just hanging...

[1]
type = drive
scope = drive
service_account_file = /opt/sa-json/19.json
team_drive = XXX

[cryptbox]
type = dropbox
token = XXX

[dropbox]
type = crypt
remote = cryptbox:
password = XXX
password2 = XXX

Dropbox has always been one of the slowest remotes for me. I pay so much for the storage and their performance is so bad that at this point I should probably go with Backblaze B2.

not sure how that works, as gdrive and dropbox use different hash formats?
the debug log would know...
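for example, with the remote names from the config above, you can see what each layer serves:

rclone hashsum md5 1:folder
rclone hashsum dropbox cryptbox:folder

drive serves md5, the raw dropbox remote serves dropbox's own hash, and the crypt overlay on top exposes no hashes at all.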

this is what my dropbox remote looks like

[remote]
type = dropbox
client_id = XXX
client_secret = XXX
token = XXX

i suggest that you create your own client id+secret and test again.

from the rclone docs,
When you use rclone with Dropbox in its default configuration you are using rclone's App ID. This is shared between all the rclone users.

-c just gets ignored and falls back to size only, but i still leave it there out of habit...

It would be nice if there was a way to do a hash check between crypted and non-crypted remotes, or even between two crypted remotes using the same passwords :frowning:
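(For the crypted vs non-crypted case there is rclone cryptcheck, which compares a plain remote against a crypt remote by re-encrypting each file with its nonce and comparing the underlying hashes. With the remote names from this thread it would be something like:

rclone cryptcheck 1:folder dropbox:folder

note it still has to read the plain files from the source to do the encryption, so it isn't free either.)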

I will test using my own client id and let you know, thanks. I didn't know that was possible with dropbox.

tho it is experimental, there is hasher - see the sketch below

maybe rclone check --download

and check out the hasher docs.
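a minimal hasher overlay sketch, assuming the remote names from the config above (hasher caches checksums for the wrapped remote; since crypt exposes no hashes, it has to compute them itself, e.g. on upload or via a one-off download):

[hashed]
type = hasher
remote = dropbox:
hashes = md5
max_age = off

then point the sync at hashed: instead of dropbox:.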

I'm talking about hundreds of TBs here; downloading them to generate checksums is not practical...

Also I just tried with my own client id / key and I don't see any performance increase at all...

Still 5 min to check ~53k files, so it seems the same. Is there a way to be sure it's really using my client id and key? I followed all the instructions and have no errors :frowning:

I think I just found an error in the documentation too lol

    1. Log into the Dropbox App console with your Dropbox Account (It need not to be the same account as the Dropbox you want to access)

I think it's meant here to be the SAME account you want to access and not the other way around, right?

well, seems a basic first step.


we have more experienced dropbox forum members.
i suggest that you post the output of
rclone version
rclone config redacted
rclone lsd dropbox:folder -vv (ok to redact/remove the directory names)

rclone v1.69.1
- os/version: ubuntu 18.04 (64 bit)
- os/kernel: 4.15.0-213-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.24.0
- go/linking: static
- go/tags: none
[1]
type = drive
scope = drive
service_account_file = /opt/sa-json/19.json
team_drive = XXX

[cryptbox]
type = dropbox
token = XXX
client_id = XXX
client_secret = XXX

[dropbox]
type = crypt
remote = cryptbox:
password = XXX
password2 = XXX

I can't paste the folder / filenames; it would be a huge list, and I'm not sure how it would help anyway.

13 min and it still hasn't finished checking the folder; it's only at ~105k objects now.

I can tell you that my folder structure is well optimized. I try to keep each folder to an average of 20-30 folders/files inside it, so there is no single giant folder.

For comparison, checking the same Google Drive remote against Cloudflare R2 with the same number of checkers takes 1m19s.

There is now info in my dropbox app's activity dashboard and it says 0 API calls, so I'm not sure it's really using my client id + key?

You have to either refresh the dropbox remote's token by reconnecting, or delete and recreate your dropbox remote, providing the client id/secret values in the process.
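Reconnecting is just (using the remote name from your config; it re-runs the oauth flow with whatever client_id/secret is now set):

rclone config reconnect cryptbox: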

You also need your own client_id/secret for the Google Drive remote.
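For illustration, that would mean adding the same two lines to the drive remote (values from your own Google Cloud project) - though note that when a service_account_file is doing the auth, the oauth client_id/secret may not actually be exercised:

[1]
type = drive
scope = drive
service_account_file = /opt/sa-json/19.json
team_drive = XXX
client_id = XXX
client_secret = XXX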

Your --transfers 25 / --checkers 15 is definitely too much - it can actually slow everything down. Use the defaults here.

Add the --tpslimit 12 --tpslimit-burst 0 flags - that is the max Dropbox can do without throttling.
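Putting that together, a trimmed-down version of your command would look something like:

rclone sync 1:folder dropbox:folder -vP --stats 15s --tpslimit 12 --tpslimit-burst 0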

I am struggling to use my own app client id/key with dropbox. Sometimes the logs say the app has no permissions enabled, but I go to the web interface and grant every permission I can. I tried rclone reconnect and everything, and then I got this:

edit:

I deleted all the apps and redid the configuration following the docs to use my own api key and client id, and now it seems to be working. I'm testing many different combinations and will post results soon.

Here are the results with different flags, checking sync time when there was nothing new to transfer:

rclone sync 1: dropbox: --transfers 25 -vvP --stats 15s --fast-list --checkers 15  --multi-thread-streams 0  --dropbox-chunk-size 150M --drive-fast-list-bug-fix=false -c --tpslimit 12 --tpslimit-burst 0
Transferred:              0 B / 0 B, -, 0 B/s, ETA -
Checks:            118978 / 118978, 100%
Elapsed time:     14m19.5s

rclone sync 1: dropbox: --transfers 25 -vvP --stats 15s --fast-list --checkers 15  --multi-thread-streams 0  --dropbox-chunk-size 150M --drive-fast-list-bug-fix=false -c
2025/03/12 07:36:49 INFO  :
Transferred:              0 B / 0 B, -, 0 B/s, ETA -
Checks:            118978 / 118978, 100%
Elapsed time:     18m44.2s

rclone sync 1: dropbox: --transfers 25 -vvP --stats 15s --fast-list --checkers 30  --multi-thread-streams 0  --dropbox-chunk-size 150M --drive-fast-list-bug-fix=false -c
Transferred:              0 B / 0 B, -, 0 B/s, ETA -
Checks:            118978 / 118978, 100%
Elapsed time:     39m21.7s

rclone sync 1: dropbox: --transfers 25 -vvP --stats 15s --fast-list  --multi-thread-streams 0  --dropbox-chunk-size 150M --drive-fast-list-bug-fix=false -c
2025/03/12 09:14:34 INFO  :
Transferred:              0 B / 0 B, -, 0 B/s, ETA -
Checks:            118978 / 118978, 100%
Elapsed time:     27m59.0s

rclone sync 1: dropbox: --transfers 25 -vvP --stats 15s --fast-list  --multi-thread-streams 0  --dropbox-chunk-size 150M  -c
Transferred:              0 B / 0 B, -, 0 B/s, ETA -
Checks:            118978 / 118978, 100%
Elapsed time:     26m57.6s

rclone sync 1: dropbox: --transfers 25 -vvP --stats 15s --fast-list  --multi-thread-streams 0  --dropbox-chunk-size 150M
Transferred:              0 B / 0 B, -, 0 B/s, ETA -
Checks:            118978 / 118978, 100%
Elapsed time:     30m35.0s

It seems the fastest we can run this sync with ~118k objects is ~14 minutes... which feels really slow. Is this really the fastest Dropbox can handle?

Like I said, with other backends the same sync with nothing to transfer takes 1-2 min.

Any ideas how to speed up this?