Rclone transfer checking unreasonably slow with Dropbox

What is the problem you are having with rclone?

I'm running rclone sync to a Dropbox remote from a locally attached disk. The files have already been uploaded to Dropbox; I'm only running this to ensure the upload is correct and current.
In the first two minutes I get to around 70k checked files (I believe the total is just over 200k).
After that, the checking speed slows to a trickle, barely reaching 80k files by minute 10.
Is this normal/expected? It seems very strange to me.

I've tried running the command with --checkers as high as 48, which is worse, and as low as 1.
Values between 2 and 4 seem to give the best results.

If this is a rate-limit issue, it's a very strange one: if I stop and restart the command immediately, I still reach 70k files in the first minute, which doesn't really make sense if Dropbox were rate-limiting on its side.

Run the command 'rclone version' and share the full output of the command.

rclone v1.62.2
- os/version: darwin 13.3.1 (64 bit)
- os/kernel: 22.4.0 (arm64)
- os/type: darwin
- os/arch: arm64 (ARMv8 compatible)
- go/version: go1.20.2
- go/linking: dynamic
- go/tags: none

Which cloud storage system are you using?

Dropbox

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone sync '/Volumes/Media 1' 'db:Backup/Media 1' \
  --progress --dropbox-batch-mode=sync --dropbox-batch-size 256 --dropbox-batch-timeout 10s --checkers 4 --transfers 256 --tpslimit 12 \
  --exclude '.Spotlight-V100/**' \
  --exclude '.System Volume Information/**' \
  --exclude '.Trashes/**' \
  --exclude '.TemporaryItems/**' \
  --exclude 'desktop.ini' \
  --exclude '.DS_STORE' \
  --exclude '.fseventsd/**' \
  --exclude '._*' \
  --delete-excluded \
  --max-backlog 200000 \
  --size-only \
  -vv \
  --log-file backup.log \
  --backup-dir "$backup_dir"

The rclone config contents with secrets removed.

[Dropbox]
type = dropbox
token = {"access_token":"","token_type":"bearer","refresh_token":"","expiry":""}
client_id = xxx
client_secret = xxx

[db]
type = alias
remote = Dropbox:/XXXXXX

A log from the command with the -vv flag

2023/06/14 19:41:47 INFO  : Starting transaction limiter: max 12 transactions/s with burst 1
2023/06/14 19:47:47 INFO  : Dropbox root 'xxx/Backup/Archive/Media 1/2023-06-14': Committing uploads - please wait...
2023/06/14 19:49:56 INFO  : Signal received: interrupt
2023/06/14 19:49:56 INFO  : Dropbox root 'xxx/Backup/Media 1': Committing uploads - please wait...
2023/06/14 19:49:56 INFO  : Exiting...
2023/06/14 19:54:18 INFO  : Starting transaction limiter: max 12 transactions/s with burst 1
2023/06/14 19:54:18 DEBUG : rclone: Version "v1.62.2" starting with parameters ["rclone" "sync" "/Volumes/Media 1" "db:Backup/Media 1" "--progress" "--dropbox-batch-mode=sync" "--dropbox-batch-size" "256" "--dropbox-batch-timeout" "10s" "--checkers" "3" "--transfers" "256" "--tpslimit" "12" "--exclude" ".Spotlight-V100/**" "--exclude" ".System Volume Information/**" "--exclude" ".Trashes/**" "--exclude" ".TemporaryItems/**" "--exclude" "desktop.ini" "--exclude" ".DS_STORE" "--exclude" ".fseventsd/**" "--exclude" "._*" "--delete-excluded" "--max-backlog" "200000" "--size-only" "-vv" "--log-file" "backup.log" "--backup-dir" "db:Backup/Archive/Media 1/2023-06-14"]
2023/06/14 19:54:18 DEBUG : Creating backend with remote "/Volumes/Media 1"
2023/06/14 19:54:18 DEBUG : Using config file from "/Users/fabian/.config/rclone/rclone.conf"
2023/06/14 19:54:18 DEBUG : Creating backend with remote "db:Backup/Media 1"
2023/06/14 19:54:18 DEBUG : Creating backend with remote "Dropbox:/xxx/Backup/Media 1"
2023/06/14 19:54:18 DEBUG : Dropbox: detected overridden config - adding "{3NXTF}" suffix to name
2023/06/14 19:54:19 DEBUG : Dropbox root '': Using root namespace "xxx"
2023/06/14 19:54:20 DEBUG : fs cache: renaming cache item "Dropbox:/xxx/Backup/Media 1" to be canonical "Dropbox{3NXTF}:xxx/Backup/Media 1"
2023/06/14 19:54:20 DEBUG : fs cache: renaming cache item "db:Backup/Media 1" to be canonical "Dropbox{3NXTF}:xxx/Backup/Media 1"
2023/06/14 19:54:20 DEBUG : Creating backend with remote "db:Backup/Archive/Media 1/2023-06-14"
2023/06/14 19:54:20 DEBUG : Creating backend with remote "Dropbox:/xxx/Backup/Archive/Media 1/2023-06-14"
2023/06/14 19:54:20 DEBUG : Dropbox: detected overridden config - adding "{3NXTF}" suffix to name
2023/06/14 19:54:20 DEBUG : Dropbox root '': Using root namespace "xxx"
2023/06/14 19:54:20 DEBUG : fs cache: renaming cache item "Dropbox:/xxx/Backup/Archive/Media 1/2023-06-14" to be canonical "Dropbox{3NXTF}:xxx/Backup/Archive/Media 1/2023-06-14"
2023/06/14 19:54:20 DEBUG : fs cache: renaming cache item "db:Backup/Archive/Media 1/2023-06-14" to be canonical "Dropbox{3NXTF}:xxx/Backup/Archive/Media 1/2023-06-14"
2023/06/14 19:54:20 DEBUG : .fseventsd: Excluded
2023/06/14 19:54:20 DEBUG : .Spotlight-V100: Excluded
2023/06/14 19:54:20 DEBUG : .Trashes: Excluded
2023/06/14 19:54:20 DEBUG : .TemporaryItems: Excluded

2023/06/14 19:54:21 DEBUG : XXX: Sizes identical
2023/06/14 19:54:21 DEBUG : XXX: Unchanged skipping
...

The ~70k file paths after that seem unnecessary to post here. They show nothing beyond 'Sizes identical' followed by the same file path with 'Unchanged skipping'.

Thanks for your help!

Cheers,
Fabian

Nope.

Most likely you are hitting Dropbox throttling.

replace:

--checkers 4 --transfers 256

with

--checkers 4 --transfers 4
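Applied to the original command, the change would look like this (a sketch; only the concurrency flags differ from the command posted above):

```shell
rclone sync '/Volumes/Media 1' 'db:Backup/Media 1' \
  --checkers 4 --transfers 4 --tpslimit 12 \
  --size-only -vv --log-file backup.log
# ...plus the same --exclude/--backup-dir flags as in the original command
```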

Hi, thanks for your response!
As you can see, I'm already using the --tpslimit 12 flag, and rclone seems to be respecting it, according to the first line of the log.

Cheers,
Fabian

Yes, I only noticed that when pressing send :)

I've edited my command according to your post, though I had tried this previously. In any case, the --transfers parameter shouldn't really have any effect, as I'm not transferring any files here.

I'll let it run for a few minutes and report back :slight_smile:

Cheers,
Fabian

Also, as nobody really knows what Dropbox is doing, I would try --tpslimit 6.

Who knows, maybe 10 would work, but it's good to test with a drastically lower limit to see whether the problem lies here.

This is the biggest problem with Dropbox: there is nothing official about their rate limits. Maybe they watch a short-term window like 1 minute, but also 10 minutes or 1 hour? Hopefully your tests will shed more light on it.

Same story, even with --tpslimit 6, --checkers 4, --transfers 4.
I still reached 70k files at around 1.5 minutes, and it's now crawling along at 73k files after 5 minutes,
which is about the same result I get regardless of the settings I choose (apart from the really unrealistic ones).

Have you got any insight into how this could be a rate-limiting issue, given that I can restart the job immediately and reach the same 70k files in one minute?

One thing that might be relevant: the checked-files count is always at 100%. Both numbers increase at exactly the same rate, so I see 73k/73k files checked, while in reality I have about 200k files.
I don't know if this is normal behaviour or possibly relevant to my issue.

I agree that it is weird, but I run similar commands against AWS with 2 million files and there is no issue, so I think it is highly unlikely that the problem is on rclone's end.

It could be Dropbox; it could be your computer...

I would try without --max-backlog 200000; maybe that will give us some clue.

A full debug log would be the way to really take a look at it.

There is no sign of rate limiting in the logs posted so far.
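One way to produce and triage such a log (a sketch; 'too_many_requests' is the Dropbox error string mentioned later in this thread, while the 'retry'/'pacer' phrasing is an assumption about how backoff lines appear):

```shell
# Re-run with full debug logging, then scan the log for rate-limit symptoms.
rclone sync '/Volumes/Media 1' 'db:Backup/Media 1' -vv --log-file backup.log
grep -Ei 'too_many_requests|retry|pacer' backup.log
```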

Yes, I'll see if that does something; however, the issue was there before I added that flag. The backlog was my idea to combat the always-100%-checked status, but apparently the feature doesn't do what I thought it did.

How do I get that? I am already using the -vv flag in the log you see above, and there are no too_many_requests errors later in the log.

If that's the whole log with -vv (as it's small), then there is no rate limiting happening.

You'd see errors with -vv, and they would be obvious if you were being rate limited.

With 12 TPS, I personally would set checkers to 12 and transfers to 12. If you are spending most of the time checking, that should be fine, as you want to spawn as much checking as possible without rate-limiting yourself.

There is a sweet spot to be found with a lot of this, and it takes a lot of trial and error to see what works best for you.

I try to leave as much at the defaults as possible, and I test and validate by changing one thing at a time to ensure my results are accurate.
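Since only checking is at issue here, a check-only trial keeps the variables down: `rclone check` compares source and destination without transferring anything (a sketch; flag values are illustrative, paths as in the original command):

```shell
# Compare sizes only, matching the sync's --size-only semantics.
rclone check '/Volumes/Media 1' 'db:Backup/Media 1' \
  --size-only --checkers 12 --tpslimit 12 \
  -vv --log-file check.log
```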

Hmm, yes, as I've said, I've censored about 70k lines of 'skipping', as the full file paths are visible. I'd be happy to send it privately, but there are no error messages in there.

Are there any more ways to get rclone to reveal why it is suddenly slowing down? At the beginning, the checked-files count tends to jump by thousands instantly, but after a minute it just increases steadily by a few files every second.

Is it normal to always see 100% of files checked? That is, I always see 45678/45678 files checked, and both numbers increase at exactly the same rate. Shouldn't rclone be able to read the local filesystem much faster and arrive at an upper bound after some time?

Cheers,
Fabian

I can't answer what I can't see.

If there isn't a full log file to look at, we can only guess, and that's all we can do.

If you want to share it, great. If you don't, that's up to you. I'm happy to look or not look as I get paid the same either way :slight_smile:
This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.