Dropbox - Too many requests or write operations. Trying again in 300 seconds

What is the problem you are having with rclone?

Too many requests or write operations. Trying again in 300 seconds

rclone v1.62.2
Dropbox remote

The command you were trying to run (eg rclone copy /tmp remote:tmp)

I'm running the following three commands simultaneously, only the third one is showing an error (at least for now):

rclone copy GDSA03:movies/ DB:movies/ --transfers 12 --tpslimit 12 --dropbox-batch-mode async --log-file /hd22/eteam/rclone.log -vvv
rclone copy GDSA03:tv/ DB:tv/ --transfers 12 --tpslimit 12 --dropbox-batch-mode async --log-file /hd22/eteam/rclone.log -vvv
rclone size DB:

The rclone config contents with secrets removed.

[DB]
type = dropbox
token = {"access_token":"xxxxx","token_type":"bearer","refresh_token":"xxxxx","expiry":"2023-06-06T16:12:00.361948583+02:00"}

A log from the command with the -vv flag

2023/06/06 14:00:40 DEBUG : rclone: Version "v1.62.2" starting with parameters ["rclone" "size" "DB:movies" "-vv"]
2023/06/06 14:00:40 DEBUG : Creating backend with remote "DB:movies"
2023/06/06 14:00:40 DEBUG : Using config file from "/home/hd22/eteam/.config/rclone/rclone.conf"
2023/06/06 14:03:01 NOTICE: too_many_requests/...: Too many requests or write operations. Trying again in 300 seconds.
2023/06/06 14:03:01 DEBUG : pacer: low level retry 1/10 (error too_many_requests/...)
2023/06/06 14:03:01 DEBUG : pacer: Rate limited, increasing sleep to 5m0s
2023/06/06 14:03:01 DEBUG : pacer: Reducing sleep to 3m45s
2023/06/06 14:03:01 DEBUG : pacer: Reducing sleep to 2m48.75s
2023/06/06 14:03:01 DEBUG : pacer: Reducing sleep to 2m6.5625s
2023/06/06 14:03:01 DEBUG : pacer: Reducing sleep to 1m34.921875s
2023/06/06 14:03:01 DEBUG : pacer: Reducing sleep to 1m11.19140625s
2023/06/06 14:03:01 NOTICE: too_many_requests/: Too many requests or write operations. Trying again in 300 seconds.
2023/06/06 14:03:01 DEBUG : pacer: low level retry 1/10 (error too_many_requests/)
2023/06/06 14:03:01 DEBUG : pacer: Rate limited, increasing sleep to 5m0s
2023/06/06 14:03:01 DEBUG : pacer: Reducing sleep to 3m45s
2023/06/06 14:03:01 DEBUG : pacer: Reducing sleep to 2m48.75s

Adding to the above here is the config of my Dropbox API app:
I don't know if, where it says Development teams 0/1, it should say 1/1. I don't know how to change it either.


IMHO

--transfers 12 --tpslimit 12

is too much for Dropbox - check some threads from the last few days; it is a recurring issue. Dropbox has limits, but they are not public, so you have to experiment. If you are seeing a lot of errors, you have to slow down.
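For example, something along these lines is gentler on the API (the exact values are only illustrative starting points, you still have to experiment):

```shell
# Fewer parallel transfers and a lower request rate make throttling less likely
rclone copy GDSA03:movies/ DB:movies/ --transfers 4 --tpslimit 8 -vv
```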

1 Like

Also check:

--dropbox-batch-mode async

does not guarantee that uploads complete - in async mode rclone does not wait for the batch to commit, so errors can go unreported

I would also change the number of transfers, from 12 change it to 5

Thanks guys I didn't know that Dropbox also had upload limits!

You might also create a separate app ID for every command you run in parallel.

2 Likes

Yes, that's something that I did at least.
In any case, Development teams still shows 0/1, and I don't know if that's right or wrong...

No idea either. Not using Dropbox atm. Maybe somebody who does can check.

1 Like

Do you have a team setup for development? If not, 0 is fine.

Dropbox applies API rate limits per app registration, so you really want to use a different app registration for every 'thing' you do. I use one for each mount, one for uploads, and one for testing, so nothing ever impacts anything else.

Generally, 12 TPS with 0 burst works OK, but on the rare occasions I am moving / renaming a lot, I get rate limited for a few seconds.

If you see 5 minutes, you are hammering it too much and it's asking you to slow down.

2 Likes

What is the trick to having more app IDs configured? Do you add Dropbox more than once in the config?

Yes, multiple entries with different app id.
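A sketch of what that looks like in rclone.conf - the remote names and placeholder credentials below are hypothetical; each app registration gets its own client_id / client_secret from the Dropbox App Console:

```ini
[DB-movies]
type = dropbox
client_id = app_key_for_first_app
client_secret = app_secret_for_first_app
token = {"access_token":"xxxxx","token_type":"bearer","refresh_token":"xxxxx","expiry":"..."}

[DB-tv]
type = dropbox
client_id = app_key_for_second_app
client_secret = app_secret_for_second_app
token = {"access_token":"xxxxx","token_type":"bearer","refresh_token":"xxxxx","expiry":"..."}
```

Then run one copy against DB-movies: and the other against DB-tv:, and each command draws on its own API quota.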

1 Like

Got it thanks!

Now I'm running two rclone copy commands at the same time; each one has its own Dropbox API app and its own rclone config entry.

I'm running them with --transfers 12 and --tpslimit 12. I'm not setting anything for --tpslimit-burst.

Would you recommend changing --tpslimit-burst to 0 then?

While I want to copy fast, I'd rather do it more slowly but at a steady pace than watch the seedbox reboot every few hours, lol

The default for --tpslimit-burst is 1, so I set it to 0 just to be sure I never go past 12 per second. With a burst of 1 you can get 13 in a second, but at the end of the day, I'm not sure how impactful that would be.
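Spelled out as a command (a sketch - the other flags are as in the earlier examples):

```shell
# Hard cap at 12 requests per second, with no burst allowance above it
rclone copy GDSA03:movies/ DB:movies/ --transfers 12 --tpslimit 12 --tpslimit-burst 0 --progress
```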

Fast is tough: you want to squeeze out every bit you can, but push too hard and you start retrying, which is much worse.

Just test and see what works best.

2 Likes

Don't forget to use the --dropbox-batch-size flag too.

--dropbox-batch-mode sync has been the default for a long time, so unless you really need async there is no point tinkering with it.

--dropbox-batch-size defaults to 0, which means rclone will calculate the batch size depending on the setting of batch_mode.

For massive Gdrive->Dropbox transfers I would not over-engineer it. Stick to basics and defaults, and use a few flags to control speed so you don't hit Dropbox limits.

And maybe

--order-by size,mixed,75 --max-backlog 10000

to make all transfer more balanced.

max-backlog: controls how many files rclone queues up in memory ahead of the transfers. 10,000 is plenty; there is no need for more.

order-by: the up to 10,000 items in the queue will be sorted by size, and the transfer threads will be fed 75% small files and 25% big files. This keeps the queue saturated with lots of tiny files while still working on some larger files at the same time.

3 Likes

maybe this --max-backlog 10000 flag will also solve another problem I have: the total size is 37.619 TiB (checked with rclone size), but when I start the copy, the log says it is only about 5 TiB:
Transferred: 1.430 GiB / 5.357 TiB

So do you mean something like this only??

rclone copy GDSA03:movies/ DB:movies/ --order-by size,mixed,75 --max-backlog 10000 --progress

By the way, I noticed that in the OP I didn't paste the rclone copy commands correctly. I should have written this:

rclone copy GDSA03:movies/ DB:movies/ --transfers 12 --tpslimit 12 --dropbox-batch-mode async --fast-list --dropbox-batch-size 1000 --dropbox-batch-timeout 10s --progress -vvv
rclone copy GDSA03:tv/ DB:tv/ --transfers 12 --tpslimit 12 --dropbox-batch-mode async --fast-list --dropbox-batch-size 1000 --dropbox-batch-timeout 10s --progress -vvv
rclone size DB:

I'm not editing the OP anyway because otherwise the answers received might not make sense.

I would remove

--dropbox-batch-mode async

unless you have special reason to use it.

--transfers 12 --tpslimit 12 - if it works for you, great. Just watch for throttling errors and lower it if you hit limits.

1 Like