Dropbox Dir.ReadDirAll error: too_many_requests/

I see your point, but it's not only my problem, and it's not only about my accounts. I have 3 teams and 9 accounts, and my friends have exactly the same problem for the same reason. It could be related to the total file size and the file count in a directory, but I can't pinpoint the exact cause.

I think you have some correlation going on there without any data to support it.

It could be a solar flare too, but we can only figure things out as we go along.

@Ole's point about Chia farming is probably the best one...

In my opinion, if that's true, they should admit that they caused it, or admit that they no longer allow this. We use business accounts, so we get good support from Dropbox, but they say the issue isn't caused by Dropbox. They also asked the third-party developers to reach out to Dropbox's developers to solve the issue. If they were blocking this deliberately, they wouldn't ask anyone to get in touch...

Anyway, what can we do to solve this?

So find a developer to reach out to them, as that's not me.

You'd wait for @ncw to chime in and donate some of his time, as it's a free project, or hire a developer to help you out.


@ncw thank you, I hope you can help us with this...

It is my understanding that all the data of the Chia plot(s) needs to be available and readable without too much delay all of the time. This is constantly verified by random reads of the plots from some sort of Chia console using the mount in the first post. The more plots the more reads.

Is this a (somewhat) correct understanding?

If so, then it is my best guess that this read activity will exceed the quotas of Dropbox and result in the rate limiting (too many requests) that you are seeing.

I explained the business rationale for these quotas in an earlier post and would expect them to be prohibitive for Chia farming based on the calculations made by Backblaze. I would therefore expect the providers to gradually close any loopholes as they get aware of them. That's why I call it a losing game.
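To make that guess concrete, here is a hypothetical back-of-envelope in Python. Every number below is an assumption for illustration (farm sizes, filter odds, and lookups per check all vary), but it shows how read volume scales with plot count:

```python
# Very rough back-of-envelope for Chia read traffic against a remote.
# All numbers below are assumptions for illustration, not measurements.
PLOTS = 5000                           # hypothetical farm size
SIGNAGE_POINTS_PER_DAY = 64 * 6 * 24   # roughly 64 per 10-minute sub-slot
PLOT_FILTER = 512                      # only ~1 in 512 plots is checked per point
READS_PER_CHECK = 7                    # approximate lookups per quality check

checks_per_day = PLOTS * SIGNAGE_POINTS_PER_DAY // PLOT_FILTER
reads_per_day = checks_per_day * READS_PER_CHECK
print(reads_per_day)                    # 630000
print(round(reads_per_day / 86400, 1))  # ~7.3 reads per second, around the clock
```

Even with these made-up numbers, that is a steady stream of latency-sensitive ranged reads, day and night, which is exactly the access pattern a consumer storage quota is likely to throttle.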

Hey Ole, thank you for your comments.
I get your point, but I don't believe that's it, because we haven't used even a small percentage of the Dropbox API call limit:

Account1:

Space used: 534.35 TB
API calls this month: 0% of 1,000,000,000

As I said, if you were right, Dropbox would explain the reason.

It's reproducible with an rclone ls command, not just a mount, so it isn't related to the mount, as it was tested without one.

An rclone ls command should return results and not get rate limited with --tpslimit 1, so something else is not working as expected.
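For reference, --tpslimit just paces transactions to at most N per second. A minimal sketch of that kind of pacing (not rclone's actual implementation, which is in Go) looks like this:

```python
import time

class TpsLimiter:
    """Minimal pacer: wait() blocks so that calls happen at most
    `tps` times per second, mirroring the idea behind rclone's
    --tpslimit flag for API transactions."""

    def __init__(self, tps):
        self.interval = 1.0 / tps  # minimum gap between calls
        self.next_ok = 0.0         # earliest monotonic time the next call may run

    def wait(self):
        now = time.monotonic()
        if now < self.next_ok:
            time.sleep(self.next_ok - now)
            now = self.next_ok
        self.next_ok = now + self.interval
```

With a limiter like this set to 1 tps, a single listing call per second should be far below any plausible per-account quota, which is why still seeing 429s at that rate points at something other than raw request volume.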

If that's the data set / structure / bad api return / etc, that's what I was trying to get at.


Are all those people doing Chia farming? If so, that seems like the common factor, since the problem you are seeing is very specific and not one we've come across in the Dropbox backend.

I wouldn't be at all surprised if Dropbox are specifically detecting chia activity.

I would have hoped that using --tpslimit on the mount would slow things down to an acceptable rate for Dropbox, and the fact that it doesn't is a bit fishy.

Anyway I'm happy to email Dropbox - do you have a contact you are talking to? You can loop me in on nick@craig-wood.com if you want.


Thank you Nick,
Please don't share the Chia details with them :slight_smile: I'll send them your email so they can contact you; I only have a ticket, not a direct email contact.

Hello Nick, @ncw

I've sent them your email, but they responded like this:

Thank you for your response and for your patience on this matter!
The 3rd party developer can reach out to us from this link: Dropbox - Contact Dropbox support.
Keep in mind, that they can use the ticket number of this communication as reference.
Please feel free to reach out again if there is anything else you need.

Please can you contact them? You can use this ticket (15715169) as a reference: Ticket #15715169: Dropbox Support Chat

Unfortunately, not being a paying customer, I have no support options to contact them on that page :frowning:

However I have had a go with the credentials you sent me and I've discovered some things using this command

rclone lsf -vv --dump bodies dropbox4:Party1/plot1 --checkers 1 --transfers 1

In the docs: HTTP - Developers - Dropbox

It says

Note: auth.RateLimitError may be returned if multiple list_folder or list_folder/continue calls with same parameters are made simultaneously by same API app for same user. If your app implements retry logic, please hold off the retry until the previous request finishes.

I think this must be the case that rclone is hitting.
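For illustration, here is a minimal Python sketch of what "hold off the retry" means in practice: retry a list_folder call only after waiting out the retry_after hint that Dropbox returns. `do_request` is a hypothetical stand-in for the real HTTP call (which rclone makes internally):

```python
import time

def list_folder_with_retry(do_request, max_retries=5):
    """Call do_request() until it succeeds, waiting out Dropbox's
    retry_after hint on 429 responses before retrying.
    do_request must return a (status_code, body) tuple, where body
    is the decoded JSON response as a dict."""
    for _attempt in range(max_retries):
        status, body = do_request()
        if status != 429:
            return body
        # Dropbox puts the wait hint both in the Retry-After header and
        # in the JSON body: {"error": {"retry_after": 5, ...}}
        wait = body.get("error", {}).get("retry_after", 1)
        time.sleep(wait)
    raise RuntimeError(f"still rate limited after {max_retries} attempts")
```

The point of the doc note is that concurrent identical list_folder calls can themselves trigger the 429, so the retry must be serialized rather than fired in parallel.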

The HTTP request looks like this

2022/02/09 17:07:08 DEBUG : HTTP REQUEST (req 0xc000864200)
2022/02/09 17:07:08 DEBUG : POST /2/files/list_folder HTTP/1.1
Host: api.dropboxapi.com
User-Agent: rclone/v1.58.0-beta.5995.9cc50a614
Content-Length: 220
Authorization: XXXX
Content-Type: application/json
Accept-Encoding: gzip

{"path":"/Party1/plot1","recursive":false,"include_media_info":false,"include_deleted":false,"include_has_explicit_shared_members":false,"include_mounted_folders":false,"limit":100,"include_non_downloadable_files":false}

And the response looks like this

2022/02/09 17:07:09 DEBUG : HTTP RESPONSE (req 0xc000864200)
2022/02/09 17:07:09 DEBUG : HTTP/2.0 429 Too Many Requests
Content-Length: 109
Accept-Encoding: identity,gzip
Cache-Control: no-cache
Content-Security-Policy: sandbox allow-forms allow-scripts
Content-Type: application/json
Date: Wed, 09 Feb 2022 17:07:08 GMT
Retry-After: 5
Server: envoy
X-Content-Type-Options: nosniff
X-Dropbox-Request-Id: 69c85318bf4341f2a2ce163f5b50e1d9
X-Dropbox-Response-Origin: far_remote

{"error_summary": "too_many_requests/", "error": {"reason": {".tag": "too_many_requests"}, "retry_after": 5}}

That directory can't even be listed once without getting that error. I tried changing the number of items rclone is asking for from 1000 to 100 to 10 to 1 with no difference.

The fact it is the first transaction makes me wonder if something else is using the directory?

So have you got other things using that directory at the same time?

Rclone does this already. It requests 1000 items normally each page, but I tried 1000,100,10,1 with no difference.
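The paging described above can be sketched like this. `do_post` is a hypothetical stand-in for the real HTTP call; the endpoint names and the `cursor`/`has_more` fields match the Dropbox API shown in the dumps:

```python
def list_all_entries(do_post, path, limit=1000):
    """Page through a Dropbox folder the way /2/files/list_folder works:
    one initial list_folder call, then list_folder/continue with the
    returned cursor until has_more is False. do_post(endpoint, body) is
    a stand-in for the real HTTP call and returns the decoded JSON."""
    resp = do_post("/2/files/list_folder", {"path": path, "limit": limit})
    entries = list(resp["entries"])
    while resp["has_more"]:
        resp = do_post("/2/files/list_folder/continue", {"cursor": resp["cursor"]})
        entries.extend(resp["entries"])
    return entries
```

Since the 429 arrives on the very first list_folder call, before any continue pages are requested, shrinking `limit` changing nothing is consistent with the throttle not being about page size at all.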

So I think the cause of this is one of these things

  1. You are using the same app ID elsewhere listing that directory a lot
  2. That directory is somehow broken at Dropbox due to a Dropbox bug
  3. Dropbox are deliberately breaking that directory with rate limits (maybe they don't like Chia farming).

Cause 1 agrees with the documentation; causes 2 and 3 would require confirmation from Dropbox. I don't think rclone is doing anything wrong: it is respecting the Retry-After headers, the directory listing just never returns what it is supposed to.

Hello,
I don't know how, but it's fixed now. I didn't do anything special, so I guess something was fixed on Dropbox's side.
BTW, thank you very much for all the comments @Ole @Animosity022, and especially @ncw: congrats on developing such an application, and thank you for your support. I'm here to contribute to the project.


This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.