Uploading to Dropbox from two computers at once results in getting stuck seemingly forever

rclone v1.63, Windows on one computer, Ubuntu on the other.

edit: To clarify, these computers are in different states with different internet connections.

Uploading to Dropbox, in either sync or async batch mode, will eventually result in me hitting the "too many requests, wait 300 seconds" message.

But the commands cannot reliably recover from this message, leaving them stuck for all eternity.

It worked perfectly with only one computer.

Stopping them both, waiting 5-10 minutes, and restarting one of them always works perfectly.

Conclusion: the wait timer rclone uses for Dropbox is not anticipating two different computers hitting the same Dropbox account via the same Dropbox personal API app at the same time.

Is there a flag I could use to extend this timer in a way that solves all my problems?

example command:

```shell
.\rclone copy -v --transfers 16 --bwlimit 110M --dropbox-batch-mode async --drive-chunk-size 128M --max-transfer 1500G "googledrive:blahblahblah" "dropboxsmugglefistgames:/blahblahblah"
```
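For what it's worth, rclone does have general throttling and retry flags that might keep each machine under the limit in the first place, and if I'm reading the docs right the Dropbox backend also has its own pacer setting. A hedged sketch (Linux-side syntax; the flag names are real rclone options, but the values here are untuned guesses, not recommendations):

```shell
# Sketch: throttle the request rate so two machines sharing one Dropbox
# app are less likely to trip the 300-second lockout. Values are guesses.
rclone copy -v \
  --transfers 8 \
  --tpslimit 6 --tpslimit-burst 12 \
  --dropbox-pacer-min-sleep 20ms \
  --low-level-retries 20 \
  --dropbox-batch-mode async \
  "googledrive:blahblahblah" "dropboxsmugglefistgames:/blahblahblah"
```

The idea is to slow each machine down enough that the two of them combined stay under the shared app's rate limit, rather than extending the wait timer after the fact.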

Alternatively, maybe I could move one of these to a different API app and target two API apps at the same Dropbox account, and somehow get better performance? (Each computer would use a different one in this case.)

edit: This happened 2 or 3 times, but now I'm repeating the scenario with two API tokens instead of a shared one. It could be an hour or two until I get stuck again, or it could already be solved via this tactic.

I know some people use multiple VPSes during a migration from Google to Dropbox, though, so surely it's possible. I'm not sure what flag or setting I might be using poorly, or if I just have more small files than the average person.

edit: Hit the first 300-second wait and recovered this time. But like I said, the issue isn't that it doesn't work; it's that when it breaks, it seems to break forever unless I tap two buttons on my keyboard, which instantly fixes it... BUT the two-API-token trick could also just have this issue fixed (I won't be confident about that until tomorrow.)

edit2: This graph shows the problem. Small dips are the 5-minute wait it's supposed to do; giant holes are when it got stuck forever and didn't resume until I returned to a keyboard to hit Ctrl-C, up arrow, Enter.

The giant block on the left is from when I was only using one connection, not two (never even a hiccup.)

Then again, all of this could just come down to comparing performance on small files versus large files.

I moved files from gdrive to Dropbox with 2 VPSes.
I had 3 folders.
I created 6 API keys:
one API key per folder per VPS.
I added them to the rclone config one by one on each box,
then used a command to transfer folder by folder, always with a different API key and server,
and ran the commands on both servers at the same time,
like VPS 1 moved folder 1 while VPS 2 moved folder 2.
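If I'm reading that right, the split above would look something like this (the remote and folder names here are placeholders; each remote is assumed to be configured with a different Dropbox app's credentials):

```shell
# Hedged sketch of the two-VPS split described above.
# On VPS 1 (remote "dropbox-app1" uses its own API key):
rclone copy -v "gdrive:folder1" "dropbox-app1:folder1"

# On VPS 2, running at the same time ("dropbox-app2" uses a different key):
rclone copy -v "gdrive:folder2" "dropbox-app2:folder2"
```

Since each transfer goes through its own API key, the rate limits shouldn't stack up on a single app the way they did with one shared key.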

Okay, so yeah, more API keys are more helpful. It makes sense.

If I have multiple users on the same Dropbox team, can they each have their own API key? For that matter, can one Dropbox user have more than one API key?

My 2 keys are working a lot better than one, but I'm wondering a bit how you made it to 6. Was that 6 separate teams? 6 users on one team? Some alternative I haven't imagined?

TLDR: I can't just make 6 API keys all on one user, can I? Surely if I did, they'd share their limits rather than adding them? Or, uh, am I really this poorly informed?

Just follow the how-to and create 6 API keys, note them down, and reuse them in the rclone config: just add a new Dropbox remote every time, and use the next API key on your list.
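In the config that ends up as one remote per API key, something like this (a sketch of rclone.conf with placeholder names and credentials, showing two of the six remotes):

```ini
# Each remote is tied to a different Dropbox app (API key).
# client_id/client_secret come from the Dropbox app console;
# the token line is filled in by "rclone config".
[dropbox1]
type = dropbox
client_id = APP_KEY_1
client_secret = APP_SECRET_1

[dropbox2]
type = dropbox
client_id = APP_KEY_2
client_secret = APP_SECRET_2
```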

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.