Multi-drive rclone cron upload

I upload anywhere from ~1.2-1.7TB every day, which means I have multiple gdrive business accounts, each with its own 750GB daily limit. I manage these with a cron script that iterates through each drive one by one in an attempt to load balance. Lockfiles ensure that only a single rclone instance is uploading at any given time.
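
For context, the wrapper is roughly along these lines (a simplified sketch rather than my exact script - the remote names, paths and round-robin bookkeeping are placeholders):

    #!/usr/bin/env bash
    # Simplified sketch of the cron wrapper. Cron calls this periodically;
    # the lockfile guarantees only one rclone upload runs at a time.

    LOCKFILE=/tmp/rclone-upload.lock
    STATE=/tmp/rclone-last-drive        # remembers which drive went last
    REMOTES=(gdrive1 gdrive2 gdrive3)   # one rclone remote per business account
    SRC=/data/outgoing                  # local staging directory

    # Bail out silently if another upload is still running.
    exec 9>"$LOCKFILE"
    flock -n 9 || exit 0

    # Round-robin: pick the remote after the one used on the previous run.
    last=$(cat "$STATE" 2>/dev/null || echo -1)
    next=$(( (last + 1) % ${#REMOTES[@]} ))
    echo "$next" > "$STATE"

    rclone move "$SRC" "${REMOTES[$next]}:backup" \
        --log-file=/var/log/rclone-upload.log --log-level INFO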

My problem arises when one of the drives in my cluster of 3 hits the daily limit. Instead of exiting and allowing my other two gdrives to pick up the slack, rclone gets stuck retrying with 403 User Rate Limit Exceeded errors.

While this is going on, my cron job sees the running instance and won't let the next rclone upload process spawn.

How would I go about solving this issue in the simplest way?

Does rclone have an option to exit instead of retrying in this situation? It makes sense to retry until the ban is lifted, but unfortunately not for my use case, since it holds up the other instances.

I'm fairly certain there is a maximum transfer parameter you can set that will produce a fatal error once the limit is reached. That should allow the script to continue and not get jammed. I will see if I can link it - but it should be part of the basic documentation on the website.

This is what I was thinking about:

--max-transfer=SIZE

Rclone will stop transferring when it has reached the size specified. Defaults to off.
When the limit is reached all transfers will stop immediately.
Rclone will exit with exit code 8 if the transfer limit is reached.

Seems like that would solve the problem you have described, since Rclone will actually exit (with a code you can use if you want) and thus allow the next part of the script to pick up the rest of the work.
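
For instance, a wrapper could do something like this (just a sketch - the 700G headroom figure and the remote name are placeholders):

    # Cap each run below the daily quota; rclone exits with code 8 when the
    # cap is hit (per the documentation quoted above), so the wrapper can
    # simply move on to the next drive instead of hanging.
    rclone move /data/outgoing gdrive1:backup --max-transfer 700G
    if [ $? -eq 8 ]; then
        echo "gdrive1 hit its transfer cap, moving on to the next drive"
    fi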

That wouldn't work, since it's per rclone instance. In any single run I might only upload 25GB or 100GB, but throughout the day, with repeated rclone jobs, I'll exceed the daily 750GB. Rclone currently has no way to track this limit across multiple calls.

Just to give you an example of how this works:

Time 0 min: rclone gdrive1 upload 100GB
Time 20 min: rclone gdrive2 upload 100GB
Time 40 min: rclone gdrive3 upload 100GB
Time 60 min: rclone gdrive1 upload 20GB
Time 80 min: rclone gdrive2 upload 40GB
Time 100 min: rclone gdrive3 upload 10GB

And so on and so forth as it loops throughout the day, uploading some data every hour. So as you can see, each individual rclone move to gdrive1 uploaded no more than 100GB, but over those 2 hours gdrive1 received 120GB in total.
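
In principle I could keep a running per-drive total myself and pass whatever is left to --max-transfer each run, but that gets fiddly fast. A rough, hypothetical sketch of what I mean:

    # Hypothetical workaround: track the per-drive daily total outside rclone
    # and hand the remainder to --max-transfer. Names and paths are placeholders;
    # rclone itself does not keep this state across runs.
    DRIVE=gdrive1
    SRC=/data/outgoing
    COUNTER=/var/tmp/rclone-${DRIVE}-$(date +%F)   # one counter per drive per day
    LIMIT_GB=700                                   # stay a little under 750GB

    used_gb=$(cat "$COUNTER" 2>/dev/null || echo 0)
    remaining_gb=$(( LIMIT_GB - used_gb ))
    [ "$remaining_gb" -le 0 ] && exit 0            # this drive is done for the day

    before=$(du -sBG "$SRC" | cut -f1 | tr -d 'G')
    rclone move "$SRC" "${DRIVE}:backup" --max-transfer "${remaining_gb}G"
    after=$(du -sBG "$SRC" | cut -f1 | tr -d 'G')

    # A move empties the source as it goes, so the size drop approximates
    # how much was actually uploaded this run.
    echo $(( used_gb + before - after )) > "$COUNTER"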

I see. I was assuming a typical daily backup sort of setup.

Well in that case the only "easy" solution I can see is if rclone could be set to exit with an error if it ever receives a rate-limit error. That shouldn't be too hard to do in terms of code - but I don't think such a function exists yet.

After double-checking that something like that isn't already in the documentation, I would recommend making an issue about it - hopefully it would be a fairly minor code change. If you know how to code, you can also open a pull request and try implementing it yourself. It seems like it would be a useful feature in general, since getting stuck on a rate limit is rarely very productive in any case.

New issues (about bugs or feature requests, for example) can be made here if you aren't already familiar with GitHub:

https://github.com/rclone/rclone/issues

Just make sure to do your due diligence in documenting the request in adequate detail.

Yeah, I was thinking about something like that. I'll have to look into the rclone documentation to see how it's set up.

A possibly easier solution would be for me to just read the log file and check for 403 errors, and exit the script if any are detected.
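
Something along these lines, just as a rough sketch (log path and remote are placeholders):

    # Sketch of the log-check idea: truncate the log, run the upload, then
    # look for 403s and bail out of the script if any turned up.
    LOG=/var/log/rclone-upload.log
    : > "$LOG"

    rclone move /data/outgoing gdrive1:backup --log-file="$LOG" --log-level INFO

    if grep -q "403" "$LOG"; then
        echo "403 seen in the log, stopping the script here"
        exit 1
    fi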

Seems like your script would just get stuck again and end up in an endless loop then. An elaborate external script workaround might be feasible, but it would certainly be a lot cleaner and easier to have internal support for it.

Agreed, an internal solution is definitely cleaner and easier.

Although I'm not sure I understand how my script would end up in an endless loop. Maybe I'm missing some logic here...

1: Upload to drive1 (Read log file while uploading to check for 403s)
2: Upload to drive2 (Read log file, 403 found, exit script)
3: Upload to drive3 continues as normal next time cron calls the script (read log, check for 403s)

It would indeed error every time it tries drive2, but I'm not sure I'm seeing the loop.

It should work if you make each upload a separate subroutine to a main script or something of that nature. Just make sure it doesn't start from the start of the script again after detecting an error (and thus just repeating what produced the error in the first place).
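
Roughly what I have in mind, sketched with your 403-grep idea (names and paths are placeholders):

    # Each drive gets its own subroutine; a drive that produced a 403 is
    # flagged and skipped for the rest of the day, while the others carry on.
    upload_to () {
        local remote=$1
        local flag=/var/tmp/banned-${remote}-$(date +%F)
        local log=/tmp/rclone-${remote}.log

        [ -e "$flag" ] && return 0      # already hit its limit today, skip it

        : > "$log"
        rclone move /data/outgoing "${remote}:backup" --log-file="$log" --log-level INFO

        if grep -q "403" "$log"; then
            touch "$flag"               # mark it; do not restart the whole script
        fi
    }

    upload_to gdrive1
    upload_to gdrive2
    upload_to gdrive3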

I'm sure there are many ways you could potentially work around it if you really had to - I'm not suggesting otherwise.

The problem is that 403s can also be perfectly normal rate-limiting messages rather than the daily quota ban, and you don't want to exit on those.
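
If you do keep the log-parsing approach, matching the specific quota message rather than any 403 would be safer. The exact wording below is an assumption - check what actually shows up in your own log when a drive hits the daily limit:

    # Only flag the drive when the daily-quota error shows up, not on
    # ordinary rate-limit 403s. "userRateLimitExceeded" is assumed wording;
    # verify it against a real log line first.
    if grep -q "userRateLimitExceeded" "$log"; then
        touch "$flag"
    fi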

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.