Stop upload on rate limit

What is the problem you are having with rclone?

Is there any way to stop uploads upon rate limit failure?
I have a cron-scheduled task that uploads every 20 minutes, but I'd like it to stop trying if I'm rate limited by Google.
I want this because I don't know what resources are used, or what else happens, when rclone just keeps trying to upload until the next day.

Here's how I do the uploading:

#!/bin/bash
# Move finished downloads to Google Drive, rate-limited with --bwlimit 8.5M per job
/usr/bin/rclone move --transfers=6 --config /opt/rclone/rclone.conf --no-traverse --bwlimit 8.5M --buffer-size 0 --use-mmap /home/user/rtorrent/upload/movies/ drive:media/movies/
/usr/bin/rclone move --transfers=6 --config /opt/rclone/rclone.conf --no-traverse --bwlimit 8.5M --buffer-size 0 --use-mmap /home/user/rtorrent/upload/tv/ drive:media/tv/
# Remove the directories left empty by the moves
find /home/user/rtorrent/upload/movies/* -empty -type d -delete 2>/dev/null
find /home/user/rtorrent/upload/tv/* -empty -type d -delete 2>/dev/null

# cron invokes the script under an exclusive, non-blocking lock so runs don't overlap
flock -xn /home/user/bin/locks/upload.lck -c "/home/user/bin/upload_silent.sh"

What is your rclone version (output from rclone version)

v1.48.0

Which OS are you using and how many bits (e.g. Windows 7, 64 bit)

Ubuntu

Which cloud storage system are you using? (eg Google Drive)

Google Drive

You are hitting the max upload quota per day, which is 750GB. Set --bwlimit to 8M and you won't hit your quota.
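(For reference, and assuming rclone treats the M suffix as MiB/s: 8.5 MiB/s sustained for 24 hours is roughly 8.5 × 1,048,576 × 86,400 ≈ 770GB, which is over the 750GB/day quota, while 8 MiB/s works out to roughly 725GB and stays under it.)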

Respectfully, that isn't addressing my question at all.
You told me two things I already know:

  1. I'm hitting the quota.
  2. You can already see above that I'm using a bwlimit of 8.5M.

403 rate-limit errors can unfortunately mean more than one thing, so rclone can't tell that you've hit your daily limit.

You can lower the retries (--retries and --low-level-retries) so it fails sooner.
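As a rough example (the exact numbers are up to you; the defaults are --retries 3 and --low-level-retries 10), something like this gives up much sooner:

/usr/bin/rclone move --retries 1 --low-level-retries 2 --config /opt/rclone/rclone.conf /home/user/rtorrent/upload/movies/ drive:media/movies/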

He suggested lowering your --bwlimit from 8.5M to 8M to help ensure you don't hit the daily limit.

Unfortunately I don't think such a feature exists yet. I agree it would be useful to have an option you can set to abort a session when a quota limit is detected, thus preventing the rest of your script from stalling; I've been wanting that myself. Currently I think the only way to do this would be to have an external script parse the output and detect it from the error code, but this is convoluted. The problem is that I don't know if you can technically distinguish between rate limits and quota limits based on the error you get from the server. I don't know enough of the details there, but that would be a requirement for this feature to work.
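To illustrate the external-script idea, here is a minimal sketch. The stop-file path and the grep pattern are assumptions - check what your own logs actually say when you hit the daily limit - and you'd need something (e.g. a daily cron job) to delete the stop file again once the quota resets:

#!/bin/bash
# Sketch: run the move with a log file and block future runs if a quota-style error shows up
STOPFILE=/home/user/bin/locks/quota_hit
[ -f "$STOPFILE" ] && exit 0   # a previous run already detected the quota, so bail out

LOG=$(mktemp)
/usr/bin/rclone move -v --log-file "$LOG" --config /opt/rclone/rclone.conf /home/user/rtorrent/upload/movies/ drive:media/movies/

# 403s are ambiguous, so only stop on messages that look like the daily quota.
# The pattern below is a guess - adjust it to the exact wording in your logs.
if grep -qi "quota" "$LOG"; then
    touch "$STOPFILE"
fi
rm -f "$LOG"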

An alternative could be to have rclone persistently track the total transfers in a day by saving it to a file with timestamps and just making sure it doesn't go over.
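Until something like that exists inside rclone, you could approximate it in a wrapper script. A rough sketch, where the counter location and the 700GB safety margin are just assumptions:

#!/bin/bash
# Sketch: keep a per-day byte counter and skip the run once a self-imposed cap is reached
COUNTER=/home/user/bin/locks/uploaded_$(date +%F)   # one counter file per day
CAP=$((700 * 1024 * 1024 * 1024))                   # stop a bit below the 750GB quota

used=$(cat "$COUNTER" 2>/dev/null || echo 0)
pending=$(du -sb /home/user/rtorrent/upload/movies/ | cut -f1)   # bytes queued for upload

if [ $((used + pending)) -ge "$CAP" ]; then
    exit 0   # this batch would push today's total over the cap, try again tomorrow
fi

/usr/bin/rclone move --config /opt/rclone/rclone.conf /home/user/rtorrent/upload/movies/ drive:media/movies/ \
  && echo $((used + pending)) > "$COUNTER"

This over-counts if a run doesn't manage to move everything, which at least errs on the safe side, and the old per-day counter files need occasional cleanup.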

Don't confuse rate limiting and quota limiting though. It can be entirely normal to see some rate-limiting errors during bursts, and rclone will scale its requests to deal with that. Once you hit the daily quota, though, you won't be able to upload anything more at all.

I would suggest you make an issue for this as a suggestion. Just check real quick to see if it already exists and upvote it instead if it does. I can't remember if I maybe saw an issue for this already somewhere...

I think the best tool we have for the job right now is to set --max-transfer to a limit below your daily quota (currently 750GB for a Google Drive, to the best of my knowledge - you'll want to set it slightly lower). It only counts that limit per session though, so it has limited usefulness, but it should do the job just fine if you only need to run a daily sync job. All of this would be so much easier if Google just had a function to read the remaining quota from the API, but they are notorious for keeping all that stuff unpublished and in the dark.
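For example (700G is just a sample value that leaves some headroom below the daily quota):

/usr/bin/rclone move --max-transfer 700G --config /opt/rclone/rclone.conf /home/user/rtorrent/upload/movies/ drive:media/movies/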

I don't see any reason to use --bwlimit instead, unless you purposefully want to smooth out the bandwidth use. You'll hit the same daily quota anyway, just really slowly, so I don't see much benefit to it.

This would be his worst option as he runs a process every 20 minutes and it would never trigger.

This would actually be the OP's best option, as he runs a process every 20 minutes and is trying to make sure he doesn't exceed 750GB a day. By setting a low enough bandwidth limit on his transfers, he stays under 750GB per day and never hits the quota.

The other current options require much more effort: either log scraping, or requesting a very complex feature, since 403s are used for multiple errors and it's impossible to tell whether you hit the quota or something else.

Yes, as I said, it would only really work for a daily sync. It was meant as an alternative option.
Either method has large downsides: either you can't sync often, or you have to accept very poor bandwidth utilization. Ultimately we would need some new mechanism to do this optimally.

If the limit is 750GB per day, it doesn't matter how you spread your bandwidth use as long as you end up moving about 750GB per day, and that meets the OP's requirement of continuously uploading close to 750GB per day with minimal effort.
