Gdrive now "silently" fails uploads when they go over daily quota?

Wondering if other people are seeing this too? When I have an upload that goes over quota, it seems that Gdrive allows the upload to continue, but then whatever rclone does at the end to "finish" the upload gets API errors.

e.g.:

2020/06/22 01:27:34 ERROR : file put: error uploading: googleapi: Error 403: User rate limit exceeded., userRateLimitExceeded
2020/06/22 01:27:43 ERROR : file: put: error uploading: googleapi: Error 403: User rate limit exceeded., userRateLimitExceeded
2020/06/22 01:27:53 ERROR : file: put: error uploading: googleapi: Error 403: User rate limit exceeded., userRateLimitExceeded
2020/06/22 01:28:02 ERROR : file: put: error uploading: googleapi: Error 403: User rate limit exceeded., userRateLimitExceeded
2020/06/22 01:28:11 ERROR : file: put: error uploading: googleapi: Error 403: User rate limit exceeded., userRateLimitExceeded
2020/06/22 01:28:24 ERROR : file: put: error uploading: googleapi: Error 403: User rate limit exceeded., userRateLimitExceeded
2020/06/22 01:28:33 ERROR : file: put: error uploading: googleapi: Error 403: User rate limit exceeded., userRateLimitExceeded
2020/06/22 01:28:41 ERROR : file: put: error uploading: googleapi: Error 403: User rate limit exceeded., userRateLimitExceeded
2020/06/22 01:28:50 ERROR : file: put: error uploading: googleapi: Error 403: User rate limit exceeded., userRateLimitExceeded
2020/06/22 01:29:02 ERROR : file: put: error uploading: googleapi: Error 403: User rate limit exceeded., userRateLimitExceeded
2020/06/22 01:29:02 ERROR : file: Failed to copy: googleapi: Error 403: User rate limit exceeded., userRateLimitExceeded
2020/06/22 01:29:02 ERROR : file: Not deleting source as copy failed: googleapi: Error 403: User rate limit exceeded., userRateLimitExceeded

i.e. I was watching 'file' transfer and it got to 0 remaining, then threw that error. This seems different from the older Gdrive behavior, where a file that had started uploading could always be completed; you just couldn't start new uploads once your quota was exhausted.

There are different quotas: one for upload and one for download. The one you have above looks to be the upload quota.

You'd have to share what you are running: command, version, etc.

Just use:

--drive-stop-on-upload-limit

If you are using copy or sync, that will stop the process if you hit the daily quota.
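
Something like this, as a sketch (the source path and remote name here are just placeholders, substitute your own):

# makes the daily upload limit 403 fatal so the run stops instead of retrying
rclone sync /local/media gdrive:backup --drive-stop-on-upload-limit -v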

OP is correct in his findings. I haven't posted about this, but it started happening about two weeks ago. Before the change, running uploads would always finish, even if they went past the 750GB daily limit. Since the change, running uploads will fail once you go past the limit. I assumed this is something that Google changed on their end rather than an issue with Rclone.


What is the command you are running though?
What version?
Can you share a debug log?
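
A debug log is just your usual command with -vv and a log file added, e.g. (the source, remote, and log path here are only examples):

rclone move /source gdrive:dest -vv --log-file=/tmp/rclone-debug.log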

I always use the same command via RcloneBrowser, with --drive-chunk-size being the only flag I change depending on the actual number of files being uploaded. For example, I have 50 files ready to upload. A total of 900GB. Before the change, all 50 files would finish uploading. Now, any file that has already started uploading but would go over the 750GB limit stops uploading (but will resume once the timer rolls over). Does that make sense?

move --ignore-existing --verbose --transfers 999 --checkers 8 --bwlimit 95M --contimeout 60s --timeout 300s --retries 3 --low-level-retries 10 --drive-chunk-size 128M --stats 1s

These are pretty much all RcloneBrowser defaults, except for the 999 transfers, the speed limit, and the drive chunk size.
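
Spelled out with a hypothetical source and destination (RcloneBrowser fills those in for me, so the paths and remote name below are made up):

rclone move /local/staging gdrive-remote:uploads --ignore-existing --verbose --transfers 999 --checkers 8 --bwlimit 95M --contimeout 60s --timeout 300s --retries 3 --low-level-retries 10 --drive-chunk-size 128M --stats 1s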

In my opinion, this is something that the big G changed recently. Basically, it's a hard limit now instead of a soft one.

I'd have to test with a file greater than 750GB.

https://support.google.com/a/answer/7338880?hl=en

But their documentation says it should be able to finish, even if one goes over quota.

We need the info to see if something in rclone changed or not though.

So if you can share the version, command, debug log, etc.

I'm using an old version (v1.50.2); nothing has changed in how I use it.

I use it via cache and crypt.

~/rclone move -P --rc --delete-empty-src-dirs . gcrypt1:/ToSort --transfers 100

Can you please share a debug log?
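
If it helps, adding -vv and a log file to your existing move line should capture it, e.g. (the log filename is just an example):

~/rclone move -P --rc --delete-empty-src-dirs . gcrypt1:/ToSort --transfers 100 -vv --log-file=quota-test.log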

I don't have more than what I posted above. I can try building one later, but realize that it's difficult, as it doesn't trigger until you go over 750GB for the day.

I'm seeing the same thing with 1.51. First the uploads start failing with this error:

Failed to copy: googleapi: Error 403: User rate limit exceeded., userRateLimitExceeded

And then this error when rclone retries the transfer:

Failed to copy: googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=447300822720, userRateLimitExceeded

Like OP, I transfer with many simultaneous connections (128), which can allow you to upload over 750GB, as anything started before the limit is hit is allowed to complete. I have been using this approach for over a year and have not had any issues with transfers finishing until recently.

Gotcha.

Still need a debug log though.

Running a 24-transfer test of 900GB with debug logging (it's going to be a massive log).

If you can, split it up: upload ~725GB first, then start a new run with the last part.

We just need to see what's happening.
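
If --max-transfer applies to your setup, capping the first run would approximate that split, something like this (the cap size and log name are just examples):

# stops the run once roughly 725G has been transferred
rclone move . gcrypt1:/ToSort --transfers 100 --max-transfer 725G -vv --log-file=part1.log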

Too late :blush: If this doesn't help, there will be plenty more opportunities in the future.

So, a file just completed fine after more than 750GB had been transferred:

Transferred: 773.477G / 909.271 GBytes, 85%, (46.986 MBytes/s,45.343 MBytes/s), ETA 49m19s

(the transfer line looks a little different because of my hacked-up display code)

Makes me wonder if it only starts failing things after 750+GB of files have "completed". Will see; if so, I should have ~5-6 errors to look at when I wake up in the morning.

I'm running my usual transfer tonight, I'll have a debug log in the morning. 900GB to be transferred.

had "4" errors (but one retried successfully) after I went to sleep (i.e. after 750GB transferred but not completed). 3 errored out. will try to post logs later.

Based on the fact that I was trying to upload 910GB and had 137GB remaining in the 3 files that errored out, it seems that it prevents "committing" (whatever that means) once one hits the 750GB level.


Oof, you are using the cache backend and doing the uploads that way? That makes it a bit harder to see what's going on, which is why the first post asks for the command; it saves a bit of everyone's time.

Is there a reason you are writing to a cache remote rather than a regular remote?