wondering if people are seeing this too? When I have an upload that goes over quota, it seems that gdrive allows the upload to continue, but then whatever rclone is doing at the end to "finish" the upload is getting API errors
i.e. I was watching 'file' transfer and it got to 0, then threw that error. This seems different from the older gdrive behavior, where a file that had started uploading could always be completed; you just couldn't start new uploads once your quota was exhausted.
OP is correct in his findings. I haven't posted about this, but it started happening about two weeks ago. Before the change, running uploads would always finish, even if they went past the 750GB daily limit. Since the change, running uploads will fail once you go past the limit. I assumed this is something that Google changed on their end rather than an issue with Rclone.
I always use the same command via RcloneBrowser, with --drive-chunk-size being the only flag I change, depending on the number of files being uploaded. For example, say I have 50 files ready to upload, 900GB in total. Before the change, all 50 files would finish uploading. Now, any file that has already started uploading but would go over the 750GB limit stops uploading (but will resume once the timer rolls over). Does that make sense?
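To make the old-versus-new difference concrete, here is a toy simulation (my own sketch, not rclone or Google code; the 18GB file size is made up to match the 50-file/900GB example above). It assumes every file starts transferring before the quota is hit, as happens with many simultaneous transfers:

```python
DAILY_QUOTA_GB = 750

def finished_counts(sizes_gb, new_behavior):
    """Return (completed, failed) file counts, assuming all files
    start uploading before the daily quota is reached."""
    committed = 0
    completed = failed = 0
    for size in sizes_gb:
        if new_behavior and committed + size > DAILY_QUOTA_GB:
            failed += 1        # new behavior: the final commit is rejected
        else:
            committed += size  # old behavior: started == allowed to finish
            completed += 1
    return completed, failed

sizes = [18] * 50  # 50 files, 18GB each = 900GB total
print(finished_counts(sizes, new_behavior=False))  # (50, 0) - all finish
print(finished_counts(sizes, new_behavior=True))   # (41, 9) - tail fails
```

Under the old rule the whole 900GB batch completes; under the new rule everything past the 750GB mark errors out at commit time, which matches what's being described.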
I don't have more than what I posted above. I can try building one later, but realize that it's difficult, as it doesn't trigger until you go over 750GB for the day.
I'm seeing the same thing with 1.51, first the uploads start failing with this error:
Failed to copy: googleapi: Error 403: User rate limit exceeded., userRateLimitExceeded
And then this error when rclone retries the transfer:
Failed to copy: googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=447300822720, userRateLimitExceeded
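Both 403s carry the same userRateLimitExceeded reason, so the only way to tell the daily upload limit apart from the per-project request-rate quota is to match on the message text. A rough sketch of that, using only the strings from the two logs above (this is my own heuristic, not an official API contract):

```python
import re

def classify_403(message: str) -> str:
    """Distinguish the two 403 variants seen in the logs above."""
    # The project request-rate quota message mentions the configured
    # project quota explicitly, so check for it first (both messages
    # contain some form of "user rate limit exceeded").
    if "Rate of requests for user exceed" in message:
        return "project-quota"
    if re.search(r"user rate limit exceeded", message, re.IGNORECASE):
        return "upload-limit"
    return "unknown"

print(classify_403(
    "Error 403: User rate limit exceeded., userRateLimitExceeded"))
# upload-limit
```

The ordering matters: checking the generic "user rate limit exceeded" pattern first would misclassify the project-quota message, since it contains that phrase too.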
Like OP, I transfer with many simultaneous connections (128), which can push uploads over 750GB, as anything started before the limit is hit is allowed to complete. I have been using this methodology for over a year, and had not had any issues with it finishing transfers until recently.
(the transfer line is a little different because of my hacked up code display code)
Makes me wonder if it only starts failing things after 750+GB of files have "completed". Will see. If so, I should have ~5-6 errors to look at when I wake up in the morning.
Had 4 errors (but one retried successfully) after I went to sleep (i.e. after 750GB had transferred but not yet completed). 3 errored out. Will try to post logs later.
Based on the fact that I was trying to upload 910GB and had 137GB remaining in the 3 files that errored out, it seems that it prevents "committing" (whatever that means) once one hits the 750GB commit level.
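The arithmetic in that post does line up with the commit theory (sizes in GB, taken from the numbers above):

```python
total = 910        # queued for upload
remaining = 137    # left unfinished in the 3 files that errored out
committed = total - remaining

print(committed)          # 773
print(committed > 750)    # True: consistent with commits being
                          # rejected past the 750GB daily limit
```

So roughly 773GB made it through before the 403s started, just past the 750GB mark, which is what you'd expect if the commit step is the thing being quota-checked.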
Oof, you are using the cache backend and doing the uploads that way? That makes it a bit harder to see what's going on, which is why the first post asks for the command; it saves a bit of everyone's time.
Is there a reason you are writing to a cache remote rather than a regular remote?