It seems like the limit is 750 GB per day, but uploads already in progress are allowed to finish at a throttled speed.
Google has never acknowledged any limits before, so I am not holding my breath.
I asked Google support and they answered:
“In the case that you see that the limit is not reached inside the Developer Console, then you are hitting drive backend limit. This limits are calculated with an algorithm to protect our system from abuse. I’d suggest to implement an exponential backoff solution as detailed here https://developers.google.com/drive/v3/web/handle-errors#exponential-backoff and experiment until you can find a balance that would suite your app needs and the server needs.”
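For anyone who hasn't seen it, the exponential backoff they link to boils down to something like this. A minimal Python sketch, where `do_upload` and the set of retryable status codes are my own assumptions rather than anything Google spells out:

```python
import random
import time

# Status codes usually worth retrying with backoff
# (rate-limit and server errors, per the Drive error-handling docs).
RETRYABLE = {403, 429, 500, 502, 503}

def upload_with_backoff(do_upload, max_retries=5):
    """Call do_upload() until it succeeds, backing off exponentially
    on retryable errors. do_upload is a stand-in for whatever issues
    the actual Drive API request and returns an HTTP status code."""
    for attempt in range(max_retries):
        status = do_upload()
        if status not in RETRYABLE:
            return status  # success, or an error that retrying won't fix
        # Wait 2^attempt seconds plus up to 1s of random jitter.
        time.sleep(2 ** attempt + random.random())
    raise RuntimeError("still failing after %d retries" % max_retries)
```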
So has anyone tried uploading more than around 800 GB in a day using the Google Drive web page?
Curious if this also results in a temporary upload ban.
I would try it, but my crap UK upload speed would probably not let me push enough data to cause a ban.
If this does result in a ban, we could go back to Google and say: hey, I was using your web app and it won't let me upload any more. That way at least they could not palm us off with a load of developer-specific questions.
After I had been banned, I tried uploading using the Google Drive website and the uploads just failed. So the ban was applied to the account rather than to an API key.
I was just looking for a better angle from which to approach Google: if you say you are using rclone, they point to that as the issue and ask loads of developer-type questions.
But if you can get banned just using the Google Drive website, then that takes rclone out of the equation, so they can't point to it as the cause.
But tbh, I don't really think Google will ever explain the limits they arbitrarily impose.
No, they won't. The 10 TB download limit has existed for a long time, yet there is not a single piece of official information about it, so don't expect them to come out and tell you about this new limit.
I just wish it would at least show you that you can't upload. When you get banned for downloads, the web UI tells you so, but for uploads there is nothing.
"Thanks for getting back to me. As I dig deeper into this, I was able to find some internal documented information. I’ll be sharing with you what I found.
There is a bandwidth limitation per viewer and per owner, and a limitation on the number of times a document can be viewed. The limits are 10TB/day and 50,000 views/day with bursts up to 900/min (15 QPS) per document. I believe this might be the drive back end limit you are reaching. Also, in June 2017 a quota for creating blobs of 100Gb/day was established. It’s possible to create files of bigger size, but after this quota is exceeded all subsequent blob creation operations will fail.
That is all the information I was able to get and I hope it is useful. If you have any other question around your G Suite account, please reply to this message and I will be happy to follow up with you. In the meantime, the case will continue to remain open."
I am monitoring Drive usage in the admin console. In three days I have uploaded 2275.55 GB, i.e. ~758.5 GB per day. It really looks like 750 GB per day, plus whatever uploads are active when the limit is reached.
Currently, there is a hard limit for write operations (create, update, delete) that is lower than your allowed QPS; it cannot be lifted, and raising the per-user limit will not improve the error rate. These QPS limits represent an upper bound on aggregate API calls, and short-term bursts over that limit are allowed. With this, I suggest that you slow down per-user operations and compensate by running more users in parallel to maximize throughput.
Also, aside from using a Service Account with authority delegation as suggested in this documentation, I would suggest that you consider the following strategies to optimize your app:
Batch API requests - allows your client to put several API calls into a single HTTP request (see the sketch after this list).
Push notifications - if you want to be notified of changes to a file rather than polling for them
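For what it's worth, the batching suggestion looks roughly like this with the official google-api-python-client. Just a sketch, assuming `creds` is an already-authorized credentials object and the file IDs are placeholders:

```python
from googleapiclient.discovery import build

# Assumes `creds` is an already-authorized google-auth credentials object.
service = build('drive', 'v3', credentials=creds)

def on_response(request_id, response, exception):
    # Called once per sub-request when the batch completes.
    if exception is not None:
        print('request %s failed: %s' % (request_id, exception))
    else:
        print('request %s ok, file id %s' % (request_id, response.get('id')))

# Bundle several metadata calls into one HTTP request.
batch = service.new_batch_http_request(callback=on_response)
for file_id in ['FILE_ID_1', 'FILE_ID_2', 'FILE_ID_3']:  # placeholders
    batch.add(service.files().update(fileId=file_id,
                                     body={'description': 'bulk update'}))
batch.execute()
```

Note that batching only helps with metadata calls; as far as I know you can't put media uploads in a batch, so it won't reduce the upload traffic itself.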
Thanks, @tdaniels - but what is the relationship between chunk size / multipart upload and the number of API calls? Do I need to use higher or lower values than the defaults?
I am mostly uploading large files (2-40 GB), so I am thinking a large --drive-chunk-size and a low cutoff?
If it isn't a GB/day ban, I'd speculate that a sufficiently high value could reduce the calls enough to allow you to upload more. I do recall rclone needing as much free memory as the value you set, though. In tandem, one could impose a resting period between file uploads to further increase the chances. The latter isn't something I believe rclone can do natively, so one would have to script something to that effect (rough sketch below).
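Something along these lines is what I mean. A rough Python wrapper around rclone; the directory, the remote name, the 60-second pause and the 256M chunk size are all made-up values for illustration:

```python
import subprocess
import time
from pathlib import Path

SRC = Path('/data/to-upload')  # hypothetical local directory
REMOTE = 'gdrive:backup'       # hypothetical rclone remote:path
PAUSE_SECS = 60                # arbitrary resting period between files

# One rclone invocation per file, with a pause in between.
for path in sorted(SRC.iterdir()):
    if not path.is_file():
        continue
    subprocess.run(
        ['rclone', 'copy', str(path), REMOTE,
         '--drive-chunk-size', '256M'],  # bigger chunks = fewer upload calls
        check=True)
    time.sleep(PAUSE_SECS)  # rest before starting the next file
```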
@Larskl I'm pretty certain it has nothing to do with the API quota. I had mine increased to 20,000 per 100 seconds and I never get anywhere near that. The highest I've ever had was 2,000, and that's once every few days at most.