Gdrive now "silently" fails uploads when they go over daily quota?

No good reason. I was actually thinking about that as a problem, mostly because I first set up a mount and then realized I couldn't move and mount at the same time, so I just copied it all.

With all that said, we need an option that says "only queue X amount of data and then exit" (not "die after 750GB"), as I want things in progress to finish and just want to limit the queue, though perhaps I misunderstand the current limit option.

The cache just adds overhead if you are copying/moving.

You should just copy directly to a remote.

That's how it's supposed to work, but without seeing what changed, either from Google or rclone, it's tough to debug. The cache log isn't really helpful; the cache backend is going away anyway, so it just makes figuring out what's going on more complex.

If you could copy to a non-cache remote and share a log, that would be great.

I have another one going (this will probably be a terrible failure, as it's obviously less than 24 hours since my previous one), but it was able to kick off 28 transfers of another 900GB archive. I removed the caching.

Here's my transfer from last night. I use a setup similar to @Animosity022's (MergerFS, rclone mount). I wrote a script that walks the local SSD for new files, sorts them by size, separates the largest N files into their own list, and pares down the remaining files to fit under 745GB. Those are transferred first, and then the largest files are transferred second. This is controlled with rclone's --files-from option (the script builds two text files listing the files to transfer).
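Roughly, the split works like this (a simplified sketch, not the actual script; the file names, sizes, and output paths here are made up):

package main

import (
    "fmt"
    "os"
    "sort"
)

// file pairs a path with its size in bytes.
type file struct {
    path string
    size int64
}

// splitLists sorts files largest-first, carves off the largest n into the
// second list, and greedily fills the first list with the rest, skipping
// anything that would push it past the byte budget.
func splitLists(files []file, n int, budget int64) (first, second []file) {
    sort.Slice(files, func(i, j int) bool { return files[i].size > files[j].size })
    if n > len(files) {
        n = len(files)
    }
    second = files[:n]
    var total int64
    for _, f := range files[n:] {
        if total+f.size > budget {
            continue
        }
        first = append(first, f)
        total += f.size
    }
    return first, second
}

// writeList writes one path per line, the format --files-from expects.
func writeList(name string, files []file) error {
    out, err := os.Create(name)
    if err != nil {
        return err
    }
    defer out.Close()
    for _, f := range files {
        fmt.Fprintln(out, f.path)
    }
    return nil
}

func main() {
    // Made-up input; the real script walks the local SSD for new files.
    files := []file{{"Movies/a.mkv", 40e9}, {"Movies/b.mkv", 70e9}, {"Movies/c.mkv", 5e9}}
    first, second := splitLists(files, 1, 745e9)
    writeList("run1.txt", first)  // first run: everything that fits under 745GB
    writeList("run2.txt", second) // second run: the largest N files
}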

Here's the log from last night. I only ran debug logging on the second transfer.

upload.20200623.tar.gz.log (270.5 KB)

I was asking for something slightly different, i.e. it seems Google's algorithm is something like:

const gb int64 = 1e9 // illustrative constant
var totalCommit int64

func commit(curSize int64) error {
    if totalCommit < 750*gb { // checked at commit time, not when the upload starts
        totalCommit += curSize
        return nil
    }
    return errors.New("over limit")
}

i.e. when building my queue, I want it to continue adding files as long as the current total is under 750GB. So if I'm at 745GB, it should be able to add a 50GB file, but then no more.
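Something like this, say (a hypothetical sketch of that rule, not an existing rclone option):

// Returns how many files, in order, to queue: keep adding while the running
// total is still under the limit, so the file that crosses the line is
// allowed, but nothing after it.
func buildQueue(sizes []int64, limit int64) int {
    n := 0
    var queued int64
    for _, s := range sizes {
        if queued >= limit {
            break
        }
        queued += s
        n++
    }
    return n
}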

Rclone doesn't build the entire queue of things to transfer, though, as that's done on the fly based on what you are copying/transferring, and there's no metadata or way to calculate or ask Google what you've already transferred and what's left.

I swear there is a setting to influence the order of things, but I'm not sure offhand; maybe @ncw remembers what it is.

Any transfer that's in flight should finish, but any new transfer that goes over the quota would fail. It's not quite as simple as that, though: if you have 748GB uploaded and you tried to upload a 10GB file, it would fail, but a 1GB file would work.

Forgot to mention, my log shows the files attempted past 750 GB failing to upload.

Hmm, with 9 transfers going on it's a bit tough to piece together, but on my first pass I think you uploaded the file that was in flight:

2020/06/23 07:24:08 INFO  : Movies/Movies/For Richer or Poorer 1997 tt0119142/For Richer or Poorer 1997 Remux-1080p.mkv: Deleted

and everything after that failed, so that seems to be what I'd expect. Would you expect something else?

Your transfers would continue to fail unless you used the new flag that stops uploading on error, which reads the 403 and stops rclone from continuing to try to upload.

Until recently, all 9 would succeed, not just the first one to pass the 750GB barrier. Isn't the idea that all in-flight transfers are allowed to complete, and no new transfers started afterwards would succeed?

Hmm, I don't know, to be honest. I would say if that's how you saw it before, that's how it was.

Did the behavior change on an update from 1.51? If you test with a lower or previously 'working' version, I'd be curious whether it's a rclone change or a Google change.

I'm not aware of a rclone change that would have affected this.

At first I thought it was an issue with 1.52, as I had updated to it a few days ago, but I reverted to 1.51 and the problem has remained. I used 1.51 for months, oftentimes uploading as much as 1700GB in a single day following the methodology I described above:

  • 128 transfers per run
  • 745 GB or less the first run
  • The largest 128 files in the second run

Cool. That helps as I felt confident it wasn't a rclone change.

@Harry - any thoughts on what to look at? You spent some time on the 403 work.

I'm fairly positive it's that they don't let files commit after the current commit level passes 750GB. So one could upload a single 4TB file (as the commit value would be 0 when it goes to commit), but a 1GB file, when the commit value is 751GB, will fail to commit even if it started when the commit value was 0. That's what changed on Google's end.
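To spell that out against the commit sketch from earlier (numbers made up):

// totalCommit = 0:     commit(4TB file) -> nil; totalCommit becomes 4TB
// totalCommit = 751GB: commit(1GB file) -> "over limit", even though the
//                      upload started while totalCommit was still 0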

And in my second test, even the first file to finish couldn't complete; it was failing with

2020/06/23 20:35:34 DEBUG : pacer: low level retry 8/10 (error googleapi: Error 403: User rate limit exceeded., userRateLimitExceeded)

so I killed it

Strangely, it was able to restart the transfer :confused: (i.e. I saw it back at 0% before I killed it).

Would that mean that --max-transfer=750G --cutoff-mode=soft would not work for transfers > 1?

It had to happen eventually. Like @qweasdzxc787, I would sometimes upload close to twice the 750GB daily limit simply by setting "transfers 999". That would make all files start uploading at the same time. Oh well. Let's hope G doesn't take away anything else.

Haha. Yes. I'm afraid the unlimited storage will be killed by Google (like Microsoft did with OneDrive) sooner or later because of the way people started to abuse it.

I don't consider what we're discussing here abuse, though. I draw the line at anything that actually circumvents the daily limit by using service accounts, etc. Also, I am absolutely willing to pay the 60 bucks a month, should it come to that. I was lucky enough to join GSuite when there was no upload limit at all.

It was literally in their support pages:

https://support.google.com/a/answer/172541

Individual users can only upload 750 GB each day between My Drive and all shared drives. Users who reach the 750-GB limit or upload a file larger than 750 GB cannot upload additional files that day. Uploads that are in progress will complete. The maximum individual file size that you can upload or synchronize is 5 TB.

I just took them at their word and built my transfer scheme around it.
