New flag for Google Drive --drive-stop-on-upload-limit to stop at the 750 GB limit

I've added this flag in the latest beta, v1.50.2-131 or later. I think it should work, but testing by heavy uploaders would be appreciated :slight_smile:

--drive-stop-on-upload-limit

Make upload limit errors be fatal

At the time of writing it is only possible to upload 750GB of data to
Google Drive a day (this is an undocumented limit). When this limit is
reached Google Drive produces a slightly different error message. When
this flag is set it causes these errors to be fatal. These will stop
the in-progress sync.

Note that this detection is relying on error message strings which
Google don't document so it may break in the future.
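The detection boils down to matching the error text rclone gets back from the API. A rough sketch of the idea in shell (the exact 403 message shown here is illustrative, since Google doesn't document it, which is exactly why this can break):

```shell
# Illustrative only: the exact 403 text Google returns is undocumented
# and may change, which is why this kind of detection is fragile.
err='googleapi: Error 403: User rate limit exceeded., userRateLimitExceeded'
if printf '%s' "$err" | grep -qi 'user rate limit exceeded'; then
  echo "daily upload quota hit - stopping sync"
fi
```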

See: https://github.com/rclone/rclone/issues/3857

  • Config: stop_on_upload_limit
  • Env Var: RCLONE_DRIVE_STOP_ON_UPLOAD_LIMIT
  • Type: bool
  • Default: false
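For anyone trying it out, any of the three forms above enables it; for example via the environment variable (the "gdrive:" remote name in the comment is just an example):

```shell
# "gdrive:" is an example remote name; any of these enables the new check.
export RCLONE_DRIVE_STOP_ON_UPLOAD_LIMIT=true
# ...or equivalently on the command line:
# rclone copy /local/media gdrive:media --drive-stop-on-upload-limit -v
echo "$RCLONE_DRIVE_STOP_ON_UPLOAD_LIMIT"
```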

Remember: Google counts normal uploads and server-side copies separately, so you can do 750 GB of normal uploads plus 750 GB of server-side copies a day.

Also note that Google starts to throw quota errors from around 748 GB, and the error rate slowly increases if you try to upload more.

Google doesn’t mind if you try to upload 5-6 small files even if you reach your daily quota.


Using rclone v1.50.2-146-g3a1b41ac-beta it doesn't work; it still shows this message and continues:
2020/01/18 22:09:07 ERROR : Path/to/folder: error reading destination directory: couldn't list directory: googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=XXXXXXXXXXXX, userRateLimitExceeded

Can you run the full command now with the flag and -vv on and share the output?

The error you have above is a normal slow-down message from listing a directory and does not appear to be related to the upload quota.

You still haven't reached the capped state according to our tests. See the issue here for what the expected error message is when you get capped at 750 GB.

A few of the 403 messages are normal BTW.

Quick testing:

  • Works with non-server-side uploads? Yes
  • Works with server-side copy/sync? Yes
  • Works when the account has the full 750 GB quota remaining? Yes
  • Works when the account has partial quota (100-300 GB) remaining? Yes
  • Works when the account has zero quota remaining? Yes

@ncw Thank you! Outstanding addition :+1:


:slight_smile: Thanks for testing! I think this flag is quite likely to get broken by changes at Google so we'll need to keep an eye on it!


This flag has proven to be very useful!
Especially if you want to reduce your error rate per hour.
I don't know if it always works, though.
There have been times when using this flag caused rclone to do nothing for a few hours after starting, with no logs at all even with the -vv flag, but after a few hours it started working normally by itself.

In my case, before I used this flag, whenever I uploaded past the 750 GB limit, whether server-side or non-server-side, I always got 403 errors, because each Google account is heavily used. For example, once I got near 700 GB uploaded, I would see constant 403 userRateLimitExceeded errors for almost the whole day, at a steady 20% of all requests, and upload speed was throttled at that point (by Google?). Then at around 740 GB the 403 userRateLimitExceeded error rate jumped to 100%, and stayed that way until the process was killed or the 750 GB limit reset.
So in the dashboard for each Google account, the graph was always full of 403 errors, because I use up the 750 GB limit within a few hours every day. The pattern of the 403 errors was always the same too: a 20% error rate for most of the day, spiking to 100% before the reset.

With this flag, it looks like once rclone reaches 750 GB and receives the error, it becomes a fatal error and the process stops. This minimizes the number of 403 errors I had been receiving and results in a much cleaner Drive API dashboard. No more 403 errors all day long!
I wish this were turned on by default.
Together with the --tpslimit flag, this gives a big reduction in total 403 and 5xx errors.
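For reference, the combination I mean looks something like this (the paths, remote name, and tps number are all just examples, not recommendations):

```shell
# All names and numbers illustrative; shown as a string rather than run here.
cmd='rclone move /srv/uploads gdrive:backup --drive-stop-on-upload-limit --tpslimit 10'
echo "$cmd"
```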

He thought about it but was nervous to do so; you can read the discussion on GitHub.

But you can make the flag the default using a config variable.

How can I do that?

IMHO this flag and --no-traverse should be turned on by default @ncw

Add

stop_on_upload_limit = true

to your gdrive remote in the config file. If you have multiple gdrive remotes, you need to add it to all of them.
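So a remote section would end up looking something like this (the remote name and other values are placeholders, not from my actual config):

```ini
# rclone.conf - illustrative; repeat the line for each gdrive remote
[gdrive]
type = drive
scope = drive
stop_on_upload_limit = true
```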

--no-traverse doesn't have a config variable yet, so you need to use the flag.

Great testing - thank you :slight_smile:

As stated above you can put it in your remote config. The flag works by reading error message text which isn't guaranteed to stay the same so I don't want to make it the default yet.

What is your use case for --no-traverse? For most operations --no-traverse will cause rclone to do more work, but for certain specific things it works really well.

I didn't know that --no-traverse would cause rclone to do more work; I thought that by not traversing it would do less work. My use case is like this:
I have folders, for example remote:/path/a. Each of these folders has hundreds of thousands of files and folders.
For each remote:/path/a I have several Google accounts doing rclone copy/move to that folder from several minimal servers that usually have around 128 MB-512 MB of RAM, a weak CPU, and a 1 Gbps connection.
The --no-traverse flag has always yielded better memory and CPU usage and is generally much faster than going without it. rclone copy/sync/move has always been faster or lighter on CPU and RAM with that flag.

--no-traverse works really well if you are copying a small number of files into a destination with a large number of files, so with rclone move or rclone copy --max-age 1d etc

If you are trying to sync though I think --no-traverse is pretty much always slower.

Normally rclone reads the whole directory structure when syncing. If you use --no-traverse then rclone reads info about each file individually. If there are (say) 50 files in a directory, then --no-traverse will do 50 transactions, whereas without it rclone will only do 1.
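As a back-of-the-envelope sketch of that transaction count (numbers purely illustrative):

```shell
# Syncing one directory containing 50 files:
files=50
with_no_traverse=$files   # one per-file metadata lookup each
without_flag=1            # a single directory listing covers all of them
echo "with --no-traverse: $with_no_traverse requests; without: $without_flag"
```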


Server side copy as in copying files between drive accounts?

Yes, if you use the --drive-server-side-across-configs flag.
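For example, something like this, where gdrive1: and gdrive2: are placeholder remote names configured for the two accounts (shown as a string here rather than actually run):

```shell
# Placeholder remote names for the two accounts; adjust to your own config.
cmd='rclone copy gdrive1:media gdrive2:media --drive-server-side-across-configs -v'
echo "$cmd"
```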


Technically, as the flag says, it works across configs: you can copy files server-side between rclone remotes, where the remotes can be in the same account or in different accounts, provided all of your permissions are correct.


This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.