In the latest beta, v1.50.2-131 or later, I've added this flag. I think it should work, but testing by heavy uploaders would be appreciated:
--drive-stop-on-upload-limit
Make upload limit errors be fatal
At the time of writing it is only possible to upload 750GB of data to
Google Drive a day (this is an undocumented limit). When this limit is
reached Google Drive produces a slightly different error message. When
this flag is set it causes these errors to be fatal. These will stop
the in-progress sync.
Note that this detection relies on error message strings which
Google don't document, so it may break in the future.
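For anyone wanting to try it, a minimal invocation might look like this (the remote name and paths are placeholders for your own setup, not part of the flag's documentation):

```shell
# Sync to a Google Drive remote; once Google starts returning the daily
# 750GB upload-limit error, treat it as fatal and stop the in-progress sync.
# "gdrive:" and /data/backup are placeholder names.
rclone sync -v /data/backup gdrive:backup --drive-stop-on-upload-limit
```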
Using rclone v1.50.2-146-g3a1b41ac-beta doesn't work; it still shows this message and continues:
2020/01/18 22:09:07 ERROR : Path/to/folder: error reading destination directory: couldn't list directory: googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=XXXXXXXXXXXX, userRateLimitExceeded
You still haven't reached the capped state according to our tests. See the issue here for what the expected error message is when you get capped at 750 GB.
Works with non-server-side uploads? Yes
Works with server-side copy/sync? Yes
Works when account has full 750GB quota remaining? Yes
Works when account has partial quota (100-300GB) remaining? Yes
Works when account has zero quota remaining? Yes
This flag has proven to be very useful, especially if you want to reduce the error rate per hour.
I don't know if it always works though.
There are times when using this flag caused rclone to do nothing for a few hours after starting, with no logs at all even when using the -vv flag. But after a few hours it would start working normally by itself.
In my case, before I used this flag, whenever I went past the 750GB limit, with either server-side or non-server-side uploads, I always got 403 errors because each Google account is heavily used. For example, at around 700GB uploaded I would see a constant 403 userRateLimitExceeded error on about 20% of all requests for almost the whole day, and upload speed was throttled during this time (by Google?). Then at around 740GB the 403 userRateLimitExceeded error rate jumped to 100%, and stayed like that until the process was killed or the 750GB limit was reset.
So in the dashboard for each Google account, the graph was always full of 403 errors, because I use up the 750GB limit within just a few hours every day. The pattern of the 403 errors was also always the same: a 20% error rate for almost the whole day, spiking to 100% before the reset.
With this flag, once the transfer reaches 750GB and receives the error, it becomes a fatal error and kills the process. This minimizes the number of 403 errors I receive and results in a much cleaner Drive API dashboard. No more 403 errors all day long!
I wish this were turned on by default.
Together with the --tpslimit flag, this results in a big reduction in total 403 and 5xx errors.
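As a sketch of that combination (the rate value and paths are illustrative choices, not recommendations from this thread):

```shell
# Cap the overall request rate and stop with a fatal error at the daily
# upload limit. 10 transactions/second and the paths are placeholder values.
rclone move /staging gdrive:archive --tpslimit 10 --drive-stop-on-upload-limit
```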
As stated above, you can put it in your remote config. The flag works by reading error message text which isn't guaranteed to stay the same, so I don't want to make it the default yet.
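Putting it in the remote config rather than on the command line would look something like this (the remote name is a placeholder; the key name assumes rclone's usual mapping of backend flags to config keys by dropping the --drive- prefix):

```
[gdrive]
type = drive
# equivalent of --drive-stop-on-upload-limit, assuming the usual
# flag-to-config-key mapping
stop_on_upload_limit = true
```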
What is your use case for --no-traverse? For most operations --no-traverse will cause rclone to do more work, but for certain specific things it works really well.
I didn't know that --no-traverse would cause rclone to do more work; I thought that by not traversing it would do less work. My use case is like this:
I have folders, for example remote:/path/a. Each of these folders has hundreds of thousands of files and folders.
For each remote:/path/a I have several Google accounts doing rclone copy/move to that folder from several minimal servers that usually have around 128MB-512MB of RAM, a weak CPU and a 1Gbps connection.
The --no-traverse flag always yields better memory and CPU usage and is generally much faster than going without it. rclone copy/sync/move has always been faster and lighter on CPU and RAM with that flag.
--no-traverse works really well if you are copying a small number of files into a destination with a large number of files, so with rclone move or rclone copy --max-age 1d etc.
If you are trying to sync, though, I think --no-traverse is pretty much always slower.
Normally rclone reads the whole directory structure when syncing. If you use --no-traverse then rclone reads info about each file individually. If there are (say) 50 files in a directory, then --no-traverse will do 50 transactions whereas without it rclone will only do 1.
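The good case above, copying a few recent files into a large destination, might look like this (paths and remote name are placeholders):

```shell
# Only files modified in the last day are transfer candidates, so rclone
# can check each one individually instead of listing the huge destination.
rclone copy --max-age 1d --no-traverse /local/media gdrive:media
```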
Technically, as the flag name says, across configs. It means you can copy files server-side between rclone remotes, where the remotes can be either in the same account or in different accounts, provided all of your permissions are correct.
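Assuming the flag being discussed is --drive-server-side-across-configs, a server-side copy between two Drive remotes might be sketched as (remote names and paths are placeholders):

```shell
# Copy server-side between two different Drive remotes, possibly in
# different accounts, assuming the permissions on both sides allow it.
rclone copy gdrive-a:shared/folder gdrive-b:backup --drive-server-side-across-configs
```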