If you have some spare bandwidth available for test purposes, here is a little test you can run to gather insights of your own. You will need to be able to upload faster than ~8.7MB/s sustained (750GB/day) to trigger the ban.
1) Create a new user for the test at https://admin.google.com/.
Otherwise, make sure the Google Drive account you are going to use for the test has not been uploaded to for at least 24 hours.
2) Create a new rclone config for the test; name it ban-size-test (leave client_id/client_secret blank).
$ rclone config
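For reference, the resulting section in rclone.conf should look roughly like this (a sketch only - the token is filled in by the interactive OAuth step, and the values shown here are placeholders):

```ini
[ban-size-test]
type = drive
client_id =
client_secret =
token = {"access_token":"...","token_type":"Bearer","refresh_token":"...","expiry":"..."}
```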
3) Transfer some bytes to the new drive - 15TB in 1GB sparse files should give us plenty of testing headroom.
The sparse files will occupy only ~100MB of local disk space (on ext4).
$ date >> ban-size-test.log
$ mkdir -p ban-size-test
$ for x in `seq -w 1 15000`; do dd if=/dev/zero of=ban-size-test/file-$x.GB bs=1k seek=1M count=1; done 2>&1 | tee -a ban-size-test.log
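If you want to convince yourself the files really are sparse before uploading, compare the apparent size with the blocks actually allocated on disk. A minimal sketch, assuming GNU coreutils stat and a filesystem with sparse-file support (e.g. ext4); the extra 1KB on top of 1GB comes from the single block dd writes after seeking:

```shell
#!/bin/sh
# Create one sparse file exactly as the loop above does, then inspect it.
mkdir -p ban-size-test
dd if=/dev/zero of=ban-size-test/file-00001.GB bs=1k seek=1M count=1 2>/dev/null
apparent=$(stat -c %s ban-size-test/file-00001.GB)                 # bytes, as rclone will see it
allocated=$(( $(stat -c %b ban-size-test/file-00001.GB) * 512 ))   # bytes actually on disk
echo "apparent: $apparent bytes, allocated: $allocated bytes"
```

The apparent size is 1073742848 bytes (1GB + 1KB), while only a few KB are allocated on disk.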
$ rclone copy ban-size-test ban-size-test:ban-size-test --transfers=30 --stats 10s -vvv 2>&1 | tee -a ban-size-test.log
Feel free to add e.g. --tpslimit 6 to the command line if you are testing with the newest rclone (v1.37). The end result will be the same.
4) Wait for the 403 error to appear in the output.
5) Kill the rclone process with ^C.
6) Inspect the test result: the ban size.
$ rclone lsl ban-size-test:ban-size-test -vvv 2>&1 | tee -a ban-size-test.log
$ rclone size ban-size-test:ban-size-test -vvv 2>&1 | tee -a ban-size-test.log
7) Restart the rclone copy process and observe how the ban is lifted and re-applied over the following days.
$ rclone copy ban-size-test ban-size-test:ban-size-test --transfers=30 --stats 10s -vvv 2>&1 | tee -a ban-size-test.log
8) Redo 5) and 6) once your patience runs out.
9) Inspect the ban-size-test.log log file. The timestamps of the transferred files should give you a good picture of how many GBs were transferred, and when.
$ less ban-size-test.log
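Rather than paging through the whole file, you can grep out the interesting bits. A minimal sketch below runs on a fabricated two-line excerpt just to show the idea; on a real run, point the greps at ban-size-test.log ("Copied (new)" is what rclone logs for each completed upload):

```shell
#!/bin/sh
# Fabricated log excerpt for illustration only - a real run uses ban-size-test.log.
cat > sample.log <<'EOF'
2017/07/01 10:00:01 INFO  : file-00001.GB: Copied (new)
2017/07/01 22:13:37 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User Rate Limit Exceeded, userRateLimitExceeded)
EOF
grep -c "Copied (new)" sample.log                               # number of completed uploads
grep -m1 "userRateLimitExceeded" sample.log | cut -d' ' -f1,2   # timestamp of the first 403
```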
10) Clean up.
$ rclone delete ban-size-test:ban-size-test -vvv 2>&1 | tee -a ban-size-test.log
$ rclone rmdirs ban-size-test:ban-size-test -vvv 2>&1 | tee -a ban-size-test.log
$ rm -rf ban-size-test
11) Delete the test user again (at https://admin.google.com/).
12) Consider posting your findings here.

My expectation is that you will find the same thing I keep finding: at exactly 750GB transferred, no new files will be accepted for upload, and the error we are discussing here will start showing up in the output:
DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User Rate Limit Exceeded, userRateLimitExceeded)
At this point, only the transfers that are already in progress will be allowed to finish. Since I am using --transfers=30 on the command line, the last file to complete is file-00779.GB. All the remaining transfers will stall, reported as 0% done, 0 Bytes/s, until the ban is lifted:
* file-00780.GB: 0% done, 0 Bytes/s, ETA: -
* file-00781.GB: 0% done, 0 Bytes/s, ETA: -
* file-00782.GB: 0% done, 0 Bytes/s, ETA: -
...
$ rclone size ban-size-test:ban-size-test
Total objects: 779
Total size: 779.001 GBytes (836445678592 Bytes)
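The 779 figure is consistent with the limit plus the in-flight transfers - this is my reading of the numbers, not anything Google documents:

```shell
# 750GB hard limit, plus the other 29 of the 30 in-flight 1GB
# transfers that are still allowed to finish.
echo $(( 750 + 29 ))   # 779
```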
If you are willing to accept the occasional ban, you may be able to use this to your advantage: get all those huge Linux ISOs lined up and start as many simultaneous transfers as possible. E.g. if the files are well above 50GB each and 200 concurrent transfers are running when the ban hits, you should still be able to upload ~10TB - whatever remains of the transfers already in progress. The maximum file size on Google Drive is 5TB, so for backup purposes this strategy could be elaborated quite a bit.
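Back-of-the-envelope, under the hypothetical assumption that all 200 transfers have only just started when the ban hits:

```shell
# 200 concurrent in-flight transfers of ~50GB each that are
# still allowed to finish after the 750GB ban kicks in.
echo "$(( 200 * 50 )) GB may still complete"   # 10000 GB, i.e. ~10TB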
So, back to the test conclusion. On one hand you could be experiencing an exact 750GB ban, and on the other hand another user would tell you “no way that can be true! I just uploaded XX terabytes today - and I’m still not banned, my transfers are still running?!” And you would both be right.