So I'm trying to copy the contents of "folder" to "folder2" within the same drive.
I'm using just the basic rclone copy "drive:/folder" "drive:/folder2" -vv -P
I'm getting errors which I assume are from the 750GB upload limit, but the problem is I haven't uploaded anything yet today. How long does this take to reset? It's been 12 hours since this started.
There is an upload quota of 750GB per 24 hours and a download quota of 10TB per 24 hours. The download quota is easier to identify because the error message says as much.
You should also use your own API key if you are not already:
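If you're not sure whether you're on your own API key, the remote's entry in rclone.conf should contain your own client_id and client_secret. A sketch of what that section might look like (the remote name "drive" is taken from the OP's command; the credential values are placeholders):

```ini
# rclone.conf excerpt -- placeholder values, not real credentials
[drive]
type = drive
client_id = YOUR_CLIENT_ID.apps.googleusercontent.com
client_secret = YOUR_CLIENT_SECRET
scope = drive
```

You can check which config file rclone is reading with `rclone config file`.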
I’m so confused as to how the download limit was breached. I haven’t been downloading any of the files; I’ve just been moving them from a shared folder to my own drive, both on the same account.
I’m using my own client_id, but it doesn’t seem to correct this issue.
Those are pacer errors, i.e. transactions-per-second throttling. Nothing to do with upload or download limits. Completely normal with gdrive; it's just rclone negotiating an acceptable rate of transactions.
What happens when you let the command run?
Add --tpslimit=4 --tpslimit-burst=40 if you want to eliminate most of the pacer errors.
Stop using -vv. Use -v. It will freak you out less.
Add --drive-server-side-across-configs=true. You shouldn't need it on the same drive but it doesn't hurt.
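Putting the three suggestions above together, the command might look something like this (remote and folder names taken from the OP's original post):

```shell
# Throttle transactions to suppress most pacer errors, allow
# server-side copies, and use -v instead of -vv for quieter output.
rclone copy "drive:/folder" "drive:/folder2" \
  --tpslimit=4 --tpslimit-burst=40 \
  --drive-server-side-across-configs=true \
  -v -P
```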
That's not correct. You also get those errors when you hit your quota for the day. That's the challenge with 403s: they can mean you're being rate limited for too many transactions per second, AND they can mean you've hit your daily quota.
The first flag would actually make things slower than necessary, since Google's default quota allows 10 tps. Bursting to 40 would do nothing because the limit is 10 tps, so that isn't needed either. The default pacer values work fine and nothing else is really needed.
This would allow you to do server-side copies, as it's off by default. Turning it on wouldn't change much other than doing the copy server-side instead of using your local bandwidth, and it consumes the same quota.
If you are running with -vv and still seeing nothing upload, someone or something is using your daily quota. You'd want to figure out what, or your drive isn't unlimited and a limit is being reached.
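One way to sanity-check whether the drive's storage is being consumed is rclone's `about` subcommand, which reports the remote's usage (the exact fields shown depend on the backend; the remote name "drive" matches the OP's setup):

```shell
# Show total, used, and trashed space as reported by Google Drive
rclone about drive:
```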
I stand corrected. I should have said "are not necessarily related to quota limits". I was in a rush and should have taken the time to be more accurate. Didn't mean to step on your toes.
I didn't suggest that it would make things faster or that it should be used permanently. Setting --tpslimit=10 or less generally suppresses extraneous rate limit errors that are related to tps quota, which can make it easier to see more meaningful error messages. And yes 10 is the default quota. If other running processes may be using quota then setting it to something well below 10 (like 3, 4, 5) during testing means, again, you don't get as many extraneous messages. In the OP's last paste the tps errors are absent.
I didn't say it would change much. Nor did I suggest that it would change quota. It does however, as you point out, move the transactions off of your local server. When running tests it can make iterative testing and transfers go a bit faster. On very rare occasions if something in the local setup is impeding DL/UL then trying a server side copy highlights that a local issue exists. It is also not a terrible flag to know about for new users (until/if ss copy is turned back on as a default).
Hopefully by now the OP's issue has gone away. What triggered me to offer some alternatives was that the progress chart showed 0 transferred. Typically when you hit the 750G limit there is still some upload activity, even if minimal. Here is an example where I intentionally used all the quota, then repeatedly initiated rclone copy. Each time I ran rclone it would show some upload, not 0. This is the 10th time running rclone copy after the full 750G limit was hit: