Google Drive very slow and way below quota

What is the problem you are having with rclone?

Hey folks - I've got two servers (different locations), both on 1 Gb/s internet connections, both with identical rclone configs. Each uses its own API credentials.

Both are running very slowly (around 400 KB/s) and have been for the last 3-4 days.

What is your rclone version (output from rclone version)

rclone v1.51.0
- os/arch: linux/amd64
- go version: go1.13.7

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Both systems are Ubuntu 18.04

Which cloud storage system are you using? (eg Google Drive)

Both use Google Drive as the backend

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rsync -avz --progress ~/google/largefile ~/

The rclone config contents with secrets removed.

[gsuite]
type = drive
client_id = redacted
client_secret = redacted
token = redacted
chunk_size = 16M
root_folder_id = redacted

[gcache]
type = cache
remote = gsuite:encmedia
plex_url = redacted
plex_username = redacted
plex_password = redacted
plex_token = redacted
chunk_size = 50M
info_age = 3d
chunk_total_size = 6G

Log from the rclone mount (run with -vv) while copying with rsync ~/local/file ~/google/

2020/07/20 15:39:07 DEBUG : pacer: Rate limited, increasing sleep to 16.353677425s
2020/07/20 15:39:09 DEBUG : pacer: low level retry 2/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=271515702996, userRateLimitExceeded)
2020/07/20 15:39:09 DEBUG : pacer: Rate limited, increasing sleep to 16.386304049s

hello,

as per the log, the problem seems to be:
Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly

https://developers.google.com/drive/api/v3/handle-errors#resolve_a_403_error_usage_limit_exceeded
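if the per-user rate limit keeps being hit, one option (just a sketch - the mount point and numbers are only illustrative, not something from your setup) is to throttle rclone's own API call rate with the global --tpslimit flag on whichever command is generating the traffic, e.g.:

# hedged example: cap API transactions to stay under the per-user quota
rclone mount gsuite:encmedia /path/to/mountpoint --tpslimit 8 --tpslimit-burst 16 -vv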

rsync -avz --progress ~/google/largefile ~/
are you using rclone or rsync?
both of those paths look like local paths, not the gsuite: or gcache: remotes?

are you sure you need that cache backend?
https://rclone.org/cache/#status
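for comparison, a minimal mount that skips the cache backend and uses the drive remote directly could look something like this (the mount point here is just an example path, not from your setup):

# direct mount of the drive remote, no cache backend in between
rclone mount gsuite:encmedia /mnt/gmedia --dir-cache-time 24h --vfs-cache-mode writes --log-level INFO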

thanks for the link on the rate limit exceeded error.
It is a 403 user rate limit exceeded error, but according to the API dashboard those users are way below the quota - that's the mystery for me.

I'm using rsync --progress to check the download speed of the rclone google drive backend.

how can you be using rsync to check rclone - are you using an rclone mount?

about rclone, what are the command(s) you are using?
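if you just want to gauge the raw speed of the backend, a quick test that bypasses the mount entirely could look something like this (the file name is only a placeholder):

# rough download speed test straight from the drive remote
rclone copy gsuite:encmedia/somefile /tmp -P -vv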

Yes, it's an rclone mount.
Sorry, I don't think I was very clear about that.

here's the command:

/usr/sbin/rclone mount gsuite:encmedia /Users/myuser/gclone --config=/Users/myuser/.config/rclone/rclone.conf --dir-cache-time 24h --drive-chunk-size 32M --log-level INFO --timeout 1h --umask 002 --rc --rc-user=myuser --rc-pass=apassword --rc-addr=10.75.1.20:5572 --rc-web-gui

so you are not using the gcache:?

not sure exactly what the problem is, but I find it helps to simplify the command, so that for every flag you use, you know you actually need it

in your config file, you have chunk_size = 16M
and
in your mount command, you have --drive-chunk-size 32M
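for what it's worth, the command-line flag should override the config value, so one way to avoid the confusion (just a sketch - keep whichever size you actually want) would be to drop chunk_size from the config and set it only on the command line:

[gsuite]
type = drive
client_id = redacted
client_secret = redacted
token = redacted
root_folder_id = redacted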

also, cannot hurt to update rclone to latest stable
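assuming rclone was installed from a binary or the install script (not via apt), the documented script will fetch the latest stable:

curl https://rclone.org/install.sh | sudo bash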

no worries, we have many gdrive experts and I am sure they will stop by to comment
@thestigma

Hey friends - I wanted to share that @asdffdsa's tip of upgrading from 1.51.1 to 1.52.0 did the trick!

I rebooted several times before upgrading, so I can fairly confidently confirm that the upgrade is what instantly fixed things. Not sure if the API changed or if I just refreshed the magic dust... but in the event that it helps anyone else, I'm marking this as solved.

yes, the new version of rclone has a hidden flag named --magic :upside_down_face:
