userRateLimitExceeded while using --fast-list on Shared Google Drive

What is the problem you are having with rclone?

I can't copy files from a shared Google Drive to local storage when using the --fast-list option. I'm getting excessive userRateLimitExceeded errors. I've attached logs for comparison. We are using our own service account and are definitely not hitting any quotas.

Run the command 'rclone version' and share the full output of the command.

rclone v1.58.0

  • os/version: debian 10.12 (64 bit)
  • os/kernel: 5.4.0-0.bpo.2-cloud-amd64 (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.17.8
  • go/linking: static
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy --config /srv/gdrive-backup/.config/rclone/rclone.conf -vvv --fast-list Accounting-and-control-documentation: /srv/gdrive-backup/data/Accounting-and-control-documentation

The rclone config contents with secrets removed.

[Accounting-and-control-documentation]
type = drive
scope = drive.readonly
service_account_file = /srv/gdrive-backup/config/gsuite-secret.json
team_drive = REDACTED

A log from the command with the -vv flag

rclone log with --fast-list (not working)
rclone log without --fast-list (working)

The 403s are API quota issues.

You'll see them in the Google Admin console.

The API quota errors appear at the end of both the working and the non-working logs.

You'd have to take a peek in the Admin console and see what your error rates are and share that info.

Tip: --fast-list isn't always the fastest; it depends on the characteristics of your data. You may see better speed and fewer pacer issues by replacing --fast-list with something like this: --checkers=16 --drive-pacer-min-sleep=10ms
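Applied to the command from earlier in this thread, the replacement would look something like this (same config path and paths as above; a sketch of the suggestion, not a guaranteed drop-in):

```sh
rclone copy --config /srv/gdrive-backup/.config/rclone/rclone.conf -vvv \
  --checkers=16 --drive-pacer-min-sleep=10ms \
  Accounting-and-control-documentation: /srv/gdrive-backup/data/Accounting-and-control-documentation
```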

The problem is that when I use the --fast-list option there are only errors about quota; nothing is copied from that share even if I leave it running for hours. It happens with every shared drive I try. It worked earlier without a problem.

Here are the 403s after I started rclone with --fast-list:

This looks bad:


Looks like the drive.drives.get method is causing the problem while using --fast-list in my case.

Thank you. I have tested this and it looks promising. I will explore that!
Comparison:
vanilla rclone took ~6h
--checkers=16 --drive-pacer-min-sleep=10ms took ~1.5h

It depends on how you define the problem: you are hitting a quota issue because rclone is trying to make more API calls than your quota allows.

@Ole's suggestion of tuning your API hits per second is the best way to address that, as you are having a quota issue because rclone is sometimes too fast :slight_smile:

Glad to hear!

It often takes some testing to find the optimum, so you may want to play a bit with the values.

This could be even faster or much slower due to throttling (quota limitations):

--checkers=64 --drive-pacer-min-sleep=0ms

It heavily depends on your data, other usage and quotas. Happy testing!
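For example, applying the more aggressive settings to the same command would look like this (same config and paths as above; whether this is faster or slower depends entirely on your quotas and throttling):

```sh
rclone copy --config /srv/gdrive-backup/.config/rclone/rclone.conf -vvv \
  --checkers=64 --drive-pacer-min-sleep=0ms \
  Accounting-and-control-documentation: /srv/gdrive-backup/data/Accounting-and-control-documentation
```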

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.