Rclone mount of Google Drive - aggressive rate limiting by Google

What is the problem you are having with rclone?

I am using rclone mount (see the exact command below). I first tried the command with just the basic options (--daemon, --vfs-cache-mode full, --cache-dir, --log-file, --log-level) and then with various combinations and values of the performance-related flags that follow. I have not tried the flags that are commented out at the very bottom, but listed them in case you recommend I use some of them. I saw the same problem (getting rate-limited by Google) with every variation of flags I tried. I was hoping that by using --fast-list and --no-seek, lowering --drive-pacer-burst from 8 to 1, increasing --drive-pacer-min-sleep from 100ms to 200ms, increasing --drive-chunk-size from 8Mi to 128Mi, reducing --transfers from 4 to 1 and reducing --checkers from 8 to 1, I would not be rate-limited, even if latency increased.

/usr/bin/rclone mount "$RCLONE_REMOTE" "$LOCAL_MOUNT_DIR" \
	--daemon \
	--vfs-cache-mode full \
	--config="$RCLONE_CONFIG_FILE" \
	--cache-dir="$RCLONE_CACHE_DIR" \
	--log-file="$LOG_FILE" \
	--log-level "$LOG_LEVEL" \
	--fast-list \
 	--no-seek \
 	--drive-pacer-burst 1 \
 	--drive-pacer-min-sleep 200ms \
 	--drive-chunk-size 128Mi \
 	--transfers 1 \
 	--checkers 1 \
 	--buffer-size 16Mi \
  	--vfs-read-ahead 4Gi \
  	--vfs-read-chunk-streams 1 \
  	--vfs-read-chunk-size 128Mi \
  	# --vfs-read-chunk-size-limit 512Mi \
  	# --tpslimit 1 \
  	# --tpslimit-burst 1 \
  	# --max-read-ahead 128Ki \
	# --no-modtime \
 	# --vfs-fast-fingerprint \
 	# --no-checksum \
  • Most of my files are 10–100 KB. Many are on the order of <10 MB. I have just one large file, which is 2 GB.
  • I do not have/stream videos, nor have/run executables from the mount.
  • I manually browse my mount (no scripts etc.)
  • My usual (non-heavy) operations with Google Drive are uploading, opening, deleting, renaming and moving files, and creating, opening, renaming and moving folders. However, I have not done any uploads, deletes, renames or moves of files since I made the mount, nor have I created, renamed or moved any folders. I was getting rate-limiting errors just casually navigating the folder structure and opening the KB- and MB-sized files.

Run the command 'rclone version' and share the full output of the command.

$ rclone --version
rclone v1.68.1
- os/version: debian 12.1 (64 bit)
- os/kernel: 6.1.0-26-amd64 (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.23.1
- go/linking: static
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Google Drive (personal, non-team/shared)

The command you were trying to run (eg rclone copy /tmp remote:tmp)

/usr/bin/rclone mount "$RCLONE_REMOTE" "$LOCAL_MOUNT_DIR" \
	--daemon \
	--vfs-cache-mode full \
	--config="$RCLONE_CONFIG_FILE" \
	--cache-dir="$RCLONE_CACHE_DIR" \
	--log-file="$LOG_FILE" \
	--log-level "$LOG_LEVEL" \
	--fast-list \
 	--no-seek \
 	--drive-pacer-burst 1 \
 	--drive-pacer-min-sleep 200ms \
 	--drive-chunk-size 128Mi \
 	--transfers 1 \
 	--checkers 1 \
 	--buffer-size 16Mi \
  	--vfs-read-ahead 4Gi \
  	--vfs-read-chunk-streams 1 \
  	--vfs-read-chunk-size 128Mi \
  	# --vfs-read-chunk-size-limit 512Mi \
  	# --tpslimit 1 \
  	# --tpslimit-burst 1 \
  	# --max-read-ahead 128Ki \
	# --no-modtime \
 	# --vfs-fast-fingerprint \
 	# --no-checksum \

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

$ rclone config redacted 
[gdrive-remote]
type = drive
scope = drive
token = XXX
team_drive = 

A log from the command that you were trying to run with the -vv flag

2024/10/26 15:34:29 INFO  : vfs cache: cleaned: objects 25 (was 25) in use 0, to upload 0, uploading 0, total size 556.985Mi (was 556.985Mi)
2024/10/26 15:34:29 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: Quota exceeded for quota metric 'Queries' and limit 'Queries per minute' of service 'drive.googleapis.com' for consumer 'project_number:202264815644'.
Details:
[
  {
    "@type": "type.googleapis.com/google.rpc.ErrorInfo",
    "domain": "googleapis.com",
    "metadata": {
      "consumer": "projects/202264815644",
      "quota_limit": "defaultPerMinutePerProject",
      "quota_limit_value": "420000",
      "quota_location": "global",
      "quota_metric": "drive.googleapis.com/default",
      "service": "drive.googleapis.com"
    },
    "reason": "RATE_LIMIT_EXCEEDED"
  },
  {
    "@type": "type.googleapis.com/google.rpc.Help",
    "links": [
      {
        "description": "Request a higher quota limit.",
        "url": "https://cloud.google.com/docs/quotas/help/request_increase"
      }
    ]
  }
]
, rateLimitExceeded)
2024/10/26 15:34:29 DEBUG : pacer: Rate limited, increasing sleep to 1.03296117s
2024/10/26 15:34:29 DEBUG : pacer: low level retry 2/10 (error googleapi: Error 403: Quota exceeded for quota metric 'Queries' and limit 'Queries per minute' of service 'drive.googleapis.com' for consumer 'project_number:202264815644'.
Details:
[
  {
    "@type": "type.googleapis.com/google.rpc.ErrorInfo",
    "domain": "googleapis.com",
    "metadata": {
      "consumer": "projects/202264815644",
      "quota_limit": "defaultPerMinutePerProject",
      "quota_limit_value": "420000",
      "quota_location": "global",
      "quota_metric": "drive.googleapis.com/default",
      "service": "drive.googleapis.com"
    },
    "reason": "RATE_LIMIT_EXCEEDED"
  },
  {
    "@type": "type.googleapis.com/google.rpc.Help",
    "links": [
      {
        "description": "Request a higher quota limit.",
        "url": "https://cloud.google.com/docs/quotas/help/request_increase"
      }
    ]
  }
]
, rateLimitExceeded)
2024/10/26 15:34:29 DEBUG : pacer: Rate limited, increasing sleep to 2.702414426s
2024/10/26 15:34:30 DEBUG : pacer: low level retry 3/10 (error googleapi: Error 403: Quota exceeded for quota metric 'Queries' and limit 'Queries per minute' of service 'drive.googleapis.com' for consumer 'project_number:202264815644'.
Details:
[
  {
    "@type": "type.googleapis.com/google.rpc.ErrorInfo",
    "domain": "googleapis.com",
    "metadata": {
      "consumer": "projects/202264815644",
      "quota_limit": "defaultPerMinutePerProject",
      "quota_limit_value": "420000",
      "quota_location": "global",
      "quota_metric": "drive.googleapis.com/default",
      "service": "drive.googleapis.com"
    },
    "reason": "RATE_LIMIT_EXCEEDED"
  },
  {
    "@type": "type.googleapis.com/google.rpc.Help",
    "links": [
      {
        "description": "Request a higher quota limit.",
        "url": "https://cloud.google.com/docs/quotas/help/request_increase"
      }
    ]
  }
]
, rateLimitExceeded)
2024/10/26 15:34:30 DEBUG : pacer: Rate limited, increasing sleep to 4.713340704s
2024/10/26 15:34:33 DEBUG : pacer: Reducing sleep to 0s
2024/10/26 15:35:29 DEBUG : Google drive root '': Checking for changes on remote

welcome to the forum,

the solution is Making your own client_id


--fast-list does nothing on a mount

By the way, what boggles my mind is that Google's rate limit error (see logs in the original post) says I am exceeding 420,000 queries per minute. How is that even possible with me manually browsing the mount? I thought I was using MY Google account's quota associated with MY Google account's token in the config file. Am I mistaken about this? Is rclone using some kind of shared quota across all users?

the mount appears as local storage, so any application can access it.
perhaps anti-virus, perhaps something else.

yes, that is why you need to create your own client id+secret

Ah thank you! By the way, currently my config file (~/.config/rclone/rclone.conf) does not show any client_id or client_secret entries. I guess this is because they are hidden/"baked into the code" of rclone, given it is the rclone authors' client_id/secret? After I create these two for my own Google account, I guess I have to add them to the config file? By the way, in the default/current state I am in, in addition to the rate limit issues, was it also insecure to share a client_id with all the other rclone users? I realize each user is still using their own personal access token that they generated and keep locally on their computer, and that it is not being exposed to rclone or other users.

welcome!

correct

somewhat confusing but token and client_id are for two different purposes, with different security risks.

--- token

  • create using your email address and password
  • other users with your token can access your files.

--- client_id

  • a setting within your account, to control rate limiting and other features.
  • other users with your client_id cannot access your files.

can encrypt the rclone config file
Configuration Encryption
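
for reference, once you have created your own client id+secret in the google cloud console, the remote section of the config would look something like this - the values below are placeholders, not real credentials. can also add them by re-running rclone config and editing the existing remote.

[gdrive-remote]
type = drive
client_id = XXX.apps.googleusercontent.com
client_secret = XXX
scope = drive
token = XXX
team_drive = 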

Wow, thank you so much for the detailed answers. So, going back to my original problem, I guess none of the options I was passing to the mount command were going to help me avoid rate limits, given I was using the shared client_id and making myself beholden to the traffic generated by other users. So, let us say I create my own client_id and client_secret, put them in the config file, and want to avoid getting rate-limited: are the options I presented in my original post still relevant (except perhaps --fast-list, which as you mentioned does nothing on a mount), and are their values reasonable? Is there a recommended set of options that has been battle-tested specifically for the Google Drive backend that I can start with, instead of experimenting from scratch with so many combinations of options I do not fully understand? I looked in the Google Drive section of the rclone documentation and it did not provide this. My concern is getting permanently banned from my Google account and losing all access to my data/email. So, I would rather get ordinary performance than get rate-limited by Google. In other words, I am not trying to stretch the limits of rclone mount performance, given my average usage patterns, but to get decent performance without regular rate limits (which also hurt performance, given the back-off timeouts). I would have used the Google Drive client, but there is none on Linux. I do not have anti-virus software, so all usage is manual (file manager and terminal).

  1. create your own client id.
  2. start with the simplest mount command possible, as few flags as possible (a minimal sketch follows below this list).
    can search the forum, dozens of examples.
  3. if and when you get rate limited, post the exact command and we can tweak it then.
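
for example, a bare-bones starting point might look something like this - remote name, paths and log settings are just placeholders to adapt:

# minimal mount: daemon mode, full vfs cache, basic logging
rclone mount gdrive-remote: /path/to/mount \
	--daemon \
	--vfs-cache-mode full \
	--log-file /path/to/rclone.log \
	--log-level DEBUG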

that never happens to rclone users.
can search the forum, that has been discussed many times.

the only way to get banned is to upload illegal content, such as child porn, pirated content, etc.
that has nothing to do with rclone itself.
you would get banned for that even if uploading via the gdrive website or the google drive client

If I use my own client_id, client_secret and token, wouldn't Google just see it as a regular API call coming from me? I mean it could be coming from rclone or just a regular Python script, but they would see me as the initiator of the script/process and ban me if the rate limits are happening on a daily basis? But if I understood you right, you are saying that purely crossing rate limits regularly is not a cause for a ban (unless it is something crazy like a purposeful attack). Thanks again for all your help. I will try with my own client_id and a basic command with as few flags as possible, and reach out if there are issues.

I think part of the problem is that your "casual" browsing of a mount containing a lot of files generates a lot of traffic, simply to pull the remote's content again and again. By default --dir-cache-time is 5 min, meaning that after 5 min a given directory has to be re-read.

GDrive is a polling remote, so you can safely set --dir-cache-time to "unlimited", something like --dir-cache-time 9999h. Then add --vfs-refresh to fetch all directory data after the mount is started (it happens in the background) and your experience should be much smoother IMO. This way you will also cut the number of API calls significantly, which will help with throttling.
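
As a rough sketch (remote name and paths are placeholders, adapt them to your setup) the mount would then look something like:

# keep the directory cache "forever" and pre-load it in the background after mounting
rclone mount gdrive-remote: /path/to/mount \
	--daemon \
	--vfs-cache-mode full \
	--dir-cache-time 9999h \
	--vfs-refresh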

And yes, crucially, as mentioned, configure your own client_id. It is a must. When done, I suggest you re-create your remote to make sure it is actually used.

Thank you so much for the tip!

Quick question. What about files, i.e. --vfs-cache-max-age? Should I set it to a super high number and rely on the remote poller to get any changes to the files themselves? I understand that this is not related to directory listings and hence not related to getting rate-limited while simply navigating the directory tree, but rather controls how long the contents of files live in the cache. But I thought I would ask whether the same rationale can be applied if I see rate limits when working with the contents of files (opening them, etc.).

You are right. If you can spare the disk space for the cache, it also makes sense to set --vfs-cache-max-age to "forever". At least this is what I do: if something has been downloaded and I have space in the cache, then let it live there. Worst case it will be evicted when --vfs-cache-max-size is reached. The latter is off by default and makes sense to set in order to control cache size.

Small caveat here (as per docs):

If using --vfs-cache-max-size or --vfs-cache-min-free-size note that the cache may exceed these quotas for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache. When --vfs-cache-max-size or --vfs-cache-min-free-size is exceeded, rclone will attempt to evict the least accessed files from the cache first. rclone will start with files that haven't been accessed for the longest. This cache flushing strategy is efficient and more relevant files are likely to remain cached.
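
Putting it together, a possible combination of the cache-related flags (the values are only examples, tune them to your disk space) could be:

# directory and file caches both effectively "forever", with a size cap as the main eviction trigger
rclone mount gdrive-remote: /path/to/mount \
	--daemon \
	--vfs-cache-mode full \
	--dir-cache-time 9999h \
	--vfs-refresh \
	--vfs-cache-max-age 9999h \
	--vfs-cache-max-size 20Gi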

I am an unpaid user of Google Drive, so at most I will have just 15 GB on there, and my laptop has enough space even if I have to cache my entire Google Drive. So, I can choose to leave --vfs-cache-max-size off/"unlimited" (the default). And with a "forever" --vfs-cache-max-age I think I would actually make the mount behave more like a local copy (like using the Google Drive client on macOS/Windows with the local copy option), provided I have accessed all files at least once at some point within that "forever" time-frame. I actually prefer the behaviour of the Google Drive client on macOS/Windows, where I don't have to rely on a network connection to access my files when I am on the road, which is the case with a mount. What do you think?

Yeah it should work. And actually be super fast and sleek.

Just to confirm, the "remote poller" checks with the Google Drive server for both directory listing changes and file content changes (create, modify, delete). Correct? And would the cache entry for a file automatically be evicted if the poller sees the file has been updated? By automatically I mean that the user does not have to open the file for the switch to happen. Or does the user have to open the file again to download the updated version? It looks like the former is controlled by --poll-interval, which defaults to 1m, and the latter by --vfs-cache-poll-interval? I guess I can set both slightly higher to cut down on some more traffic to the Google server, say between 3 and 5 minutes for both. Not sure if the poller traffic is counted against the API call quota.

Edit: Actually, I think --vfs-cache-poll-interval is used to determine how often the local cache is checked for objects that have gone stale as per the max-age set on them (in our case, "forever"). I don't think it is associated with traffic sent to the Google Drive server - that seems to be --poll-interval, though that one is documented in the context of the directory cache. So, I am not sure what the polling interval is for files and how it is controlled.
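
Concretely, and this is just my guess at how the two flags would be combined (the 5m values are arbitrary), I was thinking of something like:

# poll Drive for remote changes every 5 minutes, and sweep the local file cache at the same frequency
rclone mount gdrive-remote: /path/to/mount \
	--daemon \
	--vfs-cache-mode full \
	--poll-interval 5m \
	--vfs-cache-poll-interval 5m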

hi,

i have a summary of the two vfs caches

for the vfs dir cache which is stored in memory.

for the vfs file cache which is stored on local storage.