Help to increase GDrive API quota

I requested an increase from 1,000 to 10,000 calls per user for the GDrive API and they sent me this email:

What do I need to say in response? THANKS!!

**Thanks for your interest in receiving a higher quota for the Drive API for Project ID XXXXXXXXXX. Your request does not include enough details. If you would like to request more, please provide the following information:**

  • The number of users.
  • Average number of requests per day/per user (calculation describing your expected usage of the API).
  • Which API methods will be called and what will be the frequency.
  • Are you polling the API to check if files have been modified?
  • Have you implemented exponential backoff?
  • Are you requesting 1000 QPS? Is this a temporary request or a permanent one?
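On the exponential backoff bullet: Google's general guidance for 403/429 rate errors is to retry with exponentially growing waits. A minimal sketch, assuming a 2^n-seconds schedule capped at 64s (the function and the commented curl loop are mine, not from this thread; `$TOKEN` is a placeholder):

```shell
#!/bin/sh
# Exponential backoff delay: 2^attempt seconds, capped at 64s.
backoff_delay() {
  d=$(( 1 << $1 ))
  [ "$d" -gt 64 ] && d=64
  echo "$d"
}

# Hypothetical retry loop around a Drive API call (TOKEN is a placeholder):
# attempt=0
# while [ "$attempt" -lt 5 ]; do
#   status=$(curl -s -o /dev/null -w '%{http_code}' \
#     -H "Authorization: Bearer $TOKEN" \
#     "https://www.googleapis.com/drive/v3/files")
#   [ "$status" != "403" ] && [ "$status" != "429" ] && break
#   sleep "$(backoff_delay "$attempt")"
#   attempt=$((attempt + 1))
# done
```

Being able to say "yes, with capped exponential backoff" is usually what they want to hear for that bullet.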

I think they listed out the specifics they're asking for in the bold text. Why are you looking for a higher API quota? I've been set up for some time now and I barely use any API calls. My last 14 days look like:

I have ~30TB and usually have 3-5 people streaming.

Simply listing folders with 10K+ files triggers >10 QPS and causes 403s. I asked for 100 QPS, and I suspect support noticed the ~25% error rate on my queries, so I got it.
If you treat your Drive as a NAS with only a few big files, the API indeed won't be a problem.
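A back-of-envelope check of why a big folder listing bursts past a small QPS quota (my arithmetic, not from this thread; it assumes Drive v3 `files.list`, whose maximum `pageSize` is 1000 and whose default is 100):

```shell
#!/bin/sh
# A folder listing is paginated: ceil(files / pageSize) API calls per folder.
list_calls() {
  echo $(( ($1 + $2 - 1) / $2 ))
}

echo "$(list_calls 10000 1000) calls at the maximum pageSize=1000"
echo "$(list_calls 10000 100) calls at the default pageSize=100"
```

Issued back-to-back, even the best case of 10 calls for one 10K-file folder can spike past a 10 QPS per-user limit, and a mount that enumerates every subfolder multiplies this.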

@justmwa - use plexdrive to mount. It caches the directory/file structure, so you don't get that problem at all. Here's a full scan of 125k files with 0 API calls:

```
felix@plex:/GD$ time ls -alR | wc -l

real	1m10.013s
user	0m0.780s
sys	0m1.484s
```


Does plexdrive use the same buffer that rclone uses?

If a folder's modified date changes, will it be detected as modified?

Plexdrive and rclone differ in how they buffer. I use plexdrive for my basic mount, and since I'm encrypted with rclone, I mount that over the plexdrive mount and then use a fuse mount to join the local and cloud storage together.

My mount script looks like:

```bash
/home/felix/scripts/plexdrive --chunk-size=25M -o allow_other -v 2 /GD >>/home/felix/logs/plexdrive.log 2>&1 &
sleep 2

# Mount the 3 Directories via rclone for the encrypt
/usr/bin/rclone mount \
  --allow-other \
  --default-permissions \
  --uid 1000 \
  --gid 1000 \
  --umask 002 \
  --syslog \
  --stats 1m \
  -v \
  media: /media &

# Wait a sec
sleep 2
/usr/bin/unionfs-fuse -o cow,allow_other,auto_cache,sync_read /local/movies=RW:/media/Movies=RO /Movies
/usr/bin/unionfs-fuse -o cow,allow_other,auto_cache,sync_read /local/tv=RW:/media/TV=RO /TV
```

With my rclone.conf:

```
[media]
type = crypt
remote = /GD/media
filename_encryption = standard
keys and such
```


I will try this evening. I'm tired of 403 errors.

How can I buffer the files and folders with an rclone mount? I scan with Kodi.

You can't with rclone until metadata caching is implemented:

Kodi doesn't work like Plex: a scan just looks at the filename, so it works fine with Kodi. Just add the buffer-size flag to the mount.
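A hedged sketch of what that mount might look like (`--buffer-size` and `--allow-other` are real rclone mount flags; the 64M value and reuse of this thread's `media:`/`/media` names are my guesses, not a recommendation):

```shell
# Sketch: rclone mount with a per-file read-ahead buffer for smoother Kodi scans/streams
/usr/bin/rclone mount \
  --allow-other \
  --buffer-size 64M \
  media: /media &
```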

OK thanks.

When I try to copy files with rclone I receive "error 403 rate exceeded" errors. Do you have any idea why?

You're either exceeding the global limits assigned to rclone's API key or the per-user limits. Remember that you're sort of sharing some of those limits. How are you copying files? Above you mentioned 'scanning' but now you mention copying, so I'm trying to understand what you're doing at the time (and before).
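One common way to stay under the per-user rate limit when copying is to throttle rclone's transaction rate; a sketch under assumed paths from this thread (`--tpslimit` and `--transfers` are real rclone flags, the values are illustrative):

```shell
# Sketch: cap Drive API transactions at ~5/s and limit parallel transfers
rclone copy /local/movies media:Movies \
  --tpslimit 5 \
  --transfers 4 \
  -v
```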

Where do we find our API console for GDrive?