SA vs Client ID

What is the problem you are having with rclone?

One of my mounts is throwing a 403: Download quota exceeded, with no apparent reason behind it. There is no way any of the files in that location have exceeded the 2TB download limit; they are purely there for streaming, and I have extensive logs and monitoring to know how the files are being used. The SA/machine/mount are at my house and not in use anywhere else.

The odd part is that the issue is only on one machine. Here are the peculiarities:

  • Another location, with the same mount and same client ID but a different SA, is not having the issue.

  • The same location, with a different mount, same client ID, and same SA: no issue.

  • I've swapped SAs and I've swapped mounting locations on this machine, and the issue persists.

The only thing that would make sense to me is the difference between SA and client ID. The mounts share a client ID for general access, but I have a different SA for each location and mount, e.g. (location)_(mount).json. Is the client ID the relevant variable when it comes to each mount's limits (2TB down/750GB up), or is it the SA? I was previously under the impression it was the SA.
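The per-location scheme described here can also be exercised from the command line: rclone's `--drive-service-account-file` flag overrides the service account for a single invocation without editing the config, which makes it easier to see which credential a given quota error follows. A sketch, assuming the remote and SA paths from the config below (treat them as placeholders):

```shell
# List the remote using the SA the mount normally uses
rclone lsd GTVShowsCrypt: \
  --drive-service-account-file ~/.config/rclone/serviceaccounts/cloud_tvshows.json

# Same remote, a different (hypothetical) SA: if the 403 follows the file
# rather than the credential, this invocation should fail the same way
rclone lsd GTVShowsCrypt: \
  --drive-service-account-file ~/.config/rclone/serviceaccounts/other_location.json
```

Both invocations require live credentials and the remote from the config below, so they are illustrative rather than directly runnable here.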


What is your rclone version (output from rclone version)


Which OS you are using and how many bits (eg Windows 7, 64 bit)

The OS hosting is Debian 10 with OMV 5.

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

Description=RClone Service


ExecStart=/usr/bin/rclone mount GTVShowsCrypt: /#######/TVShows \
--allow-other \
--dir-cache-time 160h \
--fuse-flag sync_read \
--tpslimit 10 \
--tpslimit-burst 10 \
--buffer-size=1G \
--attr-timeout=1s \
--vfs-read-chunk-size=256M \
--vfs-cache-max-age=5m \
--vfs-cache-mode=writes \
--vfs-read-chunk-size-limit=off \
--log-level DEBUG \
--log-file /var/log/rclone-gtvshows.log \
--timeout 1h \
--umask 002 

ExecStop=/bin/fusermount -uz /########/TVShows
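For context, the `Description=`/`ExecStart=`/`ExecStop=` lines above are an excerpt; a minimal complete unit file around them (a sketch with defaults, keeping the redacted mount point as-is, not the poster's actual file) would be shaped like:

```ini
[Unit]
Description=RClone Service
After=network-online.target

[Service]
ExecStart=/usr/bin/rclone mount GTVShowsCrypt: /#######/TVShows \
  --allow-other \
  ...(flags as above)...
ExecStop=/bin/fusermount -uz /#######/TVShows
Restart=on-failure

[Install]
WantedBy=multi-user.target
```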


The rclone config contents with secrets removed.

type = drive
client_id = 460149975567-i5hpituk#########
client_secret = j2ZIZgpt2ic#########
scope = drive
service_account_file = /home/s0n1cm0nk3y/.config/rclone/serviceaccounts/cloud_tvshows.json
team_drive = 0ACzZ##########

A log from the command with the -vv flag

2020/10/03 13:34:36 DEBUG : &{Chico Bon Bon - Monkey with a Tool Belt/Season 2/Chico Bon Bon - Monkey with a Tool Belt - S02E02 - Sandwich Safe WEBDL-1080p.mkv (r)}: >Read: read=0, err=open file failed: googleapi: Error 403: The download quota for this file has been exceeded., downloadQuotaExceeded
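Since the mount logs at DEBUG level to the path given by `--log-file`, a quick way to see how often this error fires, and for which files, is to grep the log. A small sketch (the extraction pattern assumes the `&{...(r)}` shape of the line above):

```shell
# quota_errors: summarize downloadQuotaExceeded hits in an rclone debug log.
# Usage: quota_errors /var/log/rclone-gtvshows.log  (path from --log-file above)
quota_errors() {
  log="$1"
  # Total number of quota errors in the log
  printf 'total errors: %s\n' "$(grep -c 'downloadQuotaExceeded' "$log")"
  # Extract the file path between "&{" and " (r)}" on each DEBUG line
  grep 'downloadQuotaExceeded' "$log" \
    | sed -n 's/.*DEBUG : &{\(.*\) (r)}.*/\1/p' | sort -u
}
```

Running it against the log shows whether one file or many are tripping the quota.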

Service accounts use their own client ID and secret.

There are a few posts about service accounts being limited recently, but I don't use one so I can't confirm. Try using your own account rather than a service account.

I would, but I use service accounts to improve access utilization. At one location I use a SA for that location and mount, e.g. (location)_(mount).json. At another location I use a different SA. That way, if location A hits the limit, location B is not affected.

What is odd is that I've swapped the SA for that mount to another SA that isn't in use, and then to another SA that is used, but rarely. The same issue persists. At another location, with the same mount and a different SA, there is no issue at all.

The curiosity is why this mount/location is the issue, regardless of SA.

I think the change is that any SA uses the owner's quotas, but I'm not sure, as it's not published. It's a bit of a guessing game, since you don't know what specifically changed and what didn't.

I don't use any SA accounts and have had no issues.

Can confirm the issue persists using your own account; I've somehow hit downloadQuotaExceeded with very minimal use.

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.