I just started getting this out of the blue. I'm using the Google Drive backend that is using encryption and I have never shared any files. I'm not sure why I would suddenly be exceeding my quota.
Is anyone else experiencing this? Will things go back to normal eventually?
It should go away with a little time (minutes or hours).
This would indicate that a specific file has been downloaded too many times within a few hours' timespan. I think this has to be at least a few hundred downloads to trigger the quota, so if you haven't been using your drive particularly heavily it could indicate a bug causing a looping download or something similar (this may be easy to spot in your network graph, or in rclone's file-transfer output if the same file restarts over and over).
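One quick way to check for a restart loop is to run the mount with debug logging (`rclone mount ... -vv --log-file=...`) and count how often each file is opened. A minimal sketch - the log lines below are a fabricated sample, and the exact message format varies by rclone version, so adjust the pattern to match your real log:

```shell
# Fabricated sample of an rclone debug log (real format may differ):
cat > /tmp/rclone-sample.log <<'EOF'
2023/01/01 10:00:01 DEBUG : media/movie.mkv: ChunkedReader.openRange at 0
2023/01/01 10:00:05 DEBUG : media/movie.mkv: ChunkedReader.openRange at 0
2023/01/01 10:00:09 DEBUG : media/movie.mkv: ChunkedReader.openRange at 0
2023/01/01 10:00:12 DEBUG : media/other.mkv: ChunkedReader.openRange at 0
EOF

# Count opens per file; one path dominating the list suggests a loop:
awk '/openRange/ {print $5}' /tmp/rclone-sample.log | sort | uniq -c | sort -rn
```

If one path shows hundreds of opens in a few hours, that's your looping download.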
I would refer you to @Animosity022. He is collecting data on this specific phenomenon to establish more concretely what the undocumented limit is - and the best strategies to avoid it (which should only be needed for very heavy users or those with unusual usage patterns).
My mention of Animosity should alert him to this topic, so hopefully he stops by.
You probably want to check out this topic too and answer his questions there - FOR SCIENCE!!
As the linked post explains - what you want to check for is a large number of files.get requests in a relatively short timeframe (a few hours, I think). Especially if they have all been for one file, or a few files.
I guess this quota is basically there to prevent you from abusing Google Drive's shareable links as a mass-distribution server.
Programs like that - Plex, Emby, etc. - can (unless carefully configured) be unintentionally EXTREMELY taxing on your Drive API usage and quotas.
The reason is that they were all designed to talk to regular hard drives, so many of them scan the entirety of every file to collect metadata (not a huge deal on a local HDD, but very costly on a cloud remote) - you may literally be downloading your entire archive unless you choose your settings carefully. I would certainly recommend against automatic scans except for the most basic detection of new files.
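The rclone side of that is usually a mount with conservative read settings, so a scan that only needs headers doesn't pull whole files. A hedged sketch - `remote:` and the mount point are placeholders, and the values are illustrative starting points rather than tuned recommendations (check `rclone mount --help` for your version):

```shell
# Illustrative mount for a media library; values are not tuned advice.
rclone mount remote: /mnt/media \
  --read-only \
  --dir-cache-time 72h \
  --vfs-read-chunk-size 32M \
  --vfs-read-chunk-size-limit 2G
```

The chunked-read flags mean a scanner that peeks at a file's header only fetches the first small chunk instead of the whole file. The bigger win is on the media-server side: turn off thumbnail/analysis jobs and restrict library scans to detecting new files.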
If you did not hit a specific-file quota, the other option is that you hit the general download quota, which there seems to be broad agreement sits around 10TB/day. You certainly need a lot of bandwidth to even be able to reach that quota - but it is possible. (If this is happening it should be really obvious from your bandwidth utilization and your Google metrics for the period.)
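Whatever the exact daily figure turns out to be, it's easy to sanity-check against your own connection: the sustained rate needed to exhaust a daily byte budget is just the budget divided by 86,400 seconds. A quick sketch (the 1 TB/day budget below is purely illustrative, not a claimed quota):

```shell
# Sustained rate needed to move a given daily budget (decimal units).
# Example budget: 1 TB/day - illustrative only; plug in the real figure.
awk 'BEGIN {
  budget_bytes    = 1e12    # 1 TB
  seconds_per_day = 86400
  printf "%.2f MB/s\n", budget_bytes / seconds_per_day / 1e6
}'
# prints "11.57 MB/s"
```

So each terabyte per day needs roughly 11.6 MB/s (~93 Mbit/s) sustained around the clock - if your line can't do a large multiple of that, the general download quota is unlikely to be your problem.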
I do not use Jellyfin myself so I can't help with configuring it. Animosity may know about it - you could try asking him for rclone-specific tips on scan configuration. He mostly uses Plex I think, but I wouldn't be surprised if he knows about alternative software on Linux - I assume this is on Linux.