Fundamental Issues with GDrive Storage

I’ll jump right into my rant.

So I see a few issues with using Google Drive as a file storage system, and I need to figure out where I’m messing up or how to mitigate these issues.

I frequently run into issues where either I’m being rate limited or transfers stop altogether (see below):

2018/02/16 13:17:12 INFO :
Transferred: 29.588 GBytes (10.525 MBytes/s)
Errors: 0
Checks: 0
Transferred: 0
Elapsed time: 47m58.6s
Transferring:

  • … Deja vu [BDRip 1920x1080 x264 FLAC].mkv: 34% /10.901G, 245/s, 8647h13m28s
  • …2013 1080p Blu-ray AVC DTS-HD MA 5.1.mkv: 17% /20.911G, 251/s, 20443h43m5s
  • …[1080p,BluRay,flac,dts,x264]_-_THORA.mkv: 35% /11.009G, 244/s, 8667h53m37s
  • …[1080p,BluRay,flac,dts,x264]_-_THORA.mkv: 42% /7.907G, 243/s, 5592h19m6s
  • …[Blu-ray.1080p.H264.FLAC]_[DAAF0E78].mkv: 68% /5.326G, 236/s, 2106h3m57s
  • …bellion R2 - PD 10 - Kiseki Birthday.mkv: 55% /7.334G, 245/s, 4001h12m1s
  • …cence.BD(1080p.FLAC)[Afro][1A8E4E34].mkv: 25% /9.904G, 222/s, 9860h16m44s
  • …i - Epilogue [BDRip 1080p x264 FLAC].mkv: 95% /4.845G, 269/s, 233h54m4s

It seems my transfers slow to 0 bytes/s with no indication of why. What is going on, and why is this happening? Should I limit my number of transfers or rate-limit myself? I see many people transferring at “200 MB/s” and doing 1 terabyte or so, or 450 GB, in 24 hours. Why can’t I do that?? All I want to do is transfer 4 TB to this Google Education account (unlimited storage), and I can’t get very far without these damn issues…

Hi, I just wanted to add on too.

I’m not familiar with Plex or anything and only know of rclone sync and copy – should I try something else?

Mostly what I’m doing is uploading media (MKV) files and kind of using Google Drive as a backup database. What should I look into for transferring these files from a CLI (preferred), or maybe a GUI is okay?

Well, for one, you do not want to use --transfers 8 for Google Drive, because Google Drive is heavily API-request limited; fewer concurrent transfers will work better… also, you might want to try --drive-chunk-size 128M.
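For example, a minimal sketch of a copy with those settings (the local path and the remote name “gdrive:” are placeholders, not taken from the original post):

rclone copy /path/to/local/media gdrive:media --transfers 4 --drive-chunk-size 128M -v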

Transferring 30 gigabytes in 45 minutes is a perfectly good pace (the absolute daily limit is 750GB). The problem is that you ended up with zero useful data in the end because of your command flags…

Also, without -v or -vv there’s not enough information to know whether Google is throttling you or not…

You don’t actually mention where you are transferring from.
Are you uploading from home, a seedbox, a dedicated server, etc.?
What is the upload speed of whichever of those you use? Personally, I always found the default of 4 simultaneous transfers optimal. Absolutely nothing will be gained by using --transfers 8, IMHO.

I’m transferring from a 4 TB external drive. It’s an at-home seedbox running on an RPi 3 Model B. Gotcha.

It’s the GD upload quota limit that Google implemented last year.
Basically, you can only upload about 750GB in total per day. Beyond that, the transfer rate drops to 0 until midnight (in the time zone of the Google server you are using).
Some people set --bwlimit=8000 so that they don’t transfer more than 750GB a day. Otherwise, just let it hit the quota, because the next day it will continue where it left off. People also set --tpslimit=9 as well.
Google the keywords “Google API 403 quota limit” or something like that and you will find tons of discussion about it.
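For illustration, a minimal sketch with those flags (the path and remote name are placeholders): 8000 KBytes/s is roughly 8 MB/s, and 8 MB/s over 86,400 seconds comes to about 690 GB per day, which stays under the 750GB quota.

rclone copy /path/to/local/media gdrive:media --bwlimit 8000 --tpslimit 9 -v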

Yes, I tried what both of you said: the default 4 transfers, and I also limited my bandwidth and changed the chunk size [--drive-chunk-size=128M].

I promise I’ll add more chunks, but again today the speed started at 10 MB/s, with each of the 4 uploads going at 2 MB/s, and then it rapidly decreased from 2 MB/s to 1 MB/s to 200 kbps and so on, until finally hitting zero. What do I do??

Looks like some kind of Google Drive limit was reached, and therefore you received another temporary ban from Google.

Do you use rclone cache or rclone mount? What does the log file say? A 403 error? You need to at least provide the error log, otherwise it’s difficult for people to advise. :slight_smile:

Hi,
Yes, the error is a 403: Rate Limit Exceeded.

I’ve tried: Homedir GoogleDriveFolder --checkers=8 --drive-chunk-size=128M -vv --stats=2s

I’m transferring at 10 Mbps and sometimes at 5 Mbps, so why do I keep getting rate limited and shit?

Hi,

I’m guessing you’re running this as a one-time sync and then only syncing the changes?
Have a look at the issues I had in the past:

Now, I can tell you that in my case it helped a lot to have two different setups in rclone: I use my own API key to sync all of my local (home) data, and another one to replicate an “offsite data” folder I keep only on Gdrive. That way, if somebody deletes something in the offsite folder by mistake, I can grab it back thanks to the backup I have on Gdrive itself.
Once the full backup was finally completed, the syncs no longer used up the whole 500/750GB daily limit, and so everything worked OK.

Now, as for the limits, I was told 500GB in my thread; now it seems it’s 750GB. One issue I’ve been having recently is that I moved off Plex Cloud (it was just s*it and kept getting stuck), so I use Google’s Drive File Stream and have linked a few folders to my own Plex server. That started generating so much traffic that every day I’d get banned until midnight. So I’m hoping Plex is going to finish what it’s doing soon :slight_smile:

I don’t want to go too much off topic, but I started wondering: I know how much data rclone uses, but I don’t know how much data Plex is downloading for its metadata updates. So I’ve had about 4 chats with Google’s Business support, which were horrible. They would not release any official figure for the actual daily amount, nor do they have an alert in GDrive’s Admin console to send notifications when you’re approaching your quota, so you could at least proactively stop a process rather than have the entire thing locked. It’s ridiculous that nobody who uses G Suite for Business has ever complained about the lack of alerts; I just can’t believe it! Anyhow, I’ve submitted an idea about this (upvote here if you like it: https://www.cloudconnect.goog/ideas/14077)
All of this to say that yes, Google Drive for Business is probably c*ap if used a bit more intensively. It’s worth it only for small businesses that do not need a lot of data exchange. But hey, you have unlimited storage…
I was also told by their support that if it gets locked, you can unlock it from your own Admin console; however, if you only have one single account, you won’t be able to, and you have to contact them to get it unlocked before the reset (which I did, and guess what? They asked me to wait until the day after…).
I’ve got some screenshots of the chats that are probably the stupidest thing you’ll see today.

Sorry if I went a bit off topic.

Check out our code at https://plexguide.com - our GitHub project prevents this issue; I was asking the same questions 6 months ago.

There are multiple 403 errors. The rate limit error actually refers to the limit on API requests per minute, which is roughly somewhere between 10 and 100. When you’re 750GB daily quota limited, you’ll see error 403 with both normal rate errors and user rate errors, appearing randomly.

Also, use -vv to help people understand what’s wrong.

--checkers=8 is the default, so that flag shouldn’t actually change anything.

Also, if you really, really don’t want to see 403 rate limit errors, you can use --tpslimit=1 or something.

But one type of 403 rate limit error is unavoidable and perfectly fine to see, because it only lasts 10-60 seconds. The other 403 error lasts until midnight for the Google server. HOWEVER, I’ll note that in EST my Google Drive daily quota resets at 7pm because its server is using GMT. So it’s not your local time, nor even the local time of your server, but the time the server is using, which is likely to be the same as a GCC VM’s internal clock time, although midnight GMT is probably a very common reset time.

If I were you I’d probably try something like --transfers 1 or --transfers 2, although this is likely to slow you down unnecessarily; without better details on how much throughput you’re able to achieve, it’s hard to tell what, if anything, is actually wrong.
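If you want to try that, here is a rough sketch (the path and remote name are placeholders; the values are just the conservative ones suggested above):

rclone copy /path/to/local/media gdrive:media --transfers 2 --tpslimit 1 -vv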

I’d say Google Drive would likely accept roughly 200,000 1-kilobyte files per day or 750 1GB files (files can be much larger than this; this is just a simple example). With a mix of file sizes and file counts you’ll likely land somewhere in between. Large files make it easy to hit the 750GB daily quota limit; tiny files make it impossible, because there’s a flat limit on how many files Google will take in per minute (roughly 50-100ish), which comes from its API throttling. That works differently from its bandwidth throttling, although for some silly reason both use error 403.

Just for clarification, there are two 403 errors similar in naming:

403: User Rate Limit Exceeded
You passed the 750GB/10TB drive limit and need to wait for the daily reset.

403: Rate Limit Exceeded
You passed the API rate limit; rclone should back off the requests and continue.


One thing that will confuse the issue, though, is that when the user rate limit message is being given, you’ll also still get the other message alongside it at the same time, presumably because spamming API requests that all result in error messages ends up sending too many API requests on top of your daily quota limit being reached.

So the two possibilities are spam of the latter message, or of both messages (I’ve never seen the user rate message entirely on its own for very long).

If you have the Plex options “Generate video preview thumbnails”, “Generate chapter thumbnails”, “Upgrade media analysis during maintenance”, or “Perform extensive media analysis during maintenance” turned on, you’ll likely get banned, as Plex downloads and reads a lot of each video file to do its indexing.

@left1000 So here’s the thing: I only have 4 transfers going at a time, and generally I get to around 40 GB before my transfer speed drops to zero.

I use -vvv and even then I get no info as to why my transfers suddenly stop. It’s not a network problem, because I’ve tried over different networks, VPNs, etc.

What I see is that when I restart rclone (Ctrl + Z,X,C) and open it again, there’s a 403 rate limit error and then my uploads start again. Other times I’ve seen my uploads not even start, and the status output just refreshes with nothing there but a 0 byte/sec transfer rate.

It’s not like I have a ton of transfers and random stuff going on that would generate this many API requests. I just can’t wrap my head around why I’m getting these issues.


@gforce Can you please explain at a high level (or not) how your Plex project circumvents these issues? Also, to be clear, I’m uploading media files and such to Google Drive - not downloading/streaming yet.


@aj1252 I’ll get back to you on what the error is; I’m not certain which one it is, but I’d assume it’s the API Rate Limit Exceeded. I only get to 40 GB before I see those errors, let alone 750 GB.

Edit: Here’s a paste of the transfers I just tried today. You have to scroll a lot, but about three-quarters of the way down you’ll see my speed go from ~2 MB/s to 0 in maybe 30 seconds.
https://pastebin.com/raw/23TNnng2

Having a similar issue here: after 2h or so, rclone copy just stops, with no errors and no messages in the log. I am copying from gdrive to gdrive using: /usr/bin/rclone copy /root/clone/unencrypted/film GCrypt:film --no-traverse --no-update-modtime --transfers=3 --checkers=3 --min-age 180m --drive-chunk-size 128M -vv --stats 30s --log-file=/root/clone/copy.log

Does anyone have any ideas as to what’s going on??

Did you try the latest beta? I improved the timeout code recently, which might help: after --timeout with no transfer it should kill the connection and retry.

Can you post a log somewhere with -vv?

That is a good log showing the problem! The problem appeared to start at about 2018/02/22 13:34:17. I would have expected things to time out 5m after that, at 13:39 - unfortunately the log didn’t quite go on long enough!

Can you try the latest beta and see if it does time out after --timeout (5 mins by default)?
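In case it helps, a minimal sketch of setting the timeout explicitly and capturing a log (the path and remote name are placeholders; 5m is just the default value mentioned above):

rclone copy /path/to/local/media gdrive:media --timeout 5m -vv --log-file rclone.log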

I spent some time debugging a similar issue which turned out to be some sort of driver problem.

Hi, I have the beta from the 23rd, but I’ll check again. I’ll test with --timeout as well, but I was wondering where this issue might stem from.

I assume the timeout would allow the transfers to resume, or would they restart so we’d be overwriting the file again before moving on? I’ll post my findings later today or tomorrow.