Google Drive Throttling

You’ve got some goofy mount commands going on, too.

What happens if you just run an rclone copy from GD to local and use -P to see the speeds? Is it still slow?

I’m always game to rule out pieces.

What’s your actual speed from your ISP?

I just use a 100M test file as an example:

rclone copy GD:100M.out test -P
Transferred:   	      100M / 100 MBytes, 100%, 12.100 MBytes/s, ETA 0s
Errors:                 0
Checks:                 0 / 0, -
Transferred:            1 / 1, 100%
Elapsed time:        8.2s

Is playback pretty much unusable for you when Plex scans are happening? This has been happening since the beginning, so I just wanted to make sure it’s not just me.

Not at all.

So with Plex, there are a few things to think about.

When media is first added, it needs to be analyzed, which runs an ffprobe/mediainfo-type probe against it. Once that is done, Plex just checks file size/mod times on it (super fast).
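For context, that first-pass analysis is roughly equivalent to probing the file’s container and stream metadata. A sketch of that kind of probe (this is not Plex’s exact internal call, and the path is just an example):

# read container/stream metadata without decoding the whole file
ffprobe -v error -print_format json -show_format -show_streams /GD/TV/Preacher/Preacher.S01E01.mkv

That has to read part of the file itself (codecs, resolution, duration, bitrate), which is why the first scan after adding media is the slow one; later scans only compare size and modification time.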

My 60TB library scan pretty much takes less than 20 seconds.

I usually stream at most 5-7 streams at a time with no real issues, as I have a gigabit FIOS link.

My actual speeds are 1000/40 from Comcast (the fastest in my area, unfortunately). Right now I’m only getting ~500 Mbps maximum, but I haven’t had time to diagnose that yet.

Here’s the 100MB test file using Ubuntu Server. No Docker involved.
[screenshot of the transfer output]

I have done some more testing. Originally I thought it had something to do with my Sophos gateway between my devices and the router, but I bypassed it and had the same results, so that ruled it out. A simple wget got me about 150-200 Mbps download with or without it.

It’s just Google Drive that is giving me issues.

When rclone is restarted, the directory cache is lost, so if Plex does a scan at that point, does it make things unusable? I believe I saw something in your scripts about warming up the cache; if you try to play back media during that period, is it possible?

No, nothing is ‘unusable’. If you want to scan the whole directory structure, that depends on how many directories/files you have.

A full refresh from cold takes about 25-30 seconds for me.

I have roughly 30k files.

If you hit play and it was cold, it would take a few API hits to grab the directory structure so maybe 10 seconds.

[felix@gemini ~]$ time ls -al /GD/TV/Preacher/Preacher.S01E01.mkv
-rw-rw-r-- 1 felix felix 2661673016 May 22  2016 /GD/TV/Preacher/Preacher.S01E01.mkv

real	0m0.003s
user	0m0.002s
sys	0m0.000s
[felix@gemini ~]$ time ls -al /Test/TV/Preacher/Preacher.S01E01.mkv
-rw-r--r-- 1 felix felix 2661673016 May 22  2016 /Test/TV/Preacher/Preacher.S01E01.mkv

real	0m10.969s
user	0m0.004s
sys	0m0.000s

An example above.

I use the rc commands to build the cache on boot as that’s a bit faster than manually walking the tree.

 time /usr/bin/rclone rc vfs/refresh recursive=true
{
	"result": {
		"": "OK"
	}
}

real	0m26.480s
user	0m0.007s
sys	0m0.015s
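For anyone wiring this up, here’s a sketch of what that boot-time warm-up could look like as a oneshot unit ordered after the mount. The unit file name is hypothetical; the mount unit name matches the gmedia-rclone.service that shows up later in this thread, and it assumes the mount is started with --rc so the rc server is listening:

# /etc/systemd/system/gmedia-refresh.service (hypothetical unit name)
[Unit]
Description=Prime the rclone VFS directory cache
After=gmedia-rclone.service
Requires=gmedia-rclone.service

[Service]
Type=oneshot
# Give the rc server a moment to come up after the mount starts.
ExecStartPre=/bin/sleep 10
ExecStart=/usr/bin/rclone rc vfs/refresh recursive=true

[Install]
WantedBy=multi-user.target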

At no point would anything ever be ‘unusable’. It may take a few seconds if cold and that’s it.

I must be doing something wrong then…

When I run vfs/refresh, it takes over 1 minute, and I have significantly fewer files than you.

I also get these errors when running it, which is why I think I added the tpslimit in the first place:

2019/04/13 20:35:23 DEBUG : pacer: Rate limited, sleeping for 1.364894562s (1 consecutive low level retries)
2019/04/13 20:35:23 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=919561382685, userRateLimitExceeded)
2019/04/13 20:35:23 DEBUG : pacer: Rate limited, sleeping for 2.231453049s (2 consecutive low level retries)
2019/04/13 20:35:23 DEBUG : pacer: low level retry 3/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=919561382685, userRateLimitExceeded)
2019/04/13 20:35:23 DEBUG : pacer: Rate limited, sleeping for 4.741085994s (3 consecutive low level retries)
2019/04/13 20:35:23 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=919561382685, userRateLimitExceeded)
2019/04/13 20:35:23 DEBUG : pacer: Rate limited, sleeping for 8.866874665s (4 consecutive low level retries)
2019/04/13 20:35:23 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=919561382685, userRateLimitExceeded)
2019/04/13 20:35:23 DEBUG : pacer: Rate limited, sleeping for 16.821812793s (5 consecutive low level retries)
2019/04/13 20:35:23 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=919561382685, userRateLimitExceeded)
2019/04/13 20:35:24 DEBUG : pacer: Resetting sleep to minimum 100ms on success

Are you sure you have your own key and it’s being used? That single command should not give you 403s.
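One quick sanity check (a sketch; GD is the remote name used in the copy example earlier, so substitute your actual drive remote, and the credential values below are placeholders):

rclone config show GD

# A remote using your own key should show your own credentials, e.g.:
# [GD]
# type = drive
# client_id = 1234567890-abc123.apps.googleusercontent.com
# client_secret = <your-secret>

If client_id is blank, the mount is riding on rclone’s shared default key, which gets rate limited much more aggressively than a personal project.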

Yeah, I don’t even have rclone under the “manage apps” section in Drive. The link in the error message also points back to my key, and I can see the errors in the API console.


Also, Plex scans take several minutes even with a warm cache, so I’m not sure how yours are so fast. I would like to learn. (I just ran it again and it took 5 full minutes with a warm cache.)

If you check your metrics, what do you see?


[screenshot of API Console metrics]
From the last hour.

Yeah, you have a really odd setup going on with your RancherOS/Docker/merger/etc. I wouldn’t even know where to start looking, as I don’t use Docker or anything like that; it just overcomplicates things for me.

If your refresh is 403’ing out, you’ve definitely got something else going on, but I’m not sure what else I’d check.

Is your account shared, or is someone else using it?

No, it’s just me. I also use Drive File Stream for other files, but it’s primarily just rclone…

I’m going to try a fresh Ubuntu Server VM and see where it takes me.

I did a clean install of Ubuntu and still had the same issues.
I tried to use your scripts from GitHub, but kept getting an error message:

Apr 14 02:22:48 rclonetest systemd[1]: gmedia-rclone.service: Service hold-off time over, scheduling restart.
Apr 14 02:22:48 rclonetest systemd[1]: gmedia-rclone.service: Scheduled restart job, restart counter is at 5.
Apr 14 02:22:48 rclonetest systemd[1]: Stopped RClone Service.
Apr 14 02:22:48 rclonetest systemd[1]: gmedia-rclone.service: Start request repeated too quickly.
Apr 14 02:22:48 rclonetest systemd[1]: gmedia-rclone.service: Failed with result 'exit-code'.
Apr 14 02:22:48 rclonetest systemd[1]: Failed to start RClone Service.
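(For reference, the error behind a restart loop like that normally shows up with the standard systemd tools; the unit name here is taken from the log above:)

# show the last failure state and recent journal entries for the unit
systemctl status gmedia-rclone.service
journalctl -u gmedia-rclone.service -b --no-pager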

So I just ran rclone manually in screen:

sudo /usr/bin/rclone mount gcrypt: /GD --allow-other --config /rclone/rclone.conf --dir-cache-time 96h --drive-chunk-size 32M --timeout 1h --umask 002 --rc -vv

I had very similar results: about 10 Mbps, with playback on the other machine accelerating the copy.

I also tried putting this machine in its own VLAN with only an internet connection.

Same slow results, but this time playing back video in the other VLAN didn’t cause the speed to increase?

Any ideas on what else to try?

Yeah, plug directly into your router and bypass everything.

If that doesn’t work, check out the router.

If it works, check out everything in between.

So with regard to vfs/refresh causing 403 errors: it was because I had taken out the tpslimit 5 setting. With the tpslimit in place, there are no errors at all. I’m still not sure why you don’t get any errors without tpslimit.

Though it still takes over 1 minute, compared to the ~30 seconds for you.

2019/04/13 23:37:54 DEBUG : : Reading directory tree done in 1m37.224054462s

5 is too low, as you can do 10 transactions per second with Google. I’d set it to at least 10.
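For reference, one way to apply that on the mount (a sketch based on the mount command shown earlier in the thread; --tpslimit and --tpslimit-burst are standard rclone flags, and the values just reflect the 10/s suggestion above):

# cap API transactions at ~10/s, allowing short bursts for directory listings
rclone mount gcrypt: /GD --allow-other --dir-cache-time 96h --drive-chunk-size 32M --timeout 1h --umask 002 --rc --tpslimit 10 --tpslimit-burst 10 -v

--tpslimit throttles the steady-state rate of HTTP transactions, while --tpslimit-burst lets a short burst through without sleeping.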

Check and see how many directories you have.

[felix@gemini GD]$ find . -type d | wc -l
1067

Well I think I just found the problem with the refresh taking so long. You asked just the right question!

Good news:
I had a folder with many, many small documents that was inside the refresh scope. I adjusted the command with the “dir” option, which limited the scope to just the media folders. That cut the refresh time from over 1 minute down to just 10 seconds.
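Something along these lines (a sketch; TV and Movies are placeholder names standing in for my media folders):

rclone rc vfs/refresh recursive=true dir=TV dir2=Movies

vfs/refresh accepts any number of dir=, dir2=, … parameters, so the recursive walk can be limited to the media folders instead of crawling the folder full of small documents.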

Bad news:

  1. Without tpslimit, I still get rate-limiting issues. I was, however, able to set tpslimit significantly higher: 1750 was the maximum without facing any issues. Any higher and the rate issues begin popping up again.

  2. Plex scans still take over 3 minutes to complete. There are 414 directories and 5443 files; running the find commands for directories and files each takes less than 1 second. No idea why Plex scans still take so long…

It would most likely be in the Plex logs, as that usually relates back to the media itself.

I use plex-library-stats.sh to see what is left to analyze in my library.

It should look something like this if it’s all analyzed:

[felix@gemini ~]$ plex-library-stats.sh  | tail -10
Minutes = 11278
  Hours = 187
   Days = 7

25823 files in library
0 files missing analyzation info
0 media_parts marked as deleted
0 metadata_items marked as deleted
0 directories marked as deleted
25209 files missing deep analyzation info.

Deep analysis is only used for bandwidth restrictions/limiting; I don’t use that, so it’s off. It is super intensive, as it needs to analyze the entire file because it is looking for bitrates.