Is my rclone mount setting OK for Plex on gdrive?

Hi guys, I just want to know if my mount config is OK for Plex, because I got banned for 24 hours after scanning a couple of libraries on my gdrive for the first time. "Error 403: The download quota for this file has been exceeded., downloadQuotaExceeded"

I'm running rclone 1.55.1.

[Unit]
Description=rclonemount
After=network.target
[Service]
Type=simple
User=***
Group=***
#ExecStartPre=-/bin/mkdir -p /home/%i/cloud/
ExecStart=/usr/bin/rclone mount gdcrypt: /home/blabla/blabla/ARCHiVES \
--config /home/blabla/.config/rclone/rclone.conf \
--log-level INFO \
--log-file /home/blabla/rclone.log \
--fast-list \
--tpslimit 10 \
--checksum \
--no-modtime \
--timeout 30s \
--umask 022 \
--read-only \
--allow-other \
--poll-interval=1h \
--dir-cache-time 99999h \
--vfs-read-chunk-size 2048M \
--vfs-read-chunk-size-limit 8192M \
--vfs-cache-mode full \
--vfs-cache-max-size 1024G \
--vfs-cache-max-age 168h
ExecStop=/bin/fusermount -u /home/blabla/blabla/ARCHiVES
Restart=on-failure
RestartSec=30
StartLimitInterval=60s
StartLimitBurst=3
[Install]
WantedBy=multi-user.target

Thanks guys for your replies :slight_smile: Have a nice day!

Just use the settings from this and this.

--fast-list does nothing on a mount
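
If you want to sanity-check that yourself, the flag listings show it; a quick sketch, assuming rclone is on your PATH (the greps are my assumption about where the flag appears):

# --fast-list is a listing optimization for sync/lsf-style commands;
# it isn't among the mount-specific flags:
rclone mount --help | grep -- --fast-list   # expect no output here
rclone help flags | grep -- --fast-list     # the global flag list does describe it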

Same thing with the config change... :frowning: I'm banned again after scanning my library.

Post the exact message you get from the rclone debug log, and post the current systemd file with the rclone settings.

INFO  : *****.mkv: vfs cache: downloader: error count now 5: vfs reader: failed to write to cache file: open file failed: googleapi: Error 403: The download quota for this file has been exceeded., downloadQuotaExceeded
ERROR : *****.mkv: vfs cache: failed to download: vfs reader: failed to write to cache file: open file failed: googleapi: Error 403: The download quota for this file has been exceeded., downloadQuotaExceeded
DEBUG : *****.mkv: ChunkedReader.RangeSeek from -1 to 0 length -1
DEBUG : *****.mkv: ChunkedReader.Read at -1 length 4096 chunkOffset 0 chunkSize 2147483648
DEBUG : *****.mkv: ChunkedReader.openRange at 0 length 2147483648

That's the same error for all scanned files.
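
Something like this counts them, assuming the log path from the unit and that the filenames end in .mkv:

# total quota errors logged
grep -c downloadQuotaExceeded /home/blabla/rclone.log
# distinct files affected (the .mkv pattern is an assumption about the library)
grep downloadQuotaExceeded /home/blabla/rclone.log | grep -o '[^ :]*\.mkv' | sort -u | wc -l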

My systemd setting is:

[Unit]
Description=rclonemount
After=network.target
[Service]
Type=simple
User=***
Group=***
ExecStart=/usr/bin/rclone mount gdcrypt: /home/BLABLA/BLABLA/ARCHiVES \
--log-level INFO \
--log-file /home/***/rclone.log \
--tpslimit 5 \
--umask 022 \
--read-only \
--allow-other \
--poll-interval=5m \
--dir-cache-time 99999h \
--vfs-read-ahead 256M \
--vfs-cache-mode full \
--vfs-cache-max-size 800G \
--vfs-cache-max-age 1000h
ExecStop=/bin/fusermount -u /home/BLABLA/BLABLA/ARCHiVES
Restart=on-failure
RestartSec=30
StartLimitInterval=60s
StartLimitBurst=3
[Install]
WantedBy=multi-user.target
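
After changing the file, I reload and restart it like this (assuming the unit is saved as rclonemount.service):

sudo systemctl daemon-reload
sudo systemctl restart rclonemount.service
journalctl -u rclonemount.service -n 20   # quick check that the mount came up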

And did you tweak the Plex settings as per that link I shared with you?

Yes, I tweaked it with this, and all auto scanning is disabled... thumbnails too, all that stuff is disabled.

# Plex optimizations
fs.inotify.max_user_watches=262144
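
To make that persist across reboots, I drop it into a sysctl config file and load it; the drop-in path below is just the usual convention, not something from the link:

# write the tweak to a drop-in and apply it immediately
echo 'fs.inotify.max_user_watches=262144' | sudo tee /etc/sysctl.d/90-plex.conf
sudo sysctl -p /etc/sysctl.d/90-plex.conf
sysctl fs.inotify.max_user_watches   # verify the new value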

Not sure what to tell you, other than to wait until gdrive resets the quota.

Hehe, no problem mate, thx :slight_smile: Maybe someone else has an idea on that lol :expressionless:

How much data are we talking here? I recently tried to scan in about 600 shows (~10k files), and I hit the quota limit pretty quickly. I don't think Plex had even scanned in 10%. This was with the new Plex TV scanner and agent. I've been planning to start over and use the legacy scanner and agent instead, but I haven't had a chance to do so. It's probably best to scan in small portions of your library, e.g. first "A", then "B", etc.

Honestly, my gdrive has 8.7TB total... and I scanned 3 libraries (4 or 5TB max total)... As for the number of files, idk; it's only 3 series libraries, so... a lot lol

I don't think there's anything you can really do beyond trying to slow down the scan, either with settings or by start-stopping within Plex.

Gdrive hates it when you scan through TBs' worth of stuff in one go. It doesn't matter which app.

The best way I've dealt with it is to only allow about 500GB worth of scanning a day (you can do more, but this leaves some wriggle room to watch stuff), then stop it and wait until tomorrow to kick it off again. Eventually it will scan it all and you'll be good from then on.

Never had that problem.

I've normally scanned anywhere from 60-70 TB a day without an issue as I've rebuilt my library a few times. A fresh Emby scan maybe takes 2 days for me. Never had an issue.

Unless I'm doing my math wrong, 70 TB in a day is over 6 gigabits per second.

What kind of crazy insane internet do you have.
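
Back of the envelope:

# 70 TB in a day expressed as a sustained line rate
echo '70 * 10^12 * 8 / 86400 / 10^9' | bc -l   # ~6.48 Gbit/s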

It doesn't download an entire file on a Plex scan. Only a few KB to mediainfo/ffprobe a file and figure out what's in it.

Right. So you're not actually downloading 70TB a day; you're doing much less.

As I said, OP needs to try not to download more than 500GB per day to avoid temp bans. (The real limit is more like 800GB give or take, but you know, wriggle room.)

I know with Emby it first does a file scan, which basically just lists files in folders. This is, like you said, basically nothing. It will then do an ffmpeg mediainfo scan at a later scheduled date, or when you go to play one of these new files. This uses a bit more data, as it needs to download either a partial bit of the file when using VFS (about 50MB or so) or the entire file if you're not using VFS. And then of course there's thumbnail creation, which grabs the whole file regardless.

I don't know the specifics of how Plex does its scans nowadays; it's been a while since I used it. But back in the day Plex didn't separate the file scan and mediainfo tasks like Emby does. It scans them as it goes, which of course requires some file downloads from gdrive.

Anyway, OP, keep an eye on how much is being downloaded when you do a scan and stop it after about 500GB to prevent yourself from being temp-banned for 24 hours. Maybe not the most appropriate app, but I like using ncdu (and r to refresh) to keep an eye on file sizes inside the VFS cache, so I can see at a glance what's being downloaded and how much.
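
Roughly like this, assuming rclone's default cache location (--cache-dir defaults to ~/.cache/rclone):

# interactive view of the VFS cache (press r to refresh)
ncdu ~/.cache/rclone/vfs
# or a rough running total every 30 seconds
watch -n 30 du -sh ~/.cache/rclone/vfs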

Oh! I just saw this in OP's config.
--vfs-read-chunk-size 2048M

So those ffmpeg mediainfo scans are grabbing up to 2GB per file to do their scan. That'll hit the limits pretty quickly.

That's not correct. Emby also does an ffprobe on each file, and it's in the log file if you want to validate it.

Both Plex and Emby do an analyze pass on media when it's added to the library.

That's also not right, so please be careful with these things.

Rclone has done chunked downloads for quite some time now; you'd have to be on a version multiple years old to see an issue, which is why we ask for the rclone version as part of the help template.

That parameter is a range request for a download, and it'll close out if the file isn't fully downloaded, so it doesn't grab 2GB per request. That can be validated via the debug log as well.
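
You can watch the range requests yourself: with --log-level DEBUG on the mount, the ChunkedReader lines quoted earlier show the offsets and sizes (log path as in the OP's unit):

grep ChunkedReader /home/blabla/rclone.log | tail -n 5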

We've got many, many users that download more than 500GB or even 1TB in a day. The majority of posts/feedback I've read/seen puts it more like 10TB per day, albeit team drives act differently and the number of downloads per file has some limits depending on account type, whether it's shared, etc.

Look, I don't know what to tell you. In my experience, downloading close to a TB within a day results in a temp ban from gdrive with 403 errors. And it goes away the next day, until I do it again. This happens regardless of the app: in rclone and in the official gdrive app.

This has happened to me over many different servers over many different setups. And apparently is happening to OP too.

Maybe it's different per data centre region? Maybe the data centre in your country allows more bandwidth than mine (Aus)? I have asked Google support about it in the past and they wouldn't tell me what the limits are; I suspect it's a little fluid based on several factors.

With the VFS chunk: I'm using default VFS full, and I have literally watched how much gets downloaded per file when Emby does a media scan. It changes per file but is in the 50MB ballpark before it moves on to the next file. Maybe there are different factors causing this, like internet speed or CPU power, but that's what I see, so it's right based on my experience.

More likely a setup issue/Plex configuration issue rather than a download issue. The OP has some interesting settings which are most likely the issue.

If you check the Emby logs, you can see that each time media is added, Emby runs an ffprobe on the file to get information on the container contents before adding it to the library.

Test example of ffprobe on a clean mount with default options with vfs-cache-mode full.

felix@gemini:~/test/Movies/Unsane (2018)$ time ffprobe Unsane\ \(2018\).mkv
ffprobe version 4.2.4-1ubuntu0.1 Copyright (c) 2007-2020 the FFmpeg developers
  built with gcc 9 (Ubuntu 9.3.0-10ubuntu2)
  configuration: --prefix=/usr --extra-version=1ubuntu0.1 --toolchain=hardened --libdir=/usr/lib/x86_64
<removed some fluff to make the paste shorter>
real	0m1.501s
user	0m0.093s
sys	0m0.078s

You can use something like iftop to measure how much is downloaded:

[iftop screenshot]
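
Something along these lines; the interface name and the HTTPS filter are assumptions for a typical setup:

# cumulative transfer while ffprobe runs; gdrive traffic is HTTPS (tcp/443)
sudo iftop -i eth0 -f 'port 443'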

You get a bit of overhead with vfs-read-chunk-size as it will continue on the request until the file is closed.

If you make that something small knowing you are scanning a lot, you'd get a much smaller download. Example of 1M request size.

[iftop screenshot]
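
That's just the same kind of mount with the initial range request dialled down, i.e. a sketch along the lines of:

rclone mount gdcrypt: /path/to/mount --vfs-cache-mode full --vfs-read-chunk-size 1M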

With a 2G read-chunk-size, it makes no difference on the download as it's fast enough to close out before anything really happens.

[iftop screenshot]

Maybe I wasn't clear. I'm not talking about rclone and Plex. I'm talking about limits on Google's side.

In a previous job we needed to move several TB out of Gdrive and into a local file share. We used the official app, Google File Stream. Simple copy and paste.

Before it hit a TB, the download stopped and wouldn't continue again until the next day. Repeat until it was finally done.

The 2GB chunk test is interesting. 20MB sounds about right if, as you say, it's fast enough. My server is only a dual-core VPS, but with gigabit internet. Maybe for me it's getting to 50MB before ffprobe finishes and closes the file.