Getting banned from GDrive for the initial scan in Plex

What is the problem you are having with rclone?

I'm getting banned from GDrive for the initial scan in Plex. I've been running the same command for a couple of years and all was well until I moved to a new server and reused it. It seems to be having trouble scanning my library, and Google is returning 403s as a result of the ban.

How's my command? What should I tweak here to avoid getting banned? I have about 500 GB free locally and a library of roughly 150 TB of data on GDrive, with about 5-10 streams happening at any given time.

I did make sure things like thumbnail generation, media analysis, intro detection, etc. are all turned off in Plex.

Run the command 'rclone version' and share the full output of the command.

rclone v1.58.1

  • os/version: ubuntu 22.04 (64 bit)
  • os/kernel: 5.15.0-30-generic (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.17.9
  • go/linking: static
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

Here's my rclone service.

ExecStart=/usr/bin/rclone mount \
        --config=/.config/rclone/rclone.conf \
        --allow-other \
        --buffer-size 256M \
        --dir-cache-time 96h \
        --drive-chunk-size 128M \
        --log-level INFO \
        --log-file /logs/rclone.log \
        --umask 002 \
        --vfs-read-chunk-size 128M \
        --vfs-read-chunk-size-limit off \
        gdrive: /mnt

The rclone config contents with secrets removed.

[gdrive]
type = drive
client_id = removed
client_secret = removed
scope = drive.readonly
token = removed

logs

Just a bunch of

downloadQuotaExceeded
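
A quick way to gauge how often the mount is hitting this, assuming the log path from the mount command above:

grep -c downloadQuotaExceeded /logs/rclone.log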

Hi,

I would mimic these settings:
https://github.com/animosity22/homescripts/blob/master/systemd/rclone-drive.service

And do you run rclone vfs/refresh before running the Plex scan?
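
For reference, a one-off refresh against a mount started with --rc looks something like this (127.0.0.1:5572 is rclone's default rc address):

rclone rc vfs/refresh recursive=true --rc-addr 127.0.0.1:5572 _async=true

Priming the dir cache this way before the scan means Plex walks directory listings from memory instead of hammering the Drive API.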


Thank you for that. I would have thought --vfs-read-chunk-size would help here, but I don't see it there. Any thoughts on that?

I could be wrong, but --vfs-read-chunk-size is used when downloading an entire file, same as with rclone copy, not for streaming, random-access reads, or Plex scanning.

Also, the default value is 128M, so adding it to your mount command does nothing.
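
You can confirm the defaults straight from the binary:

rclone help flags | grep vfs-read-chunk

On v1.58 that shows --vfs-read-chunk-size defaulting to 128Mi and --vfs-read-chunk-size-limit defaulting to off, so both of those lines in the mount above are no-ops.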

Alright, here's the full service. It seems to work when I run the ExecStart manually, but nothing happens when I do systemctl start:

# /etc/systemd/system/rclone.service
[Unit]
Description=Google Drive (rclone)
Wants=network-online.target
After=network-online.target

[Service]
Type=notify
User=root
Group=root
KillMode=none
RestartSec=10
ExecStart=/usr/bin/rclone mount gdrive: /mnt \
        --config=/.config/rclone/rclone.conf \
        --rc \
        --user-agent server \
        --allow-other \
        --cache-dir=/tmp/cache \
        --buffer-size 256M \
        --dir-cache-time 5000h \
        --drive-chunk-size 128M \
        --log-level INFO \
        --poll-interval 10s \
        --drive-pacer-min-sleep 10ms \
        --log-file /logs/rclone.log \
        --drive-pacer-burst 200 \
        --umask 002 \
        --vfs-cache-mode full \
        --vfs-cache-max-size 250G \
        --vfs-cache-max-age 5000h \
        --vfs-cache-poll-interval 5m \
        --vfs-read-chunk-size-limit off
ExecStop=/bin/fusermount -uz /mnt
ExecStartPost=/usr/bin/rclone rc vfs/refresh recursive=true --rc-addr 127.0.0.1:5572 _async=true
Restart=on-failure

[Install]
WantedBy=multi-user.target
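
If systemctl start seems to do nothing, a couple of standard systemd checks (generic commands, nothing rclone-specific) usually surface the reason:

sudo systemctl daemon-reload
sudo systemctl start rclone.service
journalctl -u rclone.service -e

With Type=notify, systemd only marks the unit as started once rclone signals that the mount is up, so a bad path or config error lands in the journal rather than on your console. Note too that the vfs/refresh in ExecStartPost can only connect if the mount itself runs with --rc.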

Well, what I think... oh, never mind, here comes @Animosity022 :wink:

Not sure you can tune your way out of this.

If the library paths changed or were adjusted, Plex will reanalyze the files. If you have a large library, that will probably trip your download quota, which is why I left Google: those quotas just annoy me, since there's no way for Google to tell you what threshold you crossed or what the issue is.

They give you a generic answer, and it's different for regular drives, shared drives, etc.


Hi @Animosity022, is this correct?

That's just the HTTP range request rclone uses when it requests something from a cloud remote.

This would be specific to a mount and the cache mode doesn't matter.

If you set it high, you might get a little bloat / extra data before it closes out. Specifically how Google counts that range request against your quota is unknown.

So if I have a file and I want to read, say, 1KB of it and my default range request is 128M, rclone starts a larger chunk download for 128M; once I get my 1KB, it closes out, and I bleed into a little bit of extra download.

@VBB was testing with 1MB range requests, and that generally seemed to work; on a large library scan it would have some impact. Without knowing how the quotas work, though, it's a guess whether that impact is material or not.

In the debug logs, you'll see something like chunkSize scaling up on sequential reads: the longer a file is read, the more it doubles the ranges to help performance.
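
As a rough sketch of that scaling (offsets computed for illustration, not copied from a log), a long sequential read starting at --vfs-read-chunk-size 128M with no limit issues HTTP ranges like:

Range: bytes=0-134217727            first 128M chunk
Range: bytes=134217728-402653183    doubled to 256M
Range: bytes=402653184-939524095    doubled again to 512M

With a 1M starting size, the first request covers only bytes=0-1048575, so a scan that reads a few KB per file requests far less data before closing out.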

This gives a great explanation of it:

New Feature: vfs-read-chunk-size - Howto Guides - rclone forum

I've been using 1MB for about a year now without any negative impact. Initially, it was for the Plex agent upgrade, but then I enabled "Upgrade media analysis during maintenance", and it's been running nightly ever since (not sure if it's supposed to finish at some point, but mine doesn't).

So, for an initial library scan as well as other, more involved scans, I'd recommend setting --vfs-read-chunk-size 1M.
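
In the service posted above, that's one extra continuation line in the ExecStart (a sketch; everything else stays as-is):

        --vfs-read-chunk-size 1M \

followed by systemctl daemon-reload and a restart of the unit. Streaming performance still scales up fine, since the chunk size doubles on sequential reads.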


Yep, this is normal. You'll just need to deal with the temporary 403s until your entire library scan is complete. It's annoying, but it is what it is. Plex's aggressive scanning isn't really meant to be used with Google cloud storage, but you'll get through it eventually.
