Hitting Google Workspace download limit daily trying to scan in Plex

So for me, I was always a solo user and I've never had the need for a team drive, and I'm not sure if that's why I never had similar issues.

Same here. Never used team drives.

hi, i would think that should not make a difference for an initial scan, based on this:
"Rclone will start reading a chunk of size --vfs-read-chunk-size, and then double the size for each read."

so the first read is 1MB, the next read is 2MB, then 4MB, 8MB and so on.

so to keep rclone reading 1MiB chunks, you need to use
--vfs-read-chunk-size=1M --vfs-read-chunk-size-limit=0
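
for example, a minimal mount command might look like this (gdrive: and /mnt/gdrive are just placeholders):

rclone mount gdrive: /mnt/gdrive --vfs-read-chunk-size=1M --vfs-read-chunk-size-limit=0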

if just --vfs-read-chunk-size=1M works for you,
then perhaps that setting is per file, not for all reads across all files during the lifetime of the mount?
tho the documentation makes no mention of whether the setting is per file, per mount, or what.

Well, it's incremental, but the lower you start, the smaller the downloads. The initial scan only analyzes a tiny fraction of a file, so instead of requesting a chunk of 128MB (the default), it only requests a chunk of 1MB (and so on). Makes sense to me, but perhaps I'm not understanding it correctly :wink:

As in "This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read, at the cost of an increased number of requests."

I think for this to stay at 1MB chunks, you'd want to use --vfs-read-chunk-size 1M and --vfs-read-chunk-size-limit 1M

EDIT: or perhaps you're right LOL

this is so confusing. when i first wrote that post, i had --vfs-read-chunk-size 1M and --vfs-read-chunk-size-limit 1M, then i thought about it and decided what i had written was correct.

somehow i think they might achieve the same result: keeping the chunk size at 1MiB and not letting it grow.
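
fwiw, the flag help says the chunk size only doubles if --vfs-read-chunk-size-limit is greater than --vfs-read-chunk-size, so as far as i can tell either of these should pin reads at 1MiB:

--vfs-read-chunk-size 1M --vfs-read-chunk-size-limit 0
--vfs-read-chunk-size 1M --vfs-read-chunk-size-limit 1M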

but still i am not clear if those settings are per file or per mount.

i could do a bunch of tests, but i have a strong dislike of analyzing rclone debug logs for mounts.

I think you're right about that.

Per file read, as I understand it, but it applies to the mount, of course.

jeez, the rclone mount documentation is inscrutable, second only to the bible.
every time i read it, i am still confused but i learn something new.

this time i learned that these settings are per file:
"This means that rather than requesting the whole file rclone reads the chunk specified"

rclone sync brainhive:ncw/rclone.mount.knowledge brainhive:jojothehumanmonkey/rclone.mount.knowledge --server-side-across-brains --magic

Try looking at the WinFSP github. That'll make your brain explode :wink:


I might be wrong, but wouldn't forcing the chunk size not to grow lead to download quota errors fast? For a 4K file you'd be looking at an enormous number of download calls, no?

well, here comes @Animosity022, so let's hear what he/she/it has to say

I think it would lead to way more download requests, but the downloaded chunks are also much smaller. So when you want Plex to only analyze a fraction of a file, it uses less data overall.

When you open a file on a mount, the mount sends a HTTP Range request to do a chunked download of the file.

If the file stays continually open, the chunk size will grow over time up to the limit set.

When the file is closed and opened back up, it starts the whole process over again.
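
To illustrate, starting at a 1M chunk with default doubling, the Range headers would look roughly like this (offsets are illustrative, not from a real log):

Range: bytes=0-1048575 (1M)
Range: bytes=1048576-3145727 (2M)
Range: bytes=3145728-7340031 (4M)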

The best way to test is to use mediainfo or ffprobe on a file on Linux. To see what Plex does, you'd have to analyze a file, as it's slightly different: it used to open and close a file 3 times per analyze to get the right info. I haven't retested in quite a few versions.
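
For example (paths are just placeholders), run either of these against a file on the mount and watch the mount's debug log (-vv) to see the range reads:

mediainfo /mnt/gdrive/Movies/some.mkv
ffprobe -hide_banner /mnt/gdrive/Movies/some.mkv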

Generally, the chunk size doesn't matter much unless you are scanning many, many files and you are using a quota'ed remote for downloads.

I had a post previously where I measured the waste of a larger chunk size using mediainfo, and it was fairly negligible in actual download, since if you set a large chunk size, rclone starts a large request and aborts it once the file is closed.

The fun comes in how exactly Google counts that range request against your download total / download per file / whatever else they measure and quota on.

Right, that's why I started using the smaller chunk size to begin with. This was actually your suggestion back then :slight_smile:

I've so far stuck with it due to my scheduled tasks. It seems to not have any negative impact on streaming either. Eventually, if Plex ever finishes analyzing my files, I'll remove the flag.

It's more about perspective: it'll take a little longer to start playing, since you need, say, 100MB of data first (not sure what that number actually is).

If you start at 1M, it might take a string of extra API calls to get that 100MB as the chunk size grows and catches up, but with 128M, it's just 1.
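
Rough math, assuming pure doubling from 1M: 1 + 2 + 4 + 8 + 16 + 32 + 64 = 127MB, so roughly 7 requests to cover the first 100MB, versus 1 request at 128M, or 100 requests if the chunk size is pinned at 1M.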

How much time do those extra API calls take? Depends on what's going on and how fast things return, etc.

I feel like a good sweet spot is probably around that 32-64M range, but the possible gains are so small / hard to measure that I don't care enough; the default does pretty well for most uses and well enough for streaming that I just leave it.
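
If you wanted to try it, something like this (32M is just an example value; the limit defaults to off, so the chunk size will still grow from there):

rclone mount gdrive: /mnt/gdrive --vfs-read-chunk-size 32M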

