New Feature: vfs-read-chunk-size

I never used cache; my mount point was always a gdrive > crypt remote. My drive will soon reach 200TB with everything included, so there are quite a few files to be scanned :sweat_smile:

You are right that rclone will only download the bytes mediainfo actually reads. But the call rclone sends to the remote requests the whole rest of the file, starting at the current offset (byte 0 on open) and running until the end. It is this “requested bytes” number that counts towards the daily quota, not the “downloaded bytes” (at least for Google Drive).
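To make the distinction concrete, here is a small Python sketch of that accounting. It is a model for illustration only (the 64 KiB read size and the exact quota bookkeeping are assumptions, not anything from the Drive API): an open-ended range request counts everything from the offset to the end of the file as “requested”, while “downloaded” is only what is actually read.

```python
# Hypothetical model of how an open-to-EOF request inflates the
# "requested bytes" quota even when only a few bytes are downloaded.

FILE_SIZE = 77 * 1024**3  # a 77 GiB file, as in the mediainfo example

def quota_cost(offset, bytes_read, file_size=FILE_SIZE):
    requested = file_size - offset  # open-ended range: offset to EOF
    downloaded = bytes_read         # what the reader actually consumes
    return requested, downloaded

# One open at byte 0, reading only a 64 KiB header:
requested, downloaded = quota_cost(offset=0, bytes_read=64 * 1024)
print(requested // 1024**3)   # 77 -> the full 77 GiB counts as requested
print(downloaded // 1024)     # 64 -> only 64 KiB was downloaded
```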

My config is quite simple:

[gsuite]
type = drive
client_id = my.client_id.apps.googleusercontent.com
client_secret = my_client_secret
token = {...}

[gsuite-crypt]
type = crypt
remote = gsuite:data
filename_encryption = standard
password = password
password2 = password2

When running a Plex scan with this config against, let’s say, 1TB of new files, this would result in a 24h ban.
Plex will open every file, read some data and seek to multiple positions in the file to collect the metadata.
As described above, every open and seek counts towards the “requested bytes” limit (which seems to be 10TB for Google Drive). Since each seek requests up to the full file size, Plex seeking 10 times per file means 1TB of new files is enough to exceed the 10TB limit.
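The arithmetic behind the ban, as a sketch (assuming the worst case where each open or seek triggers one open-ended request spanning roughly the whole file):

```python
new_files_tb = 1        # 1 TB of new files scanned by Plex
requests_per_file = 10  # opens + seeks per file (rough figure from above)
quota_tb = 10           # apparent daily "requested bytes" quota for Drive

# Worst case: every request counts (almost) the whole file again.
requested_tb = new_files_tb * requests_per_file
print(requested_tb)  # 10 -> the quota is already exhausted
```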

Here is an example strace of mediainfo where this behavior can be seen.
I annotated the calls that trigger the rclone requests with the byte range requested. Running mediainfo on this 77 GB file caused 3 opens and 13 seeks, 16 requests sent by rclone in total. Summing up the bytes requested, this single file would add 1080.09 GB to the “requested bytes” limit.

Running the same command with --vfs-read-chunk-size 128M would only add 2 GB to this limit.
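With chunked reading, each of those 16 requests asks for at most one chunk instead of the rest of the file, so the worst-case quota cost is easy to bound (a sketch; rclone may grow the chunk size on sequential reads, which would change the exact numbers):

```python
chunk = 128 * 1024**2  # --vfs-read-chunk-size 128M
requests = 16          # 3 opens + 13 seeks from the strace above

# Upper bound: every request fetches one full 128 MiB chunk.
worst_case = requests * chunk
print(worst_case / 1024**3)  # 2.0 -> about 2 GB instead of ~1080 GB
```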

I hope this explains the difference between “downloaded bytes” and “requested bytes”.