Issue with using rclone + gdrive for plex

What is the problem you are having with rclone?

I'm using the config below to mount my Team Drive as a Plex library.

 rclone mount --allow-other --allow-non-empty \
    --drive-pacer-min-sleep=10ms --drive-pacer-burst=200 \
    --vfs-cache-mode=full --dir-cache-time 72h \
    --log-file /home/user/.cache/plex/log \
    --cache-dir /home/user/.cache/plex \
    --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off \
    --vfs-cache-max-size=50G --vfs-cache-max-age=72h \
    --vfs-cache-poll-interval=1m --vfs-read-ahead=4G \
    --bwlimit-file=25M -v \
    plexcloud: /home/user/plexcloud &

The issue I'm facing: when I ran a scan after adding 10-15 new TV series, the scan took much longer than I expected, so I took a look and found that rclone was downloading chunks of each episode of each series into my cache folder. Before this I was using a simple rclone mount command, and that did not download such chunks of data, or at least not chunks as large as with this new command (I checked network traffic while running library scans). The old mount command was simply:

`rclone mount --allow-other --allow-non-empty -v plexcloud: /home/user/plexcloud &`

This is running on a Raspberry Pi 4, and I have set my own client_id and client_secret for rclone.

Am I doing something wrong with the new config? Is there any way to stop rclone from downloading such huge chunks of data when scanning the library? If anyone else is running a similar set-up on their Raspberry Pi, please share your rclone command.
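In case it helps anyone reproduce this, the cache growth can be watched while a scan runs. The path below is just the one from the `--cache-dir` flag in the mount command above; adjust it to match your own mount:

```shell
# One-shot check of how much data rclone's VFS cache currently holds;
# the path matches the --cache-dir flag in the mount command above.
du -sh /home/user/.cache/plex

# Or poll it once a minute to watch growth during a library scan:
watch -n 60 du -sh /home/user/.cache/plex
```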

Run the command 'rclone version' and share the full output of the command.

rclone v1.61.1
- os/version: raspbian 11.5 (64 bit)
- os/kernel: 5.15.76-v8+ (aarch64)
- os/type: linux
- os/arch: arm64
- go/version: go1.19.4
- go/linking: static
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Google Drive (Shared Drive)

The rclone config contents with secrets removed.

[plexcloud]
type = drive
scope = drive
token = {"access_token":"yaxxxxxxx6","token_type":"Bearer","refresh_token":"1//0xxxxxxxfM","expiry":"2023-02-09T23:27:05.580111591+05:30"}
team_drive = 0xxxxxxxxVA
root_folder_id = 
client_id = 9xxxxapps.googleusercontent.com
client_secret = Gxxxxxxxxxx5

A log from the command with the -vv flag

2023/02/09 21:33:05 INFO  : vfs cache: cleaned: objects 7 (was 7) in use 0, to upload 0, uploading 0, total size 6.079Gi (was 6.079Gi)
2023/02/09 21:34:05 INFO  : vfs cache: cleaned: objects 7 (was 7) in use 0, to upload 0, uploading 0, total size 6.079Gi (was 6.079Gi)
2023/02/09 21:35:05 INFO  : vfs cache: cleaned: objects 7 (was 7) in use 0, to upload 0, uploading 0, total size 6.079Gi (was 6.079Gi)
2023/02/09 21:36:05 INFO  : vfs cache: cleaned: objects 25 (was 25) in use 1, to upload 0, uploading 0, total size 6.680Gi (was 6.680Gi)
2023/02/09 21:37:05 INFO  : vfs cache: cleaned: objects 45 (was 45) in use 1, to upload 0, uploading 0, total size 7.096Gi (was 7.096Gi)
2023/02/09 21:38:05 INFO  : vfs cache: cleaned: objects 57 (was 57) in use 1, to upload 0, uploading 0, total size 7.415Gi (was 7.415Gi)

hello and welcome to the forum,

the vfs file cache is optional, some rcloners use it, some do not.

i would remove the following and test again.
--vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off --vfs-read-ahead=4G

unless you are 1000% sure what the flags do, remove them.

try using -vv for debug output
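putting that together, the trimmed mount would look something like this (same remote and paths as in the original post, just with those three flags removed and -vv for debug output):

```shell
# sketch of the suggested trimmed mount: the original command minus
# --vfs-read-chunk-size, --vfs-read-chunk-size-limit and --vfs-read-ahead,
# with -v bumped to -vv for debug logging
rclone mount --allow-other --allow-non-empty \
    --drive-pacer-min-sleep=10ms --drive-pacer-burst=200 \
    --vfs-cache-mode=full --dir-cache-time 72h \
    --log-file /home/user/.cache/plex/log \
    --cache-dir /home/user/.cache/plex \
    --vfs-cache-max-size=50G --vfs-cache-max-age=72h \
    --vfs-cache-poll-interval=1m \
    --bwlimit-file=25M -vv \
    plexcloud: /home/user/plexcloud &
```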

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.