This morning was the first time in a week I was able to run a Plex scan without running into a ban. Very happy!
I have a feeling the issue was being triggered by Bazarr on my Post-Processing server, which was triggering a Library Scan at 4-5am on Rclone 1.40.
So just to confirm, I have the following config on my Post-Processing rclone mount, which handles Sonarr/Radarr etc. I've updated it to 1.45 - do I need to do anything else to stop bans moving forward?
MOUNT
ExecStart=/usr/bin/rclone mount edrive: /home/media/.media/rclone --allow-other
MOVE FROM LOCAL TO CLOUD
rclone move /home/media/.hardlinks/ edrive: --no-traverse --size-only --exclude-from /home/media/.bin/excludes --transfers 3 --log-file /home/media/.bin/rclone.log
Ah hah! Yes, that version would definitely cause a problem - very happy you found the issue!
The defaults should be fine in the majority of cases. The only default you may want to up is:
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
If you want to avoid some excessive API calls, make that something like 24h. New files are detected by API polling, so changes should still be picked up every minute or so even if that value is high. The other VFS values I use are the defaults now anyway.
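As a concrete sketch, the post-processing mount from earlier in the thread could be updated like this (the remote and path come from that post; the 24h value is just the suggestion above, not an official recommendation):

```shell
# Same mount as before, but cache directory listings for a day to cut
# down on Drive API calls. New files are still picked up via API polling.
ExecStart=/usr/bin/rclone mount edrive: /home/media/.media/rclone \
    --allow-other \
    --dir-cache-time 24h
```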
Yep, depending on how you want to do things, you could always spell the defaults out explicitly in case they decide to change them.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
--vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
--vfs-read-chunk-size int Read the source objects in chunks. (default 128M)
--vfs-read-chunk-size-limit int If greater than --vfs-read-chunk-size, double the chunk size after each chunk read, until the limit is reached. 'off' is unlimited. (default off)
The values are listed up there if you want to plug 'em in, or just leave them as the defaults.
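To make the chunked-read behaviour concrete, here's a small sketch of the doubling that --vfs-read-chunk-size describes, assuming the 128M default start and a hypothetical 2G limit (rclone does this internally; this just prints the sequence of chunk sizes it would request):

```shell
# Simulate --vfs-read-chunk-size doubling: start at 128 MiB and double
# after each chunk read until the limit (2048 MiB here) is exceeded.
chunk=128   # MiB, the --vfs-read-chunk-size default
limit=2048  # MiB, a hypothetical --vfs-read-chunk-size-limit of 2G
while [ "$chunk" -le "$limit" ]; do
    echo "requesting a ${chunk}M chunk"
    chunk=$((chunk * 2))
done
```

So a long sequential read ramps up from 128M through 256M, 512M, 1G and 2G requests, which is why a few large range requests replace many small ones.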
My backup rclone copy command needs to check 1.5 million files each day, and for about 2 hours it just seems to be checking whether there are new files. Only after those 2 hours does it start transferring new stuff.
When it plays from my Google Drive, it grabs chunks of the file rather than the whole thing. Same when it analyzes the file: it only grabs some chunks.
If you changed paths in Plex, it has to re-analyze your files, so that'll take some time / data usage.