Rclone syncing is making streaming impossible. Is there a way to restrict the syncing part of rclone to a specific time of day while keeping the drive mounted for use? Or is there something I should add to or remove from my command to block syncing, which I could then change back before I go to bed each night so syncing can resume manually?
To clarify: when I download new items, say 200 individual episodes, rclone encrypts them and puts them on my Google Drive, and while it does that my Plex server is basically unusable. So I want to know if there is a way to schedule the encrypting and uploading of new files to gdrive. For example, if I download new files in the middle of the day, they would only be encrypted and added to gdrive between the hours of 10pm and 8am.
Gonna level with you, I am not entirely clear on how the process works. I think it goes: download the file to a temp directory > Sonarr/Radarr moves the file to the correct directory > rclone sees the new file and encrypts it > rclone uploads the newly encrypted file to gdrive.
If someone is streaming from the server, and the server is pulling from gdrive and transcoding, would the stream count as an upload or a download? Basically I'm not sure what to limit here.
So would this command then work for the buffering issues?

screen rclone mount --cache-dir=~/cache --vfs-cache-mode=full -v --dir-cache-time 1000h --poll-interval 0s --bwlimit 10M:off --allow-other --vfs-cache-max-size 200G --vfs-cache-max-age 1h secret: ~/drive
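Breaking that command down flag by flag, as I read the standard rclone mount options (worth double-checking against the docs for your rclone version):

--cache-dir=~/cache          # directory rclone uses for the VFS cache
--vfs-cache-mode=full        # buffer all reads and writes through the on-disk cache
-v                           # verbose logging
--dir-cache-time 1000h       # keep directory listings cached for up to 1000 hours
--poll-interval 0s           # 0 disables polling the remote for changes
--bwlimit 10M:off            # UP:DOWN form - uploads capped at 10 MiB/s, downloads unlimited
--allow-other                # let users other than the one running rclone access the mount
--vfs-cache-max-size 200G    # start evicting cached data once the cache passes 200 GiB
--vfs-cache-max-age 1h       # drop cached objects not accessed for an hour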
The way --bwlimit works, it applies to both upload and download: if you set it to 10M, that limit impacts traffic in both directions.
Instead, you can use a per-file limit to make sure that no single file saturates your bandwidth:
--bwlimit-file BwTimetable Bandwidth limit per file in KiB/s, or use suffix B|K|M|G|T|P or a full timetable
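As a rough sketch (reusing the secret: remote from your command; the numbers are just placeholders), a per-file cap or a scheduled overall limit would look something like:

rclone mount secret: ~/drive --vfs-cache-mode full --allow-other --bwlimit-file 5M
# no single transfer can use more than 5 MiB/s

rclone mount secret: ~/drive --vfs-cache-mode full --allow-other --bwlimit "08:00,512k 22:00,off"
# timetable form: crawl at 512 KiB/s from 8am, then unlimited from 10pm until 8am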
Personally, on Linux I use mergerfs, which adds another layer on top: all my writes happen to a local mounted area, and I upload that on a scheduled basis, since you can schedule an rclone move/copy from the local mount point to your remote (there is a rough sketch of the scheduled piece at the end of this post).
This requires a bit more knowledge and some time to set up, but I do have my process documented here:
It doesn't really matter what backend you use, as the process is the same. That might be a bit much if you are not well versed on the Linux side of things, so sticking with the per-file limit would be the easier route, though with less customization.
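For what it's worth, the scheduled upload piece is basically just a cron entry running rclone move. A minimal sketch, assuming a local staging directory of /mnt/local/media and the same secret: remote (paths, target folder, and times are placeholders to adjust for your setup), added to the crontab of the user that owns the files:

# run every night at 10pm; skip files written in the last 15 minutes so partial downloads aren't moved
0 22 * * * /usr/bin/rclone move /mnt/local/media secret:media --min-age 15m --delete-empty-src-dirs --log-file /home/user/rclone-move.log -v

Plex and Sonarr/Radarr then point at the mergerfs mount that unions the local staging directory with the rclone mount, so new files are playable immediately and just change storage location when the nightly move runs.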