I'm not having any kind of problem. I'm just wondering if anyone is currently seeding torrents via their rclone mount? If they are, I'm curious what their setup is and what the potential pros and cons of doing so are.
Run the command 'rclone version' and share the full output of the command.
```
rclone v1.58.1
os/version: alpine 3.15.4 (64 bit)
os/kernel: 5.4.0-113-generic (x86_64)
os/type: linux
os/arch: amd64
go/version: go1.18.1
go/linking: static
go/tags: none
```
Which cloud storage system are you using? (eg Google Drive)
Google Drive
The command you were trying to run (eg rclone copy /tmp remote:tmp)
Ok, so let's theoretically say I do all my seeding from the mount and set a VFS cache of, say, 500G. If all my torrents are active at the same time (unlikely, but possible) and their total size exceeds 500G, what will happen? Will I see errors thrown in the torrent client?
Would the cache count the size of the full files or only the chunks it's requesting?
I presume I also risk hitting transaction limits at the Google end, unless I slow everything down?
It would be great to give myself some extra seeding capacity, but not at the expense of stability, hence looking for pros and cons
Torrents will cause the in-use files to be loaded into the cache completely, as they are accessed randomly.
Rclone will exceed the 500G limit - it doesn't delete files that are in use from the cache.
I'd say seeding from a mount is only worth doing provided that the total GB that you are seeding is less than the amount of disk space you can devote to the cache.
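To make that concrete, here's a minimal sketch of the kind of mount this implies - the remote name, mount point and sizes are placeholders, not anyone's actual config:

```
# A sketch only - adjust remote, mount point and sizes to your setup.
# --vfs-cache-max-size is a soft limit: open files are never evicted,
# which is exactly why the cache can grow past it while torrents are active.
rclone mount gdrive: /mnt/gdrive \
  --vfs-cache-mode full \
  --vfs-cache-max-size 500G \
  --vfs-cache-max-age 24h
```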
Just out of interest, if the cache were to become full due to too many torrents seeding at any one time, where would the point of failure be? Would I expect the torrent client to start throwing errors as it was unable to read files that couldn't be loaded into the stuffed cache? The second point of failure, I presume, would be something like Plex being unable to stream anything?
I suppose I could mitigate the risk by setting a limit on the number of torrents that can be active at any one time?
Regarding speed limits set on the mount, would I be correct in thinking this would only be an issue once the torrent is seeding and not during the initial leeching, as the file would be loaded into the cache first? Or would bandwidth limits be in place from the outset?
It's quite possible I won't go ahead with this, but I just want to make sure I fully understand the risk to reward ratio before I make any decisions
Yeah I hear you. I thought it was probably a fool's errand, but felt I should do my due diligence and at least try to understand the pros and cons
While I have your attention, mate, I'm going to slightly hijack my own thread here and ask you about your mount settings. I read some posts recently where I saw you had toyed with running mounts without a cache. Just curious whether you have kept that going, or have you gone back to a cached mount? Bar some slight variations, I believe my mount is fairly similar to what you were previously running
vfs-read-chunk-size is not related to the cache mode at all, so it operates regardless of which mode you use.
vfs-read-chunk-size is the http range request it sends to the remote when you request to read a file. Generally, if you are reading sequentially, it really doesn't matter much as it'll ramp up anyway.
If you are doing random stuff, a smaller chunk size would be beneficial but adds more overhead / more API calls.
Say you want 256MB of data and you have a 32M read size: it'll make 8 API calls to get it. With 64M, 4 calls; with 128M, 2 calls, etc.
If it's all random access like torrenting, I'd keep it small, like 16M or something, and test it out. You'll get more API calls, but the pattern is random anyway, so having it large adds a "bit" of waste.
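If you want to experiment, it's a one-flag change. A sketch only - 16M is just a starting point, and capping the ramp-up with --vfs-read-chunk-size-limit is my own assumption to keep requests small for random access, so test against your own workload:

```
# Hypothetical starting point for random-access (torrent-style) reads.
# Smaller chunks = more API calls, but less wasted download per request.
rclone mount gdrive: /mnt/gdrive \
  --vfs-read-chunk-size 16M \
  --vfs-read-chunk-size-limit 64M
```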
Thanks, that is interesting and I clearly had misunderstood.
So presuming I am taking torrenting out of the equation, would I be better served having a smaller or larger read size, alongside the mount I shared above? It's currently mainly accessed by Radarr, Sonarr and Plex
I thought I saw mention of a 1MB read (was it @VBB who mentioned it?) working well with Plex background tasks and scans. However, that seems counterintuitive to me, as I would have thought it would increase the API calls to the point of triggering quota limits on a large library. Apologies if I've misunderstood once again
There's no real tangible API limit for Google Drive as you have like 1 billion calls per day or something.
You do have download and upload quotas, and while there is always a lot of conjecture, I (in my testing and experience) have never found any correlation between more API calls and hitting a download quota.
When Plex analyzes a file, it only reads a few MB of the file. If you have the default and/or a larger read request, you get a little bit of waste, as it'll read a little longer before closing out. If you were scanning, say, 100TB, even a small amount of waste adds up, so you get a better experience with a smaller size.
If you are streaming, that's all sequential, so it doesn't really matter much, albeit with a slightly slower start time at a 1M chunk size as it has to ramp up a bit. For me, I really don't care if it takes 1s or 1.5s to start, as it's fast enough for me.
I try to leave everything on defaults unless I have a very good reason to change something and it has a very large impact on performance. I go with a less-is-more philosophy.
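For a scan-heavy Plex/Radarr/Sonarr mount, the small-read idea boils down to a single flag. Treat this as a sketch rather than a recommendation, since as above I'd stick to defaults unless testing shows a clear win:

```
# Hypothetical: shrink the initial range request for scan-heavy workloads.
rclone mount gdrive: /mnt/gdrive --vfs-read-chunk-size 1M
```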
What will happen if rclone starts getting disk full errors is that it will chuck clean files out of the cache even if they are open. They will immediately be re-opened - you'll just lose the cached data.
This is probably more or less what you want! I'd set any mount I was using for torrenting to read-only to save on mistakes.
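E.g. something like this (remote name and mount point are placeholders):

```
# Read-only torrent mount sketch - the client can read/seed but never write.
rclone mount gdrive: /mnt/torrents --read-only --vfs-cache-mode full
```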
Yes, that would do it.
If you use --bwlimit it will be active for all network transactions, so it applies while the file is being downloaded from the storage as well - the limit is in place from the outset, not just once you're seeding.
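For example (the rate here is purely illustrative):

```
# Caps all of rclone's own traffic to/from the remote, in both directions.
rclone mount gdrive: /mnt/gdrive --bwlimit 50M
```

You can also split the limit with --bwlimit 10M:100M, which limits upload to 10 MiB/s and download to 100 MiB/s.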