How practical is seeding from an Rclone mount?

What is the problem you are having with rclone?

I'm not having any kind of problem. I'm just wondering if anyone is currently seeding torrents via their Rclone mount? If they are, I'm curious as to what their setup is and what the potential pros and cons are of doing so?

Run the command 'rclone version' and share the full output of the command.

rclone v1.58.1

  • os/version: alpine 3.15.4 (64 bit)
  • os/kernel: 5.4.0-113-generic (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.18.1
  • go/linking: static
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

N/A

My Docker script (just the mount section)

mount "RichFlixCrypt:/Encrypted/" /mnt/Skull --allow-other --dir-cache-time 1000h --log-level INFO --log-file /logs/Plex_Mount.log --poll-interval 15s --timeout 1h --umask 002 --cache-dir=/mnt/rclone_cache --vfs-cache-mode full --vfs-cache-max-size 100G --vfs-cache-max-age 9999h --vfs-read-ahead 1G --uid 1000 --gid 1000 --tpslimit 3 --bwlimit-file 20M

The rclone config contents with secrets removed.

N/A

A log from the command with the -vv flag

N/A

sure, you can seed from a rclone mount.
really, not much different from streaming a media file.

in either case, rclone will download the chunks from gdrive.

note: rclone will not delete an in-use file from the local vfs file cache.

Ok, so let's theoretically say I do all my seeding from the mount and set a VFS cache of, say, 500G. If all my torrents are active at the same time (unlikely but possible) and the total exceeds 500G, what will happen? Will I see errors thrown in the torrent client?

Would the cache count the size of the full files or only the chunks it's requesting?

I presume I also risk hitting transaction limits at the Google end, unless I slow everything down?

It would be great to give myself some extra seeding capacity, but not at the expense of stability, hence I'm looking into the pros and cons.

Cheers

Also, is there anything about my mount command you might change?

Torrents will cause the in-use files to be loaded into the cache completely, as they are random access.

Rclone will exceed the 500G limit - it doesn't delete files that are in use from the cache.

I'd say seeding from a mount is only worth doing provided that the total GB that you are seeding is less than the amount of disk space you can devote to the cache.
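
A quick way to sanity-check that is to compare the two numbers; the torrent path here is just a guess at where your seeded data lives:

rclone size "RichFlixCrypt:/Encrypted/Torrents"   # total size of what you plan to seed
df -h /mnt/rclone_cache                           # free space on the disk backing the VFS cache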

Thanks Nick,

Just out of interest, if the cache were to become full due to too many torrents seeding at any one time, where would the point of failure be? Would I expect the torrent client to start throwing errors as it was unable to read files that couldn't be loaded into the full cache? The second point of failure, I presume, would be somewhere like Plex, which would be unable to stream anything?

I suppose I could mitigate the risk by setting a limit on the number of torrents that can be active at any one time?

Regarding speed limits set on the mount, would I be correct in thinking this would only be an issue once the torrent is seeding and not during the initial leeching, as the file would be loaded into the cache first? Or would bandwidth limits apply from the outset?

It's quite possible I won't go ahead with this, but I just want to make sure I fully understand the risk-to-reward ratio before I make any decisions.

Cheers

Seeding from a cloud mount is really a horrific use case for cloud storage.

If you don't run out of disk space, it just eats more than you want on the local disk until it frees up.

Yeah, I hear you. I thought it was probably a fool's errand, but felt I should do my due diligence and at least try to understand the pros and cons.

While I have your attention, mate, I'm going to slightly hijack my own thread here and ask you about your mount settings. I read some posts recently where I saw you had toyed with running mounts without a cache. Just curious whether you have kept that going, or have you gone back to a cached mount? Bar some slight variations, I believe my mount is fairly similar to what you were previously running.

I've used a cached mount since it was released. I haven't changed from that.

My only process change was to remove mergerfs and just let the mount do the uploads; it's been that way for me for a few months now.

Ah ok, I clearly misunderstood that part of the conversation. It was related to the use of vfs-read-chunk-size.

vfs-read-chunk-size is not related to the cache mode at all; it operates regardless of the cache mode.

vfs-read-chunk-size sets the size of the HTTP range request rclone sends to the remote when you read a file. Generally, if you are reading sequentially, it doesn't matter much, as it'll ramp up anyway.

If you are doing random stuff, a smaller chunk size would be beneficial but adds more overhead / more API calls.

Say you want 256MB of data and you have a 32M read size: it'll make 8 API calls to get that. With 64M it's 4 calls, with 128M, 2 calls, etc.

If it's all random access like torrenting, I'd keep it small, like 16M or so, and test it out. You'll get more API calls, but the pattern is random anyway, so having it large adds a bit of waste.
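
If it helps to see the arithmetic, here's a rough sketch (random access, so no ramp-up; the 256M read is just an example):

# with a fixed chunk size, a 256M read needs 256/chunk range requests
for chunk in 16 32 64 128; do
  echo "chunk ${chunk}M -> $((256 / chunk)) range requests per 256M read"
done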


Thanks, that is interesting and I clearly had misunderstood.

So, presuming I take torrenting out of the equation, would I be better served by a smaller or a larger read size with the mount I shared above? It's currently mainly accessed by Radarr, Sonarr and Plex.

I thought I saw mention of a 1MB read (was it @VBB who mentioned it?) working well with Plex background tasks and scans. However, that seems counterintuitive to me, as I would have thought it would increase the API calls to the point of triggering quota limits on a large library. Apologies if I've misunderstood once again.

There's no real tangible API limit for Google Drive as you have like 1 billion calls per day or something.

You do have download and upload quotas, and while there is always a lot of conjecture, I (in my testing and experience) have never found any correlation between more API calls and hitting a download quota.

When Plex analyzes a file, it only reads a few MB of the file, and if you have the default and/or a larger read request, you get a little bit of waste as it'll read a little longer before closing it out. If you were scanning, say, 100TB, even a small amount of waste adds up, so you get a better experience with a smaller size.

If you are streaming, that's all sequential, so it doesn't matter much, albeit with a slightly slower start time with a 1M chunk size as it has to ramp up a bit. For me, I really don't care whether it takes 1s or 1.5s to start, as it's fast enough.

I try to leave everything on defaults unless I have a very good reason to change something and it has a large impact on performance. I go with a less-is-more philosophy.

Thanks, great explanation

So adding something like the following to my script might be worth an experiment?

--vfs-read-chunk-size 1M --vfs-read-chunk-size-limit 2G
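
For clarity, that would make the full mount line (everything else unchanged from what I posted above):

mount "RichFlixCrypt:/Encrypted/" /mnt/Skull --allow-other --dir-cache-time 1000h --log-level INFO --log-file /logs/Plex_Mount.log --poll-interval 15s --timeout 1h --umask 002 --cache-dir=/mnt/rclone_cache --vfs-cache-mode full --vfs-cache-max-size 100G --vfs-cache-max-age 9999h --vfs-read-ahead 1G --uid 1000 --gid 1000 --tpslimit 3 --bwlimit-file 20M --vfs-read-chunk-size 1M --vfs-read-chunk-size-limit 2G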

I would definitely experiment as that should be fine.


This will get you banned on a lot of trackers.

What will happen if rclone starts getting disk full errors is that it will chuck clean files out of the cache even if they are open. They will immediately be re-opened - you'll just lose the cached data.

This is probably more or less what you want! I'd set any mount I was using for torrenting to read-only to save on mistakes.
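
As a sketch only (remote name reused from the mount above; the mount point, cache size and cache dir are placeholders), a read-only mount dedicated to the torrent client might look like:

rclone mount "RichFlixCrypt:/Encrypted/" /mnt/Torrents --read-only --allow-other --vfs-cache-mode full --vfs-cache-max-size 500G --cache-dir=/mnt/torrent_cache --vfs-read-chunk-size 16M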

Yes, that would do it.

If you use --bwlimit, it will be active for all network transactions, so it applies while the file is being downloaded from the storage as well.
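
To contrast it with the flag already in the mount above:

--bwlimit 20M        # global cap on traffic to/from the remote, including the initial download into the cache
--bwlimit-file 20M   # the flag used above; caps bandwidth per file instead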

What do you mean?

Several trackers explicitly have rules against seeding from online storage / cloud. Just a friendly reminder to check, so you don't get banned.

There's no way you can tell whether it's online storage or a slow disk, so I'm not sure how that would ever be enforceable.