Rclone mount torrent downloading/seeding

What is the problem you are having with rclone?

I'm getting an error in qbittorrent whenever I use a path mounted with rclone. The download will start, but within 10 or 15 seconds the status will just change to 'error' and it will stop. This does not happen if I change the download location to the local drive. I've done some searching, but most of the threads with good answers that I could find were older, so I'm wondering what the current state of this issue is.

I've tried several different settings, but my current settings are the following:


Run the command 'rclone version' and share the full output of the command.

- os/version: ubuntu 22.04 (64 bit)
- os/kernel: 5.15.0-1039-oracle (aarch64)
- os/type: linux
- os/arch: arm64 (ARMv8 compatible)
- go/version: go1.20.2
- go/linking: static
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)


The rclone config contents with secrets removed.

type = s3
provider = Minio
env_auth = false
access_key_id = xxxx
secret_access_key =  xxx
region = us-east-1
endpoint = http://xxxxx
location_constraint =
server_side_encryption =

Do not torrent directly to rclone mount.
Only move downloaded torrent to mount when it is finished. Most torrent clients have option to do this.
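In qBittorrent this can be done with the "incomplete downloads" folder option. A hypothetical config fragment along those lines (key names taken from recent qBittorrent versions; paths are placeholders, so verify against your own qBittorrent.conf):

```ini
; In-progress pieces land on local disk; finished torrents are moved
; to the default save path, which points at the rclone mount.
[BitTorrent]
Session\TempPathEnabled=true
Session\TempPath=/home/user/incomplete
Session\DefaultSavePath=/mnt/rclone/torrents
```

The same options are available in the GUI under Options, Downloads.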

That's how I'm doing it now, but it results in a lot of manual work for me, because I want to be able to download and seed from my rclone mount without having to move things around or learn to script. If I could download directly to the rclone mount, that would help a lot.

We need a rclone debug log to figure out the exact problem.

Fair enough, I'll give it a look tonight.

I thought maybe the torrent thing had been figured out after all these years of people trying to get it to work.

Torrent downloading is random writes. rclone is not meant for random writes; it's meant for streaming writes. It does support random reads (though perhaps more slowly).

I can't imagine it will ever support random writes as that isn't the use case the physical backends it interfaces with are trying to solve.
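To make the write-pattern point concrete, here is a minimal sketch of how a torrent client's writes differ from a streaming upload: pieces arrive in arbitrary order and each is written at its own offset (simulated here with dd; the file and piece contents are made up for illustration):

```shell
# A torrent client receives pieces in arbitrary order and writes each
# at its own offset - a random-write pattern, not a streaming one.
f=$(mktemp)
printf 'seeding'  | dd of="$f" bs=1 seek=8 conv=notrunc 2>/dev/null  # a later piece arrives first
printf 'torrent ' | dd of="$f" bs=1 seek=0 conv=notrunc 2>/dev/null  # the first piece arrives second
result=$(cat "$f")
echo "$result"   # -> torrent seeding
rm -f "$f"
```

A streaming backend only sees an upload of the finished object, so it has no way to service the two out-of-order WriteAt-style operations above without local buffering.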

Why wouldn’t rclone work with random writes?

If the OP shares a log, that’s what is really needed to see what issue is there.

Streaming is one thing that people use rclone for but it has quite a lot of other uses as well.

I stand by my statement: rclone mounts aren't that good for random writes with the majority of cloud service providers, and there are better methods if one wants to use them. That's especially true in the context of torrent clients, which open and close files as needed rather than keeping them open permanently (otherwise any client sharing a decent number of torrents could easily hit open-file limits). While one can try to shoehorn them in, performance will always suffer; there are better ways of doing it.

Rclone mounts work just fine for random writes.

I think what you are trying to say is that cloud providers with random writes and/or reads are slower than local disk. There isn't a technical reason rclone doesn't work with random access patterns; it's just slower.

It really isn't rclone per se either as it's the latency from a client to the data that causes the delays.

That's the same for local and cloud storage.

Better is a relative term and depends on what the use case and what you want to happen. If my goal is to seed long term, I'd have no issues tossing a mount in front of a lot of data and a big cache disk for performance.

What works for me may not work for you.

  1. I didn't say one can't seed off an rclone mount (that easily works; I do similar things myself, with an 800GB SLC cache as my rclone mount cache and very good speeds). I was discussing "writing" (i.e. downloading) a torrent directly to an rclone mount.

  2. Most torrent clients open/close files on demand (including when writing). My understanding is that writing to an rclone mount in this manner would be problematic (rclone would try to sync the file to the remote on close, while it's incomplete) and would continually do that.

  3. I'll get back to my question: do any cloud providers support "random write", i.e. the ability to overwrite only a section of a file? I'm guessing one could perhaps simulate random writes with chunker? (Though that possibly gets into read-modify-write scenarios.)
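For reference, a chunker overlay is configured roughly like this (remote name and chunk size are placeholders). Note that chunker splits a file into fixed-size parts on the remote, but each part is still uploaded whole, so it does not give true random-write semantics; it would only shrink the unit of the read-modify-write:

```ini
[chunked]
type = chunker
remote = minio:bucket
chunk_size = 64Mi
```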

Indeed seeding is not a problem at all given that there is enough cache available.

The challenge is writing - a torrent download is probably the absolute worst-case scenario of random writes to an rclone mount :) One can experiment with delayed writing (--vfs-write-back), but it is not a real solution, only a workaround. Torrent inactivity can always exceed the write-back delay.
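A minimal sketch of that workaround (remote name, mount point, and values are illustrative, not recommendations):

```shell
# --vfs-cache-mode full  : buffer reads and writes in a local cache,
#                          which is what makes random writes possible
# --vfs-write-back 1h    : wait 1h after the last close before uploading
# --vfs-cache-max-size   : cap the size of the local cache
rclone mount remote:torrents /mnt/torrents \
  --vfs-cache-mode full \
  --vfs-write-back 1h \
  --vfs-cache-max-size 100G
```

If a torrent stalls for longer than the write-back window, the partial file still gets uploaded, which is exactly the failure mode described above.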

The easiest and safest solution is to write the in-progress torrent to a local disk directory and move it to the rclone mount when finished. Most torrent clients support such functionality out of the box - no scripting needed.

Writing works just fine.

That's not right, as the client keeps a file open and usually writes it in random order as pieces of the file arrive. Most clients support pre-allocation of files, and rclone supports that as well, so the file gets preallocated anyway.

That's not right, as the file stays open. Opening and closing files also isn't an issue, as rclone detects that. If there are longer periods of a file being closed, you can configure --vfs-write-back to something longer.

That's not really a relevant question, as it doesn't matter whether it's a local disk or an rclone cloud mount. The only difference tends to be the latency for the work to be done. rclone handles all of this just fine.

I prefer to use a mount directly, as otherwise you have to script the moves and such to get that to work. There's really no issue using an rclone mount for any of this other than the inherent latency of a cloud mount. That's really the only downside.

Yes - performance and tuning aside, the OP's situation of an unexplained torrent client error should not happen. But IMO it is probably more a torrent client issue than a VFS one. As no DEBUG log was provided, it was not possible to suggest anything other than writing to local disk and moving later :)

I still stand by my statement: saving a torrent to an rclone mount isn't the use case it was designed for. While one can get it to work, there are many "corner" cases (I'm not even sure corner is the right word, as they are so common) where it will fail.

Yes. You can try to configure the heck out of your rclone mount to cover all the cases, but it's always going to be fragile and risk running into problems.

It would take me about an hour or two to whip up a bare-bones torrent client with the anacrolix/torrent Go package (GitHub - anacrolix/torrent: full-featured BitTorrent client package and utilities) that downloads the torrent locally, automatically moves it to cloud storage on completion, and then long-term seeds it from cloud storage. It would be much more reliable.
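The same download-locally, move-and-seed handoff can also be done without writing a client, as a completion hook in an existing torrent client. A hypothetical sketch (qBittorrent's "Run external program on torrent finished" passes the content path as %F; the remote name and paths here are placeholders):

```shell
#!/bin/sh
# $1 is the finished payload path, e.g. qBittorrent's %F placeholder.
name=$(basename "$1")
# Move the finished payload to the remote...
rclone move "$1" "remote:torrents/$name" --delete-empty-src-dirs
# ...then symlink it back through the mount so seeding continues.
ln -s "/mnt/remote/torrents/$name" "$1"
```

This keeps all writes local and only ever streams finished files to the remote, which is the access pattern rclone is built for.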

Been running my setup for over a year with fewer things breaking, so it's not quite that hard to set up and use :slight_smile:

I removed many extra pieces of software and use an rclone mount for handling uploads, and I found it, for me, to be much more reliable than having more parts in the process and/or writing a custom piece of software to do what already works perfectly for me.

If your setup works how you want it to, by all means use it.

I think Nick is the only person that could possibly answer that as I'm sure rclone has evolved to much more than what it was originally designed for.

