Rclone mount: upload synchronously while writing?

Hi all,

I'm wondering whether it's possible for rclone to upload chunks synchronously while files are being written to the disk buffer.

I ran rclone mount remote:/ ./mount --vfs-cache-mode writes -vv and found that rclone won't upload the file until the writing stream to the file is closed, which is exactly how it should behave according to this page. As it also mentions, using --vfs-cache-mode off doesn't work for some applications; for me it's qBittorrent, which fails on seeks when downloading torrents.

I know there are many cases to consider. But if a file is being written sequentially, and the threads writing it never change data they have already written, would it be possible for rclone to upload the file to the remote in chunks synchronously? It would work like this: whenever writes come in, rclone waits until a configured chunk size has been written and uploads that chunk, rather than waiting for the entire file to be written into the buffer.

Thanks in advance

Short answer: No.

Remotes need full files to upload as you can't upload partial files.

Longer answer:

If qBittorrent needs to seek in the file then it won't work with --vfs-cache-mode off. --vfs-cache-mode writes and higher will only upload the file once it is closed.
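For illustration, the two modes in practice (a minimal sketch; "remote:" and the mount point are just placeholders):

    # No local cache: writes must arrive strictly in sequence, seeking fails
    rclone mount remote:/ ./mount --vfs-cache-mode off -vv

    # Buffer writes to local disk; the file is only uploaded once it is closed
    rclone mount remote:/ ./mount --vfs-cache-mode writes -vv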

As @Animosity022 said, we need the whole file to write it to the cloud storage. So either we get that in sequence with no seeking, or we buffer it to disk first, wait until it is closed, then start uploading it.

If you could persuade qBittorrent to close the file for long enough, then rclone would upload it. However, rclone will upload the entire file as it stands at that point, which will still have chunks missing from it.

Bittorrent is pretty much the worst possible case for rclone without buffering!

I don't know whether you can configure qBittorrent to keep its temporary files somewhere else and move them into place when they are complete - that would fix the problem too.
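For example, a rough sketch of that approach, assuming qBittorrent's "Keep incomplete torrents in" setting and its "Run external program on torrent completion" hook (%F is qBittorrent's placeholder for the content path; the remote name is illustrative):

    # qBittorrent: Tools -> Options -> Downloads
    #   Keep incomplete torrents in:  /local/incomplete
    #   Run external program on torrent completion:
    /usr/bin/rclone move "%F" remote:torrents --delete-empty-src-dirs -v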

Very short answer: I have a howto about that:
https://forum.rclone.org/t/how-to-move-a-downloaded-torrent-to-the-cloud/23493


I had the impression that some APIs allow uploading chunks of different sizes when creating a file, but that makes sense to me.

What I'm trying to do is let qBittorrent download directly into cloud storage, and when a file is needed by other peers, fetch it from the cloud and distribute it, without using much local storage.
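The closest I can see rclone getting to that today is probably a mount with a capped full cache, so reads for seeding are ranged from the remote and only recently used data stays on disk (a sketch; the remote name and sizes are placeholders):

    # Cache reads and writes sparsely, keeping at most 10 GiB locally
    rclone mount remote:torrents ./mount --vfs-cache-mode full \
        --vfs-cache-max-size 10G --vfs-cache-max-age 1h -vv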

It's a very specific case where rclone mount would fully provide the storage BitTorrent needs with little or no local cache. Maybe I should start my own journey with FUSE :slight_smile:

Thanks everyone though

Specifically, rclone doesn't have partial uploads or resumable uploads at the moment.

Torrents are a very bad choice for cloud storage based on their random access patterns.

Yeah, that's true. So I'd need to write something specifically for that "feature".

Turning on sequential download should help a lot. And for uploading to peers, Dropbox's API provides a "temporary link" which supports Range requests and shouldn't count towards the API call limits, so that should be OK.
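For what it's worth, a quick sketch of that read path (the path and token are placeholders, and $LINK stands for the "link" field returned by the first call):

    # Ask Dropbox for a temporary direct link to the file
    curl -s -X POST https://api.dropboxapi.com/2/files/get_temporary_link \
        --header "Authorization: Bearer $DROPBOX_TOKEN" \
        --header "Content-Type: application/json" \
        --data '{"path": "/torrents/example.bin"}'

    # The returned link can then be read piecewise with Range requests,
    # e.g. the first 1 MiB:
    curl --header "Range: bytes=0-1048575" "$LINK" --output piece.bin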

Dropbox doesn't have API call limits. They have some rate limiting per second, but that's it.
