Rclone mount - slow download performance

What is the problem you are having with rclone?

Slow downloads under frequent concurrent access

Run the command 'rclone version' and share the full output of the command.

rclone v1.59.2

  • os/version: debian 11.5 (64 bit)
  • os/kernel: 5.4.0-125-generic (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.18.6
  • go/linking: static
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

StorJ Gateway-ST, basically minio

The command you were trying to run (eg rclone copy /tmp remote:tmp)

No single rclone command is involved; I'm using rclone mount.

The rclone config contents with secrets removed.

type = s3
provider = Other
access_key_id = %hidden%
secret_access_key = %hidden%
region = minio
endpoint =
acl = private
chunk_size = 64M

A log from the command with the -vv flag

Nothing necessary here.

I'm kinda stuck with my HLS/DASH streaming implementation as I cannot serve enough clients at the same time: I get stuck at about 50 Mbit/s in total, while my system is able to handle 5000 Mbit/s. In my opinion, this issue comes from rclone. I'm using rclone to mount an S3 bucket on a Linux machine, from which a web server (nginx) then pulls its files using a location block like this:

location /hls {
   alias /srv/hls/; # rclone mount-point
   auth_jwt_enabled on;
   add_header 'cache-control' 'no-cache';
   add_header 'access-control-allow-credentials' 'true';
   add_header 'access-control-allow-origin' $allow_origin;
   add_header 'access-control-expose-headers' 'content-encoding,content-length,content-range,date';
   add_header 'access-control-allow-headers' 'authorization,accept,accept-encoding,accept-language,access-control-request-headers,access-control-request-method,cache-control,connection,dnt,host,pragma,sec-fetch-dest,sec-fetch-mode,sec-fetch-site,sec-gpc,te,user-agent';
   # Trick preflight option requests of media players like VideoJS
   if ($request_method = OPTIONS) {
       add_header 'access-control-allow-credentials' 'true';
       add_header 'access-control-allow-headers' 'authorization,accept,accept-encoding,accept-language,access-control-request-headers,access-control-request-method,cache-control,connection,dnt,host,pragma,sec-fetch-dest,sec-fetch-mode,sec-fetch-site,sec-gpc,te,user-agent';
       add_header 'access-control-allow-methods' 'GET, HEAD, OPTIONS';
       add_header 'access-control-allow-origin' $allow_origin;
       add_header 'content-length' 0;
       return 204;
   }

   if ($invalid_referer) {
      return 403;
   }

   if ($request_method !~ ^(GET|HEAD|OPTIONS)$) {
      return 405;
   }
}

The problem now is that as soon as a client pulls a file, rclone only opens a single connection for it, so I get a maximum of 2-3 MB/s through this single connection, which kinda sucks... In my opinion, this behavior sits very deep inside the rclone code: rclone mount does not support multi-part downloads, only the plain rclone command does. Long story short, rclone mount does not give me the performance I need to realize my project, speaking purely from a software perspective. I also tried s3fs, which does support multi-part uploads but not downloads, which again brings me to the same outcome.
Is there any chance to get multi-part downloads for a mounted S3 volume here? This would really help me to speed up my performance.

This is how I mount the rclone volume:

rclone mount cdn:vod /srv/hls \
  --umask 0000 \
  --buffer-size 16M \
  --dir-cache-time 180s \
  --poll-interval 0m30s \
  --vfs-cache-max-age 43200s \
  --vfs-cache-mode full \
  --vfs-read-ahead 2M \
  --vfs-read-chunk-size 16M \
  --cache-dir /cache/vod \
  --max-read-ahead 512Ki \
  --transfers 1000 \
  --checkers 1000 \
  --drive-chunk-size 2M \
  --volname vod

Kind regards and thanks in advance

How big are the files?

Is there more than one client at once reading them?

rclone mount supports multiple readers from one file, so if the client reads from multiple places in the file at once, then rclone will open one stream for each place.

Do the clients open and read the files sequentially normally?

I think this is the relevant issue

I had a commercial sponsor for this work, but the job fell through alas. Maybe your company would like to sponsor the work?

Hello Nick,

For this specific mount, the files are always between 150 KB - 8 MB. These are HLS/DASH segments loaded by the client's browser using VideoJS. In the end, many clients should be able to access the same files blazing fast…

I also have other mounts with bigger files (250 MB - 25 GB) where I have basically the exact same issue.

Regarding your question, "Do the clients open and read the files sequentially normally?" ->
Not 100% sure what you mean by that. I can say that each HLS/DASH segment gets pulled sequentially by VideoJS, but I don't know what exactly happens on the rclone side here. Basically, this is the implementation path I have taken:

VideoJS -> (NGINX (http2) -> Rclone mount -> minio/StorJ Gateway (http1.1)) -> I placed everything from NGINX up to minio into extra brackets as these components all sit inside the same container/machine; VideoJS of course runs only on the client side.

Sadly I'm not a company; I'm developing a white-label product on my own. One of the later posts on GitHub was from me. Anyway, what would it cost ^^ ?

Once they are loaded into the cache they should be super quick!

If VideoJS reads the files sequentially then rclone will fetch them sequentially at roughly the rate they are being read. If you want rclone to read ahead then play with these parameters

  --buffer-size SizeSuffix      In memory buffer size when reading files for each --transfer (default 16Mi)
  --vfs-read-ahead SizeSuffix   Extra read ahead over --buffer-size when using cache-mode full
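As an illustration, here is a variant of the mount command from earlier in the thread with those two knobs turned up (the sizes are made-up examples, not recommendations):

```shell
# Illustrative only: same mount as above, with a larger in-memory buffer
# and extra VFS read-ahead so rclone fetches ahead of the client's reads
# into the full-mode cache.
rclone mount cdn:vod /srv/hls \
  --vfs-cache-mode full \
  --cache-dir /cache/vod \
  --buffer-size 32M \
  --vfs-read-ahead 128M
```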

It's probably a couple of weeks' work for me, so imagine 2 weeks of expensive developer time and that will give you an idea! If you want to know more, drop me an email nick@craig-wood.com

Note that this is pretty much the same architecture I use for serving beta.rclone.org, which is served from a swift object storage system, where I use these parameters on the mount:

/usr/bin/rclone mount -v --read-only --config XXX --cache-dir XXX --dir-cache-time 1m --vfs-cache-mode full --vfs-cache-max-age 168h --allow-non-empty --allow-other --use-mmap=true --vfs-cache-max-size 30G --rc remote:beta-rclone-org /mnt/beta.rclone.org

I have considered switching over to rclone serve http and proxying that instead of running an rclone mount. This has some advantages with using the simpler and more optimised tcp sockets rather than the fuse layer.
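A minimal sketch of that alternative, assuming rclone serve http listening on localhost (the port, paths, and remote name here are hypothetical):

```nginx
# Hypothetical: run e.g.
#   rclone serve http --addr 127.0.0.1:8080 --vfs-cache-mode full cdn:vod
# and have nginx proxy it instead of reading from a FUSE mount-point.
location /hls/ {
    proxy_pass http://127.0.0.1:8080/;
    proxy_http_version 1.1;
}
```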

The first thing that looks strange to me is the fact that even if a file is already cached, the download through nginx is slow ... which again leads me to think that rclone might not even be the problem here.

I'm a developer myself and I really appreciate your work, but I'm not sure if this is, all in all, the spirit of open-source software. Of course, it's also not about pushing work onto others. Anyway, can you maybe imagine why I get slow download performance when nginx reads files from an rclone mount? Are there possibly other bottlenecks?

Okay, I found the bottleneck; this might also be quite interesting to you @ncw. I thought my slow transfer speeds were coming from rclone in the background, but that wasn't the issue. The real issue was that the sndbuf of nginx was too small... with a too-small send buffer, nginx's performance is absolutely horrible when downloading large files or large quantities of files.

I changed the following two lines in nginx (8 MB is a bit too much, but I really wanted to know):

- listen 443 http2 ssl default_server reuseport backlog=131072 so_keepalive=off rcvbuf=8m sndbuf=8m fastopen=500;
- listen [::]:443 http2 ssl default_server reuseport backlog=131072 so_keepalive=off rcvbuf=65536 sndbuf=65536 fastopen=500;

And my performance is now outstandingly good!
fastopen is optional and must be supported by your Linux kernel. If you want to use it, please also make sure to set the following in /etc/sysctl.conf:

# Add support for fastopen transmissions
net.ipv4.tcp_fastopen = 3
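To apply the sysctl change without rebooting, the standard invocation is:

```shell
# Reload /etc/sysctl.conf, then check that the value took effect
sudo sysctl -p
sysctl net.ipv4.tcp_fastopen   # should report: net.ipv4.tcp_fastopen = 3
```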

Still, the multi-part download feature would make a lot of sense for the initial pull of the data. If I pull a file for the first time, the StorJ gateway-st I'm running in the background on the same server only gives me about 20 Mb/s per connection, which is okay for forwarding the file to the end user who requested it. But if the file is not yet in the cache and two or more users pull it for the very first time, all at once rather than one after another, then you get strange behavior: it takes about 10-20 seconds until the download starts for the second user, because the first user is already downloading.

If we had multi-part download, the time to get the file from the backend into the cache would be much shorter. Using "s5cmd" for example, which can do heavy parallelism, I get a far higher download speed, about 4-10x faster than rclone. The StorJ Gateway-ST can in theory also deliver several Gbit/s for a single file request from the backend. To be honest, this would be an awesome feature to have in the near future.
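For comparison, the kind of parallel fetch s5cmd performs looks roughly like this (the endpoint, bucket, and object names are made up; the concurrency and part-size flags belong to s5cmd's cp command):

```shell
# Hypothetical names: s5cmd splits the object into parts and downloads
# them over parallel connections, unlike a single-stream read via FUSE.
s5cmd --endpoint-url http://127.0.0.1:7777 \
  cp --concurrency 16 --part-size 16 \
  s3://vod/segment-0001.ts ./segment-0001.ts
```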

If, for any reason, my hobby project goes into production and I make some money with it, I'll hire you for the feature :slight_smile: It's really worth it if you want to use rclone in the background of your service to exchange files.

Well done for finding the problem and thank you for the update.

Good luck!

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.