Download transfer speed via mount 20x slower than via copy/sync

What is the problem you are having with rclone?

I'm experiencing very slow transfer rates when using mount compared to copy/sync while downloading a test folder.
The difference is about 20x (I watch the speed in the File Explorer when using mount, and on the command line when using copy/sync). I have fast fiber optics/Gigabit Ethernet, no WiFi. Both mount and copy/sync are configured with the switches below:

--disable-http2
--transfers 20
--multi-thread-streams=20 
--multi-thread-cutoff=200M 

Especially this "--disable-http2" is critical: without it, Koofr seems to drastically slash transfers even in copy/sync mode to 1MB/s or worse...
So my question: is there any way to fine-tune rclone mount to be faster? Currently, with a test directory of mixed small and medium files plus one big 4GB file, the transfer rate is ~1MB/s with mount vs ~20MB/s with sync/copy.

Run the command 'rclone version' and share the full output of the command.

rclone v1.62.2

  • os/version: ubuntu 22.04 (64 bit)
  • os/kernel: 5.15.0-73-generic (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.20.2
  • go/linking: static
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Koofr

The command you were trying to run (eg rclone copy /tmp remote:tmp)

Mount command (with the download initiated in the Ubuntu Nautilus File Explorer GUI via Ctrl-C -> Ctrl-V) and, below it, the corresponding command-line copy/sync command:

rclone 	-v \
	--transfers 20 \
	--multi-thread-streams=20 \
	--multi-thread-cutoff=200M \
	--disable-http2 \
	mount "${RCLONE_REMOTE}" "${RCLONE_MOUNT_DIRECTORY}" \
	--vfs-cache-mode writes \
	--vfs-cache-poll-interval 10s \
	--vfs-cache-max-age 0h0m10s \
	--log-file "${RCLONE_LOG_FILE}" \
	--dir-perms 0700 \
	--file-perms 0600 \
	--default-permissions
rclone -P --transfers 20 --disable-http2 \
--multi-thread-streams=20 --multi-thread-cutoff=200M \
copy "koofr:TEST_DIR" some_target_local_dir/

The rclone config contents with secrets removed.

type = koofr
provider = koofr
user = *** redacted ***
password = *** ENCRYPTED ***

That's expected.

The log file, if you had included it, would show you why.

Mounts are not multi-threaded on downloads.

Copies/syncs are.
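A quick sanity check you can run against your own debug logs: a multi-threaded copy logs "multi-thread copy: stream N/M" lines, while a mount download logs sequential "ChunkedReader.Read" lines instead. A minimal sketch (the grep patterns are taken straight from the rclone debug output pasted in this thread):

```shell
#!/bin/sh
# Sketch: tell the two transfer modes apart by counting log patterns.
# Pattern strings come from the rclone -vv output shown in this thread.

# Streams started by a multi-threaded copy/sync (0 for a mount download).
multithread_streams() {
    grep -c "multi-thread copy: stream" "$1"
}

# Sequential chunked reads, as produced by a mount download.
chunked_reads() {
    grep -c "ChunkedReader.Read" "$1"
}
```

Running `multithread_streams` over a mount download log returns 0, which matches the single-stream behaviour described above.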

Hmm, so the current documentation is incorrect then?
I'm using vfs-cache-mode writes

Excerpt from Documentation

This will work with the `sync`/`copy`/`move` commands and friends `copyto`/`moveto`. 
Multi thread downloads will be used with `rclone mount` and `rclone serve` if `--vfs-cache-mode` is set to `writes` or above.

It would be very helpful if you share a log file and we can see exactly what's going on as that's why we ask for it. We play a guessing game without a log file which is wasteful in terms of your time and my time.

Hello, sorry, my bad.

Just to summarize what I'm doing in my test procedure, for both copy/sync (works fine) and mount with Ctrl-C -> Ctrl-V (terribly slow).

For simplification, one big test file of ~4.5GB (test.mp4) is downloaded in both use cases.

Downloaded with the "rclone copy" below, it gets there in less than 4 minutes at a 20MB/s transfer rate; all nice here, blazing fast:

rclone -P --transfers 20 --disable-http2 --multi-thread-streams=20 --multi-thread-cutoff=200M copy  "koofr:Pub/test.mp4"  Downloads/

But when I do this big test file transfer via the mount + Nautilus File Explorer (exactly as specified in my initial post, with the same flags as for 'rclone copy'), the transfer is slashed to ~1MB/s (it's currently ongoing, with an ETA of 1h more... :frowning: ).

Remark 1: for a moment I changed vfs-cache-mode from 'writes' to 'full', but there was no difference in download speed via Nautilus File Explorer, still ~1MB/s...
Remark 2: funnily, upload speeds are much nicer and reach 8-10MB/s, similar to uploading with copy/sync. (I check the logs to see when the file actually finished uploading, as Nautilus reports completion very quickly due to the VFS cache; all as expected.)

Logs of the download via mount with the "-vv" option are basically an endless stream of lines like the following:

2023/06/14 18:46:29 DEBUG : &{Pub/test.mp4 (r)}: Read: len=131072, offset=446824448
2023/06/14 18:46:29 DEBUG : &{Pub/test.mp4 (r)}: Read: len=131072, offset=446955520
2023/06/14 18:46:29 DEBUG : &{Pub/test.mp4 (r)}: >Read: read=131072, err=<nil>
2023/06/14 18:46:29 DEBUG : &{Pub/test.mp4 (r)}: >Read: read=131072, err=<nil>
2023/06/14 18:46:29 DEBUG : &{Pub/test.mp4 (r)}: Read: len=131072, offset=447086592
2023/06/14 18:46:29 DEBUG : &{Pub/test.mp4 (r)}: >Read: read=131072, err=<nil>
2023/06/14 18:46:29 DEBUG : &{Pub/test.mp4 (r)}: Read: len=131072, offset=447217664
2023/06/14 18:46:29 DEBUG : &{Pub/test.mp4 (r)}: >Read: read=131072, err=<nil>
2023/06/14 18:46:29 DEBUG : &{Pub/test.mp4 (r)}: Read: len=131072, offset=447348736
2023/06/14 18:46:29 DEBUG : &{Pub/test.mp4 (r)}: >Read: read=131072, err=<nil>
2023/06/14 18:46:29 DEBUG : &{Pub/test.mp4 (r)}: Read: len=131072, offset=447479808
2023/06/14 18:46:29 DEBUG : &{Pub/test.mp4 (r)}: >Read: read=131072, err=<nil>
2023/06/14 18:46:29 DEBUG : &{Pub/test.mp4 (r)}: Read: len=131072, offset=447610880
2023/06/14 18:46:31 DEBUG : Pub/test.mp4: ChunkedReader.Read at 448786432 length 1048576 chunkOffset 402653184 chunkSize 536870912
2023/06/14 18:46:31 DEBUG : &{Pub/test.mp4 (r)}: >Read: read=131072, err=<nil>
2023/06/14 18:46:31 DEBUG : &{Pub/test.mp4 (r)}: Read: len=131072, offset=447741952
2023/06/14 18:46:31 DEBUG : &{Pub/test.mp4 (r)}: >Read: read=131072, err=<nil>
2023/06/14 18:46:31 DEBUG : &{Pub/test.mp4 (r)}: Read: len=131072, offset=447873024
2023/06/14 18:46:31 DEBUG : &{Pub/test.mp4 (r)}: Read: len=131072, offset=448004096
2023/06/14 18:46:31 DEBUG : &{Pub/test.mp4 (r)}: >Read: read=131072, err=<nil>
2023/06/14 18:46:31 DEBUG : &{Pub/test.mp4 (r)}: >Read: read=131072, err=<nil>
2023/06/14 18:46:31 DEBUG : &{Pub/test.mp4 (r)}: Read: len=131072, offset=448135168
2023/06/14 18:46:31 DEBUG : &{Pub/test.mp4 (r)}: >Read: read=131072, err=<nil>
2023/06/14 18:46:31 DEBUG : &{Pub/test.mp4 (r)}: Read: len=131072, offset=448266240
2023/06/14 18:46:31 DEBUG : &{Pub/test.mp4 (r)}: >Read: read=131072, err=<nil>
2023/06/14 18:46:31 DEBUG : &{Pub/test.mp4 (r)}: Read: len=131072, offset=448397312
2023/06/14 18:46:31 DEBUG : &{Pub/test.mp4 (r)}: >Read: read=131072, err=<nil>
2023/06/14 18:46:31 DEBUG : &{Pub/test.mp4 (r)}: Read: len=131072, offset=448528384
2023/06/14 18:46:31 DEBUG : &{Pub/test.mp4 (r)}: >Read: read=131072, err=<nil>
2023/06/14 18:46:31 DEBUG : &{Pub/test.mp4 (r)}: Read: len=131072, offset=448659456
2023/06/14 18:46:31 INFO  : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2023/06/14 18:46:32 DEBUG : Pub/test.mp4: ChunkedReader.Read at 449835008 length 1048576 chunkOffset 402653184 chunkSize 536870912
2023/06/14 18:46:32 DEBUG : &{Pub/test.mp4 (r)}: >Read: read=131072, err=<nil>
2023/06/14 18:46:32 DEBUG : &{Pub/test.mp4 (r)}: Read: len=131072, offset=448790528
2023/06/14 18:46:32 DEBUG : &{Pub/test.mp4 (r)}: >Read: read=131072, err=<nil>
2023/06/14 18:46:32 DEBUG : &{Pub/test.mp4 (r)}: Read: len=131072, offset=448921600
2023/06/14 18:46:32 DEBUG : &{Pub/test.mp4 (r)}: Read: len=131072, offset=449052672
2023/06/14 18:46:32 DEBUG : &{Pub/test.mp4 (r)}: >Read: read=131072, err=<nil>
2023/06/14 18:46:32 DEBUG : &{Pub/test.mp4 (r)}: >Read: read=131072, err=<nil>
2023/06/14 18:46:32 DEBUG : &{Pub/test.mp4 (r)}: Read: len=131072, offset=449183744
2023/06/14 18:46:32 DEBUG : &{Pub/test.mp4 (r)}: >Read: read=131072, err=<nil>
2023/06/14 18:46:32 DEBUG : &{Pub/test.mp4 (r)}: Read: len=131072, offset=449314816
2023/06/14 18:46:32 DEBUG : &{Pub/test.mp4 (r)}: >Read: read=131072, err=<nil>
2023/06/14 18:46:32 DEBUG : &{Pub/test.mp4 (r)}: Read: len=131072, offset=449445888
2023/06/14 18:46:32 DEBUG : &{Pub/test.mp4 (r)}: >Read: read=131072, err=<nil>
2023/06/14 18:46:32 DEBUG : &{Pub/test.mp4 (r)}: Read: len=131072, offset=449576960
2023/06/14 18:46:32 DEBUG : &{Pub/test.mp4 (r)}: >Read: read=131072, err=<nil>
2023/06/14 18:46:32 DEBUG : &{Pub/test.mp4 (r)}: Read: len=131072, offset=449708032
2023/06/14 18:46:33 DEBUG : Pub/test.mp4: ChunkedReader.Read at 450883584 length 1048576 chunkOffset 402653184 chunkSize 536870912
2023/06/14 18:46:33 DEBUG : &{Pub/test.mp4 (r)}: >Read: read=131072, err=<nil>
2023/06/14 18:46:33 DEBUG : &{Pub/test.mp4 (r)}: Read: len=131072, offset=449839104
2023/06/14 18:46:33 DEBUG : &{Pub/test.mp4 (r)}: >Read: read=131072, err=<nil>

Every 10 seconds the line below also appears (as expected for the "writes" vfs-cache-mode with 10s polling):

2023/06/14 18:46:21 INFO  : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)

The first part of the debug log is what I'm looking for :frowning:

What is Downloads? I can't figure out how a mount comes into play.

That "koofr:Pub/test.mp4 Downloads/" is just part of rclone copy and it works well, 20MB/s, all good with rclone copy during testing. Downloads/ is just a default Ubuntu Dowload folder (/home//Downloads)

The second test uses Koofr via rclone mount, and here is the problem. Once the mount is active, the same copy procedure (via the Nautilus file explorer in Ubuntu 22.04) takes much longer, and the download transfer of the test.mp4 file runs at only 1MB/s. That's the issue I'm worried about: the mount download speed.

Here is my mount command that gives me trouble (~/KOOFR is an empty dir that rclone uses to mount the remote, named plain and simple 'koofr:'):

rclone 	-vv \
	--transfers 20 \
	--multi-thread-streams=20 \
	--multi-thread-cutoff=200M \
	--disable-http2 \
	mount "koofr:"  "~/KOOFR" \
	--vfs-cache-mode writes \
	--vfs-cache-poll-interval 10s \
	--vfs-cache-max-age 0h0m10s \
	--log-file "~/some_log_file.txt" \
	--dir-perms 0700 \
	--file-perms 0600 \
	--default-permissions

All those repeating log lines in my previous message are from the above mount command (once the ~/KOOFR folder is mounted with my files, I begin the test by copying ~/KOOFR/Pub/test.mp4 to ~/Downloads via the Nautilus GUI File Explorer).

Right - rclone copy uses multi threaded downloads.

That's actually the problem. It isn't the same procedure. The copy you are running there isn't multi-stream, and thus takes long. The mount isn't the issue.

That's easy to see with a debug log from each one that is shared completely.

Rclone copy

[felix@gemini data]$ rclone copyto testfile /home/felix/test/copytest -vv --multi-thread-streams=5 --multi-thread-cutoff=200M
2023/06/14 15:03:51 DEBUG : Setting --config "/opt/rclone/rclone.conf" from environment variable RCLONE_CONFIG="/opt/rclone/rclone.conf"
2023/06/14 15:03:51 DEBUG : rclone: Version "v1.63.0-beta.7013.d05393b86.fix-6986-times" starting with parameters ["rclone" "copyto" "testfile" "/home/felix/test/copytest" "-vv" "--multi-thread-streams=5" "--multi-thread-cutoff=200M"]
2023/06/14 15:03:51 DEBUG : Creating backend with remote "testfile"
2023/06/14 15:03:51 DEBUG : Using config file from "/opt/rclone/rclone.conf"
2023/06/14 15:03:51 DEBUG : fs cache: adding new entry for parent of "testfile", "/data"
2023/06/14 15:03:51 DEBUG : Creating backend with remote "/home/felix/test/"
2023/06/14 15:03:51 DEBUG : fs cache: renaming cache item "/home/felix/test/" to be canonical "/home/felix/test"
2023/06/14 15:03:51 DEBUG : testfile: Sizes differ (src 5368709120 vs dst 2281762816)
2023/06/14 15:03:56 DEBUG : preAllocate: got error on fallocate, trying combination 1/2: operation not supported
2023/06/14 15:03:56 DEBUG : preAllocate: got error on fallocate, trying combination 2/2: operation not supported
2023/06/14 15:03:56 DEBUG : testfile: Starting multi-thread copy with 5 parts of size 1Gi
2023/06/14 15:03:56 DEBUG : testfile: multi-thread copy: stream 5/5 (4294967296-5368709120) size 1Gi starting
2023/06/14 15:03:56 DEBUG : testfile: multi-thread copy: stream 2/5 (1073741824-2147483648) size 1Gi starting
2023/06/14 15:03:56 DEBUG : testfile: multi-thread copy: stream 1/5 (0-1073741824) size 1Gi starting
2023/06/14 15:03:56 DEBUG : testfile: multi-thread copy: stream 3/5 (2147483648-3221225472) size 1Gi starting
2023/06/14 15:03:56 DEBUG : testfile: multi-thread copy: stream 4/5 (3221225472-4294967296) size 1Gi starting

and a cp command on Linux, which isn't multi-threaded:

[felix@gemini data]$ cp testfile /home/felix/test

The latter only runs one stream, as that's how it works.

Ok, so I assume that any native file operation, like cp from the command line or a GUI Nautilus copy/paste, is inherently single-threaded and will not benefit from:

--multi-thread-streams=20

Then again, this piece of documentation I mentioned is misleading:

 Multi thread downloads will be used with rclone mount and rclone serve if --vfs-cache-mode is set to writes or above.

I'm confused here: does "rclone mount AND rclone serve" refer to the case where we use both of those solutions at the same time?

Because if not, then rclone mount with "vfs-cache-mode writes" is what I'm using, but I have no clue how to benefit from faster downloads (at least in an Ubuntu environment using standard cp, rsync, or the Nautilus GUI).

You can run a mount via a serve, so that's using both at the same time. You can use a mount by itself. You can use a serve command by itself.

It's saying it will benefit both situations: running a mount and also running a serve command.

Multi-threading works via the application/program that is calling it, so if cp, rsync, or whatever else are not doing multi-threaded reads, it won't matter whether rclone does or does not.
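In practice, that means the workaround for big downloads is to bypass the mount and let rclone itself do the copy. A sketch reusing the flags and paths from the earlier posts in this thread (SRC/DST are the test remote path and Downloads folder used above; adjust for your own setup):

```shell
#!/bin/sh
# Sketch: call rclone directly instead of cp/Nautilus on the mount,
# so rclone's own multi-threaded downloader is used.
SRC="koofr:Pub/test.mp4"   # remote test file from this thread
DST="$HOME/Downloads"      # default Ubuntu Downloads folder

rclone copy "$SRC" "$DST" \
	--disable-http2 \
	--transfers 20 \
	--multi-thread-streams=20 \
	--multi-thread-cutoff=200M \
	-P
```

This is the same command that already reached ~20MB/s in the tests above; the only change is invoking it instead of a GUI copy off the mount.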

Hmm, does it then boil down to the rough summary below?

'rclone mount --vfs-cache-mode writes' in Ubuntu with standard tools like cp/rsync/Nautilus will execute a download with performance similar to:

rclone \
--transfers 1  \
--multi-thread-streams=0   \
copy   some_remote:SOME_DIR_FULL_OF_FILES   target_dir/

In upload mode the performance may be significantly better, as files are first copied to the cache, cp/rsync/Nautilus reports "all copied, great job", and then the real transfer begins.

Not sure I would lump all those programs together like that.

cp is much different from rsync in what it does, and I have no idea what Nautilus does in terms of copying.

I'd only say rclone copy / move / sync can launch multiple streams.

If you are using a cache, it may or may not be faster depending on what the cache disk is underneath and the specs / load on the machine. I try to avoid making general / broad statements as there are always exceptions to it.

So the last question here: might --vfs-cache-mode "full" help somehow with downloads?

In my tests there is no improvement, but it helps dramatically with upload. I see, for example, something like:

2023/06/14 16:59:12 INFO  : vfs cache: cleaned: objects 16 (was 16) in use 16, to upload 5, uploading 10, total size 9.496Gi (was 9.496Gi)

A nice upload, with 10 files being transferred/uploaded simultaneously.

If I understand correctly, the "full" mode does not have any impact on rclone mount when copying in the download direction.
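For what it's worth, there are a few VFS read flags that can influence sequential download speed through a mount and may be worth experimenting with: rclone grows the HTTP range it requests from --vfs-read-chunk-size up to --vfs-read-chunk-size-limit, and --vfs-read-ahead (which only applies with --vfs-cache-mode full) reads ahead of the application. The values below are illustrative guesses, not tuned for Koofr:

```shell
#!/bin/sh
# Sketch: VFS read tuning flags for a mount; values are illustrative.
rclone mount "koofr:" "$HOME/KOOFR" \
	--vfs-cache-mode full \
	--vfs-read-chunk-size 64M \
	--vfs-read-chunk-size-limit 512M \
	--vfs-read-ahead 256M \
	--buffer-size 64M \
	--disable-http2
```

None of this makes the mount path multi-threaded; it only lets the single stream fetch larger ranges, so any gain depends on the backend and connection.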

Important Edit:
Thanks for your extensive help!

This issue has already been mentioned in the past... So probably case closed; nothing can be done. Upload will usually be much faster than download via rclone mount until the feature below is implemented.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.