RClone Mount Read Speed: Yandex/WebDav vs WebDavFS. How can it be improved?

Hello community,
I am trying to get started on a small personal project which basically involves reading files from external cloud storage services within the Ubuntu filesystem. I am starting with the basic WebDAV protocol, as Yandex Disk gives me the opportunity to test this against a relatively stable source (and I don’t have to mess with another VPS :grinning:).

In the end, only two methods worked for me: WebDAVFS and your brilliant RClone project (FUSEDav and davfs2 didn’t even mount for me; does this happen to anyone else?). However, I got some strange results with both solutions.

My first test is very simple: using Thunar (Xubuntu), I simply copy a file from the mounted filesystem to my local hard disk. This gave me the following download speeds:

  • RClone: ~1 Mb/s
  • WebDavFS: ~0.4 Mb/s

Apparently RClone is faster than WebDavFS. However, I ran a second test with a .sh script and the results were quite the opposite:

  • RClone: ~0.04 MB/s
  • WebDavFS: ~0.30 MB/s
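
For context, my script essentially just does a timed sequential read off the mount. A minimal sketch of that kind of test (not my exact script; ~/1 is the mount point and testfile.bin is only a placeholder name):

  #!/bin/sh
  # Read one file from the mount and discard it locally; GNU dd prints
  # the elapsed time and throughput when it finishes.
  dd if="$HOME/1/testfile.bin" of=/dev/null bs=1M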

Commands used to mount:

  • rclone mount 1: ~/1 --vfs-cache-mode off --read-only --allow-other
  • mount -t webdavfs -o username=user,password=pass,ro,allow_other,async_read https://webdav.yandex.com /home/user/1
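
For reference, when mounting via RClone’s webdav backend, the “1:” remote in my rclone.conf looks roughly like this (user and the obscured password are placeholders; vendor is set to other since Yandex is just a generic WebDAV endpoint here):

  [1]
  type = webdav
  url = https://webdav.yandex.com
  vendor = other
  user = user
  pass = <obscured password>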

While WebDavFS was already faster than the RClone mount, I squeezed a little more out of it with the “async_read” option.

Am I missing something in my RClone configuration?

Thanks for your time!


What version are you running? What backend are you using?

dade@dade-Lap:~$ rclone version
rclone v1.45

  • os/arch: linux/amd64
  • go version: go1.11.2

Actually I’m on Xubuntu 18.04, or do you mean something else by backend?

He meant what the dav remote is pointing to. Google Drive?

I run webdav and get really good performance. I’ve played with davfs and it works well.

Sorry, I thought I made that clear in the title. It’s the Yandex Disk provider; they also let you access it over WebDAV. I’ve tried both the yandex and webdav RClone backends for the mount and I’m getting the same results.

Try experimenting with the value of --buffer-size (0 and bigger); this may make a difference.

Also try experimenting with this

  --vfs-read-chunk-size int            Read the source objects in chunks. (default 128M)
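
For example, something along these lines (the 32M values are only starting points to experiment with, not recommendations):

  rclone mount 1: ~/1 --read-only --allow-other \
    --buffer-size 32M \
    --vfs-read-chunk-size 32M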

Will do, sharing results ASAP :slight_smile:
