WebDAV mount extremely slow when file list grows in a directory

What is the problem you are having with rclone?

WebDAV mount extremely slow when file list grows in a directory.

We use rclone to access a directory in Nextcloud and write files which are synced to clients in real time, and it works perfectly! Issues arise when the file list in a directory grows. We noticed this at some point and, for the most used path, worked around it by moving older files to a subdirectory daily. Unfortunately this is not always possible.

We checked the docs and ended up with this caching config, which improved things slightly but didn't completely solve the problem.

 --vfs-cache-mode=writes --dir-cache-time=10s

I was wondering if there's anything else to improve the situation.

If it can be of any help, 90% of the activity on the mount point is writes; reads are much less frequent.

I tried adding --vfs-read-chunk-size-limit 500M --vfs-read-chunk-size 64M but it didn't improve things significantly.


Run the command 'rclone version' and share the full output of the command.

rclone v1.60.1
- os/version: centos 7.9.2009 (64 bit)
- os/kernel: 3.10.0-1160.76.1.el7.x86_64 (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.19.3
- go/linking: static
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Nextcloud 24 (WebDAV)

The command you were trying to run (eg rclone copy /tmp remote:tmp)

/usr/bin/rclone mount SLNextcloud: /tmp/NCtmp/ --allow-other --vfs-cache-mode=writes --dir-cache-time=10s --config=/dati/yetopen/rclone/.rclone.conf --cache-dir=/var/rclone --dir-perms=0770 --file-perms=0660 --gid=10513 --umask=0007

The rclone config contents with secrets removed.

type = webdav
url = https://nextcloud.domain.it/remote.php/webdav/
vendor = nextcloud
user = srv_gestionale
pass = abcdef

For reference, listing this directory takes:

real    9m20.328s
user    0m0.212s
sys     0m0.390s

Hi maxxer,

It may help to increase this so rclone refreshes less often from the WebDAV server (at the expense of picking up changes made on the server by other mounts/processes less frequently).

I suggest you try --dir-cache-time=1h, to see if this limits the performance degradation to the refresh itself, i.e. roughly 10 minutes once per hour.
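Applied to the mount command from the first post, that would look like this (same remote, paths and flags as in the original command; only --dir-cache-time is changed):

```shell
/usr/bin/rclone mount SLNextcloud: /tmp/NCtmp/ \
  --allow-other \
  --vfs-cache-mode=writes \
  --dir-cache-time=1h \
  --config=/dati/yetopen/rclone/.rclone.conf \
  --cache-dir=/var/rclone \
  --dir-perms=0770 --file-perms=0660 \
  --gid=10513 --umask=0007
```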

Exactly which rclone command did you use?
Was it the recursive ls or lsl, or the single-folder lsd or lsf?
How many items were listed?
How many items do you have in the folder with most items?

What is your available bandwidth to the webdav server?
Do you know the latency to the webdav server (using ping or similar)?

the one in the first post:

/usr/bin/rclone mount SLNextcloud: /tmp/NCtmp/ --allow-other --vfs-cache-mode=writes --dir-cache-time=10s --config=/dati/yetopen/rclone/.rclone.conf --cache-dir=/var/rclone --dir-perms=0770 --file-perms=0660 --gid=10513 --umask=0007

Just a plain time ls /path

more than 13000, unfortunately the terminal history doesn't go back that far.

this is one of the biggest

The Nextcloud server is local to the mount point. I know I can write directly to the filesystem, but running occ files:scan with something like iwatch is a killer, and running through cron involves an average of 30s delay.
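For what it's worth, a targeted scan of only the changed path is usually much cheaper than a full account scan. A rough sketch, assuming a standard Nextcloud install (the install path, web server user and target path below are placeholders):

```shell
# Rescan only the affected path in Nextcloud's file cache instead of
# the whole account. Run as the web server user; adjust paths to taste.
sudo -u apache php /path/to/nextcloud/occ files:scan --path="srv_gestionale/files/somedir"
```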

Did you try this?

Not sure you got my intention with the question, but no problem, still got some useful info.

Please post the output from this command with some/path replaced by the path to the big folder:

time rclone lsf SLNextcloud:some/path | wc -l

Perfect, good to know!

Are you able to monitor the load on the Nextcloud server? (CPU, Memory, Disk latency,...) while performing the above rclone lsf?

time rclone lsf SLNextcloud:somedir | wc -l

real    0m17.960s
user    0m2.047s
sys     0m0.284s

when running the command, I see mariadb, php-fpm and rclone spiking to 100% in turn, but the server load doesn't go up too much.

Adding --dir-cache-time=1h indeed makes a second ls immediate. I might try this, and educate users about the implications for the single folder we use for reading on the server.

Perfect, I will return shortly with a tip to refresh a single folder with higher frequency.

What is the output of the above command, if replacing some/folder with the folder you use for reading?

... and what is the output of this command listing the content (including size and mod time) of all folders:

time rclone lsl SLNextcloud: | wc -l

This is on the read dir:

time rclone lsf SLNextcloud:gestionale-cl/aggio | wc -l

real    0m0.902s
user    0m0.104s
sys     0m0.040s

The full lsl was pretty stressful for the server, but it completed faster than expected:

time rclone lsl SLNextcloud: | wc -l

real    15m6.843s
user    1m29.694s
sys     0m9.437s

Perfect, the trick to refresh this folder on demand or on a schedule is to add --rc to your mount command to allow remote control.

Then you can refresh the folder from another terminal (on the same machine) with an rclone rc vfs/refresh command like this:

rclone rc vfs/refresh dir="gestionale-cl/aggio"

Once tested you can schedule it with cron (or a similar scheduler). I think it will be OK with 10s intervals for this very small folder, but I don't think you should go lower.
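As a sketch, a crontab entry for this could look like the following (the schedule is just an example; cron cannot go below one-minute granularity, so for 10s intervals you would need a small loop or a systemd timer instead; --rc must already be on the mount command):

```shell
# Refresh the read folder's directory listing every minute
* * * * * rclone rc vfs/refresh dir="gestionale-cl/aggio"
```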

Very good. I asked because I was considering increasing --dir-cache-time to something high (e.g. 48h) and then refreshing the entire folder tree every night like this:

rclone rc vfs/refresh recursive=true

That would keep the write folders quick all day long. The downside is that each item in the tree takes approx. 1K, so it would make the mount use approx. 700MB of memory. That may be OK; that is up to you.

Another, less memory-hungry, approach is to set --dir-cache-time=24h and only refresh the most used write folders every night.
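As nightly cron jobs, the two approaches might be sketched like this (the 03:00 schedule and the folder name in option 2 are placeholders; pick one option, not both):

```shell
# Option 1: with --dir-cache-time=48h on the mount,
# rebuild the whole directory tree every night
0 3 * * * rclone rc vfs/refresh recursive=true

# Option 2: with --dir-cache-time=24h on the mount,
# refresh only the busiest write folders every night
0 3 * * * rclone rc vfs/refresh dir="some/write/folder"
```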

Hope you can follow my ideas and find the optimal settings for your use case; otherwise please ask.


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.