I don't know how to speed up directory listings in mounted folders. It seems that directory caching does not work on my system; I checked the .cache/rclone folder and it is almost empty. How do I make the directory cache work? I would like to fill my disk cache once and then not update it. I tried a lot of mount options with various cache settings. Then I found the "rclone rc vfs/refresh" command, but it does not seem to work.
Run the command 'rclone version' and share the full output of the command.
rclone v1.60.0
os/version: ubuntu 22.10 (64 bit)
os/kernel: 6.0.0-cadmium (aarch64)
os/type: linux
os/arch: arm64
go/version: go1.19.2
go/linking: static
go/tags: none
Which cloud storage system are you using? (eg Google Drive)
FTP, HTTP
The command you were trying to run (eg rclone copy /tmp remote:tmp)
Thank you very much for the fast reply. So if I understand correctly, there is no such function because it was removed? (In which version?) I am also not sure I understand the reason for this. Or is the feature just not implemented yet? Thanks!
Thank you very much. Maybe another question, then: what is the command "rclone rc vfs/refresh" for?
Also, in this thread (How to get full directory tree cached on first mount?) it looks like someone solved my problem, but I was not able to reproduce that.
An rclone mount has a --dir-cache-time which stores the directory and file metadata in memory.
--dir-cache-time 9999h
So that means the cache is not populated when the mount starts. The command you shared runs against a mount with remote control enabled and basically populates the directory and file cache in memory.
Once the mount is stopped, the cache is gone as it's in memory.
It's basically the same as doing a "find ." on the drive and walking through the file system.
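To make that concrete, here is a minimal sketch of what such a setup could look like: start the mount with remote control (--rc) enabled and a long --dir-cache-time, then prime the in-memory cache with vfs/refresh. The remote name and mount point are placeholders, not from the thread.

```shell
# Hypothetical example: mount with remote control enabled so the
# rc commands below can talk to it, keeping directory/file metadata
# cached in memory for a long time.
rclone mount remote: /mnt/remote \
  --dir-cache-time 9999h \
  --rc &

# Once the mount is up, walk the whole remote once to populate
# ('prime') the in-memory directory cache. recursive=true descends
# into every subdirectory.
rclone rc vfs/refresh recursive=true
```

Remember this cache lives only in memory, so the refresh has to be repeated after every restart of the mount.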
Thank you very much! Although it does not help me, because I run Linux and that network-mode parameter is Windows only. With this setup my file manager seems to freeze when browsing files. I think I need to use a different configuration.
Out of that whole list, browsing a directory is only influenced by --use-server-modtime, and that's generally for S3 remotes that require an extra call to get the modification time.
Listing directories and files is related to --dir-cache-time, and the only way to populate that cache is by browsing the remote or using a refresh call to 'prime' it.
It really depends on what the OP means by listing/browsing and specifically what is being used and if that tool/application is actually getting file contents or not.
With FTP/HTTP remotes, I believe the mod time comes with the listing if I'm not mistaken, but that would have to be validated.
I timed how long to open a 1.4GB folder of audio files on my mounted drive using each parameter in isolation:
Using only --vfs-cache-mode off
03.67s
08.51s
30.46s
03.21s
02.68s
Using only --vfs-cache-mode full
03.77s
03.56s
19.72s
03.65s
03.79s
Using only --use-server-modtime
01.05s
00.21s
00.88s
00.22s
01.83s
Using only --buffer-size=0
03.70s
05.37s
04.45s
23.80s
26.31s
Using only --network-mode
14.54s
03.23s
02.50s
03.07s
02.78s
There are, however, some strange anomalies where it takes much longer than usual for the folder to open. I can't explain this; I thought the results would be much more consistent on an otherwise idle machine. The only variation I can somewhat explain is that of --use-server-modtime, where opening time seemed related to how long I hovered/focused the folder before entering it: hovering/focusing for longer seemed to result in faster opening due to anticipatory loading. I didn't notice this relationship with the other parameters.
I also can't explain why combining parameters gives faster, more consistent results:
Using --vfs-cache-mode full --use-server-modtime --buffer-size=0 --network-mode
00.06s
00.05s
00.06s
00.06s
00.06s
I might be more confused now than before I did this test...
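One way to make timings like these more reproducible is to script the listing rather than rely on a file manager, which may do extra work per folder. A minimal sketch, run here against a throwaway local directory purely for illustration; substitute your actual mount point to time the real thing:

```shell
#!/bin/sh
# Illustrative timing harness: create a directory of dummy files,
# then time five repeated listings, as in the tests above.
# Replace DIR with your mount point (e.g. /mnt/remote/audio) to
# measure an actual rclone mount.
DIR="$(mktemp -d)"
for i in $(seq 1 100); do touch "$DIR/file$i.flac"; done

for run in 1 2 3 4 5; do
  start=$(date +%s%N)              # nanoseconds (GNU date)
  ls -l "$DIR" > /dev/null         # metadata-only listing
  end=$(date +%s%N)
  elapsed_ms=$(( (end - start) / 1000000 ))
  echo "run $run: ${elapsed_ms} ms"
done

rm -rf "$DIR"
```

Because each run is a bare `ls`, anomalies caused by the file manager's own prefetching or thumbnailing drop out of the measurement.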
The trick comes down to what a 'list' or 'open' actually does. I'm primarily on Linux, so when I 'list' or 'open', I'm only running an ls or find command, which looks only at the metadata of the directory/file (name/size/mod time) and does not check any file contents, nor does it do an actual 'open' on the file.
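The two kinds of access can be shown side by side. This sketch uses a local temp directory so it is self-contained; on an rclone mount the same distinction decides whether you hit only the directory cache or actually pull file data:

```shell
#!/bin/sh
DIR="$(mktemp -d)"
printf 'hello' > "$DIR/a.txt"

# Metadata-only walk: stat()s each entry, never open()s the files.
# On a mount this is served from the --dir-cache-time cache.
listing=$(find "$DIR" -type f)

# Content read: open()s and reads the file. On a mount this is
# what --vfs-cache-mode and --buffer-size actually affect.
contents=$(cat "$DIR/a.txt")

echo "$listing"
echo "$contents"
rm -rf "$DIR"
```

Tools like Explorer's thumbnailing, or ffprobe/mediainfo, fall into the second category even though the user only 'opened a folder'.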
I'd be super mindful on what Windows does when it 'opens' a folder as it may do some extra stuff depending on the method as that's not my expertise.
I know that by default, if you 'open' a directory in Explorer, it's doing quite a lot more and actually reading file contents to do its magic, which I'd imagine is different from opening it via PowerShell or a cmd prompt and doing a recursive dir.
It tends to be in the details, as an 'open' might not be or mean the same thing to everyone, so that's why I was trying to be very specific in my definition when comparing the two.
Once you run things against files, such as ffprobe/mediainfo as Plex and other tools do, you definitely get into situations where cache mode is faster, and the tweaks you make can speed things up or slow them down quite a bit.
For S3 storage, --use-server-modtime would help, as in my understanding that means fewer API calls to get the metadata of a file. Buffer size shouldn't matter much when listing a directory, as nothing has been 'opened' for reading. The default is 16M anyway, and it really doesn't have a large impact either, as the buffer is dumped once the file is closed.