Ram cache for dir/file list?

Hello.
I noticed that when I do 'ls /mnt/rclone/bigfolder/' on a folder that has a huge number of files, it takes a long time before 'ls' actually starts printing results.
I'm guessing that's because the directory listing cache is saved on disk by default.
How can I make this command feel snappier?
Is it something to do with setting RAM as the cache location?
Or modifying the rclone mount options so that it sends fewer API requests?
Or maybe having something like keep-alive?

I'm talking about every time I use a Linux command such as 'ls' or 'find' in a big rclone folder: it feels like rclone hangs for about 30 seconds before it starts outputting the result of the command.
And there was no stress on the server: CPU, RAM, nothing showing load.

--dir-cache-time

As far as I know this is stored in memory, rather than on disk. ls / find etc run very quickly on mine with this flag on the mount.
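For reference, a mount invocation using that flag might look like the sketch below; the remote name and mount point are placeholders, not taken from the thread:

```shell
# Hypothetical example: "remote:" and /mnt/rclone are placeholders.
# --dir-cache-time keeps the directory tree cached in memory for 48h,
# so ls/find hit the cache instead of triggering new API listings.
rclone mount remote: /mnt/rclone \
  --dir-cache-time 48h \
  --daemon
```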

With --dir-cache-time, it's definitely snappy. I keep mine at 48h:

felix@gemini:/gmedia$ find . | wc -l
21878

I’m using --dir-cache-time=4h already because my workflow is to add media from outside rclone…

I have a similar number of files to yours, but I guess the difference is that my hierarchy is flat: inside one big folder I have around that number of files, not nested in subfolders.
Anyway, this results in delays when using Linux commands, and in load time when browsing Plex Movies to reach the Plex PrePlay screen. So my solution was to separate them into nested folders (a, b, c, d, etc.) and put each file into its own movie folder.
So far the Linux commands have become snappy. I'm rebuilding my Plex Movies DB; hopefully this will make movie browsing much snappier too.
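A rough local sketch of that reorganization, assuming flat files named like `Title.mkv` (the function name and layout are my own, not from the thread):

```shell
# Sketch: move each "*.mkv" file in a flat folder into
# <first-letter>/<title>/ subfolders, as described above.
reorganize() {
  src="$1"
  for f in "$src"/*.mkv; do
    [ -e "$f" ] || continue                              # no matches: skip
    name=$(basename "$f" .mkv)                           # e.g. "Alien"
    first=$(printf '%s' "$name" | cut -c1 | tr '[:upper:]' '[:lower:]')
    mkdir -p "$src/$first/$name"                         # e.g. a/Alien/
    mv "$f" "$src/$first/$name/"
  done
}
```

Try it on a copy first; on a cloud remote you would use `rclone move` rather than `mv` so the moves happen server-side instead of downloading and re-uploading.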

The problem if you have that many files in a single folder is that each time something is copied, it invalidates the directory cache and a new listing has to be pulled.

You could break it up into a few sections, as that would help.

It seems like Plex makes calls to check the files within the folder of whatever media we access. So every time I clicked on a movie title, my old setup caused Plex to check all the files.
Once I had each movie in its own folder under its alphabet folder, Plex scanning and browsing became much faster.

Regardless of the directory structure, Plex will check the size/mod time of each file to make sure nothing is new.

Each directory listing is one API call, so having a large number of files in a single directory would actually be quicker: 1000 directories would mean one call for each directory, i.e. 1000 calls.
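Under that assumption (one listing call per directory), you can estimate how many calls a full scan needs just by counting directories; the helper and path below are illustrative, not from the thread:

```shell
# One API listing call per directory, so directory count ~= call count.
count_dirs() {
  find "$1" -type d | wc -l
}
# count_dirs /mnt/rclone/bigfolder    # placeholder path
```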

I think what you are seeing is when items are not analyzed, as that requires checking each file and isn't the same as a scan.