My idea is to keep 100K+ files of ~80MB each in encrypted rclone remotes (S3, Google Drive) and sporadically download a few of them. macOS environment. I would like to use rclone mount for that (read-only is fine; I can upload through normal means).
The problem is that no matter whether I use macFUSE or FUSE-T, the whole computer slows to a crawl and fails to read the mounted directory.
Is there any way to solve it? Or maybe serving over NFS (rclone serve nfs) in the pipeline?
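For what it's worth, rclone can serve a remote over NFS and the macOS built-in NFS client then replaces the FUSE layer entirely. A hedged sketch, assuming rclone 1.65+ — the remote name, paths, and port are examples, not from my actual config:

```shell
# Serve the encrypted remote read-only over NFS on a fixed port
rclone serve nfs secret-remote: --read-only --addr :2049 --vfs-cache-mode full &

# Mount it with the built-in macOS NFS client (no macFUSE/FUSE-T needed)
sudo mount -t nfs -o port=2049,mountport=2049,tcp localhost: ~/mnt/secret
```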
Another option would be to forget Finder and stick to the terminal. With cache warming it should already be much faster. Why not Finder? Because, like most modern file browsers, it tries to be clever and inspects every file, for example to display thumbnails of known document types. That means reading 100k+ files (even if only partially), which can be a disaster.
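The cache warming mentioned above could look roughly like this, assuming the mount is started with the remote control API enabled (`--rc`); remote name and mount point are examples:

```shell
# Start the mount read-only with the rc API enabled and a long dir cache
rclone mount secret-remote: ~/mnt/secret \
    --read-only \
    --rc \
    --dir-cache-time 24h \
    --daemon

# Pre-warm the directory cache recursively so later listings come from memory
rclone rc vfs/refresh recursive=true

# Browse from the terminal without Finder triggering per-file reads
ls ~/mnt/secret
```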
most users do not realize that rclone mount is composed of two caches, the rclone file cache and the rclone dir cache
i wrote a summary here
the problem is the OP is not providing any real details, as asked for in the help and support template.
gdrive and s3 are very different when it comes to mounts.
i would have two mount commands, optimized for gdrive and optimized for s3.
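a rough sketch of what two tuned mounts might look like — remote names and values are examples to adjust, not recommendations:

```shell
# gdrive: slow the pacer down to stay under rate limits, use polling for changes
rclone mount gdrive: ~/mnt/gdrive \
    --read-only \
    --vfs-cache-mode full \
    --vfs-cache-max-size 50G \
    --dir-cache-time 24h \
    --poll-interval 1m \
    --drive-pacer-min-sleep 100ms \
    --daemon

# s3: no change polling, so keep the dir cache long and read in larger chunks
rclone mount s3: ~/mnt/s3 \
    --read-only \
    --vfs-cache-mode full \
    --vfs-cache-max-size 50G \
    --dir-cache-time 24h \
    --vfs-read-chunk-size 64M \
    --daemon
```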
windows explorer is also a nightmare, i stopped using it decades ago.
a quick internet search lists many guides to tweak macos finder.
well, if you started a new topic about a bug, then no point in continuing this topic for now.
if you get the bug fixed, then come back here and i can help tweak your command
Hi. I got the bug fixed with longer timeout settings (Google Drive). Would you recommend a caching strategy to cope with the very long refresh/cache-filling times, so that the cache stays relatively fixed, for a read-only situation?
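Something like this is what I am considering (the values are guesses on my part, please correct them):

```shell
# Read-only mount with long-lived caches, refreshed manually instead of expiring
rclone mount gdrive-crypt: ~/mnt/archive \
    --read-only \
    --vfs-cache-mode full \
    --vfs-cache-max-age 720h \
    --dir-cache-time 720h \
    --attr-timeout 1h \
    --timeout 10m \
    --contimeout 1m \
    --rc \
    --daemon

# Re-warm the dir cache on demand rather than letting it expire
rclone rc vfs/refresh recursive=true
```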