My idea is to keep 100K+ 80 MB files in encrypted rclone repositories (S3, Google Drive) and sporadically download a few of them. macOS environment. I would like to use rclone mount for that (read-only is fine; I can upload through normal means).
The problem is that no matter whether I use macFUSE or FUSE-T, the whole computer slows to a crawl and fails to read the mounted directory.
Is there any way to solve this? Or maybe serving over NFS in the pipeline?
I am using rclone mount on macOS and it is fine.
Now, why is it so slow in your case? I can only speculate. You have quite a lot of files - by any chance, are they all in the mount's root folder?
Like you, I only use mount to sporadically download some files, so I stick with the defaults and do not bother with any special options:
rclone mount onedrive: ./onedrive
This remote has about 300k files, but no more than 1,000 in a single folder. Everything works nice and smooth.
It’s a flat structure (one folder) to mirror the local structure. I cannot do much about it.
Finder slows to a crawl and then times out without ever populating the directory.
FUSE debug output just reports extended-attribute errors - nothing to shed more light.
yes, the flat structure might be the root cause of the problem. Even on a local drive, 100k+ files in one folder is challenging.
i see there is a beta version that deals with extended attributes. Maybe it will spark an idea for you.
I've now tested that beta branch, and it appears to solve the problem.
after the mount is running, but before you access the mountpoint, have rclone pre-cache the dir/file structure:
- add --rc to the mount command
- after the mount is running, run this command:
rclone rc vfs/refresh recursive=true
- now access the mountpoint
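The steps above can be sketched as a small script. The remote name `gdrive:` and the mountpoint path are placeholders for your own setup; this assumes rclone is on the PATH:

```shell
#!/bin/sh
# Mount read-only with the remote-control API enabled (--rc),
# backgrounded with --daemon so we can issue rc commands next.
# "gdrive:" and "$HOME/mnt/gdrive" are placeholder names.
mkdir -p "$HOME/mnt/gdrive"
rclone mount gdrive: "$HOME/mnt/gdrive" --read-only --rc --daemon

# Pre-warm the directory cache before touching the mountpoint.
# With 100k+ entries this can take a while; _async=true returns
# immediately while the refresh runs in the background.
rclone rc vfs/refresh recursive=true _async=true

# Only now browse the mountpoint (from the terminal, not Finder).
ls "$HOME/mnt/gdrive" | head
```

This is a sketch of a CLI invocation, not something to copy verbatim; check `rclone rc` output to confirm the refresh finished before heavy browsing.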
in addition, gdrive and s3 need different flags/settings.
there are a number of things that can be done to improve performance.
so, can you answer the questions in the help and support template:
rclone version, redacted config,
rclone mount command, and a debug log?
very neat idea.
Another thing would be to forget Finder and stick to the terminal. With cache warming it should already be much faster. Why not Finder? Because (like most modern file browsers) it tries to be clever and inspects every file, for example to display thumbnails of known document types. That means reading 100k+ files (even if only partially), and it can be a disaster.
most users do not realize that rclone mount is composed of two caches:
- the rclone file cache
- the rclone dir cache
i wrote a summary here
the problem is that the OP is not providing any real details, as asked for in the help and support template.
gdrive and s3 are very different when it comes to mounts.
i would have two mount commands, optimized for gdrive and optimized for s3.
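As a rough illustration of what two separately tuned mounts might look like - the flag values below are starting-point guesses to be tuned against a debug log, not recommendations from this thread, and `gdrive:`/`s3:` are placeholder remote names:

```shell
# Google Drive: long dir-cache is safe here since uploads happen
# outside the mount; full VFS cache keeps downloaded files locally.
rclone mount gdrive: ~/mnt/gdrive \
  --read-only --rc \
  --dir-cache-time 24h \
  --vfs-cache-mode full \
  --vfs-cache-max-size 20G

# S3: skip modtime lookups on listing, and use larger read chunks
# suited to the ~80 MB objects described above.
rclone mount s3: ~/mnt/s3 \
  --read-only --rc \
  --dir-cache-time 24h \
  --no-modtime \
  --vfs-read-chunk-size 64M \
  --vfs-cache-mode full
```

The point is that the two backends bottleneck differently (API quota pacing on Drive, per-object request latency on S3), so one shared command rarely suits both.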
windows explorer is also a nightmare; i stopped using it decades ago.
a quick internet search lists many guides to tweak macos finder.
fwiw, i use double commander. great that it runs on windows and linux.
never tried it on macos, but it does run there too.
or try midnight commander
mc is the first thing I install on any Unix or Unix-like system. On Windows, Total Commander.
@o1o1oo1 - you can try mc on macOS if you are using Homebrew:
brew install mc
Let's see if any of the above ideas help to solve your issue. Let us know.
I've tried all the tips, thank you so much for taking time to help me.
Unfortunately it does not cut it, even with the latest beta. I posted a bug report.
@asdffdsa If I stick with Google Drive, what would be your advice to improve the performance?
well, if you started a new topic about a bug, then there is no point in continuing this topic for now.
if you get the bug fixed, then come back here and i can help tweak your command
Hi. I got the bug fixed with longer timeout settings (Google Drive). Would you recommend a caching strategy to cope with very long refresh/cache-filling times, so the warmed cache stays relatively fixed, for a read-only situation?
This is a good example:
It uses Linux systemd, but you can adapt it to cmd or macOS launchd.
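A minimal launchd adaptation might look like the sketch below. The label `com.example.rclone-refresh`, the rclone path, and the 12-hour interval are all assumptions to adjust; it also assumes the mount (started elsewhere with `--rc`) is already running:

```shell
# Write a per-user LaunchAgent that re-runs the directory-cache
# refresh periodically, keeping the warmed cache fresh.
cat > ~/Library/LaunchAgents/com.example.rclone-refresh.plist <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.example.rclone-refresh</string>
  <key>ProgramArguments</key>
  <array>
    <string>/usr/local/bin/rclone</string>
    <string>rc</string>
    <string>vfs/refresh</string>
    <string>recursive=true</string>
  </array>
  <!-- StartInterval is in seconds: 43200 = every 12 hours -->
  <key>StartInterval</key>
  <integer>43200</integer>
</dict>
</plist>
EOF

launchctl load ~/Library/LaunchAgents/com.example.rclone-refresh.plist
```

This is a config sketch, not a tested setup; on newer macOS versions `launchctl bootstrap` is the preferred loading command, and the rclone binary path may differ (e.g. Homebrew on Apple Silicon installs under /opt/homebrew/bin).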
This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.