Is there a way to preload the directory/attribute cache (--dir-cache-time / --attr-timeout) from existing local files or directories, instead of fetching everything from the remote backend?
My remote backend holds about 200,000 files, and because I often mount it with rclone in temporary environments (containers), building the cache takes a long time and frequently hits the API request limit.
Since the files are mainly documents with very infrequent changes or updates, I treat them as read-only, and I would like to preload the directory/file structure cache from an external file or folder saved on local disk.
That way, whenever a file is actually requested, rclone would only need to fetch that file from the backend instead of walking the entire remote tree of hundreds of thousands of entries.
Is there any way to do that? I have looked carefully through the rclone mount docs, but found no mention of it.
Has anyone encountered this use case before?
Looking forward to your help!
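For reference, the closest mechanism I have found so far is warming the cache over the remote control API after mounting. This is only my own sketch based on the rc docs (remote and mount point names are placeholders), not a way to preload from a local snapshot, which is what I am really after:

```shell
# Mount with the remote control API enabled (--rc), then walk the
# whole remote once so the directory cache is populated up front.
rclone mount --rc --dir-cache-time 4w remote: /mnt/docs &

# Prime the directory cache recursively; _async=true returns
# immediately while the refresh runs in the background.
rclone rc vfs/refresh recursive=true _async=true
```

The drawback for my setup is that this still issues the full set of listing requests against the backend on every container start, so it does not avoid the rate limit.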
Run the command 'rclone version' and share the full output of the command.
rclone v1.67.0
os/version: ubuntu 22.04 (64 bit)
os/kernel: 4.15.0-213-generic (x86_64)
os/type: linux
os/arch: amd64
go/version: go1.22.4
go/linking: static
go/tags: none
Which cloud storage system are you using? (eg Google Drive)
Dropbox for Business.
The command you were trying to run (eg rclone copy /tmp remote:tmp)
/usr/bin/rclone mount --config <my-config.conf> --daemon --file-perms 0755 --dir-perms 0777 --allow-non-empty --vfs-cache-mode full --vfs-cache-max-age 3d --attr-timeout 1w --dir-cache-time 4w <source> <dest>
This happens to me often, and all my Dropbox remotes are affected, regardless of whether they share the same account. Dropbox usually imposes a 300-second wait before a new request can be sent.
I'm not sure whether --vfs-refresh applies to my situation. I use Docker containers to isolate the working environment, and they are configured to discard the cache (~/.cache), so the cache entries are empty again after every restart.
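One partial workaround I am considering (untested; image and remote names below are placeholders, and the cache path assumes rclone's default of ~/.cache/rclone) is persisting the cache directory across container restarts with a named volume:

```shell
# Keep rclone's default cache directory in a named Docker volume so
# the VFS file cache survives container restarts.
docker run -v rclone-cache:/root/.cache/rclone my-image \
  rclone mount --vfs-cache-mode full remote: /mnt/docs
```

As far as I understand, though, this only preserves the downloaded file data: the directory/attribute metadata cache lives in memory and is rebuilt on every mount, which is exactly the part I want to preload from disk.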