VFS Cache Pre-Warming possible?

Is there a way to pre-warm the VFS cache for a mounted remote? I don't mean just the directory structure, but an actual download of the file contents — a complete copy in the VFS cache to keep performance high.

Sure, you can read every file completely via a script.

I thought there might be a function I could trigger via rclone rc or similar, so I wouldn't have to write my own script, but alright, now I know. Thanks for the quick feedback!
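For reference, the closest thing the rc API offers is vfs/refresh, but it only walks the directory tree and primes the directory listing cache — it does not download file contents, so it doesn't do what's being asked here. A sketch (the user/pass placeholders are illustrative):

```shell
# Pre-fill the directory cache (NOT the file data) of a running
# rclone rcd/mount via the remote control API.
rclone rc vfs/refresh recursive=true \
  --rc-user someuser \
  --rc-pass somepass
```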

Can do something pretty simple like:

find . -type f -exec cat {} \; > /dev/null

Thanks, but that's not quite what I meant. Especially with remotes containing a great many small files, you run into rate limits, so I have to delay the requests myself, because rclone just doesn't handle that well.

Sorry, I'm not sure what that means or which remotes that would refer to.

With the right transaction limits, there's really no issue with that simple script.
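For example, if you want the script itself to throttle rather than relying solely on the daemon's --tpslimit, a throttled variant of that one-liner could look like this (a sketch; the function name and the default delay are assumptions, not from rclone):

```shell
# Read every file under a mount once so the VFS cache holds a copy,
# pausing between files to stay under the remote's rate limit.
warm_vfs_cache() {
    mount="$1"
    delay="${2:-0.1}"   # seconds between files; tune to your API limits
    # Note: this simple loop breaks on filenames containing newlines.
    find "$mount" -type f | while IFS= read -r f; do
        cat "$f" > /dev/null   # pull the file through the VFS cache
        sleep "$delay"
    done
}
```

Usage would be e.g. `warm_vfs_cache /mnt/cloud/Data 0.2`.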

You'd have to share more details as you've somewhat ignored our template, asked a broad question and got a broad answer.

For specifics, please share the help/support template and it's easier to understand the use case.

You're right, of course, that the context is missing. Here is the configuration I'm working with; I use Dropbox as the remote.

/usr/bin/rclone rcd \
  --config=/home/<obfuscated>/.config/rclone/rclone.conf \
  --rc-addr localhost:5572 \
  --rc-user <obfuscated> \
  --rc-pass <obfuscated> \
  --rc-enable-metrics \
  --rc-web-gui \
  --rc-web-gui-no-open-browser \
  --rc-web-gui-update \
  --log-file /var/log/rclone/data.log \
  --tpslimit 15 \
  --tpslimit-burst 15 \
  --transfers 16 \
  --checkers 16 \
  --use-mmap \
  --timeout 15s \
  --buffer-size 256M \
  --dropbox-batch-mode async

/usr/bin/rclone rc --rc-user <obfuscated> --rc-pass <obfuscated> mount/mount \
  fs=dbx_fbx_storage_combined: \
  mountPoint=/mnt/cloud/Data \
  vfsOpt='{"CacheMode": 3, "CacheMaxAge": 60000000000, "WriteBack": 120000000000, "ChunkSize": "512M"}' \
  mountOpt='{"AllowNonEmpty": true, "AllowOther": true, "AttrTimeout": 600000000000, "DirCacheTime": 86400000000000}'

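For readability: the durations in vfsOpt/mountOpt are Go durations in nanoseconds (60000000000 = 1m, 120000000000 = 2m, 600000000000 = 10m, 86400000000000 = 24h). An equivalent plain `rclone mount` invocation would be roughly this (a sketch, not a tested drop-in replacement):

```shell
# CLI equivalent of the rc mount/mount call above, with the
# nanosecond durations written out as human-readable values.
rclone mount dbx_fbx_storage_combined: /mnt/cloud/Data \
  --vfs-cache-mode full \
  --vfs-cache-max-age 1m \
  --vfs-write-back 2m \
  --vfs-read-chunk-size 512M \
  --allow-non-empty \
  --allow-other \
  --attr-timeout 10m \
  --dir-cache-time 24h
```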
rclone v1.59.1
- os/version: ubuntu 20.04 (64 bit)
- os/kernel: 5.4.0-124-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.18.5
- go/linking: static
- go/tags: none

In general the limits work fine with Dropbox; only when I download a large number of small files do I run into Dropbox API errors because the requests come too fast. But maybe there is a best practice for Dropbox limits that I don't know yet.

From the general consensus of folks using Dropbox, it seems to be around 12 TPS per app registration, so I use a separate app registration for each mount I have.

I'm not 100% sure whether that TPS limit applies per mount, but I'm not certain either way.

Okay, then I'll go down to 12 to test. I also have one scoped app per mount.

I've opened a support case with Dropbox before, but like Google, they really don't share the nitty-gritty specifics of their rate limiting and how it works, so that part is a bit of trial and error.

Since I've set to 12, I've never had an issue.
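Applying that consensus to the daemon command earlier in the thread, only the two throttle flags would change (a sketch; the other flags stay as posted above):

```shell
# rcd flags adjusted to the ~12 TPS consensus for Dropbox;
# all other flags remain as in the original command.
/usr/bin/rclone rcd \
  --tpslimit 12 \
  --tpslimit-burst 12 \
  ...
```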


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.